Description: The History of the Future of Education Technology
Rationalizing Those 'Irrational' Fears of inBloom
This article first appeared on Points, a Data & Society publication in February 2017
That inBloom might exist as a cautionary tale in the annals of ed-tech is rather remarkable, if for no other reason than ed-tech – at least its manifestation as a current blend of venture capital exuberance, Silicon Valley hype, philanthropic dollars, and ed-reform policy-making – tends to avoid annals. That is to say, ed-tech today has very little sense of its own history. Everything is “new” and “innovative” and “disruptive.” It’s always forward-facing, with barely a glance over its shoulder at the past – at the history of education or the history of technology. No one had ever thought about using computers in the classroom – or so you might glean if you only read the latest marketing about apps and analytics – until this current batch of philanthropists and entrepreneurs and investors and politicians suddenly stumbled upon the idea circa 2010.
Perhaps that very deliberate dismissal of history helped doom inBloom from the start. Those who worked on the initiative seemed to ignore the legacy of ARIS (Achievement Reporting and Innovation System), the expensive and largely underutilized system that had been built for New York City schools – even hiring many of ARIS’s staff and contracting with the company that had built it, Wireless Generation, to engineer the inBloom product.
While those making sweeping promises about data collection and data analytics wanted to suggest that, thanks to digital technologies, inBloom offered a unique opportunity to glean insights from classroom data, many parents and educators likely had a different sense – a deeper history – of what data had already done or undone, of what data could do or undo. They certainly had a different sense of risk.
The compulsion to gather more and more data is hardly new, although certainly new technologies facilitate it, generating more and more data in turn. In 1962, Raymond Callahan published Education and the Cult of Efficiency, tracing to the early twentieth century the eagerness of school leaders to adopt the language and the practices of business management in the hopes that schools might be run more efficiently and more “scientifically.”
There’s something quite compelling about those hopes, it seems, as they underlie much of the push for education reform and education technology in schools still today. Indeed, this belief in efficiency and science helped to justify inBloom, as Data & Society’s new report on the history of the $100 million data infrastructure initiative demonstrates.
That belief is evident in the testimonies from various politicians, administrators, entrepreneurs, and technologists involved in the project. Data collection – facilitated by inBloom – was meant to be “the game-changer,” in the words of the CEO of the Data Quality Campaign, providing a way to “actually use individual student information to guide teaching and learning and to really leverage the power of this information to help teachers tailor learning to every single child in their class. That’s what made inBloom revolutionary.” “The promise was that [inBloom] was supposed to be adaptive differentiated instruction for individual students, based on test results and other data that the states had. InBloom was going to provide different resources based on those results,” according to the superintendent of a New York school district.
But this promise of a data-driven educational “revolution” was – and still is – mostly that: a promise. The claims about “personalized learning” attainable through more data collection and data analysis remain primarily marketing hype. Indeed, “personalized learning” is itself a rather nebulous concept. As Data & Society observed in a 2016 report on the topic,
Descriptions of personalized learning encompass such a broad range of possibilities – from customized interfaces to adaptive tutors, from student-centered classrooms to learning management systems – that expectations run high for their potential to revolutionize learning. Less clear from these descriptions are what personalized learning systems actually offer and whether they improve the learning experiences and outcomes for students.
So while “personalized learning” might be a powerful slogan for the ed-tech industry and its funders, the sweeping claims about its benefits are largely unproven by educational research.
But it sounds like science. With all the requisite high-tech gadgetry and data dashboards, it looks like science. It signifies science, and that signification is, in the end, the justification that inBloom largely relied upon. I’m someone who tried to get the startup to clarify “what inBloom will gather, how long it will store it, and what recourse parents have who want to opt out,” and I remember clearly that there was nevertheless much more hand-waving and hype than there ever was a clear explanation (“scientific” or otherwise) of “how” or “why” it would work.
No surprise, then, that there was pushback – primarily from parents, educators, and a handful of high-profile NYC education activists who opposed inBloom’s data collection, storage, and sharing practices. But as the Data & Society report details, “instead of seeking to build trust at the district level with teachers and parents, many interview participants observed that inBloom and the Gates Foundation responded to what were very emotional concerns with complex technical descriptions or legal defenses.”
This juxtaposition of parents as “emotional” and inBloom and the project’s supporters as “scientific” and “technical” runs throughout the report, which really serves to undermine and belittle the fears of inBloom opponents. (This was also evident in many media reports at the time of inBloom’s demise that tended to describe parents as “hysterical” or that patronized them by contending the issues were “understandably obscure to the average PTA mom.”) The opposition to inBloom is described in the Data & Society report as a “visceral, fervently negative response to student data collection,” for example, while the data collection itself is repeatedly framed in terms of its “great promise.” While the report does point to the failure of inBloom officials to build parents’ trust, many of the interviewees repeatedly dismiss the mistrust as irrational. “The activism about InBloom felt like anti-vaccination activism. Just fear,” said one participant. “I don’t know how else to put it,” said another. “It was not rational.”
But inBloom opponents did have reason – many perfectly rational reasons – for concern. As the report chronicles, there were a number of concurrent events that prompted many people to be highly suspicious of plans for the data infrastructure initiative – its motivations and its security. These included inBloom’s connection to the proponents of the Common Core and other education reform policies; the growing concern about the Gates Foundation’s role in shaping these very policies; Edward Snowden’s revelations about NSA surveillance; several high profile data breaches, including credit card information of some 70 million Target customers; the role of News Corp’s subsidiary Wireless Generation in building the inBloom infrastructure, coinciding with News Corp’s phone hacking scandal in the UK, as well as its decision to hire Joel Klein, the former NYC schools chancellor who’d commissioned the failed ARIS system, to head News Corp’s new education efforts. As the report notes, “The general atmosphere of data mistrust combined with earlier education reform movements that already characterized educational data as a means of harsh accountability.”
In the face of this long list of concerns, the public’s “low tolerance for uncertainty and risk” surrounding student data is hardly irrational. Indeed, I’d argue it serves as a perfectly reasonable challenge to a technocratic ideology that increasingly argues that “the unreasonable effectiveness of data” will supplant theory and politics and will solve all manner of problems, including the challenge of “improving teaching” and “personalizing learning.” There really isn’t any “proof” that more data collection and analysis will do this – mostly just the insistence that this is “science” and therefore must be “the future.”
History – the history of inBloom, the history of ed-tech more generally – might suggest otherwise.

Thu, 16 Mar 2017 12:35:00 +0000
The History of the Future of E-rate
While much of the speculation about the future of education technology under President Trump has been focused on the new Secretary of Education Betsy DeVos (her investment in various ed-tech companies, her support for vouchers and charter schools), it’s probably worth remembering that the Department of Education is hardly the only agency that shapes education and education technology policy.
The FCC plays a particularly important role in regulating the telecommunications industry, and as such, it has provided oversight for the various technologies long touted as “revolutionizing” education – radio, television, the Internet. (The FCC was established in 1934; the Department of Education, in 1979; its Office of Educational Technology, in 1994.)
Tom Wheeler, the head of the FCC under President Obama, stepped down from his role and left the agency on January 20 – the day of President Trump’s inauguration. Wheeler had been a “champion” of net neutrality and E-rate reform, according to Education Week at least, but his replacement, Trump appointee Ajit Pai, seems poised to lead the agency with a very different set of priorities – and those priorities will likely shape in turn what happens to ed-tech under Trump. As an op-ed in The Washington Post put it, “The FCC talks the talk on the digital divide – and then walks in the other direction.”
Indeed, one of the first moves made by the FCC under Pai was to block nine companies from providing subsidized Internet service to low-income families. The agency also rescinded a report about the progress made in modernizing the E-rate program, something that had been the focus of Wheeler’s tenure – a report that had been released just two days before Wheeler left office – removing it from the FCC website altogether. (An archived copy is available via Doug Levin’s website.)
Senator Bill Nelson (D-FL), the ranking member of the Senate Committee on Commerce, Science and Transportation, issued a strongly worded rebuke to that move, calling E-rate “without question the single most important educational technology program in the country.”
Despite this praise, the program has long been controversial, frequently criticized for fraud and waste. Arguably, E-rate is one of the key pieces of ed-tech-related legislation in the US, and as such it’s worth examining its origins, its successes, and its failures.
What can E-rate tell us about the relationship between politics and ed-tech? Who has benefited?

A History of E-rate Legislation
E-rate is the name commonly used to describe the Schools and Libraries Program of the Universal Service Fund, established as part of the Telecommunications Act of 1996. The act called for “universal service” so that all Americans could have access to affordable telecommunications services, regardless of their geographical location. The legislation also ordered telecom companies to provide their services to all public schools and libraries at discounted rates – from 20% to 90% off depending on the services provided and number of students receiving free and reduced school lunches. The program, whose subsidies were initially capped at $2.25 billion, was to be funded through mandatory contributions from telecom providers – the Universal Service Fund (USF). (Telecom providers added fees to customers’ bills in order to pay for their contributions.)
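The sliding discount can be sketched in code. The brackets below are assumptions for illustration only – the FCC’s actual discount matrix is more granular and also factors in urban versus rural status – but they show how a school’s free-and-reduced-lunch share maps to a 20%–90% subsidy:

```python
def erate_discount(lunch_share: float) -> float:
    """Approximate E-rate discount (20%-90%) from the share of students
    receiving free or reduced-price lunch.

    NOTE: these brackets are illustrative assumptions, not the FCC's
    actual discount matrix (which also considers urban/rural status).
    """
    brackets = [  # (minimum lunch share, discount rate)
        (0.75, 0.90),
        (0.50, 0.80),
        (0.35, 0.60),
        (0.20, 0.40),
        (0.00, 0.20),
    ]
    for threshold, discount in brackets:
        if lunch_share >= threshold:
            return discount
    return 0.20  # floor of the program's discount range

# A school where 60% of students receive subsidized lunch:
print(erate_discount(0.60))  # 0.8 under these assumed brackets
```

The point of the structure, whatever the exact thresholds, is that the poorest schools pay the smallest share of their telecommunications costs.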
The FCC initially tasked the National Exchange Carrier Association, the non-profit organization charged with managing the USF, with handling the E-rate program, but eventually a new organization was created to do this: the Universal Service Administrative Company (USAC).
From the outset, the program faced Congressional scrutiny, with questions about its scope, its management, and its funding. In particular, legislators were concerned that the charges levied on telecoms in order to pay for E-rate might be a tax (rather than a fee). (If the charges were a tax, it would be unconstitutional for the Executive branch and not Congress to exact them.) Some members of Congress also objected to the level of funding for E-rate. They argued that the program cost too much money and took needed funds away from other “universal service” efforts; some proposed that the program be replaced by block grants.
In 2014, the FCC undertook a “modernization” plan for E-rate in part to address the changing demand for telecommunications services. The agency issued an order to support affordable access to high-speed broadband in particular (not merely “access to the Internet”) and to boost access and bandwidth of schools’ WiFi networks.
As part of these modernization efforts, in 2015 the funding cap for E-rate was increased to $3.9 billion and the way in which funds were allocated was adjusted – all in an attempt to “spread the wealth” beyond just a few large districts that had historically benefited most from the program.
According to its January 2017 report, the FCC’s modernization push enabled some 77% of school districts to meet the minimum federal connectivity target – Internet speeds of 100 Mbps per 1,000 users – by the end of 2016; just 30% had met that requirement in 2013. During the same period, the cost that schools paid for Internet connectivity fell from $22 to $7 per Mbps.
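The connectivity target is a simple ratio check: a district meets it if its total bandwidth works out to at least 100 Mbps per 1,000 users. A minimal sketch (the district figures here are invented for illustration):

```python
TARGET_MBPS_PER_1000_USERS = 100  # FCC minimum connectivity target

def meets_target(total_mbps: float, users: int) -> bool:
    """True if total bandwidth meets 100 Mbps per 1,000 users."""
    return total_mbps / users * 1000 >= TARGET_MBPS_PER_1000_USERS

# A hypothetical district with a 300 Mbps connection serving 4,000
# students and staff works out to 75 Mbps per 1,000 users:
print(meets_target(300, 4000))  # False under these invented figures
```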
“Progress,” the FCC boasted in the report. “No comment,” the FCC said in February when asked why the report on the modernization efforts had been pulled from its website. Commissioner Pai had voted against those efforts, for what it’s worth, back in 2014, saying that the FCC order did little to curb bureaucracy or waste.

A Brief History of E-rate Fraud
Throughout its history, the E-rate program has faced repeated scrutiny from Congress, from Republican members of the FCC like Pai, and from the Government Accountability Office, which issued a report in 2005 that took issue with the “unusual” organizational structure of the USAC and questioned whether E-rate was sufficiently responsive to accountability standards that would “help protect the program and the fund from fraud, waste, and abuse.”
And there have been plenty of accusations and lawsuits regarding “fraud, waste, and abuse.” Among them: an $8.71 million settlement paid by Inter-Tel in 2004 over accusations of rigging the bidding process. A $21 million settlement paid by NEC in 2006 for price-fixing. An $8.2 million settlement paid by AT&T in 2009 over accusations of non-competitive bidding practices and overcharging. A $16.25 million settlement paid by Hewlett Packard in 2010 over accusations of fraud. A $3 million settlement paid by the New York City DOE in 2016 over accusations of mishandling the bidding process. (Here is the full list of those who’ve been convicted of criminal or civil violations and have therefore been barred from participating in the E-rate program.)
As some of these settlements highlight, while the E-rate program was supposed to ensure that schools received discounted telecommunications services, this hasn’t always happened. ProPublica reported on over-charging in the E-rate program in 2012,
Lawsuits and other legal actions in Indiana, Wisconsin, Michigan and New York have turned up evidence that AT&T and Verizon charged local school districts much higher rates than it gave to similar customers or more than what the program allowed.
AT&T has charged some schools up to 325 percent more than it charged others in the same region for essentially the same services. Verizon charged a New York school district more than twice as much as it charged government and other school customers in that state.
Despite these issues, a court decision in 2014 blocked the USAC from prosecuting telecoms for making false statements about offering schools and libraries the “Lowest Corresponding Price,” ruling that this falls outside the False Claims Act, a statute that allows the government to pursue fraud claims against businesses. The burden of proof that schools and libraries are being offered a competitive price falls on the applicants themselves.

E-rate and the History of the Future of the “Digital Divide”
When the E-rate program was first established in 1996, only 14% of K–12 classrooms in the US had access to the Internet. Almost all schools are now connected to the Internet, although – as that FCC modernization report underscores – not all classrooms have access to high-speed broadband, and not all schools have WiFi networks that can support the heavy data demands on their bandwidth. According to EducationSuperhighway, a non-profit organization that lobbies for increased Internet access, 88% of public schools now have the minimum level of Internet access – that is, 100 kbps per student – although just 15% offer the FCC’s goal – that is, 1 Mbps per student.
According to both EducationSuperhighway and the FCC, it is imperative to “level the playing field” so that schools and libraries, regardless of geographic location or the income level of students they serve, all have access to affordable high speed Internet. Certainly in the 1990s, when E-rate was introduced, its goal was to address this very issue – “the digital divide.”
Cost has certainly remained a barrier for the poorest schools, as has the infrastructure itself in some areas – a lack of high speed broadband service altogether, for example. Some schools “cannot overcome the 19th century buildings to take advantage of 20th century technology,” Education Secretary Richard Riley told The New York Times in 2000.
But there’s another aspect of “the digital divide” beyond simply who can afford “the digital,” and that’s something that Macomb Community College professor Chris Gilliard calls “digital redlining”: “the growing sense that digital justice isn’t only about who has access but also about what kind of access they have, how it’s regulated, and how good it is.” That issue of “what kind of access” is core to E-rate because of an associated law, the Children’s Internet Protection Act.
The act, known as CIPA, was passed in 2000 – one of a series of pieces of legislation that attempted to curb if not criminalize “adult materials” online in places “where minors would be able to find it.” The Communications Decency Act, for example, was passed in 1996 – the same year as the Telecommunications Act – but was found unconstitutional by the Supreme Court the following year. In 1998, Congress again sought to address children’s exposure to “harmful materials” with passage of the Child Online Protection Act, but this too was challenged in court. The Supreme Court also found the Child Pornography Prevention Act of 1996 unconstitutional in 2002.
Recognizing these legal challenges, Congress took a slightly different tack with CIPA. Rather than regulating content on the Web writ large, it opted to restrict what schools and libraries that receive federal funding – through the Library Services and Technology Act, Title III of the Elementary and Secondary Education Act, the Museum and Library Services Act, or E-rate – could allow people to view online. CIPA requires schools and libraries to create “acceptable use” policies for Internet usage, to hold a public meeting about how they will ensure safety online, and to use a “technology protection measure” to keep Internet users away from materials online that are “obscene,” “child pornography,” or “harmful to minors.” That is, CIPA requires Web filtering.
The law has faced its own share of legal challenges, including one from the American Library Association. The Supreme Court ruled in 2003 that CIPA does not violate the Constitution.
One of the myriad complaints about CIPA is that it results in “over-filtering” – that schools and libraries block content that is not “obscene” or “harmful to minors.” There are many stories about how information about things like breast cancer or LGBTQ issues or drug abuse is inaccessible at certain schools. (I have found that my website is blocked by many because it contains that dangerous word “hack.”)
Now that schools are increasingly providing students with laptops or tablets, filtering software often happens at the device-level, not simply at the school network level. That is, the Internet remains filtered, even when students are on their laptops at home.
Clearly this is an equity issue – one that complicates how “the digital divide” has traditionally been framed and what E-rate was supposed to address. Those who rely on the Internet networks at E-rate-supported schools have their Internet access restricted and monitored in turn.

E-rate and the Future of Ed-Tech
The decision by the new FCC to rescind its report on E-rate raises plenty of questions about the future of the program under President Trump. Will the FCC reduce spending on universal service? Will the agency revise regulatory oversight for the E-rate program? What might this look like?
How might this, alongside Ajit Pai’s opposition to “net neutrality,” reshape access to information at schools and libraries (particularly those that serve a low-income population and those in rural areas)?

Wed, 08 Mar 2017 12:35:00 +0000
What 'Counts' as Ed-Tech When Counting Venture Capital?
I have updated my Ed-Tech Funding project with the dollars and deals from February. This past month, ed-tech startups raised $566,950,000.
But there’s an asterisk by that figure as it includes $500,000,000 raised by one company, student loan provider SoFi – money that SoFi says it plans to use to “push beyond lending.”
Many ed-tech publications do not count SoFi as “ed-tech,” preferring to label it as “fintech” and thereby excluding it and other student loan startups from their calculations. EdSurge, for example, does not include student loan startups in the “ka-ching!” reports it sells, as it says it only considers ed-tech to be those “technology companies whose primary purpose is to improve outcomes for all learners, regardless of age.”
But that isn’t a particularly helpful delineation in my mind. Would a student information system or any sort of administrative software fall under that definition? Isn’t the point of financial aid – public and private – ostensibly “to improve outcomes”? Does a messaging app like Yik Yak count? It was marketed to students after all. Does a company that offers career assistance to college students count? Why not? (And you can’t say “because it doesn’t improve learning.” Most ed-tech doesn’t actually “improve learning,” let’s be honest.)
I try to cast a wide net when I include companies in my funding research because I want to be able to have as full a picture as possible about the types of education companies that are getting funded. But I’m also incredibly interested in the types of market opportunities that venture capitalists have identified in education.
That’s why excluding private student loans from “the state of ed-tech” strikes me as so disingenuous if not outright dangerous. Ignore student loan startups and you have a very skewed sense of what the priorities are for investors, all of whom are actively trying to shape the narratives about the future of education. Think Peter Thiel and his proclamation of a “college bubble.” Think Ryan Craig and that mantra about the “unbundling” of higher ed. (Both are partners in VC firms that are investors in student loan startups, funnily enough.)
Tressie McMillan Cottom’s new book, Lower Ed: The Troubling Rise of For-Profit Colleges in the New Economy, is particularly useful in thinking about the “financialization” of education. (And there’s a reason why she and I have described “coding bootcamps” as “the new for-profit higher ed.”) Well beyond the push for “everyone learn to code,” it’s worth considering how digital technologies – in the classroom, in administrators’ offices, in human resources departments, at home – have become a core part of the “Wall-Street”-ification of education. You cannot separate venture capital from all that; you cannot. And I would argue that the growing power of the investor class is a far more significant development in education than any technical or pedagogical advance that ed-tech purports to bring to “learning.”
Earlier this week, news broke that ResearchGate, a social network for scholars, had raised $52.6 million… back in November 2015. When Business Insider asked the founder why he hadn’t disclosed the investment (until required by law to do so by the German government), he said that “I didn’t really want to announce it because I think talking about funding generally is pretty boring.” Or perhaps (and more likely) he was concerned about how faculty might respond to news that Goldman Sachs and Bill Gates were so heavily invested in his idea.
Regardless, disclosure about ed-tech funding is incredibly important – for transparency, certainly, but also because it helps remind us that the for-profit companies involved with education have other missions besides simply “improving learning outcomes.”

Wed, 01 Mar 2017 12:35:00 +0000
Calling Education to a Count
This article first appeared in the Data & Society publication Points in September 2016. It’s a response, in part, to the organization’s primer on accountability in education: “The Myth of Accountability: How Data (Mis)Use is Reinforcing the Problems of Public Education.”
To be accountable is to be answerable; to be required to justify one’s actions; to be called to account. That reckoning could take the form of an explanation; in an obsolete usage of the word – obsolete according to the Oxford English Dictionary at least – accountability explicitly involves calculation. But this particular meaning isn’t completely lost to us; in its contemporary usage in education policy, “accountability” certainly demands a calculation as well, one derived primarily from standardized test scores.

A Brief History of Accountability
“Accountability” in public education has a long history, but today it's most commonly associated with one of the key pieces of legislation passed under George W. Bush’s presidency: No Child Left Behind, the 2001 reauthorization of the Elementary and Secondary Education Act. No Child Left Behind is credited with ushering in, at a national level, an education reform movement focused on measuring students' performance on reading and math assessments.
Of course, standardized testing pre-dates the NCLB legislation – by over a thousand years if you trace the history of testing back through the examinations used in Imperial China to select candidates for civil service. But No Child Left Behind has always been positioned as a new and necessary intervention, one aimed at the improvement of K–12 schools and one coinciding with long-standing narratives about American educational excellence (and the lack thereof). As such, NCLB and its notion of accountability have shaped the public discourse about how we know – or think we know – whether schools are good or bad; and the law has, until its recent re-write as the Every Student Succeeds Act of 2015, dictated what is supposed to happen when schools are categorized as the latter: these schools will be held accountable.

Carrots, Sticks, and the Bully Pulpit
“Accountability” now provides the framework for how we measure school success. And to be clear, this is a measurement. But only certain things “count” for this accounting.
As the pro-business American Enterprise Institute (AEI) has described these sorts of policies, accountability in US public education in the last few decades has taken the shape of “carrots, sticks, and the bully pulpit.” This includes policies that demand a school’s performance be evaluated annually based on its students’ performance on standardized tests. Depending on how well or how poorly a school performs, it might be rewarded or punished, carrots or sticks – by being allocated more or less funding, for example, or by being prompted to hire or fire certain staff members, or in the most extreme cases, by being shut down altogether. But as the AEI’s phrase suggests, a key part of accountability has become “the bully pulpit” and involves a number of powerful narratives about failing schools, incompetent teachers, underperforming students, and as such, the need for more oversight into how tax dollars are being spent.
There are other shapes that accountability efforts might take (and do take and have taken), no doubt: “Accountability” could refer to the democratic process – that is, elections for local school boards and other education-related offices such as Superintendent of Public Instruction. Accountability could be encouraged through more information transparency, publishing more school data publicly (and not just test scores). Accountability could also be pushed via “markets” – that is, offering “choice” or even vouchers to parents so they can opt where to send their children to school beyond simply their neighborhood school. Accountability could focus on mechanisms that reward and punish individual teachers or students (as opposed to entire schools or districts). While that could conceivably involve teachers or students defining their own teaching and learning goals and responsibilities, accountability is often a framework imposed by administrative forces with a narrow set of what educational data and what educational outcomes “count.”

What Accountability Practices Are Missing
Accountability tends to focus on the outputs of the school system – measuring different levels of “student achievement” via standardized testing. As such, it is less apt to examine the inputs – inequalities of funding, differences in staffing, and so on. It presumes that students’ success or failure is the responsibility of the school, ignoring or at least minimizing the role of poverty or structural racism. Its calculations posit a highly instrumental view of student achievement, not to mention student learning. To be held accountable, it must be quantifiable.
This instrumentality dovetails quite handily with the increasing use of technologies in the classroom – technologies that collect more and more data on students' various activities. This data collection goes far beyond standardized test scores, making assessment an ongoing and incessant practice. But it’s a practice that, in part because of the very demands of today’s accountability framework, remains focused on surveillance and punishment.
The word “accountability” is related to the word “responsibility.” As public institutions, schools are expected to spend taxpayer money responsibly. Schools are responsible for teaching students; they are responsible for students’ safety and well-being during the school day and, according to our popular narratives surrounding the effects of education, responsible for their success far beyond school. New digital data collection and analytics promise to improve the responsiveness of teachers and schools to students’ individual needs. But it’s a promise largely unfulfilled.

So when we think about “what counts” and who’s held to account under public education’s accountability regime, it’s still worth asking if accountability can co-exist with “response-ability” – accountable to whom, how, and to what ends; responsible to whom, how, and to what ends.

Mon, 20 Feb 2017 12:01:00 +0000
What's on the Horizon (Still, Again, Always) for Ed-Tech
The New Media Consortium and the EDUCAUSE Learning Initiative have released the latest NMC Horizon Report for Higher Education.
I have written quite a bit about the problems (as I see them) with the Horizon Report, most recently in a talk I gave last fall at VCU: “The Best Way to Predict the Future is to Issue a Press Release.” I have taken issue with the NMC’s refusal to revisit previous years’ predictions, for example, which is why I started a project where you can see at a glance how the predictions have and have not changed over the decade-plus of the Horizon Report’s existence. My project also makes some of the information available in a machine-readable format instead of solely in a PDF. (It seems like a missed opportunity to be touting “the future of ed-tech” in a report that is designed for the printer.)
This year, the Horizon Report’s Higher Education Edition does include graphics with some historical data, demonstrating how some technologies and topics appear and reappear and how some simply disappear altogether from the horizon.
The topic names have been modified “for consistency,” the report’s authors say (although I’m a little unclear about some of these choices – how are “mobile learning,” “tablet computing,” and “bring your own device” separate technological developments? Why are “virtual assistants,” “learning analytics,” “adaptive learning technologies,” and “robotics” distinct from the overarching category of “artificial intelligence”?). Of course, the Horizon Report dates back to 2004, so this is only a partial look back at its own history. But the graphic still underscores (probably unintentionally) how haphazard the predictions about coming technological developments just might be.
Perhaps part of the problem is a compulsion to always pick something new simply for the sake of newness (for the newness of tech and for the continued relevance and circulation of the Horizon Report itself).
This year, the Horizon Report posits that the “Time to Adoption Horizon” for technologies in higher ed looks something like this:
One Year or Less: Adaptive Learning Technologies; Mobile Learning
Two to Three Years: The Internet of Things; Next-Generation LMS
Four to Five Years: Artificial Intelligence; Natural User Interfaces
Here’s what fourteen years’ worth of predictions look like:
I can’t help but notice that mobile technologies have been one to three years out from widespread adoption since 2006. “Smart objects” (a.k.a. “the Internet of Things”) have been on the horizon since 2009. The LMS is now on the horizon for the very first time, despite being one of the oldest education technology systems out there, with origins in the 1970s and the development of PLATO. And gone from the horizon, these technologies from last year’s report: learning analytics, augmented reality and VR, makerspaces, affective computing, and robotics. Were they adopted? Were they rejected? The report does little to help us understand this.
Those technologies that are supposedly “on the horizon” have long been the primary focus and selling point of the report; but in 2014, it expanded its analysis, identifying the trends that might drive the adoption of education technology.
These are the trends the Horizon Report has identified this year:
One to Two Years: Blended Learning Designs; Collaborative Learning
Three to Five Years: Growing Focus on Measuring Learning; Redesigning Learning Spaces
Five or More Years: Advancing Cultures of Innovation; Deep Learning Approaches
These “trends” strike me as at once ahistorical and utterly meaningless – or even, as I described them in my VCU talk, “not even wrong.” “Measuring learning”? “Collaborative learning”? “Cultures of innovation”? How are these not already deeply intertwined with existing systems and practices of educational institutions? (Or is it, rather, that these are not intertwined in ways that further the ideologies underpinning a certain vision of a technologized future of education?)
The report also identifies certain challenges to ed-tech adoption – solvable, difficult, and wicked challenges – but these too seem to reflect a rather odd set of tests that higher education might face. There’s no mention of Trump and little discussion of state and federal education policies (accreditation, financial aid, for-profit higher education, DACA, Title IX, campus carry, for example). No mention of academic freedom (although, to be fair, there is a brief discussion of adjunctification). There’s very limited discussion of funding (that is, limited to discussion of “funding innovation” and not to funding higher education more broadly or to how students themselves will pay for post-secondary education or personal computing devices and broadband). Education technology in the Horizon Report is almost entirely stripped of politics, a political move in and of itself.
No doubt, I am asking the Horizon Report to do something and to be something that it hasn’t done, that it hasn’t been. But at some point (I hope), instead of a fixation on new technologies purportedly “on the horizon,” ed-tech will need to turn to the political reality here and now.
Thu, 16 Feb 2017 07:01:00 +0000
Ed-Tech In A Time Of Trump
This talk was delivered at the University of Richmond. The full slide deck can be found here.
Thank you very much for inviting me to speak here at the University of Richmond – particularly to Ryan Brazell for recognizing my work and the urgency of the conversations that hopefully my visit here will stimulate.
Hopefully. Funny word that – “hope.” Funny, those four letters used so iconically to describe a Presidential campaign from a young Illinois Senator, a campaign that seems now lifetimes ago. Hope.
My talks – and I guess I’ll warn you in advance if you aren’t familiar with my work – are not known for being full of hope. Or rather I’ve never believed the hype that we should put all our faith in, rest all our hope on technology. But I’ve never been hopeless. I’ve never believed humans are powerless. I’ve never believed we could not act or we could not do better.
There were a couple of days, following our decision about the title and topic of this keynote – “Ed-Tech in a Time of Trump” – when I wondered if we’d even see a Trump presidency. Would some revelation about his business dealings, his relationship with Russia, his disdain for the Constitution prevent his inauguration? We should have been so lucky, I suppose. Hope.
The thing is, I’d still be giving much the same talk, just with a different title. “A Time of Trump” could be “A Time of Neoliberalism” or “A Time of Libertarianism” or “A Time of Algorithmic Discrimination” or “A Time of Economic Precarity.” All of this – from President Trump to the so-called “new economy” – has been fueled to some extent by digital technologies; and that fuel, despite what I think many who work in and around education technology have long believed – have long hoped – is not necessarily (heck, even remotely) progressive.
I’ve had a sinking feeling in my stomach about the future of education technology long before Americans – 26% of them, at least – selected Donald Trump as our next President. I am, after all, “ed-tech’s Cassandra.” But President Trump has brought to the forefront many of the concerns I’ve tried to share about the politics and the practices of digital technologies. I want to state here at the outset of this talk: we should be thinking about these things no matter who is in the White House, no matter who runs the Department of Education (no matter whether we have a federal department of education or not). We should be thinking about these things no matter who heads our university. We should be asking – always and again and again: just what sort of future is this technological future of education that we are told we must embrace?
Of course, the future of education is always tied to its past, to the history of education. The future of technology is inexorably tied to its own history as well. This means that despite all the rhetoric about “disruption” and “innovation,” what we find in technology is a layering onto older ideas and practices and models and systems. The networks of canals, for example, were built along rivers. Railroads followed the canals. The telegraph followed the railroad. The telephone, the telegraph. The Internet, the telephone and the television. The Internet is largely built upon a technological infrastructure first mapped and built for freight. It’s no surprise the Internet views us as objects, as products, our personal data as a commodity.
When I use the word “technology,” I draw from the work of physicist Ursula Franklin who spoke of technology as a practice: “Technology is not the sum of the artifacts, of the wheels and gears, of the rails and electronic transmitters,” she wrote. “Technology is a system. It entails far more than its individual material components. Technology involves organization, procedures, symbols, new words, equations, and, most of all, a mindset.” “Technology also needs to be examined as an agent of power and control,” Franklin insisted, and her work highlighted “how much modern technology drew from the prepared soil of the structures of traditional institutions, such as the church and the military.”
I’m going to largely sidestep a discussion of the church today, although I think there’s plenty we could say about faith and ritual and obeisance and technological evangelism. That’s a topic for another keynote perhaps. And I won’t dwell too much on the military either – how military industrial complexes point us towards technological industrial complexes (and to ed-tech industrial complexes in turn). But computing technologies undeniably carry with them the legacy of their military origins. Command. Control. Communication. Intelligence.
As Donna Haraway argues in her famous “Cyborg Manifesto,” “Feminist cyborg stories have the task of recoding communication and intelligence to subvert command and control.” I want those of us working in and with education technologies to ask if that is the task we’ve actually undertaken. Are our technologies or our stories about technologies feminist? If so, when? If so, how? Do our technologies or our stories work in the interest of justice and equity? Or, rather, have we adopted technologies for teaching and learning that are much more aligned with that military mission of command and control? The mission of the military. The mission of the church. The mission of the university.
I do think that some might hear Haraway’s framing – a call to “recode communication and intelligence” – and insist that that’s exactly what education technologies do and they do so in a progressive reshaping of traditional education institutions and practices. Education technologies facilitate communication, expanding learning networks beyond the classroom. And they boost intelligence – namely, how knowledge is created and shared.
Perhaps they do.
But do our ed-tech practices ever actually recode or subvert command and control? Do (or how do) our digital communication practices differ from those designed by the military? And most importantly, I’d say, does (or how does) our notion of intelligence?
“Intelligence” – this is the one to watch and listen for. (Yes, that’s ironic that “ed-tech in a time of Trump” will be all about intelligence, but hear me out.)
“Intelligence” means understanding, intellectual, mental faculty. Testing intelligence, as Stephen Jay Gould and others have argued, has a long history of ranking and racism. The word “intelligence” is also used, of course, to describe the gathering and assessment of tactical information – information, often confidential information, with political or military value. The history of computing emerges from cryptography, tracking and cracking state secrets. And the word “intelligence” is now used – oh so casually – to describe so-called “thinking machines”: algorithms, robots, AI.
It’s probably obvious – particularly when we think of the latter – that our notions of “intelligence” are deeply intertwined with technologies. “Computers will make us smarter” – you know those assertions. But we’ve long used machines to measure and assess “intelligence” and to monitor and surveil for the sake of “intelligence.” And again, let’s recall Franklin’s definition of technologies includes not just hardware or software, but ideas, practices, models, and systems.
One of the “hot new trends” in education technology is “learning analytics” – this idea that if you collect enough data about students that you can analyze it and in turn algorithmically direct students towards more efficient and productive behaviors, institutions towards more efficient and productive outcomes. Command. Control. Intelligence.
And I confess, it’s that phrase “collect enough data about students” that has me gravely concerned about “ed-tech in a time of Trump.” I’m concerned, in no small part, because students are often unaware of the amount of data that schools and the software companies they contract with know about them. I’m concerned because students are compelled to use software in educational settings. You can’t opt out of the learning management system. You can’t opt out of the student information system. You can’t opt out of required digital textbooks or digital assignments or digital assessments. You can’t opt out of the billing system or the financial aid system. You can’t opt of having your cafeteria purchases, Internet usage, dorm room access, fitness center habits tracked. Your data as a student is scattered across multiple applications and multiple databases, most of which I’d wager are not owned or managed by the school itself but rather outsourced to a third-party provider.
School software (and I’m including K–12 software here alongside higher ed) knows your name, your birth date, your mailing address, your home address, your race or ethnicity, your gender (I should note here that many education technologies still require “male” or “female” and do not allow for alternate gender expressions). It knows your marital status. It knows your student identification number (it might know your Social Security Number). It has a photo of you, so it knows your face. It knows the town and state in which you were born. Your immigration status. Your first language and whether or not that first language is English. It knows your parents’ language at home. It knows your income status – that is, at the K–12 level, if you qualify for a free or reduced lunch and, at the higher ed level, if you qualify for a Pell Grant. It knows if you are a member of a military family. It knows if you have any special education needs. It knows if you were identified as “gifted and talented.” It knows if you graduated high school or passed a high school equivalency exam. It knows your attendance history – how often you miss class as well as which schools you’ve previously attended. It knows your behavioral history. It knows your criminal history. It knows your participation in sports or other extracurricular activities. It knows your grade level. It knows your major. It knows the courses you’ve taken and the grades you’ve earned. It knows your standardized test scores.
Obviously it’s not a new practice to track much of that data, and as such these practices are not dependent entirely on new technologies. There are various legal and policy mandates that have demanded for some time now that schools collect this information. Now we put it in “the cloud” rather than in a manila folder in a locked file cabinet. Now we outsource this to software vendors, many of whom promise that because of the era of “big data” that we should collect even more information about students – all their clicks and their time spent “on task,” perhaps even their biometric data and their location in real time – so as to glean more and better insights. Insights that the vendors will then sell back to the school.
Command. Control. Intelligence.
This is the part of the talk, I reckon, when someone who speaks about the dangers and drawbacks of “big data” turns the focus to information security and privacy. No doubt schools are incredibly vulnerable on the former front. Since 2005, US universities have been the victim of almost 550 data breaches involving nearly 13 million known records. We typically think of these hacks as going after Social Security Numbers or credit card information or something that’s of value on the black market.
The risk isn’t only hacking. It’s also the rather thoughtless practices of information collection, information sharing, and information storage. Many software companies claim that the data that’s in their systems is their data. It’s questionable if much of this data – particularly metadata – is covered by FERPA. As such, student data can be sold and shared, particularly when the contracts signed with a school do not prevent a software company from doing so. Moreover, these contracts often do not specify how long student data can be kept.
In this current political climate – ed-tech in a time of Trump – I think universities need to realize that there’s a lot more at stake than just financially motivated cybercrime. Think Wikileaks’ role in the Presidential election, for example. Now think about what would happen if the contents of your email account were released to the public. President Trump has made it a little bit easier, perhaps, to come up with “worst-case scenarios” when it comes to politically targeted hacks, and we might be able to imagine these in light of all the data that higher ed institutions have about students (and faculty).
Again, the risk isn’t only hacking. It’s amassing data in the first place. It’s profiling. It’s tracking. It’s surveilling. It’s identifying “students at risk” and students who are “risks.”
Several years ago – actually, it’s been five or six or seven now – when I was first working as a freelance tech journalist, I interviewed an author about a book he’d written on big data and privacy. He made one of those casual remarks that you hear quite often from people who work in computing technologies: privacy is dead. He’d given up on the idea that privacy was possible or perhaps even desirable; what he wanted instead was transparency – that is, to know who has your data, what data, what they do with it, who they share it with, how long they keep it, and so on. You can’t really protect your data from being “out there,” he argued, but you should be able to keep an eye on where “out there” it exists.
This particular author reminded me that we’ve counted and tracked and profiled people for decades and decades and decades and decades. In some ways, that’s the project of the Census – first conducted in the United States in 1790. It’s certainly the project of much of the data collection that happens at school. And we’ve undertaken these practices since well before there was “big data” or computers to collect and crunch it. Then he made a comment that, even at the time, I found deeply upsetting. “Just as long as we don’t see a return of Nazism,” he joked, “we’ll be okay. Because it’s pretty easy to know if you’re a Jew. You don’t have to tell Facebook. Facebook knows.”
We can substitute other identities there. It’s easy to know if you’re Muslim. It’s easy to know if you’re queer. It’s easy to know if you’re pregnant. It’s easy to know if you’re Black or Latino or if your parents are Syrian or French. It’s easy to know your political affinities. And you needn’t have given over that data, you needn’t have “checked those boxes” in your student information system in order for the software to develop a fairly sophisticated profile about you.
This is a punch card, a paper-based method of proto-programming, one of the earliest ways in which machines could be automated. It’s a relic, a piece of “old tech,” if you will, but it’s also a political symbol. Think draft cards. Think the slogan “Do not fold, spindle or mutilate.” Think Mario Savio on the steps of Sproul Hall at UC Berkeley in 1964, insisting angrily that students not be viewed as raw materials in the university machine.
The first punch cards were developed to control the loom, industrializing the craft of weaving, around 1725. The earliest design – a paper tape with holes punched in it – was improved upon until the turn of the 19th century, when Joseph Marie Jacquard first demonstrated a mechanism to automate loom operation.
Jacquard’s invention inspired Charles Babbage, often credited with originating the idea of a programmable computer. A mathematician, Babbage believed that “number cards,” “pierced with certain holes,” could operate the Analytical Engine, his plans for a computational device. “We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves,” Ada Lovelace, Babbage’s translator and the first computer programmer, wrote.
But it was Herman Hollerith who invented the recording of data on this medium so that it could then be read by a machine. Earlier punch cards – like those designed by Jacquard – were used to control the machine. They weren’t used to store data. But Hollerith did just that. The first Hollerith card had 12 rows and 9 columns, and data was recorded by the presence or absence of a hole at a specific location on a card.
Hollerith founded The Tabulating Machine Company in 1896, one of four companies consolidated to form Computing-Tabulating-Recording Company, later renamed the International Business Machines Corporation. IBM.
Hollerith’s punch card technology was first used in the US Census in 1890 to record individuals’ traits – their gender, race, nationality, occupation, age, marital status. These cards could then be efficiently sorted to quantify the nation. The Census Office was thrilled: it had taken almost a decade to tabulate the results of the 1880 census, and by using the new technology, the agency saved $5 million.
Hollerith’s machines were also used by Nicholas II, the czar of Russia, for the first (and only) census of the Russian Empire in 1897. And they were adopted by Hitler’s regime in Germany. As Edwin Black chronicles in his book IBM and the Holocaust,
When Hitler came to power, a central Nazi goal was to identify and destroy Germany’s 600,000-member Jewish community. To Nazis, Jews were not just those who practiced Judaism, but those of Jewish blood, regardless of their assimilation, intermarriage, religious activity, or even conversion to Christianity. Only after Jews were identified could they be targeted for asset confiscation, ghettoization, deportation, and ultimately extermination. To search generations of communal, church, and governmental records all across Germany – and later throughout Europe – was a cross-indexing task so monumental, it called for a computer. But in 1933, no computer existed.
What did exist at the time was the punch card and the IBM machine, sold to the Nazi government by the company’s German subsidiary, Dehomag.
Hitler’s regime made it clear from the outset that it was not interested in merely identifying those Jews who claimed religious affiliation, who said that they were Jewish. It wanted to be able to find those who had Jewish ancestry, Jewish “blood,” those who were not Aryan.
Hitler called for a census in 1933, and Germans filled out the census on pen and paper – one form per household. There was a census again in 1939, and as the Third Reich expanded, so did the Nazi compulsion for data collection. Census forms were coded and punched by hand and then sorted and counted by machine. IBM punch cards and IBM machines. During its relationship with the Nazi regime – one lasting throughout Hitler’s rule, throughout World War II – IBM derived about a third of its profits from selling punch cards.
Column 22 on the punch card was for religion – punched at hole 1 to indicate Protestant, hole 2 for Catholic, hole 3 for Jew. The Jewish cards were processed separately. The cards were sorted and indexed and filtered by profession, national origin, address, and other traits. The information was correlated with other data – community lists, land registers, medical information – in order to create a database, “a profession-by-profession, city-by-city, and indeed a block-by-block revelation of the Jewish presence.”
It was a database of inference, relying heavily on statistics alongside those IBM machines. This wasn’t just about those who’d “ticked the box” that they were Jewish. Nazi “race science” believed it could identify Jews by collecting and analyzing as much data as possible about the population. “The solution is that every interesting feature of a statistical nature … can be summarized … by one basic factor,” the Reich Statistical Office boasted. “This basic factor is the Hollerith punch card.”
Command. Control. Intelligence.
The punch card and the mechanized processing of its data were used to identify Jews, as well as Roma and other “undesirables” so they could be imprisoned, so their businesses and homes could be confiscated, so their possessions could be inventoried and sold. The punch card and the mechanized processing of its data was used to determine which “undesirables” should be sterilized, to track the shipment of prisoners to the death camps, and to keep tabs on those imprisoned and sentenced to die therein. All of this recorded on IBM punch cards. IBM machines.
The CEO of IBM at this time, by the way: Thomas Watson. Yes, this is who IBM has named their “artificial intelligence” product Watson after. IBM Watson, which has partnered with Pearson and with Sesame Street, to “personalize learning” through data collection and data analytics.
Now a quick aside, since I’ve mentioned Nazis.
Back in 1990, in the early days of the commercialized Internet, those heady days of Usenet newsgroup discussion boards, attorney Mike Godwin “set out on a project in memetic engineering.” Godwin felt as though comparisons to Nazis occurred too frequently in online discussions. He believed that accusations that someone or some idea was “Hitler-like” were thrown about too carelessly. “Godwin’s Law,” as it came to be known, says that “As an online discussion grows longer, the probability of a comparison involving Hitler approaches 1.” Godwin’s Law has since been invoked to decree that once someone mentions Hitler or Nazis, that person has lost the debate altogether. Pointing out Nazism online is off-limits.
Perhaps we can start to see now how dangerous, how damaging to critical discourse this even rather casual edict has been.
Let us remember the words of Supreme Court Justice Robert Jackson in his opening statement for the prosecution at the Nuremberg Trials:
What makes this inquest significant is that these prisoners represent sinister influences that will lurk in the world long after their bodies have returned to dust. We will show them to be living symbols of racial hatreds, of terrorism and violence, and of the arrogance and cruelty of power. … Civilization can afford no compromise with the social forces which would gain renewed strength if we deal ambiguously or indecisively with the men in whom those forces now precariously survive.
We need to identify and we need to confront the ideas and the practices that are the lingering legacies of Nazism and fascism. We need to identify and we need to confront them in our technologies. Yes, in our education technologies. Remember: our technologies are ideas; they are practices. Now is the time for an ed-tech antifa, and I cannot believe I have to say that out loud to you.
And so you hear a lot of folks in recent months say “read Hannah Arendt.” And I don’t disagree. Read Arendt. Read The Origins of Totalitarianism. Read her reporting from the Nuremberg Trials.
But also read James Baldwin. Also realize that this politics and practice of surveillance and genocide isn’t just something we can pin on Nazi Germany. It’s actually deeply embedded in the American experience. It is part of this country as a technology.
Let’s think about that first US census, back in 1790, when federal marshals asked for the name of each head of household as well as the numbers of household members who were free white males over age 16, free white males under 16, free white females, other free persons, and slaves. In 1820, the categories were free white males, free white females, free colored males and females, and slaves. In 1850, the categories were white, Black, Mulatto, Black slaves, Mulatto slaves. In 1860, white, Black, Mulatto, Black slaves, Mulatto slaves, Indian. In 1870, white, Black, Mulatto, Indian, Chinese. In 1890, white, Black, Mulatto, Quadroon, Octoroon, Indian, Chinese, Japanese. In 1930, white, Negro, Indian, Chinese, Japanese, Filipino, Korean, Hindu, Mexican.
You might see in these changing categories a changing demographic; or you might see this as the construction and institutionalization of categories of race – particularly race set apart from a whiteness of unspecified national origin, particularly race that the governing ideology and governing system wants identified and wants managed. The construction of Blackness. “Census enumeration is a means through which a state manages its residents by way of formalized categories that fix individuals within a certain time and a particular space,” as Simone Browne writes in her book Dark Matters: On the Surveillance of Blackness, “making the census a technology that renders a population legible in racializing as well as gendering ways.” It is “a technology of disciplinary power that classifies, examines, and quantifies populations.”
Command. Control. Intelligence.
Does the data collection and data analysis undertaken by schools work in a similar way? How does the data collection and data analysis undertaken by schools work? What bodies and beliefs are constituted therein? Is whiteness and maleness always there as “the norm” against which all others are compared? Are we then constructing and even naturalizing certain bodies and certain minds as “undesirable” bodies and “undesirable” minds in the classroom, in our institutions by our obsession with data, by our obsession with counting, tracking, and profiling?
Who are the “undesirables” of ed-tech software and education institutions? Those students who are identified as “cheats,” perhaps. When we turn the cameras on, for example with proctoring software, those students whose faces and gestures are viewed – visually, biometrically, algorithmically – as “suspicious.” Those students who are identified as “out of place.” Not in the right major. Not in the right class. Not in the right school. Not in the right country. Those students who are identified – through surveillance and through algorithms – as “at risk.” At risk of failure. At risk of dropping out. At risk of not repaying their student loans. At risk of becoming “radicalized.” At risk of radicalizing others. What about those educators at risk of radicalizing others? Let’s be honest with ourselves: ed-tech in a time of Trump will undermine educators as well as students; it will undermine academic freedom. It’s already happening. Trump’s tweets this morning about Berkeley.
What do schools do with the capabilities of ed-tech as surveillance technology now in a time of Trump? The proctoring software and learning analytics software and “student success” platforms all market themselves to schools claiming that they can truly “see” what students are up to, that they can predict what students will become. (“How will this student affect our averages?”) These technologies claim they can identify a “problem” student, and the implication, I think, is that then someone at the institution “fixes” her or him. Helps the student graduate. Convinces the student to leave.
But these technologies do not see students. And sadly, we do not see students. This is cultural. This is institutional. We do not see who is struggling. And let’s ask why we think, as the New York Times argued today, we need big data to make sure students graduate. Universities have not developed or maintained practices of compassion. Practices are technologies; technologies are practices. We’ve chosen computers instead of care. (When I say “we” here I mean institutions, not individuals within institutions. But I mean some individuals too.) Education has chosen “command, control, intelligence.” Education gathers data about students. It quantifies students. It has adopted a racialized and gendered surveillance system – one committed to disciplining minds and bodies – through our education technologies, through our education practices.
All along the way, or perhaps somewhere along the way, we have confused surveillance for care.
And that’s my takeaway for folks here today: when you work for a company or an institution that collects or trades data, you’re making it easy to surveil people and the stakes are high. They’re always high for the most vulnerable. By collecting so much data, you’re making it easy to discipline people. You’re making it easy to control people. You’re putting people at risk. You’re putting students at risk.
You can delete the data. You can limit its collection. You can restrict who sees it. You can inform students. You can encourage students to resist. Students have always resisted school surveillance.
But I hope that you also think about the culture of school. What sort of institutions will we have in a time of Trump? Ones that value open inquiry and academic freedom? I swear to you this: more data will not protect you. Not in this world of “alternate facts,” to be sure. Our relationships to one another, however, just might. We must rebuild institutions that value humans’ minds and lives and integrity and safety. And that means, in its current incarnation at least, in this current climate, ed-tech has very very little to offer us.
Thu, 02 Feb 2017 07:01:00 +0000
What Happened In Ed-Tech In 2016 (And Who Paid For It)?
Here is a list of all the articles I wrote as part of my look at the “Top Ed-Tech Trends” of the year.
“Trends” is perhaps the wrong word here. These are my observations about what’s happened in education technology (and education more broadly) over the course of the past 12 months. This project – something I’ve done every year since 2010 – aims to serve as an in-depth analysis of the noteworthy events and products and politics and financing and tries to piece together the narratives and ideologies that drive ed-tech.

Education Technology and the Year of Wishful Thinking
The Politics of Education Technology
The Business of Education Technology
Education Technology and the Promise of “Free” and “Open”
Education Technology and the “New” For-Profit Higher Education
Education Technology and the “New Economy”
Education Technology and the History of the Future of Credentialing
Education Technology and Data Insecurity
Education Technology and the Ideology of Personalization
Education Technology’s Inequalities
This year, I also published a number of supplemental articles detailing the funding for each of these “trends”:

How Much Venture Capital Did Ed-Tech Raise in 2016?
Who Were 2016’s Most Active Ed-Tech Investors?
Who’s Funding Tutoring Startups?
Who’s Funding Test Prep Startups?
Who’s Funding Ed-Tech in Africa?
Who’s Funding Predictive Analytics in Education?
Who’s Funding ‘Personalized Learning’ Startups?
Who’s Funding ‘Character Education’ Startups?
Who’s Funding Testing and Test Monitoring Startups?
Who’s Funding Startups that ‘Monitor’ Schools and Students?
Who’s Funding the Blockchain in Education?
Who’s Funding ‘Credentialing’ Startups?
Who’s Funding Corporate Training Startups?
Who’s Funding Job Placement Startups?
Who’s Funding Learn-to-Code Startups?
Who’s Funding Student Loan Startups?
Who’s Funding MOOCs in 2016?
Who’s Funding Virtual Reality Startups?
Mark Zuckerberg’s Education Investment Portfolio
The Emerson Collective’s Education Investment Portfolio
Who Received Gates Foundation Grants in 2016?
Education Companies and the Stock Market
Public Funding for Education in 2016
The 2016 Ed-Tech Dead Pool
No one else writes these sorts of reviews of ed-tech. No one. A reminder: this site is not funded by ads or venture capitalists or corporations or philanthropic organizations – it’s supported by individual readers. You can donate via PayPal or support me via Patreon.

Icon credits: The Noun Project

Sat, 31 Dec 2016 07:01:00 +0000
The Curse Of The Monsters Of Education Technology
My latest book is now available for purchase.
The Curse of the Monsters of Education Technology is the latest in my “monsters of ed-tech” series – a sequel to The Monsters of Education Technology (2014) and The Revenge of the Monsters of Education Technology (2015). Like those two books, this new one is a collection of all the keynotes and talks I delivered in 2016 – seven altogether.
E-book versions are available for purchase for $4.99 via the usual online retailers: Amazon and Smashwords. Even better (as far as my royalties go at least): you can buy from me directly via Gumroad.
Coming soon: print and audio versions.
As always, thanks for supporting my work.

Tue, 27 Dec 2016 07:01:00 +0000
Education Technology's Inequalities
This is part ten of my annual review of the year in ed-tech.
The richest 1% now possess as much wealth as the rest of the world combined.
That was the conclusion of an Oxfam report issued in January. It was a slogan of the Occupy Movement too, of course, one reprised this year in the Presidential campaign of Senator Bernie Sanders, who would frequently repeat that “Now is the time to create a government which represents all Americans and not just the 1%.”
It doesn’t look as though we’ve done that, sadly. “Trump’s 17 cabinet-level picks have more money than a third of American households combined,” according to Quartz.
Income inequality continues to grow – both within nations and globally – and it poses a grave risk for democracy and for the environment.
The American Dream, a phrase invented during the Great Depression, feels more and more out of reach for more and more Americans, as people increasingly make less money than their parents did.
One of the mantras of that dream – the idea that economic success is possible if not inevitable – involves the necessity of education. “Education can be the difference, that education can save lives, that education can put folks on a path to opportunity,” Secretary of Education John B. King Jr. told the students at Milwaukee Area Technical College’s graduation ceremony in May. But it’s a “false promise,” Jacobin’s David I. Backer contends. As economic inequality has grown, so has schooling: “United States citizens are more educated than they ever have been. More people have graduated from more kinds of schools than at any point in history.”
Indeed, rather than a “silver bullet,” education often serves to reinforce inequalities. Sixty-two years after Brown v Board of Education, segregation is worsening – in neighborhood schools, at elite schools, at charters. This comes as the majority of students in the US public school system are now students of color. (The majority of teachers are still white.)
Data released in June by the Department of Education’s Office for Civil Rights highlighting the ongoing disparities – between the experiences of white students and students of color, between the experiences of affluent students and low-income students, and between the experiences of students with disabilities and those without – serves to underscore the systematic failure to provide equitable education at the preschool and K–12 levels in the US.
Black preschool children, for example, are 3.6 times more likely to be suspended than white preschool children. Black K–12 students are 3.8 times more likely to be suspended than white students. Students with disabilities are more than twice as likely to be suspended than students without disabilities. Black students are 1.9 times more likely to be expelled than white students. Black students are 2.2 times more likely to be referred to law enforcement than white students. Charter schools, according to a study based on this OCR data, have an even higher rate of suspending Black students and students with disabilities. And some charters have been charged with purposefully refusing to enroll certain students – a violation of the law.
Black, Latino, and Native students are less likely to have access to high-level math and science courses. They are underrepresented in gifted and talented programs. They are underrepresented in AP courses. Black, Latino, and Native students are more likely to attend schools with high concentrations of inexperienced teachers. They’re more likely to attend schools where teachers have not met all state certification requirements. 87% of white students graduate on time; 76% of Latino students and 73% of Black students do. Native American students have the worst graduation rates in the country, particularly those attending schools run by the Bureau of Indian Affairs. And while white, Black, and Latino students enroll in college after graduation at roughly the same rate, students of color are much less likely to graduate with a Bachelor’s degree in six years or less. “That disparity hints at the large enduring difference in the quality of the K–12 preparation many minority students are receiving,” writes Ronald Brownstein in The Atlantic.
The inequalities of K–12 education extend into higher ed, exacerbated by high tuition, inadequate financial aid, and admissions policies that privilege white and affluent students. (Take, for example, ProPublica’s article on Donald Trump’s son-in-law and “consigliere”: “The Story Behind Jared Kushner’s Curious Acceptance into Harvard.”)
This fall, Georgetown University announced its plans to “atone for its slave past” – like many universities, the Jesuit-run institution had a long connection to the slave trade, selling 272 men, women, and children in 1838 to pay off its debts. The university said it would begin offering preferential admission status – like the children of alumni already receive – to the descendants of the slaves owned by the university. It was a gesture that The Atlantic’s Adrienne Green said “falls short,” and it certainly does not count as reparations according to sociologist Tressie McMillan Cottom, who argues reparations must contain three components: “acknowledgement, restitution, and closure.”
The idea that preferred admission equals payment stems from the American ideology that opportunity, especially educational opportunity, is a “fair” form of recompense. Opportunity has a moral basis: It will only be valuable for those who deserve it and will not inconvenience or harm those who already have the opportunity (whether they deserve it or not). Our society likes opportunity because it does not demand redistribution of resources acquired through harm. As you can tell, I’m not a fan of this logic. But even if I were, preferred admission doesn’t equate to much of an opportunity.
Preferred admissions gives a narrowly defined group of black descendants a chance to compete for achievements that are defined by accumulated disadvantage. The chance to be preferred in admissions to Georgetown still relies on racial differences in college preparation, racial wealth, and income gaps that condition the ability to pay college tuition, and racial gaps in knowledge about competitive college admissions. Preferential admissions says if you somehow manage to navigate all those other legacies of slavery – wealth disparities, income disparities, information disparities – then we will give you additional consideration in admissions. That is generous when judged by how little other universities have done but it is not much of an opportunity and it isn’t a form of payment at all.
College campuses have become much more diverse over the past few decades, true, but these institutions remain insensitive, unwelcoming, and hostile to students and faculty of color, to students and faculty with disabilities, to queer students and faculty, and to women.
The Office for Civil Rights said it received a record number of complaints this year – a 61% increase from last year. The number of reports regarding sexual assault on college campuses increased 831%; complaints regarding web accessibility for persons with disabilities were up 511%; complaints involving the restraint or seclusion of students with disabilities increased 100%; and complaints involving harassment on the basis of race or national origin increased by 17%. Teaching Tolerance released a report on the increased harassment and bullying witnessed this year – something it tied directly to the Trump campaign: 90% of K–12 educators that the organization surveyed said that their school climate had been adversely affected by the racist, nationalist, sexist rhetoric of the Presidential campaign. Reports of hate crimes and racist graffiti spiked on school campuses across the country following Trump’s election.
Will President Trump make all these educational inequalities worse? Certainly there are serious concerns about his choice for Secretary of Education, Betsy DeVos, and the overwhelmingly negative impact that her political influence has had on Michigan schools, particularly for students in low-income urban schools. There are also fears that the Trump administration will be less likely to enforce Title IX and might scrap the Office for Civil Rights altogether. Furthermore, his promise to deport undocumented immigrants has schools scrambling to plan for how they will protect their students.
I also want to consider that, with or without a President Trump, education technology might make things worse, might contribute to these ongoing inequalities – and not simply because many in ed-tech seem quite eager to work with the new administration.

Don’t Believe “Don’t Be Evil”
My own concerns about the direction of education technology cannot be separated from my concerns with digital technologies more broadly. I’ve written repeatedly about the ideologies of Silicon Valley: neoliberalism, libertarianism, imperialism, late stage capitalism. These ideologies permeate education technology too, as often the same investors and same entrepreneurs and the same engineers are involved.
As I wrote in my article on the “new economy,” automation, so we’re told, is poised to reshape “work.” It has reshaped work. None of this has played out equitably: the benefits have accrued to management, not to labor. Indeed, the World Bank issued a report in January arguing that digital technologies – not just robots in factories – stand to widen inequalities as well, “and even hasten the hollowing out of middle-class employment.” While new technologies are spreading rapidly, the “digital dividends – growth, jobs and services – have lagged behind.” As venture capitalist Om Malik wrote on the eve of 2016, “In Silicon Valley Now, It’s Almost Always Winner Takes All.” Money and data – they’re intertwined for technology companies – are monopolized in a handful of corporate giants.
The technology industry – its products and its politics – furthers inequality, particularly in its own backyard in the Bay Area. Its high profile executives then have the audacity to claim that reality – human suffering – is merely a simulation. Or they say they’re prepared to leave Earth and colonize Mars. Or they back Donald Trump.
Trump, for his part, indicated on the campaign trail he might be interested in creating a registry to track Muslims’ whereabouts in the country; and while some technology companies and tech workers have sworn they would never participate in building a database to do this, no doubt, the metadata to identify us and track us – by our religion, by our sexual identity, by our race, by our political preferences – already exists in these companies’ and in the government’s hands. Trump will soon have vast surveillance powers – thanks in part to technology companies like Palantir, thanks in part to expanded NSA surveillance, authorized by President Obama – under his control.
Meanwhile, schools and education companies have also expanded their surveillance of students and faculty, with little concern, it seems, for how politically regressive all this data-mining and algorithmic decision-making might actually be.

Inequality and the “Top Ed-Tech Trends”
The inequalities that I’ve chronicled above – income inequality, wealth inequality, information inequality – have been part of our education system for generations, and these are now being hard-coded into our education technologies. This is apparent in every topic in every article I’ve written in this year’s year-end series: for-profit higher education, surveillance in the classroom, and so on.
These inequalities are apparent in the longstanding biases that are found in standardized testing, for example, often proxies for “are you rich?” and “are you white?” and “are you male?” Despite all the Common Core-aligned revisions and all the headlines to the contrary, “The New SAT Won’t Close the Achievement Gap.” (Shocking, I know.) In fact, according to Reuters, the College Board has redesigned the SAT in ways “that may hurt neediest students.”
Ed-tech’s inequalities are evident too in the results, in many cases, of moving standardized testing from pencil-and-paper to computer. Scores for some students who took their PARCC exams on computers were lower – lower in Rhode Island and lower in Maryland, for example.
There were also significant gaps on a new NAEP exam administered this year, one measuring “technology and engineering literacy”: “Students whose families are so poor that they qualify for free or reduced-price lunch scored 28 points lower, on average, than students from more affluent families. The gap between black and white students was even more pronounced, with 56 percent of white students scoring at or above ‘proficient’ and just 18 percent of black students meeting that bar,” Chalkbeat reported in May. (Girls, for what it’s worth, out-performed boys.) Another study conducted by the Department of Education found that using computers widens the “achievement gap” between high-performing and low-performing students. The latter group, which is more likely to be comprised of Black, Latino, and low-income students, performed better on writing assessments when writing with pencil and paper.
This “gap” seems to extend to online courses too. A study from Northwestern University, for example, found that “high-achieving North Carolina 8th graders who took Algebra 1 online performed worse than similar students who took the course in a traditional classroom.” A study from the American Institutes for Research found that “students working online were 10 percentage points less likely to pass than the students randomly assigned to take the course face-to-face – 66 percent compared with 76 percent.” A report issued by the National Education Policy Center confirmed what we’ve known for some time now – that students at virtual schools fare very poorly – but added that students at blended schools (those that combine face-to-face and online instruction) are struggling as well, with 77% of the blended schools the NEPC reviewed performing below state averages. And this problem exists at the college level too. Research from the Public Policy Institute of California found that students in the state’s community college system are 10 to 14% less likely to pass a class when they take it online – but there’s an “online paradox,” according to The Chronicle of Higher Education, because students who successfully complete at least one online course are 25% more likely to graduate than those who only take classes face-to-face.
Despite the serious flaws in online and blended learning, many education technology advocates continue to push for more and more education technology, and Silicon Valley investors in turn continue to fund the expansion of use of these products, particularly in low-income schools, in the US as well as in the developing world.
As I wrote in the first article in this series, one of the latter companies, Bridge International Academies, was poised to take over Liberia’s public school system. Bridge International – funded by the Chan Zuckerberg Initiative, the Gates Foundation, the Omidyar Network, and others – is a private school startup that hires teachers to read scripted lessons from a tablet that in turn tracks students’ assessments and attendance – as well as teachers’ own attendance. Expansion of Bridge International Academies has been controversial, and the Ugandan government ordered all BIA schools there to close their doors. Other companies with similar models: Spark Schools, which raised $9 million this year from the Omidyar Network and Pearson, and APEC, also funded by Pearson. In April, journalist Anya Kamenetz looked closely at “Pearson’s Quest to Cover the Planet in Company-Run Schools”: “Pearson would like to become education’s first major conglomerate, serving as the largest private provider of standardized tests, software, materials, and now the schools themselves.”
Whether it’s selling schools or MOOCs or access to the Internet itself, technology companies and education companies are, as Edsurge put it, “Building Effective Edtech Business Models to Reach the Global Poor.” Whether or not the education itself is “effective,” let alone equitable, is another question altogether.
Data about who’s funding the expansion of private schools in the developing world can be found on funding.hackeducation.com.

From the “Digital Divide” to “Digital Redlining”
Discussions about education technology (and new digital technologies more generally) were, for many years, framed in terms of the “digital divide” – that is, the gap between those who have access to computers and to the Internet and those who do not. It’s a gap resulting from a variety of factors, including socioeconomic status, race, age, and geographic location.
Community college professors Chris Gilliard and Hugh Culik contend that there’s a “growing sense that digital justice isn’t only about who has access but also about what kind of access they have, how it’s regulated, and how good it is.”
We need to understand how the shape of information access controls the intellectual (and, ultimately, financial) opportunities of some college students. If we emphasize the consequences of differential access, we see one facet of the digital divide; if we ask about how these consequences are produced, we are asking about digital redlining. The comfortable elision in “edtech” is dangerous; it needs to be undone by emphasizing the contexts, origins, aims, and ideologies of technologies.
Sociologist Tressie McMillan Cottom, briefly banned from Facebook for not using her real name on the site, argues that,
This kind of stratified access to information and participation in digitally-mediated social interactions isn’t just about who can post cat memes and who is denied.
As Facebook itself had to admit this week, its platform has become a central means for distributing access to favorable information about jobs, housing, banking, and financial resources.
Being othered on Facebook increasingly means being relegated to unfavorable information schemes that shape the quality of your life.
How do digital redlining and these “unfavorable information schemes” permeate education technology – in its implementation and in its very design?

Discrimination by Design
Discriminatory practices can be “hard-coded” into education technologies through the data they collect and how they label and model that data. Information systems that offer only two choices for sex or gender, for example, fail to accommodate transgender students – and violate Title IX, according to the Department of Education. This year, the Department of Education also encouraged schools to stop asking applicants about their criminal histories, and while some researchers have sought the collection of data about students’ sexual orientation – ostensibly to identify discrimination – there are concerns about how this information might easily be used against LGBTQ students.
Harassment is pervasive online, but harassment and cyberstalking are not experienced equally by everyone. A report by Data & Society issued this fall found that 47% of American Internet users say they’ve personally experienced online harassment or abuse. 72% say they’ve witnessed online harassment or abuse. “Internet users ages 15–29, Black internet users, and those who identify as lesbian, gay, or bisexual are all more likely to witness online harassment,” and LGB Internet users are more than twice as likely to experience harassment online than their straight peers. Black and LBG Internet users were more likely to say that people online are “mostly unkind.”
“Mostly unkind” – and yet education technology (and digital technologies more generally) demands students and faculty be online.
Discriminatory practices online are certainly a reflection of discriminatory practices offline, but it’s important to recognize how these become part of the technological infrastructure, part of the code, in ways that are both subtle and overt. Harassment in virtual reality. Harassment using annotation tools.
These new technologies are designed (predominantly) by white, able-bodied, English-speaking heterosexual men from the global north – designed by men for men.
“Just use your initials online instead of your name,” was one venture capitalist’s advice to women this year.
I don’t want to overlook two of those descriptors above: English-speaking and able-bodied. 53% of the World Wide Web is in English. The majority of programming languages are in English. (English-language learning software has long had a large market globally, and venture capitalists seem keen to fund companies that offer these products to K–12 schools as the number of ELL students grows.) What sorts of biases are built into digital technologies because of this?
What sorts of discriminatory practices are we reinstating and reinforcing online?
Despite the requirements of the Americans with Disabilities Act, much education technology remains inaccessible. This includes software, digital content, and websites. There were several lawsuits this year demanding schools and their technology vendors comply with the law.
UC Berkeley, on the other hand, announced in September that it “may eliminate free online content rather than comply with a U.S. Justice Department order that it make the content accessible to those with disabilities.” The material involved MOOCs that it had produced with edX as well as videos posted to iTunes and YouTube. MOOCs. “Free and open.” “In many cases,” the university said, “the requirements proposed by the department would require the university to implement extremely expensive measures to continue to make these resources available to the public for free.”
So instead, it opted to pull them offline altogether.

Predictive Analytics and Algorithmic Discrimination
In January, the student newspaper at Mount St. Mary’s University in Maryland reported that the school’s president had a plan to push out students at risk of dropping out in the first few weeks of class. Doing so early in the semester would mean these students would not count against the university’s retention rate. The paper recounted a conversation the president reportedly had with faculty, encouraging them to rethink their approach to struggling students: “This is hard for you because you think of the students as cuddly bunnies, but you can’t. You just have to drown the bunnies ... put a Glock to their heads.” President Simon Newman, a former private equity CEO, said it was “immoral” to keep struggling students enrolled.
Education technology companies now promise that they can help schools identify these struggling students, through an algorithmic assessment of who’s at risk. These systems weigh a variety of data: standardized test scores, grades, attendance, gender, marital status, age, military service, learning management system log-ins, and “digital footprint.” “Digital footprint” – that is, all manner of students’ online behaviors might be tracked by this software, purportedly “for their own good.”
Predictive analytics like this are supposed to help guide schools so they can offer support services – ideally, better and more responsive services – to struggling students, keeping them enrolled and on a path to graduation. Or, no doubt, predictive analytics can help identify those “drowning bunnies” that must be eliminated.
In a report released this fall titled “The Promise and Peril of Predictive Analytics in Higher Education,” New America’s Manuela Ekowo and Iris Palmer cautioned that,
Predictive models can discriminate against historically underserved groups because demographic data, such as age, race, gender, and socioeconomic status, are often central to their analyses. Predictive tools can also produce discriminatory results because they include demographic data that can mirror past discrimination included in historical data. For example, it is possible that the algorithms used in enrollment management always favor recruiting wealthier students over their less affluent peers simply because those are the students the college has always enrolled.
Discrimination, labeling, and stigma can manifest in different ways depending on how colleges use these algorithms. For instance, colleges that use predictive analytics in the enrollment management process run a serious risk of disfavoring low-income and minority students, no matter how qualified these individuals are for enrollment. Predictive models that rely on demographic data like race, class, and gender or do not take into account disparate outcomes based on demographics may entrench disparities in college access among these groups.
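The mechanism the report describes can be made concrete with a small, hypothetical sketch: a model "trained" on past enrollment decisions simply reproduces whatever demographic skew those decisions contained, even while looking perfectly "accurate" against its own historical data. (All of the data and names below are invented for illustration; this is not any vendor's actual system.)

```python
# Hypothetical illustration: a "predictive" enrollment model fit to biased
# historical admissions decisions mirrors that bias in its scores.
from collections import defaultdict

# Invented historical record: (income_group, was_admitted)
history = (
    [("affluent", True)] * 80 + [("affluent", False)] * 20 +
    [("low_income", True)] * 30 + [("low_income", False)] * 70
)

# "Training" step: learn the historical admit rate for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [admits, total]
for group, admitted in history:
    counts[group][0] += int(admitted)
    counts[group][1] += 1

def predicted_score(group):
    """Score a new applicant by their group's historical admit rate."""
    admits, total = counts[group]
    return admits / total

# The model now rates any low-income applicant below any affluent one,
# regardless of individual qualifications: past discrimination, mirrored.
print(predicted_score("affluent"))    # 0.8
print(predicted_score("low_income"))  # 0.3
```

Real enrollment-management models are far more elaborate, but the failure mode is the same: when demographic variables (or their proxies) carry predictive weight, the model's "optimal" recommendation is to keep recruiting the students the college has always enrolled.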
Furthermore, predictive analytics, recommendation engines, and other analytics software might keep some students enrolled – and that’s a boon to schools’ bottom lines – but it might also steer them into courses that are less intellectually challenging.
Data about who’s funding predictive analytics in education can be found on funding.hackeducation.com.
“We will literally predict their life outcomes,” claims one scientist. Another group of researchers says they can predict “which children will grow up to be a drain on society – when they are just three years old.” Others say they’re working on the nascent field of “educational genomics” to predict students’ strengths and weaknesses and, of course, “personalize” their education.
While much of this sounds like (dystopian) futuristic science fiction, predictive analytics are currently being used to identify students who might be suicidal and those who might develop “extremist” political beliefs or are at risk for “radicalization.”
Law enforcement increasingly uses predictive analytics to identify future criminals, and courts are using predictive analytics to determine sentencing. These systems have a demonstrable bias against African-Americans, and some of them admittedly use facial features to identify criminality. Phrenology 2.0. Schools work with these companies, handing over student data in the process. The significance of this relationship between schools and law enforcement cannot be overstated, and a study released this fall found that “campuses with larger populations of students of color are more likely to use harsh surveillance techniques.”
In China, credit scores will be determined using people’s Web browsing history, and again, we shouldn’t just dismiss this as something from another time or another place. In the US, loan companies are already starting to use analytics to determine loan eligibility, and there’s talk of expanding the type of data that’s used to determine student loan eligibility as well.
Predictive analytics are being utilized in hiring decisions, testing job candidates for “culture fit.” (Code for “white guy.”) MIT professors Erik Brynjolfsson and John Silberholz have called for “moneyball for professors,” using analytics to determine tenure. One education technology startup claims it’s devised a proprietary screening tool that can “accurately predict whether a prospective hire will be an effective teacher, and more specifically whether they will be able to boost students’ test scores.”
There’s no research to back up these claims. And when the software is proprietary, there’s little chance one can examine the algorithms in play.
That’s a problem with all these algorithms – we can’t see them, we can’t evaluate them, and we can’t verify their “accuracy.” In April, high school students in France demanded to know what powers the algorithm that’s used to dictate their post-baccalaureate education options. Everyone should know how these sorts of decisions are being made for them. ProPublica, for its part, has published a series of stories this year “breaking the black box” and investigating algorithmic decision-making, noting how these often function as “discrimination by design.”
There remains very little insight and very little accountability in these algorithms, particularly in education. And, based on what we know about institutional and corporate biases, there is every reason to believe that these algorithms are exacerbating educational inequalities.
Education Technology and Digital Polarization
We trust algorithms to make more and more decisions for us, often quite uncritically. Whose values and interests are actually reflected in these algorithms?
Algorithms dictate much of what we see (and what we don’t see and who sees what) online, the news and media we consume – whether on Facebook, or Google, or Amazon, or Twitter, or Netflix. How algorithms shape new information technologies will have profound effects on education, on knowledge – and on democracy.
We saw hints of this, no doubt, in this year’s US Presidential election, although the malaise is much deeper and broader than one electoral event. “Fake news.” Red feeds versus blue feeds. “Post-truth.” Information warfare. The fragmentation of knowledge. Distraction. Expertise, trumped. Digital polarization. It’s “personalization,” we’re told, and we’re supposed to like it.
I’ll close here with words from Maciej Cegłowski, who runs the bookmarking site Pinboard, speaking at the SASE conference in June on “The Moral Economy of Tech”:
The first step towards a better tech economy is humility and recognition of limits. It’s time to hold technology politically accountable for its promises. I am very suspicious of attempts to change the world that can’t first work on a local scale. If after decades we can’t improve quality of life in places where the tech élite actually lives, why would we possibly make life better anywhere else?
We should not listen to people who promise to make Mars safe for human habitation, until we have seen them make Oakland safe for human habitation. We should be skeptical of promises to revolutionize transportation from people who can’t fix BART, or have never taken BART. And if Google offers to make us immortal, we should check first to make sure we’ll have someplace to live.
Techies will complain that trivial problems of life in the Bay Area are hard because they involve politics. But they should involve politics. Politics is the thing we do to keep ourselves from murdering each other. In a world where everyone uses computers and software, we need to exercise democratic control over that software.
I recognize that many people are committed to the belief that the adoption of education technology means “progress.” But it isn’t necessarily politically progressive. At all. We must understand how education technology, in its current manifestation, might actually serve to reinforce education’s longstanding inequalities.
We must consider too, as we move into a new year with a new President, that it might also be – algorithmically, financially, culturally – profoundly anti-democratic.
Financial data on the major corporations and investors involved in this and all the trends I cover in this series can be found on funding.hackeducation.com. Icon credits: The Noun Project
Wed, 21 Dec 2016 07:01:00 +0000
Education Technology And The Ideology Of Personalization
This is part nine of my annual review of the year in ed-tech.
Facebook’s Plans to “Personalize” Education
Facebook, like many digital technology companies, promises that in exchange for collecting your personal data – your name, your age, your gender, your photos, metadata on your photos, your location, your preferences, your browsing and clicking habits, your friends’ names – it will deliver “personalization.” A personalized news feed, for example. Personalized ads.
“Personalization” is also the cornerstone of the investment strategy for Mark Zuckerberg’s new venture philanthropy firm, the Chan Zuckerberg Initiative. And “personalization” is the underlying promise of the new education software Facebook is itself building.
Facebook worked with the Summit Public Schools charter chain in order to develop this “personalized learning platform,” which it released last year and now licenses to other schools under the product name “Basecamp.” Some 20 Facebook engineers work on the software, and according to The Washington Post, the student information it tracks and stores is not housed on Facebook servers, although Facebook does have access to the data.
Parents must consent to the collection of their children’s data before they can use Basecamp, as required under COPPA. But in this case, they must also sign away their right to sue Facebook or Summit Public Schools in case of a problem (like, say, a data breach). Basecamp’s Terms of Service “require disputes to be resolved through arbitration, essentially barring a student’s family from suing if they think data has been misused. In other realms, including banking and health care, such binding arbitration clauses have been criticized as stripping consumers of their rights.” Data can be shared with any company that Facebook deems necessary. “A truly terrible deal,” says Cathy O’Neil, author of Weapons of Math Destruction.
Basecamp is essentially a learning management system (with the adjective “personalized” appended to it). According to The New York Times, “The software gives students a full view of their academic responsibilities for the year in each class and breaks them down into customizable lesson modules they can tackle at their own pace. A student working on a science assignment, for example, may choose to create a project using video, text or audio files. Students may also work asynchronously, tackling different sections of the year’s work at the same time.” I’ll discuss some of the competing definitions of what “personalization” might mean, but in this case, it’s the emphasis on working “at your own pace” on school assignments.
According to Summit’s own reports on those piloting the Basecamp software, “student growth has been positive amongst the cohort schools thus far. Specifically, students who were the furthest behind (in that lowest [Measure of Academic Progress] testing bracket) outperformed the national U.S. average by 1.23 in math and 1.95 in reading, shown below. Translation: if the average American student grew by 1 point in math, the average Basecamp student grew by 1.23 points.” It’s meager growth, but as the CEO of Summit Public Schools contends, it’s better than what traditional schools are doing. (Stanford historian Larry Cuban has also written a number of articles this year on his observations of the instructional practices and technology usage at the charter school chain.)
Charter schools have been a core part of Mark Zuckerberg’s investment in education reform since the Facebook founder famously donated $100 million to the Newark, New Jersey school system – well, not to the Newark school system, but rather to a local foundation in charge of handling the money. Despite the press coverage – the funding announcement was made on Oprah – things didn’t really go as planned, as journalist Dale Russakoff has recounted in her 2015 book The Prize.
Since his Newark fumbles, Zuckerberg has continued to fund charter school chains, but most of his investment has gone towards software companies that he hopes will help bring about structural changes in the education system, specifically through personalization. Perhaps the most high-profile of these: AltSchool.
More details on Mark Zuckerberg’s education investment portfolio can be found at funding.hackeducation.com.
Personalized Surveillance at AltSchool
AltSchool, a private school startup, was founded in 2013 by Max Ventilla, a former Google executive. AltSchool has raised $133 million in venture funding from Zuckerberg Education Ventures, the Emerson Collective (the venture philanthropy firm founded by Steve Jobs’ widow Laurene Powell Jobs), Founders Fund (Peter Thiel’s investment firm), Andreessen Horowitz, and others.
None of that funding came in 2016, and there were rumors of layoffs at the startup as it pivoted towards a focus on selling its “personalized learning” software to other schools – the seat license will cost $1000 per student – rather than opening more schools of its own. Or “Phase 2,” as TechCrunch politely called it.
In April, Edsurge reported that AltSchool had hired a new chief operating officer, Coddy Johnson, a former executive at Activision who’d been in charge of the Call of Duty video game line. Call of Duty is often criticized for its ultra-violence, but hey! Max Ventilla told Edsurge that “There aren’t a lot of people who have multiple times, managed a thousand-plus organization and done it in a way where anyone you talked to says they're absolutely incredible from a leadership perspective.” (Of course, AltSchool is nowhere near a thousand-plus person organization, even if you count the students as workers, which perhaps you should.) Johnson’s experience with education-focused companies includes his seat on the board of Twigtale, a “personalized” children’s book startup. That’s his wife Carrie Southworth’s company, and its investors include Ivanka Trump and Rupert Murdoch’s ex-wife Wendi Deng. Johnson himself is the godson of George W. Bush. (Johnson’s dad was roommates with the former President while at Yale.) It’s a small world, I guess, when one is disrupting education via “personalization.”
Everything at AltSchool is driven by data. As Education Week’s Benjamin Herold observed in January at a product team meeting for the startup school’s software, Stream, the following information was analyzed by developers:
“Parent usage, measured by total views per week, as well as the percentage of parents who viewed the app at least once each week;
Teacher adoption, measured by the frequency with which each teacher in each classroom posted updates to the app;
Personalization, measured by the number of student-specific posts and ‘highlights’ per student shared over the previous two weeks;
Quality, measured by a review of the content of every single post that every teacher had made to Stream;
Parent and teacher satisfaction, measured through constant AltSchool surveys of each group.”
The AltSchool classroom is one of total surveillance: cameras and microphones and sensors track students and teachers – their conversations, their body language, their facial expressions, their activities. The software – students are all issued computing devices – tracks their clicks. Everything is viewed as a transaction that can be monitored and analyzed and then re-engineered.
AltSchool has attempted to brand this personalized surveillance as progressive education – alternately Montessori or Reggio Emilia (because apparently neither Silicon Valley tech executives nor education technology marketers know the difference between their Italian learning theories).
Defining Personalization (and Insisting It “Works”)
There isn’t one agreed-upon definition of “personalization” – although there were lots and lots and lots and lots and lots and lots of articles published this year that tried to define it (and at least one that said “stop trying”).
That fuzziness – “moving goalposts” as math educator Dan Meyer has called it – does not stop the word “personalization” from being used all the time in policy documents and press releases: “personalized test prep” and “personalized CliffsNotes” and the like. These two examples highlight quite well the mental gymnastics necessary to believe that a “personalized” product is actually personalized. This isn’t about a student pursuing her own curiosity – the topics covered by both CliffsNotes and standardized tests are utterly constrained. Personalization is not about the personal; it does not involve students controlling the direction or depth of their inquiry.
It’s just the latest way to describe what B. F. Skinner called “programmed instruction” back in the 1950s.
There were several attempts this year to link the history of “personalized learning” to recent education reforms (but not surprisingly, not to Skinner). “The hottest trend in education actually started in special-ed classrooms 40 years ago,” as Business Insider contended in October. These sorts of articles, many parroting the quite paltry historical knowledge of ed-tech investors, tend to argue that personalization has its roots in the 1970s, in the work of educational psychologists like Benjamin Bloom, for example. Alas, no one reads Rousseau anymore, do they? Or more likely, Rousseau’s vision of education is harder to systematize and monetize and turn into “personalized” flashcards. “Can Venture Capital Put Personalized Learning Within Reach of All Students?” Edsurge asked in June. Poor Rousseau. Without NewSchools Venture Fund, he never had a chance.
Many of the discussions about “personalized learning” insist that technology is necessary for “personalization,” often invoking stereotypes of whole class instruction and denying the myriad of ways that teachers have long tailored what they do in the classroom to the individual students in it. Teachers look for interpersonal cues; they walk around the classroom and check on students’ progress; they adjust their lessons and their assignments in both subtle and conspicuous ways. In other words, “personalization” need not rely on technology or on data-mining; it does, however, demand that teachers attend to students’ needs and to students’ interests.
But “personalization” – at least as it’s promoted by education technology companies and their proponents – requires data collection, and it requires algorithms and analytics. The former, as a practice, is already in place in education. Indeed, in April, the Data Quality Campaign issued a report claiming that schools have collected plenty of data, and now it’s time to use it to “personalize learning.”
But again, what does that phrase “personalize learning” mean?
Education technology companies hope it means that schools buy their products. In a Data & Society report on personalized learning – probably the most helpful guide on the topic – Monica Bulger has identified five types of products that market themselves as “personalized”:
Customized learning interface: Invites students to personalize the learning experience by selecting colors and avatars, or uses interest, age, or geographic indicators to tailor the interface.
Learning management: Platforms that automate a range of classroom management tasks.
Data-driven learning: A majority of platforms described as ‘adaptive’ fall into this category of efficient management systems that provide materials appropriate to a student’s proficiency level.
Adaptive learning: Data-driven learning that potentially moves beyond a pre-determined decision tree and uses machine learning to adapt to a student’s behaviors and competency.
Intelligent tutor: Instead of providing answers and modular guidance, inspires questions, interacts conversationally, and has enough options to move beyond a limited decision tree.
Bulger’s report also underscores one of the most important caveats for “personalized learning” products: they aren’t very good.
While the responsiveness of personalized learning systems hold promise for timely feedback, scaffolding, and deliberate practice, the quality of many systems are low. Most product websites describe the input of teachers or learning scientists into development as minimal and after the fact. Products are not field tested before adoption in schools and offer limited to no research on the efficacy of personalized learning systems beyond testimonials and anecdotes. In 2010, Houghton Mifflin Harcourt commissioned independent randomized studies of its Algebra 1 program: Harcourt Fuse. The headline findings reported significant gains for a school in Riverside, California. The publicity did not mention that Riverside was one of four schools studied, the other three showed no impact, and in Riverside, teachers who frequently used technologies were selected for the study, rather than being randomly assigned. In short, very little is known about the quality of these systems or their generalizability.
Nevertheless, Knewton claims Knewton’s personalized learning products work. Pearson claims Pearson’s personalized learning products work. Blackboard claims Blackboard’s personalized learning products work. McGraw-Hill claims McGraw-Hill’s personalized learning products work. Front Row claims Front Row’s personalized learning products work. Organizations in the business of lobbying for and investing in “personalized” ed-tech claim personalized ed-tech works. And so on.
IBM Watson and the “Cognitive Era”
Perhaps the company with the biggest advertising budget for promoting its version of “personalized learning” is IBM, which has been running TV spots for about a year now touting the capabilities of Watson, its artificial intelligence product. Watson famously won Jeopardy! in 2011, a PR stunt that the company hoped would demonstrate how well it could handle Q&A. Since then, IBM has moved to commercialize Watson, particularly in healthcare and education.
This year, IBM announced Watson would be used to power an advising system at the University of Michigan. IBM released a Watson-powered iPad app. IBM partnered with the American Federation of Teachers. It partnered with Sesame Street. It partnered with Blackboard. It partnered with Pearson.
University of Stirling’s Ben Williamson has described the partnership between IBM and Pearson as “part of a serious aspiration to govern the entire infrastructure of education systems through real-time analytics and machine intelligences, rather than through the infrastructure of test-based accountability that currently dominates. … IBM and Pearson are seeking to sink a cognitive infrastructure of accountability into the background of education – an automated, data-driven, decision-making system which is intended to measure, compare, reorganize and optimize whole systems, institutions and individuals alike.”
For its part, IBM says that, with Watson, it will bring education into the “cognitive era” through personalization: “Cognitive solutions that understand, reason and learn help educators gain insights into learning styles, preferences, and aptitude of every student. The results are holistic learning paths, for every learner, through their lifelong learning journey.” Its product, Watson Element, “is designed to transform the classroom by providing critical insights about each student – demographics, strengths, challenges, optimal learning styles, and more – which the educator can use to create targeted instructional plans, in real-time.”
Roger Schank, a pioneer of “cognitive computing,” doesn’t buy it. “Could IBM stop lying about Watson already? I guess not,” he wrote in April. “Is IBM trying to kill off AI research by misusing the word ‘cognitive?’” he wrote in May. The word “cognitive,” he argues, no longer has any meaning.
I am trying to understand what IBM could possibly mean when it uses the word cognitive and announces that we are now in the “cognitive era”. Do they think that Watson is actually thinking? I certainly hope not.
Do they think that Watson is imitating how people think in some way? I can’t believe that they think that either. No one has ever proposed that machines that can search millions of pages of text are smart. Matching key words, no matter how well you do it, is not even a human capability much less one that underlies the human ability to think.
The use of Watson at Georgia Tech to create a “robot teaching assistant” garnered lots of headlines about the possibilities for automation and artificial intelligence to “save education.” But it also confirms some of Schank’s arguments about how truly overrated Watson is as any sort of pedagogical agent. Jill Watson, as the program was called (of course it’s a woman’s name), answered students’ questions on a course website – or rather, answered those questions when it had a confidence rate of 97% it could respond correctly. “Most chatbots operate at the level of a novice,” Ashok Goel, the CS professor who built the program told The Wall Street Journal. “Jill operates at the level of an expert.” What Jill demonstrates isn’t really “smarts” or “intelligence,” and it isn’t “pedagogical”; it’s just a more efficient (and expensive) Q&A system.
Nevertheless, how IBM imagines intelligence – how it imagines the human brain works and how the brain learns – will shape the cognitive systems it builds. And the marketing – all those TV ads – will shape our understanding of “intelligence” in turn.
The goal, says IBM: to “achieve the utopia of personalised learning.”
Marketing the Mindsets
Intertwined with the push for “personalization” in education are arguments for embracing a “growth mindset.” The phrase, coined by Stanford psychologist Carol Dweck, appears frequently alongside talk of “personalized learning” as students are encouraged to see their skills and competencies as flexible rather than fixed. (Adaptive teaching software. Adaptive students.)
The marketing of mindsets was everywhere this year: “How to Develop Mindsets for Compassion and Caring in Students.” “Building A Tinkering Mindset In Young Students Through Making.” “6 Must-Haves for Developing a Maker Mindset.” The college president mindset. Help wanted: must have an entrepreneurial mindset. The project-based learning mindset. (There’s also Gorilla Mindset, a book written by alt-right meme-maker Mike Cernovich, just to show how terrible the concept can get.)
“Mindset” joins “grit” as a concept that’s quickly jumped from the psychology department to (TED Talk to) product. Indeed, Angela Duckworth, who popularized the latter (and had a new book out this year on grit), now offers an app to measure “character growth.” “Don’t Grade Schools on Grit,” she wrote in an op-ed in The New York Times. But there are now calls that students should be tested – and in turn, of course, schools graded – on “social emotional skills.”
Promising to measure and develop these skills are, of course, ed-tech companies. Pearson even has a product called GRIT™. But it’s probably ClassDojo, a behavior tracking app, that’s been most effective in marketing itself as a “mindset” product, even partnering with Carol Dweck’s research center at Stanford.
The startup, which has raised $31.1 million in venture funding ($21 million of that this year), is “teaching kids empathy in 90% of K–8 schools nationwide,” according to Fast Company. Edsurge says ClassDojo is used by two-thirds of schools, and Inc says it’s used by one out of four students, but hey. What’s wrong with a little exaggeration, right? It’s only “character education.”
More details on who’s funding “character education” startups are available at funding.hackeducation.com.
Ben Williamson argues that ClassDojo exemplifies the particularly Silicon Valley bent of “mindset” management:
The emphasis … is on fixing people, rather than fixing social structures. It prioritizes the design of interventions that seek to modify behaviours to make people perform as optimally as possible according to new behavioural and psychological norms. Within this mix, new technologies of psychological measurement and behaviour management such as ClassDojo have a significant role to play in schools that are under pressure to demonstrate their performance according to such norms.
In doing so, ClassDojo – and other initiatives and products – are enmeshed both in the technocratic project of making people innovative and entrepreneurial, and in the controversial governmental agenda of psychological measurement. ClassDojo is situated in this context as a vehicle for promoting the kind of growth mindsets and character qualities that are seen as desirable behavioural norms by Silicon Valley and government alike.
ClassDojo’s popularity is down to its meeting of teachers’ concerns about behaviour management. But, it has fast become part of a loose network of governmental, academic and entrepreneurial agendas focused on behavioural measurement and modification.
ClassDojo is, Williamson argues, “prototypical of how education is being reshaped in a ‘platform society.’”
Personalization in a Platform Society
Media scholars José van Dijck and Thomas Poell have argued that “Over the past decade, social media platforms have penetrated deeply into the mechanics of everyday life, affecting people’s informal interactions, as well as institutional structures and professional routines. Far from being neutral platforms for everyone, social media have changed the conditions and rules of social interaction.” In this new social order – “the platform society” – “social, economic and interpersonal traffic is largely channeled by an (overwhelmingly corporate) global online infrastructure that is driven by algorithms and fueled by data.”
We readily recognize Facebook and Twitter as these sorts of platforms; but I’d argue that they’re more pervasive and more insidious, particularly in education. There, platforms include the learning management systems and student information systems, which fundamentally define how teachers and students and administrators interact. They define how we conceive of “learning”. They define what “counts” and what’s important.
They do so, in part, through this promise of “personalization.” Platforms insist that, through data mining and analytics, they offer an improvement over existing practices, existing institutions, existing social and political mechanisms. This has profound implications for public education in a democratic society. More accurately perhaps, the “platform society” offers merely an entrenchment of surveillance capitalism, and education technologies, along with the ideology of “personalization”, work to normalize and rationalize that.
Financial data on the major corporations and investors involved in this and all the trends I cover in this series can be found on funding.hackeducation.com. Icon credits: The Noun Project
Mon, 19 Dec 2016 07:01:00 +0000