Hack Education

Information about this feed is obtained from the feed XML file. Feed owners may supply additional information by updating their XML file or by sending email to stephen@downes.ca


Description: The History of the Future of Education Technology
Generator: Jekyll v3.4.3

Recent Posts

Higher Education In The Disinformation Age
Hack Education

This was what I said this evening at a panel at the University of Mary Washington as part of its Presidential Inauguration Week. The panel was titled "Higher Education in the Disinformation Age: Can America's public liberal arts universities restore critical thinking and civility in public discourse?" The other panelists included Steve Farnsworth (University of Mary Washington), Sara Cobb (George Mason University), and Julian Hayter (University of Richmond). I only had ten minutes, so my remarks really only scratch the surface.

In February 2014, I happened to catch a couple of venture capitalists complaining about journalism on Twitter. (Honestly, you could probably pick any month or year and find the same.) “When you know about a situation, you often realize journalists don’t know that much,” one tweeted. “When you don’t know anything, you assume they’re right.” Another VC responded, “there’s a name for this and I think Murray Gell-Mann came up with it but I’m sick today and too lazy to search for it.” A journalist helpfully weighed in: “Michael Crichton called it the ‘Murray Gell-Mann Amnesia Effect,’” providing a link to a blog with an excerpt in which Crichton explains the concept. Apologies for quoting Crichton at length:

Media carries with it a credibility that is totally undeserved. You have all experienced this, in what I call the Murray Gell-Mann Amnesia effect. (I call it by this name because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have.)

Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward – reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them. In any case, you read with exasperation or amusement the multiple errors in a story – and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know. That is the Gell-Mann Amnesia effect.

I’d point out it does not operate in other arenas of life. In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all. But when it comes to the media, we believe against evidence that it is probably worth our time to read other parts of the paper. When, in fact, it almost certainly isn’t. The only possible explanation for our behavior is amnesia.

I remember, at the time, appreciating parts of this observation. Or at least, I too have often felt frustrated with the reporting I read on education and technology – topics I like to think I know something about.
But I hope we can see how these assertions that we shouldn’t read and shouldn’t trust newspapers are dangerous – or at the very least, how these assertions might have contributed to our current misinformation “crisis.” And I’d add too – and perhaps this can be part of our discussion – that how we’ve typically thought about or taught “information literacy” or “media literacy” has seemingly done little to help us out of this mess.

This isn’t just about Michael Crichton’s dismissal of journalism (and I’ll get to why he’s such a problematic figure here in a minute). It’s the President. “Forget the press,” he said during the campaign. “Read the Internet.” It’s the digital technology industry – including those venture capitalists in my opening anecdote – which has invested in narratives and literally invested in products designed to “disrupt” if not destroy “traditional media.” Facebook. Twitter. Automattic (the developer of the blogging software WordPress). Despite the promises that these sorts of tools would “democratize” information, that the “blogosphere” and later social media would provide an important corrective to the failures of “mainstream journalism,” we find ourselves instead in a world in which institutions and experts are no longer trusted. And yet, all sorts of dis- and misinformation – on the Internet and (to be fair) on TV – is believed. And it’s believed in part because it’s not in print and not from experts or academics or certain journalists.

I wanted to share this Michael Crichton story for a number of reasons. As I was preparing my remarks, I faced a couple of challenges. First, I couldn’t remember where or when I’d seen these tweets, although I was certain I’d first heard about the Gell-Mann Amnesia Effect from venture capitalists on Twitter. Searching for old tweets – verifying Twitter itself as a source – is not easy. Twitter’s search function invites us to “See what’s happening right now.” The architecture of the platform is not designed to serve as a historical record or source. I guess these tweets were the conversation I saw – I spent a lot of time looking through old VC tweets from 2013 and 2014 – although my memory tells me it was Tim O’Reilly, a different venture capitalist, who’d mentioned the Gell-Mann Amnesia Effect and had caught my eye.

When and if you do find an old tweet you’re looking for – as a scholar, perhaps, or as a journalist – it is stripped from its context within the Twitter timeline, within the user’s stream of tweets. What was happening on February 28, 2014, that prompted venture capitalist Dave Pell to complain about journalism? I couldn’t really divine.

In this exchange, we have a series of other Internet-based information claims. Journalist Mathew Ingram links to a blog post to explain the Gell-Mann Amnesia Effect, but if you click, you’ll find all of the links in that particular post are dead, including the one that goes to “The Official Site of Michael Crichton.” If you google “Gell-Mann Amnesia Effect,” the top search result is Goodreads, a book review site owned by Amazon. The excerpt there doesn’t give a date or a source or a link to Crichton’s commentary. The Internet doesn’t magically surface “the truth.” Its infrastructure can quite readily obscure things. You have to understand how to look for information online, and you have to have some domain expertise (or know someone with domain expertise) so you can actually verify things.
The “Gell-Mann Amnesia Effect” comes from a talk titled “Why Speculate?” that Crichton gave in 2002 at the International Leadership Forum, a think tank run by the now-dormant Western Behavioral Sciences Institute. You can google this stuff, of course. Or maybe you know it. Maybe this is all, to borrow from Crichton, “some subject you know well.” Maybe you’re familiar with Crichton too, or more likely you’ve heard his name – a best-selling author; medically trained, but never formally licensed to practice medicine; creator of the TV show ER; writer and director of the movie Westworld (the one with Yul Brynner); and author of many novels including Jurassic Park, The Andromeda Strain, Disclosure, and State of Fear.

After the publication of Disclosure, Crichton was accused of being anti-feminist; after the publication of State of Fear, he sealed his status as one of the leading skeptics of global climate change. And this is all part of the message of that talk in which he argues for the existence of the Gell-Mann Amnesia Effect. Journalism, Crichton contends, is almost entirely speculation. Sunday talk shows, speculation. Global climate change, speculation. “False fears.” Crichton blames the end of fact-checking on the praise for Susan Faludi’s feminist book Backlash. He blames academia, particularly post-modernism: “most areas of intellectual life have discovered the virtues of speculation. In academia, speculation is usually dignified as theory.” This was 2002 – Crichton doesn’t blame the Internet. He doesn’t blame the Web. He doesn’t blame Facebook. He blames MSNBC. He blames The New York Times. 2002 – a year before Judith Miller’s now-discredited reporting on the weapons of mass destruction in Iraq appeared in that very newspaper.

In the past 15 years, I wonder if the “amnesia effect” has worn off in some troubling rather than liberatory ways. Increasingly we trust very little that the media says. Last year, Gallup found Americans’ trust in the media had dropped to the lowest level in polling history. The media, as Crichton and others contend, is all speculation. “Fake news.” But it’s not just the media. We face a crisis in all our information institutions – journalism and higher education, in particular. Expertise is now utterly suspect. We mistrust (print) journalists – “the mainstream media,” whatever that means; we mistrust academics; we mistrust scientists.

We still trust some stories sometimes. Importantly, we trust what confirms our pre-existing beliefs. Perhaps we can call this the Michael Crichton Ego Effect. We have designated ourselves as experts-of-sorts whenever we confront the news. We know better than journalists, because of course we do. (This effect applies most readily to men.) The Internet has made it particularly easy for us to confirm our beliefs and our so-called expertise. Digital technologists (and venture capitalists) promised this would be a good thing for knowledge-building; it appears, instead, to be incredibly destructive. And that’s the challenge for journalism, sure. It’s the challenge for universities. It’s the challenge for democracy.

Thu, 20 Apr 2017 21:01:00 +0000

The Omidyar Network And The (Neoliberal) Future Of Education
Hack Education

This article is part of my research into "who funds education technology," which I plan to expand with my Spencer Education Fellowship.

The Omidyar Network announced earlier this week that it has invested in Data & Society, a New York City-based research institute co-founded by danah boyd. The two-year, $850,000 grant will fund Data & Society’s work on “the social and cultural issues arising from the development of data-centric technology.” The grant is just one of a slew of recent investments by the Omidyar Network in companies and organizations that work in and around education technology, including Khan Academy, Hypothes.is, and Edsurge. And much like Edsurge (as well as another portfolio company, Glenn Greenwald’s The Intercept), the Omidyar Network’s investment in Data & Society certainly raises questions about that organization’s ability to be “independent” in its research and analysis.

The Omidyar Network, a “venture philanthropy” firm created by eBay founder Pierre Omidyar and his wife Pam, has invested over $1 billion in various projects – run both by for-profit companies and by not-for-profit organizations in finance, public policy, property rights, journalism, and education. According to its promotional materials, the Omidyar Network is “dedicated to harnessing the power of markets to create opportunity for people to improve their lives. We invest in and help scale innovative organizations to catalyze economic and social change.” The “power of markets,” according to this investment approach, is a force for “social good.” However, the history and the impact of the Omidyar Network’s investments, particularly in the Global South, tell a very different story. It’s a story of neoliberalism; it’s a story of privatized investment at the expense of public infrastructure.
And when it comes to education – in the Global North and South – that story is of profound political importance.

The Omidyar Network’s Education Portfolio

Where the dollars have gone:

African Leadership Academy (leadership training) – $1.5 million
African Leadership University (accredited university) – investment amount unknown
Akshara Foundation (private school chain in India) – $950,000
AltSchool (private school chain in the US) – $133 million
Andela (coding bootcamp in Africa) – $27 million
Anudip Foundation (coding bootcamp in India) – $850,000
Artemisia (entrepreneurial training and startup accelerator program in Brazil) – investment amount unknown
Aspiring Minds (career placement in India) – investment amount unknown
Bridge International Academies (private school chain in Africa) – investment amount unknown
Code.org (computer science career marketing) – $3.5 million
Common Sense Media (media education) – $4.25 million
Creative Commons (open licenses) – investment amount unknown
DonorsChoose.org (crowdfunding school projects) – investment amount unknown
Edsurge (ed-tech marketing) – $2.8 million
Ellevation (English-language learning software in the US) – $6.4 million
EnglishHelper (English-language learning services in India) – investment amount unknown
FunDza (literacy program in South Africa) – $300,000
Geekie (adaptive learning platform in Brazil) – investment amount unknown
Guten News (literacy program in Brazil) – investment amount unknown
Hypothes.is (annotation software) – $1.9 million
Ikamva Youth (after-school tutoring program in South Africa) – $1.33 million
IMCO (think tank in Mexico) – $202,500
Innovation Edge (early childhood education in South Africa) – investment amount unknown
Kalibrr (predictive analytics for hiring in the Philippines) – investment amount unknown
Khan Academy (video-based instruction) – $3 million
LearnZillion (instructional content and professional development company in the US) – investment amount unknown
Linden Lab (best known as the maker of Second Life) – $19 million
Lively Minds (preschools in Ghana and Uganda) – $360,000
Numeric (tutoring program in South Africa) – investment amount unknown
Open Knowledge (data and knowledge-sharing organization) – $2.64 million
Platzi (online coding classes) – $2.1 million
Reach Capital (venture capital firm) – investment amount unknown
RLabs (entrepreneurship training in South Africa) – $465,000
Siyavula (adaptive textbooks in South Africa) – investment amount unknown
Skillshare (course marketplace) – $12 million
Socratic (homework help) – $6 million
SPARK Schools (a private school chain in Africa) – $9 million
Teach for All (Teach for America, globalized) – investment amount unknown
Teach for India (Teach for America but for India) – $2.5 million
The Education Alliance (organization supporting public-private partnerships in education in India) – investment amount unknown
Tinkergarten (marketplace for early childhood education) – $1.2 million
Varthana (private student loans in India) – investment amount unknown
Wikimedia Foundation (operator of Wikipedia) – investment amount unknown

(Funding data drawn from Crunchbase and from the Omidyar Network’s website)

Investment (as) Ideology

In some ways, the Omidyar Network’s education investments look just like the rest of venture capitalists’: money for tutoring companies, learn-to-code companies, and private student loan companies.
While many insist that the latter should not “count” as ed-tech, to ignore the companies offering private financing for education is to misconstrue the shape and direction that investors and philanthropists like Pierre Omidyar want education to take. It also obscures the shape and direction that these investors are pushing finance to take, particularly for the very poor and the “unbanked.”

Indeed, microfinance initiatives in the developing world have been the cornerstone of the Omidyar Network’s investment strategy for over a decade now. This work has been incredibly controversial, and despite the hype about the promise of micro-loans – “financial inclusion,” as the Omidyar Network calls it – the results from these programs have been mixed at best. That is, they have not pulled people out of extreme poverty but rather have saddled many with extreme debt. “Take SKS Microfinance,” write Mark Ames and Yasha Levine in a 2013 profile, “an Omidyar-backed Indian micro-lender whose predatory lending practices and aggressive collection tactics have caused a rash of suicides across India.” (The winners in microfinance investing: the investors.)

In a 2012 article in the World Economic Review, Milford Bateman and Ha-Joon Chang argue that “microfinance in international development policy circles cannot be divorced from its supreme serviceability to the neoliberal/globalisation agenda.” Nor can the Omidyar Network’s investment policy – in microfinance and beyond – be separated from its explicitly neoliberal agenda. That holds particularly true for its education investments.

The Omidyar Network has backed DonorsChoose.org, for example, which encourages teachers to crowdfund projects and supplies. “The end result,” write Ames and Levine, “is that it normalizes the continued strangling of public schools and the sense that only private funding can save education.” The Omidyar Network has backed AltSchool, a private school startup that blends algorithmic command-and-control with rhetoric about progressive education. “Montessori 2.0” and such. I recently spoke about AltSchool and its “full stack” approach to education – a technology platform that manages and monitors all digital activities and physical practices in the classroom. AltSchool is one of the most commonly cited examples of how Silicon Valley plans to “disrupt” and reshape education. I find this “platforming” of education to be profoundly chilling (and profoundly anti-democratic), particularly with its penchant for total surveillance; but it’s probably Bridge International Academies that serves as the most troubling example of the Omidyar Network’s vision for the future of education.

Bridge International Academies – which is also funded by the Gates Foundation and the Chan Zuckerberg Initiative – is a private school chain operating in several African countries that hires untrained adults as teachers. These teachers read scripted lessons from a tablet that in turn tracks students’ assessments and attendance – as well as teachers’ own attendance and pay. Families must pay tuition – this isn’t free public education – and the cost is wildly prohibitive for most. Moreover, outsourcing teaching to scripted lesson delivery does not build the capacity – in terms of infrastructure or human resources – that many African nations need. As such, the expansion of Bridge International Academies has been controversial, and the Ugandan government ordered all the Bridge schools there to close their doors in August of last year.
But earlier that year, Liberia announced its plans to outsource its entire education system to Bridge International. So, while in the US we see neoliberalism pushing to dismantle public institutions and public funding for public institutions, in the Global South, these very forces are there touting the “power of markets” to make sure public institutions can never emerge or thrive in the first place. Investors like the Omidyar Network are poised to extract value from the very people they promise their technologies and businesses are there to help. Conveniently, the Omidyar Network’s investment portfolio also includes journalistic and research organizations poised to promote and endorse the narratives that aggrandize these very technocratic, market-based solutions.

Disclosure: I have done some paid research for Data & Society on school accountability, and I have published a couple of articles on its website.

Fri, 14 Apr 2017 18:01:00 +0000

'Education Technology's Completely Over'
Hack Education

This was the first half of a joint presentation at Coventry University as part of my visiting fellowship at the Disruptive Media Learning Lab. The better half was delivered by Jim Groom. Our topic, broadly speaking: "a domain of one's own."

“The Internet’s completely over,” Prince told The Daily Mirror in 2010. People laughed at him. Or many of the digital technorati did. They scoffed at his claims, insisting instead that the Internet was inevitable. The Internet was the future of everything.

When it came to music, the technorati contended, no longer would any of us own record albums. (We wouldn’t own books or movies or cars or houses either. Maybe we wouldn’t even own our university degrees.) We’d just rent. We’d pay for subscription services. We’d stream singles instead. We’d share – well, not really “share,” but few would complain when a post-ownership society got labeled as such. Few would care, of course, except those of us struggling to make money in this “new economy.”

Prince was wrong about the Internet, the technorati insisted. Turns out, Prince was right. The “new economy” sucks. It’s utterly exploitative. But many technorati would never admit that Prince was right – perhaps until Prince’s death this time last year, when everyone hailed him as one of the greatest artists of our day.

The Electronic Frontier Foundation, for example – an organization that, as its name suggests, sees itself as a defender of “Internet freedom,” particularly with regards to copyright and free speech online – had inducted Prince into the Takedown Hall of Shame in 2013, establishing and then awarding him the “Raspberry Beret Lifetime Aggrievement Award for extraordinary abuses of the takedown process in the name of silencing speech.” Prince was, no doubt, notorious for demanding that bootleg versions of his songs and his performances be removed from the Web. He threatened websites like YouTube with lawsuits; he demanded fans pull photos and lyrics and cellphone videos offline. It was, until recently, almost impossible to find Prince’s music on streaming services like Spotify or video services like YouTube.

And thus Prince was viewed by some as a Luddite. But many of those folks utterly misunderstood Prince’s relationship to technologies – much like many, I’d argue, misconstrue what the Luddites in the early nineteenth century were actually so angry about when they took to smashing looms. It was never about the loom per se. It’s always about who owns the machines; it’s about who benefits from one’s labor, from one’s craft.

From the outset of his career, Prince was incredibly interested in computers and in technological experimentation – in how computers might affect art and relationships and creativity and love. He released an interactive CD-ROM in 1994, for example, a game that played a lot like another popular video game at the time, Myst. That video game was one of the few ways you could get ahold of the original font file for the symbol that Prince had adopted the previous year when he officially changed his name. (His label was forced to mail floppy disks with the font to journalists so they could accurately write about the name change.) You could see Prince’s interest in computer technologies too in songs like “Computer Blue” from the Purple Rain soundtrack (1984) and “My Computer” from the album (his nineteenth) Emancipation (1996).
The lyrics in the latter – which some argue presage social media (okay, sure) – perhaps more aptly simply reflect someone who was active in (or at least aware of) the discussion forums and chatrooms of the 1990s:

I scan my computer looking 4 a site
Somebody 2 talk 2, funny and bright
I scan my computer looking 4 a site
Make believe it’s a better world, a better life

The following year, Prince released Crystal Ball, and in what was a novel move at the time, put all the album’s liner notes online, via a fairly new technology called a “Web site.” A few years later, Prince launched a subscription service that promised to give fans exclusive access to new music, again via a site he controlled.

See, Prince didn’t hate the Internet per se, although he certainly had a complicated relationship with what has become an increasingly commodified and exploitative Internet and Web (one actively commodifying and exploiting not just musicians and recording artists). Rather, the problem that Prince identified with the Internet was that it enables – is built on, really – the idea of multiple digital copies, permission-less digital copying. And Prince has always, always fought to retain control of the copies of his work, to retain control of his copyright.

“I don’t see why I should give my new music to iTunes or anyone else,” Prince told The Daily Mirror in that 2010 interview. “They won’t pay me an advance for it and then they get angry when they can’t get it. The internet’s like MTV. At one time MTV was hip and suddenly it became outdated. Anyway, all these computers and digital gadgets are no good. They just fill your head with numbers and that can’t be good for you.”

He later clarified what he meant to The Guardian: “What I meant was that the internet was over for anyone who wants to get paid, and I was right about that. Tell me a musician who’s got rich off digital sales. Apple’s doing pretty good though, right?”

If you’re wondering why I’m talking about Prince today and not education technology, you’re not paying close enough attention to the ways in which the ed-tech industry gets rich off of the creative work (and the mundane work) of students and scholars alike. Indeed, I wanted to invoke Prince today and talk a little bit about how his stance on the Internet – and much more importantly, his stance on the control and the ownership of his creative work – might help us think about the flaws in education technology and how it views ownership and control of data, how it extracts value from us in order to profit from our labor, our intellectual property. And I hope that by retelling the story of Prince and the Internet, by telling a counter-narrative to one that’s simply “Prince hated it,” we can think about what’s wrong with how ed-tech – as an industry and as an institutional practice – treats those doing creative and scholarly work. Not because we hate or resist the Internet, but because we want to build and support technologies that are not exploitative or extractive. Me, I will gladly echo Prince – I do so with the utmost respect and with a great deal of shock and sadness still to this day that he’s gone – “education technology’s completely over.”

“If you don’t own your masters, your master owns you,” Prince told Rolling Stone in 1996, on the cusp of the release of his album Emancipation. (A master recording is the first, the original recording of a song, from which all subsequent copies are made.) Prince had famously battled with Warner Bros over his contract and his catalog.
He’d recorded with the label from 1978 to 1996 – and that included his biggest hit record, Purple Rain. Fighting with Warner Bros had prompted Prince to change his name to the symbol. Born Prince Rogers Nelson, Prince discovered that he didn’t even own his own name, let alone his music. He hoped that by changing his name, he’d be able to get out of his contract – or at least protest its terms. He appeared with the word “slave” written on his cheek at the 1995 BRIT Awards. His acceptance speech at the event: “Prince. In concert: perfectly free. On record: slave.”

In 2014, Prince signed a deal to get his masters back. He controlled his music. The original copies of his music. He could decide what to release and what not to release and when and how to release it.

Prince fought for a long time with record labels, and arguably that makes his response to the new digital “masters” – Apple, Google, Spotify, and such – more understandable. But his assertions about masters and slaves are perhaps more than a little overstated, overwrought. And as such, I want to be a little cautious about making too much of a connection between the ownership of ideas and the ownership of bodies and how control and exploitation function in academia.

In the US (and I’m not sure how this works in the UK), if you request a copy of your educational records from your university, they send you a transcript. That is, they send you a copy. You can request a copy of your articles from academic publications. Rarely – although hopefully increasingly – do authors retain their rights. Students often find themselves uploading their content – their creative work – into the learning management system (the VLE). Perhaps they retain a copy of the file on their computer; but with learning analytics and plagiarism detection software, they still often find themselves having their data scanned and monetized, often without their knowledge or consent.

So I want us to think about the ways in which students and scholars, like Prince, find themselves without control over their creative work, find themselves signing away their rights to their data, their identity, their future. We sign these rights away all the time. We compel students to do so. We tell them that this is simply how the industry, the institution works. You want a degree, you want a record label: you must use the institutional technology. You must give up your masters. You needn’t. None of us need to. (Of course, none of us are Prince. Perhaps it seems a little overwhelming to fight the corporate masters like he did. But I believe that “domains” is one small step towards that.)

Fri, 07 Apr 2017 19:01:00 +0000

'Technology-Enhanced Retention' And Other Ed-Tech Interventions
Hack Education

These remarks were given yesterday at Coventry University as part of my visiting fellowship at the Disruptive Media Learning Lab. I took part in a panel on "Technology-Enhanced Student Attainment and Retention" with Daniel Burgos from the International University of La Rioja and Lynn Clouder from Coventry University.

As I prepped for my remarks here, I did stop to think a bit about whether I’m the right respondent – am I a social scientist? My formal academic training was really much more in the humanities – I dropped out of a PhD program in Comparative Literature. I do have a graduate degree in Folklore Studies, which is kin to anthropology and is a field that is a bit of both, I suppose: social sciences and the humanities. I do consider myself, in some ways, an ethnographer. What I am not – not really or not particularly well – is a quantitative researcher. Or at least, I’ve never taken a class in educational research methods, and it’s been about 20 years since I took a class in statistics. I have only the vaguest recollection of what p values are and why they’re significant. (I think that’s a bit of wordplay. But I am not certain.)

What I do have, with full confidence, is a solid Rolodex. I have friends who do education research and run regression tables for a living. And when press releases about studies on various education technologies cross my desk, I often ask for their help in deciphering the findings. That’s what journalists should do instead of relying on the PR or on the abstracts from journal articles – which, in fairness, if you don’t have access to a research library, are sometimes all you can read. That’s what academics and administrators should do instead of relying on the PR or on the salespeople who offer you freebies at conferences.

But let me pause for a minute and restate that: when press releases about studies on various education software cross my desk… Press releases – my inbox is full of them, sometimes from universities, more often from the software makers themselves. Salespeople – the industry is full of them. There’s a lot of marketing about educational software. There’s a lot of hype about educational software. But that’s not necessarily because there’s a lot of solid research that demonstrates “effectiveness” or (and this is key) a lot of “good” ed-tech.

And I’ll say something that people might find upsetting or offensive: I’m not sure that “solid research” would necessarily impress me. I don’t actually care about “assessments” or “effectiveness.” That is, they’re not interesting to me as a scholar. My concerns about “what works” in ed-tech have little to do with whether or not there’s something we can measure or something we can bottle as an “outcome”; indeed, I fear that what we can measure often shapes our discussions of “effect.”

What interests me nonetheless are the claims that are made about ed-tech – what we are told it can do. I listen for these stories and the recurring themes in them because I think they reveal a number of really important things: what we value and who we value in education; how we imagine learning happens; what we think is wrong with the current model of teaching and/or system of education; what we think will fix all this; and so on. (I use that pronoun “we” in its broadest sense – like “we all humans.”) I do want us to recognize there are many, many competing values and many, many competing visions for education.
And that means there are lots of opinions – many, many that are grounded in “research” and many, many that are peddled by researchers themselves – about what we should do to make teaching and learning “better.”

Why does attainment matter, for example? To whom does attainment matter? What do we mean by attainment? Is it something we can measure? Is it something that education can actually intervene upon? If so, how? If so, in what ways? Why does retention matter? To whom does retention matter? Why? Why do we use words like “intervention” to describe our efforts to address “retention” or “attainment”? Do we use the word “intervention” because it’s a medical term? A scientific term? Are we diagnosing something about students?

As such, I’m very interested in the phrase “technology-enhanced” in the title of this panel. First of all, I think it does underscore that what we do without technology – to attain, to retain – doesn’t work. (We can ask, “doesn’t work for whom?”) Let’s consider why not. Is it an institutional issue? A systemic, societal one? Is there something “wrong” with students? Do we see this as a human issue? Or, as it’s “technology-enhanced,” is it an engineering problem?

My concern, I think – and I repeat this a lot – is that we have substituted surveillance for care. Our institutions do not care for students. They do not care for faculty. They have not rewarded those in them for their compassion, for their relationships, for their humanity. Adding a technology layer on top of a dispassionate and exploitative institution does not solve anyone’s problems. Indeed, it creates new ones. What do we lose, for example, if we more heavily surveil students? What do we lose when we more heavily surveil faculty? The goal with technology-enhanced efforts, I fear, is compliance – not compassion and not curiosity. So sure, some “quantitative metrics” might tick upward. But at what cost? And at what cost to whom?

Wed, 05 Apr 2017 07:01:00 +0000

Why 'A Domain Of One's Own' Matters (For The Future Of Knowledge)
Hack Education

These remarks were given at Coventry University as part of my visiting fellowship at the Disruptive Media Learning Lab.

I am best known, no doubt, for my criticism of education technology. And perhaps for that reason, people perk up when I point to things that I think are interesting or innovative (and to be clear, interesting or innovative because of their progressive, not regressive, potential). Often when I say that I think the “Domain of One’s Own” initiative is one of the most important education technologies, I hear pushback from the Twitter riffraff. “What’s so special about a website?” folks will sneer.

Well, quite a lot, I’d contend. The Web itself is pretty special – Sir Tim Berners-Lee’s vision of a global hyperlinked information system. A system that was – ideally at least – openly available and accessible to everyone, designed for the purpose of sharing information and collaborating on knowledge-building endeavors. That purpose was not, at the outset, commercial. The technologies were not, at the outset, proprietary.

The World Wide Web just had its 28th anniversary, and Tim Berners-Lee penned an article – an “open letter” – in which he identified three major trends that he’s become increasingly worried about:

1. We’ve lost control of our personal data
2. It’s too easy for misinformation to spread on the Web
3. Political advertising online needs transparency and understanding

These are trends that should concern us as citizens, no doubt. But they’re expressly trends that should concern us as educators. I think we could slightly reword these trends too to identify problems with education technology as it’s often built and implemented:

1. Students have lost control of their personal data
2. By working in digital silos specially designed for the classroom (versus those tools that they will encounter in their personal and professional lives), students are not asked to consider how digital technologies work and/or how these technologies impact their lives
3. Education technologies, particularly those that enable “algorithmic decision-making,” need transparency and understanding

(You can substitute the word “scholar” for “student” in all cases above, too, I think.)

By providing students and staff with a domain, I think we can start to address this. Students and staff can start to see how digital technologies work – those that underpin the Web and elsewhere. They can think about how these technologies shape the formation of their understanding of the world – how knowledge is formed and shared; how identity is formed and expressed. They can engage with that original purpose of the Web – sharing information and collaborating on knowledge-building endeavors – by doing meaningful work online, in the public, with other scholars. What matters is that they have a space of their own online, along with the support and the tools to think about what that can look like.

It doesn’t have to be a blog. It doesn’t have to be a series of essays presented in reverse chronological order. You don’t have to have comments. You don’t have to have analytics. You can delete things after a while. You can always make edits to what you’ve written. You can use a subdomain. (I do create a new subdomain for each project I’m working on. And while it’s discoverable – ostensibly – this work is not always linked or showcased from the “home page” of my website.) You can license things how you like. You can make some things password-protected.
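And because it’s your own space, your domain can speak the Web’s common formats. Here is a minimal sketch – hypothetical, and not part of any official “Domain of One’s Own” toolchain; the site URL and post data are placeholders – of generating an RSS feed for a personal site with only Python’s standard library, so that whatever you publish on your own domain can be read (and syndicated) anywhere that understands a feed:

```python
# A minimal, hypothetical sketch: building an RSS feed for a personal
# site with only the Python standard library. SITE and the post list
# below are stand-ins, not a real configuration.
from datetime import datetime, timezone
from email.utils import format_datetime  # RFC 2822 dates, as RSS expects
from xml.sax.saxutils import escape

SITE = "https://example.org"  # stand-in for your own domain

posts = [  # a real site would read these from files or front matter
    {"title": "Why 'A Domain of One's Own' Matters",
     "url": SITE + "/domain-of-ones-own/",
     "date": datetime(2017, 4, 4, 13, 1, tzinfo=timezone.utc)},
]

items = "".join(
    "<item>"
    f"<title>{escape(p['title'])}</title>"
    f"<link>{escape(p['url'])}</link>"
    f"<pubDate>{format_datetime(p['date'])}</pubDate>"
    "</item>"
    for p in posts
)

feed = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<rss version="2.0"><channel>'
    f"<title>My Domain</title><link>{SITE}/</link>"
    "<description>Posts published on my own site first</description>"
    f"{items}</channel></rss>"
)

print(feed)  # save as, say, rss.xml at the root of your domain
```

A static site generator like Jekyll – the generator behind this very feed, per the header above – does the equivalent with templates; either way, the feed lives on your domain, on your terms.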
You can still post things elsewhere on the Internet – long rants on Facebook, photos on Instagram, mixes on Soundcloud, and so on. But you can publish stuff on your own site first, and then syndicate it to these other for-profit, ad-based venues.

I recognize that learning these technologies takes time and effort. So does learning how to navigate the VLE. Website design skills, I promise you – HTML and CSS and Markdown – are going to look better on a CV than… well, no one boasts they can use a VLE except instructional technologists, and I don’t think the mission of Coventry is to graduate hundreds of those.

I’m pretty resistant to framing “domains” as simply a matter of “skills,” though. Because I think its potential is far more radical than that. This isn’t about making sure literature students “learn to code” or history students “learn to code” or medical faculty “learn to code” or chemistry faculty “learn to code.” Rather, it’s about recognizing that the World Wide Web is a site for scholarly activity. It’s about recognizing that students are scholars.

Washington State University’s Mike Caulfield has laid out a different set of concerns than Tim Berners-Lee’s (although I think they overlap substantially when it comes to questions of misinformation and democracy). Mike talks about the difference between what he describes as the “garden” and the “stream.” The stream, I’d argue, is where the other threats to the Web live – Twitter and Facebook most obviously: the status updates and links that rush past us, often stripped of context and meaning, and certainly stripping us of any opportunity for contemplation or reflection. The garden, on the other hand, encourages just that. It does so by design. And that’s the Web. That’s your domain. You cultivate ideas there – quite carefully, no doubt, because others might pop by for a think. But also because it’s your space for a think.

Tue, 04 Apr 2017 13:01:00 +0000

The Top Ed-Tech Trends (Aren't 'Tech')
Hack Education

This talk was presented at Coventry University as part of my visiting fellowship at the Disruptive Media Learning Lab.

Every year since 2010, I’ve undertaken a fairly massive project in which I’ve reviewed the previous twelve months’ education and technology news in order to write ten articles covering “the top ed-tech trends.” This is how I spend my November and December – researching and writing a series that usually tops out at about 75,000 words – which, I didn’t realize until print copies were made for my visit here, is about 240 pages.

Now, all those words and pages make this quite a different undertaking than most year-in-review stories, than much of the “happy new year” clickbait that tends to offer a short, bulleted list of half a dozen or so technologies that are new enough or cool enough to hype with headlines like “these are the six tools poised to revolutionize education.” To be honest, these sorts of articles are partly why I undertake this project – although each year, when I’m about 15,000 words in, I do ask myself “why am I doing this?!” (This talk will hopefully serve as an explanation for you and a nice reminder for me.)

Last year, I gave a lecture at Virginia Commonwealth University titled “The Best Way to Predict the Future Is to Issue a Press Release.” (The transcript is on my website.) It was one articulation of what’s a recurring theme in my work: we must be more critical about the stories we tell and the stories we’re told about the future of education. Indeed, we need to look at histories of the future and ask why certain people have wanted the future to take a certain shape, why certain technologies (and their stories) have been so compelling.

To be clear, then: when I write my “trends” series, it’s not meant to be predictive. Rather, it’s a history itself – ideally one that’s useful for our thinking about the past, present, and future in the way in which the study of history always should be. It’s a look back at what’s happened over the course of each year, not simply – to counter that totally overused phrase from hockey player Wayne Gretzky’s dad – to “skate to where the puck is going,” but to examine where it has been. And more importantly, to ascertain where some folks – those who issue press releases, for example – want the puck to head.

So I am not here to tell you, based on my analysis of ed-tech “trends,” what new tools you should buy or what new tools you should incorporate into your teaching or what old tools you should discard. That’s not my role – I’m not an advocate or evangelist or salesperson for ed-tech. I realize this makes some people angry – “she didn’t tell us what we should do!” some folks always seem to complain about my talks. “She didn’t deliver a fully fleshed-out 300-point plan to ‘fix education.’” “She didn’t say anything positive about technology, dammit.” That’s not the point of my work. I’m not a consultant hired to talk you through the implementation of your next project.

My work is not “market research” in the way that “market research” typically functions (or in the way “market research” hopes it functions). According to the press releases at least, ed-tech markets are always growing larger. The sales are always increasing. The tech is always amazing. I want us to think more critically about all these claims, about the politics, not just the products (perhaps so the next time we’re faced with consultants or salespeople, we can do a better job challenging their claims or advice).
As you can see, much of what I write isn’t really about technologies at all, but rather about the ideologies that are deeply embedded within them. I write about technologies as practices – political practices, pedagogical practices – not simply tools: practices that tools might enable and that tools might foreclose.

Throughout the year, I follow the money, and I follow the press releases. I scrutinize the headlines. I listen to stories. I try to verify the (wild, wild) claims of marketers and salespeople and politicians. I look for the patterns in the promises that people make about what technologies will do for and to education. And it’s based on these patterns that I eventually select the ten “Top Ed-Tech Trends” for my year-end review. They’re not “trends,” really. They’re themes. They’re categories. They’re narratives. And admittedly, because of my methods, how I piece my research together, they’re narratives that are quite US-centric. I’d say even more specifically, they’re California- and Silicon Valley-centric.

I use “Silicon Valley” in my work as a shorthand to describe the contemporary high tech industry – its tech and, just as importantly, its ideology. Sticklers about geography will readily point out that Silicon Valley itself isn’t the most accurate descriptor for the locus of today’s booming tech sector. It ignores what happens in Cambridge, Massachusetts, for example: the site of Harvard and MIT. It ignores what happens in Seattle: the home of Amazon, Microsoft, and the Bill and Melinda Gates Foundation. (The influence of Bill Gates in education and education technology policy really cannot be overstated. Bill Gates is not part of Silicon Valley per se, but the anti-democratic bent of his philanthropic efforts – justified through claims about “genius,” through a substitution of charity (which is also tax relief) for justice – is, I would contend, absolutely part of the “Silicon Valley narrative.”)

Silicon Valley is itself just one part of Northern California, one part of the San Francisco Bay Area – the Santa Clara Valley. Santa Clara Valley’s county seat and the locus of Silicon Valley (historically at least) is San Jose, not San Francisco or Oakland, where many startups are increasingly located today. Silicon Valley does include Mountain View, where Google is headquartered. It also includes Cupertino, where Apple is headquartered. It includes Palo Alto, home to Stanford University, founded in 1885 by railroad tycoon Leland Stanford.

The “silicon” in “Silicon Valley” refers to the silicon-based integrated circuits that were first developed and manufactured in the area. But I extend the phrase “Silicon Valley” to all of the high tech industry, not just the chip makers. And those chip makers aren’t all located in the area these days. Arguably the phrase “Silicon Valley” obscures the international scope of the operations of today’s tech industry – tax havens in Ireland, manufacturing in China, and so on.

But if the scope is international, the flavor is distinctly Californian. A belief in the re-invention of the self. A “dream factory.” A certain optimism for science as the ultimate solution to any of the world’s problems. A belief in technological utopia. A belief in the freedom of information technologies, in information technologies as freedom. An advocacy for libertarian politics – think Peter Thiel (a Stanford graduate), now advising Donald Trump. A faith in the individual and a distrust of institutions. A fierce embrace of the new. A disdain for the past.
California – the promised land, the end-of-the-road of the US’s westward (continental) expansion, the fulfillment of Manifest Destiny, colonization upon colonization, the gold rush, the invention of a palm-tree paradise. The California too of military bases and aeronautics and oil. California, the giant economy. The California that imagines itself – and hopes others imagine it – in Silicon Valley and Hollywood but not on the farms of the Central Valley. The California that ignores race and labor and water and war.

The California that once could boast the greatest public higher education system in the US – that is, until Ronald Reagan was elected governor of the state in 1966 after campaigning on a vow to “clean up that mess in Berkeley” and promising during his first year in office that he’d make sure taxpayers in the state were no longer “subsidizing intellectual curiosity.” We can see in Reagan’s pledge the roots of ongoing efforts to defund public education, something that enabled for-profit schools to step in to meet the demand for college. We can see too in Reagan a redefinition of the purpose of higher ed – it’s not about “intellectual curiosity”; it’s about “jobs,” it’s about “skills.”

Despite thinking of themselves as liberal-leaning, today’s tech companies re-inscribe much of this. “Everyone should learn to code,” as they like to tell us. “Higher education is a bubble,” as Peter Thiel has said. “Disrupt.” “Unbundle.” “It’s like Uber for education.” And so on.

“The Californian Ideology,” as Richard Barbrook and Andy Cameron described all this in their terrifically prescient essay from 1995, does not tend to make many lists of the “top ed-tech trends.” But the ideology permeates our digital technologies, whether we like it or not. And if and when we ignore it, I fear we misconstrue what’s going on with Silicon Valley’s products and press releases. We’re more likely to overlook the role that venture capital plays, for example.

2015 was a record-setting year for investment in education technology, with some $4 billion flowing into the industry globally. But the total dollars fell sharply in 2016 – “only” $2.2 billion. The number of investments fell by 11%. (It’s a bit too early to tell what 2017 will bring.) I repeatedly select “the business of ed-tech” as one of my “top ed-tech trends” because I think it’s crucial to questions about investors’ interest in education and education technology. What sorts of companies and what sorts of products do venture capitalists like, for example? What’s the appeal – profits, privatization? (Turns out, lately investors like testing companies, tutoring companies, “learn to code” companies, and private student loan providers.)

Why has investment fallen off? (Turns out that “free” might not be the best business model for a for-profit company, particularly one that cannot rely on advertising the same way that other “free” products like Facebook and Google can. Turns out too that a lot of the education startups that have been promising “revolution” or hell, even “improved outcomes,” for the past few years have been selling snake oil. Turns out that venture capitalists work with a typical timeline: about three to five years after making their investment, they expect a return in the form of an acquisition or a public offering – and very, very few ed-tech companies go public. Turns out that Pearson, which once funded and acquired a lot of startups, isn’t in particularly good financial shape itself.)
Now, it’s so very typically American to come to the UK to talk about ed-tech and to insist “oh really, it’s all about the US – our values.” “It’s all about the state I live in,” even – to invoke Pearson, a company founded in Yorkshire in 1844, the largest education company in the world, and still insist that the “Silicon Valley narrative” and the “California ideology” are the dominant forces shaping education technology. (I’m not thrilled about this either, mind you!)

In Distrusting Educational Technology, sociologist Neil Selwyn identifies three contemporary ideologies that are intertwined with today’s digital technologies – my references to “Silicon Valley narratives” are meant to invoke these: libertarianism, neoliberalism, and “the ideology of the ‘new economy.’” Selwyn writes:

Most people, it would seem, are happy to assume that educational technologies are “neutral” tools that are essentially free from values and intent (or, at most, shaped by generally optimistic understandings and meanings associated with educational change and improvement). In this sense, it is difficult at first glance to see educational technology as entwined with any aspect of the dominant ideologies just described. Yet, as was noted earlier, one of the core characteristics of hegemony is the ability of dominant ideologies to permeate commonsensical understandings and meaning. Following this logic, then, the fact that educational technology appears to be driven by a set of values focused on the improvement of education does not preclude it also serving to support and legitimate wider dominant ideological interests. Indeed, if we take time to unpack the general orthodoxy of educational technology as a “positive” attempt to improve education, then a variety of different social groups with different interests, values and agendas are apparent. … While concerned ostensibly with changing specific aspects of education, all of these different interests could be said to also endorse (or at least provide little opposition to) notions of libertarianism, neo-liberalism and new forms of capitalism. Thus educational technologies can still be said to be “ideologically freighted”, although this may not always be a primary intention of those involved in promoting their use.

I’d add another ideological impulse that Selwyn doesn’t mention here: a fierce belief in technological solutionism (I’m building on Evgeny Morozov’s work here) – if students are struggling to graduate, or they’re not “engaged,” or they’re not scoring well on the PISA test, the solution is necessarily technological. More analytics. More data collection. More surveillance.

I would point to this “ideological freighted-ness” in almost all of the trends I’ve written about since 2010. You can see neoliberalism, for example, in efforts towards privatization and the outsourcing of core technological capacities to third-party vendors. (This is part of the push for MOOCs, we must be honest.) I’m not sure there’s any better expression of this “Silicon Valley narrative” or “California ideology” than in “personalization,” a word used to describe how Netflix suggests movies to us, how Amazon suggests products to us, how Google suggests search results to us, and how educational software suggests the next content module you should click on. Personalization, in all these manifestations, is a programmatic expression of individualism.
It is the individual, the Silicon Valley narrative insists, whose sovereignty is most important, whose freedom is squelched by the collective. Personalization – this belief that the world can be and should be algorithmically crafted to best suit each individual individually (provided, of course, that the individual’s needs and desires coincide with those of the person who wrote the algorithm and of the platform that’s promising “personalization”).

Personalization. Platforms. These aren’t simply technological innovations. They are political, social – shaping culture and politics and institutions and individuals in turn. In 2012, I chose “the platforming of education” as one of the “top ed-tech trends.” I made that selection in part because several ed-tech companies indicated that year that this was what they hoped to become – the MOOC startups, for example, as well as Edmodo, a social network marketed to K–12 schools. And “platforming” was a story that technology companies were telling about their own goals too. To become a platform is to be “the next Facebook” or “the next Google” (and as such, to be a windfall for investors).

Platforms aim to centralize services and features and functionality so that you go nowhere else online. They aspire to be monopolies. Platforms enable and are enabled by APIs, by data collection and transference, by data analysis and data storage, by a marketplace of data (with users creating the data and users as the product). They’re silos, where all your actions can be tracked and monetized. In education, that’s the learning management system (the VLE), perhaps.

I wondered briefly last year if we were seeing a failure in education platforms – or at least, a failure to fulfill some of the wild promises that investors and entrepreneurs were making back in 2012. A failure to “platform.” Despite raising some $87.5 million in venture capital, for example, Edmodo hadn’t even figured out a business model, let alone become a powerful platform. Similarly, the MOOC startups have now all seemed to pivot towards corporate technology training, but certainly all corporate training isn’t running through these companies. Neither Coursera nor Udacity nor edX has become a corporate training platform, although perhaps that’s what Microsoft hopes to become, as a result of its acquisition of the professional social network LinkedIn, which had previously acquired the online training company Lynda.com.

Platforms haven’t gone away, even if education technology companies specifically haven’t successfully platformed education – yet. Technology companies, on the other hand, seem well poised to do so – not just Microsoft, but Google and Apple, of course. And even Facebook has made an effort to this end, partnering with a chain of charter schools in the US, Summit Public Schools, in order to build a “personalized learning platform.” From the company’s website:

The platform comes with a comprehensive curriculum developed by teachers in classrooms. The base curriculum is aligned with the Common Core, and each course includes meaningful projects, playlists of content and assessments, all of which can be customized. Teachers can adapt or create new playlists and projects to meet their students’ needs.

“Playlists” – this seems to be one of the latest buzzwords connected to personalization. “Students build content knowledge by working at their own pace and take assessments on demand,” the Summit website says.
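It’s worth being concrete about what a “playlist” engine can amount to algorithmically. Here is a deliberately toy sketch – hypothetical, and in no way Summit’s or AltSchool’s actual code; every name and threshold below is invented for illustration – of software “personalizing” a student’s next step from nothing more than their last assessment score:

```python
# A toy illustration (hypothetical; not any vendor's actual logic):
# "personalization" as a bare conditional over assessment data.
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    last_score: float  # fraction correct on the most recent assessment

def next_module(student: Student, playlist: list[str], position: int) -> str:
    """Pick the next item on the 'playlist' for this student.

    Note who decides here: not the student, but whoever wrote this
    function and chose its thresholds.
    """
    if student.last_score < 0.6:
        return playlist[position]  # repeat the current module
    if student.last_score < 0.8:
        return "remediation::" + playlist[position]  # flagged for review
    return playlist[min(position + 1, len(playlist) - 1)]  # move along

playlist = ["fractions-1", "fractions-2", "decimals-1"]
print(next_module(Student("a. learner", 0.55), playlist, 0))  # fractions-1
```

The choices that matter – what counts as mastery, what the “playlist” contains, what gets logged along the way – all live with the platform, not with the learner.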
But while students might be able to choose which order they tackle the “playlist,” there isn’t really open inquiry about what “songs” (if you will) they get to listen to. AltSchool is another Silicon Valley company working on a “personalized learning platform.” It was founded in 2014 by Max Ventilla, a former Google executive. AltSchool has raised $133 million in venture funding from Zuckerberg Education Ventures, the Emerson Collective (the venture philanthropy firm founded by Steve Jobs’ widow Laurene Powell Jobs), Founders Fund (Peter Thiel’s investment firm), Andreessen Horowitz, and others. The AltSchool classroom is one of total surveillance: cameras and microphones and sensors track students and teachers – their conversations, their body language, their facial expressions, their activities. The software – students are all issued computing devices – tracks the clicks. Everything is viewed as a transaction that can be monitored and analyzed and then re-engineered. Stirling University’s Ben Williamson has written fairly extensively about AltSchool, noting that the company describes itself as a “full stack” approach to education. From the AltSchool blog, As opposed to the traditional approach of selling or licensing technology to established organizations, the full stack startup builds and manages a complete end-to-end product or service, thereby bypassing incumbents. So why take a full stack approach to education? “You want to own the total outcome,” says A16z general partner and AltSchool investor Lars Dalgaard. “We are building the world’s biggest private school system. To make that experience the one we want – one that is more affordable, better, and revolutionary – you need to have full ownership.” While the company started with aspirations of launching a chain of private schools, like many education startups it’s had to “pivot” – focusing less on opening schools (hiring teachers, recruiting students) and more on building and selling software (hiring engineers, hiring marketers). But it retains, I’d argue, this “full stack” approach. Rather than thinking about the platforming of education as just a matter of centralizing and controlling the software, the data, the analytics, we have this control spilling out into the material world – connected to sensors and cameras, but also shaping the way in which all the practices of school happen and – more frighteningly, I think – the shape our imagination of school might take. John Herrman recently wrote in The New York Times that Platforms are, in a sense, capitalism distilled to its essence. They are proudly experimental and maximally consequential, prone to creating externalities and especially disinclined to address or even acknowledge what happens beyond their rising walls. And accordingly, platforms are the underlying trend that ties together popular narratives about technology and the economy in general. Platforms provide the substructure for the “gig economy” and the “sharing economy”; they’re the economic engine of social media; they’re the architecture of the “attention economy” and the inspiration for claims about the “end of ownership.” Platforms are not substitutes for community. They are not substitutes for collective political action. We should resist the platforming of education, I’d argue. We should resist because of the repercussions for labor – the labor of teaching, the labor of learning. We should resist because of the repercussions for institutions, for the law, for democracy. 
And these are the things I try to point out when I select the “top ed-tech trends” – too many other people want us to simply marvel at their predictions and products. I want us to consider instead the ideologies, the implications. Mon, 03 Apr 2017 19:01:00 +0000

Spencer Fellow!
Hack Education

Columbia Journalism School has awarded me a Spencer Education Journalism Fellowship for the 2017–2018 academic year. I can’t even begin to articulate how truly thrilled and humbled I am by this opportunity. It’s not going to change much here on Hack Education. The fellowship will help me pursue the work I already undertake here. It will, however, change my home base: I will be relocating from Los Angeles to New York City for all or part of the school year. The project that I proposed involves studying the networks of education technology investors and how they are shaping education policies (as well as our imagination about what the future of education might look like). I already pay quite close attention to how venture capital flows into ed-tech, and during my fellowship I’ll expand this research in a couple of ways. First, the selection of Betsy DeVos as Secretary of Education, with her 108-page financial disclosure form, serves as a reminder that venture capital (particularly “Silicon Valley” venture capital) is really just one of the players here. Private equity and hedge funds are particularly important too (although much less boastful than their VC counterparts), as are the socio-political relationships among the various investors and entrepreneurs and philanthropists. During my fellowship, I’ll be investigating and tracing out these networks in order to identify some of the more powerful groups – education technology’s equivalent to the “Paypal Mafia,” if you will – and the ideas and policies that they are pushing. “Personalization” is an obvious one. I posted a rough draft of my fellowship proposal on a new subdomain where I’ll be posting some of my research and analysis along the way: network.hackeducation.com. But most of my stories will, as usual, appear here on Hack Education. Fri, 31 Mar 2017 08:01:00 +0000

Driverless Ed-Tech: The History Of The Future Of Automation In Education
Hack Education

This talk was presented at The University of Edinburgh's Moray House School of Education. Let me begin with a story. In December 2012 – we all remember 2012 right? “The Year of the MOOC” – I was summoned to Palo Alto, California for a small gathering to discuss the future of teaching, learning, and technology. I use the verb “summoned” deliberately. The event was organized by Sebastian Thrun, who at the beginning of the year had announced that he was resigning his full-time professor position at Stanford in order to launch Udacity, his online education startup. It was held at Stanford in its artificial intelligence lab, which was a slightly awkward venue, as Thrun’s office – he still had an office on campus, of course – was right next to those of Daphne Koller and Andrew Ng, his fellow Stanford AI professors who’d announced in April that they were launching a competitor company, Coursera. When Thrun first invited us all to this event – about ten of us – he promised that at the end of the weekend, we would take a ride in a zeppelin over San Francisco. And I thought “like hell I will.” I’ve seen A View to a Kill. I know what happened to the dissenters who got into a zeppelin in that movie. But as it turned out, the zeppelin company had gone out of business – I imagine that many people, like myself, could only think about Christopher Walken and Grace Jones’ characters and opted not to go. So instead of a zeppelin, we got to ride in one of Google’s self-driving cars, which was of course the project that Thrun had been working on when he gave his famous TED Talk in 2011 – and that, in turn, was where he heard Salman Khan give his famous TED Talk. It was when and where Thrun decided that he needed to rethink his work as a Stanford professor in order to “scale” education. Thrun “drove.” He steered the car onto I-280 and then let the car take over, and I have to say – and I say this as a professional skeptic of technology – it was this strange combination of the utterly banal and the utterly impressive. (It was 2012, I should reiterate, so it was right at the beginning of all this hype about a future of autonomous vehicles.) The car was covered in cameras and sensors, inside and out – even a QR code on the driver’s side glove compartment that you were supposed to scan to sign Google’s Terms of Service before riding. Seemingly the most dangerous element of our little jaunt was that other drivers swerved and slowed down as they stared at the car, with its giant camera on top and Google logo on the sides. There was Thrun with his hands off the wheel, feet off the pedals, eyes not on the road, sometimes turning around entirely to face the passengers in the back seat, explaining how the car (and Google, of course) collected massive amounts of data in order to map the road and move efficiently along it. Efficiency. That’s the goal of the self-driving car. (You’re free to insert here some invented statistic about the percentage of space and energy that are wasted by human-driven traffic and human driving patterns and that will be corrected by roads full of autonomous vehicles. I vaguely recall Thrun doing so at least.) It was then and there on that trip that I had a revelation about how many entrepreneurs and engineers in Silicon Valley conceive of education and the role of technology in reshaping it: that is, if you collect enough data – lots and lots and lots of data – you can build a map. 
This is their conceptual framework for visualizing how “learners” (and that word is used to describe various, imagined students, workers, and consumers) get from here to there, whether it’s through a course or through a degree program or towards a job. With enough data and some machine learning, you can identify – statistically – the most common obstacles. You can plot the most frequently traveled path and the one that folks traverse most quickly. You can optimize. And once you’ve trained those algorithms, you can apply them everywhere. You can scale. We can debate this model (we should debate this model) – how it works or doesn’t work when applied to education. (Is learning “like a map”? Is learning an engineering problem? Is the absence of “data” or algorithms really a problem?) But one of the most important things to remember is that this is (largely) a computer scientist’s model. It’s the model of human learning by someone who claims expertise in machine learning, a field of study which has aspired to model if not surpass the human mind. And that makes it a model in turn that rests on a lot of assumptions about “learning” – both how humans “learn” and how machines “learn” to conceptualize and navigate their worlds. It’s a model. It’s a metaphor. It’s an aspiration – a human aspiration, to be clear. This isn’t what machines “want.” (Machines have no wants.) I think many of us quickly recognized back in 2012 that, despite the AI expertise in the executive offices of these MOOC companies, there wasn’t much “artificial intelligence” beyond a few of their course offerings; there wasn’t much “intelligence” in their assessments or in their course recommendation engines. These MOOCs were, nonetheless – and still are – massive online honeypots into which we’ve been lured: registering and watching and clicking in order to generate massive education datasets. Perhaps with this data, the MOOC providers can build a map of professional if not cognitive pathways. Perhaps. Someday. Maybe. In the meantime, these companies continue to collect a lot of “driving” data. Who controls the mapping data and who controls the driving data and who controls the autonomous vehicle patents are, of course, a small part of the legal and financial battles that are brewing over the future of autonomous vehicles. Google versus Uber. Google versus Didi (the Chinese ride-hailing company, which has its own self-driving car effort). We can speculate, I suppose, about what the analogous battles might be in education – which corporation will sue which corporation, claiming they “own” learning data and learning roadmaps and learning algorithms and learning software IP. (Spoiler alert: it won’t actually be learners – just like it’s not actually drivers – even though that’s where the interesting data comes from: not from mapping the roads, but from monitoring the traffic.) As we were driving on the freeways around Palo Alto in the Google autonomous vehicle, someone asked Sebastian Thrun what happens if there’s an unexpected occurrence while the car is in self-driving mode. Now, the car is constantly making small adjustments – to its speed, to its distance to other vehicles. “But what would happen if, say, a tree suddenly came crashing down in the road right in front of it,” the passenger asked Thrun. “The car would stop,” he said. The human driver would be prompted to take over. Hopefully the human driver is paying attention. Hopefully there’s a human driver. Of course, the “unexpected” occurs all the time – on the road and in the classroom. 
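It’s worth pausing to spell out just how simple this “map” model really is, computationally. Here is a minimal sketch in Python – the clickstream logs and module names below are entirely invented for illustration; no MOOC provider has published its actual pipeline:

    from collections import Counter

    # Hypothetical clickstream logs: one tuple of module IDs per learner,
    # in the order each learner completed them. Invented data.
    paths = [
        ("intro", "loops", "functions", "project"),
        ("intro", "functions", "loops", "project"),
        ("intro", "loops", "functions", "project"),
        ("intro", "loops", "project"),
    ]

    # "Build the map": tally how often each complete route is traveled.
    path_counts = Counter(paths)

    # "Optimize": surface the single most frequently traveled route,
    # which the platform can then recommend to everyone. That's the scale play.
    most_common_path, count = path_counts.most_common(1)[0]
    print(f"{count} of {len(paths)} learners took: {' -> '.join(most_common_path)}")

Everything that might count as the “unexpected” – why a learner lingered, struggled, or detoured – is precisely what a tally like this throws away.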
Recently, the “ride-sharing” company Uber flouted California state regulations in order to start offering an autonomous vehicle ride-sharing service in San Francisco. The company admitted that it hadn’t addressed at least one flaw in its programming: that its cars would make a right-hand turn through a bicycle lane (the equivalent of a left-hand turn here in the UK). Uber didn’t have a model for recognizing the existence of “bike lanes” (and as such “cyclists”). It’s not that the car didn’t see something “unexpected”; that particular “unexpected” was not fully modeled, and the self-driving car didn’t slow, and it didn’t stop. In this testing phase of Uber’s self-driving cars, there was still a driver sitting behind the wheel. Documents recently obtained by the tech publication Recode revealed that Uber’s autonomous vehicles drove, on average, less than a mile without requiring human intervention. The technology simply isn’t that good yet. At the conclusion of our ride, Thrun steered the Google self-driving car back to his house, where he summoned a car service to take us back to our hotel. Giddy from the experience, one professor boasted to the driver about what we’d just done. He frowned. “Oh,” he said. “So, you just put me out of a job?” “Put me out of a job.” “Put you out of a job.” “Put us all out of work.” We hear that a lot, with varying levels of glee and callousness and concern. “Robots are coming for your job.” We hear it all the time. To be fair, of course, we have heard it, with varying frequency and urgency, for about 100 years now. “Robots are coming for your job.” And this time – this time – it’s for real. I want to suggest – and not just because there are flaws with Uber’s autonomous vehicles (and there was just a crash of a test vehicle in Arizona last Friday) – that this is not entirely a technological proclamation. Robots don’t do anything they’re not programmed to do. They don’t have autonomy or agency or aspirations. Robots don’t just roll into the human resources department of their own accord, ready to outperform others. Robots don’t apply for jobs. Robots don’t “come for jobs.” Rather, business owners opt to automate rather than employ people. In other words, this refrain that “robots are coming for your job” is not so much a reflection of some tremendous breakthrough (or potential breakthrough) in automation, let alone artificial intelligence. Rather, it’s a proclamation about profits and politics. It’s a proclamation about labor and capital. And this is as true in education as it is in driving. As Recode wrote in that recent article, Successfully creating self-driving technology has become a crucial factor to Uber’s profitability. It would allow Uber to generate higher sales per ride since it would keep all of the fare. Uber has currently suffered losses in some markets partly because of having to offer subsidies to attract drivers. Computers are cheaper in the long run. “Computers are cheaper in the long run.” Cheaper for whom? Cheaper how? Well, robots don’t take sick days. They don’t demand retirement or health insurance benefits. You tell them the rules, and they obey the rules. They don’t ask questions. They don’t unionize. They don’t strike. 
A couple of years ago, there was a popular article in circulation in the US that claimed that the most common occupation in every state is “truck driver.” The data is a little iffy – the US is a service economy, not a shipping economy – but its claim about why “truck driver” persists is still fairly revealing: unlike other occupations, the work of “truck driver” has not been affected by globalization, the article claimed, and it has not (yet) been affected by automation. (The CEO of Otto, a self-driving trucking company now owned by Uber, just predicted this week that AI will reshape the industry within the next ten years.) Truck driving is also a profession – an industry – that’s been subject to decades of regulation and deregulation. That regulatory framework is just one of the objects of derision – of “disruption” and dismantling – for the ride-sharing company Uber. Founded in 2009 – ostensibly after CEO Travis Kalanick was unable to hail a cab while in Paris – the company has become synonymous with the so-called “sharing” or “freelance” economy, Silicon Valley’s latest rebranding of technologically-enhanced economic precarity and job insecurity. “Anyone” can drive for Uber, no special training or certification required. Well, anyone who’s 21 or older and has three years of driving experience and a clean driving record. Anyone with car insurance. Anyone whose car has at least four doors and is newer than 2001 – Uber will also help you finance a new car, even if you have a terrible credit score. Your loan payments are simply deducted from your Uber earnings each week. All along, Uber has been quite clear that, despite wooing drivers to its platform, using “independent contractors” is only temporary. The company plans to replace drivers with driverless cars. Since its launch, Uber has become infamous for its opposition to regulations and to unions. (Uber has recently been using podcasts broadcast from its app in order to dissuade drivers in Seattle from unionizing, for example.) And I’ll note here, in case this sounds too much like a talk on autonomous vehicles and not enough on automated education, that I am purposefully putting these two “disruptions” side by side. After all, education is heavily regulated as well – accreditation, for example, dictates who gets to offer “real” degrees. There are rules about who gets to run a “real school.” Trump University, not a real school. And there are rules as to who gets to be in the classroom, rules about who can teach. But any semblance of job protections – at both the K–12 level and at the higher education level in the US – is under attack. (Again, this isn’t simply about replacing teachers with computers because computers have become so powerful. But it is about replacing teachers nonetheless.) You no longer need a teaching degree (or any teacher training) in Utah. And while the certification demands might still be in place in colleges and universities, they’ve been moving towards a precarious teaching labor force for some time now. More than three-quarters of the teaching staff in US higher education are adjuncts – short-term employees with no job security and often no benefits. “Independent contractors.” Uber encourages educators to earn a little cash on the side as drivers. Like I said, I’m not sure I believe that the most prevalent job in the US is “truck driver.” But I do know this to be true: the largest union in the United States is the National Education Association. The other teachers’ union, the American Federation of Teachers, is the sixth largest. 
Many others who work in public education are represented by the second largest union in the US, the Service Employees International Union. Silicon Valley hates unions. It loathes organized labor just as it loathes regulations (until it benefits from regulations, of course). Now, for its part, Uber has also been accused of violating “regulations” like the Americans with Disabilities Act for refusing to pick up riders with service dogs or with wheelchairs. A fierce proponent of laissez-faire capitalism, Uber has received a fair amount of negative press for its price gouging practices – it uses what it calls “surge pricing” during peak demand, increasing the amount a ride will cost in order, Uber says, to lure more drivers out onto the road. It’s implemented surge pricing not just on holidays like New Year’s Eve but during several weather-related emergencies. The company has also actively sabotaged its rivals – attacking other ride service companies as well as journalists. None of this makes the phrase “Uber for Education” particularly appealing. But that’s how Sebastian Thrun described his company Udacity in a series of interviews in 2015. “At Udacity, we built an Uber-like platform,” he told the MIT Technology Review. “With Uber any normal person with a car can become a driver, and with Udacity now every person with a computer can become a global code reviewer. … Just like Uber, we’ve made the financials line up. The best-earning global code reviewer makes more than 17,000 bucks a month. I compare this to the typical part-time teacher in the U.S. who teaches at a college – they make about $2,000 a month.” “We want to be the Uber of education,” Thrun told The Financial Times, which added that, “Mr Thrun knows what he doesn’t want for his company: professors in tenure, which he claims limits the ability to react to market demands.” In other words, “disrupt” job protections through a cheap, precarious labor force doing piecemeal work until the algorithms are sophisticated enough to perform those tasks. Universities have already taken plenty of steps towards this end, without the help of algorithms or for-profit software providers. But universities are still bound by accreditation (and by tradition). “Anyone can teach” is not a stance on labor and credentialing that many universities are ready to take. Udacity is hardly the only company that invokes the “Uber for Education” slogan. There’s PeerUp, whose founder describes the company as “Uber for tutors.” There’s ProfHire and Adjunct Professor Link, Uber for contingent faculty. There’s The Graide Network, Uber for teaching assistants and exam markers. There’s Parachute Teachers, which describes itself as “Uber for substitute teachers.” Again, what we see here with these services are companies that market “on demand” labor as “disruption.” These certainly reflect larger trends at work dismantling the teaching profession – de-funding, de-professionalization, adjunctification, a dismissal of expertise and experience. Anyone can teach. Indeed, the only ones who shouldn’t are probably the ones in the classroom right now – or so this story goes. 
The right-wing think tank The Heritage Foundation has called for an “Uber-ized Education.” The right-wing publication The National Review has called for “an Uber for Education.” Echoing some of the arguments made by Uber CEO Travis Kalanick, these publications (and many, many others) speak of ending the monopolies that “certain groups” (unions, women, liberals, I don’t know) have on education – ostensibly, I guess, on public schools – and bringing more competition to the education system. US Secretary of Education Betsy DeVos, in a speech earlier this week, also invoked Uber as a model that education should emulate: “Just as the traditional taxi system revolted against ridesharing,” she told the Brookings Institution, “so too does the education establishment feel threatened by the rise of school choice. In both cases, the entrenched status quo has resisted models that empower individuals.” All this is a familiar refrain in Silicon Valley, which has really cultivated its own particular brand of consumerism wrapped up in the mantle of libertarianism. Travis Kalanick is just one of many tech CEOs who have praised the work of objectivist “philosopher” and “novelist” Ayn Rand, once changing the background of his Twitter profile to the cover of her book The Fountainhead. He told The Washington Post in a 2012 Q&A that the regulations that the car service industry faced bore an “uncanny resemblance” to Rand’s other novel, Atlas Shrugged. (A quick summary for those lucky enough to be unfamiliar with the plot: the US has become a dystopia overrun by regulations that cause industries to collapse, innovation to be stifled. The poor are depicted as leeches; the heroes are selfish individualists. Eventually business leaders rise up against the government, led by John Galt. The government collapses, and Galt announces that industrialists will rebuild the world. It is a terrible, terrible novel. It is nonetheless many libertarians’ Bible of sorts.) I’ve argued elsewhere (and I’ve argued repeatedly) that libertarianism is deeply intertwined with the digital technologies developed by those like Uber’s Kalanick. And I don’t mean here simply or solely that these technologies are wielded to dismantle “big government” or “big unions.” I mean that embedded in these technologies, in their design and in their development and in their code, are certain ideological tenets – in the case of libertarianism, a belief in order, freedom, work, self-governance, and individualism. That last one is key, I think, for considering the future of education and education technology – as designed and developed and coded by Silicon Valley. Individualism. Now obviously these beliefs are evident throughout American culture and have been throughout American history. Computers didn’t cause neoliberalism. Computers didn’t create libertarians. (They just hooked them all up on Twitter.) Indeed, there’s that particular strain of individualism that is deeply, deeply American which contributed to libertarianism and to neoliberalism and to computers in turn. I’d argue that that strain of individualism has been a boon for the automotive industry – for car culture. Many Americans would rather drive their own vehicles than rely on – and/or fund – public transportation. I think this is both Uber’s great weakness and also, strangely, its niche: you hail a car, rather than take the bus. The car comes immediately; you do not have to wait. It takes you to your destination; you needn’t stop for others. 
As such, you can dismiss the need to develop a public transportation infrastructure, as some cities in the US have done – some opting to outsource this to Uber instead. In a car, you can move at your own pace. In a car, you can move in the direction you choose – when and where you want to go. In a car, you can stop and start, sure, but most often you want to get where you’re going efficiently. In a car – and if you watch television ads for car companies, you can see evidence of this powerful imaginary most strikingly – you are truly free. Unlike the routes of public transportation – the bus route, the subway line – routes that are prescribed for and by the collective, the car is for you and you alone. The car is another one of these radically individualistic, individualizing technologies. The car is a prototype of sorts for the concept of “personalization.” Branded. Controlled. Manufactured en masse. Mass-marketed. And yet somehow this symbol of the personal, the individual. We can think about the relationship too between education systems and individualism. Increasingly, I believe, that’s how education is defined – not as a collective endeavor or a public good, but as an individual investment. “Personalization” is a reflection of that. “Personalized” education promises you can move at your own pace. You can (ostensibly) move in the direction you choose. You can stop and start, sure, but most often you want to get where you’re going efficiently. With “personalized” software – and if you read publications like Edsurge, you can see evidence of this powerful imaginary most strikingly – the learner is truly free. Unlike the routes of “traditional” education – the lecture hall, the classroom – those routes that are prescribed for and by the collective, “personalized software” is for you and you alone. The computer is a radically individualistic, individualizing technology; education becomes a radically individualistic act. (I’ll just whisper this because I’d hate to ruin the end of the movie for folks: this freedom actually involves you driving.) Let me pause here and note that there are several directions that I could take this talk: data collection and analysis as “personalization,” for example. The New York Times just wrote about a tool called Greyball that Uber has utilized to avoid scrutiny from law enforcement and regulators in the cities into which it’s tried to expand. The tool would ascertain, based on a variety of signals, when cops might be trying to summon an Uber and would prevent them from doing so. Instead, they’d see a special version of Uber – “personalized” – that misinformed them that there were no cars in the vicinity. How is “personalized learning” – the automation of education through algorithms – a form of “greyballing”? I am really intrigued by this question. Another piece of the automation puzzle for education (and for “smart cars” and “smart homes”) involves questions of what we mean by “intelligence” in that phrase “artificial intelligence.” What are the histories and practices of “intelligence” – how have humans been ranked, categorized, punished, and rewarded based on an assessment of intelligence? How is intelligence performed – by man (and I do mean “man”) and by machine? What do we read as signs of intelligence? What do we cultivate as signs of intelligence – in our students and in our machines? What role have educational institutions had in developing and sanctioning intelligence? 
How does believing there’s such a thing as “machine intelligence” challenge some institutions (and prop up others)? But I want to press on a little more with a look at automation and labor: this issue of driverless cars and driverless school, this issue of “freedom” as being intertwined with algorithmic decision-making and precarious labor. I am lifting the phrase “driverless school” for the title of this talk from Karen Gregory, who recently tweeted something about the “driverless university.” I believe she was at a conference, but in the horrible way that Twitter strips context from our utterances, I’m going to borrow it without knowing who or what she was referring to and re-contextualize the phrase here for my purposes, because that’s the visiting speaker’s prerogative. I do think that in many ways MOOCs were envisioned – by Thrun and by others – as a move towards this idea of a “driverless university.” And that phrase and the impulse behind it should prompt us to ask, no doubt, who is currently “driving” school? Who do education engineers imagine is doing the driving? Is it the administration? The faculty? The government? The unions? Who exactly is going to be displaced by algorithms, by software that purports to make a university “driverless”? What’s important to consider, I’d argue, is that if we want to rethink how the university functions – and I’ll just assume that we all do in some way or another – “driverlessness” certainly doesn’t give the faculty a greater say in governance. (Indeed, faculty governance seems, in many cases, one of the things that automation seeks to eliminate. Think Thrun’s comments on tenure, for example.) More troubling, the “driverlessness” of algorithms is opaque – even more opaque than universities’ decision-making already is (and that is truly saying something). And despite all the talk of catering to what Silicon Valley has lauded as the “self-directed learner,” to those whom Tressie McMillan Cottom has called the “roaming autodidacts,” the “driverless university” certainly does not give students a greater say in their own education either. The “driverless university,” rather, is controlled by the engineers who write the algorithms, those who model the curriculum, those who think they can best navigate a learning path. There is still a “driver,” but that labor and decision-making power is obscured. We can see the “driverless university” already under development perhaps at the Math Emporium at Virginia Tech, which The Washington Post once described as “the Wal-Mart of higher education, a triumph in economy of scale and a glimpse at a possible future of computer-led learning.” Eight thousand students a year take introductory math in a space that once housed a discount department store. Four math instructors, none of them professors, lead seven courses with enrollments of 200 to 2,000. Students walk to class through a shopping mall, past a health club and a tanning salon, as ambient Muzak plays. The pass rates are up. That’s good traffic data, I suppose, if you’re obsessed with moving bodies more efficiently along the university’s pre-determined “map.” Get the students through pre-calc and other math requirements without having to pay for tenured faculty or, hell, even adjunct faculty. “In the Emporium, the computer is teacher,” The Washington Post tells us. 
“Students click their way through courses that unfold in a series of modules.” Of course, students who “click their way through courses” seem unlikely to develop a love for math or a deep understanding of math. They’re unlikely to become math majors. They’re unlikely to become math graduate students. They’re unlikely to become math professors. (And perhaps you think this is a good thing if you believe there are too many mathematicians or if you believe that the study of mathematics has nothing to offer a society that seems increasingly obsessed with using statistics to solve every single problem that it faces or if you think mathematical reasoning is inconsequential to twenty-first century life.) Students hate the Math Emporium, by the way. Despite The Washington Post’s pronouncement that “the time has come” for computers as teachers, the time has been coming for years now. “Programmed instruction” and teaching machines – these are concepts that are almost one hundred years old. (So to repeat, the push to automate education is not about technology as much as it’s about ideology.) In his autobiography, B. F. Skinner described how he came upon the idea of a teaching machine in 1953: Visiting his daughter’s fourth-grade classroom, he was struck by the inefficiencies. Not only were all the students expected to move through their lessons at the same pace, but when it came to assignments and quizzes, they did not receive feedback until the teacher had graded the materials – sometimes a delay of days. Skinner believed that both of these flaws in school could be addressed by a machine, so he built a prototype that he demonstrated at a conference the following year. Skinner’s teaching machine broke concepts down into small chunks – “bite-sized learning” is today’s buzzword. Students moved through these concepts incrementally, which Skinner believed was best for “good contingency management.” Skinner believed that the machines could be used to minimize the number of errors that students made along the way, maximizing the positive behavioral reinforcement that students received. Skinner called this process “programmed instruction.” Driverless ed-tech. “In acquiring complex behavior the student must pass through a carefully designed sequence of steps,” Skinner wrote, “often of considerable length. Each step must be so small that it can always be taken, yet in taking it the student moves somewhat closer to fully competent behavior. The machine must make sure that these steps are taken in a carefully prescribed order.” Driverless and programmatically constrained. Skinner had a dozen of the machines he prototyped installed in the self-study room at Harvard in 1958 for use in teaching the undergraduate course Natural Sciences 114. “Most students feel that machine study has compensating advantages,” he insisted. “They work for an hour with little effort, and they report that they learn more in less time and with less effort than in conventional ways.” (And we all know that if it’s good enough for Harvard students…) “Machines such as those we use at Harvard,” Skinner boasted, “could be programmed to teach, in whole and in part, all the subjects taught in elementary and high school and many taught in college.” The driverless university. One problem – there are many problems, but here’s a really significant one – those Harvard students hated the teaching machines. They found them boring. And certainly we can say “well, the technology just wasn’t very good” – but it isn’t very good now either. 
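And part of why it isn’t very good is that the underlying design is so simple. Skinner’s “carefully prescribed order” is trivially easy to express in software, which may be why it keeps getting rebuilt. Here is a minimal sketch – the arithmetic “frames” are my invention for illustration, not Skinner’s:

    # Linear programmed instruction, Skinner-style: a fixed sequence of very
    # small steps, immediate feedback, and no advancing until the student
    # responds correctly. These frames are invented for illustration.
    FRAMES = [
        ("2 + 2 = ?", "4"),
        ("2 + 3 = ?", "5"),
        ("2 + 4 = ?", "6"),
    ]

    def run_program(frames):
        for prompt, correct in frames:        # the carefully prescribed order
            while True:
                answer = input(prompt + " ").strip()
                if answer == correct:
                    print("Right!")           # immediate positive reinforcement
                    break                     # one small step forward
                print("Try again.")           # errors repeat; they never branch

    run_program(FRAMES)

Much of today’s “personalized” courseware is, at bottom, a fancier version of this loop.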
Ohio State University psychology professor Sidney Pressey – he’d invented a teaching machine about a decade before B. F. Skinner did – said in 1933 that, There must be an “industrial revolution” in education, in which educational science and the ingenuity of educational technology combine to modernize the grossly inefficient and clumsy procedures of conventional education. Work in the schools of the future will be marvelously though simply organized, so as to adjust almost automatically to individual differences and the characteristics of the learning process. There will be many labor-saving schemes and devices, and even machines – not at all for the mechanizing of education, but for the freeing of teacher and pupil from educational drudgery and incompetence. Oh, not to replace you, teacher. To free you from drudgery, of course. Just like the Industrial Revolution freed workers from the drudgery of handicraft. Just like Uber drivers have been freed from the drudgery of full-time employment by becoming part of the “gig economy” and just like Uber will free them from the drudgery of precarious employment when it replaces them with autonomous vehicles. Teaching machines – the driverless school – will replace just some education labor at first, the bits of it the engineers and their investors have deemed repetitive, menial, unimportant, and let’s be honest, those bits that are too liberal. But no one, it seems, is interested in stopping students from having to do menial tasks. The “driverless university” will still mandate that students sit in front of machines and click on buttons and answer multiple-choice questions. “Personalized” education will be stripped of all that is personal. It’s a dismal future, this driverless one, and not because “the machines have taken over,” but because the libertarians who build the machines have. A driverless future offers us only more surveillance, more algorithms, less transparency, fewer roads, and less intellectual freedom. Skinner would love it. Trump would love it. But we, we should resist it. Thu, 30 Mar 2017 18:01:00 +0000

Rationalizing Those 'Irrational' Fears Of inBloom
Hack Education

This article first appeared on Points, a Data & Society publication in February 2017 That inBloom might exist as a cautionary tale in the annals of ed-tech is rather remarkable, if for no other reason than ed-tech – at least its manifestation as a current blend of venture capital exuberance, Silicon Valley hype, philanthropic dollars, and ed-reform policy-making – tends to avoid annals. That is to say, ed-tech today has very little sense of its own history. Everything is “new” and “innovative” and “disruptive.” It’s always forward-facing, with barely a glance over its shoulder at the past – at the history of education or the history of technology. No one had ever thought about using computers in the classroom – or so you might glean if you only read the latest marketing about apps and analytics – until this current batch of philanthropists and entrepreneurs and investors and politicians suddenly stumbled upon the idea circa 2010. Perhaps that very deliberate dismissal of history helped doom inBloom from the start. Those who worked on the initiative seemed to ignore the legacy of the expensive and largely underutilized ARIS (Achievement Reporting and Innovation System) system that had been built for New York City schools, for example, hiring many of ARIS’s staff and soliciting the company in charge of building it, Wireless Generation, to engineer the inBloom product. While those making sweeping promises about data collection and data analytics wanted to suggest that, thanks to digital technologies, inBloom offered a unique opportunity to glean insights from data from the classroom, many parents and educators likely had a different sense – a deeper history – of what data had already done or undone, of what data could do or undo. They certainly had a different sense of risk. The compulsion to gather more and more data is hardly new, although certainly new technologies facilitate it, generating more and more data in turn. In 1962, Raymond Callahan published Education and the Cult of Efficiency, tracing to the early twentieth century the eagerness of school leaders to adopt the language and the practices of business management in the hopes that schools might be run more efficiently and more “scientifically.” There’s something quite compelling about those hopes, it seems, as they underlie much of the push for education reform and education technology in schools still today. Indeed, this belief in efficiency and science helped to justify inBloom, as Data & Society’s new report on the history of the $100 million data infrastructure initiative demonstrates. That belief is evident in the testimonies from various politicians, administrators, entrepreneurs, and technologists involved in the project. Data collection – facilitated by inBloom – was meant to be “the game-changer,” in the words of the CEO of the Data Quality Campaign, providing a way to “actually use individual student information to guide teaching and learning and to really leverage the power of this information to help teachers tailor learning to every single child in their class. That’s what made inBloom revolutionary.” “The promise was that [inBloom] was supposed to be adaptive differentiated instruction for individual students, based on test results and other data that the states had. InBloom was going to provide different resources based on those results,” according to the superintendent of a New York school district. But this promise of a data-driven educational “revolution” was – and still is – mostly that: a promise. 
The claims about “personalized learning” attainable through more data collection and data analysis remain primarily marketing hype. Indeed, “personalized learning” is itself a rather nebulous concept. As Data & Society observed in a 2016 report on the topic, Descriptions of personalized learning encompass such a broad range of possibilities – from customized interfaces to adaptive tutors, from student-centered classrooms to learning management systems – that expectations run high for their potential to revolutionize learning. Less clear from these descriptions are what personalized learning systems actually offer and whether they improve the learning experiences and outcomes for students. So while “personalized learning” might be a powerful slogan for the ed-tech industry and its funders, the sweeping claims about its benefits are largely unproven by educational research. But it sounds like science. With all the requisite high-tech gadgetry and data dashboards, it looks like science. It signifies science, and that signification is, in the end, the justification that inBloom largely relied upon. I’m someone who tried to get the startup to clarify “what inBloom will gather, how long it will store it, and what recourse parents have who want to opt out,” and I remember clearly that there was nevertheless much more hand-waving and hype than there ever was a clear explanation (“scientific” or otherwise) of “how” or “why” it would work. No surprise, then, that there was pushback, primarily from parents, educators, and a handful of high-profile NYC education activists who opposed inBloom’s data collection, storage, and sharing practices. But as the Data & Society report details, “instead of seeking to build trust at the district level with teachers and parents, many interview participants observed that inBloom and the Gates Foundation responded to what were very emotional concerns with complex technical descriptions or legal defenses.” This juxtaposition of parents as “emotional” and inBloom and the project’s supporters as “scientific” and “technical” runs throughout the report, which really serves to undermine and belittle the fears of inBloom opponents. (This was also evident in many media reports at the time of inBloom’s demise that tended to describe parents as “hysterical” or that patronized them by contending the issues were “understandably obscure to the average PTA mom.”) The opposition to inBloom is described in the Data & Society report as a “visceral, fervently negative response to student data collection,” for example, while the data collection itself is repeatedly framed in terms of its “great promise.” While the report does point to the failure of inBloom officials to build parents’ trust, many of the interviewees repeatedly dismiss the mistrust as irrational. “The activism about InBloom felt like anti-vaccination activism. Just fear,” said one participant. “I don’t know how else to put it,” said another. “It was not rational.” But inBloom opponents did have reason – many perfectly rational reasons – for concern. As the report chronicles, there were a number of concurrent events that prompted many people to be highly suspicious of plans for the data infrastructure initiative – its motivations and its security. 
These included inBloom’s connection to the proponents of the Common Core and other education reform policies; the growing concern about the Gates Foundation’s role in shaping these very policies; Edward Snowden’s revelations about NSA surveillance; several high-profile data breaches, including the theft of data belonging to some 70 million Target customers; the role of News Corp’s subsidiary Wireless Generation in building the inBloom infrastructure, coinciding with News Corp’s phone hacking scandal in the UK, as well as its decision to hire Joel Klein, the former NYC schools chancellor who’d commissioned the failed ARIS system, to head News Corp’s new education efforts. As the report notes, “The general atmosphere of data mistrust combined with earlier education reform movements that already characterized educational data as a means of harsh accountability.” In the face of this long list of concerns, the public’s “low tolerance for uncertainty and risk” surrounding student data is hardly irrational. Indeed, I’d argue it serves as a perfectly reasonable challenge to a technocratic ideology that increasingly argues that “the unreasonable effectiveness of data” will supplant theory and politics and will solve all manner of problems, including the challenge of “improving teaching” and “personalizing learning.” There really isn’t any “proof” that more data collection and analysis will do this – mostly just the insistence that this is “science” and therefore must be “the future.” History – the history of inBloom, the history of ed-tech more generally – might suggest otherwise. Thu, 16 Mar 2017 12:35:00 +0000

The History Of The Future Of E-Rate
Hack Education

While much of the speculation about the future of education technology under President Trump has been focused on the new Secretary of Education Betsy DeVos (her investment in various ed-tech companies, her support for vouchers and charter schools), it’s probably worth remembering that the Department of Education is hardly the only agency that shapes education and education technology policy. The FCC plays a particularly important role in regulating the telecommunications industry, and as such, it has provided oversight for the various technologies long touted as “revolutionizing” education – radio, television, the Internet. (The FCC was established in 1934; the Department of Education, in 1979; its Office of Educational Technology, in 1994.) Tom Wheeler, the head of the FCC under President Obama, stepped down from his role and left the agency on January 20 – the day of President Trump’s inauguration. Wheeler had been a “champion” of net neutrality and E-rate reform, according to Education Week at least, but his replacement, Trump appointee Ajit Pai, seems poised to lead the agency with a very different set of priorities – and those priorities will likely shape in turn what happens to ed-tech under Trump. As an op-ed in The Washington Post put it, “The FCC talks the talk on the digital divide – and then walks in the other direction.” Indeed, one of the first moves made by the FCC under Pai was to block nine companies from providing subsidized Internet service to low-income families. The agency also rescinded a report about the progress made in modernizing the E-rate program, something that had been the focus of Wheeler’s tenure – a report that had been released just two days before Wheeler left office – removing it from the FCC website altogether. (An archived copy is available via Doug Levin’s website.) Senator Bill Nelson (D-FL), the ranking member of the Senate Committee on Commerce, Science and Transportation, issued a strongly worded rebuke to that move, calling E-rate “without question the single most important educational technology program in the country.” Despite this praise, the program has long been controversial, frequently criticized for fraud and waste. Arguably, E-rate is one of the key pieces of ed-tech-related legislation in the US, and as such it’s worth examining its origins, its successes, and its failures. What can E-rate tell us about the relationship between politics and ed-tech? Who has benefited?

A History of E-rate Legislation

E-rate is the name commonly used to describe the Schools and Libraries Program of the Universal Service Fund, established as part of the Telecommunications Act of 1996. The act called for “universal service” so that all Americans could have access to affordable telecommunications services, regardless of their geographical location. The legislation also ordered telecom companies to provide their services to all public schools and libraries at discounted rates – from 20% to 90% off depending on the services provided and the number of students receiving free and reduced-price school lunches. The program, whose subsidies were initially capped at $2.25 billion, was to be funded through mandatory contributions from telecom providers – the Universal Service Fund (USF). (Telecom providers added fees to customers’ bills in order to pay for their contributions.) 
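To make the sliding-scale discount concrete, here is an illustrative sketch. To be clear: these bands are hypothetical, invented only to show a 20% to 90% range keyed to the free and reduced-price lunch share; the FCC’s actual discount matrix also factors in urban/rural status and the category of service purchased.

    # Hypothetical illustration of an E-rate-style sliding discount.
    # These bands are NOT the FCC's actual matrix.
    def erate_discount(free_reduced_lunch_share: float) -> float:
        """Map a district's poverty share (0.0-1.0) to a subsidy rate."""
        bands = [
            (0.75, 0.90),  # three-quarters or more of students qualify
            (0.50, 0.80),
            (0.35, 0.60),
            (0.20, 0.40),
        ]
        for threshold, discount in bands:
            if free_reduced_lunch_share >= threshold:
                return discount
        return 0.20        # the floor of the range described above

    # A district where 80% of students qualify would pay only 10% of cost:
    print(erate_discount(0.80))  # 0.9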
The FCC initially tasked the National Exchange Carrier Association, the non-profit organization charged with managing the USF, with handling the E-rate program, but eventually a new organization was created to do this: the Universal Service Administrative Company (USAC). From the outset, the program faced Congressional scrutiny, with questions about its scope, its management, and its funding. In particular, legislators were concerned that the charges levied on telecoms in order to pay for E-rate might be a tax (rather than a fee). (If the charges were a tax, it would be unconstitutional for the Executive branch and not Congress to exact them.) Some members of Congress also objected to the level of funding for E-rate. They argued that the program cost too much money and took needed funds away from other “universal service” efforts; some proposed that the program be replaced by block grants. In 2014, the FCC undertook a “modernization” plan for E-rate in part to address the changing demand for telecommunications services. The agency issued an order to support affordable access to high-speed broadband in particular (not merely “access to the Internet”) and to boost access and bandwidth of schools’ WiFi networks. As part of these modernization efforts, in 2015 the funding cap for E-rate was increased to $3.9 billion and the way in which funds were allocated was adjusted – all in an attempt to “spread the wealth” beyond just a few large districts that had historically benefited most from the program. According to its January 2017 report, the FCC’s modernization push enabled some 77% of school districts to meet the minimum federal connectivity targets by the end of 2016; just 30% had met those requirements in 2013. (That is, Internet speeds of 100 Mbps per 1000 users.) During the same period, the cost that schools paid for Internet connectivity fell from $22 to $7 per Mbps. “Progress,” the FCC boasted in the report. “No comment,” the FCC said in February when asked why the report on the modernization efforts had been pulled from its website. Commissioner Pai had voted against those efforts, for what it’s worth, back in 2014, saying that the FCC order did little to curb bureaucracy or waste.

A Brief History of E-rate Fraud

Throughout its history, the E-rate program has faced repeated scrutiny from Congress, from Republican members of the FCC like Pai, and from the General Accounting Office, which issued a report in 2005 that took issue with the “unusual” organizational structure of the USAC and questioned whether or not E-rate was sufficiently responsive to accountability standards that would “help protect the program and the fund from fraud, waste, and abuse.” And there have been plenty of accusations and lawsuits regarding “fraud, waste, and abuse.” Among them: an $8.71 million settlement paid by Inter-Tel in 2004 over accusations of rigging the bidding process. A $21 million settlement paid by NEC in 2006 for price-fixing. An $8.2 million settlement paid by AT&T in 2009 over accusations of non-competitive bidding practices and overcharging. A $16.25 million settlement paid by Hewlett Packard in 2010 over accusations of fraud. A $3 million settlement paid by the New York City DOE in 2016 over accusations of mishandling the bidding process. (Here is the full list of those who’ve been convicted of criminal or civil violations and have therefore been barred from participating in the E-rate program.) 
As some of these settlements highlight, while the E-rate program was supposed to ensure that schools received discounted telecommunications services, this hasn’t always happened. ProPublica reported on over-charging in the E-rate program in 2012, Lawsuits and other legal actions in Indiana, Wisconsin, Michigan and New York have turned up evidence that AT&T and Verizon charged local school districts much higher rates than it gave to similar customers or more than what the program allowed. AT&T has charged some schools up to 325 percent more than it charged others in the same region for essentially the same services. Verizon charged a New York school district more than twice as much as it charged government and other school customers in that state. Despite these issues, a court decision in 2014 blocked the USAC from prosecuting telecoms for making false statements about offering schools and libraries the “Lowest Corresponding Price,” arguing that this falls outside the False Claims Act, a statute that allows the government to pursue fraud claims against businesses. The burden of proof that schools and libraries are being offered a competitive price falls on the applicants themselves.

E-rate and the History of the Future of the “Digital Divide”

When the E-rate program was first established in 1996, only 14% of K–12 classrooms in the US had access to the Internet. Almost all schools are now connected to the Internet, although – as that FCC modernization report underscores – not all classrooms have access to high-speed broadband, and not all schools have WiFi networks that can support the heavy data demands on their bandwidth. According to EducationSuperhighway, a non-profit organization that lobbies for increased Internet access, 88% of public schools now have the minimum level of Internet access (that is, 100 kbps per student), although just 15% offer the FCC’s goal (that is, 1 Mbps per student). According to both EducationSuperhighway and the FCC, it is imperative to “level the playing field” so that schools and libraries, regardless of geographic location or the income level of students they serve, all have access to affordable high-speed Internet. Certainly in the 1990s, when E-rate was introduced, its goal was to address this very issue – “the digital divide.” Cost has certainly remained a barrier for the poorest schools, as has the infrastructure itself in some areas – a lack of high-speed broadband service altogether, for example. Some schools “cannot overcome the 19th century buildings to take advantage of 20th century technology,” Education Secretary Richard Riley told The New York Times in 2000. But there’s another aspect to “the digital divide” beyond simply who can afford “the digital,” and that’s something that Macomb Community College professor Chris Gilliard calls “digital redlining”: “the growing sense that digital justice isn’t only about who has access but also about what kind of access they have, how it’s regulated, and how good it is.” That issue with “what kind of access” is core to E-rate because of an associated law, the Children’s Internet Protection Act. The act, known as CIPA, was passed in 2000 – one of a series of pieces of legislation that attempted to curb if not criminalize “adult materials” online in places “where minors would be able to find it.” The Communications Decency Act, for example, was passed in 1996 – the same year as the Telecommunications Act – but was found unconstitutional by the Supreme Court the following year. 
In 1998, Congress again sought to address children’s exposure to “harmful materials” with passage of the Child Online Protection Act, but this too was challenged in court. The Supreme Court also found the Child Pornography Prevention Act of 1996 unconstitutional in 2002. Recognizing these legal challenges, Congress took a slightly different tack with CIPA. Rather than regulating content on the Web writ large, it opted to restrict what schools and libraries that receive federal funding – through the Library Services and Technology Act, Title III of the Elementary and Secondary Act, the Museum and Library Services Act, or E-rate – could allow people to view online. CIPA requires schools and libraries to create “acceptable use” policies for Internet usage, to hold a public meeting about how it will ensure safety online, and to use a “technology protection measure” to keep Internet users away from materials online that are “obscene,” “child pornography,” or “harmful to minors.” That is, CIPA requires Web filtering. The law has faced its own share of legal challenges, including one from the American Library Association. The Supreme Court ruled in 2003 that CIPA does not violate the Constitution. One of the myriad complaints about CIPA is that it results in “over-filtering” – that schools and libraries block content that is not “obscene” or “harmful to minors.” There are many stories about how information about things like breast cancer or LGBTQ issues or drug abuse is inaccessible at certain schools. (I have found that my website is blocked by many because it contains that dangerous word “hack.”) Now that schools are increasingly providing students with laptops or tablets, filtering software often happens at the device level, not simply at the school network level. That is, the Internet remains filtered, even when students are on their laptops at home. Clearly this is an equity issue – one that complicates how “the digital divide” has traditionally been framed and what E-rate was supposed to address. Those who rely on the Internet networks at E-rate-supported schools have their Internet access restricted and monitored in turn.

E-rate and the Future of Ed-Tech

The decision by the new FCC to rescind its report on E-rate raises plenty of questions about the future of the program under President Trump. Will the FCC reduce spending on universal service? Will the agency revise regulatory oversight for the E-rate program? What might this look like? How might this, alongside Ajit Pai’s opposition to “net neutrality,” reshape access to information at schools and libraries (particularly those that serve a low-income population and those in rural areas)? Wed, 08 Mar 2017 12:35:00 +0000