About two months back I was asked by the Globe and Mail to review Clay Shirky’s Cognitive Surplus and Nicholas Carr’s The Shallows, two books with sharply contrasting accounts of the digital age to date. It was a fun exercise, with lots to contemplate in both Shirky’s and Carr’s work. The downside was that I had to squeeze a comprehensive review into the measly 1,200 words the Globe editors afforded me. So while the official G&M review can still be found online, I fortunately have no such restrictions here and have posted the full, unedited version below:
When the spread of the printing press popularized the reading and writing of books, it sparked panic throughout Europe’s ruling classes. For the first time the clergy could no longer control religion, science or medicine. Nor could the monarchs dominate every aspect of political life or grip the reins of a burgeoning capitalist economy. There were concerns among intellectuals too. Robert Burton, the vicar of Oxford, complained of the “vast chaos and confusion of books” that make the eyes and fingers ache. Even Edgar Allan Poe called “the enormous multiplication of books in every branch of knowledge one of the greatest evils of this age; since it presents one of the most serious obstacles to the acquisition of correct information.” In hindsight, it’s clear that the invention of printing, perhaps more than any other contemporary technology, helped pave the way from a largely feudal agrarian existence to the modern industrial societies we know today.
Now evidence is mounting that we’re undergoing a similarly historic transition as the Internet rewires our social fabric and perhaps even our neural circuitry. As with the printing press, our growing immersion in digital life is sparking vigorous debates over the impact of digital technologies on the way we think and behave, as individuals and as a globally interconnected society.
Two recent and noteworthy books—one by Nicholas Carr and the other by Clay Shirky—provide contrasting accounts of the digital age to date and help illuminate the deep divisions that exist among the Web gurus who endeavour to explain how the Internet is changing the world around us. One book posits the existence of a vast cognitive surplus drawn from a growing global community of Internet users whose enormous pool of free time and creative capacity can now be tapped in pursuit of virtually any shared goal or endeavor. The other poses a cognitive deficit of sorts as the Internet robs the online population of their attention spans and their capacity for deep contemplation. Put simply, Shirky says the Internet is making us smarter. Carr claims it’s making us dumber.
Shirky, an adjunct professor at New York University and keen observer of all things digital, starts with the observation that increases in GDP, educational attainment and lifespan since the Second World War have forced the industrialized world to grapple with a novel problem: a massive abundance of free time. Thanks to the advent of the 40-hour workweek, the educated population on the planet has something like a trillion hours a year of free time to spend doing things they care about.
Sounds great, except there’s a problem. Most of our surplus time is absorbed by television. Americans spend some two hundred billion hours watching TV every year. Someone born in 1960 has watched something like 50,000 hours of TV already, and may watch another 30,000 hours before he or she dies. This is not a peculiarly North American problem. In every country with rising GDP, TV viewing per capita has grown every year since 1950.
Intellectual elites typically bemoan the mass public’s affection for television sagas like Survivor and American Idol in the same way that Karl Marx abhorred religion. But Shirky is a glass-half-full kind of guy, arguing that our television addiction is actually an enormous cognitive surplus in waiting: a vast reservoir of time and talent that could be redirected to serving the common good if only we could be convinced to swap our sitcoms for time spent contributing to projects like Wikipedia.
Shirky is not so naïve as to suggest that we will all soon awaken from our sitcom-induced slumber and start cranking out Wikipedia entries en masse. According to Shirky’s estimates, only a small proportion of the public would need to become more civically engaged to make a big difference. A back-of-the-envelope calculation suggests Wikipedia was built with roughly 1 percent of the man-hours that Americans spend watching TV every year (the rough equivalent of 100 million hours of thought). If even a fraction of our surplus time could be directed to the creation of other digital public goods, the connected population could be producing hundreds of Wikipedia-like projects every year. Today we may have The World’s Funniest Home Videos running 24/7 on YouTube, but the potentially world-changing uses of cognitive surplus are slowly emerging, he says.
It’s an intriguing notion. And Shirky provides lots of anecdotal evidence to show how collaborative communities are already reshaping diverse fields ranging from health care to higher education. But unfortunately the anecdotes fail to add up to a complete account of the deep changes unleashed by the Internet or to provide a cohesive view of the future. Shirky takes too long to explain why new forms of online collaboration are now possible and directs too little effort toward explaining how this new force could be harnessed to help humanity resolve some of its most critical issues—a possibility which, though often alluded to, is largely left unexplored.
In The Shallows: What the Internet is Doing to Our Brains, Nicholas Carr begins with essentially the same raw facts as Shirky, but reaches a very different set of conclusions. The Internet is an ecosystem of interruption technologies, says Carr, and these ever-present sources of online distraction are changing the way our brains process information and hence the way we think and communicate. “When we’re constantly distracted and interrupted, as we tend to be online, our brains are unable to forge the strong and expansive neural connections that give depth and distinctiveness to our thinking,” Carr argues. “We become mere signal-processing units, quickly shepherding disjointed bits of information into and then out of short-term memory.” We are, in a word, becoming “shallower.”
Given that we are living through the largest expansion in expressive capability in human history, it would indeed be a paradox if it turned out that the Internet was systematically destroying our ability to think. So to shore up the argument, Carr grounds his book in the details of modern neuroscience, citing, for example, the work of Patricia Greenfield, a UCLA developmental psychologist who concludes that every medium develops some cognitive skills at the expense of others. Our growing use of screen-based media, Greenfield argues, has strengthened visual-spatial intelligence, which can improve the ability to do jobs that involve keeping track of lots of simultaneous signals, like air traffic control. But that has been accompanied by new weaknesses in higher-order cognitive processes, including abstract vocabulary, mindfulness, reflection, inductive problem solving, critical thinking, and imagination.
What Carr neglects to mention is that the scientific evidence is much less decisive than he lets on. A comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. Neuroscientists at UCLA found that performing Google searches led to increased brain activity in the dorsolateral prefrontal cortex, the area of the brain responsible for talents like selective attention and deliberate analysis—the very traits Carr says have vanished in the age of the Internet.
So on the question of whether Internet usage will lead to the loss of important mental function, the scientific jury is still very much out. Even if a majority of cognitive scientists were to render a guilty verdict, Carr’s case for digitally-driven stupidity assumes we will fail to integrate digital freedoms into society as well as we integrated literacy in the wake of Gutenberg’s printing press. Shirky points out that literate societies only become literate by investing extraordinary resources, every year, in training children to read. Now he argues that we need a new kind of literacy for the digital age, one that leverages the best that digital tools have to offer while compensating for their shortcomings.
If there is one thing that both Carr and Shirky agree on, it is that there are likely to be troubling dislocations and tough adjustments for institutions and industries whose modus operandi largely revolves around scarcities of knowledge and capability that no longer exist. Wherever the Web enables people to connect and collaborate around tasks that used to be done exclusively by professionals, there is now an historic opportunity for people with passion, drive and talent to participate fully in forging alternative institutions that do the same things, only better.
Of course, should it turn out that Carr is right, and we’re all just getting dumber, there is also the distinct possibility that we’ll all be too distracted by the latest utterances on Twitter to take full advantage of the Internet’s revolutionary potential.