Cognitive surpluses and deficits

Category: Media & Technology | Popular
Published on Aug 23, 2010

About two months back I was asked by the Globe and Mail to review Clay Shirky’s Cognitive Surplus and Nicholas Carr’s The Shallows, two books with sharply contrasting accounts of the digital age to date. It was a fun exercise, with lots to contemplate in both Shirky’s and Carr’s work. The downside was that I had to squeeze a comprehensive review into the measly 1,200 words the Globe editors afforded me. So while the official G&M review can still be found online, I fortunately have no such restrictions here and have posted the full, unedited version below:

When the printing press popularized the reading and writing of books, it sent a wave of panic through Europe’s ruling classes. For the first time the clergy could no longer control religion, science or medicine. Nor could the monarchs dominate every aspect of political life or grip the reins of a burgeoning capitalist economy. There were concerns among intellectuals too. Robert Burton, an Oxford vicar, complained of the “vast chaos and confusion of books” that make the eyes and fingers ache. Even Edgar Allan Poe called “the enormous multiplication of books in every branch of knowledge” one of the greatest evils of his age, “since it presents one of the most serious obstacles to the acquisition of correct information.” In hindsight, it’s clear that the invention of printing, perhaps more than any other technology of its era, helped pave the way from a largely feudal, agrarian existence to the modern industrial societies we know today.

Now evidence is mounting that we’re undergoing a similarly historic transition as the Internet rewires our social fabric and perhaps even our neural circuitry. As with the printing press, our growing immersion in digital life is sparking vigorous debates over the impact of digital technologies on the way we think and behave, as individuals and as a globally interconnected society.

Two recent and noteworthy books—one by Nicholas Carr and the other by Clay Shirky—provide contrasting accounts of the digital age to date and help illuminate the deep divisions that exist among the Web gurus who endeavour to explain how the Internet is changing the world around us. One book posits the existence of a vast cognitive surplus drawn from a growing global community of Internet users whose enormous pool of free time and creative capacity can now be tapped in pursuit of virtually any shared goal or endeavor. The other poses a cognitive deficit of sorts as the Internet robs the online population of their attention spans and their capacity for deep contemplation. Put simply, Shirky says the Internet is making us smarter. Carr claims it’s making us dumber.

Shirky, an adjunct professor at New York University and keen observer of all things digital, starts with the observation that increases in GDP, educational attainment and lifespan since the Second World War have forced the industrialized world to grapple with a novel problem: a massive abundance of free time. Thanks to the advent of the 40-hour workweek, the educated population on the planet has something like a trillion hours a year of free time to spend doing things they care about.

Sounds great, except there’s a problem. Most of our surplus time is absorbed by television. Americans spend some two hundred billion hours a year watching TV. Someone born in 1960 has already watched something like 50,000 hours of TV, and may watch another 30,000 hours before he or she dies. Nor is this a peculiarly North American problem: in every country with rising GDP, per-capita TV viewing has grown every year since 1950.

Intellectual elites typically bemoan the mass public’s affection for television sagas like Survivor and American Idol in the same way that Karl Marx abhorred religion. But Shirky is a glass-half-full kind of guy, arguing that our television addiction is actually an enormous cognitive surplus in waiting: a vast reservoir of time and talent that could be redirected to serving the common good, if only we could be convinced to swap our sitcoms for time spent contributing to projects like Wikipedia.

Shirky is not so naïve as to suggest that we will all soon awaken from our sitcom-induced slumber and start cranking out Wikipedia entries en masse. By his estimates, only a small proportion of the public would need to become more civically engaged to make a big difference. A back-of-the-envelope calculation suggests that Wikipedia represents roughly 100 million hours of human thought, a tiny fraction of the hours Americans spend watching TV every year. If even a fraction of our surplus time could be directed to the creation of other digital public goods, the connected population could be producing hundreds of Wikipedia-like projects every year. Today we may have The World’s Funniest Home Videos running 24/7 on YouTube, but the potentially world-changing uses of cognitive surplus are slowly emerging, he says.

It’s an intriguing notion. And Shirky provides lots of anecdotal evidence to show how collaborative communities are already reshaping diverse fields ranging from health care to higher education. But unfortunately the anecdotes fail to add up to a complete account of the deep changes unleashed by the Internet or provide a cohesive view of the future. Shirky takes too long to explain why new forms of online collaboration are now possible and directs too little effort toward explaining how this new force could be harnessed to help humanity resolve some of its most critical issues—a possibility which, though often alluded to, is largely left unfulfilled.

In The Shallows: What the Internet Is Doing to Our Brains, Nicholas Carr begins with essentially the same raw facts as Shirky, but reaches a very different set of conclusions. The Internet is an ecosystem of interruption technologies, says Carr, and these ever-present sources of online distraction are changing the way our brains process information and hence the way we think and communicate. “When we’re constantly distracted and interrupted, as we tend to be online, our brains are unable to forge the strong and expansive neural connections that give depth and distinctiveness to our thinking,” Carr argues. “We become mere signal-processing units, quickly shepherding disjointed bits of information into and then out of short-term memory.” We are, in a word, becoming “shallower.”

Given that we are living through the largest expansion in expressive capability in human history, it would indeed be a paradox if it turned out that the Internet was systematically destroying our ability to think. So to shore up the argument, Carr grounds his book in the details of modern neuroscience, citing, for example, the work of Patricia Greenfield, a UCLA developmental psychologist who concludes that every medium develops some cognitive skills at the expense of others. Our growing use of screen-based media, Greenfield argues, has strengthened visual-spatial intelligence, which can improve the ability to do jobs that involve keeping track of lots of simultaneous signals, like air traffic control. But that gain has been accompanied by new weaknesses in higher-order cognitive processes, including abstract vocabulary, mindfulness, reflection, inductive problem solving, critical thinking, and imagination.

What Carr neglects to mention is that the scientific evidence is much less decisive than he lets on. A comprehensive 2009 review of published studies on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. Neuroscientists at UCLA found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, the area of the brain responsible for talents like selective attention and deliberate analysis—the very traits Carr says are vanishing in the age of the Internet.

So on the question of whether Internet usage will lead to the loss of important mental functions, the scientific jury is still very much out. And even if a majority of cognitive scientists were to render a guilty verdict, Carr’s case for digitally driven stupidity assumes we will fail to integrate digital freedoms into society as well as we integrated literacy in the wake of Gutenberg’s printing press. Shirky points out that literate societies become literate only by investing extraordinary resources, every year, in training children to read. Now, he argues, we need a new kind of literacy for the digital age, one that leverages the best that digital tools have to offer while compensating for their shortcomings.

If there is one thing that Carr and Shirky agree on, it is that there are likely to be troubling dislocations and tough adjustments for institutions and industries whose modus operandi revolves around scarcities of knowledge and capability that no longer exist. Wherever the Web enables people to connect and collaborate around tasks that used to be done exclusively by professionals, there is now an historic opportunity for people with passion, drive and talent to participate fully in forging alternative institutions that do the same things, only better.

Of course, should it turn out that Carr is right, and we’re all just getting dumber, there is also the distinct possibility that we’ll all be too distracted by the latest utterances on Twitter to take full advantage of the Internet’s revolutionary potential.


Comments

I have read (and admired) previous books by both Shirky (Here Comes Everybody) and Carr (The Big Switch). So I was not surprised by the divergence in their views represented by their latest writings, but I was struck by the increasingly wide gap. How to reconcile these views? Given my admiration for both authors, I was motivated to appreciate what I imagined to be the wisdom in each view. Yet I felt daunted by the task. Fortunately, Anthony Williams has saved me the trouble. I found this blog post enormously helpful. Thanks!

posted by Grady McGonagill on 09.06.10 at 6:55 pm

Happy to help, Grady. The divergence of views is fairly striking, although it’s worth noting that Carr is known for taking up critical (and provocative) positions on the role of information technology in society and business. His essay “IT Doesn’t Matter” (later expanded into the book Does IT Matter?) rejected the hypothesis that information technology could create competitive advantage for the companies that deployed it strategically, at a time when many business gurus were arguing the opposite. Now he is warning that the Internet is undermining our cognitive functions, at a time when it has been fashionable to argue that the Internet is creating new forms of collective intelligence. If nothing else, I admire his ability to take up contrarian perspectives and argue them persuasively.

posted by Anthony D. Williams on 09.07.10 at 12:46 pm

I agree, Anthony. I tend to like constructive contrarians, of which I see Carr as an example, though in this case I tend to side with Shirky.

To pursue a related point, I’m drawn to your comment about “new forms of collective intelligence.” Kevin Kelly got me thinking about this some time back with his notion of a “global mind” in his 1994 book Out of Control. I saw a video interview with Don Tapscott (by Jerry Michalski at a Fastforward conference a couple of years ago) in which he also endorsed the notion of a “global brain.” What’s your view of this? Has anyone written on this notion in a way that makes sense to you?

posted by Grady McGonagill on 09.16.10 at 5:52 am

It’s doubtless true that every medium develops some cognitive skills at the expense of others, but “develops” is surely the key concept. Oral cultures demand far more of memory than do literate cultures, and a whole arsenal of mnemonic devices is abandoned in acquiring the new skills needed to master literacy. Similarly, the widespread spontaneous erudition of a “bookish” culture declined with the informal cultural conversations that emerged with the advent of radio and television. None of this points to humans becoming “dumber”, but rather to privileged cultural norms being eroded by new technologies, and to the difficulty the old culture experiences in recognizing depth and intelligence in the expressive forms of an emerging new paradigm.

posted by alan towers on 01.30.11 at 10:05 pm

Great points, Alan. Thanks for commenting.

posted by Anthony D. Williams on 01.30.11 at 10:31 pm

If fault is to be found with Shirky and Carr, as well as with almost all other internet pundits on information overload, it is in their premises, not their conclusions. Almost all hold the implicit assumption that humans are sensitive to information as static facts. The most recent findings from affective neuroscience on human decision making, however, suggest that this position cannot be true.

Specifically, Shirky and Carr (and nearly all of their peers) hold positions that are not neurally realistic, and they would have to abandon many of their opinions (and specifically the reality of information overload) if they were informed by recent findings in affective neuroscience on how human minds actually process and choose information. Surprisingly, this argument can be made quite simply, and it is made (at the link below) using an allegory of the Boston Red Sox’s pennant runs over the years.

http://mezmer.blogspot.com/2012/02/searching-for-red-stockings-myth-of.html

(Alas, my argument at three pages is a bit long for a comments section, but perhaps not as a link.)

A. J. Marr

posted by A J Marr on 02.26.12 at 12:14 am
