“When your heart stops beating, you’ll keep tweeting” is the reassuring slogan greeting visitors at the Web site for LivesOn, a soon-to-launch service that promises to tweet on your behalf even after you die. By analyzing your earlier tweets, the service would learn “about your likes, tastes, syntax” and add a personal touch to all those automatically composed scribblings from the world beyond.
LivesOn may yet prove to be a parody, or it may fizzle for any number of reasons, but as an idea it highlights the dominant ideology of Silicon Valley today: what could be disrupted should be disrupted — even death.
Barriers and constraints — anything that imposes artificial limits on the human condition — are being destroyed with particular gusto. Superhuman, another mysterious start-up that could enliven any comedy show, promises to offer, as its co-founder recently put it, an unspecified service that “helps people be superhuman.” Well, at least they had the decency not to call it The Übermensch.
Recent debates about Twitter revolutions or the Internet’s impact on cognition have mostly glossed over the fact that Silicon Valley’s technophilic gurus and futurists have embarked on a quest to develop the ultimate patch to the nasty bugs of humanity. If they have their way, no individual foibles would go unpunished — ideally, technology would even make such foibles obsolete.
Even boredom seems to be in its last throes: designers in Japan have found a way to make our train trips perpetually fun-filled. With the help of an iPhone, a projector, a GPS module and Microsoft’s Kinect motion sensor, their contrivance allows riders to add new objects to what they see “outside,” thus enlivening the bleak landscape in their train windows. This could be a big hit in North Korea — and not just on trains.
Or, if you tend to forget things, Silicon Valley wants to give you an app to remember everything. If you occasionally prevaricate in order to meet your clashing obligations as a parent, friend or colleague, another app might spot inconsistencies in your behavior and inform your interlocutors if you are telling the truth. If you experience discomfort because you encounter people and things that you do not like, another app or gadget might spare you the pain by rendering them invisible.
Sunny, smooth, clean: with Silicon Valley at the helm, our life will become one long California highway.
Last month Randi Zuckerberg, Facebook’s former marketing director, enthused about a trendy app to “crowdsource absolutely every decision in your life.” Called Seesaw, the app lets you run instant polls of your friends and ask for advice on anything: what wedding dress to buy, what latte drink to order and soon, perhaps, what political candidate to support.
Seesaw offers an interesting twist on how we think about feedback and failure. It used to be that we bought things to impress our friends, fully aware that they might not like our purchases. Now this logic is inverted: if something impresses our friends, we buy it. The risks of rejection have been minimized; we know well in advance how many Facebook “likes” our every decision will accumulate.
Jean-Paul Sartre, the existentialist philosopher who celebrated the anguish of decision as a hallmark of responsibility, has no place in Silicon Valley. Whatever their contribution to our maturity as human beings, decisions also bring pain and, faced with a choice between maturity and pain-minimization, Silicon Valley has chosen the latter — perhaps as a result of yet another instant poll.
The only exception to the pain-minimization rule is when pain — or at least discomfort — must be induced to ensure that we behave honestly and consistently.
Take Google Glass, the company’s overhyped “smart glasses,” which can automatically snap photos of everything we see and store them for posterity. To some, this can finally solve the problem of forgetting, a longtime ambition of many geeks, who have also been developing stamp-size cameras that can be worn on the lapel of a jacket and snap a picture — at set intervals — of things around us.
This idea of obliterating forgetting was laid out by the visionary Microsoft computer scientist Gordon Bell in his highly provocative 2009 book, written with Jim Gemmell, “Total Recall: How the E-Memory Revolution Will Change Everything.”
Mr. Bell promised that new recording technologies would provide us with “enhanced self-insight, the ability to relive one’s own life story in Proustian detail, the freedom to memorize less and think creatively more.” (Alas, “Proustian” is an inapt adjective: the writer actually opposed what he called a “simple cinematographic vision,” which he feared treated memory as nothing but the accumulation of facts, rather than a complex interplay of sensory experiences and storytelling.)
For Mr. Bell, these always-on recording devices can make us aware of our own faults, of our inconsistencies, of the many lies we tell ourselves and others. “Successful people don’t shy away from the honest record,” he wrote. “Imagine being confronted with the actual amount of time you spend with your daughter rather than your rosy accounting of it. Or having your eyes opened to how truly abrasive you were in a conversation.” Doctor Freud, meet the iFreud!
This sounds nice in theory, but in the world that we actually inhabit, Mr. Bell’s quest for consistency borders on the tyrannical. In his brilliant essay “In Praise of Inconsistency,” published in Dissent in 1964, the Polish philosopher Leszek Kolakowski argued that, given that we are regularly confronted with equally valid choices where painful ethical reflection is in order, being inconsistent is the only way to avoid becoming a doctrinaire ideologue who sticks to an algorithm. For Kolakowski, absolute consistency is identical to fanaticism.
“The breed of the hesitant and the weak … of those … who believe in telling the truth but rather than tell a distinguished painter that his paintings are daubs will praise him politely,” he wrote, “this breed of the inconsistent is still one of the main hopes for the continued survival of the human race.” If the goal of being confronted with one’s own inconsistency is to make us more consistent, then there is little to celebrate here.
But smart glasses could do so much more! Why not edit out disturbing sights that haunt us on the way to work? Last year the futurist Ayesha Khanna even described smart contact lenses that could make homeless people disappear from view, “enhancing our basic sense” and, undoubtedly, making our lives so much more enjoyable. In a way, this does solve the problem of homelessness — unless, of course, you happen to be a homeless person. In that case, Silicon Valley would hand you a pair of overpriced glasses that would make the streets feel like home. To quote an ad for Samsung’s fancy TV sets, “Reality. What a letdown.”
All these efforts to ease the torments of existence might sound like paradise to Silicon Valley. But for the rest of us, they will be hell. They are driven by a pervasive and dangerous ideology that I call “solutionism”: an intellectual pathology that recognizes problems as problems based on just one criterion: whether they are “solvable” with a nice and clean technological solution at our disposal. Thus, forgetting and inconsistency become “problems” simply because we have the tools to get rid of them — and not because we’ve weighed all the philosophical pros and cons.
Solutionists do not limit themselves to fixing the problems of individuals; they are as keen to fix the problems of institutions. Civic-minded start-ups like Ruck.us, which helps people create and join political movements, seek to bypass the conventional party system and allow individuals to practice politics without any mediation by institutions, on the assumption that the only reason we needed representative democracy in the past was because the communication costs were too high. Now that digital technologies have lowered the costs of participation, political parties can go the way of the dodo and be replaced, ad-hoc style, by online groups of concerned citizens.
It’s hard to defend America’s current political system, but it’s even harder to rally behind the solutionist project for one simple reason: the proposed Internet-powered “solution” is not sold to us based on its inherent merits — of those we hear very little — but, rather, on the demerits of the existing system, be it partisanship or sleaze. Yes, the current system teems with imperfections, but imperfection might be the price to pay for a half-functioning democracy. There is, after all, little partisanship in North Korea. Learning to appreciate the many imperfections of our institutions and of our own selves, at a time when the means to fix them are so numerous and glitzy, is one of the toughest tasks facing us today.
Solutionists err by assuming, rather than investigating, the problems they set out to tackle. Given Silicon Valley’s digital hammers, all problems start looking like nails, and all solutions like apps.
Such a predisposition makes it harder to notice that not all problems are problems, and that those problems that do prove genuine might require protracted institutional responses, not just quick technological fixes produced at “hackathons” or viral videos to belatedly shame Ugandan warlords into submission.
Silicon Valley, oddly, likes to wear its “solutionism” on its sleeve. Its most successful companies fashion themselves as digital equivalents of Greenpeace and Human Rights Watch, not Wal-Mart or Exxon Mobil. “In the future,” says Eric Schmidt, Google’s executive chairman, “people will spend less time trying to get technology to work … If we get this right, I believe we can fix all the world’s problems.”
Facebook’s Mark Zuckerberg concurs: “There are a lot of really big issues for the world that need to be solved and, as a company, what we are trying to do is to build an infrastructure on top of which to solve some of these problems.” As he noted in Facebook’s original letter to potential investors, “We don’t wake up in the morning with the primary goal of making money.”
Such digital humanitarianism aims to generate good will on the outside and boost morale on the inside. After all, destroying everyone’s privacy might be a price worth paying for saving the world, while a larger-than-life mission might convince young and idealistic employees that they are not wasting their lives tricking gullible consumers into clicking on ads for pointless products. Silicon Valley and Wall Street are competing for the same talent pool, and by claiming to solve the world’s problems, technology companies can offer what Wall Street cannot: a sense of social mission.
The ideology of solutionism is thus essential to helping Silicon Valley maintain its image. The technology press — along with the meme-hustlers at the TED conference — is only too happy to play up any solutionist undertakings. “Africa? There’s an app for that,” reads a real (!) headline on the Web site of the British edition of Wired. Could someone lend that app to the World Bank, please?
Shockingly, saving the world usually involves using Silicon Valley’s own services. As Mr. Zuckerberg put it in 2009, “the world will be better if you share more.” Why doubt his sincerity on this one?
Whenever technology companies complain that our broken world must be fixed, our initial impulse should be to ask: how do we know our world is broken in exactly the same way that Silicon Valley claims it is? What if the engineers are wrong and frustration, inconsistency, forgetting, perhaps even partisanship, are the very features that allow us to morph into the complex social actors that we are?
“I wish it would dawn upon engineers that, in order to be an engineer, it is not enough to be an engineer,” wrote the Spanish philosopher José Ortega y Gasset in 1939. Given the cultural and political relevance of Silicon Valley — from education to publishing and from music to transportation — this advice is particularly worth heeding. Just ask your friends on Seesaw.
Evgeny Morozov is the author of “To Save Everything, Click Here: The Folly of Technological Solutionism.”