Why Robots Will Always Need Us

“Human beings are ashamed to have been born instead of made,” wrote the philosopher Günther Anders in 1956. Our shame has only deepened as our machines have grown more adept.

Every day we’re reminded of the superiority of our computers. Self-driving cars don’t fall victim to distractions or road rage. Robotic trains don’t speed out of control. Algorithms don’t suffer the cognitive biases that cloud the judgments of doctors, accountants and lawyers. Computers work with a speed and precision that make us look like bumbling slackers.

It seems obvious: The best way to get rid of human error is to get rid of humans.

But that assumption, however fashionable, is itself erroneous. Our desire to liberate ourselves from ourselves is founded on a fallacy. We exaggerate the abilities of computers even as we give our own talents short shrift.

It’s easy to see why. We hear about every disaster involving human fallibility — the chemical plant that exploded because the technician failed to open a valve, the plane that fell from the sky because the pilot mishandled the yoke — but what we don’t hear about are all the times that people use their expertise to avoid accidents or defuse risks.

Pilots, physicians and other professionals routinely navigate unexpected dangers with great aplomb but little credit. Even in our daily routines, we perform feats of perception and skill that lie beyond the capacity of the sharpest computers. Google is quick to tell us how few accidents its autonomous cars are involved in, but it doesn’t trumpet the times the cars’ backup drivers have had to take the wheel. Computers are wonderful at following instructions, but they’re terrible at improvisation. Their talents end at the limits of their programming.

Human skill has no such constraints. Think of how Capt. Chesley B. Sullenberger III landed that Airbus A320 in the Hudson River after it hit a flock of geese and its engines lost power. Born of deep experience in the real world, such intuition lies beyond calculation. If computers had the ability to be amazed, they’d be amazed by us.

While our flaws loom large in our thoughts, we view computers as infallible. Their scripted consistency presents an ideal of perfection far removed from our own clumsiness. What we forget is that our machines are built by our own hands. When we transfer work to a machine, we don’t eliminate human agency and its potential for error. We transfer that agency into the machine’s workings, where it lies concealed until something goes awry.

Computers break down. They have bugs. They get hacked. And when let loose in the world, they face situations that their programmers didn’t prepare them for. They work perfectly until they don’t.

Many disasters blamed on human error actually involve chains of events that are initiated or aggravated by technological failures. Consider the 2009 crash of Air France Flight 447 as it flew from Rio de Janeiro to Paris. The plane’s airspeed sensors iced over. Without velocity data, the autopilot couldn’t perform its calculations. It shut down, abruptly shifting control to the pilots. Investigators later found that the pilots, caught by surprise in a stressful situation, made a series of mistakes. The plane, with 228 people aboard, plunged into the Atlantic.
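
To make that handoff problem concrete, here is a deliberately simplified sketch, in Python, of the pattern described above: automation that validates its sensor input and, the moment the data goes bad, disengages and hands the human a degraded situation with no warm-up. Every name and value in it is hypothetical; it is not drawn from any real avionics software.

```python
from typing import Optional
from dataclasses import dataclass

@dataclass
class SensorReading:
    # None models an iced-over pitot tube: the sensor is still there,
    # but its airspeed value can no longer be trusted.
    airspeed_knots: Optional[float]

def control_cycle(reading: SensorReading) -> str:
    """Run one cycle of a toy autopilot; return who is flying afterward."""
    if reading.airspeed_knots is None:
        # Without velocity data the control laws can't be computed,
        # so the automation disengages -- and the pilots inherit an
        # unfamiliar, deteriorating situation all at once.
        return "MANUAL: pilots flying"
    # Normal case: the automation quietly keeps trimming pitch and thrust.
    return "AUTO: autopilot flying"

print(control_cycle(SensorReading(airspeed_knots=470.0)))  # AUTO
print(control_cycle(SensorReading(airspeed_knots=None)))   # MANUAL
```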

The crash was a tragic example of what scholars call the automation paradox. Software designed to eliminate human error sometimes makes human error more likely. When a computer takes over a job, the workers are left with little to do. Their attention drifts. Their skills, lacking exercise, atrophy. Then, when the computer fails, the humans flounder.

In 2013, the Federal Aviation Administration noted that overreliance on automation has become a major factor in air disasters and urged airlines to give pilots more opportunities to fly manually. The best way to make flying even safer than it already is, the research suggests, may be to transfer some responsibility away from computers and back to people. Where humans and machines work in concert, more automation is not always better.

We’re in this together, our computers and ourselves. Even if engineers create automated systems that can handle every possible contingency — far from a sure bet — it will be years before the systems are fully in place. In aviation, it would take decades to replace or retrofit the thousands of planes in operation, all of which were designed to have pilots in their cockpits. The same goes for roads and rails. Infrastructure doesn’t change overnight.

We should view computers as our partners, with complementary abilities, not as our replacements. What we’ll lose if we rush to curtail our involvement in difficult work are the versatility and wisdom that set us apart from machines.

The world is safer than ever, thanks to human ingenuity, technical advances and thoughtful regulations. Computers can help sustain that progress. Recent train crashes, including the Amtrak derailment this month, might have been prevented had automated speed-control systems been in operation. Algorithms that sense when drivers are tired and sound alarms can prevent wrecks.
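
As a hedged illustration of that last point: one widely used drowsiness measure is PERCLOS, the fraction of time over a window in which the driver’s eyes are mostly closed. The toy monitor below sounds an alarm when that fraction crosses a threshold. The window size, threshold and class names are invented for this sketch; production systems fuse camera, steering and lane-position data in far more sophisticated ways.

```python
from collections import deque

class DrowsinessMonitor:
    """Toy PERCLOS-style alarm; all parameters here are hypothetical."""

    def __init__(self, window_size: int = 300, threshold: float = 0.15):
        # Roughly 10 seconds of camera frames at 30 fps.
        self.samples = deque(maxlen=window_size)
        self.threshold = threshold

    def update(self, eyes_closed: bool) -> bool:
        """Record one camera frame; return True if the alarm should sound."""
        self.samples.append(1.0 if eyes_closed else 0.0)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet
        perclos = sum(self.samples) / len(self.samples)
        return perclos > self.threshold

monitor = DrowsinessMonitor()
# Simulate 250 alert frames followed by 50 eyes-closed frames.
for eyes_closed in [False] * 250 + [True] * 50:
    if monitor.update(eyes_closed):
        print("Alert: driver may be drowsy")
        break
```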

The danger in dreaming of a perfectly automated society is that it makes such modest improvements seem less pressing — and less worthy of investment. Why bother taking small steps forward, if utopia lies just around the bend?

Nicholas Carr is the author, most recently, of “The Glass Cage: Automation and Us.”
