What’s in it for them? This is the question. Last week, Google’s DeepMind announced a scientific breakthrough. As you’ll know, it had used artificial intelligence to . . . do something. Perhaps you understand more. I do, a bit, but if you want a truly reliable guide to what “protein folding” is, then for God’s sake ask somebody else. This is not to be a column about what they did. This is to be a column about why they did it.
“They” in this context is Google, which bought DeepMind in 2014, reportedly for about half a billion dollars. According to the open resource that is Companies House (because, of course, DeepMind is British) the AI firm went on to lose tens of millions every year, right up to a really quite exciting £470 million in 2018, which is the most recent year you can see. Eek.
The reason for these losses is pretty straightforward. It is that this sort of research — the bleeding edge of our tech future! The quest to create almost-life from silicon and maths alone! The stuff of which Alex Garland films are made! — costs a staggering amount of money and makes back almost none. Until now, the firm has excelled at building programs that can play chess and Go, which I gather is very exciting, mathswise, but is also probably quite hard to pitch on Dragons’ Den. And yet on Google has gone, firehosing in the cash. Why?
One option is that they’re just lovely. No, don’t laugh, I’m serious. This weekend The Sunday Times ran an interview with Demis Hassabis, the former chess prodigy who founded DeepMind, and if that guy is driven by anything other than an altruistic desire to do mad stuff with maths that saves humanity and makes people go “woah!” (roughly; I’m an arts graduate) then I must say it didn’t come across. Never forget quite how bogglingly huge these bigger tech companies’ resources are. What’s a few hundred million in losses when your profits — as is the case with Google — soar into the billions? Money becomes a simply irrelevant way of quantifying the worth of anything. A limitless resource, like the air.
Quite often I’ve found myself bickering with the terrestrial representatives of tech firms, accusing them of turning a blind eye to some online harm or free speech outrage for the sake of their bottom line. Invariably, they respond with incredulity, and I’m not always sure they’re faking it. Do I not realise, they ask, how incredibly far away the bottom line is? They can barely see it. To paraphrase Joey from Friends, “the line is a dot to them”. And so, perhaps, with all this cash sloshing around, they genuinely just want to funnel some of it towards utopian sci-fi projects that get them excited. Such as electric cars (Elon Musk) or space travel (also Elon Musk) or tiny submarines to rescue lost potholers (actually, Elon Musk again). Or, in the case of Google, AI.
So that’s one option. The other is a bit less cuddly. In The Age of Surveillance Capitalism, the Harvard professor Shoshana Zuboff slams home the point that most giant tech firms, with the possible exception of Apple, are first and foremost in the business of amassing data. So when Google seeks to put every book in the world online (via Google Books) or every piece of video in the world (via YouTube) or every bird’s-eye view, street corner or front door (via Google Maps and Earth) they are not really doing it for the general advancement of mankind. Likewise when Amazon sells you a smart speaker for £15 that controls your lights and plays your music and sits in your kitchen listening while you scream at your kids. It’s not just being nice. It is all about seeing what people do, and when, and how, before finding a way to monetise that information by reversing the flow.
It is not rocket science to see where AI fits into this (except perhaps literally). So, without for a moment disparaging the brains, integrity or even altruism of the boffins at the coalface, on a corporate level — no offence intended — their work is basically just another grinding cog in the relentless gobbling up of absolutely everything.
The most important question, I suppose, is how much that matters. Who else will do this work? There are not many nation states able to spend half a billion pounds every year on teaching toasters to play Jenga, or however it was that DeepMind began. Really there’s just the US and China, and if you’re more scared of Google than you are of China then the more AI somebody can create to help you along the better. On my Times Radio show this weekend past, I spoke to a professor of bioinformatics (computer science meets biology, basically) about the impact of vastly rich tech firms helping out in her field. “This is wonderful, right?” I said. “Yes,” she said, simply.
We do need to understand, though, that these things are taking us into places that are hard to, well, understand. Also this weekend, recalling a year of reporting on Covid, this paper’s science editor Tom Whipple voiced his frustration about the dismally low level on which most politicians had managed to engage. “Suddenly, though, science is political,” he wrote, “and the political people don’t understand it.” The same is true with big tech, and it has been for ages. I don’t want to sound melodramatic, but it’s hard not to. Bluntly, how much power over humanity’s future are we comfortable for these huge, shuttered, powerful firms to have? How much is too much? Is anybody watching? Does anyone care?
Hugo Rifkind is a columnist and leader writer for The Times. Formerly a columnist for The Herald, he joined The Times in 2005 as a diarist and features writer. He now writes a weekly opinion column and My Week, a diary parody. He also writes regular columns for The Spectator and GQ and is a frequent panellist on BBC Radio 4’s The News Quiz. His novel, Overexposure, was published in 2006. Hugo was named columnist of the year at the Editorial Intelligence Comment Awards 2011.