Arthur C. Clarke's gift to science

Doodling on his notepad in 1981, the science fiction writer William Gibson was trying to think of a name for an invisible electronic communications network he had dreamt up for a new short story, Burning Chrome. “Dataspace”? No, he crossed that out. “Infospace”? No, too nerdy. Then he hit on the perfect word, scientific-sounding but also alliterative and oddly poetic: cyberspace. Later, he reflected: “It seemed like an effective buzzword. It seemed evocative and essentially meaningless.”

It would be another decade before the internet would transform the world, but a novelist's imagination had already given shape and meaning to something that science had yet to invent. The naming of cyberspace is just one example of the way fiction has informed scientific fact, which in turn enriches the fiction of science in a strange, endlessly self-replicating process that is unique to the genre. Science fiction writing is too often dismissed as childish, badly written and unrealistic (it is often all three); science writing tends to be drab and dry. But science fiction, at its best, is proof of the art and poetry that lie at the heart of great science, and the way science can underpin the finest literature.

The late Arthur C. Clarke embodied the symbiosis between scientific expertise and the novelist's imagination. Like all great science fiction writers, he wrote of futures and technologies on the outer edge of possibility that almost magically lured the truth towards them.

His science was scrupulous and rigorous. His was not science fantasy, nor the creation of invented worlds to cast a light on this one. He imagined humans in a not-so-distant future in which science - genuine science - has changed the world. His imagineering was usually optimistic, and astonishingly accurate.

Clarke's capacity for prophecy was extraordinary precisely because his imagination was so wide, and his scientific expertise so deep. In 1945, more than a decade before the first orbital rocket flight, he predicted communications satellites in fixed orbits high above the Earth. (He was dissuaded from patenting this idea by a lawyer, who insisted it was too outlandish to be taken seriously: Clarke later wrote a book on the subject, with the subtitle: How I Lost a Billion Dollars in My Spare Time.) He explained how man would land on the Moon, and when. 2001: A Space Odyssey imagined a Moon base. Nasa now envisages a permanent Moon colony as a staging post on the journey to Mars.

Clarke knew that the greatest technological achievements lay not simply in the appliance of the laws of physics, but in the more subtle and unpredictable ways of the imagination. The dream precedes the reality: “I'm sure we would not have had men on the Moon if it had not been for H.G. Wells and Jules Verne,” he once said.

The greatest science fiction writers sometimes got it spectacularly wrong (one that sticks in the mind is the inspired though sadly never-attempted idea of keeping hundreds of cats in an insulator with a device for stroking them to create static electricity). Or very nearly right but not quite. H.G. Wells correctly predicted, as early as 1907, that there would be a fierce aerial conflict with Germany (The War in the Air), but he got the technology wrong: the fighting flying machines in his novel flap their wings.

Fiction writers have also imagined and inspired future reality, often unintentionally. Douglas Adams thought up an electronic, handheld book with all of galactic knowledge on it. He named this impossible invention the Hitchhiker's Guide to the Galaxy; we call it a BlackBerry.

Orwell's “versificator” in 1984, which generates pap music to keep the proletariat docile, prefigured the computer software used to churn out pop music today. Verne predicted the submarine and the rocket ship, deploying the most reliable science he could muster. Igor Sikorsky was inspired to build the first practical helicopter in 1939 by Verne's Robur the Conqueror (1886), which features a device with propellers that has “made conquest of the air”. The pioneer of parallel supercomputing, Daniel Hillis, decided to study artificial intelligence after reading the works of Isaac Asimov and Robert Heinlein.

Sci-fi has crept, almost unnoticed and usually unacknowledged, from popular culture into popular technology. Your mobile telephone with the flip-down mouthpiece owes a debt to Star Trek, as do automatic sliding doors in supermarkets.

But perhaps the most important legacy of Verne, Clarke and other science fiction pioneers is the simple idea that fiction can inspire fact; that making up vivid stories from science inspires more and better science. Many scientists openly acknowledge the inspiration of fiction. Astronauts read space novels to expand their own dreams, to influence and inspire real life, as all great literature must.

Science fiction is important less for its prophetic ability to offer blueprints for machines as yet unmade than for its capacity to instil wonder and adventure in the pursuit of earthly science. “Anything a man can imagine, another man can create,” wrote Verne.

Science and literature are too often seen as polar opposites. Good science fiction represents an extraordinary fusion of the two into a single narrative, a way of imagining the impossible based on the scientifically plausible. Technology is the point where human imagination and science intersect: this was the mysterious world explored by Clarke, and his most enduring invention.

A quarter of a century ago William Gibson conjured up a word that was, by his own account, meaningless, a portentous-sounding term for something that did not yet exist. That might stand as the best definition of great science fiction: the art of inventing words for science to aspire to.

Ben Macintyre