“After a glorious 50 years, Moore’s law — which states that computer power doubles every two years at the same cost — is running out of steam”: so ran a recent article in The Economist. After five decades, the end of Moore’s law is in sight: in short, making smaller transistors no longer guarantees that they will also be cheaper or faster. The topic is so hot that just a few days ago The Economist published another article presenting a series of possible scenarios for the future of computing.
Expert opinions are numerous, but they all agree on a central point: the twilight of Moore’s law will not mean the end of progress, but rather a change in its nature; as Nature reports, “now things could get a lot more interesting”. Why?
Because it opens up many scenarios. Some imagine following what might be called the “More than Moore” strategy: instead of improving chips and letting applications follow, applications will be improved first — from smartphones and supercomputers to cloud data centres — and then experts will work downwards to see what chips are needed to support them. Among those chips will be new generations of sensors, power-management circuits and other silicon devices required by a world in which computing is increasingly mobile.
Some hope to redefine computers themselves. One idea is to use quantum mechanics to perform certain calculations much faster than any classical computer — a technique known as quantum computing. While traditional digital computers use bits — ones and zeros — to perform calculations, quantum computers use quantum bits, or qubits, which can be in multiple states at once. This means they can carry out more calculations at the same time and could offer new ways of attacking problems that traditional digital computers find intractable. This is why companies like IBM and Google are pouring millions of dollars into quantum computing in the hope of creating the next big thing in computing.
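To give a flavour of what “multiple states at once” means, here is a minimal, purely classical simulation of a single qubit in Python. A qubit’s state is a pair of complex amplitudes; applying a Hadamard gate to the |0⟩ state puts it into an equal superposition, so each measurement outcome becomes equally likely. The function names are illustrative for this sketch, not part of any real quantum-computing API:

```python
import math

# A qubit's state is a pair of complex amplitudes (a, b) for |0> and |1>,
# normalised so that |a|^2 + |b|^2 = 1.

def hadamard(state):
    """Apply the Hadamard gate, which maps |0> to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Born rule: measuring yields 0 or 1 with probability |amplitude|^2."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)            # the classical-like state |0>
superposed = hadamard(zero)  # equal superposition of |0> and |1>
print(probabilities(superposed))  # each outcome has probability ~0.5
```

Of course, simulating qubits classically takes exponentially more memory as qubits are added — which is precisely why real quantum hardware is interesting.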
Another futuristic idea is neuromorphic computing, which consists of emulating biological brains, since they can perform impressive tasks using very little energy. Research in this area dates back to the 1980s but has seen a strong resurgence over the last 10 years.
Yet another solution would be ubiquitous computing: diffusing computer power rather than concentrating it, spreading the ability to calculate and communicate across an ever greater range of everyday objects in the nascent Internet of Things. The goal of ubiquitous (or pervasive) computing, which combines current network technologies with wireless computing, voice recognition, Internet capability and artificial intelligence, is to create an environment where the connectivity of devices is embedded so seamlessly that it becomes unobtrusive and always available.
The ultimate frontier in computing, however, seems to be biology — or rather, bio-computing, which promises a kind of technology more advanced than anything ever created by man. The idea is to harness the power of the human brain by using actual brain cells to power the next generation of computers. Neuroscientist Osh Agabi, who has developed a prototype 64-neuron silicon chip, says: “There are no practical limits to how large we can make our devices or how much we can engineer our neurons.”
But computing is already evolving in the near term: technologies like perceptual computing, the use of smartphones as PCs and applications like Quadro are already part of this revolution. Indeed, computing startups abound: in the quantum sector alone, the American Rigetti Computing and the Australian QxBranch are working on prototype chips that could disrupt traditional computing, while the Canadian D-Wave has already sold its chips to Google and the CIA. This industry is undergoing rapid change and invites us to imagine some truly fascinating futuristic scenarios.