Artificial Intelligence (AI) is set to create one of the largest market opportunities in history, with its potential value estimated at between $3.5 trillion and $5.8 trillion. Capturing a significant share of this market could redefine national economies, acting as a powerful engine of growth for decades to come. For India, harnessing AI will be essential to realizing the vision of a developed nation by 2047.
While AI has long been a subject of fascination, it has also seen cycles of success and disappointment. A closer look at the recent breakthroughs reveals a serious drawback: they come with huge energy demands and expensive, time-consuming training processes. If nothing changes, projections suggest that AI's power needs could exceed global energy production by 2035, with serious economic and environmental consequences. Averting that outcome demands a leap to computing hardware that is dramatically more energy-efficient than what we have today.
Why is this leap necessary? Today's hardware is built on the old von Neumann architecture, which has been the blueprint for virtually all computers for nearly 80 years. In this model, computation and memory are separated, so data must constantly shuttle between the two, which slows down operations and consumes energy. For tasks requiring billions of calculations per second, such as AI workloads, the von Neumann design has become a major bottleneck. What's worse, the data we generate and feed into AI systems is often stored by large corporations, raising privacy concerns.
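The cost of this separation can be illustrated with a toy model. The sketch below is an illustration of the general principle, not a measurement of any real machine; the function names and transfer counts are assumptions made for the example. It counts how many values must cross the memory bus for one matrix-vector multiply, the core operation of neural networks, under a von Neumann design versus an in-memory design where the stored weights never move.

```python
# Toy model (not a benchmark): count off-chip data transfers for one
# matrix-vector multiply. All figures are illustrative assumptions.

def von_neumann_transfers(rows: int, cols: int) -> int:
    """In a von Neumann machine, every weight must be fetched from
    separate memory before the processor can use it."""
    weight_fetches = rows * cols   # one fetch per weight
    input_fetches = cols           # fetch the input vector once
    result_writes = rows           # write the output vector back
    return weight_fetches + input_fetches + result_writes

def in_memory_transfers(rows: int, cols: int) -> int:
    """In an in-memory (neuromorphic) design, the weights stay where
    they are stored, so only inputs and outputs cross the bus."""
    return cols + rows

rows, cols = 1024, 1024
print(f"von Neumann: {von_neumann_transfers(rows, cols):,} transfers")
print(f"in-memory:   {in_memory_transfers(rows, cols):,} transfers")
# → von Neumann: 1,050,624 transfers
# → in-memory:   2,048 transfers
```

Under these assumptions, the von Neumann design moves roughly 500 times more data for the same computation, and data movement, not arithmetic, dominates the energy bill.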
The solution may be closer than we think—within our own minds. The human brain, which weighs less than two kilograms and consumes only 20 watts of energy, is capable of performing billions of operations per second, all the while seamlessly storing and processing information in one place. This extraordinary efficiency has inspired a new approach to computing, based on the brain’s neural networks.
The concept of brain-inspired computing is not new. In the 1980s, the visionary American engineer Carver Mead laid the foundations of neuromorphic computing. Fast forward to the 2010s: industry giants like Intel and IBM rekindled interest in brain-like computing. With advanced manufacturing technologies at their disposal, these companies attempted to mimic the brain's learning processes using traditional binary transistors and software-driven systems. Not surprisingly, this brute-force approach failed.
The lesson was clear: To get close to the computational efficiency of the brain, we need to re-imagine computing with new circuit elements that can learn and adapt, like biological neurons and synapses. We also need to rethink the entire computing architecture, moving beyond the limitations of the von Neumann system, where memory and processing are separated.
The race to develop brain-inspired computers isn’t just about mimicking the brain’s processing power — it’s about doing so with the same energy efficiency and compactness that makes the brain so remarkable. The question is, can we create machines that are as smart and efficient as the human brain? The challenge is in creating computing systems that can store information in thousands of states and operate at the edge of chaos, just like the brain.
In a groundbreaking study published in Nature, a team led by Dr. Sreetosh Goswami of the Indian Institute of Science, Bengaluru, unveiled a molecular neuromorphic platform capable of storing and processing data across an astonishing 16,500 states, leaving traditional transistor-based computers, which operate in only two states, far behind. Using the dance of ions within a molecular film, the team created a system that mimics the brain's complex method of data processing. Molecules and ions moving within the film generate a multitude of unique memory states, each of which was mapped to a specific electrical signal, yielding a device that captures thousands of computing states while excelling in both energy efficiency and compactness.
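To put that state count in perspective, a quick back-of-the-envelope calculation (simple arithmetic on the figure quoted above, not a result from the study itself) converts 16,500 states into an equivalent number of binary bits:

```python
import math

# One device with 16,500 distinguishable states carries as much
# information as log2(16,500) ≈ 14 binary bits.
molecular_states = 16_500
equivalent_bits = math.log2(molecular_states)

print(f"one molecular element ≈ {equivalent_bits:.1f} bits")
# → one molecular element ≈ 14.0 bits
```

In other words, a single molecular element holds roughly as much information as fourteen conventional one-bit memory cells.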
The success doesn’t stop there. In a stunning technological leap, the team used their molecular platform to recreate NASA’s iconic Pillars of Creation image from the James Webb Space Telescope on a simple tabletop setup. What’s more, they achieved this feat 4,000 times faster and with 460 times less energy than a conventional computer.
With 14-bit precision, equivalent to 16,384 analog levels, this chip could transform fields ranging from artificial intelligence (AI) to scientific computing. Imagine training complex AI models such as large language models (LLMs) directly on personal devices like laptops and smartphones, a process that currently relies on huge server farms and invasive collection of personal data by large corporations. This invention could bring AI processing to individual users, provide unprecedented data privacy and democratize access to advanced AI tools. It is arguably one of the most disruptive computing innovations to emerge from India, with the potential to position the country at the forefront of global technological advancement.
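The arithmetic behind that equivalence, together with a hypothetical density comparison (the million-weight layer below is an invented example, not a figure from the study), looks like this:

```python
# 14-bit precision means 2**14 = 16,384 distinguishable analog levels.
levels = 2 ** 14
print(levels)  # → 16384

# Hypothetical density comparison: storing one million 14-bit weights
# takes one analog device per weight, versus 14 one-bit cells per
# weight in a conventional binary memory.
weights = 1_000_000
analog_devices = weights        # one multi-level device per weight
binary_cells = weights * 14     # 14 one-bit cells per weight
print(analog_devices, binary_cells)  # → 1000000 14000000
```

This fourteen-fold reduction in device count is where the space-saving potential mentioned above comes from.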
This article is written by Brainerd Prince, Director of the Center for Thinking, Language and Communication at Plaksha University.