The Intel 4004, the first commercial microprocessor, was released in 1971. With 2,300 transistors packed into 12 mm², it heralded a revolution in computing. A little over 50 years later, Apple’s M2 Ultra contains 134 billion transistors.
The scale of that progress, a roughly 58-million-fold increase in transistor count, is difficult to comprehend, but the evolution of semiconductors, driven for decades by Moore’s Law, has paved a path from the emergence of personal computing and the internet to today’s AI revolution.
But this pace of innovation is not guaranteed, and the next frontier of technological advances—from the future of AI to new computing paradigms—will only happen if we think differently.
Atomic challenges

The modern microchip stretches the limits of both physics and credulity. Such is its atomic precision that a few atoms can decide the function of an entire chip. This marvel of engineering is the result of more than 50 years of exponential scaling that has produced ever faster, smaller transistors.
But we are reaching the physical limits of how small we can go, costs are increasing exponentially with complexity, and managing power consumption efficiently is becoming ever more difficult. In parallel, AI is demanding ever more computing power. Data from Epoch AI indicates that the amount of compute needed to train AI models is quickly outstripping Moore’s Law, doubling roughly every six months in the “deep learning era” since 2010.
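To put that gap in perspective, here is a minimal back-of-the-envelope sketch (an illustration of the growth rates cited above, not a figure from Epoch AI’s dataset) comparing a six-month doubling time with the roughly two-year doubling cadence commonly associated with Moore’s Law over a single decade.

```python
# Back-of-the-envelope comparison: training-compute growth at a ~6-month
# doubling time vs. a Moore's Law-style ~24-month doubling time.
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Total multiplicative growth after `years`, given a fixed doubling time."""
    return 2 ** (years / doubling_time_years)

DECADE = 10
ai_compute = growth_factor(DECADE, 0.5)   # 2**20, roughly a million-fold
moores_law = growth_factor(DECADE, 2.0)   # 2**5, a 32-fold increase

print(f"AI training compute over {DECADE} years: ~{ai_compute:,.0f}x")
print(f"Transistor scaling (Moore's Law) over {DECADE} years: ~{moores_law:,.0f}x")
```

Over ten years, a six-month doubling compounds to roughly a million-fold increase, while a two-year cadence yields only about 32x, which is why demand for training compute so quickly outruns transistor scaling alone.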
These interlinked trends present challenges not just for the industry, but for society as a whole. Without new semiconductor innovation, today’s AI models and research will be starved of computational resources and struggle to scale and evolve. Key sectors like AI, autonomous vehicles, and advanced robotics will hit bottlenecks, and energy use from high-performance computing and AI will continue to soar.
Materials intelligence

At this inflection point, a complex, global ecosystem—from foundries and designers to highly specialized equipment manufacturers and materials solutions providers like Merck—is working together more closely than ever before to find the answers. All have a role to play, and the role of materials extends far beyond the silicon that makes up the wafer.
In fact, materials intelligence is present in almost every stage of the chip production process, whether in the chemical reactions that carve circuits at molecular scale (etching) or in the addition of incredibly thin layers to a wafer with atomic precision (deposition): a human hair is 25,000 times thicker than the layers in leading-edge nodes.
Yes, materials provide a chip’s physical foundation and the substance of more powerful and compact components. But they are also integral to the advanced fabrication methods and novel chip designs that underpin the industry’s rapid progress in recent decades.
For this reason, materials science is taking on heightened importance as we grapple with the limits of miniaturization. Advanced materials are needed more than ever for the industry to unlock the new designs and technologies capable of increasing chip efficiency, speed, and power. We are seeing novel chip architectures that embrace the third dimension, stacking layers to optimize surface area usage while lowering energy consumption. The industry is also harnessing advanced packaging techniques in which separate “chiplets” with varying functions are fused into a single, more efficient and powerful chip, an approach known as heterogeneous integration.
Materials are also allowing the industry to look beyond traditional compositions. Photonic chips, for example, harness light rather than electricity to transmit data. In all cases, our partners rely on us to discover materials never previously used in chips and guide their use at the atomic level. This, in turn, is fostering the necessary conditions for AI to flourish in the immediate future.
New frontiers

The next big leap will involve thinking differently. The future of technological progress will be defined by our ability to look beyond traditional computing.
Answers to mounting concerns over energy efficiency, costs, and scalability will be found in ambitious new approaches inspired by biological processes or grounded in the principles of quantum mechanics.
While still in its infancy, quantum computing promises processing power and efficiencies well beyond the capabilities of classical computers. Even though practical, scalable quantum systems remain a long way off, their development depends on the discovery and application of state-of-the-art materials.
Similarly, emerging paradigms like neuromorphic computing, modeled on the human brain with architectures that mimic our own neural networks, could provide the firepower and energy efficiency to unlock the next phase of AI development. Composed of deeply complex webs of artificial synapses and neurons, these chips would avoid traditional scalability roadblocks and the limitations of today’s von Neumann computers, which separate memory and processing.
Our biology consists of highly complex, intertwined systems that have evolved through natural selection, but it can be inefficient; the human brain is capable of extraordinary computational feats, yet it requires sleep and careful upkeep. The most exciting step will be using advanced compute, both AI and quantum, to finally understand and design systems inspired by biology. This combination will drive the power and ubiquity of next-generation computing and the associated advances in human well-being.
Until then, the insatiable demand for more computing power to drive AI’s development poses difficult questions for an industry grappling with the fading of Moore’s Law and the constraints of physics. The race is on to produce more powerful, more efficient, and faster chips to advance AI’s transformative potential in every area of our lives.
Materials are playing a hidden but increasingly crucial role in keeping pace: producing next-generation semiconductors and enabling the new computing paradigms that will deliver tomorrow’s technology.