Fifty years ago this month, Intel introduced the first commercial microprocessor, the 4004. Microprocessors are tiny, general-purpose chips that use integrated circuits made up of transistors to process data; they are the core of a modern computer. Intel created the 12 mm² chip for a printing calculator made by the Japanese company Busicom. The 4004 had 2,300 transistors—a number dwarfed by the billions found in today’s chips. But the 4004 was leaps and bounds ahead of its predecessors, packing the computing power of the room-sized, vacuum tube–based first computers into a chip the size of a fingernail. In the past 50 years, microprocessors have changed our culture and economy in unimaginable ways.
Here’s a look at the historic achievements in materials chemistry and electrical engineering that made the 4004 possible, the subsequent advances that resulted in today’s computers, and the cutting-edge technology that could lead to tomorrow’s devices.
Vacuum tubes
Early computers used vacuum tubes as switches for digital logic calculations. These tubes consisted of metal electrodes inside a glass envelope and could turn the current flowing through them on or off to create the necessary 1 and 0 states. But vacuum tubes were large, power hungry, and failure prone. The first electronic computer in the US, the Electronic Numerical Integrator and Computer (ENIAC), booted up in 1945. The size of a room, it used over 17,000 vacuum tubes whose wiring had to be reconfigured for each new series of calculations.
Birth of the transistor
Physicist Julius Edgar Lilienfeld proposed an alternative to the vacuum tube in 1926: the transistor. He couldn’t build one at the time because researchers didn’t yet have access to the high-quality semiconductors that are the heart of the transistor. Transistors typically use three metal electrodes and a semiconducting channel. A gate electrode provides enough electrical energy to switch the channel between conducting and insulating states. When conducting, current can flow between a source electrode and a drain. Like vacuum tubes, these devices can switch between 1 and 0 by turning current on and off, and they can amplify electrical signals. But they are more compact, sturdy, and energy efficient than vacuum tubes. The first transistor was made by a team at AT&T’s Bell Labs in 1947. It was a small slab of germanium with two gold electrodes held in place by a plastic wedge. A voltage applied to one of the electrodes could modulate the current through the other.
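Functionally, a transistor in a logic circuit is just a controllable switch, and gates are built by arranging switches so the output is pulled high or low. Below is a toy Python sketch of that idea; the function names and the pull-up-style inverter are illustrative assumptions, not a model of any particular device described here.

```python
# Toy model of a transistor as a voltage-controlled switch, and of how
# one switch plus a pull-up forms a logic gate. Function names and the
# pull-up inverter are illustrative, not a simulation of a real device.

def channel_conducts(gate_high: bool) -> bool:
    """n-type switch: source-drain current flows only while the
    gate electrode is energized."""
    return gate_high

def inverter(input_high: bool) -> bool:
    """NOT gate: a conducting channel pulls the output low;
    otherwise the pull-up holds it high."""
    return not channel_conducts(input_high)

for level in (False, True):
    print(f"in={int(level)} out={int(inverter(level))}")
```

The same switch abstraction applies to the vacuum tubes above; transistors simply realize it in a smaller, sturdier, more efficient package.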
The silicon transistor
Silicon transistors were more reliable and performed better than germanium ones, particularly at the high temperatures demanded by military applications, which were among the major early uses of computing. Texas Instruments announced in 1954 that it had come up with a way to work with silicon, and by 1960 silicon had become the dominant transistor material. Like the Texas Instruments transistor shown here, the first such silicon device sold commercially, early transistors were individual components, packaged in metal cans with trailing wire connectors. Engineers built circuits from these individual digital switches.
Integrated circuits
The first integrated circuit, which brought together multiple transistors on a single piece of germanium, used so-called flying wires made of gold. Jack Kilby at Texas Instruments built the circuit in 1958. The industry would soon replace this design with flat integrated circuits that included electrical contacts within the same plane.
Metal-oxide semiconductor transistors
Another key ingredient in microprocessors as we know them today, and a crucial element of Intel’s first microprocessors, is the metal-oxide semiconductor transistor. These transistors consist of layers of conductive metal contacts, insulating oxides, and semiconductors that use only one kind of charge carrier: either electrons or positively charged species called holes. Earlier transistors carried both kinds of charge and were faster, but metal-oxide semiconductor transistors used less energy and could be aggressively miniaturized.
The microprocessor grows up
Moore’s law
In 1965, Intel cofounder Gordon Moore, then at Fairchild Semiconductor, predicted that the number of components on integrated circuits would double every year for 10 years. In 1975, he revised his law to state that components would double every 2 years. Continued innovations in semiconductor manufacturing over the ensuing decades have allowed the industry to match this prediction. In the 50 years since the Intel 4004, computer chips have become increasingly complex and densely packed with transistors and other components—without corresponding increases in size, cost, or energy use.
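Moore’s law describes an exponential: seeded with the 4004’s 2,300 transistors and one doubling every 2 years, the idealized curve lands in the tens of billions by 2021, consistent with the billions found in today’s chips. Here is a minimal sketch of that arithmetic, assuming a perfectly steady 2-year cadence (the function name is ours, not an industry convention):

```python
# Back-of-the-envelope Moore's law projection: one doubling every
# 2 years, seeded with the Intel 4004's 2,300 transistors (1971).
# The perfectly steady cadence is a simplifying assumption.

def projected_transistors(year: int, base_year: int = 1971,
                          base_count: int = 2_300,
                          doubling_period: float = 2.0) -> float:
    doublings = (year - base_year) / doubling_period
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

The 2021 entry comes out near 77 billion, roughly the same order of magnitude as the largest commercial processors shipping that year.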
Widening wafers
To make integrated circuits today, engineers start with thin slices of crystalline silicon called wafers and then repeatedly deposit materials and etch features onto them. They also add other elements to the silicon to change its electrical properties, a process called doping. After those steps, machines dice wafers into computer chips. One key way the semiconductor industry has kept costs down is by using larger and larger wafers to make more and more chips at once without adding process steps. Early wafers were just a few centimeters across; chips are now mostly made on 300 mm discs. Wafers are sliced from pure, single-crystal silicon ingots grown from molten silicon.
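The cost advantage of bigger wafers is mostly geometry: dies per wafer scale with wafer area, so the same sequence of process steps yields far more chips. A back-of-the-envelope sketch, assuming an idealized layout with no edge loss or yield loss and using the 4004’s 12 mm² footprint as the illustrative die size:

```python
import math

# Rough dies-per-wafer estimate: wafer area divided by die area.
# Ignoring edge loss, scribe lines, and yield is a simplifying
# assumption; the 12 mm^2 die (the 4004's size) is illustrative.

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area_mm2 // die_area_mm2)

for diameter_mm in (50, 100, 200, 300):
    print(f"{diameter_mm} mm wafer: ~{dies_per_wafer(diameter_mm, 12):,} dies")
```

Going from a 50 mm wafer to a 300 mm one multiplies the die count by roughly 36 without adding process steps.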
Copper interconnects
As important as transistors are to a microprocessor, they’d be nothing without the minuscule metal wires that connect these switches. Circuits are faster and more efficient when these so-called interconnects are highly conductive and don’t heat up. Interconnects were initially made from aluminum. IBM introduced more-conductive copper wiring into its chips in 1997, after working out a fix to prevent copper ions from migrating into the surrounding silicon.
Strained silicon
The semiconductor industry made a lot of progress by cramming ever-greater numbers of ever-smaller transistors into circuits. But another way to boost chip performance is to make the individual switches faster. In 2002, Intel announced it could achieve this by putting the meat of its transistors—the silicon channel through which electrical current flows when the transistor is on—under mechanical strain. By adding materials adjacent to the channels that caused the silicon crystal to stretch apart or compress by about 1%, Intel could boost current through the transistors and make the circuits faster.
FinFETs
As transistors got smaller, chipmakers hit a wall. In smaller transistors, it’s more challenging to fully turn off the flow of current, so transistors that are supposed to be off can still leak current and waste power. This problem led the industry to switch to a new transistor design that wraps insulating material around three of four sides of a silicon channel sticking up from the wafer surface. Called FinFETs or Tri-gate transistors, these devices were developed by researchers at the University of California, Berkeley, in the late 1990s; Intel was the first to adopt them commercially, in 2011.
Gate-all-around transistors
One likely pathway for the semiconductor industry to continue miniaturizing chip components is the gate-all-around transistor, which takes the FinFET design one logical step further, completely enveloping the silicon channel in insulating material to further prevent leakage. This year, IBM demonstrated highly miniaturized chips based on this technology.
Layered nanomaterials
As it gets harder to make silicon transistors smaller, some researchers and companies are looking to other materials entirely. One alternative approach would be to make layer-cake-like processors consisting of stacked circuits made from nanomaterials such as carbon nanotubes. Carbon nanotubes, graphene, and other novel semiconductor materials can be processed at much lower temperatures than silicon. These lower temperatures would allow engineers to stack circuits into towers without damaging previous layers, packing more computing power into a given 2D footprint. Researchers demonstrated the first full computer based on carbon nanotubes in 2013, showing that its capabilities were similar to those of the Intel 4004. Engineers are now working out how to manufacture these circuits commercially.