http://www.nytimes.com/2015/09/27/technology/smaller-faster-cheaper-over-the-future-of-computer-chips.html
By JOHN MARKOFF SEPT. 26, 2015
Max Shulaker, a graduate student at Stanford, working in 2011 on a new kind of semiconductor circuit. As chips continue to shrink, computer scientists are seeking new technological breakthroughs. Credit: Lianne Milton for The New York Times
At the International Solid-State Circuits Conference, held on the campus of the University of Pennsylvania in Philadelphia in 1960, a young computer engineer named Douglas Engelbart introduced the electronics industry to the remarkably simple but groundbreaking concept of “scaling.”
Dr. Engelbart, who would later help develop the computer mouse and other personal computing technologies, theorized that as electronic circuits were made smaller, their components would get faster, require less power and become cheaper to produce — all at an accelerating pace.
Sitting in the audience that day was Gordon Moore, who went on to help found the Intel Corporation, the world’s largest chip maker. In 1965, Dr. Moore quantified the scaling principle and laid out what would have the impact of a computer-age Magna Carta. He predicted that the number of transistors that could be etched on a chip would double annually for at least a decade, leading to astronomical increases in computer power.
A wafer of Nehalem processors, introduced by Intel in 2008. Credit: Intel
His prediction appeared in Electronics magazine in April 1965 and was later called Moore’s Law. It was never a law of physics, but rather an observation about the economics of a young industry that ended up holding true for a half-century.
In the early 1960s, a single transistor, about as wide as a cotton fiber, cost roughly $8 in today’s dollars. (Intel was founded in 1968.) Today, billions of transistors can be squeezed onto a chip the size of a fingernail, and the cost per transistor has fallen to a tiny fraction of a cent.
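The arithmetic behind that improvement is simple compound doubling. A rough sketch, using the article’s round numbers ($8 per transistor then, billions per chip now; the four-billion count below is an assumed order of magnitude, not an exact specification):

```python
import math

# Rough figures: ~$8 per transistor in the early 1960s; billions of
# transistors on a modern chip. The 4-billion count is an assumed
# order of magnitude, not an exact spec for any particular chip.
transistors_today = 4e9

# Doublings needed to grow from one transistor to today's counts,
# and how long that takes at Moore's revised two-year cadence.
doublings = math.log2(transistors_today)
years = doublings * 2
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

About 32 doublings, or roughly 64 years at a two-year pace: close to the actual span from the mid-1960s to the article’s present, which is why the forecast’s longevity is so striking.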
That improvement — the simple premise that computer chips would do more and more and cost less and less — helped Silicon Valley bring startling advances to the world, from the personal computer to the smartphone to the vast network of interconnected computers that power the Internet.
In recent years, however, the acceleration predicted by Moore’s Law has slipped. Chip speeds stopped increasing almost a decade ago, the time between new generations is stretching out, and the cost of individual transistors has plateaued.
Technologists now believe that new generations of chips will come more slowly, perhaps every two and a half to three years. And by the middle of the next decade, they fear, there could be a reckoning, when the laws of physics dictate that transistors, by then composed of just a handful of molecules, will not function reliably. Then Moore’s Law will come to an end, unless a new technological breakthrough occurs.
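Because growth is exponential, a small stretch in the cadence compounds into a large gap over time. A quick sketch comparing the doubling intervals mentioned in the article over a single decade:

```python
# Transistor counts grow by 2 ** (years / cadence); compare the
# cadences mentioned in the article over one decade.
years = 10
for cadence in (1.0, 2.0, 2.5, 3.0):
    growth = 2 ** (years / cadence)
    print(f"doubling every {cadence} years -> {growth:.0f}x in a decade")
```

Annual doubling yields roughly a thousandfold gain in ten years; a three-year cadence yields only about tenfold.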
To put the condition of Moore’s Law in anthropomorphic terms, “It’s graying, it’s aging,” said Henry Samueli, chief technology officer for Broadcom, a maker of communications chips. “It’s not dead, but you’re going to have to sign Moore’s Law up for AARP.”
In 1995, Dr. Moore revised the doubling rate to two-year intervals. Still, he remains impressed by the longevity of his forecast: “The original prediction was to look at 10 years, which I thought was a stretch,” he said recently at a San Francisco event held to commemorate the 50th anniversary of Moore’s Law.
But the ominous question is what will happen if that magic combination of improving speeds, collapsing electricity demand and lower prices cannot be sustained.
The impact will be felt far beyond the computer industry, said Robert P. Colwell, a former Intel electrical engineer who helped lead the design of the Pentium microprocessor when he worked as a computer architect at the chip maker from 1990 to 2000.
“Look at automobiles, for example,” Dr. Colwell said. “What has driven their innovations over the past 30 years? Moore’s Law.” Most automotive industry innovations in engine controllers, antilock brakes, navigation, entertainment and security systems have come from increasingly low-cost semiconductors, he said.
These fears run contrary to the central narrative of an eternally youthful Silicon Valley. For more than three decades the industry has argued that computing will get faster, achieve higher capacity and become cheaper at an accelerating rate. That promise has been described as “Internet time” and even as the Singularity, a point at which computing power surpasses human intelligence, an assertion held with near-religious conviction by many in Silicon Valley.
Gordon Moore, a founder of the Intel Corporation, in a photograph from the late 1960s. In 1965, in what came to be called Moore’s Law, Dr. Moore laid out the principle that the number of transistors that could be etched on a chip would double annually for at least a decade. Credit: Intel
When you’re thinking that big, bumping into the limits of physics could be a most humbling experience.
“I think the most fundamental issue is that we are way past the point in the evolution of computers where people auto-buy the next latest and greatest computer chip, with full confidence that it would be better than what they’ve got,” Dr. Colwell said.
The Limits of Physics
Chips are made from metal wires and semiconductor-based transistors — tiny electronic switches that control the flow of electricity. The most advanced transistors and wires are smaller than the wavelength of light, and the most advanced electronic switches are smaller than a biological virus.
Chips are produced in a manufacturing process called photolithography. Since it was invented in the late 1950s, photolithography has constantly evolved. Today, ultraviolet laser light is projected through glass plates that are coated with a portion of a circuit pattern expressed in a metal mask that looks like a street map.
Each map makes it possible to illuminate a pattern on the surface of the chip in order to deposit or etch away metal and semiconducting materials, leaving an ultrathin sandwich of wires, transistors and other components.
The masks are used to expose hundreds of exact copies of each chip, which are in turn laid out on polished wafers of silicon about a foot in diameter.
Machines called steppers, which currently cost about $50 million each, move the mask across the wafer, repeatedly exposing each circuit pattern to the surface of the wafer, alternately depositing and etching away metal and semiconducting components.
A finished computer chip may require as many as 50 exposure steps, and the mask must be aligned with astonishing accuracy. Each step raises the possibility of infinitesimally small errors.
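One way to see why that accuracy matters is to model errors compounding across the exposure steps. This is an illustrative toy model, not actual fab yield data; only the 50-step count comes from the article, and the per-step success rates are invented:

```python
# Toy model: if each of ~50 exposure steps independently succeeds
# with probability p, the fraction of chips surviving all steps is
# p ** 50. The per-step rates here are invented for illustration.
steps = 50
for per_step in (0.999, 0.99, 0.95):
    survival = per_step ** steps
    print(f"{per_step:.3f} success per step -> {survival:.1%} of chips survive")
```

Even a 99.9 percent per-step success rate leaves about 5 percent of chips lost over 50 steps, and a 95 percent rate would ruin nearly all of them, which is why lithography tolerances are so unforgiving.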
“I’ve worked on many parts of the semiconductor process,” said Alan R. Stivers, a physicist whose career at Intel began in 1979 and who helped introduce a dozen new semiconductor generations before retiring in 2007. “By far, lithography is the hardest.”
To build devices that are smaller than the wavelength of light, chip makers have added a range of tricks like “immersion” lithography, which uses water to bend light waves sharply and enhance resolution. They also have used a technique called “multiple pattern” lithography, which employs separate mask steps to sharpen the edges and further thin the metal wires and other chip components.
The Westmere die, a processor introduced by Intel in 2010. Credit: Intel
As the size of components and wires has shrunk to just a handful of molecules, engineers have turned to computer simulations that require tremendous computational power. “You are playing tricks on the physics,” said Walden C. Rhines, chief executive of Mentor Graphics, a Wilsonville, Ore., design automation software firm.
If that scaling first described by Dr. Engelbart ends, how can big chip companies avoid the Moore’s Law endgame? For one, they could turn to software or new chip designs that extract more computing power from the same number of transistors.
And there is hope that the same creativity that has extended Moore’s Law for so long could keep chip technology advancing.
If silicon is, in the words of David M. Brooks, a Harvard University computer scientist, “the canvas we paint on,” engineers can do more than just shrink the canvas.
Silicon could also give way to exotic materials for making faster and smaller transistors and new kinds of memory storage as well as optical rather than electronic communications links, said Alex Lidow, a physicist who is chief executive of Efficient Power Conversion Corporation, a maker of special-purpose chips in El Segundo, Calif.
There are a number of breakthrough candidates, like quantum computing, which — if it became practical — could vastly speed processing time, and spintronics, which in the far future could move computing to atomic-scale components.
Recently, there has been optimism about a new manufacturing technique known as extreme ultraviolet, or EUV, lithography. If it works, EUV, which uses light with a wavelength of about 13.5 nanometers, far shorter than that of visible light, will permit even smaller wires and features while simplifying the chip-making process.
But the technology still has not been proved in commercial production.
Earlier this year ASML, a Dutch stepper manufacturer partly owned by Intel, said it had received a large order for EUV steppers from a United States customer that most people in the industry believe to be Intel. That could mean Intel has a jump on the rest of the chip-making industry.
Intel executives, unlike their counterparts at major competitors such as Samsung and Taiwan Semiconductor Manufacturing Company, or TSMC, insist the company will be able to continue to make ever-cheaper chips for the foreseeable future. And they dispute the notion that the price of transistors has reached a plateau.
Yet while Intel remains confident that it can continue to resist the changing reality of the rest of the industry, it has not been able to entirely defy physics.
Under yellow light to filter out the ultraviolet spectrum, Max Shulaker worked on a wafer embedded with a circuit at Stanford University, in Palo Alto, Calif., in 2011. Most of the material used in the etching of semiconductor circuits is UV-sensitive. Mr. Shulaker was part of a team helping to make prototypes of a new kind of semiconductor circuit. Credit: Lianne Milton for The New York Times
“Intel doesn’t know what to do about the impending end of Moore’s Law,” said Dr. Colwell.
In July, Intel said it would push back the introduction of 10-nanometer technology (a human hair, by comparison, is about 75,000 nanometers wide) to 2017. The delay is a break with the company’s tradition of introducing a generation of chips with smaller wires and transistors one year, followed by adding new design features the next.
“The last two technology transitions have signaled that our cadence is closer to two and a half years than two years,” Brian Krzanich, Intel’s chief executive, said in a conference call with analysts.
No More ‘Free Ride’
The glass-is-half-full view of these problems is that the slowdown in chip development will lead to more competition and creativity. Many semiconductor makers do not have the state-of-the-art factories now being designed by four chip manufacturers: GlobalFoundries, Intel, Samsung and TSMC.
The delays might allow the trailing chip makers to compete in markets that don’t require the most bleeding-edge performance, said David B. Yoffie, a professor at Harvard Business School.
And even if shrinking transistor size doesn’t make chips faster and cheaper, it will lower the power they require.
Ultra-low-power computer chips that will begin to appear at the end of this decade will in some cases not even require batteries — they will be powered by solar energy, vibration, radio waves or even sweat. Many of them will be sophisticated new kinds of sensors, wirelessly woven into centralized computing systems in the computing cloud.
What products might those chips lead to? No one knows yet, but product designers will be forced to think differently about what they’re building, rather than play a waiting game for chips to get more powerful. Thanks to Moore’s Law, computers have gotten smaller and smaller but have essentially followed the same concept of chips, hardware and software in a closed box.
“In the past, designers were lazy,” said Tony Fadell, an electrical engineer who headed the team that designed the original iPod, and led the hardware design of the iPhone before founding Nest Labs, a maker of smart home devices like thermostats and smoke alarms.
Carver Mead, the physicist who actually coined the term Moore’s Law, agrees. “We’ve basically had a free ride,” he said. “It’s really nuts, but that’s what paid off.”
Indeed, a graying Moore’s Law could be alive and well for at least another decade. And if it is not, humans will just have to get more creative.