By Mark P. Mills
A note about this series:
Part 1 of this series (here) focused on the “Cloud” of datacenters—energy-gobbling warehouse-scale computers—that sit largely out of sight at the center of the Internet. In part 2 we explore the invisible “information superhighway” that connects the Cloud to everyone and, soon, everything. The ethereal magic of radio photons now forms the most far-reaching network humanity has ever built, and one that uses more electricity than the country of Italy.
Fifty years ago this month, Apollo 7 marked two firsts: the first manned flight of the Apollo crew module, which put the program on track for the moon landing just nine months later; and the first live TV transmission from a manned spacecraft, seen by hundreds of millions of people around the world.
Soon after that, in 1974, it was the ubiquity of television – i.e., the wireless transmission of video, not voice – that inspired Korean American artist Nam June Paik, in a prescient essay, to coin the term “information superhighway,” a phrase later borrowed by then-Congressman Al Gore in a 1978 speech. And to be punctilious, history likely records the first use of “superhighway” as applied to any communication network in a 1964 book by Bell Labs physicist Manfred Brotherton. (The old Bell Labs, and physicists, generated a lot of “firsts.”)
Why the word “superhighway”? Because networks that transport commerce have been critical to economies since Roman times. And, as Paik observed circa 1974, in those days the interstate superhighways “became the backbone for … economic growth.”
If he were still with us, Paik would likely be unsurprised by the fact that video already comprises 60% of Internet traffic. Humans are visual creatures. Research across cultures and languages points to sight as the dominant of the five senses, with ten-fold more of the brain’s neurons dedicated to sight than to sound.
Consider Paik’s prescience in that decades-old essay regarding the business and societal implications of unleashed video:
“The mass entertainment TV as we see it now will be divided into, or rather gain many branches and tails of, differentiated video cultures. Picture phone, tele-facsimile, two way inter-active TV for shopping, library research, opinion polling, health consultation, bio-communication, inter-office data transmission and many other variants will turn the TV set into the expanded mixed media telephone system for thousands and one new applications, not only for daily convenience but also for the enrichment of life itself.”
But it took far longer than Paik imagined for truly useful “two-way inter-active” wireless video networks to emerge; i.e., what we now call the mobile Internet, which has profoundly changed computing, and so much more. Only a decade ago did the Internet finally become untethered, with the emergence of practical smartphones – devices that are essentially tiny, smart, radio-linked TVs.
Paik got a lot right about what mobile video would spur. What he got wrong were the energy implications, on two counts.
First, he mistakenly believed that ubiquitous video would “drastically reduce air travel, and … the chaotic shuttling of airport buses through city streets – forever!” In fact, air traffic soared (it’s up nearly 1,000% since then) and so too did road traffic, including, in no small irony, the freight traffic of two-day delivery driven by “inter-active TV for shopping.” The untethered Internet turned out to be an economic accelerant for global commerce and thus, collaterally, for the need and desire to travel. It’s a trend with a far-reaching, if indirect, energy impact that continues today.
Second, Paik offered up an energy trope that is still commonly believed; i.e., that a video-teleconference somehow “consumes no energy.” It’s true that beaming a video signal on a radio wave uses less energy than propelling an automobile or aircraft. But all things use energy. The key to total energy use on any highway, paved or photonic, is the distance and frequency of travel.
All the world’s physical highways collectively span a distance of 24 million miles; one-fourth of the way to our sun. While the information superhighway is incredibly energy efficient, it’s also astronomically bigger, and busier. The world’s 4 million cell towers connect billions of people on an invisible network that is effectively 100 billion miles long. That’d take you to the sun 1,000 times.
One should not be surprised that it takes serious quantities of power to energize such an expansive network. The world’s cellular network operators spend, collectively, over $20 billion to buy somewhere between 200 and 300 TWh of electricity a year. (The exact number depends on differing assumptions.) That’s also roughly the same quantity of electricity that the world’s datacenters use (see part 1 in this series), or as much electricity as the country of Italy uses for all purposes. Or in highway terms, that quantity of energy is consumed by all of California’s automobiles each year.
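Those figures hang together. A back-of-envelope check, using only the numbers cited above (estimates, not measurements), shows that roughly $20 billion spent on roughly 200 to 300 TWh implies an average price in the ordinary industrial-electricity range, which makes the estimates mutually plausible:

```python
# Back-of-envelope check on the figures above: ~$20 billion spent on
# ~200-300 TWh implies an average electricity price in the ordinary
# industrial range. The inputs are the article's estimates, not measurements.
spend_usd = 20e9                 # annual electricity spend by mobile operators
for twh in (200, 300):           # estimated annual consumption range
    kwh = twh * 1e9              # 1 TWh = 1 billion kWh
    print(f"{twh} TWh -> implied average price: ${spend_usd / kwh:.3f}/kWh")
# prints roughly $0.100/kWh and $0.067/kWh
```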
Even so, electricity purchases are typically only 10% of total operating expenses for cellular service providers in mature economies. This may explain why, setting aside virtue signaling about efficiency, there hasn’t been widespread anxiety about energy in those circles – that is, until lately. The story is different in emerging economies, however, where network energy bills can account for 40 to 50% of operating expenses.
Radios have always used a lot of power. The radios in a smartphone use 40% of the onboard battery’s power. (The screen and logic circuits use the rest in roughly equal proportions.) But even all the energy needed to charge all the world’s smartphones adds up to bupkis at the global level. In fact, a few dozen shale rigs in Texas can supply enough energy to recharge all the smartphones on the planet.
The radios in smartphones are low-power precisely to allow long operating times on a battery – but low-power means very limited radio range. Thus, the utility of smartphones is made possible by placing millions of far more powerful radios (cellular “base stations”) close enough to every possible mobile location to enable continual connectivity. The energy story of interest, then, is in the two-way broadcast stations that invisibly enable that connectivity.
In 1902, the first radio station ever to broadcast a commercial wireless signal was built by Guglielmo Marconi at Glace Bay, Nova Scotia. The signal, received nearly instantaneously at Cornwall, England, was energized by a dedicated several-hundred-kilowatt coal-fired power plant. That world-changing demonstration took place a little more than a decade after the German physicist Heinrich Hertz had first demonstrated the then radically new phenomenon of electromagnetic wave propagation. (Hertz tragically died at 36; Marconi would go on to win the 1909 Nobel Prize in physics.)
Engineers knew from day one that shrinking the physical and energy footprints was key to the proliferation of radio technology. (A truism for all technologies.) By World War I the first “portable” radio sets weighed a ‘mere’ one ton and could be drawn on a horse-cart. By WWII Motorola delivered the iconic 35-pound back-pack radio that GIs carried. But the subsequent timeline to get to tiny handhelds took longer than the equivalent trajectory for computers, because the technology for making semiconductor radio chipsets was – and in many ways remains – more difficult than that for computer chipsets.
By 1984, Apple and others democratized the personal computer and the wired Internet quickly followed. That same year the world’s first cell phone was also released (again, a Motorola invention). It was a $9,000 (today’s dollars), two-pound ‘brick’ with a 30-minute talk time; a powerful portent but hardly practical. That phone was made famous as a sign of excess in 1987 by Hollywood’s Wall Street caricature, Gordon Gekko.
It wasn’t until 2007 that the wireless mobile revolution began with the emergence of a useful smartphone (credit to Apple of course). Computers themselves had long before collapsed in size, but that was insufficient to unlock a mobile revolution. It took the maturation of three other innovations: low-power semiconductor radio chipsets, tiny color screens, and powerful lithium-ion batteries.
With that convergence, it took only a half-dozen years until, in 2013, the world had more smartphones than smart desktops. Traffic on wireless networks today is greater than that of the entire wired Internet in 2013. That was the pivot year when the real age of the radio finally began, with all its economic, social – and energy – implications.
Going from landline networks dedicated to voice and text traffic to a video-dominated mobile Internet is, in energy terms, equivalent to going from local roads filled with scooters to super-highways full of SUVs. The mobile Internet delivered a double whammy in energy impacts.
First, carrying data on a wireless instead of a wired network entails a roughly 10-fold increase in energy consumed per unit of data transported. (In no small irony, that’s roughly the ratio of the increase in energy used per pound-mile in going from truck to drone.) Efficiencies improve over time of course, but roughly at an equal rate in both transport modes … until the limits of physics impose the end game.
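To make that ratio concrete, here is a rough sketch. The per-gigabyte energy intensities are purely illustrative assumptions – published estimates vary widely by study, year, and network generation – but they preserve the roughly 10-fold gap described above:

```python
# Rough illustration of the ~10-fold wired-vs-wireless gap described above.
# The per-gigabyte energy intensities are assumed for illustration only;
# published estimates vary widely by study, year, and network generation.
wired_kwh_per_gb    = 0.05   # assumed fixed-line (fiber/cable) intensity
wireless_kwh_per_gb = 0.50   # assumed cellular intensity (~10x the wired figure)
movie_gb = 3.0               # one HD movie stream (assumed size)

print(f"over fiber/cable: {wired_kwh_per_gb * movie_gb:.2f} kWh")
print(f"over cellular:    {wireless_kwh_per_gb * movie_gb:.2f} kWh")
```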
Second, mobility itself accelerates the demand for data precisely because of the convenience. Mobility has enabled entirely new products and services, from Uber to mapping, from virtual assistants and real-time language translation to real-time, anytime, anywhere video entertainment. Consequently, in order to access all those services, the average smartphone user today pays about $1,000 every year to ‘rent’ time on the radio superhighway. Part of that rent includes, in effect, a fuel charge.
Hans Thirring may have been the first person to think about the aggregate energy consumption of broadcasting radio waves. In his path-breaking 1958 book, Energy for Man (which my colleague and I happily discovered when we were writing our own book, The Bottomless Well), Thirring estimated that the tens of thousands of broadcast towers of his day used about 0.1% of America’s electricity.
Neither Thirring, nor Marconi before him, could imagine two-way radios small enough to fit into pockets. And no one before Bell Labs, in 1947, conceived of a network of myriad complementary radio towers structured in “cells” to forge a practical two-way connection to portable, mobile radios. But it took over a half-century before the explosion of mobile devices using trivial amounts of energy spawned the symbiotic proliferation of the millions of power-hungry cell towers that exist today – and the resulting $20 billion worth of electricity they consume each year.
So what comes next? You can be sure that cellular equipment vendors and their customers are still feverishly exploring every avenue to improve power efficiency in radio domains, a pursuit dating back to Marconi. The more interesting question is what new architectures and, derivatively, what new businesses are about to be made possible by next-generation radio technology. Here we don’t need a crystal ball, but can – to borrow Peter Drucker’s maxim – predict what’s already happening.
Just as today’s mobile ecosystem was launched by the above-noted intersection of three key maturing technologies, so too another trifecta of tech advances is changing the game. The effect of the newest radio revolution will be the same as the one that kicked off “mobile” a mere decade back: another, likely greater, expansion of the scope and scale of wireless “highways,” leading to yet another proliferation of new classes of business, services and, derivatively, network traffic. All this will bring similar energy surprises, as it did last time. In order to appreciate both the market and energy implications, we need to highlight the underlying triad of drivers bringing a revolution in radios that are smaller, faster, and smarter.
As with the first mobile revolution, we see again the shrinking size of practical radios. Engineers have collapsed radio chipsets down to dust-mote size, which enables entirely new kinds of mobility: wireless connections to smart sensors embedded into nearly any product, machine, or thing, including our bodies (never mind just fitting in pockets and purses).
This has been made possible in large measure because engineers have so radically improved energy efficiency that short-range Bluetooth-type, or E-ZPass-type, radios can power themselves both wire-free and battery-free by “harvesting” the infinitesimal amounts of energy available in the ambient environment, including radio noise itself. Of course, such super-efficient low-power radios have extremely limited range. Just as that constraint was resolved for mobile handsets by using a cellular network of nearby base stations, so too will an even greater proliferation of “micro,” “pico,” and “femto” cells do the same again for mobile radios embedded everywhere.
Speed, or more properly bandwidth, is the second feature of the new mobile trifecta. Just as fiber optics vaulted speed/bandwidth far past what traditional copper cable could handle, we now have the maturation (in size, performance, and cost) of millimeter-wave radio chipsets – monolithic microwave integrated circuits (MMICs). In the ineffable physics of electromagnetic waves, higher speeds/bandwidths come from shorter wavelengths (which is to say, inversely, higher frequencies). The “millimeter” waves are, tautologically, far shorter than the nearly meter-long waves used by the first cellular radios, and nearly a million-fold shorter than the several-hundred-meter-long radio waves Marconi first launched.
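The relationship is simply wavelength = speed of light / frequency. A small sketch makes the orders of magnitude visible; the example frequencies below are illustrative assumptions, not precise historical values:

```python
# Wavelength = c / f, the inverse relationship described above.
# The example frequencies are illustrative assumptions (early transmitters
# and cellular bands varied), not precise historical values.
C = 3.0e8  # speed of light, m/s

examples_hz = {
    "Marconi-era long wave (~1 MHz, assumed)": 1e6,
    "first-generation cellular (~850 MHz)":    850e6,
    "5G millimeter wave (~28 GHz)":            28e9,
}

for label, freq in examples_hz.items():
    print(f"{label}: wavelength ~{C / freq:.3g} m")
# roughly 300 m, 0.35 m, and 0.011 m respectively
```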
At millimeter wavelengths (i.e., frequencies of tens of gigahertz) radios can handle traffic at the speed of first-generation fiber optics, effectively unlocking fiber speeds over the air. That’s why proponents of the emerging 5G network talk about a 100-fold faster speed that will supersede today’s (comparatively) sloth-like 4G. That kind of speed/bandwidth unlocks not only high-resolution streaming video – for better or worse, TV on demand anywhere, anytime – but also the kind of performance necessary for augmented and virtual reality, and for a new range of automation devices, from drones and self-driving cars to the era of collaborative robots and artificial intelligence, all requiring near-instantaneous access to the Cloud’s supercomputing capacity. Relevant to the architecture of such high-speed networks is the disadvantage of shorter wavelengths: they can only propagate over shorter distances.
(One should also mention the symbiotic, data-traffic-inducing applications of tiny, solid-state millimeter-wave chipsets, which enable the cost-effective radars now used on cars for navigation and cruise control, as well as for precision indoor navigation of autonomous industrial vehicles, including robots.)
MMICs were slower in coming, by a couple of decades, than silicon logic chips based on large-scale integrated (LSI) circuits, whose ever-shrinking components led to massive numbers – billions – of transistors per chip. While MMICs are built with the same tools developed for silicon logic, the physics of ever-shrinking radio wavelengths (again, inversely, higher frequencies) presents uniquely challenging problems. In addition, MMICs required the development of new semiconductor materials (gallium arsenide, indium phosphide, and gallium nitride).
And the third part of the trifecta is the development of practical smart antennas or, in proper technical terms, massive multiple-input multiple-output (MIMO) antennas. A limiting factor for any base station is the number of wireless users (whether people, or radios embedded in moving things, from cars to cattle) that a single antenna can handle. With massive MIMO, rather than a base station with a few antennas, an array of hundreds of computer-controlled antennas interleaves and ‘steers’ connections so precisely and quickly that the same ‘highway’ can handle 20-fold more traffic. Nothing equivalent is possible in the physical world of trucks and asphalt.
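A toy model hints at where a 20-fold figure can come from. Under a simplified Shannon-capacity view, a base station that spatially multiplexes K users across M antennas can approach roughly min(M, K) parallel streams in the same bandwidth. The antenna count, user count, bandwidth, and signal-to-noise ratio below are illustrative assumptions, not vendor specifications:

```python
# Toy illustration (not a network model) of massive-MIMO capacity scaling.
# Simplification: a base station spatially multiplexing K users across M
# antennas approaches ~min(M, K) parallel Shannon-capacity streams in the
# same bandwidth. All numbers below are illustrative assumptions.
from math import log2

def sum_rate_gbps(bandwidth_hz, snr_linear, streams):
    """Aggregate throughput of `streams` parallel links sharing one channel."""
    per_stream = bandwidth_hz * log2(1 + snr_linear)   # Shannon limit, bit/s
    return streams * per_stream / 1e9                  # total, Gbit/s

bandwidth = 100e6   # 100 MHz channel (assumed)
snr = 100           # ~20 dB signal-to-noise ratio (assumed)

one_antenna  = sum_rate_gbps(bandwidth, snr, streams=1)
massive_mimo = sum_rate_gbps(bandwidth, snr, streams=min(128, 20))  # 128 antennas, 20 users

print(f"single antenna, one user at a time: {one_antenna:.2f} Gbit/s")
print(f"128-antenna array serving 20 users: {massive_mimo:.2f} Gbit/s")
print(f"gain on the same spectrum:          {massive_mimo / one_antenna:.0f}x")
```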
What follows from all this? Count on the massive proliferation of a new class of mobile base-stations to connect to an even greater proliferation of new mobile radios, and new kinds of mobile-enabled services. The global Small Cell Forum estimates that with 5G rollout starting soon, there will be over 70 million small cells installed in urban and industrial environments within a half-dozen years; i.e., a more than 10-fold increase over today’s network of cell base stations.
In order to manage the dual challenge of the short ranges inherent in 5G wavelengths and far greater data traffic, designers talk in terms of hyperdense networks. Traditional cellular networks began with about 7 base stations per square kilometer (km²) and, with rising traffic and users, that site density has grown to some two dozen per km². But 5G networks are expected to start at 150 sites per km², rising to as many as 2,000 or more.
We are moving towards a world that has gone from one radio per household a century ago to hundreds and even thousands of radios per person. The term “information highway” loses salience; that near future is one of ambient high-speed connectivity, with radio links literally as accessible as the air we breathe.
What exactly will people do with such capabilities? By now we should have at least some glimmer of what’s coming, having so recently lived through the transition from the tethered desktop Internet to today’s mobile network.
For starters, and to return to the beginning – a lot more video. A survey earlier this year from the Interactive Advertising Bureau found that nearly half of the people who livestream video do so more now than last year, and in consequence watch less TV. (In energy terms, to which we return shortly, livestreaming to a mobile device versus watching a cable-connected TV is the equivalent of driving an SUV instead of taking the train.)
Meanwhile, mobile video viewing has already risen to a global average of 35 minutes per user per day, an increase of 700% over the past half-dozen years of the first-generation mobile Internet. We also know that better video capability will spur greater usage. A recent consumer survey sponsored by Adobe found that slow speeds and related bandwidth constraints are the primary reasons people stop using, or don’t use, mobile video.
Video consumption is almost entirely entertainment-centric. Now that we finally have access to a videophone – a dream originating in the 1960s – it turns out that video bandwidth is mainly about entertainment. It is an understatement to say that the entertainment industry is huge, with all the content created remotely, increasingly accessed live, and often literally created out of whole cloth (i.e., video games).
Annual household spending on entertainment has risen more than 60% over the past two decades that track the rise of the Internet era. And that spending increase has been directed at more use of “information superhighways” rather than asphalt roads. Household spending on location-based entertainment – going to movie theaters, parks, sporting events, museums, etc. – has remained essentially flat. But the average household has nonetheless increased spending by some $1,500 a year on entertainment using electronics, with half of that in wireless domains. We can expect wireless to drive more spending going forward.
Netflix and all the related video purveyors certainly think so. As does Microsoft. The tech giant will launch its xCloud game streaming for Xbox in 2019 because network performance is now good enough for real-time verisimilitude – performance that requires the supercomputing available in the Cloud. Google has similar plans, and Sony has been in online gaming mode for some time. Next comes the rise of (bandwidth-hungry) e-sports, already an ‘industry’ in the billion-dollar realm. Even Big Ten universities last year fielded teams in the online game League of Legends, with some matches broadcast.
On the heels of entertainment, and ultimately a far bigger ‘industry,’ comes the expansion of the still-nascent category of “wearables” – by which one means, of course, wearable computing of some kind connected wirelessly to the Internet. Guessing how big wearables will become, as equipment-maker Ericsson gamely attempts in a ConsumerLab Report, is as fraught as guessing circa 1998 where mobile would lead. But few doubt that one of the most profound wearable applications, and ultimately the biggest market, will come in medical and health fields.
And once 5G, with its fiber-speed wireless, is up and running, the biggest wildcard will come from augmented- and virtual-reality tools – the ultimate fusion of supercomputers and wireless superhighways. Analysts at forward-thinking CBInsights recently mapped out 19 industries that AR and VR are “poised to transform.” On that list: retail, conferences, marketing, law enforcement, training, manufacturing, construction, automotive, agriculture, sports, entertainment, education, energy, and of course healthcare; in short, everything. (For those interested in a deeper dive on 5G implications, I recommend a still-timely 2016 tutorial from Bret Swanson, here.)
In the physics of our universe, speed and volume always have an energy cost. Few engineers doubt that more speed (faster connections or, in technical terms, lower latency) and more bandwidth (more traffic) will lead to more, not less, energy consumed by the network infrastructure. We are entering an unprecedented era in terms of the sheer scale and proliferation of base stations with radio broadcasting electronics.
One expert noted recently in a publication of the Institute of Electrical and Electronics Engineers that network power is “one of the most important, yet underappreciated aspects of the next generation cellular network.” The author went on to observe that 5G entails “major paradigm shifts in the architecting, distribution, and utilization of energy in every aspect of the network.” Another engineer went so far as to warn that the “lurking threat behind the promise of 5G delivering up to 1,000 times as much data as today’s networks is that 5G could also consume up to 1,000 times as much energy.”
In the physics of photons, however, there are always lots of clever work-arounds. There is still a lot of ‘headroom’ to improve the operating efficiency of nearly every electronic component in the system, just as there is with all semiconductors. But those gains won’t be enough to offset the energy impact of a rapid rise in traffic.
Even so, operators are counting on two key energy-saving features of 5G. One is the use of far lower-power cell sites, although that is offset by the necessarily far greater number of them. The other is the use of computer power to dynamically manage the radios, i.e., smart antennas.
One benefit of smart radios is the ability to use sleep mode – an old idea, long obvious on desktops and smartphones, but only now possible to effect in base stations. The jury is out on just how much energy will be saved by powering down during periods when little or no data is being transported. The key variable is whether the new features, products, and services of 5G lead to so much traffic that there will be far fewer slow times than now exist.
The big advantage of smart antennas, however – the above-noted massive MIMO (multiple-input multiple-output) – lies in dramatically reducing the energy needed to deliver bytes on a radio wave. MIMO can dynamically beam-steer and enable on-demand use of an antenna’s capacity, instead of broadcasting inefficiently like a lawn sprinkler. Notably, MIMO’s primary purpose is “spectrum efficiency,” not energy efficiency; i.e., carrying a whole lot more data on the same radio channel.
The little ‘secret’ is that MIMO saves radio energy by using more computer power. MIMO requires dedicated kilowatt-class computers at each cell site in order to choreograph in real time the array of hundreds of antennas operating at gigahertz speeds. The MIMO computers are expected to use more energy just to manage the antennas than the energy used to create and broadcast the radio waves. (There is nothing like this needed for today’s base stations.) Consequently, rolling out 5G will substantially increase total base station energy use.
The hope is that MIMO’s energy-saving features offset the higher energy cost of generating and managing millimeter-length radio waves. Of course, the whole point of less energy per byte is specifically to stimulate increased data usage. We shall soon see how much more data is transported. (Analogizing again to transporting atoms, not bits: aircraft energy use per seat-mile is some ten-fold lower than in the early days of aviation; that spurred rising net energy use.)
Engineers have always pursued more efficiency. An engineering consortium called GreenTouch was formed in 2010 to develop a roadmap for finding new ways to improve next-generation network energy efficiency. The group finished its work in 2015 and concluded that energy consumption could be reduced by up to 98% relative to current network technologies. That may sound like a big jump, and it would be if we were talking about automobiles and airplanes. But in the world of bytes, such gains are business-as-usual.
And odds are traffic will grow faster than efficiency, as has been the story since the dawn of data. The consortium estimated that, even with optimal efficiencies, by 2025 global networks will spend over $90 billion a year on electricity, a more than four-fold rise from where we are today. But guessing the actual number is complicated by the biggest wild card: actual future traffic. A data tsunami is coming.
Consider that in the GreenTouch analysis, mobile traffic was forecast in 2010 to increase 89-fold by 2020. We’re almost at 2020; Cisco now forecasts that traffic will have risen by at least 120-fold over 2010 levels – and push that growth out just one more year, to 2021, and Cisco sees 160-fold growth over 2010. Oops. Add to that Ericsson’s latest forecast: video will jump to nearly 75% of all mobile traffic by 2022.
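Putting the two trends together – a back-of-envelope sketch using only the figures cited above, and ignoring fixed and idle power at cell sites – shows how even a 98% cut in energy per byte can be outrun by traffic growth:

```python
# Back-of-envelope sketch combining the figures cited above: even a 98%
# reduction in energy per byte is outrun by ~120-fold traffic growth.
# It ignores fixed and idle power at cell sites, so it is illustrative only.
per_byte_reduction = 0.98   # GreenTouch: up to 98% less energy per byte
traffic_growth = 120        # Cisco: ~120-fold more mobile traffic vs. 2010

net_energy_vs_2010 = (1 - per_byte_reduction) * traffic_growth
print(f"net network energy vs. the 2010 baseline: ~{net_energy_vs_2010:.1f}x")  # ~2.4x
```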
Not counted in today’s forecasts is the data traffic yet to come from artificial intelligence (AI) engines placed at cell tower sites – so-called edge computing. AI is inherently the most data-intensive – and thus energy-intensive – use of computers in history. Vendors and carriers plan to spend billions of dollars putting AI at the “edge,” close to end-users, precisely because speed (proximity) is everything when it comes to real-time applications. The traffic impact? As the CEO of MobiledgeX put it recently, it looks to be as big a deal as when “the telcos went from voice calls to video.”
The future energy demand of the information superhighway depends on how much digital magic emerges from the next information revolution. Futurists generally underestimate these kinds of transformations, particularly in information technologies. Back at the dawn of telephony, the writer Henry David Thoreau was reputed to have said: “Even if the telephone companies should ever succeed in connecting the people of Maine with the people of Tennessee, what would those people have to say to one another?” Similarly, Paul Krugman, of New York Times fame, wrote less grammatically in 1999 that “the growth of the Internet will slow drastically as the flaw [in network forecasts] becomes apparent: most people have nothing to say to each other!” [sic]
The intersection of digital mobility and the Cloud is the defining technology of our time, as the intersection of automotive mobility and the highways was nearly a century ago. Gasoline consumption by cars rose some 700% over those many decades, only recently leveling off. Fuel use on the information superhighway has a long way to go.
Mobile video traffic alone will likely come to consume two-fold more kilowatt-hours than the entire nation of South Korea uses today. If he were still alive, Nam June Paik would understand the popularity of video networks. But he would doubtless be shocked at the energy appetite of his “information superhighway.”
High-speed, high-bandwidth, ubiquitous wireless connectivity is what makes possible everything that dreamers and innovators imagine for the next era of the mobile Internet. The coming leap will be bigger than going from Gordon Gekko’s two-pound ‘brick’ to today’s smartphone. In highway terms … it’s not 1974, it’s 1958 – a time when there were already lots of roads, but the build-out of the interstate highway system, and its associated energy use and economic acceleration, had just begun.
Mark P. Mills is a senior fellow at the Manhattan Institute and a McCormick School of Engineering Faculty Fellow at Northwestern University, and author of “Work in the Age of Robots,” just published by Encounter Books. Support for the research in this series came in part from the Forbes School of Business & Technology at Ashford University, where Mills serves on the Advisory Board.