In my first post, I discussed the limits on sustainable pharmaceutical development. The state of the industry does create an opportunity for a dynamic biotech industry, as well as many opportunities for the venture capitalists that support it. Ultimately, though, the outcome falls short of optimal: the short-term focus on earnings and the overall culture of pharmaceutical companies limit the industry's ability to make big bets on the future, so the kind of sustained growth we saw from the pharmaceutical industry in the postwar period is not likely to be replicated. No amount of Trump tax cuts will change that. In fact, if those tax cuts cause a reduction in NIH funding of basic research, it is likely that the pharmaceutical and biotech industries will suffer over the long run.
In this post, I now focus on limits on sustainable growth in telecommunications and Internet networking, an industry that produced massive economic growth in the last decade of the 20th Century and the first decade of the 21st Century. The poster child for that growth is Cisco, which has reached almost $50 BB in sales. Unfortunately for both Cisco and the US economy, its growth has now abated and it has made severe job cuts.
The engines that propelled Cisco's growth — the router and the accompanying IP protocol — have also been the basis for the development of the Internet. Let's return to Al Gore's claim that he was crucial to the creation of the Internet (No. He never said that he "created" the Internet). Well, Gore did not play a direct role, but Federal funding surely did. The original IP protocol software and the related development of the router occurred in research laboratories in Massachusetts and California. In Massachusetts, the original ARPANET network nodes and routers were deployed by Bolt Beranek & Newman. On the West Coast, similar research occurred at Stanford and Xerox PARC. This work was important because it enabled the development of the IP protocol, which allowed the transfer of data among disparate kinds of networks regardless of the protocols used within those networks. Data moving between the networks was chopped into packets and sent from one node to another by routers — powerful computers that forwarded the packets from node to node electronically.
Needless to say, the idea took off in the 90s and powered the development of the Internet (Cisco did acquire and develop other technologies, but the router proved to be its key growth engine). Of course, an interesting twist of fate is that the original Cisco software program that powered its routers was the property of Stanford, which complained to authorities, and criminal charges were pursued against Cisco's founders. The case was dropped, however, when the parties came to an agreement that allowed Stanford to share in the Cisco bounty. The bounty proved to be substantial. Cisco began its march up the so-called S curve in the 90s and sustained that growth until the last five years. From 2015 to 2016, its revenues were essentially flat after lackluster growth in the prior few years. Last quarter it actually had a decline in revenues and announced that it would cut another 1,100 jobs after much larger layoffs last year at Cisco and much of the tech industry. See http://fortune.com/2016/08/18/expect-more-tech-layoffs/.
A couple of factors account for the Cisco decline — both relating to software defined networking (SDN) technology developed to a great extent at Stanford. The technology is open source, though, so Cisco could not dominate the market. Using SDN technology and the open source Linux operating system, former Cisco engineers founded a new company called Arista that built its own routers to compete with Cisco. While Cisco was leveling off, Arista grew from $193 MM in sales in 2012 to $1.13 BB in sales in 2016 — a 42% compounded growth rate. The new software allowed users to develop their own networking protocols with the Linux software program and with the common platform shared by all Arista routers. In addition, many users started to build their own network nodes using off-the-shelf routers and the open source SDN software (as compared with the proprietary Cisco software).
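As a side note on that 42% figure, the arithmetic depends on how many compounding periods you count between 2012 and 2016. A quick sketch (the revenue figures are from the text; the function is just illustrative):

```python
def cagr(start, end, periods):
    """Compound annual growth rate over `periods` compounding periods."""
    return (end / start) ** (1 / periods) - 1

# Arista revenue: $193 MM (2012) -> $1,130 MM (2016).
# Counting 2012 -> 2016 as four compounding periods:
print(f"4 periods: {cagr(193, 1130, 4):.1%}")  # ~55.5%
# The 42% figure corresponds to five compounding periods
# (i.e., treating 2012-2016 as five years of growth):
print(f"5 periods: {cagr(193, 1130, 5):.1%}")  # ~42.4%
```

Either way, the growth is explosive; the exact rate just depends on the counting convention.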
Those are significant developments for Cisco in the short run, but I believe that the entire networking industry faces a greater challenge and one that very few — if any — companies are addressing. The problem lies with packet networking as a whole.
Packet vs. Circuit Switching
Let's consider some of the claimed benefits of packet switching. The first and most obvious is that it's more efficient. Here's the idea. If you have a circuit-based network (Baby Boomers — think back to the Lily Tomlin operator routine on Laugh-In), you have to hold the circuit open for the entirety of the information you want to transmit. If you have to reserve the circuit for the entire transmission period, the network seems to need too much idle capacity to enable reliable transmission. So, it seems like packets would be the way to go — especially if you are transmitting data rather than voice phone calls, because distributing data like video requires much more bandwidth. There's a fly in the ointment, however. It turns out that packet networks have to reserve a lot of capacity as well, for several reasons. Packet networks can get congested, and the points of congestion lead to potential delays — and we all know that when we click on something, the last thing we want to see is the hourglass or the swirling ball. We want our download when we click, or at least soon thereafter. This delay is called latency. Other problems can actually be more significant. For example, because the data is divided into packets, delay can result in losing the data altogether when there is too much congestion. And internet service providers simply have to keep some extra capacity for the bursts in data that arise from time to time. As a result, data center servers use only 12-18% of their capacity! See Section 2.2 of the following study prepared for the Natural Resources Defense Council. https://www.nrdc.org/sites/default/files/data-center-efficiency-assessment-IP.pdf.
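One way to see why packet networks need all that headroom is a textbook queueing model — a simplification I'm adding for illustration, not something from the studies cited here. In an M/M/1 queue, mean delay is 1/(μ − λ), which blows up as utilization approaches 100%; that is why operators run links well below capacity:

```python
# Illustrative only: model a router port as an M/M/1 queue (a textbook
# simplification, not anything claimed in this post). Mean sojourn time
# is T = 1 / (mu - lam), which explodes as utilization rho = lam/mu -> 1.
def mean_delay(service_rate, arrival_rate):
    assert arrival_rate < service_rate, "queue is unstable at rho >= 1"
    return 1.0 / (service_rate - arrival_rate)

mu = 1_000_000  # packets/second the port can serve (hypothetical figure)
for rho in (0.5, 0.8, 0.9, 0.99):
    t_us = mean_delay(mu, rho * mu) * 1e6
    print(f"utilization {rho:.0%}: mean delay {t_us:.1f} microseconds")
```

At 50% utilization the mean delay is 2 microseconds; at 99% it is 100 microseconds — fifty times worse for the same hardware. Real traffic is burstier than this model assumes, which pushes operators toward even lower utilization.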
Another claim for packet networks is that they are more robust or reliable than circuit networks. The gold standard in the telecommunications market is that the network operate on a 5 9's basis — up and running 99.999% of the time. With 60 minutes in an hour, 24 hours in a day and 365 days in a year, a year has 525,600 minutes. At five nines, the network is up almost all of the time, leaving room for only 5.26 minutes of downtime a year. Whoa! If you have an Internet Service Provider that is down only 5.26 minutes a year, please tell me who it is. I'm all in for that. Look at Table II in the following study. http://iwgcr.org/wp-content/uploads/2014/03/downtime-statistics-current-1.3.pdf. YouTube is the only website that is up on a 5 9's basis. Here's the data on the big cloud service providers. http://www.networkworld.com/article/2866950/cloud-computing/which-cloud-providers-had-the-best-uptime-last-year.html. Amazon Web Services is the best at 99.972% uptime, which is really good. Most others are not so good.
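The five-nines arithmetic is easy to check (a small sketch; the 99.972% figure is the AWS number cited above):

```python
# Downtime budget implied by an availability target.
MIN_PER_YEAR = 60 * 24 * 365  # 525,600 minutes in a year

def downtime_minutes(nines):
    availability = 1 - 10 ** -nines
    return MIN_PER_YEAR * (1 - availability)

for n in (3, 4, 5):
    print(f"{n} nines: {downtime_minutes(n):.2f} minutes of downtime per year")

# AWS's reported 99.972% uptime still implies:
print(f"99.972%: {MIN_PER_YEAR * (1 - 0.99972):.0f} minutes per year")  # ~147
```

So even the best cloud provider's uptime translates to roughly two and a half hours of downtime a year — nearly thirty times the five-nines budget.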
There is nothing inherently unreliable about circuit networks. It's not clear that you can say the same thing about packet networks. We can at least understand why packet networks may be as unreliable as — or perhaps more unreliable than — circuit networks. First, packet networks send the routing information in-band (i.e., with the data to be communicated). If the data is held up due to congestion, so is the routing information, and the congestion may result in loss. Second, the routing algorithm in an IP network can be very complex, which can result in either congestion or misconfiguration. The uptime data seems to confirm the notion that packet networks have an inherent level of unreliability (at least if you want to maintain control over costs).
So, why isn’t there more optical-based circuit switching?
Let’s delve into one more piece of background information. In the late nineties and the turn of the Century, the technology industry perceived the inexorable growth of the Internet. As much as we have seen phenomenal Internet growth, the Net has not grown as fast as predicted back then. As a result, both telecommunications experts and the financial community thought that the telco network would have to include new all-optical elements in the form of optical switches. Surely, according to the consensus at the time, the only way to keep up with the inexorable growth of the Internet would be to use optical switches that don’t have to be swapped out any time you increase the line rate — the speed at which data is transmitted through the network.
Two factors got in the way of optical switch adoption. One was that Internet data was “only” growing at an approximate rate of doubling every year rather than every six months. The other factor is that the development of optical switches did not move along as fast as projected. That led to a massive expectation/achievement gap in the technology and financial markets as ~$5 BB was invested by venture capitalists and public companies like Nortel and Corning in optical switching — all of which ended up as pretty worthless. The switches that were produced did not meet expected specifications with respect to size, cost, flexibility, etc. Moreover, these switches did not do what packet switches do — “groom” data traffic so that it could end up at the right network node.
There is one kind of optical switch, which actually is based on a hybrid technology, that did get significant — but not huge — adoption in the telecommunications network: the ROADM, or reconfigurable optical add-drop multiplexer. This device allows large flows to pass through the optical part of the switch while traffic is added or dropped at network nodes along the way with filters or other methods.
In the meantime, the router market continued to grow at a fast rate to keep up with Internet traffic and with the significant growth in data centers supporting social networking, distribution of video and music, and large amounts of corporate data. Cisco does a really good job of tracking this data and publishing it. Here's their data for the period from 2015 to 2020. http://www.cisco.com/c/dam/en/us/solutions/collateral/service-provider/global-cloud-index-gci/white-paper-c11-738085.pdf. Look in particular at Table 1 of that piece. Data center traffic in the cloud is massive and is growing at a compound rate of almost 30%! That's a massive growth rate.
Now, can we overcome the problems with optical switches and see them adopted in networking applications? There are some limited uses of them today in addition to ROADMs. Look at the data on how much of the traffic stays within the data center: almost 80% of data center traffic never leaves, and for the large hyperscale data centers that number is more than 80%. Consider what this means. For every piece of data that goes out to the cloud, four times that amount of data moves within the data center. In most cases, you just don't need the granularity of the router to move that data around within the data center. Unfortunately — and this is where we get into the public policy implications — most of the big data center owners and operators do not use optical switching to move much data around. Instead, they use the same routers that we have been talking about.
The implications are high cost (big routers are very expensive — especially proprietary ones, which can cost almost $1 MM). They also take up a lot of space and space comes at a premium in data centers. Finally, routers are the biggest energy hogs in data centers. A Cisco core router can require more than 9,600 Watts versus an optical switch, which may require as little as 35 Watts or as much as 85 Watts — two orders of magnitude less.
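A quick back-of-the-envelope check on that power gap (the wattages are from the text; the electricity price and 24x7 duty cycle are my own illustrative assumptions):

```python
# Router vs. optical switch power draw, per the figures in the text.
ROUTER_W = 9600
OPTICAL_W_RANGE = (35, 85)

for w in OPTICAL_W_RANGE:
    print(f"{w} W optical switch: router draws {ROUTER_W / w:.0f}x more power")

# Annual energy cost at an assumed $0.10/kWh, running around the clock:
HOURS_PER_YEAR = 24 * 365
price = 0.10  # $/kWh, hypothetical
router_cost = ROUTER_W / 1000 * HOURS_PER_YEAR * price
print(f"router energy cost: ~${router_cost:,.0f}/year")  # ~$8,410
```

The ratio is 113x at the high end of the optical range and 274x at the low end — both comfortably two orders of magnitude, before even counting the cooling load the router imposes.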
Several of the big data center owners and operators recognize that they eventually need to adopt optical switching in the data center to send large elephant flows along relatively stable paths within the data center. Only one of the major data center owners and operators has published that it is following this path.
In addition to the data center, some academics have proposed creating hybrid networks that combine packet and circuit switching. One of the leading advocates of this is Professor Nick McKeown of Stanford. See http://yuba.stanford.edu/~nickm/papers/191340-2a.pdf. Some telecommunications companies recognize that, like data centers, they can increasingly adopt software defined networking in their core networks. None of them, however, have sought to create these hybrid networks.
These represent incremental changes. SDN represents a significant advance, but it does not fundamentally change the nature of networking.
Can there be radical changes that eliminate the router altogether? I participated in planning related to this kind of network. While we received some conceptual funding from one government agency, we never received enough funding to get the idea off the ground, although we did obtain a patent on the idea. I suspect it would take so long to get the idea off the ground in the current climate that the patent will never have any value. The basic idea is that you can eliminate the router if you can create a way of using substantially more than the 40 wavelengths (frequencies or colors) now used in fiber optic cable. That would require sub-wavelength resolution, and we contemplated a method of achieving it. I fundamentally believe it can be done with an investment of perhaps $25 MM. The rest of the network would require some new algorithms for assigning colors, a frequency generator, optical switches and some very high speed modems at or close to user devices, because computers today could not receive data at the rates I can imagine in this kind of network. The frequency generator would be similar to atomic clocks that already exist. The algorithms would require some optimization but should not be difficult. All told, you could probably bring this networking idea to fruition for something like $50 MM. We calculated that this networking method could cost-effectively deliver one gigabit per second service to every computer in the U.S. with high reliability, quality of service and probably greater security, because circuit switching could function as permission-based information distribution.
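To give a flavor of why the color-assignment algorithms "should not be difficult," here is a standard heuristic from the optical-networking literature — first-fit wavelength assignment under the wavelength-continuity constraint (the same color must be free on every fiber link along a path). This is a hypothetical sketch, not the patented method described above:

```python
# First-fit wavelength assignment: pick the lowest-numbered color that is
# free on every link of the path, then mark it busy on those links.
# A standard heuristic, offered only as an illustration.
def assign_wavelength(path_links, in_use, num_wavelengths):
    """path_links: link ids along the path; in_use: dict link -> set of busy colors."""
    for color in range(num_wavelengths):
        if all(color not in in_use.get(link, set()) for link in path_links):
            for link in path_links:
                in_use.setdefault(link, set()).add(color)
            return color
    return None  # blocked: no continuous wavelength available

usage = {}
print(assign_wavelength(["A-B", "B-C"], usage, 40))  # 0
print(assign_wavelength(["B-C", "C-D"], usage, 40))  # 1 (color 0 is busy on B-C)
print(assign_wavelength(["A-B"], usage, 40))         # 1 (color 0 is busy on A-B)
```

With 40 wavelengths the blocking probability at scale is the hard part; with the vastly larger number of colors contemplated here, even a simple first-fit policy would rarely block, which is exactly what makes the routerless design plausible.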
That is a trivial investment for a system that would revolutionize networking and propel both commercial success and job generation for a long period of time — perhaps longer than the nearly three decades of Cisco’s dominance. Will it happen? Perhaps some day, but why not now? Public companies won’t do it because of the constraints on R&D spending and the lack of will caused by their inertial cultures. It’s possible that a company like Google would do it if it were motivated. Google certainly is willing to make big bets on infrastructure designed to give Google a cost per bit advantage over any of its competitors. Venture capitalists are not likely to pursue this kind of idea because at the end of the day they are also somewhat conservative and would fear that Cisco would crush this nascent effort so that it can maintain its router franchise even though it’s starting to lose that franchise today.
The key point for me, however, is that tax cuts have absolutely nothing to do with the implementation of this idea. Saving a little bit of money in a profitable entity is not the make or break issue for this kind of idea. A new generation of networking technology may come out of an academic effort, a government-funded effort or a venture capital effort. It is highly unlikely to come from any other source. Tax cuts will not help the entities that might consider adopting a new network paradigm.