Monday, April 23, 2012

The network after next


There were a lot of good speakers at MoDevUX last Friday, but maybe the most interesting to me was John Schmidt's "Inconvenient Laws of Physics and the Mobile Experience." Almost everyone missed it, due to unfortunate scheduling and the upstairs room running over time; everyone ran out to see the Frog keynote. But if you skipped it, you missed something that is going to matter more and more for any topic starting with an "M," pretty much right now.

When working in any medium, it's good to understand the underlying technology. This has traditionally been true: painters understand how paints are made, and in school have to stretch their own canvas over frames they build in the shop. And so on. But in a lot of digital technologies there are assumptions, misunderstandings and blind spots. Radio is a key one of these.

Close to ten years ago I realized I didn't know enough about what I was talking about, and actually took an RF Engineering class through my employer at the time (Sprint). Subsequently, I have found it not only makes a lot of my work better, but that relatively small amounts of this knowledge can be passed along to other designers to make their work better. This is one reason the appendix to Designing Mobile Interfaces is so long; there's a dozen pages on the Introduction to Mobile Radiotelephony, for example. Like everything in the book, the whole content is out on the wiki, so click the link to read it. Though I am also just fine if you buy a copy.

But one thing I did not do was speculate very much on the future. We can assume that real 4G networks will be deployed pretty universally, that more people will get wireless devices and fewer will use wireline devices, and that more and more services will be deployed to wireless devices.

But after that the picture is murky, and these trends lead rapidly to highly questionable territory. This is at least a book-length topic, and people smarter than me seem to have more questions than answers when they talk about this. But a few of the topics I see being crisis points in the next decade or two are: 

  • Radio bandwidth - First, I am deliberately simplifying the discussion and using the colloquial "bandwidth," which isn't really very illuminating. Martin Geddes can gnash his teeth at me. Whichever attribute of information transfer you pick, wires don't have to contend with the inverse square law, or raindrops, or buildings in the way (JCB fade notwithstanding). Our networks and architectures of information transfer were built for wires. Using radio (mobile networks, or even just WiFi, Bluetooth, Weightless, etc.) is always lower speed, variable speed, and of significantly limited range. There's a worked example of just how unforgiving the physics is after this list.

  • System bandwidth - And that's just the last mile. The switchgear and backhaul are very limited in capacity, and are architected for voice communications. That doesn't just mean circuit-switched; it means voice-like patterns of data consumption. Phone calls are not a few seconds long and repeated several times per minute, the way data requests are. Even though all those networks were upgraded in the past few decades, and there are now empty buildings with tiny ATM (et al.) switches in the corner of the room, these are all configured for voice traffic patterns.

  • Payloads - Okay, and as though that wasn't enough, the content of the message is both different and variable. Email is different from weather API calls, which are vastly different from streaming video. Can one network even do all these things efficiently? Compromises are made today, to the point that anger over Netflix rebuffering messages shows up in cartoons.

  • Whither the PSTN? - The network that lets you pick up a phone anywhere in the country and call anywhere else in the country (and which interconnects with all other countries, so you can call any wireline phone in the whole world) is called the Public Switched Telephone Network. From a technical, architectural and even legal (see the next item) point of view, it is the core of information transfer today. Information services developed to ride on top of it, from FAX to the first use of modems, not to mention the public internet. Acoustically encoding data over voice lines wasn't done because everyone was stupid, but because there was a pervasive, reliable, if overly-controlled, semi-global network already in place.

    So, what has this done for us lately? Mobile devices (and satellite phones, and some others) are simply not part of this network, despite being (now) assigned numbers that can be directly dialed. So what happens in ten or twenty years, when there are no POTS (Plain Old Telephone Service) lines and 99.9992% of network traffic is simply interconnect between various subsidiary networks? The PSTN is not designed to be an interconnect, and aside from possible technical issues, mobile telecoms could start directly connecting and interconnecting to avoid paying fees to the (several) operators of the PSTN. Without that revenue, what happens? Government ownership? Of what? To what end?

  • Legal framework and public safety - The entire legal framework is even based around the ubiquity of the PSTN. In the US, for example, there is no right to mobile communications, and when there was a drastic need for better account security, it was tacked on as regulation (without a change in law) to something originally intended to prevent cross-marketing. Mandates with serious public safety considerations, like e911 (the requirement that emergency services can get precise location data for each mobile call), are implemented poorly, and slowly. And there is no real conception of Quality of Service (QoS); when a disaster strikes, and mobiles have come to be used by emergency personnel as ad hoc communications networks, those personnel are as incapable of getting on the network as everyone else.

  • Costs - Fifteen years ago, a multi-billion dollar investment in a network seemed insane to a lot of people, but the operators proved it made sense. Today it's a bit marginal, and if current trends continue, tomorrow it'll be untenable. Networks cannot be upgraded on an ever-shorter timeframe, and we seem to have already reached an efficiency tipping point. While operations can still largely be considered to have no marginal cost per additional user, that is only because the operator spent a LOT of money building capacity for those planned users. A network built for 400 million users is at least four times as expensive as a network built for 100 million users, and it takes a long time to make that money back. We're approaching the point where the curves cannot be made to fit into the available time, and either we all need to pay $100 a month for phone service, or the next network upgrade simply has to wait.
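
Since I keep waving my hands at the radio physics, here is the one relationship worth internalizing: free-space path loss. The formula is textbook physics, not anything from John's talk or specific to any operator, and real links are worse once weather and buildings pile on; the little Python sketch is mine.

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Best-case (free-space) path loss in dB, per the standard formula.

    Real mobile links lose more: rain, foliage, buildings and
    interference all stack on top of this figure.
    """
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Received power falls with the square of distance: every doubling of
# the distance costs another ~6 dB, i.e. 75% of the remaining signal.
for d in (0.5, 1.0, 2.0, 4.0):
    print(f"{d:4.1f} km at 1900 MHz: {free_space_path_loss_db(d, 1900):.1f} dB")
```

No wire behaves like this; a cable's loss grows roughly linearly with its length, and nobody parks a truck in the middle of it.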



Remember, this is not all going to come to a head in the next 18 months. Some of these issues are happening now, but we can work around them. A few groups are actively working on quite tangential solutions that may solve some of these, but almost always for specific problems, and generally in competition with one or many other providers of network technology. 

I can only speculate on what might happen. Here are some wild ideas, which should not be taken too seriously. These are not proposals but true speculation; some are contradictory, as this is not a well-formed piece of speculative fiction, just a bunch of thoughts:

  • Consumption model changes - Aside from consumer retail pricing -- which I didn't even talk about as a problem above, as it's more of a symptom -- what if there was a method to discourage use at peak times, or of peak resources? Or at least to let consumers really, truly understand that information may not always be available immediately. This is not as insane as it sounds; an always-present, always-on device could be made so that when an information request is made, it can be at low priority, in which case the network will push a notification when it's ready, or at high priority, where you get it now. High priority costs more, through whatever revenue model the operator wishes to try. QoS can be tacked on, so that some businesses (or some services, if the other end of the data wants to pay), as well as public safety, can always be high priority. Yes, this may smack of non-neutral behavior, but the problem with those schemes today is opacity; granting the end user the ability to decide, and giving direct economic incentives, may help. There's a sketch of what this might look like to a client after this list.

  • Distribution model changes - Everything today uses the original internet model: a server with information sends it straight to a client, which consumes it (views, computes, etc.). Breaking it into packets is irrelevant at this point; it might as well be circuit switched. We need massively different ways of getting content to client devices. Akamai (and some others) have been on a reasonable path with caching services, but this may need to be extended massively further down the chain, and be much larger scale as well as more reactive. Imagine a network where the current YouTube viral video is the easiest one to view, because it's stored at the BTS (cell tower). Or a model like P2P sharing, where everyone local who is watching it contributes a few kb of the stream to you. Or perhaps something entirely different, where data centers essentially cease to exist, and the data for high-consumption services like Netflix simply becomes distributed across the network. There's a sketch of the lookup chain this implies after this list.

  • Computational changes - Software and storage have assumed, for far too long, that all computers are arbitrarily fast, have arbitrary amounts of memory, and that storage is cheap, available and low-latency. Then a few smart people come in, tack some business rules onto the error messages, and increase the timeout to deal with the long latency of mobile networks, and the business buys some carbon credits from a wind farm to make the data center seem more green. Fundamental, to-the-core changes need to happen to the way software is written, and to the way data is encoded, decoded and stored.


    The solution cannot just be the adoption of a few new tools, but will require changes in approach, and in the way we train developers and architects. One of the better buzzword-worthy framings I've seen is to assume the network is the computer: not in the sense that you have more resources, but in that your computer is now made of a few fast bits connected by very long wires and inefficient radio waves. The client/server mentality is breaking down at the edges, even on a few projects I have worked on. Start architecting the whole distributed product as one piece of software, even if it is built and managed as several, and some solutions come naturally. There's a sketch of designing for the radio link, instead of around it, after this list.

  • Multiple networks - Far past the concept of QoS on the same network is the use of multiple networks. While there are a few organizations talking along similar lines, the Weightless guys are some of the clearest about the opportunities. Basically, they want to build a low-bandwidth, low-speed network for carrying low-priority (but still guaranteed delivery!) messages. Much of the thinking is about M2M (Machine to Machine) services, replacing the use of high-speed mobile networks today for things like meter reading, and of course then opening up the concept of M2M to untold other services.


    Me, I think of this as having even broader uses. Consider the few services today that let you (or force you to) synch heavy traffic only over WiFi. Now consider that, instead, you also have a Weightless radio in your handset. Sure, you get to browse the web at normal speeds, but a lot of those background tasks that transfer small amounts of data periodically, like synching email or making sure there's reasonably up-to-date weather on that widget, can use the low-speed network and leave the high-speed network for streaming video, voice and so on, for you and everyone else. There's a sketch of this routing policy after this list, too.

  • Different infrastructure - For a while there, when cities started building WiFi mesh networks, it seemed like an entirely different mentality in network infrastructure might be coming along. There's a lot to be said for small, readily-distributed devices, so dense that overloads can be avoided by adding more. I can even see a network that uses handsets as repeaters to other handsets. Or, if that's too crazy for, say, power management purposes, using the emerging crop of connected cars as mobile hotspots/repeaters. If you think the death of municipal mesh means it was a bad idea, it wasn't. It got conflated with free networks, and the paid network providers used everything short of thick-necked men breaking legs to kill key projects. Really. But now, this is the sort of scheme the operators might be able to leverage to save themselves.


  • No more PSTNs - Very soon, there will be very few direct connections to the PSTN, certainly compared to the traffic it carries. The operators of these networks are changing into service providers, and support data almost exclusively. But the underlying framework is the same. These networks need to change to a two-tier system, with a backbone used for interconnect and for backhaul of the mobile operators; the classic wireline telecom and data services can then be connected to this as though they were add-on services as well. Alternatively, and perhaps by default if the wireline backbones don't change fast enough, the wireless telecoms will disregard the PSTN, expand their backhaul into a private backbone, and make private interconnect agreements with other networks. Some of this is happening already, but it is thwarted partly by legal, regulatory and billing agreements.

    To a certain degree, no wireline network architect can solve this. There are legal and regulatory constraints that require the network to be configured in ways that are not efficient for the future.

  • A new legal framework - So, a whole new legal framework is needed. Though I have little hope for this, I'd like to see it founded on principles instead of technology. Build a test: if it walks like a communications service and talks like a communications service, it should be considered a common carrier, providing transport to everyone without discrimination. This would also mean instituting lifeline services, and probably requiring at least some coverage of every square inch of the country.

  • New revenue models - As much as we like to gripe when our bills arrive in the mail, the real troublemakers are the fees for use of backbones, roaming and so on. If you got rid of those fees, the mobile operators would keep more of their money, but the backbone operators would go out of business three days later. A new, high-tech backbone as discussed above would not obviously reduce costs, so operators will either need to cut costs somewhere else, or raise rates on end users, or both. I would not be surprised to see pricing models akin to the tiers-of-service concept I discussed above.

  • Slower growth - Or, it may mean a reduction in the speed of rollout of new technologies. Those multi-billion dollar network swaps cannot follow the trend of other technologies and accelerate. There is no way to recoup costs in timescales less than several years, and it takes years to roll out changes on this scale anyway. Competitive pressures aside, operators may someday have to delay network upgrades to get some profit out of their revenue.

  • Consolidation and change - Or, if market pressures exceed fiscal sense, mobile operators may continue to update and swap networks at an accelerated rate, and go out of business. Of course, no competitor will let them leave millions of MRCs (monthly recurring charges) on the table, so there will be further consolidation, and possibly (regulatory approvals pending) more branching of other industries into owning telecoms. Companies like Dish Network have signaled that there is some opportunity in owning general transport, so any number of organizations might be willing to invest in an over-stretched operator in the coming decades.
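
Since several of these ideas are easier to see in code than in prose, here are a few sketches. First, the consumption-model item: a toy of what a priority-aware request API might feel like to a client. Everything here (the class, the names, the queue-drain) is invented for illustration; no network offers anything like this today.

```python
import heapq
from enum import Enum

class Priority(Enum):
    HIGH = 0  # deliver now; billed at a premium
    LOW = 1   # deliver when the network has spare capacity; cheap

class PriorityAwareNetwork:
    """Toy network that serves high-priority requests first and
    push-notifies low-priority callers when their data is ready."""

    def __init__(self):
        self._queue = []  # (priority, sequence, url, callback)
        self._seq = 0

    def request(self, url, priority, on_ready):
        heapq.heappush(self._queue, (priority.value, self._seq, url, on_ready))
        self._seq += 1

    def drain(self):
        # Stand-in for the network scheduling actual deliveries.
        while self._queue:
            _, _, url, on_ready = heapq.heappop(self._queue)
            on_ready(f"payload for {url}")

net = PriorityAwareNetwork()
net.request("weather-widget", Priority.LOW, lambda d: print("later:", d))
net.request("video-call", Priority.HIGH, lambda d: print("now:", d))
net.drain()  # the video call is served first; the widget waits
```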
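Next, the distribution-model item: a toy lookup chain that tries the closest, cheapest copy of some content first and falls back toward the origin. The tier names are hypothetical; today only the CDN and origin tiers really exist.

```python
# Try the nearest copy first, then fall back toward the origin server.
def fetch(video_id, tiers):
    for tier_name, cache in tiers:
        if video_id in cache:
            print(f"served '{video_id}' from {tier_name}")
            return cache[video_id]
    raise KeyError(f"{video_id} not found anywhere")

tiers = [
    ("nearby handsets (P2P)",  {}),                       # peers watching the same stream
    ("cell tower (BTS) cache", {"viral-cat": b"..."}),    # this week's viral video
    ("regional CDN node",      {"long-tail-doc": b"..."}),
    ("origin data center",     {"everything-else": b"..."}),
]

fetch("viral-cat", tiers)      # never leaves the radio access network
fetch("long-tail-doc", tiers)  # falls through to the CDN
```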
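Third, the computational-changes item. The difference between tacking a longer timeout onto LAN-era code and actually designing for a slow, lossy link looks something like this; the failure rate and backoff numbers are made up.

```python
import random
import time

def flaky_radio_call(payload):
    """Stand-in for any request over a mobile link: it fails, routinely."""
    if random.random() < 0.3:  # made-up failure rate
        raise TimeoutError("fade, handoff, or congestion")
    return f"ok: {payload}"

def call_over_radio(payload, attempts=4, base_delay=0.5):
    # Failure handling designed in, not bolted on: retry with
    # exponential backoff, and hand the caller None rather than an
    # error dialog, so it can degrade to cached or stale data.
    for attempt in range(attempts):
        try:
            return flaky_radio_call(payload)
        except TimeoutError:
            time.sleep(base_delay * 2 ** attempt)
    return None

result = call_over_radio("sync mailbox")
print(result if result is not None else "offline; showing cached mail")
```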
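And last, the multiple-networks item: a rough routing policy that sends small, periodic background payloads over the slow, cheap radio and keeps the fast one for interactive use. The threshold and radio names are invented; no handset ships with a Weightless radio today.

```python
from enum import Enum

class Radio(Enum):
    WEIGHTLESS = "low-speed, cheap, guaranteed delivery"
    LTE = "high-speed, expensive, contended"

def pick_radio(bytes_per_message, interactive):
    # Interactive traffic and big payloads get the fast network;
    # background trickles ride the slow one. 10 kB is an arbitrary cut.
    if interactive or bytes_per_message > 10_000:
        return Radio.LTE
    return Radio.WEIGHTLESS

print(pick_radio(200, interactive=False))        # weather widget -> WEIGHTLESS
print(pick_radio(50_000_000, interactive=True))  # video stream -> LTE
```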



I am sure I have left out something crucial, or made some tragic and stupid assumption that someone will tell me about. But even without the details, the fundamental truths are there:

Mobile, as a data service with growth big enough to impact traditional businesses, has only been around for a little more than a decade. The modern era of the always-connected data device is younger still, at about five years. We have no idea what the future holds, for devices, use patterns or technical solutions to these issues, but it will be something very different from today.

And sometime we're going to need solutions that last not just into the coming decades, but that can be the basis of communications for the next century.
