
Guest Post: Jonathan Zittrain (Still Worried)

Two of the smartest people we know on the subject of technology — Harvard's Jonathan Zittrain and The Wall Street Journal's Kara Swisher — joined us Monday for a thoughtful hour on what's at stake in the move from PCs to "cloud computing." If you've wondered what this phrase means, or why you should care, this show is a must-hear.

As a cyberlaw thinker, Zittrain has his worries about the shift to the cloud (worries he's spelled out recently in a New York Times op-ed, and in an earlier interview with On Point about his book "The Future of the Internet--and How to Stop It"). His arguments generated a big response during the show — both on the phones and in the online comments. I asked him if he wanted to follow up with a response here, and he sent the following:

Many thanks to the On Point community for the questions and comments arriving through various media during the show. I have two thoughts arising from them, one about definitions and the other trying to pin down the essence of my worries about the cloud despite Kara's and others' utterly reasonable optimism.

First, on definitions — I wouldn't get too hung up on them. For example, we bounced back and forth between two things: the first, which I opened with, was the movement of data and code from our custody to someone else's. The second, raised by a number of callers and tweeters, had to do with enterprise computing and the cloud — ways in which good Internet access is channeling employee work (and perhaps customer data) to an outsourced server. There are some privacy and security concerns with this, but I think they're entirely manageable. So my worries on the first front shouldn't reflect poorly on the opportunities that businesses have on the second front.

Speaking of which, here's a bit more on my worries about code migrating to others' control, drawn in part from my book. The key is the privilege that vendors now have, thanks to the Internet, to control the code you run on an ongoing basis. This is true of cloud platforms like Facebook Apps, and it's true of the products-as-services I mentioned in the program, like the iPhone. If producers can alter their products long after the products have been bought and installed in homes and offices, it occasions a sea change in the regulability of those products and their users. With products tethered to the network, regulators — perhaps acting on their own initiative to advance broadly defined public policy, or perhaps acting on behalf of private parties claiming harm, as when TiVo sued EchoStar for patent infringement — finally have a toolkit for exercising meaningful control over the famously anarchic Internet and the devices attached to it.

I'm not nearly as confident as Kara that the market can just make things work out the way we'd want them to, even if there are different, competing cloud computing providers. Consider Google’s popular map service. It is not only highly useful to end users; it also has an open API (application programming interface) to its map data, which means that a third-party Web site creator can start with a mere list of street addresses and immediately produce on her site a Google Map with a digital push-pin at each address. This allows any number of “mash-ups” to be made, combining Google Maps with third-party geographic datasets. Internet developers are using the Google Maps API to create Web sites that find and map the nearest Starbucks, create and measure running routes, pinpoint the locations of traffic light cameras, and collate candidates on dating sites to produce instant displays of where one’s best matches can be found.
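To make the mash-up idea concrete, here is a minimal sketch of the script behind such a page, written in TypeScript against the Google Maps JavaScript API. The developer key, the map element, and the addresses are placeholders of mine, not anything from the show or the book.

```typescript
// Minimal mash-up sketch (illustrative, not from the original post). It assumes
// the Maps JavaScript API script has already been loaded with a developer key:
//   <script src="https://maps.googleapis.com/maps/api/js?key=YOUR_KEY"></script>
// and that the page contains a <div id="map"> element.

declare const google: any; // global provided by the Maps API script tag

const addresses: string[] = [
  "1380 Massachusetts Ave, Cambridge, MA", // placeholder addresses
  "700 Boylston St, Boston, MA",
];

function showPins(): void {
  // Create the map, centered roughly on Boston.
  const map = new google.maps.Map(document.getElementById("map"), {
    center: { lat: 42.36, lng: -71.06 },
    zoom: 12,
  });

  // Geocode each street address and drop a push-pin marker at the result.
  const geocoder = new google.maps.Geocoder();
  for (const address of addresses) {
    geocoder.geocode({ address }, (results: any, status: string) => {
      if (status === "OK" && results && results.length > 0) {
        new google.maps.Marker({ map, position: results[0].geometry.location });
      }
    });
  }
}

showPins();
```

The mash-up author writes a few dozen lines like these; the geocoding, the map tiles, and the pins all come from Google's side of the wire.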

Because it allows coders access to its map data and functionality, Google’s mapping service is generative. But it is also contingent: Google assigns each Web developer a key and reserves the right to revoke that key at any time, for any reason, or to terminate the whole Google Maps service. It is certainly understandable that Google, in choosing to make a generative service out of something in which it has invested heavily, would want to control it. But this puts within the control of Google, and anyone who can regulate Google, all downstream uses of Google Maps — and of maps in general, to the extent that Google Maps’ popularity means other mapping services will fail or never be built.
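That contingency is visible from the developer's side of the deal: the mash-up works only as long as the key does. As a small, hedged illustration continuing the sketch above, the Maps JavaScript API exposes a global gm_authFailure() callback that fires when the key is rejected; the fallback message and the "map" element id here are my own assumptions.

```typescript
// If the developer key is revoked or invalid, the Maps JavaScript API invokes a
// global gm_authFailure() hook. At that point the downstream site can do little
// more than tell its users that the map is gone.

(window as any).gm_authFailure = () => {
  const mapDiv = document.getElementById("map");
  if (mapDiv) {
    mapDiv.textContent =
      "Map unavailable: our access to the mapping service has been withdrawn.";
  }
};
```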

Software built on open APIs that can be withdrawn is much more precarious than software built under the old PC model, where users with Windows could be expected to have Windows for months or years at a time, whether or not Microsoft wanted them to keep it. In this sense the Windows monopoly was much less powerful than people think. To the extent that we find ourselves primarily using a particular online service, whether to store our documents, photos, or buddy lists, we may find switching to a new service more difficult, as the data is no longer on our PCs in a format that other software can read. This disconnect can make it more difficult for third parties to write software that interacts with other software, such as desktop search engines that can currently paw through everything on a PC in order to give us a unified search across a hard drive. Sites may also limit functionality that the user expects or assumes will be available. In 2007, for example, MySpace asked one of its most popular users to remove from her page a piece of music promotion software that was developed by an outside company. She was using it instead of MySpace’s own code. Google unexpectedly closed its unsuccessful Google Video purchasing service and remotely disabled users’ access to content they had purchased; after an outcry, Google offered limited refunds instead of restoring access to the videos.

Continuous Internet access thus is not only facilitating the rise of appliances and PCs that can phone home and be reconfigured by their vendors at any moment. It is also allowing a wholesale shift in code and activities from endpoint PCs to the Web. There are many functional advantages to this, at least so long as one’s Internet connection does not fail. When users can read and compose e-mail online, their inboxes and outboxes await no matter whose machines they borrow or what operating system the machines have, so long as they have a standard browser. It is just a matter of getting to the right Web site and logging in. We are beginning to be able to use the Web to do word processing, spreadsheet analyses — indeed, nearly anything we might want to do.

Once the endpoint is consigned to hosting only a browser, with new features limited to those added on the other end of the browser’s window — the basic idea behind Google Chrome OS, among others — consumer demand for generative PCs can yield to demand for boxes that look like PCs but instead offer only that browser. Then, as with tethered appliances, when Web 2.0 services change their offerings, the user may have no ability to keep using an older version, as one might do with software that stops being actively made available.

This is an unfortunate transformation. It is a mistake to think of the Web browser as the apex of the PC’s evolution, especially as new peer-to-peer applications show that PCs can be used to ease network traffic congestion and to allow people directly to interact in new ways. Just as those applications are beginning to show promise — whether as ad hoc networks that PCs can create among each other in the absence of connectivity to an ISP, or as distributed processing and storage devices that could apply wasted computing cycles to faraway computational problems — there is less reason for those shopping for a PC to factor generative capacity into a short-term purchasing decision.

So: that's why I'm worried, even as we see plenty of innovation taking place around us, much of it in the cloud.
-Jonathan Zittrain

This program aired on August 10, 2009. The audio for this program is not available.
