Monday, August 08, 2005

The Wormhole Factory

2002 was the darkest, bleakest year of the market crash's aftermath. VCs like me, nursing a hangover worthy of the greatest market rally ever, struggled to either transform or euthanize our portfolio companies. Gripped in gory triage, it was hard to muster much enthusiasm or time for new investments.

However, a disruptive technology with the potential to transparently accelerate the internet 10X was brewing back then, so in 2002 I did make one investment. It was a long shot, but now that this cool technology has been developed, patented, deployed, field tested, and market approved, I can confidently blog about it. You might call it The Big Internet Protocol Hack, but I prefer to think of it as the Wormhole Factory.

Light, it turns out, is just way too sluggish. 300 million meters per second is, well, adequate when it comes to illuminating your fridge. But if you need to light up fiber segments that snake their way from, say, San Jose to Boston and back, you're talking at least 10,000 miles of fiber, demanding a good 55 milliseconds of light travel (and practically speaking, you have to at least double that to accommodate the electronic switches along the way). So if you need to make that cross-country round trip 25 times sequentially, take a snack with you, because photons won't get you there in anything under three seconds (longer than a round trip to the moon).
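To put rough numbers on that Physics Tax, here's a back-of-the-envelope sketch. The 10,000-mile figure, the doubling for switching gear, and the 25 sequential round trips come straight from the paragraph above; the constants and variable names are just illustrative.

```python
# Back-of-the-envelope sketch of the "Physics Tax" described above.
SPEED_OF_LIGHT = 3.0e8              # meters per second (vacuum, rounded)
METERS_PER_MILE = 1_609.34
FIBER_ROUND_TRIP_MILES = 10_000     # San Jose <-> Boston and back, snaking fiber

# Raw light travel time for one cross-country round trip:
light_rtt = FIBER_ROUND_TRIP_MILES * METERS_PER_MILE / SPEED_OF_LIGHT
print(f"light alone: {light_rtt * 1000:.0f} ms")            # ~54 ms

# Double it for the electronic switches along the way, then stack up
# 25 sequential round trips:
effective_rtt = 2 * light_rtt
print(f"25 round trips: {25 * effective_rtt:.1f} s")        # ~2.7 s

# For comparison, a light round trip to the moon (~384,400 km away):
print(f"moon round trip: {2 * 384_400_000 / SPEED_OF_LIGHT:.1f} s")   # ~2.6 s
```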

This explains the "World Wide Wait." When DARPA and BBN developed the Transmission Control Protocol (TCP) in the early 1970s, lines were dirty and slow (we're talking dialup-modem speeds), and the only applications they anticipated running were ones like FTP, SMTP and TELNET, in which the data transfer rate is much more important than the session's setup time. So every time TCP opens a new network session, the protocol volleys packets back and forth dozens of times to verify the integrity of the connection and to execute the "slow-start" mechanism, by which TCP incrementally ramps up the data rate until it reaches an equilibrium defined by the capacity of the network (a very long process now that networks typically run up to 1000X DARPA's expectations). But HTTP is a stateless protocol, so just about every hyperlink you click opens a new TCP session. In all, it takes the typical web site about 25 to 35 round trips to respond to your browser (at 15 round trips, Google has been highly engineered into what is likely the most efficient site on the web). That's at least a 3-second Physics Tax from Boston to San Jose, 6 seconds from London, and 12 seconds from Israel or Australia.
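A toy model makes the slow-start penalty concrete. This is a minimal sketch, not the real TCP state machine: the segment size, initial window, and per-object accounting below are illustrative assumptions, and real stacks add DNS lookups, loss recovery, and other round trips on top.

```python
def round_trips_for(page_bytes, mss=1460, initial_cwnd=2):
    """Rough count of round trips to deliver page_bytes over a fresh TCP session."""
    trips = 1                 # connection handshake (SYN / SYN-ACK / ACK)
    cwnd = initial_cwnd       # congestion window, in segments
    delivered = 0
    while delivered * mss < page_bytes:
        delivered += cwnd     # one window of data per round trip
        cwnd *= 2             # slow-start: the window doubles every round trip
        trips += 1
    return trips

# Each object fetched over a brand-new session pays this toll again,
# which is how a single page fans out into dozens of round trips:
for size in (10_000, 50_000, 150_000):
    print(f"{size:>7}-byte object -> {round_trips_for(size)} round trips")
```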

By the late 1990s, Content Delivery Networks like Akamai emerged to scratch the itch for speed--a hack on the web that simulated local performance by caching content at the edge. Great idea, so long as the content is cacheable. But by 2002, most interesting web sites were no longer static. Services like eBay, eTrade, Salesforce, Expedia, Amazon, Yahoo Finance, etc., as well as web-based enterprise applications, all generate personalized pages. Sure, they all have cacheable content (e.g. the logo), but when users click on hyperlinks, it is precisely the dynamic content that they await and await and await. It's easy to replicate a hundred content servers around the globe, but replicating even two dynamic databases across the internet is extremely difficult and expensive to do if you care about data integrity--so much so that practically no one even tries.

So along came this startup Netli, whose founders recognized that the problem essentially springs from an incompatibility between TCP and HTTP. These protocols are far too widely deployed to replace (something standards groups have failed at for decades)--you might be better off changing the speed of light! Of course, in the "real world" the only way to bypass the light-speed limit is a wormhole, a Grand Short Cut through the universe that exploits the curvature of space-time to let you hop around faster than Paris Hilton on prom night. So that's what they figured the web needs: some wormholes.


Netli's wormhole is an Internet Protocol (IP) router that speaks both TCP and the Netli Protocol--a layer 4 replacement that transfers a web page in only ONE round trip. Realizing that the universe wasn't about to replace its routers, Netli deployed its own routers, co-located across the US, Europe, South America, Africa and Asia. Now, dozens of companies like Dell, HP, Toyota, Honda and even Blog China simply redirect the internet's Domain Name System (DNS) so that Netli's local routers answer the long-distance HTTP(S)/TCP/IP requests and then traverse the globe in one round trip using the Netli protocol. At the server side, a Netli router converts the session back to TCP so that the servers have no idea (anthropomorphically speaking) that they're not connected directly to the browsers. The TCP sessions created on either end of the connection still require 30 round trips each, but those are just local 10-millisecond errands--so serving an entire web page from San Jose to Israel takes 300 milliseconds of TCP in Israel, 200 milliseconds for the Netli hop, and another 300 milliseconds of TCP in San Jose, totaling roughly 800 milliseconds!
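That arithmetic is easy to model. Here's a minimal sketch of the split-connection bookkeeping, using the figures from the paragraph above (30 local round trips per side, 10 ms local round-trip time, one 200 ms long-haul round trip); the function and its names are purely illustrative, not Netli's actual implementation.

```python
def end_to_end_ms(local_round_trips=30, local_rtt_ms=10, long_haul_rtt_ms=200):
    """Total page time when chatty TCP is confined to the two short local hops."""
    browser_side = local_round_trips * local_rtt_ms   # browser <-> nearby Netli router
    wormhole = long_haul_rtt_ms                       # one long-haul round trip, Netli protocol
    server_side = local_round_trips * local_rtt_ms    # Netli router <-> origin server
    return browser_side + wormhole + server_side

print(end_to_end_ms())   # 300 + 200 + 300 = 800 ms with the wormhole
print(30 * 200)          # ~6000 ms if every one of those 30 round trips crosses the globe
```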

Now any good warp-speed airline offers peanuts and in-flight movies. So once Netli started carrying mission-critical traffic, the company threw in load balancing, performance monitoring, encryption, caching, compression, pre-fetching, and the like to flesh out a complete solution. The result is that web applications can now be delivered with speed and reliability an order of magnitude better than any other web carrier can manage, no matter how much fiber, compression, congestion avoidance, or edge caching they apply. Netli is the new, warp-speed internet.

As you might guess, the company is cranking, and Akamai has been desperately spinning, trying to position against the tiny startup. Too late--edge computing was a nice idea until Netli made it possible to much more simply and cheaply centralize applications without compromising performance and reliability.

Maybe 2002 wasn't such a bad year after all.

4 comments:

  1. Anonymous, 8:55 PM

    whoaaaa! dude, this is cool.

  2. Anonymous, 2:21 AM

    First of all, let me commend -- once you strip away the hype, this is a noticeably accurate technical description of the problem at hand. Kudos.

    That being said -- 10,000 miles of fiber isn't much compared to the 22,300 miles of free space that has to be traversed twice during a geosync satellite hop. I think you'll find that the solutions used to deal with this 4x latency hit are as advanced as, if not far more advanced than, even what Netil has. Check out SkyX, or any of the techniques used to get high performance over high latency links. (Heh, I had to write one to eke video over DNS, so that's why I know about this subject.)

    Not to mention -- browsers have gotten much smarter since 2002, at minimum continuing to use an already open TCP session (thus avoiding the constant TCP slow start penalty), and possibly even issuing multiple requests before their reply content returns (a process known as HTTP pipelining). Does that mean Netil is irrelevant? Of course not, sat proxies haven't been made obsolete by better clients and probably never will -- there are fundamental advantages that are gained when you're engineering solely for the high latency case. But I don't know about claiming 10x improvement.

    It is certainly _way_ less a PITA than replicating a web app worldwide though.

  3. Anonymous, 2:22 AM

    s/Netil/Netli/g

  4. I skimmed this post and was reminded that while getting bits from Hong Kong to LA is important now, in the near future we're going to have a heckuva lag from the Moon to the Earth.

    This will complicate little things like IM, teleoperating equipment and so on. This won't help with that, but it's nice to see that people are working on the issue.
