Twitter By The Back Of A Napkin
In which you talk me into finally getting a Twitter account by explaining to me why I don't understand Twitter.
I'm a Twitter luddite for perhaps the most pedantic of excuses: for years I've scratched my head over why what seems like a solved problem has eluded Twitter in its search for scale with stability. A new presentation by Twitter engineer Raffi Krikorian only deepens my confusion. First, the numbers:
| Avg. inbound tweets / second | 800 |
| Max. inbound tweets / second | 3,283 |
| Tweet size (bytes) | 200 |
| Registered users (millions) | 150 |
| Max fanout (millions) | 6.1 |
Social networks like Twitter are just that -- networks -- and to understand Twitter as a network we want to know how much traffic the Twitter "backbone" is routing. Knowing that Twitter takes in 800 messages per second doesn't tell us that by itself, but an estimate is possible. From a talk last year by another Twitter engineer, we know that Twitter users have fewer than 200 followers on average. That means that despite the eye-popping 6.1M follower count (in networking terms, "fanout") for Lady GaGa, we should expect most tweets to generate significantly less load. Dealing just in averages, we should expect baseline load to be roughly 100K delivery attempts per second, and peak traffic is likely less than 1.5M delivery attempts per second (4K senders with double the average connectedness, plus some padding for high-traffic outliers).
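Just to make the napkin math explicit, here it is as a few lines of Python. The exact average follower count is my assumption -- the talk only gives an upper bound of 200 -- so I've picked a number that reproduces the figures above.

```python
# Back-of-napkin delivery load from the figures quoted above.
avg_inbound_per_sec = 800      # average inbound tweets/second
peak_inbound_per_sec = 4_000   # ~3,283/s, rounded up for padding
avg_fanout = 125               # assumed average follower count (< 200)

baseline = avg_inbound_per_sec * avg_fanout       # ~100,000 deliveries/s
peak = peak_inbound_per_sec * (2 * avg_fanout)    # ~1,000,000 deliveries/s

print(f"baseline: ~{baseline:,} delivery attempts/s")
print(f"peak:     ~{peak:,} delivery attempts/s (pad toward 1.5M for outliers)")
```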
Knowing that peak loads are roughly 4x average loads is useful, and we can provision based on that. We also know that Twitter doesn't guarantee message order and has no SLA for delivery, which means we can deal with the Lady GaGa case by smearing delivery for users with huge fanout, ordered by something smart (most active users get messages first?). Heck, Twitter doesn't even guarantee delivery, so we could even go best-effort when the system is congested, taking total load into account when sizing the smear for large senders, or recovering out of band later by having listeners query a DB. So far our requirements are looking pretty sweet. Twitter's constraints significantly ease the engineering challenge for the core routing and delivery function (the thing that should never be down).
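To make the smearing idea concrete, here's a minimal sketch of what a fanout smear might look like. The batch size, delay, and `deliver` callback are all hypothetical, and I'm assuming followers arrive pre-ranked by "most active first" -- this is not how Twitter does it, just how I might.

```python
import time
from typing import Callable, Iterable, List

def smear_fanout(followers: Iterable[str],
                 deliver: Callable[[List[str]], None],
                 batch_size: int = 10_000,
                 delay_s: float = 0.05) -> None:
    """Spread delivery of a huge fanout over time in rate-limited batches.

    Since order and timeliness aren't guaranteed anyway, a sender with
    millions of followers gets smeared over several seconds instead of
    hitting the core all at once.
    """
    batch: List[str] = []
    for follower in followers:        # assume callers pre-rank by activity
        batch.append(follower)
        if len(batch) >= batch_size:
            deliver(batch)
            batch = []
            time.sleep(delay_s)       # crude fixed smear; a real system would
                                      # size the delay from total system load
    if batch:
        deliver(batch)

# e.g. 6.1M followers in 10K batches with 50ms gaps spreads over ~30 seconds
```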
What about tweet size? How much will an individual tweet tax a network? Can we handle tweets as packets? Tweet text is clamped to 200 bytes (per Raffi's slides), but tweets now support extra metadata. The Twitter API Wiki notes that this metadata is also limited, clamped to 512 bytes. Assuming we need a GUID-sized counter for a unique tweet ID, that puts our payload at 200 + 512 + 16 = 728 bytes. That's less than half the default Ethernet MTU of 1500 bytes. IP allows packets up to 64K in size, and with jumbo Ethernet frames we could avoid fragmentation at the link level and still accommodate 9K packets, but there's no need to worry about that now.
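Here's a toy packing of that payload with Python's struct module, just to see how comfortably one tweet fits in a single frame. The field layout is my own invention, not Twitter's wire format.

```python
import struct
import uuid

MTU = 1500          # default Ethernet MTU, in bytes
TEXT_MAX = 200      # text budget per Raffi's slides
META_MAX = 512      # metadata budget per the Twitter API Wiki

# Toy layout: 16-byte GUID tweet ID, fixed 200-byte text, fixed 512-byte metadata.
TWEET = struct.Struct(f"!16s{TEXT_MAX}s{META_MAX}s")

payload = TWEET.pack(uuid.uuid4().bytes,
                     "hello, world".encode("utf-8"),
                     b'{"geo": null}')

print(TWEET.size, "byte payload")                    # 728
print("fits one frame:", TWEET.size + 28 <= MTU)     # + 28 for IPv4 + UDP headers
```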
Twitter's subscriber base also fits neatly in the IPv4 address range of ~4 billion unique addresses. Even if we were to give every subscriber an address for every one of their subscribed delivery endpoints (SMS, web, etc.), we'd still fit nicely in IPv4 space. Raffi's slides show that they want to serve all of Earth, which means eventually switching to IPv6, but that's so far away from the trend line that we can ignore it for now. That means we can handle addressing (source and destination) and data in the size of a single IP packet and still have room to grow.
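A quick sketch of the addressing arithmetic, assuming -- purely for illustration -- that we pack a user ID and an endpoint index into a single 32-bit address:

```python
import ipaddress

USERS = 150_000_000        # registered users, from the table above
ENDPOINTS_PER_USER = 4     # SMS, web, etc. -- my assumed count

needed = USERS * ENDPOINTS_PER_USER
print(f"{needed:,} addresses needed, {needed / 2**32:.0%} of IPv4 space")   # 14%

def endpoint_address(user_id: int, endpoint: int) -> ipaddress.IPv4Address:
    """Toy scheme: low two bits select the endpoint, the rest is the user ID."""
    return ipaddress.IPv4Address((user_id << 2) | (endpoint & 0b11))

print(endpoint_address(user_id=12_345_678, endpoint=1))   # 2.241.133.57
```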
So now we're down to the question that's been in the back of my mind for years: can we buy Twitter's core routing and delivery function off the shelf? And if so, how much would it cost, assuming continued network growth? Assuming a peak of 4x average, a 2K/s inbound message baseline (enough to get them through 2011?), and an average fanout of 300 (we're being super generous here, after all), we're looking at roughly 2.5 million packets to route per second. If we treat each delivery endpoint as an IP address, again multiply deliveries by endpoints, and assume 4 delivery endpoints per user, we're looking at a need to provision for 10M deliveries per second.
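The same napkin, with the growth assumptions from this paragraph plugged in -- all of the inputs are the generous guesses named above, not real Twitter projections:

```python
# Provisioning estimate for the assumed near-term growth numbers.
inbound_baseline = 2_000       # assumed future inbound tweets/second
peak_multiplier = 4            # peak is ~4x average
avg_fanout = 300               # deliberately generous average follower count
endpoints_per_user = 4         # SMS, web, etc.

routed_packets_per_sec = inbound_baseline * peak_multiplier * avg_fanout
deliveries_per_sec = routed_packets_per_sec * endpoints_per_user

print(f"{routed_packets_per_sec:,} packets to route/s")   # 2,400,000 (~2.5M)
print(f"{deliveries_per_sec:,} deliveries/s")             # 9,600,000 (~10M)
```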
Is that a lot? Maybe, but I have reason to think not.
10M 1.5KB packets is ~15GB/second of traffic. Core routers now do terabits of traffic per second (a terabit is 125GB/s), but most of that traffic doesn't correspond to unique routes. Instead, we need to figure out whether hardware can do either the 2.5M or the 10M new "connections" per second that the Twitter workload implies. Cisco's mid-range 7600 series appears to be able to handle 15M packets per second of raw forwarding. Remember, this is an "internal" network with no advanced L3 or L4 services -- just moving packets from one subnet to another as fast as possible -- so quoting numbers with all the "real world" stuff turned off is OK.
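Checking those numbers against each other -- the 15 Mpps forwarding figure is the one quoted above for the 7600, and everything else follows from the earlier estimates:

```python
deliveries_per_sec = 10_000_000   # provisioning target from above
frame_bytes = 1_500               # worst case: one full Ethernet frame per delivery

traffic_gb_per_sec = deliveries_per_sec * frame_bytes / 1e9
terabit_in_gb_per_sec = 1e12 / 8 / 1e9

print(f"~{traffic_gb_per_sec:.0f} GB/s of delivery traffic")        # ~15 GB/s
print(f"1 Tb/s core = {terabit_in_gb_per_sec:.0f} GB/s")            # 125 GB/s
print(f"forwarding headroom: {15_000_000 / deliveries_per_sec}x")   # 1.5x on a 15 Mpps box
```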
I'm still not sure that I fully grok the limits of the gear I see for sale, since I'm not a network engineer and most "connections per second" numbers I see appear to be related to VPN and firewall/DPI features. It looks like the required architecture would need multiple tiers of routing/switching to do things efficiently and not blow out routing tables, but overall it still seems doable to me. This workload is admittedly weird in its composition relative to stateful TCP traffic, and I have no insight into what that might do to off-the-shelf hardware -- it might just be the sticky wicket. Knowing that there's some ambiguity here, I hope someone with more router experience can comment on reducing the Twitter workload to off-the-shelf hardware.
Perhaps the large number of unique and short-lived routes would require extra tiers that reduce the viability of a hardware solution (if only economically)? ISTM that even if hardware can only keep 2-4M routes in memory at once, and can only handle a fraction of that in new connections per second, this could still be made to work with semi-intelligent "edge" coalescing and/or MPLS tagging. And based on the time it takes to fetch a word from main memory (including the cache miss) on modern hardware, it seems feasible that tuned hardware should be able to do at least 1M route lookups per second, which puts the current baseline well within what hardware can do and the 2011 growth goals within reach.
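A rough sanity check on that lookup-rate claim, assuming ~100 ns for a main-memory access (cache miss included) and a few dependent accesses per route lookup -- both numbers are my guesses, not measurements of any particular router:

```python
dram_access_ns = 100       # assumed cost of one main-memory access, cache miss included
accesses_per_lookup = 4    # assumed dependent pointer chases per route/trie lookup

lookups_per_sec = 1e9 / (dram_access_ns * accesses_per_lookup)
print(f"~{lookups_per_sec / 1e6:.1f}M route lookups/s per lookup engine")   # ~2.5M
```

Even a single memory-bound lookup path clears the 1M/s bar, and real forwarding hardware runs lookups in parallel and keeps hot routes in faster memory, so the estimate looks conservative.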
So I'm left back where I started, wondering: what's so hard? Yes, Twitter does a lot besides delivering messages, but all of those things (that I understand and/or know about) have the wonderful property that they either deal with the (relatively low) maximum inbound rate of 4K messages/s or are embarrassingly parallel.
So I ask you, lazyweb, what have I missed?