The Un(?)fair Advantage of Latency Arbitrage
July 24, 2009
For years, Major League Baseball has been trying to stamp out illegal substances that simply make players stronger. Think about how much greater an advantage a batter would have if he knew exactly where every pitch was going to cross the plate before it left the pitcher's hand.
Technologically advanced traders are giving themselves an advantage that some people feel is just that unfair. Using the techniques and technologies I'll describe below, they squeeze every last microsecond of latency out of their market data feeds and trading systems to give themselves a sneak peek at market prices that's measured in milliseconds. Thanks to powerful algorithms and high-speed order execution systems, that's enough time for them to engage in latency arbitrage: the buying and selling of equities based on small price changes that have not yet been broadly recognized due to the varying speeds of market data delivery systems.
Similar types of trading activity have been around for years. For example, every investment bank has an index arbitrage desk where traders try to make money on the pricing differentials between the underlying companies that make up an index and the price of the index itself, which often takes a little longer to update. Now another form of latency arbitrage is becoming popular among those who have the technology to take advantage of it. To ensure fairness, all trades are supposed to be based on the pricing of the National Best Bid and Offer (NBBO). But the exchanges publish the NBBO separately from (and more slowly than) their raw price feeds. Technology has gotten so good that by aggregating the raw prices it's possible to come up with your own proprietary best bid and offer figure before the official NBBO itself arrives. These differences in price represent an arbitrage opportunity.
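The aggregation step is simple in principle: across every exchange's raw quote, the best bid is the highest bid and the best offer is the lowest offer. Here's a minimal sketch with hypothetical exchange names and prices (the quote values and the `synthetic_bbo` helper are illustrative, not any firm's actual method):

```python
# Raw direct-feed quotes, per exchange: (best bid, best offer).
# All figures are made up for illustration.
raw_quotes = {
    "NYSE":   (20.01, 20.04),
    "NASDAQ": (20.02, 20.05),
    "BATS":   (20.00, 20.03),
}

def synthetic_bbo(quotes):
    """Best bid is the highest bid anywhere; best offer is the lowest offer."""
    best_bid = max(bid for bid, _ in quotes.values())
    best_offer = min(offer for _, offer in quotes.values())
    return best_bid, best_offer

bid, offer = synthetic_bbo(raw_quotes)
print(bid, offer)  # 20.02 20.03
```

If the published NBBO is still showing a stale figure, say 20.01 x 20.04, the fresher synthetic 20.02 x 20.03 tells you where prices are about to be, and that gap is the arbitrage window.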
Before considering whether latency arbitrage is fair, let's look at some of the things that make it possible.
Tricks of the Trade and Key Technologies
Co-location: In the context of processes that occur in thousandths or millionths of a second, the time it takes data to get from one point to another, even at the speed of light, becomes a significant factor. Light in optical fiber travels roughly 200 meters per microsecond, so even the 5-kilometer trip from Manhattan to New Jersey adds at least 25 microseconds of latency, and that assumes fiber running straight as the crow flies between the two points, which it never does. Add the latency introduced by routers and switches along the way and the lower-speed links that exist over the MAN/WAN, and you can easily double that figure. This has driven latency-conscious trading entities to host their systems as close as possible to the exchanges themselves, even co-locating key systems within an exchange's datacenter for a fee, and has driven exchanges like NYSE Euronext to commit to building out a 100 Gbps network.
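The propagation math above is worth making concrete. A quick sketch, using the 200 meters-per-microsecond figure for light in fiber (the 100-meter co-location distance is a hypothetical example, not a quoted datacenter spec):

```python
# Speed of light in optical fiber, roughly two-thirds of its vacuum speed.
SPEED_IN_FIBER_M_PER_US = 200.0  # meters per microsecond

def one_way_delay_us(distance_m):
    """Best-case one-way propagation delay over fiber, in microseconds."""
    return distance_m / SPEED_IN_FIBER_M_PER_US

# The ~5 km Manhattan-to-New-Jersey hop:
print(one_way_delay_us(5_000))  # 25.0 microseconds, before any switching
# A co-located server, say 100 m from the matching engine:
print(one_way_delay_us(100))    # 0.5 microseconds
```

That 50x difference is the whole argument for co-location: no amount of hardware tuning on a remote server can buy back the microseconds lost to distance.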
Cut-through Switches: Conventional store-and-forward Ethernet switches can handle very high throughput, but their latency suffers because they wait until every byte of a given packet has arrived before routing and sending it along. Cut-through switches from companies like Arista Networks start figuring out where a packet needs to go as soon as the basic header information has arrived, and begin streaming the packet out immediately. This can cut latency by as much as 10-20 microseconds.
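The saving comes from serialization delay: a store-and-forward switch buffers the entire frame at each hop, while a cut-through switch only needs the header before it starts transmitting. A back-of-the-envelope sketch (the 1 Gbps link speed and frame sizes here are illustrative assumptions):

```python
def serialization_delay_us(frame_bytes, link_bps):
    """Time to clock frame_bytes onto a link of link_bps, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

# A full-size 1500-byte Ethernet frame on a 1 Gbps link: the extra delay a
# store-and-forward switch adds per hop by buffering the whole frame.
print(serialization_delay_us(1500, 1e9))  # 12.0 microseconds

# A cut-through switch can begin forwarding after roughly the first 64 bytes.
print(serialization_delay_us(64, 1e9))    # about half a microsecond
```

Multiply the full-frame figure by several hops between a trading system and the exchange, and the 10-20 microsecond saving cited above is easy to see.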