The Future of Trading
Calculating Cost First, Trading Second
November 4, 2010
Transaction cost analysis, often called TCA for short, is now a fundamental metric used by buy-side firms and their broker dealers to analyze just how well they have executed an order.
But the financial crisis and the explosion of high-frequency trading have changed the rules of the game. Standard benchmarks such as implementation shortfall and volume-weighted average price are no longer considered sufficient to tell traders how to adjust their strategies quickly enough to compete in the fast-paced world of algorithmic and high-frequency trading.
That is because those benchmarks do not consider the specific circumstances of each trade, the types of orders used or the investment style of the portfolio manager. They also sometimes are not fine-tuned enough to work with financial instruments other than equities. And then there’s speed. Firms may soon be picking a strategy at the instant of triggering a series of mathematical instructions to execute orders. The right strategy may depend on a cost analysis done at that moment.
“Firms that have embraced TCA are moving beyond simple measurement and intermittent evaluation of execution strategy to actually adopting strategies at the point of trade,” says Mayiz Habbal, senior vice president of the securities and investments group at Celent, a New York based research firm.
THE NEW GAMEPLAN
While broker-dealers are the main providers of TCA tools, fund managers are increasingly turning to third-party suppliers they feel offer the independent metrics they need. Some of those suppliers are now leading the way to more granular post-trade and pre-trade analysis, even going as far as to link the two in close to real time.
Here is how the new gameplan works. Almost immediately after a trade is executed, the trader inputs all its pertinent details, such as number of shares, price, and fees or commissions, into a tool. The results are then integrated into a pool of such details. Once a critical mass of data is aggregated, that pool can be used to perform pre-trade analyses: the tool calculates expected costs in advance and recommends trading strategies, including the algorithm and the pace and timing of the execution. In some cases, the trader doesn’t have to make routine decisions at all. A machine can simply crunch the numbers and select an optimal algorithm automatically, before and even as the trade is being executed.
“The failure of standard TCA to consider the circumstances of each trade not only reduces the ability to rank brokers but also fails to support the trader’s need to design customized trading strategies,” says Henri Waelbroeck, director of research for Pipeline Trading Systems, a New York-based transaction cost analysis provider. “In addition to choosing between brokers, traders must also choose between aggressive and opportunistic strategies.”
Implementation shortfall (IS), the difference between the price at which a trade is executed and the price at the time the order was sent in by the portfolio manager, doesn’t by itself tell a trader how well the trade was executed. The biggest factors determining implementation shortfall are short-term alpha and the opportunity costs incurred when some shares go unfilled.
Although the volume-weighted average price (VWAP) will tell a trader how he or she did relative to the average price for the day, it won’t say anything about market impact. As is the case with IS, average price performance can be improved at the expense of higher opportunity costs when some trades are left unfinished, so the use of limit prices makes it very difficult to interpret both implementation shortfall and VWAP performance.
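To make the two benchmarks concrete, here is a small illustration with hypothetical numbers for a buy order:

```python
# Hypothetical buy order: compare implementation shortfall and VWAP slippage.
def implementation_shortfall_bps(decision_price: float, avg_exec_price: float) -> float:
    """Cost versus the price when the PM released the order, in basis points."""
    return (avg_exec_price - decision_price) / decision_price * 1e4

def vwap(prices: list[float], volumes: list[float]) -> float:
    """Volume-weighted average price over the day's tape."""
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

decision = 50.00   # price when the PM sent the order
avg_fill = 50.10   # trader's average execution price
day_vwap = vwap([49.90, 50.05, 50.20], [1e6, 2e6, 1e6])

print(round(implementation_shortfall_bps(decision, avg_fill), 1))  # 20.0 bps vs arrival
print(round((avg_fill - day_vwap) / day_vwap * 1e4, 1))            # slippage vs VWAP
```

Note that neither number reveals how much of the cost was the trader's own market impact, which is exactly the gap the newer decompositions try to fill.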
“The trading speed, the effect of limit prices and extraneous market conditions must be taken into account to devise forward-looking strategies,” says Waelbroeck. “Trading schedules and limit prices if used appropriately can enhance alpha but if used incorrectly can exacerbate trading costs.”
Pipeline has come up with an advance in TCA that it says can decompose implementation shortfall into its root components. Those are short-term alpha loss, algorithmic impact, adverse selection, opportunistic savings and the tradeoff between the selected speed and impact prices.
Adverse selection refers to the problem in which an algorithm delivers a worse average price than if you had simply participated with the market at a constant rate. This tends to happen when an algo doesn’t get enough fills ahead of an adverse price move or, conversely, when the algorithm speeds up just before the price moves in your favor. Opportunistic savings is the mirror image of adverse selection: an algorithm can anticipate an adverse price move and pick up liquidity before it happens or, conversely, cap its trading rate to avoid moving too fast before the price improves. The adverse selection of one trader is the opportunistic savings of another; the trader with superior predictive technology will come out ahead.
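A minimal sketch of the idea, assuming a buy order and a constant-rate participation benchmark (the tape and fill numbers are invented for illustration):

```python
# Illustrative: adverse selection as the shortfall of an algo's fills versus
# participating with the market at a constant rate (buy order).
def avg_price(prices, quantities):
    return sum(p * q for p, q in zip(prices, quantities)) / sum(quantities)

market_prices = [50.00, 50.05, 50.10, 50.20]   # tape over four intervals
market_volume = [1000, 1000, 1000, 1000]

# Constant-rate benchmark: buy the same fraction of volume in every interval.
benchmark = avg_price(market_prices, market_volume)

# This algo got most of its fills late, after the adverse (upward) move.
algo_fills = [100, 100, 300, 500]
realized = avg_price(market_prices, algo_fills)

adverse_selection_bps = (realized - benchmark) / benchmark * 1e4
print(round(adverse_selection_bps, 1))  # positive: the algo was adversely selected
```

A negative value under the same convention would be an opportunistic saving: the algo front-loaded its fills ahead of the move instead of chasing it.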
Waelbroeck says that Pipeline’s TCA framework builds subsets of trades that profile a particular trader or manager’s trade arrivals and measures the number of basis points that are being gained or could be gained through decisions such as the choice of trading speed or limit prices, doing so separately for each trade profile.
“Armed with this kind of analysis the trading desk can separately assess the value added by the trader’s decisions from the underlying quality of the algorithmic trading tools provided by each broker,” says Waelbroeck. “The trader’s strategy can then be customized to specific situations.”
For example, for a particular portfolio manager, when competitive order flow in the first hour of a trade is predictive of a continuing trend in the coming weeks, the trader will want to accept blocks of liquidity when they appear. They also will avoid the use of limits, since these are more likely to lead to delays and opportunity costs. By contrast, for another portfolio manager the same competitive order flow environment might be associated with overshoot-and-reversion after the trade is done; in this case the use of limits is critical to avoid buying on peaks and delays are more likely to lead to better prices.
Quantitative Services Group (QSG) says that its Sync service can calculate what it calls a Liquidity Charge. That “charge” is the difference in price that results from the execution of all the individual trades made in accumulating or selling a position in a day. If buying 500,000 shares of stock in 1,000 500-share trades pushed the stock up from $48 to $50, the charge is $2 times 500,000 shares, or $1 million. Broadly speaking, the same applies in reverse when selling.
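The arithmetic of the worked example can be sketched as follows; the real Liquidity Charge is computed tick by tick, so this simplified version only mirrors the back-of-the-envelope calculation above:

```python
# Simplified liquidity-charge arithmetic from the article's worked example:
# 500,000 shares bought in 1,000 trades of 500 shares push the stock
# from $48 to $50, so the charge is $2 x 500,000 = $1 million.
def liquidity_charge(shares: int, start_price: float, end_price: float, side: str) -> float:
    """Dollar price impact of accumulating (or selling) a position in a day."""
    move = end_price - start_price
    if side == "sell":
        move = -move  # selling that pushes the price down is the mirror-image cost
    return move * shares

print(liquidity_charge(500_000, 48.0, 50.0, "buy"))  # 1000000.0
```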
“This analysis is done on a tick by tick basis and allows the trader to separate the impact of his or her strategy from what other market participants did in the market on the same stock,” says Alex Hagmeyer, manager of trading analytics at QSG, a global equity research and transaction cost analysis firm in Naperville, Illinois.
The empirical decomposition of implementation shortfall allows the trader to uncover brokers and algorithms that unnecessarily and excessively pay for liquidity, while illuminating the strategies that are truly able to stay under the radar and keep a small footprint in the market. Over time, choosing the trading strategies that limit their footprint in the market reduces the magnitude and uncertainty of costs in the implementation process, says Hagmeyer.
QSG’s Sync platform also allows for pre-trade analysis that uses forecasted costs and client-specific historical post-trade data for like trades to help the trader understand which brokers or algorithms have performed the best and with the least amount of uncertainty for any given benchmark or cost measure.
SJ Levinson and Sons, a New York-based agency broker-dealer which specializes in quantitative analytics, has just released its Trade Analysis Program (TAP), which allows traders and portfolio managers to compare execution results against hundreds of benchmarks historically on large data sets and in real-time via their desktops. SJLS takes into account the past performance of trading for a particular manager, security, sector, fund, trade size and liquidity.
“The goal is not to micromanage each order but to maximize the return to investors,” says Matthew Celebuski, senior managing director and head of quantitative research at SJ Levinson. “Eighty percent of the gain from TCA is to find a strategy that matches the trading style of each portfolio manager and consistently improves the returns of the fund. The remaining 20 percent of the gain is from the selection of algorithms, brokers and trying to access liquidity in the market in the most efficient way possible.”
Case in point: a trader executing an order for a value investment manager may need to execute more slowly than for a growth investment manager because growth managers typically have short term momentum in their orders.
Taking its analysis to the next level, SJ Levinson plans before year end to combine the results from its transaction cost analysis to conduct pre-trade analysis on the same platform. That means allowing the trader to set an optimal trading schedule based on the history of that type of trade. “The trader wouldn’t have to rely solely on his or her own instinct but could simply push a button to generate quantitative recommendations automatically,” says Celebuski.
As real-time analytics gains popularity, one tack known as predictive analytics may become the wave of the future, according to Celent’s Habbal. Pipeline, for one, has already developed what it calls an Algorithmic Switching Engine that predicts the performance of algorithms under real-time conditions based on information from the post-trade analysis. “It takes a trader's instructions about how hard the trader would like to drive an order into the market and translates that minute-by-minute into a choice of an algorithm style, price limit for an algo, the control parameters on that algo and the size of the orders sent to that algorithm, and then the algorithms execute it," says Waelbroeck. Currently, Pipeline's switching engine can accommodate more than 100 algorithms and is adding about 10 more each month.
Just how effective is such switching? From January 2009 to May 2009, AllianceBernstein conducted a joint study with Pipeline Trading to define "adverse selection" and measure the effectiveness of predictive switching on adverse selection costs. AllianceBernstein used a dark aggregator – a program from a different sell-side provider that aggregated orders in dark pools, matching a given set of orders at a given price – while Pipeline ran its Algorithmic Switching Engine. The study showed that Pipeline's predictive switching strategy saved as much as 70 percent of the costs attributed to adverse selection, as compared with a common dark aggregator strategy. The engine switched strategies on the fly when it saw that the original algorithm was wrong.
While stocks remain the dominant asset class for transaction cost analysis, there has been a growing push for TCA in other products, where it has to date been limited largely by a lack of sufficient market data and of interest from fund managers.
As fund managers now understand the importance of including other financial instruments in their analysis of best execution, they are demanding the same sort of TCA for those asset classes.
Nowhere is the need for TCA more pressing than in foreign exchange transactions, as was evidenced by a lawsuit filed last year by California's largest pension plans – the California Public Employees' Retirement System and the California State Teachers' Retirement System – against Boston-headquartered custodian bank State Street. They allege that State Street fraudulently priced foreign exchange trades, resulting in more than $56 million in excess costs since 2001.
But performing TCA on FX trades is no easy task. While the market for foreign exchange trades is highly liquid, it is also highly fragmented across dozens of over-the-counter platforms. Making matters more complicated is the fact that not all custodian banks timestamp their trade executions, which would permit the fund manager to use benchmarks traditionally adopted for the equities market to determine values at any given point.
"For fund managers which have time-stamped trades we can offer traditional equity metrics and for those which do not we can perform analysis that is analogous to volume-weighted average price measurement ," says James McGeehan, chief executive officer for FX Transparency, a Boston-based firm specializing in transaction cost analysis for foreign exchange transactions.
FX Transparency also provides fund managers with a comparative analysis of how their execution costs rank against those of 100 other fund managers. For those who want to improve their execution, the firm then counsels fund managers and plan sponsors on execution strategies. McGeehan declined to specify what those were for competitive reasons but claims that one plan sponsor is saving $5 million in costs annually based on FX Transparency's advisory work.
PHONING IT IN
As more fund managers incorporate options into their trading strategies the need for transaction cost analysis becomes more obvious. But the continued widespread use of phone-based trading continues to prevent the buyside from capturing trade information and conducting TCA. And until recently the options market has lacked appropriate benchmarks.
“Due to the lower trade volume for options, traders need liquidity recovery, order flow, market impact, execution quality and trading cost metrics that rely on factors other than the Options Price Reporting Authority’s (OPRA’s) consolidated tape in understanding each transaction,” says Alan Shapiro, president of Transaction Auditing Group (TAG), a New York-based transaction cost analysis and execution quality provider. “That means a firm also needs to analyze all of an option’s quotes as well, which requires the capture and archiving of millions of market data ticks per day.”
One benchmark used by TAG is Gamma Weighted Average Price (GWAP) which incorporates equity and options tick data to benchmark listed option execution performance for the buy-side and sell-side. “Such an analysis resembles a volume-weighted average price for an equity security combined with option Greeks and premiums,” says Shapiro.
Five different Greek symbols represent factors that influence the pricing of options. Delta represents an option’s sensitivity to changes in the underlying price of a particular stock. Gamma is an even finer measure of the sensitivity of the delta itself to changes in the price of the stock. GWAP represents an understandable fair-price benchmark for the measurement of execution quality of institutionally-sized option orders. This benchmark is calculated based on trading activity over a period of time rather than on market conditions at a particular moment.
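TAG's precise GWAP formula is not spelled out here, but a plausible sketch, assuming each option print is weighted by its traded volume and the gamma prevailing at the time of the trade, would be:

```python
# Hypothetical sketch in the spirit of GWAP -- the vendor's actual
# methodology is not public, so the weighting scheme here is illustrative:
# each option print is weighted by contracts traded times the gamma
# prevailing when the trade occurred.
def gamma_weighted_avg_price(prints: list[tuple[float, int, float]]) -> float:
    """prints: (option premium, contracts traded, gamma at trade time)."""
    weighted = sum(price * qty * gamma for price, qty, gamma in prints)
    weights = sum(qty * gamma for _, qty, gamma in prints)
    return weighted / weights

tape = [(2.50, 10, 0.04), (2.60, 20, 0.05), (2.55, 15, 0.045)]
print(round(gamma_weighted_avg_price(tape), 4))
```

Weighting by gamma emphasizes prints where the option's delta is most sensitive to the underlying, which is where execution quality matters most; a plain VWAP of option premiums would ignore that.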
But do fund managers really need all the dizzying array of TCA tools? “It’s a bit overwhelming to understand the differences between each TCA service provider,” says one trader at a New York-based investment fund.
The trader says that while predictive switching is still a bit “avant-garde” for all but the most sophisticated quantitative desks, advancements in algorithmic and high frequency trading have all but required all buy-side firms to understand – and embrace – the new technology. And to ask more questions of their TCA providers about the differences in their methodologies and results.
Jason Lenzo, head of equity and fixed income trading at Russell Investments, a registered broker dealer and investment adviser in Tacoma, Wash., with more than $140 billion in assets under management, says that all of the data points and information do allow a firm to do better comparative analysis with its own trading strategies and metrics.
Russell uses multiple pre- and post-trade models, including internally developed ones as well as third-party models it declines to name. “The models rely on, and sometimes share, a range of factors, including the percentage of volume provided or taken from the market, average spread, sector tracking bias, reaction to volatility and the number of times a print occurs in a day,” says Lenzo.