
After VaR: Q&A with Ron Papanek

What Will Measure Risk Better?

December 15, 2010
By Tom Steinert-Threlkeld

Value at Risk, aka VaR, is a measure of how much value a portfolio of financial assets stands to lose in a single day, and it is watched closely by nearly all Wall Street firms in their daily operations.

But it is generally calculated as the loss that should be exceeded on only one trading day out of 20, that is, at 95 percent confidence.

The model came out of the worldwide credit crisis badly scathed, since it did not keep banks or other financial firms from suffering losses far bigger than the benchmark projected.

Here’s the nub: Value at Risk judges the potential loss in a given period with either 95 percent or 99 percent confidence. By its nature, it does not forecast what the loss might be in an extreme case where a one percent or five percent probability event comes into play.

The result: The “value at risk” model is not set up to protect against a perfect storm of events, such as occurred in 2008. If banks have had lax lending practices, borrowers hold loans they can only afford if they keep refinancing, and then housing prices fall … this measurement won’t capture the confluence or calculate a loss based on it.
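To make the confidence-level idea concrete, here is a minimal sketch of a one-day historical VaR calculation in Python. The 500-day return series and the $100 million portfolio size are invented for illustration, not drawn from any firm's data.

    # A minimal sketch of one-day historical VaR: take a history of daily
    # portfolio returns and read off the loss at the chosen confidence level.
    # The return series here is randomly generated purely for illustration.
    import numpy as np

    rng = np.random.default_rng(seed=42)
    daily_returns = rng.normal(loc=0.0, scale=0.01, size=500)  # 500 hypothetical trading days
    portfolio_value = 100_000_000  # assumed $100 million portfolio

    def historical_var(returns, value, confidence=0.95):
        """Loss threshold expected to be exceeded on (1 - confidence) of days."""
        worst_case_return = np.percentile(returns, 100 * (1 - confidence))
        return -worst_case_return * value

    print(f"95% one-day VaR: ${historical_var(daily_returns, portfolio_value, 0.95):,.0f}")
    print(f"99% one-day VaR: ${historical_var(daily_returns, portfolio_value, 0.99):,.0f}")

At 95 percent confidence, the figure printed is the loss that should be exceeded on roughly one day in 20; at 99 percent, one day in 100.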

The measure grew out of a request in 1989 by the chairman of J.P. Morgan for a daily report on the risks facing the investment bank.

In 1998, J.P. Morgan spun off its risk management expertise into a for-profit firm called RiskMetrics Group.

RiskMetrics was acquired this year by MSCI, the global marketer of indices once known as Morgan Stanley Capital International.

Ron Papanek, who headed RiskMetrics Group’s RiskMetrics Labs unit, spent a decade at J.P. Morgan Securities. He is now head of MSCI’s Alternative Investments Business, focusing on hedge fund risk management.

STM: I guess the criticism that came up over the last two years with Value at Risk as a management tool is related to extreme volatility, which it doesn’t capture.

RP: VaR is a useful tool and a useful framework for looking at risk, but it is not the only framework and it is not the end-all. And if you assume that it will solve all your problems, you will fail.

STM: So what comes next?

RP: There are other tools that are also very valuable and, naturally, should be used alongside of VaR. And many asset managers spend much more of their time focusing on these other tools. These come in three particular areas: stress testing, counterparty risk and liquidity risk.

STM: Take them, in order.

RP: Stress testing itself is probably the main tool that traders and investors use alongside of Value-at-Risk.

STM: And this, of course, is where you run portfolios against different economic and political scenarios to see what the effect might be.

RP: At one level stress testing is just about sensitivity analysis, so just looking at your risk relative to specific shocks in the marketplace. In other words, what happens if equities drop 10 percent? What happens if interest rates rise 50 basis points?
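As a rough illustration of that kind of sensitivity analysis, the sketch below applies those two shocks to an assumed toy book: $60 million of equity exposure and a $40 million bond book with a five-year duration. The positions and sensitivities are hypothetical.

    # A minimal sketch of sensitivity-style stress testing on a toy portfolio
    # described only by an equity exposure and a bond duration (all assumed).
    equity_exposure = 60_000_000   # dollars exposed to equity moves
    bond_exposure = 40_000_000     # dollars exposed to rate moves
    bond_duration = 5.0            # assumed modified duration, in years

    def equity_shock_pnl(shock_pct):
        """P&L if equities move by shock_pct (e.g. -0.10 for a 10 percent drop)."""
        return equity_exposure * shock_pct

    def rate_shock_pnl(shock_bp):
        """Approximate bond P&L for a parallel rate move, via duration."""
        return -bond_exposure * bond_duration * (shock_bp / 10_000)

    print(f"Equities -10%: {equity_shock_pnl(-0.10):,.0f}")
    print(f"Rates +50bp:  {rate_shock_pnl(50):,.0f}")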

STM: How do you stress test for stuff you haven’t seen before, like 2008?

RP: The two areas I think that have gained a lot of focus recently are conditional stress testing and reverse stress testing.

Reverse stress testing is a term, interestingly enough, that was not even used at all prior to the fall of ’08. The basic concept behind a reverse stress test is identifying a particular loss threshold that, for example, might drive a firm out of business.

STM: So you look at conditions in reverse?

RP: It used to be, in a stress test, you would shock the market and look at how much money you lost. But the idea of a reverse stress test is you define how much money will kill you and then say, “How much of a market move can I handle?”

STM: So if you only have $2 billion in cash, what’s going to cause you to lose it all?

RP: Exactly. Or, for example, if you have $2 billion of cash but you also have $1.5 billion worth of debt, it’s only a 25 percent portfolio loss that would cause you to essentially be insolvent.
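The arithmetic in that example can be run as a reverse stress test: start from the loss that wipes out the equity cushion, then back out the market move that would produce it. The portfolio beta below is an assumed figure, not something from the interview.

    # A minimal sketch of the reverse-stress arithmetic: define the fatal loss
    # first, then translate it into a market move via an assumed portfolio beta.
    assets = 2_000_000_000   # $2 billion portfolio
    debt = 1_500_000_000     # $1.5 billion of liabilities

    fatal_loss = assets - debt              # the $0.5 billion equity cushion
    fatal_loss_pct = fatal_loss / assets    # a 25 percent portfolio loss means insolvency

    portfolio_beta = 1.2                    # assumed sensitivity to a broad market index
    fatal_market_move = fatal_loss_pct / portfolio_beta

    print(f"Loss that makes the firm insolvent: {fatal_loss_pct:.0%} of the portfolio")
    print(f"Implied market drop at beta {portfolio_beta}: {fatal_market_move:.1%}")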

STM: What about conditional stress testing?

RP: So, for example, let’s just assume that gold will get to $2,000 an ounce. A conditional stress test would say “How is the rest of my portfolio correlated with gold?” I’m not looking at just the direct sensitivity of my portfolio to gold, I’m not just shocking gold. I’m looking at how interest rates change based on historical correlations. I’m looking at how equities change. I’m looking at how all the assets in my portfolio might change given a particular shock or a macroeconomic event. The point is you’re not looking at just the direct impact, you’re looking at the indirect impact.
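A conditional stress test along those lines might look like the sketch below: shock gold, then move every other position according to its historical beta to gold. The return histories, the positions and the $1,400 starting gold price are all assumptions made for illustration.

    # A minimal sketch of a conditional stress test: shock one factor (gold)
    # and move the other positions by their historical betas to that factor.
    # The return series are synthetic; in practice they come from market data.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    gold_returns = rng.normal(0, 0.012, 500)
    asset_returns = {
        "10y_treasury": -0.2 * gold_returns + rng.normal(0, 0.004, 500),
        "equity_index":  0.3 * gold_returns + rng.normal(0, 0.010, 500),
    }
    positions = {"10y_treasury": 50_000_000, "equity_index": 30_000_000}  # assumed dollar exposures

    gold_shock = (2000 - 1400) / 1400   # gold rallying from an assumed $1,400 to $2,000

    total_pnl = 0.0
    for name, returns in asset_returns.items():
        cov = np.cov(returns, gold_returns)
        beta = cov[0, 1] / cov[1, 1]    # historical beta of the asset to gold
        pnl = positions[name] * beta * gold_shock
        total_pnl += pnl
        print(f"{name}: beta to gold {beta:+.2f}, conditional P&L {pnl:,.0f}")
    print(f"Total conditional P&L: {total_pnl:,.0f}")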

STM: What about counterparty risk, which people now worry about, since Lehman Brothers disappeared?

RP: There are two main challenges with counterparty risk. One is to have a methodology that will net your exposures, that will recognize which risks are offsetting and which are not.

And, two, you need to have the data to incorporate into the system. You need to know all your over-the-counter counterparties. You need to know whom you purchased from or sold to on every transaction.
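A bare-bones version of that netting step might look like the following sketch, which uses made-up trade data and assumes netting agreements are in place with each counterparty; only a positive net mark-to-market counts as current credit exposure.

    # A minimal sketch of netting OTC exposures by counterparty: positive and
    # negative mark-to-market values with the same counterparty offset each
    # other. The trades below are invented for illustration.
    from collections import defaultdict

    trades = [
        {"counterparty": "Bank A", "mtm": 12_000_000},
        {"counterparty": "Bank A", "mtm": -9_000_000},
        {"counterparty": "Bank B", "mtm":  4_000_000},
        {"counterparty": "Bank B", "mtm":  7_000_000},
    ]

    net_mtm = defaultdict(float)
    for trade in trades:
        net_mtm[trade["counterparty"]] += trade["mtm"]

    for cpty, mtm in net_mtm.items():
        # Only a positive net mark-to-market is money the counterparty owes you,
        # i.e. what you stand to lose if it defaults.
        exposure = max(mtm, 0.0)
        print(f"{cpty}: net MTM {mtm:,.0f}, current credit exposure {exposure:,.0f}")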

STM: And what their financial health is.

RP: The first step is to do the calculation without any consideration of their health; it’s just to identify what the risk is.

You may have a small amount of exposure with someone that’s very vulnerable. That’s not a problem. But you may have a big amount of exposure with someone that appears safe. That big amount of exposure with someone that appears safe is probably a bigger issue for you.

STM: How do you evaluate what the riskiness of that position is if you don’t know the health of the other party?

RP: You can run stress tests on counterparty exposure as well. The market may have some assumptions about the volatility of a particular instrument, but you would run a sensitivity analysis and take a look at how your potential counterparty exposure would change if you shocked the market in a particular way. So you would still drive those changes to identify what your overall counterparty exposure is.

STM: Then, there’s liquidity risk.

RP: There’s asset liquidity and there’s funding liquidity.

Asset liquidity refers specifically to your ability to liquidate your assets. In other words, what you would need to sell to convert your assets into cash.

Funding liquidity is a separate category; it has to do with whether you have access to capital to fund your liabilities or your ongoing operations. And if you have a line of credit and that gets drawn upon, your liquidity is decreasing.

STM: Which matters most?

RP: The liquidity that is probably most relevant for what we’re talking about here is asset liquidity.

STM: How do you assess that?

RP: You can look at trade-level data to identify, for example, the trade volume, open interest and bid-offer spread; all of that gives you some idea of your ability to liquidate an asset.

I would say that asset managers generally have a good subjective view or opinion on liquidity. Unfortunately, there are not a lot of good systems.

STM: Why is that?

RP: For two reasons: one, it’s a complicated mathematical problem. The other reason is that the instrument types with the biggest liquidity problems are those for which there is very little data. If I have some type of exotic derivative or some type of debt that I want to sell, there’s no market for it and essentially there’s no market data to be used to calculate the liquidity.
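For a sense of what those trade-level measures can produce, here is a minimal sketch that turns average daily volume and the bid-offer spread into a days-to-liquidate estimate and a spread cost. Every input is assumed for illustration.

    # A minimal sketch of simple asset-liquidity metrics: how long it would take
    # to sell a position at a given participation rate, and what crossing the
    # bid-offer spread would cost. All inputs are assumed.
    position_shares = 2_000_000      # shares held
    avg_daily_volume = 500_000       # shares traded per day
    participation_rate = 0.20        # assume you can be 20% of daily volume
    bid_offer_spread_bps = 25        # quoted spread, in basis points
    position_value = 80_000_000      # dollar value of the position

    days_to_liquidate = position_shares / (avg_daily_volume * participation_rate)
    spread_cost = position_value * (bid_offer_spread_bps / 10_000) / 2  # half-spread to exit

    print(f"Days to liquidate at {participation_rate:.0%} participation: {days_to_liquidate:.0f}")
    print(f"Estimated cost of crossing the spread: ${spread_cost:,.0f}")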

STM: All right, so let’s come back to VaR itself. What do you think has been learned about VaR over the last couple of years? Do you think it still should be the primary way for valuing risk in a portfolio of assets?

RP: First of all, VaR is one component of risk analysis. Whether it’s the primary, secondary or whatever it is, I’m not going to argue that point. That’s not the issue. What I would change is the user base. VaR is not the problem; the problem is the incorrect usage of VaR.

STM: What do you mean?

RP: Here’s a report that I got from a company that I won’t name, but it says “Our data shows there have already been nine exceedances of 99 percent normal Value-at-Risk on the Dow.” Okay, by definition if you have nine exceedances of 99 percent VaR (in a three-month period) it means you’re not calculating your VaR right.

If you’re using the data, you need to know when your model is not working properly. And by the way, it doesn’t necessarily mean the model is broken; maybe it just means the data is not incorporated correctly, or the model is not flexible enough, or you’re not changing parameters.

I mean, if I flipped a coin 50 times and I got heads 50 times in a row, I’m gonna look at the other side of the coin.

STM: [LAUGHS]

RP: I’m willing to bet there’s two heads.

And so the problem with VaR has to do with the usage, not with the framework.
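Papanek's point about the exceedances can be checked with simple arithmetic. Assuming roughly 63 trading days in three months (an assumption, not a figure from the interview), exceedances of a correctly calibrated 99 percent VaR should arrive like a 1-percent-probability coin flip each day; the sketch below shows how unlikely nine or more exceedances would then be.

    # A rough check of the exceedance claim: under a correct 99% VaR model,
    # daily exceedances are (roughly) independent events with 1% probability.
    from math import comb

    days = 63         # assumed number of trading days in three months
    p = 0.01          # expected exceedance rate for 99% VaR
    observed = 9

    # Binomial tail: probability of nine or more exceedances if the model is right
    prob = sum(comb(days, k) * p**k * (1 - p)**(days - k) for k in range(observed, days + 1))
    print(f"Expected exceedances: {days * p:.2f}")
    print(f"P(>= {observed} exceedances | correct model): {prob:.2e}")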

STM: So in effect, I guess your answer to the question, “What comes after VaR?” is other tools. Life goes on with VaR, just with a broader set of tools.

RP: Value at Risk is a useful analytical framework. But let’s not try to get too much out of it; let’s get out of it what we can get out of it. We should also incorporate other risk measures.
