By: John Doe
Hello, readers! Below is a new column with the goal of educating Pioneer readers in matters of investment. This column will cover topics that translate well from math and science classes at The Cooper Union to real applications in financial markets.
Risk is a difficult thing to define. For most people, the risk of a decision is its possible negative impact. In terms of investment, risk is the total loss possible from that decision. Some people use volatility as a measure of risk, where volatility is the standard deviation (or variance) of an investment's returns.
The standard deviation of a data set is well known to all engineering students at The Cooper Union. If we look at the weekly closing prices of Apple (NASDAQ: AAPL) and Microsoft (NASDAQ: MSFT) over the past month and tabulate the average and standard deviation (see Figure 1), we can see that by this definition of risk, AAPL is a “riskier” investment than MSFT.
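The tabulation in Figure 1 can be sketched in a few lines of Python. The prices below are made-up placeholders, not the actual Figure 1 numbers; the point is only to show the mean and standard deviation calculation.

```python
import statistics

# Hypothetical weekly closing prices (illustrative only; these are
# NOT the actual figures from Figure 1).
aapl_prices = [148.0, 152.5, 155.0, 160.5]
msft_prices = [250.0, 249.0, 251.5, 250.5]

# Volatility, under this first definition of risk, is just the sample
# standard deviation of the prices.
aapl_vol = statistics.stdev(aapl_prices)
msft_vol = statistics.stdev(msft_prices)

print(f"AAPL: mean={statistics.mean(aapl_prices):.2f}, stdev={aapl_vol:.2f}")
print(f"MSFT: mean={statistics.mean(msft_prices):.2f}, stdev={msft_vol:.2f}")
```

With these invented numbers, AAPL's prices are more spread out than MSFT's, so its standard deviation comes out larger, matching the "AAPL is riskier" reading of Figure 1.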
However, this definition of risk has limited utility. For example, look at the prices for AAPL and MSFT: while the AAPL prices continue to rise, MSFT hovers around its average, neither rising nor falling. Investors are not distressed by rising prices; in fact, they welcome them! So how can we define risk in a way that is more useful for evaluating investment options?
In 1987, there was a particularly nasty stock market crash in which overvalued investments corrected to very low prices. Most investors had no way of numerically estimating the possible losses from investing in the bubble, and they lost a lot of money. Afterward, investment banks and financial institutions sought statistical ways of determining, to a percentage certainty, how much money could be lost in a crash. The resulting method is very simple to understand. Say we aggregate the daily returns of AAPL and MSFT into buckets and plot a histogram of how many days each bucket of returns occurred over the last five years.
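The bucketing step can be sketched as follows. Since we don't reproduce five years of real price data here, the returns are simulated draws standing in for actual daily returns; the bucket width of 0.5 percentage points is an arbitrary choice for illustration.

```python
import random
from collections import Counter

random.seed(0)
# Simulated daily returns in percent -- stand-ins for roughly five
# trading years (~1250 days) of real AAPL or MSFT data.
returns = [random.gauss(0.05, 1.5) for _ in range(1250)]

# Aggregate returns into buckets 0.5 percentage points wide by
# rounding each return to the nearest bucket center.
bucket_width = 0.5
buckets = Counter(round(r / bucket_width) * bucket_width for r in returns)

# Print a crude text histogram from the worst bucket to the best.
for level in sorted(buckets):
    print(f"{level:+6.2f}%  {'#' * (buckets[level] // 5)}")
```

Each row counts how many days produced a return near that level; the resulting shape is the bell-like curve described in Figures 2 and 3.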
Figures 2 and 3 show the distributions of daily returns for AAPL and MSFT, which are approximately normal; Cooper students are very familiar with normal distributions from their math and science classes. Since we order the returns from worst to best, we want to look at the left tail of the distribution. This allows us to make a very useful statement: with 99.92% confidence, a daily loss on an investment in AAPL will not exceed 13.2%, and on MSFT, 12.1%. Now it is very clear that AAPL is riskier than MSFT at the 99.92% confidence level.
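The left-tail reading above can be sketched in code. Again using simulated returns in place of real data, this computes a historical VaR (the return at the chosen quantile of the sorted sample) and, for comparison, a parametric VaR that assumes the returns are normal; the 99% confidence level is an illustrative choice, not the article's 99.92% figure.

```python
import random
import statistics

random.seed(1)
# Simulated daily returns in percent, sorted worst to best --
# a stand-in for real five-year price history.
returns = sorted(random.gauss(0.05, 1.5) for _ in range(1250))

confidence = 0.99

# Historical VaR: the return at the (1 - confidence) quantile of the
# sorted sample. Losses worse than this occurred only ~1% of the time.
index = int((1 - confidence) * len(returns))
hist_var = returns[index]

# Parametric VaR: the same quantile of a normal distribution fitted
# to the sample mean and standard deviation.
mu = statistics.mean(returns)
sigma = statistics.stdev(returns)
param_var = statistics.NormalDist(mu, sigma).inv_cdf(1 - confidence)

print(f"historical 99% VaR: {hist_var:.2f}%")
print(f"parametric 99% VaR: {param_var:.2f}%")
```

When the sample really is close to normal, the two estimates agree; when the real tails are fatter than normal, the historical number is the more honest of the two.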
This method has its faults: it rests on two basic assumptions, (1) that past performance is predictive of future performance and (2) that the distribution of returns is normal. Both are well-known statistical pitfalls in finance, so a value at risk (VaR) estimate like the one above must be considered alongside other measures of risk. There are ways to make VaR more effective, which will be covered in the next installment of “Buy High, Sell Low.” Happy investing, everyone! ◊