
Basic Time-Series Analysis: Modeling Volatility (GARCH)

This post is the third in a series explaining Basic Time Series Analysis. Click the link to check out the first post, which focused on stationarity versus non-stationarity, and to find a list of other topics covered. As a reminder, this post is intended to be a very applied example of how to use certain tests and models in time-series analysis, either to get someone started learning about time-series techniques or to provide a big-picture perspective to someone taking a formal time-series class where the stats are coming fast and furious. As in the first post, the code producing these examples is provided for those who want to follow along in R. If you aren't into R, just ignore the code blocks and the intuition will follow.

In this post we will learn a standard technique for modeling volatility in a series of prices, the generalized autoregressive conditional heteroskedasticity (GARCH) model. We build on the previous post, Basic Time-Series Analysis, Single Equation Models (ARIMA), where we learned to use recent returns (AR) and recent residuals (MA) to explain price returns.

The idea of the GARCH model of price volatility is to use recent realizations of the error structure to predict future realizations of the error structure. Put more simply, we often see clustering in periods of high or low volatility, so we can exploit the recent volatility to predict volatility in the near future.

Continuing the examples in previous posts, we will use SPY prices to illustrate volatility modeling. The plot below shows SPY price returns from 2007 through 2017.

# If you are following along in R, uncomment the next lines and run once to install the required packages 
# install.packages('ggplot2')
# install.packages('xts')
# install.packages('quantmod')
# install.packages('broom')
# install.packages('rugarch')
# install.packages('tibble')
library(quantmod)
library(ggplot2)
library(broom)
getSymbols(c('SPY'))
## [1] "SPY"
SPY              <- SPY$SPY.Adjusted
SPYRet           <- log(SPY) - log(lag(SPY))
SPYRet_xts       <- SPYRet
colnames(SPYRet) <- c('SPY')
SPYRet           <- tidy(SPYRet)

ggplot(SPYRet, aes(x = index, y = value, color = series)) + 
  geom_line() + 
  theme_bw() +
  labs(title = "SPY Returns Returns from 2007 to 2017", x = "")

As a reminder, the over-arching goal of this and the previous posts has been to model the changing mean and variance of the price return series.

\[r^{SPY}_t \sim \mathcal{N}(\mu_t, \sigma_t^2)\]

The previous post used the ARIMA model to give structure to the changing mean of the series of price returns. Since the ARIMA model assumed constant variance, and the figure of SPY returns clearly shows changing variance over time, this is something that can be improved upon, and the GARCH model is one way of accomplishing it.

Next, we will go through two ways that are commonly used to visualize the changing variance of returns. These are plotting the absolute value of price returns,

\[\left| r^{SPY}_t \right|, \]

or the square of price returns,

\[\left( r^{SPY}_t \right)^2.\]

Both cases make sense since the variance is always a positive number and is driven by deviations from the mean. This only works, of course, if we know that the return series has mean 0, \(r^{SPY}_t = 0 + \epsilon_t\). If this is true, then the average of squared returns is the sample variance,

\[\hat{\sigma}^2 = \frac{1}{n}\sum_{t=1}^{n} \left( r^{SPY}_t \right)^2.\]

In price data, percent returns almost always have a mean very near 0. If the mean return is non-zero, then we can just plot \(\left( r^{SPY}_t - \mu\right)^2\), or use the squared errors from an ARIMA model. Since we did not find very strong ARMA effects in the previous post, and especially since the intercept (mean) was essentially zero, we can get a good sense of the daily variance of SPY price returns by simply plotting daily squared returns or the daily absolute value of returns.
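
Since the mean daily return is so close to zero, the average of squared returns and the usual sample variance should be nearly identical. The quick check below is a minimal sketch using the SPYRet data frame created above.

# Average squared return versus the usual sample variance
# (na.rm = TRUE drops the NA created by differencing the first observation)
mean(SPYRet$value^2, na.rm = TRUE)
var(SPYRet$value, na.rm = TRUE)
mean(SPYRet$value, na.rm = TRUE)  # the mean itself is tiny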

library(tibble)
SPYRet <- add_column(SPYRet, SquaredReturns = SPYRet$value^2, AbsoluteReturns = abs(SPYRet$value))

ggplot(SPYRet, aes(x = index, y = AbsoluteReturns, color = series)) + 
  geom_line() + 
  theme_bw() +
  labs(title = "SPY Absolute Value of Returns from 2007 to 2017", x = "")

ggplot(SPYRet, aes(x = index, y = SquaredReturns, color = series)) + 
  geom_line() + 
  theme_bw() +
  labs(title = "SPY Squared Returns from 2007 to 2017", x = "")

The GARCH Model of Volatility

The plain vanilla (there are sooo many variations of the GARCH model) GARCH model is as follows:

\[\begin{align}
r_t &=\mu+\varepsilon_t \\
\varepsilon_t &= \sigma_t z_t \\
\sigma_t^2 &=\omega + \alpha_1\varepsilon_{t-1}^2 + \beta_1\sigma_{t-1}^2\\
z_t &\sim \mathcal{N}(0,1).
\end{align}\]

The first line is an equation to model the mean. As presented here there are no ARMA effects, but they could easily be thrown in if you find they are important. There is only an intercept and an error term. The next three lines put more structure on the error term, but it can be confusing what is going on here.

I find the second line particularly confusing. Why do we multiply two things to get \(\epsilon_t\) rather than add like you would normally see in a regression error term?

To see this, it is important to keep the goal in mind here. We are looking for a model that will give us a changing variance of \(r^{SPY}_t\). Namely, to find a model for \(r^{SPY}_t\) that has the following basic form:

\[r^{SPY}_t \sim \mathcal{N}(\mu, \sigma_t^2)\]

So if the basic return model is \(r^{SPY}_t =\mu+\varepsilon_t\), it better be the case that \(Var(\mu+\varepsilon_t) = \sigma_t^2\).

The next steps rely on the properties of the variance of random variables. Specifically, if \(a\) and \(b\) are constants and \(X\) is a random variable, then \(Var(a + bX) = b^2Var(X)\).
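
If that variance property is not familiar, a quick simulation makes it concrete. This is just an illustrative sketch with arbitrary values of \(a\) and \(b\), not part of the SPY analysis.

# Numerical check that Var(a + bX) = b^2 * Var(X)
set.seed(42)
x <- rnorm(100000, mean = 0, sd = 2)  # Var(X) is 4
a <- 5
b <- 3
var(a + b * x)   # approximately 36
b^2 * var(x)     # the same quantity via the variance property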

\[\begin{align}
Var(r^{SPY}_t) &= Var(\mu+\varepsilon_t) \\
&= Var(\varepsilon_t)
\end{align}\]

So if we come up with a model for \(\varepsilon_t\) whose variance depends on recent volatility, big when recent volatility is big and small when recent volatility is small, we will have created a model of conditional heteroskedasticity.

Consider the second line in the GARCH model.

\[\varepsilon_t = \sigma_t z_t\]

Notice that, conditional on the information available at time \(t\), \(\sigma_t\) is a constant: it is just a linear combination of past \(\sigma^2\)'s and past \(\epsilon^2\)'s, so it is known at time \(t\).

\[\begin{align}
Var(r^{SPY}_t) &= Var(\mu+\varepsilon_t) \\
&= Var(\varepsilon_t) \text{ since } \mu \text{ is a constant}\\
&= Var(\sigma_t z_t) \\
&= \sigma_t^2 Var(z_t) \text{ since } \sigma_t \text{ is known at time } t\\
&= \sigma_t^2
\end{align}\]

The last line follows since \(z_t \sim \mathcal{N}(0, 1)\), so \(Var(z_t) = 1\). So in the second equation of the GARCH model, multiplying \(\sigma_t\) and \(z_t\) takes advantage of the properties of variance to get just what we wanted: a conditional variance of \(r^{SPY}_t\) that is big when recent volatility is big and small when recent volatility is small. That last part follows from how \(\sigma^2_t\) is constructed in the third line of the GARCH model, \(\sigma_t^2 =\omega + \alpha_1\varepsilon_{t-1}^2 + \beta_1\sigma_{t-1}^2\).
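
To see that recursion in action, here is a small simulation sketch of a GARCH(1,1) process. The parameter values are made up purely for illustration, but the simulated returns display the same volatility clustering we saw in the SPY plots above.

# Simulate a GARCH(1,1) process by hand to see volatility clustering
set.seed(123)
n      <- 2000
omega  <- 0.000002  # illustrative parameter values, not estimates
alpha1 <- 0.10
beta1  <- 0.85
z      <- rnorm(n)
sigma2 <- rep(omega / (1 - alpha1 - beta1), n)  # start at the unconditional variance
eps    <- numeric(n)
for (t in 2:n) {
  sigma2[t] <- omega + alpha1 * eps[t - 1]^2 + beta1 * sigma2[t - 1]
  eps[t]    <- sqrt(sigma2[t]) * z[t]
}
plot(eps, type = "l", main = "Simulated GARCH(1,1) returns")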

Estimating a GARCH Model

The code below uses the rugarch R package to estimate a GARCH(p = 1, q = 1) model. Note that the p and q denote the number of lags on the \(\sigma^2_t\) and \(\epsilon^2_t\) terms, respectively.

The first command, ugarchspec, specifies a plain vanilla GARCH with model = "sGARCH" and garchOrder = c(1, 1). It asks for an ARMA(1, 1) mean model with armaOrder = c(1, 1), include.mean = TRUE, and for the \(\mathcal{N}(0, 1)\) distribution on the \(z_t\)'s with distribution.model = "norm". The second command, ugarchfit, fits the model. The output is printed below the code.

The main model output is displayed under 'Optimal Parameters'. The mu, ar1, and ma1 coefficients are from the mean model (ARMA(1, 1)), and the omega, alpha1, and beta1 are coefficient estimates from the \(\sigma_t^2 =\omega + \alpha_1\varepsilon_{t-1}^2 + \beta_1\sigma_{t-1}^2\) equation of the GARCH model. Under 'Robust Standard Errors' are the same coefficient estimates, but the standard errors are widened to account for the possibility that our distributional assumption is wrong. Notice that the estimates are the same, but the p-values are all bigger, and in the case of alpha1 the coefficient is no longer statistically significant at the 5% level with the robust standard errors.

library(rugarch)

garch11        <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)), 
                            mean.model = list(armaOrder = c(1, 1), include.mean = TRUE), 
                            distribution.model = "norm")

garchfit       <- ugarchfit(spec = garch11, data = SPYRet_xts["2007-02-01/"], solver = "hybrid")
garchfit
## 
## *---------------------------------*
## *          GARCH Model Fit        *
## *---------------------------------*
## 
## Conditional Variance Dynamics    
## -----------------------------------
## GARCH Model  : sGARCH(1,1)
## Mean Model   : ARFIMA(1,0,1)
## Distribution : norm 
## 
## Optimal Parameters
## ------------------------------------
##         Estimate  Std. Error  t value Pr(>|t|)
## mu      0.000745    0.000124   6.0160 0.000000
## ar1     0.653327    0.175617   3.7202 0.000199
## ma1    -0.709810    0.163236  -4.3484 0.000014
## omega   0.000002    0.000001   1.8630 0.062466
## alpha1  0.109014    0.013558   8.0404 0.000000
## beta1   0.876309    0.014097  62.1623 0.000000
## 
## Robust Standard Errors:
##         Estimate  Std. Error  t value Pr(>|t|)
## mu      0.000745    0.000116  6.41915 0.000000
## ar1     0.653327    0.247939  2.63503 0.008413
## ma1    -0.709810    0.231140 -3.07091 0.002134
## omega   0.000002    0.000007  0.27245 0.785276
## alpha1  0.109014    0.059249  1.83993 0.065778
## beta1   0.876309    0.073545 11.91524 0.000000
## 
## LogLikelihood : 9027.714 
## 
## Information Criteria
## ------------------------------------
##                     
## Akaike       -6.5186
## Bayes        -6.5057
## Shibata      -6.5186
## Hannan-Quinn -6.5139
## 
## Weighted Ljung-Box Test on Standardized Residuals
## ------------------------------------
##                         statistic p-value
## Lag[1]                    0.05765  0.8102
## Lag[2*(p+q)+(p+q)-1][5]   2.09322  0.9380
## Lag[4*(p+q)+(p+q)-1][9]   3.69477  0.7602
## d.o.f=2
## H0 : No serial correlation
## 
## Weighted Ljung-Box Test on Standardized Squared Residuals
## ------------------------------------
##                         statistic p-value
## Lag[1]                     0.8359 0.36058
## Lag[2*(p+q)+(p+q)-1][5]    6.1731 0.08187
## Lag[4*(p+q)+(p+q)-1][9]    7.2200 0.18143
## d.o.f=2
## 
## Weighted ARCH LM Tests
## ------------------------------------
##             Statistic Shape Scale P-Value
## ARCH Lag[3]    0.2890 0.500 2.000  0.5909
## ARCH Lag[5]    0.6501 1.440 1.667  0.8386
## ARCH Lag[7]    0.8684 2.315 1.543  0.9340
## 
## Nyblom stability test
## ------------------------------------
## Joint Statistic:  15.2107
## Individual Statistics:              
## mu     0.07367
## ar1    0.05754
## ma1    0.04233
## omega  1.51080
## alpha1 1.16158
## beta1  1.41582
## 
## Asymptotic Critical Values (10% 5% 1%)
## Joint Statistic:          1.49 1.68 2.12
## Individual Statistic:     0.35 0.47 0.75
## 
## Sign Bias Test
## ------------------------------------
##                    t-value      prob sig
## Sign Bias           3.2302 1.252e-03 ***
## Negative Sign Bias  0.9719 3.312e-01    
## Positive Sign Bias  1.3538 1.759e-01    
## Joint Effect       22.1174 6.166e-05 ***
## 
## 
## Adjusted Pearson Goodness-of-Fit Test:
## ------------------------------------
##   group statistic p-value(g-1)
## 1    20     156.5    1.192e-23
## 2    30     179.4    1.301e-23
## 3    40     208.4    5.190e-25
## 4    50     212.4    3.129e-22
## 
## 
## Elapsed time : 0.4521639
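
If you want to work with these results programmatically rather than read them off the printout, rugarch provides extractor methods for the fit object. The snippet below is a minimal sketch using the garchfit object created above: coef() pulls the estimated parameters and sigma() returns the fitted conditional standard deviations.

# Estimated coefficients (mu, ar1, ma1, omega, alpha1, beta1)
coef(garchfit)

# Fitted conditional standard deviations (sigma_t) as a time series
fitted_sigma <- sigma(garchfit)
plot(fitted_sigma, main = "Fitted conditional volatility, GARCH(1,1)")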

Now let's use rugarch's standard functionality to produce rolling one-step-ahead forecasts of \(\sigma_t\) from this estimated model and plot them against \(\left| r^{SPY}_t \right|\).

spec           <- getspec(garchfit)
setfixed(spec) <- as.list(coef(garchfit))
garchforecast1 <- ugarchforecast(spec, n.ahead = 1, n.roll = 2499, data = SPYRet_xts["2007-02-01/"], out.sample = 2500)

plot(garchforecast1, which = 4)

It looks to me like this model does a pretty good job of sensing how long a volatility spike will remain elevated, or rather of modeling the path of a volatility spike back down to its long-run mean level. Since all econometric models use past values to predict current values, it cannot foresee the initial spike up in volatility.
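
If you would rather work with the forecasted volatilities directly than rely on the built-in plot method, the rolling one-step-ahead sigmas can be extracted from the forecast object. The snippet below is a minimal sketch assuming the garchforecast1 object created above.

# Forecasted conditional standard deviations from the rolling forecast;
# with n.ahead = 1 and n.roll = 2499 this is a matrix with one row and
# one column per roll date
sigma_forecasts <- sigma(garchforecast1)
dim(sigma_forecasts)
head(t(sigma_forecasts))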

That’s It!

Now you know the GARCH model for forecasting volatility. In the next post we will cover the multi-equation models, VAR and VECM, that are good for estimating the relationship among many variables.
