Yan Ping Zhong

PhD Thesis Title: Validating Risk Measurement Models for Market Risk Management.



Supervisor: Professor John Cotter

External Examiner: Professor Kevin Dowd, CASS Business School

Abstract

This dissertation investigates a number of different but related issues in validating risk measurement models for market risk management. Firstly, we evaluate the effectiveness of backtesting methodologies using the standard binomial approach together with the interval forecast, density forecast and probability forecast backtests. The comparison is conducted for three risk measures: value-at-risk, expected shortfall and spectral risk measures. Our goal is to analyse the ability of the various backtesting methodologies to gauge the accuracy of risk models; in addition, we test how the choice of distribution and volatility specification affects backtesting results. Secondly, we investigate the performance of risk models in measuring extreme tail risks during the recent crisis, focusing on two issues: (1) the appropriateness and robustness of risk measures in capturing extreme risks, and (2) the suitability and reliability of Extreme Value Theory in modelling extreme tail events. Using the FTSE 100 index futures and the WTI crude oil futures from Jan 1998 to June 2010 as proxies, with a backtesting sample from Jan 2008 to June 2010, we assess the appropriateness of risk models and their ability to capture a portfolio’s risk exposures during the recent crisis. Finally, we extend the previous study by investigating the model risk associated with the omission or misspecification of risk factors in the underlying process and its impact on the performance of a risk measurement model. We pay special attention to testing whether nonlinearity, stochastic volatility and jumps are essential components of the underlying interest rate dynamics. We also investigate whether the risk factors found to be important for in-sample performance retain their materiality for out-of-sample forecasts, and whether the models that perform best in-sample also produce the best out-of-sample forecasts.
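For illustration, the standard binomial backtest referred to in the abstract, commonly implemented as Kupiec's proportion-of-failures likelihood-ratio test, can be sketched as below. This is a minimal sketch, not code from the thesis: the function name, the sample of 250 observations, the 1% coverage level and the 5% critical value are all illustrative assumptions.

```python
# Hypothetical sketch of a binomial (Kupiec proportion-of-failures)
# VaR backtest: count VaR exceedances over a sample and test whether
# the observed exceedance rate matches the model's coverage level.
import math

def kupiec_pof_statistic(exceedances: int, n_obs: int, coverage: float) -> float:
    """Likelihood-ratio statistic for the proportion-of-failures test.

    Under the null hypothesis of correct VaR coverage, the statistic
    is asymptotically chi-squared with one degree of freedom.
    """
    x, n, p = exceedances, n_obs, coverage
    phat = x / n  # observed exceedance rate

    def loglik(q: float) -> float:
        # Binomial log-likelihood; guard against log(0) at the boundary.
        if q in (0.0, 1.0):
            return 0.0 if (x == 0 or x == n) else float("-inf")
        return (n - x) * math.log(1.0 - q) + x * math.log(q)

    return -2.0 * (loglik(p) - loglik(phat))

# Illustrative numbers: 250 daily observations, 99% VaR (1% coverage),
# 6 observed exceedances.
lr = kupiec_pof_statistic(6, 250, 0.01)
# Reject correct coverage if lr exceeds the chi-squared(1) critical
# value at the 5% level (approximately 3.84).
reject = lr > 3.84
```

The interval, density and probability forecast backtests compared in the thesis extend this idea by also testing the independence of exceedances and the calibration of the full forecast distribution, rather than only the unconditional exceedance frequency.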
