**Approximate Bayesian Computation with Indirect Summary Statistics**

Approximate Bayesian Computation (ABC) has become a popular estimation method for situations where the likelihood function of a model is unavailable. In contrast to classical Bayesian inference, this method does not require the likelihood function, but instead relies on a distance function measuring the discrepancy between some empirical summary statistics and their simulation-based counterparts. An open question is the selection of these statistics, particularly regarding their sufficiency properties. In this paper we propose an indirect approach with summary statistics based on statistics of a suitably chosen auxiliary model. We show sufficiency of these statistics for Indirect ABC methods based on parameter estimates (ABC-IP), likelihood functions (ABC-IL) and scores (ABC-IS) of that auxiliary model. For score-based summary statistics we propose a weighting scheme for the different summary statistics. We illustrate the performance of these Indirect ABC methods in a simulation study and compare them to a standard ABC approach. We furthermore apply the ABC-IS method to the problem of estimating a continuous-time stochastic volatility model based on OU processes.
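
The basic mechanism shared by all ABC variants can be sketched in a few lines. The toy example below (our own illustration, not the paper's indirect method) infers the mean of a Gaussian sample by rejection sampling: parameter draws are accepted whenever their simulated summary statistic lies within a tolerance of the observed one. All parameter values and the choice of the sample mean as summary statistic are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: infer the mean theta of a N(theta, 1) sample of size 100.
theta_true = 2.0
y_obs = rng.normal(theta_true, 1.0, size=100)
s_obs = y_obs.mean()                               # observed summary statistic

def abc_rejection(s_obs, n_draws=20_000, eps=0.05):
    """Basic ABC rejection sampling with a uniform prior on [0, 5]."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 5.0)              # draw from the prior
        y_sim = rng.normal(theta, 1.0, size=100)   # simulate from the model
        if abs(y_sim.mean() - s_obs) < eps:        # distance between summaries
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection(s_obs)
print(post.mean())   # approximate posterior mean, close to theta_true
```

Indirect ABC replaces the ad hoc summary statistic used here with statistics of an auxiliary model, e.g. its parameter estimates, likelihood, or scores.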

[submitted]

**Spurious Long-Range Dependence in Realized Volatilities? A Panel Examination**

Modeling volatility in a realistic manner is essential for many financial applications. One particularly relevant stylized fact of volatility proxies is the slow decay of their autocorrelation function. This pattern is exhibited both by genuine long-memory processes and, spuriously, by short-memory processes with level shifts; since the two imply very different strategies in practice, e.g. in forecasting, it is important to distinguish between the two sources of persistence. In this paper we propose a panel approach for analyzing the long-memory property. We build on the regression-based, lag-augmented version of the LM test for fractional integration and take into account all possible break points. We apply the test to the long-memory behavior of a high-frequency-based measure of volatility for all 30 components of the Dow Jones Industrial Average between 2006 and 2008 and find that the high persistence is an artefact of mean shifts.
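
For intuition, the regression idea underlying the LM test can be sketched as follows. This is a deliberately stylized, unaugmented version on a demeaned series (lag augmentation, break points, and the panel dimension are all omitted), so every detail below is illustrative rather than the paper's test: under the null of no fractional integration, the t-statistic of the slope in a regression of the series on a harmonically weighted sum of its own lags is asymptotically standard normal.

```python
import numpy as np

def lm_frac_test(y):
    """Stylized regression-based LM test of d = 0: regress the demeaned
    series e_t on x*_t = sum_{j=1}^{t} e_{t-j} / j and return the slope's
    t-statistic (approximately N(0, 1) under the white-noise null)."""
    e = y - y.mean()
    T = len(e)
    xstar = np.zeros(T)
    for t in range(1, T):
        j = np.arange(1, t + 1)
        xstar[t] = np.sum(e[t - j] / j)   # harmonically weighted lags
    X, Y = xstar[1:], e[1:]
    beta = (X @ Y) / (X @ X)
    resid = Y - beta * X
    se = np.sqrt((resid @ resid) / (len(Y) - 1) / (X @ X))
    return beta / se

rng = np.random.default_rng(1)
t_stat = lm_frac_test(rng.normal(size=500))   # white noise: null is true
print(t_stat)
```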

[submitted]

**Estimating Residual Hedging Risk With Least-Squares Monte Carlo**

Frequently, dynamic hedging strategies minimizing risk exposure are not given in closed form, but need to be approximated numerically. This makes it difficult to estimate residual hedging risk, also called basis risk, when only imperfect hedging instruments are at hand. We propose an easy-to-implement and computationally efficient least-squares Monte Carlo algorithm to estimate residual hedging risk. The algorithm approximates the variance-minimal hedging strategy within general diffusion models. Moreover, the algorithm produces both high-biased and low-biased estimators for the residual hedging error variance, thus providing an intrinsic criterion for the quality of the approximation. In a number of examples we show that the algorithm delivers accurate hedging error characteristics within seconds.
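
The workhorse of any least-squares Monte Carlo scheme is the regression-based approximation of conditional expectations across simulated paths. The minimal sketch below (our own illustration, not the paper's algorithm) regresses a European call payoff on a polynomial basis in an intermediate state of a driftless geometric Brownian motion; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate paths of the underlying and a terminal claim H = max(S_T - K, 0).
n_paths, dt, K = 50_000, 0.5, 100.0
s0, sigma = 100.0, 0.2
z1, z2 = rng.normal(size=n_paths), rng.normal(size=n_paths)
S_half = s0 * np.exp(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z1)
S_T = S_half * np.exp(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z2)
H = np.maximum(S_T - K, 0.0)

# LSMC step: approximate E[H | S_half] by regressing the realized payoff
# on a cubic polynomial basis in the (scaled) state variable.
basis = np.vander(S_half / s0, 4)      # columns x^3, x^2, x, 1
coef, *_ = np.linalg.lstsq(basis, H, rcond=None)
cond_exp = basis @ coef                # fitted value function at t = 0.5

print(cond_exp.mean(), H.mean())       # both estimate the claim's value E[H]
```

The paper's algorithm uses this type of regression to approximate the variance-minimal hedge and, from it, the residual hedging error variance.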

International Journal of Theoretical and Applied Finance, 2014, Volume 17, Issue 7

**Beyond the Sharpe Ratio: An Application of the Aumann-Serrano Index to Performance Measurement**

We propose a performance measure that generalizes the Sharpe ratio. The new performance measure is monotone with respect to stochastic dominance and consistently accounts for mean, variance and higher moments of the return distribution. It is equivalent to the Sharpe ratio if returns are normally distributed. Moreover, the two performance measures are asymptotically equivalent as the underlying distributions converge to the normal distribution. We suggest a parametric and a non-parametric estimator for the new performance measure and provide an empirical illustration using mutual funds data.
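
The construction builds on the Aumann-Serrano index of riskiness: for a gamble g with positive mean and some loss probability, the index is the unique positive root R of E[exp(-g/R)] = 1. A simple non-parametric estimate replaces the expectation by a sample average and solves for R numerically. In the sketch below (our own illustration; the return parameters and the bisection bracket are assumptions) the estimate recovers the known normal-case value sigma^2 / (2 mu).

```python
import numpy as np

def aumann_serrano_riskiness(g, lo=0.01, hi=1e4, iters=200):
    """Nonparametric Aumann-Serrano riskiness: the unique positive root R
    of E[exp(-g / R)] = 1, with the expectation replaced by a sample mean
    and the root found by bisection (bracket [lo, hi] is an assumption)."""
    g = np.asarray(g, dtype=float)
    assert g.mean() > 0 and (g < 0).any(), "need E[g] > 0 and P(g < 0) > 0"
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.mean(np.exp(-g / mid)) > 1.0:
            lo = mid   # mid is still below the riskiness index
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
returns = rng.normal(0.05, 0.2, size=100_000)   # hypothetical excess returns
R = aumann_serrano_riskiness(returns)
print(R)   # close to sigma^2 / (2 mu) = 0.4, the normal-distribution value
```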

Journal of Banking & Finance, 2012, Volume 36, Issue 8, 2274-2284

**Futures Cross-hedging with a Stationary Basis**

When managing risk, frequently only imperfect hedging instruments are at hand. We show how to optimally cross-hedge risk when the spread between the hedging instrument and the risk is stationary. At the short end, the optimal hedge ratio is close to the cross-correlation of the log returns, whereas at the long end, it is optimal to fully hedge the position. For linear risk positions we derive explicit formulas for the hedge error, and for non-linear positions we show how to obtain numerically efficient estimates. Finally, we demonstrate that even in cases with no clear-cut decision concerning the stationarity of the spread it is better to allow for mean reversion of the spread rather than to neglect it.
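
At the short end, the relevant benchmark is the one-period minimum-variance hedge ratio, i.e. the OLS slope of position returns on hedge-instrument returns, h* = Cov(r_pos, r_hedge) / Var(r_hedge). A minimal sketch with simulated, purely hypothetical return series:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated log returns of the risk position and an imperfect hedge instrument.
n = 10_000
r_hedge = rng.normal(0.0, 0.01, size=n)
r_pos = 0.8 * r_hedge + rng.normal(0.0, 0.006, size=n)   # correlated exposure

# Minimum-variance hedge ratio: h* = Cov(r_pos, r_hedge) / Var(r_hedge).
h = np.cov(r_pos, r_hedge)[0, 1] / np.var(r_hedge, ddof=1)
residual_var = np.var(r_pos - h * r_hedge, ddof=1)       # basis risk left over
print(h, residual_var)   # h close to the true exposure of 0.8
```

With a stationary spread, the paper shows the optimal ratio moves from roughly this cross-correlation-driven value at short horizons towards a full hedge at long horizons.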

Journal of Financial and Quantitative Analysis, 2012, Volume 47, Issue 6, 1361-1395

**An operational interpretation and existence of the Aumann-Serrano index of riskiness**

In this note we provide an operational interpretation of the economic index of riskiness of Aumann and Serrano (2008) and discuss its existence in the case of non-finite gambles.

Economics Letters, 2012, Volume 114, Issue 3, 265-267

**A Multivariate Ornstein-Uhlenbeck Type Stochastic Volatility Model**

Using positive semidefinite processes of Ornstein-Uhlenbeck type, we introduce a multivariate Ornstein-Uhlenbeck (OU) type stochastic volatility model. We derive many important statistical and probabilistic properties, e.g. the complete second-order structure and a state-space representation. Notably, many of our results are shown to be valid for the more general class of multivariate stochastic volatility models driven by a stationary and square-integrable covariance matrix process. For the OU type stochastic volatility model, our results enable estimation and filtering of the volatility, which we demonstrate in a short empirical illustration.
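
A univariate special case conveys the flavour of the model: the squared volatility follows an OU type equation driven by a subordinator, so it mean-reverts between positive jumps and stays non-negative by construction. The sketch below (our own illustration with hypothetical parameters, a compound Poisson subordinator with exponential jumps, and an exact one-step recursion) simulates such a process and the resulting returns.

```python
import numpy as np

rng = np.random.default_rng(5)

# Univariate OU type stochastic volatility:
#   d sigma2_t = -lam * sigma2_t dt + dL_{lam t},
# with L a compound Poisson subordinator (exponential jumps).
n, dt, lam = 10_000, 1 / 252, 5.0
jump_rate, jump_mean = 2.0, 0.02       # subordinator parameters (hypothetical)

sigma2 = np.empty(n)
sigma2[0] = 0.04
for t in range(1, n):
    n_jumps = rng.poisson(jump_rate * lam * dt)          # time-changed clock
    jumps = rng.exponential(jump_mean, size=n_jumps).sum()
    sigma2[t] = sigma2[t - 1] * np.exp(-lam * dt) + jumps

returns = np.sqrt(sigma2 * dt) * rng.normal(size=n)      # volatility-scaled
print(sigma2.mean())   # near the stationary mean jump_rate * jump_mean = 0.04
```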

[submitted]

**Volatility estimation based on high-frequency data**

This chapter presents a review and empirical illustration of nonparametric volatility estimators that exploit the information contained in high-frequency financial data. Such ex-post volatility measures can be used directly for modeling and forecasting the (future) volatility dynamics, which in turn may be essential for adequate risk management and hedging decisions. Moreover, volatility constitutes the main ingredient in asset pricing, so knowledge of this quantity plays a major role in most financial applications.
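
Two of the workhorse estimators in this literature, realized variance and bipower variation, are straightforward to compute from intraday returns. A minimal sketch on simulated one-minute data (constant spot volatility, all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# One trading day of simulated 1-minute log returns with constant spot vol.
n_intraday = 390                       # e.g. minutes in a US trading session
true_daily_var = 0.0001                # daily integrated variance
r = rng.normal(0.0, np.sqrt(true_daily_var / n_intraday), size=n_intraday)

# Realized variance: sum of squared intraday returns.
realized_variance = np.sum(r**2)

# Bipower variation: jump-robust alternative based on adjacent absolute returns.
bipower_variation = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))

print(realized_variance, bipower_variation)   # both estimate 0.0001 here
```

In the absence of jumps both estimators target the integrated variance; their difference is the basis of common nonparametric jump tests.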

Forthcoming in Handbook of Computational Finance (eds. Jin-Chuan Duan, James E. Gentle and Wolfgang Härdle). New York: Springer

**On the Definition, Stationary Distribution and Second Order Structure of Positive Semi-definite Ornstein-Uhlenbeck type Processes**

Several important properties of positive semi-definite processes of Ornstein-Uhlenbeck type are analysed. It is shown that linear operators of the form *X* ↦ *AX* + *XA*^{T} with *A* ∈ **M**_{d}(ℝ) are the only ones that can be used in the definition, provided one demands a natural non-degeneracy condition. Furthermore, we analyse the absolute continuity properties of the stationary distribution (especially when the driving matrix subordinator is the quadratic variation of a *d*-dimensional Lévy process) and study the question of how to choose the driving matrix subordinator in order to obtain a given stationary distribution. Finally, we present results on the first and second order moment structure of matrix subordinators, which is closely related to the moment structure of positive semi-definite Ornstein-Uhlenbeck type processes. The latter results are important for method-of-moments based estimation.

Bernoulli, 2009, Volume 15, Issue 3, 754-773

**A Discrete-Time Model for Daily S&P500 Returns and Realized Variations: Jumps and Leverage Effects**

We develop an empirically highly accurate discrete-time daily stochastic volatility model that explicitly distinguishes between the jump and continuous components of price movements, using nonparametric realized variation and bipower variation measures constructed from high-frequency intraday data. The model setup allows us to directly assess the structural interdependencies among the shocks to returns and the two different volatility components. The model estimates suggest that the leverage effect, or asymmetry between returns and volatility, works primarily through the continuous volatility component. The excellent fit of the model makes it an ideal candidate for an easy-to-implement auxiliary model in the context of indirect estimation of empirically more realistic continuous-time jump diffusion and Lévy-driven stochastic volatility models, effectively incorporating the interdaily dependencies inherent in the high-frequency intraday data.

Journal of Econometrics, 2009, Volume 150, Issue 2, 151-166

**Financial Economics, Fat-tailed Distributions**

This article reviews some of the most important concepts and distributional models that are used in empirical finance to capture the (almost) ubiquitous stochastic properties of returns.

**The Volatility of Realized Volatility**

In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. The construction of observable or realized volatility series from intra-day transaction data and the use of standard time-series techniques have led to promising strategies for modeling and predicting (daily) volatility. In this article, we show that the residuals of commonly used time-series models for realized volatility and logarithmic realized variance exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance for modeling and forecasting realized volatility. In an empirical application to S&P500 index futures we show that allowing for time-varying volatility of realized volatility and logarithmic realized variance substantially improves the fit as well as predictive performance. Furthermore, the distributional assumption for the residuals plays a crucial role in density forecasting.
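
A crude version of the diagnostic motivating the paper can be run in a few lines: fit an AR(1) to a log realized-variance series by OLS and check the squared residuals for first-order autocorrelation, a simple ARCH-type indicator of volatility clustering. The sketch below uses a simulated series with clustering built in (all parameters are illustrative, not estimates from the paper).

```python
import numpy as np

rng = np.random.default_rng(7)

def arch_diagnostic(resid):
    """First-order autocorrelation of the squared residuals: a crude check
    for volatility clustering (an ARCH-type effect)."""
    u = resid**2 - np.mean(resid**2)
    return (u[1:] @ u[:-1]) / (u @ u)

# Simulate log realized variance as an AR(1) whose innovations have
# GARCH(1,1)-type time-varying volatility, i.e. clustering by construction.
n = 5000
x = np.empty(n)
x[0] = -9.0
e_prev, h = 0.0, 0.04
for t in range(1, n):
    h = 0.002 + 0.2 * e_prev**2 + 0.75 * h
    e_prev = np.sqrt(h) * rng.normal()
    x[t] = -1.8 + 0.8 * x[t - 1] + e_prev

# Fit an AR(1) by OLS and inspect the residuals.
X = np.column_stack([np.ones(n - 1), x[:-1]])
beta, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
resid = x[1:] - X @ beta
print(beta[1], arch_diagnostic(resid))   # AR coefficient near 0.8
```

A clearly positive diagnostic, as here, is what motivates the GARCH-type extensions of standard realized-volatility models proposed in the paper.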

Econometric Reviews, 2008, Volume 27, Issue 1-3, 46-78