Statistics and Econometrics - Prof. Dr. Melanie Schienle

Project: Scalable and Interpretable Models for Complex and Structured Data (SIMCARD)


supported by: Helmholtz Association, Impuls- und Vernetzungsfonds
project duration: 05/2020 - 04/2023
project volume: €500,000
principal investigators:

Sach Mukherjee (DZNE), Tilmann Gneiting (HITS), Melanie Schienle (KIT)



Large, complex, and high-dimensional data are now ubiquitous in all areas of science and society. Machine learning and AI methods are already very effective at exploiting such data for prediction. The SIMCARD project will develop novel machine learning methods that are robust and reliable and go beyond simple prediction. The focus is on new methods for modelling very large networks and for assessing the reliability of predictions. The goal is to provide answers to pressing problems in diverse application areas with tailored, scalable, well-founded, and interpretable data science methods. The project addresses in particular the fields of data-intensive biomedicine and weather forecasting.
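To illustrate what "reliability of predictions" can mean in practice, here is a minimal sketch of a standard calibration check, not a method from the project itself: a probabilistic forecast is reliable if its prediction intervals cover the realized outcomes at roughly their nominal rate. The function name and toy data are purely illustrative.

```python
def empirical_coverage(lower, upper, observations):
    """Fraction of observations that fall inside their prediction
    intervals -- a basic empirical check of forecast reliability
    (calibration): for well-calibrated 80% intervals, this fraction
    should be close to 0.8."""
    hits = sum(1 for lo, hi, y in zip(lower, upper, observations)
               if lo <= y <= hi)
    return hits / len(observations)

# toy example: five nominal 80% intervals and realized outcomes
lower = [0.0, 1.0, 2.0, 3.0, 4.0]
upper = [1.0, 2.0, 3.0, 4.0, 5.0]
obs = [0.5, 1.5, 2.5, 3.5, 9.0]   # the last outcome misses its interval
coverage = empirical_coverage(lower, upper, obs)  # 4 of 5 -> 0.8
```

A systematic gap between empirical and nominal coverage is one simple signal that a forecasting model is unreliable.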


Quantile Methods for Complex Financial Systems


supported by: DFG (Deutsche Forschungsgemeinschaft)
project duration: 12/2017 - 12/2022
project volume: €260,000
principal investigator: Prof. Dr. M. Schienle



Financial systems have become increasingly complex. They are characterized by strong, high-dimensional cross-sectional dependence that amounts to networks of unknown and time-varying form, which appear as the main drivers of risk of and within the system. Moreover, in turbulent market times, accurate estimation and prediction of systemic and idiosyncratic risk require novel econometric models and techniques that account not only for the network topology but also for macro- and microeconomic factors and determinants, in order to adjust and respond quickly to changing environments.

In this project, we provide innovative techniques to better estimate and predict moderate and extreme tail risk in complex financial systems. This is of key interest for both market participants and prudential supervisors. We focus on econometric methodologies for conditional quantiles and expectiles, which are directly related to the Value-at-Risk (VaR) concept for risk measurement; all presented approaches, however, readily extend to further risk measures such as expected shortfall. In particular, we focus on tail network models for detecting structural risk channels within large-dimensional financial systems and on dynamic tail factor methods for accurate prediction.
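To make the quantile-VaR connection concrete, here is a minimal, purely illustrative sketch (a textbook historical estimator, not one of the project's conditional methods): the VaR at level alpha is the negated empirical alpha-quantile of the return distribution, so a historical VaR estimate reduces to an order statistic. Function name and data are hypothetical.

```python
import math

def historical_var(returns, alpha=0.05):
    """Historical Value-at-Risk at level alpha: the negated empirical
    alpha-quantile of the returns, i.e. a loss threshold exceeded with
    probability roughly alpha in the sample."""
    xs = sorted(returns)
    # order statistic at position ceil(alpha * n), converted to a 0-based index
    k = max(0, math.ceil(alpha * len(xs)) - 1)
    return -xs[k]

# toy example: 100 alternating hypothetical daily returns
returns = [(-1) ** i * (i % 10) / 100 for i in range(100)]
var_5 = historical_var(returns, alpha=0.05)  # 0.09, i.e. a 9% loss
```

The project's conditional quantile and expectile models go beyond this unconditional sketch by letting the quantile depend on covariates, network structure, and time.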

This is a joint project with Prof. Weining Wang (HU Berlin).


Non- and Semiparametric Techniques for Euler Equations


supported by: DFG (Deutsche Forschungsgemeinschaft)
project duration: 07/2014 - 07/2017
project volume: €175,000
principal investigator: Prof. Dr. M. Schienle



Individual risk perception is central to any form of decision making, and its accurate empirical measurement is a prerequisite for the practical applicability of many economic models. A valid econometric assessment of individual risk attitudes requires precise but tractable estimates of marginal utility in the Euler equations associated with optimal intertemporal consumption choice. For these elements of key economic interest, however, available standard analytical techniques rely on simplifying model assumptions to handle data challenges such as nonstationary consumption and the unknown correct functional form of utility. In practice, it is often these technical conditions that drive the overall results and have thus produced various well-known empirical puzzles, such as the equity premium puzzle, with ambiguous and contradictory estimates of individual risk perception.

In order to avoid such restrictions, we develop general statistical techniques for such nonstandard conditions aiming to obtain novel insights of practical and economic relevance. In particular, our methods do not require parametric pre-specifications of utility functions but can flexibly determine their form from the data. Furthermore, these non- and semiparametric techniques are sufficiently general to allow for consistent estimation and testing with nonstationary but recurrent consumption entering utility in levels and not in stationary growth rates.  In this sense, the methods are of cointegration type. 
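As a sketch of the nonparametric idea of determining a functional form from the data, here is a generic textbook estimator, not the project's own cointegration-type technique: a Nadaraya-Watson kernel regression recovers the shape of an unknown function without any parametric pre-specification. All names and the toy data are illustrative.

```python
import math

def nadaraya_watson(x_train, y_train, x0, bandwidth=0.2):
    """Nadaraya-Watson kernel estimate of E[y | x = x0]: a locally
    weighted average with Gaussian kernel weights, requiring no
    parametric form for the regression function."""
    weights = [math.exp(-0.5 * ((x0 - x) / bandwidth) ** 2)
               for x in x_train]
    num = sum(w * y for w, y in zip(weights, y_train))
    return num / sum(weights)

# toy data generated from y = x**2; the estimator is never told this form
xs = [i / 10 for i in range(-20, 21)]
ys = [x ** 2 for x in xs]
est = nadaraya_watson(xs, ys, x0=1.0)  # close to 1.0, up to smoothing bias
```

The project's methods extend such flexible estimation to the nonstandard setting of nonstationary but recurrent regressors, where classical kernel asymptotics no longer apply directly.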

The focus of this project is on semiparametric models, which still allow for a flexible model fit but yield substantial improvements over purely nonparametric methods, which are hardly feasible at the sample sizes available for nonstationary consumption. In particular, we investigate estimation with recursive utility specifications and Epstein-Zin preferences, for which many calibration studies have shown promising results. We expect that such general model classes can significantly improve the practical performance of intertemporal optimization models, providing a new understanding of some of the present puzzles.