I am a Postdoctoral Researcher at the Statistical Methods Unit of the Institute for Employment Research in Nuremberg, Germany, and at the Chair for Statistics and Data Science in Social Sciences and the Humanities at the Ludwig Maximilian University of Munich.
My research focuses on quantitative methodology. I am particularly interested in applying deep learning algorithms to social science problems, such as multiple imputation of missing data and synthetic data for data sharing. Substantively, I am interested in predicting political behavior and in the ethical implications of new trends in applied social science research, such as Big Data and Artificial Intelligence, with a focus on privacy. I have published multiple papers in internationally renowned journals and have had the opportunity to present my research to international audiences through invited talks.
As a co-founder, contributor, and visualization lead of zweitstimme.org, I have played a pivotal role in creating a platform that communicates a scientific forecast of German federal elections to a broad audience. This initiative reflects my commitment to making complex statistical concepts accessible and engaging for the public.
Ph.D., 2023
Graduate School of Economic and Social Sciences, University of Mannheim
M.A. in Political Science, 2016
University of Mannheim
B.A. in Governance and Public Policy, 2013
University of Passau
Regression models with log-transformed dependent variables are widely used by social scientists to investigate nonlinear relationships between variables. Unfortunately, this transformation complicates the substantive interpretation of estimation results and often leads to incomplete and sometimes even misleading interpretations. We focus on one valuable but underused method, the presentation of quantities of interest such as expected values or first differences on the original scale of the dependent variable. The procedure to derive these quantities differs in seemingly minor but critical aspects from the well-known procedure based on standard linear models. To improve empirical practice, we explain the underlying problem and develop guidelines that help researchers to derive meaningful interpretations from regression results of models with log-transformed dependent variables.
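The retransformation problem the abstract describes can be sketched numerically: under a log-linear model with normal errors, naively exponentiating the predicted log outcome understates the expected value on the original scale, while adding half the error variance before exponentiating recovers it. A minimal simulation of this point (illustrative only, not the paper's procedure; all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data in which log(y) is linear in x with normal errors
n = 10_000
x = rng.uniform(0, 2, n)
sigma = 0.8
log_y = 1.0 + 0.5 * x + rng.normal(0, sigma, n)

# Fit OLS of log(y) on x
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
s2 = ((log_y - X @ beta) ** 2).sum() / (n - 2)  # residual variance

# Scenario of interest: x = 1
x0 = np.array([1.0, 1.0])
naive = np.exp(x0 @ beta)               # exp of the predicted log(y): biased low
corrected = np.exp(x0 @ beta + s2 / 2)  # E[y | x] under log-normal errors
```

Here the true expected value at x = 1 is exp(1.5 + 0.8²/2) ≈ 6.2, which the corrected quantity recovers while the naive back-transformation sits near exp(1.5) ≈ 4.5.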
Differentially private GANs have proven to be a promising approach for generating realistic synthetic data without compromising the privacy of individuals. However, due to the privacy-protective noise introduced during training, the convergence of GANs becomes even more elusive, which often leads to poor utility in the output generator at the end of training. We propose Private post-GAN boosting (Private PGB), a differentially private method that combines samples produced by the sequence of generators obtained during GAN training to create a high-quality synthetic dataset. Our method leverages the Private Multiplicative Weights method (Hardt and Rothblum, 2010) and the discriminator rejection sampling technique (Azadi et al., 2019) for reweighting generated samples to obtain high-quality synthetic data even in cases where GAN training does not converge. We evaluate Private PGB on a Gaussian mixture dataset and two US Census datasets, and demonstrate that Private PGB improves upon the standard private GAN approach across a collection of quality measures. Finally, we provide a non-private variant of PGB that improves the data quality of standard GAN training.
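To give a flavor of the reweighting idea the abstract mentions, here is a sketch of discriminator rejection sampling in the style of Azadi et al. (2019): samples pooled from several generator checkpoints are resampled in proportion to their discriminator-based importance weights. This is not the Private PGB algorithm itself (which adds the Private Multiplicative Weights machinery and privacy accounting); the discriminator scores below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discriminator scores D(x) in (0, 1) for a pool of
# generated samples collected across several training checkpoints.
d_scores = rng.uniform(0.05, 0.95, size=1000)

# Importance weight w(x) = D(x) / (1 - D(x)), i.e. exp(logit(D(x))):
# samples the discriminator finds realistic get larger weights.
weights = d_scores / (1.0 - d_scores)
probs = weights / weights.sum()

# Resample the pool in proportion to the weights, so that the output
# dataset is tilted toward samples the discriminator accepts.
idx = rng.choice(len(d_scores), size=500, replace=True, p=probs)
resampled_scores = d_scores[idx]
```

After reweighting, the resampled pool has a markedly higher average discriminator score than the raw pool, which is the mechanism that lets post-hoc combination salvage useful samples even when training itself did not converge.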
We offer a dynamic Bayesian forecasting model for multi-party elections. It combines data from published pre-election public opinion polls with information from fundamentals-based forecasting models. The model accounts for the multi-party nature of the setting and supports statements about quantities of interest such as the probability of a plurality of votes for a party or of a majority for certain coalitions in parliament. We present results from two ex ante forecasts of elections that took place in 2017 and show that the model outperforms fundamentals-based forecasting models in terms of accuracy and the calibration of uncertainty. Provided that historical and current polling data are available, the model can be applied to any multi-party setting.
The introduction of new “machine learning” methods and terminology to political science complicates the interpretation of results, even more so when a single term, like cross-validation, can mean very different things. We find different meanings of cross-validation in applied political science work. In the context of predictive modeling, cross-validation can be used to obtain an estimate of the true error or as a procedure for model tuning. Using a single cross-validation procedure to obtain an estimate of the true error and for model tuning at the same time leads to serious misreporting of performance measures. We demonstrate the severe consequences of this problem with a series of experiments. We also observe this problematic usage of cross-validation in applied research. We look at Muchlinski et al. (2016) on the prediction of civil war onsets to illustrate how problematic use of cross-validation can affect applied work. Applying cross-validation correctly, we are unable to reproduce their findings. We encourage researchers in predictive modeling to be especially mindful when applying cross-validation.
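The core problem generalizes beyond cross-validation and shows up already with a single tuning/evaluation split: if you report the very score you maximized while tuning, the estimate is optimistic. A minimal simulation of that logic (a simplification of the paper's setting, with purely uninformative candidate models on noise labels; all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

n, n_models = 200, 50
y = rng.integers(0, 2, n)                  # pure-noise labels: true accuracy is 0.5
preds = rng.integers(0, 2, (n_models, n))  # 50 tuning candidates, all uninformative

split = n // 2
tune, test = slice(0, split), slice(split, n)

# Tune: pick the candidate with the best accuracy on the tuning half
tune_acc = (preds[:, tune] == y[tune]).mean(axis=1)
best = tune_acc.argmax()

# WRONG: report the score that was maximized during tuning (optimistic,
# because the maximum over many candidates capitalizes on chance)
reported_wrong = tune_acc[best]

# RIGHT: evaluate the chosen candidate on data not used for tuning
reported_right = (preds[best, test] == y[test]).mean()
```

The tuned-and-reported score lands well above chance even though no candidate has any real predictive power, while the held-out score stays near 0.5; using one cross-validation loop for both tuning and error estimation produces the same inflation, which is why the two roles need separate (nested) procedures.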
I am an instructor for the following courses at the University of Mannheim:
In 2020, I was a lecturer for the following at the University of California, Berkeley:
I also taught at the University of Applied Sciences Ludwigshafen:
Besides that, I am also an instructor for professional training workshops:
June 2019: Big Data and Social Science, one-day workshop, GRADE - Goethe Research Academy for Early Career Researchers, Frankfurt.
March 2019: Supervised and Unsupervised Machine Learning and Deep Learning, five-day workshop, Bundesbank, Frankfurt.
February 2018: Introduction to R, one-day workshop, Geschäftsstelle für Qualitätssicherung Hessen, Eschborn.
Testimonials:
Best course this semester, thank you! (University of Mannheim)
Marcel was extremely good. Kept everyone engaged, curious and alert. I don’t think there was even a single question that he could not answer correctly. He was available all the time, on slack, mail, piazza. (University of California, Berkeley)
Wonderful! This tutorial and its corresponding course were my favorite. Marcel is a great teacher, a great speaker, and creates a great classroom environment. He is very supportive and encouraging. I always enjoyed attending and wish there were future tutorials and courses to attend. (University of Mannheim)
This is hands down the best course till now; both Daniel and Marcel are excellent teachers and effectively break down the concepts into understandable, easy-to-consume pieces. (University of California, Berkeley)
Marcel is an excellent tutor who knows his stuff very well and motivates us students to further engage with quantitative methods. Great job! (University of Mannheim)
Marcel was one of the best tutors I had in my 5 years at German universities. He was very helpful, open for questions, friendly towards students and easy to approach. (University of Mannheim)
Excellent course. I felt myself getting more and more employable from one session to the next. Really cool stuff we learn! (University of Mannheim)