Statistical Inference

Bootstrap-based, General-purpose Statistical Inference from Differentially Private Releases

Statistical inference under differential privacy is essential but often relies on bespoke solutions. Properly combining sampling variability and privacy noise for valid inference is not trivial, especially when the two come from different distributions. We propose a general-purpose method that combines the bootstrap with differentially private non-parametric distribution estimation. Our method applies non-private estimators (e.g., maximum likelihood for logistic regression) to differentially private synthetic data or distribution estimates. The advantage of our approach is that the bootstrap is pure post-processing of a differentially private mechanism: it does not access the sensitive data again and does not increase the privacy budget. The joint sampling-and-privacy distribution of statistical estimators is approximated through statistical simulation. We present results from a series of Monte Carlo experiments showing that our method produces valid inferences for a wide range of data sets (univariate and multivariate) and statistical problems (e.g., linear and non-linear queries). Furthermore, we show that our method produces valid confidence intervals that are narrower than those produced by bespoke methods.
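A minimal sketch of the core idea, not the paper's exact procedure: a single DP histogram release stands in for the non-parametric distribution estimate, and all bootstrap resampling happens from that release alone, so no additional privacy budget is spent. All names and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sensitive data; accessed exactly once, to produce the DP release below.
data = rng.normal(loc=2.0, scale=1.0, size=1000)

# One differentially private release: a histogram with Laplace noise
# (illustrative noise scale 1/epsilon for unit-sensitivity bin counts).
epsilon = 1.0
edges = np.linspace(-2.0, 6.0, 33)
counts, _ = np.histogram(data, bins=edges)
noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.size)

# Post-process the noisy counts into a distribution estimate.
probs = np.clip(noisy, 0.0, None)
probs /= probs.sum()
centers = (edges[:-1] + edges[1:]) / 2

# Bootstrap: resample only from the DP distribution estimate and apply a
# non-private estimator (here, the sample mean) to each synthetic sample.
# This is pure post-processing of the DP mechanism.
def bootstrap_ci(n_boot=2000, n=1000, alpha=0.05):
    stats = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(centers, size=n, p=probs)
        stats[b] = sample.mean()
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = bootstrap_ci()
print(f"95% CI for the mean: [{lo:.3f}, {hi:.3f}]")
```

The bootstrap quantiles reflect both sampling variability and the privacy noise baked into the released histogram, which is what lets the same recipe cover estimators with no bespoke DP inference theory.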

Really Useful Synthetic Data -- A Framework to Evaluate the Quality of Differentially Private Synthetic Data

Recent advances in generating synthetic data with principled privacy protections -- such as Differential Privacy -- are a crucial step towards sharing statistical information in a privacy-preserving way. But while the focus has been on privacy guarantees, the resulting private synthetic data are only useful if they still carry statistical information from the original data. To further optimise the inherent trade-off between data privacy and data quality, it is necessary to think carefully about the latter: what is it that data analysts want? Acknowledging that data quality is a subjective concept, we develop a framework to evaluate the quality of differentially private synthetic data from an applied researcher's perspective. Data quality can be measured along two dimensions. First, synthetic data can be evaluated against the training data or against an underlying population. Second, quality can target general similarity of distributions or specific tasks such as inference or prediction. It is clear that accommodating all goals at once is a formidable challenge. We invite the academic community to jointly advance the privacy-quality frontier.
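The second dimension above can be made concrete with two illustrative metrics: a general distributional-similarity score (worst marginal Kolmogorov-Smirnov distance) and a task-specific score (difference in an estimated regression coefficient). This is a toy sketch of the distinction, not the paper's framework; the simulated "synthetic" data and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative "original" and "synthetic" data; in practice the synthetic
# data would come from a differentially private generator.
n = 2000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
orig = np.column_stack([x, y])

xs = rng.normal(size=n)
ys = 1.0 + 2.0 * xs + rng.normal(scale=1.2, size=n)  # slightly noisier copy
synth = np.column_stack([xs, ys])

def ks_distance(a, b):
    """General similarity: max gap between two empirical CDFs of one margin."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
    Fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return np.abs(Fa - Fb).max()

def slope(data):
    """Task-specific quality: OLS slope of y on x, a typical inference target."""
    X = np.column_stack([np.ones(data.shape[0]), data[:, 0]])
    coef, *_ = np.linalg.lstsq(X, data[:, 1], rcond=None)
    return coef[1]

general = max(ks_distance(orig[:, j], synth[:, j]) for j in range(2))
task = abs(slope(orig) - slope(synth))
print(f"worst marginal KS distance: {general:.3f}")
print(f"absolute slope difference:  {task:.3f}")
```

Synthetic data can score well on one metric and poorly on the other, which is exactly why the framework treats general similarity and specific analysis tasks as separate quality goals.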