Secure and Private Machine Learning
Grant number: DE160100584 | Funding period: 2016 - 2018
This project intends to answer the question: how can machines learn from data when participants behave maliciously for personal gain? Machine learning and statistics underpin many technologies in which participants have an incentive to game the system (e.g., internet ad placement, e-commerce rating systems, credit risk in finance, health analytics and smart utility grids). However, little is known about how well state-of-the-art statistical inference techniques fare when data is manipulated by a malicious participant. The project's outcomes aim to ensure that statistical analysis remains accurate while preserving data privacy, providing theoretical foundations for secure machine learning in adversarial settings.
Related publications (9)
Differential Privacy for Bayesian Inference through Posterior Sampling
Christos Dimitrakakis, Blaine Nelson, Zuhe Zhang, Aikaterini Mitrokotsa, Benjamin I. P. Rubinstein
Differential privacy formalises privacy-preserving mechanisms that provide access to a database. Can Bayesian inference be used directly to provide private access to data?
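As a toy illustration of the posterior-sampling idea, the sketch below releases a single sample from a conjugate Beta posterior over a Bernoulli parameter instead of the posterior itself. This is a minimal sketch only: the function name and the Beta-Bernoulli model are our own choices for exposition, and the actual privacy guarantees depend on conditions on the prior and likelihood established in the paper.

```python
import random


def private_posterior_sample(data, alpha=1.0, beta=1.0, rng=None):
    """Release one draw from the Beta posterior of a Bernoulli parameter.

    Rather than publishing the exact posterior (which can leak information
    about individual records), the mechanism answers a query with a single
    posterior sample. Under suitable regularity conditions on the prior and
    likelihood, this sampling step itself acts as a differentially private
    release mechanism.

    data  : iterable of 0/1 observations
    alpha, beta : Beta prior hyperparameters
    rng   : optional random.Random instance for reproducibility
    """
    rng = rng or random.Random()
    successes = sum(data)
    failures = len(data) - successes
    # Conjugate update: Beta(alpha, beta) prior -> Beta(alpha + s, beta + f)
    return rng.betavariate(alpha + successes, beta + failures)
```

A query against a small dataset then looks like `private_posterior_sample([1, 0, 1, 1])`, returning a value in [0, 1]; repeated queries return different samples, and the randomness of the draw, not added noise, provides the privacy.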