# Auditing Private Machine Learning
Code for "Auditing Differentially Private Machine Learning: How Private is Private SGD?": https://arxiv.org/abs/2006.07709. This implementation is simple but not easily parallelizable. For a parallelizable version which is harder to run, see https://github.com/jagielski/auditing-dpsgd.
## Usage
This attack relies on the AuditAttack class found in audit.py. The class allows one to generate poisoning, run trials to compute membership scores for the poisoning, and then use the resulting membership scores to compute a lower bound on epsilon.
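A rough sketch of that workflow is shown below. The constructor arguments, the `run` call, and its return value are assumptions inferred from the description above rather than the exact API; see `audit.py` and the example scripts for the real signatures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

import audit

# Toy data standing in for a real training set.
train_x = np.random.normal(size=(1000, 20)).astype(np.float32)
train_y = np.random.randint(0, 2, size=1000)

def train_fn(x, y):
  # Stand-in training function; the real ones (including DP-SGD training)
  # live in fmnist_audit.py and mean_audit.py.
  return LogisticRegression().fit(x, y)

# Assumed constructor: the training data plus a training function.
auditor = audit.AuditAttack(train_x, train_y, train_fn)

# Assumed call: generate `pois_ct` poisoning points of the given attack type,
# run `num_trials` trainings to collect membership scores, and convert those
# scores into a statistical lower bound on epsilon.
pois_ct, attack_type, num_trials = 8, "clip_aware", 100
eps_lower_bound = auditor.run(pois_ct, attack_type, num_trials)
print("Empirical lower bound on epsilon:", eps_lower_bound)
```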
## Examples
Two examples are provided: mean_audit.py and fmnist_audit.py. fmnist_audit.py attacks the FashionMNIST dataset. It lets the user choose between standard backdoor attacks and clipping-aware attacks, and also select the poisoning attack size, the model type, and whether to initialize training from saved model weights. mean_audit.py audits a model which computes the mean of a dataset. It provides an example of user-provided poisoning samples, rather than samples generated automatically by our attacks.py library.
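For intuition, a user-provided poisoning set for the mean example might look like the snippet below. This is only an illustration of the kind of input involved; mean_audit.py shows how such samples are actually wired into the audit.

```python
import numpy as np

# Clean dataset: points near the origin, so its mean is close to zero.
clean_x = np.random.normal(loc=0.0, scale=0.1, size=(500, 2))

# User-provided poisoning: a single extreme point chosen to move the mean as
# far as possible, which makes runs with and without the point easy to tell
# apart and therefore yields a stronger membership test.
pois_point = np.full((1, 2), 10.0)
poisoned_x = np.concatenate([clean_x, pois_point], axis=0)

# The audit repeatedly trains (here: averages) with and without the poisoning
# point and tests how distinguishable the two output distributions are.
print("clean mean:    ", clean_x.mean(axis=0))
print("poisoned mean: ", poisoned_x.mean(axis=0))
```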
## Requirements
Requires scikit-learn 0.24.1, statsmodels 0.12.2, and TensorFlow 1.14.0.
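In an environment compatible with TensorFlow 1.14 (which predates Python 3.8 support), these can be installed with pip, e.g. `pip install scikit-learn==0.24.1 statsmodels==0.12.2 tensorflow==1.14.0`.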