# Bolton Subpackage
This package contains source code for the Bolton method, one of a family of
methods for ensuring privacy in machine learning. It leverages additional
assumptions about the model and loss to provide a new way of approaching
privacy guarantees.

## Bolton Description
This method uses four key steps to achieve its privacy guarantees:

1. Adds noise to the weights after training (output perturbation).
2. Projects the weights onto a ball of radius R after each batch.
3. Limits the learning rate.
4. Uses a strongly convex loss function (see `compile`).

For more details on the strong convexity requirements, see "Bolt-on
Differential Privacy for Scalable Stochastic Gradient Descent-based
Analytics" by Xi Wu et al.
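
As an illustration of these steps (and not this package's actual API), here is
a minimal, framework-agnostic sketch of the recipe. The hyperparameter choices,
the per-epoch learning-rate schedule, and the Laplace noise scale below are
simplifying assumptions; the paper derives the exact sensitivity and
calibration.

```python
import numpy as np

def project_to_ball(w, radius):
    """Step 2: project the weights back onto the L2 ball of radius `radius`."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def bolton_sketch(X, y, epsilon, radius=1.0, reg_lambda=1.0, epochs=10):
    """Illustrative output-perturbation training loop (not this package's API)."""
    n, d = X.shape
    w = np.zeros(d)
    for epoch in range(epochs):
        # Step 3: a bounded, decaying learning rate.
        lr = 1.0 / (reg_lambda * (epoch + 1))
        # Step 4: gradient of a strongly convex loss; L2-regularized
        # least squares is used here purely for illustration.
        grad = X.T @ (X @ w - y) / n + reg_lambda * w
        w = project_to_ball(w - lr * grad, radius)  # Step 2
    # Step 1: output perturbation -- noise is added once, after training.
    # The Laplace scale below is an assumed placeholder; the paper derives
    # the exact sensitivity-based calibration.
    scale = 2.0 * radius / (n * reg_lambda * epsilon)
    return w + np.random.laplace(scale=scale, size=d)
```
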
## Why Bolton?
The major difference of the Bolton method is that it injects noise after model
convergence rather than noising gradients or weights during training. This
approach requires the additional constraints listed in the Description. When
the use case and model satisfy these constraints, this is another approach
that can maximize utility while maintaining the privacy guarantee.

The paper describes in detail the advantages and disadvantages of this
approach and compares its results against two baselines: noising at each
iteration and no noising at all.

## Tutorials
This package has a tutorial that can be found in the root tutorials directory,
under `bolton_tutorial.py`.
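
For a rough sense of what the tutorial covers, the snippet below follows its
general shape. The class and argument names (`BoltonModel`,
`StrongConvexBinaryCrossentropy`, the `epsilon` and `noise_distribution` fit
arguments) are assumptions based on that tutorial and should be checked
against `bolton_tutorial.py` itself.

```python
import tensorflow as tf
from privacy.bolton import losses, models

# Sketch only: names and signatures below are assumptions based on
# bolton_tutorial.py; consult the tutorial for the authoritative usage.
model = models.BoltonModel(n_outputs=1)
loss = losses.StrongConvexBinaryCrossentropy(
    reg_lambda=1.0, C=1.0, radius_constant=1.0)
model.compile(optimizer=tf.optimizers.SGD(), loss=loss)
model.fit(x_train, y_train,
          epsilon=1.0,                    # privacy budget
          noise_distribution='laplace')   # noise added post-training
```
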
## Contribution
This package was initially contributed by Georgian Partners with the hope of
growing the tensorflow/privacy library. There are several rich use cases for
epsilon-delta differential privacy in machine learning, some of which can be
explored here:

* https://medium.com/apache-mxnet/epsilon-differential-privacy-for-machine-learning-using-mxnet-a4270fe3865e
* https://arxiv.org/pdf/1811.04911.pdf
## Contacts
In addition to the maintainers of tensorflow/privacy listed in the root
README.md, please feel free to contact members of Georgian Partners. In
particular:
* Georgian Partners (@georgianpartners)
* Ji Chao Zhang (@Jichaogp)
* Christopher Choquette (@cchoquette)
## Copyright
Copyright 2019 - Google LLC