readme fixes

npapernot 2019-07-25 14:38:37 +00:00
parent 9b08c163e0
commit d0ef1b380c


This package contains source code for the Bolton method. This method is a subset
of the methods used to ensure privacy in machine learning, and it leverages
additional assumptions to provide a new way of approaching the privacy
guarantees.
## Bolton Description
This method uses 4 key steps to achieve privacy guarantees:
1. Adds noise to weights after training (output perturbation).
2. Projects weights to R, the radius of the hypothesis space, after each batch.
3. Limits the learning rate.
4. Uses a strongly convex loss function.
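To make these four steps concrete, here is a minimal NumPy sketch of the overall flow, assuming an L2-regularized logistic loss as the strongly convex objective. Every name, the sensitivity bound, and the noise scale below are illustrative assumptions, not this package's actual API; the paper derives the exact calibration.

```python
import numpy as np

def project_to_ball(w, radius):
    """Step 2: project weights back into a hypothesis space of radius R."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def train_bolton_sketch(X, y, epsilon, radius=1.0, reg=0.1, lr=0.1, epochs=20):
    """Illustrative output-perturbation loop (hypothetical, not the package API)."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Step 4: strongly convex loss -- logistic loss plus L2 regularization.
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / n + reg * w
        # Step 3: a limited (capped) learning rate.
        w -= min(lr, 1.0 / reg) * grad
        # Step 2: projection after each pass over the data.
        w = project_to_ball(w, radius)
    # Step 1: output perturbation -- noise only the converged weights.
    sensitivity = 2.0 / (n * reg)  # assumed L2-sensitivity bound (illustrative)
    return w + rng.laplace(scale=sensitivity / epsilon, size=d)

# Toy usage (assumed binary labels in {0, 1}):
# X, y = np.random.randn(100, 5), np.random.randint(0, 2, 100)
# private_w = train_bolton_sketch(X, y, epsilon=1.0)
```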
For more details on the strong convexity requirements, see:
Bolt-on Differential Privacy for Scalable Stochastic Gradient
Descent-based Analytics by Xi Wu et al.
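As a reference for that requirement, a differentiable loss f is lambda-strongly convex when, for all weight vectors w and w', the following textbook condition holds (a standard definition, not specific to this package):

```latex
f(w') \;\ge\; f(w) + \nabla f(w)^\top (w' - w) + \frac{\lambda}{2}\,\lVert w' - w \rVert_2^2
```

Adding an L2 penalty of weight lambda to a convex loss is the usual way to satisfy this.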
### Why Bolton?
The major difference for the Bolton method is that it injects noise post model
convergence, rather than noising gradients or weights during training. This
approach requires some additional constraints, listed in the Description.
Should the use case and model satisfy these constraints, this is another
approach that can be trained to maximize utility while maintaining the privacy
guarantees. The paper describes in detail the advantages and disadvantages of
this approach and its results compared to some other methods, namely noising at
each iteration and no noising.
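To illustrate that distinction in code, here is a hedged sketch contrasting the two noise-injection points; the Gaussian/Laplace choices and scales are placeholders standing in for the methods compared in the paper, not any specific implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient_step(w, grad, lr, noise_scale):
    """'Noising at each iteration': perturb every gradient update."""
    return w - lr * (grad + rng.normal(scale=noise_scale, size=grad.shape))

def bolton_style_release(w_converged, noise_scale):
    """Bolton: train to convergence first, then perturb the final weights once."""
    return w_converged + rng.laplace(scale=noise_scale, size=w_converged.shape)
```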
## Tutorials
This package has a tutorial that can be found in the root tutorials directory,
under `bolton_tutorial.py`.
## Contribution
This package was initially contributed by Georgian Partners with the hope of
growing the tensorflow/privacy library. There are several rich use cases for
epsilon-delta differential privacy in machine learning, some of which can be
explored here:

* https://medium.com/apache-mxnet/epsilon-differential-privacy-for-machine-learning-using-mxnet-a4270fe3865e
* https://arxiv.org/pdf/1811.04911.pdf
## Contacts
In addition to the maintainers of tensorflow/privacy listed in the root
README.md, please feel free to contact members of Georgian Partners. In
particular,
* Georgian Partners (@georgianpartners)
* Ji Chao Zhang (@Jichaogp)
* Christopher Choquette (@cchoquette)
## Copyright
Copyright 2019 - Google LLC