readme fixes

commit d0ef1b380c (parent 9b08c163e0)
Author: npapernot
Date: 2019-07-25 14:38:37 +00:00

@@ -5,7 +5,7 @@ of methods used in the ensuring privacy in machine learning that leverages
 additional assumptions to provide a new way of approaching the privacy
 guarantees.
-## Bolton Description
+# Bolton Description
 This method uses 4 key steps to achieve privacy guarantees:
 1. Adds noise to weights after training (output perturbation).
@@ -17,7 +17,7 @@ For more details on the strong convexity requirements, see:
 Bolt-on Differential Privacy for Scalable Stochastic Gradient
 Descent-based Analytics by Xi Wu et al.
-### Why Bolton?
+# Why Bolton?
 The major difference for the Bolton method is that it injects noise post model
 convergence, rather than noising gradients or weights during training. This
@@ -28,12 +28,12 @@ The paper describes in detail the advantages and disadvantages of this approach
 and its results compared to some other methods, namely noising at each iteration
 and no noising.
-## Tutorials
+# Tutorials
 This package has a tutorial that can be found in the root tutorials directory,
-under boton_tutorial.py.
+under `bolton_tutorial.py`.
-## Contribution
+# Contribution
 This package was initially contributed by Georgian Partners with the hope of
 growing the tensorflow/privacy library. There are several rich use cases for
@@ -41,7 +41,7 @@ delta-epsilon privacy in machine learning, some of which can be explored here:
 https://medium.com/apache-mxnet/epsilon-differential-privacy-for-machine-learning-using-mxnet-a4270fe3865e
 https://arxiv.org/pdf/1811.04911.pdf
-## Contacts
+# Contacts
 In addition to the maintainers of tensorflow/privacy listed in the root
 README.md, please feel free to contact members of Georgian Partners. In
@@ -51,6 +51,6 @@ particular,
 * Ji Chao Zhang(@Jichaogp)
 * Christopher Choquette(@cchoquette)
-## Copyright
+# Copyright
 Copyright 2019 - Google LLC
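
For context on the README being edited: a minimal sketch of the output-perturbation step it describes (train to convergence, then noise the learned weights). This is not the tensorflow/privacy Bolton API; the function name, `l2_sensitivity` parameter, and per-coordinate Laplace noise are simplifying assumptions, whereas the Bolt-on paper calibrates noise to the L2 sensitivity implied by strong convexity.

```python
import numpy as np

def perturb_weights(trained_weights, epsilon, l2_sensitivity):
    """Sketch of output perturbation: add noise to weights after training.

    Hypothetical helper, not the tensorflow/privacy Bolton API. Uses
    per-coordinate Laplace noise with scale sensitivity/epsilon as a
    simplification of the paper's L2-calibrated noise.
    """
    scale = l2_sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=trained_weights.shape)
    return trained_weights + noise

# Usage: weights learned by any strongly convex training procedure.
w = np.array([0.5, -1.2, 3.0])
w_private = perturb_weights(w, epsilon=1.0, l2_sensitivity=0.1)
```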