diff --git a/privacy/bolton/README.md b/privacy/bolton/README.md
index a423b65..4aef36f 100644
--- a/privacy/bolton/README.md
+++ b/privacy/bolton/README.md
@@ -5,7 +5,7 @@
 of methods used in the ensuring privacy in machine learning that leverages
 additional assumptions to provide a new way of approaching the privacy
 guarantees.
-# Bolton Description
+## Bolton Description
 
 This method uses 4 key steps to achieve privacy guarantees:
 1. Adds noise to weights after training (output perturbation).
@@ -17,7 +17,7 @@
 For more details on the strong convexity requirements, see:
 Bolt-on Differential Privacy for Scalable Stochastic Gradient
 Descent-based Analytics by Xi Wu et al.
-# Why Bolton?
+## Why Bolton?
 
 The major difference for the Bolton method is that it injects noise post model
 convergence, rather than noising gradients or weights during training. This
@@ -28,12 +28,12 @@
 The paper describes in detail the advantages and disadvantages of this
 approach and its results compared to some other methods, namely noising at
 each iteration and no noising.
-# Tutorials
+## Tutorials
 
 This package has a tutorial that can be found in the root tutorials directory,
 under `bolton_tutorial.py`.
 
-# Contribution
+## Contribution
 
 This package was initially contributed by Georgian Partners with the hope of
 growing the tensorflow/privacy library. There are several rich use cases for
@@ -41,7 +41,7 @@
 delta-epsilon privacy in machine learning, some of which can be explored here:
 https://medium.com/apache-mxnet/epsilon-differential-privacy-for-machine-learning-using-mxnet-a4270fe3865e
 https://arxiv.org/pdf/1811.04911.pdf
-# Contacts
+## Contacts
 
 In addition to the maintainers of tensorflow/privacy listed in the root
 README.md, please feel free to contact members of Georgian Partners. In
@@ -51,6 +51,6 @@
 particular,
 * Ji Chao Zhang(@Jichaogp)
 * Christopher Choquette(@cchoquette)
-# Copyright
+## Copyright
 
 Copyright 2019 - Google LLC
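
For readers skimming the diff: the "Bolton Description" section above boils down to output perturbation on a strongly convex objective, i.e. train to convergence, then noise the learned weights with a scale calibrated to their sensitivity. Below is a minimal NumPy sketch of that idea, not the Bolton package's actual API; `privatize_weights`, `lipschitz`, and `gamma` are hypothetical names introduced here, and the sensitivity expression follows the strong-convexity analysis in the Wu et al. paper only up to constants.

```python
import numpy as np


def privatize_weights(w_converged, n_samples, epsilon,
                      lipschitz=1.0, gamma=1.0, rng=None):
    """Output perturbation: add Laplace noise to converged weights.

    For an L-Lipschitz, gamma-strongly convex loss trained on n samples,
    the L2 sensitivity of the minimizer is on the order of
    2 * L / (gamma * n); Laplace noise with scale sensitivity / epsilon
    then gives an epsilon-DP release of the weights (constants hedged;
    see Wu et al. for the exact bound).
    """
    if rng is None:
        rng = np.random.default_rng()
    sensitivity = 2.0 * lipschitz / (gamma * n_samples)
    noise = rng.laplace(scale=sensitivity / epsilon, size=w_converged.shape)
    return w_converged + noise


# Example: privatize a converged weight vector with epsilon = 1.0.
w_private = privatize_weights(np.zeros(10), n_samples=10_000, epsilon=1.0)
```

Because only the converged weights are noised, the recipe composes with any standard training loop, which is the practical appeal the "Why Bolton?" section describes.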