forked from 626_privacy/tensorflow_privacy
Slight language adjustments
PiperOrigin-RevId: 394363646
This commit is contained in:
parent bb5ca9277b
commit fc7504efca
1 changed file with 2 additions and 1 deletion
@@ -77,7 +77,7 @@
    "id": "vsCUvXP0W4j2"
   },
   "source": [
-    "[Differential privacy](https://en.wikipedia.org/wiki/Differential_privacy) (DP) is a framework for measuring the privacy guarantees provided by an algorithm. Through the lens of differential privacy, you can design machine learning algorithms that responsibly train models on private data. Learning with differential privacy provides provable guarantees of privacy, mitigating the risk of exposing sensitive training data in machine learning. Intuitively, a model trained with differential privacy should not be affected by any single training example, or small set of training examples, in its data set. This mitigates the risk of exposing sensitive training data in ML."
+    "[Differential privacy](https://en.wikipedia.org/wiki/Differential_privacy) (DP) is a framework for measuring the privacy guarantees provided by an algorithm. Through the lens of differential privacy, you can design machine learning algorithms that responsibly train models on private data. Learning with differential privacy provides measurable guarantees of privacy, helping to mitigate the risk of exposing sensitive training data in machine learning. Intuitively, a model trained with differential privacy should not be affected by any single training example, or small set of training examples, in its data set. This helps mitigate the risk of exposing sensitive training data in ML."
   ]
  },
 {
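The sentence adjusted above introduces learning with differential privacy, which the notebook this diff touches (classification_privacy.ipynb) demonstrates via DP-SGD. A minimal sketch of that pattern follows; the model shape and all hyperparameter values are illustrative assumptions, not taken from this commit:

import tensorflow as tf
import tensorflow_privacy

# Illustrative model (the real notebook trains a small CNN on image data).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10),
])

# DP-SGD clips each per-example gradient and adds calibrated Gaussian noise;
# this is the mechanism behind the privacy guarantee described above.
optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
    l2_norm_clip=1.0,       # bound on each example's gradient norm (assumed value)
    noise_multiplier=1.1,   # noise scale relative to the clip norm (assumed value)
    num_microbatches=32,    # must evenly divide the batch size
    learning_rate=0.25)

# The loss must keep per-example values (Reduction.NONE) so gradients can be
# clipped example by example before they are averaged.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])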
@@ -452,6 +452,7 @@
   "colab": {
    "collapsed_sections": [],
    "name": "classification_privacy.ipynb",
    "provenance": [],
+   "toc_visible": true
   },
   "kernelspec": {
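The wording change from "provable" to "measurable" guarantees matches how the library frames privacy: the cost of training is computed, not merely asserted. A hedged sketch of that measurement using the library's compute_dp_sgd_privacy analysis tool, with assumed hyperparameter values that would need to match the optimizer's settings:

from tensorflow_privacy.privacy.analysis import compute_dp_sgd_privacy

# Converts DP-SGD hyperparameters into an (epsilon, delta) privacy statement.
# All values below are illustrative assumptions, not taken from this commit.
eps, opt_order = compute_dp_sgd_privacy.compute_dp_sgd_privacy(
    n=60000,               # number of training examples
    batch_size=250,
    noise_multiplier=1.1,  # must match the optimizer's noise_multiplier
    epochs=15,
    delta=1e-5)            # target delta, conventionally smaller than 1/n
print(f'epsilon = {eps:.2f} at optimal RDP order {opt_order}')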