Remove %tensorflow_version and from __future__ imports
PiperOrigin-RevId: 304645574
parent 7647c54a27
commit ca25bde1f8
1 changed file with 5 additions and 16 deletions
@@ -17,16 +17,14 @@
     "id": "XAVN6c8prKOL"
    },
    "source": [
-    "##### Copyright 2019 The TensorFlow Authors.\n",
-    "\n",
-    "\n"
+    "##### Copyright 2019 The TensorFlow Authors.\n"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": 0,
    "metadata": {
-    "cellView": "both",
+    "cellView": "form",
     "colab": {},
     "colab_type": "code",
     "id": "SassPC7WQAUO"
@@ -131,10 +129,6 @@
    },
    "outputs": [],
    "source": [
-    "from __future__ import absolute_import\n",
-    "from __future__ import division\n",
-    "from __future__ import print_function\n",
-    "\n",
     "try:\n",
     "  # %tensorflow_version only exists in Colab.\n",
     "  %tensorflow_version 1.x\n",
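After this change the setup cell keeps only the Colab version guard. A minimal sketch of the surviving cell body, reconstructed from the context lines above (the `except` clause and the trailing import are assumptions, not shown in this hunk):

```python
# The __future__ imports were dropped because the notebook targets
# Python 3, where absolute_import, division, and print_function are
# already the default behavior.
try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 1.x
except Exception:
  pass  # assumed guard: the magic is undefined outside Colab

import tensorflow as tf  # assumed follow-up import, typical for this tutorial
```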
@@ -226,8 +220,7 @@
    },
    "source": [
     "## Define and tune learning model hyperparameters\n",
-    "Set learning model hyperparameter values.\n",
-    "\n"
+    "Set learning model hyperparameter values.\n"
    ]
   },
   {
@@ -391,10 +384,7 @@
     "\n",
     "This guarantee is sometimes referred to as the **privacy budget**. A lower privacy budget more tightly bounds an adversary's ability to improve their guess. This ensures a stronger privacy guarantee. Intuitively, this is because it is harder for a single training point to affect the outcome of learning: for instance, the information contained in the training point cannot be memorized by the ML algorithm, and the privacy of the individual who contributed this training point to the dataset is preserved.\n",
     "\n",
-    "In this tutorial, the privacy analysis is performed in the framework of Rényi Differential Privacy (RDP), which is a relaxation of pure DP based on [this paper](https://arxiv.org/abs/1702.07476) that is particularly well suited for DP-SGD.\n",
-    "\n",
-    "\n",
-    "\n"
+    "In this tutorial, the privacy analysis is performed in the framework of Rényi Differential Privacy (RDP), which is a relaxation of pure DP based on [this paper](https://arxiv.org/abs/1702.07476) that is particularly well suited for DP-SGD.\n"
    ]
   },
   {
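For reference, the guarantee the retained paragraph calls the privacy budget is the standard (ε, δ)-DP bound; this summary is mine, not part of the notebook:

```latex
% A mechanism M is (\varepsilon, \delta)-differentially private if, for
% all datasets d, d' differing in one training point and all outcome sets S:
\Pr[M(d) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(d') \in S] + \delta
% A smaller \varepsilon (a lower privacy budget) gives a tighter bound on
% how much any single training point can shift the distribution of outcomes.
```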
@@ -414,8 +404,7 @@
     "1. The total number of points in the training data, `n`.\n",
     "2. The `batch_size`.\n",
     "3. The `noise_multiplier`.\n",
-    "4. The number of `epochs` of training.\n",
-    "\n"
+    "4. The number of `epochs` of training.\n"
    ]
   },
   {
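The four quantities in the retained list are exactly the inputs to the tutorial's epsilon computation. A hedged sketch, assuming the `compute_dp_sgd_privacy` helper shipped with TensorFlow Privacy around the time of this commit (the import path has moved between releases, and the numeric values are illustrative, not taken from this diff):

```python
# Sketch: compute the DP-SGD epsilon from the four training quantities
# listed above, plus a target delta.
from tensorflow_privacy.privacy.analysis.compute_dp_sgd_privacy import (
    compute_dp_sgd_privacy)

eps, opt_rdp_order = compute_dp_sgd_privacy(
    n=60000,               # illustrative: total number of training points
    batch_size=250,        # illustrative batch_size
    noise_multiplier=1.3,  # illustrative noise_multiplier
    epochs=15,             # illustrative number of epochs
    delta=1e-5)            # target delta, conventionally well below 1/n
print('epsilon =', eps)
```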