{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1eiwVljWpzM7"
      },
      "source": [
        "Copyright 2020 The TensorFlow Authors.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "4rmwPgXeptiS"
      },
      "outputs": [],
      "source": [
        "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YM2gRaJMqvMi"
      },
      "source": [
        "# Assess privacy risks with the TensorFlow Privacy Report"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7oUAMMc6isck"
      },
      "source": [
        "\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://www.tensorflow.org/responsible_ai/privacy/tutorials/privacy_report\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /\u003eView on TensorFlow.org\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/privacy/blob/master/g3doc/tutorials/privacy_report.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/privacy/blob/master/g3doc/tutorials/privacy_report.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca href=\"https://storage.googleapis.com/tensorflow_docs/privacy/g3doc/tutorials/privacy_report.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/download_logo_32px.png\" /\u003eDownload notebook\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "\u003c/table\u003e"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9rMuytY7Nn8P"
      },
      "source": [
        "## Overview\n",
        "\n",
        "In this codelab you'll train a simple image classification model on the CIFAR10 dataset, and then run a \"membership inference attack\" against this model to assess whether an attacker is able to \"guess\" if a particular sample was present in the training set. You will use the TF Privacy Report to visualize results from multiple models and model checkpoints."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FUWqArj_q8vs"
      },
      "source": [
        "## Setup\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Lr1pwHcbralz"
      },
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "from typing import Tuple\n",
        "from scipy import special\n",
        "from sklearn import metrics\n",
        "\n",
        "import tensorflow as tf\n",
        "\n",
        "import tensorflow_datasets as tfds\n",
        "\n",
        "# Set verbosity.\n",
        "tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)\n",
        "from sklearn.exceptions import ConvergenceWarning\n",
        "\n",
        "import warnings\n",
        "warnings.simplefilter(action=\"ignore\", category=ConvergenceWarning)\n",
        "warnings.simplefilter(action=\"ignore\", category=FutureWarning)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ucw81ar6ru-6"
      },
      "source": [
        "### Install TensorFlow Privacy."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1n0K00S6zmfb"
      },
      "outputs": [],
      "source": [
        "!pip install tensorflow_privacy"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "both",
        "id": "zcqAmiGH90kl"
      },
      "outputs": [],
      "source": [
        "from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import membership_inference_attack as mia\n",
        "from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import AttackInputData\n",
        "from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import AttackResultsCollection\n",
        "from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import AttackType\n",
        "from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import PrivacyMetric\n",
        "from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import PrivacyReportMetadata\n",
        "from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import SlicingSpec\n",
        "from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import privacy_report"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VpOdtnbPbPXE"
      },
      "outputs": [],
      "source": [
        "import tensorflow_privacy"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pBbcG86th_sW"
      },
      "source": [
        "## Train two models, with privacy metrics\n",
        "\n",
        "This section trains a pair of `keras.Model` classifiers on the `CIFAR-10` dataset. During the training process it collects privacy metrics that will be used to generate reports in the next section.\n",
        "\n",
        "The first step is to define some hyperparameters:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "al0QK7O-0lk7"
      },
      "outputs": [],
      "source": [
        "dataset = 'cifar10'\n",
        "num_classes = 10\n",
        "activation = 'relu'\n",
        "num_conv = 3\n",
        "\n",
        "batch_size = 50\n",
        "epochs_per_report = 2\n",
        "total_epochs = 50\n",
        "\n",
        "lr = 0.001"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pu5IEzW6B-Oh"
      },
      "source": [
        "Next, load the dataset. There's nothing privacy-specific in this code."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "f1TT3ofN0qrq"
      },
      "outputs": [],
      "source": [
        "#@title\n",
        "print('Loading the dataset.')\n",
        "train_ds = tfds.as_numpy(\n",
        "    tfds.load(dataset, split=tfds.Split.TRAIN, batch_size=-1))\n",
        "test_ds = tfds.as_numpy(\n",
        "    tfds.load(dataset, split=tfds.Split.TEST, batch_size=-1))\n",
        "x_train = train_ds['image'].astype('float32') / 255.\n",
        "y_train_indices = train_ds['label'][:, np.newaxis]\n",
        "x_test = test_ds['image'].astype('float32') / 255.\n",
        "y_test_indices = test_ds['label'][:, np.newaxis]\n",
        "\n",
        "# Convert class vectors to binary class matrices.\n",
        "y_train = tf.keras.utils.to_categorical(y_train_indices, num_classes)\n",
        "y_test = tf.keras.utils.to_categorical(y_test_indices, num_classes)\n",
        "\n",
        "input_shape = x_train.shape[1:]\n",
        "\n",
        "assert x_train.shape[0] % batch_size == 0, \"The tensorflow_privacy optimizer doesn't handle partial batches\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9l-55vOLCWZM"
      },
      "source": [
        "Next define a function to build the models."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "vCyOWyyhXLib"
      },
      "outputs": [],
      "source": [
        "#@title\n",
        "def small_cnn(input_shape: Tuple[int],\n",
        "              num_classes: int,\n",
        "              num_conv: int,\n",
        "              activation: str = 'relu') -\u003e tf.keras.models.Sequential:\n",
        "  \"\"\"Set up a small CNN for image classification.\n",
        "\n",
        "  Args:\n",
        "    input_shape: Integer tuple for the shape of the images.\n",
        "    num_classes: Number of prediction classes.\n",
        "    num_conv: Number of convolutional layers.\n",
        "    activation: The activation function to use for conv and dense layers.\n",
        "\n",
        "  Returns:\n",
        "    The Keras model.\n",
        "  \"\"\"\n",
        "  model = tf.keras.models.Sequential()\n",
        "  model.add(tf.keras.layers.Input(shape=input_shape))\n",
        "\n",
        "  # Conv layers\n",
        "  for _ in range(num_conv):\n",
        "    model.add(tf.keras.layers.Conv2D(32, (3, 3), activation=activation))\n",
        "    model.add(tf.keras.layers.MaxPooling2D())\n",
        "\n",
        "  model.add(tf.keras.layers.Flatten())\n",
        "  model.add(tf.keras.layers.Dense(64, activation=activation))\n",
        "  model.add(tf.keras.layers.Dense(num_classes))\n",
        "\n",
        "  model.compile(\n",
        "      loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),\n",
        "      optimizer=tf.keras.optimizers.Adam(learning_rate=lr),\n",
        "      metrics=['accuracy'])\n",
        "\n",
        "  return model"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hs0Smn24Dty-"
      },
      "source": [
        "Build two CNN models using that function.\n",
        "\n",
        "Configure the first to use two convolutional layers, and the second to use three, so you can compare how privacy risk changes with model capacity."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "nexqXAjqDgad"
      },
      "outputs": [],
      "source": [
        "model_2layers = small_cnn(\n",
        "    input_shape, num_classes, num_conv=2, activation=activation)\n",
        "model_3layers = small_cnn(\n",
        "    input_shape, num_classes, num_conv=3, activation=activation)"
      ]
    },
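    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Optionally, inspect the two architectures with Keras's standard `Model.summary` method to confirm they differ only in the number of convolutional blocks:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional: print both architectures for comparison.\n",
        "model_2layers.summary()\n",
        "model_3layers.summary()"
      ]
    },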
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "D9nrWjP9D65l"
      },
      "source": [
        "### Define a callback to collect privacy metrics\n",
        "\n",
        "Next define a `keras.callbacks.Callback` to periodically run some privacy attacks against the model, and log the results.\n",
        "\n",
        "The Keras `fit` method will call the `on_epoch_end` method after each training epoch. The `epoch` argument is the (0-based) epoch number.\n",
        "\n",
        "You could implement this procedure by writing a loop that repeatedly calls `Model.fit(..., epochs=epochs_per_report)` and runs the attack code. The callback is used here just because it gives a clear separation between the training logic and the privacy evaluation logic.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "won3NecEmzzg"
      },
      "outputs": [],
      "source": [
        "class PrivacyMetrics(tf.keras.callbacks.Callback):\n",
        "  def __init__(self, epochs_per_report, model_name):\n",
        "    self.epochs_per_report = epochs_per_report\n",
        "    self.model_name = model_name\n",
        "    self.attack_results = []\n",
        "\n",
        "  def on_epoch_end(self, epoch, logs=None):\n",
        "    epoch = epoch + 1\n",
        "\n",
        "    if epoch % self.epochs_per_report != 0:\n",
        "      return\n",
        "\n",
        "    print(f'\\nRunning privacy report for epoch: {epoch}\\n')\n",
        "\n",
        "    logits_train = self.model.predict(x_train, batch_size=batch_size)\n",
        "    logits_test = self.model.predict(x_test, batch_size=batch_size)\n",
        "\n",
        "    prob_train = special.softmax(logits_train, axis=1)\n",
        "    prob_test = special.softmax(logits_test, axis=1)\n",
        "\n",
        "    # Add metadata to generate a privacy report.\n",
        "    privacy_report_metadata = PrivacyReportMetadata(\n",
        "        # Show the validation accuracy on the plot:\n",
        "        # it's what you pass as `accuracy_train` that gets plotted.\n",
        "        accuracy_train=logs['val_accuracy'],\n",
        "        accuracy_test=logs['val_accuracy'],\n",
        "        epoch_num=epoch,\n",
        "        model_variant_label=self.model_name)\n",
        "\n",
        "    attack_results = mia.run_attacks(\n",
        "        AttackInputData(\n",
        "            labels_train=y_train_indices[:, 0],\n",
        "            labels_test=y_test_indices[:, 0],\n",
        "            probs_train=prob_train,\n",
        "            probs_test=prob_test),\n",
        "        SlicingSpec(entire_dataset=True, by_class=True),\n",
        "        attack_types=(AttackType.THRESHOLD_ATTACK,\n",
        "                      AttackType.LOGISTIC_REGRESSION),\n",
        "        privacy_report_metadata=privacy_report_metadata)\n",
        "\n",
        "    self.attack_results.append(attack_results)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zLPHj5ZtFhC9"
      },
      "source": [
        "### Train the models\n",
        "\n",
        "The next code block trains the two models. The `all_reports` list is used to collect all the results from all the models' training runs. The individual reports are tagged with the `model_name`, so there's no confusion about which model generated which report."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "o3U76c2Y4irD"
      },
      "outputs": [],
      "source": [
        "all_reports = []"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Gywwxs6R1aLV"
      },
      "outputs": [],
      "source": [
        "callback = PrivacyMetrics(epochs_per_report, \"2 Layers\")\n",
        "history = model_2layers.fit(\n",
        "    x_train,\n",
        "    y_train,\n",
        "    batch_size=batch_size,\n",
        "    epochs=total_epochs,\n",
        "    validation_data=(x_test, y_test),\n",
        "    callbacks=[callback],\n",
        "    shuffle=True)\n",
        "\n",
        "all_reports.extend(callback.attack_results)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "27qLElOR4y_i"
      },
      "outputs": [],
      "source": [
        "callback = PrivacyMetrics(epochs_per_report, \"3 Layers\")\n",
        "history = model_3layers.fit(\n",
        "    x_train,\n",
        "    y_train,\n",
        "    batch_size=batch_size,\n",
        "    epochs=total_epochs,\n",
        "    validation_data=(x_test, y_test),\n",
        "    callbacks=[callback],\n",
        "    shuffle=True)\n",
        "\n",
        "all_reports.extend(callback.attack_results)"
      ]
    },
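    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before plotting anything, you can sanity-check a single report directly. The following is a minimal sketch that prints a text summary of the most recent `AttackResults` and its strongest individual attack, using the `summary` and `get_result_with_max_auc` helpers on `AttackResults`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# A quick sanity check: inspect the most recent AttackResults object.\n",
        "last_report = all_reports[-1]\n",
        "\n",
        "# Text summary, broken down by dataset slice.\n",
        "print(last_report.summary(by_slices=True))\n",
        "\n",
        "# The single strongest attack, measured by AUC.\n",
        "best_attack = last_report.get_result_with_max_auc()\n",
        "print('Strongest attack AUC:', best_attack.get_auc())"
      ]
    },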
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6mBEYh4utxiR"
      },
      "source": [
        "## Epoch Plots\n",
        "\n",
        "You can visualize how privacy risk evolves as you train a model. By probing the model periodically (e.g. every 5 epochs), you can pick the point in time with the best performance/privacy trade-off.\n",
        "\n",
        "Use the TF Privacy Membership Inference Attack module to generate `AttackResults`. These `AttackResults` get combined into an `AttackResultsCollection`. The TF Privacy Report is designed to analyze the provided `AttackResultsCollection`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wT7zfUC8HXRI"
      },
      "outputs": [],
      "source": [
        "results = AttackResultsCollection(all_reports)"
      ]
    },
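    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Attack results take a while to produce, so it can be worth persisting the collection to disk and reloading it in a later session. This is a sketch that assumes the `save`/`load` helpers on `AttackResultsCollection`; the `attack_results` directory name is arbitrary:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# Persist the collection so it can be reloaded and compared later.\n",
        "# Sketch: assumes AttackResultsCollection provides save()/load();\n",
        "# the 'attack_results' directory name is arbitrary.\n",
        "os.makedirs('attack_results', exist_ok=True)\n",
        "results.save('attack_results')\n",
        "\n",
        "# To reload in a later session:\n",
        "# results = AttackResultsCollection.load('attack_results')"
      ]
    },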
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "both",
        "id": "o7T8n0ffv3qo"
      },
      "outputs": [],
      "source": [
        "privacy_metrics = (PrivacyMetric.AUC, PrivacyMetric.ATTACKER_ADVANTAGE)\n",
        "epoch_plot = privacy_report.plot_by_epochs(\n",
        "    results, privacy_metrics=privacy_metrics)"
      ]
    },
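    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "You can post-process the result like any other plot. For example, to keep a copy of it for a report (a sketch; this assumes `epoch_plot` is a standard `matplotlib.figure.Figure`, and the filename is arbitrary):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Save the epoch plot to disk. Sketch: assumes `epoch_plot` is a\n",
        "# standard matplotlib Figure; the filename is arbitrary.\n",
        "epoch_plot.savefig('privacy_report_by_epoch.png', dpi=150, bbox_inches='tight')"
      ]
    },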
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ijjwGgyixsFg"
      },
      "source": [
        "Notice that, as a rule, privacy vulnerability tends to increase as the number of epochs goes up. This is true across model variants as well as different attacker types.\n",
        "\n",
        "Two-layer models (with fewer convolutional layers) are generally more vulnerable than their three-layer counterparts.\n",
        "\n",
        "Now let's see how model performance changes with respect to privacy risk."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GbtlV-2Xu8s-"
      },
      "source": [
        "## Privacy vs Utility"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Lt6fXGoivLH1"
      },
      "outputs": [],
      "source": [
        "privacy_metrics = (PrivacyMetric.AUC, PrivacyMetric.ATTACKER_ADVANTAGE)\n",
        "utility_privacy_plot = privacy_report.plot_privacy_vs_accuracy(\n",
        "    results, privacy_metrics=privacy_metrics)\n",
        "\n",
        "for axis in utility_privacy_plot.axes:\n",
        "  axis.set_xlabel('Validation accuracy')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "m_6vg3pBPoyy"
      },
      "source": [
        "Three-layer models (perhaps because they have too many parameters) only reach a validation accuracy of about 0.85. The two-layer models show roughly equal privacy risk at that accuracy level, but they continue on to reach higher accuracy.\n",
        "\n",
        "You can also see how the line for the two-layer models gets steeper: additional marginal gains in accuracy come at the expense of a large increase in privacy vulnerability."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7u3BAg87v3qv"
      },
      "source": [
        "This is the end of the tutorial. Feel free to analyze your own results."
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "collapsed_sections": [],
      "name": "privacy_report.ipynb",
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}