# TODO(b/181782485): Switch to the main book for launch - /responsible_ai/_book.yaml
book_path: /responsible_ai/_book.yaml
project_path: /responsible_ai/_project.yaml
title: TensorFlow Privacy
description: >
  Overview of the TensorFlow Privacy library.
landing_page:
  nav: left
  custom_css_path: /site-assets/css/style.css
  rows:
  - heading: Privacy in Machine Learning
    items:
    - classname: devsite-landing-row-50
      description: >
        <p>
        An important aspect of responsible AI usage is ensuring that ML models do not expose
        potentially sensitive information, such as demographic details or other attributes in
        the training dataset that could be used to identify individuals. One way to achieve
        this is to use differentially private stochastic gradient descent (DP-SGD), a
        modification of the standard stochastic gradient descent (SGD) algorithm used in
        machine learning.
        </p>
        <p>
        Models trained with DP-SGD come with measurable differential privacy (DP) guarantees,
        which helps mitigate the risk of exposing sensitive training data. Because the purpose
        of DP is to prevent individual data points from being identified, a model trained with
        DP should not be significantly affected by any single training example in its training
        set. DP-SGD techniques can also be used in federated learning to provide user-level
        differential privacy. You can learn more about differentially private deep learning in
        <a href="https://arxiv.org/pdf/1607.00133.pdf">the original paper</a>.
        </p>
    - code_block: |
        <pre class="prettyprint">
        import tensorflow as tf
        import tensorflow_privacy

        # Select your differentially private optimizer
        optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
            l2_norm_clip=l2_norm_clip,
            noise_multiplier=noise_multiplier,
            num_microbatches=num_microbatches,
            learning_rate=learning_rate)

        # Select your loss function
        loss = tf.keras.losses.CategoricalCrossentropy(
            from_logits=True, reduction=tf.losses.Reduction.NONE)

        # Compile your model
        model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])

        # Fit your model
        model.fit(train_data, train_labels,
                  epochs=epochs,
                  validation_data=(test_data, test_labels),
                  batch_size=batch_size)
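
        # Optionally, report the DP guarantee achieved by this training run.
        # A sketch assuming the compute_dp_sgd_privacy helper; n_train_examples
        # (training-set size) and delta=1e-5 are hypothetical placeholder values.
        eps, _ = tensorflow_privacy.compute_dp_sgd_privacy(
            n=n_train_examples,
            batch_size=batch_size,
            noise_multiplier=noise_multiplier,
            epochs=epochs,
            delta=1e-5)
        print('Trained with DP guarantee: epsilon =', eps)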
        </pre>
  - classname: devsite-landing-row-100
  - heading: TensorFlow Privacy
    options:
    - description-100
    items:
    - classname: devsite-landing-row-100
      description: >
        <p>
        TensorFlow Privacy (TF Privacy) is an open source library developed by teams at
        Google Research. The library includes implementations of commonly used TensorFlow
        optimizers for training ML models with DP. The goal is to enable ML practitioners
        using standard TensorFlow APIs to train privacy-preserving models by changing only a
        few lines of code.
        </p>
        <p>
        The differentially private optimizers can be used in conjunction with high-level APIs
        that use the Optimizer class, especially Keras, as sketched in the example below.
        Additionally, you can find differentially private implementations of some Keras models.
        All of the optimizers and models can be found in the
        <a href="../api_docs/python/tf_privacy">API documentation</a>.
        </p>
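    # The sketch below illustrates the "few lines of code" change described above:
    # a standard Keras optimizer is swapped for its differentially private
    # counterpart, and the rest of the training pipeline stays the same. The
    # parameter values shown are hypothetical placeholders, not recommended settings.
    - code_block: |
        <pre class="prettyprint">
        # Standard (non-private) training:
        # optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

        # Differentially private training with TF Privacy:
        optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
            l2_norm_clip=1.0,        # clipping norm applied to per-microbatch gradients
            noise_multiplier=1.1,    # noise stddev relative to the clipping norm
            num_microbatches=1,      # how many microbatches each batch is split into
            learning_rate=0.01)
        </pre>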
  - classname: devsite-landing-row-cards
    items:
    - heading: "Introducing TensorFlow Privacy"
      image_path: /resources/images/tf-logo-card-16x9.png
      path: https://blog.tensorflow.org/2019/03/introducing-tensorflow-privacy-learning.html
      buttons:
      - label: "Read on TensorFlow blog"
        path: https://blog.tensorflow.org/2019/03/introducing-tensorflow-privacy-learning.html
    - heading: "TensorFlow Privacy at TF Dev Summit 2020"
      youtube_id: UEECKh6PLhI
      buttons:
      - label: Watch the video
        path: https://www.youtube.com/watch?v=UEECKh6PLhI
    - heading: "TensorFlow Privacy on GitHub"
      image_path: /resources/images/github-card-16x9.png
      path: https://github.com/tensorflow/privacy
      buttons:
      - label: "View on GitHub"
        path: https://github.com/tensorflow/privacy