Commit graph

823 commits

Author SHA1 Message Date
Shuang Song
27069d347d Fixes comments and membership scores for threshold attacks.
PiperOrigin-RevId: 555579896
2023-08-10 11:31:29 -07:00
Shuang Song
fafa69b65c
Merge pull request #484 from ethz-spylab/master
Fix training mode for LiRA code at inference
2023-08-03 22:10:59 -07:00
Steve Chien
a32e6ae5d0 Add DP versions of v1 FTRL optimizer.
PiperOrigin-RevId: 553186886
2023-08-02 10:30:35 -07:00
Galen Andrew
b7e9709ff7 Remove workspace dependency on specific github release of dp_accounting.
PiperOrigin-RevId: 551651015
2023-07-27 15:04:19 -07:00
Galen Andrew
67c9c2424c Internal change.
PiperOrigin-RevId: 551648705
2023-07-27 14:55:51 -07:00
Steve Chien
134c898ded Add DP-SGD version of v1 LinearClassifier.
PiperOrigin-RevId: 551350685
2023-07-26 16:38:41 -07:00
Shuang Song
225355258c Calls epsilon computation in MIA.
PiperOrigin-RevId: 551003589
2023-07-25 14:50:13 -07:00
Steve Chien
8e60864559 Minor code cleanup to compute_dp_sgd_privacy_lib and update dp_accounting dependency.
PiperOrigin-RevId: 550695787
2023-07-24 15:48:45 -07:00
A. Unique TensorFlower
c1c97f1c1c Modify fast clipping logic to support computation on TPUs.
PiperOrigin-RevId: 550673798
2023-07-24 14:28:45 -07:00
A. Unique TensorFlower
cb6659d11b Merge pull request #489 from mhaghifam:dp-second-order-optimization (https://github.com/tensorflow/privacy/pull/489, 024904810a8f130d554cc3f04713d5562ccfe5df)
PiperOrigin-RevId: 547312570
2023-07-11 16:02:29 -07:00
A. Unique TensorFlower
6b8007ddde Re-organize files and simplify test names.
These changes are intended to support a more modular system for when we
add more layer registry functions (and their corresponding tests). They are
also made so that we do not have an enormous number of lengthy tests inside
`clip_grads_test.py`.

PiperOrigin-RevId: 545779495
2023-07-05 14:01:09 -07:00
A. Unique TensorFlower
9536fb26e7 Update LICENSE rules for TF Privacy to the new API.
PiperOrigin-RevId: 545774183
2023-07-05 13:42:20 -07:00
Vadym Doroshenko
a147a480a5 Finish implementation of custom index names.
PiperOrigin-RevId: 545440374
2023-07-04 07:19:18 -07:00
Vadym Doroshenko
93f5a5249c Add slice names for custom slices
PiperOrigin-RevId: 544599507
2023-06-30 02:33:03 -07:00
Galen Andrew
f953e834df New version updates dependencies.
PiperOrigin-RevId: 544133468
2023-06-28 12:46:28 -07:00
Michael Reneer
5b21aad36e Update the scipy dependency to 1.9.
PiperOrigin-RevId: 543452894
2023-06-26 08:59:44 -07:00
Vadym Doroshenko
45da453410 Add the ability to return slice indices.
PiperOrigin-RevId: 540885025
2023-06-16 08:22:43 -07:00
Edoardo Debenedetti
6301f3ffef
Merge branch 'tensorflow:master' into master
2023-06-13 15:09:21 +02:00
Edoardo Debenedetti
ab4cb09399 Fix LiRA inference
2023-06-13 09:57:46 +02:00
Zheng Xu
a4bdb05b62 zCDP to epsilon for tree aggregation accounting.
PiperOrigin-RevId: 539706770
2023-06-12 11:09:14 -07:00
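The zCDP-to-epsilon conversion this commit refers to can be illustrated with the standard Bun–Steinke bound. This is a hedged sketch of the textbook formula only, not necessarily the exact accounting the tree-aggregation code implements:

```python
import math

def zcdp_to_epsilon(rho: float, delta: float) -> float:
    """Standard conversion: rho-zCDP implies (eps, delta)-DP with
    eps = rho + 2 * sqrt(rho * log(1/delta))."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

# Example numbers are illustrative only.
eps = zcdp_to_epsilon(rho=0.5, delta=1e-5)  # roughly 5.3
```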
Walid Krichene
18c43b351b Support weighted losses in DPModel.
PiperOrigin-RevId: 538011437
2023-06-05 16:27:19 -07:00
Steve Chien
60d237be83 Update tensorflow-probability version to 0.20.0
PiperOrigin-RevId: 533550592
2023-05-19 14:22:24 -07:00
A. Unique TensorFlower
0f5acf868e Add additional tests and checks on the passed loss function.
PiperOrigin-RevId: 532225904
2023-05-15 14:27:46 -07:00
Shuang Song
8fdac5f833 Test DPModel in distributed training.
PiperOrigin-RevId: 528039164
2023-04-28 18:57:29 -07:00
Walid Krichene
e65e14b2d6 Fix bug in DPModel that shows up in distributed training.
PiperOrigin-RevId: 528026372
2023-04-28 17:31:18 -07:00
Galen Andrew
9710a4acc7 Bump version and update dependencies for PyPI release.
PiperOrigin-RevId: 527377853
2023-04-26 14:35:24 -07:00
A. Unique TensorFlower
33bbc87ff2 Use better group privacy bound in computing user level privacy [TF Privacy]
PiperOrigin-RevId: 526852999
2023-04-24 22:17:24 -07:00
Michael Reneer
60cb0dd2fb Update tensorflow privacy to use NamedTuple instead of attrs.
This allows these objects to be traversed more easily when nested in tree-like structures.

PiperOrigin-RevId: 525532511
2023-04-19 13:18:25 -07:00
Shuang Song
e362f51773 Supports slicing for multi-label data.
PiperOrigin-RevId: 523846333
2023-04-12 17:14:11 -07:00
Galen Andrew
d5e41e20ad More detailed description of arguments in compute_dp_sgd_privacy.
PiperOrigin-RevId: 522693217
2023-04-07 15:07:35 -07:00
Shuang Song
c4628d5dbc Skips adding noise when noise_multiplier is 0 for fast clipping.
PiperOrigin-RevId: 522396275
2023-04-06 11:54:55 -07:00
Shuang Song
de9836883d Skips noise addition when noise_multiplier is 0. Fix a typo.
PiperOrigin-RevId: 521912964
2023-04-04 17:48:24 -07:00
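The noise-skipping behavior described in these two commits can be sketched as follows. The helper below is hypothetical, not the library's actual code: when noise_multiplier is 0, the result is exactly the clipped sum and the Gaussian sampling step is skipped.

```python
import numpy as np

def noised_sum(clipped_grads, l2_norm_clip, noise_multiplier, rng):
    """Sum clipped per-example gradients, adding Gaussian noise only
    when noise_multiplier > 0 (matching the skip in the commit)."""
    total = np.sum(clipped_grads, axis=0)
    if noise_multiplier > 0.0:
        stddev = l2_norm_clip * noise_multiplier
        total = total + rng.normal(0.0, stddev, size=total.shape)
    return total

rng = np.random.default_rng(0)
g = np.ones((4, 3))  # four per-example gradients of dimension 3
# With noise_multiplier=0 the output is the exact clipped sum.
out = noised_sum(g, l2_norm_clip=1.0, noise_multiplier=0.0, rng=rng)
```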
A. Unique TensorFlower
ee1abe6930 Generalize generate_model_outputs_using_core_keras_layers().
This change adds the following two new features to the above function:
(i) it supports nested custom layers of depth >2;
(ii) it allows the caller to exclude certain layers from the expansion.

Feature (ii) will be needed for the development of DP models that use
Transformer or BERT-type layers.

PiperOrigin-RevId: 520919934
2023-03-31 07:41:16 -07:00
Galen Andrew
abb0c3f9f6 Migrates compute_dp_sgd_privacy to print new privacy statement from compute_dp_sgd_privacy_lib.
PiperOrigin-RevId: 520147633
2023-03-28 15:14:12 -07:00
A. Unique TensorFlower
781483d1f2 Make compute_dp_sgd_privacy_statement visible.
PiperOrigin-RevId: 520105385
2023-03-28 12:43:21 -07:00
Shuang Song
e125951c9b Sets training set as positive class for sklearn.metrics.roc_curve.
sklearn.metrics.roc_curve uses classification rules in the form "score >= threshold ==> predict positive".
When calling roc_curve, we used to label test data as positive class. This way, TPR = % test examples classified as test, FPR = % training examples classified as test. The classification rule is "loss >= threshold ==> predict test".

For membership inference, TPR is usually defined as % training examples classified as training, and FPR is % test examples classified as training.
As training samples usually have lower loss, we usually use rules in the form of "loss <= threshold ==> predict training".

Therefore, TPR in the 2nd case is actually (1 - FPR) in the 1st case, and FPR in the 2nd case is (1 - TPR) in the 1st case.
This mismatch does not affect attacker advantage or AUC, but it can cause problems for PPV.

Now, we:
- set training set as positive class.
- for threshold and entropy attacks, set score to be -loss, so that higher score corresponds to training data.
- negate the thresholds (computed based on -loss) so that it corresponds to loss.

PiperOrigin-RevId: 519880043
2023-03-27 18:00:25 -07:00
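The TPR/FPR complement relationship described in this commit can be checked numerically. The losses below are made up for illustration (training examples given lower loss, as the message assumes):

```python
import numpy as np

train_loss = np.array([0.1, 0.2, 0.3, 0.9])  # hypothetical member losses
test_loss = np.array([0.4, 0.8, 1.0, 1.2])   # hypothetical non-member losses
t = 0.5                                       # an example threshold

# Convention 1 (old): test is the positive class, rule "loss >= t => test".
tpr_1 = np.mean(test_loss >= t)    # % test examples classified as test
fpr_1 = np.mean(train_loss >= t)   # % training examples classified as test

# Convention 2 (new): training is the positive class, rule "loss < t => train".
tpr_2 = np.mean(train_loss < t)    # % training examples classified as training
fpr_2 = np.mean(test_loss < t)     # % test examples classified as training

# The complement relationship from the commit message (ignoring ties at t):
assert tpr_2 == 1 - fpr_1
assert fpr_2 == 1 - tpr_1
```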
A. Unique TensorFlower
7796369d8b Support gradient norm computation with respect to a subset of variables.
PiperOrigin-RevId: 519245638
2023-03-24 14:57:54 -07:00
Galen Andrew
d5d60e2eac Adds compute_dp_sgd_privacy_statement for accurate privacy accounting report.
PiperOrigin-RevId: 518934979
2023-03-23 12:37:12 -07:00
Walid Krichene
52806ba952 In dp_optimizer_keras_sparse, update iterations to reflect the number of logical batches, rather than physical batches.
In the current behavior, when using gradient accumulation, the `iterations` variable is incremented at every physical batch, while variables are only updated at every logical batch (where logical batch = accumulation_steps many physical batches). This causes certain optimizers that explicitly depend on `iterations` (such as Adam) to behave very differently under gradient accumulation.

With this change, `iterations` is only incremented after each logical batch.

PiperOrigin-RevId: 517197044
2023-03-16 12:35:57 -07:00
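A minimal sketch of the logical-batch counting described here, using a hypothetical class rather than the actual dp_optimizer_keras_sparse code: `iterations` advances only once per logical batch, so optimizers that read it (such as Adam) see the same schedule with or without accumulation.

```python
class AccumulatingOptimizer:
    """Toy illustration: one logical batch = accumulation_steps physical batches."""

    def __init__(self, accumulation_steps: int):
        self.accumulation_steps = accumulation_steps
        self.iterations = 0        # counts logical batches, per the fix
        self._physical_batches = 0

    def apply_gradients(self, grads):
        self._physical_batches += 1
        # Variables are only updated, and `iterations` only incremented,
        # once enough physical batches have been accumulated.
        if self._physical_batches % self.accumulation_steps == 0:
            self.iterations += 1

opt = AccumulatingOptimizer(accumulation_steps=4)
for _ in range(12):              # 12 physical batches
    opt.apply_gradients(grads=None)
# 12 physical batches / 4 accumulation steps = 3 logical batches.
```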
A. Unique TensorFlower
7ae50c5ca5 Generalize model_forward_pass() to allow input models with multiple outputs.
PiperOrigin-RevId: 517145254
2023-03-16 09:36:16 -07:00
A. Unique TensorFlower
043e8b5272 Report the true loss in DPModel instead of the norm-adjusted loss.
PiperOrigin-RevId: 517112812
2023-03-16 07:15:13 -07:00
A. Unique TensorFlower
8f4ab1a8bb Allow custom per example loss functions for computing per microbatch gradient norm.
PiperOrigin-RevId: 516897864
2023-03-15 12:28:39 -07:00
Zheng Xu
d7d497bb69 Update script for pip package.
PiperOrigin-RevId: 515696284
2023-03-10 11:44:03 -08:00
Galen Andrew
c2bd4c3c6f Bump version number.
PiperOrigin-RevId: 515456888
2023-03-09 15:22:34 -08:00
Galen Andrew
701a585e1a Revert to dp-accounting 0.3.0 API.
PiperOrigin-RevId: 515432485
2023-03-09 13:56:34 -08:00
Galen Andrew
61dfbcc1f5 Adds functions for more accurate privacy accounting.
Adds function for computation of example-level DP epsilon taking into account microbatching and not assuming Poisson subsampling. Adds function for computation of user-level DP in terms of group privacy.

PiperOrigin-RevId: 515114010
2023-03-08 12:44:39 -08:00
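A hedged illustration of user-level accounting via group privacy, using the classical bound; the library may use a tighter bound (see the "better group privacy bound" commit above), so this is a sketch of the general idea only:

```python
import math

def user_level_dp(eps: float, delta: float, k: int):
    """Classical group-privacy bound: if a mechanism is (eps, delta)-DP at
    the example level and each user contributes at most k examples, it is
    (k*eps, k*exp((k-1)*eps)*delta)-DP at the user level."""
    return k * eps, k * math.exp((k - 1) * eps) * delta

# Illustrative numbers only.
user_eps, user_delta = user_level_dp(eps=1.0, delta=1e-6, k=2)
```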
A. Unique TensorFlower
4e1fc252e4 Add a kwargs argument to the registry API + small changes to docstrings.
This is a forward-looking change that is needed to support more complicated
layers, such as `tf.keras.layers.MultiHeadAttention`, which can take `kwargs`
as part of their `.call()` method and can generate arbitrary outputs.

PiperOrigin-RevId: 514775503
2023-03-07 10:35:04 -08:00
Steve Chien
21ee1a607a Fix unneeded dependency.
PiperOrigin-RevId: 514523996
2023-03-06 14:20:33 -08:00
Zheng Xu
0a0f377f3f Adaptive clipping in DP-FTRL with restart.
PiperOrigin-RevId: 513934548
2023-03-06 07:16:57 -08:00
A. Unique TensorFlower
8bfafdd74d Efficient DP-SGD with support for microbatched losses.
PiperOrigin-RevId: 513886957
2023-03-06 07:01:03 -08:00