# dp-sgd


In this repository we implement neural networks trained with differentially private stochastic gradient descent (DP-SGD), following the methods from [this paper](https://arxiv.org/abs/1607.00133). We train a dense network on the [mnist](http://yann.lecun.com/exdb/mnist/) dataset and a CNN on the [fashion mnist](https://github.com/zalandoresearch/fashion-mnist) dataset. For comparison, we also train these networks without differential privacy.
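
As background for what the scripts below do, the core of DP-SGD as described in the paper is: compute a gradient per example, clip each per-example gradient to a fixed L2 norm, sum the clipped gradients, add Gaussian noise, and take a step. The following is a minimal NumPy sketch of one such update on a toy logistic-regression problem; all names and values (`clip_norm`, `noise_mult`, the synthetic data) are illustrative assumptions, not this repository's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def per_example_grad(w, x, y):
    """Gradient of the logistic loss for a single example (x, y)."""
    p = 1.0 / (1.0 + np.exp(-x @ w))  # sigmoid prediction
    return (p - y) * x

def dp_sgd_step(w, X, Y, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD step: clip per-example gradients, sum, add Gaussian noise."""
    grads = []
    for x, y in zip(X, Y):
        g = per_example_grad(w, x, y)
        # Clip the per-example gradient to an L2 norm of at most clip_norm.
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)
        grads.append(g)
    # The noise standard deviation scales with noise_mult * clip_norm.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    g_noisy = (np.sum(grads, axis=0) + noise) / len(X)
    return w - lr * g_noisy

# Toy data: 2-D points labeled by the sign of the coordinate sum.
X = rng.normal(size=(64, 2))
Y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.zeros(2)
for _ in range(200):
    w = dp_sgd_step(w, X, Y)
print("learned weights:", w)
```

In the paper, the clipping norm and noise multiplier jointly determine the (epsilon, delta) privacy guarantee, which is tracked across training steps with the moments accountant.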

To test the accuracy of a model saved in the `models` folder, call `test_model_accuracy.py` with the following positional parameters, as in the example below:
1. the name of the dataset the model was trained on (`mnist` or `fashion_mnist`)
2. the name of the file the model is saved in (the file is assumed to be located in the `models` directory)
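
A hypothetical invocation might look like this (the model filename is a placeholder for a file that actually exists in `models`):

```
python test_model_accuracy.py mnist my_saved_model
```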

If you want to recreate the results, call `mnist.py` with the following positional parameters, as in the example below:
1. the name of the dataset to train on (`mnist` or `fashion_mnist`)
2. (optional) `False` if differential privacy should not be used
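
For example, a private and a non-private training run could be started like this:

```
python mnist.py mnist                 # train on mnist with differential privacy
python mnist.py fashion_mnist False   # train on fashion_mnist without differential privacy
```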

In the `results` folder there are graphs of the accuracy and the loss of the different models, named following the pattern `{privacy_version}-{metric}-{epochs}-{dataset}-{epsilon}-{delta}.png`, where epsilon and delta appear only when differential privacy is used.

In the `models` folder we save the trained models, named following the pattern `{privacy_version}-{epochs}-{dataset}-{epsilon}-{delta}`, where epsilon and delta appear only when differential privacy is used.
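
As a purely hypothetical illustration (the concrete values of `privacy_version`, `epochs`, `epsilon`, and `delta` depend on the training run), an accuracy graph from a differentially private mnist run might be named something like `dp-accuracy-15-mnist-2.0-1e-05.png`, while the non-private counterpart would omit the epsilon and delta fields.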