In a recent blog post, the TensorFlow team announced TensorFlow Privacy, an open-source library that lets researchers and developers train machine learning models with strong privacy guarantees. The library provides mathematical guarantees that models do not memorize individual users' data during training.
Machine learning is pervasive in today's online products and services. Carey Radebaugh, a product manager at Google Brain, explains why Google felt it was important to embed strong privacy protections into TensorFlow:
Modern machine learning is increasingly applied to create amazing new technologies and user experiences, many of which involve training machines to learn responsibly from sensitive data, such as personal photos or email. Ideally, the parameters of trained machine-learning models should encode general patterns rather than facts about specific training examples.
The introduction of TensorFlow Privacy aligns with the Responsible AI Practices that Google published last year, which commit to "build fairness, interpretability, privacy, and security into these [AI] systems". Beyond applying these practices internally, Google wants to enable external developers to apply them to the applications and products they build.
TensorFlow Privacy's technical implementation is based on differential privacy, a framework that bounds and measures how much a trained model can reveal about any single training example, and thereby ensures models do not learn or remember individual users' details.
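In practice, the library exposes differentially-private variants of standard TensorFlow optimizers that clip each example's gradient and add calibrated Gaussian noise before applying updates. The sketch below is illustrative rather than prescriptive: it assumes the DPKerasSGDOptimizer interface used in the repository's tutorials, and the parameter values and model are placeholders that may differ across library versions.

```python
import tensorflow as tf
import tensorflow_privacy

# Differentially-private SGD: each example's gradient is clipped to an L2
# bound and Gaussian noise is added before the averaged update is applied.
optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
    l2_norm_clip=1.0,       # clipping bound for per-example gradients
    noise_multiplier=1.1,   # ratio of noise stddev to the clipping bound
    num_microbatches=32,    # must evenly divide the batch size
    learning_rate=0.15)

# The loss must stay per-example (no reduction) so the optimizer can clip
# and noise gradients at that granularity.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)])

model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
# model.fit(train_x, train_y, batch_size=32, epochs=...)  # data not shown
```

The key design point is that privacy is enforced inside the optimizer, so the rest of the Keras training code stays unchanged.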
To demonstrate the effectiveness of TensorFlow Privacy, Google trained two language models on the standard Penn Treebank training dataset, one with differential privacy using the TensorFlow Privacy library and one without. Both models modelled the English language well and assigned high scores to ordinary financial-news sentences such as the following:
There was little turnover and nothing to stimulate the market
South korea and japan continue to be profitable
Merchant banks were stronger across the board
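The post does not spell out how these sentence scores are computed; for a language model, a natural measure is per-sentence log-likelihood (or, equivalently, perplexity). The sketch below is a hypothetical illustration of such scoring, assuming an already-trained next-word model with an assumed predict_next helper, which is not part of any particular library.

```python
import numpy as np

def sentence_score(model, token_ids):
    """Average log-probability the model assigns to each next token.

    Higher (less negative) scores mean the model finds the sentence more
    likely; a lower score corresponds to higher perplexity. `model` is
    assumed to map a token prefix to a probability distribution over the
    vocabulary via a hypothetical predict_next() method.
    """
    log_probs = []
    for i in range(1, len(token_ids)):
        probs = model.predict_next(token_ids[:i])  # hypothetical helper
        log_probs.append(np.log(probs[token_ids[i]]))
    return float(np.mean(log_probs))

def perplexity(model, token_ids):
    # Perplexity is the exponential of the negative average log-probability.
    return float(np.exp(-sentence_score(model, token_ids)))
```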
However, the two models' scores diverged significantly in places. For example, the conventionally-trained model scored the following three sentences highly because they had been "effectively memorized during standard training". The differentially-private model, by contrast, scored them very low and rejected them:
Aer banknote berlitz calloway … ssangyong swapo wachter
The naczelnik stands too
My god and i know i am correct and innocent
These three sentences are uncommon in the context of financial news. Had the model been trained on sensitive data, such rare sequences could identify or reveal information about individuals, which is why the differentially-private model rejects them. Radebaugh provides an additional explanation:
The two models’ differences result from the private model failing to memorize rare sequences that are abnormal to the training data. We can quantify this effect by leveraging our earlier work on measuring unintended memorization in neural networks, which intentionally inserts unique, random canary sentences into the training data and assesses the canaries’ impact on the trained model. In this case, the insertion of a single random canary sentence is sufficient for that canary to be completely memorized by the non-private model.
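The "measuring unintended memorization" work Radebaugh references quantifies this effect with an exposure metric: a random canary sentence is inserted into the training data, and after training its model-assigned likelihood is ranked against every other candidate sequence drawn from the same random space. A minimal sketch of that calculation follows; the rank and candidate count used in the example are illustrative.

```python
import math

def exposure(canary_rank, num_candidates):
    """Exposure of an inserted canary sequence.

    canary_rank: 1-based rank of the canary's model likelihood among all
                 candidate sequences from the same random space
                 (rank 1 = most likely).
    num_candidates: total number of candidate sequences considered.

    An exposure of log2(num_candidates) means the canary ranks first and
    has effectively been memorized; an exposure near 0 means the model
    treats it like any other random candidate.
    """
    return math.log2(num_candidates) - math.log2(canary_rank)

# Example: a fully memorized canary ranked 1st among 2**20 candidates
print(exposure(canary_rank=1, num_candidates=2**20))  # 20.0
```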
The TensorFlow Privacy library and samples are available in the project's GitHub repository. In addition, the TensorFlow technical whitepaper has been updated to describe these new privacy mechanisms in detail.