---
title: "End-to-end privacy for scikit-learn model training and inference"
date: November 22, 2023

#### Post Category and Tags ####
# Format in titlecase without dashes (Ex. "Open Source" instead of "open-source")
categories:
- Blogs
tags:
- Open Source
- Machine Learning
- Privacy Preserving Machine Learning
- Fully Homomorphic Encryption
- Federated Learning
- Privacy

#### Featured Image ####
featured-image: privacy-preserving-ml.jpg

#### Author Info ####
# Can accommodate multiple authors
# Add SQUARE Author Image to /assets/images/author_images/ folder
postauthors:
  - name: Andrei Stoian
    website: https://github.com/zama-ai/concrete-ml
    email: andrei.stoian@zama.ai
    image: andrei_stoian.jpeg

---
<div>
<img src="/assets/images/posts_images/{{ page.featured-image }}" alt="">
{% include postauthor.html %}
</div>

In the age of cloud computing and wide access to machine learning-based services, privacy is a major challenge. Adding end-to-end privacy to a collaborative machine learning use case sounds like a daunting task. Fortunately, cryptographic breakthroughs like fully homomorphic encryption (FHE) provide a solution. Zama’s [new demo](https://github.com/zama-ai/concrete-ml/tree/main/use_case_examples/federated_learning) shows how to leverage open-source ML tools to add privacy end-to-end using federated learning and FHE. This blog post explains how the demo works under the hood, combining scikit-learn, federated learning and FHE.

FHE is a technology that enables application providers to build cloud-based applications that preserve user privacy, and [Concrete ML](https://github.com/zama-ai/concrete-ml) is a machine learning toolkit that converts models to use FHE. Concrete ML leverages the powerful and robust model training algorithms in scikit-learn to train FHE-compatible models without requiring any knowledge of cryptography.

Concrete ML uses scikit-learn as a basis for building FHE-compatible models due to scikit-learn’s excellent ease of use, extensibility, robustness, and wide palette of tools for building, validating, and tuning data pipelines. While deep learning performs well on unstructured data, it often requires extensive hyper-parameter tuning to achieve high accuracy. For many use cases, especially on structured data, scikit-learn excels through the robustness of its training algorithms.

## Training a model locally and deploying it securely

When all training data is available to the data scientist, training is secure as no data leaves their machine, and only inference needs to be secured when the model is deployed. However, training models for FHE-secured inference imposes some constraints on model training. While in the past using FHE required cryptographic expertise, tools like Concrete ML abstract away the cryptography and make FHE accessible to data scientists. Furthermore, FHE adds computational overhead, which means that machine learning models may need to be tuned for both accuracy and runtime latency. Concrete ML makes such tuning easy by supporting parameter search with scikit-learn utility classes such as `GridSearchCV`.
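
For instance, since Concrete ML estimators follow the scikit-learn API, a grid search can explore both a regular hyper-parameter and the quantization bit-width, which influences accuracy and FHE latency. The snippet below is a minimal sketch rather than part of the demo: it assumes Concrete ML's `n_bits` constructor parameter and uses scikit-learn's breast cancer dataset for brevity.

```
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV

from concrete.ml.sklearn.linear_model import LogisticRegression

x, y = load_breast_cancer(return_X_y=True)

# Search jointly over a regular scikit-learn hyper-parameter (C) and the
# quantization bit-width (n_bits), which affects both accuracy and FHE latency
param_grid = {"C": [0.1, 1.0, 10.0], "n_bits": [6, 8]}

search = GridSearchCV(LogisticRegression(), param_grid, cv=3)
search.fit(x, y)
print(search.best_params_)
```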

To train a model locally with Concrete ML, the syntax is the same as for scikit-learn. Explanations can also be found in this [video tutorial](https://www.youtube.com/watch?v=p5Y25FWLrH0). For a logistic regression model on MNIST, simply run the following snippets:

```
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

mnist_dataset = fetch_openml("mnist_784")

x_train, x_test, y_train, y_test = train_test_split(
    mnist_dataset.data,
    mnist_dataset.target.astype("int"),
    test_size=10000,
)
```

Next, fit the Concrete ML logistic regression model, which is a drop-in replacement for scikit-learn’s equivalent. An additional step, compilation, is necessary to produce an FHE computation circuit that performs the inference on encrypted data. Compilation, which is done by [Concrete](https://github.com/zama-ai/concrete), is the process of turning a program into its FHE equivalent, working directly over encrypted data. Without this step, the model stays in the clear and no FHE inference is possible.


```
from concrete.ml.sklearn.linear_model import LogisticRegression

model = LogisticRegression(penalty="l2")
model.fit(X=x_train, y=y_train)
model.compile(x_train)
```

Now test the model’s accuracy when executed on encrypted data. This model obtains around 92% accuracy. Like scikit-learn, Concrete ML supports [many other linear models](https://docs.zama.ai/concrete-ml/built-in-models/linear), such as SVMs, Lasso, and ElasticNet, and you can use them by simply changing the model class, as sketched after the next snippet. Furthermore, all hyper-parameters of the equivalent scikit-learn models are supported (like `penalty` in the snippet above).


```
from sklearn.metrics import accuracy_score

y_preds_fhe = model.predict(x_test, fhe="execute")

print(f"The test accuracy of the model on encrypted data: {accuracy_score(y_test, y_preds_fhe):.2f}")
```
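
As an illustration of changing the model class, the following sketch swaps in a linear SVM while keeping the same train/compile/predict workflow. It is not part of the demo and assumes a `LinearSVC` class is exposed under `concrete.ml.sklearn`, as listed in the built-in models documentation.

```
from concrete.ml.sklearn import LinearSVC

# Same workflow as above, only the model class changes
svm_model = LinearSVC(C=1.0)
svm_model.fit(X=x_train, y=y_train)
svm_model.compile(x_train)

y_preds_svm = svm_model.predict(x_test, fhe="execute")
```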

## Federated Learning for training data privacy

Oftentimes, in production systems with many users, a machine learning model needs to be trained on an aggregate of all users’ data while preserving the privacy of each user. Common use cases in this setting are digital health, spam detection, and online advertising, or even simpler ones like next-word prediction assistance.

Concrete ML can import models trained with federated learning (FL) by tools like [Flower](https://flower.dev/). To train the same model as above using FL, a client application and a server application must be defined. First, each client is identified by a `partition_id`, a number between 0 and the number of clients minus one. To split the MNIST dataset and get the current client’s slice, use the `federated_utils` helper module from the demo:

```
(X_train, y_train) = federated_utils.partition(X_train, y_train, 10)[partition_id]
```

Now define the training client logic:

```
import flwr as fl
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Create the LogisticRegression model
model = LogisticRegression(
    penalty="l2",
    warm_start=True,  # prevent refreshing weights when fitting
)

# federated_utils is the helper module shipped with the demo
federated_utils.set_initial_params(model)

class MnistClient(fl.client.NumPyClient):
    def get_parameters(self, config):  # type: ignore
        return federated_utils.get_model_parameters(model)

    def fit(self, parameters, config):  # type: ignore
        federated_utils.set_model_params(model, parameters)
        model.fit(X_train, y_train)
        print(f"Training finished for round {config['server_round']}")
        return federated_utils.get_model_parameters(model), len(X_train), {}

    def evaluate(self, parameters, config):  # type: ignore
        federated_utils.set_model_params(model, parameters)
        loss = log_loss(y_test, model.predict_proba(X_test))
        accuracy = model.score(X_test, y_test)
        return loss, len(X_test), {"accuracy": accuracy}

# Start the Flower client
fl.client.start_numpy_client(
    server_address="0.0.0.0:8080",
    client=MnistClient(),
)
```

Finally, a typical Flower server instance must be created:

```
model = LogisticRegression()
federated_utils.set_initial_params(model)
strategy = fl.server.strategy.FedAvg()

fl.server.start_server(
    server_address="0.0.0.0:8080",
    strategy=strategy,
    config=fl.server.ServerConfig(num_rounds=5),
)
```

When training stops, the clients or the server can save the model to a file:

```
import pickle

with open("model.pkl", "wb") as file:
    pickle.dump(model, file)
```

Once the model is trained, it can be loaded from the pickled file and converted to a Concrete ML model to enable privacy-preserving inference. Indeed, Concrete ML can either train new models, as shown in the previous section, or convert existing ones, like the one produced by FL. This conversion step, using the `from_sklearn_model` function, is applied below to the model trained with federated learning. This [video](https://www.youtube.com/watch?v=dfXlOhw9-ic) further explains how to use this function.

```
import pickle
from pathlib import Path

import numpy

from concrete.ml.sklearn.linear_model import LogisticRegression

# Load the scikit-learn model trained with federated learning
path_to_model = Path("model.pkl")
with path_to_model.open("rb") as file:
    sklearn_model = pickle.load(file)

# Representative inputs used to calibrate quantization and compile the FHE circuit
compile_set = numpy.random.randint(0, 255, (100, 784)).astype(float)

sklearn_model.classes_ = sklearn_model.classes_.astype(int)

model = LogisticRegression.from_sklearn_model(sklearn_model, compile_set)
model.compile(compile_set)
```

As with local training, evaluate the model on some test data:

```
from sklearn.metrics import accuracy_score

y_preds_enc = model.predict(x_test, fhe="execute")

print(f"The test accuracy of the model on encrypted data: {accuracy_score(y_test, y_preds_enc):.2f}")
```
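
To serve such a model to remote users, Concrete ML also provides a client/server deployment API. The sketch below follows the pattern described in the Concrete ML deployment documentation and is not part of the federated learning demo; the class and method names (`FHEModelDev`, `FHEModelClient`, `FHEModelServer`) should be checked against the library version you use.

```
import numpy
from concrete.ml.deployment import FHEModelClient, FHEModelDev, FHEModelServer

# Model owner: save the compiled model artifacts to a deployment directory
FHEModelDev("deployment", model).save()

# Client: generate keys, then quantize, encrypt, and serialize one input sample
client = FHEModelClient("deployment", key_dir="keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
sample = numpy.asarray(x_test)[0:1]
encrypted_input = client.quantize_encrypt_serialize(sample)

# Server: run the FHE circuit on the encrypted input
server = FHEModelServer("deployment")
server.load()
encrypted_output = server.run(encrypted_input, evaluation_keys)

# Client: decrypt and de-quantize the encrypted prediction
prediction = client.deserialize_decrypt_dequantize(encrypted_output)
```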

All in all, with only a few lines of code using scikit-learn, Flower, and Concrete ML, it is possible to train a model and predict on new data in a completely privacy-preserving way: each client’s share of the dataset is kept private and the predictions are performed over encrypted data. The model trained here achieves 92% accuracy when executed on encrypted data.

## Conclusion

The most important steps of the full end-to-end private training demo based on Flower and Concrete ML were discussed above. You can find [all the sources](https://github.com/zama-ai/concrete-ml/tree/main/use_case_examples/federated_learning) in our open-source repository. Compatibility with scikit-learn lets Concrete ML users keep familiar programming patterns and interoperate with scikit-learn compatible toolkits like Flower. With only a few changes to the original scikit-learn pipeline, the examples in this article show how to add end-to-end privacy to training a classifier on MNIST with federated learning and FHE.


