# Experimental Support of Federated XGBoost using NVFlare

This directory contains a demo of Federated Learning using NVFlare.

## Training with CPU only

To run the demo, first build XGBoost with the federated learning plugin enabled (see the federated plugin README).

Install NVFlare (note that currently NVFlare only supports Python 3.8):

```shell
pip install nvflare
```
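
Since NVFlare currently requires Python 3.8, it may help to install it into a dedicated virtual environment. A minimal sketch, assuming `python3.8` is available on your `PATH`:

```shell
# Create and activate a Python 3.8 virtual environment, then install NVFlare into it.
python3.8 -m venv nvflare-env
source nvflare-env/bin/activate
pip install nvflare
```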

Prepare the data:

```shell
./prepare_data.sh
```

Start the NVFlare federated server:

```shell
./poc/server/startup/start.sh
```

In another terminal, start the first worker:

```shell
./poc/site-1/startup/start.sh
```

And, in yet another terminal, the second worker:

```shell
./poc/site-2/startup/start.sh
```

Then start the admin CLI, using `admin/admin` as the username/password:

```shell
./poc/admin/startup/fl_admin.sh
```

In the admin CLI, run the following commands:

```shell
upload_app hello-xgboost
set_run_number 1
deploy_app hello-xgboost all
start_app all
```

Once training finishes, the model files should be written to `./poc/site-1/run_1/test.model.json` and `./poc/site-2/run_1/test.model.json`, respectively.
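
To sanity-check a trained model, you can load it back with the XGBoost Python package. A minimal sketch, using the first worker's output path; the number of boosted rounds is just one quick property to inspect:

```shell
# Load the saved model and print how many boosting rounds it contains.
python -c "
import xgboost as xgb
bst = xgb.Booster()
bst.load_model('./poc/site-1/run_1/test.model.json')
print('boosted rounds:', bst.num_boosted_rounds())
"
```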

Finally, shut everything down from the admin CLI:

```shell
shutdown client
shutdown server
```

## Training with GPUs

To run the demo with GPUs, make sure your machine has at least 2 GPUs. Build XGBoost with the federated learning plugin and CUDA enabled, but with NCCL turned off (see the federated plugin README).

Modify `config/config_fed_client.json` and set `use_gpus` to `true`, then repeat the steps above.
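
If the flag is currently set to `false` on its own line, one way to flip it is shown below; editing the file by hand works just as well. A minimal sketch, assuming GNU `sed`:

```shell
# Set use_gpus to true in the client config (assumes the key appears exactly once).
sed -i 's/"use_gpus": *false/"use_gpus": true/' config/config_fed_client.json
```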