Motivation
XGBoost recently introduced support for Federated Learning, both horizontal and vertical. However, its support for secure features is limited. The current horizontal and vertical pipelines cannot be integrated with the basic arithmetic operations - addition and multiplication - supported by common Homomorphic Encryption (HE) schemes (Paillier, BFV/BGV, or CKKS), because the server and/or clients need to perform operations that HE schemes do not support, such as division and argmax.
It would be useful to implement a variation of the current horizontal federated learning XGBoost pipeline that provides secure features.
Secure Pattern
Our current horizontal FL design is:
Each party computes its own local histograms of G and H based on its local data - two float vectors.
An AllReduce call is made to sync all local histograms into a global version.
Each party continues with tree construction based on the global histogram.
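The steps above can be sketched in plain Python. This is a toy illustration only: real XGBoost computes per-feature-bin G/H histograms in C++, and `allreduce_sum` here is a stand-in for the actual AllReduce collective.

```python
def local_histograms(grads, hess, bin_ids, n_bins):
    """Each party computes local G and H histograms: two float vectors."""
    G = [0.0] * n_bins
    H = [0.0] * n_bins
    for g, h, b in zip(grads, hess, bin_ids):
        G[b] += g
        H[b] += h
    return G, H

def allreduce_sum(vectors):
    """Stand-in for the AllReduce call: element-wise sum across parties."""
    return [sum(vals) for vals in zip(*vectors)]

# Two parties, 3 histogram bins each.
G1, H1 = local_histograms([0.5, -0.25], [1.0, 1.0], [0, 2], 3)
G2, H2 = local_histograms([0.25, 0.75], [1.0, 1.0], [0, 1], 3)

# Every party continues tree construction from these global histograms.
G_global = allreduce_sum([G1, G2])  # [0.75, 0.75, -0.25]
H_global = allreduce_sum([H1, H2])  # [2.0, 1.0, 1.0]
```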
As the local histograms are transmitted across parties (especially via outside communication channels under a federated setting), there is a potential concern that the local histogram information can be leaked to and learnt by a third party. Hence users may need to protect the local histograms.
There is essentially no major difference between the proposed method and our current HE solution for horizontal deep learning pipelines.
Goals
Enhance XGBoost to support secure horizontal federated learning.
Support using NVFlare to coordinate the learning process, but keep the design amenable to other federated learning platforms.
Support any arbitrary encryption library, decoupled from xgboost via a secure interface.
Efficiency: training speed should be close to the alternative distributed training design.
Accuracy: should be close to that of the alternative vertical pipeline.
Non-Goals
Investigate more sophisticated HE methods supporting division/argmax, such that broader schemes (horizontal, vertical) can be performed entirely within the encrypted space - e.g. performing further tree construction beyond collecting and aggregating encrypted local histograms at the federated server.
Assumptions
Same assumptions as our current horizontal federated learning scheme:
Private set intersection (PSI) is already done.
A few trusted partners jointly train a model.
Reasonably fast network connection between each participant and a central party.
Risks
No fundamental risk, since we already implemented the functionality of secure vertical XGBoost by adding functions to the XGBoost codebase. Still, care must be taken not to break existing functionality or make regular training harder.
Design for Encrypted Horizontal Training
With the basic HE operation of addition, a feasible solution can be achieved. Considering that it may not be straightforward to couple AllReduce with cipher-text addition, we can break it into two steps: AllGather + cipher-text addition. We will use the processor interface designed and implemented in secure vertical XGBoost for the encryption/aggregation/decryption.
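The AllGather + cipher-text addition decomposition can be sketched as follows. The `encrypt`/`decrypt` functions here are insecure placeholders standing in for a real additively homomorphic scheme (Paillier or CKKS); only the addition in `he_add` needs to be supported in cipher-text space.

```python
# Toy sketch: AllReduce replaced by AllGather + cipher-text addition.
# encrypt/decrypt are placeholders (NOT secure) for a real HE scheme.

def encrypt(vec):          # placeholder for HE encryption
    return list(vec)

def decrypt(vec):          # placeholder for HE decryption
    return list(vec)

def he_add(a, b):          # cipher-text addition: the only HE op needed
    return [x + y for x, y in zip(a, b)]

def allgather(items):      # each party receives every party's submission
    return list(items)

# Step 1: each party encrypts its local histogram, then joins an AllGather.
local_hists = [[0.5, 0.0, -0.25], [0.25, 0.75, 0.0]]
gathered = allgather([encrypt(h) for h in local_hists])

# Step 2: cipher-text addition over the gathered buffers, then decryption.
acc = gathered[0]
for c in gathered[1:]:
    acc = he_add(acc, c)
global_hist = decrypt(acc)  # same result AllReduce would have produced
```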
The way processor interface works:
Each party calls the interface for data processing (serialization, etc.), providing the necessary information: the local G/H histograms - two float vectors.
The interface performs the necessary processing and sends the results back to xgboost.
xgboost then forwards the message to the local gRPC handler.
Encryption is performed at the local gRPC handler by reaching out to external encryption utils.
Secure aggregation is performed at the server end, which returns a "modified AllGather" buffer containing only the global histograms (instead of individual parties' submissions).
xgboost receives the buffer upon the AllGather communication call and sends it to the interface for interpretation.
The interface performs decryption and post-processing after getting the buffer, recovers the proper information, and sends it back to xgboost.
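The party-side half of this round trip might look roughly like the sketch below. All names (`ProcessorInterface`, `prepare`, `interpret`, `LocalGRPCHandler`) are invented for illustration and do not match the actual plugin API.

```python
# Hypothetical sketch of the party-side processor-interface flow.
# Class/method names are made up; see secure vertical XGBoost for the
# real communication patterns.

class ProcessorInterface:
    def prepare(self, G, H):
        """Serialize local G/H histograms into a buffer for xgboost."""
        return {"G": list(G), "H": list(H)}

    def interpret(self, buffer):
        """Decrypt/post-process a returned buffer into global G/H."""
        return buffer["G"], buffer["H"]

class LocalGRPCHandler:
    """Would encrypt outbound buffers via external encryption utils."""
    def send(self, buffer):
        return {"encrypted": True, "payload": buffer}  # placeholder

# xgboost -> interface -> local gRPC handler -> (server) -> back again.
iface, handler = ProcessorInterface(), LocalGRPCHandler()
msg = handler.send(iface.prepare([0.5, -0.25], [1.0, 1.0]))
G, H = iface.interpret(msg["payload"])
```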
Upon responding to the AllGather call,
Each party sends its local G/H histograms to the interface (by calling a specific function).
The interface processes and prepares the buffer and sends it to xgboost, which forwards it to the local gRPC handler, where encryption is performed and the encrypted local histograms are sent to the server.
The server collects AND AGGREGATES the global information, then sends it back to the local gRPC handlers.
The global histograms are processed and decrypted via the interface for each party.
Refer to secure vertical XGBoost for details of xgboost-interface communication patterns.
Potentially, there are two options for the global histogram aggregation:
At the local gRPC handler: once the server collects and sends the AllGather results, local gRPC handlers can perform encrypted addition, then decryption. The potential concern is that if a local party chooses the reversed order - decrypt then add - it can learn other parties' histograms.
At the federated server: similar to our HE scheme for deep learning - global cipher-text aggregation happens at the server end. Encryption/decryption still happen at the local gRPC handler, but we may be able to use the standard pipeline of filters.
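The server-side option can be sketched as follows. Again, `encrypt`/`decrypt` are insecure placeholders for a real HE library; the key property is that the server aggregates in cipher-text space (it never holds a secret key) and returns only the aggregate, so no party ever sees another party's submission.

```python
# Toy sketch of server-side aggregation: the federated server sums
# cipher-texts and returns one buffer. encrypt/decrypt are insecure
# placeholders for a real HE scheme; the server has no secret key.

def encrypt(vec):
    return list(vec)         # placeholder

def decrypt(vec):
    return list(vec)         # placeholder

def server_allgather(ciphertexts):
    """'Modified AllGather': aggregate in cipher-text space at the
    server, so parties receive only the global sum, never each
    other's individual submissions."""
    agg = ciphertexts[0]
    for c in ciphertexts[1:]:
        agg = [x + y for x, y in zip(agg, c)]
    return agg               # single buffer, not per-party buffers

submissions = [encrypt([0.5, 0.0, -0.25]), encrypt([0.25, 0.75, 0.0])]
global_hist = decrypt(server_allgather(submissions))
```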
Given the potential concern, the second option is preferred; aggregation will be performed at the FL end (e.g. NVFlare). Therefore, although we call "AllGather", the actual global aggregation has already been performed at the server before the AllGather results are received. The interface will provide functionality to properly process the received buffer.
Same as secure vertical XGBoost, only encrypted messages go from local to external (controlled by NVFlare); clear-text info stays local.
Encryption Scheme
Implementing this alternative scheme will likely have no adverse impact on user experience, since it only modifies the information flow without changing the underlying theory.
To achieve the best efficiency, a proper encryption scheme needs to be selected. Compared with vertical XGBoost, which uses Paillier due to heavy single-number additions (scaling with sample size), the horizontal pipeline faces lighter vector additions (scaling with party size). Hence, CKKS is the best option.
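A rough back-of-the-envelope calculation illustrates why a packing scheme like CKKS fits the histogram workload (the histogram dimensions and CKKS parameters below are assumed for illustration, not taken from the design): Paillier encrypts one number per cipher-text, while a CKKS cipher-text with polynomial degree N carries N/2 slots.

```python
# Illustrative cipher-text counts (assumed numbers, not benchmarks).
# Paillier: one value per cipher-text. CKKS: N/2 packed slots per
# cipher-text (4096 slots at an assumed poly_modulus_degree of 8192).

hist_len = 2 * 256 * 100     # G and H, 256 bins, 100 features (example)
ckks_slots = 8192 // 2       # slots per CKKS cipher-text

paillier_ciphertexts = hist_len                # one per value -> 51200
ckks_ciphertexts = -(-hist_len // ckks_slots)  # ceil division  -> 13
```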
Most related existing PRs
The most related PRs to base upon and modify are the ones from secure vertical for the interface and integration:
Task list to track the progress