Is there an existing issue for this?
Feature description
The current deployment strategies discussed are Deployments + Service, and sidecars. There are a couple of downsides to these:
Deployments + Service: this can require cross-node and cross-zone traffic, which increases latency and cost.
Sidecars: this improves latency, but requires an extra container per pod. If the number of pods exceeds the number of nodes (quite common), this wastes resources. It also increases start-up time, and requires careful readiness checks to ensure that the agent is up before the user application serves requests.
As an alternative, the Cerbos PDP can be deployed as a DaemonSet, running one pod per node and exposed as a NodePort service. Applications are then told the address of the PDP instance to use via an environment variable set using the K8s Downward API:
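The snippet that followed isn't preserved here; a minimal sketch of such an env var, assuming the per-node PDP is reachable on the node's IP and that the application reads a variable named CERBOS_PDP_ADDRESS (both are assumptions, not established conventions), might look like:

```yaml
# Sketch only: the variable name and port are illustrative assumptions.
# status.hostIP resolves to the IP of the node the pod is scheduled on,
# which is where the per-node PDP pod is reachable.
env:
  - name: CERBOS_PDP_ADDRESS
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: CERBOS_PDP_PORT
    value: "3593"
```

The Downward API can only inject one field per variable, so the host IP and port are exposed separately and joined by the application.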
The potential advantages are:
Fewer instances of the PDP (in cases where there are fewer nodes than pods requiring a PDP).
A PriorityClass can be used to ensure that the PDP is prioritised for scheduling, and to help ensure it is up and running before application pods start.
No sidecar is needed, which simplifies user application deployments and has no impact on pod start-up time.
What would the ideal solution look like to you?
Make the deployment strategy ("deployment"/"daemonset") configurable in the Helm chart, defaulting to the existing Deployment strategy.
Document the advantages, and the steps applications need to take.
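As a rough illustration, the chart knob could look like the following hypothetical values.yaml fragment (the key names are assumptions, not the chart's actual schema):

```yaml
# Hypothetical values.yaml fragment; key names are illustrative only.
deployment:
  kind: Deployment   # default, preserves current behaviour
  # kind: DaemonSet  # one PDP pod per node instead
```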
Anything else?
A 100% best-practice deployment would:
make sure mTLS could be supported
possibly support sharing a Unix domain socket via the node's filesystem
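Socket sharing via the node filesystem could be sketched with a hostPath volume mounted by both the PDP DaemonSet pod and application pods; the path and the PDP's ability to listen on a Unix socket are assumptions here:

```yaml
# Sketch: the PDP pod and application pods mount the same hostPath,
# so the PDP's Unix domain socket is visible to apps on the same node.
# /var/run/cerbos is an illustrative path, not an established default.
volumes:
  - name: cerbos-socket
    hostPath:
      path: /var/run/cerbos
      type: DirectoryOrCreate
containers:
  - name: app
    volumeMounts:
      - name: cerbos-socket
        mountPath: /var/run/cerbos
```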
Ideally (but not essential), per-node agents are run with the system-node-critical priority class. K8s treats this class specially and integrates the readiness with the full node lifecycle. Unfortunately (last time I checked), this also requires running the pods in the kube-system namespace.
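As a sketch, that would translate to something like the following fragment of the DaemonSet manifest (subject to the kube-system caveat above):

```yaml
# Fragment of the DaemonSet manifest; kube-system placement is required
# for system-node-critical (as noted above, last time checked).
metadata:
  namespace: kube-system
spec:
  template:
    spec:
      priorityClassName: system-node-critical
```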