nginx-loadbalancer-kubernetes

The NGINX Loadbalancer for Kubernetes, or NLK, is a Kubernetes controller that provides TCP load balancing external to a Kubernetes cluster running on-premises.

Requirements

What you will need

  • A Kubernetes cluster running on-premises.
  • One or more NGINX Plus hosts running outside your Kubernetes cluster (NGINX Plus hosts must have the ability to route traffic to the cluster).

There is a more detailed Installation Reference available in the docs/ directory.

Why NLK?

NLK provides a simple, easy-to-manage way to automate load balancing for your Kubernetes applications by leveraging NGINX Plus hosts running outside your cluster.

NLK installs easily, has a small footprint, and is easy to configure and manage.

NLK does not require learning a custom object model, you only have to understand NGINX configuration to get the most out of this solution. There is thorough documentation available with the specifics in the docs/ directory.

What does NLK do?

tl;dr:

NLK is a Kubernetes controller that monitors Services and Nodes in your cluster, and then sends API calls to an external NGINX Plus server to manage NGINX Plus Upstream servers automatically.
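
Conceptually, the changes NLK makes are the same ones you could make by hand against the NGINX Plus REST API. The sketch below is illustrative only, and NLK issues these calls for you; the host, API port, API version, and upstream name are all placeholders:

# List the servers currently in a stream upstream (placeholder host and upstream name)
curl -s http://nginx-plus.example.com:9000/api/8/stream/upstreams/nlk-demo/servers

# Add a worker node's NodePort as an upstream server
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"server": "10.1.1.10:30080"}' \
  http://nginx-plus.example.com:9000/api/8/stream/upstreams/nlk-demo/servers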

That's all well and good, but what does it mean? Kubernetes clusters require some tooling to handle routing traffic from the outside world (e.g., the Internet, a corporate network) to the cluster. This is typically done with a load balancer, which routes traffic to the appropriate worker node; the node then forwards the traffic to the appropriate Service / Pod.

If you are using a hosted Kubernetes solution -- Digital Ocean, AWS, Azure, etc. -- you can use the cloud provider's load balancer service. Those services will create a load balancer for you. You can use the cloud provider's API to manage the load balancer, or you can use the cloud provider's web console.

If you are running Kubernetes on-premises and need to manage your own load balancer, NLK can help.

NLK itself does not perform load balancing. Rather, NLK lets you update your load balancers by managing Service resources within your cluster, using tooling you are most likely already using.

Getting Started

There are a few bits of administrivia to get out of the way before you can start leveraging NLK for your load balancing needs.

As noted above, NLK is intended for when you have one or more Kubernetes clusters running on-premises. In addition, you need at least one NGINX Plus host running outside your cluster (please refer to the Roadmap for information about other load balancer servers).

Deployment

RBAC

As with everything Kubernetes, NLK requires RBAC permissions to function properly. The necessary resources are defined in the various YAML files in deployments/rbac/.

For convenience, two scripts are included, apply.sh and unapply.sh, which apply and remove the RBAC resources, respectively.

The permissions required by NLK are modest. NLK only needs to read resources via shared informers: Services, Nodes, and ConfigMaps. The Services and ConfigMaps informers are restricted to a specific namespace (default: "nlk"); the Nodes informer is cluster-wide.
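
To confirm the scope of these permissions after applying them, kubectl auth can-i can impersonate the controller's service account. The service account name below is a placeholder; check the files in deployments/rbac/ for the actual name:

kubectl auth can-i list services --namespace nlk --as system:serviceaccount:nlk:nlk
kubectl auth can-i list configmaps --namespace nlk --as system:serviceaccount:nlk:nlk
kubectl auth can-i list nodes --as system:serviceaccount:nlk:nlk

# Anything outside that scope should be denied, for example:
kubectl auth can-i list secrets --namespace nlk --as system:serviceaccount:nlk:nlk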

Configuration

NLK is configured via a ConfigMap; the default settings are found in deployments/deployment/configmap.yaml. Presently there is a single configuration value exposed in the ConfigMap, nginx-hosts, which contains a comma-separated list of NGINX Plus hosts that NLK will maintain.

You will need to update this ConfigMap to reflect the NGINX Plus hosts you wish to manage.

If you deploy the ConfigMap and start NLK without updating the nginx-hosts value, don't fear; the ConfigMap is monitored for changes, and NLK will update its list of NGINX Plus hosts whenever the resource changes, with no restart required.
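
As a sketch of what a live update looks like, the following patches the nginx-hosts value in place; the ConfigMap name and endpoint URLs are placeholders, so match them to deployments/deployment/configmap.yaml:

kubectl -n nlk patch configmap nlk-config --type merge \
  -p '{"data":{"nginx-hosts":"http://10.1.1.4:9000/api,http://10.1.1.5:9000/api"}}'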

There is an extensive Installation Reference available in the docs/ directory. Please refer to that for detailed instructions on how to deploy NLK and run a demo application.

Versioning

Versioning is a work in progress. The CI/CD pipeline is being developed and will be used to build and publish NLK images to the Container Registry. Once in place, semantic versioning will be used for published images.

Deployment Steps

To get NLK up and running in ten steps or fewer, follow these instructions (NOTE: all the aforementioned prerequisites must be met for this to work). There is a much more detailed Installation Reference available in the docs/ directory.

  1. Clone this repo (optional, you can simply copy the deployments/ directory)

git clone git@github.com:nginxinc/nginx-loadbalancer-kubernetes.git

  2. Apply the Namespace

kubectl apply -f deployments/deployment/namespace.yaml

  3. Apply the RBAC resources

./deployments/rbac/apply.sh

  4. Update / apply the ConfigMap (for best results, update the nginx-hosts value first)

kubectl apply -f deployments/deployment/configmap.yaml

  5. Apply the Deployment

kubectl apply -f deployments/deployment/deployment.yaml

  6. Check the logs

kubectl -n nlk get pods | grep deployment | cut -f1 -d" " | xargs kubectl logs -n nlk --follow

At this point NLK should be up and running. Now would be a great time to go over to the Installation Reference and follow the instructions to deploy a demo application.
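
A couple of quick sanity checks can confirm that; the NGINX Plus host, API port, and API version below are placeholders:

# The NLK pod should be Running
kubectl -n nlk get pods

# On the NGINX Plus side, the upstreams NLK maintains should list your worker nodes
curl -s http://nginx-plus.example.com:9000/api/8/stream/upstreams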

Monitoring

Presently NLK includes a fair amount of logging. This is intended to be used for debugging purposes. There are plans to add more robust monitoring and alerting in the future.

As a rule, we favor OpenTelemetry for observability, and we will be adding support for it in the near future.

Contributing

Presently we are not accepting pull requests. However, we welcome your feedback and suggestions. Please open an issue to let us know what you think!

One way to contribute is to help us test NLK. We are looking for people to test NLK in a variety of environments.

If you are curious about the implementation, you should certainly browse the code, but first you might wish to refer to the design document. Some of the design decisions are explained there.

Roadmap

While NLK was initially written specifically for NGINX Plus, we recognize there are other load balancers that can be supported.

To this end, NLK has been architected to be extensible to support other "Border Servers". "Border Server" is the term NLK uses for load balancers, reverse proxies, and similar servers that run outside the cluster and route outside traffic to your cluster.

While we have identified a few potential targets, we are open to suggestions. Please open an issue to share your thoughts on potential implementations.

We look forward to building a community around NLK and value all feedback and suggestions. Embracing diverse ideas and varying perspectives will be key to NLK becoming a solution that is useful to the community. We will consider it a success when we are able to accept pull requests from the community.

Authors

  • Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
  • Steve Wagner - Solutions Architect - Community and Alliances @ F5, Inc.

License

Apache License, Version 2.0

© F5, Inc. 2023

(but don't let that scare you, we're really nice people...)
