
Setting up Zookeeper

This document describes how to set up ZooKeeper in a k8s environment.

Zookeeper installation is available in two options:

  1. Quick start - just run it and ask no questions
  2. Advanced setup - configure internal details, such as storage class, number of replicas, etc.

During ZooKeeper installation the following items are created/configured:

  1. [OPTIONAL] Create a separate namespace to run Zookeeper in
  2. Create k8s resources (optionally, within that namespace): Service, Headless Service, Pod Disruption Budget, Storage Class and Stateful Set

Quick start

Quick start comes in two flavors:

  1. With persistent volume - good for AWS. Files are located in deploy/zookeeper/quick-start-persistent-volume
  2. With local emptyDir storage - good for a standalone local run, however it has no true persistence.
    Files are located in deploy/zookeeper/quick-start-volume-emptyDir

Each quick start flavor provides the following installation options:

  1. 1-node Zookeeper cluster (zookeeper-1- files). No failover provided.
  2. 3-node Zookeeper cluster (zookeeper-3- files). Failover provided.

In case you'd like to test with AWS or any other cloud provider, we recommend going with deploy/zookeeper/quick-start-persistent-volume persistent storage. For a local test, you may prefer to go with deploy/zookeeper/quick-start-volume-emptyDir emptyDir.

Script-based Installation

In this example we'll go with a simple 1-node Zookeeper cluster on AWS and pick deploy/zookeeper/quick-start-persistent-volume. Both create and delete shell scripts are provided for convenience.
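
For example, assuming the create script in that folder is named along the lines of zookeeper-1-node-create.sh (the script name here is an assumption - check the folder contents for the actual names), the installation could look like:

cd deploy/zookeeper/quick-start-persistent-volume
./zookeeper-1-node-create.sh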

Manual Installation

If you'd like to deploy Zookeeper manually, perform the following steps:

Namespace

Create namespace

kubectl create namespace zoo1ns

Zookeeper

Deploy Zookeeper into this namespace

kubectl apply -f zookeeper-1-node.yaml -n zoo1ns

Now Zookeeper should be up and running. Let's explore the Zookeeper cluster.
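
A quick way to check that everything started is to list the resources in that namespace:

kubectl get pod,service -n zoo1ns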

IMPORTANT the quick-start Zookeeper installations are mainly for test purposes.
For a fine-tuned Zookeeper setup please refer to the advanced setup options.

Advanced setup

Advanced setup files are located in the deploy/zookeeper/advanced folder. All resources are separated into different files, so it is easy to modify them and set up the required options.

Advanced setup is available in two options:

  1. With persistent volume
  2. With emptyDir volume

Each of these options has both create and delete scripts provided:

  1. Persistent volume create and delete scripts
  2. EmptyDir volume create and delete scripts

Step-by-step explanations:

Namespace

Create the namespace in which all the other resources will be created

kubectl create namespace zoons

Zookeeper Service

Create the client service. This service provides a DNS name for client access to all Zookeeper nodes.

kubectl apply -f 01-service-client-access.yaml -n zoons

Expected result:

service/zookeeper created
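
For reference, a minimal sketch of what such a client Service may look like (the selector labels below are assumptions - the actual 01-service-client-access.yaml in the repo is the source of truth):

    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper
    spec:
      ports:
        - port: 2181
          name: client
      selector:
        app: zookeeper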

Zookeeper Headless Service

Create the headless service. This service provides DNS names for all Zookeeper nodes.

kubectl apply -f 02-headless-service.yaml -n zoons

Expected result:

service/zookeeper-nodes created
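
A minimal sketch of a headless Service of this kind (again, the selector is an assumption; 02-headless-service.yaml is authoritative):

    apiVersion: v1
    kind: Service
    metadata:
      name: zookeepers
    spec:
      clusterIP: None
      ports:
        - port: 2888
          name: server
        - port: 3888
          name: leader-election
      selector:
        app: zookeeper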

Disruption Budget

Create the budget. The Pod Disruption Budget instructs k8s on how many Zookeeper nodes may be offline at any time.

kubectl apply -f 03-pod-disruption-budget.yaml -n zoons

Expected result:

poddisruptionbudget.policy/zookeeper-pod-distribution-budget created
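
A minimal sketch of such a PodDisruptionBudget (maxUnavailable: 1 and the selector are assumptions; on older clusters the apiVersion may be policy/v1beta1 instead of policy/v1):

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: zookeeper-pod-distribution-budget
    spec:
      selector:
        matchLabels:
          app: zookeeper
      maxUnavailable: 1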

Storage Class

This part is not that straightforward and may require communication with your k8s cluster administrator.

First of all, we need to decide whether Zookeeper will use a Persistent Volume as storage or stick with a simpler Volume (this doc uses the emptyDir type).

If we prefer the simpler solution and go with a Volume of type emptyDir, we need to use the emptyDir StatefulSet config 05-stateful-set-volume-emptyDir.yaml as described in the Stateful Set section below. Just move on to it.

If we prefer Persistent Volume storage, we need to use the Persistent Volume StatefulSet config 05-stateful-set-persistent-volume.yaml.

In short, a Storage Class is used to bind together Persistent Volumes, which are created either manually by the k8s admin or automatically by a Provisioner. In either case, Persistent Volumes are provided externally to the application being deployed into k8s. So the application has to know which Storage Class Name to request from k8s in its claim for a new persistent volume - the Persistent Volume Claim. This Storage Class Name should be obtained from the k8s admin and written as the .spec.volumeClaimTemplates.storageClassName parameter of the application's Persistent Volume Claim in the StatefulSet configuration - either the StatefulSet manifest with emptyDir 05-stateful-set-volume-emptyDir.yaml and/or the StatefulSet manifest with Persistent Volume 05-stateful-set-persistent-volume.yaml.
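
To see which Storage Classes are already available in the cluster:

kubectl get storageclass

If a dedicated Storage Class needs to be created, a sketch could look like the following (the provisioner and parameters here are assumptions for AWS EBS; they depend entirely on your cluster):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: storageclass-zookeeper
    provisioner: kubernetes.io/aws-ebs  # in-tree AWS EBS provisioner; use whatever your cluster supports
    parameters:
      type: gp2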

Stateful Set

Edit the StatefulSet manifest with emptyDir 05-stateful-set-volume-emptyDir.yaml and/or the StatefulSet manifest with Persistent Volume 05-stateful-set-persistent-volume.yaml according to your storage preferences.

If we go with a Volume of type emptyDir, ensure .spec.template.spec.volumes is in place and looks like the following:

      volumes:
      - name: datadir-volume
        emptyDir:
          medium: "" #accepted values:  empty str (means node's default medium) or Memory
          sizeLimit: 1Gi

and ensure .spec.volumeClaimTemplates is commented out.

If we go with Persistent Volume storage, ensure .spec.template.spec.volumes is commented out and .spec.volumeClaimTemplates is uncommented:

  volumeClaimTemplates:
  - metadata:
      name: datadir-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      # storageClassName has to be coordinated with the k8s admin and has to be created as a `kind: StorageClass` resource
      storageClassName: storageclass-zookeeper

and ensure storageClassName (storageclass-zookeeper in this example) is specified correctly, as described in the Storage Class section.

Once the .yaml file is ready, just apply it with kubectl

kubectl apply -f 05-stateful-set.yaml -n zoons

Expected result:

statefulset.apps/zookeeper-node created
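
To wait until all pods are up, you can watch them come up:

kubectl get pod -n zoons --watch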

Now we can take a look at the Zookeeper cluster deployed in k8s:

Explore Zookeeper cluster

DNS names

We expect to have a ZooKeeper cluster of 3 pods inside the zoons namespace, named:

zookeeper-0
zookeeper-1
zookeeper-2

Those pods are expected to have short DNS names like:

zookeeper-0.zookeepers.zoons
zookeeper-1.zookeepers.zoons
zookeeper-2.zookeepers.zoons

where zookeepers is the name of the Zookeeper headless service and zoons is the name of the Zookeeper namespace,

and full DNS names (FQDNs) like:

zookeeper-0.zookeepers.zoons.svc.cluster.local
zookeeper-1.zookeepers.zoons.svc.cluster.local
zookeeper-2.zookeepers.zoons.svc.cluster.local
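
One way to verify that these names resolve is to run a disposable pod with nslookup (busybox here is just an example image):

kubectl run -it --rm dns-test --image=busybox --restart=Never -n zoons -- nslookup zookeeper-0.zookeepers.zoons.svc.cluster.local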

Resources

List pods in Zookeeper's namespace

kubectl get pod -n zoons

Expected output is like the following

NAME             READY   STATUS    RESTARTS   AGE
zookeeper-0      1/1     Running   0          9m2s
zookeeper-1      1/1     Running   0          9m2s
zookeeper-2      1/1     Running   0          9m2s

List services

kubectl get service -n zoons

Expected output is like the following

NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
zookeeper              ClusterIP   10.108.36.44   <none>        2181/TCP                     168m
zookeepers             ClusterIP   None           <none>        2888/TCP,3888/TCP            31m

List statefulsets

kubectl get statefulset -n zoons

Expected output is like the following

NAME            READY   AGE
zookeepers      3/3     10m

If everything looks fine, the Zookeeper cluster is up and running.
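
As a final sanity check, you can ask one of the nodes for its status (the location of zkServer.sh depends on the ZooKeeper image used; in many images it is on the PATH):

kubectl exec -it zookeeper-0 -n zoons -- zkServer.sh status

A healthy 3-node ensemble reports one node in leader mode and two in follower mode.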