Implementation Notes Describing the Kiali Operator Internals

This document describes the current state of the implementation of the Kiali Operator (henceforth, "KO"). Its purpose is to introduce the internals of the KO to those who need to modify or enhance it.

Note
This document reflects the current implementation as of release v1.76.0.
Note
To learn how to set up and run the KO within your own development environment, see DEVELOPING.adoc.

Ansible

The core of the KO is Ansible. The KO utilizes the Ansible Operator SDK to provide the base Kubernetes operator functionality. The Kiali-specific code is implemented inside Ansible playbooks and roles.

Note
The KO needs to periodically update its Ansible Operator SDK base image. To see how this is done, follow the same steps used previously (see this issue and this PR as good examples of what needs to be done).

Multiple Version Support

The KO supports installing multiple versions of the Kiali server and OSSMC by invoking different versions of its Ansible roles. The versions the KO supports are defined in kiali-default-supported-images.yml and (for OSSMC) in ossmconsole-default-supported-images.yml. For each supported version, there is an Ansible role that the KO executes when it needs to install or remove that Kiali or OSSMC version.

Note
If you need to add support for a new version or remove support for an obsolete version, see DEVELOPING.adoc for those instructions.

To tell the KO which version of Kiali or OSSMC to install, set the spec.version field in the Kiali CR or OSSMConsole CR. If no spec.version is defined in the CR, the Ansible role that is executed is the one defined in default-playbook.yml (side note: the file, and the field inside it, are technically misnamed; this is not the default playbook but the default role). Today, the default version of the Ansible role is named, literally, default; it is the only version the upstream Kiali project officially supports. The multi-version mechanism exists for other products (such as Red Hat OpenShift Service Mesh) that want to retain support for earlier Kiali versions.
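
For illustration, a minimal Kiali CR pinning a specific version might look like the sketch below. The metadata values and version string are examples only; the version must be one of those listed in kiali-default-supported-images.yml.

    apiVersion: kiali.io/v1alpha1
    kind: Kiali
    metadata:
      name: kiali
      namespace: istio-system
    spec:
      # If spec.version is omitted, the KO falls back to the role
      # version named in default-playbook.yml (today, "default").
      version: "v1.73"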

Main Ansible Playbooks

There are several main playbooks that can be invoked by the Ansible Operator SDK when it determines a reconciliation needs to take place. The Ansible Operator SDK knows to do this via the configuration defined in watches.yaml.
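
For reference, entries in watches.yaml follow the Ansible Operator SDK format, roughly like the abridged sketch below (the finalizer name is illustrative; consult the actual file for the full set of entries and options):

    # Each entry maps a watched resource kind to the playbook the
    # Ansible Operator SDK runs when that resource changes.
    - version: v1alpha1
      group: kiali.io
      kind: Kiali
      # Run this playbook when a Kiali CR is created or modified.
      playbook: playbooks/kiali-deploy.yml
      # Run this playbook when a Kiali CR is deleted.
      finalizer:
        name: kiali.io/finalizer
        playbook: playbooks/kiali-remove.yml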

  • kiali-deploy.yml - this is invoked when a new Kiali CR is created or an existing one is modified. This playbook determines which version is to be installed or updated and runs that version’s corresponding kiali-deploy Ansible role. If Kiali is to be upgraded (that is, if the Kiali CR’s spec.version has been changed), this playbook first invokes the kiali-remove Ansible role of the previously declared version (the version recorded in the Kiali CR’s status field); once the old Kiali is removed, the new version is installed by executing the kiali-deploy Ansible role of the version now declared in the Kiali CR’s spec.version. An "upgrade" is therefore really just an "uninstall" followed by an "install".

  • kiali-remove.yml - this is invoked when a Kiali CR has been removed. This playbook determines which version of Kiali is being uninstalled (as found in the now-deleted Kiali CR’s spec.version field) and runs that version’s corresponding kiali-remove Ansible role.

  • kiali-new-namespace-detected.yml - this is invoked when a new namespace is created in the cluster. This playbook is simple and small. Its only job is to "touch" any and all existing Kiali CRs ("touching" in this context means adding or modifying an annotation on the Kiali CR so that the modification causes the KO to trigger a reconciliation; a rough sketch of such a task appears after this list). The playbook only touches a Kiali CR if the namespace was created after the Kiali CR and if the Kiali CR was not modified or touched within the current minute. This playbook enables a useful feature for Kiali installations that are not given cluster-wide access but are given access to a set of namespaces defined by regular expressions (see spec.deployment.accessible_namespaces) or by Istio Discovery Selectors. In that case, when the KO reconciles the touched Kiali CRs, it creates the necessary Role/RoleBinding resources to give the Kiali installation access to the newly detected namespace.

  • KO v1.76.0 added support for OSSMC. There are additional ossmconsole-deploy.yml and ossmconsole-remove.yml playbooks that install and uninstall OSSMC. These are triggered by OSSMConsole CRs but work analogously to the kiali-deploy.yml and kiali-remove.yml playbooks described above.
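
As promised above, here is a rough sketch of what a "touch" task could look like. This is a minimal illustration under stated assumptions, not the playbook's actual code: the annotation name and the two lookup variables are hypothetical.

    # Hypothetical sketch: merge an annotation into an existing Kiali CR
    # so the operator's watch schedules a new reconciliation of that CR.
    - name: Touch a Kiali CR to trigger a reconciliation
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: kiali.io/v1alpha1
          kind: Kiali
          metadata:
            name: "{{ kiali_cr_name }}"            # hypothetical variable
            namespace: "{{ kiali_cr_namespace }}"  # hypothetical variable
            annotations:
              # A changing timestamp value is enough to count as a
              # modification; now() avoids needing gathered facts.
              kiali.io/touched: "{{ now(utc=True).isoformat() }}"  # hypothetical annotation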

Ansible Deploy Role

The kiali-deploy role is responsible for installing and updating a Kiali server. It is a standard Ansible role that follows the normal Ansible format. The different directories in this role are described below.

  • defaults - defines defaults for virtually every setting possible in the Kiali CR. Note that a top-level dict is defined (kiali_defaults) with everything under it. This is because the vars (see below) need to do a trick in order to support the use-case where the user doesn’t define every setting in the Kiali CR (which is the typical use-case). Read the comment here to understand the purpose of the trick (a sketch of the pattern appears after this list). Any new setting added to the Kiali CR schema should (almost always) have a default set here. There are a few cases where having an undefined default is necessary, but most times it is not. When in doubt, set the default here.

  • filter_plugins - filter plugins are a way to jump from Ansible into a Python context when things are easier or more efficient to do with Python code rather than directly within Ansible tasks. There are two custom filters the KO uses in the deploy role:

    • only_accessible_namespaces.py - given a list of all known namespaces and a list of accessible namespace regular expressions, this filters out all non-accessible namespaces (i.e. returns a list of only the namespaces that match an accessible namespace regex). Example usage here.

    • stripnone.py - Recursively processes a given dict value and removes all keys that have a None value. This is needed when setting up the startup variable values. Example usage here.

  • meta - The KO only uses this to declare the collections it wants to use. Today, the KO only needs to declare the kubernetes.core collection.

  • tasks - The tasks that are executed when a Kiali CR has been created or modified. The KO does not care whether this is a new Kiali that needs to be installed or an existing Kiali that needs to be updated; it invokes the same tasks and processes the same templates. Any existing resources are simply updated to match the templates (this is what it means when it is said the operator "reconciles" the existing resources with the desired state of the templates). The main.yml file is the main starting point of execution for the deploy role. It performs some initialization (getting version information about the operator itself and the cluster, initializing variables and setting defaults, etc.) and then handles creation and updates of the various resources that make up a Kiali installation. There is a lot of work done here (and in included tasks such as remove-roles.yml and others) to handle reconciling the Roles and RoleBindings (as accessible namespaces come and go, the Kiali Service Account must have its Roles/RoleBindings updated appropriately).

  • templates - YAML files which are used to create new resources (or update existing ones). Ansible expressions can be placed in the templates; these expressions are evaluated when the templates are processed by the kubernetes.core.k8s task (an example where they are processed is here, and a hedged sketch appears at the end of this section). There are two sets of YAML templates - one for OpenShift clusters (openshift) and one for non-OpenShift clusters (kubernetes).

  • vars - defines the actual variables used by the Ansible deploy tasks. All variables are stored under the main top-level dict called kiali_vars. Read the comment here to understand the trick being used to define the variables (see the sketch after this list). Notice that only the top group of variables (directly under kiali_vars) has a section defined here (e.g. auth is a top group of variables). When adding a new top group, just copy-and-paste an existing group and rename the variables in the new top group as appropriate.
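
To make the defaults/vars trick concrete, the pattern is roughly the one sketched below. This is not the literal file content: the group names are abbreviated, the default values are placeholders, and current_cr stands in for whatever variable actually holds the user-supplied CR content.

    # defaults/main.yml (sketch): every setting, with its default
    # value, lives under the single top-level dict.
    kiali_defaults:
      auth:
        strategy: "token"             # placeholder default
      deployment:
        namespace: "istio-system"     # placeholder default

    # vars/main.yml (sketch): each top group merges the user's CR
    # values over the defaults. stripnone drops keys whose value is
    # None so an unset user value never clobbers its default during
    # the recursive combine.
    kiali_vars:
      auth: "{{ kiali_defaults.auth | combine((current_cr.auth | default({})) | stripnone, recursive=True) }}"
      deployment: "{{ kiali_defaults.deployment | combine((current_cr.deployment | default({})) | stripnone, recursive=True) }}"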

The ossmconsole-deploy role is responsible for installing and updating OSSMC. It is an Ansible role that follows the normal Ansible format and follows the same design as the kiali-deploy role described above.
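
Both deploy roles apply their templates the same way: a kubernetes.core.k8s task renders a template (evaluating any Ansible expressions inside it) and creates or updates the resulting resource. A hedged sketch, with an illustrative template path:

    # Sketch: render a YAML template, evaluating any Ansible
    # expressions embedded in it, then create or update the
    # resulting resource in the cluster.
    - name: Create or update a resource from a template
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('template', 'templates/kubernetes/deployment.yaml') }}"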

Ansible Remove Role

The kiali-remove role is responsible for uninstalling a Kiali server. It is a standard Ansible role that follows the normal Ansible format. The different directories in this role are described below.

  • defaults - defines defaults for only those Kiali CR settings the remove tasks need in order to perform the uninstall. Note that a top-level dict is defined (kiali_defaults_remove) with everything under it. This is because the vars (see below) need to do a trick in order to support the use-case where the user doesn’t define all the settings in the Kiali CR (which is the typical use-case). Read the comment here to understand the purpose of the trick.

  • filter_plugins - filter plugins are a way to jump from Ansible into a Python context when things are easier or more efficient to do with Python code rather than directly within Ansible tasks. There is one custom filter the KO uses in the remove role:

    • stripnone.py - Recursively processes a given dict value and removes all keys that have a None value. This is needed when setting up the startup variable values. Example usage here.

  • meta - The KO only uses this to declare the collections it wants to use. Today, the KO only needs to declare the kubernetes.core collection.

  • tasks - The tasks that are executed when a Kiali CR has been removed and Kiali needs to be uninstalled. These tasks also run if an existing Kiali CR had its spec.version changed, in which case the old version’s installation is removed via these tasks (as described above).

  • vars - defines the actual variables used by the Ansible remove tasks. All variables are stored under the main top level dict called kiali_vars_remove. Read the comment here to understand the trick being used to define the variables. Notice that only the top group of variables (directly under kiali_vars_remove) has a section defined here (e.g. deployment is a top group of variables). When adding a new top group, just copy-and-paste an existing group and rename variables in the new top group as appropriate.

The ossmconsole-remove role is responsible for uninstalling OSSMC. It is an Ansible role that follows the normal Ansible format and follows the same design as the kiali-remove role described above.

OLM Metadata Publishing

OLM is an alternative method of installing the KO, as opposed to using the Kiali Operator Helm Chart. When a new release of the Kiali server and operator container images is published on Quay.io, OLM metadata needs to be published so users of OLM can subscribe to (i.e. install) the new KO.

There are three sets of OLM metadata maintained in the github project, each for a different operator catalog that a user might want to use.

  1. The kiali-upstream metadata is published to the Kubernetes Community Operators repo. These operators then become available on OperatorHub.io.

  2. The kiali-community metadata is published to the OpenShift Community Operators repo. These operators then become available to OpenShift users as "community" operators.

  3. The kiali-ossm metadata is published as part of the productized OpenShift Service Mesh (OSSM) offering. These operators then become available to OpenShift customers as Red Hat-provided operators.

The publishing of the Kubernetes Community ("kiali-upstream") and OpenShift Community ("kiali-community") Operator metadata is performed manually after a release of Kiali has been published and the Quay.io containers have been verified. Here are the necessary steps.

Manual Steps To Publish OLM Metadata

Note
You must first have forked the two community github repos before performing the steps below. Ensure these are forked and checked out on your local machine:
- https://github.com/k8s-operatorhub/community-operators
- https://github.com/redhat-openshift-ecosystem/community-operators-prod
Note
In order for the PRs that you will create to be automatically processed, your github username must be specified in the reviewers field of the ci.yaml file in both repos. So make sure this one and this one have your github username listed as a reviewer. If not, request that it be added.
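
For reference, the reviewers field in each repo's ci.yaml is simply a list of github usernames, roughly as sketched below (the file path and usernames are placeholders):

    # operators/kiali/ci.yaml (sketch): PRs touching this operator are
    # automatically processed only if the PR author is listed here.
    reviewers:
      - your-github-username
      - another-kiali-maintainer
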
  1. Check out the branch of the version that was just released. For example, to publish the latest z-stream release of KO v1.70:

    git fetch origin
    git checkout -b v1.70 origin/v1.70
  2. Change to the manifests directory:

    cd ./manifests
  3. Run the prepare-community-prs.sh script.

    ./prepare-community-prs.sh \
      --gitrepo-operatorhub <file path to your fork location of github.com/k8s-operatorhub/community-operators> \
      --gitrepo-redhat <file path to your fork location of github.com/redhat-openshift-ecosystem/community-operators-prod>
  4. Read the output of the script and follow its directions. Basically, you want to push a PR to each of the two github repos for the Kubernetes Community Operators and OpenShift Community Operators:

    New Kiali metadata has been added to new branches in the community git repo.
    Create two PRs based on these two branches:
    1. cd /your/redhat-openshift-ecosystem/community-operators-prod && git push <your git remote name> kiali-community-2023-06-05-14-05-50
    2. cd /your/k8s-operatorhub/community-operators && git push <your git remote name> kiali-upstream-2023-06-05-14-05-50
  5. Once you create the two PRs (here and here), they will be automatically processed. When all CI tests pass, the new OLM metadata will be published for you.