
Add kind-based development environment #422

Merged
merged 2 commits into kubernetes-sigs:master
Jan 19, 2022

Conversation

nckturner
Contributor

  • Scripts to create and tear down the environment.
  • Uses a kind cluster and a stand-alone authenticator container.
  • Sets up a kubeconfig to test client-server interactions end to end.

Example use:

$ make start-dev ADMIN_ARN=arn:aws:iam::352684330888:role/dev-k8s-01 AUTHENTICATOR_IMAGE=aws-iam-authenticator:v0.5.3_02a86a549cee91b37baff12d2528f185594fb98c_20220117T003254Z
make /home/ubuntu/go/src/sigs.k8s.io/aws-iam-authenticator/_output/bin/aws-iam-authenticator
make[1]: Entering directory '/home/ubuntu/go/src/sigs.k8s.io/aws-iam-authenticator'
make[1]: '/home/ubuntu/go/src/sigs.k8s.io/aws-iam-authenticator/_output/bin/aws-iam-authenticator' is up to date.
make[1]: Leaving directory '/home/ubuntu/go/src/sigs.k8s.io/aws-iam-authenticator'
./hack/start-dev-env.sh
Creating network authenticator-dev
644eb11eaf9b459d7702b1463bf01b66988077fb7119aa1ee5fb0b6e3b109e5e
9abc964c717778714e3aa8208abb01c3d67eeb3c6a7ffe6e5689c62b1ff7d15c
Authenticator running at 172.30.0.10
Creating cluster "authenticator-dev-cluster" ...
WARNING: Overriding docker network due to KIND_EXPERIMENTAL_DOCKER_NETWORK
WARNING: Here be dragons! This is not supported currently.
 βœ“ Ensuring node image (kindest/node:v1.21.1) πŸ–Ό
 βœ“ Preparing nodes πŸ“¦
 βœ“ Writing configuration πŸ“œ
 βœ“ Starting control-plane πŸ•ΉοΈ
 βœ“ Installing CNI πŸ”Œ
 βœ“ Installing StorageClass πŸ’Ύ
Set kubectl context to "kind-authenticator-dev-cluster"
You can now use your cluster with:

kubectl cluster-info --context kind-authenticator-dev-cluster --kubeconfig /home/ubuntu/go/src/sigs.k8s.io/aws-iam-authenticator/_output/dev/client/kind-kubeconfig.yaml

Thanks for using kind! 😊

Test authenticator with:
kubectl --kubeconfig="/home/ubuntu/go/src/sigs.k8s.io/aws-iam-authenticator/_output/dev/client/kubeconfig.yaml" --context="test-authenticator"

The authenticator client and server can then be exercised end to end:

$ kubectl --kubeconfig="/home/ubuntu/go/src/sigs.k8s.io/aws-iam-authenticator/_output/dev/client/kubeconfig.yaml" --context="test-authenticator" get pods
No resources found in default namespace.

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 18, 2022
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: nckturner

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 18, 2022
@nckturner
Contributor Author

/retest


@jaypipes jaypipes left a comment


@nckturner this is a great start. I have some ideas inline that might simplify things, however.

# Parameters

# Parameters required to be set by caller for dev environment creation only:
# AUTHENTICATOR_IMAGE


How about just defaulting to building an image from the locally checked-out code?

Comment on lines +21 to +24
# Check that required binaries are installed
command -v make >/dev/null 2>&1 || { echo >&2 "make is required but it's not installed. Aborting."; exit 1; }
command -v docker >/dev/null 2>&1 || { echo >&2 "docker is required but it's not installed. Aborting."; exit 1; }
command -v kind >/dev/null 2>&1 || { echo >&2 "kind is required but it's not installed. Aborting."; exit 1; }


This is fine, but consider adding a check that validates a particular version of these dependencies. We've found in ACK land that relying on just docker or kind (and not a specific version of those) leads to ambiguous dev environments and lots of headaches.

Contributor Author


Hm, that's a good point. At a minimum, we could print a warning listing the expected/tested versions.
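A minimal sketch of that warning-based check (function name and baseline versions are illustrative, not from the PR; `sort -V` does the version comparison):

```shell
#!/usr/bin/env bash
# Warn if an installed tool is older than the version this dev
# environment was tested with. Versions compare via GNU sort -V.
check_min_version() {
  local tool="$1" min="$2" found="$3"
  # If the minimum does not sort first, "found" is older than "min".
  if [ "$(printf '%s\n%s\n' "$min" "$found" | sort -V | head -n1)" != "$min" ]; then
    echo >&2 "warning: ${tool} ${found} is older than tested version ${min}"
    return 1
  fi
}

# Example: compare the locally installed kind against a tested baseline
# (0.11.1 is an assumed baseline, not one stated in the PR).
if command -v kind >/dev/null 2>&1; then
  kind_version="$(kind version 2>/dev/null | awk '{print $2}' | tr -d v)"
  check_min_version kind 0.11.1 "${kind_version:-0}" || true
fi
```

This only warns rather than aborting, matching the "at least print a warning" suggestion above.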


# Use make start-dev when you want to create a kind cluster for
# testing the authenticator. You must pass in the admin ARN and
# the image under test.


See note below. Consider defaulting the image to one built from the locally checked-out source.

# Admin kubeconfig generated by kind
kind_kubeconfig="${client_dir}/kind-kubeconfig.yaml"

function create_network() {


Is there a requirement to create a separate virtual network? Why not use the default docker virtual network?

Contributor Author


Kind already creates and uses its own network, so to include the authenticator in that network I needed to do it this way. (The alternative would be to run the authenticator on the kind node alongside the apiserver, but then I would have had to do some of the setup differently, including generating all config/certs up front instead of letting the authenticator generate them, because the CA cert needs to be embedded in a webhook kubeconfig and passed to the apiserver on startup.) Also, interacting with the authenticator as a standalone docker container is quite easy, and we don't have to go through kind for debugging, e.g. port-forwarding in order to curl the metrics endpoint.
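The shared-network arrangement described above can be sketched roughly as follows (names and the subnet are illustrative; `KIND_EXPERIMENTAL_DOCKER_NETWORK` is the experimental override that produces the "Here be dragons" warning in the log):

```shell
#!/usr/bin/env bash
# Sketch: put the authenticator container and the kind nodes on one
# dedicated bridge network so the authenticator can get a fixed IP
# that the apiserver webhook config can point at.
NETWORK_NAME=authenticator-dev
AUTHENTICATOR_SUBNET=172.30.0.0/24
CLUSTER_NAME=authenticator-dev-cluster

create_network() {
  docker network create --subnet "${AUTHENTICATOR_SUBNET}" "${NETWORK_NAME}"
}

create_cluster_on_network() {
  # kind joins the named network instead of creating its default
  # "kind" network when this experimental variable is set.
  KIND_EXPERIMENTAL_DOCKER_NETWORK="${NETWORK_NAME}" \
    kind create cluster --name "${CLUSTER_NAME}"
}

# Only attempt this when the tools are actually present.
if command -v docker >/dev/null 2>&1 && command -v kind >/dev/null 2>&1; then
  create_network || true
  create_cluster_on_network || true
fi
```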

Comment on lines +123 to +146
function start_authenticator() {
  mkdir -p "${authenticator_state_host_dir}"
  mkdir -p "${authenticator_export_host_dir}"
  chmod 777 "${authenticator_state_host_dir}"
  chmod 777 "${authenticator_export_host_dir}"
  docker run \
    --detach \
    --ip "${AUTHENTICATOR_IP}" \
    --mount "type=bind,src=${authenticator_config_host_dir},dst=${authenticator_config_dest_dir}" \
    --mount "type=bind,src=${authenticator_state_host_dir},dst=${authenticator_state_dest_dir}" \
    --mount "type=bind,src=${authenticator_export_host_dir},dst=${authenticator_export_dest_dir}" \
    --name aws-iam-authenticator \
    --network "${NETWORK_NAME}" \
    --publish ${authenticator_healthz_port}:${authenticator_healthz_port} \
    --publish ${AUTHENTICATOR_PORT}:${AUTHENTICATOR_PORT} \
    --rm \
    "${AUTHENTICATOR_IMAGE}" \
    server \
    --config "${authenticator_config_dest_dir}/authenticator.yaml"
}

function kill_authenticator() {
  docker kill aws-iam-authenticator || true
}


The issue I have with this is that it's not actually testing the installation/operating practice for aws-iam-authenticator described in the README: deploying the authenticator as a DaemonSet on each control plane node in the k8s cluster, using the example manifest. Why not just kubectl apply a slightly modified version of the example YAML files against the KinD cluster? That would do double duty by validating the recommended installation approach for the authenticator.
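The suggested alternative might look something like this (the context and manifest path are assumptions for illustration, not the PR's code):

```shell
#!/usr/bin/env bash
# Sketch of the reviewer's suggestion: apply a lightly modified copy of
# the example DaemonSet manifest so the dev environment exercises the
# documented installation path instead of a standalone container.
CLUSTER_CONTEXT=kind-authenticator-dev-cluster
MANIFEST=deploy/example.yaml   # illustrative path to the example manifest

apply_example_manifest() {
  kubectl --context "${CLUSTER_CONTEXT}" apply -f "${MANIFEST}"
}

# Only run when kubectl and the target cluster context actually exist.
if command -v kubectl >/dev/null 2>&1 \
    && kubectl config get-contexts "${CLUSTER_CONTEXT}" >/dev/null 2>&1; then
  apply_example_manifest
fi
```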

Contributor Author


Yeah. The reason I went with the above approach is that I decided to let the authenticator generate its own certificate and the webhook kubeconfig on startup. Since the path to this config needs to be passed to the apiserver as a flag at startup, the authenticator has to start before the kind cluster: the API server starts automatically on cluster creation and is configured in the kind cluster configuration. That said, we could instead use the init command to pre-generate the configuration, then launch the authenticator, and then patch the API server.
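The init-first ordering mentioned above might look roughly like this (the cluster id is illustrative, and the later steps are only outlined in comments; treat this as a sketch of the alternative, not the PR's implementation):

```shell
#!/usr/bin/env bash
# Sketch: pre-generate config/certs with "init", start the
# authenticator, then create the kind cluster whose apiserver is
# patched to use the generated webhook kubeconfig.
CLUSTER_ID=authenticator-dev   # illustrative cluster id

if command -v aws-iam-authenticator >/dev/null 2>&1; then
  # "init" pre-generates the self-signed certificate and the webhook
  # kubeconfig that must be handed to the apiserver at startup.
  aws-iam-authenticator init -i "${CLUSTER_ID}" || true
fi
# Next steps (not shown): docker run the authenticator, then
# "kind create cluster" with an apiserver patch that mounts the
# generated webhook kubeconfig.
```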

Contributor


We should favor the approach that allows quicker iteration, without tearing down and restarting the kind cluster. I think we get that with both approaches, though probably a bit more easily with the DaemonSet approach.
The second criterion for choosing is closeness to real-world setups. I'm not sure which one is more prevalent, but I'd lean toward the approach implemented here, given my familiarity with managed setups.


function write_kind_config() {
  mkdir -p "${kind_config_host_dir}"
  sed -e "s|{{AUTHENTICATOR_EXPORT_HOST_DIR}}|${authenticator_export_host_dir}|g" \
Contributor


Do these commands work on Mac? I know sed has different syntax on Mac and Linux. Should we mandate installation of gsed to keep these working across both platforms?

Contributor Author


Yeah, I have no idea if they work on a Mac, but I've definitely run into problems with sed commands that were written by Mac users :)
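One common workaround for this (a sketch, not part of the PR): detect GNU sed at script start, falling back to `gsed` on macOS, since BSD sed differs on flags like `-i` and does not accept `--version`:

```shell
#!/usr/bin/env bash
# Prefer GNU sed; on macOS fall back to gsed (GNU sed via Homebrew).
# BSD sed rejects --version, so that probe distinguishes the two.
if sed --version >/dev/null 2>&1; then
  SED=sed    # GNU sed (Linux)
elif command -v gsed >/dev/null 2>&1; then
  SED=gsed   # GNU sed installed on macOS
else
  echo >&2 "warning: GNU sed not found; templating may fail with BSD sed"
  SED=sed
fi

# Usage with the same templating style as the script under review:
printf 'image: {{AUTHENTICATOR_IMAGE}}\n' \
  | "$SED" -e "s|{{AUTHENTICATOR_IMAGE}}|aws-iam-authenticator:dev|g"
# prints: image: aws-iam-authenticator:dev
```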

@jaypipes
Copy link

Chatted about this offline. Good with improvements in follow-up PRs :)

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jan 19, 2022
@k8s-ci-robot k8s-ci-robot merged commit f974d5e into kubernetes-sigs:master Jan 19, 2022