Add kind-based development environment #422
Conversation
- Scripts to create and tear down the environment.
- Uses a kind cluster and a stand-alone authenticator container.
- Sets up a kubeconfig to test client-server interactions end to end.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: nckturner

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.

/retest
@nckturner this is a great start. I have some ideas inline that might simplify things, however.
# Parameters
#
# Parameters required to be set by caller for dev environment creation only:
# AUTHENTICATOR_IMAGE
how about just defaulting to building an image of the locally checked-out code?
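The suggestion could be sketched as follows; the fallback tag and the commented-out build step are hypothetical, not part of this PR:

```shell
# Sketch of the reviewer's suggestion: fall back to an image built from the
# locally checked-out source when the caller does not supply one.
# The tag name and the build step are hypothetical.
AUTHENTICATOR_IMAGE="${AUTHENTICATOR_IMAGE:-}"
if [ -z "${AUTHENTICATOR_IMAGE}" ]; then
  AUTHENTICATOR_IMAGE="aws-iam-authenticator:local-dev"
  # make image TAG="${AUTHENTICATOR_IMAGE}"  # hypothetical local build step
fi
echo "using image: ${AUTHENTICATOR_IMAGE}"
```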
# Check that required binaries are installed
command -v make >/dev/null 2>&1 || { echo >&2 "make is required but it's not installed. Aborting."; exit 1; }
command -v docker >/dev/null 2>&1 || { echo >&2 "docker is required but it's not installed. Aborting."; exit 1; }
command -v kind >/dev/null 2>&1 || { echo >&2 "kind is required but it's not installed. Aborting."; exit 1; }
this is fine, but consider adding a check that validates a particular version of these dependencies. We've found in ACK land that relying on just docker or kind (rather than a specific version of each) leads to ambiguous dev environments and lots of headaches.
Hm, that's a good point. At the least we should print a warning with the expected/tested versions.
# Use make start-dev when you want to create a kind cluster for
# testing the authenticator. You must pass in the admin ARN and
# the image under test.
See note below. Consider defaulting the image to one built from the locally checked-out source.
# Admin kubeconfig generated by kind
kind_kubeconfig="${client_dir}/kind-kubeconfig.yaml"

function create_network() {
is there a requirement to create a separate virtual network? Why not use the default docker virtual network?
Kind already creates and uses its own network, so to include the authenticator in that network I needed to do it this way. (The alternative is to run the authenticator on the kind node alongside the apiserver, but then some of the setup would have to be done differently, including generating all config/certs up front instead of letting the authenticator generate them; this is because the ca.cert needs to be embedded in a webhook kubeconfig and passed to the apiserver on startup.) Also, interacting with the authenticator as a standalone docker container is quite easy, and we don't have to go through kind for debugging, e.g. port-forwarding in order to curl the metrics endpoint.
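The shared-network setup described above amounts to making sure one Docker network exists that both the kind nodes and the authenticator container join; a sketch, with an illustrative network name not taken from the PR:

```shell
# Sketch of the shared-network idea: ensure a Docker network exists that both
# the kind cluster and the authenticator container can join.
# The default name below is illustrative.
NETWORK_NAME="${NETWORK_NAME:-authenticator-dev}"

ensure_network() {
  # Create the network only if it does not already exist.
  if ! docker network inspect "${NETWORK_NAME}" >/dev/null 2>&1; then
    docker network create "${NETWORK_NAME}"
  fi
}
```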
function start_authenticator() {
  mkdir -p "${authenticator_state_host_dir}"
  mkdir -p "${authenticator_export_host_dir}"
  chmod 777 "${authenticator_state_host_dir}"
  chmod 777 "${authenticator_export_host_dir}"
  docker run \
    --detach \
    --ip "${AUTHENTICATOR_IP}" \
    --mount "type=bind,src=${authenticator_config_host_dir},dst=${authenticator_config_dest_dir}" \
    --mount "type=bind,src=${authenticator_state_host_dir},dst=${authenticator_state_dest_dir}" \
    --mount "type=bind,src=${authenticator_export_host_dir},dst=${authenticator_export_dest_dir}" \
    --name aws-iam-authenticator \
    --network "${NETWORK_NAME}" \
    --publish "${authenticator_healthz_port}:${authenticator_healthz_port}" \
    --publish "${AUTHENTICATOR_PORT}:${AUTHENTICATOR_PORT}" \
    --rm \
    "${AUTHENTICATOR_IMAGE}" \
    server \
    --config "${authenticator_config_dest_dir}/authenticator.yaml"
}

function kill_authenticator() {
  docker kill aws-iam-authenticator || true
}
the issue I have with this is that it's not actually testing the installation/operating practice for aws-iam-authenticator that is described in the README: deploy the authenticator as a DaemonSet on each control plane node in the k8s cluster, using the example manifest. Why not just kubectl apply a slightly-modified version of the example YAML files against the KinD cluster? That would do dual service of validating the recommended installation approach for the authenticator.
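The alternative the reviewer describes, sketched below; the manifest path and the kubeconfig variable are assumptions based on this PR's script, not code from it:

```shell
# Sketch of the DaemonSet-based alternative: apply a (possibly modified)
# example manifest against the kind cluster using its admin kubeconfig.
# Both paths are illustrative.
apply_example_manifest() {
  kubectl --kubeconfig "${kind_kubeconfig}" apply -f deploy/example.yaml
}
```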
Yeah. The reason I went with the above approach is that I decided to let the authenticator generate its own certificate and the webhook kubeconfig on startup. Since the path to this config needs to be passed to the apiserver as a flag on startup, this suggests starting the authenticator before the kind cluster: the API server is started automatically on cluster creation and is configured in the kind cluster configuration. That said, we could instead use the init command to pre-generate the configuration, then launch the authenticator, and then patch the API server.
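The pre-generation flow mentioned above could be sketched like this; the `init` subcommand is documented for the authenticator, but the cluster id is illustrative and the later steps are outlined only as comments:

```shell
# Sketch of the pre-generation flow: create the cert and webhook kubeconfig
# up front, then bring up the cluster, then start the authenticator.
# The cluster id is illustrative.
pregenerate_and_start() {
  # 1. Pre-generate the certificate, key, and webhook kubeconfig.
  aws-iam-authenticator init -i dev-cluster
  # 2. Create the kind cluster, with a config that mounts the generated
  #    webhook kubeconfig and points the apiserver at it via
  #    --authentication-token-webhook-config-file.
  # 3. Start the authenticator container with the pre-generated certificate.
}
```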
We should prefer the approach that allows quicker iteration without a tear down and restart of the kind cluster. I think we get that with both approaches, though probably a bit more easily with the daemonset approach.
The second criterion for choosing is closeness to real-world setups. I'm not sure which one is more prevalent, but I'd lean towards the approach implemented here, given my familiarity with managed setups.
function write_kind_config() {
  mkdir -p "${kind_config_host_dir}"
  sed -e "s|{{AUTHENTICATOR_EXPORT_HOST_DIR}}|${authenticator_export_host_dir}|g" \
do these commands work on mac? I know sed has different syntax on mac and linux. Can we mandate installation of gsed to keep these working across both platforms?
Yeah, I have no idea if they work on Mac, but I've definitely run into problems with sed commands that were written by Mac users :)
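One portable pattern along the lines suggested in this thread, sketched: prefer GNU sed when it is installed as `gsed` (the name Homebrew uses on macOS), falling back to the system `sed` otherwise:

```shell
# Pick a sed implementation: gsed where available (macOS with GNU sed
# installed via Homebrew), plain sed elsewhere.
if command -v gsed >/dev/null 2>&1; then
  SED="gsed"
else
  SED="sed"
fi

# Any later substitutions then go through "${SED}".
echo "aloha" | "${SED}" -e 's/aloha/hello/'
```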
Chatted about this offline. Good with improvements in follow-up PRs :)
/lgtm
Example use:
And the authenticator client and server are exercised with the following parameters: