
Envtest #382

Closed
nightkr opened this issue Jan 10, 2021 · 9 comments

@nightkr
Member

nightkr commented Jan 10, 2021

It would be nice to support something like controller-runtime's envtest helper for starting a minimal cluster per test.

https://book.kubebuilder.io/cronjob-tutorial/writing-tests.html

@nightkr nightkr added the automation ci and testing related label Jan 10, 2021
@clux clux added the help wanted Not immediately prioritised, please help! label Mar 2, 2021
@kazk
Member

kazk commented Jun 9, 2021

A minimal version that creates a temporary cluster without affecting ~/.kube/config, provides a configured kube::Client, and cleans up when it's dropped can be built easily with k3d.

https://gist.github.com/kazk/3419b8dd0468e1640c4575c638ef590b
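
Roughly, the shape of it is something like this (a simplified sketch rather than the gist's exact code; it assumes the k3d CLI is on PATH and uses kube's Kubeconfig and Config::from_custom_kubeconfig APIs):

// Simplified sketch (not the gist's exact code): create a k3d cluster up front,
// build a kube::Client from the kubeconfig k3d prints, delete the cluster on Drop.
use std::process::Command;
use std::sync::atomic::{AtomicUsize, Ordering};

use kube::config::{KubeConfigOptions, Kubeconfig};
use kube::{Client, Config};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

pub struct TestEnv {
    name: String,
}

impl TestEnv {
    pub fn new() -> Self {
        // Unique per process and per call so parallel tests don't collide
        let n = COUNTER.fetch_add(1, Ordering::Relaxed);
        let name = format!("kube-test-{}-{}", std::process::id(), n);
        // `--kubeconfig-update-default=false` keeps ~/.kube/config untouched
        let status = Command::new("k3d")
            .args(["cluster", "create", name.as_str(), "--kubeconfig-update-default=false", "--wait"])
            .status()
            .expect("failed to run k3d");
        assert!(status.success(), "k3d cluster create failed");
        Self { name }
    }

    pub async fn client(&self) -> Client {
        // Ask k3d for this cluster's kubeconfig on stdout instead of reading ~/.kube/config
        let output = Command::new("k3d")
            .args(["kubeconfig", "get", self.name.as_str()])
            .output()
            .expect("failed to run k3d");
        let yaml = String::from_utf8(output.stdout).expect("kubeconfig was not UTF-8");
        let kubeconfig = Kubeconfig::from_yaml(&yaml).expect("invalid kubeconfig");
        let config = Config::from_custom_kubeconfig(kubeconfig, &KubeConfigOptions::default())
            .await
            .expect("failed to load kubeconfig");
        Client::try_from(config).expect("failed to build client")
    }
}

impl Drop for TestEnv {
    fn drop(&mut self) {
        // Clean up the cluster even if the test panicked
        let _ = Command::new("k3d")
            .args(["cluster", "delete", self.name.as_str()])
            .status();
    }
}

With that in place, a test just constructs its own TestEnv and asks it for a client: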

use k8s_openapi::api::core::v1::Pod;
use kube::Api;

#[tokio::test]
async fn test_integration() {
    // Creates a throwaway k3d cluster; it is deleted when `test_env` is dropped
    let test_env = TestEnv::new();
    let client = test_env.client().await;

    let pods: Api<Pod> = Api::default_namespaced(client);
    let _pod = pods.get("example").await.unwrap();
}
// Multiple tests, each creating its own cluster, can run in parallel too
running 1 test
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-c308jevs3ok61t88c0dg' (832b6396535a5538cb87bf2f29a47849011dc1d80e13e567c3c06bd23ede4cdf) 
INFO[0000] Created volume 'k3d-c308jevs3ok61t88c0dg-images' 
INFO[0001] Creating node 'k3d-c308jevs3ok61t88c0dg-server-0' 
INFO[0001] Creating LoadBalancer 'k3d-c308jevs3ok61t88c0dg-serverlb' 
INFO[0001] Starting cluster 'c308jevs3ok61t88c0dg'      
INFO[0001] Starting servers...                          
INFO[0001] Starting Node 'k3d-c308jevs3ok61t88c0dg-server-0' 
INFO[0009] Starting agents...                           
INFO[0009] Starting helpers...                          
INFO[0009] Starting Node 'k3d-c308jevs3ok61t88c0dg-serverlb' 
INFO[0010] Cluster 'c308jevs3ok61t88c0dg' created successfully! 
INFO[0010] You can now use it like this:                
export KUBECONFIG=$(k3d kubeconfig write c308jevs3ok61t88c0dg)
kubectl cluster-info

thread 'test_integration' panicked at 'called `Result::unwrap()` on an `Err` value: Api(ErrorResponse { status: "Failure", message: "pods \"example\" not found", reason: "NotFound", code: 404 })', tests/integration_test.rs:13:42

Deleting k3d cluster c308jevs3ok61t88c0dg...
INFO[0000] Deleting cluster 'c308jevs3ok61t88c0dg'      
INFO[0000] Deleted k3d-c308jevs3ok61t88c0dg-serverlb    
INFO[0000] Deleted k3d-c308jevs3ok61t88c0dg-server-0    
INFO[0000] Deleting cluster network 'k3d-c308jevs3ok61t88c0dg' 
INFO[0000] Deleting image volume 'k3d-c308jevs3ok61t88c0dg-images' 
INFO[0000] Removing cluster details from default kubeconfig... 
INFO[0000] Removing standalone kubeconfig file (if there is one)... 
INFO[0000] Successfully deleted cluster c308jevs3ok61t88c0dg! 
test test_integration ... FAILED

failures:

failures:
    test_integration

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 12.11s

I think we can extend this (make it configurable, load CRDs, add assertion helpers, etc.) to get a pretty good test helper.

@clux
Member

clux commented Jun 9, 2021

I think this would be a fantastic out of the box thing to have, particularly if it can be done without screwing over the kubeconfig!

One philosophical aspect about this though. Should we be optimising this to be a general integration test setup that you only need a handful of (because of the spin-up cost), or maybe something that can do tons of little verifications in dynamic namespaces?

If each #[tokio::test] performs a full cluster provisioning step, then we are implicitly encouraging enormous test fns (and the handful-of-tests case).
Maybe we could have something like a ONCE'd TestEnv::shared setup, so that each #[tokio::test] can get a dynamically created namespace to perform their test? That way, the whole namespaced class of users could have a full cluster setup done once per CI, and run tests very fast (potentially even parallelising some down the road).
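
A rough sketch of that shape (hypothetical code, reusing the TestEnv type from the gist above; note that the shared env lives in a static that is never dropped, so tearing down the shared cluster is left open here):

// Hypothetical: one shared cluster per test binary, one namespace per test.
use k8s_openapi::api::core::v1::{ConfigMap, Namespace};
use kube::{api::PostParams, core::ObjectMeta, Api, Client};
use tokio::sync::OnceCell;

static SHARED_ENV: OnceCell<TestEnv> = OnceCell::const_new();

async fn client_in_fresh_namespace(ns: &str) -> Client {
    let env = SHARED_ENV.get_or_init(|| async { TestEnv::new() }).await;
    let client = env.client().await;
    // Each test gets its own namespace, so tests can run in parallel against one cluster
    let namespaces: Api<Namespace> = Api::all(client.clone());
    let namespace = Namespace {
        metadata: ObjectMeta { name: Some(ns.into()), ..Default::default() },
        ..Default::default()
    };
    namespaces.create(&PostParams::default(), &namespace).await.unwrap();
    client
}

#[tokio::test]
async fn creates_configmap_in_own_namespace() {
    let client = client_in_fresh_namespace("creates-configmap").await;
    let _cms: Api<ConfigMap> = Api::namespaced(client, "creates-configmap");
    // ...create and assert on resources here...
}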

@nightkr
Member Author

nightkr commented Jun 9, 2021

Maybe we could have something like a ONCE'd TestEnv::shared setup, so that each #[tokio::test] can get a dynamically created namespace to perform their test? That way, the whole namespaced class of users could have a full cluster setup done once per CI, and run tests very fast (potentially even parallelising some down the road).

Maybe we should have both? Namespaces are a good ideal, but they break down as soon as you want to test something cluster-scoped.

The question would be how to ensure that we tear down the shared testenv in the end, though... :/

@kazk
Member

kazk commented Jun 9, 2021

(potentially even parallelising some down the road)

Maybe I'm misunderstanding, but this setup can already run multiple tests in parallel, each creating its own cluster, without issues.

Should we be optimising this to be a general integration test setup that you only need a handful of (because of the spin-up cost), or maybe something that can do tons of little verifications in dynamic namespaces?

It's actually much faster than I had imagined. The example above finishes in about 12s on my laptop. Running 5 of them (just repeated with different names) finished in 36s. From the test output, it seems like k3d created 4 clusters at once, then created the remaining one after one of the first four was deleted.


For a shared cluster setup with dynamic namespaces, maybe a helper macro that expands into a test case would work. Each test would be passed a configured client with its default namespace set, and panic::catch_unwind could collect the results and assert that all tests passed.
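
Roughly what such a macro could expand into (a hand-written sketch; the subtest names are made up, and futures' catch_unwind stands in for panic::catch_unwind since the checks are async):

// Hypothetical expansion: run every subtest against one shared cluster inside a
// single #[tokio::test], catching panics so all subtests run before asserting.
use std::panic::AssertUnwindSafe;

use futures::FutureExt;
use kube::Client;

async fn check_creates_pod(_client: Client) { /* ... */ }
async fn check_lists_pods(_client: Client) { /* ... */ }

#[tokio::test]
async fn integration_suite() {
    let test_env = TestEnv::new();
    let client = test_env.client().await;

    let results = vec![
        ("creates_pod", AssertUnwindSafe(check_creates_pod(client.clone())).catch_unwind().await),
        ("lists_pods", AssertUnwindSafe(check_lists_pods(client.clone())).catch_unwind().await),
    ];
    let failed: Vec<_> = results.iter()
        .filter(|(_, result)| result.is_err())
        .map(|(name, _)| *name)
        .collect();
    assert!(failed.is_empty(), "failed subtests: {:?}", failed);
}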

@nightkr
Member Author

nightkr commented Jun 9, 2021

Another thing: many tests probably shouldn't need anything other than the API server and maybe the controllers, so trimming that down should cut the provisioning time a fair bit too. The container runtime in particular can be pretty slow and annoying to run in sandboxed environments (such as Nix).

@kazk kazk self-assigned this Jun 13, 2021
@kazk
Member

kazk commented Jun 13, 2021

I tried writing an integration test for my example controller https://github.com/kazk/cnat-rs. See https://github.com/kazk/cnat-rs/tree/main/tests.

GitHub Actions:
[screenshot: example-output]
(note that ~/.kube/config doesn't even exist because it's never touched)

Multiple tests like this can run in parallel just fine.

To create a minimal cluster:

let test_env = TestEnv::new();

Builder to enable features (not much is configurable at the moment):

let test_env = TestEnv::builder()
    .servers(1)
    .agents(3)
    .inject_host_ip()
    .build();

If these look like a good start, we can create a kube-test crate with:

  • kube_test: general test helpers
  • kube_test::k3d: TestEnv with k3d

One philosophical aspect about this though. Should we be optimising this to be a general integration test setup that you only need a handful of (because of the spin-up cost), or maybe something that can do tons of little verifications in dynamic namespaces?

I can't think of a way to put each of the small tests in a separate #[tokio::test] and use the same temporary cluster that cleans itself up. It's possible to write many small tests in one #[tokio::test] sharing the cluster, but test failure outputs will be unclear.

@clux
Member

clux commented Jun 19, 2021

If these look like a good start, we can create kube-test crate having:
kube_test: general test helpers
kube_test::k3d: TestEnv with k3d

I think this would be super useful. Sorry it took so long to get back to this. A kube-test crate for these things would definitely be welcome in my book, at least. IMO, CI is probably one of our weaker spots so far, so a better out-of-the-box experience here would be very worth focusing on. So many people jump at these convenience stories (like kube-derive with schemas) if the interface is sleek.

Happy to review as usual, but the code inside your test repo already looks great as a first release imo.

Also, very cool repo! Did you build cnat against all the operator frameworks to compare? That's like a cool talk topic / blog post right there.

I can't think of a way to put each of the small tests in a separate #[tokio::test] and use the same temporary cluster that cleans itself up. It's possible to write many small tests in one #[tokio::test] sharing the cluster, but test failure outputs will be unclear.

Maybe that type of parallelisation is best left up to the user if the default behaviour would be confusing.

@kazk
Member

kazk commented Jun 20, 2021

No problem, I've been busy too. I'll open a PR with mostly the same code and more docs.

Did you build cnat against all the operator frameworks to compare?

I was going to, but only built their client-go version. If I remember correctly, I couldn't find the versions of KubeBuilder (https://github.com/programming-kubernetes/cnat/tree/master/cnat-kubebuilder) or Operator SDK (https://github.com/programming-kubernetes/cnat/tree/master/cnat-operator) they had used. I also didn't spend much time on them.

That's like a cool talk topic / blog post right there.

Yeah, it's interesting to see what's necessary to implement the same thing. I love how straightforward the kube version is. Anyone with Kubernetes and some Rust knowledge should be able to understand and build it. I'm not into blogs/talks, so feel free to take it :)

@kazk kazk mentioned this issue Jun 21, 2021
@clux clux added wontfix This will not be worked on and removed help wanted Not immediately prioritised, please help! labels Mar 5, 2023
@clux
Member

clux commented Mar 5, 2023

Going to close as it's impractical to do something like this inside kube, and there are other ways to test.

Some reasons are outlined in #564 (comment), but basically: k3d is not the most reliable way to test, so we should probably not encourage spinning up multiple clusters as part of tests as a main testing method (if people want to do that, they should do it in a CI configuration where they know what machine they are dealing with).

Easier ways to test are through mocking, or idempotent integration tests using k3d (with some care taken to work in non-isolated environments), as outlined in https://kube.rs/controllers/testing/
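
For example, an integration test can be made idempotent with server-side apply, so re-running it against a cluster that already has the resource converges instead of failing (a sketch under that assumption, not the guide's exact code):

// Sketch of an idempotent integration test using server-side apply.
use k8s_openapi::api::core::v1::Pod;
use kube::api::{Api, Patch, PatchParams};
use kube::Client;

#[tokio::test]
#[ignore = "needs a reachable cluster; run explicitly on CI"]
async fn applies_example_pod() {
    let client = Client::try_default().await.unwrap();
    let pods: Api<Pod> = Api::default_namespaced(client);

    let pod: Pod = serde_json::from_value(serde_json::json!({
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": { "name": "kube-test-example" },
        "spec": {
            "containers": [{ "name": "main", "image": "busybox", "command": ["sleep", "3600"] }]
        }
    })).unwrap();

    // Apply is an upsert, so this is safe whether or not the pod already exists
    let ssapply = PatchParams::apply("kube-test").force();
    pods.patch("kube-test-example", &ssapply, &Patch::Apply(&pod)).await.unwrap();
    // ...assert on the resulting state here...
}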

Something closer to the original envtest proposal in this issue could be #1108, but I'm not sure that's particularly likely to happen (and if it does, it would be an orthogonal project to kube). It might be more likely that we can lean on https://github.com/kubernetes-sigs/kwok/ in the future.

@clux clux closed this as completed Mar 5, 2023