
Baseline for Keycloak deployment in operator #9628

Merged (1 commit, merged into keycloak:main on Jan 25, 2022)

Conversation

@vmuzikar (Contributor) commented Jan 18, 2022

Limitations that will be addressed as a follow-up:

Resolves #9171, resolves #9552

@vmuzikar added the area/operator, keycloak.x, and kind/feature labels on Jan 18, 2022
@andreaTP (Contributor) left a comment:

First round of review finished!

Great work @vmuzikar !

I like the overall structure and especially the State management, although I think we should try to avoid introducing complexities (so early!) like WatchedResourcesStore and the custom "Vault" integration, and instead rely on the capabilities of the Java Operator SDK and the standard K8s API, respectively.

setDefaultLabels(resource);
setOwnerReferences(resource);

Log.debugf("Creating or updating resource: %s", resource);
Contributor:

Can we agree on one logging framework and stick with it? I don't really mind which, as long as it is one.

Contributor Author:

It is still the same logging framework coming from Quarkus. It's just using static methods and not CDI. I realized the CDI approach is pretty impractical for logging, as we don't have everything as beans and passing deps manually is a mess. This is an elegant way offered by Quarkus and we should stick with it everywhere.
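
For illustration only, a minimal sketch of the two styles under discussion, assuming Quarkus 2.x: the static io.quarkus.logging.Log API (the style used in the snippet above) versus a CDI-injected JBoss Logger, which only works in classes managed as beans. The class and method names below are hypothetical.

import io.quarkus.logging.Log;
import org.jboss.logging.Logger;
import javax.inject.Inject;

class StaticLoggingExample {
    void apply(String resourceName) {
        // Static call, no injection needed; usable from plain (non-CDI) helper classes.
        Log.debugf("Creating or updating resource: %s", resourceName);
    }
}

class InjectedLoggingExample {
    // CDI alternative: requires the class to be a bean so the logger can be injected.
    @Inject
    Logger log;

    void apply(String resourceName) {
        log.debugf("Creating or updating resource: %s", resourceName);
    }
}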

Contributor:

Do we mind that, in this same codebase, the Keycloak Quarkus distribution takes a different approach?

Contributor:

This is still valid, but I don't have a super strong opinion against it; feel free to mark this as Resolved if we don't care about consistency in this codebase.

Contributor Author:

Well, AFAIK the rest of Keycloak can't fully utilize Quarkus features (due to WildFly), and that includes logging. In the operator we can use what Quarkus provides.

    }

    protected void setOwnerReferences(HasMetadata resource) {
        if (!cr.getMetadata().getNamespace().equals(resource.getMetadata().getNamespace())) {
Contributor:

Why do we have this if? IIRC the UID spans different namespaces.

Contributor Author:

Cross-namespace owner references are disallowed by design. Namespaced dependents can specify cluster-scoped or namespaced owners.

https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/
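
For reference, a minimal sketch of what the method under discussion could look like with the fabric8 model classes. Only the namespace guard comes from the snippet under review; the builder chain below is an assumption for illustration, and cr stands for the Keycloak custom resource held by the reconciler.

import io.fabric8.kubernetes.api.model.HasMetadata;
import io.fabric8.kubernetes.api.model.OwnerReference;
import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder;
import java.util.List;

    protected void setOwnerReferences(HasMetadata resource) {
        // Kubernetes disallows cross-namespace owner references, hence the guard;
        // the review debates throwing here instead of silently returning.
        if (!cr.getMetadata().getNamespace().equals(resource.getMetadata().getNamespace())) {
            return;
        }

        OwnerReference owner = new OwnerReferenceBuilder()
                .withApiVersion(cr.getApiVersion())
                .withKind(cr.getKind())
                .withName(cr.getMetadata().getName())
                .withUid(cr.getMetadata().getUid())
                .withController(true)
                .build();

        resource.getMetadata().setOwnerReferences(List.of(owner));
    }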

Contributor:

In this case I would throw an exception instead of silently ignoring.

Contributor:

This comment is still valid; can you please either remove the condition entirely or throw an exception instead of silently returning?

Contributor Author:

What if we have a managed resource in a different namespace than the CR? We surely want to support those scenarios. Of course, the resources would then have to be "manually" removed by the Operator upon CR deletion.

Contributor:

Correct, but the early return doesn't solve this either...

Contributor Author:

So what do you suggest exactly? Throwing an exception is certainly not a solution in the context of this method. Of course, for true support of foreign resources located in other namespaces we'd need to implement additional logic.

In any case, I'm pretty sure this gets refactored sooner or later when the Dependent Resources are ready in the SDK.

Contributor:

I would completely remove this condition as it's misleading and not used ATM, but not anything blocking 👍

operator/src/main/resources/example-keycloak.yml (outdated thread, resolved)
@vmuzikar (Contributor Author):

@andreaTP Thank you for your review! I hopefully answered all the comments. :)

@vmuzikar (Contributor Author):

I removed the watched resources store (i.e. the Deployment won't be restarted on a Secret change). But I decided not to change the format for referring to Secrets (${secret:...) for now, as it would create an even bigger difference between local configuration using conf files and configuration using CRs, because we'd need to use the name and values separately.

@andreaTP (Contributor) left a comment:

@vmuzikar Great update! 👏

A few more comments, but only one important one, regarding the (missing, or I missed it 🙂) condition for recreating the Deployment; all the rest is totally in the right direction!

operator/src/main/resources/postgres.yaml (outdated thread, resolved)
setDefaultLabels(resource);
setOwnerReferences(resource);

Log.debugf("Creating or updating resource: %s", resource);
Contributor:

This is still valid, but I don't have a super strong opinion against it; feel free to mark this as Resolved if we don't care about consistency in this codebase.

    }

    protected void setOwnerReferences(HasMetadata resource) {
        if (!cr.getMetadata().getNamespace().equals(resource.getMetadata().getNamespace())) {
Contributor:

This comment is still valid; can you please either remove the condition entirely or throw an exception instead of silently returning?

@vmuzikar vmuzikar marked this pull request as ready for review January 21, 2022 16:28
andreaTP previously approved these changes Jan 21, 2022

@andreaTP (Contributor) left a comment:

LGTM, totally good enough to start iterating! 💯

    }

    private Deployment baseDeployment;

    @Override
    protected HasMetadata getReconciledResource() {
Contributor:

Probably using other variable names could make this less confusing? WDYT?

Contributor Author:

OK, I unified the naming of the local variable with the field name.

public class KeycloakSpec {

    private int instances = 1;
    private String image;
    private Map<String, String> serverConfiguration;
    private PodTemplate podTemplate;
Contributor:

I would suggest moving this out to another PR.

Contributor Author:

👍

@@ -2,3 +2,6 @@ quarkus.operator-sdk.crd.apply=true
quarkus.operator-sdk.generate-csv=true
quarkus.container-image.builder=jib
quarkus.operator-sdk.crd.validate=false

# Operator config
operator.operand.image=quay.io/keycloak/keycloak-x:latest
Contributor:

I would suggest another naming, as every element created by the operator is an operand... so probably operator.operand.keycloakImage could be clearer? WDYT?

Contributor Author:

Renamed to operator.keycloak.image. WDYT?

Contributor:

👍
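
For illustration, a hedged sketch of how the renamed property could be consumed in the operator via MicroProfile Config as provided by Quarkus. The OperatorConfig class and its field are hypothetical; the default value simply mirrors the image shown in the diff above.

import org.eclipse.microprofile.config.inject.ConfigProperty;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class OperatorConfig {

    // Reads the image configured in application.properties; falls back to the
    // default operand image when the property is not overridden.
    @ConfigProperty(name = "operator.keycloak.image",
                    defaultValue = "quay.io/keycloak/keycloak-x:latest")
    String keycloakImage;

    public String getKeycloakImage() {
        return keycloakImage;
    }
}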

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-db
Contributor:

Should we always prefix the operands with the CR name, or at least with keycloak? In the same namespace the user could have another postgresql-db for other purposes. WDYT?

Contributor Author:

This is actually not an operand. ;) It's just a helper deployment for devs to easily deploy KC with an external DB.

Contributor:

Yes, you are right... I see what you mean.

@vmuzikar (Contributor Author):

@jonathanvila Thanks for the review! I pushed the requested changes and rebased on top of master. I will squash the commits once I have a final approval from you and @andreaTP.

andreaTP previously approved these changes Jan 24, 2022

@andreaTP (Contributor) left a comment:

LGTM

jonathanvila previously approved these changes Jan 24, 2022

@jonathanvila (Contributor) left a comment:

LGTM 👍

@vmuzikar (Contributor Author):

I squashed the commits.

@pedroigor pedroigor merged commit 6b485b8 into keycloak:main Jan 25, 2022
@vmuzikar vmuzikar deleted the keycloak-deployment branch January 25, 2022 14:07
@AndrewFarley:

Please forgive me if this is a silly question, but can I ask if/when this will make it into a release of Keycloak-X? I'd like to have a reasonable probe in Kubernetes instead of just a tcpProbe. I can't see this being functional on the latest Docker image, 16.1.1, or perhaps the question should instead be: when will the Docker images be updated, e.g. to the 17.0.1 release from yesterday?

@andreaTP (Contributor):

Hi @AndrewFarley, this operator will be included as a preview in the next Keycloak release:
https://www.keycloak.org/2022/03/releases

Successfully merging this pull request may close these issues: "Readiness and Liveness probe for the operator deployment of Keycloak.X" and "Keycloak.X deployment".