
[Bug] Using K8S as the registration center, the actual registration failed, but Dubbo will consider the registration successful #14126

Open
zzphyh opened this issue Apr 23, 2024 · 2 comments
Labels
component/need-triage Need maintainers to triage type/need-triage Need maintainers to triage

Comments

@zzphyh

zzphyh commented Apr 23, 2024

Pre-check

  • I am sure that all the content I provide is in English.

Search before asking

  • I had searched in the issues and found no similar issues.

Apache Dubbo Component

Java SDK (apache/dubbo)

Dubbo Version

Dubbo Java 3.2.11, JDK 21

Steps to reproduce this issue

When using K8S as the registry, there is a small probability of hitting the following exception during registration at startup. After this exception occurs, Dubbo neither retries automatically nor fails the startup.

2024-04-23 10:31:31.677 ERROR [Dubbo-framework-shared-scheduler-thread-2] [o.a.d.c.d.DefaultApplicationDeployer] [tttt,,]: [DUBBO] Refresh instance and metadata error., dubbo version: 3.2.11, current host: 192.168.2.12, error code: 5-12. This may be caused by , go to https://dubbo.apache.org/faq/5/12 to find instructions.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PATCH at: https://kubernetes.default.svc:443/api/v1/namespaces/aaa/pods/tttt-0. Message: Operation cannot be fulfilled on pods "tttt-0": the object has been modified; please apply your changes to the latest version and try again. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=pods, name=tttt-0, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Operation cannot be fulfilled on pods "tttt-0": the object has been modified; please apply your changes to the latest version and try again, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Conflict, status=Failure, additionalProperties={}).
at io.fabric8.kubernetes.client.KubernetesClientException.copyAsCause(KubernetesClientException.java:238)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.waitForResult(OperationSupport.java:507)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handleResponse(OperationSupport.java:524)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handlePatch(OperationSupport.java:419)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handlePatch(OperationSupport.java:397)
at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handlePatch(BaseOperation.java:763)
at io.fabric8.kubernetes.client.dsl.internal.HasMetadataOperation.lambda$patch$2(HasMetadataOperation.java:232)
at io.fabric8.kubernetes.client.dsl.internal.HasMetadataOperation.patch(HasMetadataOperation.java:237)
at io.fabric8.kubernetes.client.dsl.internal.HasMetadataOperation.edit(HasMetadataOperation.java:66)
at io.fabric8.kubernetes.client.dsl.internal.HasMetadataOperation.edit(HasMetadataOperation.java:45)
at org.apache.dubbo.registry.kubernetes.KubernetesServiceDiscovery.doRegister(KubernetesServiceDiscovery.java:133)
at org.apache.dubbo.registry.client.AbstractServiceDiscovery.register(AbstractServiceDiscovery.java:161)
at org.apache.dubbo.registry.client.AbstractServiceDiscovery.update(AbstractServiceDiscovery.java:177)
at java.base/java.util.ArrayList.forEach(Unknown Source)
at org.apache.dubbo.registry.client.metadata.ServiceInstanceMetadataUtils.refreshMetadataAndInstance(ServiceInstanceMetadataUtils.java:234)
at org.apache.dubbo.config.deploy.DefaultApplicationDeployer.lambda$registerServiceInstance$5(DefaultApplicationDeployer.java:994)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.runAndReset(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
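The 409 Conflict in the stack trace is Kubernetes' optimistic-concurrency check: the pod's resourceVersion changed between the read and the PATCH, so the server rejects the stale write. Clients usually handle this by re-reading the object and retrying. A minimal, dependency-free sketch of that retry pattern (ConflictException and retryOnConflict are hypothetical stand-ins, not fabric8 API):

```java
import java.util.function.Supplier;

public class RetryOnConflict {
    // Hypothetical stand-in for the client's 409 error type.
    static class ConflictException extends RuntimeException {}

    // Re-run the action up to maxAttempts times while it fails with a conflict.
    // The action itself is expected to re-read the latest object version on each attempt.
    static <T> T retryOnConflict(Supplier<T> action, int maxAttempts) {
        ConflictException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return action.get();
            } catch (ConflictException e) {
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated PATCH that conflicts twice, then succeeds.
        String result = retryOnConflict(() -> {
            if (++calls[0] < 3) throw new ConflictException();
            return "patched";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```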

What you expected to happen

Either retry the registration automatically, or fail the startup so that K8S restarts the pod.

Anything else

I think there is an issue with the following code in the AbstractServiceDiscovery class. Whether serviceInstance is null is used as the criterion for deciding whether registration is needed. However, in the register method, serviceInstance is initialized before doRegister is called to perform the actual registration. Even if doRegister fails, serviceInstance has already been set. When the scheduled task reaches this point again, registration is considered successful and is not attempted again.

    @Override
    public synchronized void register() throws RuntimeException {
        if (isDestroy) {
            return;
        }
        if (this.serviceInstance == null) {
            ServiceInstance serviceInstance = createServiceInstance(this.metadataInfo);
            if (!isValidInstance(serviceInstance)) {
                return;
            }
            this.serviceInstance = serviceInstance;
        }
        boolean revisionUpdated = calOrUpdateInstanceRevision(this.serviceInstance);
        if (revisionUpdated) {
            reportMetadata(this.metadataInfo);
            doRegister(this.serviceInstance);
        }
    }

    /**
     * Update assumes that DefaultServiceInstance and its attributes will never get updated once created.
     * Checking hasExportedServices() before registration guarantees that at least one service is ready for creating the
     * instance.
     */
    @Override
    public synchronized void update() throws RuntimeException {
        if (isDestroy) {
            return;
        }

        if (this.serviceInstance == null) {
            register();
        }

        if (!isValidInstance(this.serviceInstance)) {
            return;
        }
        ServiceInstance oldServiceInstance = this.serviceInstance;
        DefaultServiceInstance newServiceInstance =
                new DefaultServiceInstance((DefaultServiceInstance) oldServiceInstance);
        boolean revisionUpdated = calOrUpdateInstanceRevision(newServiceInstance);
        if (revisionUpdated) {
            logger.info(String.format(
                    "Metadata of instance changed, updating instance with revision %s.",
                    newServiceInstance.getServiceMetadata().getRevision()));
            doUpdate(oldServiceInstance, newServiceInstance);
            this.serviceInstance = newServiceInstance;
        }
    }
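The failure sequence described above can be reproduced with a stripped-down model of the class: once serviceInstance is set, a failed doRegister leaves the field non-null, so the next scheduled update() sees nothing to do. A minimal sketch (the class below is a simplified stand-in for illustration, not Dubbo's actual implementation):

```java
public class RegistrationBugDemo {
    static class FlakyDiscovery {
        Object serviceInstance;
        int doRegisterCalls;
        boolean failNext = true;

        void register() {
            if (serviceInstance == null) {
                serviceInstance = new Object(); // field is set BEFORE doRegister succeeds
            }
            doRegister();
        }

        // Simulates KubernetesServiceDiscovery.doRegister failing once with a 409.
        void doRegister() {
            doRegisterCalls++;
            if (failNext) {
                failNext = false;
                throw new RuntimeException("409 Conflict from K8S");
            }
        }

        // Mirrors update(): only registers when serviceInstance is still null.
        void update() {
            if (serviceInstance == null) {
                register();
            }
            // revision unchanged -> no doUpdate, so nothing re-registers the instance
        }
    }

    public static void main(String[] args) {
        FlakyDiscovery d = new FlakyDiscovery();
        try {
            d.register();  // first attempt fails with the K8S conflict
        } catch (RuntimeException ignored) {}
        d.update();        // scheduled refresh: skips registration, instance looks "registered"
        System.out.println("doRegister attempts: " + d.doRegisterCalls); // stays at 1
    }
}
```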

Are you willing to submit a pull request to fix on your own?

  • Yes I am willing to submit a pull request on my own!

Code of Conduct

@zzphyh zzphyh added component/need-triage Need maintainers to triage type/need-triage Need maintainers to triage labels Apr 23, 2024
@zzphyh zzphyh changed the title [Bug] [Bug] Using K8S as the registration center, the actual registration failed, but Dubbo will consider the registration successful Apr 28, 2024
@AlbumenJ
Member

AlbumenJ commented May 8, 2024

So, we need to make doRegister(this.serviceInstance) happen before this.serviceInstance = serviceInstance?

@zzphyh
Author

zzphyh commented May 9, 2024

So, we need to make doRegister(this.serviceInstance) happen before this.serviceInstance = serviceInstance?

I think there are many possible solutions, such as using a separate flag to track whether registration succeeded, or catching exceptions in the register method and resetting this.serviceInstance to null.
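A sketch of the second suggestion (catch the exception in register and reset the field), again as a simplified stand-in rather than a patch against Dubbo's actual class:

```java
public class RegisterResetDemo {
    static class Discovery {
        Object serviceInstance;
        int doRegisterCalls;
        boolean failNext = true;

        void register() {
            if (serviceInstance == null) {
                serviceInstance = new Object();
            }
            try {
                doRegister();
            } catch (RuntimeException e) {
                // Registration did not actually happen: clear the field so the
                // next scheduled update() will attempt registration again.
                serviceInstance = null;
                throw e;
            }
        }

        // Simulates the real doRegister failing once with a 409.
        void doRegister() {
            doRegisterCalls++;
            if (failNext) {
                failNext = false;
                throw new RuntimeException("409 Conflict from K8S");
            }
        }

        void update() {
            if (serviceInstance == null) {
                register();
            }
        }
    }

    public static void main(String[] args) {
        Discovery d = new Discovery();
        try {
            d.register();  // first attempt fails
        } catch (RuntimeException ignored) {}
        d.update();        // retries because serviceInstance was reset to null
        System.out.println("doRegister attempts: " + d.doRegisterCalls); // now 2
    }
}
```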

Projects
Status: Todo
Development

No branches or pull requests

2 participants