
Couchbase health check fails due to timeout #14685

Closed
prafsoni opened this issue Oct 4, 2018 · 13 comments
Comments

@prafsoni

prafsoni commented Oct 4, 2018

Spring Boot 2.0.5.RELEASE

I see in the release notes that a similar issue was supposed to be resolved. However, I am seeing this quite frequently (every 30-60 minutes) in the logs: the service becomes unhealthy and then gets back to healthy in about 30 seconds or so.

2018-10-04 12:08:01.182  WARN 1 --- [io-8080-exec-15] o.s.b.a.c.CouchbaseHealthIndicator       : Couchbase health check failed
java.util.concurrent.TimeoutException: null
	at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:77)
	at com.couchbase.client.java.bucket.DefaultBucketManager.info(DefaultBucketManager.java:127)
	at org.springframework.boot.actuate.couchbase.CouchbaseHealthIndicator.getBucketInfo(CouchbaseHealthIndicator.java:84)
	at org.springframework.boot.actuate.couchbase.CouchbaseHealthIndicator.doHealthCheck(CouchbaseHealthIndicator.java:75)
	at org.springframework.boot.actuate.health.AbstractHealthIndicator.health(AbstractHealthIndicator.java:84)
	at org.springframework.boot.actuate.health.CompositeHealthIndicator.health(CompositeHealthIndicator.java:68)
	at org.springframework.boot.actuate.health.HealthEndpointWebExtension.getHealth(HealthEndpointWebExtension.java:50)
	at sun.reflect.GeneratedMethodAccessor149.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:223)
	at org.springframework.boot.actuate.endpoint.invoke.reflect.ReflectiveOperationInvoker.invoke(ReflectiveOperationInvoker.java:76)
	at org.springframework.boot.actuate.endpoint.annotation.AbstractDiscoveredOperation.invoke(AbstractDiscoveredOperation.java:61)
	at org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping$ServletWebOperationAdapter.handle(AbstractWebMvcEndpointHandlerMapping.java:274)
	at org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping$OperationHandler.handle(AbstractWebMvcEndpointHandlerMapping.java:330)
	at sun.reflect.GeneratedMethodAccessor146.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209)
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136)
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:891)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797)
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:974)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:866)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:851)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:90)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:109)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:155)
	at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:123)
	at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:108)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:800)
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:806)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.lang.Thread.run(Thread.java:748)
@spring-projects-issues spring-projects-issues added the status: waiting-for-triage An issue we've not yet triaged label Oct 4, 2018
@wilkinsona
Member

wilkinsona commented Oct 4, 2018

Are you referring to #13879? That’s the opposite problem where Couchbase is down and the health indicator would hang. In your case, if you are certain that Couchbase is up, then it’s taken too long to respond and the indicator thinks it’s down. You can use management.health.couchbase.timeout to increase the timeout. The default is one second. Please give this a try and let us know if it helps. It may be that we should increase the default.
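
For example, the timeout can be raised like this in application.properties (the 10000 ms value here is purely illustrative):

management.health.couchbase.timeout=10000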

@wilkinsona wilkinsona added the status: waiting-for-feedback We need additional information before we can continue label Oct 4, 2018
@prafsoni
Author

prafsoni commented Oct 8, 2018

I played around with various timeout values and had to change it to 60000 ms, and I still see occasional timeout issues; I might end up setting it to 75000 ms, i.e. the previous default. 1000 ms is definitely not an ideal choice for the default value.

@spring-projects-issues spring-projects-issues added status: feedback-provided Feedback has been provided and removed status: waiting-for-feedback We need additional information before we can continue labels Oct 8, 2018
@wilkinsona
Member

wilkinsona commented Oct 8, 2018

Thanks. I'd be rather concerned about the performance of your Couchbase cluster if a timeout of 60000ms still results in occasional issues. Do you see similar response times for application queries against the cluster?

With regards to the management timeout, Couchbase's docs have this to say about it and the 75000ms default:

The management timeout is used on all synchronous BucketManager and ClusterManager operations and if not overridden by a custom timeout. It set to a quite high timeout because some operations might take a longer time to complete (for example flush).

@prafsoni
Author

prafsoni commented Oct 8, 2018

No, performance-wise the cluster is very responsive; query results are usually <50 ms. What I have seen is that when Couchbase performs operations like compacting a bucket or updating indexes, that's when the health check timeout occurs.
In an attempt to keep the logs clean I am setting it to a higher value. Right now there are too many retries.
Maybe we need a few retries with slightly longer timeouts, say 3000 ms, 3 retries, 1000 ms back-off, before actually throwing the error.
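
For illustration, that retry-with-back-off idea could be sketched like this (a hypothetical helper, not an existing Spring Boot option; the probe stands in for whatever single call the indicator makes, such as the BucketManager.info(...) call in the stack trace above):

// Hypothetical sketch of "a few retries with slightly longer timeouts":
// retry a single probe up to `attempts` times, sleeping between attempts.
static <T> T retry(java.util.concurrent.Callable<T> probe, int attempts, long backOffMillis)
		throws Exception {
	Exception lastFailure = null;
	for (int i = 0; i < attempts; i++) {
		try {
			return probe.call();
		}
		catch (Exception ex) {
			lastFailure = ex;
			if (i < attempts - 1) {
				Thread.sleep(backOffMillis);
			}
		}
	}
	throw lastFailure;
}

// Usage along the lines suggested above: 3 attempts, 3000 ms per attempt, 1000 ms back-off.
// BucketInfo info = retry(() -> bucketManager.info(3000, TimeUnit.MILLISECONDS), 3, 1000);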

@wilkinsona
Member

I wonder if we shouldn't be using Bucket.ping() instead. I learned about it in this thread on the Couchbase forum. Something like this:

PingReport report = this.operations.getCouchbaseBucket().ping(this.timeout,
		TimeUnit.MILLISECONDS);
for (PingServiceHealth serviceHealth : report.services()) {
	PingState state = serviceHealth.state();
	// Decide health based on state being one of OK, TIMEOUT, or ERROR
}

PingReport is in an internal package, but is annotated with @InterfaceAudience.Public. That means that it's "intended to be used by any project or application that depends on this library." However, it's also annotated with @InterfaceStability.Experimental which means that it's "considered experimental and no guarantees can be given in terms of compatibility and stability".

@wilkinsona
Member

The problem with something that'll take a minute or more (either for a single call, or multiple calls that retry with a timeout that backs off) is that the caller of the health endpoint is going to have to wait for a minute or more for a response. I wouldn't be surprised if a load balancer gave up before a minute had elapsed and assumed that the application was down. To be useful, I really think we need to find something that gives a reasonable impression of Couchbase's health but also responds quickly.

@prafsoni
Author

prafsoni commented Oct 8, 2018

I agree.
FYI - the load balancer was causing connection issues with Couchbase at startup, and the pod was failing to get healthy before the set threshold, so we switched to connecting to the nodes directly.

@wilkinsona wilkinsona added for: team-attention An issue we'd like other members of the team to review type: bug A general bug and removed for: team-attention An issue we'd like other members of the team to review status: feedback-provided Feedback has been provided status: waiting-for-triage An issue we've not yet triaged labels Oct 10, 2018
@wilkinsona wilkinsona added this to the 2.0.x milestone Oct 10, 2018
@philwebb philwebb modified the milestones: 2.0.x, 2.0.6 Oct 10, 2018
@wilkinsona wilkinsona self-assigned this Oct 11, 2018
@wilkinsona
Member

wilkinsona commented Oct 12, 2018

Having installed a couple of Couchbase nodes, I've learned that we should be using Cluster.diagnostics() rather than Bucket.ping(). The former returns pretty much immediately, irrespective of the state of the cluster. The latter will block for the entire timeout period if a node is down.

Cluster.diagnostics() returns a DiagnosticsReport. In that report, when a node in the cluster is down, it's listed as having a state of CONNECTING. When a node is up, it's listed as having a state of CONNECTED.

One downside of using the diagnostics report is that it considers the cluster as a whole, irrespective of which buckets the application is using and how they are replicated across the cluster. It could be that all of the nodes that host the buckets the application is using are up and yet a node in the cluster may be down, and we'd then consider Couchbase to be down unnecessarily. However, I don't think there's any way for us to determine that without doing something at the bucket level, and those calls can all block for an unacceptably long time.

We now need to figure out how to get from where we are now to where we want to be. The move to using Cluster rather than the Bucket that we can get from CouchbaseOperations means that the type signature and constructor of the health indicator need to change.
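
For illustration, a minimal sketch of such a Cluster-based check, treating only CONNECTED endpoints as healthy as described above (the SDK types and package names below are from the 2.5.x Java client and may differ in other versions):

import java.util.List;

import com.couchbase.client.core.message.internal.DiagnosticsReport;
import com.couchbase.client.core.message.internal.EndpointHealth;
import com.couchbase.client.core.state.LifecycleState;
import com.couchbase.client.java.Cluster;

public final class CouchbaseDiagnosticsCheck {

	// Sketch only: diagnostics() returns almost immediately, so this check does
	// not block for the timeout period even when a node is down.
	public static boolean isUp(Cluster cluster) {
		DiagnosticsReport report = cluster.diagnostics();
		List<EndpointHealth> endpoints = report.endpoints();
		return endpoints.stream()
				.allMatch((endpoint) -> endpoint.state() == LifecycleState.CONNECTED);
	}

}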

@wilkinsona wilkinsona added the for: team-attention An issue we'd like other members of the team to review label Oct 12, 2018
@philwebb philwebb modified the milestones: 2.0.6, 2.0.x Oct 12, 2018
@wilkinsona
Member

Using the DiagnosticsReport is too broad as it checks the health of the entire cluster. Using BucketInfo gives us the desired focus but it may block for an unacceptably long period. What we really need is a bucket-level health check that returns as quickly as Cluster.diagnostics. Without such an API, we have come to the conclusion that there is no good way for us to check Couchbase's health. We are going to deprecate CouchbaseHealthIndicator in 2.0.x and, additionally, stop auto-configuring it in 2.1.x.

@wilkinsona wilkinsona removed the for: team-attention An issue we'd like other members of the team to review label Oct 12, 2018
@wilkinsona wilkinsona changed the title Couchbase health check failed Deprecate CouchbaseHealthIndicator Oct 12, 2018
@wilkinsona wilkinsona modified the milestones: 2.0.x, 2.0.6 Oct 12, 2018
@wilkinsona wilkinsona added type: enhancement A general enhancement and removed type: bug A general bug labels Oct 12, 2018
@simonbasle

simonbasle commented Oct 12, 2018

pinging @daschl for his insight here, but I think the diagnostics() way could actually fit the bill.

In a Couchbase cluster, if a node has the ServiceType.BINARY service it hosts a fraction of every Bucket, if I'm not mistaken. So any node with BINARY that is NOT in the connected state would mark the cluster as unhealthy for the most basic use case (key/value operations).

The health check could also be made configurable to let users decide whether, for their workload, the other types of services are relevant (see the enum ServiceType, e.g. views and N1QL, which are used by Spring Data).

NB: It seems the diagnostics() and ping() features were designed in this Couchbase RFC.
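
As a rough illustration of narrowing the check to the key/value service as suggested above (building on the earlier sketch and its imports; ServiceType is the SDK's service enum):

// Sketch: only the BINARY (key/value) endpoints decide health; other services
// (VIEW, QUERY, ...) could be included based on configuration.
boolean kvUp = cluster.diagnostics().endpoints().stream()
		.filter((endpoint) -> endpoint.type() == ServiceType.BINARY)
		.allMatch((endpoint) -> endpoint.state() == LifecycleState.CONNECTED);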

@wilkinsona
Member

wilkinsona commented Oct 12, 2018

Thank you, @simonbasle. The RFC is an interesting read. In particular, this section caught my attention:

While some users may want a "health check", the name of this function and the return value were chosen to be clear that it does not attempt to give a boolean healthy true/false to the consumer. The diagnostic report is a rough, backwards view that an application developer can use to determine healthy for their specific workload. SDKs do not have enough context to determine a boolean healthy/unhealthy, so the goal with this API is to summarize as much info as possible for the app developer to assemble a complete, contextual view and come to a conclusion.

I’m not sure that we’re any better-placed than the SDK is to know whether or not things are healthy for a user’s specific workload. Hopefully @daschl will have some input that proves me wrong.

@wilkinsona wilkinsona added the status: on-hold We can't start working on this issue yet label Oct 14, 2018
@daschl
Contributor

daschl commented Oct 15, 2018

@wilkinsona I think using the diagnostics() API here would fit the bill, as @simonbasle said. Keep in mind that, from an IO perspective, the only difference between bucket-wide and cluster-wide are some Binary (kv) connections that are conceptually scoped to the bucket level. All the other services that might be important (like N1QL queries, fts, ...) are cluster-scoped, since we send the creds as a basic auth header there and share them across all buckets for efficiency reasons.

So I think using the report is an accurate picture of the state of the SDK. One thing to consider is that even if it is "cluster scope", it pretty much affects every bucket in the same way since the data is distributed evenly across the cluster.

If you want to get a good aggregated state, I think the best shot is the following algorithm: for every EndpointState that is returned, look at the LifecycleState. If all of them are either idle (i.e. a socket pool which is just not used at this point) or connected, the cluster is "green".
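
Expressed as code, that rule might look roughly like this (same SDK types as in the earlier sketch; a sketch of the suggested aggregation, not the final implementation):

// "Green" only when every endpoint reported by diagnostics() is either
// CONNECTED or IDLE (an idle, unused socket pool is fine).
static boolean isGreen(DiagnosticsReport report) {
	return report.endpoints().stream()
			.map(EndpointHealth::state)
			.allMatch((state) -> state == LifecycleState.CONNECTED
					|| state == LifecycleState.IDLE);
}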

@wilkinsona wilkinsona changed the title Deprecate CouchbaseHealthIndicator Couchbase health check fails due to timeout Oct 15, 2018
@wilkinsona wilkinsona added type: bug A general bug and removed status: on-hold We can't start working on this issue yet type: enhancement A general enhancement labels Oct 15, 2018
@wilkinsona
Member

Here's a response from a cluster with a single node that's up:

"details": {
        "couchbase": {
            "details": {
                "endpoints": [
                    {
                        "id": "0x2b1702f5",
                        "lastActivity": 0,
                        "local": "/10.0.0.54:54978",
                        "remote": "/10.0.0.54:8093",
                        "state": "CONNECTED",
                        "type": "QUERY"
                    },
                    {
                        "id": "0x1cd735e",
                        "lastActivity": 4201999,
                        "local": "/10.0.0.54:54977",
                        "remote": "/10.0.0.54:8092",
                        "state": "CONNECTED",
                        "type": "VIEW"
                    },
                    {
                        "id": "0x35e7273e",
                        "lastActivity": 873334,
                        "local": "/10.0.0.54:54976",
                        "remote": "/10.0.0.54:11210",
                        "state": "CONNECTED",
                        "type": "BINARY"
                    }
                ],
                "sdk": "couchbase-java-client/2.5.9 (git: 2.5.9, core: 1.5.9) (Mac OS X/10.13.6 x86_64; Java HotSpot(TM) 64-Bit Server VM 1.8.0_181-b13)"
            },
            "status": "UP"
        }
    },
    "status": "UP"

And a single node that's down:

{
    "details": {
        "couchbase": {
            "details": {
                "endpoints": [
                    {
                        "id": "0x2b1702f5",
                        "lastActivity": 0,
                        "local": "/10.0.0.54:54978",
                        "remote": "/10.0.0.54:8093",
                        "state": "CONNECTING",
                        "type": "QUERY"
                    },
                    {
                        "id": "0x1cd735e",
                        "lastActivity": 119509421,
                        "local": "/10.0.0.54:54977",
                        "remote": "/10.0.0.54:8092",
                        "state": "CONNECTING",
                        "type": "VIEW"
                    },
                    {
                        "id": "0x35e7273e",
                        "lastActivity": 6186669,
                        "local": "/10.0.0.54:54976",
                        "remote": "/10.0.0.54:11210",
                        "state": "CONNECTING",
                        "type": "BINARY"
                    }
                ],
                "sdk": "couchbase-java-client/2.5.9 (git: 2.5.9, core: 1.5.9) (Mac OS X/10.13.6 x86_64; Java HotSpot(TM) 64-Bit Server VM 1.8.0_181-b13)"
            },
            "status": "DOWN"
        }
    },
    "status": "DOWN"
}

Two nodes that are both up:

{
    "details": {
        "couchbase": {
            "details": {
                "endpoints": [
                    {
                        "id": "0x2b1702f5",
                        "lastActivity": 0,
                        "local": "/10.0.0.54:55218",
                        "remote": "/10.0.0.54:8093",
                        "state": "CONNECTED",
                        "type": "QUERY"
                    },
                    {
                        "id": "0x1cd735e",
                        "lastActivity": 311131257,
                        "local": "/10.0.0.54:55123",
                        "remote": "/10.0.0.54:8092",
                        "state": "CONNECTED",
                        "type": "VIEW"
                    },
                    {
                        "id": "0x35e7273e",
                        "lastActivity": 307530,
                        "local": "/10.0.0.54:55217",
                        "remote": "/10.0.0.54:11210",
                        "state": "CONNECTED",
                        "type": "BINARY"
                    },
                    {
                        "id": "0x609e7515",
                        "lastActivity": 0,
                        "local": "/10.0.0.54:55341",
                        "remote": "/10.0.0.13:8093",
                        "state": "CONNECTED",
                        "type": "QUERY"
                    },
                    {
                        "id": "0x6995df17",
                        "lastActivity": 0,
                        "local": "/10.0.0.54:55322",
                        "remote": "/10.0.0.13:8092",
                        "state": "CONNECTED",
                        "type": "VIEW"
                    },
                    {
                        "id": "0x7306c408",
                        "lastActivity": 2766411,
                        "local": "/10.0.0.54:55321",
                        "remote": "/10.0.0.13:11210",
                        "state": "CONNECTED",
                        "type": "BINARY"
                    }
                ],
                "sdk": "couchbase-java-client/2.5.9 (git: 2.5.9, core: 1.5.9) (Mac OS X/10.13.6 x86_64; Java HotSpot(TM) 64-Bit Server VM 1.8.0_181-b13)"
            },
            "status": "UP"
        }
    },
    "status": "UP"
}

And two nodes where one is up and one is down:

{
    "details": {
        "couchbase": {
            "details": {
                "endpoints": [
                    {
                        "id": "0x2b1702f5",
                        "lastActivity": 0,
                        "local": "/10.0.0.54:55218",
                        "remote": "/10.0.0.54:8093",
                        "state": "CONNECTED",
                        "type": "QUERY"
                    },
                    {
                        "id": "0x1cd735e",
                        "lastActivity": 373391672,
                        "local": "/10.0.0.54:55123",
                        "remote": "/10.0.0.54:8092",
                        "state": "CONNECTED",
                        "type": "VIEW"
                    },
                    {
                        "id": "0x35e7273e",
                        "lastActivity": 1570432,
                        "local": "/10.0.0.54:55217",
                        "remote": "/10.0.0.54:11210",
                        "state": "CONNECTED",
                        "type": "BINARY"
                    },
                    {
                        "id": "0x609e7515",
                        "lastActivity": 0,
                        "local": "/10.0.0.54:55341",
                        "remote": "/10.0.0.13:8093",
                        "state": "CONNECTING",
                        "type": "QUERY"
                    },
                    {
                        "id": "0x6995df17",
                        "lastActivity": 0,
                        "local": "/10.0.0.54:55322",
                        "remote": "/10.0.0.13:8092",
                        "state": "CONNECTING",
                        "type": "VIEW"
                    },
                    {
                        "id": "0x7306c408",
                        "lastActivity": 5062249,
                        "local": "/10.0.0.54:55321",
                        "remote": "/10.0.0.13:11210",
                        "state": "CONNECTING",
                        "type": "BINARY"
                    }
                ],
                "sdk": "couchbase-java-client/2.5.9 (git: 2.5.9, core: 1.5.9) (Mac OS X/10.13.6 x86_64; Java HotSpot(TM) 64-Bit Server VM 1.8.0_181-b13)"
            },
            "status": "DOWN"
        }
    },
    "status": "DOWN"
}

Now we just need to decide how to move to this new model in 2.0.x. The current implementation of the above is a completely new health indicator. This is technically a breaking change (the details of the health response are different and the type of the bean that's auto-configured has changed) but I can't see a way to fix this out of the box without making some form of breaking change.
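
To make the shape of that change concrete, the new indicator would look roughly like the following (a sketch only; the class name and details are illustrative and this is not the exact code that shipped):

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

import com.couchbase.client.core.message.internal.DiagnosticsReport;
import com.couchbase.client.core.message.internal.EndpointHealth;
import com.couchbase.client.core.state.LifecycleState;
import com.couchbase.client.java.Cluster;

import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.health.Health;

// Sketch: a health indicator built on Cluster.diagnostics() rather than the
// Bucket obtained from CouchbaseOperations. The constructor now needs a
// Cluster, which is why the type and signature of the auto-configured bean change.
public class ClusterDiagnosticsHealthIndicator extends AbstractHealthIndicator {

	private final Cluster cluster;

	public ClusterDiagnosticsHealthIndicator(Cluster cluster) {
		this.cluster = cluster;
	}

	@Override
	protected void doHealthCheck(Health.Builder builder) throws Exception {
		DiagnosticsReport diagnostics = this.cluster.diagnostics();
		boolean up = diagnostics.endpoints().stream()
				.map(EndpointHealth::state)
				.allMatch((state) -> state == LifecycleState.CONNECTED
						|| state == LifecycleState.IDLE);
		if (up) {
			builder.up();
		}
		else {
			builder.down();
		}
		builder.withDetail("sdk", diagnostics.sdk());
		builder.withDetail("endpoints", diagnostics.endpoints().stream()
				.map(ClusterDiagnosticsHealthIndicator::describe)
				.collect(Collectors.toList()));
	}

	private static Map<String, Object> describe(EndpointHealth endpoint) {
		Map<String, Object> details = new LinkedHashMap<>();
		details.put("id", endpoint.id());
		details.put("lastActivity", endpoint.lastActivity());
		details.put("local", endpoint.local().toString());
		details.put("remote", endpoint.remote().toString());
		details.put("state", endpoint.state().name());
		details.put("type", endpoint.type().name());
		return details;
	}

}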
