
Migrating to new 1.10.0 Observation API

Marcin Grzejszczak edited this page Jan 22, 2024 · 5 revisions

DECISION CHART


DECISIONS (ORANGE BOXES)

DOES YOUR PROJECT HAVE ANY MICROMETER INSTRUMENTATION?

  • Are you using Micrometer’s Timer or Sample APIs?

  • Are you using any other Micrometer Meters?

If you’re using Timer or Sample, then you’re an immediate candidate for support via the Observation API.

If you’re only using Gauges, DistributionSummaries, Counters, etc., we can discuss whether they are used properly, but most likely you’re NOT a candidate for the Observation API.

IS THERE A SPRING CLOUD SLEUTH INSTRUMENTATION OF YOUR PROJECT?

  • Do you see your project listed in this documentation?

  • Do you see your project in this list of samples?

If either statement is true, your project is already instrumented by Spring Cloud Sleuth.

ARE YOU SATISFIED WITH THE CURRENT INSTRUMENTATION?

  • Is Sleuth using the proper components of your project? Would you use the same APIs and components if you wrote the instrumentation yourself?

  • Does Sleuth create useful spans / tags? As a user, would you find them helpful when debugging issues in production?

If your project is listed in the Spring Cloud Sleuth samples, you can run it according to the provided docs.

If you were a user of your own project and ran into issues with your library, would the provided information be helpful? Are the span names OK? Are the tags helpful?

PROCESSES (YELLOW BOXES)

WE NEED TO MERGE WHAT’S IN SLEUTH WITH WHAT YOU HAVE IN MICROMETER.

We’ll have to sit down and analyze what’s in Sleuth and how it fits with what’s in your project. We would be more than happy if you helped us out with an initial analysis.

CHECK ALL THE PLACES WHERE MICROMETER IS USED. IF THERE’S A PLACE WHERE AN ELEMENT IS <TIMED> ENSURE THAT IT’S REWRITTEN TO USE THE <OBSERVATION> API.

If you’re using a Timer, we would like you to rewrite it to use Observation from Micrometer 1.10.0. The reason is that Observation uses the new handler mechanism, which allows us to transparently add tracing support.

Before

Timer.Sample sample = Timer.start();
// do some work
sample.stop(
    Timer.builder("test.timer")
        .tag("metrics-tag", "metrics-tag-value")
        .register(registry)
);

After

Observation.createNotStarted("test.observation", registry)
    .lowCardinalityTag("metrics-tag", "metrics-tag-value")
    .observe(() -> doSomeWorkHere());

In case your tags depend on, e.g., an HTTP request or other objects, you can extend Observation.Context as follows.

Before

HttpRequest httpRequest = ...;
Timer.Sample sample = Timer.start();
// do some work
sample.stop(
    Timer.builder("test.timer")
        .tag("http.method", httpRequest.method())
        .register(registry)
);

After

class HttpContext extends Observation.Context {
    private final HttpRequest httpRequest;

    HttpContext(HttpRequest httpRequest) {
        this.httpRequest = httpRequest;
    }

    @Override
    public Tags getLowCardinalityTags() {
        return Tags.of("http.method", this.httpRequest.method());
    }
}

Observation.createNotStarted("test.observation", new HttpContext(httpRequest), registry)
  .observe(() -> doSomeWork());

YOU NEED TO COPY SLEUTH’S CODE TO YOUR PROJECT AND REWRITE SLEUTH’S <TRACER> API CALLS TO MICROMETER’S <OBSERVATION> API.

Before (Sleuth’s code)

Span span = this.tracer.nextSpan().name("somename");
try (Tracer.SpanInScope ws = this.tracer.withSpan(span.start())) {
  span.tag("tracing-tag", "tracing-tag-value");
  // do some work
  return something;
}
catch (Exception ex) {
  span.error(ex);
  throw ex;
}
finally {
  span.end();
}

After (Observation API)

Observation.createNotStarted("test.observation", registry)
        .highCardinalityTag("tracing-tag", "tracing-tag-value")
        .observe(() -> doSth());

YOU WANT TO DO EVERYTHING MANUALLY OR YOU WANT TO SIGNAL EVENTS

Before (Sleuth’s code)

Span span = this.tracer.nextSpan().name("somename");
try (Tracer.SpanInScope ws = this.tracer.withSpan(span.start())) {
  span.tag("tracing-tag", "tracing-tag-value");
  // do some work
  span.event("look what happened");
  return something;
}
catch (Exception ex) {
  span.error(ex);
  throw ex;
}
finally {
  span.end();
}

After (Observation API)

Observation observation = Observation.createNotStarted("test.observation", registry)
        .highCardinalityTag("tracing-tag", "tracing-tag-value")
        .start();
try (Observation.Scope scope = observation.openScope()) {
  // do some work
  observation.event("look what happened");
  return something;
}
catch (Exception exception) {
  observation.error(exception);
  throw exception;
}
finally {
  observation.stop();
}

CHECK WHETHER METRICS AND SPANS LOOK FINE (METRIC / SPAN NAMES AND TAGS ARE OK)

To test the combination of OTel, Zipkin, Brave and Wavefront, the Micrometer Tracing project contains a test library that allows you to run your code with all the necessary setup already provided for you. The only thing you need to do is provide certain configuration values (e.g. the Wavefront token and the Wavefront server).

FAQ

I don’t know anything about tracing, can you explain it?

You can read more about it in the Spring Cloud Sleuth documentation.

What is Brave?

To control the lifecycle of a Span we need a Tracer. Brave is an implementation of a Tracer and an extremely mature project.

What is OTel (OpenTelemetry)?

To control the lifecycle of a Span we need a Tracer. OTel is an implementation of a Tracer that has recently been getting a lot of attention.

What is Context Propagation?

In order to propagate tracing information (e.g. trace and span IDs), we need to carry it across threads and over the network. Context propagation is the process of instrumenting frameworks so that the context is not lost when switching threads and is injected when going over the network (e.g. into HTTP headers).
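
To make that concrete, here is a self-contained sketch of the two halves of the problem. All names here (TRACE_ID, capture, inject, X-Trace-Id) are hypothetical illustrations, not the micrometer-tracing API:

```java
// Toy illustration of context propagation -- hypothetical names, NOT the
// micrometer-tracing API. The trace id lives in a ThreadLocal, so it must be
// captured and restored across thread switches, and injected into carrier
// headers before going over the network.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PropagationDemo {
    static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    // wrap a task so the submitting thread's trace id is visible on the worker
    static <T> Callable<T> capture(Callable<T> task) {
        String traceId = TRACE_ID.get(); // captured on the calling thread
        return () -> {
            TRACE_ID.set(traceId);       // restored on the worker thread
            try {
                return task.call();
            } finally {
                TRACE_ID.remove();
            }
        };
    }

    // inject the current trace id into an outgoing carrier (e.g. HTTP headers)
    static Map<String, String> inject(Map<String, String> headers) {
        headers.put("X-Trace-Id", TRACE_ID.get());
        return headers;
    }

    public static void main(String[] args) throws Exception {
        TRACE_ID.set("abc123");
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // without capture() the worker thread would see a null trace id
        String seenOnWorker = pool.submit(capture(TRACE_ID::get)).get();
        System.out.println(seenOnWorker);                              // abc123
        System.out.println(inject(new HashMap<>()).get("X-Trace-Id")); // abc123
        pool.shutdown();
    }
}
```

Real instrumentation does the same two things, just through the library's propagator and executor wrappers instead of these toy helpers.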

In my project I will need to inject / extract Context, now what?

You can check how we handle HTTP in the micrometer-tracing project and do likewise. If that’s not helpful, do not hesitate to contact us, we’ll help!

What is Zipkin?

Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in service architectures. Features include both the collection and lookup of this data.

How do I run Zipkin?

You can run it by using Docker as presented below. For more information read this doc.

$ docker run -d -p 9411:9411 openzipkin/zipkin

What is Wavefront?

Tanzu Observability by Wavefront is a high-performance streaming analytics platform that supports observability for metrics, counters, histograms, and traces/spans. Wavefront is unique because it scales to very high data ingestion rates and query loads. You can collect data from many services and sources across your entire application stack, and can drill into the details of data that was ingested earlier.

What do Micrometer, Micrometer Tracing and Sleuth have in common?

Spring Cloud Sleuth is a Spring Cloud project that instruments various libraries with an abstraction over Tracers. Its next release will be version 3.1.0, which will be the last feature release of the project. For Spring Framework 6 and Spring Boot 3 there will be no Sleuth-compatible version.

Micrometer is a metrics facade. Starting from version 1.10.0 it supports a handler mechanism that allows you to inject behavior when an Observation is started, stopped, etc.
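
To show what that handler mechanism buys you, here is a self-contained toy model (hypothetical classes, not the real micrometer-observation API): handlers registered once receive callbacks around every observed block, which is how tracing, logging, or metrics support can be layered on without touching the instrumented code:

```java
// Toy model of the Observation handler mechanism -- hypothetical classes,
// NOT the real micrometer-observation API. Each registered handler is
// notified when an observation starts and stops.
import java.util.ArrayList;
import java.util.List;

public class HandlerDemo {

    interface Handler {
        void onStart(String name);
        void onStop(String name);
    }

    static class MiniRegistry {
        private final List<Handler> handlers = new ArrayList<>();

        void register(Handler handler) {
            handlers.add(handler);
        }

        // observe() surrounds the user code with handler callbacks
        void observe(String name, Runnable userCode) {
            handlers.forEach(h -> h.onStart(name));
            try {
                userCode.run();
            } finally {
                handlers.forEach(h -> h.onStop(name));
            }
        }
    }

    static List<String> demo() {
        List<String> events = new ArrayList<>();
        MiniRegistry registry = new MiniRegistry();
        registry.register(new Handler() {
            public void onStart(String name) { events.add("start:" + name); }
            public void onStop(String name)  { events.add("stop:" + name); }
        });
        registry.observe("test.observation", () -> events.add("work"));
        return events;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [start:test.observation, work, stop:test.observation]
    }
}
```

A tracing handler plugged into such a registry is exactly how Micrometer Tracing adds spans around instrumented code transparently.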

Micrometer Tracing is a tracing facade. It’s a new project that contains a port of Spring Cloud Sleuth’s API, the Brave Tracer bridge, the OTel Tracer bridge (ported from here), the Wavefront reporter code, and test code to help you set things up in no time. Micrometer Tracing uses Micrometer 1.10.0 and provides tracing-related handlers.

How can I write a unit test for my instrumentation?

You can write a unit test that will assert that an Observation was created with proper values.

Production code that uses the micrometer-observation JAR.

class Example {

    private final ObservationRegistry registry;

    Example(ObservationRegistry registry) {
        this.registry = registry;
    }

    void run() {
        Observation.createNotStarted("foo", registry)
                .lowCardinalityTag("lowTag", "lowTagValue")
                .highCardinalityTag("highTag", "highTagValue")
                .observe(() -> System.out.println("Hello"));
    }
}

Test code that uses the micrometer-observation-test JAR.

// create a test registry in your tests
TestObservationRegistry registry = TestObservationRegistry.create();

@Test
void should_assert_your_observation() {
    // run your production code
    new Example(registry).run();

    // check your observation
    assertThat(registry)
            .thenObservationWithNameEqualTo("foo")
                .hasHighCardinalityTag("highTag", "highTagValue")
                .hasLowCardinalityTag("lowTag", "lowTagValue")
                .isStarted()
                .isStopped()
            .backToMockObservationRegistry()
                .doesNotHaveRemainingObservation();
}

As for integration tests using metrics & tracing, you can use the Micrometer Tracing sample test runner described in this doc.

How can I write an integration test for my instrumentation?

You don’t necessarily need to… We encourage teams to register an account in Wavefront, run Zipkin locally, and create a sample that connects to both. You can use the Micrometer Tracing Test module to extend a JUnit 5 base class (SampleTestRunner) that will set up Wavefront and Zipkin with Brave and OTel for you. Your only concern is to provide the Wavefront configuration (token) and the code you want to test. Check the example below.

class SpringFrameworkObservabilityTests extends SampleTestRunner {

    SpringFrameworkObservabilityTests() {
        super(SampleTestRunner.SamplerRunnerConfig.builder()
                // that's the default - you don't have to explicitly set it
                .wavefrontUrl("https://vmware.wavefront.com")
                // that's the default - you don't have to explicitly set it
                .zipkinUrl("http://localhost:9411")
                // you must pass the token to check on Wavefront
                .wavefrontToken("foo")
                .build());
    }
  
    @Override
    public SampleTestRunnerConsumer yourCode() {
        return (tracer, meterRegistry) -> {
            // here you want to run the logic you would like to observe
        };
    }
}

What is this Observation.Scope about?

We wanted to separate the creation of Observations from putting them in scope. When you put an Observation in scope by calling Observation.Scope scope = observation.openScope();, the handlers can perform specific actions when the scope is opened and closed, such as putting objects in a ThreadLocal or entries in the MDC. This may not be relevant from a metrics-only point of view, but it’s critical from a tracing / logging perspective. The rule of thumb is that wherever we wrap user code, we should wrap it as follows:

// Injected via framework
ObservationRegistry registry = ObservationRegistry.create();

Observation observation = Observation.createNotStarted("name", registry).start();
try (Observation.Scope scope = observation.openScope()) {
    return userCodeThatWeAreWrapping();
}
catch (Exception ex) {
    observation.error(ex);
    throw ex;
}
finally {
    observation.stop();
}

A shorter version would be

// Injected via framework
ObservationRegistry registry = ObservationRegistry.create();

Observation.createNotStarted("name", registry).observe(() -> userCodeThatWeAreWrapping());
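
The shorter version is not just syntactic sugar: observe() performs the whole start / scope / error / stop sequence for you. Here is a simplified model of what it does, using toy classes rather than the actual Micrometer source:

```java
// Simplified model of what observe() does internally -- illustrative only,
// NOT the actual Micrometer implementation. The log records the order of
// lifecycle callbacks so you can see the sequence observe() guarantees.
import java.util.ArrayList;
import java.util.List;

public class MiniObservation {

    interface Scope extends AutoCloseable {
        @Override
        void close(); // narrowed: no checked exception
    }

    final List<String> log = new ArrayList<>();

    void start() { log.add("start"); }
    Scope openScope() { log.add("openScope"); return () -> log.add("closeScope"); }
    void error(Throwable t) { log.add("error"); }
    void stop() { log.add("stop"); }

    // observe() = start, open a scope around the user code,
    // record any error, and always close the scope and stop
    void observe(Runnable userCode) {
        start();
        try (Scope scope = openScope()) {
            userCode.run();
        } catch (RuntimeException ex) {
            error(ex);
            throw ex;
        } finally {
            stop();
        }
    }

    public static void main(String[] args) {
        MiniObservation observation = new MiniObservation();
        observation.observe(() -> observation.log.add("userCode"));
        System.out.println(observation.log);
        // [start, openScope, userCode, closeScope, stop]
    }
}
```

This is why the one-liner is preferred: the scope is always closed and the observation always stopped, even when the wrapped code throws.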
