Simplify otel-collector example #3560

Closed
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -27,6 +27,7 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html)
- `traceIDRatioSampler` (given by `TraceIDRatioBased(float64)`) now uses the rightmost bits for sampling decisions,
fixing random sampling when using ID generators like `xray.IDGenerator`
and increasing parity with other language implementations. (#3557)
- Update the `otel-collector` example: simplify the setup by using `docker-compose` and add a metric exporter (#3560)

### Deprecated

13 changes: 13 additions & 0 deletions example/otel-collector/Dockerfile
@@ -0,0 +1,13 @@
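# Multi-stage build: compile the example in a full Go image, then copy the
# resulting binary into a small Alpine runtime image.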
FROM golang:1.18-alpine AS base

COPY . /src

WORKDIR /src/example/otel-collector

RUN go build -o main .

FROM alpine:latest

COPY --from=base /src/example/otel-collector/main /app/main

CMD ["/app/main"]
28 changes: 0 additions & 28 deletions example/otel-collector/Makefile

This file was deleted.

200 changes: 23 additions & 177 deletions example/otel-collector/README.md
@@ -1,4 +1,4 @@
# OpenTelemetry Collector Traces Example
# OpenTelemetry Collector Traces and Metrics Example

This example illustrates how to export trace and metric data from the
OpenTelemetry-Go SDK to the OpenTelemetry Collector. From there, we bring the
@@ -13,116 +13,32 @@ App + SDK ---> OpenTelemetry Collector ---|

# Prerequisites

You will need access to a Kubernetes cluster for this demo. We use a local
instance of [microk8s](https://microk8s.io/), but please feel free to pick
your favorite. If you do decide to use microk8s, please ensure that dns
and storage addons are enabled
You need Docker and docker-compose to run this example. If you don't have Docker
installed, follow the instructions [here](https://docs.docker.com/get-docker/).

```bash
microk8s enable dns storage
```

For simplicity, the demo application is not part of the k8s cluster, and will
access the OpenTelemetry Collector through a NodePort on the cluster. Note that
the NodePort opened by this demo is not secured.

Ideally you'd want to either have your application running as part of the
kubernetes cluster, or use a secured connection (NodePort/LoadBalancer with TLS
or an ingress extension).

If not using microk8s, ensure that cert-manager is installed by following [the
instructions here](https://cert-manager.io/docs/installation/).

# Deploying to Kubernetes

All the necessary Kubernetes deployment files are available in this demo, in the
[k8s](./k8s) folder. For your convenience, we assembled a [makefile](./Makefile)
with deployment commands (see below). For those with subtly different systems,
you are, of course, welcome to poke inside the Makefile and run the commands
manually. If you use microk8s and alias `microk8s kubectl` to `kubectl`, the
Makefile will not recognize the alias, and so the commands will have to be run
manually.

## Setting up the Prometheus operator

If you're using microk8s like us, simply do

```bash
microk8s enable prometheus
```

and you're good to go. Move on to [Using the makefile](#using-the-makefile).

Otherwise, obtain a copy of the Prometheus Operator stack from
[prometheus-operator](https://github.com/prometheus-operator/kube-prometheus):

```bash
git clone https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus
kubectl create -f manifests/setup

# wait for namespaces and CRDs to become available, then
kubectl create -f manifests/
```

And to tear down the stack when you're finished:

```bash
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
```

## Using the makefile

Next, we can deploy our Jaeger instance, Prometheus monitor, and Collector
using the [makefile](./Makefile).

```bash
# Create the namespace
make namespace-k8s

# Deploy Jaeger operator
make jaeger-operator-k8s

# After the operator is deployed, create the Jaeger instance
make jaeger-k8s

# Then the Prometheus instance. Ensure you have enabled a Prometheus operator
# before executing (see above).
make prometheus-k8s

# Finally, deploy the OpenTelemetry Collector
make otel-collector-k8s
```

If you want to clean up after this, you can use `make clean-k8s` to delete
all the resources created above. Note that this will not remove the namespace.
Because Kubernetes sometimes gets stuck when removing namespaces, please remove
this namespace manually after all the resources inside have been deleted,
for example with
# Running the example

```bash
kubectl delete namespaces observability
docker-compose up -d
```

# Configuring the OpenTelemetry Collector

Although the above steps should deploy and configure everything, let's spend
some time on the [configuration](./k8s/otel-collector.yaml) of the Collector.
some time on the [configuration](./otel-collector-config.yaml) of the Collector.

One important part here is that, in order to enable our application to send data
to the OpenTelemetry Collector, we first need to configure the `otlp` receiver:

```yml
...
otel-collector-config: |
  receivers:
    # Make sure to add the otlp receiver.
    # This will open up the receiver on port 4317.
    otlp:
      protocols:
        grpc:
          endpoint: "0.0.0.0:4317"
  processors:
receivers:
  # Make sure to add the otlp receiver.
  # This will open up the receiver on port 4317
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
...
```
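
For the application side of this handshake, here is a minimal sketch (not necessarily identical to [main.go](./main.go)) of dialing the receiver and installing an OTLP trace exporter; the `otel-collector:4317` address assumes the service name from the `docker-compose.yaml` below:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// initTracer dials the Collector's otlp receiver and installs an OTLP
// trace exporter behind a batching span processor.
func initTracer(ctx context.Context) (*sdktrace.TracerProvider, error) {
	// The compose network makes the Collector reachable under its service
	// name; TLS is disabled because everything runs on a private network.
	conn, err := grpc.DialContext(ctx, "otel-collector:4317",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return nil, err
	}

	exporter, err := otlptracegrpc.New(ctx, otlptracegrpc.WithGRPCConn(conn))
	if err != nil {
		return nil, err
	}

	// Batch spans before export to reduce gRPC traffic.
	return sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter)), nil
}
```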

@@ -134,98 +50,28 @@ need to create the Jaeger and Prometheus exporters:

```yml
...
exporters:
  jaeger:
    endpoint: "jaeger-collector.observability.svc.cluster.local:14250"

  prometheus:
    endpoint: 0.0.0.0:8889
    namespace: "testapp"
exporters:
  jaeger:
    endpoint: "jaeger:14250"
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:8889
...
```
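
The metrics half of this pipeline is fed by the application through an OTLP metric exporter. Below is a rough sketch under the module versions in this example's `go.mod` (`otlpmetricgrpc` and `sdk/metric` at v0.34.0); the function name and the shared gRPC connection are assumptions, not necessarily what `main.go` does:

```go
package main

import (
	"context"
	"time"

	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"google.golang.org/grpc"
)

// initMeter installs an OTLP metric exporter driven by a periodic reader.
// The Collector receives the metrics over OTLP and re-exposes them to
// Prometheus on port 8889 via the prometheus exporter configured above.
func initMeter(ctx context.Context, conn *grpc.ClientConn) (*sdkmetric.MeterProvider, error) {
	exporter, err := otlpmetricgrpc.New(ctx, otlpmetricgrpc.WithGRPCConn(conn))
	if err != nil {
		return nil, err
	}

	// Collect and export metrics every two seconds.
	return sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(
			sdkmetric.NewPeriodicReader(exporter, sdkmetric.WithInterval(2*time.Second)),
		),
	), nil
}
```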

## OpenTelemetry Collector service

One more aspect in the OpenTelemetry Collector [configuration](./k8s/otel-collector.yaml) worth looking at is the NodePort service used for accessing it:

```yaml
apiVersion: v1
kind: Service
metadata:
  ...
spec:
  ports:
    - name: otlp # Default endpoint for otlp receiver.
      port: 4317
      protocol: TCP
      targetPort: 4317
      nodePort: 30080
    - name: metrics # Endpoint for metrics from our app.
      port: 8889
      protocol: TCP
      targetPort: 8889
  selector:
    component: otel-collector
  type: NodePort
```

This service binds port `4317`, used to access the otlp receiver, to port `30080` on your cluster's node, which lets us reach the Collector at the static address `<node-ip>:30080`. If you are running a local cluster, this will be `localhost:30080`. Note that you can also change this to a LoadBalancer or add an ingress extension for accessing the service.

# Running the code

You can find the complete code for this example in the [main.go](./main.go)
file. To run it, ensure you have a somewhat recent version of Go (preferably >=
1.13) and do

```bash
go run main.go
```

The example simulates an application, hard at work, computing for ten seconds
then finishing.
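
For illustration, the simulated work might look roughly like the sketch below. The span and instrument names are hypothetical stand-ins; only the `test-service` scope name (mentioned in the review discussion) and the ten-second duration come from the example itself:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"go.opentelemetry.io/otel/metric"
	"go.opentelemetry.io/otel/trace"
)

// doWork simulates ten seconds of computation: one parent span, a child
// span per iteration, and a counter incremented on every pass.
func doWork(ctx context.Context, tp trace.TracerProvider, mp metric.MeterProvider) error {
	tracer := tp.Tracer("test-service")
	meter := mp.Meter("test-service")

	// The v0.34.0 metric API groups synchronous int64 instruments under
	// SyncInt64(). "work.iterations" is a hypothetical instrument name.
	counter, err := meter.SyncInt64().Counter("work.iterations")
	if err != nil {
		return err
	}

	ctx, span := tracer.Start(ctx, "CollectorExporter-Example") // hypothetical span name
	defer span.End()

	for i := 0; i < 10; i++ {
		_, iterSpan := tracer.Start(ctx, fmt.Sprintf("Sample-%d", i))
		counter.Add(ctx, 1)
		time.Sleep(time.Second) // one second of simulated work
		iterSpan.End()
	}
	return nil
}
```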

# Viewing instrumentation data

Now the exciting part! Let's check out the telemetry data generated by our
sample application.

## Jaeger UI

First, we need to enable an ingress provider. If you've been using microk8s,
do

```bash
microk8s enable ingress
```

Then find out where the Jaeger console is living:

```bash
kubectl get ingress --all-namespaces
```

For us, we get the output

```
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
observability jaeger-query <none> * 127.0.0.1 80 5h40m
```

indicating that the Jaeger UI is available at
[http://localhost:80](http://localhost:80). Navigate there in your favorite
The Jaeger UI is available at
[http://localhost:16686](http://localhost:16686). Navigate there in your favorite
web browser to view the generated traces (the example reports its spans under the `test-service` service).

> **dmathieu (Member):** For Jaeger and Prometheus, how about giving a couple of indications on the generated data, and what to look for, rather than only the link to the interfaces?
>
> Jaeger could mention looking into `test-service` to view the generated trace. Prometheus could give a sample query.

> **Author:** @dmathieu I've updated it as you suggested 👍

## Prometheus

Unfortunately, the Prometheus operator doesn't provide a convenient
out-of-the-box ingress route for us to use, so we'll use port-forwarding
instead. Note: this is a quick-and-dirty solution for the sake of example.
You *will* be attacked by shady people if you do this in production!

```bash
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
```

Then navigate to [http://localhost:9090](http://localhost:9090) to view
Navigate to [http://localhost:9090](http://localhost:9090) to view
the Prometheus dashboard.
42 changes: 42 additions & 0 deletions example/otel-collector/docker-compose.yaml
@@ -0,0 +1,42 @@
version: "3.7"
networks:
otel-collector-example:
name: otel-collector-example

services:
go-app:
build:
context: ../..
dockerfile: $PWD/Dockerfile
networks:
- otel-collector-example
otel-collector:
image: otel/opentelemetry-collector:latest
networks:
- otel-collector-example
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
command: ["--config=/etc/otel-collector-config.yaml"]
expose:
- 4317 # Default endpoint for otlp receiver
- 8889 # Default endpoint for querying metrics
jaeger:
image: jaegertracing/all-in-one:latest
networks:
- otel-collector-example
expose:
- 14250 # Default endpoint for jaeger receiver
- 16686 # Default endpoint for jaeger query
ports:
- 16686:16686
prometheus:
image: quay.io/prometheus/prometheus:latest
networks:
- otel-collector-example
volumes:
- ./prometheus-config.yaml:/etc/prometheus/prometheus.yml
expose:
- 9090 # Default endpoint for prometheus
ports:
- 9090:9090
command: ["--config.file=/etc/prometheus/prometheus.yml"]
12 changes: 12 additions & 0 deletions example/otel-collector/go.mod
@@ -9,8 +9,11 @@ replace (

require (
	go.opentelemetry.io/otel v1.11.2
	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.34.0
	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.11.2
	go.opentelemetry.io/otel/metric v0.34.0
	go.opentelemetry.io/otel/sdk v1.11.2
	go.opentelemetry.io/otel/sdk/metric v0.34.0
	go.opentelemetry.io/otel/trace v1.11.2
	google.golang.org/grpc v1.51.0
)
@@ -22,6 +25,7 @@ require (
	github.com/golang/protobuf v1.5.2 // indirect
	github.com/grpc-ecosystem/grpc-gateway/v2 v2.7.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.11.2 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.34.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.11.2 // indirect
	go.opentelemetry.io/proto/otlp v0.19.0 // indirect
	golang.org/x/net v0.0.0-20220722155237-a158d28d115b // indirect
@@ -38,3 +42,11 @@ replace go.opentelemetry.io/otel/exporters/otlp/otlptrace => ../../exporters/otlp/otlptrace
replace go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => ../../exporters/otlp/otlptrace/otlptracegrpc

replace go.opentelemetry.io/otel/exporters/otlp/internal/retry => ../../exporters/otlp/internal/retry

replace go.opentelemetry.io/otel/metric => ../../metric

replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => ../../exporters/otlp/otlpmetric/otlpmetricgrpc

replace go.opentelemetry.io/otel/exporters/otlp/otlpmetric => ../../exporters/otlp/otlpmetric

replace go.opentelemetry.io/otel/sdk/metric => ../../sdk/metric
19 changes: 0 additions & 19 deletions example/otel-collector/k8s/jaeger.yaml

This file was deleted.