
Commit 590918c

STAC-23561: add k8sattributes to operator config, explain RBAC for Otel
1 parent 4a69aa4 commit 590918c

2 files changed (+73, -14 lines)

docs/latest/modules/en/pages/setup/otel/getting-started/getting-started-k8s-operator.adoc

Lines changed: 41 additions & 3 deletions
@@ -162,6 +162,33 @@ spec:
         endpoint: <otlp-suse-observability-endpoint:port>
         compression: snappy
     processors:
+      k8sattributes:
+        passthrough: false
+        pod_association:
+        - sources:
+          - from: resource_attribute
+            name: k8s.pod.ip
+        - sources:
+          - from: resource_attribute
+            name: k8s.pod.uid
+        - sources:
+          - from: connection
+        extract:
+          metadata:
+          - k8s.namespace.name
+          - k8s.deployment.name
+          - k8s.statefulset.name
+          - k8s.daemonset.name
+          - k8s.cronjob.name
+          - k8s.job.name
+          - k8s.node.name
+          - k8s.pod.name
+          - k8s.pod.uid
+          - k8s.pod.start_time
+          labels:
+          - tag_name: $$1
+            key_regex: (.*)
+            from: pod
       memory_limiter:
         check_interval: 5s
         limit_percentage: 80
@@ -190,15 +217,15 @@ spec:
       pipelines:
         traces:
           receivers: [otlp]
-          processors: [memory_limiter, resource, batch]
+          processors: [k8sattributes, memory_limiter, resource, batch]
           exporters: [debug, spanmetrics, otlp/suse-observability]
         metrics:
           receivers: [otlp, spanmetrics, prometheus]
-          processors: [memory_limiter, resource, batch]
+          processors: [k8sattributes, memory_limiter, resource, batch]
           exporters: [debug, otlp/suse-observability]
         logs:
           receivers: [otlp]
-          processors: []
+          processors: [k8sattributes]
           exporters: [nop]
       telemetry:
         metrics:
@@ -209,6 +236,8 @@ spec:
 [CAUTION]
 ====
 *Use the same cluster name as used for installing the SUSE Observability agent* if you also use the SUSE Observability agent with the Kubernetes stackpack. Using a different cluster name will result in an empty traces perspective for Kubernetes components and will overall make correlating information much harder for SUSE Observability and your users.
+
+The `k8sattributes` processor must be the first processor in each pipeline. This lets it identify the pod that is sending the data.
 ====

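Not shown in this diff: the `k8sattributes` processor resolves pod metadata by querying the Kubernetes API, so the collector's service account needs read access to the resources it extracts metadata from. A minimal sketch of such permissions is below; the service account and namespace names are placeholders, and the setup used by this guide may already grant equivalent access.

[,yaml]
----
# Sketch only: RBAC commonly required by the k8sattributes processor.
# The subject names are placeholders for the collector's actual
# service account and namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-k8sattributes
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces", "nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["replicasets"]   # needed to resolve k8s.deployment.name
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-k8sattributes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector-k8sattributes
subjects:
  - kind: ServiceAccount
    name: opentelemetry-collector   # placeholder
    namespace: open-telemetry       # placeholder
----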

@@ -318,6 +347,15 @@ After a short while and if your pods are getting some traffic you should be able

 If you also have the Kubernetes stackpack installed the instrumented pods will also have the traces available in the xref:/use/views/k8s-traces-perspective.adoc[trace perspective].

+== Rancher RBAC
+
+For xref:/setup/security/rbac/rbac_rancher.adoc[Rancher RBAC] to work, telemetry data needs to have the following resource attributes present:
+
+* `k8s.cluster.name` - the *Cluster* name as used by the Kubernetes stackpack
+* `k8s.namespace.name` - a *Namespace* managed by a Rancher *Project*
+
+This can be achieved with a configuration such as the one above: the `k8sattributes` processor added to each pipeline provides the `k8s.namespace.name` attribute.
+
 == Next steps

 You can add new charts to components, for example the service or service instance, for your application, by following xref:/use/metrics/k8s-add-charts.adoc[our guide]. It is also possible to create xref:/use/alerting/k8s-monitors.adoc[new monitors] using the metrics and setup xref:/use/alerting/notifications/configure.adoc[notifications] to get notified when your application is not available or having performance issues.
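Not shown in this diff: the `k8s.cluster.name` attribute is typically supplied by the `resource` processor that the pipelines above already reference. The full processor configuration is outside this diff; a hedged sketch of what it likely looks like:

[,yaml]
----
# Sketch only: set k8s.cluster.name on all telemetry via the resource
# processor. Replace <your-cluster-name> with the cluster name used by
# the Kubernetes stackpack.
processors:
  resource:
    attributes:
      - key: k8s.cluster.name
        value: <your-cluster-name>
        action: upsert
----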

docs/latest/modules/en/pages/setup/otel/getting-started/getting-started-k8s.adoc

Lines changed: 32 additions & 11 deletions
@@ -6,8 +6,8 @@
 This guide provides instructions on monitoring an application.

 * The monitored application/workload running in cluster A.
-* The Open Telemetry collector running near the observed application(s), so in cluster A, and sending the data to <stackstate-product-name>.
-* <stackstate-product-name> running in cluster B, or SUSE Cloud Observability.
+* The Open Telemetry collector running near the observed application(s), so in cluster A, and sending the data to {stackstate-product-name}.
+* {stackstate-product-name} running in cluster B, or SUSE Cloud Observability.

 image::otel/open-telemetry-collector-kubernetes.png[Container instrumentation with Open Telemetry via collector running as Kubernetes deployment]

@@ -23,16 +23,16 @@ Install the OTel (Open Telemetry) collector in cluster A and configure it to:
 * Receive data from, potentially many, instrumented applications.
 * Enrich collected data with Kubernetes attributes.
 * Generate metrics for traces.
-* Forward the data to <stackstate-product-name>, including authentication using the API key.
+* Forward the data to {stackstate-product-name}, including authentication using the API key.

-NOTE: <stackstate-product-name> also retries sending data when there are connection problems.
+NOTE: {stackstate-product-name} also retries sending data when there are connection problems.

 === Create a Service Token

 There are two ways to create a service token:

-* **<stackstate-product-name> UI** - open the main menu by clicking in the top left of the screen and go to `StackPacks` > `Open Telemetry`. If you have not done so before, click the `INSTALL` button. Click the `CREATE NEW SERVICE TOKEN` button and copy the value onto your clipboard.
-* **<stackstate-product-name> CLI** - see xref:/use/security/k8s-service-tokens.adoc#_manage_service_tokens[Manage service tokens]
+* **{stackstate-product-name} UI** - open the main menu by clicking in the top left of the screen and go to `StackPacks` > `Open Telemetry`. If you have not done so before, click the `INSTALL` button. Click the `CREATE NEW SERVICE TOKEN` button and copy the value onto your clipboard.
+* **{stackstate-product-name} CLI** - see xref:/use/security/k8s-service-tokens.adoc#_manage_service_tokens[Manage service tokens]

 The service token value must be used where the instructions below mention `<SERVICE_TOKEN>`.

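Not shown in this diff: the step that stores the service token for the collector. A common pattern is to keep it in a Kubernetes secret that the collector reads as an environment variable; the namespace, secret, and key names below are illustrative and must match whatever the guide's values file expects.

[,bash]
----
# Illustrative names only — align them with the bearertokenauth setup in otel-collector.yaml.
kubectl create namespace open-telemetry
kubectl create secret generic open-telemetry-collector \
  --namespace open-telemetry \
  --from-literal=API_KEY='<SERVICE_TOKEN>'
----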

@@ -57,7 +57,7 @@ We install the collector with a Helm chart provided by the Open Telemetry projec
 helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
 ----

-Create a `otel-collector.yaml` values file for the Helm chart. Here is a good starting point for usage with <stackstate-product-name>, replace `<otlp-suse-observability-endpoint:port>` with your OTLP endpoint (see xref:/setup/otel/otlp-apis.adoc[OTLP API] for your endpoint) and insert the name for your Kubernetes cluster instead of `<your-cluster-name>`:
+Create an `otel-collector.yaml` values file for the Helm chart. Here is a good starting point for usage with {stackstate-product-name}: replace `<otlp-suse-observability-endpoint:port>` with your OTLP endpoint (see xref:/setup/otel/otlp-apis.adoc[OTLP API] for your endpoint) and insert the name of your Kubernetes cluster instead of `<your-cluster-name>`:

 .otel-collector.yaml
 [,yaml]
@@ -84,9 +84,18 @@ config:
     otlp:
       protocols:
         grpc:
-          endpoint: <otlp-suse-observability-endpoint:port>
+          endpoint: 0.0.0.0:4317
         http:
-          endpoint: <otlp-suse-observability-endpoint:port>
+          endpoint: 0.0.0.0:4318
+    # Scrape the collector's own metrics
+    prometheus:
+      config:
+        scrape_configs:
+          - job_name: opentelemetry-collector
+            scrape_interval: 10s
+            static_configs:
+              - targets:
+                  - ${env:MY_POD_IP}:8888
   extensions:
     # Use the API key from the env for authentication
     bearertokenauth:
@@ -139,12 +148,15 @@ config:
         receivers: [otlp]
         processors: []
         exporters: [nop]
+    telemetry:
+      metrics:
+        address: ${env:MY_POD_IP}:8888
 ----


 [CAUTION]
 ====
-*Use the same cluster name as used for installing the <stackstate-product-name> agent* if you also use the <stackstate-product-name> agent with the Kubernetes stackpack. Using a different cluster name will result in an empty traces perspective for Kubernetes components and will overall make correlating information much harder for <stackstate-product-name> and your users.
+*Use the same cluster name as used for installing the {stackstate-product-name} Agent* if you also use the {stackstate-product-name} agent with the Kubernetes stackpack. Using a different cluster name will result in an empty traces perspective for Kubernetes components and will overall make correlating information much harder for {stackstate-product-name} and your users.
 ====

 Install the collector using the configuration file:
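Not shown in this diff, but worth noting: with the additions above the collector exposes its internal metrics on port 8888 (`${env:MY_POD_IP}` is expected to be injected by the chart from the pod IP) and scrapes them with the `prometheus` receiver, so they flow through the metrics pipeline. A quick local check, with placeholder namespace and deployment names:

[,bash]
----
# Placeholder names — use your collector's actual namespace and deployment.
kubectl -n open-telemetry port-forward deployment/opentelemetry-collector 8888:8888
# In a second terminal, the collector's own metrics should be listed:
curl -s http://localhost:8888/metrics | head
----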
@@ -170,12 +182,21 @@ For other languages follow the documentation on https://opentelemetry.io/docs/la

 == View the results

-Go to <stackstate-product-name> and make sure the Open Telemetry Stackpack is installed (via the main menu \-> Stackpacks).
+Go to {stackstate-product-name} and make sure the Open Telemetry Stackpack is installed (via the main menu \-> Stackpacks).

 If your pods are getting traffic, you should be able to find them under their service name in the Open Telemetry \-> services and service instances overviews. Traces appear in the xref:/use/traces/k8sTs-explore-traces.adoc[trace explorer] and in the xref:/use/views/k8s-traces-perspective.adoc[trace perspective] for the service and service instance components. Span metrics and language specific metrics (if available) become available in the xref:/use/views/k8s-metrics-perspective.adoc[metrics perspective] for the components.

 If you also have the Kubernetes stackpack installed the instrumented pods will also have the traces available in the xref:/use/views/k8s-traces-perspective.adoc[trace perspective].

+== Rancher RBAC
+
+For xref:/setup/security/rbac/rbac_rancher.adoc[Rancher RBAC] to work, telemetry data needs to have the following resource attributes present:
+
+* `k8s.cluster.name` - the *Cluster* name as used by the Kubernetes stackpack
+* `k8s.namespace.name` - a *Namespace* managed by a Rancher *Project*
+
+This can be achieved with a configuration such as the one above, where the `kubernetesAttributes` preset of the `opentelemetry-collector` Helm chart injects a `k8sattributes` processor into each pipeline.
+
 == Next steps

 You can add new charts to components, for example, the service or service instance, for your application, by following xref:/use/metrics/k8s-add-charts.adoc[our guide]. It is also possible to create xref:/use/alerting/k8s-monitors.adoc[new monitors] using the metrics and setup xref:/use/alerting/notifications/configure.adoc[notifications] to get notified when your application is not available or having performance issues.
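The full values file is only partially visible in this diff; enabling the preset that the Rancher RBAC paragraph refers to looks roughly like this in `otel-collector.yaml`:

[,yaml]
----
# Sketch only: the opentelemetry-collector Helm chart preset that injects a
# k8sattributes processor (and the RBAC it requires) into every pipeline.
presets:
  kubernetesAttributes:
    enabled: true
    # optionally copy all pod labels onto telemetry as resource attributes
    extractAllPodLabels: true
----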
