Add documentation around migration from rollup to downsampling (#107965)
This change also updated the deprecation warning on all rollup pages from "Rollups will be removed in a future version. Use <<downsampling,downsampling>> instead." to "Rollups will be removed in a future version. Please <<rollup-migrating-to-downsampling,migrate>> to <<downsampling,downsampling>> instead."
martijnvg committed May 1, 2024
1 parent 0d444ce commit d59ee6c
Showing 9 changed files with 131 additions and 9 deletions.
2 changes: 1 addition & 1 deletion docs/reference/rollup/api-quickref.asciidoc
@@ -5,7 +5,7 @@
<titleabbrev>API quick reference</titleabbrev>
++++

deprecated::[8.11.0,"Rollups will be removed in a future version. Use <<downsampling,downsampling>> instead."]
deprecated::[8.11.0,"Rollups will be removed in a future version. Please <<rollup-migrating-to-downsampling,migrate>> to <<downsampling,downsampling>> instead."]

Most rollup endpoints have the following base:

4 changes: 3 additions & 1 deletion docs/reference/rollup/index.asciidoc
@@ -2,7 +2,7 @@
[[xpack-rollup]]
== Rolling up historical data

deprecated::[8.11.0,"Rollups will be removed in a future version. Use <<downsampling,downsampling>> instead."]
deprecated::[8.11.0,"Rollups will be removed in a future version. Please <<rollup-migrating-to-downsampling,migrate>> to <<downsampling,downsampling>> instead."]

Keeping historical data around for analysis is extremely useful but often avoided due to the financial cost of
archiving massive amounts of data. Retention periods are thus driven by financial realities rather than by the
@@ -20,6 +20,7 @@ cost of raw data.
* <<rollup-understanding-groups,Understanding rollup grouping>>
* <<rollup-agg-limitations,Rollup aggregation limitations>>
* <<rollup-search-limitations,Rollup search limitations>>
* <<rollup-migrating-to-downsampling,Migrating to downsampling>>


include::overview.asciidoc[]
@@ -28,3 +29,4 @@ include::rollup-getting-started.asciidoc[]
include::understanding-groups.asciidoc[]
include::rollup-agg-limitations.asciidoc[]
include::rollup-search-limitations.asciidoc[]
include::migrating-to-downsampling.asciidoc[]
120 changes: 120 additions & 0 deletions docs/reference/rollup/migrating-to-downsampling.asciidoc
@@ -0,0 +1,120 @@
[role="xpack"]
[[rollup-migrating-to-downsampling]]
=== Migrating from {rollup-cap} to downsampling
++++
<titleabbrev>Migrating to downsampling</titleabbrev>
++++

Rollup and downsampling are two different features that allow historical metrics to be summarized at a reduced granularity.
At a high level, rollup is more flexible than downsampling, but downsampling is a more robust and
easier way to reduce the granularity of metrics.

The following aspects of downsampling are easier or more robust:

* No need to schedule jobs. Downsampling is integrated with Index Lifecycle Management (ILM) and Data Stream Lifecycle (DSL).
* No separate search API. Downsampled indices can be accessed via the search API and ES|QL.
* No separate rollup configuration. Downsampling uses the time series dimension and metric configuration from the mapping.
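
For example, a downsampled data stream can be queried with the standard search API like any other index (an illustrative request; the `sensor-data` target name is hypothetical):

[source,console]
--------------------------------------------------
GET sensor-data/_search
{
  "size": 0,
  "aggs": {
    "max_temperature": {
      "max": { "field": "temperature" }
    }
  }
}
--------------------------------------------------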

It isn't possible to migrate all rollup usages to downsampling. The main requirement
is that the data must be stored in Elasticsearch as a <<tsds,time series data stream (TSDS)>>.
Rollup usages that roll the data up by time and by all dimensions can be migrated to downsampling.

An example rollup usage that can be migrated to downsampling:

[source,console]
--------------------------------------------------
PUT _rollup/job/sensor
{
"index_pattern": "sensor-*",
"rollup_index": "sensor_rollup",
"cron": "0 0 * * * *", <1>
"page_size": 1000,
"groups": { <2>
"date_histogram": {
"field": "timestamp",
"fixed_interval": "60m" <3>
},
"terms": {
"fields": [ "node" ]
}
},
"metrics": [
{
"field": "temperature",
"metrics": [ "min", "max", "sum" ] <4>
},
{
"field": "voltage",
"metrics": [ "avg" ] <4>
}
]
}
--------------------------------------------------
// TEST[setup:sensor_index]
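
Note that, unlike downsampling, a rollup job created this way does not run until it is started explicitly with the start rollup job API:

[source,console]
--------------------------------------------------
POST _rollup/job/sensor/_start
--------------------------------------------------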

The equivalent <<tsds,time series data stream (TSDS)>> setup that uses downsampling via DSL:

[source,console]
--------------------------------------------------
PUT _index_template/sensor-template
{
"index_patterns": ["sensor-*"],
"data_stream": { },
"template": {
"lifecycle": {
"downsampling": [
{
"after": "1d", <1>
"fixed_interval": "1h" <3>
}
]
},
"settings": {
"index.mode": "time_series"
},
"mappings": {
"properties": {
"node": {
"type": "keyword",
"time_series_dimension": true <2>
},
"temperature": {
"type": "half_float",
"time_series_metric": "gauge" <4>
},
"voltage": {
"type": "half_float",
"time_series_metric": "gauge" <4>
},
"@timestamp": { <2>
"type": "date"
}
}
}
}
}
--------------------------------------------------
// TEST[continued]

////
[source,console]
----
DELETE _index_template/sensor-template
----
// TEST[continued]
////

The index template above includes the downsample configuration for a <<tsds,time series data stream (TSDS)>>.
Only the `downsampling` section is needed to enable downsampling; it indicates when to downsample data and to what fixed interval.

<1> In the rollup job, the `cron` field determines when the rollup job rolls up documents. In the index template,
the `after` field determines when downsampling rolls up documents. Note that this is the time after a rollover has been performed.
<2> In the rollup job, the `groups` field determines all dimensions of the group that documents are rolled up to. In the index template,
the fields with `time_series_dimension` set to `true` and the `@timestamp` field determine the group.
<3> In the rollup job, the `fixed_interval` field determines how timestamps are aggregated as part of the grouping.
In the index template, the `fixed_interval` field has the same purpose. Note that downsampling does not support calendar intervals.
<4> In the rollup job, the `metrics` field defines the metrics and how to store them. In the index template,
all fields with a `time_series_metric` attribute are metric fields. If a field has `gauge` as its `time_series_metric`
value, then the min, max, sum, and value count are stored for this field in the downsampled index. If a field has
`counter` as its `time_series_metric` value, then only the last value is stored for this field in the downsampled
index.
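
The gauge behavior described in <4> can be sketched outside of Elasticsearch. The following standalone Python sketch (an illustration under simplified assumptions, not the actual downsampling implementation) groups raw readings by dimension and fixed time interval, keeping the min, max, sum, and value count per bucket:

```python
from collections import defaultdict

def downsample(docs, fixed_interval_ms):
    """Group raw gauge readings by (dimension, time bucket) and keep
    min/max/sum/value_count per bucket, mimicking how downsampling
    summarizes a gauge field. Simplified sketch, not the real code."""
    buckets = defaultdict(list)
    for doc in docs:
        # Align each timestamp to the start of its fixed interval bucket.
        bucket_start = doc["timestamp"] - doc["timestamp"] % fixed_interval_ms
        buckets[(doc["node"], bucket_start)].append(doc["temperature"])
    return {
        key: {
            "min": min(vals),
            "max": max(vals),
            "sum": sum(vals),
            "value_count": len(vals),
        }
        for key, vals in buckets.items()
    }

# Hypothetical raw readings; timestamps are epoch milliseconds.
docs = [
    {"node": "a", "timestamp": 100_000, "temperature": 20.0},
    {"node": "a", "timestamp": 3_500_000, "temperature": 22.0},
    {"node": "a", "timestamp": 3_700_000, "temperature": 24.0},
]
hourly = downsample(docs, 3_600_000)  # 1h fixed interval in milliseconds
```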
2 changes: 1 addition & 1 deletion docs/reference/rollup/overview.asciidoc
@@ -5,7 +5,7 @@
<titleabbrev>Overview</titleabbrev>
++++

deprecated::[8.11.0,"Rollups will be removed in a future version. Use <<downsampling,downsampling>> instead."]
deprecated::[8.11.0,"Rollups will be removed in a future version. Please <<rollup-migrating-to-downsampling,migrate>> to <<downsampling,downsampling>> instead."]

Time-based data (documents that are predominantly identified by their timestamp) often have associated retention policies
to manage data growth. For example, your system may be generating 500 documents every second. That will generate
4 changes: 2 additions & 2 deletions docs/reference/rollup/rollup-agg-limitations.asciidoc
@@ -2,7 +2,7 @@
[[rollup-agg-limitations]]
=== {rollup-cap} aggregation limitations

deprecated::[8.11.0,"Rollups will be removed in a future version. Use <<downsampling,downsampling>> instead."]
deprecated::[8.11.0,"Rollups will be removed in a future version. Please <<rollup-migrating-to-downsampling,migrate>> to <<downsampling,downsampling>> instead."]

There are some limitations to how fields can be rolled up / aggregated. This page highlights the major limitations so that
you are aware of them.
@@ -22,4 +22,4 @@ And the following metrics are allowed to be specified for numeric fields:
- Max aggregation
- Sum aggregation
- Average aggregation
- Value Count aggregation
- Value Count aggregation
2 changes: 1 addition & 1 deletion docs/reference/rollup/rollup-apis.asciidoc
@@ -2,7 +2,7 @@
[[rollup-apis]]
== Rollup APIs

deprecated::[8.11.0,"Rollups will be removed in a future version. Use <<downsampling,downsampling>> instead."]
deprecated::[8.11.0,"Rollups will be removed in a future version. Please <<rollup-migrating-to-downsampling,migrate>> to <<downsampling,downsampling>> instead."]

[discrete]
[[rollup-jobs-endpoint]]
2 changes: 1 addition & 1 deletion docs/reference/rollup/rollup-getting-started.asciidoc
@@ -5,7 +5,7 @@
<titleabbrev>Getting started</titleabbrev>
++++

deprecated::[8.11.0,"Rollups will be removed in a future version. Use <<downsampling,downsampling>> instead."]
deprecated::[8.11.0,"Rollups will be removed in a future version. Please <<rollup-migrating-to-downsampling,migrate>> to <<downsampling,downsampling>> instead."]

To use the Rollup feature, you need to create one or more "Rollup Jobs". These jobs run continuously in the background
and rollup the index or indices that you specify, placing the rolled documents in a secondary index (also of your choosing).
2 changes: 1 addition & 1 deletion docs/reference/rollup/rollup-search-limitations.asciidoc
@@ -2,7 +2,7 @@
[[rollup-search-limitations]]
=== {rollup-cap} search limitations

deprecated::[8.11.0,"Rollups will be removed in a future version. Use <<downsampling,downsampling>> instead."]
deprecated::[8.11.0,"Rollups will be removed in a future version. Please <<rollup-migrating-to-downsampling,migrate>> to <<downsampling,downsampling>> instead."]

While we feel the Rollup function is extremely flexible, the nature of summarizing data means there will be some limitations. Once
live data is thrown away, you will always lose some flexibility.
2 changes: 1 addition & 1 deletion docs/reference/rollup/understanding-groups.asciidoc
@@ -2,7 +2,7 @@
[[rollup-understanding-groups]]
=== Understanding groups

deprecated::[8.11.0,"Rollups will be removed in a future version. Use <<downsampling,downsampling>> instead."]
deprecated::[8.11.0,"Rollups will be removed in a future version. Please <<rollup-migrating-to-downsampling,migrate>> to <<downsampling,downsampling>> instead."]

To preserve flexibility, Rollup Jobs are defined based on how future queries may need to use the data. Traditionally, systems force
the admin to make decisions about what metrics to rollup and on what interval. E.g. The average of `cpu_time` on an hourly basis. This
