Compare commits

...

81 Commits

Author SHA1 Message Date
Jayice
c558291847 update CHANGELOG.md 2026-04-17 14:34:07 +08:00
Jayice
9bd219fdc7 address review 2026-04-17 14:31:52 +08:00
Jayice
ec9d37ce36 improve code style 2026-04-17 13:55:33 +08:00
Jayice
607630b9f5 add unit test 2026-04-17 13:52:07 +08:00
Jayice
f4df18d2db add documentation for obfuscation 2026-04-17 13:03:25 +08:00
Jayice
29bc38871d address review 2026-04-16 15:49:04 +08:00
Jayice
3f35399c24 support obfuscation for rw 2026-04-15 15:09:40 +08:00
Artem Fetishev
8a20ccf21d apptest: sync code between branches and fix backup/restore range queries (#10799)
Fix app tests:

1. Sync code between vmsingle and vmcluster: it must be the same because
apptest does not differentiate between branches; it just runs pre-built
binaries.
2. Simplify range queries in the backup/restore test so that they do not
depend on the interval between samples to work correctly.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-14 07:18:09 +02:00
Max Kotliar
1a01dbbec7 docs/changelog: fix unwanted release tag change
The tag v1.138.0 was unintentionally changed to v1.139.0 due to a bug in
the release script.

Reverting the change. The bug will be addressed separately.
2026-04-13 14:52:21 +03:00
f41gh7
630e413812 docs: update flags with actual v1.140.0 binaries
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:34:04 +02:00
f41gh7
b639e7e641 docs: bump version to v1.140.0
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:31:40 +02:00
f41gh7
858c318e1f deployment/docker: bump version to v1.140.0 2026-04-13 11:31:11 +02:00
f41gh7
b8327ce09c docs: mention new LTS releases
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:16:30 +02:00
Aliaksandr Valialkin
7514511c68 app/vmauth/main.go: clarify comments for bufferedBody struct a bit
This is a follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10677#discussion_r3064731250
2026-04-11 09:42:32 +02:00
f41gh7
33d524bf13 follow-up for d07c1c73d1
Move the bugfix into the current release
2026-04-10 19:37:14 +02:00
Alexander Frolov
d07c1c73d1 lib/writeconcurrencylimiter: prevent deadlock at IncConcurrency
Previously, (*writeconcurrencylimiter.Reader).Read() could permanently leak concurrency tokens from the -maxConcurrentInserts semaphore.

Consider the following example:
* GetReader() acquires a token, then PutReader() unconditionally releases it.
* Read() calls DecConcurrency() before the underlying I/O and IncConcurrency() after it. If IncConcurrency() returns an error, Read() returns without holding a token.
* Each such failure permanently removes one slot from the concurrencyLimitCh semaphore. Slots leak one by one until the channel is fully drained, at which point DecConcurrency() blocks forever, deadlocking ingestion on vmstorage.

This commit adds tracking of obtained tokens to the reader, which prevents token leakage.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10784
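The token-tracking idea above can be sketched as follows. This is an illustrative sketch only, assuming hypothetical names (`tokenTrackingReader`, `limitCh`); it is not the actual VictoriaMetrics implementation:

```go
package main

import "fmt"

// tokenTrackingReader records whether it currently holds a concurrency token,
// so acquire/release become idempotent and a slot can never leak from the
// semaphore channel.
type tokenTrackingReader struct {
	limitCh  chan struct{} // semaphore: one slot per allowed concurrent insert
	hasToken bool          // whether this reader currently holds a slot
}

// acquire takes a token unless the reader already holds one.
func (r *tokenTrackingReader) acquire() {
	if r.hasToken {
		return
	}
	r.limitCh <- struct{}{}
	r.hasToken = true
}

// release returns the token only if it is actually held, preventing the
// one-by-one slot drain described above.
func (r *tokenTrackingReader) release() {
	if !r.hasToken {
		return
	}
	<-r.limitCh
	r.hasToken = false
}

func main() {
	r := &tokenTrackingReader{limitCh: make(chan struct{}, 2)}
	r.acquire()
	r.release()
	r.release() // double release is now a no-op instead of draining a slot
	fmt.Println("slots in use:", len(r.limitCh))
}
```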
2026-04-10 19:35:59 +02:00
f41gh7
a896673c42 CHANGELOG.md: cut v1.140.0 release 2026-04-10 17:02:32 +02:00
f41gh7
c60ab2d57a make docs-update-version 2026-04-10 16:54:18 +02:00
f41gh7
49e51611d7 make vmui-update 2026-04-10 16:51:13 +02:00
Hui Wang
902ca83177 app/vmalert: adopt additional rule states in the list rules API
In Grafana, the alert list panel can use VictoriaMetrics as a datasource
and call `/api/v1/rules` api with [specific
states](https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#alert-instance-states).
See
https://play-grafana.victoriametrics.com/d/febljk0a32qyoa/3e68cf3?orgId=1&from=now-1h&to=now&timezone=browser&var-prometheus_datasource=P4169E866C3094E38&var-jaeger_datasource=P14D5514F5CCC0D1C&var-victorialogs_datasource=PD775F2863313E6C7&var-service_namespace=$__all&var-service_name=checkout&refresh=5m&editPanel=40.
Some states are already defined in vmalert, although with different
names. Others, such as "recovering", are currently undefined.
This pull request adopts all these states, rather than failing the request.

The above panel request also uses the `matcher` param to filter rules.
However,
[Prometheus](https://prometheus.io/docs/prometheus/latest/querying/api/#rules)
also does not support this parameter and simply ignores it, so I don't
think vmalert needs to support it now.

JFYI, the grafana [Alerting
page](https://play-grafana.victoriametrics.com/alerting) does not
include any of the mentioned `state` or `matcher` parameters in rule
listing requests to the datasource. Filtering is handled by the Grafana
frontend, so most users are not affected by partial support for
filtering in backend products.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10778
2026-04-10 16:47:25 +02:00
Phuong Le
66e3f8736b ci: remove automatic Codecov reporting from test workflow (#10780)
This removes automatic Codecov reporting from VictoriaMetrics CI. This
change keeps local coverage generation available, but removes automatic
PR noise (such as
[this](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10625#issuecomment-4084390659))
and unnecessary CI overhead.
2026-04-10 16:45:11 +02:00
f41gh7
532fcc3dfe docs: remove reverted commit changelog
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-10 16:35:29 +02:00
Aliaksandr Valialkin
b003d6c6ae Revert "app/vmauth: align request body buffering flags"
This reverts commit b3c03c023c.

Reason for revert: the original logic was correct from the user's perspective:

- The -maxRequestBodySizeToRetry command-line flag controls the size of the request body,
  which could be retried on backend failure. The meaning of this flag wasn't changed after
  the introduction of the -requestBufferSize flag in the commit e31abfc25c
  (see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309 )

- The -requestBufferSize flag controls the size of the buffer for reading the request body
  before sending it to the backend and before applying concurrency limits.

These flags are independent from the user's perspective. The fact that these flags share the implementation
shouldn't be known to the user - this is an implementation detail, which allows avoiding double buffering.

Both flags enable request buffering. If the user wants to disable all request buffering,
then both flags must be set to 0. That's why these flags are cross-mentioned in their -help descriptions.

Also the reverted commit had the following issues:

- It reduced the default value for the -requestBufferSize flag from 32KiB to 16KiB.
  The 32KiB value has been calculated and justified at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309 .
  It shouldn't increase vmauth memory usage too much for typical workloads.
  For example, if vmauth handles 10K concurrent requests, then the memory overhead for the request buffering
  will be 10K*32KiB=320MiB. This is a small price for being able to efficiently handle 10K concurrent requests.

- It added a dot to the end of the https://docs.victoriametrics.com/victoriametrics/vmauth/#request-body-buffering link
  in the description of the -requestBufferSize flag. This breaks clicking the link in some environments,
  since the trailing dot is considered part of the URL.

- It added a superfluous whitespace in front of the 'Disabling request buffering' text inside the description
  for the -requestBufferSize flag.

- It introduced unnecessary complexity for the user by mentioning that the zero value
  at -maxBufferSize disables buffering for request retries (these things must be independent
  from the user's perspective).

- It changed the bufferedBody logic in non-trivial ways, which aren't related to the original issue.
  If these changes are needed, then they must be justified in a separate issue and must be prepared
  in a separate pull request / commit.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10675
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10677
2026-04-10 15:55:47 +02:00
Aliaksandr Valialkin
8fa0fae05a lib/protoparser/protoparserutil: fix encoding -> contentType in the description of the ReadUncompressedData function
This is a follow-up for the commit bed7cbd0a4
2026-04-10 15:20:27 +02:00
Aliaksandr Valialkin
3fe2ec7bde docs/victoriametrics/Articles.md: add https://medium.com/airbnb-engineering/building-a-high-volume-metrics-pipeline-with-opentelemetry-and-vmagent-c714d6910b45 2026-04-10 13:28:49 +02:00
Max Kotliar
6389979bce docs/changelog: add thank you for bugfix contribution 2026-04-10 13:08:28 +03:00
Max Kotliar
210fd0ae15 docs/changelog: add thank you for the contribution 2026-04-10 13:06:08 +03:00
Noureldin
f95b483a13 lib/storage: fixes data race at startFreeDiskSpaceWatcher
Previously, Storage.table was initialized after startFreeDiskSpaceWatcher was called.
This created a potential data race condition: if openTable took a long time to complete
and freed disk space during that window, the free disk space watcher could read an
uninitialized (or partially initialized) Storage.table, leading to an invalid memory
address or nil pointer dereference panic.

This commit properly initializes s.isReadOnly state during storage start and
starts FreeDiskSpaceWatcher after openTable.

Bug was introduced in github.com/VictoriaMetrics/VictoriaMetrics/commit/27b958ba8bc66578206ddac26ccf47b2cc3e8101

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10747
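The initialization-ordering fix can be sketched roughly like this. All names here (`storage`, `mustOpen`, the string standing in for the table) are illustrative assumptions, not the real lib/storage code:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// storage models the race: a background watcher must never start before the
// table it reads is fully initialized.
type storage struct {
	table  atomic.Pointer[string] // stands in for the real table struct
	wg     sync.WaitGroup
	stopCh chan struct{}
}

func (s *storage) mustOpen() {
	// 1. Open the table first (on real data this may take a long time).
	t := "table"
	s.table.Store(&t)
	// 2. Only then start the watcher: it is now guaranteed to observe a
	//    fully initialized table instead of nil.
	s.startFreeDiskSpaceWatcher()
}

func (s *storage) startFreeDiskSpaceWatcher() {
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		if s.table.Load() == nil {
			panic("BUG: watcher started before table initialization")
		}
		<-s.stopCh
	}()
}

func main() {
	s := &storage{stopCh: make(chan struct{})}
	s.mustOpen()
	close(s.stopCh)
	s.wg.Wait()
	fmt.Println("watcher observed an initialized table")
}
```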
2026-04-10 08:33:49 +02:00
Hui Wang
b71c37e20a app/vmalert: align group evaluation time with the eval_offset option
Align group evaluation time with the `eval_offset` option to allow users
to manage group execution more effectively by understanding the exact
time each group will be scheduled, particularly in cases of spreading
rule execution within a window, chaining groups, or debugging data delay
issue.

If the group evaluation takes less than the group interval, but the
initial evaluation combined with the additional restore operation
exceeds the group interval, the evaluation time will be gradually
corrected in subsequent evaluations, as the interval ticker schedule
remains unchanged.

For groups without `eval_offset`, this change also ensures that all
evaluations follow the interval. Previously, the gap between the first
and second evaluations was larger than the interval. And the
`eval_delay` continues to help prevent partial responses.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10772.
2026-04-10 08:31:36 +02:00
Aliaksandr Valialkin
c27b5f5dfe docs/victoriametrics/vmauth.md: fix link to concurrency limiting chapter
The correct link must be https://docs.victoriametrics.com/victoriametrics/vmauth/#concurrency-limiting
instead of https://docs.victoriametrics.com/victoriametrics/vmauth/#concurrency-limits

The incorrect link has been introduced in the commit e31abfc25c
2026-04-09 19:37:40 +02:00
Max Kotliar
0a31eacb3d lib/{osinfo,appmetrics}: Move vm_os_info metric code to lib/appmetrics package (#10776)
Follow-up commit for
211fb08028

Address @f41gh7 review comments:
- Move code from `lib/osinfo` to `lib/appmetrics`.
- Make the logic private.
- Use metrics.WriteGaugeUint64 func.
- Remove registration logic from `app/xxx/main.go`.
- Remove `lib/osinfo` package.
2026-04-09 18:32:47 +03:00
Artem Fetishev
70b0115ea6 lib/storage: reuse nextDayMetricIDs during the first hour of the day (#10704)
At 00:00 UTC the ingested samples start to have timestamps for the new
day (the ingested samples are always recent). Even though there was a
next-day prefill of the per-day index during the last hour of the day,
some performance degradation is still possible.

For example, in https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10698
it is manifested as `vminsert-to-vmstorage connection saturation` peaks
right after midnight.

A possible hypothesis for why this is happening: at midnight,
currHourMetricIDs is empty and prevHourMetricIDs cannot be used because
it holds metricIDs for the previous day. So the ingestion logic hits
dateMetricIDsCache, which may not have the metricID in its read-only
buffer and therefore must acquire a lock to check its prev read-only
buffer or read-write buffer. This creates lock contention and therefore
raises ingestion request latency.

A solution to this could be reusing the nextDayMetricIDs during the
first hour of the day. During this time, it is equivalent to
currHourMetricIDs.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Signed-off-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-04-09 16:33:42 +02:00
Max Kotliar
dfafd14767 apptest: Improve TestSingleVMAgentDropOnOverload stability (#10774)
Previously the test could fail on resource-constrained runners because
a remoteWrite retry happens before the assertion in:

```
    waitFor(
        func() bool {
            return vmagent.RemoteWriteRequests(t, url1) == 1 &&
                vmagent.RemoteWriteRequests(t, url2) == 1
        },
    )
```

Because of the retry, the metric jumps to two and the assertion is never satisfied.

The commit explicitly postpones retries so there is no race condition.

Failed CI job:

https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/24186679213/job/70593055140

PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10774

2026-04-09 16:57:39 +03:00
Max Kotliar
e3fdbc8341 docs/changelog: cleanup follow-up on e1a9901654
e1a9901654
2026-04-09 15:04:48 +03:00
Max Kotliar
1bf442537f docs/changelog: cleanup. follow-up on 211fb08028 commit
211fb08028
2026-04-09 15:01:44 +03:00
JAYICE
211fb08028 introduce os kernel version information metric (#10746)
The commit introduces the `vm_os_info` metric, which is exposed by all VM binaries by default. It provides visibility into the operating system version on which VictoriaMetrics is running, helping with troubleshooting environment-specific issues, like known kernel or fs bugs. 

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10481
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10746

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-09 14:43:25 +03:00
Yury Moladau
846124e280 app/vmui: generate CSV format using /api/v1/labels (#10771)
The `Export query` button on the `Raw Query` tab now fetches the labels of the executed query and composes the export `format` based on that list of labels. This ensures that all query response labels are preserved in the CSV export.

Also, the commit removes the addition of the CSV header in the frontend. Now the header is added by the backend (see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10706).

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10667
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10771
Duplicate of: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10737

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Co-authored-by: lawrence3699 <lawrence3699@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-09 14:18:01 +03:00
andriibeee
e1a9901654 vmselect: add CSV header support for export/import (#10706)
Export (/api/v1/export/csv) now always writes a header row matching the requested format fields. Examples:

```
  # format=__timestamp__:unix_ms,__value__,job,instance
  __timestamp__:unix_ms,__value__,job,instance
  1704067200000,42.5,node,localhost:9090
```

Import (/api/v1/import/csv) gains auto-detection logic: the first row is skipped if any timestamp column fails timestamp parsing or any metric value column fails float parsing. If the first row is not detected as headers, it is parsed as data. This makes the import backward compatible. 

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10666
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10706


---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-09 14:00:39 +03:00
dependabot[bot]
5d0cf1d4a5 build(deps): bump vite from 8.0.2 to 8.0.7 in /app/vmui/packages/vmui (#10761)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 8.0.2 to 8.0.7.

https://github.com/vitejs/vite/blob/v8.0.7/packages/vite/CHANGELOG.md

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-09 13:16:01 +03:00
Pablo (Tomas) Fernandez
cd3d297a3d docs: update playground page (#10754)
This change reverts part of the changes in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10686

Motivation: the docs added in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10686 are in most cases too verbose, AI-generated, and of low practical value.

The improvement goal: remove bloat from the docs and keep them practical and useful.

What it does:
- Completely removes items from the sidebar
- Moves the content of the most important playground pages to the
`/playground/` stub (README.md). Use H2s for each playground.
- Updates and cleans the text.
- Removes the individual children pages in the playground category (keep
only the `/playgrounds/` page/stub and remove the children).
- Removes items as these don't really need much introduction or aren't
playgrounds:
  - log to logsql: a conversion tool
  - sql to logsql: same
- Adds a Grafana playground section

Links of child pages will become invalid. We don't preserve them as this doc is pretty new (1w on prod) and is unlikely to have persisted links elsewhere already.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2026-04-09 12:08:48 +02:00
f41gh7
52f4d0f055 follow-up for 72c9e9377c
Move changelog entry to the upcoming release section
2026-04-09 11:38:36 +02:00
Hui Wang
72c9e9377c app/vmalert: expose remotewrite queue_size metrics
This commit adds new metrics `vmalert_remotewrite_queue_capacity` and `vmalert_remotewrite_queue_size`, which are updated with each push; their
update frequency depends on `-remoteWrite.concurrency` and
`-remoteWrite.flushInterval`.

They don't account for the pending data within each pusher's request, but they
should provide a general indication of the queue usage.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10765
2026-04-09 11:22:38 +02:00
andriibeee
0aaa741b5b lib/awsapi: add support for named AWS profile to ec2_sd_config
Add support for named AWS profiles in ec2_sd_config, matching Prometheus behavior.

Example:

```text
~/.aws/config:
[profile account-one]
source_profile = root
role_arn = arn:aws:iam::000000000001:role/prometheus
```

```yaml
scrape_configs:
  - job_name: ec2
    ec2_sd_configs:
      - profile: account-one
```

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1685
2026-04-09 11:17:17 +02:00
f41gh7
0e9870b7a9 vendor: run go get -u ./lib/...
go get -u ./app/...
go mod tidy -compat=1.26
go mod vendor
2026-04-09 09:34:03 +02:00
Artem Fetishev
accb06d131 lib/storage: refactor storage synctests
Extract repeated code from nextDayMetricIDs synctests into separate
funcs to make the code more readable.

The change was originally introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10704 and was
extracted into a separate PR to keep the original change simple.
2026-04-09 09:07:37 +02:00
f41gh7
1787bce6cb app/vminsert: optimise per-insert-request memory buffer size
Previously, vminsert did not account for the ingest concurrency limit in buffer size calculation.
This could lead to excessively large buffers and OOM errors when the concurrency limit was reached.

This commit fixes the buffer size calculation by separating the `insertCtx` and `storageNode` buffer size limits.

`storageNode` buffer size is set to a larger value, as it is allocated per configured `-storageNode`
and is independent of the concurrency limit.

`insertCtx` buffer size now accounts for the configured concurrency limit
and calculates the maximum buffer size accordingly.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10725
2026-04-09 08:48:37 +02:00
JAYICE
141febd413 app/vmselect: disable partial responses for cluster native requests
Previously, vmselect in cluster-native mode could return partial responses to upstream vmselect.
Since upstream vmselect expects full responses (mimicking vmstorage behavior),
partial responses must be disabled in cluster-native mode.
This prevents incomplete responses from being cached at the upstream vmselect level.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10678
2026-04-09 08:48:13 +02:00
0e4ef622
256eff061d docs/victoriametrics/stream-aggregation: fix rate_sum link (#10756)
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8349 updated the recommendation for histogram aggregation from `total` to `rate_sum`, but missed one of the links.

PR: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10756

2026-04-08 13:09:58 +03:00
Zakhar Bessarab
fa1dd0ec0a app/vmagent/remotewrite: automatically set series limits to MaxInt32 when setting value to -1 (#9614)
Automatically set daily and hourly series limits to `MaxInt32` when `remoteWrite.maxHourlySeries` or `remoteWrite.maxDailySeries` is set to `-1`.

This change addresses a usability issue with the cardinality limiter. Users may want to enable the limiter to observe its metrics before deciding on an appropriate limit. However, the underlying bloom filter only supports `int32`, so setting large values can lead to overflow.

With this PR:
* Setting either flag to `-1` is treated as “no practical limit” and internally mapped to `math.MaxInt32`
* Values exceeding `int32` are safely clamped to `MaxInt32` to prevent overflow

This allows users to enable the limiter for estimation purposes without risking invalid configurations or runtime issues.

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9614

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-04-08 12:54:27 +03:00
Max Kotliar
6337dfc472 deployment/docker: update Go builder from Go1.26.1 to Go1.26.2
See
https://github.com/golang/go/issues?q=milestone%3AGo1.26.2%20label%3ACherryPickApproved
2026-04-08 12:43:29 +03:00
JAYICE
0a256002e5 lib/promscrape: update last scrape result only when current scrape is successful
Previously, the last scrape result was unconditionally updated, despite possible scrape errors.

The commit updates the last scrape result only on a successful scrape. This properly accounts for the `scrape_series_added` metric and aligns it with the same metric in Prometheus.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10653
2026-04-06 17:14:47 +02:00
Nikolay
b3c03c023c app/vmauth: align request body buffering flags
The previously introduced flag `requestBufferSize` raised the default value for
the in-memory buffer from 16KB to 32KB. It could increase memory usage for
vmauth. It also made it unclear how to actually disable request buffering.

This commit aligns the flags' default value to 16KB and disables request
buffering if either flag's value is 0, as mentioned in the flags' descriptions.
If either flag has a non-default value, that value is used as the max size
for the request buffer. If both flags are modified, the bigger value wins.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10675
2026-04-06 09:51:44 +02:00
Hui Wang
80e2f29761 app/vmalert: add random jitter to concurrent periodical flushers targeting the remote write destination
I expect the change to help in two ways:
1. Spreading remote write flushes over the flush interval to avoid
congestion at the remote write destination;
2. Enhancing queue data consumption. Currently, all flushers may always
flush data simultaneously, resulting in periods where no flushers are
consuming data from the queue, which increases the risk of reaching the
queue limit `remoteWrite.maxQueueSize` even with an increased
`remoteWrite.concurrency`. By making the flushers more dispersed, it is
more likely that some flushers are consistently consuming data from the
queue, which should make queue management easier.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10729/
2026-04-06 09:50:20 +02:00
Hui Wang
df34ba3ba2 app/vmalert: expose new histograms to provide better visibility into remote write request sizes
The new histograms should help with debugging whether remote write
pushes are efficient (pushes can be underutilized due to a small flush
interval), like in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10693 and
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10536. This
enhanced visibility will allow related parameters such as
`-remoteWrite.maxBatchSize`, `-remoteWrite.maxQueueSize`,
`-remoteWrite.flushInterval` to be tuned accordingly.

Eventually, `vmalert_remotewrite_sent_rows_total`
and `vmalert_remotewrite_sent_bytes_total` could be deprecated, but it's also fine to leave
them as they are since they're small counters.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10727
2026-04-06 09:49:17 +02:00
sias32
10dd45c4fd dashboards: improve alert statistics (#10571)
Changes:

- Added the number of `pending alerts` and `firing alerts`
- Improved `transformations` for the panel - FIRING over time by group and rules
- Added sorting for the panel - FIRING over time by rule

Signed-off-by: sias32 <sias.32@yandex.ru>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-03 21:27:19 +03:00
Max Kotliar
a3294b5aa2 docs/guide: fix free space calculation factor in capacity planning formula
Replace 1.2 multiplier with 1.25 in disk space estimation formula.

1.2 only provides ~16.7% free space, while the docs recommend keeping
20%. Using 1.25 correctly accounts for 20% free space.

Inspired by
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10394
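The arithmetic behind the factor is easy to verify: to keep 20% of the disk free, data may occupy at most 80% of capacity, so capacity = data / 0.8 = data * 1.25, while the old 1.2 factor leaves only 1 - 1/1.2 ≈ 16.7% free. A quick check:

```go
package main

import "fmt"

// requiredDiskGiB applies the capacity-planning factor to the expected data
// size; with factor 1.25 the resulting disk keeps exactly 20% free space.
func requiredDiskGiB(dataGiB, factor float64) float64 {
	return dataGiB * factor
}

func main() {
	data := 800.0 // GiB of expected data
	disk := requiredDiskGiB(data, 1.25)
	fmt.Printf("disk=%.0f GiB, free fraction=%.3f\n", disk, (disk-data)/disk)
	// disk=1000 GiB, free fraction=0.200
}
```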
2026-04-03 21:20:01 +03:00
Zhu Jiekun
4438454567 vendor: update metrics package with fix unsupported metric type for summary (#10745)
Fix `unsupported` metric type display in exposed metric metadata for
summaries and quantiles by bumping `metrics` SDK version.

This `unsupported` type exists when a summary is not updated within a
certain time window. See https://github.com/VictoriaMetrics/metrics/issues/120 and pull
request https://github.com/VictoriaMetrics/metrics/pull/121 for details.

Signed-off-by: Zhu Jiekun <jiekun@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-03 16:06:47 +03:00
Max Kotliar
71af1ee5f1 .github: Set 21-day cooldown to dependabot updates (#10740)
Recent supply chain attacks on GitHub Actions and npm packages show the
risk of pulling dependency updates too quickly:
-
https://socket.dev/blog/trivy-under-attack-again-github-actions-compromise
-
https://www.stepsecurity.io/blog/axios-compromised-on-npm-malicious-versions-drop-remote-access-trojan
2026-04-03 15:51:07 +03:00
Evgeny
e00fb7e605 app/vmagent: add per-URL -remoteWrite.disableMetadata
Add per-URL `-remoteWrite.disableMetadata` flag to control metadata
sending for each remote storage independently.

After v1.137.0 enabled `-enableMetadata` by default, metadata is sent to
ALL remote write targets, even those with relabeling filters that drop
most metrics. This causes unnecessary growth in
`vmagent_remotewrite_requests_total` and a significant increase in
network load for heavily filtered remote write destinations.
2026-04-03 10:32:34 +02:00
Roman Khavronenko
5e2ee00504 app/vmauth: mention that vmauth can be used with other components
A cosmetic change to highlight that vmauth can be used with
components other than VictoriaMetrics.
2026-04-03 10:27:43 +02:00
JAYICE
de2bc4237a lib/backup/s3: retry the requests that failed with unexpected EOF
When the network between the client and the s3 server is unstable, the client may encounter temporary io.EOF errors when reading the response from the s3 server.
Currently, the s3 sdk in vmbackup uses the default retry policy. However, this default retry policy won't retry when the s3 sdk meets an unexpected EOF. This means that a temporary unexpected EOF error will cause the backup task to fail.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10699
2026-04-03 10:26:58 +02:00
Fred Navruzov
3e51f277bd docs/vmanomaly: v1.29.2 (#10741)
update docs to vmanomaly v1.29.2 release

Signed-off-by: Fred Navruzov <fred-navruzov@users.noreply.github.com>
2026-04-02 22:02:26 +03:00
Roman Khavronenko
5723339525 docs: mention https://victoriametrics.com/blog/victoriametrics-remote-write/ (#10726)
Add link to blogpost with detailed information about zstd+rw protocol.
This PR is based on question in community channel about implementation
details.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-02 16:30:10 +03:00
Max Kotliar
3b986ad326 docs/changelog: add thank for contribution 2026-04-02 15:58:23 +03:00
Max Kotliar
08dd38d4a0 vendor: update https://github.com/VictoriaMetrics/metricsql from v0.85.0 to v0.86.0
It contains https://github.com/VictoriaMetrics/metricsql/pull/63, which
reduces the number of parentheses added.

It should improve the prettify functionality in vmui
2026-04-02 15:42:01 +03:00
Aliaksandr Valialkin
815cc97952 vendor: update github.com/VictoriaMetrics/metrics from v1.42.0 to v1.43.0 2026-04-02 14:18:30 +02:00
Dmytro Kozlov
93d71e7106 vmctl: add thanos migration mode (#10659)
Implemented dedicated thanos migration mode for vmctl to migrate data from Thanos installations to VictoriaMetrics.

Key features:
1. Raw and downsampled blocks support: Reads both raw blocks
(resolution=0) and downsampled blocks (5m/1h resolution) directly from
Thanos snapshots
2. All aggregate types: Imports count, sum, min, max, and counter
aggregates from downsampled blocks as separate metrics with resolution
and type suffixes (e.g., metric_name:5m:count)
3. Dedicated flags: Uses `--thanos-*` prefixed flags (--thanos-snapshot,
--thanos-concurrency, --thanos-filter-time-start,
--thanos-filter-time-end, --thanos-filter-label,
--thanos-filter-label-value, --thanos-aggr-types)
4. Selective aggregate import: Use `--thanos-aggr-types` to import only
specific aggregates

Usage:
```
vmctl thanos --thanos-snapshot /path/to/thanos-data --vm-addr http://victoria-metrics:8428
```

Closes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9262

Signed-off-by: Dmytro Kozlov <d.kozlov@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2026-04-02 14:51:01 +03:00
Aliaksandr Valialkin
577b161343 docs/victoriametrics/changelog/CHANGELOG.md: add a description for the change in the commit dd2d6807e4 2026-04-02 13:18:38 +02:00
Mehrdad Banikian
dd2d6807e4 Add split phase metrics for filestream fsync operations (#10493)
## Summary

This PR implements split phase metrics for filestream operations as
requested in #10432.

### Changes

- Added `vm_filestream_fsync_duration_seconds_total` metric to track
fsync syscall duration separately
- Added `vm_filestream_fsync_calls_total` metric to count fsync calls
- Added `vm_filestream_write_syscall_duration_seconds_total` metric to
track write syscall duration (previously mixed with flush time)
- Refactored `MustClose()` and `MustFlush()` to use new `flush()` and
`sync()` helper methods
- Kept `vm_filestream_write_duration_seconds_total` for backward
compatibility

### Problem Solved

Previously, `vm_filestream_write_duration_seconds_total` was being
incremented in two places:
1. `statWriter.Write()` - triggered by `bw.Flush()` and `bw.Write()`
2. `Writer.MustFlush()` - which included the above process, leading to
double-counting

This made it impossible to distinguish between write syscall time and
fsync time, which is critical for diagnosing storage latency issues.

### Solution

The new metrics allow users to:
- Distinguish "flush got slower" vs "fsync got slower" using metrics
only
- No file path labels (bounded cardinality)
- No double-counting between metrics

### Testing

- Code compiles successfully
- All existing metrics are preserved for backward compatibility

Closes #10432

---------

Signed-off-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Signed-off-by: Aliaksandr Valialkin <valyala@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
2026-04-02 13:14:33 +02:00
Aliaksandr Valialkin
e38e25b756 app/vmagent/remotewrite: improve the readability of the parseRetryAfterHeader() function a bit
- Use shorter name for its arg: retryAfterString -> s. This is OK to do because the function is small enough,
so it is easier to read 's' instead of 'retryAfterString' in multiple places of the function.

- Remove the name for the returned value - retryAfterDuration, since it only confuses the reader.

This is a follow-up for the commit 5319acb8ed , which introduced this function.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6097
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6124
2026-04-02 12:52:59 +02:00
Vadim Alekseev
bc708c8568 lib/timeutil: introduce backoff timer struct (#10714)
### Describe Your Changes

I noticed that the backoff timer logic is repeated across multiple
packages. I've implemented a universal wrapper to avoid duplicating this
logic. This structure is already [actively
used](2aa0ea10bb/app/vlagent/kubernetescollector/backoff_timer.go (L11))
for the Kubernetes Collector in vlagent and can be reused in vlagent's
remotewrite. I've also included a usage example in this PR so you can
evaluate its utility.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-04-02 12:31:28 +02:00
Aliaksandr Valialkin
cd73472a3e docs/victoriametrics/Articles.md: add https://mirastacklabs.ai/blog/chunk-split-caching/ 2026-04-01 22:42:15 +02:00
Aliaksandr Valialkin
527d09653a lib/storage: remove MetricNamesStatsResponse and MetricNamesStatsRecord types
These types hide public types from lib/storage/metricnamestats package.
These types do not resolve any practical issues. Instead, they add a level of indirection,
which complicates reading and understanding the code.

These types were introduced in the commit 795d3fe722
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6145
2026-04-01 22:25:50 +02:00
Aliaksandr Valialkin
28a87b90bb apptest: test apps with the enabled built-in race detector in order to be able to catch data races 2026-04-01 22:11:17 +02:00
Artem Fetishev
c445e7fcc0 docs: bump version to v1.139.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 15:22:06 +02:00
Artem Fetishev
9494ee103e deployment/docker: bump version to v1.139.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 15:19:03 +02:00
Artem Fetishev
94af588e92 docs: forward port LTS v1.122.18 changelog to upstream
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 14:24:20 +02:00
Artem Fetishev
e5c194cc10 docs: forward port LTS v1.136.3 changelog to upstream
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 14:22:51 +02:00
Artem Fetishev
7c65e3daca docs: fix changelog
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 13:17:16 +02:00
Pablo (Tomas) Fernandez
a6532c28b2 docs/guides: Add new guide "Set up Datasource-Managed Alerts with vmalert and Grafana" (#10691)
Create a guide to use datasource-managed alerts in Grafana

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10528

Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Mathias Palmersheim <mathias@victoriametrics.com>
2026-03-31 18:57:04 +03:00
Jose Gómez-Sellés
ec26ebb803 docs: raise cloud awareness in docs (#10716)
### Describe Your Changes

Some users may not know that VictoriaMetrics Cloud provides relevant
features to manage workloads. This change adds notes in relevant places
where users may find that a managed solution is what they need.

The intention is not to push users to Cloud, but to give them the information.
That's why it's always phrased like: "If you don't want to do X, Cloud
can do it for you", instead of "Start for free, etc". This is an Open
Source first project, and shall remain as such.

After this gets proper review, VictoriaLogs and other repos may follow.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Jose Gómez-Sellés <14234281+jgomezselles@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-31 18:51:06 +03:00
1511 changed files with 130600 additions and 25379 deletions

View File

@@ -4,6 +4,8 @@ updates:
directory: "/"
schedule:
interval: "daily"
cooldown:
default-days: 21
- package-ecosystem: "gomod"
directory: "/"
schedule:
@@ -23,6 +25,8 @@ updates:
directory: "/"
schedule:
interval: "daily"
cooldown:
default-days: 21
- package-ecosystem: "npm"
directory: "/app/vmui/packages/vmui"
schedule:

View File

@@ -66,8 +66,8 @@ jobs:
strategy:
matrix:
scenario:
- 'test-full'
- 'test-full-386'
- 'test'
- 'test-386'
- 'test-pure'
steps:
@@ -88,11 +88,6 @@ jobs:
- name: Run tests
run: make ${{ matrix.scenario}}
- name: Publish coverage
uses: codecov/codecov-action@v6
with:
files: ./coverage.txt
apptest:
name: apptest
runs-on: apptest

View File

@@ -457,6 +457,9 @@ test:
test-race:
go test -tags 'synctest' -race ./lib/... ./app/...
test-386:
GOARCH=386 go test -tags 'synctest' ./lib/... ./app/...
test-pure:
CGO_ENABLED=0 go test -tags 'synctest' ./lib/... ./app/...
@@ -467,10 +470,10 @@ test-full-386:
GOARCH=386 go test -tags 'synctest' -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
apptest:
$(MAKE) victoria-metrics vmagent vmalert vmauth vmctl vmbackup vmrestore
$(MAKE) victoria-metrics-race vmagent-race vmalert-race vmauth-race vmctl-race vmbackup-race vmrestore-race
go test ./apptest/... -skip="^Test(Cluster|Legacy).*"
apptest-legacy: victoria-metrics vmbackup vmrestore
apptest-legacy: victoria-metrics-race vmbackup-race vmrestore-race
OS=$$(uname | tr '[:upper:]' '[:lower:]'); \
ARCH=$$(uname -m | tr '[:upper:]' '[:lower:]' | sed 's/x86_64/amd64/'); \
VERSION=v1.132.0; \

View File

@@ -4,7 +4,6 @@
[![Docker Pulls](https://img.shields.io/docker/pulls/victoriametrics/victoria-metrics?label=&logo=docker&logoColor=white&labelColor=2496ED&color=2496ED&link=https%3A%2F%2Fhub.docker.com%2Fr%2Fvictoriametrics%2Fvictoria-metrics)](https://hub.docker.com/u/victoriametrics)
[![Go Report](https://goreportcard.com/badge/github.com/VictoriaMetrics/VictoriaMetrics?link=https%3A%2F%2Fgoreportcard.com%2Freport%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics)](https://goreportcard.com/report/github.com/VictoriaMetrics/VictoriaMetrics)
[![Build Status](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/build.yml/badge.svg?branch=master&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Factions)](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/build.yml)
[![codecov](https://codecov.io/gh/VictoriaMetrics/VictoriaMetrics/branch/master/graph/badge.svg?link=https%3A%2F%2Fcodecov.io%2Fgh%2FVictoriaMetrics%2FVictoriaMetrics)](https://app.codecov.io/gh/VictoriaMetrics/VictoriaMetrics)
[![License](https://img.shields.io/github/license/VictoriaMetrics/VictoriaMetrics?labelColor=green&label=&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Fblob%2Fmaster%2FLICENSE)](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/LICENSE)
[![Join Slack](https://img.shields.io/badge/Join%20Slack-4A154B?logo=slack)](https://slack.victoriametrics.com)
[![X](https://img.shields.io/twitter/follow/VictoriaMetrics?style=flat&label=Follow&color=black&logo=x&labelColor=black&link=https%3A%2F%2Fx.com%2FVictoriaMetrics)](https://x.com/VictoriaMetrics/)

View File

@@ -13,6 +13,9 @@ import (
"sync/atomic"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/awsapi"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
@@ -21,10 +24,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
)
var (
@@ -290,7 +290,7 @@ func getAWSAPIConfig(argIdx int) (*awsapi.Config, error) {
accessKey := awsAccessKey.GetOptionalArg(argIdx)
secretKey := awsSecretKey.GetOptionalArg(argIdx)
service := awsService.GetOptionalArg(argIdx)
cfg, err := awsapi.NewConfig(ec2Endpoint, stsEndpoint, region, roleARN, accessKey, secretKey, service)
cfg, err := awsapi.NewConfig(ec2Endpoint, stsEndpoint, region, roleARN, accessKey, secretKey, service, "")
if err != nil {
return nil, err
}
@@ -405,8 +405,7 @@ func (c *client) newRequest(url string, body []byte) (*http.Request, error) {
// Otherwise, it tries sending the block to remote storage indefinitely.
func (c *client) sendBlockHTTP(block []byte) bool {
c.rl.Register(len(block))
maxRetryDuration := timeutil.AddJitterToDuration(c.retryMaxInterval)
retryDuration := timeutil.AddJitterToDuration(c.retryMinInterval)
bt := timeutil.NewBackoffTimer(c.retryMinInterval, c.retryMaxInterval)
retriesCount := 0
again:
@@ -415,19 +414,10 @@ again:
c.requestDuration.UpdateDuration(startTime)
if err != nil {
c.errorsCount.Inc()
retryDuration *= 2
if retryDuration > maxRetryDuration {
retryDuration = maxRetryDuration
}
remoteWriteRetryLogger.Warnf("couldn't send a block with size %d bytes to %q: %s; re-sending the block in %.3f seconds",
len(block), c.sanitizedURL, err, retryDuration.Seconds())
t := timerpool.Get(retryDuration)
select {
case <-c.stopCh:
timerpool.Put(t)
remoteWriteRetryLogger.Warnf("couldn't send a block with size %d bytes to %q: %s; re-sending the block in %s",
len(block), c.sanitizedURL, err, bt.CurrentDelay())
if !bt.Wait(c.stopCh) {
return false
case <-t.C:
timerpool.Put(t)
}
c.retriesCount.Inc()
goto again
@@ -493,7 +483,10 @@ again:
// Unexpected status code returned
retriesCount++
retryAfterHeader := parseRetryAfterHeader(resp.Header.Get("Retry-After"))
retryDuration = getRetryDuration(retryAfterHeader, retryDuration, maxRetryDuration)
// retryAfterDuration has the highest priority duration
if retryAfterHeader > 0 {
bt.SetDelay(retryAfterHeader)
}
// Handle response
body, err := io.ReadAll(resp.Body)
@@ -502,15 +495,10 @@ again:
logger.Errorf("cannot read response body from %q during retry #%d: %s", c.sanitizedURL, retriesCount, err)
} else {
logger.Errorf("unexpected status code received after sending a block with size %d bytes to %q during retry #%d: %d; response body=%q; "+
"re-sending the block in %.3f seconds", len(block), c.sanitizedURL, retriesCount, statusCode, body, retryDuration.Seconds())
"re-sending the block in %s", len(block), c.sanitizedURL, retriesCount, statusCode, body, bt.CurrentDelay())
}
t := timerpool.Get(retryDuration)
select {
case <-c.stopCh:
timerpool.Put(t)
if !bt.Wait(c.stopCh) {
return false
case <-t.C:
timerpool.Put(t)
}
c.retriesCount.Inc()
goto again
@@ -519,27 +507,6 @@ again:
var remoteWriteRejectedLogger = logger.WithThrottler("remoteWriteRejected", 5*time.Second)
var remoteWriteRetryLogger = logger.WithThrottler("remoteWriteRetry", 5*time.Second)
// getRetryDuration returns retry duration.
// retryAfterDuration has the highest priority.
// If retryAfterDuration is not specified, retryDuration gets doubled.
// retryDuration can't exceed maxRetryDuration.
//
// Also see: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6097
func getRetryDuration(retryAfterDuration, retryDuration, maxRetryDuration time.Duration) time.Duration {
// retryAfterDuration has the highest priority duration
if retryAfterDuration > 0 {
return timeutil.AddJitterToDuration(retryAfterDuration)
}
// default backoff retry policy
retryDuration *= 2
if retryDuration > maxRetryDuration {
retryDuration = maxRetryDuration
}
return retryDuration
}
// repackBlockFromZstdToSnappy repacks the given zstd-compressed block to snappy-compressed block.
//
// The input block may be corrupted, for example, if vmagent was shut down ungracefully and
@@ -570,24 +537,20 @@ func logBlockRejected(block []byte, sanitizedURL string, resp *http.Response) {
}
// parseRetryAfterHeader parses `Retry-After` value retrieved from HTTP response header.
// retryAfterString should be in either HTTP-date or a number of seconds.
// It will return time.Duration(0) if `retryAfterString` does not follow RFC 7231.
func parseRetryAfterHeader(retryAfterString string) (retryAfterDuration time.Duration) {
if retryAfterString == "" {
return retryAfterDuration
//
// s should be in either HTTP-date or a number of seconds.
// It returns time.Duration(0) if s does not follow RFC 7231.
func parseRetryAfterHeader(s string) time.Duration {
if s == "" {
return 0
}
defer func() {
v := retryAfterDuration.Seconds()
logger.Infof("'Retry-After: %s' parsed into %.2f second(s)", retryAfterString, v)
}()
// Retry-After could be in "Mon, 02 Jan 2006 15:04:05 GMT" format.
if parsedTime, err := time.Parse(http.TimeFormat, retryAfterString); err == nil {
if parsedTime, err := time.Parse(http.TimeFormat, s); err == nil {
return time.Duration(time.Until(parsedTime).Seconds()) * time.Second
}
// Retry-After could be in seconds.
if seconds, err := strconv.Atoi(retryAfterString); err == nil {
if seconds, err := strconv.Atoi(s); err == nil {
return time.Duration(seconds) * time.Second
}

View File

@@ -6,66 +6,11 @@ import (
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
)
func TestCalculateRetryDuration(t *testing.T) {
// `testFunc` call `calculateRetryDuration` for `n` times
// and evaluate if the result of `calculateRetryDuration` is
// 1. >= expectMinDuration
// 2. <= expectMinDuration + 10% (see timeutil.AddJitterToDuration)
f := func(retryAfterDuration, retryDuration time.Duration, n int, expectMinDuration time.Duration) {
t.Helper()
for range n {
retryDuration = getRetryDuration(retryAfterDuration, retryDuration, time.Minute)
}
expectMaxDuration := helper(expectMinDuration)
expectMinDuration = expectMinDuration - (1000 * time.Millisecond) // Avoid edge case when calculating time.Until(now)
if retryDuration < expectMinDuration || retryDuration > expectMaxDuration {
t.Fatalf(
"incorrect retry duration, want (ms): [%d, %d], got (ms): %d",
expectMinDuration.Milliseconds(), expectMaxDuration.Milliseconds(),
retryDuration.Milliseconds(),
)
}
}
// Call calculateRetryDuration for 1 time.
{
// default backoff policy
f(0, time.Second, 1, 2*time.Second)
// default backoff policy exceed max limit"
f(0, 10*time.Minute, 1, time.Minute)
// retry after > default backoff policy
f(10*time.Second, 1*time.Second, 1, 10*time.Second)
// retry after < default backoff policy
f(1*time.Second, 10*time.Second, 1, 1*time.Second)
// retry after invalid and < default backoff policy
f(0, time.Second, 1, 2*time.Second)
}
// Call calculateRetryDuration for multiple times.
{
// default backoff policy 2 times
f(0, time.Second, 2, 4*time.Second)
// default backoff policy 3 times
f(0, time.Second, 3, 8*time.Second)
// default backoff policy N times exceed max limit
f(0, time.Second, 10, time.Minute)
// retry after 120s 1 times
f(120*time.Second, time.Second, 1, 120*time.Second)
// retry after 120s 2 times
f(120*time.Second, time.Second, 2, 120*time.Second)
}
}
func TestParseRetryAfterHeader(t *testing.T) {
f := func(retryAfterString string, expectResult time.Duration) {
t.Helper()
@@ -91,13 +36,6 @@ func TestParseRetryAfterHeader(t *testing.T) {
f(time.Now().Add(10*time.Second).Format("Mon, 02 Jan 2006 15:04:05 FAKETZ"), 0)
}
// helper calculate the max possible time duration calculated by timeutil.AddJitterToDuration.
func helper(d time.Duration) time.Duration {
dv := min(d/10, 10*time.Second)
return d + dv
}
func TestRepackBlockFromZstdToSnappy(t *testing.T) {
expectedPlainBlock := []byte(`foobar`)

View File

@@ -0,0 +1,49 @@
package remotewrite
import (
"crypto/sha256"
"encoding/hex"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
)
func (rwctx *remoteWriteCtx) initObfuscationConfig() {
if len(*obfuscationLabels) == 0 {
return
}
idx := rwctx.idx
rwctx.obfuscationLabels = make(map[string]struct{})
rwObfuscationLabels := obfuscationLabels.GetOptionalArg(idx)
rwObfuscationLabelsList := strings.Split(rwObfuscationLabels, "^^")
for _, label := range rwObfuscationLabelsList {
rwctx.obfuscationLabels[label] = struct{}{}
}
}
func (rwctx *remoteWriteCtx) applyObfuscation(tss []prompb.TimeSeries) []prompb.TimeSeries {
if len(rwctx.obfuscationLabels) == 0 || len(tss) == 0 {
return tss
}
cacheObfuscatedResult := make(map[string]string)
for i := range tss {
ts := &tss[i]
labels := ts.Labels
for j := range labels {
label := &labels[j]
if _, ok := rwctx.obfuscationLabels[label.Name]; !ok {
continue
}
if obfuscatedValue, ok := cacheObfuscatedResult[label.Value]; ok {
// fast path: the obfuscated result was calculated before
label.Value = obfuscatedValue
} else {
obfuscatedResult := sha256.Sum256([]byte(label.Value))
cacheObfuscatedResult[label.Value] = hex.EncodeToString(obfuscatedResult[:])
label.Value = cacheObfuscatedResult[label.Value]
}
}
}
return tss
}
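The hashing in applyObfuscation above can be exercised standalone: label values are replaced with the hex-encoded SHA-256 of the original value, and a per-batch cache ensures repeated values are hashed only once. This sketch extracts just that logic (the `obfuscate` helper is illustrative, not vmagent code); note the hash of "123" matches the expected value in the unit test further down:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// obfuscate replaces each value with the hex-encoded SHA-256 of the original,
// caching results so duplicate values are hashed only once per batch.
func obfuscate(values []string) []string {
	cache := make(map[string]string)
	out := make([]string, len(values))
	for i, v := range values {
		h, ok := cache[v]
		if !ok {
			sum := sha256.Sum256([]byte(v))
			h = hex.EncodeToString(sum[:])
			cache[v] = h // fast path for later occurrences of v
		}
		out[i] = h
	}
	return out
}

func main() {
	fmt.Println(obfuscate([]string{"123", "123", "1234"}))
}
```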

View File

@@ -3,6 +3,7 @@ package remotewrite
import (
"flag"
"fmt"
"math"
"net/http"
"net/url"
"path/filepath"
@@ -11,6 +12,10 @@ import (
"sync/atomic"
"time"
"github.com/cespare/xxhash/v2"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bloomfilter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
@@ -23,6 +28,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/memory"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prommetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
@@ -30,8 +36,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
"github.com/VictoriaMetrics/metrics"
"github.com/cespare/xxhash/v2"
)
var (
@@ -80,10 +84,14 @@ var (
`This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. `+
`For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}`+
`Enabled sorting for labels can slow down ingestion performance a bit`)
maxHourlySeries = flag.Int("remoteWrite.maxHourlySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxDailySeries = flag.Int("remoteWrite.maxDailySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxHourlySeries = flag.Int64("remoteWrite.maxHourlySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. "+
fmt.Sprintf("Setting this flag to '-1' sets limit to maximum possible value (%d) which is useful in order to enable series tracking without enforcing limits. ", math.MaxInt32)+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxDailySeries = flag.Int64("remoteWrite.maxDailySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. "+
fmt.Sprintf("Setting this flag to '-1' sets limit to maximum possible value (%d) which is useful in order to enable series tracking without enforcing limits. ", math.MaxInt32)+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxIngestionRate = flag.Int("maxIngestionRate", 0, "The maximum number of samples vmagent can receive per second. Data ingestion is paused when the limit is exceeded. "+
"By default there are no limits on samples ingestion rate. See also -remoteWrite.rateLimit")
@@ -92,6 +100,11 @@ var (
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence . See also -remoteWrite.dropSamplesOnOverload")
dropSamplesOnOverload = flag.Bool("remoteWrite.dropSamplesOnOverload", false, "Whether to drop samples when -remoteWrite.disableOnDiskQueue is set and if the samples "+
"cannot be pushed into the configured -remoteWrite.url systems in a timely manner. See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence")
disableMetadataPerURL = flagutil.NewArrayBool("remoteWrite.disableMetadata", "Whether to disable sending metadata to the corresponding -remoteWrite.url. "+
"By default, metadata sending is controlled by the global -enableMetadata flag")
obfuscationLabels = flagutil.NewArrayString("remoteWrite.obfuscationLabels", "List of label names whose values must be obfuscated before sending to the corresponding -remoteWrite.url. "+
"By default, label obfuscation is disabled")
)
var (
@@ -157,8 +170,8 @@ func Init() {
if len(*remoteWriteURLs) == 0 {
logger.Fatalf("at least one `-remoteWrite.url` command-line flag must be set")
}
if *maxHourlySeries > 0 {
hourlySeriesLimiter = bloomfilter.NewLimiter(*maxHourlySeries, time.Hour)
if limit := getMaxHourlySeries(); limit > 0 {
hourlySeriesLimiter = bloomfilter.NewLimiter(limit, time.Hour)
_ = metrics.NewGauge(`vmagent_hourly_series_limit_max_series`, func() float64 {
return float64(hourlySeriesLimiter.MaxItems())
})
@@ -166,8 +179,8 @@ func Init() {
return float64(hourlySeriesLimiter.CurrentItems())
})
}
if *maxDailySeries > 0 {
dailySeriesLimiter = bloomfilter.NewLimiter(*maxDailySeries, 24*time.Hour)
if limit := getMaxDailySeries(); limit > 0 {
dailySeriesLimiter = bloomfilter.NewLimiter(limit, 24*time.Hour)
_ = metrics.NewGauge(`vmagent_daily_series_limit_max_series`, func() float64 {
return float64(dailySeriesLimiter.MaxItems())
})
@@ -540,6 +553,10 @@ func tryPushMetadataToRemoteStorages(rwctxs []*remoteWriteCtx, mms []prompb.Metr
var wg sync.WaitGroup
var anyPushFailed atomic.Bool
for _, rwctx := range rwctxs {
if !rwctx.enableMetadata {
// Skip remote storage with disabled metadata
continue
}
wg.Go(func() {
if !rwctx.tryPushMetadataInternal(mms) {
rwctx.pushFailures.Inc()
@@ -811,9 +828,16 @@ type remoteWriteCtx struct {
streamAggrKeepInput bool
streamAggrDropInput bool
// enableMetadata indicates whether metadata should be sent to this remote storage.
// It is determined by -remoteWrite.enableMetadata per-URL flag if set,
// otherwise by the global -enableMetadata flag.
enableMetadata bool
pss []*pendingSeries
pssNextIdx atomic.Uint64
obfuscationLabels map[string]struct{}
rowsPushedAfterRelabel *metrics.Counter
rowsDroppedByRelabel *metrics.Counter
@@ -822,6 +846,18 @@ type remoteWriteCtx struct {
rowsDroppedOnPushFailure *metrics.Counter
}
// isMetadataEnabledForURL returns true if metadata should be sent to the remote storage at argIdx.
// It checks the per-URL -remoteWrite.disableMetadata flag first.
// If not set, it falls back to the global -enableMetadata flag.
func isMetadataEnabledForURL(argIdx int) bool {
if disableMetadataPerURL.GetOptionalArg(argIdx) {
// Metadata is explicitly disabled for this URL
return false
}
// Use global -enableMetadata value
return prommetadata.IsEnabled()
}
func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, sanitizedURL string) *remoteWriteCtx {
// strip query params, otherwise changing params resets pq
pqURL := *remoteWriteURL
@@ -892,10 +928,11 @@ func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, sanitizedURL string)
}
rwctx := &remoteWriteCtx{
idx: argIdx,
fq: fq,
c: c,
pss: pss,
idx: argIdx,
fq: fq,
c: c,
pss: pss,
enableMetadata: isMetadataEnabledForURL(argIdx),
rowsPushedAfterRelabel: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_rows_pushed_after_relabel_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
rowsDroppedByRelabel: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_relabel_metrics_dropped_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
@@ -905,6 +942,7 @@ func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, sanitizedURL string)
rowsDroppedOnPushFailure: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_samples_dropped_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
}
rwctx.initStreamAggrConfig()
rwctx.initObfuscationConfig()
return rwctx
}
@@ -1088,6 +1126,15 @@ func (rwctx *remoteWriteCtx) tryPushTimeSeriesInternal(tss []prompb.TimeSeries)
rctx.appendExtraLabels(tss, labelsGlobal)
}
if len(rwctx.obfuscationLabels) != 0 {
if rctx == nil {
rctx = getRelabelCtx()
v = tssPool.Get().(*[]prompb.TimeSeries)
tss = append(*v, tss...)
}
tss = rwctx.applyObfuscation(tss)
}
pss := rwctx.pss
idx := rwctx.pssNextIdx.Add(1) % uint64(len(pss))
@@ -1116,3 +1163,21 @@ func newMapFromStrings(a []string) map[string]struct{} {
}
return m
}
func getMaxHourlySeries() int {
limit := *maxHourlySeries
if limit == -1 || limit > math.MaxInt32 {
return math.MaxInt32
}
return int(limit)
}
func getMaxDailySeries() int {
limit := *maxDailySeries
if limit == -1 || limit > math.MaxInt32 {
return math.MaxInt32
}
return int(limit)
}

View File

@@ -374,3 +374,92 @@ func TestCalculateHealthyRwctxIdx(t *testing.T) {
f(1, []int{0}, nil)
f(1, []int{}, []int{0})
}
func TestRemoteWriteContext_Obfuscation(t *testing.T) {
f := func(obfuscationLabelList string, obfuscationLabelCount int, inputTss []prompb.TimeSeries, expectedTss []prompb.TimeSeries) {
t.Helper()
rwctx := &remoteWriteCtx{
idx: 0,
streamAggrKeepInput: false,
streamAggrDropInput: true,
}
defer metrics.UnregisterAllMetrics()
*obfuscationLabels = []string{obfuscationLabelList}
rwctx.initObfuscationConfig()
if len(rwctx.obfuscationLabels) != obfuscationLabelCount {
t.Fatalf("unexpected obfuscation labels len; got %v; want %d", len(rwctx.obfuscationLabels), obfuscationLabelCount)
}
outputTss := rwctx.applyObfuscation(inputTss)
if !reflect.DeepEqual(expectedTss, outputTss) {
t.Fatalf("unexpected samples;\ngot\n%v\nwant\n%v", outputTss, expectedTss)
}
}
f("ip", 1,
[]prompb.TimeSeries{
{
Labels: []prompb.Label{
{Name: "ip", Value: "123"},
{Name: "instance", Value: "1234"},
},
Samples: []prompb.Sample{
{Value: 1, Timestamp: 0},
},
},
},
[]prompb.TimeSeries{
{
Labels: []prompb.Label{
{Name: "ip", Value: "a665a45920422f9d417e4867efdc4fb8a04a1f3fff1fa07e998e86f7f7a27ae3"},
{Name: "instance", Value: "1234"},
},
Samples: []prompb.Sample{
{Value: 1, Timestamp: 0},
},
},
},
)
f("ip^^instance", 2,
[]prompb.TimeSeries{
{
Labels: []prompb.Label{
{Name: "ip", Value: "123"},
{Name: "instance", Value: "1234"},
},
Samples: []prompb.Sample{
{Value: 1, Timestamp: 0},
},
},
{
Labels: []prompb.Label{
{Name: "job", Value: "123"},
},
Samples: []prompb.Sample{
{Value: 1, Timestamp: 0},
},
},
},
[]prompb.TimeSeries{
{
Labels: []prompb.Label{
{Name: "ip", Value: "a665a45920422f9d417e4867efdc4fb8a04a1f3fff1fa07e998e86f7f7a27ae3"},
{Name: "instance", Value: "03ac674216f3e15c761ee1a5e255f067953623c8b388b4459e13f978d7c846f4"},
},
Samples: []prompb.Sample{
{Value: 1, Timestamp: 0},
},
},
{
Labels: []prompb.Label{
{Name: "job", Value: "123"},
},
Samples: []prompb.Sample{
{Value: 1, Timestamp: 0},
},
},
},
)
}

View File

@@ -13,14 +13,18 @@ import (
"sync"
"time"
"github.com/cespare/xxhash/v2"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -113,8 +117,10 @@ func NewClient(ctx context.Context, cfg Config) (*Client, error) {
input: make(chan prompb.TimeSeries, cfg.MaxQueueSize),
}
for range cc {
c.run(ctx)
for i := 0; i < cc; i++ {
c.wg.Go(func() {
c.run(ctx, i)
})
}
return c, nil
}
@@ -156,8 +162,7 @@ func (c *Client) Close() error {
return nil
}
func (c *Client) run(ctx context.Context) {
ticker := time.NewTicker(c.flushInterval)
func (c *Client) run(ctx context.Context, id int) {
wr := &prompb.WriteRequest{}
shutdown := func() {
lastCtx, cancel := context.WithTimeout(context.Background(), defaultWriteTimeout)
@@ -174,45 +179,72 @@ func (c *Client) run(ctx context.Context) {
cancel()
}
c.wg.Go(func() {
defer ticker.Stop()
for {
// add jitter to spread remote write flushes over the flush interval to avoid congestion at the remote write destination
h := xxhash.Sum64(bytesutil.ToUnsafeBytes(fmt.Sprintf("%d", id)))
randJitter := uint64(float64(c.flushInterval) * (float64(h) / (1 << 64)))
timer := time.NewTimer(time.Duration(randJitter))
addJitter:
for {
select {
case <-c.doneCh:
timer.Stop()
shutdown()
return
case <-ctx.Done():
timer.Stop()
shutdown()
return
case <-timer.C:
break addJitter
}
}
ticker := time.NewTicker(c.flushInterval)
defer ticker.Stop()
for {
select {
case <-c.doneCh:
shutdown()
return
case <-ctx.Done():
shutdown()
return
case <-ticker.C:
c.flush(ctx, wr)
// drain the potential stale tick to avoid small or empty flushes after a slow flush.
select {
case <-c.doneCh:
shutdown()
return
case <-ctx.Done():
shutdown()
return
case <-ticker.C:
default:
}
case ts, ok := <-c.input:
if !ok {
continue
}
wr.Timeseries = append(wr.Timeseries, ts)
if len(wr.Timeseries) >= c.maxBatchSize {
c.flush(ctx, wr)
// drain the potential stale tick to avoid small or empty flushes after a slow flush.
select {
case <-ticker.C:
default:
}
case ts, ok := <-c.input:
if !ok {
continue
}
wr.Timeseries = append(wr.Timeseries, ts)
if len(wr.Timeseries) >= c.maxBatchSize {
c.flush(ctx, wr)
}
}
}
})
}
}
var (
rwErrors = metrics.NewCounter(`vmalert_remotewrite_errors_total`)
rwTotal = metrics.NewCounter(`vmalert_remotewrite_total`)
sentRows = metrics.NewCounter(`vmalert_remotewrite_sent_rows_total`)
sentBytes = metrics.NewCounter(`vmalert_remotewrite_sent_bytes_total`)
droppedRows = metrics.NewCounter(`vmalert_remotewrite_dropped_rows_total`)
sendDuration = metrics.NewFloatCounter(`vmalert_remotewrite_send_duration_seconds_total`)
bufferFlushDuration = metrics.NewHistogram(`vmalert_remotewrite_flush_duration_seconds`)
// sentRows and sentBytes are historical counters that can now be replaced by flushedRows and flushedBytes histograms. They may be deprecated in the future after the new histograms have been adopted for some time.
sentRows = metrics.NewCounter(`vmalert_remotewrite_sent_rows_total`)
sentBytes = metrics.NewCounter(`vmalert_remotewrite_sent_bytes_total`)
flushedRows = metrics.NewHistogram(`vmalert_remotewrite_sent_rows`)
flushedBytes = metrics.NewHistogram(`vmalert_remotewrite_sent_bytes`)
droppedRows = metrics.NewCounter(`vmalert_remotewrite_dropped_rows_total`)
sendDuration = metrics.NewFloatCounter(`vmalert_remotewrite_send_duration_seconds_total`)
bufferFlushDuration = metrics.NewHistogram(`vmalert_remotewrite_flush_duration_seconds`)
remoteWriteQueueSize = metrics.NewHistogram(`vmalert_remotewrite_queue_size`)
_ = metrics.NewGauge(`vmalert_remotewrite_queue_capacity`, func() float64 {
return float64(*maxQueueSize)
})
_ = metrics.NewGauge(`vmalert_remotewrite_concurrency`, func() float64 {
return float64(*concurrency)
@@ -226,6 +258,7 @@ func GetDroppedRows() int { return int(droppedRows.Get()) }
// it to the remote-write endpoint. Flush performs a limited number of retries
// if the request fails.
func (c *Client) flush(ctx context.Context, wr *prompb.WriteRequest) {
remoteWriteQueueSize.Update(float64(len(c.input)))
if len(wr.Timeseries) < 1 {
return
}
@@ -235,10 +268,8 @@ func (c *Client) flush(ctx context.Context, wr *prompb.WriteRequest) {
data := wr.MarshalProtobuf(nil)
b := snappy.Encode(nil, data)
retryInterval, maxRetryInterval := *retryMinInterval, *retryMaxTime
if retryInterval > maxRetryInterval {
retryInterval = maxRetryInterval
}
maxRetryInterval := *retryMaxTime
bt := timeutil.NewBackoffTimer(*retryMinInterval, maxRetryInterval)
timeStart := time.Now()
defer func() {
sendDuration.Add(time.Since(timeStart).Seconds())
@@ -256,6 +287,8 @@ L:
if err == nil {
sentRows.Add(len(wr.Timeseries))
sentBytes.Add(len(b))
flushedRows.Update(float64(len(wr.Timeseries)))
flushedBytes.Update(float64(len(b)))
return
}
@@ -281,12 +314,11 @@ L:
break
}
if retryInterval > timeLeftForRetries {
retryInterval = timeLeftForRetries
if bt.CurrentDelay() > timeLeftForRetries {
bt.SetDelay(timeLeftForRetries)
}
// sleeping to prevent remote db hammering
time.Sleep(retryInterval)
retryInterval *= 2
bt.Wait(ctx.Done())
attempts++
}

View File

@@ -381,7 +381,9 @@ func (g *Group) Start(ctx context.Context, rw remotewrite.RWClient, rr datasourc
if len(g.Rules) < 1 {
g.metrics.iterationDuration.UpdateDuration(start)
g.mu.Lock()
g.LastEvaluation = start
g.mu.Unlock()
return ts
}
@@ -395,7 +397,9 @@ func (g *Group) Start(ctx context.Context, rw remotewrite.RWClient, rr datasourc
}
}
g.metrics.iterationDuration.UpdateDuration(start)
g.mu.Lock()
g.LastEvaluation = start
g.mu.Unlock()
return ts
}
@@ -405,11 +409,11 @@ func (g *Group) Start(ctx context.Context, rw remotewrite.RWClient, rr datasourc
g.mu.Unlock()
defer g.evalCancel()
realEvalTS := eval(evalCtx, evalTS)
t := time.NewTicker(g.Interval)
defer t.Stop()
realEvalTS := eval(evalCtx, evalTS)
// restore the rules state after the first evaluation
// so only active alerts can be restored.
if rr != nil {

View File

@@ -52,7 +52,13 @@ var (
"alert": rule.TypeAlerting,
"record": rule.TypeRecording,
}
ruleStates = []string{"ok", "nomatch", "inactive", "firing", "pending", "unhealthy"}
// The "recovering", "noData", "normal", "error" states are used by Grafana.
// Ignore "recovering" since it is not currently acknowledged by vmalert,
// treat "noData" as an alias for "nomatch",
// treat "normal" as an alias for "inactive",
// treat "error" as an alias for "unhealthy"
ruleStates = []string{"ok", "nomatch", "inactive", "firing", "pending", "unhealthy", "recovering", "noData", "normal", "error"}
)
type requestHandler struct {
@@ -363,6 +369,15 @@ func newRulesFilter(r *http.Request) (*rulesFilter, *httpserver.ErrorWithStatusC
if !slices.Contains(ruleStates, v) {
return nil, errResponse(fmt.Errorf(`invalid parameter "state": contains not supported value %q`, v), http.StatusBadRequest)
}
// Replace grafana states with supported internal states
switch v {
case "noData":
v = "nomatch"
case "normal":
v = "inactive"
case "error":
v = "unhealthy"
}
rf.states = append(rf.states, v)
}
}
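The state aliasing above boils down to a small normalization function; a sketch:

```go
package main

import "fmt"

// normalizeState maps Grafana rule-state names onto the states vmalert
// understands, matching the switch in the diff: "noData" -> "nomatch",
// "normal" -> "inactive", "error" -> "unhealthy". Anything else (including
// the unsupported "recovering") passes through unchanged.
func normalizeState(s string) string {
	switch s {
	case "noData":
		return "nomatch"
	case "normal":
		return "inactive"
	case "error":
		return "unhealthy"
	}
	return s
}

func main() {
	for _, s := range []string{"noData", "normal", "error", "firing"} {
		fmt.Printf("%s -> %s\n", s, normalizeState(s))
	}
}
```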

View File

@@ -357,6 +357,7 @@ func bufferRequestBody(ctx context.Context, r io.ReadCloser, userName string) (i
maxBufSize := max(requestBufferSize.IntN(), maxRequestBodySizeToRetry.IntN())
if maxBufSize <= 0 {
// Request buffering is disabled.
return r, nil
}
@@ -763,7 +764,7 @@ var concurrentRequestsLimitReached = metrics.NewCounter("vmauth_concurrent_reque
func usage() {
const s = `
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics.
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics components or any other HTTP backends.
See the docs at https://docs.victoriametrics.com/victoriametrics/vmauth/ .
`
@@ -792,10 +793,11 @@ func handleConcurrencyLimitError(w http.ResponseWriter, r *http.Request, err err
}
// bufferedBody serves two purposes:
// 1. Enables request retries when the body size does not exceed maxBodySize
// by fully buffering the body in memory.
// 2. Prevents slow clients from reducing effective server capacity by
// buffering the request body before acquiring a per-user concurrency slot.
//
// 1. It enables request retries when the request body size does not exceed maxBufSize
// by fully buffering the request body in memory.
// 2. It prevents slow clients from reducing effective server capacity
// by buffering the request body before acquiring a per-user concurrency slot.
//
// See bufferRequestBody for details on how bufferedBody is used.
type bufferedBody struct {
@@ -819,7 +821,7 @@ func newBufferedBody(r io.ReadCloser, buf []byte, maxBufSize int) *bufferedBody
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8051
if len(buf) < maxBufSize {
// Read the full request body into buf.
// The full request body has been already read into buf.
r = nil
}
@@ -832,7 +834,7 @@ func newBufferedBody(r io.ReadCloser, buf []byte, maxBufSize int) *bufferedBody
// Read implements io.Reader interface.
func (bb *bufferedBody) Read(p []byte) (int, error) {
if bb.cannotRetry {
return 0, fmt.Errorf("cannot read already closed body")
return 0, fmt.Errorf("cannot read already closed request body")
}
if bb.bufOffset < len(bb.buf) {
n := copy(p, bb.buf[bb.bufOffset:])

View File

@@ -21,6 +21,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/pushmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/snapshot"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/snapshot/snapshotutil"
)

View File

@@ -416,6 +416,16 @@ const (
promTemporaryDirPath = "prom-tmp-dir-path"
)
const (
thanosSnapshot = "thanos-snapshot"
thanosConcurrency = "thanos-concurrency"
thanosFilterTimeStart = "thanos-filter-time-start"
thanosFilterTimeEnd = "thanos-filter-time-end"
thanosFilterLabel = "thanos-filter-label"
thanosFilterLabelValue = "thanos-filter-label-value"
thanosAggrTypes = "thanos-aggr-types"
)
var (
promFlags = []cli.Flag{
&cli.StringFlag{
@@ -451,6 +461,43 @@ var (
Value: os.TempDir(),
},
}
thanosFlags = []cli.Flag{
&cli.StringFlag{
Name: thanosSnapshot,
Usage: "Path to Thanos snapshot directory containing raw and/or downsampled blocks.",
Required: true,
},
&cli.IntFlag{
Name: thanosConcurrency,
Usage: "Number of concurrently running snapshot readers",
Value: 1,
},
&cli.StringFlag{
Name: thanosFilterTimeStart,
Usage: "The time filter in RFC3339 format to select timeseries with timestamp equal to or higher than the provided value. E.g. '2020-01-01T20:07:00Z'",
},
&cli.StringFlag{
Name: thanosFilterTimeEnd,
Usage: "The time filter in RFC3339 format to select timeseries with timestamp equal to or lower than the provided value. E.g. '2020-01-01T20:07:00Z'",
},
&cli.StringFlag{
Name: thanosFilterLabel,
Usage: "Thanos label name to filter timeseries by. E.g. '__name__' will filter timeseries by name.",
},
&cli.StringFlag{
Name: thanosFilterLabelValue,
Usage: fmt.Sprintf("Thanos regular expression to filter label from %q flag.", thanosFilterLabel),
Value: ".*",
},
&cli.StringSliceFlag{
Name: thanosAggrTypes,
Usage: "Aggregate types to import from Thanos downsampled blocks. Supported values: count, sum, min, max, counter. " +
"Each aggregate will be imported as a separate metric with the aggregate type as suffix (e.g., metric_name:5m:count). " +
"If not specified, all aggregate types will be imported from downsampled blocks.",
Value: nil,
},
}
)
const (

View File

@@ -27,6 +27,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/opentsdb"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/thanos"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
@@ -285,6 +286,7 @@ func main() {
if err != nil {
return fmt.Errorf("failed to create prometheus client: %s", err)
}
pp := prometheusProcessor{
cl: cl,
im: importer,
@@ -294,6 +296,59 @@ func main() {
return pp.run(ctx)
},
},
{
Name: "thanos",
Usage: "Migrate time series from Thanos blocks (supports raw and downsampled data)",
Flags: mergeFlags(globalFlags, thanosFlags, vmFlags),
Before: beforeFn,
Action: func(c *cli.Context) error {
fmt.Println("Thanos import mode")
vmCfg, err := initConfigVM(c)
if err != nil {
return fmt.Errorf("failed to init VM configuration: %s", err)
}
importer, err = vm.NewImporter(ctx, vmCfg)
if err != nil {
return fmt.Errorf("failed to create VM importer: %s", err)
}
thanosCfg := thanos.Config{
Snapshot: c.String(thanosSnapshot),
Filter: thanos.Filter{
TimeMin: c.String(thanosFilterTimeStart),
TimeMax: c.String(thanosFilterTimeEnd),
Label: c.String(thanosFilterLabel),
LabelValue: c.String(thanosFilterLabelValue),
},
}
cl, err := thanos.NewClient(thanosCfg)
if err != nil {
return fmt.Errorf("failed to create thanos client: %s", err)
}
var aggrTypes []thanos.AggrType
if aggrTypesStr := c.StringSlice(thanosAggrTypes); len(aggrTypesStr) > 0 {
for _, typeStr := range aggrTypesStr {
aggrType, err := thanos.ParseAggrType(typeStr)
if err != nil {
return fmt.Errorf("failed to parse aggregate type %q: %s", typeStr, err)
}
aggrTypes = append(aggrTypes, aggrType)
}
}
tp := thanosProcessor{
cl: cl,
im: importer,
cc: c.Int(thanosConcurrency),
isVerbose: c.Bool(globalVerbose),
aggrTypes: aggrTypes,
}
return tp.run(ctx)
},
},
{
Name: "vm-native",
Usage: "Migrate time series between VictoriaMetrics installations",

View File

@@ -0,0 +1,233 @@
package thanos
import (
"encoding/binary"
"errors"
"fmt"
"github.com/prometheus/prometheus/tsdb/chunkenc"
)
// ChunkEncAggr is the top level encoding byte for the AggrChunk.
// It is defined by Thanos as 0xff to prevent collisions with Prometheus encodings.
const ChunkEncAggr = chunkenc.Encoding(0xff)
// AggrType represents an aggregation type in Thanos downsampled blocks.
type AggrType uint8
// AggrTypeNone indicates raw blocks with no aggregation.
// It is used as a sentinel to distinguish raw block processing from downsampled.
const AggrTypeNone AggrType = 255
// Valid aggregation types matching Thanos definitions.
const (
AggrCount AggrType = iota
AggrSum
AggrMin
AggrMax
AggrCounter
)
// AllAggrTypes contains all supported aggregation types.
var AllAggrTypes = []AggrType{AggrCount, AggrSum, AggrMin, AggrMax, AggrCounter}
func (t AggrType) String() string {
switch t {
case AggrCount:
return "count"
case AggrSum:
return "sum"
case AggrMin:
return "min"
case AggrMax:
return "max"
case AggrCounter:
return "counter"
}
return "<unknown>"
}
// ParseAggrType parses aggregate type from string.
func ParseAggrType(s string) (AggrType, error) {
switch s {
case "count":
return AggrCount, nil
case "sum":
return AggrSum, nil
case "min":
return AggrMin, nil
case "max":
return AggrMax, nil
case "counter":
return AggrCounter, nil
}
return 0, fmt.Errorf("unknown aggregate type: %q", s)
}
// ErrAggrNotExist is returned if a requested aggregation is not present in an AggrChunk.
var ErrAggrNotExist = errors.New("aggregate does not exist")
// AggrChunk is a chunk that is composed of a set of aggregates for the same underlying data.
// Not all aggregates must be present.
// This is a read-only implementation for decoding Thanos downsampled blocks.
type AggrChunk []byte
// IsAggrChunk checks if the encoding byte indicates this is an AggrChunk.
func IsAggrChunk(enc chunkenc.Encoding) bool {
return enc == ChunkEncAggr
}
// Get returns the sub-chunk for the given aggregate type if it exists.
func (c AggrChunk) Get(t AggrType) (chunkenc.Chunk, error) {
b := c[:]
var x []byte
for i := AggrType(0); i <= t; i++ {
l, n := binary.Uvarint(b)
if n < 1 {
return nil, errors.New("invalid size: failed to read uvarint")
}
if l > uint64(len(b[n:])) || l+1 > uint64(len(b[n:])) {
if l > 0 {
return nil, errors.New("invalid size: not enough bytes")
}
}
b = b[n:]
// If length is set to zero explicitly, that means the aggregate is unset.
if l == 0 {
if i == t {
return nil, ErrAggrNotExist
}
continue
}
chunkLen := int(l) + 1
x = b[:chunkLen]
b = b[chunkLen:]
}
if len(x) == 0 {
return nil, ErrAggrNotExist
}
return chunkenc.FromData(chunkenc.Encoding(x[0]), x[1:])
}
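Get walks a simple framing: for each aggregate slot there is a uvarint length, followed (when non-zero) by one encoding byte plus that many data bytes; a zero length means the aggregate is unset. A self-contained sketch of that framing, without the chunkenc decoding:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// subChunks splits a Thanos-style AggrChunk payload into its per-aggregate
// sub-chunks. Each entry is a uvarint length l; if l > 0 it is followed by
// 1 encoding byte plus l data bytes, and a nil entry marks an unset
// aggregate. This mirrors the framing AggrChunk.Get walks.
func subChunks(b []byte) ([][]byte, error) {
	var out [][]byte
	for len(b) > 0 {
		l, n := binary.Uvarint(b)
		if n < 1 {
			return nil, fmt.Errorf("invalid size: failed to read uvarint")
		}
		b = b[n:]
		if l == 0 {
			out = append(out, nil) // aggregate unset
			continue
		}
		chunkLen := int(l) + 1 // +1 for the encoding byte
		if chunkLen > len(b) {
			return nil, fmt.Errorf("invalid size: not enough bytes")
		}
		out = append(out, b[:chunkLen])
		b = b[chunkLen:]
	}
	return out, nil
}

func main() {
	// One unset aggregate (length 0), then a sub-chunk: length 2, encoding
	// byte 0x01, two data bytes.
	chunks, err := subChunks([]byte{0x00, 0x02, 0x01, 0xaa, 0xbb})
	fmt.Println(len(chunks), err)
}
```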
// Encoding returns the encoding type for AggrChunk.
func (c AggrChunk) Encoding() chunkenc.Encoding {
return ChunkEncAggr
}
// errIterator wraps a nop iterator but reports an error via Err().
// It embeds chunkenc.Iterator to inherit all methods (including Seek)
// which avoids go vet stdmethods warning about Seek signature.
type errIterator struct {
chunkenc.Iterator
err error
}
// Err returns the underlying error.
func (it *errIterator) Err() error {
return it.err
}
// newAggrChunkIterator creates a new iterator for the specified aggregate type.
// If the aggregate is not present in the chunk (ErrAggrNotExist), a nop iterator
// is returned without error; the caller will simply see zero samples.

// Real decoding/corruption errors are reported via the iterator's Err() method.
func newAggrChunkIterator(data []byte, aggrType AggrType) chunkenc.Iterator {
chunk := AggrChunk(data)
subChunk, err := chunk.Get(aggrType)
if err != nil {
if errors.Is(err, ErrAggrNotExist) {
return chunkenc.NewNopIterator()
}
return &errIterator{
Iterator: chunkenc.NewNopIterator(),
err: err,
}
}
return subChunk.Iterator(nil)
}
// AggrChunkWrapper wraps AggrChunk to implement chunkenc.Chunk interface.
// It delegates iteration to a specific aggregate type.
type AggrChunkWrapper struct {
data []byte
aggrType AggrType
}
// NewAggrChunkWrapper creates a new AggrChunk wrapper for the specified aggregate type.
func NewAggrChunkWrapper(data []byte, aggrType AggrType) *AggrChunkWrapper {
return &AggrChunkWrapper{
data: data,
aggrType: aggrType,
}
}
// Bytes returns the underlying byte slice.
func (c *AggrChunkWrapper) Bytes() []byte {
return c.data
}
// Encoding returns the AggrChunk encoding.
func (c *AggrChunkWrapper) Encoding() chunkenc.Encoding {
return ChunkEncAggr
}
// Appender returns an error since AggrChunk is read-only.
func (c *AggrChunkWrapper) Appender() (chunkenc.Appender, error) {
return nil, errors.New("AggrChunk is read-only")
}
// Iterator returns an iterator for the specified aggregate type.
func (c *AggrChunkWrapper) Iterator(it chunkenc.Iterator) chunkenc.Iterator {
return newAggrChunkIterator(c.data, c.aggrType)
}
// NumSamples returns the number of samples in the aggregate.
func (c *AggrChunkWrapper) NumSamples() int {
chunk := AggrChunk(c.data)
subChunk, err := chunk.Get(c.aggrType)
if err != nil {
return 0
}
return subChunk.NumSamples()
}
// Compact is a no-op for read-only AggrChunk.
func (c *AggrChunkWrapper) Compact() {}
// Reset resets the chunk with new data.
func (c *AggrChunkWrapper) Reset(stream []byte) {
c.data = stream
}
// AggrChunkPool is a custom Pool that understands AggrChunk encoding (0xff).
// It delegates standard encodings to the default pool and handles AggrChunk specially.
type AggrChunkPool struct {
defaultPool chunkenc.Pool
aggrType AggrType
}
// NewAggrChunkPool creates a new pool that handles AggrChunk encoding.
func NewAggrChunkPool(aggrType AggrType) *AggrChunkPool {
return &AggrChunkPool{
defaultPool: chunkenc.NewPool(),
aggrType: aggrType,
}
}
// Get returns a chunk for the given encoding and data.
func (p *AggrChunkPool) Get(e chunkenc.Encoding, b []byte) (chunkenc.Chunk, error) {
if e == ChunkEncAggr {
return NewAggrChunkWrapper(b, p.aggrType), nil
}
return p.defaultPool.Get(e, b)
}
// Put returns a chunk to the pool.
func (p *AggrChunkPool) Put(c chunkenc.Chunk) error {
if c.Encoding() == ChunkEncAggr {
// AggrChunk wrappers are not pooled
return nil
}
return p.defaultPool.Put(c)
}

View File

@@ -0,0 +1,110 @@
package thanos
import (
"encoding/json"
"os"
"path/filepath"
)
// BlockMeta extends Prometheus BlockMeta with Thanos-specific fields.
type BlockMeta struct {
// Thanos-specific metadata
Thanos ThanosMeta `json:"thanos,omitempty"`
}
// ThanosMeta contains Thanos-specific block metadata.
type ThanosMeta struct {
// Labels are external labels identifying the producer.
Labels map[string]string `json:"labels,omitempty"`
// Downsample contains downsampling information.
Downsample ThanosDownsample `json:"downsample,omitempty"`
// Source indicates where the block came from.
Source string `json:"source,omitempty"`
// SegmentFiles contains list of segment files in the block.
SegmentFiles []string `json:"segment_files,omitempty"`
// Files contains metadata about files in the block.
Files []ThanosFile `json:"files,omitempty"`
}
// ThanosDownsample contains downsampling resolution info.
type ThanosDownsample struct {
// Resolution is the downsampling resolution in milliseconds.
// 0 means raw data (no downsampling).
// 300000 (5 minutes) or 3600000 (1 hour) for downsampled data.
Resolution int64 `json:"resolution"`
}
// ThanosFile contains metadata about a file in the block.
type ThanosFile struct {
RelPath string `json:"rel_path"`
SizeBytes int64 `json:"size_bytes,omitempty"`
}
// ResolutionLevel represents the downsampling resolution.
type ResolutionLevel int64
const (
// ResolutionRaw is for raw, non-downsampled data.
ResolutionRaw ResolutionLevel = 0
// Resolution5m is for 5-minute downsampled data (300000 ms).
Resolution5m ResolutionLevel = 300000
// Resolution1h is for 1-hour downsampled data (3600000 ms).
Resolution1h ResolutionLevel = 3600000
)
// String returns human-readable resolution string.
func (r ResolutionLevel) String() string {
switch r {
case ResolutionRaw:
return "raw"
case Resolution5m:
return "5m"
case Resolution1h:
return "1h"
default:
return "unknown"
}
}
// ReadBlockMeta reads Thanos-extended block metadata from meta.json.
func ReadBlockMeta(blockDir string) (*BlockMeta, error) {
metaPath := filepath.Join(blockDir, "meta.json")
data, err := os.ReadFile(metaPath)
if err != nil {
return nil, err
}
var meta BlockMeta
if err := json.Unmarshal(data, &meta); err != nil {
return nil, err
}
return &meta, nil
}
// IsDownsampled returns true if the block contains downsampled data.
func (m *BlockMeta) IsDownsampled() bool {
return m.Thanos.Downsample.Resolution > 0
}
// Resolution returns the block's downsampling resolution.
func (m *BlockMeta) Resolution() ResolutionLevel {
return ResolutionLevel(m.Thanos.Downsample.Resolution)
}
// ResolutionSuffix returns a suffix string for metric names based on resolution.
// For example: ":5m" or ":1h" for downsampled data, empty for raw data.
func (m *BlockMeta) ResolutionSuffix() string {
switch m.Resolution() {
case Resolution5m:
return ":5m"
case Resolution1h:
return ":1h"
default:
return ""
}
}
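Per the `-thanos-aggr-types` flag description, a downsampled series is imported under the base name plus the resolution suffix and the aggregate type (e.g. `metric_name:5m:count`). A sketch of that naming; the helper is illustrative, not part of the code:

```go
package main

import "fmt"

// downsampledName builds the metric name a downsampled series is imported
// under: base name + resolution suffix (":5m" or ":1h") + aggregate type.
// Raw data has an empty suffix and keeps its original name.
func downsampledName(base, resSuffix, aggr string) string {
	if resSuffix == "" {
		return base
	}
	return fmt.Sprintf("%s%s:%s", base, resSuffix, aggr)
}

func main() {
	fmt.Println(downsampledName("http_requests_total", ":5m", "count")) // http_requests_total:5m:count
	fmt.Println(downsampledName("up", "", "count"))                     // up
}
```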

View File

@@ -0,0 +1,83 @@
package thanos
import (
"fmt"
"io"
"os"
"path/filepath"
"github.com/prometheus/prometheus/tsdb"
"github.com/prometheus/prometheus/tsdb/chunkenc"
)
// BlockInfo contains information about a block including Thanos metadata.
type BlockInfo struct {
Block tsdb.BlockReader
Resolution ResolutionLevel
IsThanos bool
// Closer releases the block's resources (file descriptors, mmap).
// Must be called only after all queriers on this block have been closed.
Closer io.Closer
}
// OpenBlocksWithInfo opens all blocks and returns them with their metadata.
// snapshotDir must be a snapshot directory containing block directories.
func OpenBlocksWithInfo(snapshotDir string, aggrType AggrType) ([]BlockInfo, error) {
entries, err := os.ReadDir(snapshotDir)
if err != nil {
return nil, fmt.Errorf("failed to read snapshot directory: %w", err)
}
var blocks []BlockInfo
for _, entry := range entries {
if !entry.IsDir() {
continue
}
blockDir := filepath.Join(snapshotDir, entry.Name())
metaPath := filepath.Join(blockDir, "meta.json")
// Check if this is a valid block directory (has meta.json)
if _, err := os.Stat(metaPath); os.IsNotExist(err) {
continue
}
meta, err := ReadBlockMeta(blockDir)
if err != nil {
CloseBlocks(blocks)
return nil, fmt.Errorf("failed to read Thanos metadata for block %s: %w", blockDir, err)
}
var pool chunkenc.Pool
if meta.IsDownsampled() {
// Use AggrChunkPool for downsampled blocks
pool = NewAggrChunkPool(aggrType)
}
block, err := tsdb.OpenBlock(nil, blockDir, pool, nil)
if err != nil {
// Close previously opened blocks before returning error
CloseBlocks(blocks)
return nil, fmt.Errorf("failed to open block %s: %w", blockDir, err)
}
blocks = append(blocks, BlockInfo{
Block: block,
Resolution: meta.Resolution(),
IsThanos: true,
Closer: block,
})
}
return blocks, nil
}
// CloseBlocks closes all blocks in the slice.
// Must be called only after all queriers on these blocks have been closed.
func CloseBlocks(blocks []BlockInfo) {
for _, bi := range blocks {
if bi.Closer != nil {
_ = bi.Closer.Close()
}
}
}

app/vmctl/thanos/client.go (new file, 198 lines)

View File

@@ -0,0 +1,198 @@
package thanos
import (
"context"
"fmt"
"time"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/storage"
"github.com/prometheus/prometheus/tsdb"
)
// Config contains parameters for reading Thanos snapshots.
type Config struct {
Snapshot string
Filter Filter
}
// Filter contains configuration for filtering the timeseries.
type Filter struct {
TimeMin string
TimeMax string
Label string
LabelValue string
}
// Client reads Thanos snapshot blocks, including downsampled blocks with AggrChunk encoding.
type Client struct {
snapshotPath string
filter filter
statsPrinted bool
}
type filter struct {
min, max int64
label string
labelValue string
}
func (f filter) inRange(minV, maxV int64) bool {
fmin, fmax := f.min, f.max
if fmin == 0 {
fmin = minV
}
if fmax == 0 {
fmax = maxV
}
return minV <= fmax && fmin <= maxV
}
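inRange is a standard interval-overlap test where a zero bound means the filter is open on that side. The same logic in isolation:

```go
package main

import "fmt"

// overlaps reports whether a block's [minV, maxV] time range intersects the
// filter window [fmin, fmax]; a zero bound means the filter is open on that
// side, matching filter.inRange (all values are timestamps in milliseconds).
func overlaps(fmin, fmax, minV, maxV int64) bool {
	if fmin == 0 {
		fmin = minV
	}
	if fmax == 0 {
		fmax = maxV
	}
	return minV <= fmax && fmin <= maxV
}

func main() {
	fmt.Println(overlaps(0, 0, 100, 200))     // no filter: true
	fmt.Println(overlaps(150, 0, 100, 200))   // open-ended upper bound: true
	fmt.Println(overlaps(250, 300, 100, 200)) // disjoint window: false
}
```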
// NewClient creates a new Thanos snapshot client.
func NewClient(cfg Config) (*Client, error) {
minTime, maxTime, err := parseTime(cfg.Filter.TimeMin, cfg.Filter.TimeMax)
if err != nil {
return nil, fmt.Errorf("failed to parse time in filter: %s", err)
}
return &Client{
snapshotPath: cfg.Snapshot,
filter: filter{
min: minTime,
max: maxTime,
label: cfg.Filter.Label,
labelValue: cfg.Filter.LabelValue,
},
}, nil
}
// Explore fetches all available blocks from the snapshot with support for
// Thanos AggrChunk (downsampled blocks). It opens blocks with a custom pool
// that can decode AggrChunk encoding (0xff).
func (c *Client) Explore(aggrType AggrType) ([]BlockInfo, error) {
blockInfos, err := OpenBlocksWithInfo(c.snapshotPath, aggrType)
if err != nil {
return nil, fmt.Errorf("failed to open blocks: %w", err)
}
s := &Stats{
Filtered: c.filter.min != 0 || c.filter.max != 0 || c.filter.label != "",
Blocks: len(blockInfos),
}
var blocksToImport []BlockInfo
for _, bi := range blockInfos {
meta := bi.Block.Meta()
if s.MinTime == 0 || meta.MinTime < s.MinTime {
s.MinTime = meta.MinTime
}
if s.MaxTime == 0 || meta.MaxTime > s.MaxTime {
s.MaxTime = meta.MaxTime
}
if !c.filter.inRange(meta.MinTime, meta.MaxTime) {
s.SkippedBlocks++
if bi.Closer != nil {
_ = bi.Closer.Close()
}
continue
}
s.Samples += meta.Stats.NumSamples
s.Series += meta.Stats.NumSeries
blocksToImport = append(blocksToImport, bi)
}
if !c.statsPrinted {
fmt.Println(s)
c.statsPrinted = true
}
return blocksToImport, nil
}
// querierSeriesSet wraps a SeriesSet and its underlying Querier, ensuring
// the querier is closed once the SeriesSet has been fully consumed.
// This releases the querier's read reference on the block, which is required
// for Block.Close() to complete without hanging.
type querierSeriesSet struct {
storage.SeriesSet
q storage.Querier
closed bool
}
// Next advances the iterator. When the underlying SeriesSet is exhausted,
// it closes the querier to release resources.
func (s *querierSeriesSet) Next() bool {
if s.SeriesSet.Next() {
return true
}
if !s.closed {
_ = s.q.Close()
s.closed = true
}
return false
}
// Close explicitly closes the underlying querier.
// This must be called if iteration is stopped early (before Next returns false)
// to release block read references and prevent Block.Close() from hanging.
func (s *querierSeriesSet) Close() {
if !s.closed {
_ = s.q.Close()
s.closed = true
}
}
// ClosableSeriesSet extends storage.SeriesSet with a Close method for explicit cleanup.
type ClosableSeriesSet interface {
storage.SeriesSet
Close()
}
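The wrapper's contract (close the underlying resource exactly once, either when iteration is exhausted or when Close is called early) can be shown without the Prometheus storage types. A generic sketch:

```go
package main

import "fmt"

// closeOnExhaust wraps an iterator-style next function and runs closeFn
// exactly once, either when iteration is exhausted or when Close is called
// early. This is the same contract querierSeriesSet gives the block querier.
type closeOnExhaust struct {
	next    func() bool
	closeFn func()
	closed  bool
}

func (c *closeOnExhaust) Next() bool {
	if c.next() {
		return true
	}
	c.Close()
	return false
}

func (c *closeOnExhaust) Close() {
	if !c.closed {
		c.closeFn()
		c.closed = true
	}
}

func main() {
	i, closes := 0, 0
	it := &closeOnExhaust{
		next:    func() bool { i++; return i <= 2 },
		closeFn: func() { closes++ },
	}
	for it.Next() {
	}
	it.Close() // e.g. from a defer: must not double-close
	fmt.Println(closes) // 1
}
```

Releasing the querier this way is what lets a later Block.Close() return instead of waiting on outstanding read references.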
// Read reads the given BlockInfo according to configured time and label filters.
// The returned ClosableSeriesSet automatically closes the underlying querier when fully consumed,
// but Close() should be called explicitly (e.g., via defer) to handle early returns.
func (c *Client) Read(bi BlockInfo) (ClosableSeriesSet, error) {
minTime, maxTime := bi.Block.Meta().MinTime, bi.Block.Meta().MaxTime
if c.filter.min != 0 {
minTime = c.filter.min
}
if c.filter.max != 0 {
maxTime = c.filter.max
}
q, err := tsdb.NewBlockQuerier(bi.Block, minTime, maxTime)
if err != nil {
return nil, err
}
ss := q.Select(
context.Background(),
false,
nil,
labels.MustNewMatcher(labels.MatchRegexp, c.filter.label, c.filter.labelValue),
)
return &querierSeriesSet{
SeriesSet: ss,
q: q,
}, nil
}
func parseTime(start, end string) (int64, int64, error) {
var s, e int64
if start == "" && end == "" {
return 0, 0, nil
}
if start != "" {
v, err := time.Parse(time.RFC3339, start)
if err != nil {
return 0, 0, fmt.Errorf("failed to parse %q: %s", start, err)
}
s = v.UnixNano() / int64(time.Millisecond)
}
if end != "" {
v, err := time.Parse(time.RFC3339, end)
if err != nil {
return 0, 0, fmt.Errorf("failed to parse %q: %s", end, err)
}
e = v.UnixNano() / int64(time.Millisecond)
}
return s, e, nil
}
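parseTime converts RFC3339 timestamps to Unix milliseconds, the unit block MinTime/MaxTime use. The conversion in isolation:

```go
package main

import (
	"fmt"
	"time"
)

// toMillis converts an RFC3339 timestamp to Unix milliseconds, matching the
// conversion parseTime performs for the time filters.
func toMillis(s string) (int64, error) {
	v, err := time.Parse(time.RFC3339, s)
	if err != nil {
		return 0, err
	}
	return v.UnixNano() / int64(time.Millisecond), nil
}

func main() {
	ms, err := toMillis("2020-01-01T20:07:00Z")
	fmt.Println(ms, err) // 1577909220000 <nil>
}
```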

app/vmctl/thanos/stats.go (new file, 38 lines)

View File

@@ -0,0 +1,38 @@
package thanos
import (
"fmt"
"time"
)
// Stats represents data migration stats for Thanos blocks.
type Stats struct {
Filtered bool
MinTime int64
MaxTime int64
Samples uint64
Series uint64
Blocks int
SkippedBlocks int
}
// String returns string representation for s.
func (s Stats) String() string {
str := fmt.Sprintf("Thanos snapshot stats:\n"+
" blocks found: %d;\n"+
" blocks skipped by time filter: %d;\n"+
" min time: %d (%v);\n"+
" max time: %d (%v);\n"+
" samples: %d;\n"+
" series: %d.",
s.Blocks, s.SkippedBlocks,
s.MinTime, time.Unix(s.MinTime/1e3, 0).Format(time.RFC3339),
s.MaxTime, time.Unix(s.MaxTime/1e3, 0).Format(time.RFC3339),
s.Samples, s.Series)
if s.Filtered {
str += "\n* Stats numbers are based on blocks meta info and don't account for applied filters."
}
return str
}

View File

@@ -0,0 +1,309 @@
package main
import (
"context"
"fmt"
"log"
"strings"
"sync"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/barpool"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/thanos"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
)
type thanosProcessor struct {
cl *thanos.Client
im *vm.Importer
cc int
isVerbose bool
aggrTypes []thanos.AggrType
}
func (tp *thanosProcessor) run(ctx context.Context) error {
if len(tp.aggrTypes) == 0 {
tp.aggrTypes = thanos.AllAggrTypes
}
log.Printf("Processing blocks with aggregate types: %v", tp.aggrTypes)
// Use the first aggregate type to explore blocks (block list is the same for all types)
blocks, err := tp.cl.Explore(tp.aggrTypes[0])
if err != nil {
return fmt.Errorf("explore failed: %s", err)
}
if len(blocks) < 1 {
return fmt.Errorf("found no blocks to import")
}
// Separate blocks into raw (resolution=0) and downsampled (resolution>0)
var rawBlocks, downsampledBlocks []thanos.BlockInfo
for _, block := range blocks {
if block.Resolution == thanos.ResolutionRaw {
rawBlocks = append(rawBlocks, block)
} else {
downsampledBlocks = append(downsampledBlocks, block)
}
}
log.Printf("Found %d raw blocks and %d downsampled blocks", len(rawBlocks), len(downsampledBlocks))
question := fmt.Sprintf("Found %d blocks to import (%d raw + %d downsampled with %d aggregate types). Continue?",
len(blocks), len(rawBlocks), len(downsampledBlocks), len(tp.aggrTypes))
if !prompt(ctx, question) {
return nil
}
// Calculate total number of block processing passes for the progress bar:
// raw blocks are processed once, downsampled blocks are processed once per aggregate type.
totalPasses := len(rawBlocks) + len(downsampledBlocks)*len(tp.aggrTypes)
thanosBlocksTotal.Add(totalPasses)
bar := barpool.AddWithTemplate(fmt.Sprintf(barTpl, "Processing blocks"), totalPasses)
if err := barpool.Start(); err != nil {
return err
}
defer barpool.Stop()
tp.im.ResetStats()
type phaseStats struct {
name string
series uint64
samples uint64
}
var phases []phaseStats
// Process raw blocks first (no aggregate suffix)
if len(rawBlocks) > 0 {
log.Println("Processing raw blocks (resolution=0)...")
stats, err := tp.processBlocks(rawBlocks, thanos.AggrTypeNone, bar)
if err != nil {
return fmt.Errorf("migration failed for raw blocks: %s", err)
}
phases = append(phases, phaseStats{
name: "raw",
series: stats.series,
samples: stats.samples,
})
}
// Close blocks from the initial Explore. The querierSeriesSet wrapper
// has already released all querier read references, so Close won't hang.
thanos.CloseBlocks(blocks)
// Process downsampled blocks for each aggregate type.
// Each type needs its own AggrChunkPool, so we reopen blocks per type.
for _, aggrType := range tp.aggrTypes {
if len(downsampledBlocks) < 1 {
break
}
log.Printf("Processing downsampled blocks with aggregate type: %s", aggrType)
aggrBlocks, err := tp.cl.Explore(aggrType)
if err != nil {
return fmt.Errorf("explore failed for aggr type %s: %s", aggrType, err)
}
var downsampledOnly []thanos.BlockInfo
for _, block := range aggrBlocks {
if block.Resolution != thanos.ResolutionRaw {
downsampledOnly = append(downsampledOnly, block)
}
}
if len(downsampledOnly) < 1 {
log.Printf("No downsampled blocks found for aggregate type %s, skipping", aggrType)
thanos.CloseBlocks(aggrBlocks)
continue
}
log.Printf("Processing %d blocks for aggregate type: %s", len(downsampledOnly), aggrType)
stats, err := tp.processBlocks(downsampledOnly, aggrType, bar)
thanos.CloseBlocks(aggrBlocks)
if err != nil {
return fmt.Errorf("migration failed for aggr type %s: %s", aggrType, err)
}
phases = append(phases, phaseStats{
name: aggrType.String(),
series: stats.series,
samples: stats.samples,
})
}
// Print per-phase and total statistics
var totalSeries, totalSamples uint64
log.Printf("Migration statistics (%d raw blocks, %d downsampled blocks):", len(rawBlocks), len(downsampledBlocks))
for _, p := range phases {
log.Printf(" %s: %d series, %d samples", p.name, p.series, p.samples)
totalSeries += p.series
totalSamples += p.samples
}
log.Printf(" total: %d series, %d samples", totalSeries, totalSamples)
// Wait for all buffers to flush
tp.im.Close()
// Drain import errors channel
for vmErr := range tp.im.Errors() {
if vmErr.Err != nil {
thanosErrorsTotal.Inc()
return fmt.Errorf("import process failed: %s", wrapErr(vmErr, tp.isVerbose))
}
}
log.Println("Import finished!")
log.Println(tp.im.Stats())
return nil
}
// processBlocksStats holds statistics collected during block processing.
type processBlocksStats struct {
blocks uint64
series uint64
samples uint64
}
func (tp *thanosProcessor) processBlocks(blocks []thanos.BlockInfo, aggrType thanos.AggrType, bar barpool.Bar) (processBlocksStats, error) {
blockReadersCh := make(chan thanos.BlockInfo)
errCh := make(chan error, tp.cc)
var processedBlocks, totalSeries, totalSamples uint64
var mu sync.Mutex
var wg sync.WaitGroup
for i := range tp.cc {
workerID := i
wg.Go(func() {
for bi := range blockReadersCh {
seriesCount, samplesCount, err := tp.do(bi, aggrType)
if err != nil {
thanosErrorsTotal.Inc()
errCh <- fmt.Errorf("read failed for block %q with aggr %s: %s", bi.Block.Meta().ULID, aggrType, err)
return
}
mu.Lock()
processedBlocks++
totalSeries += seriesCount
totalSamples += samplesCount
log.Printf("[Worker %d] Block %s: %d series, %d samples | Total: %d/%d blocks, %d series, %d samples",
workerID, bi.Block.Meta().ULID.String()[:8], seriesCount, samplesCount,
processedBlocks, len(blocks), totalSeries, totalSamples)
mu.Unlock()
thanosBlocksProcessed.Inc()
bar.Increment()
}
})
}
// Feed blocks to the workers; any error aborts the import.
for _, bi := range blocks {
select {
case thanosErr := <-errCh:
close(blockReadersCh)
wg.Wait()
return processBlocksStats{}, fmt.Errorf("thanos error: %s", thanosErr)
case vmErr := <-tp.im.Errors():
close(blockReadersCh)
wg.Wait()
thanosErrorsTotal.Inc()
return processBlocksStats{}, fmt.Errorf("import process failed: %s", wrapErr(vmErr, tp.isVerbose))
case blockReadersCh <- bi:
}
}
close(blockReadersCh)
wg.Wait()
close(errCh)
// Return the first worker error, if any.
if err, ok := <-errCh; ok {
return processBlocksStats{}, fmt.Errorf("block processing failed: %s", err)
return processBlocksStats{
blocks: processedBlocks,
series: totalSeries,
samples: totalSamples,
}, nil
}
func (tp *thanosProcessor) do(bi thanos.BlockInfo, aggrType thanos.AggrType) (uint64, uint64, error) {
ss, err := tp.cl.Read(bi)
if err != nil {
return 0, 0, fmt.Errorf("failed to read block: %s", err)
}
defer ss.Close() // Ensure querier is closed even on early returns
var it chunkenc.Iterator
var seriesCount, samplesCount uint64
for ss.Next() {
var name string
var labelPairs []vm.LabelPair
series := ss.At()
series.Labels().Range(func(label labels.Label) {
if label.Name == "__name__" {
name = label.Value
return
}
labelPairs = append(labelPairs, vm.LabelPair{
Name: strings.Clone(label.Name),
Value: strings.Clone(label.Value),
})
})
if name == "" {
return seriesCount, samplesCount, fmt.Errorf("failed to find `__name__` label in labelset for block %v", bi.Block.Meta().ULID)
}
// Add resolution and aggregate type suffix to metric name for downsampled blocks
if bi.Resolution != thanos.ResolutionRaw && aggrType != thanos.AggrTypeNone {
name = fmt.Sprintf("%s:%s:%s", name, bi.Resolution.String(), aggrType.String())
}
var timestamps []int64
var values []float64
it = series.Iterator(it)
for {
typ := it.Next()
if typ == chunkenc.ValNone {
break
}
if typ != chunkenc.ValFloat {
continue
}
t, v := it.At()
timestamps = append(timestamps, t)
values = append(values, v)
}
if err := it.Err(); err != nil {
return seriesCount, samplesCount, err
}
samplesCount += uint64(len(timestamps))
seriesCount++
ts := vm.TimeSeries{
Name: name,
LabelPairs: labelPairs,
Timestamps: timestamps,
Values: values,
}
if err := tp.im.Input(&ts); err != nil {
return seriesCount, samplesCount, err
}
}
return seriesCount, samplesCount, ss.Err()
}
var (
thanosBlocksTotal = metrics.NewCounter(`vmctl_thanos_migration_blocks_total`)
thanosBlocksProcessed = metrics.NewCounter(`vmctl_thanos_migration_blocks_processed`)
thanosErrorsTotal = metrics.NewCounter(`vmctl_thanos_migration_errors_total`)
)


@@ -22,6 +22,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
)
@@ -1362,7 +1363,7 @@ func applyGraphiteRegexpFilter(filter string, ss []string) ([]string, error) {
const maxFastAllocBlockSize = 32 * 1024
// GetMetricNamesStats returns statistic for timeseries metric names usage.
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (storage.MetricNamesStatsResponse, error) {
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (metricnamestats.StatsResult, error) {
qt = qt.NewChild("get metric names usage statistics with limit: %d, less or equal to: %d, match pattern=%q", limit, le, matchPattern)
defer qt.Done()
return vmstorage.GetMetricNamesStats(qt, limit, le, matchPattern)


@@ -11,6 +11,16 @@
{% stripspace %}
{% func ExportCSVHeader(fieldNames []string) %}
{% if len(fieldNames) == 0 %}{% return %}{% endif %}
{%s= fieldNames[0] %}
{% for _, fieldName := range fieldNames[1:] %}
,
{%s= fieldName %}
{% endfor %}
{% newline %}
{% endfunc %}
{% func ExportCSVLine(xb *exportBlock, fieldNames []string) %}
{% if len(xb.timestamps) == 0 || len(fieldNames) == 0 %}{% return %}{% endif %}
{% for i, timestamp := range xb.timestamps %}

File diff suppressed because it is too large


@@ -0,0 +1,132 @@
package prometheus
import (
"strings"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
func TestExportCSVHeader(t *testing.T) {
f := func(fieldNames []string, expected string) {
t.Helper()
got := ExportCSVHeader(fieldNames)
if got != expected {
t.Fatalf("ExportCSVHeader(%v): got %q; want %q", fieldNames, got, expected)
}
}
f(nil, "")
f([]string{}, "")
f([]string{"__value__"}, "__value__\n")
f([]string{"__timestamp__"}, "__timestamp__\n")
f([]string{"__timestamp__:rfc3339"}, "__timestamp__:rfc3339\n")
f([]string{"__name__"}, "__name__\n")
f([]string{"job"}, "job\n")
f([]string{"__timestamp__:rfc3339", "__value__"}, "__timestamp__:rfc3339,__value__\n")
f([]string{"__value__", "__timestamp__"}, "__value__,__timestamp__\n")
f([]string{"job", "instance"}, "job,instance\n")
f([]string{"__name__", "__value__", "__timestamp__:unix_s"}, "__name__,__value__,__timestamp__:unix_s\n")
f([]string{"job", "instance", "__value__", "__timestamp__:unix_ms"}, "job,instance,__value__,__timestamp__:unix_ms\n")
f([]string{"__timestamp__:custom:2006-01-02", "__value__", "host", "dc", "env"},
"__timestamp__:custom:2006-01-02,__value__,host,dc,env\n")
// duplicate fields
f([]string{"__value__", "__value__"}, "__value__,__value__\n")
f([]string{"__timestamp__", "__timestamp__:rfc3339"}, "__timestamp__,__timestamp__:rfc3339\n")
}
func TestExportCSVLine(t *testing.T) {
localBak := time.Local
time.Local = time.UTC
defer func() { time.Local = localBak }()
f := func(mn *storage.MetricName, timestamps []int64, values []float64, fieldNames []string, expected string) {
t.Helper()
xb := &exportBlock{
mn: mn,
timestamps: timestamps,
values: values,
}
got := ExportCSVLine(xb, fieldNames)
if got != expected {
t.Fatalf("ExportCSVLine: got %q; want %q", got, expected)
}
}
mn := &storage.MetricName{
MetricGroup: []byte("cpu_usage"),
Tags: []storage.Tag{
{Key: []byte("job"), Value: []byte("node")},
{Key: []byte("instance"), Value: []byte("localhost:9090")},
},
}
// empty inputs
f(mn, nil, nil, []string{"__value__"}, "")
f(mn, []int64{}, []float64{}, []string{"__value__"}, "")
f(mn, []int64{1000}, []float64{1.5}, nil, "")
f(mn, []int64{1000}, []float64{1.5}, []string{}, "")
f(mn, []int64{1000}, []float64{42.5}, []string{"__value__"}, "42.5\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__"}, "1704067200000\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:unix_s"}, "1704067200\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:unix_ms"}, "1704067200000\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:unix_ns"}, "1704067200000000000\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:rfc3339"}, "2024-01-01T00:00:00Z\n")
f(mn, []int64{1000}, []float64{1}, []string{"__name__"}, "cpu_usage\n")
f(mn, []int64{1000}, []float64{1}, []string{"job"}, "node\n")
f(mn, []int64{1000}, []float64{1}, []string{"instance"}, "localhost:9090\n")
f(mn, []int64{1000}, []float64{1}, []string{"missing_label"}, "\n")
// multiple fields
f(mn, []int64{1704067200000}, []float64{99.9},
[]string{"__timestamp__:unix_s", "__value__", "job"},
"1704067200,99.9,node\n")
// multiple rows
f(mn, []int64{1000, 2000}, []float64{1.1, 2.2},
[]string{"__value__", "__timestamp__"},
"1.1,1000\n2.2,2000\n")
f(mn, []int64{1000, 2000, 3000}, []float64{10, 20, 30},
[]string{"__timestamp__:unix_s", "__value__"},
"1,10\n2,20\n3,30\n")
// escaping for special characters in tag values
f(&storage.MetricName{
MetricGroup: []byte("m"),
Tags: []storage.Tag{{Key: []byte("desc"), Value: []byte("a,b")}},
}, []int64{1000}, []float64{1}, []string{"desc"}, "\"a,b\"\n")
f(&storage.MetricName{
MetricGroup: []byte("m"),
Tags: []storage.Tag{{Key: []byte("desc"), Value: []byte(`say "hello"`)}},
}, []int64{1000}, []float64{1}, []string{"desc"}, "\"say \\\"hello\\\"\"\n")
f(&storage.MetricName{
MetricGroup: []byte("m"),
Tags: []storage.Tag{{Key: []byte("desc"), Value: []byte("line1\nline2")}},
}, []int64{1000}, []float64{1}, []string{"desc"}, "\"line1\\nline2\"\n")
// header and data line field counts must match
fieldNames := []string{"__name__", "job", "instance", "__value__", "__timestamp__:unix_s"}
header := ExportCSVHeader(fieldNames)
line := ExportCSVLine(&exportBlock{
mn: mn,
timestamps: []int64{1704067200000},
values: []float64{99.9},
}, fieldNames)
headerCommas := strings.Count(header, ",")
lineCommas := strings.Count(line, ",")
if headerCommas != lineCommas {
t.Fatalf("header has %d commas, data line has %d commas", headerCommas, lineCommas)
}
if headerCommas != len(fieldNames)-1 {
t.Fatalf("expected %d commas in header, got %d", len(fieldNames)-1, headerCommas)
}
}
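The escaping expectations in the tests above (quote only fields containing a comma, a double quote, or a newline, with backslash escapes rather than RFC 4180 doubled quotes) match what Go's `strconv.Quote` produces. A hypothetical helper sketching that rule, for illustration only (it is not the actual `ExportCSVLine` implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// escapeCSVField quotes a field only when it contains a comma, a double
// quote, or a newline. The quoted form uses Go-style backslash escapes
// (strconv.Quote), matching the expected strings in the tests above.
func escapeCSVField(s string) string {
	if strings.ContainsAny(s, ",\"\n") {
		return strconv.Quote(s)
	}
	return s
}

func main() {
	fmt.Println(escapeCSVField("a,b"))         // "a,b"
	fmt.Println(escapeCSVField(`say "hello"`)) // "say \"hello\""
	fmt.Println(escapeCSVField("plain"))       // plain
}
```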


@@ -175,6 +175,7 @@ func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Reques
w.Header().Set("Content-Type", "text/csv; charset=utf-8")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteExportCSVHeader(bw, fieldNames)
sw := newScalableWriter(bw)
writeCSVLine := func(xb *exportBlock, workerID uint) error {
if len(xb.timestamps) == 0 {


@@ -1,6 +1,7 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
) %}
{% stripspace %}
@@ -34,9 +35,9 @@ TSDBStatusResponse generates response for /api/v1/status/tsdb .
]
{% endfunc %}
{% func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) %}
{% func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) %}
{% code
queryStatsByMetricName := make(map[string]storage.MetricNamesStatsRecord,len(queryStats))
queryStatsByMetricName := make(map[string]metricnamestats.StatRecord,len(queryStats))
for _, record := range queryStats{
queryStatsByMetricName[record.MetricName] = record
}


@@ -8,228 +8,229 @@ package prometheus
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
)
// TSDBStatusResponse generates response for /api/v1/status/tsdb .
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
func StreamTSDBStatusResponse(qw422016 *qt422016.Writer, status *storage.TSDBStatus, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
qw422016.N().S(`{"status":"success","data":{"totalSeries":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:12
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
qw422016.N().DUL(status.TotalSeries)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:12
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
qw422016.N().S(`,"totalLabelValuePairs":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
qw422016.N().DUL(status.TotalLabelValuePairs)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
qw422016.N().S(`,"seriesCountByMetricName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
streamtsdbStatusMetricNameEntries(qw422016, status.SeriesCountByMetricName, status.SeriesQueryStatsByMetricName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
qw422016.N().S(`,"seriesCountByLabelName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
streamtsdbStatusEntries(qw422016, status.SeriesCountByLabelName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qw422016.N().S(`,"seriesCountByFocusLabelValue":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
streamtsdbStatusEntries(qw422016, status.SeriesCountByFocusLabelValue)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
qw422016.N().S(`,"seriesCountByLabelValuePair":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
streamtsdbStatusEntries(qw422016, status.SeriesCountByLabelValuePair)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
qw422016.N().S(`,"labelValueCountByLabelName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
//line app/vmselect/prometheus/tsdb_status_response.qtpl:19
streamtsdbStatusEntries(qw422016, status.LabelValueCountByLabelName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
//line app/vmselect/prometheus/tsdb_status_response.qtpl:19
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:20
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
qt.Done()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
//line app/vmselect/prometheus/tsdb_status_response.qtpl:22
streamdumpQueryTrace(qw422016, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
//line app/vmselect/prometheus/tsdb_status_response.qtpl:22
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
func WriteTSDBStatusResponse(qq422016 qtio422016.Writer, status *storage.TSDBStatus, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
StreamTSDBStatusResponse(qw422016, status, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
func TSDBStatusResponse(status *storage.TSDBStatus, qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
WriteTSDBStatusResponse(qb422016, status, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
func streamtsdbStatusEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
qw422016.N().S(`[`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:27
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
for i, e := range a {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:27
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
qw422016.N().S(`{"name":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:29
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
qw422016.N().Q(e.Name)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:29
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
qw422016.N().S(`,"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
//line app/vmselect/prometheus/tsdb_status_response.qtpl:31
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
//line app/vmselect/prometheus/tsdb_status_response.qtpl:31
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:32
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
if i+1 < len(a) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:32
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:32
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
//line app/vmselect/prometheus/tsdb_status_response.qtpl:34
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
//line app/vmselect/prometheus/tsdb_status_response.qtpl:34
qw422016.N().S(`]`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
func writetsdbStatusEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
streamtsdbStatusEntries(qw422016, a)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
func tsdbStatusEntries(a []storage.TopHeapEntry) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
writetsdbStatusEntries(qb422016, a)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:37
func streamtsdbStatusMetricNameEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:39
queryStatsByMetricName := make(map[string]storage.MetricNamesStatsRecord, len(queryStats))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:38
func streamtsdbStatusMetricNameEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:40
queryStatsByMetricName := make(map[string]metricnamestats.StatRecord, len(queryStats))
for _, record := range queryStats {
queryStatsByMetricName[record.MetricName] = record
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:43
//line app/vmselect/prometheus/tsdb_status_response.qtpl:44
qw422016.N().S(`[`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:45
//line app/vmselect/prometheus/tsdb_status_response.qtpl:46
for i, e := range a {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:45
//line app/vmselect/prometheus/tsdb_status_response.qtpl:46
qw422016.N().S(`{`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:48
//line app/vmselect/prometheus/tsdb_status_response.qtpl:49
entry, ok := queryStatsByMetricName[e.Name]
//line app/vmselect/prometheus/tsdb_status_response.qtpl:49
//line app/vmselect/prometheus/tsdb_status_response.qtpl:50
qw422016.N().S(`"name":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:50
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
qw422016.N().Q(e.Name)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:50
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
if !ok {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
qw422016.N().S(`"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:52
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:53
} else {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:53
if !ok {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:52
qw422016.N().S(`"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:54
//line app/vmselect/prometheus/tsdb_status_response.qtpl:53
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:54
} else {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:54
qw422016.N().S(`"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().S(`,"requestsCount":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().D(int(entry.RequestsCount))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().S(`,"lastRequestTimestamp":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:56
qw422016.N().D(int(entry.RequestsCount))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:56
qw422016.N().S(`,"lastRequestTimestamp":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:57
qw422016.N().D(int(entry.LastRequestTs))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:57
//line app/vmselect/prometheus/tsdb_status_response.qtpl:58
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:57
//line app/vmselect/prometheus/tsdb_status_response.qtpl:58
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:59
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
if i+1 < len(a) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:59
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:59
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
//line app/vmselect/prometheus/tsdb_status_response.qtpl:61
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
//line app/vmselect/prometheus/tsdb_status_response.qtpl:61
qw422016.N().S(`]`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
func writetsdbStatusMetricNameEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
func writetsdbStatusMetricNameEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
streamtsdbStatusMetricNameEntries(qw422016, a, queryStats)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
writetsdbStatusMetricNameEntries(qb422016, a, queryStats)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
}


@@ -1,11 +1,11 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
) %}
{% stripspace %}
MetricNamesStatsResponse generates response for /api/v1/status/metric_names_stats .
{% func MetricNamesStatsResponse(stats *storage.MetricNamesStatsResponse, qt *querytracer.Tracer) %}
{% func MetricNamesStatsResponse(stats *metricnamestats.StatsResult, qt *querytracer.Tracer) %}
{
"status":"success",
"statsCollectedSince": {%dul= stats.CollectedSinceTs %},


@@ -7,7 +7,7 @@ package stats
//line app/vmselect/stats/metric_names_usage_response.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
)
// MetricNamesStatsResponse generates response for /api/v1/status/metric_names_stats .
@@ -26,7 +26,7 @@ var (
)
//line app/vmselect/stats/metric_names_usage_response.qtpl:8
func StreamMetricNamesStatsResponse(qw422016 *qt422016.Writer, stats *storage.MetricNamesStatsResponse, qt *querytracer.Tracer) {
func StreamMetricNamesStatsResponse(qw422016 *qt422016.Writer, stats *metricnamestats.StatsResult, qt *querytracer.Tracer) {
//line app/vmselect/stats/metric_names_usage_response.qtpl:8
qw422016.N().S(`{"status":"success","statsCollectedSince":`)
//line app/vmselect/stats/metric_names_usage_response.qtpl:11
@@ -91,7 +91,7 @@ func StreamMetricNamesStatsResponse(qw422016 *qt422016.Writer, stats *storage.Me
}
//line app/vmselect/stats/metric_names_usage_response.qtpl:31
func WriteMetricNamesStatsResponse(qq422016 qtio422016.Writer, stats *storage.MetricNamesStatsResponse, qt *querytracer.Tracer) {
func WriteMetricNamesStatsResponse(qq422016 qtio422016.Writer, stats *metricnamestats.StatsResult, qt *querytracer.Tracer) {
//line app/vmselect/stats/metric_names_usage_response.qtpl:31
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/stats/metric_names_usage_response.qtpl:31
@@ -102,7 +102,7 @@ func WriteMetricNamesStatsResponse(qq422016 qtio422016.Writer, stats *storage.Me
}
//line app/vmselect/stats/metric_names_usage_response.qtpl:31
func MetricNamesStatsResponse(stats *storage.MetricNamesStatsResponse, qt *querytracer.Tracer) string {
func MetricNamesStatsResponse(stats *metricnamestats.StatsResult, qt *querytracer.Tracer) string {
//line app/vmselect/stats/metric_names_usage_response.qtpl:31
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/stats/metric_names_usage_response.qtpl:31

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -37,9 +37,9 @@
<meta property="og:title" content="UI for VictoriaMetrics">
<meta property="og:url" content="https://victoriametrics.com/">
<meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data">
<script type="module" crossorigin src="./assets/index-KEOgEEMl.js"></script>
<script type="module" crossorigin src="./assets/index-C24BPpD_.js"></script>
<link rel="modulepreload" crossorigin href="./assets/rolldown-runtime-COnpUsM8.js">
<link rel="modulepreload" crossorigin href="./assets/vendor-Mr0bmX1E.js">
<link rel="modulepreload" crossorigin href="./assets/vendor-BWBgVCcr.js">
<link rel="stylesheet" crossorigin href="./assets/vendor-CnsZ1jie.css">
<link rel="stylesheet" crossorigin href="./assets/index-D2OEy8Ra.css">
</head>


@@ -5,6 +5,7 @@ import (
"flag"
"fmt"
"io"
"math"
"net/http"
"strconv"
"strings"
@@ -22,6 +23,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/syncwg"
@@ -55,11 +57,13 @@ var (
denyQueriesOutsideRetention = flag.Bool("denyQueriesOutsideRetention", false, "Whether to deny queries outside the configured -retentionPeriod. "+
"When set, then /api/v1/query_range would return '503 Service Unavailable' error for queries with 'from' value outside -retentionPeriod. "+
"This may be useful when multiple data sources with distinct retentions are hidden behind query-tee")
maxHourlySeries = flag.Int("storage.maxHourlySeries", 0, "The maximum number of unique series can be added to the storage during the last hour. "+
maxHourlySeries = flag.Int64("storage.maxHourlySeries", 0, "The maximum number of unique series can be added to the storage during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cardinality-limiter . "+
fmt.Sprintf("Setting this flag to '-1' sets limit to maximum possible value (%d) which is useful in order to enable series tracking without enforcing limits. ", math.MaxInt32)+
"See also -storage.maxDailySeries")
maxDailySeries = flag.Int("storage.maxDailySeries", 0, "The maximum number of unique series can be added to the storage during the last 24 hours. "+
maxDailySeries = flag.Int64("storage.maxDailySeries", 0, "The maximum number of unique series can be added to the storage during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cardinality-limiter . "+
fmt.Sprintf("Setting this flag to '-1' sets limit to maximum possible value (%d) which is useful in order to enable series tracking without enforcing limits. ", math.MaxInt32)+
"See also -storage.maxHourlySeries")
minFreeDiskSpaceBytes = flagutil.NewBytes("storage.minFreeDiskSpaceBytes", 100e6, "The minimum free disk space at -storageDataPath after which the storage stops accepting new data")
@@ -141,8 +145,8 @@ func Init(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
WG = syncwg.WaitGroup{}
opts := storage.OpenOptions{
Retention: retentionPeriod.Duration(),
MaxHourlySeries: *maxHourlySeries,
MaxDailySeries: *maxDailySeries,
MaxHourlySeries: getMaxHourlySeries(),
MaxDailySeries: getMaxDailySeries(),
DisablePerDayIndex: *disablePerDayIndex,
TrackMetricNamesStats: *trackMetricNamesStats,
IDBPrefillStart: *idbPrefillStart,
@@ -233,7 +237,7 @@ func DeleteSeries(qt *querytracer.Tracer, tfss []*storage.TagFilters, maxMetrics
}
// GetMetricNamesStats returns metric names usage stats with the given limit and `le` predicate
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (storage.MetricNamesStatsResponse, error) {
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (metricnamestats.StatsResult, error) {
WG.Add(1)
r := Storage.GetMetricNamesStats(qt, limit, le, matchPattern)
WG.Done()
@@ -602,10 +606,10 @@ func writeStorageMetrics(w io.Writer, strg *storage.Storage) {
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="big_timestamp"}`, m.TooBigTimestampRows)
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="small_timestamp"}`, m.TooSmallTimestampRows)
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="invalid_raw_metric_name"}`, m.InvalidRawMetricNames)
if *maxHourlySeries > 0 {
if getMaxHourlySeries() > 0 {
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="hourly_limit_exceeded"}`, m.HourlySeriesLimitRowsDropped)
}
if *maxDailySeries > 0 {
if getMaxDailySeries() > 0 {
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="daily_limit_exceeded"}`, m.DailySeriesLimitRowsDropped)
}
@@ -615,13 +619,13 @@ func writeStorageMetrics(w io.Writer, strg *storage.Storage) {
metrics.WriteCounterUint64(w, `vm_slow_row_inserts_total`, m.SlowRowInserts)
metrics.WriteCounterUint64(w, `vm_slow_per_day_index_inserts_total`, m.SlowPerDayIndexInserts)
if *maxHourlySeries > 0 {
if getMaxHourlySeries() > 0 {
metrics.WriteGaugeUint64(w, `vm_hourly_series_limit_current_series`, m.HourlySeriesLimitCurrentSeries)
metrics.WriteGaugeUint64(w, `vm_hourly_series_limit_max_series`, m.HourlySeriesLimitMaxSeries)
metrics.WriteCounterUint64(w, `vm_hourly_series_limit_rows_dropped_total`, m.HourlySeriesLimitRowsDropped)
}
if *maxDailySeries > 0 {
if getMaxDailySeries() > 0 {
metrics.WriteGaugeUint64(w, `vm_daily_series_limit_current_series`, m.DailySeriesLimitCurrentSeries)
metrics.WriteGaugeUint64(w, `vm_daily_series_limit_max_series`, m.DailySeriesLimitMaxSeries)
metrics.WriteCounterUint64(w, `vm_daily_series_limit_rows_dropped_total`, m.DailySeriesLimitRowsDropped)
@@ -746,3 +750,21 @@ func jsonResponseError(w http.ResponseWriter, err error) {
errStr := err.Error()
fmt.Fprintf(w, `{"status":"error","msg":%s}`, stringsutil.JSONString(errStr))
}
func getMaxHourlySeries() int {
limit := *maxHourlySeries
if limit == -1 || limit > math.MaxInt32 {
return math.MaxInt32
}
return int(limit)
}
func getMaxDailySeries() int {
limit := *maxDailySeries
if limit == -1 || limit > math.MaxInt32 {
return math.MaxInt32
}
return int(limit)
}


@@ -1,4 +1,4 @@
FROM golang:1.26.1 AS build-web-stage
FROM golang:1.26.2 AS build-web-stage
COPY build /build
WORKDIR /build


@@ -17,7 +17,7 @@
"react-input-mask": "^2.0.4",
"react-router-dom": "^7.13.2",
"uplot": "^1.6.32",
"vite": "^8.0.2",
"vite": "^8.0.7",
"web-vitals": "^5.1.0"
},
"devDependencies": {
@@ -903,25 +903,27 @@
}
},
"node_modules/@napi-rs/wasm-runtime": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/@napi-rs/wasm-runtime/-/wasm-runtime-1.1.1.tgz",
"integrity": "sha512-p64ah1M1ld8xjWv3qbvFwHiFVWrq1yFvV4f7w+mzaqiR4IlSgkqhcRdHwsGgomwzBH51sRY4NEowLxnaBjcW/A==",
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@napi-rs/wasm-runtime/-/wasm-runtime-1.1.2.tgz",
"integrity": "sha512-sNXv5oLJ7ob93xkZ1XnxisYhGYXfaG9f65/ZgYuAu3qt7b3NadcOEhLvx28hv31PgX8SZJRYrAIPQilQmFpLVw==",
"license": "MIT",
"optional": true,
"dependencies": {
"@emnapi/core": "^1.7.1",
"@emnapi/runtime": "^1.7.1",
"@tybys/wasm-util": "^0.10.1"
},
"funding": {
"type": "github",
"url": "https://github.com/sponsors/Brooooooklyn"
},
"peerDependencies": {
"@emnapi/core": "^1.7.1",
"@emnapi/runtime": "^1.7.1"
}
},
"node_modules/@oxc-project/types": {
"version": "0.122.0",
"resolved": "https://registry.npmjs.org/@oxc-project/types/-/types-0.122.0.tgz",
"integrity": "sha512-oLAl5kBpV4w69UtFZ9xqcmTi+GENWOcPF7FCrczTiBbmC0ibXxCwyvZGbO39rCVEuLGAZM84DH0pUIyyv/YJzA==",
"version": "0.123.0",
"resolved": "https://registry.npmjs.org/@oxc-project/types/-/types-0.123.0.tgz",
"integrity": "sha512-YtECP/y8Mj1lSHiUWGSRzy/C6teUKlS87dEfuVKT09LgQbUsBW1rNg+MiJ4buGu3yuADV60gbIvo9/HplA56Ew==",
"license": "MIT",
"funding": {
"url": "https://github.com/sponsors/Boshen"
@@ -1050,9 +1052,6 @@
"cpu": [
"arm"
],
"libc": [
"glibc"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1073,9 +1072,6 @@
"cpu": [
"arm"
],
"libc": [
"musl"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1096,9 +1092,6 @@
"cpu": [
"arm64"
],
"libc": [
"glibc"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1119,9 +1112,6 @@
"cpu": [
"arm64"
],
"libc": [
"musl"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1142,9 +1132,6 @@
"cpu": [
"x64"
],
"libc": [
"glibc"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1165,9 +1152,6 @@
"cpu": [
"x64"
],
"libc": [
"musl"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1334,9 +1318,9 @@
}
},
"node_modules/@rolldown/binding-android-arm64": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-android-arm64/-/binding-android-arm64-1.0.0-rc.11.tgz",
"integrity": "sha512-SJ+/g+xNnOh6NqYxD0V3uVN4W3VfnrGsC9/hoglicgTNfABFG9JjISvkkU0dNY84MNHLWyOgxP9v9Y9pX4S7+A==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-android-arm64/-/binding-android-arm64-1.0.0-rc.13.tgz",
"integrity": "sha512-5ZiiecKH2DXAVJTNN13gNMUcCDg4Jy8ZjbXEsPnqa248wgOVeYRX0iqXXD5Jz4bI9BFHgKsI2qmyJynstbmr+g==",
"cpu": [
"arm64"
],
@@ -1350,9 +1334,9 @@
}
},
"node_modules/@rolldown/binding-darwin-arm64": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-darwin-arm64/-/binding-darwin-arm64-1.0.0-rc.11.tgz",
"integrity": "sha512-7WQgR8SfOPwmDZGFkThUvsmd/nwAWv91oCO4I5LS7RKrssPZmOt7jONN0cW17ydGC1n/+puol1IpoieKqQidmg==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-darwin-arm64/-/binding-darwin-arm64-1.0.0-rc.13.tgz",
"integrity": "sha512-tz/v/8G77seu8zAB3A5sK3UFoOl06zcshEzhUO62sAEtrEuW/H1CcyoupOrD+NbQJytYgA4CppXPzlrmp4JZKA==",
"cpu": [
"arm64"
],
@@ -1366,9 +1350,9 @@
}
},
"node_modules/@rolldown/binding-darwin-x64": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-darwin-x64/-/binding-darwin-x64-1.0.0-rc.11.tgz",
"integrity": "sha512-39Ks6UvIHq4rEogIfQBoBRusj0Q0nPVWIvqmwBLaT6aqQGIakHdESBVOPRRLacy4WwUPIx4ZKzfZ9PMW+IeyUQ==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-darwin-x64/-/binding-darwin-x64-1.0.0-rc.13.tgz",
"integrity": "sha512-8DakphqOz8JrMYWTJmWA+vDJxut6LijZ8Xcdc4flOlAhU7PNVwo2MaWBF9iXjJAPo5rC/IxEFZDhJ3GC7NHvug==",
"cpu": [
"x64"
],
@@ -1382,9 +1366,9 @@
}
},
"node_modules/@rolldown/binding-freebsd-x64": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-freebsd-x64/-/binding-freebsd-x64-1.0.0-rc.11.tgz",
"integrity": "sha512-jfsm0ZHfhiqrvWjJAmzsqiIFPz5e7mAoCOPBNTcNgkiid/LaFKiq92+0ojH+nmJmKYkre4t71BWXUZDNp7vsag==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-freebsd-x64/-/binding-freebsd-x64-1.0.0-rc.13.tgz",
"integrity": "sha512-4wBQFfjDuXYN/SVI8inBF3Aa+isq40rc6VMFbk5jcpolUBTe5cYnMsHZ51nFWsx3PVyyNN3vgoESki0Hmr/4BA==",
"cpu": [
"x64"
],
@@ -1398,9 +1382,9 @@
}
},
"node_modules/@rolldown/binding-linux-arm-gnueabihf": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-arm-gnueabihf/-/binding-linux-arm-gnueabihf-1.0.0-rc.11.tgz",
"integrity": "sha512-zjQaUtSyq1nVe3nxmlSCuR96T1LPlpvmJ0SZy0WJFEsV4kFbXcq2u68L4E6O0XeFj4aex9bEauqjW8UQBeAvfQ==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-arm-gnueabihf/-/binding-linux-arm-gnueabihf-1.0.0-rc.13.tgz",
"integrity": "sha512-JW/e4yPIXLms+jmnbwwy5LA/LxVwZUWLN8xug+V200wzaVi5TEGIWQlh8o91gWYFxW609euI98OCCemmWGuPrw==",
"cpu": [
"arm"
],
@@ -1414,15 +1398,12 @@
}
},
"node_modules/@rolldown/binding-linux-arm64-gnu": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-arm64-gnu/-/binding-linux-arm64-gnu-1.0.0-rc.11.tgz",
"integrity": "sha512-WMW1yE6IOnehTcFE9eipFkm3XN63zypWlrJQ2iF7NrQ9b2LDRjumFoOGJE8RJJTJCTBAdmLMnJ8uVitACUUo1Q==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-arm64-gnu/-/binding-linux-arm64-gnu-1.0.0-rc.13.tgz",
"integrity": "sha512-ZfKWpXiUymDnavepCaM6KG/uGydJ4l2nBmMxg60Ci4CbeefpqjPWpfaZM7PThOhk2dssqBAcwLc6rAyr0uTdXg==",
"cpu": [
"arm64"
],
"libc": [
"glibc"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1433,15 +1414,12 @@
}
},
"node_modules/@rolldown/binding-linux-arm64-musl": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-arm64-musl/-/binding-linux-arm64-musl-1.0.0-rc.11.tgz",
"integrity": "sha512-jfndI9tsfm4APzjNt6QdBkYwre5lRPUgHeDHoI7ydKUuJvz3lZeCfMsI56BZj+7BYqiKsJm7cfd/6KYV7ubrBg==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-arm64-musl/-/binding-linux-arm64-musl-1.0.0-rc.13.tgz",
"integrity": "sha512-bmRg3O6Z0gq9yodKKWCIpnlH051sEfdVwt+6m5UDffAQMUUqU0xjnQqqAUm+Gu7ofAAly9DqiQDtKu2nPDEABA==",
"cpu": [
"arm64"
],
"libc": [
"musl"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1452,15 +1430,12 @@
}
},
"node_modules/@rolldown/binding-linux-ppc64-gnu": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-ppc64-gnu/-/binding-linux-ppc64-gnu-1.0.0-rc.11.tgz",
"integrity": "sha512-ZlFgw46NOAGMgcdvdYwAGu2Q+SLFA9LzbJLW+iyMOJyhj5wk6P3KEE9Gct4xWwSzFoPI7JCdYmYMzVtlgQ+zfw==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-ppc64-gnu/-/binding-linux-ppc64-gnu-1.0.0-rc.13.tgz",
"integrity": "sha512-8Wtnbw4k7pMYN9B/mOEAsQ8HOiq7AZ31Ig4M9BKn2So4xRaFEhtCSa4ZJaOutOWq50zpgR4N5+L/opnlaCx8wQ==",
"cpu": [
"ppc64"
],
"libc": [
"glibc"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1471,15 +1446,12 @@
}
},
"node_modules/@rolldown/binding-linux-s390x-gnu": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-s390x-gnu/-/binding-linux-s390x-gnu-1.0.0-rc.11.tgz",
"integrity": "sha512-hIOYmuT6ofM4K04XAZd3OzMySEO4K0/nc9+jmNcxNAxRi6c5UWpqfw3KMFV4MVFWL+jQsSh+bGw2VqmaPMTLyw==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-s390x-gnu/-/binding-linux-s390x-gnu-1.0.0-rc.13.tgz",
"integrity": "sha512-D/0Nlo8mQuxSMohNJUF2lDXWRsFDsHldfRRgD9bRgktj+EndGPj4DOV37LqDKPYS+osdyhZEH7fTakTAEcW7qg==",
"cpu": [
"s390x"
],
"libc": [
"glibc"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1490,15 +1462,12 @@
}
},
"node_modules/@rolldown/binding-linux-x64-gnu": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-x64-gnu/-/binding-linux-x64-gnu-1.0.0-rc.11.tgz",
"integrity": "sha512-qXBQQO9OvkjjQPLdUVr7Nr2t3QTZI7s4KZtfw7HzBgjbmAPSFwSv4rmET9lLSgq3rH/ndA3ngv3Qb8l2njoPNA==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-x64-gnu/-/binding-linux-x64-gnu-1.0.0-rc.13.tgz",
"integrity": "sha512-eRrPvat2YaVQcwwKi/JzOP6MKf1WRnOCr+VaI3cTWz3ZoLcP/654z90lVCJ4dAuMEpPdke0n+qyAqXDZdIC4rA==",
"cpu": [
"x64"
],
"libc": [
"glibc"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1509,15 +1478,12 @@
}
},
"node_modules/@rolldown/binding-linux-x64-musl": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-x64-musl/-/binding-linux-x64-musl-1.0.0-rc.11.tgz",
"integrity": "sha512-/tpFfoSTzUkH9LPY+cYbqZBDyyX62w5fICq9qzsHLL8uTI6BHip3Q9Uzft0wylk/i8OOwKik8OxW+QAhDmzwmg==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-linux-x64-musl/-/binding-linux-x64-musl-1.0.0-rc.13.tgz",
"integrity": "sha512-PsdONiFRp8hR8KgVjTWjZ9s7uA3uueWL0t74/cKHfM4dR5zXYv4AjB8BvA+QDToqxAFg4ZkcVEqeu5F7inoz5w==",
"cpu": [
"x64"
],
"libc": [
"musl"
],
"license": "MIT",
"optional": true,
"os": [
@@ -1528,9 +1494,9 @@
}
},
"node_modules/@rolldown/binding-openharmony-arm64": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-openharmony-arm64/-/binding-openharmony-arm64-1.0.0-rc.11.tgz",
"integrity": "sha512-mcp3Rio2w72IvdZG0oQ4bM2c2oumtwHfUfKncUM6zGgz0KgPz4YmDPQfnXEiY5t3+KD/i8HG2rOB/LxdmieK2g==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-openharmony-arm64/-/binding-openharmony-arm64-1.0.0-rc.13.tgz",
"integrity": "sha512-hCNXgC5dI3TVOLrPT++PKFNZ+1EtS0mLQwfXXXSUD/+rGlB65gZDwN/IDuxLpQP4x8RYYHqGomlUXzpO8aVI2w==",
"cpu": [
"arm64"
],
@@ -1544,25 +1510,27 @@
}
},
"node_modules/@rolldown/binding-wasm32-wasi": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-wasm32-wasi/-/binding-wasm32-wasi-1.0.0-rc.11.tgz",
"integrity": "sha512-LXk5Hii1Ph9asuGRjBuz8TUxdc1lWzB7nyfdoRgI0WGPZKmCxvlKk8KfYysqtr4MfGElu/f/pEQRh8fcEgkrWw==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-wasm32-wasi/-/binding-wasm32-wasi-1.0.0-rc.13.tgz",
"integrity": "sha512-viLS5C5et8NFtLWw9Sw3M/w4vvnVkbWkO7wSNh3C+7G1+uCkGpr6PcjNDSFcNtmXY/4trjPBqUfcOL+P3sWy/g==",
"cpu": [
"wasm32"
],
"license": "MIT",
"optional": true,
"dependencies": {
"@napi-rs/wasm-runtime": "^1.1.1"
"@emnapi/core": "1.9.1",
"@emnapi/runtime": "1.9.1",
"@napi-rs/wasm-runtime": "^1.1.2"
},
"engines": {
"node": ">=14.0.0"
}
},
"node_modules/@rolldown/binding-win32-arm64-msvc": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-win32-arm64-msvc/-/binding-win32-arm64-msvc-1.0.0-rc.11.tgz",
"integrity": "sha512-dDwf5otnx0XgRY1yqxOC4ITizcdzS/8cQ3goOWv3jFAo4F+xQYni+hnMuO6+LssHHdJW7+OCVL3CoU4ycnh35Q==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-win32-arm64-msvc/-/binding-win32-arm64-msvc-1.0.0-rc.13.tgz",
"integrity": "sha512-Fqa3Tlt1xL4wzmAYxGNFV36Hb+VfPc9PYU+E25DAnswXv3ODDu/yyWjQDbXMo5AGWkQVjLgQExuVu8I/UaZhPQ==",
"cpu": [
"arm64"
],
@@ -1576,9 +1544,9 @@
}
},
"node_modules/@rolldown/binding-win32-x64-msvc": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/binding-win32-x64-msvc/-/binding-win32-x64-msvc-1.0.0-rc.11.tgz",
"integrity": "sha512-LN4/skhSggybX71ews7dAj6r2geaMJfm3kMbK2KhFMg9B10AZXnKoLCVVgzhMHL0S+aKtr4p8QbAW8k+w95bAA==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/binding-win32-x64-msvc/-/binding-win32-x64-msvc-1.0.0-rc.13.tgz",
"integrity": "sha512-/pLI5kPkGEi44TDlnbio3St/5gUFeN51YWNAk/Gnv6mEQBOahRBh52qVFVBpmrnU01n2yysvBML9Ynu7K4kGAQ==",
"cpu": [
"x64"
],
@@ -1592,9 +1560,9 @@
}
},
"node_modules/@rolldown/pluginutils": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/@rolldown/pluginutils/-/pluginutils-1.0.0-rc.11.tgz",
"integrity": "sha512-xQO9vbwBecJRv9EUcQ/y0dzSTJgA7Q6UVN7xp6B81+tBGSLVAK03yJ9NkJaUA7JFD91kbjxRSC/mDnmvXzbHoQ==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/@rolldown/pluginutils/-/pluginutils-1.0.0-rc.13.tgz",
"integrity": "sha512-3ngTAv6F/Py35BsYbeeLeecvhMKdsKm4AoOETVhAA+Qc8nrA2I0kF7oa93mE9qnIurngOSpMnQ0x2nQY2FPviA==",
"license": "MIT"
},
"node_modules/@rollup/pluginutils": {
@@ -5178,9 +5146,6 @@
"cpu": [
"arm64"
],
"libc": [
"glibc"
],
"license": "MPL-2.0",
"optional": true,
"os": [
@@ -5201,9 +5166,6 @@
"cpu": [
"arm64"
],
"libc": [
"musl"
],
"license": "MPL-2.0",
"optional": true,
"os": [
@@ -5224,9 +5186,6 @@
"cpu": [
"x64"
],
"libc": [
"glibc"
],
"license": "MPL-2.0",
"optional": true,
"os": [
@@ -5247,9 +5206,6 @@
"cpu": [
"x64"
],
"libc": [
"musl"
],
"license": "MPL-2.0",
"optional": true,
"os": [
@@ -6158,13 +6114,13 @@
}
},
"node_modules/rolldown": {
"version": "1.0.0-rc.11",
"resolved": "https://registry.npmjs.org/rolldown/-/rolldown-1.0.0-rc.11.tgz",
"integrity": "sha512-NRjoKMusSjfRbSYiH3VSumlkgFe7kYAa3pzVOsVYVFY3zb5d7nS+a3KGQ7hJKXuYWbzJKPVQ9Wxq2UvyK+ENpw==",
"version": "1.0.0-rc.13",
"resolved": "https://registry.npmjs.org/rolldown/-/rolldown-1.0.0-rc.13.tgz",
"integrity": "sha512-bvVj8YJmf0rq4pSFmH7laLa6pYrhghv3PRzrCdRAr23g66zOKVJ4wkvFtgohtPLWmthgg8/rkaqRHrpUEh0Zbw==",
"license": "MIT",
"dependencies": {
"@oxc-project/types": "=0.122.0",
"@rolldown/pluginutils": "1.0.0-rc.11"
"@oxc-project/types": "=0.123.0",
"@rolldown/pluginutils": "1.0.0-rc.13"
},
"bin": {
"rolldown": "bin/cli.mjs"
@@ -6173,21 +6129,21 @@
"node": "^20.19.0 || >=22.12.0"
},
"optionalDependencies": {
"@rolldown/binding-android-arm64": "1.0.0-rc.11",
"@rolldown/binding-darwin-arm64": "1.0.0-rc.11",
"@rolldown/binding-darwin-x64": "1.0.0-rc.11",
"@rolldown/binding-freebsd-x64": "1.0.0-rc.11",
"@rolldown/binding-linux-arm-gnueabihf": "1.0.0-rc.11",
"@rolldown/binding-linux-arm64-gnu": "1.0.0-rc.11",
"@rolldown/binding-linux-arm64-musl": "1.0.0-rc.11",
"@rolldown/binding-linux-ppc64-gnu": "1.0.0-rc.11",
"@rolldown/binding-linux-s390x-gnu": "1.0.0-rc.11",
"@rolldown/binding-linux-x64-gnu": "1.0.0-rc.11",
"@rolldown/binding-linux-x64-musl": "1.0.0-rc.11",
"@rolldown/binding-openharmony-arm64": "1.0.0-rc.11",
"@rolldown/binding-wasm32-wasi": "1.0.0-rc.11",
"@rolldown/binding-win32-arm64-msvc": "1.0.0-rc.11",
"@rolldown/binding-win32-x64-msvc": "1.0.0-rc.11"
"@rolldown/binding-android-arm64": "1.0.0-rc.13",
"@rolldown/binding-darwin-arm64": "1.0.0-rc.13",
"@rolldown/binding-darwin-x64": "1.0.0-rc.13",
"@rolldown/binding-freebsd-x64": "1.0.0-rc.13",
"@rolldown/binding-linux-arm-gnueabihf": "1.0.0-rc.13",
"@rolldown/binding-linux-arm64-gnu": "1.0.0-rc.13",
"@rolldown/binding-linux-arm64-musl": "1.0.0-rc.13",
"@rolldown/binding-linux-ppc64-gnu": "1.0.0-rc.13",
"@rolldown/binding-linux-s390x-gnu": "1.0.0-rc.13",
"@rolldown/binding-linux-x64-gnu": "1.0.0-rc.13",
"@rolldown/binding-linux-x64-musl": "1.0.0-rc.13",
"@rolldown/binding-openharmony-arm64": "1.0.0-rc.13",
"@rolldown/binding-wasm32-wasi": "1.0.0-rc.13",
"@rolldown/binding-win32-arm64-msvc": "1.0.0-rc.13",
"@rolldown/binding-win32-x64-msvc": "1.0.0-rc.13"
}
},
"node_modules/rxjs": {
@@ -6437,7 +6393,6 @@
"cpu": [
"arm"
],
"libc": "glibc",
"license": "MIT",
"optional": true,
"os": [
@@ -6454,7 +6409,6 @@
"cpu": [
"arm64"
],
"libc": "glibc",
"license": "MIT",
"optional": true,
"os": [
@@ -6471,7 +6425,6 @@
"cpu": [
"arm"
],
"libc": "musl",
"license": "MIT",
"optional": true,
"os": [
@@ -6488,7 +6441,6 @@
"cpu": [
"arm64"
],
"libc": "musl",
"license": "MIT",
"optional": true,
"os": [
@@ -6505,7 +6457,6 @@
"cpu": [
"riscv64"
],
"libc": "musl",
"license": "MIT",
"optional": true,
"os": [
@@ -6522,7 +6473,6 @@
"cpu": [
"x64"
],
"libc": "musl",
"license": "MIT",
"optional": true,
"os": [
@@ -6539,7 +6489,6 @@
"cpu": [
"riscv64"
],
"libc": "glibc",
"license": "MIT",
"optional": true,
"os": [
@@ -6556,7 +6505,6 @@
"cpu": [
"x64"
],
"libc": "glibc",
"license": "MIT",
"optional": true,
"os": [
@@ -7382,15 +7330,15 @@
"license": "MIT"
},
"node_modules/vite": {
"version": "8.0.2",
"resolved": "https://registry.npmjs.org/vite/-/vite-8.0.2.tgz",
"integrity": "sha512-1gFhNi+bHhRE/qKZOJXACm6tX4bA3Isy9KuKF15AgSRuRazNBOJfdDemPBU16/mpMxApDPrWvZ08DcLPEoRnuA==",
"version": "8.0.7",
"resolved": "https://registry.npmjs.org/vite/-/vite-8.0.7.tgz",
"integrity": "sha512-P1PbweD+2/udplnThz3btF4cf6AgPky7kk23RtHUkJIU5BIxwPprhRGmOAHs6FTI7UiGbTNrgNP6jSYD6JaRnw==",
"license": "MIT",
"dependencies": {
"lightningcss": "^1.32.0",
"picomatch": "^4.0.3",
"picomatch": "^4.0.4",
"postcss": "^8.5.8",
"rolldown": "1.0.0-rc.11",
"rolldown": "1.0.0-rc.13",
"tinyglobby": "^0.2.15"
},
"bin": {
@@ -7408,7 +7356,7 @@
"peerDependencies": {
"@types/node": "^20.19.0 || >=22.12.0",
"@vitejs/devtools": "^0.1.0",
"esbuild": "^0.27.0",
"esbuild": "^0.27.0 || ^0.28.0",
"jiti": ">=1.21.0",
"less": "^4.0.0",
"sass": "^1.70.0",


@@ -29,7 +29,7 @@
"react-input-mask": "^2.0.4",
"react-router-dom": "^7.13.2",
"uplot": "^1.6.32",
"vite": "^8.0.2",
"vite": "^8.0.7",
"web-vitals": "^5.1.0"
},
"devDependencies": {


@@ -16,23 +16,29 @@ export const getExportDataUrl = (server: string, query: string, period: TimePara
return `${server}/api/v1/export?${params}`;
};
export const getExportCSVDataUrl = (server: string, query: string[], period: TimeParams, reduceMemUsage: boolean): string => {
const getBaseParams = (period: TimeParams, query: string[]): URLSearchParams => {
const params = new URLSearchParams({
start: period.start.toString(),
end: period.end.toString(),
format: "__name__,__value__,__timestamp__:unix_ms",
});
query.forEach((q => params.append("match[]", q)));
return params;
};
export const getLabelsUrl = (server: string, query: string[], period: TimeParams): string => {
const params = getBaseParams(period, query);
return `${server}/api/v1/labels?${params}`;
};
export const getExportCSVDataUrl = (server: string, query: string[], period: TimeParams, reduceMemUsage: boolean, format: string): string => {
const params = getBaseParams(period, query);
params.set("format", format);
if (reduceMemUsage) params.set("reduce_mem_usage", "1");
return `${server}/api/v1/export/csv?${params}`;
};
export const getExportJSONDataUrl = (server: string, query: string[], period: TimeParams, reduceMemUsage: boolean): string => {
const params = new URLSearchParams({
start: period.start.toString(),
end: period.end.toString(),
});
query.forEach((q => params.append("match[]", q)));
const params = getBaseParams(period, query);
if (reduceMemUsage) params.set("reduce_mem_usage", "1");
return `${server}/api/v1/export?${params}`;
};


@@ -0,0 +1,29 @@
import { describe, expect, it, vi } from "vitest";
import { fetchRawQueryCSVExport } from "./raw-query";
describe("fetchRawQueryCSVExport", () => {
it.skip("requests all label columns before exporting CSV data", async () => {
const fetchMock = vi.fn()
.mockResolvedValueOnce({
ok: true,
json: async () => ({ data: ["job", "__name__", "instance"] }),
})
.mockResolvedValueOnce({
ok: true,
text: async () => "up,localhost:9100,node_exporter,1,1710000000000",
});
const result = await fetchRawQueryCSVExport(
"http://localhost:8428",
["up"],
{ start: 1710000000, end: 1710000300, step: "15s", date: "2024-03-09T16:05:00Z" },
false,
fetchMock as unknown as typeof fetch,
);
expect(fetchMock).toHaveBeenCalledTimes(2);
expect(fetchMock.mock.calls[0][0]).toBe("http://localhost:8428/api/v1/labels?start=1710000000&end=1710000300&match%5B%5D=up");
expect(fetchMock.mock.calls[1][0]).toBe("http://localhost:8428/api/v1/export/csv?start=1710000000&end=1710000300&match%5B%5D=up&format=__name__%2Cinstance%2Cjob%2C__value__%2C__timestamp__%3Aunix_ms");
expect(result).toBe("up,localhost:9100,node_exporter,1,1710000000000");
});
});


@@ -0,0 +1,31 @@
import { getExportCSVDataUrl, getLabelsUrl } from "./query-range";
import { TimeParams } from "../types";
import { getCSVExportColumns } from "../utils/csv";
interface LabelsResponse {
data?: string[];
}
export const fetchRawQueryCSVExport = async (
serverUrl: string,
query: string[],
period: TimeParams,
reduceMemUsage: boolean,
fetchFn: typeof fetch = fetch,
): Promise<string> => {
const labelsResponse = await fetchFn(getLabelsUrl(serverUrl, query, period));
if (!labelsResponse.ok) {
throw new Error(await labelsResponse.text());
}
const { data = [] } = (await labelsResponse.json()) as LabelsResponse;
const columns = getCSVExportColumns(data);
const format = columns.join(",");
const response = await fetchFn(getExportCSVDataUrl(serverUrl, query, period, reduceMemUsage, format));
if (!response.ok) {
throw new Error(await response.text());
}
return await response.text();
};


@@ -6,10 +6,11 @@ import { useTimeState } from "../../../state/time/TimeStateContext";
import { useAppState } from "../../../state/common/StateContext";
import { useCustomPanelState } from "../../../state/customPanel/CustomPanelStateContext";
import { isValidHttpUrl } from "../../../utils/url";
import { getExportCSVDataUrl, getExportDataUrl, getExportJSONDataUrl } from "../../../api/query-range";
import { getExportDataUrl, getExportJSONDataUrl } from "../../../api/query-range";
import { parseLineToJSON } from "../../../utils/json";
import { downloadCSV, downloadJSON } from "../../../utils/file";
import { useSnack } from "../../../contexts/Snackbar";
import { fetchRawQueryCSVExport } from "../../../api/raw-query";
interface FetchQueryParams {
hideQuery?: number[];
@@ -67,11 +68,8 @@ export const useFetchExport = ({ hideQuery, showAllSeries }: FetchQueryParams):
const getFilename = (format: ExportFormats) => `vmui_export_${query.join("_")}_${period.start}_${period.end}.${format}`;
return {
csv: async () => {
const url = getExportCSVDataUrl(serverUrl, query, period, reduceMemUsage);
const response = await fetch(url);
try {
let text = await response.text();
text = "name,value,timestamp\n" + text;
const text = await fetchRawQueryCSVExport(serverUrl, query, period, reduceMemUsage);
downloadCSV(text, getFilename("csv"));
} catch (e) {
console.error(e);


@@ -1,5 +1,5 @@
import { describe, expect, it } from "vitest";
import { formatValueToCSV } from "./csv";
import { formatValueToCSV, getCSVExportColumns } from "./csv";
describe("formatValueToCSV", () => {
it("should wrap value in quotes if it contains a comma", () => {
@@ -32,3 +32,10 @@ describe("formatValueToCSV", () => {
expect(result).toBe("");
});
});
describe("getCSVExportColumns", () => {
it("should prepend metric name and append value and timestamp columns", () => {
const result = getCSVExportColumns(["instance", "__name__", "job", "instance"]);
expect(result.join(",")).toEqual("__name__,instance,job,__value__,__timestamp__:unix_ms");
});
});


@@ -2,3 +2,8 @@ export const formatValueToCSV= (value: string) =>
(value.includes(",") || value.includes("\n") || value.includes("\""))
? "\"" + value.replace(/"/g, "\"\"") + "\""
: value;
export const getCSVExportColumns = (labelNames: string[]) => {
const labels = Array.from(new Set(labelNames.filter((label) => label && label !== "__name__"))).sort();
return ["__name__", ...labels, "__value__", "__timestamp__:unix_ms"];
};


@@ -88,6 +88,7 @@ type QueryOpts struct {
MaxLookback string
LatencyOffset string
Format string
NoCache string
}
func (qos *QueryOpts) asURLValues() url.Values {
@@ -112,6 +113,7 @@ func (qos *QueryOpts) asURLValues() url.Values {
addNonEmpty("max_lookback", qos.MaxLookback)
addNonEmpty("latency_offset", qos.LatencyOffset)
addNonEmpty("format", qos.Format)
addNonEmpty("nocache", qos.NoCache)
return uv
}


@@ -87,11 +87,11 @@ func (tc *TestCase) MustStartDefaultVmsingle() *Vmsingle {
}
// MustStartVmsingle is a test helper function that starts an instance of
// vmsingle located at ../../bin/victoria-metrics and fails the test if the app
// vmsingle located at ../../bin/victoria-metrics-race and fails the test if the app
// fails to start.
func (tc *TestCase) MustStartVmsingle(instance string, flags []string) *Vmsingle {
tc.t.Helper()
return tc.MustStartVmsingleAt(instance, "../../bin/victoria-metrics", flags)
return tc.MustStartVmsingleAt(instance, "../../bin/victoria-metrics-race", flags)
}
// MustStartVmsingleAt is a test helper function that starts an instance of
@@ -108,11 +108,11 @@ func (tc *TestCase) MustStartVmsingleAt(instance, binary string, flags []string)
}
// MustStartVmstorage is a test helper function that starts an instance of
// vmstorage located at ../../bin/vmstorage and fails the test if the app fails
// vmstorage located at ../../bin/vmstorage-race and fails the test if the app fails
// to start.
func (tc *TestCase) MustStartVmstorage(instance string, flags []string) *Vmstorage {
tc.t.Helper()
return tc.MustStartVmstorageAt(instance, "../../bin/vmstorage", flags)
return tc.MustStartVmstorageAt(instance, "../../bin/vmstorage-race", flags)
}
// MustStartVmstorageAt is a test helper function that starts an instance of
@@ -293,12 +293,12 @@ func (tc *TestCase) MustStartCluster(opts *ClusterOptions) *Vmcluster {
tc.t.Helper()
if opts.Vmstorage1Binary == "" {
opts.Vmstorage1Binary = "../../bin/vmstorage"
opts.Vmstorage1Binary = "../../bin/vmstorage-race"
}
vmstorage1 := tc.MustStartVmstorageAt(opts.Vmstorage1Instance, opts.Vmstorage1Binary, opts.Vmstorage1Flags)
if opts.Vmstorage2Binary == "" {
opts.Vmstorage2Binary = "../../bin/vmstorage"
opts.Vmstorage2Binary = "../../bin/vmstorage-race"
}
vmstorage2 := tc.MustStartVmstorageAt(opts.Vmstorage2Instance, opts.Vmstorage2Binary, opts.Vmstorage2Flags)


@@ -28,7 +28,6 @@ func TestSingleBackupRestore(t *testing.T) {
return tc.MustStartVmsingle("vmsingle", []string{
"-storageDataPath=" + storageDataPath,
"-retentionPeriod=100y",
"-search.maxStalenessInterval=1m",
})
},
stopSUT: func() {
@@ -70,9 +69,7 @@ func TestClusterBackupRestore(t *testing.T) {
VminsertInstance: "vminsert",
VminsertFlags: []string{},
VmselectInstance: "vmselect",
VmselectFlags: []string{
"-search.maxStalenessInterval=1m",
},
VmselectFlags: []string{},
})
},
stopSUT: func() {
@@ -100,15 +97,14 @@ func TestClusterBackupRestore(t *testing.T) {
func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
t := tc.T()
const msecPerMinute = 60 * 1000
genData := func(count int, prefix string, start int64) (recs []string, wantSeries []map[string]string, wantQueryResults []*apptest.QueryResult) {
genData := func(count int, prefix string, start, step int64) (recs []string, wantSeries []map[string]string, wantQueryResults []*apptest.QueryResult) {
recs = make([]string, count)
wantSeries = make([]map[string]string, count)
wantQueryResults = make([]*apptest.QueryResult, count)
for i := range count {
name := fmt.Sprintf("%s_%03d", prefix, i)
value := float64(i)
timestamp := start + int64(i)*msecPerMinute
timestamp := start + int64(i)*step
recs[i] = fmt.Sprintf("%s %f %d", name, value, timestamp)
wantSeries[i] = map[string]string{"__name__": name}
@@ -148,15 +144,17 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
// assertSeries retrieves all data from the storage and compares it with the
// expected result.
assertQueryResults := func(app apptest.PrometheusQuerier, query string, start, end int64, want []*apptest.QueryResult) {
assertQueryResults := func(app apptest.PrometheusQuerier, query string, start, end, step int64, want []*apptest.QueryResult) {
t.Helper()
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/query_range response",
Got: func() any {
return app.PrometheusAPIV1QueryRange(t, query, apptest.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
Step: "60s",
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
Step: fmt.Sprintf("%dms", step),
MaxLookback: fmt.Sprintf("%dms", step-1),
NoCache: "1",
})
},
Want: &apptest.PrometheusAPIV1QueryResponse{
@@ -167,7 +165,6 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
},
},
FailNow: true,
Retries: 300,
})
}
@@ -194,8 +191,9 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
// below.
const numMetrics = 1000
// With 1000 metrics spread evenly across the range, the time range spans 2 months.
end := time.Date(2025, 3, 1, 10, 0, 0, 0, time.UTC).UnixMilli()
start := end - numMetrics*msecPerMinute
start := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
end := time.Date(2025, 3, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
step := (end - start) / numMetrics
// Verify backup/restore:
//
@@ -209,8 +207,8 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
// - Start vmsingle
// - Ensure that the queries return batch1 data only.
batch1Data, wantBatch1Series, wantBatch1QueryResults := genData(numMetrics, "batch1", start)
batch2Data, wantBatch2Series, wantBatch2QueryResults := genData(numMetrics, "batch2", start)
batch1Data, wantBatch1Series, wantBatch1QueryResults := genData(numMetrics, "batch1", start, step)
batch2Data, wantBatch2Series, wantBatch2QueryResults := genData(numMetrics, "batch2", start, step)
wantBatch12Series := slices.Concat(wantBatch1Series, wantBatch2Series)
wantBatch12QueryResults := slices.Concat(wantBatch1QueryResults, wantBatch2QueryResults)
@@ -219,13 +217,14 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
sut.PrometheusAPIV1ImportPrometheus(t, batch1Data, apptest.QueryOpts{})
sut.ForceFlush(t)
assertSeries(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1Series)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1QueryResults)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, step, wantBatch1QueryResults)
createBackup(sut, "batch1")
sut.PrometheusAPIV1ImportPrometheus(t, batch2Data, apptest.QueryOpts{})
sut.ForceFlush(t)
assertSeries(sut, `{__name__=~"batch(1|2).*"}`, start, end, wantBatch12Series)
assertQueryResults(sut, `{__name__=~"batch(1|2).*"}`, start, end, wantBatch12QueryResults)
assertQueryResults(sut, `{__name__=~"batch(1|2).*"}`, start, end, step, wantBatch12QueryResults)
createBackup(sut, "batch12")
opts.stopSUT()
@@ -235,5 +234,5 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
sut = opts.startSUT()
assertSeries(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1Series)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1QueryResults)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, step, wantBatch1QueryResults)
}


@@ -1,7 +1,9 @@
package tests
import (
"fmt"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
@@ -297,6 +299,132 @@ func TestSingleIngestionProtocols(t *testing.T) {
}
func TestSingleCardinalityLimiter(t *testing.T) {
waitFor := func(f func() bool) {
const (
retries = 20
period = 100 * time.Millisecond
)
t.Helper()
for i := 0; i < retries; i++ {
if f() {
return
}
time.Sleep(period)
}
t.Fatalf("timed out after %d retries", retries)
}
tc := apptest.NewTestCase(t)
defer tc.Stop()
singleHourly := tc.MustStartVmsingle("vmsingle-hourly", []string{
"-retentionPeriod=100y",
"-storage.maxHourlySeries=1",
})
singleHourly.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
if v := singleHourly.GetIntMetric(t, "vm_hourly_series_limit_max_series"); v != 1 {
t.Fatalf("unexpected vm_hourly_series_limit_max_series value: %d", v)
}
if v := singleHourly.GetIntMetric(t, "vm_hourly_series_limit_current_series"); v != 1 {
t.Fatalf("unexpected vm_hourly_series_limit_current_series value: %d", v)
}
if v := singleHourly.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_hourly_series_limit_rows_dropped_total value: %d", v)
}
singleHourly.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar2 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
waitFor(
func() bool {
return singleHourly.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total") > 0
},
)
singleDaily := tc.MustStartVmsingle("vmsingle-daily", []string{
"-retentionPeriod=100y",
"-storage.maxDailySeries=1",
})
singleDaily.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
if v := singleDaily.GetIntMetric(t, "vm_daily_series_limit_max_series"); v != 1 {
t.Fatalf("unexpected vm_daily_series_limit_max_series value: %d", v)
}
if v := singleDaily.GetIntMetric(t, "vm_daily_series_limit_current_series"); v != 1 {
t.Fatalf("unexpected vm_daily_series_limit_current_series value: %d", v)
}
if v := singleDaily.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_daily_series_limit_rows_dropped_total value: %d", v)
}
singleDaily.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar2 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
waitFor(
func() bool {
return singleDaily.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total") > 0
},
)
singleUnlimited := tc.MustStartVmsingle("vmsingle-unlimited", []string{
"-retentionPeriod=100y",
"-storage.maxHourlySeries=-1",
"-storage.maxDailySeries=-1",
})
metrics := make([]string, 0, 100)
for i := range 100 {
metrics = append(metrics, fmt.Sprintf("foo_bar%d 1 1652169600000", i)) // 2022-05-10T08:00:00Z
}
singleUnlimited.PrometheusAPIV1ImportPrometheus(t, metrics, apptest.QueryOpts{})
waitFor(
func() bool {
return singleUnlimited.GetIntMetric(t, "vm_hourly_series_limit_current_series") > 0
},
)
if v := singleUnlimited.GetIntMetric(t, "vm_hourly_series_limit_max_series"); v == 0 {
t.Fatalf("unexpected vm_hourly_series_limit_max_series value: %d", v)
}
if v := singleUnlimited.GetIntMetric(t, "vm_hourly_series_limit_current_series"); v != 100 {
t.Fatalf("unexpected vm_hourly_series_limit_current_series value: %d", v)
}
if v := singleUnlimited.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_hourly_series_limit_rows_dropped_total value: %d", v)
}
if v := singleUnlimited.GetIntMetric(t, "vm_daily_series_limit_max_series"); v == 0 {
t.Fatalf("unexpected vm_daily_series_limit_max_series value: %d", v)
}
if v := singleUnlimited.GetIntMetric(t, "vm_daily_series_limit_current_series"); v != 100 {
t.Fatalf("unexpected vm_daily_series_limit_current_series value: %d", v)
}
if v := singleUnlimited.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_daily_series_limit_rows_dropped_total value: %d", v)
}
}
func TestClusterIngestionProtocols(t *testing.T) {
fs.MustRemoveDir(t.Name())
tc := apptest.NewTestCase(t)
@@ -591,3 +719,145 @@ func TestClusterIngestionProtocols(t *testing.T) {
})
}
func TestClusterCardinalityLimiter(t *testing.T) {
waitFor := func(f func() bool) {
const (
retries = 20
period = 100 * time.Millisecond
)
t.Helper()
for i := 0; i < retries; i++ {
if f() {
return
}
time.Sleep(period)
}
t.Fatalf("timed out after %d retries", retries)
}
tc := apptest.NewTestCase(t)
defer tc.Stop()
// Test hourly series limit
vmstorageHourly := tc.MustStartVmstorage("vmstorage-hourly", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage-hourly",
"-retentionPeriod=100y",
"-storage.maxHourlySeries=1",
})
vminsertHourly := tc.MustStartVminsert("vminsert-hourly", []string{
"-storageNode=" + vmstorageHourly.VminsertAddr(),
})
vminsertHourly.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
if v := vmstorageHourly.GetIntMetric(t, "vm_hourly_series_limit_max_series"); v != 1 {
t.Fatalf("unexpected vm_hourly_series_limit_max_series value: %d", v)
}
if v := vmstorageHourly.GetIntMetric(t, "vm_hourly_series_limit_current_series"); v != 1 {
t.Fatalf("unexpected vm_hourly_series_limit_current_series value: %d", v)
}
if v := vmstorageHourly.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_hourly_series_limit_rows_dropped_total value: %d", v)
}
vminsertHourly.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar2 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
waitFor(
func() bool {
return vmstorageHourly.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total") > 0
},
)
// Test daily series limit
vmstorageDaily := tc.MustStartVmstorage("vmstorage-daily", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage-daily",
"-retentionPeriod=100y",
"-storage.maxDailySeries=1",
})
vminsertDaily := tc.MustStartVminsert("vminsert-daily", []string{
"-storageNode=" + vmstorageDaily.VminsertAddr(),
})
vminsertDaily.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
if v := vmstorageDaily.GetIntMetric(t, "vm_daily_series_limit_max_series"); v != 1 {
t.Fatalf("unexpected vm_daily_series_limit_max_series value: %d", v)
}
if v := vmstorageDaily.GetIntMetric(t, "vm_daily_series_limit_current_series"); v != 1 {
t.Fatalf("unexpected vm_daily_series_limit_current_series value: %d", v)
}
if v := vmstorageDaily.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_daily_series_limit_rows_dropped_total value: %d", v)
}
vminsertDaily.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar2 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
waitFor(
func() bool {
return vmstorageDaily.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total") > 0
},
)
// Test unlimited series
vmstorageUnlimited := tc.MustStartVmstorage("vmstorage-unlimited", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage-unlimited",
"-retentionPeriod=100y",
"-storage.maxHourlySeries=-1",
"-storage.maxDailySeries=-1",
})
vminsertUnlimited := tc.MustStartVminsert("vminsert-unlimited", []string{
"-storageNode=" + vmstorageUnlimited.VminsertAddr(),
})
metrics := make([]string, 0, 100)
for i := range 100 {
metrics = append(metrics, fmt.Sprintf("foo_bar%d 1 1652169600000", i)) // 2022-05-10T08:00:00Z
}
vminsertUnlimited.PrometheusAPIV1ImportPrometheus(t, metrics, apptest.QueryOpts{})
waitFor(
func() bool {
return vmstorageUnlimited.GetIntMetric(t, "vm_hourly_series_limit_current_series") > 0
},
)
if v := vmstorageUnlimited.GetIntMetric(t, "vm_hourly_series_limit_max_series"); v == 0 {
t.Fatalf("unexpected vm_hourly_series_limit_max_series value: %d", v)
}
if v := vmstorageUnlimited.GetIntMetric(t, "vm_hourly_series_limit_current_series"); v != 100 {
t.Fatalf("unexpected vm_hourly_series_limit_current_series value: %d", v)
}
if v := vmstorageUnlimited.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_hourly_series_limit_rows_dropped_total value: %d", v)
}
if v := vmstorageUnlimited.GetIntMetric(t, "vm_daily_series_limit_max_series"); v == 0 {
t.Fatalf("unexpected vm_daily_series_limit_max_series value: %d", v)
}
if v := vmstorageUnlimited.GetIntMetric(t, "vm_daily_series_limit_current_series"); v != 100 {
t.Fatalf("unexpected vm_daily_series_limit_current_series value: %d", v)
}
if v := vmstorageUnlimited.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_daily_series_limit_rows_dropped_total value: %d", v)
}
}


@@ -3,7 +3,9 @@ package tests
import (
"fmt"
"math/rand/v2"
"net"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/apptest"
)
@@ -70,3 +72,162 @@ func TestClusterMultilevelSelect(t *testing.T) {
assertSeries(vmselectL1)
assertSeries(vmselectL2)
}
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10678.
func TestClusterMultilevelPartialResponse(t *testing.T) {
tc := apptest.NewTestCase(t)
defer tc.Stop()
// Set up the following multi-level cluster configuration:
//
// |--> available vmstorage
// | ------> vmselect1 --|
// | |--> available vmstorage
// global-vmselect -------|
// | |--> available vmstorage
// | ------> vmselect2 --|
// |--> unavailable vmstorage
vmstorage1 := tc.MustStartVmstorage("vmstorage1", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage1",
})
vmstorage2 := tc.MustStartVmstorage("vmstorage2", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage2",
})
regionalVmselect1 := tc.MustStartVmselect("regional-vmselect1", []string{
"-storageNode=" + vmstorage1.VmselectAddr() + "," + vmstorage2.VmselectAddr(),
})
regionalVmselect2 := tc.MustStartVmselect("regional-vmselect2", []string{
"-storageNode=" + vmstorage1.VmselectAddr() + "," + noopTCPServerAddr(t),
})
globalVmselect := tc.MustStartVmselect("global-vmselect", []string{
"-storageNode=" + regionalVmselect1.ClusternativeListenAddr() + "," + regionalVmselect2.ClusternativeListenAddr(),
})
// 1. /api/v1/query
qopts := apptest.QueryOpts{Tenant: "0"}
assertQuery := func(app *apptest.Vmselect, want *apptest.PrometheusAPIV1QueryResponse) {
t.Helper()
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/query response",
Got: func() any {
res := app.PrometheusAPIV1Query(t, `{__name__=~".*"}`, qopts)
res.Sort()
return res
},
Want: want,
})
}
// regional-vmselect1 should return full response.
assertQuery(regionalVmselect1, &apptest.PrometheusAPIV1QueryResponse{
Status: "success",
IsPartial: false,
Data: &apptest.QueryData{ResultType: "vector", Result: []*apptest.QueryResult{}},
})
// regional-vmselect2 should return partial response.
assertQuery(regionalVmselect2, &apptest.PrometheusAPIV1QueryResponse{
Status: "success",
IsPartial: true,
Data: &apptest.QueryData{ResultType: "vector", Result: []*apptest.QueryResult{}},
})
// global-vmselect should return partial response.
assertQuery(globalVmselect, &apptest.PrometheusAPIV1QueryResponse{
Status: "success",
IsPartial: true,
Data: &apptest.QueryData{ResultType: "vector", Result: []*apptest.QueryResult{}},
})
// 2. /api/v1/labels
start := time.Now().Unix()
assertLabel := func(app *apptest.Vmselect, want *apptest.PrometheusAPIV1LabelsResponse) {
t.Helper()
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/label response",
Got: func() any {
res := app.PrometheusAPIV1Labels(t, `{__name__="up"}`, apptest.QueryOpts{
Start: fmt.Sprintf("%d", start-100),
End: fmt.Sprintf("%d", start),
})
return res
},
Want: want,
})
}
// regional-vmselect1 should return full response.
assertLabel(regionalVmselect1, &apptest.PrometheusAPIV1LabelsResponse{
Status: "success",
IsPartial: false,
Data: make([]string, 0),
})
// regional-vmselect2 should return partial response.
assertLabel(regionalVmselect2, &apptest.PrometheusAPIV1LabelsResponse{
Status: "success",
IsPartial: true,
Data: make([]string, 0),
})
// global-vmselect should return partial response.
assertLabel(globalVmselect, &apptest.PrometheusAPIV1LabelsResponse{
Status: "success",
IsPartial: true,
Data: make([]string, 0),
})
// 3. /api/v1/label/%s/values
assertSeries := func(app *apptest.Vmselect, want *apptest.PrometheusAPIV1SeriesResponse) {
t.Helper()
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/series response",
Got: func() any {
res := app.PrometheusAPIV1Series(t, `{__name__="up"}`, apptest.QueryOpts{
Start: fmt.Sprintf("%d", start-100),
End: fmt.Sprintf("%d", start),
})
return res
},
Want: want,
})
}
// regional-vmselect1 should return full response.
assertSeries(regionalVmselect1, &apptest.PrometheusAPIV1SeriesResponse{
Status: "success",
IsPartial: false,
Data: make([]map[string]string, 0),
})
// regional-vmselect2 should return partial response.
assertSeries(regionalVmselect2, &apptest.PrometheusAPIV1SeriesResponse{
Status: "success",
IsPartial: true,
Data: make([]map[string]string, 0),
})
// global-vmselect should return partial response.
assertSeries(globalVmselect, &apptest.PrometheusAPIV1SeriesResponse{
Status: "success",
IsPartial: true,
Data: make([]map[string]string, 0),
})
}
// noopTCPServerAddr starts a local TCP server that immediately closes
// any incoming connection, and returns its address.
func noopTCPServerAddr(t *testing.T) string {
t.Helper()
ln, err := net.Listen("tcp", "localhost:0")
if err != nil {
t.Fatalf("failed to create listener: %v", err)
}
go func() {
for {
conn, err := ln.Accept()
if err != nil {
return
}
conn.Close()
}
}()
t.Cleanup(func() { ln.Close() })
return ln.Addr().String()
}


@@ -0,0 +1,28 @@
{
"compaction": {
"level": 1,
"sources": [
"01KKS78P6B68DNJC87ZVPRGC3X"
]
},
"maxTime": 1735696740001,
"minTime": 1735689600000,
"stats": {
"numChunks": 8,
"numFloatSamples": 480,
"numSamples": 480,
"numSeries": 4
},
"thanos": {
"downsample": {
"resolution": 0
},
"labels": {
"prometheus": "test",
"replica": "0"
},
"source": "prometheus"
},
"ulid": "01KKS78P6B68DNJC87ZVPRGC3X",
"version": 1
}


@@ -0,0 +1,27 @@
{
"compaction": {
"level": 1,
"sources": [
"01KKS78PKKR6BQ0SQ5BARRTAKC"
]
},
"maxTime": 1735696500001,
"minTime": 1735689600000,
"stats": {
"numChunks": 4,
"numSamples": 4,
"numSeries": 4
},
"thanos": {
"downsample": {
"resolution": 300000
},
"labels": {
"prometheus": "test",
"replica": "0"
},
"source": "prometheus"
},
"ulid": "01KKS78PKKR6BQ0SQ5BARRTAKC",
"version": 1
}


@@ -0,0 +1,27 @@
{
"compaction": {
"level": 1,
"sources": [
"01KKS78PXE91TK8YX406DERWV2"
]
},
"maxTime": 1735693200001,
"minTime": 1735689600000,
"stats": {
"numChunks": 4,
"numSamples": 4,
"numSeries": 4
},
"thanos": {
"downsample": {
"resolution": 3600000
},
"labels": {
"prometheus": "test",
"replica": "0"
},
"source": "prometheus"
},
"ulid": "01KKS78PXE91TK8YX406DERWV2",
"version": 1
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -3,6 +3,7 @@ package tests
import (
"fmt"
"io"
"math"
"net/http"
"net/http/httptest"
"strings"
@@ -304,6 +305,11 @@ func TestSingleVMAgentDropOnOverload(t *testing.T) {
// See initRemoteWriteCtxs function in remotewrite.go for details.
"-remoteWrite.maxRowsPerBlock=1000000000",
"-remoteWrite.tmpDataPath=" + tc.Dir() + "/vmagent",
// Delay the retry logic to avoid race conditions with waitFor assertions.
// This improves test stability on resource-constrained runners.
// The interval must be bigger than retries * period.
"-remoteWrite.retryMinInterval=3s",
}, ``)
const (
@@ -362,3 +368,147 @@ func TestSingleVMAgentDropOnOverload(t *testing.T) {
},
)
}
func TestSingleVMAgentCardinalityLimiter(t *testing.T) {
waitFor := func(f func() bool) {
const (
retries = 20
period = 100 * time.Millisecond
)
t.Helper()
for i := 0; i < retries; i++ {
if f() {
return
}
time.Sleep(period)
}
t.Fatalf("timed out after %d retries", retries)
}
tc := apptest.NewTestCase(t)
defer tc.Stop()
remoteWriteSrv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNoContent)
}))
defer remoteWriteSrv.Close()
// Verify hourly limit is applied
vmagent := tc.MustStartVmagent("vmagent-hourly", []string{
`-remoteWrite.flushInterval=50ms`,
fmt.Sprintf(`-remoteWrite.url=%s/api/v1/write`, remoteWriteSrv.URL),
"-remoteWrite.maxRowsPerBlock=1",
"-remoteWrite.maxHourlySeries=1",
"-remoteWrite.tmpDataPath=" + tc.Dir() + "/vmagent-hourly",
}, ``)
vmagent.APIV1ImportPrometheus(t, []string{
"foo_bar 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
if v := vmagent.GetIntMetric(t, "vmagent_hourly_series_limit_max_series"); v != 1 {
t.Fatalf("unexpected vmagent_hourly_series_limit_max_series value: %d", v)
}
if v := vmagent.GetIntMetric(t, "vmagent_hourly_series_limit_current_series"); v != 1 {
t.Fatalf("unexpected vmagent_hourly_series_limit_current_series value: %d", v)
}
if v := vmagent.GetIntMetric(t, "vmagent_hourly_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vmagent_hourly_series_limit_rows_dropped_total value: %d", v)
}
vmagent.APIV1ImportPrometheusNoWaitFlush(t, []string{
"foo_bar2 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
waitFor(
func() bool {
return vmagent.GetIntMetric(t, "vmagent_hourly_series_limit_rows_dropped_total") > 0
},
)
// Daily limits
vmagent2 := tc.MustStartVmagent("vmagent-daily", []string{
`-remoteWrite.flushInterval=50ms`,
fmt.Sprintf(`-remoteWrite.url=%s/api/v1/write`, remoteWriteSrv.URL),
"-remoteWrite.maxRowsPerBlock=1",
"-remoteWrite.maxDailySeries=1",
"-remoteWrite.tmpDataPath=" + tc.Dir() + "/vmagent-daily",
}, ``)
vmagent2.APIV1ImportPrometheus(t, []string{
"foo_bar 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
if v := vmagent2.GetIntMetric(t, "vmagent_daily_series_limit_max_series"); v != 1 {
t.Fatalf("unexpected vmagent_daily_series_limit_max_series value: %d", v)
}
if v := vmagent2.GetIntMetric(t, "vmagent_daily_series_limit_current_series"); v != 1 {
t.Fatalf("unexpected vmagent_daily_series_limit_current_series value: %d", v)
}
if v := vmagent2.GetIntMetric(t, "vmagent_daily_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vmagent_daily_series_limit_rows_dropped_total value: %d", v)
}
vmagent2.APIV1ImportPrometheusNoWaitFlush(t, []string{
"foo_bar2 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
waitFor(
func() bool {
return vmagent2.GetIntMetric(t, "vmagent_daily_series_limit_rows_dropped_total") > 0
},
)
// test running with unlimited tracker
vmagent3 := tc.MustStartVmagent("vmagent-unlimited", []string{
`-remoteWrite.flushInterval=50ms`,
fmt.Sprintf(`-remoteWrite.url=%s/api/v1/write`, remoteWriteSrv.URL),
"-remoteWrite.maxRowsPerBlock=10",
"-remoteWrite.maxDailySeries=-1",
"-remoteWrite.maxHourlySeries=-1",
"-remoteWrite.tmpDataPath=" + tc.Dir() + "/vmagent-unlimited",
}, ``)
metrics := make([]string, 0, 100)
for i := range 100 {
metrics = append(metrics, fmt.Sprintf("foo_bar%d 1 1652169600000", i)) // 2022-05-10T08:00:00Z
}
vmagent3.APIV1ImportPrometheusNoWaitFlush(t, metrics, apptest.QueryOpts{})
waitFor(
func() bool {
return vmagent3.GetIntMetric(t, "vmagent_hourly_series_limit_current_series") > 0
},
)
if v := vmagent3.GetIntMetric(t, "vmagent_hourly_series_limit_max_series"); v != math.MaxInt32 {
t.Fatalf("unexpected vmagent_hourly_series_limit_max_series value: %d", v)
}
if v := vmagent3.GetIntMetric(t, "vmagent_hourly_series_limit_current_series"); v != 100 {
t.Fatalf("unexpected vmagent_hourly_series_limit_current_series value: %d", v)
}
if v := vmagent3.GetIntMetric(t, "vmagent_hourly_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vmagent_hourly_series_limit_rows_dropped_total value: %d", v)
}
if v := vmagent3.GetIntMetric(t, "vmagent_daily_series_limit_max_series"); v != math.MaxInt32 {
t.Fatalf("unexpected vmagent_daily_series_limit_max_series value: %d", v)
}
if v := vmagent3.GetIntMetric(t, "vmagent_daily_series_limit_current_series"); v != 100 {
t.Fatalf("unexpected vmagent_daily_series_limit_current_series value: %d", v)
}
if v := vmagent3.GetIntMetric(t, "vmagent_daily_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vmagent_daily_series_limit_rows_dropped_total value: %d", v)
}
}


@@ -0,0 +1,180 @@
package tests
import (
"encoding/json"
"fmt"
"io"
"os"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/VictoriaMetrics/VictoriaMetrics/apptest"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
)
const (
// thanosSnapshot contains both raw (resolution=0) and downsampled (resolution>0) blocks
// with Thanos metadata in meta.json.
thanosSnapshot = "./testdata/thanos-snapshot"
// thanosExpectedAllAggrResponse is the expected response when all aggregate types are imported
// (default behavior without --thanos-aggr-types flag).
thanosExpectedAllAggrResponse = "./testdata/thanos-snapshot/expected_all_aggr_response.json"
// thanosExpectedFilteredAggrResponse is the expected response when only specific aggregate
// types are imported via --thanos-aggr-types flag.
thanosExpectedFilteredAggrResponse = "./testdata/thanos-snapshot/expected_filtered_aggr_response.json"
// thanosQueryFilter is the PromQL query to select the test metrics.
thanosQueryFilter = `{__name__=~"thanos_test.*"}`
// thanosQueryTimeStart and thanosQueryTimeEnd define the time range for querying imported data.
thanosQueryTimeStart = "2025-01-01T00:00:00Z"
thanosQueryTimeEnd = "2025-01-01T02:00:00Z"
)
// TestSingleVmctlThanosMigrationAllAggr tests migration of Thanos blocks without
// --thanos-aggr-types flag. All aggregate types should be imported by default.
func TestSingleVmctlThanosMigrationAllAggr(t *testing.T) {
fs.MustRemoveDir(t.Name())
tc := apptest.NewTestCase(t)
defer tc.Stop()
vmsingleDst := tc.MustStartDefaultVmsingle()
vmAddr := fmt.Sprintf("http://%s/", vmsingleDst.HTTPAddr())
vmctlFlags := []string{
`thanos`,
`--thanos-snapshot=` + thanosSnapshot,
`--vm-addr=` + vmAddr,
`--disable-progress-bar=true`,
}
testThanosMigration(tc, vmsingleDst, vmctlFlags, thanosExpectedAllAggrResponse)
}
// TestClusterVmctlThanosMigrationAllAggr tests migration of Thanos blocks to cluster
// without --thanos-aggr-types flag. All aggregate types should be imported by default.
func TestClusterVmctlThanosMigrationAllAggr(t *testing.T) {
fs.MustRemoveDir(t.Name())
tc := apptest.NewTestCase(t)
defer tc.Stop()
cluster := tc.MustStartDefaultCluster()
vmAddr := fmt.Sprintf("http://%s/", cluster.Vminsert.HTTPAddr())
vmctlFlags := []string{
`thanos`,
`--thanos-snapshot=` + thanosSnapshot,
`--vm-addr=` + vmAddr,
`--disable-progress-bar=true`,
`--vm-account-id=0`,
}
testThanosMigration(tc, cluster, vmctlFlags, thanosExpectedAllAggrResponse)
}
// TestSingleVmctlThanosMigrationFilteredAggr tests migration of Thanos blocks with
// --thanos-aggr-types flag set to specific types (e.g., count,sum).
func TestSingleVmctlThanosMigrationFilteredAggr(t *testing.T) {
fs.MustRemoveDir(t.Name())
tc := apptest.NewTestCase(t)
defer tc.Stop()
vmsingleDst := tc.MustStartDefaultVmsingle()
vmAddr := fmt.Sprintf("http://%s/", vmsingleDst.HTTPAddr())
vmctlFlags := []string{
`thanos`,
`--thanos-snapshot=` + thanosSnapshot,
`--vm-addr=` + vmAddr,
`--disable-progress-bar=true`,
`--thanos-aggr-types=count`,
`--thanos-aggr-types=sum`,
}
testThanosMigration(tc, vmsingleDst, vmctlFlags, thanosExpectedFilteredAggrResponse)
}
// TestClusterVmctlThanosMigrationFilteredAggr tests migration of Thanos blocks to cluster
// with --thanos-aggr-types flag set to specific types (e.g., count,sum).
func TestClusterVmctlThanosMigrationFilteredAggr(t *testing.T) {
fs.MustRemoveDir(t.Name())
tc := apptest.NewTestCase(t)
defer tc.Stop()
cluster := tc.MustStartDefaultCluster()
vmAddr := fmt.Sprintf("http://%s/", cluster.Vminsert.HTTPAddr())
vmctlFlags := []string{
`thanos`,
`--thanos-snapshot=` + thanosSnapshot,
`--vm-addr=` + vmAddr,
`--disable-progress-bar=true`,
`--vm-account-id=0`,
`--thanos-aggr-types=count`,
`--thanos-aggr-types=sum`,
}
testThanosMigration(tc, cluster, vmctlFlags, thanosExpectedFilteredAggrResponse)
}
func testThanosMigration(tc *apptest.TestCase, sut apptest.PrometheusWriteQuerier, vmctlFlags []string, expectedFile string) {
t := tc.T()
t.Helper()
cmpOpt := cmpopts.IgnoreFields(apptest.PrometheusAPIV1QueryResponse{}, "Status", "Data.ResultType")
// Verify no data exists before migration
got := sut.PrometheusAPIV1Query(t, thanosQueryFilter, apptest.QueryOpts{
Step: "5m",
Time: thanosQueryTimeStart,
})
want := apptest.NewPrometheusAPIV1QueryResponse(t, `{"data":{"result":[]}}`)
if diff := cmp.Diff(want, got, cmpOpt); diff != "" {
t.Errorf("unexpected response before migration (-want, +got):\n%s", diff)
}
// Run vmctl migration
tc.MustStartVmctl("vmctl", vmctlFlags)
sut.ForceFlush(t)
// Load expected response
file, err := os.Open(expectedFile)
if err != nil {
t.Fatalf("cannot open expected response file: %s", err)
}
defer file.Close()
bytes, err := io.ReadAll(file)
if err != nil {
t.Fatalf("cannot read expected response file: %s", err)
}
var wantResponse apptest.PrometheusAPIV1QueryResponse
if err := json.Unmarshal(bytes, &wantResponse); err != nil {
t.Fatalf("cannot unmarshal expected response file: %s", err)
}
wantResponse.Sort()
tc.Assert(&apptest.AssertOptions{
Retries: 300,
Msg: "unexpected metrics stored after Thanos migration",
Got: func() any {
result := sut.PrometheusAPIV1Export(t, thanosQueryFilter, apptest.QueryOpts{
Start: thanosQueryTimeStart,
End: thanosQueryTimeEnd,
})
result.Sort()
return result.Data.Result
},
Want: wantResponse.Data.Result,
CmpOpts: []cmp.Option{
cmpopts.IgnoreFields(apptest.PrometheusAPIV1QueryResponse{}, "Status", "Data.ResultType"),
},
})
}


@@ -29,7 +29,7 @@ func StartVmagent(instance string, flags []string, cli *Client, promScrapeConfig
httpListenAddrRE,
}
app, stderrExtracts, err := startApp(instance, "../../bin/vmagent", flags, &appOptions{
app, stderrExtracts, err := startApp(instance, "../../bin/vmagent-race", flags, &appOptions{
defaultFlags: map[string]string{
"-httpListenAddr": "127.0.0.1:0",
"-promscrape.config": promScrapeConfigFilePath,


@@ -32,7 +32,7 @@ func StartVmauth(instance string, flags []string, cli *Client, configFilePath st
httpBuilitinListenAddrRE,
}
app, stderrExtracts, err := startApp(instance, "../../bin/vmauth", flags, &appOptions{
app, stderrExtracts, err := startApp(instance, "../../bin/vmauth-race", flags, &appOptions{
defaultFlags: map[string]string{
"-httpListenAddr": "127.0.0.1:0",
"-auth.config": configFilePath,


@@ -10,6 +10,6 @@ func StartVmbackup(instance, storageDataPath, snapshotCreateURL, dst string, out
"-snapshot.createURL=" + snapshotCreateURL,
"-dst=" + dst,
}
_, _, err := startApp(instance, "../../bin/vmbackup", flags, &appOptions{wait: true, output: output})
_, _, err := startApp(instance, "../../bin/vmbackup-race", flags, &appOptions{wait: true, output: output})
return err
}


@@ -4,6 +4,6 @@ import "io"
// StartVmctl starts an instance of vmctl cli with the given flags
func StartVmctl(instance string, flags []string, output io.Writer) error {
_, _, err := startApp(instance, "../../bin/vmctl", flags, &appOptions{wait: true, output: output})
_, _, err := startApp(instance, "../../bin/vmctl-race", flags, &appOptions{wait: true, output: output})
return err
}

View File

@@ -57,7 +57,7 @@ func StartVminsert(instance string, flags []string, cli *Client, output io.Write
extractREs = append(extractREs, regexp.MustCompile(logRecord))
}
app, stderrExtracts, err := startApp(instance, "../../bin/vminsert", flags, &appOptions{
app, stderrExtracts, err := startApp(instance, "../../bin/vminsert-race", flags, &appOptions{
defaultFlags: map[string]string{
"-httpListenAddr": "127.0.0.1:0",
"-clusternativeListenAddr": "127.0.0.1:0",
@@ -237,8 +237,22 @@ func (app *Vminsert) PrometheusAPIV1ImportPrometheus(t *testing.T, records []str
data := []byte(strings.Join(records, "\n"))
var recordsCount int
var metadataRecords int
uniqueMetadataMetricNames := make(map[string]struct{})
for _, record := range records {
if strings.HasPrefix(record, "#") {
// metric metadata has the following format:
// # HELP importprometheus_series
// # TYPE importprometheus_series
// both lines result in a single metadata record
if strings.HasPrefix(record, "# ") {
metadataItems := strings.Split(record, " ")
if len(metadataItems) < 3 {
t.Fatalf("BUG: unexpected metadata format=%q", record)
}
metricName := metadataItems[2]
if _, ok := uniqueMetadataMetricNames[metricName]; ok {
continue
}
uniqueMetadataMetricNames[metricName] = struct{}{}
metadataRecords++
continue
}

View File

@@ -9,6 +9,6 @@ func StartVmrestore(instance, src, storageDataPath string, output io.Writer) err
"-src=" + src,
"-storageDataPath=" + storageDataPath,
}
_, _, err := startApp(instance, "../../bin/vmrestore", flags, &appOptions{wait: true, output: output})
_, _, err := startApp(instance, "../../bin/vmrestore-race", flags, &appOptions{wait: true, output: output})
return err
}

View File

@@ -25,7 +25,7 @@ type Vmselect struct {
// sets the default flags and populates the app instance state with runtime
// values extracted from the application log (such as httpListenAddr)
func StartVmselect(instance string, flags []string, cli *Client, output io.Writer) (*Vmselect, error) {
app, stderrExtracts, err := startApp(instance, "../../bin/vmselect", flags, &appOptions{
app, stderrExtracts, err := startApp(instance, "../../bin/vmselect-race", flags, &appOptions{
defaultFlags: map[string]string{
"-httpListenAddr": "127.0.0.1:0",
"-clusternativeListenAddr": "127.0.0.1:0",

View File

@@ -1,9 +0,0 @@
# see https://docs.codecov.com/docs/common-recipe-list#set-non-blocking-status-checks
coverage:
status:
project:
default:
informational: true
patch:
default:
informational: true

View File

@@ -119,7 +119,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
},
{
"color": "yellow",
@@ -199,7 +200,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
}
]
}
@@ -208,14 +210,14 @@
},
"gridPos": {
"h": 4,
"w": 9,
"w": 6,
"x": 0,
"y": 14
},
"id": 5,
"options": {
"colorMode": "value",
"graphMode": "area",
"colorMode": "none",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
@@ -257,7 +259,7 @@
"type": "prometheus",
"uid": "$ds"
},
"description": "",
"description": "Shows the total number of loaded alerting rules across selected instances and groups.",
"fieldConfig": {
"defaults": {
"mappings": [],
@@ -266,7 +268,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
}
]
}
@@ -275,11 +278,11 @@
},
"gridPos": {
"h": 4,
"w": 7,
"x": 9,
"w": 6,
"x": 6,
"y": 14
},
"id": 4,
"id": 8,
"options": {
"colorMode": "value",
"graphMode": "area",
@@ -320,6 +323,144 @@
"title": "Alerting rules",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "Shows the total number of pendings alerts in selected instances and grouping groups.",
"fieldConfig": {
"defaults": {
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "yellow",
"value": 0
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 12,
"y": 14
},
"id": 9,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"last"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"text": {
"valueSize": 80
},
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${ds}"
},
"editorMode": "code",
"exemplar": false,
"expr": "sum(vmalert_alerts_pending{job=~\"$job\",instance=~\"$instance\",group=~\"$group\"})",
"instant": false,
"interval": "",
"legendFormat": "",
"range": true,
"refId": "A"
}
],
"title": "Alerting pending",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "Shows the total number of firing alerts in selected instances and grouping groups.",
"fieldConfig": {
"defaults": {
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "red",
"value": 0
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 18,
"y": 14
},
"id": 10,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"last"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"text": {
"valueSize": 80
},
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${ds}"
},
"editorMode": "code",
"exemplar": false,
"expr": "sum(vmalert_alerts_firing{job=~\"$job\",instance=~\"$instance\",group=~\"$group\"})",
"instant": false,
"interval": "",
"legendFormat": "",
"range": true,
"refId": "A"
}
],
"title": "Alerting firing",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
@@ -332,6 +473,9 @@
"cellOptions": {
"type": "auto"
},
"footer": {
"reducers": []
},
"inspect": false
},
"mappings": [],
@@ -339,7 +483,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
},
{
"color": "red",
@@ -352,7 +497,7 @@
{
"matcher": {
"id": "byName",
"options": "Count (sum)"
"options": "Count"
},
"properties": [
{
@@ -372,20 +517,12 @@
"id": 2,
"options": {
"cellHeight": "sm",
"footer": {
"countRows": false,
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"frameIndex": 1,
"showHeader": true,
"sortBy": [
{
"desc": true,
"displayName": "Count (sum)"
"displayName": "Count"
}
]
},
@@ -398,7 +535,7 @@
},
"editorMode": "code",
"exemplar": false,
"expr": "topk_max(100, sum(increases_over_time(vmalert_alerts_firing{job=~\"$job\",instance=~\"$instance\",group=~\"$group\"}[$__range])) by(group, alertname) > 0)",
"expr": "topk_max(100, sum(increases_over_time(vmalert_alerts_firing{job=~\"$job\",instance=~\"$instance\",group=~\"$group\"}[$__range])) by(group) > 0)",
"format": "table",
"instant": true,
"key": "Q-3934f0fb-8ad6-4519-a98d-c26d0fc6b312-0",
@@ -414,8 +551,9 @@
"options": {
"excludeByName": {
"Time": true,
"alertname": false
"alertname": true
},
"includeByName": {},
"indexByName": {
"Time": 0,
"Value": 3,
@@ -428,23 +566,6 @@
"group": "Group"
}
}
},
{
"id": "groupBy",
"options": {
"fields": {
"Count": {
"aggregations": [
"sum"
],
"operation": "aggregate"
},
"Group": {
"aggregations": [],
"operation": "groupby"
}
}
}
}
],
"type": "table"
@@ -468,7 +589,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
},
{
"color": "red",
@@ -531,16 +653,14 @@
"id": 1,
"options": {
"cellHeight": "sm",
"footer": {
"countRows": false,
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"frameIndex": 1,
"showHeader": true
"showHeader": true,
"sortBy": [
{
"desc": true,
"displayName": "Count"
}
]
},
"pluginVersion": "12.0.2",
"targets": [

View File

@@ -115,7 +115,7 @@
"overrides": []
},
"gridPos": {
"h": 4,
"h": 3,
"w": 4,
"x": 0,
"y": 1

View File

@@ -185,7 +185,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -250,7 +250,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -313,7 +313,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -380,7 +380,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -460,7 +460,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -557,7 +557,7 @@
},
"showHeader": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -663,7 +663,7 @@
"sort": "asc"
}
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -783,7 +783,7 @@
"sort": "desc"
}
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -889,7 +889,7 @@
"sort": "desc"
}
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -993,7 +993,7 @@
"sort": "desc"
}
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1097,7 +1097,7 @@
"sort": "none"
}
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1222,7 +1222,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1337,7 +1337,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1447,7 +1447,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1562,7 +1562,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1695,7 +1695,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1800,7 +1800,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1921,7 +1921,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2043,7 +2043,7 @@
"sort": "none"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2151,7 +2151,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2260,7 +2260,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2368,7 +2368,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2472,7 +2472,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2592,7 +2592,7 @@
}
]
},
"pluginVersion": "10.4.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2722,7 +2722,7 @@
"sort": "none"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2827,7 +2827,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.1.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2942,7 +2942,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3044,7 +3044,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3146,7 +3146,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3247,7 +3247,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3346,7 +3346,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3462,7 +3462,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3564,7 +3564,7 @@
"sort": "desc"
}
},
"pluginVersion": "8.0.3",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3663,7 +3663,7 @@
"sort": "none"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3707,11 +3707,13 @@
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -3720,6 +3722,7 @@
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
@@ -3727,6 +3730,7 @@
"type": "linear"
},
"showPoints": "auto",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
@@ -3741,7 +3745,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
},
{
"color": "red",
@@ -3756,7 +3761,7 @@
"h": 8,
"w": 12,
"x": 0,
"y": 351
"y": 30
},
"id": 52,
"options": {
@@ -3767,10 +3772,12 @@
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3778,7 +3785,7 @@
"uid": "$ds"
},
"editorMode": "code",
"expr": "sum(rate(vmalert_remotewrite_sent_rows_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by(job)",
"expr": "sum(rate(vmalert_remotewrite_sent_rows_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by(job) default sum(rate(vmalert_remotewrite_sent_rows_sum{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by(job)",
"legendFormat": "__auto",
"range": true,
"refId": "A"
@@ -3792,18 +3799,20 @@
"type": "victoriametrics-metrics-datasource",
"uid": "$ds"
},
"description": "Shows the number of datapoints dropped by vmalert while sending to the configured remote write URL. vmalert performs up to 5 retries before dropping the data. Check vmalert's error logs for the specific error message.",
"description": "Shows the global rate for number of written bytes via remote write connections.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -3812,6 +3821,115 @@
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [],
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": 0
},
{
"color": "red",
"value": 80
}
]
},
"unit": "decbytes"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 30
},
"id": 60,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull",
"max"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
"type": "victoriametrics-metrics-datasource",
"uid": "$ds"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(rate(vmalert_remotewrite_conn_bytes_written_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by(job) > 0",
"interval": "",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Bytes write rate ($instance)",
"type": "timeseries"
},
{
"datasource": {
"type": "victoriametrics-metrics-datasource",
"uid": "$ds"
},
"description": "Data could be dropped in two scenarios:\n1. The remote write queue(configured by -remoteWrite.maxQueueSize) is full: \nThe queue can fill rapidly if heavy rules generate millions of series, or if remote write requests are unable to send data to the destination in a timely manner, resulting in data being jammed in the queue. Consider tuning -remoteWrite.maxQueueSize or -remoteWrite.concurrency.\nSee also Rows per request pannel.\n2. The request to the configured remote write URL failed(vmalert performs up to 5 retries before dropping the data).\n\nCheck vmalert's error logs for the specific error message.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
@@ -3819,6 +3937,7 @@
"type": "linear"
},
"showPoints": "auto",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
@@ -3833,7 +3952,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
},
{
"color": "red",
@@ -3847,8 +3967,8 @@
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 351
"x": 0,
"y": 38
},
"id": 53,
"options": {
@@ -3859,10 +3979,12 @@
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3879,6 +4001,237 @@
"title": "Datapoints drop rate ($instance)",
"type": "timeseries"
},
{
"datasource": {
"type": "victoriametrics-metrics-datasource",
"uid": "$ds"
},
"description": "Displays the maximum 99th percentile of the number of time series pending in the remote write queue.\n\nThe maximum queue size is configured by remoteWrite.maxQueueSize. \nvmalert will begin dropping data if the queue has no room for newly generated data.\nThe queue can fill rapidly when heavy rules generate millions of series, or when remote write requests are unable to send data to the destination in a timely manner, causing data to accumulate in the queue. Consider tuning -remoteWrite.maxQueueSize or -remoteWrite.concurrency.\n\nSee also the Rows per request panel.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": 0
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "max"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
},
{
"id": "custom.fillOpacity",
"value": 0
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 38
},
"id": 68,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
"type": "victoriametrics-metrics-datasource",
"uid": "$ds"
},
"editorMode": "code",
"expr": "max(histogram_quantile(0.99, sum(increase(vmalert_remotewrite_queue_size_bucket{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by (instance, vmrange))) > 1",
"legendFormat": "current",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "victoriametrics-metrics-datasource",
"uid": "$ds"
},
"editorMode": "code",
"expr": "min(vmalert_remotewrite_queue_capacity{job=~\"$job\", instance=~\"$instance\"})",
"hide": false,
"instant": false,
"legendFormat": "max",
"range": true,
"refId": "B"
}
],
"title": "Remote write queue size ($instance)",
"type": "timeseries"
},
{
"datasource": {
"type": "victoriametrics-metrics-datasource",
"uid": "$ds"
},
"description": "Displays the maximum 99th percentile of the number of data samples sent per request to the configured remote write URL.\nThe value is influenced by the utilization of the remote write queue. It is normal for this metric to remain low when there are no rules generating a large number of series, and to spike when heavy rules generate thousands of series.\nDuring periods of high load (when many heavy rules are generating results), the optimal value for rows per request should be high to maximize the efficiency of each push operation.\n\nIf you observe datapoint drops and consistently low values for rows per request, try checking the write latency between vmalert and the remote write destination, or increasing `-remoteWrite.flushInterval`.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": 0
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 46
},
"id": 69,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
"type": "victoriametrics-metrics-datasource",
"uid": "$ds"
},
"editorMode": "code",
"expr": "max(histogram_quantile(0.99, sum(increase(vmalert_remotewrite_sent_rows_bucket{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by (instance, vmrange)))",
"legendFormat": "max",
"range": true,
"refId": "A"
}
],
"title": "Rows per request ($instance)",
"type": "timeseries"
},
{
"datasource": {
"type": "victoriametrics-metrics-datasource",
@@ -3897,6 +4250,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -3913,6 +4267,7 @@
"type": "linear"
},
"showPoints": "never",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
@@ -3929,7 +4284,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
},
{
"color": "red",
@@ -3944,8 +4300,8 @@
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 378
"x": 12,
"y": 46
},
"id": 54,
"options": {
@@ -3960,10 +4316,12 @@
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3981,109 +4339,6 @@
],
"title": "Connections ($instance)",
"type": "timeseries"
},
{
"datasource": {
"type": "victoriametrics-metrics-datasource",
"uid": "$ds"
},
"description": "Shows the global rate for number of written bytes via remote write connections.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [],
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "decbytes"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 378
},
"id": 60,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull",
"max"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "victoriametrics-metrics-datasource",
"uid": "$ds"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(rate(vmalert_remotewrite_conn_bytes_written_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by(job) > 0",
"interval": "",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Bytes write rate ($instance)",
"type": "timeseries"
}
],
"title": "Remote write",

View File

@@ -184,7 +184,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -249,7 +249,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -312,7 +312,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -379,7 +379,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -459,7 +459,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -556,7 +556,7 @@
},
"showHeader": true
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -662,7 +662,7 @@
"sort": "asc"
}
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -782,7 +782,7 @@
"sort": "desc"
}
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -888,7 +888,7 @@
"sort": "desc"
}
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -992,7 +992,7 @@
"sort": "desc"
}
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1096,7 +1096,7 @@
"sort": "none"
}
},
"pluginVersion": "12.0.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1221,7 +1221,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1336,7 +1336,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1446,7 +1446,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1561,7 +1561,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1694,7 +1694,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1799,7 +1799,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -1920,7 +1920,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2042,7 +2042,7 @@
"sort": "none"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2150,7 +2150,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2259,7 +2259,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2367,7 +2367,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2471,7 +2471,7 @@
"sort": "desc"
}
},
"pluginVersion": "11.5.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2591,7 +2591,7 @@
}
]
},
"pluginVersion": "10.4.2",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2721,7 +2721,7 @@
"sort": "none"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2826,7 +2826,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.1.0",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -2941,7 +2941,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3043,7 +3043,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3145,7 +3145,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3246,7 +3246,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3345,7 +3345,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3461,7 +3461,7 @@
"sort": "desc"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3563,7 +3563,7 @@
"sort": "desc"
}
},
"pluginVersion": "8.0.3",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3662,7 +3662,7 @@
"sort": "none"
}
},
"pluginVersion": "9.2.6",
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3706,11 +3706,13 @@
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -3719,6 +3721,7 @@
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
@@ -3726,6 +3729,7 @@
"type": "linear"
},
"showPoints": "auto",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
@@ -3740,7 +3744,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
},
{
"color": "red",
@@ -3755,7 +3760,7 @@
"h": 8,
"w": 12,
"x": 0,
"y": 351
"y": 30
},
"id": 52,
"options": {
@@ -3766,10 +3771,12 @@
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3777,7 +3784,7 @@
"uid": "$ds"
},
"editorMode": "code",
"expr": "sum(rate(vmalert_remotewrite_sent_rows_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by(job)",
"expr": "sum(rate(vmalert_remotewrite_sent_rows_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by(job) default sum(rate(vmalert_remotewrite_sent_rows_sum{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by(job)",
"legendFormat": "__auto",
"range": true,
"refId": "A"
@@ -3791,18 +3798,20 @@
"type": "prometheus",
"uid": "$ds"
},
"description": "Shows the number of datapoints dropped by vmalert while sending to the configured remote write URL. vmalert performs up to 5 retries before dropping the data. Check vmalert's error logs for the specific error message.",
"description": "Shows the global rate for number of written bytes via remote write connections.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -3811,6 +3820,115 @@
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [],
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": 0
},
{
"color": "red",
"value": 80
}
]
},
"unit": "decbytes"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 30
},
"id": 60,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull",
"max"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(rate(vmalert_remotewrite_conn_bytes_written_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by(job) > 0",
"interval": "",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Bytes write rate ($instance)",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "Data could be dropped in two scenarios:\n1. The remote write queue(configured by -remoteWrite.maxQueueSize) is full: \nThe queue can fill rapidly if heavy rules generate millions of series, or if remote write requests are unable to send data to the destination in a timely manner, resulting in data being jammed in the queue. Consider tuning -remoteWrite.maxQueueSize or -remoteWrite.concurrency.\nSee also Rows per request pannel.\n2. The request to the configured remote write URL failed(vmalert performs up to 5 retries before dropping the data).\n\nCheck vmalert's error logs for the specific error message.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
@@ -3818,6 +3936,7 @@
"type": "linear"
},
"showPoints": "auto",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
@@ -3832,7 +3951,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
},
{
"color": "red",
@@ -3846,8 +3966,8 @@
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 351
"x": 0,
"y": 38
},
"id": 53,
"options": {
@@ -3858,10 +3978,12 @@
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3878,6 +4000,237 @@
"title": "Datapoints drop rate ($instance)",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "Displays the maximum 99th percentile of the number of time series pending in the remote write queue.\n\nThe maximum queue size is configured by remoteWrite.maxQueueSize. \nvmalert will begin dropping data if the queue has no room for newly generated data.\nThe queue can fill rapidly when heavy rules generate millions of series, or when remote write requests are unable to send data to the destination in a timely manner, causing data to accumulate in the queue. Consider tuning -remoteWrite.maxQueueSize or -remoteWrite.concurrency.\n\nSee also the Rows per request panel.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": 0
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "max"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
},
{
"id": "custom.fillOpacity",
"value": 0
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 38
},
"id": 68,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"editorMode": "code",
"expr": "max(histogram_quantile(0.99, sum(increase(vmalert_remotewrite_queue_size_bucket{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by (instance, vmrange))) > 1",
"legendFormat": "current",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"editorMode": "code",
"expr": "min(vmalert_remotewrite_queue_capacity{job=~\"$job\", instance=~\"$instance\"})",
"hide": false,
"instant": false,
"legendFormat": "max",
"range": true,
"refId": "B"
}
],
"title": "Remote write queue size ($instance)",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "Displays the maximum 99th percentile of the number of data samples sent per request to the configured remote write URL.\nThe value is influenced by the utilization of the remote write queue. It is normal for this metric to remain low when there are no rules generating a large number of series, and to spike when heavy rules generate thousands of series.\nDuring periods of high load (when many heavy rules are generating results), the optimal value for rows per request should be high to maximize the efficiency of each push operation.\n\nIf you observe datapoint drops and consistently low values for rows per request, try checking the write latency between vmalert and the remote write destination, or increasing `-remoteWrite.flushInterval`.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": 0
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 46
},
"id": 69,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"editorMode": "code",
"expr": "max(histogram_quantile(0.99, sum(increase(vmalert_remotewrite_sent_rows_bucket{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by (instance, vmrange)))",
"legendFormat": "max",
"range": true,
"refId": "A"
}
],
"title": "Rows per request ($instance)",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
@@ -3896,6 +4249,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -3912,6 +4266,7 @@
"type": "linear"
},
"showPoints": "never",
"showValues": false,
"spanNulls": false,
"stacking": {
"group": "A",
@@ -3928,7 +4283,8 @@
"mode": "absolute",
"steps": [
{
"color": "green"
"color": "green",
"value": 0
},
{
"color": "red",
@@ -3943,8 +4299,8 @@
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 378
"x": 12,
"y": 46
},
"id": 54,
"options": {
@@ -3959,10 +4315,12 @@
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "12.2.0",
"targets": [
{
"datasource": {
@@ -3980,109 +4338,6 @@
],
"title": "Connections ($instance)",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "Shows the global rate for number of written bytes via remote write connections.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [],
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "decbytes"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 378
},
"id": 60,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull",
"max"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(rate(vmalert_remotewrite_conn_bytes_written_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])) by(job) > 0",
"interval": "",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Bytes write rate ($instance)",
"type": "timeseries"
}
],
"title": "Remote write",


@@ -7,7 +7,7 @@ ROOT_IMAGE ?= alpine:3.23.3
ROOT_IMAGE_SCRATCH ?= scratch
CERTS_IMAGE := alpine:3.23.3
GO_BUILDER_IMAGE := golang:1.26.1
GO_BUILDER_IMAGE := golang:1.26.2
BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr :/ __)-1
BASE_IMAGE := local/base:1.1.4-$(shell echo $(ROOT_IMAGE) | tr :/ __)-$(shell echo $(CERTS_IMAGE) | tr :/ __)


@@ -3,7 +3,7 @@ services:
# It scrapes targets defined in --promscrape.config
# And forward them to --remoteWrite.url
vmagent:
image: victoriametrics/vmagent:v1.138.0
image: victoriametrics/vmagent:v1.140.0
depends_on:
- "vmauth"
ports:
@@ -42,14 +42,14 @@ services:
# vmstorage shards. Each shard receives 1/N of all metrics sent to vminserts,
# where N is number of vmstorages (2 in this case).
vmstorage-1:
image: victoriametrics/vmstorage:v1.138.0-cluster
image: victoriametrics/vmstorage:v1.140.0-cluster
volumes:
- strgdata-1:/storage
command:
- "--storageDataPath=/storage"
restart: always
vmstorage-2:
image: victoriametrics/vmstorage:v1.138.0-cluster
image: victoriametrics/vmstorage:v1.140.0-cluster
volumes:
- strgdata-2:/storage
command:
@@ -59,7 +59,7 @@ services:
# vminsert is ingestion frontend. It receives metrics pushed by vmagent,
# pre-process them and distributes across configured vmstorage shards.
vminsert-1:
image: victoriametrics/vminsert:v1.138.0-cluster
image: victoriametrics/vminsert:v1.140.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -68,7 +68,7 @@ services:
- "--storageNode=vmstorage-2:8400"
restart: always
vminsert-2:
image: victoriametrics/vminsert:v1.138.0-cluster
image: victoriametrics/vminsert:v1.140.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -80,7 +80,7 @@ services:
# vmselect is a query fronted. It serves read queries in MetricsQL or PromQL.
# vmselect collects results from configured `--storageNode` shards.
vmselect-1:
image: victoriametrics/vmselect:v1.138.0-cluster
image: victoriametrics/vmselect:v1.140.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -90,7 +90,7 @@ services:
- "--vmalert.proxyURL=http://vmalert:8880"
restart: always
vmselect-2:
image: victoriametrics/vmselect:v1.138.0-cluster
image: victoriametrics/vmselect:v1.140.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -105,7 +105,7 @@ services:
# read requests from Grafana, vmui, vmalert among vmselects.
# It can be used as an authentication proxy.
vmauth:
image: victoriametrics/vmauth:v1.138.0
image: victoriametrics/vmauth:v1.140.0
depends_on:
- "vmselect-1"
- "vmselect-2"
@@ -119,7 +119,7 @@ services:
# vmalert executes alerting and recording rules
vmalert:
image: victoriametrics/vmalert:v1.138.0
image: victoriametrics/vmalert:v1.140.0
depends_on:
- "vmauth"
ports:


@@ -3,7 +3,7 @@ services:
# It scrapes targets defined in --promscrape.config
# And forward them to --remoteWrite.url
vmagent:
image: victoriametrics/vmagent:v1.138.0
image: victoriametrics/vmagent:v1.140.0
depends_on:
- "victoriametrics"
ports:
@@ -18,7 +18,7 @@ services:
# VictoriaMetrics instance, a single process responsible for
# storing metrics and serve read requests.
victoriametrics:
image: victoriametrics/victoria-metrics:v1.138.0
image: victoriametrics/victoria-metrics:v1.140.0
ports:
- 8428:8428
- 8089:8089
@@ -59,7 +59,7 @@ services:
# vmalert executes alerting and recording rules
vmalert:
image: victoriametrics/vmalert:v1.138.0
image: victoriametrics/vmalert:v1.140.0
depends_on:
- "victoriametrics"
- "alertmanager"


@@ -85,6 +85,20 @@ groups:
to the configured remote write URL. This may result in gaps in recording rules or alerts state.
Check vmalert's logs for detailed error message."
- alert: RemoteWriteQueueHighUsage
expr: histogram_quantile(0.99, sum(increase(vmalert_remotewrite_queue_size_bucket[5m])) by (job, instance, vmrange)) / vmalert_remotewrite_queue_capacity > 0.8
for: 15m
labels:
severity: warning
annotations:
summary: "Remote write queue capacity on the vmalert instance {{ $labels.instance }} has exceeded 80% utilization"
description: "The remote write queue on vmalert instance {{ $labels.instance }} has consistently high utilization.
The queue acts as a buffer between rules generating series and the remote-write client consuming and pushing these series. When the queue overflows, vmalert starts dropping newly generated series.
Queue may overflow due to multiple reasons:
1. Some bad rules produce too many series at once. This can be limited using the global `-rule.resultsLimit` flag or `limit` param at the rule group level.
2. The remote write connection is slow. Increase `-remoteWrite.concurrency` so vmalert can establish more concurrent connections.
3. The queue size is too small. Increase `-remoteWrite.maxQueueSize` to extend the buffer size. Note that a larger queue will result in higher memory consumption when the queue is full."
- alert: AlertmanagerErrors
expr: increase(vmalert_alerts_send_errors_total[5m]) > 0
for: 15m
@@ -94,3 +108,4 @@ groups:
summary: "vmalert instance {{ $labels.instance }} is failing to send notifications to Alertmanager"
description: "vmalert instance {{ $labels.instance }} is failing to send alert notifications to \"{{ $labels.addr }}\".
Check vmalert's logs for detailed error message."
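The `RemoteWriteQueueHighUsage` rule added above fires when the 99th percentile of pending series stays above 80% of the queue capacity for 15 minutes; once the p99 value is computed, the condition reduces to a plain ratio check. A minimal Python sketch of that comparison (not vmalert's implementation — names are illustrative):

```python
# Firing condition of RemoteWriteQueueHighUsage, reduced to a ratio check:
# p99 of pending series divided by capacity (-remoteWrite.maxQueueSize) > 0.8.
def queue_high_usage(p99_queue_size, capacity, threshold=0.8):
    return p99_queue_size / capacity > threshold

firing = queue_high_usage(850, 1_000)   # 85% utilization
healthy = queue_high_usage(100, 1_000)  # 10% utilization
```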


@@ -1,6 +1,6 @@
services:
vmagent:
image: victoriametrics/vmagent:v1.138.0
image: victoriametrics/vmagent:v1.140.0
depends_on:
- "victoriametrics"
ports:
@@ -14,7 +14,7 @@ services:
restart: always
victoriametrics:
image: victoriametrics/victoria-metrics:v1.138.0
image: victoriametrics/victoria-metrics:v1.140.0
ports:
- 8428:8428
volumes:
@@ -40,7 +40,7 @@ services:
restart: always
vmalert:
image: victoriametrics/vmalert:v1.138.0
image: victoriametrics/vmalert:v1.140.0
depends_on:
- "victoriametrics"
ports:
@@ -59,7 +59,7 @@ services:
- '--external.alert.source=explore?orgId=1&left=["now-1h","now","VictoriaMetrics",{"expr": },{"mode":"Metrics"},{"ui":[true,true,true,"none"]}]'
restart: always
vmanomaly:
image: victoriametrics/vmanomaly:v1.29.1
image: victoriametrics/vmanomaly:v1.29.2
depends_on:
- "victoriametrics"
ports:


@@ -14,6 +14,15 @@ aliases:
---
Please find the changelog for VictoriaMetrics Anomaly Detection below.
## v1.29.2
Released: 2026-04-02
- UI: Updated [vmanomaly UI](https://docs.victoriametrics.com/anomaly-detection/ui/) from [v1.5.1](https://docs.victoriametrics.com/anomaly-detection/ui/#v151) to [v1.6.0](https://docs.victoriametrics.com/anomaly-detection/ui/#v160), see respective [release notes](https://docs.victoriametrics.com/anomaly-detection/ui/#v160) for details. Notable changes include **full UI state modification** from [AI assistant](https://docs.victoriametrics.com/anomaly-detection/ui/#ai-assistance) and [showing business boundaries](https://docs.victoriametrics.com/anomaly-detection/ui/#visualization-panel) on a graph.
- IMPROVEMENT: Added an option to proxy reader TLS/credential configuration from the `config.reader` to UI, allowing users to leverage the same secure connection settings for both backend and UI queries to datasource without requiring `vmauth` in front of the datasource for UI access. See [authentication section](https://docs.victoriametrics.com/anomaly-detection/ui/#authentication) for details.
- IMPROVEMENT: Added configurable reconnect retry handling for [`VmWriter`](https://docs.victoriametrics.com/anomaly-detection/components/writer/#vm-writer) with `connection_retry_attempts` arg after transient connection-level write failures.
## v1.29.1
Released: 2026-03-25


@@ -421,7 +421,7 @@ services:
# ...
vmanomaly:
container_name: vmanomaly
image: victoriametrics/vmanomaly:v1.29.1
image: victoriametrics/vmanomaly:v1.29.2
# ...
restart: always
volumes:
@@ -639,7 +639,7 @@ options:
Here's an example of using the config splitter to divide configurations based on the `extra_filters` argument from the reader section:
```sh
docker pull victoriametrics/vmanomaly:v1.29.1 && docker image tag victoriametrics/vmanomaly:v1.29.1 vmanomaly
docker pull victoriametrics/vmanomaly:v1.29.2 && docker image tag victoriametrics/vmanomaly:v1.29.2 vmanomaly
```
```sh


@@ -45,7 +45,8 @@ There are 2 types of compatibility to consider when migrating in stateful mode:
| Group start | Group end | Compatibility | Notes |
|---------|--------- |------------|-------|
| [v1.29.1](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1291) | Latest* | Fully Compatible | Just a placeholder for new releases |
| [v1.29.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1292) | Latest* | Fully Compatible | Just a placeholder for new releases |
| [v1.29.1](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1291) | [v1.29.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1292) | Fully Compatible | - |
| [v1.28.7](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1287) | [v1.29.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1290) | Partially compatible* | Dumped models of class [prophet](https://docs.victoriametrics.com/anomaly-detection/components/models/#prophet) and [seasonal quantile](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-seasonal-quantile) have problems with loading to [v1.29.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1290) due to dropped `pytz` library. **Upgrading directly from v1.28.7 to [v1.29.1](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1291) with a fix is suggested** |
| [v1.26.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1262) | [v1.28.7](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1287) | Fully Compatible | [v1.28.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1280) introduced [rolling](https://docs.victoriametrics.com/anomaly-detection/components/models/#rolling-models) model class drop in favor of [online](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-models) models (`rolling_quantile` and `std` models), however, it does not impact compatibility, as artifacts were not produced by default for rolling models. Also, offline `mad` and `zscore` models are redirecting to their respective online counterparts since [v1.28.4](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1284). |
| [v1.25.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1253) | [v1.26.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1270) | Partially Compatible* | [v1.25.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1253) introduced `forecast_at` argument for base [univariate](https://docs.victoriametrics.com/anomaly-detection/components/models/#univariate-models) and `Prophet` [models](https://docs.victoriametrics.com/anomaly-detection/components/models/#prophet), however, itself remains backward-reversible from newer states like [v1.26.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1262), [v1.27.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1270). (All models except `isolation_forest_multivariate` class will be dropped) |


@@ -122,7 +122,7 @@ Below are the steps to get `vmanomaly` up and running inside a Docker container:
1. Pull Docker image:
```sh
docker pull victoriametrics/vmanomaly:v1.29.1
docker pull victoriametrics/vmanomaly:v1.29.2
```
2. Create the license file with your license key.
@@ -142,7 +142,7 @@ docker run -it \
-v ./license:/license \
-v ./config.yaml:/config.yaml \
-p 8490:8490 \
victoriametrics/vmanomaly:v1.29.1 \
victoriametrics/vmanomaly:v1.29.2 \
/config.yaml \
--licenseFile=/license \
--loggerLevel=INFO \
@@ -159,7 +159,7 @@ docker run -it \
-e VMANOMALY_DATA_DUMPS_DIR=/tmp/vmanomaly/data \
-e VMANOMALY_MODEL_DUMPS_DIR=/tmp/vmanomaly/models \
-p 8490:8490 \
victoriametrics/vmanomaly:v1.29.1 \
victoriametrics/vmanomaly:v1.29.2 \
/config.yaml \
--licenseFile=/license \
--loggerLevel=INFO \
@@ -172,7 +172,7 @@ services:
# ...
vmanomaly:
container_name: vmanomaly
image: victoriametrics/vmanomaly:v1.29.1
image: victoriametrics/vmanomaly:v1.29.2
# ...
restart: always
volumes:

View File

@@ -119,6 +119,10 @@ To start exploring the UI, you can use embedded demo with preconfigured queries
## Authentication
> [!TIP]
{{% available_from "v1.29.2" anomaly %}} You can proxy connection settings to UI, defined in [reader](https://docs.victoriametrics.com/anomaly-detection/components/reader/#config-parameters) section of the configuration file, so that UI can connect to the data sources with the same settings (e.g. credentials, TLS, etc.). Set `server.use_reader_connection_settings` [arg](https://docs.victoriametrics.com/anomaly-detection/components/server/#parameters) to `true` in the configuration file to enable this feature, and UI will automatically use the connection settings from the reader configuration when connecting to the data sources. If `pass auth headers` option is enabled in the UI settings, it will take precedence over the reader-wise settings. See [example configuration](https://docs.victoriametrics.com/anomaly-detection/components/server/#example-configuration) for details.
{{% available_from "v1.27.0" anomaly %}} The vmanomaly UI supports proxying authentication headers from [v1.1.0](#v110) and onwards.
Consider using [vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/) in front of both vmanomaly (UI and API) and data sources (VictoriaMetrics / VictoriaLogs) to enforce end-to-end setup for accessing the data from the UI.
@@ -198,11 +202,11 @@ The best applications of this mode are:
Copilot appears as a **chat popup** anchored to the bottom-right corner of the page. The panel is resizable by dragging its left edge, and can be opened or closed by clicking the respective icon.
> [!TIP] Copilot is context-aware
> It reads your active model, scheduler, and anomaly settings from the UI automatically, so you don't need to paste your config manually.
> It reads your active model, scheduler, and anomaly settings from the UI automatically, so you don't need to paste your config manually. {{% available_from "v1.29.2" anomaly %}} It can also read and modify the full UI state, such as the data source URL, query parameters, scheduler settings, and more, to provide better suggestions.
### Configuration
AI Assistant is disabled by default; enable it with `VMANOMALY_COPILOT_ENABLED=true`, then configure an LLM provider API key and, optionally, a model. Once enabled and configured, Copilot will appear as a chat popup in the bottom-right corner of the UI.
AI Assistant is disabled by default; enable it with `VMANOMALY_COPILOT_ENABLED=true`, then configure credentials for the selected LLM provider and **explicitly set** `VMANOMALY_COPILOT_MODEL` to the exact model Copilot should use. Do not rely on an implicit default model being selected. Once enabled and configured, Copilot will appear as a chat popup in the bottom-right corner of the UI.
Supported providers and model formats:
@@ -262,12 +266,18 @@ export AWS_DEFAULT_REGION=us-east-1
# export AWS_SESSION_TOKEN=your_session_token # if using a session token
```
Optionally override the default model:
Explicitly set the model Copilot should use:
```bash
export VMANOMALY_COPILOT_MODEL=openai:gpt-5-mini
```
For example, if a smaller OpenAI model is desired, set:
```bash
export VMANOMALY_COPILOT_MODEL=openai:gpt-5-nano
```
### MCP tools server
Connects Copilot to [mcp-vmanomaly](https://github.com/VictoriaMetrics/mcp-vmanomaly) for full tool access (built-in docs, models configuration and validation, alerts recommendation, service healthchecks, etc.). Full [tools list](https://github.com/VictoriaMetrics/mcp-vmanomaly?tab=readme-ov-file#toolset):
@@ -305,7 +315,7 @@ docker run -it --rm \
-e VMANOMALY_MCP_SERVER_URL=http://mcp-vmanomaly:8081/mcp \
-p 8080:8080 \
-p 8490:8490 \
victoriametrics/vmanomaly:v1.29.0 \
victoriametrics/vmanomaly:v1.29.2 \
vmanomaly_config.yaml
```
@@ -351,6 +361,8 @@ The Visualization Panel has 2 modes of displaying data - either raw queried data
Also, timeseries (such as `y`, `y_hat`, etc.) can be toggled on/off by clicking on the legend items.
{{% available_from "v1.29.2" anomaly %}} Seeing model [business-boundaries](https://docs.victoriametrics.com/anomaly-detection/faq/#incorporating-domain-knowledge), such as [detection direction](https://docs.victoriametrics.com/anomaly-detection/components/models/#detection-direction) and minimal deviation from expected ([absolute](https://docs.victoriametrics.com/anomaly-detection/components/models/#minimal-deviation-from-expected) and [relative](https://docs.victoriametrics.com/anomaly-detection/components/models/#minimal-relative-deviation-from-expected) combined) can be turned on with "business boundaries" toggle. Showing/hiding individual bands can be done by clicking on the respective legend items, while showing/hiding all business boundaries at once can be done with "business boundaries" toggle.
[Back to UI navigation](#ui-navigation)
### Model Panel
@@ -628,6 +640,15 @@ If the **results** look good and the **model configuration should be deployed in
## Changelog
### v1.6.0
Released: 2026-04-02
vmanomaly version: [v1.29.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1292)
- FEATURE: The AI assistant can now see and modify the main UI components' state (queries, time range, model configuration, etc.) to provide more contextual and accurate suggestions and actions.
- FEATURE: Added support for the UI to use reader-level settings (credentials, TLS, etc.) when connecting to data sources, which would otherwise require `vmauth` in front of both the UI and the data sources just so the UI could connect to them. See the [Authentication](#authentication) section for details.
### v1.5.1
Released: 2026-03-25


@@ -1265,7 +1265,7 @@ monitoring:
Let's pull the docker image for `vmanomaly`:
```sh
docker pull victoriametrics/vmanomaly:v1.29.1
docker pull victoriametrics/vmanomaly:v1.29.2
```
Now we can run the docker container putting as volumes both config and model file:
@@ -1279,7 +1279,7 @@ docker run -it \
-v $(PWD)/license:/license \
-v $(PWD)/custom_model.py:/vmanomaly/model/custom.py \
-v $(PWD)/custom.yaml:/config.yaml \
victoriametrics/vmanomaly:v1.29.1 /config.yaml \
victoriametrics/vmanomaly:v1.29.2 /config.yaml \
--licenseFile=/license
--watch
```


@@ -27,9 +27,11 @@ Server component of VictoriaMetrics Anomaly Detection (`vmanomaly`) is responsib
- `ui_default_state`: Optional [UI](https://docs.victoriametrics.com/anomaly-detection/ui/) state fragment to open on `/vmui/`. Must be URL-encoded and start with `#/?` (e.g. `#/?param=value`). See [Default State](https://docs.victoriametrics.com/anomaly-detection/ui/#default-state) section for details on constructing the value from UI state.
- `max_concurrent_tasks`: Maximum number of concurrent anomaly detection tasks processed by the backend. Positive integer. Tasks above the limit will be cancelled. Defaults to `2`.
- `uvicorn_config`: Uvicorn configuration dictionary. Default is `{"log_level": "warning"}`. See [Uvicorn server settings](https://www.uvicorn.org/settings/) for details.
- {{% available_from "v1.29.2" anomaly %}} `use_reader_connection_settings`: If set to `true`, UI will use connection settings (e.g. credentials, TLS, etc.) from the [reader](https://docs.victoriametrics.com/anomaly-detection/components/reader/#config-parameters) configuration when connecting to data sources. This allows UI to connect to data sources with the same settings without requiring having `vmauth` in front of both UI and data sources.
### Example Configuration
> [!TIP]
> If [hot-reloading](https://docs.victoriametrics.com/anomaly-detection/components/scheduler/#hot-reloading) is enabled in vmanomaly service, the server will automatically pick up changes made to the configuration file without requiring a restart.
```yaml
@@ -45,7 +47,16 @@ server:
uvicorn_config: # optional Uvicorn server configuration
log_level: 'warning'
use_reader_connection_settings: true # if set to true, the UI will use connection settings from the reader configuration below when connecting to data sources, allowing it to connect with the same credentials, TLS settings, etc. without requiring vmauth in front of both the UI and data sources.
# other vmanomaly configuration sections, like reader, scheduler, models, etc.
reader:
datasource_url: %{DS_URL}
user: %{DS_USER}
password: %{DS_PASSWORD}
# or
# bearer_token: %{DS_BEARER_TOKEN}
verify_tls: false
```
### Accessing the server


@@ -255,6 +255,20 @@ Token is passed in the standard format with header: `Authorization: bearer {toke
<td>
<span>
Path to a file, which contains token, that is passed in the standard format with header: `Authorization: bearer {token}`{{% available_from "v1.15.9" anomaly %}}
</span> </td>
</tr>
<tr>
<td>
<span style="white-space: nowrap;">`connection_retry_attempts`</span>
</td>
<td>
`1`
</td>
<td>
<span>
Number of attempts to retry the connection in case of failure {{% available_from "v1.29.2" anomaly %}}.
</span> </td>
</tr>
</tbody>
@@ -276,6 +290,7 @@ writer:
health_path: "health"
user: "foo"
password: "bar"
connection_retry_attempts: 2 # if not specified, it will be 1 by default
```
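The `connection_retry_attempts` writer option documented above caps how many times a failed write is retried after a connection-level error. A generic sketch of that retry semantics, assuming the value counts total attempts (function names are illustrative, not vmanomaly's implementation):

```python
def write_with_retries(send, attempts=1):
    """Call `send` up to `attempts` times, retrying on connection errors."""
    last_err = None
    for _ in range(attempts):
        try:
            return send()
        except ConnectionError as err:
            last_err = err
    raise last_err  # all attempts exhausted: surface the last error

# A writer that fails once, then succeeds — succeeds with attempts=2,
# would raise with the default attempts=1.
calls = {"n": 0}
def flaky_send():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient failure")
    return "ok"

result = write_with_retries(flaky_send, attempts=2)
```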
### Multitenancy support


@@ -395,7 +395,7 @@ services:
restart: always
vmanomaly:
container_name: vmanomaly
image: victoriametrics/vmanomaly:v1.29.1
image: victoriametrics/vmanomaly:v1.29.2
depends_on:
- "victoriametrics"
ports:


@@ -238,23 +238,23 @@ vmagent will write data into VictoriaMetrics single-node and cluster (with tenan
# compose.yaml
services:
vmsingle:
image: victoriametrics/victoria-metrics:v1.138.0
image: victoriametrics/victoria-metrics:v1.140.0
vmstorage:
image: victoriametrics/vmstorage:v1.138.0-cluster
image: victoriametrics/vmstorage:v1.140.0-cluster
vminsert:
image: victoriametrics/vminsert:v1.138.0-cluster
image: victoriametrics/vminsert:v1.140.0-cluster
command:
- -storageNode=vmstorage:8400
vmselect:
image: victoriametrics/vmselect:v1.138.0-cluster
image: victoriametrics/vmselect:v1.140.0-cluster
command:
- -storageNode=vmstorage:8401
vmagent:
image: victoriametrics/vmagent:v1.138.0
image: victoriametrics/vmagent:v1.140.0
volumes:
- ./scrape.yaml:/etc/vmagent/config.yaml
command:
@@ -306,7 +306,7 @@ Now add the vmauth service to `compose.yaml`:
# compose.yaml
services:
vmauth:
image: docker.io/victoriametrics/vmauth:v1.138.0
image: docker.io/victoriametrics/vmauth:v1.140.0
ports:
- 8427:8427
volumes:


@@ -7,6 +7,9 @@ sitemap:
disable: true
---
> [!NOTE] Tip
> Store every configuration file you use during this guide in version control. You may need them for reference or to change the configuration of your installation.
This guide walks you through deploying a VictoriaMetrics cluster version on Kubernetes.
By the end of this guide, you will know:
@@ -382,7 +385,7 @@ Consider reading these resources to complete your setup:
- VictoriaMetrics
- [Learn more about the cluster version](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/)
- [Migrate existing metric data into VictoriaMetrics with vmctl](https://docs.victoriametrics.com/victoriametrics/vmctl/)
- [Setup alerts](https://docs.victoriametrics.com/victoriametrics/vmalert/)
- [Setup alerts](https://docs.victoriametrics.com/guides/vmalert-datasource-managed-alerts-grafana/)
- Grafana
- [Enable persistent storage](https://grafana.com/docs/grafana/latest/setup-grafana/installation/helm/#enable-persistent-storage-recommended)
- [Configure private TLS authority](https://grafana.com/docs/grafana/latest/setup-grafana/installation/helm/#configure-a-private-ca-certificate-authority)


@@ -7,6 +7,9 @@ sitemap:
disable: true
---
> [!NOTE] Tip
> Store every configuration file you use during this guide in version control. You may need them for reference or to change the configuration of your installation.
This guide walks you through deploying a [single-node version of VictoriaMetrics](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/) on Kubernetes using Helm.
At the end of this guide, you will know:
@@ -56,10 +59,19 @@ vm/victoria-metrics-anomaly 1.12.9 v1.28.2 Victoria
## 2. Install [VictoriaMetrics single](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/) from Helm Chart
Download the Helm value files with the example configuration for VictoriaMetrics single-node:
```sh
wget https://docs.victoriametrics.com/guides/examples/guide-vmsingle-values.yaml
```
> [!NOTE] Tip
> Keep this file in version control. You may need it to update the configuration of your VictoriaMetrics service.
Run this command in your terminal to install [VictoriaMetrics single node](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/) to the default [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) in your cluster:
```shell
helm install vmsingle vm/victoria-metrics-single -f https://docs.victoriametrics.com/guides/examples/guide-vmsingle-values.yaml
helm install vmsingle vm/victoria-metrics-single -f guide-vmsingle-values.yaml
```
Below are the key sections in the chart values file [`guide-vmsingle-values.yaml`](https://docs.victoriametrics.com/guides/examples/guide-vmsingle-values.yaml):
@@ -315,7 +327,7 @@ Consider reading these resources to complete your setup:
- VictoriaMetrics
- [Learn more about the single-node version](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/)
- [Migrate existing metric data into VictoriaMetrics with vmctl](https://docs.victoriametrics.com/victoriametrics/vmctl/)
- [Setup alerts](https://docs.victoriametrics.com/victoriametrics/vmalert/)
- [Setup alerts](https://docs.victoriametrics.com/guides/vmalert-datasource-managed-alerts-grafana/)
- Grafana
- [Enable persistent storage](https://grafana.com/docs/grafana/latest/setup-grafana/installation/helm/#enable-persistent-storage-recommended)
- [Configure private TLS authority](https://grafana.com/docs/grafana/latest/setup-grafana/installation/helm/#configure-a-private-ca-certificate-authority)


@@ -125,7 +125,7 @@ As a reference, see resource consumption of VictoriaMetrics cluster on our [play
The Retention Period is the number of days or months for storing data. It affects the disk space usage.
The formula for calculating required disk space is the following:
```
Bytes Per Sample * Ingestion rate * Replication Factor * (Retention Period in Seconds +1 Retention Cycle(day or month)) * 1.2 (recommended 20% of free space for merges )
Bytes Per Sample * Ingestion rate * Replication Factor * (Retention Period in Seconds +1 Retention Cycle(day or month)) * 1.25 (recommended 20% of free space for merges )
```
The **Retention Cycle** is one **day** or one **month**. If the retention period is longer than 30 days, the cycle is a month; otherwise, it is a day.
@@ -143,7 +143,7 @@ Keep at least **20%** of free space for VictoriaMetrics to remain efficient with
A Kubernetes environment that produces 5k time series per second with 1-year of the Retention Period and ReplicationFactor=2:
`(1 byte-per-sample * 5000 time series * 2 replication factor * 34128000 seconds) * 1.2 ) / 2^30 = 381 GB`
`(1 byte-per-sample * 5000 time series * 2 replication factor * 34128000 seconds) * 1.25 ) / 2^30 = 397 GB`
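The sizing arithmetic above is easy to script. Here is a minimal Python sketch of the formula (the 34128000-second figure already includes the extra retention cycle; `headroom=1.25` is the recommended ~20% of free space for merges):

```python
def required_disk_gb(bytes_per_sample: float, samples_per_sec: float,
                     replication_factor: int, retention_seconds: int,
                     headroom: float = 1.25) -> float:
    """Estimate raw-sample disk usage in GiB, with headroom for background merges."""
    total_bytes = (bytes_per_sample * samples_per_sec * replication_factor
                   * retention_seconds * headroom)
    return total_bytes / 2**30

# The worked example: 5k samples/s, RF=2, 1-year retention plus one monthly cycle
print(round(required_disk_gb(1, 5000, 2, 34_128_000)))  # 397
```

Note that this estimate covers raw samples only; the index needs additional space on top of it.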
VictoriaMetrics requires additional disk space for the index. The lower the churn rate, the lower the disk space usage for the index.
Usually, the index takes about **20%** of the disk space used for storing data. High-cardinality setups may use **>50%** of the storage size for the index. If your indexdb looks unexpectedly large, see [FAQ: Why indexdb size is so large?](https://docs.victoriametrics.com/victoriametrics/faq/#why-indexdb-size-is-so-large) for typical ratios and troubleshooting tips.


@@ -155,15 +155,15 @@ These services will store and query the metrics scraped by vmagent.
# compose.yaml
services:
vmstorage:
image: victoriametrics/vmstorage:v1.138.0-cluster
image: victoriametrics/vmstorage:v1.140.0-cluster
vminsert:
image: victoriametrics/vminsert:v1.138.0-cluster
image: victoriametrics/vminsert:v1.140.0-cluster
command:
- -storageNode=vmstorage:8400
vmselect:
image: victoriametrics/vmselect:v1.138.0-cluster
image: victoriametrics/vmselect:v1.140.0-cluster
command:
- -storageNode=vmstorage:8401
ports:
@@ -196,7 +196,7 @@ Add the vmauth service to `compose.yaml`:
# compose.yaml
services:
vmauth:
image: victoriametrics/vmauth:v1.138.0-enterprise
image: victoriametrics/vmauth:v1.140.0-enterprise
ports:
- 8427:8427
volumes:
@@ -251,7 +251,7 @@ Add the vmagent service to `compose.yaml` with OAuth2 configuration:
# compose.yaml
services:
vmagent:
image: victoriametrics/vmagent:v1.138.0
image: victoriametrics/vmagent:v1.140.0
volumes:
- ./scrape.yaml:/etc/vmagent/config.yaml
command:


@@ -0,0 +1,615 @@
---
build:
list: never
publishResources: false
render: never
sitemap:
disable: true
---
Grafana offers a rich alerting UI, including rule grouping, silences, and notification history. While Grafana-managed alerts are easy to use, [they can hit performance issues without additional configuration since they depend on a relational database by default](https://grafana.com/blog/how-we-improved-grafanas-alert-state-history-to-provide-better-insights-into-your-alerting-data/).
By moving rule evaluation to [vmalert](https://docs.victoriametrics.com/victoriametrics/vmalert/), you can move past these limitations while retaining Grafana's unified alerting UI. This guide shows the ideal topology for scalable alerting using vmalert, Alertmanager, and Grafana datasource-managed alerts.
## Grafana Alert Modes
Grafana supports two alert modes, which can run side by side:
- Grafana-managed: alerts are created and evaluated entirely within Grafana itself. The alert state is stored in a SQL database by default, but [Grafana can be configured to store this data in a Prometheus-compatible database like VictoriaMetrics as well](https://grafana.com/docs/grafana/latest/alerting/set-up/configure-alert-state-history/#configure-prometheus-for-alert-state-grafana_alerts-metric).
- Datasource-managed: alerts have their rules defined, stored, and evaluated in an external system like vmalert and Alertmanager, with Grafana just providing the UI. State is stored in VictoriaMetrics.
The following table compares the two modes:
| Aspect | Grafana-Managed | Data Source-Managed |
| ------------------- | -------------------------------------------------- | ----------------------------- |
| Where rules live | Grafana's SQL database | vmalert's YAML config |
| Evaluation | Grafana's scheduler | vmalert |
| Horizontal Scaling | Complex (requires HA SQL database) | Simple (add more pods) |
| State storage | SQL backend or a Prometheus datasource | VictoriaMetrics |
| UI Management | Full create/edit in Grafana | View-only |
| Dependencies | SQL + Grafana | VictoriaMetrics |
| Version control | Complex (rules must be exported/imported) | Easy (rules are in YAML file) |
## Datasource-managed Alert Topology
The proposed alert setup relies on the following services:
- VictoriaMetrics: provides the time-series database and persists alert status.
- vmalert: evaluates alerting rules from its config file against VictoriaMetrics data and forwards firing alerts to Alertmanager.
- Alertmanager: groups and routes alerts to the configured recipients.
- Grafana: serves as the unified UI, connecting to VictoriaMetrics for rules and metrics, and to Alertmanager for notifications/silences.
![Topology Diagram](topology.webp)
## vmalert Demo with Docker {#docker}
In this section, we'll describe how to try datasource-managed alerts with Docker Compose. Follow the steps below to see how datasource-managed alerts look in the Grafana UI.
First, create `alerts.yml`. The following configuration file creates an always-firing alert, which you can use to test datasource-managed alerts in Grafana.
```yaml
# alerts.yml
groups:
- name: demo
rules:
# Always-firing demo alert so you see something immediately
- alert: AlwaysFiring
expr: vector(1)
for: 10s
labels:
severity: warning
annotations:
summary: "Demo alert that always fires"
description: "This is a demo alert from vmalert using vector(1)."
# Simple recording rule you can graph in Grafana
- record: demo:vector_one
expr: vector(1)
```
Next, create a basic Alertmanager config called `alertmanager.yml`. This example does not forward alerts anywhere, but serves as a source for Grafana:
```yaml
# alertmanager.yml
global:
resolve_timeout: 5m
route:
receiver: "log"
receivers:
- name: "log"
```
Finally, create `grafana-datasources.yml` to configure Grafana to use VictoriaMetrics and Alertmanager as datasources for alerts and notifications:
```yaml
# grafana-datasources.yml
apiVersion: 1
datasources:
- name: VictoriaMetrics
type: prometheus
access: proxy
url: http://victoriametrics:8428
isDefault: true
- name: Alertmanager
type: alertmanager
access: proxy
url: http://alertmanager:9093
jsonData:
implementation: prometheus
```
The final piece is the Docker Compose file. This ties all the services together and sets up the required command-line arguments:
```yaml
# compose.yml
services:
victoriametrics:
image: victoriametrics/victoria-metrics:v1.140.0
command:
- "--storageDataPath=/victoria-metrics-data"
- "--selfScrapeInterval=10s"
# Proxy vmalert APIs so Grafana can see rules via VictoriaMetrics
- "--vmalert.proxyURL=http://vmalert:8880"
ports:
- "8428:8428"
volumes:
- vm-data:/victoria-metrics-data
alertmanager:
image: prom/alertmanager:v0.31.1
command:
- "--config.file=/etc/alertmanager/alertmanager.yml"
ports:
- "9093:9093"
volumes:
- ./alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
vmalert:
image: victoriametrics/vmalert:v1.140.0
depends_on:
- victoriametrics
- alertmanager
command:
# read metrics from VictoriaMetrics
- "--datasource.url=http://victoriametrics:8428"
# store and retrieve alert state from VictoriaMetrics
- "--remoteRead.url=http://victoriametrics:8428"
- "--remoteWrite.url=http://victoriametrics:8428"
# configure Alertmanager as the notifier
- "--notifier.url=http://alertmanager:9093"
- "--rule=/etc/vmalert/alerts.yml"
- "--evaluationInterval=15s"
# external settings link vmalert and Alertmanager to Grafana
- "--external.url=http://localhost:3000"
- "--external.alert.source=explore?orgId=1&left={\"datasource\":\"VictoriaMetrics\",\"queries\":[{\"refId\":\"A\",\"expr\":\"{{.Expr|queryEscape}}\"}]}"
ports:
- "8880:8880"
volumes:
- ./alerts.yml:/etc/vmalert/alerts.yml:ro
grafana:
image: grafana/grafana:12.4
depends_on:
- victoriametrics
- alertmanager
environment:
GF_SECURITY_ADMIN_USER: admin
GF_SECURITY_ADMIN_PASSWORD: admin
GF_PATHS_PROVISIONING: /etc/grafana/provisioning
ports:
- "3000:3000"
volumes:
- ./grafana-datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml:ro
- grafana-data:/var/lib/grafana
volumes:
vm-data:
grafana-data:
```
Let's break down the main command line arguments that connect every component:
- VictoriaMetrics
- `-vmalert.proxyURL`: forwards Grafana requests for `/api/v1/rules` and `/api/v1/alerts` to vmalert, enabling rule visibility in Grafana UI
- vmalert
- `-datasource.url`: configures VictoriaMetrics as the query source for rule evaluation
- `-remoteWrite.url`: defines VictoriaMetrics as the backend used to persist rule state across restarts
- `-remoteRead.url`: defines VictoriaMetrics as the backend used to read historical state for pending alerts
- `-notifier.url`: directs firing alerts to Alertmanager
- `-external.url`: defines the base URL used in alert links, so notifications show a public URL (e.g. Alertmanager, or an external alerting UI like Karma or Grafana)
- `-external.alert.source`: defines a template for the clickable alert source link, allowing the Alertmanager UI to link directly back to the originating query in Grafana
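To make the `-external.alert.source` template concrete, here is a small Python sketch (illustrative only) that renders the Explore link for the demo `vector(1)` expression; `urllib.parse.quote_plus` behaves like Go's `queryEscape` used in the template:

```python
from urllib.parse import quote_plus

external_url = "http://localhost:3000"  # value of -external.url
expr = "vector(1)"                      # the demo alert's expression

# Mirror of the -external.alert.source template, with {{.Expr|queryEscape}} expanded
source = ('explore?orgId=1&left={"datasource":"VictoriaMetrics",'
          '"queries":[{"refId":"A","expr":"%s"}]}' % quote_plus(expr))
print(f"{external_url}/{source}")
```

Clicking **Source** in Alertmanager opens exactly this kind of URL, landing on a Grafana Explore view pre-filled with the alert's query.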
Now, start the demo with:
```sh
docker compose up -d
```
Open your browser at `localhost:3000` and log in to Grafana with username `admin` and password `admin`.
If you open the sidebar and select **Alerting** > **Alert rules**, you should be able to see one alert pending or firing.
![Screenshot of Grafana alert pane](grafana-alert-firing.webp)
<figcaption style="text-align: center; font-style: italic;">Datasource-managed alert firing in Grafana</figcaption>
Open the sidebar again and go to **Alerting** > **Active notifications** to see the active alert reported by Alertmanager.
![Screenshot of Grafana Active notifications Page](grafana-active-notifications.webp)
You can also see the alerts in VMUI by opening your browser at `http://localhost:8428/vmui/?#/rules`. This works only because `-vmalert.proxyURL` is configured in VictoriaMetrics.
![Screenshot of VMUI](vmui-alerts.webp)
<figcaption style="text-align: center; font-style: italic;">Alerts can be visualized in VMUI directly</figcaption>
If you open your browser at `http://localhost:9093/#/alerts`, you will see the Alertmanager UI with the firing alert.
![Screenshot of Alertmanager](alertmanager-alerts.webp)
<figcaption style="text-align: center; font-style: italic;">Alertmanager UI showing the firing alert</figcaption>
Clicking **Source** should take you back to Grafana and display the query that generated the alert.
## vmalert and VictoriaMetrics Single on Kubernetes {#vmsingle}
This section explains how to configure datasource-managed alerts on the VictoriaMetrics single-node version on Kubernetes.
### Prerequisites
- A Kubernetes cluster
- VictoriaMetrics single-node
- Grafana
- Helm values or config files used for installation
You can follow this guide to install all required components: [Kubernetes monitoring via VictoriaMetrics single](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-single/).
### 1. Ensure VictoriaMetrics and Grafana are installed
Ensure you have added and updated the VictoriaMetrics Helm repository:
```sh
helm repo add vm https://victoriametrics.github.io/helm-charts/
helm repo update
```
Confirm that the VictoriaMetrics single-node version is installed (assuming the release name `vmsingle` from the [installation guide](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-single/)):
```sh
kubectl get pods -l app.kubernetes.io/instance=vmsingle
```
You should get a single running pod:
```sh
NAME READY STATUS RESTARTS AGE
vmsingle-victoria-metrics-single-server-0 1/1 Running 0 48m
```
Do the same for Grafana:
```sh
kubectl get pod -l app.kubernetes.io/name=grafana
```
You should get the name of the Grafana pod running in your cluster:
```sh
NAME READY STATUS RESTARTS AGE
my-grafana-65d6d4ccbc-nxkxq 1/1 Running 0 58m
```
### 2. Install vmalert and Alertmanager
Create a Helm values file for vmalert and Alertmanager called `vm-alerting-values.yml`.
The example below comes with two demo alerts. Add your own vmalert [alerting rules](https://docs.victoriametrics.com/victoriametrics/vmalert/#rules) in the `config: alerts:` section below.
```sh
cat <<EOF > vm-alerting-values.yml
# Enable and configure Alertmanager
alertmanager:
enabled: true
config:
global:
resolve_timeout: 5m
route:
group_by: ["alertname"]
group_wait: 30s
group_interval: 5m
repeat_interval: 12h
receiver: "log"
receivers:
- name: "log"
# place your default route here for notifications
# Configure vmalert ("server" section in this chart)
server:
# vmalert evaluation datasource: point at vmsingle's Prometheus-compatible API
datasource:
url: http://vmsingle-victoria-metrics-single-server.default.svc.cluster.local.:8428
# Where vmalert stores and reads alert state (remote write/read)
remote:
write:
url: http://vmsingle-victoria-metrics-single-server.default.svc.cluster.local.:8428
read:
url: http://vmsingle-victoria-metrics-single-server.default.svc.cluster.local.:8428
# Configure Alertmanager as notifier
notifier:
alertmanager:
# Adjust namespace/name if you install into a non-default namespace or change the release name
url: http://vmalert-victoria-metrics-alert-alertmanager:9093
# Inline demo rules. Add your alerting groups and rules here
config:
alerts:
groups:
- name: vm-health
rules:
- alert: TooManyRestarts
expr: changes(process_start_time_seconds{job=~"victoriametrics|vmagent|vmalert"}[15m]) > 2
labels:
severity: critical
annotations:
summary: "{{ $labels.job }} too many restarts (instance {{ $labels.instance }})"
description: "Job {{ $labels.job }} has restarted more than twice in the last 15 minutes. It might be crashlooping."
- alert: ServiceDown
expr: up{job=~"victoriametrics|vmagent|vmalert"} == 0
for: 2m
labels:
severity: critical
annotations:
summary: "Service {{ $labels.job }} is down on {{ $labels.instance }}"
description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 2 minutes."
EOF
```
Install `vmalert` and Alertmanager with:
```sh
helm install vmalert vm/victoria-metrics-alert -f vm-alerting-values.yml
```
### 3. Configure VictoriaMetrics single to proxy to vmalert
For datasource-managed alerts, Grafana talks to VictoriaMetrics, and VictoriaMetrics proxies alerting-related API calls to vmalert via the `-vmalert.proxyURL` flag.
First, check the service name for `vmalert`:
```sh
kubectl get svc -l app.kubernetes.io/instance=vmalert,app=server
```
Get the name of the `vmalert` service:
```sh
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vmalert-victoria-metrics-alert-server ClusterIP None <none> 8880/TCP 58m
```
The internal DNS name for this service, in the default namespace, will be:
```text
vmalert-victoria-metrics-alert-server.default.svc.cluster.local:8880
```
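In-cluster DNS names like this follow the standard `<service>.<namespace>.svc.cluster.local:<port>` pattern; a tiny helper (hypothetical, for illustration) makes the construction explicit:

```python
def cluster_dns(service: str, port: int, namespace: str = "default") -> str:
    """Build the in-cluster DNS name for a Kubernetes Service."""
    return f"{service}.{namespace}.svc.cluster.local:{port}"

print(cluster_dns("vmalert-victoria-metrics-alert-server", 8880))
# vmalert-victoria-metrics-alert-server.default.svc.cluster.local:8880
```

If you installed vmalert into a different namespace or under a different release name, adjust the service name and namespace accordingly.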
Next, create a Helm values file with the internal Kubernetes URL for vmalert.
```sh
cat <<EOF > vm-vmalert-proxy-values.yml
# vm-vmalert-proxy-values.yml
server:
extraArgs:
vmalert.proxyURL: http://vmalert-victoria-metrics-alert-server.default.svc.cluster.local:8880
EOF
```
Update the configuration of VictoriaMetrics single by applying both *your original Helm values* and this new overlay. In the example below, the original values file is called `vmsingle-values-file.yml` (this is the file you used when you first installed the cluster):
```sh
helm upgrade vmsingle vm/victoria-metrics-single \
-f vmsingle-values-file.yml \
-f vm-vmalert-proxy-values.yml
```
After this upgrade, vmsingle will start proxying `/api/v1/rules`, `/api/v1/alerts`, and other `vmalert` [endpoints](https://docs.victoriametrics.com/victoriametrics/vmalert/#web) to the vmalert service, enabling Grafana's alerting UI and API to work through the VictoriaMetrics datasource.
To finish the setup, jump to the [Configure Grafana](https://docs.victoriametrics.com/guides/vmalert-datasource-managed-alerts-grafana/#grafana) section.
## vmalert and VictoriaMetrics Cluster on Kubernetes {#vmcluster}
This section explains how to configure datasource-managed alerts on the VictoriaMetrics cluster version on Kubernetes.
### Prerequisites
- A Kubernetes cluster
- VictoriaMetrics cluster
- Grafana
- Helm values or config files used for the installation of the cluster
You can follow this guide to install the cluster and Grafana first: [Kubernetes monitoring with VictoriaMetrics cluster](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-cluster/).
### 1. Ensure VictoriaMetrics and Grafana are installed
Ensure you have added and updated the VictoriaMetrics Helm repository:
```sh
helm repo add vm https://victoriametrics.github.io/helm-charts/
helm repo update
```
Confirm that the VictoriaMetrics cluster is installed (assuming the release name `vmcluster` from the [installation guide](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-cluster/)):
```sh
kubectl get pods -l app.kubernetes.io/instance=vmcluster
```
You should see pods for `vmselect`, `vminsert`, and `vmstorage`, for example:
```text
NAME READY STATUS RESTARTS AGE
vmcluster-victoria-metrics-cluster-vminsert-b4d494b4c-cx2m5 1/1 Running 0 3m8s
vmcluster-victoria-metrics-cluster-vminsert-b4d494b4c-xdv76 1/1 Running 0 3m8s
vmcluster-victoria-metrics-cluster-vmselect-67979c98fc-7hfdl 1/1 Running 0 3m8s
vmcluster-victoria-metrics-cluster-vmselect-67979c98fc-ftpzn 1/1 Running 0 3m8s
vmcluster-victoria-metrics-cluster-vmstorage-0 1/1 Running 0 3m8s
vmcluster-victoria-metrics-cluster-vmstorage-1 1/1 Running 0 2m48s
```
VictoriaMetrics exposes its write API via the `vminsert` service on port 8480 and its read (Prometheus-compatible) API via the `vmselect` service on port 8481 by default. For a default installation, these DNS names are:
- Write: `vmcluster-victoria-metrics-cluster-vminsert.default.svc.cluster.local.:8480`
- Read: `vmcluster-victoria-metrics-cluster-vmselect.default.svc.cluster.local.:8481`
Now, ensure Grafana is installed:
```sh
kubectl get pod -l app.kubernetes.io/name=grafana
```
You should get the name of the Grafana pod running in your cluster:
```sh
NAME READY STATUS RESTARTS AGE
my-grafana-65d6d4ccbc-nxkxq 1/1 Running 0 58m
```
### 2. Install vmalert and Alertmanager
Create a Helm values file for vmalert and Alertmanager called `vm-alerting-values.yml`.
The example below comes with two demo alerts. Add your own vmalert [alerting rules](https://docs.victoriametrics.com/victoriametrics/vmalert/#rules) in the `config: alerts:` section below.
```sh
cat <<EOF > vm-alerting-values.yml
# Enable and configure Alertmanager
alertmanager:
enabled: true
config:
global:
resolve_timeout: 5m
route:
group_by: ["alertname"]
group_wait: 30s
group_interval: 5m
repeat_interval: 12h
receiver: "log"
receivers:
- name: "log"
# place your notification route here
# Configure vmalert ("server" section in this chart)
server:
# vmalert evaluation datasource: point at vmselect's Prometheus-compatible API
datasource:
url: http://vmcluster-victoria-metrics-cluster-vmselect.default.svc.cluster.local.:8481/select/multitenant/prometheus/
# Where vmalert stores and reads alert state (remote write/read)
remote:
write:
# send ALERTS / recording rule series into vminsert
url: http://vmcluster-victoria-metrics-cluster-vminsert.default.svc.cluster.local.:8480/insert/0/prometheus/
read:
# read back alert state from vmselect
url: http://vmcluster-victoria-metrics-cluster-vmselect.default.svc.cluster.local.:8481/select/0/prometheus/
# Configure Alertmanager as notifier
notifier:
alertmanager:
# Adjust namespace/name if you install into a non-default namespace or change the release name
url: http://vmalert-victoria-metrics-alert-alertmanager:9093
# Inline demo rules. Add your alerting groups and rules here
config:
alerts:
groups:
- name: vm-health
rules:
- alert: TooManyRestarts
expr: changes(process_start_time_seconds{job=~"victoriametrics|vmagent|vmalert"}[15m]) > 2
labels:
severity: critical
annotations:
summary: "{{ $labels.job }} too many restarts (instance {{ $labels.instance }})"
description: "Job {{ $labels.job }} has restarted more than twice in the last 15 minutes. It might be crashlooping."
- alert: ServiceDown
expr: up{job=~"victoriametrics|vmagent|vmalert"} == 0
for: 2m
labels:
severity: critical
annotations:
summary: "Service {{ $labels.job }} is down on {{ $labels.instance }}"
description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 2 minutes."
EOF
```
The key differences from the [single-node setup](https://docs.victoriametrics.com/guides/vmalert-datasource-managed-alerts-grafana/#vmsingle) section are:
- `server.datasource.url` points to the `vmselect` multitenant read endpoint (`/select/multitenant/prometheus/`).
- `server.remote.write.url` points to the `vminsert` write endpoint for tenant 0 (`/insert/0/prometheus/`), and `server.remote.read.url` reads the alert state back from `vmselect` for the same tenant (`/select/0/prometheus/`).
Install `vmalert` and Alertmanager with:
```sh
helm install vmalert vm/victoria-metrics-alert -f vm-alerting-values.yml
```
### 3. Configure VictoriaMetrics Cluster to proxy to vmalert
For datasource-managed alerts, Grafana talks to VictoriaMetrics, and VictoriaMetrics proxies alerting-related API calls to vmalert via the `-vmalert.proxyURL` flag. In the cluster version, set this flag on `vmselect` and point it to the vmalert service so Grafana can reach the alert state.
First, check the service name for vmalert:
```sh
kubectl get svc -l app.kubernetes.io/instance=vmalert,app=server
```
You should see something like:
```sh
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vmalert-victoria-metrics-alert-server ClusterIP None <none> 8880/TCP 58m
```
The internal DNS name for this service, in the default namespace, will be:
```text
vmalert-victoria-metrics-alert-server.default.svc.cluster.local:8880
```
Create a Helm values overlay file for the cluster called `vmcluster-vmalert-proxy-values.yml`.
The `vmselect.extraArgs` map in the `victoria-metrics-cluster` chart allows you to pass arbitrary command-line flags to vmselect, including `-vmalert.proxyURL`.
```sh
cat <<EOF > vmcluster-vmalert-proxy-values.yml
vmselect:
extraArgs:
vmalert.proxyURL: http://vmalert-victoria-metrics-alert-server.default.svc.cluster.local:8880
EOF
```
Update the VictoriaMetrics cluster configuration by applying both your *original Helm values* and this new overlay. In the example below, the original values file is called `vmcluster-values-file.yml` (this is the file you used when you first installed the cluster):
```sh
helm upgrade vmcluster vm/victoria-metrics-cluster \
-f vmcluster-values-file.yml \
-f vmcluster-vmalert-proxy-values.yml
```
After this upgrade, vmselect will start proxying `/api/v1/rules`, `/api/v1/alerts`, and other `vmalert` endpoints to the vmalert service, enabling Grafana's alerting UI and API to work through the VictoriaMetrics datasource.
To finalize the setup, continue on to the next section, [Configure Grafana](#grafana).
## Configure Grafana {#grafana}
The last step on any Kubernetes-based installation is to add Alertmanager to Grafana so that Grafana can show notifications alongside the rules it discovers via the VictoriaMetrics datasource.
Get the service name for Alertmanager in your cluster:
```sh
kubectl get svc -l app.kubernetes.io/instance=vmalert,app=alertmanager
```
You should see something like:
```text
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vmalert-victoria-metrics-alert-alertmanager ClusterIP 10.43.114.243 <none> 9093/TCP 68m
```
Next, add Alertmanager to Grafana:
1. Log in to your Grafana dashboard.
2. Go to **Connections** > **Datasources**.
3. Press **+ Add new data source**.
4. Search and select “Alertmanager”.
5. Fill in the following parameters (adjusting namespace/service name if needed):
- Implementation: Prometheus
- URL: `http://vmalert-victoria-metrics-alert-alertmanager.default.svc.cluster.local:9093`
Ensure the URL matches the Alertmanager service name you obtained earlier.
6. Press **Save & Test**.
![Screenshot of Grafana](grafana-alertmanager.webp)
<figcaption style="text-align: center; font-style: italic;">Adding Alertmanager to Grafana</figcaption>
With vmselect's `-vmalert.proxyURL` flag set and Alertmanager configured as a datasource, Grafana should now be able to display vmalert rules and alert instances from the VictoriaMetrics Cluster datasource and show notifications managed by Alertmanager in its UI.
## See also
- [YouTube: Mathias Palmersheim - Who will be your Ruler](https://youtu.be/NfhVOEkznFY)
- [Kubernetes monitoring via VictoriaMetrics single](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-single/)
- [Kubernetes monitoring with VictoriaMetrics cluster](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-cluster/)
- Learn more about [vmalert](https://docs.victoriametrics.com/victoriametrics/vmalert/)


@@ -0,0 +1,17 @@
---
weight: 16
title: Datasource-Managed Alerts with vmalert and Grafana
menu:
docs:
parent: "guides"
weight: 16
tags:
- metrics
- guide
- kubernetes
- alerts
aliases:
- /guides/vmalert-datasource-managed-alerts-grafana.html
---
{{% content "README.md" %}}

Binary file not shown.
