Compare commits

...

339 Commits

func25
4e57f4dc48 fix 2026-04-14 15:10:04 +07:00
Artem Fetishev
8a20ccf21d apptest: sync code between branches and fix backup/restore range queries (#10799)
Fix app tests:

1. Sync code between vmsingle and vmcluster: it must be the same because
apptest does not differentiate between branches; it just runs pre-built
binaries.
2. Simplify range queries in backup/restore test so that it does not
depend on the interval between samples to work correctly.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-14 07:18:09 +02:00
Max Kotliar
1a01dbbec7 docs/changelog: fix unwanted release tag change
The tag v1.138.0 was unintentionally changed to v1.139.0 due to a bug in
the release script.

Reverting the change. The bug will be addressed separately.
2026-04-13 14:52:21 +03:00
f41gh7
630e413812 docs: update flags with actual v1.140.0 binaries
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:34:04 +02:00
f41gh7
b639e7e641 docs: bump version to v1.140.0
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:31:40 +02:00
f41gh7
858c318e1f deployment/docker: bump version to v1.140.0 2026-04-13 11:31:11 +02:00
f41gh7
b8327ce09c docs: mention new LTS releases
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:16:30 +02:00
Aliaksandr Valialkin
7514511c68 app/vmauth/main.go: clarify comments for bufferedBody struct a bit
This is a follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10677#discussion_r3064731250
2026-04-11 09:42:32 +02:00
f41gh7
33d524bf13 follow-up for d07c1c73d1
move bugfix into the current release
2026-04-10 19:37:14 +02:00
Alexander Frolov
d07c1c73d1 lib/writeconcurrencylimiter: prevent deadlock at IncConcurrency
Previously (*writeconcurrencylimiter.Reader).Read() could permanently leak concurrency tokens from the -maxConcurrentInserts semaphore.
 
 Consider the following example:
* GetReader() acquires a token, then PutReader() unconditionally releases it.
* Read() calls DecConcurrency() before the underlying I/O and IncConcurrency() after it. If IncConcurrency() returns an error, Read() returns without holding a token.
* Each such failure permanently removes one slot from the concurrencyLimitCh semaphore. Slots leak one by one until the channel is fully drained, at which point DecConcurrency() blocks forever, deadlocking ingestion on vmstorage.

This commit adds tracking of obtained tokens to the reader, which prevents possible token leakage.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10784
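A minimal sketch of the token-tracking idea (hypothetical names and semaphore wiring; imports such as io, fmt and time are assumed):

```go
// limitedReader remembers whether it currently holds a concurrency token,
// so a token is never released twice and never leaked when re-acquisition
// after the underlying I/O fails.
type limitedReader struct {
	r        io.Reader
	sem      chan struct{} // buffered channel acting as the concurrency semaphore
	hasToken bool
}

func (lr *limitedReader) Read(p []byte) (int, error) {
	if lr.hasToken {
		<-lr.sem // free the slot while blocked on the underlying read
		lr.hasToken = false
	}
	n, err := lr.r.Read(p)
	select {
	case lr.sem <- struct{}{}: // re-acquire the slot
		lr.hasToken = true
	case <-time.After(time.Second):
		// hasToken stays false, so putReader won't release a slot we don't hold
		return n, fmt.Errorf("cannot re-acquire a concurrency slot")
	}
	return n, err
}

func putReader(lr *limitedReader) {
	if lr.hasToken { // release only if the token is actually held
		<-lr.sem
		lr.hasToken = false
	}
}
```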
2026-04-10 19:35:59 +02:00
f41gh7
a896673c42 CHANGELOG.md: cut v1.140.0 release 2026-04-10 17:02:32 +02:00
f41gh7
c60ab2d57a make docs-update-version 2026-04-10 16:54:18 +02:00
f41gh7
49e51611d7 make vmui-update 2026-04-10 16:51:13 +02:00
Hui Wang
902ca83177 app/vmalert: adopt additional rule states in the list rules API
In grafana, the alert list panel can use VictoriaMetrics as datasource
and call `/api/v1/rules` api with [specific
states](https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#alert-instance-states).
See
https://play-grafana.victoriametrics.com/d/febljk0a32qyoa/3e68cf3?orgId=1&from=now-1h&to=now&timezone=browser&var-prometheus_datasource=P4169E866C3094E38&var-jaeger_datasource=P14D5514F5CCC0D1C&var-victorialogs_datasource=PD775F2863313E6C7&var-service_namespace=$__all&var-service_name=checkout&refresh=5m&editPanel=40.
Some states are already defined in vmalert, although with different
names. Others, such as "recovering", are currently undefined.
This pull request adopts all these states, rather than failing the request.

The above panel request also uses the `matcher` param to filter rules.
However,
[prometheus](https://prometheus.io/docs/prometheus/latest/querying/api/#rules)
also does not support this parameter and simply ignores it, so I don't
think vmalert needs to support it now.

JFYI, the grafana [Alerting
page](https://play-grafana.victoriametrics.com/alerting) does not
include any of the mentioned `state` or `matcher` parameters in rule
listing requests to the datasource. Filtering is handled by the Grafana
frontend, so most users are not affected by partial support for
filtering in backend products.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10778
2026-04-10 16:47:25 +02:00
Phuong Le
66e3f8736b ci: remove automatic Codecov reporting from test workflow (#10780)
This removes automatic Codecov reporting from VictoriaMetrics CI. This
change keeps local coverage generation available, but removes automatic
PR noise (such as
[this](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10625#issuecomment-4084390659))
and unnecessary CI overhead.
2026-04-10 16:45:11 +02:00
f41gh7
532fcc3dfe docs: remove reverted commit changelog
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-10 16:35:29 +02:00
Aliaksandr Valialkin
b003d6c6ae Revert "app/vmauth: align request body buffering flags"
This reverts commit b3c03c023c.

Reason for revert: the original logic was correct from the user's perspective:

- The -maxRequestBodySizeToRetry command-line flag controls the size of the request body,
  which could be retried on backend failure. The meaning of this flag wasn't changed after
  the introduction of the -requestBufferSize flag in the commit e31abfc25c
  (see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309 )

- The -requestBufferSize flag controls the size of the buffer for reading the request body
  before sending it to the backend and before applying concurrency limits.

These flags are independent from the user's perspective. The fact that these flags share the implementation
shouldn't be known to the user - this is an implementation detail, which allows avoiding double buffering.

Both flags enable request buffering. If the user wants to disable all request buffering,
then both flags must be set to 0. That's why these flags are cross-mentioned in their -help descriptions.

Also the reverted commit had the following issues:

- It reduced the default value for the -requestBufferSize flag from 32KiB to 16KiB.
  The 32KiB value has been calculated and justified at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309 .
  It shouldn't increase vmagent memory usage too much for typical workloads.
  For example, if vmagent handles 10K concurrent requests, then the memory overhead for the request buffering
  will be 10K*32KiB=320MiB. This is a small price for being able to efficiently handle 10K concurrent requests.

- It added a dot to the end of the https://docs.victoriametrics.com/victoriametrics/vmauth/#request-body-buffering link
  in the description of the -requestBufferSize flag. This breaks clicking the link in some environments,
  since the trailing dot is considered part of the url.

- It added a superfluous whitespace in front of the 'Disabling request buffering' text inside the description
  for the -requestBufferSize flag.

- It introduced unnecessary complexity for the user by mentioning that the zero value
  at -maxBufferSize disables buffering for request retries (these things must be independent
  from the user's perspective).

- It changed the bufferedBody logic in non-trivial ways, which aren't related to the original issue.
  If these changes are needed, then they must be justified in a separate issue and must be prepared
  in a separate pull request / commit.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10675
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10677
2026-04-10 15:55:47 +02:00
Aliaksandr Valialkin
8fa0fae05a lib/protoparser/protoparserutil: fix encoding -> contentType in the description of the ReadUncompressedData function
This is a follow-up for the commit bed7cbd0a4
2026-04-10 15:20:27 +02:00
Aliaksandr Valialkin
3fe2ec7bde docs/victoriametrics/Articles.md: add https://medium.com/airbnb-engineering/building-a-high-volume-metrics-pipeline-with-opentelemetry-and-vmagent-c714d6910b45 2026-04-10 13:28:49 +02:00
Max Kotliar
6389979bce docs/changelog: add thank you for bugfix contribution 2026-04-10 13:08:28 +03:00
Max Kotliar
210fd0ae15 docs/changelog: add thank you for the contribution 2026-04-10 13:06:08 +03:00
Noureldin
f95b483a13 lib/storage: fixes data race at startFreeDiskSpaceWatcher
Previously, Storage.table was initialized after startFreeDiskSpaceWatcher was called.
This created a potential data race: if openTable took a long time to complete
and the free disk space dropped during that window, the free disk space watcher could read an
uninitialized (or partially initialized) Storage.table, leading to an invalid memory
address or nil pointer dereference panic.

This commit properly initializes s.isReadOnly state during storage start and
starts FreeDiskSpaceWatcher after openTable.

Bug was introduced in github.com/VictoriaMetrics/VictoriaMetrics/commit/27b958ba8bc66578206ddac26ccf47b2cc3e8101

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10747
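A sketch of the fixed startup ordering (method names are illustrative):

```go
// The watcher goroutine may read s.table at any time after it starts,
// so everything it touches must be initialized first.
func (s *Storage) mustStart() {
	s.setIsReadOnly()             // initialize the read-only state during storage start
	s.table = s.mustOpenTable()   // may take a long time
	s.startFreeDiskSpaceWatcher() // safe: s.table is fully initialized by now
}
```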
2026-04-10 08:33:49 +02:00
Hui Wang
b71c37e20a app/vmalert: align group evaluation time with the eval_offset option
Align group evaluation time with the `eval_offset` option to allow users
to manage group execution more effectively by understanding the exact
time each group will be scheduled, particularly in cases of spreading
rule execution within a window, chaining groups, or debugging data delay
issue.

If the group evaluation takes less than the group interval, but the
initial evaluation combined with the additional restore operation
exceeds the group interval, the evaluation time will be gradually
corrected in subsequent evaluations, as the interval ticker schedule
remains unchanged.

For groups without `eval_offset`, this change also ensures that all
evaluations follow the interval. Previously, the gap between the first
and second evaluations was larger than the interval, and the
`eval_delay` continues to help prevent partial responses.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10772.
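A sketch of the alignment logic (a hypothetical helper, assuming eval_offset is smaller than the interval):

```go
// nextEvalTime returns the next evaluation time aligned to an interval
// boundary plus the configured offset, so the schedule is predictable.
func nextEvalTime(now time.Time, interval, offset time.Duration) time.Time {
	next := now.Truncate(interval).Add(offset)
	if !next.After(now) {
		next = next.Add(interval)
	}
	return next
}
```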
2026-04-10 08:31:36 +02:00
Aliaksandr Valialkin
c27b5f5dfe docs/victoriametrics/vmauth.md: fix link to concurrency limiting chapter
The correct link must be https://docs.victoriametrics.com/victoriametrics/vmauth/#concurrency-limiting
instead of https://docs.victoriametrics.com/victoriametrics/vmauth/#concurrency-limits

The incorrect link has been introduced in the commit e31abfc25c
2026-04-09 19:37:40 +02:00
Max Kotliar
0a31eacb3d lib/{osinfo,appmetrics}: Move vm_os_info metric code to lib/appmetrics package (#10776)
Follow-up commit for
211fb08028

Address @f41gh7 review comments:
- Move code from `lib/osinfo` to `lib/appmetrics`.
- Make the logic private.
- Use metrics.WriteGaugeUint64 func.
- Remove registration logic from `app/xxx/main.go`.
- Remove `lib/osinfo` package.
2026-04-09 18:32:47 +03:00
Artem Fetishev
70b0115ea6 lib/storage: reuse nextDayMetricIDs during the first hour of the day (#10704)
At 00:00 UTC the ingested samples start to have timestamps for the new
day (ingested samples are always recent). Even though there was a
next-day prefill of the per-day index during the last hour of the day,
some performance degradation is still possible.

For example, in https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10698
it is manifested as `vminsert-to-vmstorage connection saturation` peaks
right after midnight.

A possible hypothesis for why this happens: at midnight,
currHourMetricIDs is empty and prevHourMetricIDs cannot be used because
it holds metricIDs for the previous day. So the ingestion logic hits
dateMetricIDsCache, which may not have the metricID in its read-only
buffer and therefore must acquire a lock to check its prev read-only
buffer or read-write buffer. This creates lock contention and therefore
raises ingestion request latency.

A solution to this could be re-using the nextDayMetricIDs during the
first hour of the day. During this time, it is equivalent to
currHourMetricIDs.
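A sketch of the resulting lookup order (field and method names are illustrative):

```go
// During the first hour of a day, the next-day cache prefilled during the
// last hour of the previous day holds exactly the metricIDs of the new day,
// so membership checks can be answered without touching dateMetricIDsCache
// and its lock.
func (s *Storage) hasMetricIDForToday(metricID uint64) bool {
	if s.isFirstHourOfDay() && s.nextDayMetricIDs.Has(metricID) {
		return true // fast path, no lock contention
	}
	return s.dateMetricIDsCache.Has(s.currentDate(), metricID) // slower path, may lock
}
```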

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Signed-off-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-04-09 16:33:42 +02:00
Max Kotliar
dfafd14767 apptest: Improve TestSingleVMAgentDropOnOverload stability (#10774)
Previously the test could fail on resource-constrained runners because
the remoteWrite retry happens before the assertion in:

```
    waitFor(
        func() bool {
            return vmagent.RemoteWriteRequests(t, url1) == 1 &&
vmagent.RemoteWriteRequests(t, url2) == 1
        },
    )
```

Because of the retry, the metric jumps to two and the assertion is never satisfied.

The commit explicitly postpones retries so there is no race condition.

Failed CI job:

https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/24186679213/job/70593055140

PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10774

<img width="1157" height="879" alt="Screenshot 2026-04-09 at 15 30 33"
src="https://github.com/user-attachments/assets/e170ae12-cf79-4501-a57b-fbd3612d31a0"
/>
2026-04-09 16:57:39 +03:00
Max Kotliar
e3fdbc8341 docs/changelog: cleanup follow-up on e1a9901654
e1a9901654
2026-04-09 15:04:48 +03:00
Max Kotliar
1bf442537f docs/changelog: cleanup. follow-up on 211fb08028 commit
211fb08028
2026-04-09 15:01:44 +03:00
JAYICE
211fb08028 introduce os kernel version information metric (#10746)
The commit introduces the `vm_os_info` metric, which is exposed by all VM binaries by default. It provides visibility into the operating system version on which VictoriaMetrics is running, helping with troubleshooting environment-specific issues, like known kernel or fs bugs. 
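The metric follows the usual info-metric pattern: a constant value of 1 with the details carried in labels. A sketch with the VictoriaMetrics/metrics package (the label set below is illustrative):

```go
// Expose an info-style metric: the value is always 1, the OS details
// live in the labels.
metrics.WriteGaugeUint64(w, `vm_os_info{sys_name="Linux",release="6.8.0-generic"}`, 1)
```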

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10481
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10746

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-09 14:43:25 +03:00
Yury Moladau
846124e280 app/vmui: generate CSV format using /api/v1/labels (#10771)
The `Export query` button on the `Raw Query` tab now fetches the labels of the executed query and composes the export `format` based on that list of labels. This ensures that all query response labels are preserved in the CSV export.

Also, the commit removes the addition of the CSV header in the frontend; the header is now added by the backend (see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10706).

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10667
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10771
Duplicate of: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10737

---------

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Co-authored-by: lawrence3699 <lawrence3699@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-09 14:18:01 +03:00
andriibeee
e1a9901654 vmselect: add CSV header support for export/import (#10706)
Export (/api/v1/export/csv) now always writes a header row matching the requested format fields. Examples:

```
  # format=__timestamp__:unix_ms,__value__,job,instance
  __timestamp__:unix_ms,__value__,job,instance
  1704067200000,42.5,node,localhost:9090
```

Import (/api/v1/import/csv) gains auto-detection logic: the first row is skipped if any timestamp column fails timestamp parsing or any metric value column fails float parsing. If the first row is not detected as headers, it is parsed as data. This makes the import backward compatible. 
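A sketch of the import-side detection rule (a hypothetical helper, assuming unix_ms timestamps; the real parser works on the columns of the configured format):

```go
// isHeaderRow reports whether the first CSV row looks like a header row:
// any timestamp column that fails timestamp parsing, or any metric value
// column that fails float parsing, marks the row as a header.
func isHeaderRow(fields []string, tsCols, valueCols []int) bool {
	for _, i := range tsCols {
		if _, err := strconv.ParseInt(fields[i], 10, 64); err != nil {
			return true
		}
	}
	for _, i := range valueCols {
		if _, err := strconv.ParseFloat(fields[i], 64); err != nil {
			return true
		}
	}
	return false
}
```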

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10666
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10706

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-09 14:00:39 +03:00
dependabot[bot]
5d0cf1d4a5 build(deps): bump vite from 8.0.2 to 8.0.7 in /app/vmui/packages/vmui (#10761)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 8.0.2 to 8.0.7.

https://github.com/vitejs/vite/blob/v8.0.7/packages/vite/CHANGELOG.md

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-09 13:16:01 +03:00
Pablo (Tomas) Fernandez
cd3d297a3d docs: update playground page (#10754)
This change reverts part of the changes in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10686

Motivation: the docs added in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10686 are in most cases too verbose, AI-generated, and of low practical value.

The improvement goal: remove bloat from the docs and keep them practical and useful.

What it does:
- Completely removes items from the sidebar
- Moves the content of the most important playground pages to the
`/playground/` stub (README.md). Use H2s for each playground.
- Updates and cleans the text.
- Removes the individual children pages in the playground category (keep
only the `/playgrounds/` page/stub and remove the children).
- Removes items as these don't really need much introduction or aren't
playgrounds:
  - log to logsql: a conversion tool
  - sql to logsql: same
- adds Grafana playground section

Links to child pages will become invalid. We don't preserve them, as this is a pretty new doc (one week in prod) and is unlikely to have persisted links elsewhere already.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2026-04-09 12:08:48 +02:00
f41gh7
52f4d0f055 follow-up for 72c9e9377c
Move changelog entry to the upcoming release section
2026-04-09 11:38:36 +02:00
Hui Wang
72c9e9377c app/vmalert: expose remotewrite queue_size metrics
This commit adds new metrics `vmalert_remotewrite_queue_capacity` and `vmalert_remotewrite_queue_size`, which are updated with each push; their
update frequency depends on `-remoteWrite.concurrency` and
`-remoteWrite.flushInterval`.

It doesn't account for the pending data within each pusher's request, but it
should provide a general indication of the queue usage.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10765
2026-04-09 11:22:38 +02:00
andriibeee
0aaa741b5b lib/awsapi: add support for named AWS profile to ec2_sd_config
Add support for named AWS profiles in ec2_sd_config, matching Prometheus behavior.

Example:

```text
~/.aws/config:
[profile account-one]
source_profile = root
role_arn = arn:aws:iam::000000000001:role/prometheus
```

```yaml
scrape_configs:
- job_name: ec2
  ec2_sd_configs:
  - profile: account-one
```

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1685
2026-04-09 11:17:17 +02:00
f41gh7
0e9870b7a9 vendor: run go get -u ./lib/...
go get -u ./app/...
go mod tidy -compat=1.26
go mod vendor
2026-04-09 09:34:03 +02:00
Artem Fetishev
accb06d131 lib/storage: refactor storage synctests
Extract repeated code from nextDayMetricIDs synctests into separate
funcs to make the code more readable.

The change was originally introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10704 and was
extracted into a separate PR to keep the original change simple.
2026-04-09 09:07:37 +02:00
f41gh7
1787bce6cb app/vminsert: optimise per-insert-request memory buffer size
Previously, vminsert did not account for the ingest concurrency limit in buffer size calculation.
This could lead to excessively large buffers and OOM errors when the concurrency limit was reached.

 This commit fixes buffer size calculation by separating `insertCtx` and `storageNode` buffer size limits.

`storageNode` buffer size is set to a larger value, as it is allocated per configured `-storageNode`
and is independent of the concurrency limit.

`insertCtx` buffer size now accounts for the configured concurrency limit
and calculates the maximum buffer size accordingly.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10725
2026-04-09 08:48:37 +02:00
JAYICE
141febd413 app/vmselect: disable partial responses for cluster native requests
Previously, vmselect in cluster-native mode could return partial responses to upstream vmselect.
Since upstream vmselect expects full responses (mimicking vmstorage behavior),
partial responses must be disabled in cluster-native mode.
This prevents incomplete responses from being cached at the upstream vmselect level.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10678
2026-04-09 08:48:13 +02:00
0e4ef622
256eff061d docs/victoriametrics/stream-aggregation: fix rate_sum link (#10756)
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8349 updated the recommendation for histogram aggregation from `total` to `rate_sum`, but missed one of the links.

PR: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10756

2026-04-08 13:09:58 +03:00
Zakhar Bessarab
fa1dd0ec0a app/vmagent/remotewrite: automatically set series limits to MaxInt32 when setting value to -1 (#9614)
Automatically set daily and hourly series limits to `MaxInt32` when `remoteWrite.maxHourlySeries` or `remoteWrite.maxDailySeries` is set to `-1`.

This change addresses a usability issue with the cardinality limiter. Users may want to enable the limiter to observe its metrics before deciding on an appropriate limit. However, the underlying bloom filter only supports `int32`, so setting large values can lead to overflow.

With this PR:
* Setting either flag to `-1` is treated as “no practical limit” and internally mapped to `math.MaxInt32`
* Values exceeding `int32` are safely clamped to `MaxInt32` to prevent overflow

This allows users to enable the limiter for estimation purposes without risking invalid configurations or runtime issues.
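A sketch of the clamping rule (hypothetical helper name):

```go
// normalizeSeriesLimit maps -1 to "no practical limit" and clamps values
// above the int32 range, since the underlying bloom filter only supports
// int32 limits.
func normalizeSeriesLimit(v int64) int32 {
	if v < 0 || v > math.MaxInt32 {
		return math.MaxInt32
	}
	return int32(v)
}
```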

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9614

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-04-08 12:54:27 +03:00
Max Kotliar
6337dfc472 deployment/docker: update Go builder from Go1.26.1 to Go1.26.2
See
https://github.com/golang/go/issues?q=milestone%3AGo1.26.2%20label%3ACherryPickApproved
2026-04-08 12:43:29 +03:00
JAYICE
0a256002e5 lib/promscrape: update last scrape result only when current scrape is successful
Previously, the last scrape result was updated unconditionally, despite possible scrape errors.

The commit updates the last scrape result only on a successful scrape. This properly accounts for the `scrape_series_added` metric and aligns it with the same metric in Prometheus.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10653
2026-04-06 17:14:47 +02:00
Nikolay
b3c03c023c app/vmauth: align request body buffering flags
Previously introduced flag `requestBufferSize` raised the default value for the
in-memory buffer from 16KB to 32KB. It could increase memory usage for
vmauth. Also, it made it unclear how to actually disable request buffering.

This commit aligns the flag values to 16KB, and disables request
buffering if any of the flag values is 0, as mentioned in the flag descriptions.
If any of the flags has a non-default value, that value is used as the max size
of the request buffer. If both flags are modified, the bigger value wins.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10675
2026-04-06 09:51:44 +02:00
Hui Wang
80e2f29761 app/vmalert: add random jitter to concurrent periodical flushers targeting the remote write destination
I expect the change to help in two ways:
1. Spreading remote write flushes over the flush interval to avoid
congestion at the remote write destination;
2. Enhancing queue data consumption. Currently, all flushers may always
flush data simultaneously, resulting in periods where no flushers are
consuming data from the queue, which increases the risk of reaching the
queue limit `remoteWrite.maxQueueSize` even with an increased
`remoteWrite.concurrency`. By making the flushers more dispersed, it is
more likely that some flushers are consistently consuming data from the
queue, which should make queue management easier.
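A sketch of the jitter idea (illustrative fragment; the real change is wired into vmalert's flusher loop):

```go
// Sleep a random fraction of the flush interval before starting the
// ticker, so concurrent flushers are spread across the interval instead
// of all firing at the same instant.
time.Sleep(time.Duration(rand.Int63n(int64(flushInterval))))
ticker := time.NewTicker(flushInterval)
defer ticker.Stop()
for range ticker.C {
	flush()
}
```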

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10729/
2026-04-06 09:50:20 +02:00
Hui Wang
df34ba3ba2 app/vmalert: expose new histograms to provide better visibility into remote write request sizes
The new histograms should help with debugging whether remote write
pushes are efficient (pushes can be underutilized due to a small flush
interval), like in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10693 and
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10536. This
enhanced visibility will allow related parameters such as
`-remoteWrite.maxBatchSize`, `-remoteWrite.maxQueueSize`,
`-remoteWrite.flushInterval` to be tuned accordingly.
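A sketch of such instrumentation with the VictoriaMetrics/metrics package (the metric names below are illustrative, not necessarily the ones added by the commit):

```go
var (
	sentRows  = metrics.NewHistogram(`vmalert_remotewrite_send_rows`)
	sentBytes = metrics.NewHistogram(`vmalert_remotewrite_send_bytes`)
)

// record the size of every remote write push
func onPush(rows int, payload []byte) {
	sentRows.Update(float64(rows))
	sentBytes.Update(float64(len(payload)))
}
```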

Eventually, `vmalert_remotewrite_sent_rows_total`
and `vmalert_remotewrite_sent_bytes_total` could be deprecated, but it's also fine to leave
them as they are since they're small counters.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10727
2026-04-06 09:49:17 +02:00
sias32
10dd45c4fd dashboards: improve alert statistics (#10571)
Changes:

- Added the number of `pending alerts` and `firing alerts`
- Improved `transformations` for the panel - FIRING over time by group and rules
- Added sorting for the panel - FIRING over time by rule

Signed-off-by: sias32 <sias.32@yandex.ru>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-03 21:27:19 +03:00
Max Kotliar
a3294b5aa2 docs/guide: fix free space calculation factor in capacity planning formula
Replace 1.2 multiplier with 1.25 in disk space estimation formula.

1.2 only provides ~16.7% free space, while the docs recommend keeping
20%. Using 1.25 correctly accounts for 20% free space.
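The arithmetic behind the factor: provisioning a disk of size $mD$ for data of size $D$ keeps a free fraction of

$$1 - \frac{D}{mD} = 1 - \frac{1}{m}, \qquad 1 - \frac{1}{1.2} \approx 16.7\%, \qquad 1 - \frac{1}{1.25} = 20\%.$$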

Inspired by
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10394
2026-04-03 21:20:01 +03:00
Zhu Jiekun
4438454567 vendor: update metrics package with fix unsupported metric type for summary (#10745)
Fix `unsupported` metric type display in exposed metric metadata for
summaries and quantiles by bumping `metrics` SDK version.

This `unsupported` type exists when a summary is not updated within a
certain time window. See https://github.com/VictoriaMetrics/metrics/issues/120 and pull
request https://github.com/VictoriaMetrics/metrics/pull/121 for details.

Signed-off-by: Zhu Jiekun <jiekun@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-03 16:06:47 +03:00
Max Kotliar
71af1ee5f1 .github: Set 21-day cooldown to dependabot updates (#10740)
Recent supply chain attacks on GitHub Actions and npm packages show the
risk of pulling dependency updates too quickly:
-
https://socket.dev/blog/trivy-under-attack-again-github-actions-compromise
-
https://www.stepsecurity.io/blog/axios-compromised-on-npm-malicious-versions-drop-remote-access-trojan
2026-04-03 15:51:07 +03:00
Evgeny
e00fb7e605 app/vmagent: add per-URL -remoteWrite.disableMetadata
Add per-URL `-remoteWrite.disableMetadata` flag to control metadata
sending for each remote storage independently.

After v1.137.0 enabled `-enableMetadata` by default, metadata is sent to
ALL remote write targets, even those with relabeling filters that drop
most metrics. This causes unnecessary growth in
`vmagent_remotewrite_requests_total` and a significant increase in
network load for heavily filtered remote write destinations.
2026-04-03 10:32:34 +02:00
Roman Khavronenko
5e2ee00504 app/vmauth: mention that vmauth can be used with other components
A cosmetic change to highlight that vmauth can be used with other
components besides VictoriaMetrics.
2026-04-03 10:27:43 +02:00
JAYICE
de2bc4237a lib/backup/s3: retry the requests that failed with unexpected EOF
When the network between the client and the S3 server is unstable, the client may encounter temporary io.EOF errors when reading the response from the S3 server.
Currently, the S3 SDK in vmbackup uses the default retry policy. However, this default retry policy won't retry when the S3 SDK meets an unexpected EOF. This means that a temporary unexpected EOF error will cause the backup task to fail.
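A sketch of the retry predicate (illustrative; the actual change wires this into the S3 SDK's retryer):

```go
// isRetryableEOF reports whether err is a temporary EOF that is worth
// retrying when reading a response from the S3 server.
func isRetryableEOF(err error) bool {
	return errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, io.EOF)
}
```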

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10699
2026-04-03 10:26:58 +02:00
Fred Navruzov
3e51f277bd docs/vmanomaly: v1.29.2 (#10741)
update docs to vmanomaly v1.29.2 release

Signed-off-by: Fred Navruzov <fred-navruzov@users.noreply.github.com>
2026-04-02 22:02:26 +03:00
Roman Khavronenko
5723339525 docs: mention https://victoriametrics.com/blog/victoriametrics-remote-write/ (#10726)
Add link to blogpost with detailed information about zstd+rw protocol.
This PR is based on question in community channel about implementation
details.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-02 16:30:10 +03:00
Max Kotliar
3b986ad326 docs/changelog: add thank for contribution 2026-04-02 15:58:23 +03:00
Max Kotliar
08dd38d4a0 vendor: update https://github.com/VictoriaMetrics/metricsql from v0.85.0 to v0.86.0
It contains https://github.com/VictoriaMetrics/metricsql/pull/63, which
reduces the number of parentheses added.

It should improve the prettify functionality in vmui.
2026-04-02 15:42:01 +03:00
Aliaksandr Valialkin
815cc97952 vendor: update github.com/VictoriaMetrics/metrics from v1.42.0 to v1.43.0 2026-04-02 14:18:30 +02:00
Dmytro Kozlov
93d71e7106 vmctl: add thanos migration mode (#10659)
Implemented dedicated thanos migration mode for vmctl to migrate data from Thanos installations to VictoriaMetrics.

Key features:
1. Raw and downsampled blocks support: Reads both raw blocks
(resolution=0) and downsampled blocks (5m/1h resolution) directly from
Thanos snapshots
2. All aggregate types: Imports count, sum, min, max, and counter
aggregates from downsampled blocks as separate metrics with resolution
and type suffixes (e.g., metric_name:5m:count)
3. Dedicated flags: Uses `--thanos-*` prefixed flags (--thanos-snapshot,
--thanos-concurrency, --thanos-filter-time-start,
--thanos-filter-time-end, --thanos-filter-label,
--thanos-filter-label-value, --thanos-aggr-types)
4. Selective aggregate import: Use `--thanos-aggr-types` to import only
specific aggregates

Usage:
```
vmctl thanos --thanos-snapshot /path/to/thanos-data --vm-addr http://victoria-metrics:8428
```

Closes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9262

Signed-off-by: Dmytro Kozlov <d.kozlov@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2026-04-02 14:51:01 +03:00
Aliaksandr Valialkin
577b161343 docs/victoriametrics/changelog/CHANGELOG.md: add a description for the change in the commit dd2d6807e4 2026-04-02 13:18:38 +02:00
Mehrdad Banikian
dd2d6807e4 Add split phase metrics for filestream fsync operations (#10493)
## Summary

This PR implements split phase metrics for filestream operations as
requested in #10432.

### Changes

- Added `vm_filestream_fsync_duration_seconds_total` metric to track
fsync syscall duration separately
- Added `vm_filestream_fsync_calls_total` metric to count fsync calls
- Added `vm_filestream_write_syscall_duration_seconds_total` metric to
track write syscall duration (previously mixed with flush time)
- Refactored `MustClose()` and `MustFlush()` to use new `flush()` and
`sync()` helper methods
- Kept `vm_filestream_write_duration_seconds_total` for backward
compatibility

### Problem Solved

Previously, `vm_filestream_write_duration_seconds_total` was being
incremented in two places:
1. `statWriter.Write()` - triggered by `bw.Flush()` and `bw.Write()`
2. `Writer.MustFlush()` - which included the above process, leading to
double-counting

This made it impossible to distinguish between write syscall time and
fsync time, which is critical for diagnosing storage latency issues.

### Solution

The new metrics allow users to:
- Distinguish "flush got slower" vs "fsync got slower" using metrics
only
- No file path labels (bounded cardinality)
- No double-counting between metrics

### Testing

- Code compiles successfully
- All existing metrics are preserved for backward compatibility

Closes #10432

---------

Signed-off-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Signed-off-by: Aliaksandr Valialkin <valyala@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
2026-04-02 13:14:33 +02:00
Aliaksandr Valialkin
e38e25b756 app/vmagent/remotewrite: improve the readability of the parseRetryAfterHeader() function a bit
- Use shorter name for its arg: retryAfterString -> s. This is OK to do because the function is small enough,
so it is easier to read 's' instead of 'retryAfterString' in multiple places of the function.

- Remove the name for the returned value - retryAfterDuration, since it only confuses the reader.

This is a follow-up for the commit 5319acb8ed , which introduced this function.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6097
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6124
2026-04-02 12:52:59 +02:00
Vadim Alekseev
bc708c8568 lib/timeutil: introduce backoff timer struct (#10714)
I noticed that the backoff timer logic is repeated across multiple
packages. I've implemented a universal wrapper to avoid duplicating this
logic. This structure is already [actively
used](2aa0ea10bb/app/vlagent/kubernetescollector/backoff_timer.go (L11))
for the Kubernetes Collector in vlagent and can be reused in vlagent's
remotewrite. I've also included a usage example in this PR so you can
evaluate its utility.
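A minimal sketch of such a backoff timer (hypothetical API, not necessarily the one in lib/timeutil):

```go
// backoffTimer doubles the delay after each failed attempt up to maxDelay
// and is reset back to minDelay after a success.
type backoffTimer struct {
	delay, minDelay, maxDelay time.Duration
}

func newBackoffTimer(minDelay, maxDelay time.Duration) *backoffTimer {
	return &backoffTimer{delay: minDelay, minDelay: minDelay, maxDelay: maxDelay}
}

func (bt *backoffTimer) Wait() {
	time.Sleep(bt.delay)
	bt.delay *= 2
	if bt.delay > bt.maxDelay {
		bt.delay = bt.maxDelay
	}
}

func (bt *backoffTimer) Reset() {
	bt.delay = bt.minDelay
}
```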

2026-04-02 12:31:28 +02:00
Aliaksandr Valialkin
cd73472a3e docs/victoriametrics/Articles.md: add https://mirastacklabs.ai/blog/chunk-split-caching/ 2026-04-01 22:42:15 +02:00
Aliaksandr Valialkin
527d09653a lib/storage: remove MetricNamesStatsResponse and MetricNamesStatsRecord types
These types hide public types from lib/storage/metricnamestats package.
These types do not resolve any practical issues. Instead, they add a level of indirection,
which complicates reading and understanding the code.

These types were introduced in the commit 795d3fe722
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6145
2026-04-01 22:25:50 +02:00
Aliaksandr Valialkin
28a87b90bb apptest: test apps with the enabled built-in race detector in order to be able to catch data races 2026-04-01 22:11:17 +02:00
Artem Fetishev
c445e7fcc0 docs: bump version to v1.139.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 15:22:06 +02:00
Artem Fetishev
9494ee103e deployment/docker: bump version to v1.139.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 15:19:03 +02:00
Artem Fetishev
94af588e92 docs: forward port LTS v1.122.18 changelog to upstream
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 14:24:20 +02:00
Artem Fetishev
e5c194cc10 docs: forward port LTS v1.136.3 changelog to upstream
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 14:22:51 +02:00
Artem Fetishev
7c65e3daca docs: fix changelog
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 13:17:16 +02:00
Pablo (Tomas) Fernandez
a6532c28b2 docs/guides: Add new guide "Set up Datasource-Managed Alerts with vmalert and Grafana" (#10691)
Create a guide to use datasource-managed alerts in Grafana

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10528

Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Mathias Palmersheim <mathias@victoriametrics.com>
2026-03-31 18:57:04 +03:00
Jose Gómez-Sellés
ec26ebb803 docs: raise cloud awareness in docs (#10716)
Some users may not know that VictoriaMetrics Cloud provides relevant
features to manage workloads. This change adds notes in relevant places
where users may find that a managed solution is what they need.

The intention is not to push users to Cloud, but to give them the information.
That's why it's always phrased like: "If you don't want to do X, Cloud
can do it for you", instead of "Start for free, etc". This is an Open
Source-first project, and shall remain as such.

After this gets proper review, VictoriaLogs and other repos may follow.

---------

Signed-off-by: Jose Gómez-Sellés <14234281+jgomezselles@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-31 18:51:06 +03:00
Max Kotliar
faba8b985b docs: bump lts tags 2026-03-28 15:49:56 +02:00
Artem Fetishev
d233a409d9 docs/changelog: cut release v1.139.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-27 11:28:24 +01:00
Artem Fetishev
96cbd6fff3 docs: update version to v1.139.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-27 11:27:29 +01:00
Artem Fetishev
b1d009b13a app/vmselect: run make vmui-update
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-27 11:18:18 +01:00
Nikolay
57ce00a5c6 lib/fs: restore async deletion of NFS folders
Commit 83da33d8cf removed NFS directory delete retries. It was made on the assumption that
only directory renames could cause such issues. However, both rename and
unlink use the same "silly rename" logic
(https://linux-nfs.org/wiki/index.php/Server-side_silly_rename;
in the Linux kernel, see `fs/nfs/dir.c`, `nfs_unlink` and `nfs_rename`).

An NFS client may treat a file as still open, even if it
was properly closed by the application. Most probably this can be triggered because VictoriaMetrics may
open the same file multiple times (data reads and background merges).

There is no issue with VictoriaMetrics itself; it properly closes files. But the NFS client may have delays
or cache metadata information for the files, so it could trigger the silly rename behavior.

This commit restores the original behavior with deletion retries and brings
back metrics for unsuccessful delete operations.
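A simplified sketch of retried deletion (the actual implementation retries asynchronously in the background; names and retry parameters here are illustrative):

```go
// removeDirWithRetries retries directory removal, since an NFS client may
// keep silly-renamed (.nfsXXXX) files inside the directory for a short
// while after the application has closed them.
func removeDirWithRetries(path string) error {
	var err error
	for i := 0; i < 10; i++ {
		if err = os.RemoveAll(path); err == nil {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("cannot remove %q: %w", path, err)
}
```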

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9842
2026-03-27 09:51:27 +01:00
dependabot[bot]
cfb53cbfb9 build(deps): bump codecov/codecov-action from 5 to 6 (#10709)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5 to 6.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-27 07:43:28 +02:00
Benjamin Nichols-Farquhar
febafc1cf1 lib/backup: speed up restores on linux systems (#10661)
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10680

We noticed that backup restores in our environment were much slower than
the hardware/bandwidth constraints would suggest and we traced this down
to a couple of bottlenecks. This PR attempts to address all of them.

#### Lack of pre-allocation of files

This was causing writes far into files to be quite slow as new blocks
needed to be continually allocated. This was particularly bad on ext4
for us, but will likely be applicable to most disks and filesystems.
You'll see the impl here is Linux-specific, but this is mostly because I
don't have a test env for any other platform and didn't want to blindly
make changes without a validation env.

This comes with the downside of no longer being able to resume a restore
mid-file, and requiring the re-downloading of parts already in the file,
since the file will appear at full size from the very start. This is, I
think, _generally_ a good tradeoff for the restore speed gains. It is
definitely a tradeoff, so I've included a flag to disable the
pre-allocation behavior and fall back to the existing part-diffing
logic.

#### Fsync after each part

With many small parts in relatively few files, or in high-concurrency
setups, the writerCloser fsyncs on each part (actually a double fsync,
since both `filestream.Writer.mustFlush` and
`filestream.Writer.mustClose` fsync). This was causing slowdowns, since
we would be continually queuing fsyncs.

With the pre-allocation pattern the file is only "ready" once renamed,
so I moved to a per-file fsync after rename.

#### Concurrent read/write 

The previous download pattern was to do a read from the remoteFs, with
whatever latency that entailed, then sequentially do a write, again with
whatever latency that entailed. This meant that throughput was limited
to `blockSize / (readLatency + writeLatency)`.

Similar to how `crossTypeCopy` is implemented in the backup process, we
can instead use `io.Pipe` to allow two goroutines to work in parallel
with a small buffer between them.
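A sketch of the pipe pattern (illustrative fragment; remoteReader and dstFile are assumed):

```go
// One goroutine downloads from remote storage while this goroutine writes
// to disk, so read and write latencies overlap instead of adding up.
pr, pw := io.Pipe()
go func() {
	_, err := io.Copy(pw, remoteReader) // download side
	pw.CloseWithError(err)              // propagate errors to the reading side
}()
if _, err := io.Copy(dstFile, pr); err != nil { // disk-write side
	return err
}
```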

#### Pagecache avoidance 

`filestream.Writer` does quite a lot to avoid polluting the page cache,
but this is not relevant in a restore context, and with large sequential
block writes it's much more efficient to let the OS flush the pagecache
whenever it wants rather than doing a bunch of small buffer syscalls to
flush blocks.

Therefore this switches over to a much simpler directWriterCloser that
does direct file IO and lets the OS handle flushes mid-write.

### Performance 

Before the changes we were seeing write speeds of only 100MBps; this
was a restore from EBS volumes (ext4) with 1GB/s throughput.


After these changes, in the same restore env, we're seeing flat rates
of 600MB/s.

Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-03-27 07:35:44 +02:00
Fred Navruzov
f1f70e976e docs/vmanomaly: fix typos in min rel dev param (#10708)
Fix a typo in the changed example for minimal relative deviation
2026-03-27 07:24:24 +02:00
Fred Navruzov
5aa0a75ff8 docs/vmanomaly-release-v1.29.1 (#10707)
Some of the docs were not included in the v1.29.1 docs release.

Signed-off-by: Fred Navruzov <fred-navruzov@users.noreply.github.com>
2026-03-26 22:32:00 +02:00
Max Kotliar
d83f142c63 .github: check commit signature for both GPG and SSH 2026-03-26 19:37:16 +02:00
Artem Fetishev
a07cae3279 lib/lrucache: remove shards (#10697)
Remove shards as they only complicate things when the number of requests
per second is in the range of thousands.

Related to #10532.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-26 16:28:01 +01:00
Phuong Le
8cda999238 README.md: fix wrong links for Docker and Slack badges (#10705)
Clicking the Docker and Slack badges redirects to an intermediate page
instead of taking users directly to the intended sites. This change
fixes those links.

---------

Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-03-26 12:05:06 +02:00
Hui Wang
2d6cf8827d lib/protoparser/opentelemetry: support ExponentialHistogram negative buckets (#10669)
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9896#issuecomment-4037424586.
Histogram-related functions such as histogram_quantile() and the VMUI
heatmap also work with negative bucket values.

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-03-26 11:57:19 +02:00
Max Kotliar
c59ca79f2b docs/changelog: mention external contributor 2026-03-26 11:54:42 +02:00
andriibeee
be5ae9b95c lib/jwt: support array claim values in match_claims
This commit allows performing JWT claim matching over one-dimensional arrays. It can
be useful from a practical standpoint, because permissions are usually assigned as a list of values.

For example, the following config allows admin access based on the user's list of assigned roles:

```yaml
match_claims:
  access.roles: "admin"
```

JWT token:
```json
{
  "access": {
    "roles": [
      "read",
      "write",
      "admin"
    ]
  }
}
```
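A sketch of scalar-or-array claim matching (hypothetical helper):

```go
// claimMatches reports whether a claim value equals want, or is a
// one-dimensional array containing want.
func claimMatches(claim any, want string) bool {
	switch v := claim.(type) {
	case string:
		return v == want
	case []any:
		for _, item := range v {
			if s, ok := item.(string); ok && s == want {
				return true
			}
		}
	}
	return false
}
```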

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10647
2026-03-26 10:23:43 +01:00
andriibeee
60aef0510f lib/promauth: make username optional in basic_auth section
RFC 7617 allows an empty password/username; from the RFC's standpoint, both values may be empty, encoded simply as `:`. So this commit relaxes the non-empty username restriction.
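For reference, RFC 7617 encodes the credentials as base64(username ":" password), so an empty username simply leaves nothing before the colon (illustrative fragment; req is an *http.Request):

```go
username, password := "", "secret" // empty username is valid per RFC 7617
token := base64.StdEncoding.EncodeToString([]byte(username + ":" + password))
req.Header.Set("Authorization", "Basic "+token)
```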

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6956
2026-03-26 02:18:37 -07:00
Yury Moladau
b3b555c09c app/vmui: update dependencies to latest compatible versions (#10696)
update dependencies to latest compatible versions

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2026-03-25 19:34:40 +02:00
Fred Navruzov
c57ea02564 docs/vmanomaly: release v1.29.1 (#10703)
vmanomaly docs upgrade to v1.29.1 (including AI assistance providers and
respective section rework on UI page)

2026-03-25 19:31:14 +02:00
Max Kotliar
5983d27b00 Revert "docs/anomaly-detection: remove vmanomaly docs from vm repo"
This reverts commit d36f7b6b49.
2026-03-25 19:30:21 +02:00
Max Kotliar
d36f7b6b49 docs/anomaly-detection: remove vmanomaly docs from vm repo
vmanomaly docs should be updated via PRs to vmdocs repo. There is no
need to merge them here and cherry-pick through all branches.
2026-03-25 19:17:24 +02:00
Ty Sarna
70ab2c1585 lib/protoparser/prometheus: add support for OpenMetrics-specific metric types (#10689)
- Adds `info`, `gaugehistogram`, `stateset`, and `unknown` as recognized
metric type names in the Prometheus/OpenMetrics text format parser.
- Previously these valid
[OpenMetrics](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md)
types hit the `default` case and emitted an `error`-level log on every
scrape, flooding logs and continuously triggering the `TooManyLogs`
alert.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10685

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-03-24 15:34:33 +02:00
Pablo (Tomas) Fernandez
c854816642 docs: add playgrounds category (#10686)
Add a new Playgrounds category to the sidebar. Each VictoriaMetrics playground is represented in a separate file.

Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2026-03-24 15:25:02 +02:00
Max Kotliar
285e3d2a63 app/vmselect: enforce datasource_type=prometheus when proxying alert requests (#10668)
Grafana currently supports only Prometheus-style alerts. If other alert types
(e.g. logs or traces) are returned, it may fail with "Error loading alerts".

Grafana queries the vmalert API directly, bypassing the VictoriaMetrics datasource, so query params (such as datasource_type) cannot be enforced on the Grafana side.

To ensure compatibility, we detect Grafana requests via the User-Agent and enforce `datasource_type=prometheus`.
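A sketch of the enforcement (illustrative; the actual User-Agent check may differ):

```go
// Pin datasource_type for Grafana before proxying the request to vmalert.
if strings.Contains(r.Header.Get("User-Agent"), "Grafana") {
	q := r.URL.Query()
	q.Set("datasource_type", "prometheus")
	r.URL.RawQuery = q.Encode()
}
```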

See:
- https://github.com/VictoriaMetrics/victoriametrics-datasource/issues/329#issuecomment-3847585443
- https://github.com/VictoriaMetrics/victoriametrics-datasource/issues/59
2026-03-24 14:52:18 +02:00
Artem Fetishev
95175e00b4 lib/lrucache: sizeBytes should also include key length (#10679)
There are cases when the key sizeBytes is much greater than the value
sizeBytes. Therefore it is important to include the key sizeBytes in
the total.

Also fix some code comments.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-24 12:54:31 +01:00
Artem Fetishev
d21d9e8382 lib/storage: Improve indexDB error messages (#10684)
Fixes: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9499

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Signed-off-by: Nikolay <nik@victoriametrics.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-24 12:19:04 +01:00
dependabot[bot]
235daa6208 build(deps-dev): bump flatted from 3.3.3 to 3.4.2 in /app/vmui/packages/vmui (#10688)
Bumps [flatted](https://github.com/WebReflection/flatted) from 3.3.3 to
3.4.2.
Commits: https://github.com/WebReflection/flatted/compare/v3.3.3...v3.4.2

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-24 12:34:05 +02:00
dependabot[bot]
10f4a86540 build(deps): bump google.golang.org/grpc from 1.79.1 to 1.79.3 (#10674)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.79.1 to 1.79.3.

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10674

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-24 11:37:15 +02:00
dependabot[bot]
79cfffb984 build(deps): bump undici from 7.20.0 to 7.24.4 in /app/vmui/packages/vmui (#10673)
Bumps [undici](https://github.com/nodejs/undici) from 7.20.0 to 7.24.4.

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10673

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-24 11:35:18 +02:00
Aliaksandr Valialkin
23e2379c28 docs/victoriametrics/Articles.md: add https://www.infoq.com/news/2026/03/self-hosted-observability/ 2026-03-20 18:39:13 +01:00
andriibeee
e761f22049 lib/netutil: warn when IPv6 listen address is used without -enableTCP6 (#10640)
Fixes #6858

---------

Signed-off-by: andriibeee <154226341+andriibeee@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-03-18 21:01:55 +02:00
andriibeee
fb579cf592 lib/jwt: fail on unsupported alg when use=sig, skip non-sig JWKS keys (#10664)
Fixes #10663

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-03-18 20:40:04 +02:00
hagen1778
fd0d764720 docs/articles: add https://setevoy.medium.com/freebsd-monitoring-with-victoriametrics-and-grafana-f789904f2628
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-03-18 16:04:30 +01:00
Roman Khavronenko
fe8aaa8885 docs: add unique identifier to FAQ page (#10671)
Due to a conflict with the VL FAQ page identifier,
the VM FAQ page stopped rendering.

This change adds a unique identifier to the VM FAQ page and fixes the issue.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-03-18 15:29:21 +01:00
hagen1778
b903fc29ec docs/data-ingestion: fix typo in OtelCollector
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-03-18 15:29:04 +01:00
hagen1778
a6833ffd08 dashboards/metrics-explorer: properly reference datasource variable
Before, by mistake, the datasource was referenced by input name instead
of variable name. For an unknown reason, it worked well in the local setup
and on the playground.

This fix is confirmed by users and continues working in the local setup
and on the playground.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-03-18 15:28:26 +01:00
Andrii Chubatiuk
4516a58df9 app/vmalert: add group_limit and page_num for pagination and search at /api/v1/rules (#10046)
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9580

inspired by https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9057

improve https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10005

added changes to support pagination in VMUI alerting tab:
- added pagination panel
<img width="1431" height="197" alt="image"
src="https://github.com/user-attachments/assets/17b2c4e1-06b7-4345-8ccc-008637edd4e0"
/>
- added navigation from the group modal to the rule and from child modals
to the group, as a replacement for anchor navigation, which became
impossible after the introduction of pagination
<img width="1264" height="599" alt="image"
src="https://github.com/user-attachments/assets/a803347f-e44e-4325-9b59-8656bd6a5d9b"
/>
<img width="1253" height="523" alt="image"
src="https://github.com/user-attachments/assets/70db27bd-0027-4510-9cad-0354e016d2f2"
/>


PR is rebased against [this
change](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10068)
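
For illustration, a paginated request using the new parameters might look like this (the values and exact parameter semantics here are illustrative; see the vmalert docs for details):

```
GET /api/v1/rules?group_limit=20&page_num=2&search=cpu
```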

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Andrii Chubatiuk <achubatiuk@victoriametrics.com>
Co-authored-by: Haley Wang <haley@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-03-18 13:25:36 +02:00
Pablo (Tomas) Fernandez
5ad7b645e6 Docs: update guide "HA monitoring setup in Kubernetes via VictoriaMetrics Cluster" (#10580)
### Describe Your Changes

Updated the [HA monitoring setup in Kubernetes via VictoriaMetrics
Cluster](https://docs.victoriametrics.com/guides/k8s-ha-monitoring-via-vm-cluster/)
guide.

Changes:
- Added an introduction explaining how HA works in this guide
- Updated and verified commands used in the guide
- Replaced Grafana UI usage in favor of VMUI (Grafana was only used to
run queries; it's easier to use the built-in VMUI than to install
Grafana just for the Explore tab)
- Removed Grafana screenshots and replaced them with VMUI
- Tested on a modern version of GKE
- Added explanations for `replicationFactor`, de-duplication, and
`isPartial`
- Added next steps
- Added VMUI screenshots


### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-03-18 10:55:30 +02:00
Yury Moladau
51a53014c8 app/vmui: fix autocomplete dropdown closing on Raw Query page (#10665)
### Describe Your Changes

Fixed an issue where the autocomplete dropdown did not close after
selecting an option on the Raw Query page.

**How to reproduce**

* Open the Raw Query page
* Trigger autocomplete
* Select any option from the dropdown
* Before the fix, the dropdown stayed open after selection


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-03-18 10:45:40 +02:00
Aliaksandr Valialkin
e47abd6385 docs/victoriametrics/Articles.md: add https://clovisc.medium.com/monitoring-pipeline-with-blackbox-exporter-prometheus-victoriametrics-and-vmalert-0ab020c7202a 2026-03-18 02:42:49 +01:00
Aliaksandr Valialkin
c04a5a597d docs/victoriametrics/Articles.md: add https://apprecode.com/blog/a-complete-guide-to-victoriametrics-a-prometheus-comparison-and-kubernetes-monitoring-implementation 2026-03-18 02:41:34 +01:00
JAYICE
e695d5f425 app/vmselect: retry with a new connection when the previous RPC fails on a broken connection
This commit adds an RPC retry that dials a new connection instead of
taking an old one from the connection pool when the previous RPC error
is `io.EOF`.

It helps prevent broken connections from lingering too long and causing
failed requests and partial responses during a `vmstorage` rolling
restart.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10314
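
A minimal sketch of this retry pattern (helper names such as getConn and dialConn are assumptions, not the actual netstorage API):

```go
package netstorageexample

import (
	"errors"
	"io"
	"net"
)

// execRPC runs do on a pooled connection and retries once on a freshly
// dialed connection if the pooled one turns out to be broken.
func execRPC(getConn, dialConn func() (net.Conn, error), do func(net.Conn) error) error {
	c, err := getConn() // may return a stale connection from the pool
	if err != nil {
		return err
	}
	if err := do(c); !errors.Is(err, io.EOF) {
		return err // nil on success, or any non-EOF error
	}
	// io.EOF means the pooled connection was broken, e.g. because
	// vmstorage was restarted. Retry once on a freshly dialed connection
	// instead of taking another (likely also broken) one from the pool.
	c.Close()
	if c, err = dialConn(); err != nil {
		return err
	}
	return do(c)
}
```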
2026-03-17 11:01:16 +01:00
andriibeee
2bb03f6e34 lib/storage, lib/mergeset: properly account for inmemoryPart refCount
Previously inmemoryPart refCount was not properly decremented.

Previous behavior:
* createInmemoryPart calls newPartWrapperFromInmemoryPart and returns a partWrapper with refCount=1
* multiple parts are merged in mustMergeInmemoryPartsFinal, which creates a new merged part
* the source partWrappers are never decRef'd
* since refCount never reaches 0, putInmemoryPart and (*part).MustClose are never called

This commit properly decrements refCount in mustMergeInmemoryPartsFinal.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10086
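
A hedged sketch of the fix (the types and helpers below are simplified stand-ins, not the actual lib/mergeset code):

```go
package mergesetexample

import "sync/atomic"

// partWrapper is a simplified stand-in for the real wrapper type.
type partWrapper struct {
	refCount atomic.Int32
}

func (pw *partWrapper) decRef() {
	if pw.refCount.Add(-1) == 0 {
		// In the real code this is where putInmemoryPart and
		// (*part).MustClose finally run.
	}
}

func mustMergeInmemoryPartsFinal(pws []*partWrapper) *partWrapper {
	merged := mergeParts(pws) // the merged part starts with refCount=1
	// The fix: release the source parts so their refCount can reach 0.
	for _, pw := range pws {
		pw.decRef()
	}
	return merged
}

func mergeParts(pws []*partWrapper) *partWrapper {
	pw := &partWrapper{}
	pw.refCount.Store(1)
	return pw // merging logic elided
}
```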
2026-03-17 10:54:08 +01:00
Br1an
92f03344eb lib/promscrape/discovery/yandexcloud: add folder_ids option
This commit adds a new `folder_ids` field in
`yandexcloud_sd_configs` that allows users to specify Yandex Cloud
folder IDs directly, bypassing the organization->cloud->folder hierarchy
traversal.

Previously, the Yandex Cloud service discovery required traversing the
entire resource hierarchy (organizations -> clouds -> folders ->
instances) to discover instances. This works when the Service Account
has permissions at all levels. However, some Service Accounts may only
have permissions at the folder level, causing discovery to fail when it
cannot access organization or cloud resources.

With this change, users can now configure folder IDs directly:

```yaml
yandexcloud_sd_configs:
  - service: compute
    folder_ids:
      - folder-id-1
      - folder-id-2
```

When `folder_ids` is specified, the discovery skips the hierarchy
traversal and directly queries instances from the specified folders.
This is a backward-compatible change - when `folder_ids` is not
specified, the existing behavior is preserved.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10587
2026-03-17 10:51:05 +01:00
Artem Fetishev
e3360b87ff docs: run make docs-update-flags
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-16 16:59:51 +01:00
Artem Fetishev
4c98b912fa docs: bump version to v1.138.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-16 16:54:04 +01:00
Artem Fetishev
225e2e870b deplyoment/docker: bump version to v1.138.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-16 16:48:43 +01:00
Artem Fetishev
2b078301c1 docs/CHANGELOG.md: update changelog with LTS release notes
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-16 15:23:05 +01:00
Arie Heinrich
14090c5a07 all: spelling fixes in code comments (#10650)
fixing spelling issues in comments and text strings

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-03-16 11:11:54 +01:00
Arie Heinrich
66d47f23e4 docs: spelling fixes (#10649)
fix spelling in docs (trailing whitespace may also have been removed, as the editor does by default)

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Arie Heinrich <arie.heinrich@outlook.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-16 11:11:25 +01:00
Roman Khavronenko
eacdb80ed7 docs: add AI tools section to the docs (#10642)
The new section is placed in root directory and is supposed to promote
information about the following tools:
* MCP servers for Logs, Traces and Metrics
* List of available agentic skills

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-16 11:10:52 +01:00
Roman Khavronenko
504cf31dab docs: minor wording updates in storage section (#10633)
The change suppose to make it more clear for understanding and stress
attention on important things.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-16 11:10:12 +01:00
Roman Khavronenko
34d190b32a dashboards: add dashboard for exploring stored metrics (#10617)
The new Grafana dashboard uses the following APIs:
- /api/v1/status/tsdb
- /api/v1/status/metric_names_stats

It shows the list of metric names, the request count, and the last time
they were "used". Clicking on a metric name allows exploring its
cardinality.

Based on https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9832
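
For reference, the underlying endpoints can be queried directly (host and port below are illustrative for a single-node setup):

```
curl 'http://localhost:8428/api/v1/status/tsdb?topN=10'
curl 'http://localhost:8428/api/v1/status/metric_names_stats'
```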

-----------

The PR contains a few unrelated changes: 
* rename of folder for prometheus datasource to remove the duplicated
word
* fix for vmalert's access to the datasource, as before it wasn't able
to write/read properly

-------------

The dashboard screen cast:


https://github.com/user-attachments/assets/01dda5d9-14e5-4f5a-b795-a838abec4f5e

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Haley Wang <haley@victoriametrics.com>
2026-03-16 11:09:06 +01:00
Roshan Banisetti
44fa216bb5 app/vmui: show seriesCountByMetricName when label is in focus in Cardinality Explorer (#10638)
### Describe Your Changes

When a label is set as focus label in the Cardinality Explorer, the
"Metric names with the highest number of series" table was hidden. This
change makes it visible alongside the focus label values table.

### How to reproduce

  1. Go to Explore → Cardinality Explorer
2. Enter a selector like `{namespace!=""}` and set Focus label to
`namespace`
  3. Click Execute Query

**Before:** Only "Values for 'namespace' label..." table is shown
**After:** "Metric names with the highest number of series" table is
also shown

<img width="1512" height="723"
alt="b2a8395a1577b31f58ae00f87e29eb87ca98eabfd0b3c0d9185be8f3a9789b5f"
src="https://github.com/user-attachments/assets/50c7f67a-1cfc-40d0-8e99-7750a933ee45"
/>

Fixes #10630 

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Roshan1299 <banisettirosh@gmail.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2026-03-16 11:04:50 +01:00
JAYICE
4589442345 dashboard: refine top10 instances by sample panel in vmagent (#10655)
### Describe Your Changes

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10654

<img width="1995" height="846" alt="image"
src="https://github.com/user-attachments/assets/673afd18-9d64-43d3-9ec2-38508847a851"
/>


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-03-16 11:04:04 +01:00
Artem Fetishev
78ad4b974c docs: cut release v1.138.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-13 16:16:31 +00:00
Artem Fetishev
d12524749f make docs-update-version
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-13 17:03:48 +01:00
Artem Fetishev
1a5235a18f make vmui-update
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-13 15:48:15 +00:00
Max Kotliar
27847dbbb8 docs: chore vmauth jwt related documentation
fix tags
add available_from
add cross links
2026-03-13 15:40:39 +02:00
Andrii Chubatiuk
33fab3a2d6 lib/backup/s3remote: overwrite source tags while syncing parts from one S3 location to another
In case of conflicting tags while syncing the latest backup with other backup types, S3 keeps the original tags by default. This commit changes the default behaviour so that the original tags are replaced.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/issues/1004
2026-03-13 13:09:51 +01:00
f41gh7
695b21ecfc docs/changelog: mention vmbackupmanager bugfix at changelog
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10639
2026-03-13 10:28:26 +01:00
Nikolay
4ba488f806 lib/jwt: support regex value claim matching
This commit adds regex-based value matching for JWT claims.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10584 Fixes
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10628
2026-03-13 10:14:05 +01:00
dependabot[bot]
1e046d35a8 build(deps): bump immutable from 5.1.4 to 5.1.5 in /app/vmui/packages/vmui (#10586)
Bumps [immutable](https://github.com/immutable-js/immutable-js) from
5.1.4 to 5.1.5.

This release fixes 'Improperly Controlled Modification of Object Prototype Attributes' (prototype pollution) in immutable.

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10586

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-12 18:08:38 +02:00
dependabot[bot]
8c9b202c94 build(deps): bump rollup from 4.52.5 to 4.59.0 in /app/vmui/packages/vmui (#10556)
Bumps [rollup](https://github.com/rollup/rollup) from 4.52.5 to 4.59.0.

Notable changes from the rollup release notes: v4.59.0 throws when the
generated bundle contains paths that would leave the output directory;
v4.58.0 supports the `__NO_SIDE_EFFECTS__` annotation before variable
declarations declaring function expressions; v4.57.1 fixes a heap
corruption issue on Windows. This version also modifies the `prepare`
script that runs during installation, so review the package contents
before updating.

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10556

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-12 18:08:25 +02:00
dependabot[bot]
060423141d build(deps): bump minimatch in /app/vmui/packages/vmui (#10555)
Bumps [minimatch](https://github.com/isaacs/minimatch). These
dependencies needed to be updated together: `minimatch` is updated from
3.1.2 to 3.1.5 and from 9.0.5 to 9.0.9. Per the commit history, the
3.1.x releases add a ReDoS warning to the docs, fix partial matching of
globstar patterns, and limit recursion for `**` to considerably improve
performance.

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10555

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-12 18:08:16 +02:00
dependabot[bot]
0ee16ff2e5 build(deps): bump crazy-max/ghaction-import-gpg from 6 to 7 (#10572)
Bumps
[crazy-max/ghaction-import-gpg](https://github.com/crazy-max/ghaction-import-gpg)
from 6 to 7.

Notable changes from the v7.0.0 release notes: Node 24 becomes the
default runtime (requires Actions Runner v2.327.1 or later) and the
action switches to ESM.

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10572

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-12 17:21:59 +02:00
Max Kotliar
b22853b97f lib/jwt: mark deprecated properties needed only for vmgateway 2026-03-12 16:00:23 +02:00
Max Kotliar
b578fe9817 docs: add guides for vmauth jwt authentication (#10129)
### Describe Your Changes

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9439

Commit adds two guides:
- The first sets up Keycloak, vmcluster, vmauth, and Grafana, and
demonstrates how to log in to Grafana using OIDC and how to use the JWT
token to limit the metrics the Grafana datasource fetches from
vmcluster.
- The second demonstrates how to configure vmagent so that it obtains a
JWT token and uses it during remote write requests.

To preview the guides locally, check out the branch, run `make docs-debug`,
and open `http://localhost:1313` in a browser.

vmauth JWT-related PRs should be merged into the
[vmauth-jwt](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/vmauth-jwt)
branch and, when everything is ready, merged into master.

Debug notes for the guides:
https://github.com/VictoriaMetrics/debug-notes/tree/main/guides/vmauth-jwt

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Pablo Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-12 15:46:46 +02:00
f41gh7
e9b7adc0e5 vendor: update metrics package
Related to https://github.com/VictoriaMetrics/metrics/issues/85
2026-03-12 09:41:47 +01:00
Max Kotliar
82eab5c5b7 lib/encoding: fix integer overflow in UnmarshalBytes (#10629)
A poison varint (MaxUint64 encoded as a varint, 0xFFFFFFFFFFFFFFFF) triggers the bug:
the bounds check `uint64(nSize)+n` overflows to 9 and bypasses the guard.
Then `int(MaxUint64) = -1` yields the slice expression `src[10:9]`, which panics.
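
A hedged sketch of the overflow and of an overflow-safe check (the function and variable names are illustrative, not the actual lib/encoding code):

```go
package encodingexample

import (
	"encoding/binary"
	"fmt"
)

// unmarshalBytes reads a varint-prefixed byte slice from src.
func unmarshalBytes(src []byte) ([]byte, []byte, error) {
	n, nSize := binary.Uvarint(src) // n = declared length, nSize = varint size in bytes
	if nSize <= 0 {
		return nil, nil, fmt.Errorf("cannot unmarshal varint")
	}
	// Buggy guard: for n = MaxUint64 and nSize = 10, uint64(nSize)+n
	// wraps around to 9, so `uint64(nSize)+n > uint64(len(src))` lets
	// the poison value through and the later slicing panics.
	//
	// Overflow-safe guard: compare n against the bytes that remain
	// after the varint, so no addition can overflow.
	tail := src[nSize:]
	if n > uint64(len(tail)) {
		return nil, nil, fmt.Errorf("src is too short for %d bytes", n)
	}
	return tail[:n], tail[n:], nil
}
```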
2026-03-11 11:42:49 +01:00
Max Kotliar
5b4ab4456e lib/jwt: Verifier support jwks kid (#10611)
### Describe Your Changes

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10606

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Nikolay <nik@victoriametrics.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-11 00:20:02 +02:00
Nikolay
d3ccc8d7a7 app/vmauth: remove data-race at default_url proxy
Previously there was a data race: targetURL was updated concurrently
when requests were routed to the default URL.

This commit fixes the data race and adds concurrency to the routing tests.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10626
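
A hedged sketch of the shape of such a fix (derive a per-request URL instead of mutating shared state; the names are assumptions, not the actual vmauth code):

```go
package vmauthexample

import (
	"net/http"
	"net/url"
)

// targetURLForRequest builds the per-request target URL from the shared
// default URL without mutating it, so concurrent requests cannot race.
func targetURLForRequest(defaultURL *url.URL, r *http.Request) *url.URL {
	u := *defaultURL // shallow copy; only scalar fields are changed below
	u.Path = r.URL.Path
	u.RawQuery = r.URL.RawQuery
	return &u
}
```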
2026-03-10 21:00:16 +01:00
Fred Navruzov
eb34bdd8d9 docs/vmanomaly - release v1.29.0 (#10620)
### Describe Your Changes

Documentation updates following `v1.29.0` release of `vmanomaly`

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-03-10 19:41:46 +02:00
Roman Khavronenko
3139fa1c9b docs/vmalert: add more clarification on config reload procedure 2026-03-10 12:50:18 +01:00
Roman Khavronenko
8f4cdb8a42 app/vmauth: add request duration to access log
Request duration could be useful for tracking access logs too. For
example, track referrers for all slow requests.

While there, added tests to track log structure changes.

Related to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5936
2026-03-10 12:49:07 +01:00
andriibeee
f236801fa4 lib/promauth: support headers in oauth2 token_url requests
The OAuth2 token source library doesn't allow defining request headers explicitly.
This commit adds a custom transport to work around that: the transport makes a shallow copy of the http.Request and sets the additional headers on the copy.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8939
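
A minimal sketch of such a transport (the type and field names are assumptions, not the actual lib/promauth code):

```go
package promauthexample

import "net/http"

// headersTransport sets extra headers on every outgoing request, e.g.
// on OAuth2 token_url requests, without mutating the caller's request.
type headersTransport struct {
	base    http.RoundTripper
	headers map[string]string
}

func (t *headersTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// RoundTrippers must not modify the original request, so work on a
	// copy with a cloned header map.
	reqCopy := req.Clone(req.Context())
	for k, v := range t.headers {
		reqCopy.Header.Set(k, v)
	}
	base := t.base
	if base == nil {
		base = http.DefaultTransport
	}
	return base.RoundTrip(reqCopy)
}
```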
2026-03-10 10:10:56 +01:00
JAYICE
2c48133ad8 lib/filestream: properly account vm_filestream_write_duration_seconds_total metric
Previously vm_filestream_write_duration_seconds_total was increased in two places:
* statWriter.Write()
* Writer.MustFlush(), which eventually calls statWriter.Write(), hence double-counting vm_filestream_write_duration_seconds_total

For reference, vm_filestream_read_duration_seconds_total is increased only in statReader.Read to track the read syscall.

This commit removes latency tracking from the MustFlush method.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10564
2026-03-10 10:06:09 +01:00
f41gh7
1cb634858e app/vmauth: add match_claims JWT routing
This commit adds claims matching for JWT token auth.

It allows matching against any JWT token JSON field, with nested traversal.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10584
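
A hypothetical sketch of what such a config could look like (the `match_claims` name comes from this commit; the nesting notation and values are assumptions, see the vmauth docs for the actual syntax):

```yaml
users:
- jwt:
    # Hypothetical: route requests whose JWT contains
    # {"org": {"team": "backend"}} to a dedicated backend.
    match_claims:
      org.team: "backend"  # nested traversal notation is assumed
  url_prefix: "http://vmselect-backend:8481/"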
2026-03-10 10:48:18 +02:00
Max Kotliar
4b45f909b5 docs/changelog: port v1.136.1 changelog to master 2026-03-09 20:32:30 +02:00
Yury Moladau
4ae495bd1d app/vmui: rename debug tools buttons for clarity
Replace ambiguous button labels such as "Submit" and "Apply" with
clearer wording to indicate that these actions only preview results and
do not modify the deployment configuration.

Related issue: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10453
2026-03-09 14:27:36 +01:00
Max Kotliar
925b0ecdc9 app/vmauth: Implement OpenID Connect Discovery support
Add support for [OpenID Connect
Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html#IANA)
as an alternative way to obtain verification keys and rotate them
automatically.

`jwt` configuration should allow **exactly one** of the following
verification modes: `public_keys`, `oidc`, `skip_verify`. These options
must be mutually exclusive.

Example: OIDC configuration

```yaml
users:
- jwt:
    oidc:
      issuer: http://identity-provider.com
```

When `oidc` is enabled:

1. On startup, `vmauth` fetches:

   ```
   {issuer}/.well-known/openid-configuration
   ```
2. Extracts `jwks_uri`.
3. Fetches [JWK
keys](https://openid.net/specs/draft-jones-json-web-key-03.html#ExampleJWK)
from `jwks_uri`.
4. Uses discovered keys to verify JWT tokens.
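
A hedged Go sketch of steps 1-3 above (the struct and function names are illustrative, not the actual vmauth code):

```go
package oidcexample

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type openIDConfig struct {
	Issuer  string `json:"issuer"`
	JWKSURI string `json:"jwks_uri"`
}

// fetchJWKSURI resolves jwks_uri via OpenID Connect Discovery and
// enforces the issuer match required in OIDC mode.
func fetchJWKSURI(issuer string) (string, error) {
	resp, err := http.Get(issuer + "/.well-known/openid-configuration")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var cfg openIDConfig
	if err := json.NewDecoder(resp.Body).Decode(&cfg); err != nil {
		return "", err
	}
	if cfg.Issuer != issuer {
		return "", fmt.Errorf("issuer mismatch: got %q, want %q", cfg.Issuer, issuer)
	}
	return cfg.JWKSURI, nil
}
```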

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10585

Failure handling:
* If discovery fails at startup:
  * No keys are available.
  * The user is skipped.
* Discovery runs periodically in background (e.g., every 1 minute).
* If keys become available later, authentication should start working
automatically.
* If keys were previously fetched and the identity provider becomes
unavailable:
  * Cached keys must be preserved.
  * Authentication continues using cached keys.

#### JWT Requirements in OIDC Mode

When `oidc` is enabled:

* `iss` claim becomes
[mandatory](https://openid.net/specs/openid-connect-core-1_0.html#IDToken).
* `iss` [must
match](https://openid.net/specs/openid-connect-core-1_0.html#RotateEncKeys):
  * `oidc.issuer` from config.
  * `issuer` returned in the OpenID configuration document.
* JWT header must contain `kid`.
* `kid` must be used to select the appropriate key from JWKS.
* Tokens without `kid` must be rejected.
* Tokens without `iss` must be rejected.

Rationale
* Enables automatic key rotation.
* Eliminates manual public key configuration.
* Maintains compatibility with standard OIDC providers.

---------

Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-09 14:26:23 +01:00
Ihar Statkevich
1348b0e424 vmui: use increase_pure instead of rate for histogram heatmaps
- VMUI Explore Metrics uses `rate` for histogram bucket queries, which
skips the first observation in each bucket because `rate` requires two
data points to calculate a per-second rate.
- Replace `rate` with `increase_pure`, which assumes counters start from
0 and correctly shows the first observation when a new bucket appears.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10365
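
For illustration, the kind of query change this implies (the metric name is illustrative):

```
# before: misses the first observation in each bucket
sum(rate(vm_request_duration_seconds_bucket[5m])) by (vmrange)

# after: assumes counters start from 0, so new buckets show up immediately
sum(increase_pure(vm_request_duration_seconds_bucket[5m])) by (vmrange)
```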
2026-03-09 11:45:56 +01:00
Artem Fetishev
83656e544d lib/storage: remove 1 cpu special case from storage tests
The test should no longer fail on systems with 1 CPU because partition
indexDBs are not rotated. See #8948.

Also removed two TODOs from the test to keep it simple.
2026-03-09 11:44:35 +01:00
Nikolay
38a76eca7b app/vmauth: reduce memory allocations for JWT token parsing
This commit adds an in-memory pool for JWT tokens. It reduces memory
allocations and GC pressure.
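
A minimal sketch of the pooling approach (Token here is a simplified stand-in for the actual lib/jwt type):

```go
package jwtexample

import "sync"

// Token is a simplified stand-in for the parsed-token type.
type Token struct {
	Claims map[string]any
}

var tokenPool = sync.Pool{
	New: func() any { return &Token{Claims: map[string]any{}} },
}

func getToken() *Token { return tokenPool.Get().(*Token) }

func putToken(t *Token) {
	clear(t.Claims) // reset state so a reused token starts clean
	tokenPool.Put(t)
}
```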

 Benchmark results:
```
                                         │ before_optimisation.txt │       after_optimisation.txt        │
                                         │         sec/op          │   sec/op     vs base                │
JWTRequestHandler/full_template-10                     65.82µ ± 2%   26.87µ ± 2%  -59.18% (p=0.000 n=10)
JWTRequestHandler/token_without_claim-10               734.4n ± 1%   543.9n ± 0%  -25.94% (p=0.000 n=10)
JWTRequestHandler/expired_token-10                    1560.0n ± 0%   681.2n ± 1%  -56.33% (p=0.000 n=10)
geomean                                                4.225µ        2.151µ       -49.08%

                                         │ before_optimisation.txt │        after_optimisation.txt        │
                                         │          B/op           │     B/op      vs base                │
JWTRequestHandler/full_template-10                    33.60Ki ± 0%   16.52Ki ± 0%  -50.85% (p=0.000 n=10)
JWTRequestHandler/token_without_claim-10              1.605Ki ± 0%   1.105Ki ± 0%  -31.14% (p=0.000 n=10)
JWTRequestHandler/expired_token-10                    3.267Ki ± 0%   1.045Ki ± 0%  -68.01% (p=0.000 n=10)
geomean                                               5.606Ki        2.672Ki       -52.34%

                                         │ before_optimisation.txt │       after_optimisation.txt       │
                                         │        allocs/op        │ allocs/op   vs base                │
JWTRequestHandler/full_template-10                      224.0 ± 0%   172.0 ± 0%  -23.21% (p=0.000 n=10)
JWTRequestHandler/token_without_claim-10                17.00 ± 0%   13.00 ± 0%  -23.53% (p=0.000 n=10)
JWTRequestHandler/expired_token-10                      30.00 ± 0%   11.00 ± 0%  -63.33% (p=0.000 n=10)
geomean                                                 48.52        29.08       -40.06%
```

follow-up for f8a101e45e

related issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10492
2026-03-09 11:42:51 +01:00
f41gh7
dea915c10d deployment/docker: update Go builder from Go1.26.0 to Go1.26.1
See https://github.com/golang/go/issues?q=milestone%3AGo1.26.1%20label%3ACherryPickApproved
2026-03-09 11:39:32 +01:00
f41gh7
b3f57c113b lib/httpserver: fixes tests after 686c9a21ff 2026-03-05 16:12:29 +01:00
andriibeee
686c9a21ff lib/httpserver: handle preflight HTTP requests properly
Previously OPTIONS HTTP requests for CORS preflight checks would trigger
the original request handler. This pull request fixes that behavior to
align with https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/OPTIONS

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5563
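
A hedged sketch of the behaviour (header values are illustrative; the actual lib/httpserver code differs):

```go
package httpserverexample

import "net/http"

func corsMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", "*")
		if r.Method == http.MethodOptions {
			// Answer the CORS preflight check here instead of passing
			// the OPTIONS request to the original request handler.
			w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
			w.Header().Set("Access-Control-Allow-Headers", "Authorization, Content-Type")
			w.WriteHeader(http.StatusNoContent)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```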
2026-03-05 15:57:13 +01:00
Hui Wang
8f215137e7 docs: polish opentelemetry integration doc 2026-03-05 15:53:06 +01:00
Artem Fetishev
ed5dc35876 app/vmselect: Disable Graphite Tag Series HTTP endpoints (#10579)
Disabling is done by making the handlers for `/tags/tagSeries` and
`/tags/tagMultiSeries` return a `501 (Not Implemented)` status code
along with an error message saying that the API has been disabled and
will be removed in the future.

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10544.
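
A hedged sketch of the response such a handler returns (the handler wiring in the actual vmselect code differs):

```go
package vmselectexample

import (
	"fmt"
	"net/http"
)

func disabledTagsAPIHandler(w http.ResponseWriter, r *http.Request) {
	msg := fmt.Sprintf("the %s API is disabled and will be removed in future releases; "+
		"see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10544", r.URL.Path)
	http.Error(w, msg, http.StatusNotImplemented) // 501
}
```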


Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-05 14:27:43 +01:00
Artem Fetishev
13ab8cfb78 docs: Update docs to reflect partition index changes (#10582)
Now that indexDB is per-partition, the indexDB-related docs need to be
updated. Specifically, how the indexDB is cleaned up once it falls
outside the `-retentionPeriod`.

Follow-up for #8134.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Signed-off-by: Aliaksandr Valialkin <valyala@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-04 18:46:16 +01:00
Nikolay
f8a101e45e lib/jwt: remove memory allocation from token parsing
This commit adds a `Reset()` method to the Token struct.
It allows re-using a `Token` object, which reduces the memory allocations
needed for parsing a `Token` and the pressure on the garbage collector.

Additionally, it adds a fastjson parser, which allows performing claims
matching efficiently based on dynamic value input.

 Benchmark stats:

```
                                         │ profiles/jwt_parse_before.txt │    profiles/jwt_parse_after.txt     │
                                         │            sec/op             │   sec/op     vs base                │
TokenParse/simple-10                                       3375.0n ± 41%   335.6n ± 4%  -90.05% (p=0.000 n=10)
TokenParse/gateway_labels_and_filters-10                   4259.0n ±  6%   423.3n ± 5%  -90.06% (p=0.000 n=10)
TokenParse/scope_as_slice_string-10                        3781.5n ±  2%   374.7n ± 5%  -90.09% (p=0.000 n=10)
TokenParse/access_claim_string-10                          2974.5n ±  1%   290.9n ± 4%  -90.22% (p=0.000 n=10)
TokenParse/vmauth_related_fields-10                        4340.5n ±  2%   389.2n ± 2%  -91.03% (p=0.000 n=10)
geomean                                                     3.709µ         359.8n       -90.30%

                                         │ profiles/jwt_parse_before.txt │       profiles/jwt_parse_after.txt        │
                                         │             B/op              │     B/op      vs base                     │
TokenParse/simple-10                                        5.195Ki ± 0%   0.000Ki ± 0%  -100.00% (p=0.000 n=10)
TokenParse/gateway_labels_and_filters-10                    6312.00 ± 0%     16.00 ± 0%   -99.75% (p=0.000 n=10)
TokenParse/scope_as_slice_string-10                         6312.00 ± 0%     16.00 ± 0%   -99.75% (p=0.000 n=10)
TokenParse/access_claim_string-10                           4.789Ki ± 0%   0.000Ki ± 0%  -100.00% (p=0.000 n=10)
TokenParse/vmauth_related_fields-10                         6.327Ki ± 0%   0.000Ki ± 0%  -100.00% (p=0.000 n=10)
geomean                                                     5.693Ki                      ?                       ¹ ²
¹ summaries must be >0 to compute geomean
² ratios must be >0 to compute geomean

                                         │ profiles/jwt_parse_before.txt │      profiles/jwt_parse_after.txt       │
                                         │           allocs/op           │ allocs/op   vs base                     │
TokenParse/simple-10                                          39.00 ± 0%    0.00 ± 0%  -100.00% (p=0.000 n=10)
TokenParse/gateway_labels_and_filters-10                     53.000 ± 0%   1.000 ± 0%   -98.11% (p=0.000 n=10)
TokenParse/scope_as_slice_string-10                          54.000 ± 0%   1.000 ± 0%   -98.15% (p=0.000 n=10)
TokenParse/access_claim_string-10                             41.00 ± 0%    0.00 ± 0%  -100.00% (p=0.000 n=10)
TokenParse/vmauth_related_fields-10                           57.00 ± 0%    0.00 ± 0%  -100.00% (p=0.000 n=10)
geomean                                                       48.23                    ?                       ¹ ²
```

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10492
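Below is a hedged Go sketch of the reuse pattern behind these numbers: a `Reset()` method plus `sync.Pool` lets a `Token` be recycled across requests instead of reallocated. The field names and pool helpers are illustrative assumptions, not the actual lib/jwt API.

```go
// A hedged sketch of the Reset-and-reuse pattern; fields are illustrative.
package example

import "sync"

type Token struct {
	Algorithm string
	Claims    map[string]string
}

// Reset clears t so it can be reused for parsing the next token.
func (t *Token) Reset() {
	t.Algorithm = ""
	clear(t.Claims) // keep the allocated map, drop its contents
}

var tokenPool = sync.Pool{
	New: func() any { return &Token{Claims: make(map[string]string)} },
}

func getToken() *Token  { return tokenPool.Get().(*Token) }
func putToken(t *Token) { t.Reset(); tokenPool.Put(t) }
```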
2026-03-04 17:31:30 +01:00
Max Kotliar
a1a35fd870 .github: remove copilot instruction since we use cubic AI for code review
Copilot results were far from good, so we switched to Cubic AI.
2026-03-04 14:37:01 +02:00
Artem Fetishev
0d5df2722d lib/storage: add an apptest for Graphite tag registration (#10558)
Add an apptest for `/graphite/tags/tagSeries` and `/graphite/tags/tagMultiSeries` URLs path to test the time series registration in the index. This PR is a preparation for disabling these paths (#10544). For now just testing that they actually work as described in https://graphite.readthedocs.io/en/stable/tags.html#adding-series-to-the-tagdb.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-03-04 07:43:07 +01:00
Hui Wang
db3353c6e1 app/vmalert: support negative values for the group eval_offset option
There are the following main use cases for `eval_offset`:
1. To ensure rules are evaluated at an exact offset, so the results have
the exact timestamp the user wants.
2. The source data for a certain rule is delivered at a specific time
point, so rules need to be executed after that time point to get correct
results. For example, [chaining
groups](https://docs.victoriametrics.com/victoriametrics/vmalert/#chaining-groups).
3. A group contains some heavy rules that can take a few minutes to
finish. To guarantee a single evaluation can complete in time and not
delay the next run, the user may want to schedule the group to be
executed within [intervalStart, intervalEnd-avgTotalEvaluationDuration].

A negative value can be convenient for case 3, as users only need to set
the group's `eval_offset: -avgTotalEvaluationDuration` (a value slightly
bigger than the real duration, to leave some buffer, would be better);
see the sketch below.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10424
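Below is a hedged sketch of the scheduling arithmetic for case 3; the function and its semantics are illustrative assumptions, not vmalert's actual scheduler.

```go
// A hedged sketch: a negative offset counts back from the interval end, so
// eval_offset: -30s with interval: 5m evaluates 30s before each boundary.
package example

import "time"

func nextEvalTime(now time.Time, interval, offset time.Duration) time.Time {
	ts := now.Truncate(interval).Add(offset) // offset may be negative
	if offset < 0 {
		ts = ts.Add(interval) // shift into the current interval, near its end
	}
	for !ts.After(now) {
		ts = ts.Add(interval)
	}
	return ts
}
```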
2026-03-03 12:06:56 +01:00
Hui Wang
cfbc5ae31d dashboard: fix expressions in vmauth memory usage panel (#10574)
vmauth doesn't use fastcache or expose `vm_cache_size_bytes`, so
referencing `vm_cache_size_bytes` makes the expression evaluate to null.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10574/
2026-03-03 12:06:12 +01:00
hklhai
fdb3c96fc1 app/{vmagent,vminsert}: properly attach host label for datadog-sketches
Due to a bug introduced in the initial datadog-sketches API implementation, the `host` label was incorrectly obtained from the `Tags` structure, while it's actually present directly at the root of the protobuf message.

This commit properly attaches the `host` label in this case.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10557
2026-03-03 12:03:31 +01:00
Max Kotliar
486d923351 docs/changelog: sync lts changelogs 2026-03-02 20:20:31 +02:00
Max Kotliar
f8552bdc96 docs: bump version to v1.137.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-03-02 16:11:54 +02:00
Max Kotliar
893c981c57 deployment/docker: bump version to v1.137.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-03-02 16:04:23 +02:00
Hui Wang
3d7ff783b6 vmalert: prevent subsequent small remote write requests if the previous one takes too long
If the data flush to the remote write destination takes longer than the
periodic flush interval (default 2s), the ticker channel will contain a
stale tick, causing the ticker case to be selected too early with an
empty or small amount of data inside `wr`, resulting in a wasted remote
write request with one or two time series (if `ts, ok := <-c.input` was
also randomly selected beforehand).

We could also consider resetting the ticker after draining the stale tick
to ensure `wr` always accumulates data for the full flush interval, but
that seems like a minor improvement to me.
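Below is a hedged sketch of the drain logic; names are illustrative, not the actual vmalert remote-write client.

```go
// A hedged sketch: discard a stale tick left in the ticker channel by a slow
// flush, so the next select does not fire early with an almost empty batch.
package example

import "time"

func drainStaleTick(t *time.Ticker) {
	select {
	case <-t.C: // drop the tick accumulated while the previous flush was running
	default: // no stale tick; nothing to do
	}
}
```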
2026-03-02 11:28:10 +01:00
Zakhar Bessarab
78543b7f87 lib/backup/actions: do not set s3ACL by default
Disable the default ACL configuration, as ACLs are not always supported by
S3-compatible storages (for example, Linode does not support them in some
regions), which forces users to disable the ACL manually to make backups
work. Moreover, ACLs are no longer the recommended way of configuring
object access, since ACLs for buckets are disabled by default. Currently,
it is recommended to use policies for access control. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html

Fixes: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10539
2026-03-02 11:25:36 +01:00
Roman Khavronenko
f54d22562a docs: add availability mark for access_log feature in vmauth (#10567)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-03-02 11:23:55 +01:00
Roman Khavronenko
b672e05dce app/vmauth: support printing access logs per user
Add a new per-user option to print access logs. Such logs
contain a limited amount of information to prevent exposing
sensitive data.

Access logs can be enabled/disabled via hot-reload and can
help locate clients that incorrectly use or abuse vmauth.

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5936
2026-03-02 10:51:40 +01:00
Artem Fetishev
847871b916 apptest: Fix flaky tests
Cluster apptests failed from time to time with the following error:

```
timed out while waiting for inserted rows to be sent to vmstorage
cluster
```

due to incorrect calculation of the inserted row count before and after
insertion. This PR fixes it by moving the "before" count calculation
ahead of the send() operation.
2026-03-02 10:41:35 +01:00
Max Kotliar
2aecca1163 docs/changelog: fix link 2026-02-27 20:01:51 +02:00
Max Kotliar
d1efb2dd37 docs: cut release v1.137.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-27 19:56:51 +02:00
Max Kotliar
6882c72075 docs: update version to v1.137.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-27 19:18:53 +02:00
Max Kotliar
60eb543dba app/vmselect: run make vmui-update
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-27 18:53:43 +02:00
Max Kotliar
7db42b0659 go.mod: fix govulncheck
govulncheck ./...
=== Symbol Results ===

Vulnerability #1: GO-2026-4559
    Sending certain HTTP/2 frames can cause a server to panic in
    golang.org/x/net
  More info: https://pkg.go.dev/vuln/GO-2026-4559
  Module: golang.org/x/net
    Found in: golang.org/x/net@v0.50.0
    Fixed in: golang.org/x/net@v0.51.0
2026-02-27 14:45:32 +02:00
Hui Wang
8d924f0631 vmselect: revert rollup result cache for instant queries that contain rate function (#10553)
See reason in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10098#issuecomment-3895011084
2026-02-27 14:37:48 +02:00
Nikolay
791679253d lib/promauth: check client certificate rotation during requests
Previously, the client certificate was only refreshed during the TLS
handshake, which occurs when establishing a new connection. This meant
the remote HTTP server had to close the existing connection for the
client to pick up an updated (e.g. expired) certificate. As a
workaround, connection keep-alive could be disabled, but that
significantly increased request latency.

This commit adds a certificate check during HTTP RoundTrip. If the
client certificate has changed, the RoundTripper recreates the transport
and its connection pool. This behavior is already implemented for CA
certificate changes.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10393
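Below is a hedged Go sketch of the approach: compare the current client certificate with the one the transport was built with on each RoundTrip, and rebuild the transport (dropping its connection pool) when it changed. The types and helpers are illustrative assumptions, not the actual lib/promauth code.

```go
// A hedged sketch of certificate-rotation-aware round-tripping.
package example

import (
	"crypto/tls"
	"net/http"
	"sync"
)

type rotatingTransport struct {
	mu       sync.Mutex
	loadCert func() (*tls.Certificate, error) // re-reads the cert from disk
	cert     *tls.Certificate
	rt       http.RoundTripper
}

func (t *rotatingTransport) RoundTrip(r *http.Request) (*http.Response, error) {
	cert, err := t.loadCert()
	if err != nil {
		return nil, err
	}
	t.mu.Lock()
	if t.rt == nil || !certsEqual(t.cert, cert) {
		t.cert = cert
		t.rt = &http.Transport{ // fresh transport -> fresh connection pool
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{*cert}},
		}
	}
	rt := t.rt
	t.mu.Unlock()
	return rt.RoundTrip(r)
}

func certsEqual(a, b *tls.Certificate) bool {
	return a != nil && b != nil && len(a.Certificate) > 0 && len(b.Certificate) > 0 &&
		string(a.Certificate[0]) == string(b.Certificate[0])
}
```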
2026-02-27 13:19:50 +01:00
Max Kotliar
a745bb797a docs/changelog: add update note for multitenant api endpoint 2026-02-27 13:45:04 +02:00
Artem Fetishev
3607c53b7c lib/storage: rename cache methods to match unified format (#10534)
Per @valyala's request, rename storage cache methods to adhere to the
following format:

```
get[Value]By[Key]FromCache
put[Value]By[Key]ToCache
```

Also move `s.metricIDCache` methods from `indexDB` to `Storage` because
this cache exists at the `Storage` level.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-27 10:33:52 +01:00
John Allberg
7969647553 publish SPDX SBOM attestations for container images (#10474)
Enable BuildKit-native SPDX SBOM and provenance attestations by setting
`--sbom=true --provenance=true` in `docker buildx build` within
`publish-via-docker`.

- Set `--provenance=true --sbom=true` in `publish-via-docker` for both
Alpine and scratch variants
- Add SBOM section to SECURITY.md with inspection and Trivy scan
instructions
- Update Release-Guide.md
- Add changelog entry

Verified end-to-end: pushed test image to GHCR, confirmed SBOM
attestation via `docker buildx imagetools inspect`, and Trivy scan via
`trivy image --sbom-sources oci` succeeded (with 0 vulnerabilities :-)).

Fixes #10473 

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: John Allberg <john@ayoy.se>
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-27 10:50:03 +02:00
Hui Wang
5f887b66c5 docs: add a note for vmctl remote read stream mode (#10548)
Samples in Mimir (or Prometheus) are stored in chunks, which are
compressed efficiently using algorithms rather than being stored as
independent samples, see details in [this
article](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/)
and [this talk](https://www.youtube.com/watch?v=b_pEevMAC3I).
When using a small `--remote-read-step-interval`, particularly `minute`,
a single chunk may contain samples that exceed the requested time
window, and all the returned chunks contain overlapping samples.
Consequently, vmctl will read and migrate many duplicate samples into
VictoriaMetrics.

In tests, `--remote-read-step-interval=minute
--remote-read-use-stream=true` with a raw sample `scrape_interval: 10s`
and a remote read time range of 24h can write ~20x duplicated samples.
But I assume the minute interval is rarely used with a large time range,
and duplicates are fine in VictoriaMetrics due to deduplication, so we
don't need to disallow using it.
```
## --remote-read-step-interval=minute --remote-read-use-stream=false
## total samples: **15696611(the real number)**
2026/02/26 22:10:25 VictoriaMetrics importer stats:
  idle duration: 50.080851955s;
  time spent while importing: 32.108903417s;
  total samples: 15696611;
  samples/s: 488855.41;
  total bytes: 735.8 MB;
  bytes/s: 22.9 MB;
  import requests: 79;
  import requests retries: 0;
2026/02/26 22:10:25 Total time: 32.112912208s

## --remote-read-step-interval=day --remote-read-use-stream=true
## total samples: 15878869
2026/02/26 22:20:37 VictoriaMetrics importer stats:
  idle duration: 960.698874ms;
  time spent while importing: 6.338309625s;
  total samples: 15878869;
  samples/s: 2505221.41;
  total bytes: 278.6 MB;
  bytes/s: 44.0 MB;
  import requests: 80;
  import requests retries: 0;
2026/02/26 22:20:37 Total time: 6.340023167s

## --remote-read-step-interval=hour --remote-read-use-stream=true
## total samples: 21824000
2026/02/26 22:13:14 VictoriaMetrics importer stats:
  idle duration: 5.238827666s;
  time spent while importing: 7.274528s;
  total samples: 21824000;
  samples/s: 3000057.19;
  total bytes: 394.4 MB;
  bytes/s: 54.2 MB;
  import requests: 110;
  import requests retries: 0;
2026/02/26 22:13:14 Total time: 7.278895084s

## --remote-read-step-interval=minute --remote-read-use-stream=true
## total samples: **353800724(353800724/15696611~22.5)**
2026/02/26 22:18:41 VictoriaMetrics importer stats:
  idle duration: 1m45.09105431s;
  time spent while importing: 1m51.716730125s;
  total samples: 353800724;
  samples/s: 3166944.86;
  total bytes: 6.8 GB;
  bytes/s: 61.3 MB;
  import requests: 1769;
  import requests retries: 0;
2026/02/26 22:18:41 Total time: 1m51.721834958s
```
2026-02-27 10:45:52 +02:00
Roman Khavronenko
d3e2946791 dashboards: remove $instance from drilldown link (#10518)
For an unknown reason, the $instance variable can't be passed unescaped via
a dashboard link. As a result, clicking on a line in the panel opens a new tab
where the panel fails to render.

This happens when `$instance=$__all`. The rendered link becomes
`&var-instance=.*` which then gets double-escaped in the query and
yields no result. This behavior can be verified at
https://play-grafana.victoriametrics.com/.

I've tried to properly unescape the variable using
https://grafana.com/docs/grafana/latest/visualizations/dashboards/variables/variable-syntax
but found no solution.

Hence, proposing to remove this filter from drilldown.

------------



https://github.com/user-attachments/assets/faf76d63-7739-48d7-8ce6-3d567e77003c

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-02-27 10:41:26 +02:00
Max Kotliar
603dc03c7d dashboards: add job\instance filters to alerts statistics dashboard (#10549)
### Describe Your Changes

Add `job` and `instance` filters to the `VictoriaMetrics - Alert
statistics` dashboard. This allows users running multiple independent
[vmalert](https://docs.victoriametrics.com/victoriametrics/vmalert/)
instances to filter and analyze alerts statistics per specific instance,
making it easier to identify issues in a particular vmalert deployment.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-27 09:41:21 +02:00
Max Kotliar
1cc471a6c1 app/vmauth: userinfo returns jwt as name (#10546)
### Describe Your Changes

Previously it would return an empty string if the JWT auth method was
configured. The empty string complicates reading logs.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-26 16:57:18 +02:00
Max Kotliar
d40adb1e58 docs: reorganize OpenTelemetry documentation into integrations and data-ingestion (#10520)
### Describe Your Changes

Move OpenTelemetry-related documentation under docs/integrations and
docs/data-ingestion to establish a clear, scalable structure.

As OpenTelemetry support expands, we need a dedicated place to document
protocol details, implementation specifics, and known limitations, such
as:

- Delta temporality not working with downsampling. See
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10014#issuecomment-3697509266.
- Negative histogram buckets being discarded by VictoriaMetrics. See
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9896.

The new structure separates concerns:

- `docs/integrations/` — protocol overview, implementation details, and
limitations.
- `docs/data-ingestion/` — OpenTelemetry Collector configuration and
ingestion setup.

This aligns OpenTelemetry documentation with the existing structure used
across other integrations and ingestion methods.

New pages and links preserve backward compatibility.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-26 16:55:53 +02:00
Max Kotliar
8056806d5f docs/changelog: chore changelog 2026-02-26 14:52:53 +02:00
Roman Khavronenko
3d67942a65 Docs: add integration with bindplace (#10543)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-26 14:40:37 +02:00
Max Kotliar
23bdd14cee docs: refine vmauth jwt documentation 2026-02-26 14:38:53 +02:00
Pablo (Tomas) Fernandez
18a2955553 Docs: Update guide "Kubernetes monitoring with VictoriaMetrics Cluster" (#10410)
### Describe Your Changes

- Updated GKE version to a more current 1.34+
- Updated guide to more modern Helm and Kubectl versions
- Tested updated instructions on GKE 1.34.1-gke.3971001 (and a local k3s
instance) successfully
- Removed revision from Grafana values for helm chart (confirmed it
pulls the latest revision)
- Split the helm chart values (`guide-vmcluster-vmagent-values.yaml`)
into more readable chunks and added explanations next to each chunk
- Added and updated expected outputs. Some were missing and others were
outdated
- Updated Grafana dashboards screenshots since they changed from the
last revision
- Updated Grafana repo to use community org (old grafana chart was
deprecated
on Jan 30th -
[source](https://community.grafana.com/t/helm-repository-migration-grafana-community-charts/160983))
- Minor corrections and typo fixes. Improved flow
- Added a section at the end pointing readers where they can go next.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Vadim Rutkovsky <vadim@vrutkovs.eu>
2026-02-26 14:08:15 +02:00
hagen1778
570a9ef627 docs: update best recommendations for swap
* simplify wording
* add link to Grafana dashboards where they're mentioned

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-26 11:43:01 +01:00
Maxime Grenu
40e27fc2c8 docs/vmctl: fix invalid MetricsQL numeric literal in monitoring example (#10494)
## Summary

Fix an invalid MetricsQL numeric literal in the vmctl monitoring
documentation.

## Problem

The PromQL/MetricsQL example query for monitoring vm-native migration
data transfer speed used `1Mb` as a divisor:

```promql
rate(vmctl_vm_native_migration_bytes_transferred_total[5m]) / 1Mb
```

However, `Mb` is **not** a valid MetricsQL numeric suffix. According to
the [MetricsQL
documentation](https://docs.victoriametrics.com/victoriametrics/metricsql/#numeric-values):

> Numeric values can have `K`, `Ki`, `M`, `Mi`, `G`, `Gi`, `T` and `Ti`
suffixes.

The suffix `Mb` does not exist — only `M` (mega, 10^6) and `Mi` (mebi,
2^20 = 1,048,576) are valid.

## Fix

Replace `1Mb` with `1Mi` (1 mebibyte = 1,048,576 bytes), which is the
standard binary unit for memory/storage transfer measurements in
computing, and update the comment to reflect `MiB/s` instead of `MB/s`.

## Files Changed

- `docs/victoriametrics/vmctl/vmctl.md`: fixed the invalid literal `1Mb`
→ `1Mi` and updated the comment

---------

Signed-off-by: Maxime Grenu <maxime.grenu@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: Vadim Alekseev <vadimaleksv@gmail.com>
Co-authored-by: Yury Moladau <yurymolodov@gmail.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
2026-02-26 11:34:25 +01:00
hagen1778
befbf9afca deployment: include alert-statistics in default dashboards
Having this dashboard by default simplifies its maintenance.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-26 11:33:26 +01:00
hagen1778
65d0a8e129 dashboards: review alert-statistics dashboard
* add a meaningful description, as it is required for publishing on grafana.com
* remove dependency on `victoriametrics-metrics-datasource` as it is not used

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-26 11:33:26 +01:00
Hui Wang
c2841ca36c metricsql: add function histogram_fraction()
This commit improves compatibility with PromQL by introducing the missing function `histogram_fraction`.

`histogram_fraction` is a shortcut for `histogram_share(upperLe, buckets) - histogram_share(lowerLe, buckets)`.

histogram_count, histogram_sum and histogram_avg will not be added to MetricsQL, as they only operate on Prometheus native histograms, which don't have _count and _sum series like the classic histogram or the VictoriaMetrics histogram. For classic histograms, the _count and _sum series can be used directly.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5346.
2026-02-26 09:35:13 +01:00
Aliaksandr Valialkin
cd2026e430 lib/httpserver: prefer gzip over zstd compression for http responses if the client indicates it supports both methods
This is needed because some clients and proxies improperly handle zstd-compressed responses.
See https://github.com/VictoriaMetrics/victoriametrics-datasource/issues/455 .

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10535
2026-02-25 21:54:18 +01:00
Nikita Koptelov
216821aa1c app/vmselect: prom handler: LabelValues: decode UTF8-encoded label name
This commit enhances UTF-8 decoding for the `/label//values` API by making it
compatible with the Prometheus labelName encoding.

 If the label is encoded according to the Prometheus UTF8 encoding scheme
(https://github.com/prometheus/proposals/blob/main/proposals/0028-utf8.md),
decode it before doing the search.

Every label value that starts with "U__" is considered to be
UTF8-encoded, according to the spec.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10446
2026-02-25 19:50:45 +01:00
Nikolay
ef507d372b lib/promscrape: reduce CPU and memory usage for originalLabels
This commit optimizes the storage of originalLabels. Previously, they
were stored as a clone of the discovered labels, which required many
small allocations and added high pressure on the garbage collector.

Now originalLabels are stored as zstd-compressed JSON ([]byte). Since
they are rarely requested, the overhead of zstd decompression and
json.Unmarshal is negligible.

This optimization reduces memory usage for storing originalLabels by 3x
and CPU usage by 2x.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9952
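Below is a hedged sketch of the storage scheme: marshal the labels to JSON once, keep only the zstd-compressed bytes, and decompress on the rare occasions the originals are requested. It assumes github.com/klauspost/compress/zstd; the names are illustrative, not the actual lib/promscrape code.

```go
// A hedged sketch: keep originalLabels as zstd-compressed JSON.
package example

import (
	"encoding/json"

	"github.com/klauspost/compress/zstd"
)

var (
	encoder, _ = zstd.NewWriter(nil)
	decoder, _ = zstd.NewReader(nil)
)

type Label struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

// compressLabels marshals labels once and keeps only the compressed bytes.
func compressLabels(labels []Label) ([]byte, error) {
	data, err := json.Marshal(labels)
	if err != nil {
		return nil, err
	}
	return encoder.EncodeAll(data, nil), nil
}

// decompressLabels restores the labels on the rare occasions they are needed.
func decompressLabels(compressed []byte) ([]Label, error) {
	data, err := decoder.DecodeAll(compressed, nil)
	if err != nil {
		return nil, err
	}
	var labels []Label
	err = json.Unmarshal(data, &labels)
	return labels, err
}
```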
2026-02-25 19:47:30 +01:00
Nikolay
e383b62f59 lib/timerpool: remove misleading panic
Since Go 1.23 it's safe to ignore the timer.Reset return value.

According to the spec:

For a chan-based timer created with NewTimer, as of Go 1.23,
any receive from t.C after Reset has returned is guaranteed not
to receive a time value corresponding to the previous timer
settings.

If the program has not received from t.C already and the timer is
running, Reset is guaranteed to return true.
Before Go 1.23, the only safe way to use Reset was to call [Timer.Stop]
and explicitly drain the timer first.

Go 1.23 changed the timer implementation from asynchronous to
synchronous; previously it was possible for a channel send and
timer.Stop to happen at the same time.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9721
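Below is a hedged sketch of a timer pool that relies on the Go 1.23+ semantics quoted above, so the Reset return value can be ignored and no defensive panic is needed; illustrative, not lib/timerpool itself.

```go
// A hedged sketch of a timer pool under Go 1.23+ Reset/Stop semantics.
package example

import (
	"sync"
	"time"
)

var timerPool sync.Pool

func getTimer(d time.Duration) *time.Timer {
	if v := timerPool.Get(); v != nil {
		t := v.(*time.Timer)
		t.Reset(d) // safe to ignore the return value since Go 1.23
		return t
	}
	return time.NewTimer(d)
}

func putTimer(t *time.Timer) {
	t.Stop() // no need to drain t.C on Go 1.23+
	timerPool.Put(t)
}
```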
2026-02-25 19:45:12 +01:00
Max Kotliar
8f34284dd2 docs: add available_from for vmauth\jwt feature
Follow-up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10499
2026-02-25 15:24:39 +02:00
Max Kotliar
8f4eca39f7 app/vmauth: implement upstream request templating based on JWT vm_access claim
For proposal and implementation check out https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10492

address review comments

* simplify placeholder logic with pre-defined data structure
* add validation helper functions
* consolidate JWT placeholders parsing logic
* slightly reduce memory allocations for query templating
* do not allow templating for client request url params

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-02-25 14:46:51 +02:00
hagen1778
d467faf739 docs: add change lines after 673b2ca7db
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-25 11:27:53 +01:00
sias32
673b2ca7db dashboards/deployment: add links for vmalert (#10509)
### Describe Your Changes

1. Dashboard: add a link to an alert for quick access to it
(alert-statistics)
2. Rules: Replace localhost with $externalURL to take the address from
the --external.url flag

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: sias32 <sias.32@yandex.ru>
2026-02-25 11:26:44 +01:00
hagen1778
40ccf0c333 app/vmalert: fix typo Minium => Minimum
Follow-up after a6200cc83d

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-25 09:28:04 +01:00
hklhai
fe341a4204 vmagent: Improve Influx parsing error message when raw newline (\n) appears inside quoted field (#10524)
# Investigation & Root Cause --- InfluxDB Line Protocol Parsing with Raw
Newline (`\n`)

This document describes the investigation process and root cause
analysis for Influx Line Protocol parsing errors in VictoriaMetrics when
a **raw newline (`\n`) byte appears inside a quoted field value**.

------------------------------------------------------------------------

## Background

According to the Influx Line Protocol specification:

-   Each point must be represented as a single line.
-   The newline character (`\n`) separates points.
-   Literal newline bytes are not allowed inside quoted field values.

Therefore, any raw newline byte (`0x0A`) inside a quoted string makes
the line invalid.

------------------------------------------------------------------------

## Related Issue

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10067

------------------------------------------------------------------------

## Expected Behavior

VictoriaMetrics should reject Influx Line Protocol lines that contain a
raw newline inside a quoted field value, since this violates the
protocol specification.

The parsing failure itself is correct.

------------------------------------------------------------------------

## Actual Behavior

VictoriaMetrics rejects the line with the following error:

cannot parse field value for "...": missing closing quote for quoted
field value

While technically correct, the error message does not clearly indicate
that the root cause is a raw newline inside the quoted field value.

------------------------------------------------------------------------

## Minimal Reproducer

The issue can be reproduced without Telegraf or Jolokia:

``` bash
printf 'test value="hello
world"\n' | curl -X POST http://localhost:8428/write --data-binary @-
```

This produces:

cannot parse field value for "value": missing closing quote for quoted
field value

The failure occurs because the value contains an actual newline byte
(0x0A), not the escaped sequence `\n`.

------------------------------------------------------------------------

## Environment Setup

The issue was reproduced using the following stack:

-   VictoriaMetrics v1.127.0
-   InfluxDB 1.8
-   Spring Boot + Jolokia
-   Telegraf 1.36.2

Telegraf collects JVM `SystemProperties`, including:

``` json
"line.separator": "\n"
```

After JSON unmarshalling, this becomes a real newline byte in memory.

Detailed reproduction steps can be found here:

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10067#issuecomment-3896175100

------------------------------------------------------------------------

## Observed Serialized Line

Using breakpoint debugging in:

    lib/bytesutil/bytebuffer.go:58

The `ReadFrom` function reads and assembles an Influx line containing:

    SystemProperties.line.separator="
    ",

The quoted field contains an actual newline byte before the closing
quote.

This breaks the single-line assumption of Influx Line Protocol.

VictoriaMetrics splits on `\n`, resulting in:

-   A truncated first line
-   A missing closing quote
-   Parsing failure

------------------------------------------------------------------------

## Important Clarification

This issue is **not** caused by the escaped sequence `"\\n"`.

The failure occurs only when the serialized Influx line contains an
actual newline byte (`0x0A`) inside the quoted value.

Escaped `\n` (two characters: `\` and `n`) is valid.

------------------------------------------------------------------------

## Root Cause

-   Telegraf serializes a field containing a real newline byte.
-   Influx Line Protocol forbids literal newline characters inside
    quoted fields.
-   VictoriaMetrics correctly treats `\n` as a line separator.
-   The parser then encounters an incomplete quoted field and reports
    "missing closing quote".

The parsing behavior is correct per specification.

------------------------------------------------------------------------

## Proposed Improvement

The parsing logic should remain unchanged.

However, the error message can be improved to better indicate the root
cause.

Suggested error message:

invalid Influx line protocol: missing closing quote for quoted field
value;
this may be caused by a raw newline (`\n`) inside the quoted field value

This makes the failure immediately actionable and easier to diagnose.

------------------------------------------------------------------------

## Summary

-   The failure is caused by a raw newline byte inside a quoted field
    value.
-   This violates the Influx Line Protocol specification.
-   VictoriaMetrics correctly rejects the line.
-   The error message should explicitly mention the possibility of a raw
    newline (`\n`) inside the quoted field.

Signed-off-by: hklhai <hkhai@outlook.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2026-02-24 20:42:43 +02:00
Max Kotliar
83ebf00659 app/vmstorage: increase min free disk space from 10M to 100M (#10529)
### Describe Your Changes

The free disk space check is not continuous but occurs periodically. In
high-load environments with large ingestion rates, the system can write
more than the remaining 10MB between checks; for example, at a sustained
ingestion rate of 5 MB/s, a 10MB margin is exhausted in about two seconds.
This can lead to a situation where disk space is exhausted before the next
check occurs, causing a panic.

Increase the default value 10x to cover this case.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9561

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-24 18:06:55 +02:00
Roman Khavronenko
5e602726f5 app/vmselect: properly apply extra filters for tenant tokens for /api/v1/label/../values (#10503)
Previously, extra filters were ignored for
`/api/v1/label/vm_account_id/values` or
`/api/v1/label/vm_project_id/values` calls. As a result, even if a user's
visibility was limited by applying the
`?extra_filters[]={vm_account_id="1"}` param, they could get the list of
all available tenants in the system.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>

(cherry picked from commit d2a033453e)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-24 15:42:13 +01:00
hagen1778
a6200cc83d app/vmalert: rename MiniMum => Minimum
Follow-up after a5811d3c3b

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-24 15:37:02 +01:00
Fedor Kanin
a5811d3c3b docs/vmalert: fix a typo by replacing maxiMum with maximum (#10516)
### Describe Your Changes

Fix a typo by replacing `maxiMum` with `maximum` in Markdown docs and
CLI flags help.

Resolve #10515 

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-24 15:34:41 +01:00
JAYICE
5962b47c31 document: enrich the description of buckets_limit (#10465)
### Describe Your Changes

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10417

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-24 15:33:16 +01:00
Roman Khavronenko
9a4edc738a docs: re-visit Troubleshooting docs (#10512)
* remove ToC in the beginning, as it duplicates right-bar functionality
and is easier to make a mistake with. For example, it didn't have the
ZFS section in it
* simplify wording where it was possible
* reference new tools VM got in recent releases
* re-prioritize tips order based on personal experience

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
2026-02-24 15:30:31 +01:00
Roman Khavronenko
30d01e9cae dashboards: filter out zero value for Major page faults panel (#10517)
Components like vmselect and vminsert rarely touch disk, so most of the
time their values are 0. Filtering out 0 values makes the panel cleaner.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-24 15:30:05 +01:00
Artem Fetishev
6b46f3920c lib/uint64set: move set un/marshal methods from Storage to uint64set (#10521)
A refactoring that moves the uint64set.Set marshaling and unmarshaling from lib/storage/storage.go to lib/uint64set. Also added function docs and tests.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-24 11:15:49 +01:00
Zhu Jiekun
97b11146ee flaky test: disable GC during sync.Pool test (#10523)
Disable GC when testing sync.Pool `Get` and `Put` logic, so the items in the pool won't be recycled too fast.

Follow-up for 785daff65d.
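Below is a minimal sketch of the technique, assuming a toy test; `debug.SetGCPercent(-1)` disables GC so pooled items cannot be reclaimed between Put and Get.

```go
// A minimal sketch: disable GC for the duration of a sync.Pool test.
package example

import (
	"runtime/debug"
	"sync"
	"testing"
)

func TestPoolGetPut(t *testing.T) {
	defer debug.SetGCPercent(debug.SetGCPercent(-1)) // disable GC, restore on exit

	var p sync.Pool
	item := &struct{ n int }{n: 42}
	p.Put(item)
	if got := p.Get(); got != item {
		t.Fatalf("expected the pooled item back, got %v", got)
	}
}
```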
2026-02-24 10:19:04 +01:00
Fred Navruzov
2ef74bd6ea docs/vmanomaly - strip bad chars from filenames (#10525)
### Describe Your Changes

Strip spaces and `=` from filenames as suggested in #10522 

now
```shellhelp
find ./docs |egrep '[ =]'
```
returns no such files

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-24 10:05:48 +02:00
Max Kotliar
845161e377 .github: Run apptests on separate pool of runners
It should prevent apptest timeouts due to runner saturation. When
apptests run together with other tests and linters, they do not have enough
CPU to complete in time and often time out.

If one re-runs the apptests shortly after, they are likely to pass
because the same runner has enough resources available (the other jobs
have finished).

Remove GOGC=10, as the runner has enough memory (16GB) to run apptests.

I did some tests and observed a drop in overall test duration from 4.5m to
3-3.5m.
2026-02-23 14:16:40 +02:00
Vadim Rutkovsky
f176a6624a dashboards: operator dashboard should extract version from metrics (#10502)
### Describe Your Changes

Use vm_app_version to determine operator version instead of static text

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Vadim Rutkovsky <vadim@vrutkovs.eu>
2026-02-23 13:32:14 +02:00
Roman Khavronenko
4d06e34b66 docs: add dedicated opentelemetry section to docs (#10491)
The new section is supposed to contain OTel-related information for all
products, like VT, VM, VL.

It is also supposed to be visible to readers right away, without the need to
dig for info in each product.

It contains basic information and is supposed to act as a router to more
detailed info for each product.

While there, also updated VM-related OTel info.


---------

Depends on
https://github.com/VictoriaMetrics/victoriametrics-datasource/pull/458

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-23 10:24:01 +01:00
Aliaksandr Valialkin
6d8ddcb9ed vendor: update github.com/valyala/fastjson from v1.6.9 to v1.6.10
This fixes the issue mentioned at https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042#issuecomment-3936084518
2026-02-21 13:20:45 +01:00
Pablo (Tomas) Fernandez
dd4167709a Docs: Update guide "Getting started with VM Operator" (#10429)
### Describe Your Changes

- Add an introduction with a brief explanation of the operator and its
benefits as an intro
- Make some steps more explicit, instead of just linking to the VM
cluster guide
- Separate config/chart values files from kubectl apply (instead of
using heredoc and in-line yaml)
- Update screenshots and add figcaptions where needed
- Update Kubernetes and tools versions to newer releases
- Remove revision numbers from the Grafana config to install the latest
revision
- Added a section to configure scraping of Kubernetes resources (nodes,
pods, etc.)
- Tested updated instructions on GKE 1.33 and 1.34 (and a local k3s
instance) successfully
- Added and updated expected outputs. Some were missing and others were
outdated
- Updated Grafana dashboards screenshots since they changed from the
last revision
- Minor corrections and typo fixes. Improved flow
- Added a section at the end pointing readers to where they can go next.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-20 22:38:11 +02:00
Pablo (Tomas) Fernandez
71e253e1f0 Docs: update guide "Headlamp Kubernetes UI and VictoriaMetrics" (#10462)
### Describe Your Changes

- Updated introduction
- Added proper steps
- Tested instructions on the Headlamp desktop version and the in-cluster web
UI
- Added images to guide user
- Mentioned that the test connection button does not work (it probes a
`-healthy` endpoint that is not supported by VM). The plugin still
works, it's just the test button that fails
- Added links to the single and cluster installation guides

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-20 22:37:59 +02:00
Pablo (Tomas) Fernandez
9e155ffd9e Docs: Update Guide "How to delete or replace metrics in VictoriaMetrics" (#10500)
### Describe Your Changes

- Rewrote the introduction
- Added list of endpoints for single node, cluster, and cloud
- Added tips for working with VictoriaMetrics running on Kubernetes
- Fleshed out explanations for each step
- Added reference links for all required endpoints
- Tested every command

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-20 22:37:51 +02:00
Max Kotliar
2e9e40dc75 docs/changelog: add regexp example to bugfix description 2026-02-20 16:27:59 +02:00
Max Kotliar
10d4294f9b docs: tiny corrections 2026-02-20 16:21:06 +02:00
Max Kotliar
5e77771668 docs/changelog: chore changelog 2026-02-20 13:23:09 +02:00
Nikolay
dda5545078 lib/storage: properly search tenants
Commit 610b328e5a introduced a bug in the
date range search logic. If the first searched date for a given tenant
did not match, the search could proceed incorrectly.

This commit fixes the SearchTenants API by correctly advancing the date
passed to table.Seek.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10422
2026-02-20 12:03:00 +01:00
Roman Khavronenko
087efbc451 docs: clarify details on dump_request_on_errors
* add an example of the produced log, so users can understand the impact;
* stress once again the sensitive data exposure when
dump_request_on_errors is enabled.
2026-02-20 11:54:09 +01:00
Roman Khavronenko
68e64536b1 app/vmauth: clarify the error message for all failed backends
This change adds some context to the error when all backends failed. From
support cases it seems that without the context users might not know
what to do with this error message. The clarification advises them to check
the previous error messages.
2026-02-20 11:53:16 +01:00
Yury Moladau
6e3ce4d55c app/vmui: fix label escaping for cardinality and autocomplete (#10498)
This PR fixes handling of label names containing special characters
(e.g. `.`, `/`, `-`).

Changes:
- Fixed escaping logic for cardinality requests.
- Fixed autocomplete insertion to escape label names in query selectors.

Related issue: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10485
2026-02-20 11:52:30 +01:00
Vadim Alekseev
8d1b88f985 lib/regexutil: prevent panic error parsing regexp: expression nests too deeply
Previously, the regex simplify function attempted to parse the string representation of the simplified regex,
which could produce a runtime panic due to the std lib specification:

```
// Simplify returns a regexp equivalent to re but without counted repetitions
// and with various other simplifications, such as rewriting /(?:a+)+/ to /a+/.
// The resulting regexp will execute correctly but its string representation
// will not produce the same parse tree, because capturing parentheses
// may have been duplicated or removed.
```
 
This commit ignores the simplified-regex parsing error and returns the original regex instead.
This may skip simplification of some niche regex patterns,
but such cases are extremely rare in production, so the tradeoff is acceptable.

Fixes VictoriaMetrics/VictoriaLogs/issues/1112
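Below is a hedged sketch of the fallback logic described above; illustrative, not the actual lib/regexutil code.

```go
// A hedged sketch: fall back to the original regex when the string form of
// the simplified regex does not re-parse, instead of panicking.
package example

import "regexp/syntax"

func simplify(expr string) string {
	re, err := syntax.Parse(expr, syntax.Perl)
	if err != nil {
		return expr
	}
	s := re.Simplify().String()
	// Simplify() does not guarantee its string form produces the same parse
	// tree; ignore the error and keep the original regex if it does not parse.
	if _, err := syntax.Parse(s, syntax.Perl); err != nil {
		return expr
	}
	return s
}
```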
2026-02-20 11:51:42 +01:00
Max Kotliar
3d3c057d52 docs: make docs-update-flags should rely on git tag (#10490)
### Describe Your Changes

As requested by @valyala, changing the behavior of `make
docs-update-flags` from relying on the git worktree and specific git remotes
to relying on git tags, the same way `make publish-release` works.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-19 18:51:43 +02:00
Max Kotliar
94622fef29 lib/prommetadata: enable metrics metadata ingestion and storing by default (#10489)
### Describe Your Changes

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2974

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-19 18:45:44 +02:00
Aliaksandr Valialkin
804d77ffc5 all: run go fix -reflecttypefor 2026-02-19 14:05:06 +01:00
Aliaksandr Valialkin
79b18e9742 vendor: update github.com/valyala/fastjson from v1.6.8 to v1.6.9
This should help reducing memory usage at https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042
2026-02-19 13:28:41 +01:00
Benjamin Nichols-Farquhar
3404a47a6d lib/backup: implement cross-type backup copies
While server-side copies using the same backup origin and
destination are always the most efficient, there are times when moving
between backup locations is required.

Right now vmbackup throws an error in these cases.

While it's true that a user could always do a fresh backup from a
snapshot rather than copy an old backup, this requires access to storage
data locations and a running vmstorage instance, something that is not
_generally_ required for otherwise moving backups around in remote
locations using vmbackup.

This is a small change that makes the moving of backups from one
location to another transparent to users, without having to consider if
those locations are the same or different. This both simplifies backup
migrations and unlocks using vmbackup for more complex operations.

Specifically this came up in my use case because we want to orchestrate
the down-scaling of EBS volumes backing our vmstorage cluster, which
requires some complex backup operations, one of which being taking a
backup from s3 to a local filesystem.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10401
2026-02-18 21:45:28 +01:00
Aliaksandr Valialkin
0b8205ef46 lib/httpserver: escape the error string before sending it in the response to the client
See https://github.com/VictoriaMetrics/VictoriaMetrics/security/code-scanning/353
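Below is a minimal sketch of the idea: escape untrusted error text before echoing it back to the client. The function name is an illustrative assumption, not the lib/httpserver API.

```go
// A minimal sketch: HTML-escape error text before writing it to the response.
package example

import (
	"fmt"
	"html"
	"net/http"
)

func writeEscapedError(w http.ResponseWriter, statusCode int, err error) {
	w.Header().Set("Content-Type", "text/plain; charset=utf-8")
	w.WriteHeader(statusCode)
	fmt.Fprint(w, html.EscapeString(err.Error()))
}
```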
2026-02-18 20:39:52 +01:00
Aliaksandr Valialkin
53514febdc vendor: update github.com/VictoriaMetrics/VictoriaLogs from v0.0.0-20260125191521-bc89d84cd61d to v0.0.0-20260218111324-95b48d57d032 2026-02-18 20:39:19 +01:00
Aliaksandr Valialkin
8531d86da0 lib/timeutil: avoid losing the precision at decimalExp when converting it from int64 to int
This fixes https://github.com/VictoriaMetrics/VictoriaMetrics/security/code-scanning/354
2026-02-18 20:08:47 +01:00
Aliaksandr Valialkin
a47d32e129 vendor: run make vendor-update 2026-02-18 19:46:18 +01:00
Aliaksandr Valialkin
df96f4d3ab all: run go fix -omitzero 2026-02-18 19:37:07 +01:00
Aliaksandr Valialkin
84dc5453ad all: run go fix -minmax 2026-02-18 19:24:27 +01:00
Aliaksandr Valialkin
8093d98c0e all: run go fix -newexpr 2026-02-18 19:05:59 +01:00
Aliaksandr Valialkin
809f9471df all: run go fix -fmtappendf 2026-02-18 18:21:02 +01:00
Aliaksandr Valialkin
f9d6d2e428 all: run go fix -mapsloop 2026-02-18 18:17:20 +01:00
Aliaksandr Valialkin
32eac31416 all: run go fix -slicescontains 2026-02-18 18:17:20 +01:00
Artem Fetishev
4d4c1ff72e lib/storage: shard dateMetricIDCache (#10486)
Use the same sharded implementation as in metricIDCache. The change is
basically a copy-paste. The only difference is that the rotation period
remains `1h` instead `1m` in order not to break the fix for #10064.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
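Below is a generic sketch of the sharding pattern referenced above: keys are spread across independently locked shards to cut lock contention. It is illustrative only; the real dateMetricIDCache stores uint64set values rather than plain uint64s.

```go
// A generic sketch of a sharded, mutex-protected cache.
package example

import "sync"

const shardCount = 16 // power of two, so shard selection is a cheap mask

type cacheShard struct {
	mu sync.Mutex
	m  map[uint64]uint64
}

type shardedCache struct {
	shards [shardCount]cacheShard
}

func (c *shardedCache) get(key uint64) (uint64, bool) {
	s := &c.shards[key&(shardCount-1)]
	s.mu.Lock()
	v, ok := s.m[key]
	s.mu.Unlock()
	return v, ok
}

func (c *shardedCache) put(key, value uint64) {
	s := &c.shards[key&(shardCount-1)]
	s.mu.Lock()
	if s.m == nil {
		s.m = make(map[uint64]uint64)
	}
	s.m[key] = value
	s.mu.Unlock()
}
```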
2026-02-18 18:16:37 +01:00
Aliaksandr Valialkin
645ce2b6b3 all: run go fix -slicessort 2026-02-18 15:00:56 +01:00
Aliaksandr Valialkin
89600bd229 all: run go fix -any 2026-02-18 14:58:01 +01:00
Aliaksandr Valialkin
9b3a60efee lib/protoparser/protoparserutil: read request body to chunked buffer instead of contiguous byte slice
This should reduce memory reallocations and fragmentation when reading large request bodies from slow clients.
This also should reduce memory usage a bit because of the reduced memory fragmentation.

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042
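Below is a hedged sketch of the chunked-buffer idea: append the body into fixed-size chunks instead of one growing slice, so a large body never forces reallocating and copying a single contiguous buffer. Illustrative, not the actual protoparserutil code.

```go
// A hedged sketch: accumulate a request body as fixed-size chunks.
package example

import "io"

const chunkSize = 64 * 1024

type chunkedBuffer struct {
	chunks [][]byte
}

// ReadFrom reads r to EOF, storing the data in chunkSize-sized pieces.
func (cb *chunkedBuffer) ReadFrom(r io.Reader) (int64, error) {
	var total int64
	for {
		chunk := make([]byte, chunkSize)
		n, err := io.ReadFull(r, chunk)
		total += int64(n)
		if n > 0 {
			cb.chunks = append(cb.chunks, chunk[:n])
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return total, nil // reader exhausted
		}
		if err != nil {
			return total, err
		}
	}
}
```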
2026-02-18 14:50:31 +01:00
Aliaksandr Valialkin
a8c5934d1b vendor: update github.com/VictoriaMetrics/fastcache from v1.13.2 to v1.13.3 2026-02-18 14:28:34 +01:00
Aliaksandr Valialkin
43544fdb63 vendor: update github.com/valyala/fastjson from v1.6.7 to v1.6.8 2026-02-18 14:28:33 +01:00
Aliaksandr Valialkin
7a4df5755a go.mod: update github.com/VictoriaMetrics/metrics from v1.41.1 to v1.41.2, and github.com/VictoriaMetrics/metricsql from v0.84.10 to v0.85.0 2026-02-18 14:28:33 +01:00
Aliaksandr Valialkin
83bcbc43d1 app/vmauth: consistently use for i := range N instead of for i := 0; i < N; i++ 2026-02-18 14:28:32 +01:00
Aliaksandr Valialkin
79921cf434 app/vmctl: run go fix -rangeint 2026-02-18 14:28:32 +01:00
Aliaksandr Valialkin
40402fdac3 lib: run go fix -rangeint 2026-02-18 14:28:31 +01:00
Aliaksandr Valialkin
05943abc11 lib/persistentqueue: run go fix -rangeint 2026-02-18 14:28:31 +01:00
Aliaksandr Valialkin
e66e71c87e lib/streamaggr: run go fix -rangeint 2026-02-18 14:28:30 +01:00
Aliaksandr Valialkin
7f682c4c76 lib/promscrape: run go fix -rangeint 2026-02-18 14:28:30 +01:00
Aliaksandr Valialkin
4947cd7f14 lib/encoding: run go fix -rangeint 2026-02-18 14:28:30 +01:00
Aliaksandr Valialkin
5ea7314912 lib/mergeset: run go fix -rangeint 2026-02-18 14:28:29 +01:00
Aliaksandr Valialkin
655f0e9c1d lib/storage: run go fix -rangeint 2026-02-18 14:28:29 +01:00
Aliaksandr Valialkin
2ffd25a120 apptest: run go fix -rangeint 2026-02-18 14:28:28 +01:00
Aliaksandr Valialkin
175fcf6676 app/vmalert-tool: run go fix -rangeint 2026-02-18 14:28:28 +01:00
Aliaksandr Valialkin
c05516afbe app/vmselect: run go fix -rangeint 2026-02-18 14:28:27 +01:00
Aliaksandr Valialkin
6b12684e56 app/vmauth: run go fix -rangeint 2026-02-18 14:28:27 +01:00
Aliaksandr Valialkin
8f7c94f512 app/vmalert: run go fix -rangeint 2026-02-18 14:28:26 +01:00
Aliaksandr Valialkin
4a6259a9b2 app/vmagent: run go fix -rangeint 2026-02-18 14:28:26 +01:00
Max Kotliar
d5b9d3e641 dashboards/vmauth: Add Client request buffering latency panel (#10412)
### Describe Your Changes

In https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10310 ability
to [buffer request
body](https://docs.victoriametrics.com/victoriametrics/vmauth/#request-body-buffering)
was added to `vmauth`. This PR adds a new panel `Request body buffering
latency` to `vmauth` dashboard.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309

<img width="1504" height="680" alt="Screenshot 2026-02-07 at 00 28 46"
src="https://github.com/user-attachments/assets/ba98b06f-de2c-4d4c-96bb-e5c20049cebc"
/>

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Hui Wang <haley@victoriametrics.com>
2026-02-18 15:25:38 +02:00
Max Kotliar
6863de2c0e package/release: Add github-verify-release job (#10476)
### Describe Your Changes

The job ensures that:
- the draft release with the given `$(TAG)` exists
- the release has the expected `$(GITHUB_ASSETS_COUNT)` number of uploaded
assets
- all the assets were uploaded successfully.

It also adds a helper job `github-get-release`, which finds a draft release
by `$(TAG)` and stores it into the `/tmp/vm-github-release-$(TAG)` file.

The `github-delete-release` job is decoupled from the file produced by
the `github-create-release` job, so it can be run at any time from any
machine.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-18 15:05:50 +02:00
Artem Fetishev
51a3e4e27a lib/storage: metricIDCache follow-up for e5c8581bad (#10468) (#10479)
This is a follow-up PR for e5c8581bad (#10468):

- Extract the bucket size into a constant and document it
- Make benchmark constant metricIDCache-specific
- Add the same benchmark for dateMetricIDCache to compare it with metricIDCache.  See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10479 for benchmark results.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-17 16:10:32 +01:00
Max Kotliar
d7046d6e19 go.mod: update metrics module (#10470)
### Describe Your Changes

VictoriaMetrics binaries will now expose some process-level metrics when
run on macOS.

See:
- https://github.com/VictoriaMetrics/metrics/issues/75
- https://github.com/VictoriaMetrics/metrics/pull/107

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-16 19:52:14 +02:00
Max Kotliar
7e6c03e9c6 docs/changelog: correctly place feature into tip section 2026-02-16 19:43:01 +02:00
Max Kotliar
5267f35104 app/vmauth: authenticate by jwt token (#10435)
### Describe Your Changes

Adds JWT authentication support to vmauth with signature verification
and tenant-based access control. For now, public_keys have to be set
explicitly in the config; OIDC discovery will be added in upcoming PRs.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10445

Key Features

- JWT Configuration: Added `jwt_token` field to user config supporting
RSA/ECDSA public keys or skip_verify mode (for testing purposes).
- Token Validation: Verifies JWT signatures, checks expiration, and
extracts `vm_access` claims.
- Compatible with vmgateway: JWT tokens issued for vmgateway should work
with vmauth too.

Examples

```yaml
users:
- jwt_token:
    public_keys:
    - |
      -----BEGIN PUBLIC KEY-----
      MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA...
      -----END PUBLIC KEY-----
  url_prefix: "http://victoria-metrics:8428/"
```

```yaml
users:
- jwt_token:
    skip_verify: true
  url_prefix: "http://victoria-metrics:8428/"
```
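
For illustration only, a minimal Go sketch of the validation flow described
above (signature check, expiration, `vm_access` extraction). It uses the
third-party `github.com/golang-jwt/jwt/v5` package and hypothetical names;
it is not vmauth's actual `lib/jwt` code:

```go
package main

import (
	"fmt"
	"log"

	"github.com/golang-jwt/jwt/v5"
)

// validateToken is an illustrative helper: verify the RSA signature against
// a configured public key, let the parser reject expired tokens, then
// extract the vm_access claim.
func validateToken(tokenStr, publicKeyPEM string) (map[string]any, error) {
	pub, err := jwt.ParseRSAPublicKeyFromPEM([]byte(publicKeyPEM))
	if err != nil {
		return nil, fmt.Errorf("cannot parse public key: %w", err)
	}
	tok, err := jwt.Parse(tokenStr, func(_ *jwt.Token) (any, error) {
		return pub, nil
	}, jwt.WithValidMethods([]string{"RS256"})) // exp is validated by default
	if err != nil {
		return nil, fmt.Errorf("invalid token: %w", err)
	}
	claims, ok := tok.Claims.(jwt.MapClaims)
	if !ok {
		return nil, fmt.Errorf("unexpected claims type %T", tok.Claims)
	}
	vmAccess, _ := claims["vm_access"].(map[string]any)
	return vmAccess, nil
}

func main() {
	if _, err := validateToken("<jwt>", "<pem>"); err != nil {
		log.Println(err) // placeholders are not a real key/token
	}
}
```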


Constraints

- JWT tokens cannot be mixed with other auth methods (bearer_token,
username, password)
- Requires at least one public key OR skip_verify=true
- Limited to a single JWT user (multiple JWT users will be supported in
the future)

Next steps
- Multiple `jwt_token` support. 
- Claim matching
- Claim based routing
- OIDC/JWKS support

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
2026-02-16 19:40:54 +02:00
Max Kotliar
172ff84299 docs: start v1.136 lts line 2026-02-16 19:18:47 +02:00
Max Kotliar
a3f955dd84 docs: bump version to v1.136.0 2026-02-16 17:43:31 +02:00
Max Kotliar
19e7d986fe deployment/docker: bump version to v1.136.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-16 17:36:53 +02:00
Max Kotliar
db2ad6f900 docs/changelog: update changelog with LTS release notes 2026-02-16 17:31:02 +02:00
Max Kotliar
db1f3f4ab8 deployment/docker: Fix publish final fips images from rc 2026-02-16 14:18:55 +02:00
Max Kotliar
7386a35942 docs/changelog: cut v1.136.0 2026-02-13 19:58:15 +02:00
Max Kotliar
6be2d89008 app/vmselect: run make vmui-update 2026-02-13 19:44:54 +02:00
Artem Fetishev
e5c8581bad lib/storage: optimize metricIDCache sharding (#10468)
Exploit uint64set data structure peculiarities (adjacent elements are
stored in 64KiB buckets) to optimize the metricIDCache memory footprint.

As a result, the cache utilizes 87% less memory and is up to 90%
faster. See
[benchstat.txt](https://github.com/user-attachments/files/25294076/benchstat.txt).

Follow-up for #10388 and #10346.

Thanks to @valyala for the optimization idea.
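
A minimal sketch of the general idea (not the actual lib/storage code): since
uint64set keeps adjacent values in buckets covering 64Ki consecutive IDs,
sharding by the high bits of the metricID keeps every bucket inside a single
shard instead of scattering its elements:

```go
package main

import "fmt"

const shardCount = 16 // illustrative shard count

// shardIdx maps a metricID to a shard: IDs falling into the same 64Ki-wide
// uint64set bucket (identical metricID>>16) always land in the same shard,
// so buckets are never split across shards.
func shardIdx(metricID uint64) uint64 {
	return (metricID >> 16) % shardCount
}

func main() {
	// Adjacent metricIDs land in the same shard...
	fmt.Println(shardIdx(0x12340001), shardIdx(0x12340002))
	// ...while IDs from distant buckets may land in different shards.
	fmt.Println(shardIdx(0x99990001))
}
```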

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-13 18:29:48 +02:00
Nikolay
14bc51554b lib/storage: properly report metrics for the last partition
Previously, on the last day of a month, storage could report empty
metrics for the last partition. This could happen if a new empty
partition was created in updateNextDayMetricIDs or if time series with
future timestamps were ingested.

This commit adds a check to ensure the last partition belongs to the
current month. Since this is typically the most actively used partition,
it should be treated as the last one.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10387
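
A rough sketch of the added guard, with hypothetical names (the actual check
lives in lib/storage):

```go
package main

import (
	"fmt"
	"time"
)

// isCurrentMonth reports whether a partition timestamp belongs to the
// current month; only then should the partition be treated as the last
// (most actively used) one when reporting per-partition metrics.
func isCurrentMonth(ptTime time.Time) bool {
	now := time.Now().UTC()
	return ptTime.Year() == now.Year() && ptTime.Month() == now.Month()
}

func main() {
	fmt.Println(isCurrentMonth(time.Now().UTC()))                  // true
	fmt.Println(isCurrentMonth(time.Now().UTC().AddDate(0, 1, 0))) // false: future partition
}
```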
2026-02-13 11:21:20 +01:00
Max Kotliar
7db81d062c docs/changelog: chore tip before release 2026-02-13 10:32:42 +02:00
f41gh7
ad62fe88ed go.mod: update metricsql
It contains fix for https://github.com/VictoriaMetrics/metricsql/issues/60

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-02-12 23:49:54 +01:00
Artem Fetishev
40b85eb211 Makefile: rename integration-test to apptest (#10461)
Follow-up for 73015bccb9

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-12 19:01:16 +01:00
Roman Khavronenko
88b2464fe8 docs: simplify wording in the top section (#10451)
The purpose of the change is to make a better first impression on readers
by removing all unnecessary verbosity. As with the status pages, the aim is to
increase the density of useful information.

The initial idea was borrowed from @func25

---------------

<img width="961" height="649" alt="image"
src="https://github.com/user-attachments/assets/2a91ded5-17cf-49ad-a589-45b634af991a"
/>

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-12 19:27:46 +02:00
Max Kotliar
e4221f97a7 docs: mention top query by memory usage
Follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10391
2026-02-12 17:54:27 +02:00
Stephan Burns
d40696a2f2 Add restarts annotation to remaining dashboards (#10439)
### Describe Your Changes

Added annotation to show restarts.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Stephan Burns <34520077+Sleuth56@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-12 16:39:38 +02:00
Aliaksandr Valialkin
b2a74ec494 dashboards/vm/vmauth.json: run make dashboards-sync after the commit 9774fe8df1 according to dashboards/README.md
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10437
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10438
2026-02-12 14:24:09 +01:00
Mathias Palmersheim
9774fe8df1 Change user count query so it accounts for multiple replicas of vmauth (#10438)
### Describe Your Changes

Fixes issue where multiple replicas of vmauth cause the user count to be
inflated for vmauth see #10437

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-12 14:22:22 +01:00
Artem Fetishev
efd3b66609 Makefile: make vet and golangci-lint to also check synctests
Follow-up for 3d6f353430

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-12 13:16:55 +01:00
Zhu Jiekun
785daff65d vminsert: properly reset labelsBuf for OpenTelemetry ingestion to avoid high memory usage
Ensure proper expansion and reset of the `buf` size for OpenTelemetry
ingestion. This pull request does the following:
1. Flush data in `wctx` when `buf` grows over 4MiB.
2. Do not return `wctx` to the pool when `buf` is larger than 4MiB while
the actual in-use length is less than 1MiB.

Previously, when a small number of requests carried a large volume of
time series or labels, `buf` was over-expanded and recycled to the pool,
resulting in an excessive memory usage issue.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10378
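
The two rules translate roughly into the following pool hygiene; the
thresholds come from the description above, and the names are illustrative
rather than the exact vminsert code:

```go
package main

import "sync"

const (
	maxPooledCap   = 4 << 20 // do not pool buffers grown beyond 4MiB...
	minPooledInUse = 1 << 20 // ...unless at least 1MiB of them was actually used
)

var bufPool = sync.Pool{New: func() any { b := make([]byte, 0, 64*1024); return &b }}

// putBuf returns buf to the pool only when keeping it would not pin an
// over-expanded allocation that future requests are unlikely to need.
func putBuf(buf *[]byte) {
	if cap(*buf) > maxPooledCap && len(*buf) < minPooledInUse {
		return // oversized and mostly unused: let GC reclaim it
	}
	*buf = (*buf)[:0]
	bufPool.Put(buf)
}

func main() {
	b := bufPool.Get().(*[]byte)
	*b = append(*b, "labels data"...)
	putBuf(b)
}
```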
2026-02-12 12:49:05 +01:00
Roman Khavronenko
e3a57a3d80 docs: fix the broken image for single-node (#10460)
See
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10449#issuecomment-3890326179

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-12 12:47:26 +01:00
Pablo (Tomas) Fernandez
161633158c Docs: Update guide "How to use OpenTelemetry with VictoriaMetrics and VictoriaLogs" (#10396)
This is part of the effort to update and validate the [Guides in the
docs](https://docs.victoriametrics.com/guides/).

Doc page:
https://docs.victoriametrics.com/guides/getting-started-with-opentelemetry/

Functionally, nothing should change. Aside from the fix that prevented
one of the example applications from running, the rest of the commands in the
guide should be equivalent to the original.

Header anchor links do not change with this update. I added a few
headers, but the existing header anchors remain unchanged to
prevent breaking existing links.

- Tested on a more modern version of GKE to validate it still works OK
(1.34.1-gke.3971001)
- Changed wording of some sections to improve flow and readability
- Added some missing steps/troubleshooting
- Add tip annotations for cardinality explorer and setup references to
make them stand apart from the main content
- Use `kubectl port-forward svc/...` instead of `kubectl port-forward
pod` (service selectors vs pod names) in some test commands to make
instructions simpler
- Updated OpenTelemetry version to fix error that prevented
`app.go-collector.example` sample code from running
- Replaced the "Visit these links" part in the second program (with the
fast/slow endpoints) with curl commands
- Updated the first VMUI test link to show table instead of graph while
testing OpenTelemetry ingestion (default graph view can be confusing as
there metric value for `k8s_container_ready` doesn't really show any
values)
- Minor typos, grammar check, and consistency (Kubernetes vs kubernetes,
Helm vs helm, Collector vs collector, etc.)
2026-02-12 12:47:06 +01:00
Aliaksandr Valialkin
7b708a8947 .github/workflows/test.yml: use Go version in the cache key for golangci-lint
This should fix issues like in the https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/21943547755/job/63375204688 :

    package requires newer Go version go1.26 (application built with go1.25)
2026-02-12 12:20:48 +01:00
Roman Khavronenko
16d5f281fe lib/storage: use child trace during index searches
This change only affects the query trace. It correctly uses the branched
query trace in the callback function, so in the trace it is placed under the right
actions branch.

Bug was introduced in
c705da74f6
2026-02-12 12:19:00 +01:00
JAYICE
6846ca09cb document: add description of time-based Kafka commit
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10420
2026-02-12 12:08:53 +01:00
Roman Khavronenko
71997bc754 docs: mention Perses on integrations list (#10442)
While there, attempted to simplify wording in perses doc.
2026-02-12 12:08:27 +01:00
Roman Khavronenko
2ec6fafed0 docs: add diagrams for single and cluster components (#10449)
This PR adds a diagram for the single-node version and updates the diagram for the cluster
version. Both diagrams come with the excalidraw source attached, so they can be
updated in the future.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10398
2026-02-12 12:08:06 +01:00
Roman Khavronenko
a8ac5dfae5 docs: excalidraw vmagent diagram
Add the excalidraw source for the vmagent diagram, so it can be easily updated in
the future.

-----------------

<img width="936" height="671" alt="image"
src="https://github.com/user-attachments/assets/1dfc9cb5-0323-4e0d-881c-3c76ccda578f"
/>

<img width="922" height="706" alt="image"
src="https://github.com/user-attachments/assets/42297ede-5986-451c-83fc-c11dba9560e3"
/>

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-12 12:07:36 +01:00
Phuong Le
6292d5fefa ci: scope Go artifact cache restore fallback by Go version
Fixes
https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/21921172620/job/63301435721
2026-02-12 12:07:14 +01:00
Zhu Jiekun
2a09f25f78 docs: mentioning VictoriaTraces in vmalert's doc (#10457) 2026-02-12 12:06:50 +01:00
Aliaksandr Valialkin
6824ade224 lib/promscrape: follow-up for the commit 22696f378c
- Return back the check that the size of the scraped response doesn't exceed the maxScrapeSize
  at client.ReadData(). Without this check the scraped response may be truncated to maxScrapeSize+1
  bytes, which can result in a decompression error. The decompression error in this case
  hides the original error about the response size being too big. This complicates troubleshooting for users.

- Stop decompressing the scraped response as soon as the decompressed response size exceeds maxScrapeSize.
  This protects against the excess memory usage needed for holding decompressed responses with sizes exceeding
  maxScrapeSize.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10320
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9481
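
A sketch of the two guards with illustrative names (not the actual
lib/promscrape API): read at most maxScrapeSize+1 bytes so an oversized
response is detected before decompression, and cap the decompressed output
as well:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

const maxScrapeSize = 16 << 20 // illustrative; the real limit comes from a flag

// readScrapeBody reads a (possibly gzipped) scrape response, reporting a
// clear "too big" error instead of a confusing decompression error, and
// stops decompressing once the output exceeds the limit.
func readScrapeBody(r io.Reader, gzipped bool) ([]byte, error) {
	// Read one extra byte so that exceeding the limit is detectable.
	data, err := io.ReadAll(io.LimitReader(r, maxScrapeSize+1))
	if err != nil {
		return nil, err
	}
	if len(data) > maxScrapeSize {
		return nil, fmt.Errorf("scrape response exceeds %d bytes", maxScrapeSize)
	}
	if !gzipped {
		return data, nil
	}
	zr, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	out, err := io.ReadAll(io.LimitReader(zr, maxScrapeSize+1))
	if err != nil {
		return nil, err
	}
	if len(out) > maxScrapeSize {
		return nil, fmt.Errorf("uncompressed scrape response exceeds %d bytes", maxScrapeSize)
	}
	return out, nil
}

func main() {
	body, err := readScrapeBody(bytes.NewReader([]byte("metric 1\n")), false)
	fmt.Println(string(body), err)
}
```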
2026-02-12 11:39:51 +01:00
Aliaksandr Valialkin
3a3c2084d3 vendor: run make vendor-update 2026-02-11 17:52:42 +01:00
Aliaksandr Valialkin
3d6f353430 deployment/docker/Makefile: update Go builder from Go1.25.7 to Go1.26.0
See https://go.dev/doc/go1.26
2026-02-11 17:35:56 +01:00
Vadim Alekseev
b1f333093b .github/workflows: use Go version from go.mod (#1092) 2026-02-11 16:10:53 +01:00
Artem Fetishev
19403b9cd1 lib/storage: use workingsetcache for tfss loops cache again (#10427)
lrucache causes huge CPU usage in some caches. See #10297.

There was a hypothesis that this was due to a too short TTL in lrucache.
Setting it to 1h (the default workingsetcache eviction period) helped, but did not
completely eliminate the problem: the CPU utilization was no longer huge but still high.
See #10416.

Thus reverting back to workingsetcache to fix such deployments. This solution is temporary
because the cache consumes at least 32MB. There is one instance per
indexDB, which means that if the retention is 3y then the total memory
utilized by this cache will be over 1GB and most of it will be unused.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-11 15:18:34 +01:00
Max Kotliar
4edff7eae2 .github: pin go version to 1.25 to fix CI (#10448)
Go1.26 has been recently released and was picked up by CI actions.

The tests and linter actions started to fail with:

GOEXPERIMENT=synctest go vet ./lib/...
go: unknown GOEXPERIMENT synctest

This happens because Go 1.26 removed the synctest experiment.

Changelog:
This package was first available in Go 1.24 under GOEXPERIMENT=synctest,
with a slightly different API. The experiment has now graduated to
general availability. The old API is still present if
GOEXPERIMENT=synctest is set, but will be removed in Go 1.26.

https://go.dev/doc/go1.25#library

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-11 15:43:10 +02:00
Max Kotliar
ce4b131816 docs/changelog: cleanup after merge 2026-02-11 15:01:27 +02:00
Yury Moladau
cf69c56bb7 app/vmui: add label autocomplete context-aware by applying existing label matchers (#10399)
### Describe Your Changes

* Add context-aware label autocomplete by applying existing label
matchers (e.g. namespace/job) when fetching labels and label values.
* Update `package.json` dependencies.
* Update `vite.config.ts` to ensure correct API requests in playground
mode (`start:playground`).

Related issue: #9269

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2026-02-11 15:00:13 +02:00
JAYICE
42ec981fe9 vmui: add Queries with most memory to execute section in Top Queries page (#10391)
### Describe Your Changes

fix  https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9330

<img width="5088" height="1674" alt="image"
src="https://github.com/user-attachments/assets/4364cfae-8c56-417d-9d1c-6a219fa8802c"
/>


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: JAYICE <1185430411@qq.com>
2026-02-11 14:53:57 +02:00
Hui Wang
35e287d740 docs: remove incorrect description on -search.logSlowQueryStats (#10447)
>Query statistics logging is enabled by default {{% available_from
"v1.129.0" %}} with a threshold of 5s.
2026-02-11 14:49:22 +02:00
Fred Navruzov
9df9a77169 docs/vmanomaly: fix-non-canonical-url-reader-docs (#10444)
### Describe Your Changes

fix non-canonical link to MetricsQL

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-11 13:06:04 +02:00
Max Kotliar
17c514d2fa docs: use canonical link in life of sample diagram 2026-02-11 12:48:06 +02:00
Max Kotliar
c12512bdd7 lib/jwt: address code review comments (#10428)
### Describe Your Changes

Addressing code review comments from
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10426; kept them
separate to isolate the copy-paste change from follow-up changes

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-10 18:57:17 +02:00
Max Kotliar
a108da8215 lib/jwt: opensource jwt library (#10426)
### Describe Your Changes

It was
[decided](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9439#issuecomment-3612299461)
that OIDC authentication in vmauth will be part of the open source repo.

That requires opensourcing lib/jwt. The PR does not contain any changes in
logic, just a copy-paste from the enterprise repository.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9439

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-10 18:49:23 +02:00
Aliaksandr Valialkin
4e7606f669 lib/backup/actions: properly validate the size of the last part during restoring from backup
This issue has been found by https://www.cubic.dev/codebase-scan/7b15eebd-abc2-4604-9523-7f9bec5f67f6?violationId=324521b6-50fb-502d-8981-980bd9fd44ab
2026-02-10 15:16:21 +01:00
Aliaksandr Valialkin
060d7f6ed1 lib/protoparser/protoparserutil: limit the maximum size of the snappy-encoded data block, which can be read from the remote client
This is a follow-up for the commit 51b44afd34

This issue has been found by https://www.cubic.dev/codebase-scan/7b15eebd-abc2-4604-9523-7f9bec5f67f6?violationId=5a8fb3b7-1086-5d11-bb06-1f0864bd56ff
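
With snappy specifically, the decoded size is available up front, so such a
limit can be enforced before any decompression work happens. A sketch using
`github.com/golang/snappy` (already vendored by the repo); the limit value is
illustrative:

```go
package main

import (
	"fmt"

	"github.com/golang/snappy"
)

const maxDecodedSize = 64 << 20 // illustrative limit

// decodeLimited rejects snappy blocks whose decoded size exceeds the limit
// before allocating or decompressing anything.
func decodeLimited(compressed []byte) ([]byte, error) {
	n, err := snappy.DecodedLen(compressed)
	if err != nil {
		return nil, err
	}
	if n > maxDecodedSize {
		return nil, fmt.Errorf("snappy block decodes to %d bytes; limit is %d", n, maxDecodedSize)
	}
	return snappy.Decode(nil, compressed)
}

func main() {
	data, err := decodeLimited(snappy.Encode(nil, []byte("request body")))
	fmt.Println(string(data), err)
}
```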
2026-02-10 15:03:34 +01:00
Aliaksandr Valialkin
b3c1b00e4d lib/protoparser/protoparserutil: re-use byte buffers in readUncompressedData() with the capacity up to 1MiB
The expected size of the data ingestion request body accepted by VictoriaMetrics / VictoriaLogs / VictoriaTraces
exceeds 64KiB, and is close to 1MiB. That's why it is better to re-use byte buffers with capacities up to 1MiB,
even if less than 25% of their capacity was used the last time.

This should reduce the number of GC cycles at high data ingestion rates when the request body sizes
are distributed on both sides of the 16KiB ... 64KiB range.
This is a follow-up for 09d2ce36e8

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042
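
In other words, the usual "drop the buffer if it was mostly unused" pool
heuristic is relaxed for capacities up to 1MiB. A sketch of the heuristic
with assumed names, not the exact protoparserutil code:

```go
package main

import "sync"

var bbPool = sync.Pool{New: func() any { b := make([]byte, 0, 4*1024); return &b }}

// putBuf keeps any buffer up to 1MiB in the pool, since typical ingestion
// request bodies approach that size; larger buffers are still dropped when
// less than 25% of their capacity was used, to avoid pinning memory.
func putBuf(buf *[]byte) {
	const alwaysKeepCap = 1 << 20 // 1MiB
	if cap(*buf) > alwaysKeepCap && len(*buf) < cap(*buf)/4 {
		return
	}
	*buf = (*buf)[:0]
	bbPool.Put(buf)
}

func main() {
	b := bbPool.Get().(*[]byte)
	*b = append(*b, make([]byte, 512*1024)...) // a ~512KiB body stays pooled
	putBuf(b)
}
```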
2026-02-10 13:04:47 +01:00
Fred Navruzov
a65f693649 docs/vmanomaly: fix iframe params (#10421)
### Describe Your Changes

fix iframe params in embedded playgrounds on /anomaly-detection/ui/ ,
anomaly-detection/quickstart/ pages

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-10 12:41:38 +02:00
Hui Wang
6285bc4179 app/vmselect: properly count vm_deduplicated_samples_total{type="select"} metric
Previously `vm_deduplicated_samples_total{type="select"}` didn't take into account identical samples.

This commit takes them into account in the same way as the `vm_deduplicated_samples_total{type="merge"}` metric does.

Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10384.
2026-02-10 10:23:40 +01:00
Roman Khavronenko
e89f131e34 docs: update metadata API reference across the docs
* mention support of multitenancy in metadata
* add a basic alerting rule for tracking cache utilization
* clarify cleanup policy of metadata cache
2026-02-10 10:19:19 +01:00
Roman Khavronenko
493c1d410f app/vmagent: clarify global nature of remoteWrite.label cmd-line flag
Before, by mistake, the -remoteWrite.label flag was referenced in one part
of the doc as a per-remoteWrite-url flag. In fact, -remoteWrite.label is
global and applies labels to all remoteWrite URLs unconditionally.

This commit tries to clarify this in the docs:
* update the life-of-a-sample diagram to correct the label-applying
logic
* add a hint on how to add a label via `extra_label`
* remove the duplicated description for the -remoteWrite.label flag

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10373
2026-02-10 10:18:56 +01:00
Pablo (Tomas) Fernandez
b0029ee933 Update guide/k8s-monitoring-via-vm-single (#10372)
This is the first PR on a proposed series of updates to the guides.

I started with this one because:

- It's in the top ten guides according to Google Analytics
- It's a good starting point for me to get familiar with VM on Kubernetes

I plan to work through the rest of the guides in the following days
(coordinating the effort with JJ).

Changelog for this guide:

- Updated GKE version to a more current 1.34+
- Updated guide to more modern Helm and Kubectl versions
- Tested updated instructions on GKE 1.34.1-gke.3971001 (and a local k3s
instance) successfully
- Removed revision from Grafana values for helm chart (confirmed it
pulls the latest revision)
- Split the helm chart values into more readable chunks and added
explanations next to each chunk
- Added and updated expected outputs. Some were missing and others were
outdated
- Updated Grafana dashboards screenshots since they changed from the
last revision
- Updated Grafana repo to use community org (old grafana chart was
deprecated
on Jan 30th -
[source](https://community.grafana.com/t/helm-repository-migration-grafana-community-charts/160983))
- Minor corrections and typo fixes
- Added a section at the end pointing readers where they can go next.
2026-02-10 10:17:58 +01:00
Alexander Frolov
97e1308386 vmselect: handle NaN values when merging blocks
`vmselect` merges samples from multiple replicas using an optimistic
deduplication path.

c7f52992e7/app/vmselect/netstorage/netstorage.go (L593-L595)

This is useful when `replicationFactor > 1`. However, identical series
containing NaN values from different replicas are treated as different
(due to `NaN != NaN`), forcing the slower fallback path unnecessarily.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10384
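
The fix boils down to NaN-aware sample comparison, sketched below
(illustrative; the actual code lives in app/vmselect/netstorage):

```go
package main

import (
	"fmt"
	"math"
)

// sameValue treats two NaN samples as identical, so replicas that both
// report NaN at the same timestamp can take the fast deduplication path.
func sameValue(a, b float64) bool {
	return a == b || (math.IsNaN(a) && math.IsNaN(b))
}

func main() {
	nan := math.NaN()
	fmt.Println(nan == nan)          // false: plain == rejects identical NaNs
	fmt.Println(sameValue(nan, nan)) // true
}
```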
2026-02-10 10:16:17 +01:00
Max Kotliar
a279517034 dashboards: add source code data link to logging rate panel (#10406)
### Describe Your Changes

Add a Source Code data link (a link on a bar or line in the graph) that
points directly to a source code file on GitHub. The `VictoriaMetrics -
cluster`, `VictoriaMetrics - single-node`, and `VictoriaMetrics -
vmagent` dashboards were updated. I did not add it to other panels since
they do not have a Drilldown section at all.

Also, fixed a misplaced Drilldown link in the `VictoriaMetrics -
single-node` dashboard.

Proxy service code is here
https://github.com/VictoriaMetrics/location2source/

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-10 10:25:46 +02:00
Fred Navruzov
f7ba76a59d docs/vmanomaly: v1.28.6-1.28.7 (#10419)
### Describe Your Changes

- Updated docs to reflect v1.28.6-v1.28.7 changes
- Fixed typos and misaligned section content
- Embedded playgrounds into documentation (data querying, vmanomaly
experiment)

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-10 10:20:43 +02:00
Max Kotliar
60dbd5a97e dashboards: Rename "Concurrent flushes on disk" panel to "Concurrent inserts" (#10409)
### Describe Your Changes

The new title better aligns with the code of
[writeconcurrencylimiter](d9dabea303/lib/writeconcurrencylimiter/concurrencylimiter.go (L140)),
the panel description and the metric used in the query.

Previously, the panel title suggested that it reflected only disk write
performance. During an incident investigation, this led to a wrong
assumption that the panel was unrelated to client-side performance.

In reality, the metric [includes the full write
path](98e320842c/lib/vminsertapi/server.go (L263)):
time spent reading data from the TCP connection, processing it, and
acknowledging the block. The updated title reflects this behavior more
accurately and reduces the risk of misinterpretation during incident
analysis.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-09 19:40:41 +02:00
Aliaksandr Valialkin
32ddfa973b docs/victoriametrics/Single-server-VictoriaMetrics.md: add https://docs.victoriametrics.com/VictoriaMetrics.html seen in the wild according to the 404 pages report in Google Analytics 2026-02-09 16:59:26 +01:00
Aliaksandr Valialkin
d9554a3a22 docs/victoriametrics/Cluster-VictoriaMetrics.md: add https://docs.victoriametrics.com/Cluster-VictoriaMetrics/ alias seen in wild according to the 404 pages report in Google Analytics 2026-02-09 16:58:05 +01:00
Aliaksandr Valialkin
fbab6403dc docs/victoriametrics/MetricsQL.md: add https://docs.victoriametrics.com/MetricsQL/ alias seen in wild according to the 404 pages report in Google Analytics 2026-02-09 16:56:49 +01:00
Jayice
07dd79608b app/vmselect: align graphite render API process timeout to query deadline
Previously the error returned on timeout suggested a memory leak, which
could confuse a user. In reality a timeout could happen if vmselect is
overloaded or the query takes a lot of time to process. The commit
aligns rss.RunParallel with the query deadline set either via the
`-search.maxQueryDuration` flag or the `timeout` query argument. The logged
warn message is adjusted to suggest increasing resources or the timeout.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8484
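
A sketch of the deadline wiring described above, with hypothetical names
(the real code goes through vmselect's search deadline helpers): the
effective timeout is the smaller of the -search.maxQueryDuration flag and
the per-request `timeout` argument:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

var maxQueryDuration = 30 * time.Second // stands in for -search.maxQueryDuration

// queryDeadline derives the processing deadline from the flag value and the
// optional `timeout` query arg, whichever is smaller.
func queryDeadline(r *http.Request, start time.Time) time.Time {
	d := maxQueryDuration
	if s := r.URL.Query().Get("timeout"); s != "" {
		if t, err := time.ParseDuration(s); err == nil && t > 0 && t < d {
			d = t
		}
	}
	return start.Add(d)
}

func main() {
	r, _ := http.NewRequest("GET", "/render?timeout=5s", nil)
	fmt.Println(time.Until(queryDeadline(r, time.Now())).Round(time.Second)) // ~5s
}
```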

Signed-off-by: JAYICE <jayice.zhou@qq.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
2026-02-09 14:03:26 +02:00
2489 changed files with 199243 additions and 109881 deletions

View File

@@ -1,23 +0,0 @@
# Project Overview
VictoriaMetrics is a fast, cost-saving, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
## Folder Structure
- `/app`: Contains the compilable binaries.
- `/lib`: Contains the golang reusable libraries
- `/docs/victoriametrics`: Contains documentation for the project.
- `/apptest/tests`: Contains integration tests.
## Libraries and Frameworks
- Backend: Golang, no framework. Use third-party libraries sparingly.
- Frontend: React.
## Code review guidelines
Ensure the feature or bugfix includes a changelog entry in /docs/victoriametrics/changelog/CHANGELOG.md.
Verify the entry is under the ## tip section and matches the structure and style of existing entries.
Chore-only changes may be omitted from the changelog.

View File

@@ -4,6 +4,8 @@ updates:
directory: "/"
schedule:
interval: "daily"
cooldown:
default-days: 21
- package-ecosystem: "gomod"
directory: "/"
schedule:
@@ -23,6 +25,8 @@ updates:
directory: "/"
schedule:
interval: "daily"
cooldown:
default-days: 21
- package-ecosystem: "npm"
directory: "/app/vmui/packages/vmui"
schedule:

View File

@@ -71,7 +71,8 @@ jobs:
go.sum
Makefile
app/**/Makefile
go-version: stable
go-version-file: 'go.mod'
- run: go version
- name: Build victoria-metrics for ${{ matrix.os }}-${{ matrix.arch }}
run: make victoria-metrics-${{ matrix.os }}-${{ matrix.arch }}

View File

@@ -27,11 +27,21 @@ jobs:
exit 0
fi
unsigned=$(git log --pretty="%H %G?" $RANGE | grep -vE " (G|E)$" || true)
# Check raw commit objects for a "gpgsig" header as a fast early signal for
# contributors. Both GPG and SSH signatures use this header.
# This avoids relying on %G? which returns N for SSH commits.
# This check is not a security enforcement — unsigned commits cannot be merged
# anyway due to the GitHub repository merge policy.
unsigned=""
for sha in $(git rev-list $RANGE); do
if ! git cat-file commit "$sha" | grep -q "^gpgsig"; then
unsigned="$unsigned $sha"
fi
done
if [ -n "$unsigned" ]; then
echo "Found unsigned commits:"
echo "$unsigned"
exit 1
fi
echo "All commits in PR are signed (G or E)"
echo "All commits in PR are signed (GPG or SSH)"

View File

@@ -21,9 +21,11 @@ jobs:
id: go
uses: actions/setup-go@v6
with:
go-version: stable
go-version-file: 'go.mod'
cache: false
- run: go version
- name: Cache Go artifacts
uses: actions/cache@v4
with:
@@ -32,7 +34,7 @@ jobs:
~/go/pkg/mod
~/go/bin
key: go-artifacts-${{ runner.os }}-check-licenses-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
restore-keys: go-artifacts-${{ runner.os }}-check-licenses-
restore-keys: go-artifacts-${{ runner.os }}-check-licenses-${{ steps.go.outputs.go-version }}-
- name: Check License
run: make check-licenses

View File

@@ -36,7 +36,8 @@ jobs:
uses: actions/setup-go@v6
with:
cache: false
go-version: stable
go-version-file: 'go.mod'
- run: go version
- name: Cache Go artifacts
uses: actions/cache@v4
@@ -46,7 +47,7 @@ jobs:
~/go/bin
~/go/pkg/mod
key: go-artifacts-${{ runner.os }}-codeql-analyze-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
restore-keys: go-artifacts-${{ runner.os }}-codeql-analyze-
restore-keys: go-artifacts-${{ runner.os }}-codeql-analyze-${{ steps.go.outputs.go-version }}-
- name: Initialize CodeQL
uses: github/codeql-action/init@v4

View File

@@ -28,7 +28,7 @@ jobs:
path: __vm-docs
- name: Import GPG key
uses: crazy-max/ghaction-import-gpg@v6
uses: crazy-max/ghaction-import-gpg@v7
id: import-gpg
with:
gpg_private_key: ${{ secrets.VM_BOT_GPG_PRIVATE_KEY }}

View File

@@ -42,8 +42,9 @@ jobs:
go.sum
Makefile
app/**/Makefile
go-version: stable
go-version-file: 'go.mod'
- run: go version
- name: Cache golangci-lint
uses: actions/cache@v4
@@ -51,7 +52,7 @@ jobs:
path: |
~/.cache/golangci-lint
~/go/bin
key: golangci-lint-${{ runner.os }}-${{ hashFiles('.golangci.yml') }}
key: golangci-lint-${{ runner.os }}-${{ steps.go.outputs.go-version }}-${{ hashFiles('.golangci.yml') }}
- name: Run check-all
run: |
@@ -65,8 +66,8 @@ jobs:
strategy:
matrix:
scenario:
- 'test-full'
- 'test-full-386'
- 'test'
- 'test-386'
- 'test-pure'
steps:
@@ -81,19 +82,15 @@ jobs:
go.sum
Makefile
app/**/Makefile
go-version: stable
go-version-file: 'go.mod'
- run: go version
- name: Run tests
run: GOGC=10 make ${{ matrix.scenario}}
run: make ${{ matrix.scenario}}
- name: Publish coverage
uses: codecov/codecov-action@v5
with:
files: ./coverage.txt
integration:
name: integration
runs-on: ubuntu-latest
apptest:
name: apptest
runs-on: apptest
steps:
- name: Code checkout
@@ -107,7 +104,8 @@ jobs:
go.sum
Makefile
app/**/Makefile
go-version: stable
go-version-file: 'go.mod'
- run: go version
- name: Run integration tests
run: make integration-test
- name: Run app tests
run: make apptest

View File

@@ -17,7 +17,7 @@ EXTRA_GO_BUILD_TAGS ?=
GO_BUILDINFO = -X '$(PKG_PREFIX)/lib/buildinfo.Version=$(APP_NAME)-$(DATEINFO_TAG)-$(BUILDINFO_TAG)'
TAR_OWNERSHIP ?= --owner=1000 --group=1000
GOLANGCI_LINT_VERSION := 2.7.2
GOLANGCI_LINT_VERSION := 2.9.0
.PHONY: $(MAKECMDGOALS)
@@ -443,7 +443,7 @@ fmt:
gofmt -l -w -s ./apptest
vet:
GOEXPERIMENT=synctest go vet ./lib/...
go vet -tags 'synctest' ./lib/...
go vet ./app/...
go vet ./apptest/...
@@ -452,28 +452,28 @@ check-all: fmt vet golangci-lint govulncheck
clean-checkers: remove-golangci-lint remove-govulncheck
test:
GOEXPERIMENT=synctest go test ./lib/... ./app/...
go test -tags 'synctest' ./lib/... ./app/...
test-race:
GOEXPERIMENT=synctest go test -race ./lib/... ./app/...
go test -tags 'synctest' -race ./lib/... ./app/...
test-386:
GOARCH=386 go test -tags 'synctest' ./lib/... ./app/...
test-pure:
GOEXPERIMENT=synctest CGO_ENABLED=0 go test ./lib/... ./app/...
CGO_ENABLED=0 go test -tags 'synctest' ./lib/... ./app/...
test-full:
GOEXPERIMENT=synctest go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
go test -tags 'synctest' -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
test-full-386:
GOEXPERIMENT=synctest GOARCH=386 go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
integration-test:
$(MAKE) apptest
GOARCH=386 go test -tags 'synctest' -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
apptest:
$(MAKE) victoria-metrics vmagent vmalert vmauth vmctl vmbackup vmrestore
$(MAKE) victoria-metrics-race vmagent-race vmalert-race vmauth-race vmctl-race vmbackup-race vmrestore-race
go test ./apptest/... -skip="^Test(Cluster|Legacy).*"
integration-test-legacy: victoria-metrics vmbackup vmrestore
apptest-legacy: victoria-metrics-race vmbackup-race vmrestore-race
OS=$$(uname | tr '[:upper:]' '[:lower:]'); \
ARCH=$$(uname -m | tr '[:upper:]' '[:lower:]' | sed 's/x86_64/amd64/'); \
VERSION=v1.132.0; \
@@ -490,17 +490,17 @@ integration-test-legacy: victoria-metrics vmbackup vmrestore
go test ./apptest/tests -run="^TestLegacySingle.*"
benchmark:
GOEXPERIMENT=synctest go test -bench=. ./lib/...
go test -bench=. ./app/...
go test -run=NO_TESTS -bench=. ./lib/...
go test -run=NO_TESTS -bench=. ./app/...
benchmark-pure:
GOEXPERIMENT=synctest CGO_ENABLED=0 go test -bench=. ./lib/...
CGO_ENABLED=0 go test -bench=. ./app/...
CGO_ENABLED=0 go test -run=NO_TESTS -bench=. ./lib/...
CGO_ENABLED=0 go test -run=NO_TESTS -bench=. ./app/...
vendor-update:
go get -u ./lib/...
go get -u ./app/...
go mod tidy -compat=1.24
go mod tidy -compat=1.26
go mod vendor
app-local:
@@ -524,7 +524,7 @@ install-qtc:
golangci-lint: install-golangci-lint
GOEXPERIMENT=synctest golangci-lint run
golangci-lint run --build-tags 'synctest'
install-golangci-lint:
which golangci-lint && (golangci-lint --version | grep -q $(GOLANGCI_LINT_VERSION)) || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v$(GOLANGCI_LINT_VERSION)

View File

@@ -1,12 +1,11 @@
# VictoriaMetrics
[![Latest Release](https://img.shields.io/github/v/release/VictoriaMetrics/VictoriaMetrics?sort=semver&label=&filter=!*-victorialogs&logo=github&labelColor=gray&color=gray&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Freleases%2Flatest)](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)
![Docker Pulls](https://img.shields.io/docker/pulls/victoriametrics/victoria-metrics?label=&logo=docker&logoColor=white&labelColor=2496ED&color=2496ED&link=https%3A%2F%2Fhub.docker.com%2Fr%2Fvictoriametrics%2Fvictoria-metrics)
[![Docker Pulls](https://img.shields.io/docker/pulls/victoriametrics/victoria-metrics?label=&logo=docker&logoColor=white&labelColor=2496ED&color=2496ED&link=https%3A%2F%2Fhub.docker.com%2Fr%2Fvictoriametrics%2Fvictoria-metrics)](https://hub.docker.com/u/victoriametrics)
[![Go Report](https://goreportcard.com/badge/github.com/VictoriaMetrics/VictoriaMetrics?link=https%3A%2F%2Fgoreportcard.com%2Freport%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics)](https://goreportcard.com/report/github.com/VictoriaMetrics/VictoriaMetrics)
[![Build Status](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/build.yml/badge.svg?branch=master&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Factions)](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/build.yml)
[![codecov](https://codecov.io/gh/VictoriaMetrics/VictoriaMetrics/branch/master/graph/badge.svg?link=https%3A%2F%2Fcodecov.io%2Fgh%2FVictoriaMetrics%2FVictoriaMetrics)](https://app.codecov.io/gh/VictoriaMetrics/VictoriaMetrics)
[![License](https://img.shields.io/github/license/VictoriaMetrics/VictoriaMetrics?labelColor=green&label=&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Fblob%2Fmaster%2FLICENSE)](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/LICENSE)
![Slack](https://img.shields.io/badge/Join-4A154B?logo=slack&link=https%3A%2F%2Fslack.victoriametrics.com)
[![Join Slack](https://img.shields.io/badge/Join%20Slack-4A154B?logo=slack)](https://slack.victoriametrics.com)
[![X](https://img.shields.io/twitter/follow/VictoriaMetrics?style=flat&label=Follow&color=black&logo=x&labelColor=black&link=https%3A%2F%2Fx.com%2FVictoriaMetrics)](https://x.com/VictoriaMetrics/)
[![Reddit](https://img.shields.io/reddit/subreddit-subscribers/VictoriaMetrics?style=flat&label=Join&labelColor=red&logoColor=white&logo=reddit&link=https%3A%2F%2Fwww.reddit.com%2Fr%2FVictoriaMetrics)](https://www.reddit.com/r/VictoriaMetrics/)
@@ -16,16 +15,21 @@
<img src="docs/victoriametrics/logo.webp" width="300" alt="VictoriaMetrics logo">
</picture>
VictoriaMetrics is a fast, cost-saving, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
VictoriaMetrics is a fast, cost-effective, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
Here are some resources and information about VictoriaMetrics:
- Documentation: [docs.victoriametrics.com](https://docs.victoriametrics.com)
- Case studies: [Grammarly, Roblox, Wix,...](https://docs.victoriametrics.com/victoriametrics/casestudies/).
- Available: [Binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest), docker images [Docker Hub](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and [Quay](https://quay.io/repository/victoriametrics/victoria-metrics), [Source code](https://github.com/VictoriaMetrics/VictoriaMetrics)
- Deployment types: [Single-node version](https://docs.victoriametrics.com/), [Cluster version](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/), and [Enterprise version](https://docs.victoriametrics.com/victoriametrics/enterprise/)
- Changelog: [CHANGELOG](https://docs.victoriametrics.com/victoriametrics/changelog/), and [How to upgrade](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-upgrade-victoriametrics)
- Community: [Slack](https://slack.victoriametrics.com/), [X (Twitter)](https://x.com/VictoriaMetrics), [LinkedIn](https://www.linkedin.com/company/victoriametrics/), [YouTube](https://www.youtube.com/@VictoriaMetrics)
- **Case studies**: [Grammarly, Roblox, Wix, Spotify,...](https://docs.victoriametrics.com/victoriametrics/casestudies/).
- **Available**: [Binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest), Docker images on [Docker Hub](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and [Quay](https://quay.io/repository/victoriametrics/victoria-metrics), [Source code](https://github.com/VictoriaMetrics/VictoriaMetrics).
- **Deployment types**: [Single-node version](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/) and [Cluster version](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/) under [Apache License 2.0](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/LICENSE).
- **Getting started:** Read [key concepts](https://docs.victoriametrics.com/victoriametrics/keyconcepts/) and follow the
[quick start guide](https://docs.victoriametrics.com/victoriametrics/quick-start/).
- **Community**: [Slack](https://slack.victoriametrics.com/) (join via [Slack Inviter](https://slack.victoriametrics.com/)), [X (Twitter)](https://x.com/VictoriaMetrics), [YouTube](https://www.youtube.com/@VictoriaMetrics). See full list [here](https://docs.victoriametrics.com/victoriametrics/#community-and-contributions).
- **Changelog**: Project evolves fast - check the [CHANGELOG](https://docs.victoriametrics.com/victoriametrics/changelog/), and [How to upgrade](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-upgrade-victoriametrics).
- **Enterprise support:** [Contact us](mailto:info@victoriametrics.com) for commercial support with additional [enterprise features](https://docs.victoriametrics.com/victoriametrics/enterprise/).
- **Enterprise releases:** Enterprise and [long-term support releases (LTS)](https://docs.victoriametrics.com/victoriametrics/lts-releases/) are publicly available and can be evaluated for free
using a [free trial license](https://victoriametrics.com/products/enterprise/trial/).
- **Security:** we achieved [security certifications](https://victoriametrics.com/security/) for Database Software Development and Software-Based Monitoring Services.
Yes, we open-source both the single-node VictoriaMetrics and the cluster version.

View File

@@ -12,6 +12,31 @@ The following versions of VictoriaMetrics receive regular security fixes:
See [this page](https://victoriametrics.com/security/) for more details.
## Software Bill of Materials (SBOM)
Every VictoriaMetrics container{{% available_from "#" %}} image published to
[Docker Hub](https://hub.docker.com/u/victoriametrics)
and [Quay.io](https://quay.io/organization/victoriametrics)
includes an [SPDX](https://spdx.dev/) SBOM attestation
generated automatically by BuildKit during
`docker buildx build`.
To inspect the SBOM for an image:
```sh
docker buildx imagetools inspect \
docker.io/victoriametrics/victoria-metrics:latest \
--format "{{ json .SBOM }}"
```
To scan an image using its SBOM attestation with
[Trivy](https://github.com/aquasecurity/trivy):
```sh
trivy image --sbom-sources oci \
docker.io/victoriametrics/victoria-metrics:latest
```
## Reporting a Vulnerability
Please report any security issues to <security@victoriametrics.com>

View File

@@ -33,13 +33,13 @@ func PopulateTimeTpl(b []byte, tGlobal time.Time) []byte {
}
switch strings.TrimSpace(parts[0]) {
case `TIME_S`:
return []byte(fmt.Sprintf("%d", t.Unix()))
return fmt.Appendf(nil, "%d", t.Unix())
case `TIME_MSZ`:
return []byte(fmt.Sprintf("%d", t.Unix()*1e3))
return fmt.Appendf(nil, "%d", t.Unix()*1e3)
case `TIME_MS`:
return []byte(fmt.Sprintf("%d", timeToMillis(t)))
return fmt.Appendf(nil, "%d", timeToMillis(t))
case `TIME_NS`:
return []byte(fmt.Sprintf("%d", t.UnixNano()))
return fmt.Appendf(nil, "%d", t.UnixNano())
default:
log.Fatalf("unknown time pattern %s in %s", parts[0], repl)
}

View File

@@ -49,6 +49,11 @@ func insertRows(at *auth.Token, sketches []*datadogsketches.Sketch, extraLabels
Name: "__name__",
Value: m.Name,
})
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10557
labels = append(labels, prompb.Label{
Name: "host",
Value: sketch.Host,
})
for _, label := range m.Labels {
labels = append(labels, prompb.Label{
Name: label.Name,
@@ -57,9 +62,6 @@ func insertRows(at *auth.Token, sketches []*datadogsketches.Sketch, extraLabels
}
for _, tag := range sketch.Tags {
name, value := datadogutil.SplitTag(tag)
if name == "host" {
name = "exported_host"
}
labels = append(labels, prompb.Label{
Name: name,
Value: value,

View File

@@ -13,6 +13,9 @@ import (
"sync/atomic"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/awsapi"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
@@ -21,10 +24,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
)
var (
@@ -290,7 +290,7 @@ func getAWSAPIConfig(argIdx int) (*awsapi.Config, error) {
accessKey := awsAccessKey.GetOptionalArg(argIdx)
secretKey := awsSecretKey.GetOptionalArg(argIdx)
service := awsService.GetOptionalArg(argIdx)
cfg, err := awsapi.NewConfig(ec2Endpoint, stsEndpoint, region, roleARN, accessKey, secretKey, service)
cfg, err := awsapi.NewConfig(ec2Endpoint, stsEndpoint, region, roleARN, accessKey, secretKey, service, "")
if err != nil {
return nil, err
}
@@ -405,8 +405,7 @@ func (c *client) newRequest(url string, body []byte) (*http.Request, error) {
// Otherwise, it tries sending the block to remote storage indefinitely.
func (c *client) sendBlockHTTP(block []byte) bool {
c.rl.Register(len(block))
maxRetryDuration := timeutil.AddJitterToDuration(c.retryMaxInterval)
retryDuration := timeutil.AddJitterToDuration(c.retryMinInterval)
bt := timeutil.NewBackoffTimer(c.retryMinInterval, c.retryMaxInterval)
retriesCount := 0
again:
@@ -415,19 +414,10 @@ again:
c.requestDuration.UpdateDuration(startTime)
if err != nil {
c.errorsCount.Inc()
retryDuration *= 2
if retryDuration > maxRetryDuration {
retryDuration = maxRetryDuration
}
remoteWriteRetryLogger.Warnf("couldn't send a block with size %d bytes to %q: %s; re-sending the block in %.3f seconds",
len(block), c.sanitizedURL, err, retryDuration.Seconds())
t := timerpool.Get(retryDuration)
select {
case <-c.stopCh:
timerpool.Put(t)
remoteWriteRetryLogger.Warnf("couldn't send a block with size %d bytes to %q: %s; re-sending the block in %s",
len(block), c.sanitizedURL, err, bt.CurrentDelay())
if !bt.Wait(c.stopCh) {
return false
case <-t.C:
timerpool.Put(t)
}
c.retriesCount.Inc()
goto again
@@ -493,7 +483,10 @@ again:
// Unexpected status code returned
retriesCount++
retryAfterHeader := parseRetryAfterHeader(resp.Header.Get("Retry-After"))
retryDuration = getRetryDuration(retryAfterHeader, retryDuration, maxRetryDuration)
// retryAfterDuration has the highest priority duration
if retryAfterHeader > 0 {
bt.SetDelay(retryAfterHeader)
}
// Handle response
body, err := io.ReadAll(resp.Body)
@@ -502,15 +495,10 @@ again:
logger.Errorf("cannot read response body from %q during retry #%d: %s", c.sanitizedURL, retriesCount, err)
} else {
logger.Errorf("unexpected status code received after sending a block with size %d bytes to %q during retry #%d: %d; response body=%q; "+
"re-sending the block in %.3f seconds", len(block), c.sanitizedURL, retriesCount, statusCode, body, retryDuration.Seconds())
"re-sending the block in %s", len(block), c.sanitizedURL, retriesCount, statusCode, body, bt.CurrentDelay())
}
t := timerpool.Get(retryDuration)
select {
case <-c.stopCh:
timerpool.Put(t)
if !bt.Wait(c.stopCh) {
return false
case <-t.C:
timerpool.Put(t)
}
c.retriesCount.Inc()
goto again
@@ -519,27 +507,6 @@ again:
var remoteWriteRejectedLogger = logger.WithThrottler("remoteWriteRejected", 5*time.Second)
var remoteWriteRetryLogger = logger.WithThrottler("remoteWriteRetry", 5*time.Second)
// getRetryDuration returns retry duration.
// retryAfterDuration has the highest priority.
// If retryAfterDuration is not specified, retryDuration gets doubled.
// retryDuration can't exceed maxRetryDuration.
//
// Also see: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6097
func getRetryDuration(retryAfterDuration, retryDuration, maxRetryDuration time.Duration) time.Duration {
// retryAfterDuration has the highest priority duration
if retryAfterDuration > 0 {
return timeutil.AddJitterToDuration(retryAfterDuration)
}
// default backoff retry policy
retryDuration *= 2
if retryDuration > maxRetryDuration {
retryDuration = maxRetryDuration
}
return retryDuration
}
// repackBlockFromZstdToSnappy repacks the given zstd-compressed block to snappy-compressed block.
//
// The input block may be corrupted, for example, if vmagent was shut down ungracefully and
@@ -570,24 +537,20 @@ func logBlockRejected(block []byte, sanitizedURL string, resp *http.Response) {
}
// parseRetryAfterHeader parses `Retry-After` value retrieved from HTTP response header.
// retryAfterString should be in either HTTP-date or a number of seconds.
// It will return time.Duration(0) if `retryAfterString` does not follow RFC 7231.
func parseRetryAfterHeader(retryAfterString string) (retryAfterDuration time.Duration) {
if retryAfterString == "" {
return retryAfterDuration
//
// s should be in either HTTP-date or a number of seconds.
// It returns time.Duration(0) if s does not follow RFC 7231.
func parseRetryAfterHeader(s string) time.Duration {
if s == "" {
return 0
}
defer func() {
v := retryAfterDuration.Seconds()
logger.Infof("'Retry-After: %s' parsed into %.2f second(s)", retryAfterString, v)
}()
// Retry-After could be in "Mon, 02 Jan 2006 15:04:05 GMT" format.
if parsedTime, err := time.Parse(http.TimeFormat, retryAfterString); err == nil {
if parsedTime, err := time.Parse(http.TimeFormat, s); err == nil {
return time.Duration(time.Until(parsedTime).Seconds()) * time.Second
}
// Retry-After could be in seconds.
if seconds, err := strconv.Atoi(retryAfterString); err == nil {
if seconds, err := strconv.Atoi(s); err == nil {
return time.Duration(seconds) * time.Second
}

View File

@@ -6,66 +6,12 @@ import (
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
)
func TestCalculateRetryDuration(t *testing.T) {
// `testFunc` call `calculateRetryDuration` for `n` times
// and evaluate if the result of `calculateRetryDuration` is
// 1. >= expectMinDuration
// 2. <= expectMinDuration + 10% (see timeutil.AddJitterToDuration)
f := func(retryAfterDuration, retryDuration time.Duration, n int, expectMinDuration time.Duration) {
t.Helper()
for i := 0; i < n; i++ {
retryDuration = getRetryDuration(retryAfterDuration, retryDuration, time.Minute)
}
expectMaxDuration := helper(expectMinDuration)
expectMinDuration = expectMinDuration - (1000 * time.Millisecond) // Avoid edge case when calculating time.Until(now)
if retryDuration < expectMinDuration || retryDuration > expectMaxDuration {
t.Fatalf(
"incorrect retry duration, want (ms): [%d, %d], got (ms): %d",
expectMinDuration.Milliseconds(), expectMaxDuration.Milliseconds(),
retryDuration.Milliseconds(),
)
}
}
// Call calculateRetryDuration for 1 time.
{
// default backoff policy
f(0, time.Second, 1, 2*time.Second)
// default backoff policy exceed max limit"
f(0, 10*time.Minute, 1, time.Minute)
// retry after > default backoff policy
f(10*time.Second, 1*time.Second, 1, 10*time.Second)
// retry after < default backoff policy
f(1*time.Second, 10*time.Second, 1, 1*time.Second)
// retry after invalid and < default backoff policy
f(0, time.Second, 1, 2*time.Second)
}
// Call calculateRetryDuration for multiple times.
{
// default backoff policy 2 times
f(0, time.Second, 2, 4*time.Second)
// default backoff policy 3 times
f(0, time.Second, 3, 8*time.Second)
// default backoff policy N times exceed max limit
f(0, time.Second, 10, time.Minute)
// retry after 120s 1 times
f(120*time.Second, time.Second, 1, 120*time.Second)
// retry after 120s 2 times
f(120*time.Second, time.Second, 2, 120*time.Second)
}
}
func TestParseRetryAfterHeader(t *testing.T) {
f := func(retryAfterString string, expectResult time.Duration) {
t.Helper()
@@ -91,11 +37,32 @@ func TestParseRetryAfterHeader(t *testing.T) {
f(time.Now().Add(10*time.Second).Format("Mon, 02 Jan 2006 15:04:05 FAKETZ"), 0)
}
// helper calculate the max possible time duration calculated by timeutil.AddJitterToDuration.
func helper(d time.Duration) time.Duration {
dv := min(d/10, 10*time.Second)
func TestInitSecretFlags(t *testing.T) {
showRemoteWriteURLOrig := *showRemoteWriteURL
defer func() {
*showRemoteWriteURL = showRemoteWriteURLOrig
flagutil.UnregisterAllSecretFlags()
}()
return d + dv
flagutil.UnregisterAllSecretFlags()
*showRemoteWriteURL = false
InitSecretFlags()
if !flagutil.IsSecretFlag("remotewrite.url") {
t.Fatalf("expecting remoteWrite.url to be secret")
}
if !flagutil.IsSecretFlag("remotewrite.headers") {
t.Fatalf("expecting remoteWrite.headers to be secret")
}
flagutil.UnregisterAllSecretFlags()
*showRemoteWriteURL = true
InitSecretFlags()
if flagutil.IsSecretFlag("remotewrite.url") {
t.Fatalf("remoteWrite.url must remain visible when -remoteWrite.showURL is set")
}
if !flagutil.IsSecretFlag("remotewrite.headers") {
t.Fatalf("expecting remoteWrite.headers to remain secret")
}
}
func TestRepackBlockFromZstdToSnappy(t *testing.T) {

View File

@@ -51,9 +51,9 @@ func testPushWriteRequest(t *testing.T, rowsCount, expectedBlockLenProm, expecte
func newTestWriteRequest(seriesCount, labelsCount int) *prompb.WriteRequest {
var wr prompb.WriteRequest
for i := 0; i < seriesCount; i++ {
for i := range seriesCount {
var labels []prompb.Label
for j := 0; j < labelsCount; j++ {
for j := range labelsCount {
labels = append(labels, prompb.Label{
Name: fmt.Sprintf("label_%d_%d", i, j),
Value: fmt.Sprintf("value_%d_%d", i, j),

View File

@@ -20,8 +20,7 @@ import (
)
var (
unparsedLabelsGlobal = flagutil.NewArrayString("remoteWrite.label", "Optional label in the form 'name=value' to add to all the metrics before sending them to -remoteWrite.url. "+
"Pass multiple -remoteWrite.label flags in order to add multiple labels to metrics before sending them to remote storage")
unparsedLabelsGlobal = flagutil.NewArrayString("remoteWrite.label", "Optional label in the form 'name=value' to add to all the metrics before sending them to all -remoteWrite.url.")
relabelConfigPathGlobal = flag.String("remoteWrite.relabelConfig", "", "Optional path to file with relabeling configs, which are applied "+
"to all the metrics before sending them to -remoteWrite.url. See also -remoteWrite.urlRelabelConfig. "+
"The path can point either to local file or to http url. "+
@@ -39,7 +38,7 @@ var (
labelsGlobal []prompb.Label
remoteWriteRelabelConfigData atomic.Pointer[[]byte]
remoteWriteURLRelabelConfigData atomic.Pointer[[]interface{}]
remoteWriteURLRelabelConfigData atomic.Pointer[[]any]
relabelConfigReloads *metrics.Counter
relabelConfigReloadErrors *metrics.Counter
@@ -91,8 +90,8 @@ func WriteURLRelabelConfigData(w io.Writer) {
return
}
type urlRelabelCfg struct {
Url string `yaml:"url"`
RelabelConfig interface{} `yaml:"relabel_config"`
Url string `yaml:"url"`
RelabelConfig any `yaml:"relabel_config"`
}
var cs []urlRelabelCfg
for i, url := range *remoteWriteURLs {
@@ -145,7 +144,7 @@ func loadRelabelConfigs() (*relabelConfigs, error) {
len(*relabelConfigPaths), (len(*remoteWriteURLs)))
}
var urlRelabelCfgs []interface{}
var urlRelabelCfgs []any
rcs.perURL = make([]*promrelabel.ParsedConfigs, len(*remoteWriteURLs))
for i, path := range *relabelConfigPaths {
if len(path) == 0 {
@@ -158,7 +157,7 @@ func loadRelabelConfigs() (*relabelConfigs, error) {
}
rcs.perURL[i] = prc
var parsedCfg interface{}
var parsedCfg any
_ = yaml.Unmarshal(rawCfg, &parsedCfg)
urlRelabelCfgs = append(urlRelabelCfgs, parsedCfg)
}

View File

@@ -3,6 +3,7 @@ package remotewrite
import (
"flag"
"fmt"
"math"
"net/http"
"net/url"
"path/filepath"
@@ -11,6 +12,10 @@ import (
"sync/atomic"
"time"
"github.com/cespare/xxhash/v2"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bloomfilter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
@@ -23,6 +28,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/memory"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prommetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
@@ -30,8 +36,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
"github.com/VictoriaMetrics/metrics"
"github.com/cespare/xxhash/v2"
)
var (
@@ -80,10 +84,14 @@ var (
`This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. `+
`For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}`+
`Enabled sorting for labels can slow down ingestion performance a bit`)
maxHourlySeries = flag.Int("remoteWrite.maxHourlySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxDailySeries = flag.Int("remoteWrite.maxDailySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxHourlySeries = flag.Int64("remoteWrite.maxHourlySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. "+
fmt.Sprintf("Setting this flag to '-1' sets limit to maximum possible value (%d) which is useful in order to enable series tracking without enforcing limits. ", math.MaxInt32)+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxDailySeries = flag.Int64("remoteWrite.maxDailySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. "+
fmt.Sprintf("Setting this flag to '-1' sets limit to maximum possible value (%d) which is useful in order to enable series tracking without enforcing limits. ", math.MaxInt32)+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxIngestionRate = flag.Int("maxIngestionRate", 0, "The maximum number of samples vmagent can receive per second. Data ingestion is paused when the limit is exceeded. "+
"By default there are no limits on samples ingestion rate. See also -remoteWrite.rateLimit")
@@ -92,6 +100,8 @@ var (
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence . See also -remoteWrite.dropSamplesOnOverload")
dropSamplesOnOverload = flag.Bool("remoteWrite.dropSamplesOnOverload", false, "Whether to drop samples when -remoteWrite.disableOnDiskQueue is set and if the samples "+
"cannot be pushed into the configured -remoteWrite.url systems in a timely manner. See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence")
disableMetadataPerURL = flagutil.NewArrayBool("remoteWrite.disableMetadata", "Whether to disable sending metadata to the corresponding -remoteWrite.url. "+
"By default, metadata sending is controlled by the global -enableMetadata flag")
)
var (
@@ -141,6 +151,8 @@ func InitSecretFlags() {
// remoteWrite.url can contain authentication codes, so hide it at `/metrics` output.
flagutil.RegisterSecretFlag("remoteWrite.url")
}
// remoteWrite.headers can contain auth headers such as Authorization and API keys.
flagutil.RegisterSecretFlag("remoteWrite.headers")
}
var (
@@ -157,8 +169,8 @@ func Init() {
if len(*remoteWriteURLs) == 0 {
logger.Fatalf("at least one `-remoteWrite.url` command-line flag must be set")
}
if *maxHourlySeries > 0 {
hourlySeriesLimiter = bloomfilter.NewLimiter(*maxHourlySeries, time.Hour)
if limit := getMaxHourlySeries(); limit > 0 {
hourlySeriesLimiter = bloomfilter.NewLimiter(limit, time.Hour)
_ = metrics.NewGauge(`vmagent_hourly_series_limit_max_series`, func() float64 {
return float64(hourlySeriesLimiter.MaxItems())
})
@@ -166,8 +178,8 @@ func Init() {
return float64(hourlySeriesLimiter.CurrentItems())
})
}
if *maxDailySeries > 0 {
dailySeriesLimiter = bloomfilter.NewLimiter(*maxDailySeries, 24*time.Hour)
if limit := getMaxDailySeries(); limit > 0 {
dailySeriesLimiter = bloomfilter.NewLimiter(limit, 24*time.Hour)
_ = metrics.NewGauge(`vmagent_daily_series_limit_max_series`, func() float64 {
return float64(dailySeriesLimiter.MaxItems())
})
@@ -540,6 +552,10 @@ func tryPushMetadataToRemoteStorages(rwctxs []*remoteWriteCtx, mms []prompb.Metr
var wg sync.WaitGroup
var anyPushFailed atomic.Bool
for _, rwctx := range rwctxs {
if !rwctx.enableMetadata {
// Skip remote storage with disabled metadata
continue
}
wg.Go(func() {
if !rwctx.tryPushMetadataInternal(mms) {
rwctx.pushFailures.Inc()
@@ -811,6 +827,11 @@ type remoteWriteCtx struct {
streamAggrKeepInput bool
streamAggrDropInput bool
// enableMetadata indicates whether metadata should be sent to this remote storage.
// It is determined by the per-URL -remoteWrite.disableMetadata flag if set,
// otherwise by the global -enableMetadata flag.
enableMetadata bool
pss []*pendingSeries
pssNextIdx atomic.Uint64
@@ -822,6 +843,18 @@ type remoteWriteCtx struct {
rowsDroppedOnPushFailure *metrics.Counter
}
// isMetadataEnabledForURL returns true if metadata should be sent to the remote storage at argIdx.
// It checks the per-URL -remoteWrite.disableMetadata flag first.
// If not set, it falls back to the global -enableMetadata flag.
func isMetadataEnabledForURL(argIdx int) bool {
if disableMetadataPerURL.GetOptionalArg(argIdx) {
// Metadata is explicitly disabled for this URL
return false
}
// Use global -enableMetadata value
return prommetadata.IsEnabled()
}
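A rough illustration of the precedence described above, using plain slices as a stand-in for the flagutil array flag; isMetadataEnabled and its parameters are hypothetical names, not code from the repository:

package main

import "fmt"

// isMetadataEnabled mirrors isMetadataEnabledForURL: an explicit per-URL
// disable wins, otherwise the global enable flag applies.
func isMetadataEnabled(disablePerURL []bool, globalEnable bool, argIdx int) bool {
	if argIdx < len(disablePerURL) && disablePerURL[argIdx] {
		return false // metadata explicitly disabled for this URL
	}
	return globalEnable
}

func main() {
	disable := []bool{false, true} // e.g. -remoteWrite.disableMetadata=false,true
	fmt.Println(isMetadataEnabled(disable, true, 0)) // true: global flag applies
	fmt.Println(isMetadataEnabled(disable, true, 1)) // false: per-URL disable wins
}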
func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, sanitizedURL string) *remoteWriteCtx {
// strip query params, otherwise changing params resets pq
pqURL := *remoteWriteURL
@@ -892,10 +925,11 @@ func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, sanitizedURL string)
}
rwctx := &remoteWriteCtx{
idx: argIdx,
fq: fq,
c: c,
pss: pss,
idx: argIdx,
fq: fq,
c: c,
pss: pss,
enableMetadata: isMetadataEnabledForURL(argIdx),
rowsPushedAfterRelabel: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_rows_pushed_after_relabel_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
rowsDroppedByRelabel: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_relabel_metrics_dropped_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
@@ -1080,7 +1114,7 @@ func (rwctx *remoteWriteCtx) tryPushTimeSeriesInternal(tss []prompb.TimeSeries)
}()
if len(labelsGlobal) > 0 {
// Make a copy of tss before adding extra labels in order to prevent
// Make a copy of tss before adding extra labels to prevent
// from affecting time series for other remoteWrite.url configs.
rctx = getRelabelCtx()
v = tssPool.Get().(*[]prompb.TimeSeries)
@@ -1116,3 +1150,21 @@ func newMapFromStrings(a []string) map[string]struct{} {
}
return m
}
func getMaxHourlySeries() int {
limit := *maxHourlySeries
if limit == -1 || limit > math.MaxInt32 {
return math.MaxInt32
}
return int(limit)
}
func getMaxDailySeries() int {
limit := *maxDailySeries
if limit == -1 || limit > math.MaxInt32 {
return math.MaxInt32
}
return int(limit)
}
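The two helpers above share one clamping rule. A small self-contained check of the '-1' sentinel semantics; clampSeriesLimit is a hypothetical stand-in for both:

package main

import (
	"fmt"
	"math"
)

func clampSeriesLimit(limit int64) int {
	if limit == -1 || limit > math.MaxInt32 {
		return math.MaxInt32
	}
	return int(limit)
}

func main() {
	fmt.Println(clampSeriesLimit(0))       // 0: limiter stays disabled
	fmt.Println(clampSeriesLimit(-1))      // 2147483647: track series without enforcing a limit
	fmt.Println(clampSeriesLimit(5000000)) // 5000000: enforce the given limit
}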


@@ -28,12 +28,12 @@ func TestGetLabelsHash_Distribution(t *testing.T) {
itemsCount := 1_000 * bucketsCount
m := make([]int, bucketsCount)
var labels []prompb.Label
for i := 0; i < itemsCount; i++ {
for i := range itemsCount {
labels = append(labels[:0], prompb.Label{
Name: "__name__",
Value: fmt.Sprintf("some_name_%d", i),
})
for j := 0; j < 10; j++ {
for j := range 10 {
labels = append(labels, prompb.Label{
Name: fmt.Sprintf("label_%d", j),
Value: fmt.Sprintf("value_%d_%d", i, j),
@@ -248,7 +248,7 @@ func TestShardAmountRemoteWriteCtx(t *testing.T) {
seriesCount := 100000
// build 1000000 series
tssBlock := make([]prompb.TimeSeries, 0, seriesCount)
for i := 0; i < seriesCount; i++ {
for i := range seriesCount {
tssBlock = append(tssBlock, prompb.TimeSeries{
Labels: []prompb.Label{
{
@@ -269,7 +269,7 @@ func TestShardAmountRemoteWriteCtx(t *testing.T) {
// build active time series set
nodes := make([]string, 0, remoteWriteCount)
activeTimeSeriesByNodes := make([]map[string]struct{}, remoteWriteCount)
for i := 0; i < remoteWriteCount; i++ {
for i := range remoteWriteCount {
nodes = append(nodes, fmt.Sprintf("node%d", i))
activeTimeSeriesByNodes[i] = make(map[string]struct{})
}


@@ -41,7 +41,7 @@ func TestParseInputValue_Success(t *testing.T) {
if len(outputExpected) != len(output) {
t.Fatalf("unexpected output length; got %d; want %d", len(outputExpected), len(output))
}
for i := 0; i < len(outputExpected); i++ {
for i := range outputExpected {
if outputExpected[i].Omitted != output[i].Omitted {
t.Fatalf("unexpected Omitted field in the output\ngot\n%v\nwant\n%v", output, outputExpected)
}


@@ -4,6 +4,7 @@ import (
"context"
"flag"
"fmt"
"maps"
"net"
"net/http"
"net/http/httptest"
@@ -12,6 +13,7 @@ import (
"os/signal"
"path/filepath"
"reflect"
"slices"
"sort"
"strings"
"syscall"
@@ -348,9 +350,7 @@ func (tg *testGroup) test(evalInterval time.Duration, groupOrderMap map[string]i
for k := range alertEvalTimesMap {
alertEvalTimes = append(alertEvalTimes, k)
}
sort.Slice(alertEvalTimes, func(i, j int) bool {
return alertEvalTimes[i] < alertEvalTimes[j]
})
slices.Sort(alertEvalTimes)
// sort group eval order according to the given "group_eval_order".
sort.Slice(testGroups, func(i, j int) bool {
@@ -361,12 +361,8 @@ func (tg *testGroup) test(evalInterval time.Duration, groupOrderMap map[string]i
var groups []*rule.Group
for _, group := range testGroups {
mergedExternalLabels := make(map[string]string)
for k, v := range tg.ExternalLabels {
mergedExternalLabels[k] = v
}
for k, v := range externalLabels {
mergedExternalLabels[k] = v
}
maps.Copy(mergedExternalLabels, tg.ExternalLabels)
maps.Copy(mergedExternalLabels, externalLabels)
ng := rule.NewGroup(group, q, time.Minute, mergedExternalLabels)
ng.Init()
groups = append(groups, ng)
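The manual copy loops removed here are behavior-identical to maps.Copy from the standard library, including the overwrite order (later sources win):

package main

import (
	"fmt"
	"maps"
)

func main() {
	merged := map[string]string{}
	groupLabels := map[string]string{"env": "test", "team": "a"}
	externalLabels := map[string]string{"team": "b"}
	maps.Copy(merged, groupLabels)    // first source
	maps.Copy(merged, externalLabels) // second source overwrites duplicates
	fmt.Println(merged)               // map[env:test team:b]
}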


@@ -81,12 +81,9 @@ func (g *Group) Validate(validateTplFn ValidateTplFn, validateExpressions bool)
if g.Interval.Duration() < 0 {
return fmt.Errorf("interval shouldn't be lower than 0")
}
if g.EvalOffset.Duration() < 0 {
return fmt.Errorf("eval_offset shouldn't be lower than 0")
}
// if `eval_offset` is set, interval won't use global evaluationInterval flag and must bigger than offset.
if g.EvalOffset.Duration() > g.Interval.Duration() {
return fmt.Errorf("eval_offset should be smaller than interval; now eval_offset: %v, interval: %v", g.EvalOffset.Duration(), g.Interval.Duration())
// if `eval_offset` is set, the group interval must be specified explicitly (instead of being inherited from the global evaluationInterval flag) and must be bigger than the offset.
if g.EvalOffset.Duration().Abs() > g.Interval.Duration() {
return fmt.Errorf("the abs value of eval_offset should be smaller than interval; now eval_offset: %v, interval: %v", g.EvalOffset.Duration(), g.Interval.Duration())
}
if g.EvalOffset != nil && g.EvalDelay != nil {
return fmt.Errorf("eval_offset cannot be used with eval_delay")


@@ -176,11 +176,17 @@ func TestGroupValidate_Failure(t *testing.T) {
}, false, "interval shouldn't be lower than 0")
f(&Group{
Name: "wrong eval_offset",
Name: "too big eval_offset",
Interval: promutil.NewDuration(time.Minute),
EvalOffset: promutil.NewDuration(2 * time.Minute),
}, false, "eval_offset should be smaller than interval")
f(&Group{
Name: "too big negative eval_offset",
Interval: promutil.NewDuration(time.Minute),
EvalOffset: promutil.NewDuration(-2 * time.Minute),
}, false, "eval_offset should be smaller than interval")
limit := -1
f(&Group{
Name: "wrong limit",


@@ -2,6 +2,7 @@ package config
import (
"fmt"
"slices"
"strings"
"github.com/VictoriaMetrics/VictoriaLogs/lib/logstorage"
@@ -80,12 +81,8 @@ func (t *Type) ValidateExpr(expr string) error {
if err != nil {
return fmt.Errorf("cannot obtain labels from LogsQL expr: %q, err: %w", expr, err)
}
for i := range labels {
// VictoriaLogs inserts `_time` field as a label in result when query with `stats by (_time:step)`,
// making the result meaningless and may lead to cardinality issues.
if labels[i] == "_time" {
return fmt.Errorf("bad LogsQL expr: %q, err: cannot contain time buckets stats pipe `stats by (_time:step)`", expr)
}
if slices.Contains(labels, "_time") {
return fmt.Errorf("bad LogsQL expr: %q, err: cannot contain time buckets stats pipe `stats by (_time:step)`", expr)
}
default:
return fmt.Errorf("unknown datasource type=%q", t.Name)


@@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"io"
"maps"
"net/http"
"net/url"
"strings"
@@ -91,9 +92,7 @@ func (c *Client) Clone() *Client {
ns.extraHeaders = make([]keyValue, len(c.extraHeaders))
copy(ns.extraHeaders, c.extraHeaders)
}
for k, v := range c.extraParams {
ns.extraParams[k] = v
}
maps.Copy(ns.extraParams, c.extraParams)
return ns
}


@@ -34,7 +34,7 @@ type promResponse struct {
// Stats supported by VictoriaMetrics since v1.90
Stats struct {
SeriesFetched *string `json:"seriesFetched,omitempty"`
} `json:"stats,omitempty"`
} `json:"stats"`
// IsPartial supported by VictoriaMetrics
IsPartial *bool `json:"isPartial,omitempty"`
}
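Dropping omitempty from the stats tag is a no-op fix: encoding/json never omits struct-typed fields, regardless of the option. A quick demonstration with a trimmed-down copy of the struct:

package main

import (
	"encoding/json"
	"fmt"
)

type promResponse struct {
	Stats struct {
		SeriesFetched *string `json:"seriesFetched,omitempty"`
	} `json:"stats"` // omitempty would have no effect on a struct field
}

func main() {
	b, _ := json.Marshal(promResponse{})
	fmt.Println(string(b)) // {"stats":{}}
}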


@@ -134,7 +134,7 @@ func (ls Labels) String() string {
func LabelCompare(a, b Labels) int {
l := min(len(b), len(a))
for i := 0; i < l; i++ {
for i := range l {
if a[i].Name != b[i].Name {
if a[i].Name < b[i].Name {
return -1


@@ -13,7 +13,7 @@ func BenchmarkPromInstantUnmarshal(b *testing.B) {
// BenchmarkParsePrometheusResponse/Instant_std+fastjson-10 1760 668959 ns/op 280147 B/op 5781 allocs/op
b.Run("Instant std+fastjson", func(b *testing.B) {
for i := 0; i < b.N; i++ {
for range b.N {
var pi promInstant
err = pi.Unmarshal(data)
if err != nil {


@@ -56,7 +56,7 @@ absolute path to all .tpl files in root.
-rule.templates="dir/**/*.tpl". Includes all the .tpl files in "dir" subfolders recursively.
`)
configCheckInterval = flag.Duration("configCheckInterval", 0, "Interval for checking for changes in '-rule' or '-notifier.config' files. "+
configCheckInterval = flag.Duration("configCheckInterval", 0, "Interval for checking for changes in '-rule', '-rule.templates' and '-notifier.config' files. "+
"By default, the checking is disabled. Send SIGHUP signal in order to force config check for changes.")
httpListenAddrs = flagutil.NewArrayString("httpListenAddr", "Address to listen for incoming http requests. See also -tls and -httpListenAddr.useProxyProtocol")


@@ -98,7 +98,7 @@ func (m *manager) close() {
m.wg.Wait()
}
func (m *manager) startGroup(ctx context.Context, g *rule.Group, restore bool) error {
func (m *manager) startGroup(ctx context.Context, g *rule.Group, restore bool) {
id := g.GetID()
g.Init()
m.wg.Go(func() {
@@ -110,7 +110,6 @@ func (m *manager) startGroup(ctx context.Context, g *rule.Group, restore bool) e
})
m.groups[id] = g
return nil
}
func (m *manager) update(ctx context.Context, groupsCfg []config.Group, restore bool) error {
@@ -119,7 +118,7 @@ func (m *manager) update(ctx context.Context, groupsCfg []config.Group, restore
for _, cfg := range groupsCfg {
for _, r := range cfg.Rules {
if rrPresent && arPresent {
continue
break
}
if r.Record != "" {
rrPresent = true
@@ -162,10 +161,7 @@ func (m *manager) update(ctx context.Context, groupsCfg []config.Group, restore
}
}
for _, ng := range groupsRegistry {
if err := m.startGroup(ctx, ng, restore); err != nil {
m.groupsMu.Unlock()
return err
}
m.startGroup(ctx, ng, restore)
}
m.groupsMu.Unlock()


@@ -69,7 +69,7 @@ func TestManagerUpdateConcurrent(t *testing.T) {
for n := range workers {
wg.Go(func() {
r := rand.New(rand.NewSource(int64(n)))
for i := 0; i < iterations; i++ {
for range iterations {
rnd := r.Intn(len(paths))
cfg, err := config.Parse([]string{paths[rnd]}, notifier.ValidateTemplates, true)
if err != nil { // update can fail and this is expected
@@ -259,7 +259,7 @@ func compareGroups(t *testing.T, a, b *rule.Group) {
for i, r := range a.Rules {
got, want := r, b.Rules[i]
if a.CreateID() != b.CreateID() {
t.Fatalf("expected to have rule %q; got %q", want.ID(), got.ID())
t.Fatalf("expected to have rule %d; got %d", want.ID(), got.ID())
}
if err := rule.CompareRules(t, want, got); err != nil {
t.Fatalf("comparison error: %s", err)


@@ -216,7 +216,7 @@ consul_sd_configs:
for n := range workers {
wg.Go(func() {
r := rand.New(rand.NewSource(int64(n)))
for i := 0; i < iterations; i++ {
for range iterations {
rnd := r.Intn(len(paths))
_ = cw.reload(paths[rnd]) // update can fail and this is expected
_ = cw.notifiers()


@@ -13,14 +13,18 @@ import (
"sync"
"time"
"github.com/cespare/xxhash/v2"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -114,7 +118,9 @@ func NewClient(ctx context.Context, cfg Config) (*Client, error) {
}
for i := 0; i < cc; i++ {
c.run(ctx)
c.wg.Go(func() {
c.run(ctx, i)
})
}
return c, nil
}
@@ -156,8 +162,7 @@ func (c *Client) Close() error {
return nil
}
func (c *Client) run(ctx context.Context) {
ticker := time.NewTicker(c.flushInterval)
func (c *Client) run(ctx context.Context, id int) {
wr := &prompb.WriteRequest{}
shutdown := func() {
lastCtx, cancel := context.WithTimeout(context.Background(), defaultWriteTimeout)
@@ -174,40 +179,72 @@ func (c *Client) run(ctx context.Context) {
cancel()
}
c.wg.Go(func() {
defer ticker.Stop()
for {
// add jitter to spread remote write flushes over the flush interval to avoid congestion at the remote write destination
h := xxhash.Sum64(bytesutil.ToUnsafeBytes(fmt.Sprintf("%d", id)))
randJitter := uint64(float64(c.flushInterval) * (float64(h) / (1 << 64)))
timer := time.NewTimer(time.Duration(randJitter))
addJitter:
for {
select {
case <-c.doneCh:
timer.Stop()
shutdown()
return
case <-ctx.Done():
timer.Stop()
shutdown()
return
case <-timer.C:
break addJitter
}
}
ticker := time.NewTicker(c.flushInterval)
defer ticker.Stop()
for {
select {
case <-c.doneCh:
shutdown()
return
case <-ctx.Done():
shutdown()
return
case <-ticker.C:
c.flush(ctx, wr)
// drain the potential stale tick to avoid small or empty flushes after a slow flush.
select {
case <-c.doneCh:
shutdown()
return
case <-ctx.Done():
shutdown()
return
case <-ticker.C:
default:
}
case ts, ok := <-c.input:
if !ok {
continue
}
wr.Timeseries = append(wr.Timeseries, ts)
if len(wr.Timeseries) >= c.maxBatchSize {
c.flush(ctx, wr)
case ts, ok := <-c.input:
if !ok {
continue
}
wr.Timeseries = append(wr.Timeseries, ts)
if len(wr.Timeseries) >= c.maxBatchSize {
c.flush(ctx, wr)
}
}
}
})
}
}
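The jitter computed above maps each writer id through xxhash onto a deterministic fraction of the flush interval, so concurrent writers do not all flush at the same instant. A standalone sketch of the same arithmetic; jitterFor is a hypothetical helper:

package main

import (
	"fmt"
	"time"

	"github.com/cespare/xxhash/v2"
)

// jitterFor spreads initial flushes over [0, flushInterval) based on the
// worker id, mirroring the hashing done in run() above.
func jitterFor(id int, flushInterval time.Duration) time.Duration {
	h := xxhash.Sum64([]byte(fmt.Sprintf("%d", id)))
	return time.Duration(float64(flushInterval) * (float64(h) / (1 << 64)))
}

func main() {
	for id := range 4 {
		fmt.Println(id, jitterFor(id, 5*time.Second)) // deterministic, spread over [0s, 5s)
	}
}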
var (
rwErrors = metrics.NewCounter(`vmalert_remotewrite_errors_total`)
rwTotal = metrics.NewCounter(`vmalert_remotewrite_total`)
sentRows = metrics.NewCounter(`vmalert_remotewrite_sent_rows_total`)
sentBytes = metrics.NewCounter(`vmalert_remotewrite_sent_bytes_total`)
droppedRows = metrics.NewCounter(`vmalert_remotewrite_dropped_rows_total`)
sendDuration = metrics.NewFloatCounter(`vmalert_remotewrite_send_duration_seconds_total`)
bufferFlushDuration = metrics.NewHistogram(`vmalert_remotewrite_flush_duration_seconds`)
// sentRows and sentBytes are historical counters that can now be replaced by flushedRows and flushedBytes histograms. They may be deprecated in the future after the new histograms have been adopted for some time.
sentRows = metrics.NewCounter(`vmalert_remotewrite_sent_rows_total`)
sentBytes = metrics.NewCounter(`vmalert_remotewrite_sent_bytes_total`)
flushedRows = metrics.NewHistogram(`vmalert_remotewrite_sent_rows`)
flushedBytes = metrics.NewHistogram(`vmalert_remotewrite_sent_bytes`)
droppedRows = metrics.NewCounter(`vmalert_remotewrite_dropped_rows_total`)
sendDuration = metrics.NewFloatCounter(`vmalert_remotewrite_send_duration_seconds_total`)
bufferFlushDuration = metrics.NewHistogram(`vmalert_remotewrite_flush_duration_seconds`)
remoteWriteQueueSize = metrics.NewHistogram(`vmalert_remotewrite_queue_size`)
_ = metrics.NewGauge(`vmalert_remotewrite_queue_capacity`, func() float64 {
return float64(*maxQueueSize)
})
_ = metrics.NewGauge(`vmalert_remotewrite_concurrency`, func() float64 {
return float64(*concurrency)
@@ -221,6 +258,7 @@ func GetDroppedRows() int { return int(droppedRows.Get()) }
// it to remote-write endpoint. Flush performs limited amount of retries
// if request fails.
func (c *Client) flush(ctx context.Context, wr *prompb.WriteRequest) {
remoteWriteQueueSize.Update(float64(len(c.input)))
if len(wr.Timeseries) < 1 {
return
}
@@ -230,16 +268,16 @@ func (c *Client) flush(ctx context.Context, wr *prompb.WriteRequest) {
data := wr.MarshalProtobuf(nil)
b := snappy.Encode(nil, data)
retryInterval, maxRetryInterval := *retryMinInterval, *retryMaxTime
if retryInterval > maxRetryInterval {
retryInterval = maxRetryInterval
}
maxRetryInterval := *retryMaxTime
bt := timeutil.NewBackoffTimer(*retryMinInterval, maxRetryInterval)
timeStart := time.Now()
defer func() {
sendDuration.Add(time.Since(timeStart).Seconds())
}()
attempts := 0
L:
for attempts := 0; ; attempts++ {
for {
err := c.send(ctx, b)
if err != nil && (errors.Is(err, io.EOF) || netutil.IsTrivialNetworkError(err)) {
// Something in the middle between client and destination might be closing
@@ -249,6 +287,8 @@ L:
if err == nil {
sentRows.Add(len(wr.Timeseries))
sentBytes.Add(len(b))
flushedRows.Update(float64(len(wr.Timeseries)))
flushedBytes.Update(float64(len(b)))
return
}
@@ -274,13 +314,13 @@ L:
break
}
if retryInterval > timeLeftForRetries {
retryInterval = timeLeftForRetries
if bt.CurrentDelay() > timeLeftForRetries {
bt.SetDelay(timeLeftForRetries)
}
// sleeping to prevent remote db hammering
time.Sleep(retryInterval)
retryInterval *= 2
bt.Wait(ctx.Done())
attempts++
}
rwErrors.Inc()
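timeutil.NewBackoffTimer is exercised above through CurrentDelay, SetDelay and Wait. Below is a hypothetical stand-in with the semantics the retry loop depends on (capped exponential growth, interruptible waits); it is not the real lib/timeutil implementation:

package main

import (
	"fmt"
	"time"
)

// backoffTimer sketches the behavior assumed by the flush retry loop.
type backoffTimer struct {
	delay, maxDelay time.Duration
}

func newBackoffTimer(minDelay, maxDelay time.Duration) *backoffTimer {
	return &backoffTimer{delay: minDelay, maxDelay: maxDelay}
}

func (bt *backoffTimer) CurrentDelay() time.Duration { return bt.delay }
func (bt *backoffTimer) SetDelay(d time.Duration)    { bt.delay = d }

// Wait sleeps for the current delay or returns early when done closes,
// then doubles the delay up to the configured maximum.
func (bt *backoffTimer) Wait(done <-chan struct{}) {
	select {
	case <-time.After(bt.delay):
	case <-done:
	}
	bt.delay = min(2*bt.delay, bt.maxDelay)
}

func main() {
	bt := newBackoffTimer(100*time.Millisecond, time.Second)
	for range 3 {
		bt.Wait(nil)                   // nil done channel: wait is never interrupted
		fmt.Println(bt.CurrentDelay()) // 200ms, 400ms, 800ms
	}
}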


@@ -44,7 +44,7 @@ func TestClient_Push(t *testing.T) {
r := rand.New(rand.NewSource(1))
const rowsN = int(1e4)
for i := 0; i < rowsN; i++ {
for range rowsN {
s := prompb.TimeSeries{
Samples: []prompb.Sample{{
Value: r.Float64(),
@@ -102,7 +102,7 @@ func TestClient_run_maxBatchSizeDuringShutdown(t *testing.T) {
}
// push time series to the client.
for i := 0; i < pushCnt; i++ {
for range pushCnt {
if err = rwClient.Push(prompb.TimeSeries{}); err != nil {
t.Fatalf("cannot time series to the client: %s", err)
}


@@ -22,7 +22,7 @@ func TestDebugClient_Push(t *testing.T) {
const rowsN = 100
var sent int
for i := 0; i < rowsN; i++ {
for i := range rowsN {
s := prompb.TimeSeries{
Samples: []prompb.Sample{{
Value: float64(i),


@@ -0,0 +1,106 @@
//go:build synctest
package rule
import (
"context"
"strings"
"testing"
"testing/synctest"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
)
// TestAlertingRule_ActiveAtPreservedInAnnotations ensures that the fix for
// https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9543 is preserved
// while allowing query templates in labels (https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9783)
func TestAlertingRule_ActiveAtPreservedInAnnotations(t *testing.T) {
// wrap into synctest because of time manipulations
synctest.Test(t, func(t *testing.T) {
fq := &datasource.FakeQuerier{}
ar := &AlertingRule{
Name: "TestActiveAtPreservation",
Labels: map[string]string{
"test_query_in_label": `{{ "static_value" }}`,
},
Annotations: map[string]string{
"description": "Alert active since {{ $activeAt }}",
},
alerts: make(map[uint64]*notifier.Alert),
q: fq,
state: &ruleState{
entries: make([]StateEntry, 10),
},
}
// Mock query result - return empty result to make suppress_for_mass_alert = false
// (no need to add anything to fq for empty result)
// Add a metric that should trigger the alert
fq.Add(metricWithValueAndLabels(t, 1, "instance", "server1"))
// First execution - creates new alert
ts1 := time.Now()
_, err := ar.exec(context.TODO(), ts1, 0)
if err != nil {
t.Fatalf("unexpected error on first exec: %s", err)
}
if len(ar.alerts) != 1 {
t.Fatalf("expected 1 alert, got %d", len(ar.alerts))
}
firstAlert := ar.GetAlerts()[0]
// Verify first execution: activeAt should be ts1 and annotation should reflect it
if !firstAlert.ActiveAt.Equal(ts1) {
t.Fatalf("expected activeAt to be %v, got %v", ts1, firstAlert.ActiveAt)
}
// Extract time from annotation (format will be like "Alert active since 2025-09-30 08:55:13.638551611 -0400 EDT m=+0.002928464")
expectedTimeStr := ts1.Format("2006-01-02 15:04:05")
if !strings.Contains(firstAlert.Annotations["description"], expectedTimeStr) {
t.Fatalf("first exec annotation should contain time %s, got: %s", expectedTimeStr, firstAlert.Annotations["description"])
}
// Second execution - should preserve activeAt in annotation
// Ensure different timestamp with different seconds
// sleep is non-blocking thanks to synctest
time.Sleep(2 * time.Second)
ts2 := time.Now()
_, err = ar.exec(context.TODO(), ts2, 0)
if err != nil {
t.Fatalf("unexpected error on second exec: %s", err)
}
// Get the alert again (should be the same alert)
if len(ar.alerts) != 1 {
t.Fatalf("expected 1 alert, got %d", len(ar.alerts))
}
secondAlert := ar.GetAlerts()[0]
// Critical test: activeAt should still be ts1, not ts2
if !secondAlert.ActiveAt.Equal(ts1) {
t.Fatalf("activeAt should be preserved as %v, but got %v", ts1, secondAlert.ActiveAt)
}
// Critical test: annotation should still contain ts1 time, not ts2
if !strings.Contains(secondAlert.Annotations["description"], expectedTimeStr) {
t.Fatalf("second exec annotation should still contain original time %s, got: %s", expectedTimeStr, secondAlert.Annotations["description"])
}
// Additional verification: annotation should NOT contain ts2 time
ts2TimeStr := ts2.Format("2006-01-02 15:04:05")
if strings.Contains(secondAlert.Annotations["description"], ts2TimeStr) {
t.Fatalf("annotation should NOT contain new eval time %s, got: %s", ts2TimeStr, secondAlert.Annotations["description"])
}
// Verify query template in labels still works (this would fail if query templates were broken)
if firstAlert.Labels["test_query_in_label"] != "static_value" {
t.Fatalf("expected test_query_in_label=static_value, got %s", firstAlert.Labels["test_query_in_label"])
}
})
}


@@ -10,7 +10,6 @@ import (
"strings"
"sync"
"testing"
"testing/synctest"
"time"
"github.com/VictoriaMetrics/metrics"
@@ -1479,95 +1478,3 @@ func TestAlertingRule_QueryTemplateInLabels(t *testing.T) {
t.Fatalf("expected 'suppress_for_mass_alert' label to be 'true' or 'false', got '%s'", suppressLabel)
}
}
// TestAlertingRule_ActiveAtPreservedInAnnotations ensures that the fix for
// https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9543 is preserved
// while allowing query templates in labels (https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9783)
func TestAlertingRule_ActiveAtPreservedInAnnotations(t *testing.T) {
// wrap into synctest because of time manipulations
synctest.Test(t, func(t *testing.T) {
fq := &datasource.FakeQuerier{}
ar := &AlertingRule{
Name: "TestActiveAtPreservation",
Labels: map[string]string{
"test_query_in_label": `{{ "static_value" }}`,
},
Annotations: map[string]string{
"description": "Alert active since {{ $activeAt }}",
},
alerts: make(map[uint64]*notifier.Alert),
q: fq,
state: &ruleState{
entries: make([]StateEntry, 10),
},
}
// Mock query result - return empty result to make suppress_for_mass_alert = false
// (no need to add anything to fq for empty result)
// Add a metric that should trigger the alert
fq.Add(metricWithValueAndLabels(t, 1, "instance", "server1"))
// First execution - creates new alert
ts1 := time.Now()
_, err := ar.exec(context.TODO(), ts1, 0)
if err != nil {
t.Fatalf("unexpected error on first exec: %s", err)
}
if len(ar.alerts) != 1 {
t.Fatalf("expected 1 alert, got %d", len(ar.alerts))
}
firstAlert := ar.GetAlerts()[0]
// Verify first execution: activeAt should be ts1 and annotation should reflect it
if !firstAlert.ActiveAt.Equal(ts1) {
t.Fatalf("expected activeAt to be %v, got %v", ts1, firstAlert.ActiveAt)
}
// Extract time from annotation (format will be like "Alert active since 2025-09-30 08:55:13.638551611 -0400 EDT m=+0.002928464")
expectedTimeStr := ts1.Format("2006-01-02 15:04:05")
if !strings.Contains(firstAlert.Annotations["description"], expectedTimeStr) {
t.Fatalf("first exec annotation should contain time %s, got: %s", expectedTimeStr, firstAlert.Annotations["description"])
}
// Second execution - should preserve activeAt in annotation
// Ensure different timestamp with different seconds
// sleep is non-blocking thanks to synctest
time.Sleep(2 * time.Second)
ts2 := time.Now()
_, err = ar.exec(context.TODO(), ts2, 0)
if err != nil {
t.Fatalf("unexpected error on second exec: %s", err)
}
// Get the alert again (should be the same alert)
if len(ar.alerts) != 1 {
t.Fatalf("expected 1 alert, got %d", len(ar.alerts))
}
secondAlert := ar.GetAlerts()[0]
// Critical test: activeAt should still be ts1, not ts2
if !secondAlert.ActiveAt.Equal(ts1) {
t.Fatalf("activeAt should be preserved as %v, but got %v", ts1, secondAlert.ActiveAt)
}
// Critical test: annotation should still contain ts1 time, not ts2
if !strings.Contains(secondAlert.Annotations["description"], expectedTimeStr) {
t.Fatalf("second exec annotation should still contain original time %s, got: %s", expectedTimeStr, secondAlert.Annotations["description"])
}
// Additional verification: annotation should NOT contain ts2 time
ts2TimeStr := ts2.Format("2006-01-02 15:04:05")
if strings.Contains(secondAlert.Annotations["description"], ts2TimeStr) {
t.Fatalf("annotation should NOT contain new eval time %s, got: %s", ts2TimeStr, secondAlert.Annotations["description"])
}
// Verify query template in labels still works (this would fail if query templates were broken)
if firstAlert.Labels["test_query_in_label"] != "static_value" {
t.Fatalf("expected test_query_in_label=static_value, got %s", firstAlert.Labels["test_query_in_label"])
}
})
}


@@ -6,6 +6,7 @@ import (
"flag"
"fmt"
"hash/fnv"
"maps"
"net/url"
"sync"
"time"
@@ -30,8 +31,8 @@ var (
"0 means no limit.")
ruleUpdateEntriesLimit = flag.Int("rule.updateEntriesLimit", 20, "Defines the max number of rule's state updates stored in-memory. "+
"Rule's updates are available on rule's Details page and are used for debugging purposes. The number of stored updates can be overridden per rule via update_entries_limit param.")
resendDelay = flag.Duration("rule.resendDelay", 0, "MiniMum amount of time to wait before resending an alert to notifier.")
maxResolveDuration = flag.Duration("rule.maxResolveDuration", 0, "Limits the maxiMum duration for automatic alert expiration, "+
resendDelay = flag.Duration("rule.resendDelay", 0, "Minimum amount of time to wait before resending an alert to notifier.")
maxResolveDuration = flag.Duration("rule.maxResolveDuration", 0, "Limits the maximum duration for automatic alert expiration, "+
"which by default is 4 times evaluationInterval of the parent group")
evalDelay = flag.Duration("rule.evalDelay", 30*time.Second, "Adjustment of the 'time' parameter for rule evaluation requests to compensate intentional data delay from the datasource. "+
"Normally, should be equal to '-search.latencyOffset' (cmd-line flag configured for VictoriaMetrics single-node or vmselect). "+
@@ -97,9 +98,7 @@ type groupMetrics struct {
// set2 has priority over set1.
func mergeLabels(groupName, ruleName string, set1, set2 map[string]string) map[string]string {
r := map[string]string{}
for k, v := range set1 {
r[k] = v
}
maps.Copy(r, set1)
for k, v := range set2 {
if prevV, ok := r[k]; ok {
logger.Infof("label %q=%q for rule %q.%q overwritten with external label %q=%q",
@@ -382,7 +381,9 @@ func (g *Group) Start(ctx context.Context, rw remotewrite.RWClient, rr datasourc
if len(g.Rules) < 1 {
g.metrics.iterationDuration.UpdateDuration(start)
g.mu.Lock()
g.LastEvaluation = start
g.mu.Unlock()
return ts
}
@@ -396,7 +397,9 @@ func (g *Group) Start(ctx context.Context, rw remotewrite.RWClient, rr datasourc
}
}
g.metrics.iterationDuration.UpdateDuration(start)
g.mu.Lock()
g.LastEvaluation = start
g.mu.Unlock()
return ts
}
@@ -406,11 +409,11 @@ func (g *Group) Start(ctx context.Context, rw remotewrite.RWClient, rr datasourc
g.mu.Unlock()
defer g.evalCancel()
realEvalTS := eval(evalCtx, evalTS)
t := time.NewTicker(g.Interval)
defer t.Stop()
realEvalTS := eval(evalCtx, evalTS)
// restore the rules state after the first evaluation
// so only active alerts can be restored.
if rr != nil {
@@ -485,8 +488,15 @@ func (g *Group) UpdateWith(newGroup *Group) {
// delayBeforeStart calculates delay based on Group ID, so all groups will start at different moments of time.
func (g *Group) delayBeforeStart(ts time.Time, maxDelay time.Duration) time.Duration {
if g.EvalOffset != nil {
offset := *g.EvalOffset
// adjust the offset for a negative evalOffset; the rule is:
// `eval_offset: -x` is equivalent to `eval_offset: y` for `interval: x+y`.
// For example, `eval_offset: -6m` is equivalent to `eval_offset: 4m` for `interval: 10m`.
if offset < 0 {
offset += g.Interval
}
// if offset is specified, ignore the maxDelay and return a duration aligned with offset
currentOffsetPoint := ts.Truncate(g.Interval).Add(*g.EvalOffset)
currentOffsetPoint := ts.Truncate(g.Interval).Add(offset)
if currentOffsetPoint.Before(ts) {
// wait until the next offset point
return currentOffsetPoint.Add(g.Interval).Sub(ts)
@@ -495,11 +505,8 @@ func (g *Group) delayBeforeStart(ts time.Time, maxDelay time.Duration) time.Dura
}
// otherwise, return a random duration between [0..min(interval, maxDelay)] based on group ID
interval := g.Interval
if interval > maxDelay {
// artificially limit interval, so groups with big intervals could start sooner.
interval = maxDelay
}
// artificially limit interval, so groups with big intervals could start sooner.
interval := min(g.Interval, maxDelay)
var randSleep time.Duration
randSleep = time.Duration(float64(interval) * (float64(g.GetID()) / (1 << 64)))
sleepOffset := time.Duration(ts.UnixNano() % interval.Nanoseconds())
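The negative-offset normalization above reduces to `eval_offset: -x` behaving like `eval_offset: interval-x`. A minimal sketch of the next-run computation; nextRun is a hypothetical helper mirroring the offset branch of delayBeforeStart:

package main

import (
	"fmt"
	"time"
)

// nextRun shifts negative offsets by one interval, then picks the next
// aligned evaluation point at or after ts.
func nextRun(ts time.Time, interval, offset time.Duration) time.Time {
	if offset < 0 {
		offset += interval // eval_offset: -2m with interval: 5m => 3m
	}
	p := ts.Truncate(interval).Add(offset)
	if p.Before(ts) {
		p = p.Add(interval) // wait until the next offset point
	}
	return p
}

func main() {
	ts := time.Date(2023, 1, 1, 0, 1, 0, 0, time.UTC)
	fmt.Println(nextRun(ts, 5*time.Minute, -2*time.Minute)) // 2023-01-01 00:03:00 +0000 UTC
}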


@@ -405,7 +405,8 @@ func TestGroupStart(t *testing.T) {
var cur uint64
prev := g.metrics.iterationTotal.Get()
for i := 0; ; i++ {
i := 0
for {
if i > 40 {
t.Fatalf("group wasn't able to perform %d evaluations during %d eval intervals", n, i)
}
@@ -414,6 +415,7 @@ func TestGroupStart(t *testing.T) {
return
}
time.Sleep(interval)
i++
}
}
@@ -604,6 +606,15 @@ func TestGroupStartDelay(t *testing.T) {
f("2023-01-01T00:03:30.000+00:00", "2023-01-01T00:08:00.000+00:00")
f("2023-01-01T00:08:00.000+00:00", "2023-01-01T00:08:00.000+00:00")
// test group with negative offset -2min, which is equivalent to 3min offset for 5min interval
offset = -2 * time.Minute
g.EvalOffset = &offset
f("2023-01-01T00:00:15.000+00:00", "2023-01-01T00:03:00.000+00:00")
f("2023-01-01T00:01:00.000+00:00", "2023-01-01T00:03:00.000+00:00")
f("2023-01-01T00:03:30.000+00:00", "2023-01-01T00:08:00.000+00:00")
f("2023-01-01T00:08:00.000+00:00", "2023-01-01T00:08:00.000+00:00")
maxDelay = time.Minute * 1
g.EvalOffset = nil


@@ -121,7 +121,7 @@ func (s *ruleState) add(e StateEntry) {
func replayRule(r Rule, start, end time.Time, rw remotewrite.RWClient, replayRuleRetryAttempts int) (int, error) {
var err error
var tss []prompb.TimeSeries
for i := 0; i < replayRuleRetryAttempts; i++ {
for i := range replayRuleRetryAttempts {
tss, err = r.execRange(context.Background(), start, end)
if err == nil {
break


@@ -40,7 +40,7 @@ func TestRule_state(t *testing.T) {
}
var last time.Time
for i := 0; i < stateEntriesN*2; i++ {
for range stateEntriesN * 2 {
last = time.Now()
r.state.add(StateEntry{At: last})
}
@@ -68,7 +68,7 @@ func TestRule_stateConcurrent(_ *testing.T) {
var wg sync.WaitGroup
for range workers {
wg.Go(func() {
for i := 0; i < iterations; i++ {
for range iterations {
r.state.add(StateEntry{At: time.Now()})
r.state.getAll()
r.state.getLast()


@@ -19,13 +19,13 @@ func CompareRules(t *testing.T, a, b Rule) error {
case *AlertingRule:
br, ok := b.(*AlertingRule)
if !ok {
return fmt.Errorf("rule %q supposed to be of type AlertingRule", b.ID())
return fmt.Errorf("rule %d supposed to be of type AlertingRule", b.ID())
}
return compareAlertingRules(t, v, br)
case *RecordingRule:
br, ok := b.(*RecordingRule)
if !ok {
return fmt.Errorf("rule %q supposed to be of type RecordingRule", b.ID())
return fmt.Errorf("rule %d supposed to be of type RecordingRule", b.ID())
}
return compareRecordingRules(t, v, br)
default:


@@ -57,12 +57,8 @@ type ApiGroup struct {
EvalOffset float64 `json:"eval_offset,omitempty"`
// EvalDelay will adjust the `time` parameter of rule evaluation requests to compensate intentional query delay from datasource.
EvalDelay float64 `json:"eval_delay,omitempty"`
// Unhealthy unhealthy rules count
Unhealthy int
// Healthy passing rules count
Healthy int
// NoMatch not matching rules count
NoMatch int
// States represents counts for each rule state
States map[string]int `json:"states"`
}
// APILink returns a link to the group's JSON representation.
@@ -134,6 +130,11 @@ type ApiRule struct {
Updates []StateEntry `json:"-"`
}
// IsNoMatch returns true if rule is in nomatch state
func (r *ApiRule) IsNoMatch() bool {
return r.LastSamples == 0 && r.LastSeriesFetched != nil && *r.LastSeriesFetched == 0
}
// ApiAlert represents a notifier.AlertingRule state
// for WEB view
// https://github.com/prometheus/compliance/blob/main/alert_generator/specification.md#get-apiv1rules
@@ -235,6 +236,20 @@ func NewAlertAPI(ar *AlertingRule, a *notifier.Alert) *ApiAlert {
return aa
}
func (r *ApiRule) ExtendState() {
if len(r.Alerts) > 0 {
return
}
if r.State == "" {
r.State = "ok"
}
if r.Health != "ok" {
r.State = "unhealthy"
} else if r.IsNoMatch() {
r.State = "nomatch"
}
}
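The precedence here: rules with active alerts keep their firing/pending state, health problems override everything else, and the nomatch heuristic applies only to healthy rules. A miniature sketch of the same decision with simplified field types:

package main

import "fmt"

// apiRule is a tiny stand-in for ApiRule, just enough for extendState.
type apiRule struct {
	state, health     string
	lastSamples       int
	lastSeriesFetched *int
	alerts            int
}

func (r *apiRule) extendState() {
	if r.alerts > 0 {
		return // active alerts keep their firing/pending state
	}
	if r.state == "" {
		r.state = "ok"
	}
	if r.health != "ok" {
		r.state = "unhealthy"
	} else if r.lastSamples == 0 && r.lastSeriesFetched != nil && *r.lastSeriesFetched == 0 {
		r.state = "nomatch"
	}
}

func main() {
	zero := 0
	broken := apiRule{health: "err"}
	empty := apiRule{health: "ok", lastSeriesFetched: &zero}
	broken.extendState()
	empty.extendState()
	fmt.Println(broken.state, empty.state) // unhealthy nomatch
}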
// ToAPI returns ApiGroup representation of g
func (g *Group) ToAPI() *ApiGroup {
g.mu.RLock()
@@ -252,6 +267,7 @@ func (g *Group) ToAPI() *ApiGroup {
Headers: headersToStrings(g.Headers),
NotifierHeaders: headersToStrings(g.NotifierHeaders),
Labels: g.Labels,
States: make(map[string]int),
}
if g.EvalOffset != nil {
ag.EvalOffset = g.EvalOffset.Seconds()
@@ -259,9 +275,10 @@ func (g *Group) ToAPI() *ApiGroup {
if g.EvalDelay != nil {
ag.EvalDelay = g.EvalDelay.Seconds()
}
ag.Rules = make([]ApiRule, 0)
ag.Rules = make([]ApiRule, 0, len(g.Rules))
for _, r := range g.Rules {
ag.Rules = append(ag.Rules, r.ToAPI())
ar := r.ToAPI()
ag.Rules = append(ag.Rules, ar)
}
return &ag
}


@@ -11,7 +11,7 @@
<path d="M224.163 175.27a1.9 1.9 0 0 0 2.8 0l6-5.9a2.1 2.1 0 0 0 .2-2.7 1.9 1.9 0 0 0-3-.2l-2.6 2.6v-5.2c0-1.54-1.667-2.502-3-1.732-.619.357-1 1.017-1 1.732v5.2l-2.6-2.6a1.9 1.9 0 0 0-3 .2 2.1 2.1 0 0 0 .2 2.7zm-16.459-23.297h36c1.54 0 2.502-1.667 1.732-3a2 2 0 0 0-1.732-1h-36c-1.54 0-2.502 1.667-1.732 3 .357.619 1.017 1 1.732 1m36 4h-36c-1.54 0-2.502 1.667-1.732 3 .357.619 1.017 1 1.732 1h36c1.54 0 2.502-1.667 1.732-3a2 2 0 0 0-1.732-1m-16.59-23.517a1.9 1.9 0 0 0-2.8 0l-6 5.9a2.1 2.1 0 0 0-.2 2.7 1.9 1.9 0 0 0 3 .2l2.6-2.6v5.2c0 1.54 1.667 2.502 3 1.732.619-.357 1-1.017 1-1.732v-5.2l2.6 2.6a1.9 1.9 0 0 0 3-.2 2.1 2.1 0 0 0-.2-2.7z"/>
</symbol>
<symbol id="filter" viewBox="-10 -10 320 310">
<symbol id="state" viewBox="-10 -10 320 310">
<path d="M288.953 0h-277c-5.522 0-10 4.478-10 10v49.531c0 5.522 4.478 10 10 10h12.372l91.378 107.397v113.978a10 10 0 0 0 15.547 8.32l49.5-33a10 10 0 0 0 4.453-8.32v-80.978l91.378-107.397h12.372c5.522 0 10-4.478 10-10V10c0-5.522-4.477-10-10-10M167.587 166.77a10 10 0 0 0-2.384 6.48v79.305l-29.5 19.666V173.25a10 10 0 0 0-2.384-6.48L50.585 69.531h199.736zM278.953 49.531h-257V20h257z"/>
</symbol>


@@ -8,9 +8,9 @@ function actionAll(isCollapse) {
});
}
function groupFilter(key) {
function groupForState(key) {
if (key) {
location.href = `?filter=${key}`;
location.href = `?state=${key}`;
} else {
window.location = window.location.pathname;
}


@@ -42,7 +42,7 @@ func TestErrGroupConcurrent(_ *testing.T) {
const writersN = 4
payload := make(chan error, writersN)
for i := 0; i < writersN; i++ {
for range writersN {
go func() {
for err := range payload {
eg.Add(err)
@@ -51,7 +51,7 @@ func TestErrGroupConcurrent(_ *testing.T) {
}
const iterations = 500
for i := 0; i < iterations; i++ {
for i := range iterations {
payload <- fmt.Errorf("error %d", i)
if i%10 == 0 {
_ = eg.Err()


@@ -1,9 +1,11 @@
package main
import (
"cmp"
"embed"
"encoding/json"
"fmt"
"math"
"net/http"
"slices"
"strconv"
@@ -50,6 +52,13 @@ var (
"alert": rule.TypeAlerting,
"record": rule.TypeRecording,
}
// The "recovering", "noData", "normal", "error" states are used by Grafana.
// Ignore "recovering" since it is not currently acknowledged by vmalert,
// treat "noData" as an alias for "nomatch",
// treat "normal" as an alias for "inactive",
// treat "error" as an alias for "unhealthy"
ruleStates = []string{"ok", "nomatch", "inactive", "firing", "pending", "unhealthy", "recovering", "noData", "normal", "error"}
)
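A compact sketch of the alias mapping that newRulesFilter applies below; normalizeState is a hypothetical name used only for illustration:

package main

import "fmt"

// normalizeState rewrites Grafana state names to the internal ones used
// by vmalert and leaves all other values untouched.
func normalizeState(v string) string {
	switch v {
	case "noData":
		return "nomatch"
	case "normal":
		return "inactive"
	case "error":
		return "unhealthy"
	}
	return v
}

func main() {
	for _, s := range []string{"noData", "normal", "error", "firing"} {
		fmt.Println(s, "->", normalizeState(s))
	}
}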
type requestHandler struct {
@@ -63,6 +72,14 @@ var (
staticServer = http.StripPrefix("/vmalert", staticHandler)
)
func marshalJson(v any, kind string) ([]byte, *httpserver.ErrorWithStatusCode) {
data, err := json.Marshal(v)
if err != nil {
return nil, errResponse(fmt.Errorf("failed to marshal %s: %s", kind, err), http.StatusInternalServerError)
}
return data, nil
}
func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
if strings.HasPrefix(r.URL.Path, "/vmalert/static") {
staticServer.ServeHTTP(w, r)
@@ -94,40 +111,32 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
httpserver.Errorf(w, r, "%s", err)
return true
}
WriteRuleDetails(w, r, rule)
WriteRule(w, r, rule)
return true
case "/vmalert/groups":
// currently used by the old vmalert UI and Grafana Alerts
case "/vmalert/groups", "/rules":
rf, err := newRulesFilter(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
data := rh.groups(rf)
WriteListGroups(w, r, data, rf.filter)
// only support filtering by a single state
state := ""
if len(rf.states) > 0 {
state = rf.states[0]
rf.states = rf.states[:1]
}
lr := rh.groups(rf)
WriteListGroups(w, r, lr.Data.Groups, state)
return true
case "/vmalert/notifiers":
WriteListTargets(w, r, notifier.GetTargets())
return true
// special cases for Grafana requests,
// served without `vmalert` prefix:
case "/rules":
// Grafana makes an extra request to `/rules`
// handler in addition to `/api/v1/rules` calls in alerts UI
var data []*rule.ApiGroup
rf, err := newRulesFilter(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
data = rh.groups(rf)
WriteListGroups(w, r, data, rf.filter)
return true
case "/vmalert/api/v1/notifiers", "/api/v1/notifiers":
data, err := rh.listNotifiers()
if err != nil {
httpserver.Errorf(w, r, "%s", err)
errJson(w, r, err)
return true
}
w.Header().Set("Content-Type", "application/json")
@@ -135,15 +144,14 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
return true
case "/vmalert/api/v1/rules", "/api/v1/rules":
// path used by Grafana for ng alerting
var data []byte
rf, err := newRulesFilter(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
errJson(w, r, err)
return true
}
data, err = rh.listGroups(rf)
data, err := rh.listGroups(rf)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
errJson(w, r, err)
return true
}
w.Header().Set("Content-Type", "application/json")
@@ -152,14 +160,14 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
case "/vmalert/api/v1/alerts", "/api/v1/alerts":
// path used by Grafana for ng alerting
rf, err := newRulesFilter(r)
gf, err := newGroupsFilter(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
errJson(w, r, err)
return true
}
data, err := rh.listAlerts(rf)
data, err := rh.listAlerts(gf)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
errJson(w, r, err)
return true
}
w.Header().Set("Content-Type", "application/json")
@@ -168,12 +176,12 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
case "/vmalert/api/v1/alert", "/api/v1/alert":
alert, err := rh.getAlert(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
errJson(w, r, err)
return true
}
data, err := json.Marshal(alert)
data, err := marshalJson(alert, "alert")
if err != nil {
httpserver.Errorf(w, r, "failed to marshal alert: %s", err)
errJson(w, r, err)
return true
}
w.Header().Set("Content-Type", "application/json")
@@ -182,16 +190,16 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
case "/vmalert/api/v1/rule", "/api/v1/rule":
apiRule, err := rh.getRule(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
errJson(w, r, err)
return true
}
rwu := rule.ApiRuleWithUpdates{
ApiRule: apiRule,
StateUpdates: apiRule.Updates,
}
data, err := json.Marshal(rwu)
data, err := marshalJson(rwu, "rule")
if err != nil {
httpserver.Errorf(w, r, "failed to marshal rule: %s", err)
errJson(w, r, err)
return true
}
w.Header().Set("Content-Type", "application/json")
@@ -200,12 +208,12 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
case "/vmalert/api/v1/group", "/api/v1/group":
group, err := rh.getGroup(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
errJson(w, r, err)
return true
}
data, err := json.Marshal(group)
data, err := marshalJson(group, "group")
if err != nil {
httpserver.Errorf(w, r, "failed to marshal group: %s", err)
errJson(w, r, err)
return true
}
w.Header().Set("Content-Type", "application/json")
@@ -225,10 +233,10 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
}
}
func (rh *requestHandler) getGroup(r *http.Request) (*rule.ApiGroup, error) {
func (rh *requestHandler) getGroup(r *http.Request) (*rule.ApiGroup, *httpserver.ErrorWithStatusCode) {
groupID, err := strconv.ParseUint(r.FormValue(rule.ParamGroupID), 10, 64)
if err != nil {
return nil, fmt.Errorf("failed to read %q param: %w", rule.ParamGroupID, err)
return nil, errResponse(fmt.Errorf("failed to read %q param: %w", rule.ParamGroupID, err), http.StatusBadRequest)
}
obj, err := rh.m.groupAPI(groupID)
if err != nil {
@@ -237,14 +245,14 @@ func (rh *requestHandler) getGroup(r *http.Request) (*rule.ApiGroup, error) {
return obj, nil
}
func (rh *requestHandler) getRule(r *http.Request) (rule.ApiRule, error) {
func (rh *requestHandler) getRule(r *http.Request) (rule.ApiRule, *httpserver.ErrorWithStatusCode) {
groupID, err := strconv.ParseUint(r.FormValue(rule.ParamGroupID), 10, 64)
if err != nil {
return rule.ApiRule{}, fmt.Errorf("failed to read %q param: %w", rule.ParamGroupID, err)
return rule.ApiRule{}, errResponse(fmt.Errorf("failed to read %q param: %w", rule.ParamGroupID, err), http.StatusBadRequest)
}
ruleID, err := strconv.ParseUint(r.FormValue(rule.ParamRuleID), 10, 64)
if err != nil {
return rule.ApiRule{}, fmt.Errorf("failed to read %q param: %w", rule.ParamRuleID, err)
return rule.ApiRule{}, errResponse(fmt.Errorf("failed to read %q param: %w", rule.ParamRuleID, err), http.StatusBadRequest)
}
obj, err := rh.m.ruleAPI(groupID, ruleID)
if err != nil {
@@ -253,14 +261,14 @@ func (rh *requestHandler) getRule(r *http.Request) (rule.ApiRule, error) {
return obj, nil
}
func (rh *requestHandler) getAlert(r *http.Request) (*rule.ApiAlert, error) {
func (rh *requestHandler) getAlert(r *http.Request) (*rule.ApiAlert, *httpserver.ErrorWithStatusCode) {
groupID, err := strconv.ParseUint(r.FormValue(rule.ParamGroupID), 10, 64)
if err != nil {
return nil, fmt.Errorf("failed to read %q param: %w", rule.ParamGroupID, err)
return nil, errResponse(fmt.Errorf("failed to read %q param: %w", rule.ParamGroupID, err), http.StatusBadRequest)
}
alertID, err := strconv.ParseUint(r.FormValue(rule.ParamAlertID), 10, 64)
if err != nil {
return nil, fmt.Errorf("failed to read %q param: %w", rule.ParamAlertID, err)
return nil, errResponse(fmt.Errorf("failed to read %q param: %w", rule.ParamAlertID, err), http.StatusBadRequest)
}
a, err := rh.m.alertAPI(groupID, alertID)
if err != nil {
@@ -270,28 +278,76 @@ func (rh *requestHandler) getAlert(r *http.Request) (*rule.ApiAlert, error) {
}
type listGroupsResponse struct {
Status string `json:"status"`
Data struct {
Status string `json:"status"`
Page int `json:"page,omitempty"`
TotalPages int `json:"total_pages,omitempty"`
TotalGroups int `json:"total_groups,omitempty"`
TotalRules int `json:"total_rules,omitempty"`
Data struct {
Groups []*rule.ApiGroup `json:"groups"`
} `json:"data"`
}
// see https://prometheus.io/docs/prometheus/latest/querying/api/#rules
type rulesFilter struct {
files []string
groupNames []string
ruleNames []string
ruleType string
excludeAlerts bool
filter string
dsType config.Type
type groupsFilter struct {
groupNames []string
files []string
dsType config.Type
}
func newRulesFilter(r *http.Request) (*rulesFilter, error) {
rf := &rulesFilter{}
query := r.URL.Query()
func newGroupsFilter(r *http.Request) (*groupsFilter, *httpserver.ErrorWithStatusCode) {
_ = r.ParseForm()
vs := r.Form
gf := &groupsFilter{
groupNames: vs["rule_group[]"],
files: vs["file[]"],
}
dsType := vs.Get("datasource_type")
if len(dsType) > 0 {
if config.SupportedType(dsType) {
gf.dsType = config.NewRawType(dsType)
} else {
return nil, errResponse(fmt.Errorf(`invalid parameter "datasource_type": not supported value %q`, dsType), http.StatusBadRequest)
}
}
return gf, nil
}
ruleTypeParam := query.Get("type")
func (gf *groupsFilter) matches(group *rule.Group) bool {
if len(gf.groupNames) > 0 && !slices.Contains(gf.groupNames, group.Name) {
return false
}
if len(gf.files) > 0 && !slices.Contains(gf.files, group.File) {
return false
}
if len(gf.dsType.Name) > 0 && gf.dsType.String() != group.Type.String() {
return false
}
return true
}
// see https://prometheus.io/docs/prometheus/latest/querying/api/#rules
type rulesFilter struct {
gf *groupsFilter
ruleNames []string
ruleType string
excludeAlerts bool
states []string
maxGroups int
pageNum int
search string
extendedStates bool
}
func newRulesFilter(r *http.Request) (*rulesFilter, *httpserver.ErrorWithStatusCode) {
gf, err := newGroupsFilter(r)
if err != nil {
return nil, err
}
var rf rulesFilter
rf.gf = gf
vs := r.Form
ruleTypeParam := vs.Get("type")
if len(ruleTypeParam) > 0 {
if ruleType, ok := ruleTypeMap[ruleTypeParam]; ok {
rf.ruleType = ruleType
@@ -300,102 +356,155 @@ func newRulesFilter(r *http.Request) (*rulesFilter, error) {
}
}
dsType := query.Get("datasource_type")
if len(dsType) > 0 {
if config.SupportedType(dsType) {
rf.dsType = config.NewRawType(dsType)
} else {
return nil, errResponse(fmt.Errorf(`invalid parameter "datasource_type": not supported value %q`, dsType), http.StatusBadRequest)
}
states := vs["state"]
if len(states) == 0 {
states = vs["filter"]
}
filter := strings.ToLower(query.Get("filter"))
if len(filter) > 0 {
if filter == "nomatch" || filter == "unhealthy" {
rf.filter = filter
} else {
return nil, errResponse(fmt.Errorf(`invalid parameter "filter": not supported value %q`, filter), http.StatusBadRequest)
for _, s := range states {
values := strings.Split(s, ",")
for _, v := range values {
if len(v) == 0 {
continue
}
if !slices.Contains(ruleStates, v) {
return nil, errResponse(fmt.Errorf(`invalid parameter "state": contains not supported value %q`, v), http.StatusBadRequest)
}
// Replace grafana states with supported internal states
switch v {
case "noData":
v = "nomatch"
case "normal":
v = "inactive"
case "error":
v = "unhealthy"
}
rf.states = append(rf.states, v)
}
}
rf.excludeAlerts = httputil.GetBool(r, "exclude_alerts")
rf.ruleNames = append([]string{}, r.Form["rule_name[]"]...)
rf.groupNames = append([]string{}, r.Form["rule_group[]"]...)
rf.files = append([]string{}, r.Form["file[]"]...)
return rf, nil
rf.extendedStates = httputil.GetBool(r, "extended_states")
rf.ruleNames = append([]string{}, vs["rule_name[]"]...)
rf.search = strings.ToLower(vs.Get("search"))
pageNum := vs.Get("page_num")
maxGroups := vs.Get("group_limit")
if pageNum != "" {
if maxGroups == "" {
return nil, errResponse(fmt.Errorf(`"group_limit" needs to be present in order to paginate over the groups`), http.StatusBadRequest)
}
v, err := strconv.Atoi(pageNum)
if err != nil || v <= 0 {
return nil, errResponse(fmt.Errorf(`"page_num" is expected to be a positive number, found %q`, pageNum), http.StatusBadRequest)
}
rf.pageNum = v
}
if maxGroups != "" {
v, err := strconv.Atoi(maxGroups)
if err != nil || v <= 0 {
return nil, errResponse(fmt.Errorf(`"group_limit" is expected to be a positive number, found %q`, maxGroups), http.StatusBadRequest)
}
rf.maxGroups = v
}
return &rf, nil
}
func (rf *rulesFilter) matchesGroup(group *rule.Group) bool {
if len(rf.groupNames) > 0 && !slices.Contains(rf.groupNames, group.Name) {
func (rf *rulesFilter) matchesRule(r *rule.ApiRule) bool {
if rf.ruleType != "" && rf.ruleType != r.Type {
return false
}
if len(rf.files) > 0 && !slices.Contains(rf.files, group.File) {
if len(rf.ruleNames) > 0 && !slices.Contains(rf.ruleNames, r.Name) {
return false
}
if len(rf.dsType.Name) > 0 && rf.dsType.String() != group.Type.String() {
return false
if len(rf.states) == 0 {
return true
}
return true
return slices.Contains(rf.states, r.State)
}
func (rh *requestHandler) groups(rf *rulesFilter) []*rule.ApiGroup {
func (rh *requestHandler) groups(rf *rulesFilter) *listGroupsResponse {
rh.m.groupsMu.RLock()
defer rh.m.groupsMu.RUnlock()
groups := make([]*rule.ApiGroup, 0)
skipGroups := (rf.pageNum - 1) * rf.maxGroups
lr := &listGroupsResponse{
Status: "success",
}
lr.Data.Groups = make([]*rule.ApiGroup, 0)
if skipGroups >= len(rh.m.groups) {
return lr
}
// sort list of groups for deterministic output
groups := make([]*rule.Group, 0, len(rh.m.groups))
for _, group := range rh.m.groups {
if !rf.matchesGroup(group) {
groups = append(groups, group)
}
slices.SortFunc(groups, func(a, b *rule.Group) int {
nameCmp := cmp.Compare(a.Name, b.Name)
if nameCmp != 0 {
return nameCmp
}
return cmp.Compare(a.File, b.File)
})
for _, group := range groups {
if !rf.gf.matches(group) {
continue
}
groupFound := len(rf.search) == 0 || strings.Contains(strings.ToLower(group.Name), rf.search) || strings.Contains(strings.ToLower(group.File), rf.search)
g := group.ToAPI()
// the returned list should always be non-nil
// https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4221
filteredRules := make([]rule.ApiRule, 0)
for _, rule := range g.Rules {
if rf.ruleType != "" && rf.ruleType != rule.Type {
if !groupFound && !strings.Contains(strings.ToLower(rule.Name), rf.search) {
continue
}
if len(rf.ruleNames) > 0 && !slices.Contains(rf.ruleNames, rule.Name) {
continue
if rf.extendedStates {
rule.ExtendState()
}
if (rule.LastError == "" && rf.filter == "unhealthy") || (!isNoMatch(rule) && rf.filter == "nomatch") {
if !rf.matchesRule(&rule) {
continue
}
if rf.excludeAlerts {
rule.Alerts = nil
}
if rule.LastError != "" {
g.Unhealthy++
} else {
g.Healthy++
}
if isNoMatch(rule) {
g.NoMatch++
}
g.States[rule.State]++
filteredRules = append(filteredRules, rule)
}
g.Rules = filteredRules
groups = append(groups, g)
}
// sort list of groups for deterministic output
slices.SortFunc(groups, func(a, b *rule.ApiGroup) int {
if a.Name != b.Name {
return strings.Compare(a.Name, b.Name)
if len(g.Rules) == 0 || len(filteredRules) > 0 {
if rf.maxGroups > 0 {
lr.TotalGroups++
lr.TotalRules += len(filteredRules)
}
if skipGroups > 0 {
skipGroups--
continue
}
if rf.maxGroups == 0 || len(lr.Data.Groups) < rf.maxGroups {
g.Rules = filteredRules
lr.Data.Groups = append(lr.Data.Groups, g)
}
}
return strings.Compare(a.File, b.File)
})
return groups
}
if rf.maxGroups > 0 {
lr.Page = rf.pageNum
lr.TotalPages = max(int(math.Ceil(float64(lr.TotalGroups)/float64(rf.maxGroups))), 1)
}
return lr
}
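The pagination arithmetic above can be sanity-checked in isolation. Assuming 25 matching groups and group_limit=10, the handler reports three total pages, and page_num=3 carries the remaining five groups:

package main

import (
	"fmt"
	"math"
)

func main() {
	totalGroups, groupLimit := 25, 10
	totalPages := max(int(math.Ceil(float64(totalGroups)/float64(groupLimit))), 1)
	skipped := (3 - 1) * groupLimit // groups skipped before page_num=3
	fmt.Println(totalPages, totalGroups-skipped) // 3 5
}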
func (rh *requestHandler) listGroups(rf *rulesFilter) ([]byte, error) {
lr := listGroupsResponse{Status: "success"}
lr.Data.Groups = rh.groups(rf)
if rf.pageNum > 1 && len(lr.Data.Groups) == 0 {
return nil, errResponse(fmt.Errorf(`page_num exceeds total amount of pages`), http.StatusBadRequest)
}
b, err := json.Marshal(lr)
if err != nil {
return nil, &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf(`error encoding list of active alerts: %w`, err),
StatusCode: http.StatusInternalServerError,
}
}
return b, nil
}
func (rh *requestHandler) listGroups(rf *rulesFilter) ([]byte, *httpserver.ErrorWithStatusCode) {
lr := rh.groups(rf)
if lr.Page > lr.TotalPages {
return nil, errResponse(fmt.Errorf(`page_num=%d exceeds total amount of pages in result=%d`, lr.Page, lr.TotalPages), http.StatusBadRequest)
}
b, err := json.Marshal(lr)
if err != nil {
return nil, errResponse(fmt.Errorf(`error encoding list of groups: %w`, err), http.StatusInternalServerError)
}
return b, nil
}
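Taken together, the new groups() and listGroups() implement pagination over the filtered group list. A worked example of the math (group counts assumed for illustration): with 5 loaded groups of which 3 match the filters and group_limit=2, TotalPages = max(ceil(3/2), 1) = 2; page_num=1 returns the first two matching groups, page_num=2 returns the remaining one, and page_num=3 is rejected with 400 because lr.Page=3 exceeds lr.TotalPages=2.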
@@ -434,14 +543,14 @@ func (rh *requestHandler) groupAlerts() []rule.GroupAlerts {
return gAlerts
}
func (rh *requestHandler) listAlerts(rf *rulesFilter) ([]byte, error) {
func (rh *requestHandler) listAlerts(gf *groupsFilter) ([]byte, *httpserver.ErrorWithStatusCode) {
rh.m.groupsMu.RLock()
defer rh.m.groupsMu.RUnlock()
lr := listAlertsResponse{Status: "success"}
lr.Data.Alerts = make([]*rule.ApiAlert, 0)
for _, group := range rh.m.groups {
if !rf.matchesGroup(group) {
if !gf.matches(group) {
continue
}
g := group.ToAPI()
@@ -460,10 +569,7 @@ func (rh *requestHandler) listAlerts(rf *rulesFilter) ([]byte, error) {
b, err := json.Marshal(lr)
if err != nil {
return nil, &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf(`error encoding list of active alerts: %w`, err),
StatusCode: http.StatusInternalServerError,
}
return nil, errResponse(fmt.Errorf(`error encoding list of active alerts: %w`, err), http.StatusInternalServerError)
}
return b, nil
}
@@ -475,7 +581,7 @@ type listNotifiersResponse struct {
} `json:"data"`
}
func (rh *requestHandler) listNotifiers() ([]byte, error) {
func (rh *requestHandler) listNotifiers() ([]byte, *httpserver.ErrorWithStatusCode) {
targets := notifier.GetTargets()
lr := listNotifiersResponse{Status: "success"}
@@ -497,10 +603,7 @@ func (rh *requestHandler) listNotifiers() ([]byte, error) {
b, err := json.Marshal(lr)
if err != nil {
return nil, &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf(`error encoding list of notifiers: %w`, err),
StatusCode: http.StatusInternalServerError,
}
return nil, errResponse(fmt.Errorf(`error encoding list of notifiers: %w`, err), http.StatusInternalServerError)
}
return b, nil
}
@@ -511,3 +614,8 @@ func errResponse(err error, sc int) *httpserver.ErrorWithStatusCode {
StatusCode: sc,
}
}
func errJson(w http.ResponseWriter, r *http.Request, err *httpserver.ErrorWithStatusCode) {
w.Header().Set("Content-Type", "application/json")
httpserver.Errorf(w, r, `{"error":%q,"errorType":%d}`, err, err.StatusCode)
}
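With this format string, an error produced by the handlers above reaches the client as a small JSON object; e.g. a pagination failure would render roughly as (a sketch derived from the verb substitutions, not a captured response):
{"error":"page_num=3 exceeds total amount of pages in result=2","errorType":400}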


@@ -12,7 +12,7 @@
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
) %}
{% func Controls(prefix, currentIcon, currentText string, icons, filters map[string]string, search bool) %}
{% func Controls(prefix, currentIcon, currentText string, icons, states map[string]string, search bool) %}
<div class="btn-toolbar mb-3" role="toolbar">
<div class="d-flex gap-2 justify-content-between w-100">
<div class="d-flex gap-2 align-items-center">
@@ -28,10 +28,10 @@
<use href="{%s prefix %}static/icons/icons.svg#expand"/>
</svg>
</a>
{% if len(filters) > 0 %}
{% if len(states) > 0 %}
<span class="d-none d-md-inline-block">Filter by status:</span>
<svg class="d-md-none" width="20" height="20">
<use href="{%s prefix %}static/icons/icons.svg#filter">
<use href="{%s prefix %}static/icons/icons.svg#state">
</svg>
<div class="dropdown">
<button
@@ -46,10 +46,10 @@
</svg>
</button>
<ul class="dropdown-menu">
{% for key, title := range filters %}
{% for key, title := range states %}
{% if title != currentText %}
<li>
<a class="dropdown-item" onclick="groupFilter('{%s key %}')">
<a class="dropdown-item" onclick="groupForState('{%s key %}')">
<span class="d-none d-md-inline-block">{%s title %}</span>
<svg class="d-md-none" width="22" height="22">
<use href="{%s prefix %}static/icons/icons.svg#{%s icons[key] %}"/>
@@ -97,10 +97,10 @@
{%= tpl.Footer(r) %}
{% endfunc %}
{% func ListGroups(r *http.Request, groups []*rule.ApiGroup, filter string) %}
{% func ListGroups(r *http.Request, groups []*rule.ApiGroup, state string) %}
{%code
prefix := vmalertutil.Prefix(r.URL.Path)
filters := map[string]string{
states := map[string]string{
"": "All",
"unhealthy": "Unhealthy",
"nomatch": "No Match",
@@ -110,14 +110,14 @@
"unhealthy": "unhealthy",
"nomatch": "nomatch",
}
currentText := filters[filter]
currentIcon := icons[filter]
currentText := states[state]
currentIcon := icons[state]
%}
{%= tpl.Header(r, navItems, "Groups", getLastConfigError()) %}
{%= Controls(prefix, currentIcon, currentText, icons, filters, true) %}
{%= Controls(prefix, currentIcon, currentText, icons, states, true) %}
{% if len(groups) > 0 %}
{% for _, g := range groups %}
<div id="group-{%s g.ID %}" class="w-100 border-0 flex-column vm-group{% if g.Unhealthy > 0 %} alert-danger{% endif %}">
<div id="group-{%s g.ID %}" class="w-100 border-0 flex-column vm-group{% if g.States["unhealthy"] > 0 %} alert-danger{% endif %}">
<span class="d-flex justify-content-between">
<a
class="vm-group-search"
@@ -130,9 +130,9 @@
data-bs-target="#item-{%s g.ID %}"
>
<span class="d-flex gap-2">
{% if g.Unhealthy > 0 %}<span class="badge bg-danger" title="Number of rules with status Error">{%d g.Unhealthy %}</span> {% endif %}
{% if g.NoMatch > 0 %}<span class="badge bg-warning" title="Number of rules with status NoMatch">{%d g.NoMatch %}</span> {% endif %}
<span class="badge bg-success" title="Number of rules with status Ok">{%d g.Healthy %}</span>
{% if g.States["unhealthy"] > 0 %}<span class="badge bg-danger" title="Number of rules with status Error">{%d g.States["unhealthy"] %}</span> {% endif %}
{% if g.States["nomatch"] > 0 %}<span class="badge bg-warning" title="Number of rules with status NoMatch">{%d g.States["nomatch"] %}</span> {% endif %}
<span class="badge bg-success" title="Number of rules with status Ok">{%d g.States["ok"] %}</span>
</span>
</span>
</span>
@@ -189,7 +189,7 @@
<b>record:</b> {%s r.Name %}
{% endif %}
|
{%= seriesFetchedWarn(prefix, r) %}
{%= seriesFetchedWarn(prefix, &r) %}
<span><a target="_blank" href="{%s prefix+r.WebLink() %}">Details</a></span>
</div>
<div class="col-12">
@@ -476,7 +476,7 @@
{% endfunc %}
{% func RuleDetails(r *http.Request, rule rule.ApiRule) %}
{% func Rule(r *http.Request, rule rule.ApiRule) %}
{%code prefix := vmalertutil.Prefix(r.URL.Path) %}
{%= tpl.Header(r, navItems, "", getLastConfigError()) %}
{%code
@@ -661,8 +661,8 @@
<span class="badge bg-warning text-dark" title="This firing state is kept because of `keep_firing_for`">stabilizing</span>
{% endfunc %}
{% func seriesFetchedWarn(prefix string, r rule.ApiRule) %}
{% if isNoMatch(r) %}
{% func seriesFetchedWarn(prefix string, r *rule.ApiRule) %}
{% if r.IsNoMatch() %}
<svg
data-bs-toggle="tooltip"
title="No match! This rule's last evaluation hasn't selected any time series from the datasource.
@@ -673,9 +673,3 @@
</svg>
{% endif %}
{% endfunc %}
{%code
func isNoMatch (r rule.ApiRule) bool {
return r.LastSamples == 0 && r.LastSeriesFetched != nil && *r.LastSeriesFetched == 0
}
%}


@@ -31,7 +31,7 @@ var (
)
//line app/vmalert/web.qtpl:15
func StreamControls(qw422016 *qt422016.Writer, prefix, currentIcon, currentText string, icons, filters map[string]string, search bool) {
func StreamControls(qw422016 *qt422016.Writer, prefix, currentIcon, currentText string, icons, states map[string]string, search bool) {
//line app/vmalert/web.qtpl:15
qw422016.N().S(`
<div class="btn-toolbar mb-3" role="toolbar">
@@ -59,7 +59,7 @@ func StreamControls(qw422016 *qt422016.Writer, prefix, currentIcon, currentText
</a>
`)
//line app/vmalert/web.qtpl:31
if len(filters) > 0 {
if len(states) > 0 {
//line app/vmalert/web.qtpl:31
qw422016.N().S(`
<span class="d-none d-md-inline-block">Filter by status:</span>
@@ -68,7 +68,7 @@ func StreamControls(qw422016 *qt422016.Writer, prefix, currentIcon, currentText
//line app/vmalert/web.qtpl:34
qw422016.E().S(prefix)
//line app/vmalert/web.qtpl:34
qw422016.N().S(`static/icons/icons.svg#filter">
qw422016.N().S(`static/icons/icons.svg#state">
</svg>
<div class="dropdown">
<button
@@ -97,7 +97,7 @@ func StreamControls(qw422016 *qt422016.Writer, prefix, currentIcon, currentText
<ul class="dropdown-menu">
`)
//line app/vmalert/web.qtpl:49
for key, title := range filters {
for key, title := range states {
//line app/vmalert/web.qtpl:49
qw422016.N().S(`
`)
@@ -106,7 +106,7 @@ func StreamControls(qw422016 *qt422016.Writer, prefix, currentIcon, currentText
//line app/vmalert/web.qtpl:50
qw422016.N().S(`
<li>
<a class="dropdown-item" onclick="groupFilter('`)
<a class="dropdown-item" onclick="groupForState('`)
//line app/vmalert/web.qtpl:52
qw422016.E().S(key)
//line app/vmalert/web.qtpl:52
@@ -176,22 +176,22 @@ func StreamControls(qw422016 *qt422016.Writer, prefix, currentIcon, currentText
}
//line app/vmalert/web.qtpl:77
func WriteControls(qq422016 qtio422016.Writer, prefix, currentIcon, currentText string, icons, filters map[string]string, search bool) {
func WriteControls(qq422016 qtio422016.Writer, prefix, currentIcon, currentText string, icons, states map[string]string, search bool) {
//line app/vmalert/web.qtpl:77
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmalert/web.qtpl:77
StreamControls(qw422016, prefix, currentIcon, currentText, icons, filters, search)
StreamControls(qw422016, prefix, currentIcon, currentText, icons, states, search)
//line app/vmalert/web.qtpl:77
qt422016.ReleaseWriter(qw422016)
//line app/vmalert/web.qtpl:77
}
//line app/vmalert/web.qtpl:77
func Controls(prefix, currentIcon, currentText string, icons, filters map[string]string, search bool) string {
func Controls(prefix, currentIcon, currentText string, icons, states map[string]string, search bool) string {
//line app/vmalert/web.qtpl:77
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmalert/web.qtpl:77
WriteControls(qb422016, prefix, currentIcon, currentText, icons, filters, search)
WriteControls(qb422016, prefix, currentIcon, currentText, icons, states, search)
//line app/vmalert/web.qtpl:77
qs422016 := string(qb422016.B)
//line app/vmalert/web.qtpl:77
@@ -324,13 +324,13 @@ func Welcome(r *http.Request) string {
}
//line app/vmalert/web.qtpl:100
func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*rule.ApiGroup, filter string) {
func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*rule.ApiGroup, state string) {
//line app/vmalert/web.qtpl:100
qw422016.N().S(`
`)
//line app/vmalert/web.qtpl:102
prefix := vmalertutil.Prefix(r.URL.Path)
filters := map[string]string{
states := map[string]string{
"": "All",
"unhealthy": "Unhealthy",
"nomatch": "No Match",
@@ -340,8 +340,8 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*rule
"unhealthy": "unhealthy",
"nomatch": "nomatch",
}
currentText := filters[filter]
currentIcon := icons[filter]
currentText := states[state]
currentIcon := icons[state]
//line app/vmalert/web.qtpl:115
qw422016.N().S(`
@@ -352,7 +352,7 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*rule
qw422016.N().S(`
`)
//line app/vmalert/web.qtpl:117
StreamControls(qw422016, prefix, currentIcon, currentText, icons, filters, true)
StreamControls(qw422016, prefix, currentIcon, currentText, icons, states, true)
//line app/vmalert/web.qtpl:117
qw422016.N().S(`
`)
@@ -371,7 +371,7 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*rule
//line app/vmalert/web.qtpl:120
qw422016.N().S(`" class="w-100 border-0 flex-column vm-group`)
//line app/vmalert/web.qtpl:120
if g.Unhealthy > 0 {
if g.States["unhealthy"] > 0 {
//line app/vmalert/web.qtpl:120
qw422016.N().S(` alert-danger`)
//line app/vmalert/web.qtpl:120
@@ -418,11 +418,11 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*rule
<span class="d-flex gap-2">
`)
//line app/vmalert/web.qtpl:133
if g.Unhealthy > 0 {
if g.States["unhealthy"] > 0 {
//line app/vmalert/web.qtpl:133
qw422016.N().S(`<span class="badge bg-danger" title="Number of rules with status Error">`)
//line app/vmalert/web.qtpl:133
qw422016.N().D(g.Unhealthy)
qw422016.N().D(g.States["unhealthy"])
//line app/vmalert/web.qtpl:133
qw422016.N().S(`</span> `)
//line app/vmalert/web.qtpl:133
@@ -431,11 +431,11 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*rule
qw422016.N().S(`
`)
//line app/vmalert/web.qtpl:134
if g.NoMatch > 0 {
if g.States["nomatch"] > 0 {
//line app/vmalert/web.qtpl:134
qw422016.N().S(`<span class="badge bg-warning" title="Number of rules with status NoMatch">`)
//line app/vmalert/web.qtpl:134
qw422016.N().D(g.NoMatch)
qw422016.N().D(g.States["nomatch"])
//line app/vmalert/web.qtpl:134
qw422016.N().S(`</span> `)
//line app/vmalert/web.qtpl:134
@@ -444,7 +444,7 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*rule
qw422016.N().S(`
<span class="badge bg-success" title="Number of rules with status Ok">`)
//line app/vmalert/web.qtpl:135
qw422016.N().D(g.Healthy)
qw422016.N().D(g.States["ok"])
//line app/vmalert/web.qtpl:135
qw422016.N().S(`</span>
</span>
@@ -617,7 +617,7 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*rule
|
`)
//line app/vmalert/web.qtpl:192
streamseriesFetchedWarn(qw422016, prefix, r)
streamseriesFetchedWarn(qw422016, prefix, &r)
//line app/vmalert/web.qtpl:192
qw422016.N().S(`
<span><a target="_blank" href="`)
@@ -750,22 +750,22 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*rule
}
//line app/vmalert/web.qtpl:234
func WriteListGroups(qq422016 qtio422016.Writer, r *http.Request, groups []*rule.ApiGroup, filter string) {
func WriteListGroups(qq422016 qtio422016.Writer, r *http.Request, groups []*rule.ApiGroup, state string) {
//line app/vmalert/web.qtpl:234
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmalert/web.qtpl:234
StreamListGroups(qw422016, r, groups, filter)
StreamListGroups(qw422016, r, groups, state)
//line app/vmalert/web.qtpl:234
qt422016.ReleaseWriter(qw422016)
//line app/vmalert/web.qtpl:234
}
//line app/vmalert/web.qtpl:234
func ListGroups(r *http.Request, groups []*rule.ApiGroup, filter string) string {
func ListGroups(r *http.Request, groups []*rule.ApiGroup, state string) string {
//line app/vmalert/web.qtpl:234
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmalert/web.qtpl:234
WriteListGroups(qb422016, r, groups, filter)
WriteListGroups(qb422016, r, groups, state)
//line app/vmalert/web.qtpl:234
qs422016 := string(qb422016.B)
//line app/vmalert/web.qtpl:234
@@ -1462,7 +1462,7 @@ func Alert(r *http.Request, alert *rule.ApiAlert) string {
}
//line app/vmalert/web.qtpl:479
func StreamRuleDetails(qw422016 *qt422016.Writer, r *http.Request, rule rule.ApiRule) {
func StreamRule(qw422016 *qt422016.Writer, r *http.Request, rule rule.ApiRule) {
//line app/vmalert/web.qtpl:479
qw422016.N().S(`
`)
@@ -1859,22 +1859,22 @@ func StreamRuleDetails(qw422016 *qt422016.Writer, r *http.Request, rule rule.Api
}
//line app/vmalert/web.qtpl:642
func WriteRuleDetails(qq422016 qtio422016.Writer, r *http.Request, rule rule.ApiRule) {
func WriteRule(qq422016 qtio422016.Writer, r *http.Request, rule rule.ApiRule) {
//line app/vmalert/web.qtpl:642
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmalert/web.qtpl:642
StreamRuleDetails(qw422016, r, rule)
StreamRule(qw422016, r, rule)
//line app/vmalert/web.qtpl:642
qt422016.ReleaseWriter(qw422016)
//line app/vmalert/web.qtpl:642
}
//line app/vmalert/web.qtpl:642
func RuleDetails(r *http.Request, rule rule.ApiRule) string {
func Rule(r *http.Request, rule rule.ApiRule) string {
//line app/vmalert/web.qtpl:642
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmalert/web.qtpl:642
WriteRuleDetails(qb422016, r, rule)
WriteRule(qb422016, r, rule)
//line app/vmalert/web.qtpl:642
qs422016 := string(qb422016.B)
//line app/vmalert/web.qtpl:642
@@ -2015,12 +2015,12 @@ func badgeStabilizing() string {
}
//line app/vmalert/web.qtpl:664
func streamseriesFetchedWarn(qw422016 *qt422016.Writer, prefix string, r rule.ApiRule) {
func streamseriesFetchedWarn(qw422016 *qt422016.Writer, prefix string, r *rule.ApiRule) {
//line app/vmalert/web.qtpl:664
qw422016.N().S(`
`)
//line app/vmalert/web.qtpl:665
if isNoMatch(r) {
if r.IsNoMatch() {
//line app/vmalert/web.qtpl:665
qw422016.N().S(`
<svg
@@ -2045,7 +2045,7 @@ func streamseriesFetchedWarn(qw422016 *qt422016.Writer, prefix string, r rule.Ap
}
//line app/vmalert/web.qtpl:675
func writeseriesFetchedWarn(qq422016 qtio422016.Writer, prefix string, r rule.ApiRule) {
func writeseriesFetchedWarn(qq422016 qtio422016.Writer, prefix string, r *rule.ApiRule) {
//line app/vmalert/web.qtpl:675
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmalert/web.qtpl:675
@@ -2056,7 +2056,7 @@ func writeseriesFetchedWarn(qq422016 qtio422016.Writer, prefix string, r rule.Ap
}
//line app/vmalert/web.qtpl:675
func seriesFetchedWarn(prefix string, r rule.ApiRule) string {
func seriesFetchedWarn(prefix string, r *rule.ApiRule) string {
//line app/vmalert/web.qtpl:675
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmalert/web.qtpl:675
@@ -2069,8 +2069,3 @@ func seriesFetchedWarn(prefix string, r rule.ApiRule) string {
return qs422016
//line app/vmalert/web.qtpl:675
}
//line app/vmalert/web.qtpl:678
func isNoMatch(r rule.ApiRule) bool {
return r.LastSamples == 0 && r.LastSeriesFetched != nil && *r.LastSeriesFetched == 0
}


@@ -210,7 +210,7 @@ func TestHandler(t *testing.T) {
}
})
t.Run("/api/v1/rules&filters", func(t *testing.T) {
t.Run("/api/v1/rules&states", func(t *testing.T) {
check := func(url string, statusCode, expGroups, expRules int) {
t.Helper()
lr := listGroupsResponse{}
@@ -252,9 +252,15 @@ func TestHandler(t *testing.T) {
check("/api/v1/rules?rule_group[]=group&file[]=foo", 200, 0, 0)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml", 200, 3, 6)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=foo", 200, 3, 0)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=foo", 200, 0, 0)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=alert", 200, 3, 3)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=alert&rule_name[]=record", 200, 3, 6)
check("/api/v1/rules?group_limit=1", 200, 1, 2)
check("/api/v1/rules?group_limit=1&type=alert", 200, 1, 1)
check("/api/v1/rules?group_limit=1&type=record", 200, 1, 1)
check("/api/v1/rules?group_limit=2", 200, 2, 4)
check(fmt.Sprintf("/api/v1/rules?group_limit=1&page_num=%d", 1), 200, 1, 2)
})
t.Run("/api/v1/rules&exclude_alerts=true", func(t *testing.T) {
// check if response returns active alerts by default


@@ -13,6 +13,7 @@ import (
"net/url"
"os"
"regexp"
"slices"
"sort"
"strconv"
"strings"
@@ -28,6 +29,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs/fscore"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
@@ -65,10 +67,11 @@ type AuthConfig struct {
type UserInfo struct {
Name string `yaml:"name,omitempty"`
BearerToken string `yaml:"bearer_token,omitempty"`
AuthToken string `yaml:"auth_token,omitempty"`
Username string `yaml:"username,omitempty"`
Password string `yaml:"password,omitempty"`
BearerToken string `yaml:"bearer_token,omitempty"`
JWT *JWTConfig `yaml:"jwt,omitempty"`
AuthToken string `yaml:"auth_token,omitempty"`
Username string `yaml:"username,omitempty"`
Password string `yaml:"password,omitempty"`
URLPrefix *URLPrefix `yaml:"url_prefix,omitempty"`
DiscoverBackendIPs *bool `yaml:"discover_backend_ips,omitempty"`
@@ -89,6 +92,8 @@ type UserInfo struct {
MetricLabels map[string]string `yaml:"metric_labels,omitempty"`
AccessLog *AccessLog `yaml:"access_log,omitempty"`
concurrencyLimitCh chan struct{}
concurrencyLimitReached *metrics.Counter
@@ -101,11 +106,40 @@ type UserInfo struct {
requestsDuration *metrics.Summary
}
// AccessLog represents access log configuration.
type AccessLog struct {
Filters *AccessLogFilters `yaml:"filters"`
}
// AccessLogFilters represents the list of filters for suppressing access log entries
type AccessLogFilters struct {
// SkipStatusCodes is a list of HTTP status codes for which access logs will be skipped
SkipStatusCodes []int `yaml:"skip_status_codes"`
}
func (ui *UserInfo) logRequest(r *http.Request, userName string, statusCode int, duration time.Duration) {
if ui.AccessLog == nil {
return
}
filters := ui.AccessLog.Filters
if filters != nil && len(filters.SkipStatusCodes) > 0 {
if slices.Contains(filters.SkipStatusCodes, statusCode) {
return
}
}
remoteAddr := httpserver.GetQuotedRemoteAddr(r)
requestURI := httpserver.GetRequestURI(r)
logger.Infof("access_log request_host=%q request_uri=%q status_code=%d remote_addr=%s user_agent=%q referer=%q duration_ms=%d username=%q",
r.Host, requestURI, statusCode, remoteAddr, r.UserAgent(), r.Referer(), duration.Milliseconds(), userName)
}
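For reference, a request that passes the filters emits a single line of this shape (copied from the TestLogRequest expectations later in this diff; the empty fields come from the bare test request):
access_log request_host="localhost:8080" request_uri="" status_code=200 remote_addr="" user_agent="" referer="" duration_ms=10 username="foo"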
// HeadersConf represents config for request and response headers.
type HeadersConf struct {
RequestHeaders []*Header `yaml:"headers,omitempty"`
ResponseHeaders []*Header `yaml:"response_headers,omitempty"`
KeepOriginalHost *bool `yaml:"keep_original_host,omitempty"`
RequestHeaders []*Header `yaml:"headers,omitempty"`
ResponseHeaders []*Header `yaml:"response_headers,omitempty"`
KeepOriginalHost *bool `yaml:"keep_original_host,omitempty"`
hasAnyPlaceHolders bool
}
func (ui *UserInfo) beginConcurrencyLimit(ctx context.Context) error {
@@ -113,7 +147,7 @@ func (ui *UserInfo) beginConcurrencyLimit(ctx context.Context) error {
case ui.concurrencyLimitCh <- struct{}{}:
return nil
default:
// The number of concurrently executed requests for the given user equals the limt.
// The number of concurrently executed requests for the given user equals the limit.
// Wait until some of the currently executed requests are finished, so the current request could be executed.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
select {
@@ -348,6 +382,7 @@ func (bus *backendURLs) add(u *url.URL) {
url: u,
healthCheckContext: bus.healthChecksContext,
healthCheckWG: &bus.healthChecksWG,
hasPlaceHolders: hasAnyPlaceholders(u),
})
}
@@ -365,6 +400,8 @@ type backendURL struct {
concurrentRequests atomic.Int32
url *url.URL
hasPlaceHolders bool
}
func (bu *backendURL) isBroken() bool {
@@ -588,7 +625,7 @@ func getLeastLoadedBackendURL(bus []*backendURL, atomicCounter *atomic.Uint32) *
// Slow path - select other backend urls.
n := atomicCounter.Add(1) - 1
for i := uint32(0); i < uint32(len(bus)); i++ {
for i := range uint32(len(bus)) {
idx := (n + i) % uint32(len(bus))
bu := bus[idx]
if bu.isBroken() {
@@ -598,7 +635,7 @@ func getLeastLoadedBackendURL(bus []*backendURL, atomicCounter *atomic.Uint32) *
// The Load() in front of CompareAndSwap() avoids CAS overhead for items with values bigger than 0.
if bu.concurrentRequests.Load() == 0 && bu.concurrentRequests.CompareAndSwap(0, 1) {
atomicCounter.CompareAndSwap(n+1, idx+1)
// There is no need in the call bu.get(), because we already incremented bu.concrrentRequests above.
// There is no need in the call bu.get(), because we already incremented bu.concurrentRequests above.
return bu
}
}
@@ -799,6 +836,9 @@ var (
// authUsers contains the currently loaded auth users
authUsers atomic.Pointer[map[string]*UserInfo]
// jwt authentication cache
jwtAuthCache atomic.Pointer[jwtCache]
authConfigWG sync.WaitGroup
stopCh chan struct{}
)
@@ -838,6 +878,16 @@ func reloadAuthConfigData(data []byte) (bool, error) {
return false, fmt.Errorf("failed to parse auth config: %w", err)
}
jui, oidcDP, err := parseJWTUsers(ac)
if err != nil {
return false, fmt.Errorf("failed to parse JWT users from auth config: %w", err)
}
oidcDP.startDiscovery()
jwtc := &jwtCache{
users: jui,
oidcDP: oidcDP,
}
m, err := parseAuthConfigUsers(ac)
if err != nil {
return false, fmt.Errorf("failed to parse users from auth config: %w", err)
@@ -854,9 +904,15 @@ func reloadAuthConfigData(data []byte) (bool, error) {
}
metrics.RegisterSet(ac.ms)
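// Stop background OIDC discovery for the previously loaded config before
// publishing the new jwtCache below, so a stale discoverer pool does not
// keep refreshing verifier keys for dropped users.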
jwtcPrev := jwtAuthCache.Load()
if jwtcPrev != nil {
jwtcPrev.oidcDP.stopDiscovery()
}
authConfig.Store(ac)
authConfigData.Store(&data)
authUsers.Store(&m)
jwtAuthCache.Store(jwtc)
return true, nil
}
@@ -881,12 +937,18 @@ func parseAuthConfig(data []byte) (*AuthConfig, error) {
if ui.BearerToken != "" {
return nil, fmt.Errorf("field bearer_token can't be specified for unauthorized_user section")
}
if ui.JWT != nil {
return nil, fmt.Errorf("field jwt can't be specified for unauthorized_user section")
}
if ui.AuthToken != "" {
return nil, fmt.Errorf("field auth_token can't be specified for unauthorized_user section")
}
if ui.Name != "" {
return nil, fmt.Errorf("field name can't be specified for unauthorized_user section")
}
if err := parseJWTPlaceholdersForUserInfo(ui, false); err != nil {
return nil, err
}
if err := ui.initURLs(); err != nil {
return nil, err
}
@@ -927,16 +989,27 @@ func parseAuthConfigUsers(ac *AuthConfig) (map[string]*UserInfo, error) {
}
for i := range uis {
ui := &uis[i]
// users with jwt tokens are parsed by the parseJWTUsers function,
// which also checks that such users do not set auth_token, bearer_token, username or password.
if ui.JWT != nil {
continue
}
ats, err := getAuthTokens(ui.AuthToken, ui.BearerToken, ui.Username, ui.Password)
if err != nil {
return nil, err
}
for _, at := range ats {
if uiOld := byAuthToken[at]; uiOld != nil {
return nil, fmt.Errorf("duplicate auth token=%q found for username=%q, name=%q; the previous one is set for username=%q, name=%q",
at, ui.Username, ui.Name, uiOld.Username, uiOld.Name)
}
}
if err := parseJWTPlaceholdersForUserInfo(ui, false); err != nil {
return nil, err
}
if err := ui.initURLs(); err != nil {
return nil, err
}
@@ -1036,6 +1109,7 @@ func (ui *UserInfo) initURLs() error {
return err
}
}
for _, e := range ui.URLMaps {
if len(e.SrcPaths) == 0 && len(e.SrcHosts) == 0 && len(e.SrcQueryArgs) == 0 && len(e.SrcHeaders) == 0 {
return fmt.Errorf("missing `src_paths`, `src_hosts`, `src_query_args` and `src_headers` in `url_map`")
@@ -1095,6 +1169,9 @@ func (ui *UserInfo) name() string {
h := xxhash.Sum64([]byte(ui.AuthToken))
return fmt.Sprintf("auth_token:hash:%016X", h)
}
if ui.JWT != nil {
return `jwt`
}
return ""
}


@@ -4,8 +4,11 @@ import (
"bytes"
"fmt"
"net"
"net/http"
"net/url"
"strings"
"testing"
"time"
"gopkg.in/yaml.v2"
@@ -276,6 +279,50 @@ users:
url_prefix: http://foo.bar
metric_labels:
not-prometheus-compatible: value
`)
// placeholder in url_prefix
f(`
users:
- username: foo
password: bar
url_prefix: 'http://ahost/{{a_placeholder}}/foobar'
`)
// placeholder in a header
f(`
users:
- username: foo
password: bar
headers:
- 'X-Foo: {{a_placeholder}}'
url_prefix: 'http://ahost'
`)
// placeholder in a header in url_map
f(`
users:
- username: foo
password: bar
url_map:
- src_paths: ["/select/.*"]
headers:
- 'X-Foo: {{a_placeholder}}'
url_prefix: 'http://ahost'
`)
// placeholder in url_prefix in url_map
f(`
users:
- username: foo
password: bar
url_map:
- src_paths: ["/select/.*"]
url_prefix: 'http://ahost/{{a_placeholder}}/foobar'
`)
}
@@ -378,7 +425,7 @@ users:
RetryStatusCodes: []int{500, 501},
LoadBalancingPolicy: "first_available",
MergeQueryArgs: []string{"foo", "bar"},
DropSrcPathPrefixParts: intp(1),
DropSrcPathPrefixParts: new(1),
DiscoverBackendIPs: &discoverBackendIPsTrue,
},
}, nil)
@@ -621,6 +668,47 @@ unauthorized_user:
},
},
})
// skip user info with jwt, it is parsed by parseJWTUsers
f(`
users:
- username: foo
password: bar
url_prefix: http://aaa:343/bbb
- jwt: {skip_verify: true}
url_prefix: http://aaa:343/bbb
`, map[string]*UserInfo{
getHTTPAuthBasicToken("foo", "bar"): {
Username: "foo",
Password: "bar",
URLPrefix: mustParseURL("http://aaa:343/bbb"),
},
}, nil)
// Multiple users with access logs enabled
f(`
users:
- username: foo
url_prefix: http://foo
access_log: {}
- username: bar
url_prefix: https://bar/x/
access_log:
filters:
skip_status_codes: [404]
`, map[string]*UserInfo{
getHTTPAuthBasicToken("foo", ""): {
Username: "foo",
URLPrefix: mustParseURL("http://foo"),
AccessLog: &AccessLog{},
},
getHTTPAuthBasicToken("bar", ""): {
Username: "bar",
URLPrefix: mustParseURL("https://bar/x/"),
AccessLog: &AccessLog{Filters: &AccessLogFilters{SkipStatusCodes: []int{404}}},
},
}, nil)
}
func TestParseAuthConfigPassesTLSVerificationConfig(t *testing.T) {
@@ -831,7 +919,7 @@ func TestBrokenBackend(t *testing.T) {
bus[1].setBroken()
// broken backend should never return while there are healthy backends
for i := 0; i < 1e3; i++ {
for range int(1e3) {
b := up.getBackendURL()
if b.isBroken() {
t.Fatalf("unexpected broken backend %q", b.url)
@@ -908,6 +996,41 @@ func TestDiscoverBackendIPsWithIPV6(t *testing.T) {
}
func TestLogRequest(t *testing.T) {
ui := &UserInfo{AccessLog: &AccessLog{}}
testOutput := &bytes.Buffer{}
logger.SetOutputForTests(testOutput)
defer logger.ResetOutputForTest()
req, err := http.NewRequest("GET", "http://localhost:8080/select/0/prometheus", nil)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
f := func(user string, status int, duration time.Duration, expectedLog string) {
t.Helper()
testOutput.Reset()
ui.logRequest(req, user, status, duration)
got := testOutput.String()
if expectedLog == "" && got != "" {
t.Fatalf("expected empty log, got %q", got)
}
if !strings.Contains(got, expectedLog) {
t.Fatalf("output \n%q \nshould contain \n%q", testOutput.String(), expectedLog)
}
}
f("foo", 200, 10*time.Millisecond, `access_log request_host="localhost:8080" request_uri="" status_code=200 remote_addr="" user_agent="" referer="" duration_ms=10 username="foo"`)
f("foo", 404, time.Second, `access_log request_host="localhost:8080" request_uri="" status_code=404 remote_addr="" user_agent="" referer="" duration_ms=1000 username="foo"`)
ui.AccessLog.Filters = &AccessLogFilters{SkipStatusCodes: []int{200}}
f("foo", 200, 10*time.Millisecond, ``)
f("foo", 404, 10*time.Millisecond, `access_log request_host="localhost:8080" request_uri="" status_code=404 remote_addr="" user_agent="" referer="" duration_ms=10 username="foo"`)
}
func getRegexs(paths []string) []*Regex {
var sps []*Regex
for _, path := range paths {
@@ -963,10 +1086,6 @@ func mustParseURLs(us []string) *URLPrefix {
return up
}
func intp(n int) *int {
return &n
}
func mustNewRegex(s string) *Regex {
var re Regex
if err := yaml.Unmarshal([]byte(s), &re); err != nil {


@@ -116,6 +116,20 @@ users:
- "http://default1:8888/unsupported_url_handler"
- "http://default2:8888/unsupported_url_handler"
# A JWT token based routing:
# - Requests with a JWT token of the following structure:
#   {"team": "ops", "security": {"read_access": "1"}, "vm_access": {"metrics_account_id": 1000,"metrics_project_id":5}}
#   are routed to the vmselect nodes, with the request URL placeholder replaced by the metrics tenant identifiers
- name: jwt-opts-team
jwt:
match_claims:
team: ops
security.read_access: "1"
skip_verify: true
url_prefix:
- "http://vmselect1:8481/select/{{.MetricsTenant}}/prometheus"
- "http://vmselect2:8481/select/{{.MetricsTenant}}/prometheus"
# Requests without Authorization header are proxied according to `unauthorized_user` section.
# Requests are proxied in round-robin fashion between `url_prefix` backends.
# The deny_partial_response query arg is added to all the proxied requests.
@@ -125,3 +139,8 @@ unauthorized_user:
- http://vmselect-az1/?deny_partial_response=1
- http://vmselect-az2/?deny_partial_response=1
retry_status_codes: [503, 500]
# log access for requests routed to this user
access_log:
filters:
# except requests with the status codes listed below
skip_status_codes: [200, 202]
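As a concrete illustration of the JWT routing above (claim values taken from the comment in this config; the "accountID:projectID" tenant format comes from jwtClaimsData in app/vmauth/jwt.go below): a verified token carrying vm_access with metrics_account_id=1000 and metrics_project_id=5 matches the jwt-opts-team entry, {{.MetricsTenant}} expands to 1000:5, and the request is proxied to:
http://vmselect1:8481/select/1000:5/prometheus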

app/vmauth/jwt.go (new file, 486 lines)

@@ -0,0 +1,486 @@
package main
import (
"fmt"
"net/url"
"os"
"slices"
"sort"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/jwt"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)
const (
metricsTenantPlaceholder = `{{.MetricsTenant}}`
metricsExtraLabelsPlaceholder = `{{.MetricsExtraLabels}}`
metricsExtraFiltersPlaceholder = `{{.MetricsExtraFilters}}`
logsAccountIDPlaceholder = `{{.LogsAccountID}}`
logsProjectIDPlaceholder = `{{.LogsProjectID}}`
logsExtraFiltersPlaceholder = `{{.LogsExtraFilters}}`
logsExtraStreamFiltersPlaceholder = `{{.LogsExtraStreamFilters}}`
placeholderPrefix = `{{`
)
var allPlaceholders = []string{
metricsTenantPlaceholder,
metricsExtraLabelsPlaceholder,
metricsExtraFiltersPlaceholder,
logsAccountIDPlaceholder,
logsProjectIDPlaceholder,
logsExtraFiltersPlaceholder,
logsExtraStreamFiltersPlaceholder,
}
var urlPathPlaceHolders = []string{
metricsTenantPlaceholder,
logsAccountIDPlaceholder,
logsProjectIDPlaceholder,
}
type jwtCache struct {
// users contains the UserInfo entries from AuthConfig that have JWTConfig set
users []*UserInfo
oidcDP *oidcDiscovererPool
}
type JWTConfig struct {
PublicKeys []string `yaml:"public_keys,omitempty"`
PublicKeyFiles []string `yaml:"public_key_files,omitempty"`
SkipVerify bool `yaml:"skip_verify,omitempty"`
OIDC *oidcConfig `yaml:"oidc,omitempty"`
MatchClaims map[string]string `yaml:"match_claims,omitempty"`
parsedMatchClaims []*jwt.Claim
// verifierPool is used to verify JWT tokens.
// It is initialized from PublicKeys and/or PublicKeyFiles.
// In this case, it is initialized once at config reload and never updated until next reload
// In case of OIDC, it is initialized on config reload and periodically updated by discovery process.
verifierPool atomic.Pointer[jwt.VerifierPool]
}
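For orientation, the struct fields above map onto user config such as the following (a minimal sketch assembled from the test configs later in this diff; the key file path is illustrative):
users:
- jwt:
    match_claims:
      team: ops
    public_key_files:
      - /etc/vmauth/jwt_public_key.pem
  url_prefix: http://foo.bar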
func parseJWTUsers(ac *AuthConfig) ([]*UserInfo, *oidcDiscovererPool, error) {
jui := make([]*UserInfo, 0, len(ac.Users))
oidcDP := &oidcDiscovererPool{}
uniqClaims := make(map[string]*UserInfo)
var sortedClaims []string
for idx, ui := range ac.Users {
jwtToken := ui.JWT
if jwtToken == nil {
continue
}
if ui.AuthToken != "" || ui.BearerToken != "" || ui.Username != "" || ui.Password != "" {
return nil, nil, fmt.Errorf("auth_token, bearer_token, username and password cannot be specified if jwt is set")
}
if len(jwtToken.PublicKeys) == 0 && len(jwtToken.PublicKeyFiles) == 0 && !jwtToken.SkipVerify && jwtToken.OIDC == nil {
return nil, nil, fmt.Errorf("jwt must contain at least a single public key, public_key_files, oidc or have skip_verify=true")
}
var claimsString string
sortedClaims = sortedClaims[:0]
parsedClaims := make([]*jwt.Claim, 0, len(jwtToken.MatchClaims))
for ck, cv := range jwtToken.MatchClaims {
sortedClaims = append(sortedClaims, fmt.Sprintf("%s=%s", ck, cv))
pc, err := jwt.NewClaim(ck, cv)
if err != nil {
return nil, nil, fmt.Errorf("incorrect match claim, key=%q, value regex=%q: %w", ck, cv, err)
}
parsedClaims = append(parsedClaims, pc)
}
ui.JWT.parsedMatchClaims = parsedClaims
sort.Strings(sortedClaims)
claimsString = strings.Join(sortedClaims, ",")
if oldUI, ok := uniqClaims[claimsString]; ok {
return nil, nil, fmt.Errorf("duplicate match claims=%q found for name=%q at idx=%d; the previous one is set for name=%q", claimsString, ui.Name, idx, oldUI.Name)
}
uniqClaims[claimsString] = &ui
if len(jwtToken.PublicKeys) > 0 || len(jwtToken.PublicKeyFiles) > 0 {
keys := make([]any, 0, len(jwtToken.PublicKeys)+len(jwtToken.PublicKeyFiles))
for i := range jwtToken.PublicKeys {
k, err := jwt.ParseKey([]byte(jwtToken.PublicKeys[i]))
if err != nil {
return nil, nil, err
}
keys = append(keys, k)
}
for _, filePath := range jwtToken.PublicKeyFiles {
keyData, err := os.ReadFile(filePath)
if err != nil {
return nil, nil, fmt.Errorf("cannot read public key from file %q: %w", filePath, err)
}
k, err := jwt.ParseKey(keyData)
if err != nil {
return nil, nil, fmt.Errorf("cannot parse public key from file %q: %w", filePath, err)
}
keys = append(keys, k)
}
vp, err := jwt.NewVerifierPool(keys)
if err != nil {
return nil, nil, err
}
jwtToken.verifierPool.Store(vp)
}
if jwtToken.OIDC != nil {
if len(jwtToken.PublicKeys) > 0 || len(jwtToken.PublicKeyFiles) > 0 || jwtToken.SkipVerify {
return nil, nil, fmt.Errorf("jwt with oidc cannot contain public keys or have skip_verify=true")
}
if jwtToken.OIDC.Issuer == "" {
return nil, nil, fmt.Errorf("oidc issuer cannot be empty")
}
issuerURL, err := url.Parse(jwtToken.OIDC.Issuer)
if err != nil {
return nil, nil, fmt.Errorf("oidc issuer %q must be a valid URL", jwtToken.OIDC.Issuer)
}
if issuerURL.Scheme != "https" && issuerURL.Scheme != "http" {
return nil, nil, fmt.Errorf("oidc issuer %q must have http or https scheme", jwtToken.OIDC.Issuer)
}
oidcDP.createOrAdd(ui.JWT.OIDC.Issuer, &ui.JWT.verifierPool)
}
if err := parseJWTPlaceholdersForUserInfo(&ui, true); err != nil {
return nil, nil, err
}
if err := ui.initURLs(); err != nil {
return nil, nil, err
}
metricLabels, err := ui.getMetricLabels()
if err != nil {
return nil, nil, fmt.Errorf("cannot parse metric_labels: %w", err)
}
ui.requests = ac.ms.GetOrCreateCounter(`vmauth_user_requests_total` + metricLabels)
ui.requestErrors = ac.ms.GetOrCreateCounter(`vmauth_user_request_errors_total` + metricLabels)
ui.backendRequests = ac.ms.GetOrCreateCounter(`vmauth_user_request_backend_requests_total` + metricLabels)
ui.backendErrors = ac.ms.GetOrCreateCounter(`vmauth_user_request_backend_errors_total` + metricLabels)
ui.requestsDuration = ac.ms.GetOrCreateSummary(`vmauth_user_request_duration_seconds` + metricLabels)
mcr := ui.getMaxConcurrentRequests()
ui.concurrencyLimitCh = make(chan struct{}, mcr)
ui.concurrencyLimitReached = ac.ms.GetOrCreateCounter(`vmauth_user_concurrent_requests_limit_reached_total` + metricLabels)
_ = ac.ms.GetOrCreateGauge(`vmauth_user_concurrent_requests_capacity`+metricLabels, func() float64 {
return float64(cap(ui.concurrencyLimitCh))
})
_ = ac.ms.GetOrCreateGauge(`vmauth_user_concurrent_requests_current`+metricLabels, func() float64 {
return float64(len(ui.concurrencyLimitCh))
})
rt, err := newRoundTripper(ui.TLSCAFile, ui.TLSCertFile, ui.TLSKeyFile, ui.TLSServerName, ui.TLSInsecureSkipVerify)
if err != nil {
return nil, nil, fmt.Errorf("cannot initialize HTTP RoundTripper: %w", err)
}
ui.rt = rt
jui = append(jui, &ui)
}
// sort by the number of matching claims
// so that the most specific claim set wins in case of a clash
sort.SliceStable(jui, func(i, j int) bool {
return len(jui[i].JWT.MatchClaims) > len(jui[j].JWT.MatchClaims)
})
return jui, oidcDP, nil
}
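Because of this stable sort, a token whose claims satisfy several users is routed to the entry that pins more match_claims. A sketch (backend URLs are illustrative; a token with team=dev and role=admin matches both entries, and the second one is selected):
users:
- jwt:
    skip_verify: true
    match_claims: {team: dev}
  url_prefix: http://generic-backend
- jwt:
    skip_verify: true
    match_claims: {team: dev, role: admin}
  url_prefix: http://admin-backend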
var tokenPool sync.Pool
func getToken() *jwt.Token {
tkn := tokenPool.Get()
if tkn == nil {
return &jwt.Token{}
}
return tkn.(*jwt.Token)
}
func putToken(tkn *jwt.Token) {
tkn.Reset()
tokenPool.Put(tkn)
}
func getJWTUserInfo(ats []string) (*UserInfo, *jwt.Token) {
js := *jwtAuthCache.Load()
if len(js.users) == 0 {
return nil, nil
}
tkn := getToken()
for _, at := range ats {
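// a well-formed JWT consists of exactly three dot-separated segments
// (header.payload.signature), so anything else is skipped cheaply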
if strings.Count(at, ".") != 2 {
continue
}
at, _ = strings.CutPrefix(at, `http_auth:`)
tkn.Reset()
if err := tkn.Parse(at, true); err != nil {
if *logInvalidAuthTokens {
logger.Infof("cannot parse jwt token: %s", err)
}
continue
}
if tkn.IsExpired(time.Now()) {
if *logInvalidAuthTokens {
// TODO: add more context:
// token claims with issuer
logger.Infof("jwt token is expired")
}
continue
}
if ui := getUserInfoByJWTToken(tkn, js.users); ui != nil {
return ui, tkn
}
}
putToken(tkn)
return nil, nil
}
func getUserInfoByJWTToken(tkn *jwt.Token, users []*UserInfo) *UserInfo {
for _, ui := range users {
if !tkn.MatchClaims(ui.JWT.parsedMatchClaims) {
continue
}
if ui.JWT.SkipVerify {
return ui
}
if ui.JWT.OIDC != nil {
// OIDC requires iss claim.
// It must match the discovery issuer URL set in OIDC config.
// https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata
if tkn.Issuer() == "" {
if *logInvalidAuthTokens {
logger.Infof("jwt token must have issuer filed")
}
return nil
}
if tkn.Issuer() != ui.JWT.OIDC.Issuer {
if *logInvalidAuthTokens {
logger.Infof("jwt token issuer: %q does not match oidc issuer: %q", tkn.Issuer(), ui.JWT.OIDC.Issuer)
}
return nil
}
}
vp := ui.JWT.verifierPool.Load()
if vp == nil {
if *logInvalidAuthTokens {
logger.Infof("jwt verifier not initialed")
}
return nil
}
if err := vp.Verify(tkn); err != nil {
if *logInvalidAuthTokens {
logger.Infof("cannot verify jwt token: %s", err)
}
return nil
}
return ui
}
if *logInvalidAuthTokens {
logger.Infof("no user match jwt token")
}
return nil
}
func replaceJWTPlaceholders(bu *backendURL, hc HeadersConf, vma *jwt.VMAccessClaim) (*url.URL, HeadersConf) {
if !bu.hasPlaceHolders && !hc.hasAnyPlaceHolders {
return bu.url, hc
}
targetURL := bu.url
data := jwtClaimsData(vma)
if bu.hasPlaceHolders {
// substitute placeholders in the request path and url params;
// work on a copy of the url
uCopy := *bu.url
for _, uph := range urlPathPlaceHolders {
replacement := data[uph]
uCopy.Path = strings.ReplaceAll(uCopy.Path, uph, replacement[0])
}
query := uCopy.Query()
var foundAnyQueryPlaceholder bool
var templatedValues []string
for param, values := range query {
templatedValues = templatedValues[:0]
// filter out values containing placeholders in-place
// and accumulate their replacements;
// this may change the order of param values,
// but ordering is not guaranteed anyway
// and would change regardless with multiple arg templates
var cnt int
for _, value := range values {
if dv, ok := data[value]; ok {
foundAnyQueryPlaceholder = true
templatedValues = append(templatedValues, dv...)
continue
}
values[cnt] = value
cnt++
}
values = values[:cnt]
values = append(values, templatedValues...)
query[param] = values
}
if foundAnyQueryPlaceholder {
uCopy.RawQuery = query.Encode()
}
targetURL = &uCopy
}
if hc.hasAnyPlaceHolders {
// make a copy of headers and update only values with placeholder
rhs := make([]*Header, 0, len(hc.RequestHeaders))
for _, rh := range hc.RequestHeaders {
if dv, ok := data[rh.Value]; ok {
rh := &Header{
Name: rh.Name,
Value: strings.Join(dv, ","),
}
rhs = append(rhs, rh)
continue
}
rhs = append(rhs, rh)
}
hc.RequestHeaders = rhs
}
return targetURL, hc
}
func jwtClaimsData(vma *jwt.VMAccessClaim) map[string][]string {
data := map[string][]string{
// TODO: optimize at parsing stage
metricsTenantPlaceholder: {fmt.Sprintf("%d:%d", vma.MetricsAccountID, vma.MetricsProjectID)},
metricsExtraLabelsPlaceholder: vma.MetricsExtraLabels,
metricsExtraFiltersPlaceholder: vma.MetricsExtraFilters,
// TODO: optimize at parsing stage
logsAccountIDPlaceholder: {fmt.Sprintf("%d", vma.LogsAccountID)},
logsProjectIDPlaceholder: {fmt.Sprintf("%d", vma.LogsProjectID)},
logsExtraFiltersPlaceholder: vma.LogsExtraFilters,
logsExtraStreamFiltersPlaceholder: vma.LogsExtraStreamFilters,
}
return data
}
func parseJWTPlaceholdersForUserInfo(ui *UserInfo, isAllowed bool) error {
if ui.URLPrefix != nil {
if err := validateJWTPlaceholdersForURL(ui.URLPrefix, isAllowed); err != nil {
return err
}
}
if err := parsePlaceholdersForHC(&ui.HeadersConf, isAllowed); err != nil {
return err
}
if ui.DefaultURL != nil {
if err := validateJWTPlaceholdersForURL(ui.DefaultURL, isAllowed); err != nil {
return fmt.Errorf("invalid `default_url` placeholders: %w", err)
}
}
for i := range ui.URLMaps {
e := &ui.URLMaps[i]
if e.URLPrefix != nil {
if err := validateJWTPlaceholdersForURL(e.URLPrefix, isAllowed); err != nil {
return fmt.Errorf("invalid `url_map` `url_prefix` placeholders: %w", err)
}
}
if err := parsePlaceholdersForHC(&e.HeadersConf, isAllowed); err != nil {
return fmt.Errorf("invalid `url_map` headers placeholders: %w", err)
}
}
return nil
}
func validateJWTPlaceholdersForURL(up *URLPrefix, isAllowed bool) error {
for _, bu := range up.busOriginal {
ok := strings.Contains(bu.Path, placeholderPrefix)
if ok && !isAllowed {
return fmt.Errorf("placeholder: %q is only allowed at JWT token context", bu.Path)
}
if ok {
p := bu.Path
for _, ph := range allPlaceholders {
p = strings.ReplaceAll(p, ph, ``)
}
if strings.Contains(p, placeholderPrefix) {
return fmt.Errorf("invalid placeholder found in URL request path: %q, supported values are: %s", bu.Path, strings.Join(allPlaceholders, ", "))
}
}
for param, values := range bu.Query() {
for _, value := range values {
ok := strings.Contains(value, placeholderPrefix)
if ok && !isAllowed {
return fmt.Errorf("query param: %q with placeholder: %q is only allowed at JWT token context", param, value)
}
if ok {
// possible placeholder
if !slices.Contains(allPlaceholders, value) {
return fmt.Errorf("query param: %q has unsupported placeholder string: %q, supported values are: %s", param, value, strings.Join(allPlaceholders, ", "))
}
}
}
}
}
return nil
}
func parsePlaceholdersForHC(hc *HeadersConf, isAllowed bool) error {
for _, rhs := range hc.RequestHeaders {
ok := strings.Contains(rhs.Value, placeholderPrefix)
if ok && !isAllowed {
return fmt.Errorf("request header: %q placeholder: %q is only supported at JWT context", rhs.Name, rhs.Value)
}
if ok {
if !slices.Contains(allPlaceholders, rhs.Value) {
return fmt.Errorf("request header: %q has unsupported placeholder: %q, supported values are: %s", rhs.Name, rhs.Value, strings.Join(allPlaceholders, ", "))
}
hc.hasAnyPlaceHolders = true
}
}
for _, rhs := range hc.ResponseHeaders {
if strings.Contains(rhs.Value, placeholderPrefix) {
return fmt.Errorf("response header placeholders are not supported; found placeholder prefix at header: %q with value: %q", rhs.Name, rhs.Value)
}
}
return nil
}
func hasAnyPlaceholders(u *url.URL) bool {
if strings.Contains(u.Path, placeholderPrefix) {
return true
}
if len(u.Query()) == 0 {
return false
}
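// query placeholders must be the entire value (enforced by
// validateJWTPlaceholdersForURL), so a prefix check is sufficient here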
for _, values := range u.Query() {
for _, value := range values {
if strings.HasPrefix(value, placeholderPrefix) {
return true
}
}
}
return false
}

app/vmauth/jwt_test.go (new file, 503 lines)

@@ -0,0 +1,503 @@
package main
import (
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"testing"
)
func TestJWTParseAuthConfigFailure(t *testing.T) {
validRSAPublicKey := `-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiX7oPWKOWRQsGFEWvwZO
mL2PYsdYUsu9nr0qtPCjxQHUJgLfT3rdKlvKpPFYv7ZmKnqTncg36Wz9uiYmWJ7e
IB5Z+fko8kVIMzarCqVvpAJDzYF/pUii68xvuYoK3L9TIOAeyCXv+prwnr2IH+Mw
9AONzWbRrYoO74XyTE9vMU5qmI/L1VPk+PR8lqPOSptLvzsfoaIk2ED4yK2nRB+6
st+k4nccPqbErqHc8aiXnXfugfnr6b+NPFYUzKsDqkymGOokVijrI8B3jNw6c6Do
zphk+D3wgLsXYHfMcZbXIMqffqm/aB8Qg88OpFOkQ3rd2p6R9+hacnZkfkn3Phiw
yQIDAQAB
-----END PUBLIC KEY-----
`
// ECDSA with the P-521 curve
validECDSAPublicKey := `-----BEGIN PUBLIC KEY-----
MIGbMBAGByqGSM49AgEGBSuBBAAjA4GGAAQAU9RmtkCRuYTKCyvLlDn5DtBZOHSe
QTa5j9q/oQVpCKqcXVFrH5dgh0GL+P/ZhkeuowPzCZqntGf0+7wPt9OxSJcADVJm
dv92m540MXss8zdHf5qtE0gsu2Ved0R7Z8a8QwGZ/1mYZ+kFGGbdQTlSvRqDySTq
XOtclIk1uhc03oL9nOQ=
-----END PUBLIC KEY-----
`
f := func(s string, expErr string) {
t.Helper()
ac, err := parseAuthConfig([]byte(s))
if err != nil {
if expErr != err.Error() {
t.Fatalf("unexpected error; got\n%q\nwant\n%q", err.Error(), expErr)
}
return
}
users, oidcDP, err := parseJWTUsers(ac)
if err == nil {
t.Fatalf("expecting non-nil error; got %v", users)
}
if expErr != err.Error() {
t.Fatalf("unexpected error; got\n%q\nwant \n%q", err.Error(), expErr)
}
if oidcDP != nil {
t.Fatalf("expecting nil oidcDP; got %v", oidcDP)
}
}
// unauthorized_user cannot be used with jwt
f(`
unauthorized_user:
jwt: {skip_verify: true}
url_prefix: http://foo.bar
`, `field jwt can't be specified for unauthorized_user section`)
// username and jwt in a single config
f(`
users:
- username: foo
jwt: {skip_verify: true}
url_prefix: http://foo.bar
`, `auth_token, bearer_token, username and password cannot be specified if jwt is set`)
// bearer_token and jwt in a single config
f(`
users:
- bearer_token: foo
jwt: {skip_verify: true}
url_prefix: http://foo.bar
`, `auth_token, bearer_token, username and password cannot be specified if jwt is set`)
// bearer_token and jwt in a single config
f(`
users:
- auth_token: "Foo token"
jwt: {skip_verify: true}
url_prefix: http://foo.bar
`, `auth_token, bearer_token, username and password cannot be specified if jwt is set`)
// jwt public_keys or skip_verify must be set, part 1
f(`
users:
- jwt: {}
url_prefix: http://foo.bar
`, `jwt must contain at least a single public key, public_key_files, oidc or have skip_verify=true`)
// jwt public_keys or skip_verify must be set, part 2
f(`
users:
- jwt: {public_keys: null}
url_prefix: http://foo.bar
`, `jwt must contain at least a single public key, public_key_files, oidc or have skip_verify=true`)
// jwt public_keys or skip_verify must be set, part 3
f(`
users:
- jwt: {public_keys: []}
url_prefix: http://foo.bar
`, `jwt must contain at least a single public key, public_key_files, oidc or have skip_verify=true`)
// jwt public_keys, public_key_files or skip_verify must be set
f(`
users:
- jwt: {public_key_files: []}
url_prefix: http://foo.bar
`, `jwt must contain at least a single public key, public_key_files, oidc or have skip_verify=true`)
// invalid public key, part 1
f(`
users:
- jwt: {public_keys: [""]}
url_prefix: http://foo.bar
`, `failed to parse key "": failed to decode PEM block containing public key`)
// invalid public key, part 2
f(`
users:
- jwt: {public_keys: ["invalid"]}
url_prefix: http://foo.bar
`, `failed to parse key "invalid": failed to decode PEM block containing public key`)
// invalid public key, part 3
f(fmt.Sprintf(`
users:
- jwt:
public_keys:
- %q
- %q
- "invalid"
url_prefix: http://foo.bar
`, validRSAPublicKey, validECDSAPublicKey), `failed to parse key "invalid": failed to decode PEM block containing public key`)
// several jwt users with identical (empty) match claims are rejected as duplicates
f(fmt.Sprintf(`
users:
- jwt:
public_keys:
- %q
url_prefix: http://foo.bar
- jwt:
public_keys:
- %q
url_prefix: http://foo.bar
`, validRSAPublicKey, validECDSAPublicKey), `duplicate match claims="" found for name="" at idx=1; the previous one is set for name=""`)
// public key file doesn't exist
f(`
users:
- jwt:
public_key_files:
- /path/to/nonexistent/file.pem
url_prefix: http://foo.bar
`, "cannot read public key from file \"/path/to/nonexistent/file.pem\": open /path/to/nonexistent/file.pem: no such file or directory")
// invalid public key file contents
publicKeyFile := filepath.Join(t.TempDir(), "a_public_key.pem")
if err := os.WriteFile(publicKeyFile, []byte(`invalidPEM`), 0o644); err != nil {
t.Fatalf("failed to write public key file: %s", err)
}
f(`
users:
- jwt:
public_key_files:
- `+publicKeyFile+`
url_prefix: http://foo.bar
`, "cannot parse public key from file \""+publicKeyFile+"\": failed to parse key \"invalidPEM\": failed to decode PEM block containing public key")
// unsupported placeholder in the URL path
f(`
users:
- jwt:
skip_verify: true
url_prefix: http://foo.bar/{{.UnsupportedPlaceholder}}/foo`,
"invalid placeholder found in URL request path: \"/{{.UnsupportedPlaceholder}}/foo\", supported values are: {{.MetricsTenant}}, {{.MetricsExtraLabels}}, {{.MetricsExtraFilters}}, {{.LogsAccountID}}, {{.LogsProjectID}}, {{.LogsExtraFilters}}, {{.LogsExtraStreamFilters}}",
)
// unsupported placeholder in a header
f(`
users:
- jwt:
skip_verify: true
headers:
- "AccountID: {{.UnsupportedPlaceholder}}"
url_prefix: http://foo.bar
`,
"request header: \"AccountID\" has unsupported placeholder: \"{{.UnsupportedPlaceholder}}\", supported values are: {{.MetricsTenant}}, {{.MetricsExtraLabels}}, {{.MetricsExtraFilters}}, {{.LogsAccountID}}, {{.LogsProjectID}}, {{.LogsExtraFilters}}, {{.LogsExtraStreamFilters}}",
)
// spaces in templating not allowed
f(`
users:
- jwt:
skip_verify: true
headers:
- "AccountID: {{ .LogsAccountID }}"
url_prefix: http://foo.bar
`,
"request header: \"AccountID\" has unsupported placeholder: \"{{ .LogsAccountID }}\", supported values are: {{.MetricsTenant}}, {{.MetricsExtraLabels}}, {{.MetricsExtraFilters}}, {{.LogsAccountID}}, {{.LogsProjectID}}, {{.LogsExtraFilters}}, {{.LogsExtraStreamFilters}}",
)
// oidc is not an object
f(`
users:
- jwt:
oidc: "not an object"
url_prefix: http://foo.bar
`,
"cannot unmarshal AuthConfig data: yaml: unmarshal errors:\n line 4: cannot unmarshal !!str `not an ...` into main.oidcConfig",
)
// oidc issuer empty
f(`
users:
- jwt:
oidc: {}
url_prefix: http://foo.bar
`,
"oidc issuer cannot be empty",
)
// oidc issuer is not a parseable url
f(`
users:
- jwt:
oidc:
issuer: "::invalid-url"
url_prefix: http://foo.bar
`,
"oidc issuer \"::invalid-url\" must be a valid URL",
)
// oidc issuer url lacks an http(s) scheme
f(`
users:
- jwt:
oidc:
issuer: "invalid-url"
url_prefix: http://foo.bar
`,
"oidc issuer \"invalid-url\" must have http or https scheme",
)
// oidc and public_keys are not allowed
f(fmt.Sprintf(`
users:
- jwt:
public_keys:
- %q
oidc:
issuer: https://example.com
url_prefix: http://foo.bar
`, validRSAPublicKey),
"jwt with oidc cannot contain public keys or have skip_verify=true",
)
// oidc and skip_verify are not allowed
f(`
users:
- jwt:
skip_verify: true
oidc:
issuer: https://example.com
url_prefix: http://foo.bar
`,
"jwt with oidc cannot contain public keys or have skip_verify=true",
)
// duplicate claims
f(`
users:
- jwt:
skip_verify: true
match_claims:
team: ops
name: user-1
url_prefix: http://foo.bar
- jwt:
skip_verify: true
match_claims:
team: ops
name: user-2
url_prefix: http://foo.bar`,
"duplicate match claims=\"team=ops\" found for name=\"user-2\" at idx=1; the previous one is set for name=\"user-1\"",
)
}
func TestJWTParseAuthConfigSuccess(t *testing.T) {
validRSAPublicKey := `-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiX7oPWKOWRQsGFEWvwZO
mL2PYsdYUsu9nr0qtPCjxQHUJgLfT3rdKlvKpPFYv7ZmKnqTncg36Wz9uiYmWJ7e
IB5Z+fko8kVIMzarCqVvpAJDzYF/pUii68xvuYoK3L9TIOAeyCXv+prwnr2IH+Mw
9AONzWbRrYoO74XyTE9vMU5qmI/L1VPk+PR8lqPOSptLvzsfoaIk2ED4yK2nRB+6
st+k4nccPqbErqHc8aiXnXfugfnr6b+NPFYUzKsDqkymGOokVijrI8B3jNw6c6Do
zphk+D3wgLsXYHfMcZbXIMqffqm/aB8Qg88OpFOkQ3rd2p6R9+hacnZkfkn3Phiw
yQIDAQAB
-----END PUBLIC KEY-----
`
// ECDSA with the P-521 curve
validECDSAPublicKey := `-----BEGIN PUBLIC KEY-----
MIGbMBAGByqGSM49AgEGBSuBBAAjA4GGAAQAU9RmtkCRuYTKCyvLlDn5DtBZOHSe
QTa5j9q/oQVpCKqcXVFrH5dgh0GL+P/ZhkeuowPzCZqntGf0+7wPt9OxSJcADVJm
dv92m540MXss8zdHf5qtE0gsu2Ved0R7Z8a8QwGZ/1mYZ+kFGGbdQTlSvRqDySTq
XOtclIk1uhc03oL9nOQ=
-----END PUBLIC KEY-----
`
f := func(s string) {
t.Helper()
ac, err := parseAuthConfig([]byte(s))
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
jui, oidcDP, err := parseJWTUsers(ac)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
oidcDP.startDiscovery()
defer oidcDP.stopDiscovery()
for _, ui := range jui {
if ui.JWT == nil {
t.Fatalf("unexpected nil JWTConfig")
}
if ui.JWT.SkipVerify {
if ui.JWT.verifierPool.Load() != nil {
t.Fatalf("unexpected non-nil verifier pool for skip_verify=true")
}
continue
}
if ui.JWT.verifierPool.Load() == nil {
t.Fatalf("unexpected nil verifier pool for non-empty public keys")
}
}
}
f(fmt.Sprintf(`
users:
- jwt:
public_keys:
- %q
url_prefix: http://foo.bar
`, validRSAPublicKey))
f(fmt.Sprintf(`
users:
- jwt:
public_keys:
- %q
url_prefix: http://foo.bar
`, validECDSAPublicKey))
f(fmt.Sprintf(`
users:
- jwt:
public_keys:
- %q
- %q
url_prefix: http://foo.bar
`, validRSAPublicKey, validECDSAPublicKey))
f(`
users:
- jwt:
skip_verify: true
url_prefix: http://foo.bar
`)
// combined with other auth methods
f(`
users:
- username: foo
password: bar
url_prefix: http://foo.bar
- jwt:
skip_verify: true
url_prefix: http://foo.bar
- bearer_token: foo
url_prefix: http://foo.bar
`)
rsaKeyFile := filepath.Join(t.TempDir(), "rsa_public_key.pem")
if err := os.WriteFile(rsaKeyFile, []byte(validRSAPublicKey), 0o644); err != nil {
t.Fatalf("failed to write RSA key file: %s", err)
}
ecdsaKeyFile := filepath.Join(t.TempDir(), "ecdsa_public_key.pem")
if err := os.WriteFile(ecdsaKeyFile, []byte(validECDSAPublicKey), 0o644); err != nil {
t.Fatalf("failed to write ECDSA key file: %s", err)
}
// Test single public key file
f(fmt.Sprintf(`
users:
- jwt:
public_key_files:
- %q
url_prefix: http://foo.bar
`, rsaKeyFile))
// Test multiple public key files
f(fmt.Sprintf(`
users:
- jwt:
public_key_files:
- %q
- %q
url_prefix: http://foo.bar
`, rsaKeyFile, ecdsaKeyFile))
// Test combined inline keys and files
f(fmt.Sprintf(`
users:
- jwt:
public_keys:
- %q
public_key_files:
- %q
url_prefix: http://foo.bar
`, validECDSAPublicKey, rsaKeyFile))
// oidc stub server
var ipSrv *httptest.Server
ipSrv = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/.well-known/openid-configuration" {
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(map[string]string{
"issuer": ipSrv.URL,
"jwks_uri": fmt.Sprintf("%s/jwks", ipSrv.URL),
})
return
}
if r.URL.Path == "/jwks" {
// resp generated by https://jwkset.com/generate
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`
{
"keys": [
{
"kty": "RSA",
"kid": "f13eee91-f566-4829-80fa-fca847c21f0e",
"d": "Ua1llEFz3LZ05CrK5a2JxKMUEWJGXhBPPF20hHQjzxd1w0IEJK_mhPZQG8dNtBROBNIi1FC9l6QRw-RTnVIVat5Xy4yDFNKXXL3ZLXejOHY8SXrNEIDqQ-cSwIpK9cK7Umib0PcPeEeeAED5mqDH75D8_YssWFF18kLbNB5Z9pZmn6Fshiht7l2Sh4GN-KcReOW6eiQQwckDte3OGmZCRbtEriLWJt5TUGUvfZVIlcclqNMycNB6jGa9E1pO5Up7Ki3ZbI_-6XmRgZPtqnR9oLJ1zn3fj3hYpCXo-zcqLuOu3qxcslsq5igsfBzgGtfIJHY9LfWmHUsaDEa5cAX1gQ",
"n": "xbLXXBTNREk70UCMiqZ53_mTzYh89W-UaPU61GZ-RZ5lYcLgyWOb5mdyRbvJpcgfZpsOeGAUWbk3GkQ4vqn8kUMnnWhUum2Qk9kGubOJGLW6yaURd00j3E-ilQ5xO2R_Hzz8bAojxV8GKdGTQ-iTf8z8nsSHH8kR2SERbNJCFFtwtFU7vyFWyoH4Lmvu2UpICTHFCR9RqwQVjyoKB1JjJ6Dh1L4zPTlsvQEnqoeFQHPYr0QcQSMYXdfPvlt_FiLOAOE89fX_9T2r9WbFAoda3uTRE5_aal0jxUU2cFyeVSIgauNtF07fp422XFb4XPkWQWrdNx0KX53laSIYQ9HOpw",
"e": "AQAB",
"p": "2JT57AD-Q2lamgjgyn0wL7DgYZ3OoCTTrDm5_NHg6h13uDvyIlXSukuUeWm4tzPSDedpstbS7dgXkLw5eQXBHwPYtByTcEZS8Z37CBnhMOOhfo_U1aNIPPanJACvWBgz47-TxHsxW1YhztZqghRoicBZPSSBAj49MgANJ4jF0zc",
"q": "6a4MkeSXJI-ZzQ-bgP8hwJqpLFr0AiNGQcjZMH4Nn4CPGdnGiqqe6flhfLimgbNhbb67B0-8fLIji8zGhGKDL_JSIpAAdmfs2vzeEsY2hScrqVbd1VbfRcRh0J6lsn7obxkbvQthp9sX2DQbeDcEeaFEvd9gDKQSATYEqWo7eBE",
"dp": "haL2yu6Z9RJuuxi7S3YPY33qFZF_y0St71j3L854zzw7gMxMTW9TRWwZQwk-1pv9AmNFzvnK0MNDVyUs-UXZsb932TrApshdqYRnPsppLvdl0GgDVYcYrbUr0IUzrFHSwraVAOlavRbaaXvX4EejcUvkRFvf1nh83fs2Iqy8E-U",
"dq": "Cnf5qC-Ndd3ZDg688LJ9WJuVKJ-Kfu4Fn7zXvgxnn9Wqk4XmFyA9rk21yFidXQIkQz5gMpun3g48-W5bFmMzbVp1w4af_q35NnZNnJm0p5Jxqkxx87TIm9-IYkg5NB3rW87MJ1PzNAnkr5LmCCSu1qQa6Eaxjt9qzxMUcmKH94E",
"qi": "saAeU11iaKHmye3cwCAYkegcyWbXV3xIXEVJtS9Af_yM19UhspwY2VhuwRaajcwYZwtvR9_ITmX9M-ea7uLdd7aDYO1fujC8NGbopeC4Hkr7yb5vTly3pfKf4h-3LwGGUucJUetdz1lmMIYiyuG4_gSf1yIEtPDLKzXiedgEMdI"
}
]
}
`))
return
}
http.NotFound(w, r)
}))
defer ipSrv.Close()
f(`
users:
- jwt:
oidc:
issuer: ` + ipSrv.URL + `
url_prefix: http://foo.bar
`)
// multiple match claims
f(fmt.Sprintf(`
users:
- jwt:
match_claims:
role: ro
team: dev
public_keys:
- %q
url_prefix: http://foo.bar
- jwt:
match_claims:
role: admin
team: dev
public_key_files:
- %q
- %q
url_prefix: http://foo.bar
- jwt:
match_claims:
role: viewer
team: dev
department: ceo
skip_verify: true
url_prefix: http://foo.bar
`, validRSAPublicKey, rsaKeyFile, ecdsaKeyFile))
}
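The cases above exercise public_keys, public_key_files, skip_verify, oidc and match_claims separately. A hedged sketch of one more case, assuming oidc composes with match_claims the same way the static-key cases do (it reuses the f helper and the ipSrv stub from this test):

f(`
users:
- jwt:
    match_claims:
      team: dev
    oidc:
      issuer: ` + ipSrv.URL + `
  url_prefix: http://foo.bar
`)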

View File

@@ -16,6 +16,7 @@ import (
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/jwt"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
@@ -47,7 +48,7 @@ var (
responseTimeout = flag.Duration("responseTimeout", 5*time.Minute, "The timeout for receiving a response from backend")
requestBufferSize = flagutil.NewBytes("requestBufferSize", 32*1024, "The size of the buffer for reading the request body before proxying the request to backends. "+
"This allows reducing the comsumption of backend resources when processing requests from clients connected via slow networks. "+
"This allows reducing the consumption of backend resources when processing requests from clients connected via slow networks. "+
"Set to 0 to disable request buffering. See https://docs.victoriametrics.com/victoriametrics/vmauth/#request-body-buffering")
maxRequestBodySizeToRetry = flagutil.NewBytes("maxRequestBodySizeToRetry", 16*1024, "The maximum request body size to buffer in memory for potential retries at other backends. "+
"Request bodies larger than this size cannot be retried if the backend fails. Zero or negative value disables request body buffering and retries. "+
@@ -173,7 +174,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
// Process requests for unauthorized users
ui := authConfig.Load().UnauthorizedUser
if ui != nil {
processUserRequest(w, r, ui)
processUserRequest(w, r, ui, nil)
return true
}
@@ -181,29 +182,36 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
}
ui := getUserInfoByAuthTokens(ats)
if ui == nil {
uu := authConfig.Load().UnauthorizedUser
if uu != nil {
processUserRequest(w, r, uu)
return true
}
invalidAuthTokenRequests.Inc()
if *logInvalidAuthTokens {
err := fmt.Errorf("cannot authorize request with auth tokens %q", ats)
err = &httpserver.ErrorWithStatusCode{
Err: err,
StatusCode: http.StatusUnauthorized,
}
httpserver.Errorf(w, r, "%s", err)
} else {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
if ui := getUserInfoByAuthTokens(ats); ui != nil {
processUserRequest(w, r, ui, nil)
return true
}
if ui, tkn := getJWTUserInfo(ats); ui != nil {
if tkn == nil {
logger.Panicf("BUG: unexpected nil jwt token for user %q", ui.name())
}
defer putToken(tkn)
processUserRequest(w, r, ui, tkn)
return true
}
processUserRequest(w, r, ui)
uu := authConfig.Load().UnauthorizedUser
if uu != nil {
processUserRequest(w, r, uu, nil)
return true
}
invalidAuthTokenRequests.Inc()
if *logInvalidAuthTokens {
err := fmt.Errorf("cannot authorize request with auth tokens %q", ats)
err = &httpserver.ErrorWithStatusCode{
Err: err,
StatusCode: http.StatusUnauthorized,
}
httpserver.Errorf(w, r, "%s", err)
} else {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
}
return true
}
@@ -218,7 +226,37 @@ func getUserInfoByAuthTokens(ats []string) *UserInfo {
return nil
}
func processUserRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
// responseWriterWithStatus is a wrapper around http.ResponseWriter that captures the status code written to the response.
type responseWriterWithStatus struct {
http.ResponseWriter
status int
}
// WriteHeader records the status so it can be easily retrieved later
func (rws *responseWriterWithStatus) WriteHeader(status int) {
rws.status = status
rws.ResponseWriter.WriteHeader(status)
}
// Flush implements net/http.Flusher interface
//
// This is needed for the copyStreamToClient()
func (rws *responseWriterWithStatus) Flush() {
flusher, ok := rws.ResponseWriter.(http.Flusher)
if !ok {
logger.Panicf("BUG: it is expected http.ResponseWriter (%T) supports http.Flusher interface", rws.ResponseWriter)
}
flusher.Flush()
}
// Unwrap returns the original ResponseWriter wrapped by rws.
//
// This is needed for the net/http.ResponseController - see https://pkg.go.dev/net/http#NewResponseController
func (rws *responseWriterWithStatus) Unwrap() http.ResponseWriter {
return rws.ResponseWriter
}
func processUserRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo, tkn *jwt.Token) {
startTime := time.Now()
defer ui.requestsDuration.UpdateDuration(startTime)
@@ -227,6 +265,20 @@ func processUserRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
ctx, cancel := context.WithTimeout(r.Context(), *maxQueueDuration)
defer cancel()
userName := ui.name()
if userName == "" {
userName = "unauthorized"
}
if ui.AccessLog != nil {
w = &responseWriterWithStatus{ResponseWriter: w}
defer func() {
rws := w.(*responseWriterWithStatus)
duration := time.Since(startTime)
ui.logRequest(r, userName, rws.status, duration)
}()
}
// Acquire global concurrency limit.
if err := beginConcurrencyLimit(ctx); err != nil {
handleConcurrencyLimitError(w, r, err)
@@ -245,10 +297,6 @@ func processUserRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
}
// Read the initial chunk for the request body.
userName := ui.name()
if userName == "" {
userName = "unauthorized"
}
bb, err := bufferRequestBody(ctx, r.Body, userName)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
@@ -269,7 +317,7 @@ func processUserRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
defer ui.endConcurrencyLimit()
// Process the request.
processRequest(w, r, ui)
processRequest(w, r, ui, tkn)
}
func beginConcurrencyLimit(ctx context.Context) error {
@@ -309,6 +357,7 @@ func bufferRequestBody(ctx context.Context, r io.ReadCloser, userName string) (i
maxBufSize := max(requestBufferSize.IntN(), maxRequestBodySizeToRetry.IntN())
if maxBufSize <= 0 {
// Request buffering is disabled.
return r, nil
}
@@ -342,7 +391,7 @@ func bufferRequestBody(ctx context.Context, r io.ReadCloser, userName string) (i
return bb, nil
}
func processRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
func processRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo, tkn *jwt.Token) {
u := normalizeURL(r.URL)
up, hc := ui.getURLPrefixAndHeaders(u, r.Host, r.Header)
isDefault := false
@@ -368,22 +417,27 @@ func processRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
}
maxAttempts := up.getBackendsCount()
for i := 0; i < maxAttempts; i++ {
for range maxAttempts {
bu := up.getBackendURL()
if bu == nil {
break
}
targetURL := bu.url
if tkn != nil {
// For security reasons, allow templating only for configured URL values and headers.
targetURL, hc = replaceJWTPlaceholders(bu, hc, tkn.VMAccess())
}
if isDefault {
// Don't change path and add request_path query param for default route.
targetURLCopy := *targetURL
query := targetURL.Query()
query.Set("request_path", u.String())
targetURL.RawQuery = query.Encode()
targetURLCopy.RawQuery = query.Encode()
targetURL = &targetURLCopy
} else {
// Update path for regular routes.
targetURL = mergeURLs(targetURL, u, up.dropSrcPathPrefixParts, up.mergeQueryArgs)
}
wasLocalRetry := false
again:
ok, needLocalRetry := tryProcessingRequest(w, r, targetURL, hc, up.retryStatusCodes, ui, bu)
@@ -401,7 +455,7 @@ func processRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
ui.backendErrors.Inc()
}
err := &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("all the %d backends for the user %q are unavailable", up.getBackendsCount(), ui.name()),
Err: fmt.Errorf("all the %d backends for the user %q are unavailable for proxying the request - check previous WARN logs to see the exact error for each failed backend", up.getBackendsCount(), ui.name()),
StatusCode: http.StatusBadGateway,
}
httpserver.Errorf(w, r, "%s", err)
@@ -710,7 +764,7 @@ var concurrentRequestsLimitReached = metrics.NewCounter("vmauth_concurrent_reque
func usage() {
const s = `
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics.
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics components or any other HTTP backends.
See the docs at https://docs.victoriametrics.com/victoriametrics/vmauth/ .
`
@@ -739,10 +793,11 @@ func handleConcurrencyLimitError(w http.ResponseWriter, r *http.Request, err err
}
// bufferedBody serves two purposes:
// 1. Enables request retries when the body size does not exceed maxBodySize
// by fully buffering the body in memory.
// 2. Prevents slow clients from reducing effective server capacity by
// buffering the request body before acquiring a per-user concurrency slot.
//
// 1. It enables request retries when the request body size does not exceed maxBufSize
// by fully buffering the request body in memory.
// 2. It prevents slow clients from reducing effective server capacity
// by buffering the request body before acquiring a per-user concurrency slot.
//
// See bufferRequestBody for details on how bufferedBody is used.
type bufferedBody struct {
@@ -766,7 +821,7 @@ func newBufferedBody(r io.ReadCloser, buf []byte, maxBufSize int) *bufferedBody
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8051
if len(buf) < maxBufSize {
// Read the full request body into buf.
// The full request body has been already read into buf.
r = nil
}
@@ -779,7 +834,7 @@ func newBufferedBody(r io.ReadCloser, buf []byte, maxBufSize int) *bufferedBody
// Read implements io.Reader interface.
func (bb *bufferedBody) Read(p []byte) (int, error) {
if bb.cannotRetry {
return 0, fmt.Errorf("cannot read already closed body")
return 0, fmt.Errorf("cannot read already closed request body")
}
if bb.bufOffset < len(bb.buf) {
n := copy(p, bb.buf[bb.bufOffset:])

File diff suppressed because it is too large

View File

@@ -0,0 +1,194 @@
package main
import (
"crypto"
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"encoding/base64"
"encoding/json"
"encoding/pem"
"fmt"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
)
func BenchmarkJWTRequestHandler(b *testing.B) {
// Generate RSA key pair for testing
privateKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
b.Fatalf("cannot generate RSA key: %s", err)
}
// Generate public key PEM
publicKeyBytes, err := x509.MarshalPKIXPublicKey(&privateKey.PublicKey)
if err != nil {
b.Fatalf("cannot marshal public key: %s", err)
}
publicKeyPEM := pem.EncodeToMemory(&pem.Block{
Type: "PUBLIC KEY",
Bytes: publicKeyBytes,
})
genToken := func(t *testing.B, body map[string]any, valid bool) string {
t.Helper()
headerJSON, err := json.Marshal(map[string]any{
"alg": "RS256",
"typ": "JWT",
})
if err != nil {
t.Fatalf("cannot marshal header: %s", err)
}
headerB64 := base64.RawURLEncoding.EncodeToString(headerJSON)
bodyJSON, err := json.Marshal(body)
if err != nil {
t.Fatalf("cannot marshal body: %s", err)
}
bodyB64 := base64.RawURLEncoding.EncodeToString(bodyJSON)
payload := headerB64 + "." + bodyB64
var signatureB64 string
if valid {
// Create real RSA signature
hash := crypto.SHA256
h := hash.New()
h.Write([]byte(payload))
digest := h.Sum(nil)
signature, err := rsa.SignPKCS1v15(rand.Reader, privateKey, hash, digest)
if err != nil {
t.Fatalf("cannot sign token: %s", err)
}
signatureB64 = base64.RawURLEncoding.EncodeToString(signature)
} else {
signatureB64 = base64.RawURLEncoding.EncodeToString([]byte("invalid_signature"))
}
return payload + "." + signatureB64
}
f := func(name string, cfgStr string, r *http.Request, statusCodeExpected int) {
b.Helper()
b.ReportAllocs()
b.ResetTimer()
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
if _, err := w.Write([]byte("path: " + r.URL.Path + "\n")); err != nil {
panic(fmt.Errorf("cannot write response: %w", err))
}
}))
defer ts.Close()
cfgStr = strings.ReplaceAll(cfgStr, "{BACKEND}", ts.URL)
cfgOrigP := authConfigData.Load()
if _, err := reloadAuthConfigData([]byte(cfgStr)); err != nil {
b.Fatalf("cannot load config data: %s", err)
}
defer func() {
cfgOrig := []byte("unauthorized_user:\n url_prefix: http://foo/bar")
if cfgOrigP != nil {
cfgOrig = *cfgOrigP
}
_, err := reloadAuthConfigData(cfgOrig)
if err != nil {
b.Fatalf("cannot load the original config: %s", err)
}
}()
b.Run(name, func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
w := &fakeResponseWriter{}
for pb.Next() {
w.reset()
if !requestHandlerWithInternalRoutes(w, r) {
b.Fatalf("unexpected false is returned from requestHandler")
}
if w.statusCode != statusCodeExpected {
b.Fatalf("unexpected response code (-%d;+%d)", statusCodeExpected, w.statusCode)
}
}
})
})
}
simpleCfgStr := fmt.Sprintf(`
users:
- jwt:
public_keys:
- %q
url_prefix: {BACKEND}/foo`, string(publicKeyPEM))
noVMAccessClaimToken := genToken(b, nil, true)
expiredToken := genToken(b, map[string]any{
"exp": 10,
"vm_access": map[string]any{},
}, true)
fullToken := genToken(b, map[string]any{
"exp": time.Now().Add(10 * time.Minute).Unix(),
"scope": "email id",
"vm_access": map[string]any{
"extra_labels": map[string]string{
"label": "value1",
"label2": "value3",
},
"extra_filters": []string{"stream_filter1", "stream_filter2"},
"metrics_account_id": 123,
"metrics_project_id": 234,
"metrics_extra_labels": []string{
"label1=value1",
"label2=value2",
},
"metrics_extra_filters": []string{
`{label3="value3"}`,
`{label4="value4"}`,
},
"logs_account_id": 345,
"logs_project_id": 456,
"logs_extra_filters": []string{
`{"namespace":"my-app","env":"prod"}`,
},
"logs_extra_stream_filters": []string{
`{"team":"dev"}`,
},
},
}, true)
// Tenant headers are overwritten if they are set as placeholders.
// extra_filters and extra_stream_filters from the vm_access claim are merged with the statically defined ones.
request := httptest.NewRequest(`GET`, "http://some-host.com/query", nil)
request.Header.Set(`Authorization`, `Bearer `+fullToken)
f("full_template",
fmt.Sprintf(`
users:
- jwt:
public_keys:
- %q
headers:
- "AccountID: {{.LogsAccountID}}"
- "ProjectID: {{.LogsProjectID}}"
url_prefix: {BACKEND}/select/logsql/?extra_filters=aStaticFilter&extra_stream_filters=aStaticStreamFilter&extra_filters={{.LogsExtraFilters}}&extra_stream_filters={{.LogsExtraStreamFilters}}`, string(publicKeyPEM)),
request,
http.StatusOK,
)
// token without vm_access claim
request = httptest.NewRequest(`GET`, "http://some-host.com/abc", nil)
request.Header.Set(`Authorization`, `Bearer `+noVMAccessClaimToken)
f("token_without_claim", simpleCfgStr, request, http.StatusUnauthorized)
// expired token
request = httptest.NewRequest(`GET`, "http://some-host.com/abc", nil)
request.Header.Set(`Authorization`, `Bearer `+expiredToken)
f("expired_token", simpleCfgStr, request, http.StatusUnauthorized)
}
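To make the full_template case concrete: assuming the {{.LogsAccountID}}-style placeholders map one-to-one onto the vm_access claim fields of fullToken above, the expansion would be roughly:

// Assumed placeholder expansion for fullToken (an illustration, not verified output):
//   {{.LogsAccountID}}          -> 345
//   {{.LogsProjectID}}          -> 456
//   {{.LogsExtraFilters}}       -> {"namespace":"my-app","env":"prod"}
//   {{.LogsExtraStreamFilters}} -> {"team":"dev"}
// so the proxied request carries "AccountID: 345" and "ProjectID: 456" headers,
// and the token-derived filters are appended next to the static
// extra_filters/extra_stream_filters query args.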

app/vmauth/oidc.go Normal file
View File

@@ -0,0 +1,195 @@
package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/jwt"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
)
type oidcConfig struct {
Issuer string `yaml:"issuer"`
}
type oidcDiscovererPool struct {
ds map[string]*oidcDiscoverer
context context.Context
cancel func()
wg *sync.WaitGroup
}
func (dp *oidcDiscovererPool) createOrAdd(issuer string, vp *atomic.Pointer[jwt.VerifierPool]) {
if dp.ds == nil {
dp.ds = make(map[string]*oidcDiscoverer)
dp.context, dp.cancel = context.WithCancel(context.Background())
dp.wg = &sync.WaitGroup{}
}
ds, found := dp.ds[issuer]
if !found {
ds = &oidcDiscoverer{
issuer: issuer,
}
dp.ds[issuer] = ds
}
ds.vps = append(ds.vps, vp)
}
func (dp *oidcDiscovererPool) startDiscovery() {
if len(dp.ds) == 0 {
return
}
for _, d := range dp.ds {
dp.wg.Go(func() {
if err := d.refreshVerifierPools(dp.context); err != nil {
logger.Errorf("failed to initialize OIDC verifier pool at start for issuer %q: %s", d.issuer, err)
}
})
}
dp.wg.Wait()
for _, d := range dp.ds {
dp.wg.Go(func() {
d.run(dp.context)
})
}
}
func (dp *oidcDiscovererPool) stopDiscovery() {
if len(dp.ds) == 0 {
return
}
dp.cancel()
dp.wg.Wait()
}
type oidcDiscoverer struct {
issuer string
vps []*atomic.Pointer[jwt.VerifierPool]
}
func (d *oidcDiscoverer) run(ctx context.Context) {
t := time.NewTimer(timeutil.AddJitterToDuration(time.Second * 10))
defer t.Stop()
for {
select {
case <-t.C:
if err := d.refreshVerifierPools(ctx); errors.Is(err, context.Canceled) {
return
} else if err != nil {
t.Reset(timeutil.AddJitterToDuration(time.Second * 10))
logger.Errorf("failed to refresh OIDC verifier pool for issuer %q: %v", d.issuer, err)
continue
}
// The OIDC provider may return a Cache-Control header with a max-age directive.
// It could be used as the interval until the next refresh.
// https://openid.net/specs/openid-connect-core-1_0.html#RotateEncKeys
t.Reset(timeutil.AddJitterToDuration(time.Minute * 5))
case <-ctx.Done():
return
}
}
}
func (d *oidcDiscoverer) refreshVerifierPools(ctx context.Context) error {
cfg, err := getOpenIDConfiguration(ctx, d.issuer)
if err != nil {
return err
}
// The issuer in the OIDC configuration must match the expected issuer.
// https://openid.net/specs/openid-connect-core-1_0.html#RotateEncKeys
if cfg.Issuer != d.issuer {
return fmt.Errorf("openid configuration issuer %q does not match expected issuer %q", cfg.Issuer, d.issuer)
}
verifierPool, err := fetchAndParseJWKs(ctx, cfg.JWKsURI)
if err != nil {
return err
}
for _, vp := range d.vps {
vp.Store(verifierPool)
}
return nil
}
// See https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata for details.
type openidConfig struct {
Issuer string `json:"issuer"`
JWKsURI string `json:"jwks_uri"`
}
var oidcHTTPClient = &http.Client{
Timeout: time.Second * 5,
}
func fetchAndParseJWKs(ctx context.Context, jwksURI string) (*jwt.VerifierPool, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodGet, jwksURI, nil)
if err != nil {
return nil, fmt.Errorf("failed to create request for fetching jwks keys from %q: %w", jwksURI, err)
}
resp, err := oidcHTTPClient.Do(req)
if err != nil {
return nil, fmt.Errorf("failed to fetch jwks keys from %q: %w", jwksURI, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("unexpected status code %d when fetching jwks keys from %q", resp.StatusCode, jwksURI)
}
b, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read response body from %q: %w", jwksURI, err)
}
vp, err := jwt.ParseJWKs(b)
if err != nil {
return nil, fmt.Errorf("failed to parse jwks keys from %q: %v", jwksURI, err)
}
return vp, nil
}
func getOpenIDConfiguration(ctx context.Context, issuer string) (openidConfig, error) {
issuer, _ = strings.CutSuffix(issuer, "/")
configURL := fmt.Sprintf("%s/.well-known/openid-configuration", issuer)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, configURL, nil)
if err != nil {
return openidConfig{}, fmt.Errorf("failed to create request for fetching openid config from %q: %w", configURL, err)
}
resp, err := oidcHTTPClient.Do(req)
if err != nil {
return openidConfig{}, fmt.Errorf("failed to fetch openid config from %q: %w", configURL, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return openidConfig{}, fmt.Errorf("unexpected status code %d when fetching openid config from %q", resp.StatusCode, configURL)
}
var cfg openidConfig
if err := json.NewDecoder(resp.Body).Decode(&cfg); err != nil {
return openidConfig{}, fmt.Errorf("failed to decode openid config from %q: %s", configURL, err)
}
return cfg, nil
}
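The discovery flow above can also be exercised directly. A minimal sketch using only functions from this file (the issuer URL is hypothetical):

ctx := context.Background()
cfg, err := getOpenIDConfiguration(ctx, "https://idp.example.com") // hypothetical issuer
if err != nil {
	logger.Fatalf("discovery failed: %s", err)
}
vp, err := fetchAndParseJWKs(ctx, cfg.JWKsURI)
if err != nil {
	logger.Fatalf("JWKS fetch failed: %s", err)
}
_ = vp // stored into each atomic.Pointer[jwt.VerifierPool] by refreshVerifierPools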

View File

@@ -174,7 +174,7 @@ func TestCreateTargetURLSuccess(t *testing.T) {
},
RetryStatusCodes: []int{503, 501},
LoadBalancingPolicy: "first_available",
DropSrcPathPrefixParts: intp(2),
DropSrcPathPrefixParts: new(2),
}, "/a/b/c", "http://foo.bar/c", `bb: aaa`, `x: y`, []int{503, 501}, "first_available", 2)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/federate"),
@@ -219,13 +219,13 @@ func TestCreateTargetURLSuccess(t *testing.T) {
},
RetryStatusCodes: []int{503, 500, 501},
LoadBalancingPolicy: "first_available",
DropSrcPathPrefixParts: intp(1),
DropSrcPathPrefixParts: new(1),
},
{
SrcPaths: getRegexs([]string{"/api/v1/write"}),
URLPrefix: mustParseURL("http://vminsert/0/prometheus"),
RetryStatusCodes: []int{},
DropSrcPathPrefixParts: intp(0),
DropSrcPathPrefixParts: new(0),
},
{
SrcPaths: getRegexs([]string{"/metrics"}),
@@ -242,7 +242,7 @@ func TestCreateTargetURLSuccess(t *testing.T) {
},
},
RetryStatusCodes: []int{502},
DropSrcPathPrefixParts: intp(2),
DropSrcPathPrefixParts: new(2),
}
f(ui, "http://host42/vmsingle/api/v1/query?query=up&db=foo", "http://vmselect/0/prometheus/api/v1/query?db=foo&query=up",
"xx: aa\nyy: asdf", "qwe: rty", []int{503, 500, 501}, "first_available", 1)
@@ -259,7 +259,7 @@ func TestCreateTargetURLSuccess(t *testing.T) {
SrcPaths: getRegexs([]string{"/api/v1/write"}),
URLPrefix: mustParseURL("http://vminsert/0/prometheus"),
RetryStatusCodes: []int{},
DropSrcPathPrefixParts: intp(0),
DropSrcPathPrefixParts: new(0),
},
{
SrcPaths: getRegexs([]string{"/metrics/a/b"}),
@@ -275,7 +275,7 @@ func TestCreateTargetURLSuccess(t *testing.T) {
},
},
RetryStatusCodes: []int{502},
DropSrcPathPrefixParts: intp(2),
DropSrcPathPrefixParts: new(2),
}
f(ui, "https://foo-host/api/v1/write", "http://vminsert/0/prometheus/api/v1/write", "", "", []int{}, "least_loaded", 0)
f(ui, "https://foo-host/metrics/a/b", "http://metrics-server/b", "", "", []int{502}, "least_loaded", 2)

View File

@@ -21,6 +21,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/pushmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/snapshot"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/snapshot/snapshotutil"
)

View File

@@ -47,7 +47,7 @@ func New(retries int, factor float64, minDuration time.Duration) (*Backoff, erro
// Retry process retries until all attempts are completed
func (b *Backoff) Retry(ctx context.Context, cb retryableFunc) (uint64, error) {
var attempt uint64
for i := 0; i < b.retries; i++ {
for i := range b.retries {
err := cb()
if err == nil {
return attempt, nil

View File

@@ -416,6 +416,16 @@ const (
promTemporaryDirPath = "prom-tmp-dir-path"
)
const (
thanosSnapshot = "thanos-snapshot"
thanosConcurrency = "thanos-concurrency"
thanosFilterTimeStart = "thanos-filter-time-start"
thanosFilterTimeEnd = "thanos-filter-time-end"
thanosFilterLabel = "thanos-filter-label"
thanosFilterLabelValue = "thanos-filter-label-value"
thanosAggrTypes = "thanos-aggr-types"
)
var (
promFlags = []cli.Flag{
&cli.StringFlag{
@@ -451,6 +461,43 @@ var (
Value: os.TempDir(),
},
}
thanosFlags = []cli.Flag{
&cli.StringFlag{
Name: thanosSnapshot,
Usage: "Path to Thanos snapshot directory containing raw and/or downsampled blocks.",
Required: true,
},
&cli.IntFlag{
Name: thanosConcurrency,
Usage: "Number of concurrently running snapshot readers",
Value: 1,
},
&cli.StringFlag{
Name: thanosFilterTimeStart,
Usage: "The time filter in RFC3339 format to select timeseries with timestamp equal or higher than provided value. E.g. '2020-01-01T20:07:00Z'",
},
&cli.StringFlag{
Name: thanosFilterTimeEnd,
Usage: "The time filter in RFC3339 format to select timeseries with timestamp equal or lower than provided value. E.g. '2020-01-01T20:07:00Z'",
},
&cli.StringFlag{
Name: thanosFilterLabel,
Usage: "Thanos label name to filter timeseries by. E.g. '__name__' will filter timeseries by name.",
},
&cli.StringFlag{
Name: thanosFilterLabelValue,
Usage: fmt.Sprintf("Thanos regular expression to filter label from %q flag.", thanosFilterLabel),
Value: ".*",
},
&cli.StringSliceFlag{
Name: thanosAggrTypes,
Usage: "Aggregate types to import from Thanos downsampled blocks. Supported values: count, sum, min, max, counter. " +
"Each aggregate will be imported as a separate metric with the aggregate type as suffix (e.g., metric_name:5m:count). " +
"If not specified, all aggregate types will be imported from downsampled blocks.",
Value: nil,
},
}
)
const (

View File

@@ -27,6 +27,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/opentsdb"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/thanos"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
@@ -285,6 +286,7 @@ func main() {
if err != nil {
return fmt.Errorf("failed to create prometheus client: %s", err)
}
pp := prometheusProcessor{
cl: cl,
im: importer,
@@ -294,6 +296,59 @@ func main() {
return pp.run(ctx)
},
},
{
Name: "thanos",
Usage: "Migrate time series from Thanos blocks (supports raw and downsampled data)",
Flags: mergeFlags(globalFlags, thanosFlags, vmFlags),
Before: beforeFn,
Action: func(c *cli.Context) error {
fmt.Println("Thanos import mode")
vmCfg, err := initConfigVM(c)
if err != nil {
return fmt.Errorf("failed to init VM configuration: %s", err)
}
importer, err = vm.NewImporter(ctx, vmCfg)
if err != nil {
return fmt.Errorf("failed to create VM importer: %s", err)
}
thanosCfg := thanos.Config{
Snapshot: c.String(thanosSnapshot),
Filter: thanos.Filter{
TimeMin: c.String(thanosFilterTimeStart),
TimeMax: c.String(thanosFilterTimeEnd),
Label: c.String(thanosFilterLabel),
LabelValue: c.String(thanosFilterLabelValue),
},
}
cl, err := thanos.NewClient(thanosCfg)
if err != nil {
return fmt.Errorf("failed to create thanos client: %s", err)
}
var aggrTypes []thanos.AggrType
if aggrTypesStr := c.StringSlice(thanosAggrTypes); len(aggrTypesStr) > 0 {
for _, typeStr := range aggrTypesStr {
aggrType, err := thanos.ParseAggrType(typeStr)
if err != nil {
return fmt.Errorf("failed to parse aggregate type %q: %s", typeStr, err)
}
aggrTypes = append(aggrTypes, aggrType)
}
}
tp := thanosProcessor{
cl: cl,
im: importer,
cc: c.Int(thanosConcurrency),
isVerbose: c.Bool(globalVerbose),
aggrTypes: aggrTypes,
}
return tp.run(ctx)
},
},
{
Name: "vm-native",
Usage: "Migrate time series between VictoriaMetrics installations",

View File

@@ -109,7 +109,7 @@ func (c Client) FindMetrics(q string) ([]string, error) {
return nil, fmt.Errorf("failed to send GET request to %q: %s", q, err)
}
if resp.StatusCode != 200 {
return nil, fmt.Errorf("bad return from OpenTSDB: %q: %v", resp.StatusCode, resp)
return nil, fmt.Errorf("bad return from OpenTSDB: %d: %v", resp.StatusCode, resp)
}
defer func() { _ = resp.Body.Close() }()
body, err := io.ReadAll(resp.Body)
@@ -133,7 +133,7 @@ func (c Client) FindSeries(metric string) ([]Meta, error) {
return nil, fmt.Errorf("failed to set GET request to %q: %s", q, err)
}
if resp.StatusCode != 200 {
return nil, fmt.Errorf("bad return from OpenTSDB: %q: %v", resp.StatusCode, resp)
return nil, fmt.Errorf("bad return from OpenTSDB: %d: %v", resp.StatusCode, resp)
}
defer func() { _ = resp.Body.Close() }()
body, err := io.ReadAll(resp.Body)

View File

@@ -0,0 +1,233 @@
package thanos
import (
"encoding/binary"
"errors"
"fmt"
"github.com/prometheus/prometheus/tsdb/chunkenc"
)
// ChunkEncAggr is the top level encoding byte for the AggrChunk.
// It is defined by Thanos as 0xff to prevent collisions with Prometheus encodings.
const ChunkEncAggr = chunkenc.Encoding(0xff)
// AggrType represents an aggregation type in Thanos downsampled blocks.
type AggrType uint8
// AggrTypeNone indicates raw blocks with no aggregation.
// It is used as a sentinel to distinguish raw block processing from downsampled block processing.
const AggrTypeNone AggrType = 255
// Valid aggregation types matching Thanos definitions.
const (
AggrCount AggrType = iota
AggrSum
AggrMin
AggrMax
AggrCounter
)
// AllAggrTypes contains all supported aggregation types.
var AllAggrTypes = []AggrType{AggrCount, AggrSum, AggrMin, AggrMax, AggrCounter}
func (t AggrType) String() string {
switch t {
case AggrCount:
return "count"
case AggrSum:
return "sum"
case AggrMin:
return "min"
case AggrMax:
return "max"
case AggrCounter:
return "counter"
}
return "<unknown>"
}
// ParseAggrType parses aggregate type from string.
func ParseAggrType(s string) (AggrType, error) {
switch s {
case "count":
return AggrCount, nil
case "sum":
return AggrSum, nil
case "min":
return AggrMin, nil
case "max":
return AggrMax, nil
case "counter":
return AggrCounter, nil
}
return 0, fmt.Errorf("unknown aggregate type: %q", s)
}
// ErrAggrNotExist is returned if a requested aggregation is not present in an AggrChunk.
var ErrAggrNotExist = errors.New("aggregate does not exist")
// AggrChunk is a chunk that is composed of a set of aggregates for the same underlying data.
// Not all aggregates must be present.
// This is a read-only implementation for decoding Thanos downsampled blocks.
type AggrChunk []byte
// IsAggrChunk checks if the encoding byte indicates this is an AggrChunk.
func IsAggrChunk(enc chunkenc.Encoding) bool {
return enc == ChunkEncAggr
}
// Get returns the sub-chunk for the given aggregate type if it exists.
func (c AggrChunk) Get(t AggrType) (chunkenc.Chunk, error) {
b := c[:]
var x []byte
for i := AggrType(0); i <= t; i++ {
l, n := binary.Uvarint(b)
if n < 1 {
return nil, errors.New("invalid size: failed to read uvarint")
}
if l > uint64(len(b[n:])) || l+1 > uint64(len(b[n:])) {
if l > 0 {
return nil, errors.New("invalid size: not enough bytes")
}
}
b = b[n:]
// If length is set to zero explicitly, that means the aggregate is unset.
if l == 0 {
if i == t {
return nil, ErrAggrNotExist
}
continue
}
chunkLen := int(l) + 1
x = b[:chunkLen]
b = b[chunkLen:]
}
if len(x) == 0 {
return nil, ErrAggrNotExist
}
return chunkenc.FromData(chunkenc.Encoding(x[0]), x[1:])
}
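// A sketch of the layout Get decodes above (illustrative, not taken from the
// Thanos spec): each aggregate slot is a uvarint length l followed, when
// l > 0, by one encoding byte and l data bytes. A chunk holding only the
// "sum" aggregate could look like
//
//	0x00                  count: l=0 => unset
//	0x04 0x01 b0 b1 b2 b3 sum:   l=4 => encoding byte (0x01 = XOR) + 4 data bytes
//	0x00 0x00 0x00        min, max, counter: unset
//
// so Get(AggrSum) returns the XOR sub-chunk, while Get(AggrMin) returns
// ErrAggrNotExist.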
// Encoding returns the encoding type for AggrChunk.
func (c AggrChunk) Encoding() chunkenc.Encoding {
return ChunkEncAggr
}
// errIterator wraps a nop iterator but reports an error via Err().
// It embeds chunkenc.Iterator to inherit all methods (including Seek),
// which avoids the go vet stdmethods warning about the Seek signature.
type errIterator struct {
chunkenc.Iterator
err error
}
// Err returns the underlying error.
func (it *errIterator) Err() error {
return it.err
}
// newAggrChunkIterator creates a new iterator for the specified aggregate type.
// If the aggregate is not present in the chunk (ErrAggrNotExist), a nop iterator
// is returned without error — the caller will simply see zero samples.
// Real decoding/corruption errors are reported via the iterator's Err() method.
func newAggrChunkIterator(data []byte, aggrType AggrType) chunkenc.Iterator {
chunk := AggrChunk(data)
subChunk, err := chunk.Get(aggrType)
if err != nil {
if errors.Is(err, ErrAggrNotExist) {
return chunkenc.NewNopIterator()
}
return &errIterator{
Iterator: chunkenc.NewNopIterator(),
err: err,
}
}
return subChunk.Iterator(nil)
}
// AggrChunkWrapper wraps AggrChunk to implement chunkenc.Chunk interface.
// It delegates iteration to a specific aggregate type.
type AggrChunkWrapper struct {
data []byte
aggrType AggrType
}
// NewAggrChunkWrapper creates a new AggrChunk wrapper for the specified aggregate type.
func NewAggrChunkWrapper(data []byte, aggrType AggrType) *AggrChunkWrapper {
return &AggrChunkWrapper{
data: data,
aggrType: aggrType,
}
}
// Bytes returns the underlying byte slice.
func (c *AggrChunkWrapper) Bytes() []byte {
return c.data
}
// Encoding returns the AggrChunk encoding.
func (c *AggrChunkWrapper) Encoding() chunkenc.Encoding {
return ChunkEncAggr
}
// Appender returns an error since AggrChunk is read-only.
func (c *AggrChunkWrapper) Appender() (chunkenc.Appender, error) {
return nil, errors.New("AggrChunk is read-only")
}
// Iterator returns an iterator for the specified aggregate type.
func (c *AggrChunkWrapper) Iterator(it chunkenc.Iterator) chunkenc.Iterator {
return newAggrChunkIterator(c.data, c.aggrType)
}
// NumSamples returns the number of samples in the aggregate.
func (c *AggrChunkWrapper) NumSamples() int {
chunk := AggrChunk(c.data)
subChunk, err := chunk.Get(c.aggrType)
if err != nil {
return 0
}
return subChunk.NumSamples()
}
// Compact is a no-op for read-only AggrChunk.
func (c *AggrChunkWrapper) Compact() {}
// Reset resets the chunk with new data.
func (c *AggrChunkWrapper) Reset(stream []byte) {
c.data = stream
}
// AggrChunkPool is a custom Pool that understands AggrChunk encoding (0xff).
// It delegates standard encodings to the default pool and handles AggrChunk specially.
type AggrChunkPool struct {
defaultPool chunkenc.Pool
aggrType AggrType
}
// NewAggrChunkPool creates a new pool that handles AggrChunk encoding.
func NewAggrChunkPool(aggrType AggrType) *AggrChunkPool {
return &AggrChunkPool{
defaultPool: chunkenc.NewPool(),
aggrType: aggrType,
}
}
// Get returns a chunk for the given encoding and data.
func (p *AggrChunkPool) Get(e chunkenc.Encoding, b []byte) (chunkenc.Chunk, error) {
if e == ChunkEncAggr {
return NewAggrChunkWrapper(b, p.aggrType), nil
}
return p.defaultPool.Get(e, b)
}
// Put returns a chunk to the pool.
func (p *AggrChunkPool) Put(c chunkenc.Chunk) error {
if c.Encoding() == ChunkEncAggr {
// AggrChunk wrappers are not pooled
return nil
}
return p.defaultPool.Put(c)
}
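Taken together, a hedged sketch of decoding one aggregate from a raw AggrChunk payload with the types above (rawAggrChunk is an assumed byte slice in the layout Get expects):

pool := NewAggrChunkPool(AggrSum)
chk, err := pool.Get(ChunkEncAggr, rawAggrChunk) // returns an *AggrChunkWrapper
if err != nil {
	return err
}
it := chk.Iterator(nil)
for it.Next() == chunkenc.ValFloat {
	t, v := it.At()
	fmt.Println(t, v)
}
if err := it.Err(); err != nil {
	return err
}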

View File

@@ -0,0 +1,110 @@
package thanos
import (
"encoding/json"
"os"
"path/filepath"
)
// BlockMeta extends Prometheus BlockMeta with Thanos-specific fields.
type BlockMeta struct {
// Thanos-specific metadata
Thanos ThanosMeta `json:"thanos,omitempty"`
}
// ThanosMeta contains Thanos-specific block metadata.
type ThanosMeta struct {
// Labels are external labels identifying the producer.
Labels map[string]string `json:"labels,omitempty"`
// Downsample contains downsampling information.
Downsample ThanosDownsample `json:"downsample,omitempty"`
// Source indicates where the block came from.
Source string `json:"source,omitempty"`
// SegmentFiles contains list of segment files in the block.
SegmentFiles []string `json:"segment_files,omitempty"`
// Files contains metadata about files in the block.
Files []ThanosFile `json:"files,omitempty"`
}
// ThanosDownsample contains downsampling resolution info.
type ThanosDownsample struct {
// Resolution is the downsampling resolution in milliseconds.
// 0 means raw data (no downsampling).
// 300000 (5 minutes) or 3600000 (1 hour) for downsampled data.
Resolution int64 `json:"resolution"`
}
// ThanosFile contains metadata about a file in the block.
type ThanosFile struct {
RelPath string `json:"rel_path"`
SizeBytes int64 `json:"size_bytes,omitempty"`
}
// ResolutionLevel represents the downsampling resolution.
type ResolutionLevel int64
const (
// ResolutionRaw is for raw, non-downsampled data.
ResolutionRaw ResolutionLevel = 0
// Resolution5m is for 5-minute downsampled data (300000 ms).
Resolution5m ResolutionLevel = 300000
// Resolution1h is for 1-hour downsampled data (3600000 ms).
Resolution1h ResolutionLevel = 3600000
)
// String returns human-readable resolution string.
func (r ResolutionLevel) String() string {
switch r {
case ResolutionRaw:
return "raw"
case Resolution5m:
return "5m"
case Resolution1h:
return "1h"
default:
return "unknown"
}
}
// ReadBlockMeta reads Thanos-extended block metadata from meta.json.
func ReadBlockMeta(blockDir string) (*BlockMeta, error) {
metaPath := filepath.Join(blockDir, "meta.json")
data, err := os.ReadFile(metaPath)
if err != nil {
return nil, err
}
var meta BlockMeta
if err := json.Unmarshal(data, &meta); err != nil {
return nil, err
}
return &meta, nil
}
// IsDownsampled returns true if the block contains downsampled data.
func (m *BlockMeta) IsDownsampled() bool {
return m.Thanos.Downsample.Resolution > 0
}
// Resolution returns the block's downsampling resolution.
func (m *BlockMeta) Resolution() ResolutionLevel {
return ResolutionLevel(m.Thanos.Downsample.Resolution)
}
// ResolutionSuffix returns a suffix string for metric names based on resolution.
// For example: ":5m" or ":1h" for downsampled data, empty for raw data.
func (m *BlockMeta) ResolutionSuffix() string {
switch m.Resolution() {
case Resolution5m:
return ":5m"
case Resolution1h:
return ":1h"
default:
return ""
}
}
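A short sketch of how these helpers compose (the block directory and metric name are illustrative):

meta, err := ReadBlockMeta("/snapshots/thanos/block-1") // hypothetical block dir
if err != nil {
	return err
}
name := "http_requests_total" + meta.ResolutionSuffix()
// raw block -> "http_requests_total"
// 5m block  -> "http_requests_total:5m"
// 1h block  -> "http_requests_total:1h"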

View File

@@ -0,0 +1,83 @@
package thanos
import (
"fmt"
"io"
"os"
"path/filepath"
"github.com/prometheus/prometheus/tsdb"
"github.com/prometheus/prometheus/tsdb/chunkenc"
)
// BlockInfo contains information about a block including Thanos metadata.
type BlockInfo struct {
Block tsdb.BlockReader
Resolution ResolutionLevel
IsThanos bool
// Closer releases the block's resources (file descriptors, mmap).
// Must be called only after all queriers on this block have been closed.
Closer io.Closer
}
// OpenBlocksWithInfo opens all blocks and returns them with their metadata.
// snapshotDir must be a snapshot directory containing block directories.
func OpenBlocksWithInfo(snapshotDir string, aggrType AggrType) ([]BlockInfo, error) {
entries, err := os.ReadDir(snapshotDir)
if err != nil {
return nil, fmt.Errorf("failed to read snapshot directory: %w", err)
}
var blocks []BlockInfo
for _, entry := range entries {
if !entry.IsDir() {
continue
}
blockDir := filepath.Join(snapshotDir, entry.Name())
metaPath := filepath.Join(blockDir, "meta.json")
// Check if this is a valid block directory (has meta.json)
if _, err := os.Stat(metaPath); os.IsNotExist(err) {
continue
}
meta, err := ReadBlockMeta(blockDir)
if err != nil {
CloseBlocks(blocks)
return nil, fmt.Errorf("failed to read Thanos metadata for block %s: %w", blockDir, err)
}
var pool chunkenc.Pool
if meta.IsDownsampled() {
// Use AggrChunkPool for downsampled blocks
pool = NewAggrChunkPool(aggrType)
}
block, err := tsdb.OpenBlock(nil, blockDir, pool, nil)
if err != nil {
// Close previously opened blocks before returning error
CloseBlocks(blocks)
return nil, fmt.Errorf("failed to open block %s: %w", blockDir, err)
}
blocks = append(blocks, BlockInfo{
Block: block,
Resolution: meta.Resolution(),
IsThanos: true,
Closer: block,
})
}
return blocks, nil
}
// CloseBlocks closes all blocks in the slice.
// Must be called only after all queriers on these blocks have been closed.
func CloseBlocks(blocks []BlockInfo) {
for _, bi := range blocks {
if bi.Closer != nil {
_ = bi.Closer.Close()
}
}
}
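A minimal usage sketch for the two helpers above (the snapshot path is illustrative):

blocks, err := OpenBlocksWithInfo("/snapshots/thanos", AggrCount)
if err != nil {
	return err
}
defer CloseBlocks(blocks)
for _, bi := range blocks {
	fmt.Printf("block %s: resolution=%s thanos=%v\n", bi.Block.Meta().ULID, bi.Resolution, bi.IsThanos)
}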

app/vmctl/thanos/client.go Normal file
View File

@@ -0,0 +1,198 @@
package thanos
import (
"context"
"fmt"
"time"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/storage"
"github.com/prometheus/prometheus/tsdb"
)
// Config contains parameters for reading Thanos snapshots.
type Config struct {
Snapshot string
Filter Filter
}
// Filter contains configuration for filtering the timeseries.
type Filter struct {
TimeMin string
TimeMax string
Label string
LabelValue string
}
// Client reads Thanos snapshot blocks, including downsampled blocks with AggrChunk encoding.
type Client struct {
snapshotPath string
filter filter
statsPrinted bool
}
type filter struct {
min, max int64
label string
labelValue string
}
func (f filter) inRange(minV, maxV int64) bool {
fmin, fmax := f.min, f.max
if fmin == 0 {
fmin = minV
}
if fmax == 0 {
fmax = maxV
}
return minV <= fmax && fmin <= maxV
}
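// For example, with filter{min: 100, max: 0} a block spanning [50, 90] is
// rejected (fmax defaults to the block's own 90, and 100 <= 90 is false),
// while a block spanning [80, 120] overlaps the open-ended window and passes.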
// NewClient creates a new Thanos snapshot client.
func NewClient(cfg Config) (*Client, error) {
minTime, maxTime, err := parseTime(cfg.Filter.TimeMin, cfg.Filter.TimeMax)
if err != nil {
return nil, fmt.Errorf("failed to parse time in filter: %s", err)
}
return &Client{
snapshotPath: cfg.Snapshot,
filter: filter{
min: minTime,
max: maxTime,
label: cfg.Filter.Label,
labelValue: cfg.Filter.LabelValue,
},
}, nil
}
// Explore fetches all available blocks from the snapshot with support for
// Thanos AggrChunk (downsampled blocks). It opens blocks with a custom pool
// that can decode AggrChunk encoding (0xff).
func (c *Client) Explore(aggrType AggrType) ([]BlockInfo, error) {
blockInfos, err := OpenBlocksWithInfo(c.snapshotPath, aggrType)
if err != nil {
return nil, fmt.Errorf("failed to open blocks: %w", err)
}
s := &Stats{
Filtered: c.filter.min != 0 || c.filter.max != 0 || c.filter.label != "",
Blocks: len(blockInfos),
}
var blocksToImport []BlockInfo
for _, bi := range blockInfos {
meta := bi.Block.Meta()
if s.MinTime == 0 || meta.MinTime < s.MinTime {
s.MinTime = meta.MinTime
}
if s.MaxTime == 0 || meta.MaxTime > s.MaxTime {
s.MaxTime = meta.MaxTime
}
if !c.filter.inRange(meta.MinTime, meta.MaxTime) {
s.SkippedBlocks++
if bi.Closer != nil {
_ = bi.Closer.Close()
}
continue
}
s.Samples += meta.Stats.NumSamples
s.Series += meta.Stats.NumSeries
blocksToImport = append(blocksToImport, bi)
}
if !c.statsPrinted {
fmt.Println(s)
c.statsPrinted = true
}
return blocksToImport, nil
}
// querierSeriesSet wraps a SeriesSet and its underlying Querier, ensuring
// the querier is closed once the SeriesSet has been fully consumed.
// This releases the querier's read reference on the block, which is required
// for Block.Close() to complete without hanging.
type querierSeriesSet struct {
storage.SeriesSet
q storage.Querier
closed bool
}
// Next advances the iterator. When the underlying SeriesSet is exhausted,
// it closes the querier to release resources.
func (s *querierSeriesSet) Next() bool {
if s.SeriesSet.Next() {
return true
}
if !s.closed {
_ = s.q.Close()
s.closed = true
}
return false
}
// Close explicitly closes the underlying querier.
// This must be called if iteration is stopped early (before Next returns false)
// to release block read references and prevent Block.Close() from hanging.
func (s *querierSeriesSet) Close() {
if !s.closed {
_ = s.q.Close()
s.closed = true
}
}
// ClosableSeriesSet extends storage.SeriesSet with a Close method for explicit cleanup.
type ClosableSeriesSet interface {
storage.SeriesSet
Close()
}
// Read reads the given BlockInfo according to configured time and label filters.
// The returned ClosableSeriesSet automatically closes the underlying querier when fully consumed,
// but Close() should be called explicitly (e.g., via defer) to handle early returns.
func (c *Client) Read(bi BlockInfo) (ClosableSeriesSet, error) {
minTime, maxTime := bi.Block.Meta().MinTime, bi.Block.Meta().MaxTime
if c.filter.min != 0 {
minTime = c.filter.min
}
if c.filter.max != 0 {
maxTime = c.filter.max
}
q, err := tsdb.NewBlockQuerier(bi.Block, minTime, maxTime)
if err != nil {
return nil, err
}
ss := q.Select(
context.Background(),
false,
nil,
labels.MustNewMatcher(labels.MatchRegexp, c.filter.label, c.filter.labelValue),
)
return &querierSeriesSet{
SeriesSet: ss,
q: q,
}, nil
}
func parseTime(start, end string) (int64, int64, error) {
var s, e int64
if start == "" && end == "" {
return 0, 0, nil
}
if start != "" {
v, err := time.Parse(time.RFC3339, start)
if err != nil {
return 0, 0, fmt.Errorf("failed to parse %q: %s", start, err)
}
s = v.UnixNano() / int64(time.Millisecond)
}
if end != "" {
v, err := time.Parse(time.RFC3339, end)
if err != nil {
return 0, 0, fmt.Errorf("failed to parse %q: %s", end, err)
}
e = v.UnixNano() / int64(time.Millisecond)
}
return s, e, nil
}
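End to end, the client is consumed roughly as follows; a condensed sketch of the loop the thanosProcessor runs (the snapshot path is illustrative):

cl, err := NewClient(Config{Snapshot: "/snapshots/thanos"})
if err != nil {
	return err
}
blocks, err := cl.Explore(AggrCount)
if err != nil {
	return err
}
defer CloseBlocks(blocks)
for _, bi := range blocks {
	ss, err := cl.Read(bi)
	if err != nil {
		return err
	}
	for ss.Next() {
		_ = ss.At() // hand each series to the importer
	}
	ss.Close() // safe to call even after full consumption
	if err := ss.Err(); err != nil {
		return err
	}
}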

app/vmctl/thanos/stats.go Normal file
View File

@@ -0,0 +1,38 @@
package thanos
import (
"fmt"
"time"
)
// Stats represents data migration stats for Thanos blocks.
type Stats struct {
Filtered bool
MinTime int64
MaxTime int64
Samples uint64
Series uint64
Blocks int
SkippedBlocks int
}
// String returns string representation for s.
func (s Stats) String() string {
str := fmt.Sprintf("Thanos snapshot stats:\n"+
" blocks found: %d;\n"+
" blocks skipped by time filter: %d;\n"+
" min time: %d (%v);\n"+
" max time: %d (%v);\n"+
" samples: %d;\n"+
" series: %d.",
s.Blocks, s.SkippedBlocks,
s.MinTime, time.Unix(s.MinTime/1e3, 0).Format(time.RFC3339),
s.MaxTime, time.Unix(s.MaxTime/1e3, 0).Format(time.RFC3339),
s.Samples, s.Series)
if s.Filtered {
str += "\n* Stats numbers are based on blocks meta info and don't account for applied filters."
}
return str
}
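For reference, String produces output along these lines (illustrative numbers; timestamps rendered assuming a UTC local timezone, since time.Unix uses local time):

Thanos snapshot stats:
 blocks found: 6;
 blocks skipped by time filter: 2;
 min time: 1700000000000 (2023-11-14T22:13:20Z);
 max time: 1700600000000 (2023-11-21T20:53:20Z);
 samples: 1234567;
 series: 890.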

View File

@@ -0,0 +1,309 @@
package main
import (
"context"
"fmt"
"log"
"strings"
"sync"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/barpool"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/thanos"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
)
type thanosProcessor struct {
cl *thanos.Client
im *vm.Importer
cc int
isVerbose bool
aggrTypes []thanos.AggrType
}
func (tp *thanosProcessor) run(ctx context.Context) error {
if len(tp.aggrTypes) == 0 {
tp.aggrTypes = thanos.AllAggrTypes
}
log.Printf("Processing blocks with aggregate types: %v", tp.aggrTypes)
// Use the first aggregate type to explore blocks (block list is the same for all types)
blocks, err := tp.cl.Explore(tp.aggrTypes[0])
if err != nil {
return fmt.Errorf("explore failed: %s", err)
}
if len(blocks) < 1 {
return fmt.Errorf("found no blocks to import")
}
// Separate blocks into raw (resolution=0) and downsampled (resolution>0)
var rawBlocks, downsampledBlocks []thanos.BlockInfo
for _, block := range blocks {
if block.Resolution == thanos.ResolutionRaw {
rawBlocks = append(rawBlocks, block)
} else {
downsampledBlocks = append(downsampledBlocks, block)
}
}
log.Printf("Found %d raw blocks and %d downsampled blocks", len(rawBlocks), len(downsampledBlocks))
question := fmt.Sprintf("Found %d blocks to import (%d raw + %d downsampled with %d aggregate types). Continue?",
len(blocks), len(rawBlocks), len(downsampledBlocks), len(tp.aggrTypes))
if !prompt(ctx, question) {
return nil
}
// Calculate total number of block processing passes for the progress bar:
// raw blocks are processed once, downsampled blocks are processed once per aggregate type.
totalPasses := len(rawBlocks) + len(downsampledBlocks)*len(tp.aggrTypes)
thanosBlocksTotal.Add(totalPasses)
bar := barpool.AddWithTemplate(fmt.Sprintf(barTpl, "Processing blocks"), totalPasses)
if err := barpool.Start(); err != nil {
return err
}
defer barpool.Stop()
tp.im.ResetStats()
type phaseStats struct {
name string
series uint64
samples uint64
}
var phases []phaseStats
// Process raw blocks first (no aggregate suffix)
if len(rawBlocks) > 0 {
log.Println("Processing raw blocks (resolution=0)...")
stats, err := tp.processBlocks(rawBlocks, thanos.AggrTypeNone, bar)
if err != nil {
return fmt.Errorf("migration failed for raw blocks: %s", err)
}
phases = append(phases, phaseStats{
name: "raw",
series: stats.series,
samples: stats.samples,
})
}
// Close blocks from the initial Explore. The querierSeriesSet wrapper
// has already released all querier read references, so Close won't hang.
thanos.CloseBlocks(blocks)
// Process downsampled blocks for each aggregate type.
// Each type needs its own AggrChunkPool, so we reopen blocks per type.
for _, aggrType := range tp.aggrTypes {
if len(downsampledBlocks) < 1 {
break
}
log.Printf("Processing downsampled blocks with aggregate type: %s", aggrType)
aggrBlocks, err := tp.cl.Explore(aggrType)
if err != nil {
return fmt.Errorf("explore failed for aggr type %s: %s", aggrType, err)
}
var downsampledOnly []thanos.BlockInfo
for _, block := range aggrBlocks {
if block.Resolution != thanos.ResolutionRaw {
downsampledOnly = append(downsampledOnly, block)
}
}
if len(downsampledOnly) < 1 {
log.Printf("No downsampled blocks found for aggregate type %s, skipping", aggrType)
thanos.CloseBlocks(aggrBlocks)
continue
}
log.Printf("Processing %d blocks for aggregate type: %s", len(downsampledOnly), aggrType)
stats, err := tp.processBlocks(downsampledOnly, aggrType, bar)
thanos.CloseBlocks(aggrBlocks)
if err != nil {
return fmt.Errorf("migration failed for aggr type %s: %s", aggrType, err)
}
phases = append(phases, phaseStats{
name: aggrType.String(),
series: stats.series,
samples: stats.samples,
})
}
// Print per-phase and total statistics
var totalSeries, totalSamples uint64
log.Printf("Migration statistics (%d raw blocks, %d downsampled blocks):", len(rawBlocks), len(downsampledBlocks))
for _, p := range phases {
log.Printf(" %s: %d series, %d samples", p.name, p.series, p.samples)
totalSeries += p.series
totalSamples += p.samples
}
log.Printf(" total: %d series, %d samples", totalSeries, totalSamples)
// Wait for all buffers to flush
tp.im.Close()
// Drain import errors channel
for vmErr := range tp.im.Errors() {
if vmErr.Err != nil {
thanosErrorsTotal.Inc()
return fmt.Errorf("import process failed: %s", wrapErr(vmErr, tp.isVerbose))
}
}
log.Println("Import finished!")
log.Println(tp.im.Stats())
return nil
}
// processBlocksStats holds statistics collected during block processing.
type processBlocksStats struct {
blocks uint64
series uint64
samples uint64
}
func (tp *thanosProcessor) processBlocks(blocks []thanos.BlockInfo, aggrType thanos.AggrType, bar barpool.Bar) (processBlocksStats, error) {
blockReadersCh := make(chan thanos.BlockInfo)
errCh := make(chan error, tp.cc)
var processedBlocks, totalSeries, totalSamples uint64
var mu sync.Mutex
var wg sync.WaitGroup
for i := range tp.cc {
workerID := i
wg.Go(func() {
for bi := range blockReadersCh {
seriesCount, samplesCount, err := tp.do(bi, aggrType)
if err != nil {
thanosErrorsTotal.Inc()
errCh <- fmt.Errorf("read failed for block %q with aggr %s: %s", bi.Block.Meta().ULID, aggrType, err)
return
}
mu.Lock()
processedBlocks++
totalSeries += seriesCount
totalSamples += samplesCount
log.Printf("[Worker %d] Block %s: %d series, %d samples | Total: %d/%d blocks, %d series, %d samples",
workerID, bi.Block.Meta().ULID.String()[:8], seriesCount, samplesCount,
processedBlocks, len(blocks), totalSeries, totalSamples)
mu.Unlock()
thanosBlocksProcessed.Inc()
bar.Increment()
}
})
}
// any error breaks the import
for _, bi := range blocks {
select {
case thanosErr := <-errCh:
close(blockReadersCh)
wg.Wait()
return processBlocksStats{}, fmt.Errorf("thanos error: %s", thanosErr)
case vmErr := <-tp.im.Errors():
close(blockReadersCh)
wg.Wait()
thanosErrorsTotal.Inc()
return processBlocksStats{}, fmt.Errorf("import process failed: %s", wrapErr(vmErr, tp.isVerbose))
case blockReadersCh <- bi:
}
}
close(blockReadersCh)
wg.Wait()
close(errCh)
for err := range errCh {
return processBlocksStats{}, fmt.Errorf("import process failed: %s", err)
}
return processBlocksStats{
blocks: processedBlocks,
series: totalSeries,
samples: totalSamples,
}, nil
}
func (tp *thanosProcessor) do(bi thanos.BlockInfo, aggrType thanos.AggrType) (uint64, uint64, error) {
ss, err := tp.cl.Read(bi)
if err != nil {
return 0, 0, fmt.Errorf("failed to read block: %s", err)
}
defer ss.Close() // Ensure querier is closed even on early returns
var it chunkenc.Iterator
var seriesCount, samplesCount uint64
for ss.Next() {
var name string
var labelPairs []vm.LabelPair
series := ss.At()
series.Labels().Range(func(label labels.Label) {
if label.Name == "__name__" {
name = label.Value
return
}
labelPairs = append(labelPairs, vm.LabelPair{
Name: strings.Clone(label.Name),
Value: strings.Clone(label.Value),
})
})
if name == "" {
return seriesCount, samplesCount, fmt.Errorf("failed to find `__name__` label in labelset for block %v", bi.Block.Meta().ULID)
}
// Add resolution and aggregate type suffix to metric name for downsampled blocks
if bi.Resolution != thanos.ResolutionRaw && aggrType != thanos.AggrTypeNone {
name = fmt.Sprintf("%s:%s:%s", name, bi.Resolution.String(), aggrType.String())
}
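// E.g. a series named "http_requests_total" read from a 5m-resolution block
// with AggrSum becomes "http_requests_total:5m:sum" (illustrative name).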
var timestamps []int64
var values []float64
it = series.Iterator(it)
for {
typ := it.Next()
if typ == chunkenc.ValNone {
break
}
if typ != chunkenc.ValFloat {
continue
}
t, v := it.At()
timestamps = append(timestamps, t)
values = append(values, v)
}
if err := it.Err(); err != nil {
return seriesCount, samplesCount, err
}
samplesCount += uint64(len(timestamps))
seriesCount++
ts := vm.TimeSeries{
Name: name,
LabelPairs: labelPairs,
Timestamps: timestamps,
Values: values,
}
if err := tp.im.Input(&ts); err != nil {
return seriesCount, samplesCount, err
}
}
return seriesCount, samplesCount, ss.Err()
}
var (
thanosBlocksTotal = metrics.NewCounter(`vmctl_thanos_migration_blocks_total`)
thanosBlocksProcessed = metrics.NewCounter(`vmctl_thanos_migration_blocks_processed`)
thanosErrorsTotal = metrics.NewCounter(`vmctl_thanos_migration_errors_total`)
)

View File

@@ -76,11 +76,11 @@ func (ts *TimeSeries) write(w io.Writer) (int, error) {
pointsCount := len(timestampsBatch)
cw.printf(`},"timestamps":[`)
for i := 0; i < pointsCount-1; i++ {
for i := range pointsCount - 1 {
cw.printf(`%d,`, timestampsBatch[i])
}
cw.printf(`%d],"values":[`, timestampsBatch[pointsCount-1])
for i := 0; i < pointsCount-1; i++ {
for i := range pointsCount - 1 {
cw.printf(`%v,`, valuesBatch[i])
}
cw.printf("%v]}\n", valuesBatch[pointsCount-1])

View File

@@ -262,7 +262,7 @@ func (p *vmNativeProcessor) runBackfilling(ctx context.Context, tenantID string,
errCh := make(chan error, p.cc)
var wg sync.WaitGroup
for i := 0; i < p.cc; i++ {
for range p.cc {
wg.Go(func() {
for f := range filterCh {
if !p.disablePerMetricRequests {

View File

@@ -55,7 +55,7 @@ var (
deduplicator *streamaggr.Deduplicator
)
// CheckStreamAggrConfig checks config pointed by -stramaggr.config
// CheckStreamAggrConfig checks config pointed by -streamaggr.config
func CheckStreamAggrConfig() error {
if *streamAggrConfig == "" {
return nil

View File

@@ -45,15 +45,14 @@ func insertRows(sketches []*datadogsketches.Sketch, extraLabels []prompb.Label)
ms := sketch.ToSummary()
for _, m := range ms {
ctx.Labels = ctx.Labels[:0]
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10557
ctx.AddLabel("host", sketch.Host) // newly added
ctx.AddLabel("", m.Name)
for _, label := range m.Labels {
ctx.AddLabel(label.Name, label.Value)
}
for _, tag := range sketch.Tags {
name, value := datadogutil.SplitTag(tag)
if name == "host" {
name = "exported_host"
}
ctx.AddLabel(name, value)
}
for j := range extraLabels {

View File

@@ -77,7 +77,7 @@ func push(ctx *common.InsertCtx, tss []prompb.TimeSeries) {
r := &ts.Samples[i]
metricNameRaw, err = ctx.WriteDataPointExt(metricNameRaw, ctx.Labels, r.Timestamp, r.Value)
if err != nil {
logger.Errorf("cannot write promscape data to storage: %s", err)
logger.Errorf("cannot write promscrape data to storage: %s", err)
return
}
}

View File

@@ -30,6 +30,7 @@ var (
concurrency = flag.Int("concurrency", 10, "The number of concurrent workers. Higher concurrency may reduce restore duration")
maxBytesPerSecond = flagutil.NewBytes("maxBytesPerSecond", 0, "The maximum download speed. There is no limit if it is set to 0")
skipBackupCompleteCheck = flag.Bool("skipBackupCompleteCheck", false, "Whether to skip checking for 'backup complete' file in -src. This may be useful for restoring from old backups, which were created without 'backup complete' file")
SkipPreallocation = flag.Bool("skipFilePreallocation", false, "Whether to skip file pre-allocation. This will likely be slower in most cases, but allows restores to resume mid-file on failure")
)
func main() {
@@ -63,6 +64,7 @@ func main() {
Src: srcFS,
Dst: dstFS,
SkipBackupCompleteCheck: *skipBackupCompleteCheck,
SkipPreallocation: *SkipPreallocation,
}
pushmetrics.Init()
if err := a.Run(ctx); err != nil {

View File

@@ -142,7 +142,7 @@ type aggrStatePercentile struct {
func newAggrStatePercentile(pointsLen int, n float64) aggrState {
hs := make([]*histogram.Fast, pointsLen)
for i := 0; i < pointsLen; i++ {
for i := range pointsLen {
hs[i] = histogram.NewFast()
}
return &aggrStatePercentile{

View File

@@ -9,6 +9,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
@@ -49,7 +50,7 @@ func (ec *evalConfig) newTimestamps(step int64) []int64 {
pointsLen := ec.pointsLen(step)
timestamps := make([]int64, pointsLen)
ts := ec.startTime
for i := 0; i < pointsLen; i++ {
for i := range pointsLen {
timestamps[i] = ts
ts += step
}
@@ -196,12 +197,17 @@ func newNextSeriesForSearchQuery(ec *evalConfig, sq *storage.SearchQuery, expr g
pathExpression: safePathExpression(expr),
}
s.summarize(aggrAvg, ec.startTime, ec.endTime, ec.storageStep, 0)
t := timerpool.Get(30 * time.Second)
// A negative or zero duration will cause timer.C to return immediately
remainingTimeout := ec.deadline.Deadline() - fasttime.UnixTimestamp()
t := timerpool.Get(time.Duration(remainingTimeout) * time.Second)
defer timerpool.Put(t)
select {
case seriesCh <- s:
case <-t.C:
logger.Errorf("resource leak when processing the %s (full query: %s); please report this error to VictoriaMetrics developers",
logger.Errorf("reached timeout when processing the %s (full query: %s), it can be due to the amount of storageNodes configured in vmselect is more than vmselects available CPU count "+
"or vmselect is heavy loaded. Consider adding resources or increasing `-search.maxQueryDuration` or `timeout` parameter in the query.",
expr.AppendString(nil), ec.originalQuery)
}
return nil

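The fix above derives the timer from the remaining query deadline instead of a fixed 30 seconds; as the added comment notes, a zero or negative duration makes the timer fire immediately, so an already-expired deadline cannot block the select. A minimal standalone sketch of that property (hypothetical values, not the repository's timerpool):

package main

import (
	"fmt"
	"time"
)

func main() {
	deadline := time.Now().Add(-1 * time.Second) // a deadline that already passed
	remaining := time.Until(deadline)            // negative duration

	// time.NewTimer fires immediately for non-positive durations,
	// so receiving from t.C here returns right away instead of hanging.
	t := time.NewTimer(remaining)
	defer t.Stop()

	<-t.C
	fmt.Println("timer fired immediately for an expired deadline")
}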
View File

@@ -25,7 +25,7 @@ func naturalLess(a, b string) bool {
}
func getNonNumPrefix(s string) (prefix string, tail string) {
for i := 0; i < len(s); i++ {
for i := range len(s) {
ch := s[i]
if ch >= '0' && ch <= '9' {
return s[:i], s[i:]

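getNonNumPrefix splits a string at its first decimal digit; it is the building block for natural ordering, where digit runs compare by numeric value instead of byte-wise. A standalone sketch of the idea, covering only the simple "prefix + number" shape and assuming the usual natural-sort semantics rather than the repository's exact implementation:

package main

import (
	"fmt"
	"sort"
	"strconv"
	"unicode"
)

// naturalLessSketch compares two strings, treating a trailing digit run
// as a number. It handles only the "prefix + number" shape for illustration.
func naturalLessSketch(a, b string) bool {
	numStart := func(s string) int {
		for i, r := range s {
			if unicode.IsDigit(r) {
				return i
			}
		}
		return len(s)
	}
	ia, ib := numStart(a), numStart(b)
	if a[:ia] != b[:ib] {
		return a[:ia] < b[:ib]
	}
	na, _ := strconv.Atoi(a[ia:])
	nb, _ := strconv.Atoi(b[ib:])
	return na < nb
}

func main() {
	items := []string{"item10", "item2", "item1"}
	sort.Slice(items, func(i, j int) bool { return naturalLessSketch(items[i], items[j]) })
	fmt.Println(items) // [item1 item2 item10], not the byte-wise [item1 item10 item2]
}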
View File

@@ -82,7 +82,7 @@ func RenderHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
if s := r.FormValue("maxDataPoints"); len(s) > 0 {
n, err := strconv.ParseFloat(s, 64)
if err != nil {
return fmt.Errorf("cannot parse maxDataPoints=%q: %w", maxDataPoints, err)
return fmt.Errorf("cannot parse maxDataPoints=%d: %w", maxDataPoints, err)
}
if n <= 0 {
return fmt.Errorf("maxDataPoints must be greater than 0; got %f", n)
@@ -209,7 +209,7 @@ func parseInterval(s string) (int64, error) {
s = strings.TrimSpace(s)
prefix := s
var suffix string
for i := 0; i < len(s); i++ {
for i := range len(s) {
ch := s[i]
if ch != '-' && ch != '+' && ch != '.' && (ch < '0' || ch > '9') {
prefix = s[:i]

View File

@@ -1228,7 +1228,7 @@ func transformDelay(ec *evalConfig, fe *graphiteql.FuncExpr) (nextSeriesFunc, er
stepsLocal = len(values)
}
copy(values[stepsLocal:], values[:len(values)-stepsLocal])
for i := 0; i < stepsLocal; i++ {
for i := range stepsLocal {
values[i] = nan
}
}
@@ -1740,7 +1740,7 @@ func transformGroup(ec *evalConfig, fe *graphiteql.FuncExpr) (nextSeriesFunc, er
func groupSeriesLists(ec *evalConfig, args []*graphiteql.ArgExpr, expr graphiteql.Expr) (nextSeriesFunc, error) {
var nextSeriess []nextSeriesFunc
for i := 0; i < len(args); i++ {
for i := range args {
nextSeries, err := evalSeriesList(ec, args, "seriesList", i)
if err != nil {
for _, f := range nextSeriess {
@@ -3233,7 +3233,7 @@ func transformSeriesByTag(ec *evalConfig, fe *graphiteql.FuncExpr) (nextSeriesFu
return nil, fmt.Errorf("at least one tagExpression must be passed to seriesByTag")
}
var tagExpressions []string
for i := 0; i < len(args); i++ {
for i := range args {
te, err := getString(args, "tagExpressions", i)
if err != nil {
return nil, err
@@ -3633,7 +3633,7 @@ var graphiteToGolangRe = regexp.MustCompile(`\\(\d+)`)
func getNodes(args []*graphiteql.ArgExpr) ([]graphiteql.Expr, error) {
var nodes []graphiteql.Expr
for i := 0; i < len(args); i++ {
for i := range args {
expr := args[i].Expr
switch expr.(type) {
case *graphiteql.NumberExpr, *graphiteql.StringExpr:
@@ -4052,7 +4052,7 @@ func formatPathsFromSeriesExpressions(seriesExpressions []string, sortPaths bool
func newNaNSeries(ec *evalConfig, step int64) *series {
values := make([]float64, ec.pointsLen(step))
for i := 0; i < len(values); i++ {
for i := range values {
values[i] = nan
}
return &series{
@@ -5244,7 +5244,7 @@ func transformLinearRegression(ec *evalConfig, fe *graphiteql.FuncExpr) (nextSer
func linearRegressionForSeries(ec *evalConfig, fe *graphiteql.FuncExpr, ss, sourceSeries []*series) (nextSeriesFunc, error) {
var resp []*series
for i := 0; i < len(ss); i++ {
for i := range ss {
source := sourceSeries[i]
s := ss[i]
s.Tags["linearRegressions"] = fmt.Sprintf("%d, %d", ec.startTime/1e3, ec.endTime/1e3)
@@ -5258,7 +5258,7 @@ func linearRegressionForSeries(ec *evalConfig, fe *graphiteql.FuncExpr, ss, sour
continue
}
values := s.Values
for j := 0; j < len(values); j++ {
for j := range values {
values[j] = offset + (float64(int(s.Timestamps[0])+j*int(s.step)))*factor
}
resp = append(resp, s)
@@ -5370,7 +5370,7 @@ func holtWinterConfidenceBands(ec *evalConfig, fe *graphiteql.FuncExpr, args []*
valuesLen := len(forecastValues)
upperBand := make([]float64, 0, valuesLen)
lowerBand := make([]float64, 0, valuesLen)
for i := 0; i < valuesLen; i++ {
for i := range valuesLen {
forecastItem := forecastValues[i]
deviationItem := deviationValues[i]
if math.IsNaN(forecastItem) || math.IsNaN(deviationItem) {
@@ -5464,7 +5464,7 @@ func transformHoltWintersAberration(ec *evalConfig, fe *graphiteql.FuncExpr) (ne
return nil, fmt.Errorf("bug, len mismatch for series: %d and upperBand values: %d or lowerBand values: %d", len(values), len(upperBand), len(lowerBand))
}
aberration := make([]float64, 0, len(values))
for i := 0; i < len(values); i++ {
for i := range values {
v := values[i]
upperValue := upperBand[i]
lowerValue := lowerBand[i]

View File

@@ -280,7 +280,7 @@ func isMetricExprChar(ch byte) bool {
}
func appendEscapedIdent(dst []byte, s string) []byte {
for i := 0; i < len(s); i++ {
for i := range len(s) {
ch := s[i]
if isIdentChar(ch) || isMetricExprChar(ch) {
if i == 0 && !isFirstIdentChar(ch) {

View File

@@ -321,19 +321,23 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
case "/tags/tagSeries":
graphiteTagsTagSeriesRequests.Inc()
if err := graphite.TagsTagSeriesHandler(startTime, w, r); err != nil {
graphiteTagsTagSeriesErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
err := &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("graphite tag registration has been disabled and is planned to be removed in future. " +
"See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10544"),
StatusCode: http.StatusNotImplemented,
}
graphiteTagsTagSeriesErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
case "/tags/tagMultiSeries":
graphiteTagsTagMultiSeriesRequests.Inc()
if err := graphite.TagsTagMultiSeriesHandler(startTime, w, r); err != nil {
graphiteTagsTagMultiSeriesErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
err := &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("graphite tag registration has been disabled and is planned to be removed in future. " +
"See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10544"),
StatusCode: http.StatusNotImplemented,
}
graphiteTagsTagMultiSeriesErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
case "/tags":
graphiteTagsRequests.Inc()
@@ -739,6 +743,26 @@ func proxyVMAlertRequests(w http.ResponseWriter, r *http.Request, path string) {
req := r.Clone(r.Context())
req.URL.Path = strings.TrimPrefix(path, "prometheus")
req.Host = vmalertProxyHost
if strings.HasPrefix(r.Header.Get(`User-Agent`), `Grafana`) {
// Grafana currently supports only Prometheus-style alerts. If other alert types
// (e.g. logs or traces) are returned, it may fail with "Error loading alerts".
//
// Grafana queries the vmalert API directly, bypassing the VictoriaMetrics datasource,
// so query params (such as datasource_type) cannot be enforced on the Grafana side.
//
// To ensure compatibility, we detect Grafana requests via the User-Agent and enforce
// `datasource_type=prometheus`.
//
// See:
// - https://github.com/VictoriaMetrics/victoriametrics-datasource/issues/329#issuecomment-3847585443
// - https://github.com/VictoriaMetrics/victoriametrics-datasource/issues/59
q := req.URL.Query()
q.Set("datasource_type", "prometheus")
req.URL.RawQuery = q.Encode()
req.RequestURI = ""
}
vmalertProxy.ServeHTTP(w, req)
}

View File

@@ -5,6 +5,8 @@ import (
"errors"
"flag"
"fmt"
"math"
"slices"
"sort"
"sync"
"sync/atomic"
@@ -20,6 +22,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
)
@@ -490,10 +493,7 @@ func (pts *packedTimeseries) unpackTo(dst []*sortBlock, tbf *tmpBlocksFile, tr s
}
// Prepare worker channels.
workers := min(len(upws), gomaxprocs)
if workers < 1 {
workers = 1
}
workers := max(min(len(upws), gomaxprocs), 1)
itemsPerWorker := (len(upws) + workers - 1) / workers
workChs := make([]chan *unpackWork, workers)
for i := range workChs {
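The workers calculation above collapses the old if-guard into the min/max builtins available since Go 1.21. A minimal standalone illustration of the clamping behavior:

package main

import "fmt"

func main() {
	upws, gomaxprocs := 0, 8
	// Clamp to [1, gomaxprocs]: never more workers than CPUs or work items,
	// and never fewer than one even when there is no work.
	workers := max(min(upws, gomaxprocs), 1)
	fmt.Println(workers) // 1
}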
@@ -578,6 +578,7 @@ func mergeSortBlocks(dst *Result, sbh *sortBlocksHeap, dedupInterval int64) {
return
}
heap.Init(sbh)
var dedupSamples int
for {
sbs := sbh.sbs
top := sbs[0]
@@ -593,6 +594,7 @@ func mergeSortBlocks(dst *Result, sbh *sortBlocksHeap, dedupInterval int64) {
if n := equalSamplesPrefix(top, sbNext); n > 0 && dedupInterval > 0 {
// Skip n replicated samples at top if deduplication is enabled.
top.NextIdx = topNextIdx + n
dedupSamples += n
} else {
// Copy samples from top to dst with timestamps not exceeding tsNext.
top.NextIdx = topNextIdx + binarySearchTimestamps(top.Timestamps[topNextIdx:], tsNext)
@@ -607,8 +609,8 @@ func mergeSortBlocks(dst *Result, sbh *sortBlocksHeap, dedupInterval int64) {
}
}
timestamps, values := storage.DeduplicateSamples(dst.Timestamps, dst.Values, dedupInterval)
dedups := len(dst.Timestamps) - len(timestamps)
dedupsDuringSelect.Add(dedups)
dedupSamples += len(dst.Timestamps) - len(timestamps)
dedupsDuringSelect.Add(dedupSamples)
dst.Timestamps = timestamps
dst.Values = values
}
@@ -634,7 +636,7 @@ func equalTimestampsPrefix(a, b []int64) int {
func equalValuesPrefix(a, b []float64) int {
for i, v := range a {
if i >= len(b) || v != b[i] {
if i >= len(b) || math.Float64bits(v) != math.Float64bits(b[i]) {
return i
}
}
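equalValuesPrefix now compares floats by bit pattern because `v != b[i]` treats every NaN as unequal to itself, so replicated samples carrying Prometheus staleness markers (a special NaN payload) would never be recognized as duplicates. A minimal standalone sketch of the difference:

package main

import (
	"fmt"
	"math"
)

func main() {
	nan := math.NaN()

	// Ordinary comparison: NaN != NaN, so identical stale samples look different.
	fmt.Println(nan == nan) // false

	// Bit-pattern comparison: the same NaN payload compares equal.
	fmt.Println(math.Float64bits(nan) == math.Float64bits(nan)) // true

	// Conversely, +0 and -0 are equal under == but differ bitwise.
	posZero, negZero := math.Copysign(0, 1), math.Copysign(0, -1)
	fmt.Println(posZero == negZero)                                     // true
	fmt.Println(math.Float64bits(posZero) == math.Float64bits(negZero)) // false
}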
@@ -829,12 +831,7 @@ func GraphiteTags(qt *querytracer.Tracer, filter string, limit int, deadline sea
}
func hasString(a []string, s string) bool {
for _, x := range a {
if x == s {
return true
}
}
return false
return slices.Contains(a, s)
}
// LabelValues returns label values matching the given labelName and sq until the given deadline.
@@ -1366,7 +1363,7 @@ func applyGraphiteRegexpFilter(filter string, ss []string) ([]string, error) {
const maxFastAllocBlockSize = 32 * 1024
// GetMetricNamesStats returns statistic for timeseries metric names usage.
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (storage.MetricNamesStatsResponse, error) {
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (metricnamestats.StatsResult, error) {
qt = qt.NewChild("get metric names usage statistics with limit: %d, less or equal to: %d, match pattern=%q", limit, le, matchPattern)
defer qt.Done()
return vmstorage.GetMetricNamesStats(qt, limit, le, matchPattern)

View File

@@ -1,8 +1,11 @@
package netstorage
import (
"math"
"reflect"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
)
func TestMergeSortBlocks(t *testing.T) {
@@ -194,3 +197,111 @@ func TestMergeSortBlocks(t *testing.T) {
Values: []float64{7, 24, 26},
})
}
func TestEqualSamplesPrefix(t *testing.T) {
f := func(a, b *sortBlock, expected int) {
t.Helper()
actual := equalSamplesPrefix(a, b)
if actual != expected {
t.Fatalf("unexpected result: got %d, want %d", actual, expected)
}
}
// Empty blocks
f(&sortBlock{}, &sortBlock{}, 0)
// Identical blocks
f(&sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, 6, 7, 8},
}, &sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, 6, 7, 8},
}, 4)
// Non-zero NextIdx
f(&sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, 6, 7, 8},
NextIdx: 2,
}, &sortBlock{
Timestamps: []int64{10, 20, 3, 4},
Values: []float64{50, 60, 7, 8},
NextIdx: 2,
}, 2)
// Non-zero NextIdx with mismatch
f(&sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, 6, 7, 8},
NextIdx: 1,
}, &sortBlock{
Timestamps: []int64{10, 2, 3, 4},
Values: []float64{50, 6, 7, 80},
NextIdx: 1,
}, 2)
// Different lengths
f(&sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, 6, 7, 8},
}, &sortBlock{
Timestamps: []int64{1, 2, 3},
Values: []float64{5, 6, 7},
}, 3)
// Timestamps diverge
f(&sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, 6, 7, 8},
}, &sortBlock{
Timestamps: []int64{1, 2, 30, 4},
Values: []float64{5, 6, 7, 8},
}, 2)
// Values diverge
f(&sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, 6, 7, 8},
}, &sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, 60, 7, 8},
}, 1)
// Zero matches
f(&sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, 6, 7, 8},
}, &sortBlock{
Timestamps: []int64{5, 6, 7, 8},
Values: []float64{1, 2, 3, 4},
}, 0)
// Compare staleness markers, matching
f(&sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, decimal.StaleNaN, 7, 8},
}, &sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{5, decimal.StaleNaN, 7, 8},
}, 4)
// Special float values: +Inf, -Inf, 0, -0
f(&sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{math.Inf(1), math.Inf(-1), math.Copysign(0, +1), math.Copysign(0, -1)},
}, &sortBlock{
Timestamps: []int64{1, 2, 3, 4},
Values: []float64{math.Inf(1), math.Inf(-1), math.Copysign(0, +1), math.Copysign(0, -1)},
}, 4)
// Positive zero vs negative zero (bitwise different)
f(&sortBlock{
Timestamps: []int64{1, 2},
Values: []float64{5, math.Copysign(0, +1)},
}, &sortBlock{
Timestamps: []int64{1, 2},
Values: []float64{5, math.Copysign(0, -1)},
}, 1)
}

View File

@@ -10,14 +10,14 @@ func BenchmarkMergeSortBlocks(b *testing.B) {
b.Run(fmt.Sprintf("replicationFactor-%d", replicationFactor), func(b *testing.B) {
const samplesPerBlock = 8192
var blocks []*sortBlock
for j := 0; j < 10; j++ {
for j := range 10 {
timestamps := make([]int64, samplesPerBlock)
values := make([]float64, samplesPerBlock)
for i := range timestamps {
timestamps[i] = int64(j*samplesPerBlock + i)
values[i] = float64(j*samplesPerBlock + i)
}
for i := 0; i < replicationFactor; i++ {
for range replicationFactor {
blocks = append(blocks, &sortBlock{
Timestamps: timestamps,
Values: values,
@@ -30,7 +30,7 @@ func BenchmarkMergeSortBlocks(b *testing.B) {
b.Run("overlapped-blocks-bestcase", func(b *testing.B) {
const samplesPerBlock = 8192
var blocks []*sortBlock
for j := 0; j < 10; j++ {
for j := range 10 {
timestamps := make([]int64, samplesPerBlock)
values := make([]float64, samplesPerBlock)
for i := range timestamps {
@@ -45,7 +45,7 @@ func BenchmarkMergeSortBlocks(b *testing.B) {
for j := 1; j < len(blocks); j++ {
prev := blocks[j-1].Timestamps
curr := blocks[j].Timestamps
for i := 0; i < samplesPerBlock/2; i++ {
for i := range samplesPerBlock / 2 {
prev[i+samplesPerBlock/2], curr[i] = curr[i], prev[i+samplesPerBlock/2]
}
}
@@ -54,7 +54,7 @@ func BenchmarkMergeSortBlocks(b *testing.B) {
b.Run("overlapped-blocks-worstcase", func(b *testing.B) {
const samplesPerBlock = 8192
var blocks []*sortBlock
for j := 0; j < 5; j++ {
for j := range 5 {
timestamps := make([]int64, samplesPerBlock)
values := make([]float64, samplesPerBlock)
for i := range timestamps {

View File

@@ -11,6 +11,16 @@
{% stripspace %}
{% func ExportCSVHeader(fieldNames []string) %}
{% if len(fieldNames) == 0 %}{% return %}{% endif %}
{%s= fieldNames[0] %}
{% for _, fieldName := range fieldNames[1:] %}
,
{%s= fieldName %}
{% endfor %}
{% newline %}
{% endfunc %}
{% func ExportCSVLine(xb *exportBlock, fieldNames []string) %}
{% if len(xb.timestamps) == 0 || len(fieldNames) == 0 %}{% return %}{% endif %}
{% for i, timestamp := range xb.timestamps %}

File diff suppressed because it is too large

View File

@@ -0,0 +1,132 @@
package prometheus
import (
"strings"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
func TestExportCSVHeader(t *testing.T) {
f := func(fieldNames []string, expected string) {
t.Helper()
got := ExportCSVHeader(fieldNames)
if got != expected {
t.Fatalf("ExportCSVHeader(%v): got %q; want %q", fieldNames, got, expected)
}
}
f(nil, "")
f([]string{}, "")
f([]string{"__value__"}, "__value__\n")
f([]string{"__timestamp__"}, "__timestamp__\n")
f([]string{"__timestamp__:rfc3339"}, "__timestamp__:rfc3339\n")
f([]string{"__name__"}, "__name__\n")
f([]string{"job"}, "job\n")
f([]string{"__timestamp__:rfc3339", "__value__"}, "__timestamp__:rfc3339,__value__\n")
f([]string{"__value__", "__timestamp__"}, "__value__,__timestamp__\n")
f([]string{"job", "instance"}, "job,instance\n")
f([]string{"__name__", "__value__", "__timestamp__:unix_s"}, "__name__,__value__,__timestamp__:unix_s\n")
f([]string{"job", "instance", "__value__", "__timestamp__:unix_ms"}, "job,instance,__value__,__timestamp__:unix_ms\n")
f([]string{"__timestamp__:custom:2006-01-02", "__value__", "host", "dc", "env"},
"__timestamp__:custom:2006-01-02,__value__,host,dc,env\n")
// duplicate fields
f([]string{"__value__", "__value__"}, "__value__,__value__\n")
f([]string{"__timestamp__", "__timestamp__:rfc3339"}, "__timestamp__,__timestamp__:rfc3339\n")
}
func TestExportCSVLine(t *testing.T) {
localBak := time.Local
time.Local = time.UTC
defer func() { time.Local = localBak }()
f := func(mn *storage.MetricName, timestamps []int64, values []float64, fieldNames []string, expected string) {
t.Helper()
xb := &exportBlock{
mn: mn,
timestamps: timestamps,
values: values,
}
got := ExportCSVLine(xb, fieldNames)
if got != expected {
t.Fatalf("ExportCSVLine: got %q; want %q", got, expected)
}
}
mn := &storage.MetricName{
MetricGroup: []byte("cpu_usage"),
Tags: []storage.Tag{
{Key: []byte("job"), Value: []byte("node")},
{Key: []byte("instance"), Value: []byte("localhost:9090")},
},
}
// empty inputs
f(mn, nil, nil, []string{"__value__"}, "")
f(mn, []int64{}, []float64{}, []string{"__value__"}, "")
f(mn, []int64{1000}, []float64{1.5}, nil, "")
f(mn, []int64{1000}, []float64{1.5}, []string{}, "")
f(mn, []int64{1000}, []float64{42.5}, []string{"__value__"}, "42.5\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__"}, "1704067200000\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:unix_s"}, "1704067200\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:unix_ms"}, "1704067200000\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:unix_ns"}, "1704067200000000000\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:rfc3339"}, "2024-01-01T00:00:00Z\n")
f(mn, []int64{1000}, []float64{1}, []string{"__name__"}, "cpu_usage\n")
f(mn, []int64{1000}, []float64{1}, []string{"job"}, "node\n")
f(mn, []int64{1000}, []float64{1}, []string{"instance"}, "localhost:9090\n")
f(mn, []int64{1000}, []float64{1}, []string{"missing_label"}, "\n")
// multiple fields
f(mn, []int64{1704067200000}, []float64{99.9},
[]string{"__timestamp__:unix_s", "__value__", "job"},
"1704067200,99.9,node\n")
// multiple rows
f(mn, []int64{1000, 2000}, []float64{1.1, 2.2},
[]string{"__value__", "__timestamp__"},
"1.1,1000\n2.2,2000\n")
f(mn, []int64{1000, 2000, 3000}, []float64{10, 20, 30},
[]string{"__timestamp__:unix_s", "__value__"},
"1,10\n2,20\n3,30\n")
// escaping for special characters in tag values
f(&storage.MetricName{
MetricGroup: []byte("m"),
Tags: []storage.Tag{{Key: []byte("desc"), Value: []byte("a,b")}},
}, []int64{1000}, []float64{1}, []string{"desc"}, "\"a,b\"\n")
f(&storage.MetricName{
MetricGroup: []byte("m"),
Tags: []storage.Tag{{Key: []byte("desc"), Value: []byte(`say "hello"`)}},
}, []int64{1000}, []float64{1}, []string{"desc"}, "\"say \\\"hello\\\"\"\n")
f(&storage.MetricName{
MetricGroup: []byte("m"),
Tags: []storage.Tag{{Key: []byte("desc"), Value: []byte("line1\nline2")}},
}, []int64{1000}, []float64{1}, []string{"desc"}, "\"line1\\nline2\"\n")
// header and data line field counts must match
fieldNames := []string{"__name__", "job", "instance", "__value__", "__timestamp__:unix_s"}
header := ExportCSVHeader(fieldNames)
line := ExportCSVLine(&exportBlock{
mn: mn,
timestamps: []int64{1704067200000},
values: []float64{99.9},
}, fieldNames)
headerCommas := strings.Count(header, ",")
lineCommas := strings.Count(line, ",")
if headerCommas != lineCommas {
t.Fatalf("header has %d commas, data line has %d commas", headerCommas, lineCommas)
}
if headerCommas != len(fieldNames)-1 {
t.Fatalf("expected %d commas in header, got %d", len(fieldNames)-1, headerCommas)
}
}

View File

@@ -6,11 +6,13 @@ import (
"math"
"net/http"
"runtime"
"slices"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"unicode/utf8"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/metricsql"
@@ -173,6 +175,7 @@ func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Reques
w.Header().Set("Content-Type", "text/csv; charset=utf-8")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteExportCSVHeader(bw, fieldNames)
sw := newScalableWriter(bw)
writeCSVLine := func(xb *exportBlock, workerID uint) error {
if len(xb.timestamps) == 0 {
@@ -527,6 +530,14 @@ func LabelValuesHandler(qt *querytracer.Tracer, startTime time.Time, labelName s
return err
}
sq := storage.NewSearchQuery(cp.start, cp.end, cp.filterss, *maxLabelsAPISeries)
if strings.HasPrefix(labelName, "U__") {
// This label seems to be Unicode-encoded according to the Prometheus spec.
// See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values
// Spec: https://github.com/prometheus/proposals/blob/main/proposals/0028-utf8.md
labelName = unescapePrometheusLabelName(labelName)
}
labelValues, err := netstorage.LabelValues(qt, labelName, sq, limit, cp.deadline)
if err != nil {
return fmt.Errorf("cannot obtain values for label %q: %w", labelName, err)
@@ -1004,14 +1015,7 @@ func removeEmptyValuesAndTimeseries(tss []netstorage.Result) []netstorage.Result
dst := tss[:0]
for i := range tss {
ts := &tss[i]
hasNaNs := false
for _, v := range ts.Values {
if math.IsNaN(v) {
hasNaNs = true
break
}
}
if !hasNaNs {
if !slices.ContainsFunc(ts.Values, math.IsNaN) {
// Fast path: nothing to remove.
if len(ts.Values) > 0 {
dst = append(dst, *ts)
@@ -1336,3 +1340,70 @@ func calculateMaxUniqueTimeSeriesForResource(maxConcurrentRequests, remainingMem
func GetMaxUniqueTimeSeries() int {
return maxUniqueTimeseriesValue
}
// copied from https://github.com/prometheus/common/blob/adea6285c1c7447fcb7bfdeb6abfc6eff893e0a7/model/metric.go#L483
// a direct import is not used because it would increase the binary size
func unescapePrometheusLabelName(name string) string {
// lower function taken from strconv.atoi.
lower := func(c byte) byte {
return c | ('x' - 'X')
}
if len(name) == 0 {
return name
}
escapedName, found := strings.CutPrefix(name, "U__")
if !found {
return name
}
var unescaped strings.Builder
TOP:
for i := 0; i < len(escapedName); i++ {
// All non-underscores are treated normally.
if escapedName[i] != '_' {
unescaped.WriteByte(escapedName[i])
continue
}
i++
if i >= len(escapedName) {
return name
}
// A double underscore is a single underscore.
if escapedName[i] == '_' {
unescaped.WriteByte('_')
continue
}
// We think we are in a UTF-8 code, process it.
var utf8Val uint
for j := 0; i < len(escapedName); j++ {
// This is too many characters for a utf8 value based on the MaxRune
// value of '\U0010FFFF'.
if j >= 6 {
return name
}
// Found a closing underscore, convert to a rune, check validity, and append.
if escapedName[i] == '_' {
utf8Rune := rune(utf8Val)
if !utf8.ValidRune(utf8Rune) {
return name
}
unescaped.WriteRune(utf8Rune)
continue TOP
}
r := lower(escapedName[i])
utf8Val *= 16
switch {
case r >= '0' && r <= '9':
utf8Val += uint(r) - '0'
case r >= 'a' && r <= 'f':
utf8Val += uint(r) - 'a' + 10
default:
return name
}
i++
}
// Didn't find closing underscore, invalid.
return name
}
return unescaped.String()
}

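Following the decoding rules above: a "U__" prefix marks an escaped name, "__" decodes to a literal underscore, "_<hex>_" decodes to the rune with that code point, and any malformed escape returns the input unchanged. A small test-style sketch of the expected behavior (cases derived from reading the code above, not taken from the repository's tests):

package prometheus

import "testing"

func TestUnescapePrometheusLabelNameSketch(t *testing.T) {
	f := func(in, want string) {
		t.Helper()
		if got := unescapePrometheusLabelName(in); got != want {
			t.Fatalf("unescapePrometheusLabelName(%q) = %q; want %q", in, got, want)
		}
	}
	f("U__foo_2e_bar", "foo.bar")       // _2e_ is hex 0x2e, i.e. '.'
	f("U__label__name", "label_name")   // a double underscore is a literal '_'
	f("plain_label", "plain_label")     // no U__ prefix: returned unchanged
	f("U__foo_zz_bar", "U__foo_zz_bar") // invalid hex digits: input returned as-is
}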
View File

@@ -1,6 +1,7 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
) %}
{% stripspace %}
@@ -34,9 +35,9 @@ TSDBStatusResponse generates response for /api/v1/status/tsdb .
]
{% endfunc %}
{% func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) %}
{% func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) %}
{% code
queryStatsByMetricName := make(map[string]storage.MetricNamesStatsRecord,len(queryStats))
queryStatsByMetricName := make(map[string]metricnamestats.StatRecord,len(queryStats))
for _, record := range queryStats{
queryStatsByMetricName[record.MetricName] = record
}

View File

@@ -8,228 +8,229 @@ package prometheus
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
)
// TSDBStatusResponse generates response for /api/v1/status/tsdb .
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
func StreamTSDBStatusResponse(qw422016 *qt422016.Writer, status *storage.TSDBStatus, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
qw422016.N().S(`{"status":"success","data":{"totalSeries":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:12
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
qw422016.N().DUL(status.TotalSeries)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:12
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
qw422016.N().S(`,"totalLabelValuePairs":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
qw422016.N().DUL(status.TotalLabelValuePairs)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
qw422016.N().S(`,"seriesCountByMetricName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
streamtsdbStatusMetricNameEntries(qw422016, status.SeriesCountByMetricName, status.SeriesQueryStatsByMetricName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
qw422016.N().S(`,"seriesCountByLabelName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
streamtsdbStatusEntries(qw422016, status.SeriesCountByLabelName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qw422016.N().S(`,"seriesCountByFocusLabelValue":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
streamtsdbStatusEntries(qw422016, status.SeriesCountByFocusLabelValue)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
qw422016.N().S(`,"seriesCountByLabelValuePair":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
streamtsdbStatusEntries(qw422016, status.SeriesCountByLabelValuePair)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
qw422016.N().S(`,"labelValueCountByLabelName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
//line app/vmselect/prometheus/tsdb_status_response.qtpl:19
streamtsdbStatusEntries(qw422016, status.LabelValueCountByLabelName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
//line app/vmselect/prometheus/tsdb_status_response.qtpl:19
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:20
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
qt.Done()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
//line app/vmselect/prometheus/tsdb_status_response.qtpl:22
streamdumpQueryTrace(qw422016, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
//line app/vmselect/prometheus/tsdb_status_response.qtpl:22
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
func WriteTSDBStatusResponse(qq422016 qtio422016.Writer, status *storage.TSDBStatus, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
StreamTSDBStatusResponse(qw422016, status, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
func TSDBStatusResponse(status *storage.TSDBStatus, qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
WriteTSDBStatusResponse(qb422016, status, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
func streamtsdbStatusEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
qw422016.N().S(`[`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:27
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
for i, e := range a {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:27
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
qw422016.N().S(`{"name":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:29
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
qw422016.N().Q(e.Name)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:29
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
qw422016.N().S(`,"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
//line app/vmselect/prometheus/tsdb_status_response.qtpl:31
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
//line app/vmselect/prometheus/tsdb_status_response.qtpl:31
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:32
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
if i+1 < len(a) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:32
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:32
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
//line app/vmselect/prometheus/tsdb_status_response.qtpl:34
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
//line app/vmselect/prometheus/tsdb_status_response.qtpl:34
qw422016.N().S(`]`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
func writetsdbStatusEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
streamtsdbStatusEntries(qw422016, a)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
func tsdbStatusEntries(a []storage.TopHeapEntry) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
writetsdbStatusEntries(qb422016, a)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:37
func streamtsdbStatusMetricNameEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:39
queryStatsByMetricName := make(map[string]storage.MetricNamesStatsRecord, len(queryStats))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:38
func streamtsdbStatusMetricNameEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:40
queryStatsByMetricName := make(map[string]metricnamestats.StatRecord, len(queryStats))
for _, record := range queryStats {
queryStatsByMetricName[record.MetricName] = record
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:43
//line app/vmselect/prometheus/tsdb_status_response.qtpl:44
qw422016.N().S(`[`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:45
//line app/vmselect/prometheus/tsdb_status_response.qtpl:46
for i, e := range a {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:45
//line app/vmselect/prometheus/tsdb_status_response.qtpl:46
qw422016.N().S(`{`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:48
//line app/vmselect/prometheus/tsdb_status_response.qtpl:49
entry, ok := queryStatsByMetricName[e.Name]
//line app/vmselect/prometheus/tsdb_status_response.qtpl:49
//line app/vmselect/prometheus/tsdb_status_response.qtpl:50
qw422016.N().S(`"name":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:50
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
qw422016.N().Q(e.Name)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:50
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
if !ok {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
qw422016.N().S(`"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:52
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:53
} else {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:53
if !ok {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:52
qw422016.N().S(`"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:54
//line app/vmselect/prometheus/tsdb_status_response.qtpl:53
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:54
} else {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:54
qw422016.N().S(`"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().S(`,"requestsCount":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().D(int(entry.RequestsCount))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().S(`,"lastRequestTimestamp":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:56
qw422016.N().D(int(entry.RequestsCount))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:56
qw422016.N().S(`,"lastRequestTimestamp":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:57
qw422016.N().D(int(entry.LastRequestTs))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:57
//line app/vmselect/prometheus/tsdb_status_response.qtpl:58
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:57
//line app/vmselect/prometheus/tsdb_status_response.qtpl:58
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:59
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
if i+1 < len(a) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:59
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:59
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
//line app/vmselect/prometheus/tsdb_status_response.qtpl:61
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
//line app/vmselect/prometheus/tsdb_status_response.qtpl:61
qw422016.N().S(`]`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
func writetsdbStatusMetricNameEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
func writetsdbStatusMetricNameEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
streamtsdbStatusMetricNameEntries(qw422016, a, queryStats)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
writetsdbStatusMetricNameEntries(qb422016, a, queryStats)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
}

View File

@@ -742,7 +742,7 @@ func getRangeTopKTimeseries(tss []*timeseries, modifier *metricsql.ModifierExpr,
func reverseSeries(tss []*timeseries) {
j := len(tss)
for i := 0; i < len(tss)/2; i++ {
for i := range len(tss) / 2 {
j--
tss[i], tss[j] = tss[j], tss[i]
}
@@ -983,7 +983,7 @@ func getPerPointIQRBounds(tss []*timeseries) ([]float64, []float64) {
var qs []float64
lower := make([]float64, pointsLen)
upper := make([]float64, pointsLen)
for i := 0; i < pointsLen; i++ {
for i := range pointsLen {
values = values[:0]
for _, ts := range tss {
v := ts.Values[i]

View File

@@ -53,7 +53,7 @@ func TestIncrementalAggr(t *testing.T) {
Values: valuesExpected,
}}
// run the test multiple times to make sure there are no side effects on concurrency
for i := 0; i < 10; i++ {
for i := range 10 {
iafc := newIncrementalAggrFuncContext(ae, callbacks)
tssSrcCopy := copyTimeseries(tssSrc)
if err := testIncrementalParallelAggr(iafc, tssSrcCopy, tssExpected); err != nil {

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"math"
"regexp"
"slices"
"sort"
"strings"
"sync"
@@ -1165,6 +1166,61 @@ func evalInstantRollup(qt *querytracer.Tracer, ec *EvalConfig, funcName string,
},
}
return evalExpr(qt, ec, be)
// the cached rate result could be inaccurate in edge cases, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10098
case "rate":
if iafc != nil {
if !strings.EqualFold(iafc.ae.Name, "sum") {
qt.Printf("do not apply instant rollup optimization for incremental aggregate %s()", iafc.ae.Name)
return evalAt(qt, timestamp, window)
}
qt.Printf("optimized calculation for sum(rate(m[d])) as (sum(increase(m[d])) / d)")
afe := expr.(*metricsql.AggrFuncExpr)
fe := afe.Args[0].(*metricsql.FuncExpr)
feIncrease := *fe
feIncrease.Name = "increase"
// copy RollupExpr to drop possible offset,
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9762
newArg := copyRollupExpr(fe.Args[0].(*metricsql.RollupExpr))
newArg.Offset = nil
feIncrease.Args = []metricsql.Expr{newArg}
d := newArg.Window.Duration(ec.Step)
if d == 0 {
d = ec.Step
}
afeIncrease := *afe
afeIncrease.Args = []metricsql.Expr{&feIncrease}
be := &metricsql.BinaryOpExpr{
Op: "/",
KeepMetricNames: true,
Left: &afeIncrease,
Right: &metricsql.NumberExpr{
N: float64(d) / 1000,
},
}
return evalExpr(qt, ec, be)
}
qt.Printf("optimized calculation for instant rollup rate(m[d]) as (increase(m[d]) / d)")
fe := expr.(*metricsql.FuncExpr)
feIncrease := *fe
feIncrease.Name = "increase"
// copy RollupExpr to drop possible offset,
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9762
newArg := copyRollupExpr(fe.Args[0].(*metricsql.RollupExpr))
newArg.Offset = nil
feIncrease.Args = []metricsql.Expr{newArg}
d := newArg.Window.Duration(ec.Step)
if d == 0 {
d = ec.Step
}
be := &metricsql.BinaryOpExpr{
Op: "/",
KeepMetricNames: fe.KeepMetricNames,
Left: &feIncrease,
Right: &metricsql.NumberExpr{
N: float64(d) / 1000,
},
}
return evalExpr(qt, ec, be)
case "max_over_time":
if iafc != nil {
if !strings.EqualFold(iafc.ae.Name, "max") {
@@ -1935,14 +1991,7 @@ func dropStaleNaNs(funcName string, values []float64, timestamps []int64) ([]flo
return values, timestamps
}
// Remove Prometheus staleness marks, so non-default rollup functions don't hit NaN values.
hasStaleSamples := false
for _, v := range values {
if decimal.IsStaleNaN(v) {
hasStaleSamples = true
break
}
}
if !hasStaleSamples {
if !slices.ContainsFunc(values, decimal.IsStaleNaN) {
// Fast path: values have no Prometheus staleness marks.
return values, timestamps
}

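The new "rate" case above rests on a simple identity: over a lookbehind window d, rate is increase divided by the window length, so the rollup can be rewritten to reuse the cached increase result. Expressed as MetricsQL rewrites (with d falling back to the query step when the window is not set, matching the `d = ec.Step` branch, and `float64(d) / 1000` in the code converting milliseconds to seconds):

rate(m[d])       ->  increase(m[d]) / d
sum(rate(m[d]))  ->  sum(increase(m[d])) / d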
View File

@@ -313,7 +313,7 @@ func escapeDots(s string) string {
return s
}
result := make([]byte, 0, len(s)+2*dotsCount)
for i := 0; i < len(s); i++ {
for i := range len(s) {
if s[i] == '.' && (i == 0 || s[i-1] != '\\') && (i+1 == len(s) || i+1 < len(s) && s[i+1] != '*' && s[i+1] != '+' && s[i+1] != '{') {
// Escape a dot if the following conditions are met:
// - if it isn't escaped already, i.e. if there is no `\` char before the dot.

View File

@@ -67,7 +67,7 @@ func TestExecSuccess(t *testing.T) {
Deadline: searchutil.NewDeadline(time.Now(), time.Minute, ""),
RoundDigits: 100,
}
for i := 0; i < 5; i++ {
for range 5 {
result, err := Exec(nil, ec, q, false)
if err != nil {
t.Fatalf(`unexpected error when executing %q: %s`, q, err)
@@ -4018,6 +4018,12 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{}
f(q, resultExpected)
})
t.Run(`histogram_fraction(scalar)`, func(t *testing.T) {
t.Parallel()
q := `histogram_fraction(123, 456, time())`
resultExpected := []netstorage.Result{}
f(q, resultExpected)
})
t.Run(`histogram_quantile(single-value-no-le)`, func(t *testing.T) {
t.Parallel()
q := `histogram_quantile(0.6, label_set(100, "foo", "bar"))`
@@ -4030,6 +4036,12 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{}
f(q, resultExpected)
})
t.Run(`histogram_fraction(single-value-no-le)`, func(t *testing.T) {
t.Parallel()
q := `histogram_fraction(123,456, label_set(100, "foo", "bar"))`
resultExpected := []netstorage.Result{}
f(q, resultExpected)
})
t.Run(`histogram_quantile(single-value-invalid-le)`, func(t *testing.T) {
t.Parallel()
q := `histogram_quantile(0.6, label_set(100, "le", "foobar"))`
@@ -4042,6 +4054,12 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{}
f(q, resultExpected)
})
t.Run(`histogram_fraction(single-value-invalid-le)`, func(t *testing.T) {
t.Parallel()
q := `histogram_fraction(50, 60, label_set(100, "le", "foobar"))`
resultExpected := []netstorage.Result{}
f(q, resultExpected)
})
t.Run(`histogram_quantile(single-value-inf-le)`, func(t *testing.T) {
t.Parallel()
q := `histogram_quantile(0.6, label_set(100, "le", "+Inf"))`
@@ -4183,6 +4201,28 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_fraction(single-value-valid-le)`, func(t *testing.T) {
t.Parallel()
q := `histogram_fraction(0, 100, label_set(100, "le", "200"))`
r := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{0.5, 0.5, 0.5, 0.5, 0.5, 0.5},
Timestamps: timestampsExpected,
}
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_fraction(single-value-valid-le)`, func(t *testing.T) {
t.Parallel()
q := `histogram_fraction(200, 300, label_set(100, "le", "200"))`
r := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{0, 0, 0, 0, 0, 0},
Timestamps: timestampsExpected,
}
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_quantile(single-value-valid-le, boundsLabel)`, func(t *testing.T) {
t.Parallel()
q := `sort(histogram_quantile(0.6, label_set(100, "le", "200"), "foobar"))`
@@ -4212,7 +4252,7 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{r1, r2, r3}
f(q, resultExpected)
})
t.Run(`histogram_quantile(single-value-valid-le, boundsLabel)`, func(t *testing.T) {
t.Run(`histogram_share(single-value-valid-le, boundsLabel)`, func(t *testing.T) {
t.Parallel()
q := `sort(histogram_share(120, label_set(100, "le", "200"), "foobar"))`
r1 := netstorage.Result{
@@ -4311,7 +4351,37 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_share(single-value-valid-le-mid-le)`, func(t *testing.T) {
t.Run(`histogram_fraction(single-value-valid-le-max-le)`, func(t *testing.T) {
t.Parallel()
q := `histogram_fraction(0,100, (
label_set(100, "le", "100"),
label_set(40, "le", "50"),
label_set(0, "le", "10"),
))`
r := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1, 1, 1, 1, 1, 1},
Timestamps: timestampsExpected,
}
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_fraction(single-value-valid-le-min-le)`, func(t *testing.T) {
t.Parallel()
q := `histogram_fraction(0,10, (
label_set(100, "le", "100"),
label_set(40, "le", "50"),
label_set(0, "le", "10"),
))`
r := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{0, 0, 0, 0, 0, 0},
Timestamps: timestampsExpected,
}
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_share(single-value-valid-le-mid-le-1)`, func(t *testing.T) {
t.Parallel()
q := `histogram_share(105, (
label_set(100, "le", "200"),
@@ -4325,6 +4395,34 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_share(single-value-valid-le-mid-le-2)`, func(t *testing.T) {
t.Parallel()
q := `histogram_share(55, (
label_set(100, "le", "200"),
label_set(0, "le", "55"),
))`
r := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{0, 0, 0, 0, 0, 0},
Timestamps: timestampsExpected,
}
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_fraction(single-value-valid-le-mid-le)`, func(t *testing.T) {
t.Parallel()
q := `histogram_fraction(55,105, (
label_set(100, "le", "200"),
label_set(0, "le", "55"),
))`
r := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{0.3448275862068966, 0.3448275862068966, 0.3448275862068966, 0.3448275862068966, 0.3448275862068966, 0.3448275862068966},
Timestamps: timestampsExpected,
}
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_quantile(single-value-valid-le-min-phi-no-zero-bucket)`, func(t *testing.T) {
t.Parallel()
q := `histogram_quantile(0, label_set(100, "le", "200"))`
@@ -4358,6 +4456,17 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_fraction(scalar-phi)`, func(t *testing.T) {
t.Parallel()
q := `histogram_fraction(25, time() / 8, label_set(100, "le", "200"))`
r := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{0.5, 0.625, 0.75, 0.875, 0.875, 0.875},
Timestamps: timestampsExpected,
}
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_quantile(duplicate-le)`, func(t *testing.T) {
// See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3225
t.Parallel()
@@ -4439,6 +4548,36 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{r1, r2}
f(q, resultExpected)
})
t.Run(`histogram_fraction(valid)`, func(t *testing.T) {
t.Parallel()
q := `sort(histogram_fraction(0, 25,
label_set(90, "foo", "bar", "le", "10")
or label_set(100, "foo", "bar", "le", "30")
or label_set(300, "foo", "bar", "le", "+Inf")
or label_set(200, "tag", "xx", "le", "10")
or label_set(300, "tag", "xx", "le", "30")
))`
r1 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{0.325, 0.325, 0.325, 0.325, 0.325, 0.325},
Timestamps: timestampsExpected,
}
r1.MetricName.Tags = []storage.Tag{{
Key: []byte("foo"),
Value: []byte("bar"),
}}
r2 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{0.9166666666666666, 0.9166666666666666, 0.9166666666666666, 0.9166666666666666, 0.9166666666666666, 0.9166666666666666},
Timestamps: timestampsExpected,
}
r2.MetricName.Tags = []storage.Tag{{
Key: []byte("tag"),
Value: []byte("xx"),
}}
resultExpected := []netstorage.Result{r1, r2}
f(q, resultExpected)
})
t.Run(`histogram_quantile(negative-bucket-count)`, func(t *testing.T) {
t.Parallel()
q := `histogram_quantile(0.6,
@@ -4555,6 +4694,25 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_fraction(normal-bucket-count)`, func(t *testing.T) {
t.Parallel()
q := `histogram_fraction(22,35,
label_set(0, "foo", "bar", "le", "10")
or label_set(100, "foo", "bar", "le", "30")
or label_set(300, "foo", "bar", "le", "+Inf")
)`
r := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{0.1333333333333333, 0.1333333333333333, 0.1333333333333333, 0.1333333333333333, 0.1333333333333333, 0.1333333333333333},
Timestamps: timestampsExpected,
}
r.MetricName.Tags = []storage.Tag{{
Key: []byte("foo"),
Value: []byte("bar"),
}}
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`histogram_quantile(normal-bucket-count, boundsLabel)`, func(t *testing.T) {
t.Parallel()
q := `sort(histogram_quantile(0.2,
@@ -9827,7 +9985,7 @@ func TestExecError(t *testing.T) {
Deadline: searchutil.NewDeadline(time.Now(), time.Minute, ""),
RoundDigits: 100,
}
for i := 0; i < 4; i++ {
for range 4 {
rv, err := Exec(nil, ec, q, false)
if err == nil {
t.Fatalf(`expecting non-nil error on %q`, q)

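The histogram_fraction mid-le expectation above is a direct linear-interpolation check. With cumulative buckets le=55 holding 0 samples and le=200 holding 100, all 100 samples sit in the (55, 200] bucket, and the share of [55, 105] is the portion of that bucket's width it covers (assuming the usual linear interpolation inside a bucket):

histogram_fraction(55, 105) = (105 - 55) / (200 - 55) * (100 / 100)
                            = 50 / 145
                            ≈ 0.3448275862068966

which is exactly the expected value in the test.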
View File

@@ -55,7 +55,7 @@ type parseCache struct {
func newParseCache() *parseCache {
pc := new(parseCache)
for i := 0; i < parseBucketCount; i++ {
for i := range parseBucketCount {
pc.buckets[i] = newParseBucket()
}
return pc
@@ -75,7 +75,7 @@ func (pc *parseCache) get(q string) *parseCacheValue {
func (pc *parseCache) requests() uint64 {
var n uint64
for i := 0; i < parseBucketCount; i++ {
for i := range parseBucketCount {
n += pc.buckets[i].requests.Load()
}
return n
@@ -83,7 +83,7 @@ func (pc *parseCache) requests() uint64 {
func (pc *parseCache) misses() uint64 {
var n uint64
for i := 0; i < parseBucketCount; i++ {
for i := range parseBucketCount {
n += pc.buckets[i].misses.Load()
}
return n
@@ -91,7 +91,7 @@ func (pc *parseCache) misses() uint64 {
func (pc *parseCache) len() uint64 {
var n uint64
for i := 0; i < parseBucketCount; i++ {
for i := range parseBucketCount {
n += pc.buckets[i].len()
}
return n

View File

@@ -17,7 +17,7 @@ func testGetParseCacheValue(q string) *parseCacheValue {
func testGenerateQueries(items int) []string {
queries := make([]string, items)
for i := 0; i < items; i++ {
for i := range items {
queries[i] = fmt.Sprintf(`node_time_seconds{instance="node%d", job="job%d"}`, i, i)
}
return queries
@@ -102,7 +102,7 @@ func TestParseCacheBucketOverflow(t *testing.T) {
v := testGetParseCacheValue(queries[0])
// Fill bucket
for i := 0; i < parseBucketMaxLen; i++ {
for i := range parseBucketMaxLen {
b.put(queries[i], v)
}
expectedLen = uint64(parseBucketMaxLen)

Some files were not shown because too many files have changed in this diff