Compare commits

...

196 Commits

Author SHA1 Message Date
f41gh7
5e9324673e docs/changelog: cut release v1.143.0
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-05-08 13:46:00 +02:00
f41gh7
9c5ac6b05f docs: update version to v1.143.0
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-05-08 13:45:15 +02:00
f41gh7
563c311e6c make vmui-update 2026-05-08 13:38:32 +02:00
f41gh7
205428984d vendor: update github.com/prometheus/prometheus 2026-05-08 13:38:18 +02:00
Nikolay
87e59a4bbf app/vmselect/searchutil: prioritize URL query params over form values
When a request contains both URL query params and POST form values
for extra_label and extra_filters[], the URL query params now take
precedence. This resolves the conflict between the two sources and
simplifies security enforcement for extra_label/extra_filters policies
via vmauth or any other HTTP proxy.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10908
2026-05-08 10:11:58 +02:00
Max Kotliar
64f6c7e300 docs/integrations: add available_from placeholder for native histogram feature
Follow up on
76e0bcdf45
2026-05-08 10:57:05 +03:00
f41gh7
27f81ebf1d deployment/docker: update Go builder from Go1.26.2 to Go1.26.3
See https://github.com/golang/go/issues?q=milestone%3AGo1.26.3%20label%3ACherryPickApproved
2026-05-08 09:28:43 +02:00
JAYICE
696c1aa3e8 lib/fs: introduce new metric for Filesystem type name
This commit introduces a new metric that exposes the filesystem type for the provided path.

For example:
```
vm_fs_info{path="/vmstorage-data", fs_type="xfs"}
```

The path must be registered with the new method `fs.RegisterPathFsMetrics`.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10482
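A minimal sketch of how such a metric line can be rendered, assuming the filesystem type was detected via statfs(2); the magic numbers below are from linux/magic.h, and `fsInfoMetric` is an illustrative helper, not the actual lib/fs implementation:

```go
package main

import "fmt"

// fsTypeName maps common statfs(2) f_type magic numbers to filesystem
// names; values are taken from linux/magic.h.
var fsTypeName = map[int64]string{
	0xef53:     "ext4",
	0x58465342: "xfs",
	0x9123683e: "btrfs",
	0x01021994: "tmpfs",
}

// fsInfoMetric renders a vm_fs_info-style line for a path whose f_type
// magic was obtained elsewhere (e.g. via statfs(2) on Linux).
func fsInfoMetric(path string, magic int64) string {
	name := fsTypeName[magic]
	if name == "" {
		name = "unknown"
	}
	return fmt.Sprintf(`vm_fs_info{path=%q, fs_type=%q} 1`, path, name)
}

func main() {
	fmt.Println(fsInfoMetric("/vmstorage-data", 0x58465342))
	// prints: vm_fs_info{path="/vmstorage-data", fs_type="xfs"} 1
}
```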
2026-05-08 09:17:03 +02:00
Max Kotliar
2d79f2b455 docs/changelog: fix order - first features, then bugs.
For some reason bugs were listed first.
2026-05-07 21:16:53 +03:00
Kirill Yurkov
1d2ec1947b dashboards: Add Kafka (Enterprise) row to vmagent dashboard (#10728)
Add a new `Kafka (Enterprise)` row to both vmagent dashboards:

- `dashboards/vmagent.json`
- `dashboards/vm/vmagent.json`

The row is placed before `Drilldown` and contains three Kafka-specific
panels:

- `Kafka bytes`
- `Kafka messages in/out`
- `Kafka and consumer errors`

The goal is to provide a compact Kafka-focused view for enterprise
vmagent deployments without duplicating the existing generic remote
write panels such as connection saturation and persistent queue size.

The new row helps distinguish:

- producer vs consumer throughput at the Kafka topic level
- message-rate shifts that may indicate smaller Kafka payloads and
higher per-message overhead
- producer-side Kafka errors vs consumer-side Kafka errors

Descriptions include links to the relevant Kafka documentation sections.

PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10728

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-05-07 21:15:21 +03:00
andriibeee
d5e7ecd7b1 app/vmselect: set CORS headers on /api/v1/export endpoints (#10900)
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10899
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10900

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-05-07 21:03:46 +03:00
JAYICE
0c7928b0ff app/vmauth: pick first backend to process request when all backends are unavailable (#10886)
The commit restores the previous behavior where the first backend is still selected and the request is sent to it. This behavior existed before commit 9c36f0931a, but was later changed to return no backends, so vmauth would reject all requests for the next 3s if all backends were unavailable. In some rare cases this led to an increase in error responses.

The commit restores the original behavior, adds comments explaining why it is important, and introduces tests covering the logic.
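The restored fallback can be sketched as below. This is an illustrative sketch of the behavior described above, not vmauth's actual backend-selection code; `backend` and `pickBackend` are hypothetical names:

```go
package main

import "fmt"

type backend struct {
	addr string
	up   bool
}

// pickBackend returns the index of the first available backend. When all
// backends are unavailable it still falls back to the first one instead
// of rejecting the request, so a transiently failing health check doesn't
// turn into a hard error for clients.
func pickBackend(bs []backend) int {
	for i, b := range bs {
		if b.up {
			return i
		}
	}
	return 0 // all unavailable: still try the first backend
}

func main() {
	bs := []backend{{"b0", false}, {"b1", true}}
	fmt.Println(pickBackend(bs)) // 1: first available backend
	bs[1].up = false
	fmt.Println(pickBackend(bs)) // 0: all down, fall back to the first
}
```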

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10837
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10886

---------

Signed-off-by: JAYICE <1185430411@qq.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: Hui Wang <haley@victoriametrics.com>
2026-05-07 20:44:50 +03:00
Hui Wang
76e0bcdf45 lib/prompb: support prometheus native histogram during ingestion
This commit adds support for ingesting Prometheus Native Histogram data (https://prometheus.io/docs/specs/native_histograms) via the Prometheus RemoteWrite format. Native Histograms are converted into the VictoriaMetrics histogram format.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10743
2026-05-07 19:06:51 +02:00
dependabot[bot]
a13bfb3aaa build(deps): bump github/codeql-action from 4.35.1 to 4.35.2 (#10921)
Bumps [github/codeql-action](https://github.com/github/codeql-action)
from 4.35.1 to 4.35.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v4.35.2</h2>
<ul>
<li>The undocumented TRAP cache cleanup feature that could be enabled
using the <code>CODEQL_ACTION_CLEANUP_TRAP_CACHES</code> environment
variable is deprecated and will be removed in May 2026. If you are
affected by this, we recommend disabling TRAP caching by passing the
<code>trap-caching: false</code> input to the <code>init</code> Action.
<a
href="https://redirect.github.com/github/codeql-action/pull/3795">#3795</a></li>
<li>The Git version 2.36.0 requirement for improved incremental analysis
now only applies to repositories that contain submodules. <a
href="https://redirect.github.com/github/codeql-action/pull/3789">#3789</a></li>
<li>Python analysis on GHES no longer extracts the standard library,
relying instead on models of the standard library. This should result in
significantly faster extraction and analysis times, while the effect on
alerts should be minimal. <a
href="https://redirect.github.com/github/codeql-action/pull/3794">#3794</a></li>
<li>Fixed a bug in the validation of OIDC configurations for private
registries that was added in CodeQL Action 4.33.0 / 3.33.0. <a
href="https://redirect.github.com/github/codeql-action/pull/3807">#3807</a></li>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.25.2">2.25.2</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3823">#3823</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's
changelog</a>.</em></p>
<blockquote>
<h2>4.35.2 - 15 Apr 2026</h2>
<ul>
<li>The undocumented TRAP cache cleanup feature that could be enabled
using the <code>CODEQL_ACTION_CLEANUP_TRAP_CACHES</code> environment
variable is deprecated and will be removed in May 2026. If you are
affected by this, we recommend disabling TRAP caching by passing the
<code>trap-caching: false</code> input to the <code>init</code> Action.
<a
href="https://redirect.github.com/github/codeql-action/pull/3795">#3795</a></li>
<li>The Git version 2.36.0 requirement for improved incremental analysis
now only applies to repositories that contain submodules. <a
href="https://redirect.github.com/github/codeql-action/pull/3789">#3789</a></li>
<li>Python analysis on GHES no longer extracts the standard library,
relying instead on models of the standard library. This should result in
significantly faster extraction and analysis times, while the effect on
alerts should be minimal. <a
href="https://redirect.github.com/github/codeql-action/pull/3794">#3794</a></li>
<li>Fixed a bug in the validation of OIDC configurations for private
registries that was added in CodeQL Action 4.33.0 / 3.33.0. <a
href="https://redirect.github.com/github/codeql-action/pull/3807">#3807</a></li>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.25.2">2.25.2</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3823">#3823</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="95e58e9a2c"><code>95e58e9</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3824">#3824</a>
from github/update-v4.35.2-d2e135a73</li>
<li><a
href="6f31bfe060"><code>6f31bfe</code></a>
Update changelog for v4.35.2</li>
<li><a
href="d2e135a73a"><code>d2e135a</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3823">#3823</a>
from github/update-bundle/codeql-bundle-v2.25.2</li>
<li><a
href="60abb65df0"><code>60abb65</code></a>
Add changelog note</li>
<li><a
href="5a0a562209"><code>5a0a562</code></a>
Update default bundle to codeql-bundle-v2.25.2</li>
<li><a
href="65216971a1"><code>6521697</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3820">#3820</a>
from github/dependabot/github_actions/dot-github/wor...</li>
<li><a
href="3c45af2dd2"><code>3c45af2</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3821">#3821</a>
from github/dependabot/npm_and_yarn/npm-minor-345b93...</li>
<li><a
href="f1c339364c"><code>f1c3393</code></a>
Rebuild</li>
<li><a
href="1024fc496c"><code>1024fc4</code></a>
Rebuild</li>
<li><a
href="9dd4cfed96"><code>9dd4cfe</code></a>
Bump the npm-minor group across 1 directory with 6 updates</li>
<li>Additional commits viewable in <a
href="https://github.com/github/codeql-action/compare/v4.35.1...v4.35.2">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github/codeql-action&package-manager=github_actions&previous-version=4.35.1&new-version=4.35.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-05-07 16:26:50 +03:00
dependabot[bot]
08254f5c25 build(deps): bump marked from 18.0.0 to 18.0.2 in /app/vmui/packages/vmui (#10904)
Bumps [marked](https://github.com/markedjs/marked) from 18.0.0 to
18.0.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/markedjs/marked/releases">marked's
releases</a>.</em></p>
<blockquote>
<h2>v18.0.2</h2>
<h2><a
href="https://github.com/markedjs/marked/compare/v18.0.1...v18.0.2">18.0.2</a>
(2026-04-18)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>fix infinite loop for indented code blank line (<a
href="https://redirect.github.com/markedjs/marked/issues/3947">#3947</a>)
(<a
href="58a52e8a49">58a52e8</a>)</li>
</ul>
<h2>v18.0.1</h2>
<h2><a
href="https://github.com/markedjs/marked/compare/v18.0.0...v18.0.1">18.0.1</a>
(2026-04-17)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>rules:</strong> ensure lookbehind regex is evaluated
correctly by minifiers (<a
href="https://redirect.github.com/markedjs/marked/issues/3945">#3945</a>)
(<a
href="abd907aab5">abd907a</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c4f4529d69"><code>c4f4529</code></a>
chore(release): 18.0.2 [skip ci]</li>
<li><a
href="58a52e8a49"><code>58a52e8</code></a>
fix: fix infinite loop for indented code blank line (<a
href="https://redirect.github.com/markedjs/marked/issues/3947">#3947</a>)</li>
<li><a
href="98b38246c0"><code>98b3824</code></a>
chore(release): 18.0.1 [skip ci]</li>
<li><a
href="abd907aab5"><code>abd907a</code></a>
fix(rules): ensure lookbehind regex is evaluated correctly by minifiers
(<a
href="https://redirect.github.com/markedjs/marked/issues/3945">#3945</a>)</li>
<li><a
href="96351c4a22"><code>96351c4</code></a>
chore(deps-dev): bump marked-highlight from 2.2.3 to 2.2.4 (<a
href="https://redirect.github.com/markedjs/marked/issues/3946">#3946</a>)</li>
<li><a
href="c1326994ed"><code>c132699</code></a>
chore: update testutils (<a
href="https://redirect.github.com/markedjs/marked/issues/3942">#3942</a>)</li>
<li>See full diff in <a
href="https://github.com/markedjs/marked/compare/v18.0.0...v18.0.2">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=marked&package-manager=npm_and_yarn&previous-version=18.0.0&new-version=18.0.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-05-07 16:26:06 +03:00
JAYICE
03bad6a270 lib/backup: explicitly use MD5 checksum header in S3 DeleteObjects requests (#1038)
The change improves compatibility with third-party S3 implementations. MD5 had been the default checksum method for a long time, but in v1.73.0 AWS changed it to CRC. Some implementations, such as Dell ECS, do not support CRC.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10907
PR https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/pull/1038

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-05-07 14:51:34 +03:00
Roman Khavronenko
f1cbe7c700 apptest/vmagent: add helper for creating vmagent instance with low flush interval (#10925)
This change introduces a helper `MustStartDefaultRWVmagent` that by
default sets `-remoteWrite.flushInterval=50ms`. This helper makes it
easier to set up RW tests, as all of them rely on frequent flushes.
Instead of overriding the flag in every test, we can use the dedicated
helper.

This helper was added after a newly added RW test became flaky because it
didn't have `-remoteWrite.flushInterval=50ms` set.

---------

Failing test
https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/25446725004/job/74769752869#step:5:71

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-05-07 14:19:14 +03:00
andriibeee
90c9892757 app/vmauth: honor -maxRequestBodySizeToRetry independently of -requestBufferSize (#10882)
This PR makes vmauth honor `-maxRequestBodySizeToRetry` regardless of `-requestBufferSize`. Previously the larger of the two was used, so retries could not be disabled by setting `-maxRequestBodySizeToRetry=0` alone; `-requestBufferSize` had to be set to zero too.
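A sketch of the fixed semantics, assuming the flag now acts as the sole limit; `canRetryBody` is a hypothetical helper, not vmauth's actual code:

```go
package main

import "fmt"

// canRetryBody reports whether a request body of n bytes may be buffered
// for a retry. Before the fix the effective limit was
// max(-maxRequestBodySizeToRetry, -requestBufferSize), so setting
// -maxRequestBodySizeToRetry=0 alone could not disable retries; now the
// flag is honored on its own, so 0 disables retries entirely.
func canRetryBody(n, maxRequestBodySizeToRetry int64) bool {
	return maxRequestBodySizeToRetry > 0 && n <= maxRequestBodySizeToRetry
}

func main() {
	fmt.Println(canRetryBody(1024, 0))    // false: retries disabled
	fmt.Println(canRetryBody(1024, 4096)) // true: within the limit
}
```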

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10857
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10882

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-05-07 13:42:41 +03:00
hagen1778
ee8bb76808 docs/articles: add "Creating Kubernetes debugging AI Agent for VictoriaMetrics"
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-05-06 20:41:27 +02:00
hagen1778
0554c35d45 docs/articles: merge article and video links into one option
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-05-06 20:40:25 +02:00
hagen1778
dd72d3492d docs/articles: update link that was moved from datanami
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-05-06 20:02:57 +02:00
hagen1778
f0a147fdf7 docs/articles: drop dead link
Original link can't be found anywhere else, so dropping it.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-05-06 20:02:36 +02:00
Nikolay
8074d99d1f apptests: add opentelemetry protocol integration tests 2026-05-06 18:06:12 +02:00
f41gh7
8474f15359 lib/httpserver: support multitenancy via headers
This commit adds the possibility to omit the tenantID in the URL path. In this case,
the tenantID is fetched from the HTTP headers `AccountID` and `ProjectID`.
If these headers are also missing, the default `0:0` tenantID is used.

This functionality is enabled only if the -enableMultitenantHandlers
cmd-line flag is set for vminsert, vmselect or vmagent.

Motivation: this change makes VM configuration for multitenancy
consistent with the VL configuration - see
https://docs.victoriametrics.com/victorialogs/#multitenancy - while
keeping backward compatibility at the same time.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4241
2026-05-06 17:49:54 +02:00
Roman Khavronenko
8fa785bb64 docs/vmalert: print templates content in a raw format (#10912)
Before, some of the template examples were wrongly rendered by Hugo.
For example:
```
http://vm-grafana.com/<dashboard-id>?viewPanel=<panel-id>&from={{($activeAt.Add (parseDurationTime \"-1h\")).UnixMilli}}&to={{($activeAt.Add (parseDurationTime \"1h\")).UnixMilli}}
```
was rendered like:
```
http://vm-grafana.com/ ?viewPanel=&from={{($activeAt.Add (parseDurationTime "-1h")).UnixMilli}}&to={{($activeAt.Add (parseDurationTime "1h")).UnixMilli}}
```

Wrapping the examples in backticks helps render them as raw text.
While there, also fixed some of the examples.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-05-06 17:00:39 +02:00
hagen1778
6bddb233f7 docs: rm duplicated article
https://medium.com/airbnb-engineering/building-a-high-volume-metrics-pipeline-with-opentelemetry-and-vmagent-c714d6910b45 was already mentioned before
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-05-06 16:58:15 +02:00
hagen1778
4bb874df1c docs: add link to https://docs.victoriametrics.com/guides/
Mention https://docs.victoriametrics.com/guides/ in the Articles/guides.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-05-06 16:56:59 +02:00
Julius Rickert
099ec5c25a lib/promscrape/hetzner: update hetzner_sd_configs for Hetzner Cloud datacenter → location API change
On 2025-12-16, Hetzner Cloud deprecated the `datacenter` field in their
Servers API and introduced a top-level `location` field carrying the
same data. The `datacenter` field will be removed after 2026-07-01.
Without this change, `__meta_hetzner_hcloud_datacenter_location` and
`__meta_hetzner_hcloud_datacenter_location_network_zone` would silently
become empty for the `hcloud` role after that date.

This mirrors the change made in Prometheus v3.11.0
([prometheus/prometheus#17850](https://github.com/prometheus/prometheus/pull/17850)).

## Changes

**`hcloud` role:**
- Add `HCloudLocation` struct and `Location` field on `HCloudServer`,
mapped to the new top-level `location` API field
- Emit two new canonical labels: `__meta_hetzner_hcloud_location` and
`__meta_hetzner_hcloud_location_network_zone`
- Keep the deprecated `__meta_hetzner_hcloud_datacenter_location` and
`__meta_hetzner_hcloud_datacenter_location_network_zone` labels, now
sourced from the new `location` field so they continue to work past
2026-07-01
- `__meta_hetzner_datacenter` (the datacenter name, e.g. `fsn1-dc14`) is
unaffected for this role — the datacenter name is a distinct concept
from location and is kept as-is (this will stop working starting
2026-07-01)

**`robot` role:**
- Add `__meta_hetzner_robot_datacenter` as the canonical replacement for
`__meta_hetzner_datacenter`; the old label is kept for backward
compatibility

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10909
2026-05-05 17:51:13 +02:00
Max Kotliar
eb459df85e docs/changelog: add update note about bug in vminsert 2026-04-30 21:07:28 +03:00
Max Kotliar
ebc9d49e50 docs: forward port LTS v1.136.8 changelog to upstream
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-30 20:50:48 +03:00
f41gh7
b2a6fba673 docs/changelog: mention vminsert enterprise bugfix
In v1.142.0 a bug was introduced when changes from the OSS version were
back-ported into the Enterprise branch. It changed the order of storage
node discovery and resulted in:
* overwriting of discovered storage nodes
* duplicated per-storage-node metrics

This bug affects only the enterprise vminsert version.
2026-04-30 17:13:40 +02:00
Roman Khavronenko
6100b8ba10 docs/vmalert: mention -rule.stripFilePath in #security (#10902)
Mention the -rule.stripFilePath cmd-line flag in security recommendations,
so users can be aware of it.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Haley Wang <haley@victoriametrics.com>
2026-04-29 20:01:25 +02:00
Roman Khavronenko
403d32f57f docs: mention AI observability (#10903)
The change adds `AI observability` section to `AI tools` documentation.
It mentions excellent @Amper articles describing these integrations in
all details.

The doc change doesn't repeat the articles, but rather helps users to
discover them.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-29 20:00:49 +02:00
Mathias Palmersheim
ed8ebb8314 docs/vmalert: clarified urls for tenant option (#10898)
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10897 by clarifying which URLs should be used for `-datasource.url`, `-remoteRead.url`, and `-remoteWrite.url` when `-clusterMode` is specified.


PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10898

---------

Co-authored-by: Haley Wang <haley@victoriametrics.com>
2026-04-29 12:18:08 +03:00
Hui Wang
55c8bb26db docs: polish stream aggregation doc (#10896)
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10896

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-29 12:12:31 +03:00
Max Kotliar
129358f9ea docs: update release guidance doc (#10887)
Leave only generic details about the release process in public docs.

To maintainers: 
All internal details are described in
https://github.com/VictoriaMetrics/release/blob/main/README.md. The new
document contains up-to-date release process guidance. Please refer to
it instead while preparing a new release.

An archived version of this document is available at:
https://github.com/VictoriaMetrics/release/blob/main/legacy_docs/Release-Guide.md.
2026-04-29 12:04:59 +03:00
Hui Wang
5d5e5b3e44 app/vmalert: add -rule.stripFilePath flag
The flag already exists in the ENT version. We decided to expose it in
OSS and strip the path from all public places, including all
APIs (including `/metrics`) and debug logs (where it is minor info).

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5625
2026-04-29 10:12:11 +02:00
andriibeee
88882227f7 app/vmalert: add formatTime template function
This commit adds the `formatTime` template function to vmalert. It accepts a format string and a timestamp:

{{ now | formatTime "2006-01-02T15:04:05Z07:00" }}


Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10624
2026-04-29 10:09:54 +02:00
Nikolay
64e43e59a7 lib/httpserver: suppress TCP health check for tls connections
Previously, if the `-tls` flag was provided, VictoriaMetrics components
produced the following error log entry during health checks:

 http: TLS handshake error from 10.244.0.1:46556: EOF

Such health checks are common for many orchestration systems, such as
Consul or Kubernetes, and the default HTTP server already suppresses such
EOF errors for plain connections.

This commit adds the same suppression to the TLS server.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10538
2026-04-29 09:59:57 +02:00
Max Kotliar
200a764d32 docs: add links to telegram channels (#10894)
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10894
2026-04-28 19:22:59 +03:00
Pablo (Tomas) Fernandez
b29ad9e6ce docs: update guide "Collecting OpenShift logs with Victoria Logs" (#10864)
# What Changed

- Updated the operator installation procedure
- Updated the commands to match the rest of the guides
- Updated screenshots
- Reordered steps to make more sense of the process
- Fixed issues in the YAML
- Tested on actual OpenShift trial instance running on AWS
- Added steps to confirm log ingestion using VMUI

PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10864
2026-04-28 16:59:30 +03:00
Pablo (Tomas) Fernandez
00c0c149da docs: fix links in docs; refine security page (#10874)
This PR fixes several broken links and anchors in the victoriametrics
docs.

Note about link changes in the FAQ.md file. The links inside the paragraph
break navigation in the right-side menu. To fix this, an explicit anchor
definition has been added. The anchor is the same as before; setting it
explicitly fixes the sidebar links.

See https://github.com/VictoriaMetrics/vmdocs/issues/221 for the
up-to-date list once this PR is merged.

PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10874
2026-04-28 16:58:09 +03:00
Max Kotliar
542ea4788e app/vmalert: fix typo in comment 2026-04-28 16:38:49 +03:00
Max Kotliar
124bdbd383 docs: Replace waiting_for_release with completed label in CONTRIBUTING.md 2026-04-28 16:37:23 +03:00
Max Kotliar
1b3e549833 docs/changelog: cleanup CHANGELOG_2025.md 2026-04-28 16:31:46 +03:00
Max Kotliar
c37b78f366 docs: bump version to v1.142.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-28 14:05:59 +03:00
Max Kotliar
017bfc094d deployment/docker: bump version to v1.142.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-28 14:05:02 +03:00
Max Kotliar
411ec81619 docs: forward port LTS v1.136.7 changelog to upstream
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-28 14:02:21 +03:00
Max Kotliar
64ccd2ed44 docs/changelog: cut release v1.142.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-28 12:55:51 +03:00
Nikolay
89c0b1c1aa lib/opentelemetry: properly reset metric metadata
Previously, metricMetadata was not properly reset while parsing
metrics. This could result in the `Unit` suffix from a previously
parsed metric being added to the next metric that has no `Unit` field.

For example, the metric `http_request` with `Unit` `seconds` is
converted into `http_request_seconds`, and the `Unit` field holds
`seconds`. The next parsed metric `cpu_usage_ratio` has no `Unit`, yet it
would inherit the previous `seconds` `Unit` -> `cpu_usage_ratio_seconds`.

This commit adds a metricMetadata reset call before parsing the next
metric.

The bug was introduced in 293d80910c

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10889
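The bug and the fix can be sketched as below. The `metricMetadata` shape and `suffixedName` helper are illustrative, not the actual lib/opentelemetry types:

```go
package main

import "fmt"

type metricMetadata struct {
	Name string
	Unit string
}

// reset clears the metadata before the next metric is parsed - the call
// that was missing before this fix.
func (m *metricMetadata) reset() { *m = metricMetadata{} }

// suffixedName appends the Unit as a name suffix when present,
// e.g. http_request + seconds -> http_request_seconds.
func suffixedName(m *metricMetadata) string {
	if m.Unit != "" {
		return m.Name + "_" + m.Unit
	}
	return m.Name
}

func main() {
	var md metricMetadata
	md.Name, md.Unit = "http_request", "seconds"
	fmt.Println(suffixedName(&md)) // http_request_seconds

	// Without reset, the stale "seconds" unit would leak into the next
	// metric, producing cpu_usage_ratio_seconds (the bug).
	md.reset()
	md.Name = "cpu_usage_ratio"
	fmt.Println(suffixedName(&md)) // cpu_usage_ratio
}
```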
2026-04-28 11:17:12 +02:00
Hui Wang
387a54d3c8 dashboards: polish vmauth dashboard (#10884)
See updated dashboard in
https://play-grafana.victoriametrics.com/d/nbuo5Mr4k/victoriametrics-vmauth?orgId=1&from=now-3h&to=now&timezone=browser&var-ds=P4169E866C3094E38&var-job=vmclusterlb-benchmark-vm-cluster-lts&var-instance=$__all&var-user=$__all&var-adhoc=&refresh=30s.

`Stats`:
1. `Users count`: set default value 0;
2. `Uptime`: count vmauth instances per job instead of showing instance
uptime, to be consistent with other dashboards. The actual uptime is not
very useful and is hard to read.

`Overview`:
1. Reorder panels;
2. `Requests rejected rate`: add a `>0` threshold in query.

`Troubleshooting`:
1. Remove unused `Restarts` panel;
2. `Logging rate`: add a `>0` threshold in query;
3. Add `Requests backend error rate` to show underlying backend errors
in addition to request errors.

I don’t see a specific change that needs to be mentioned in the
changelog.
2026-04-27 20:20:40 +03:00
Roman Khavronenko
20928171a8 docs/playgrounds: mention iximiuz playgrounds (#10878)
Iximiuz labs prepared a set of playgrounds for VictoriaMetrics. These
are interactive playgrounds backed by real Linux machines running
VictoriaMetrics software, allowing experimenting and investigating right
in the browser tab.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-27 19:58:04 +03:00
Zakhar Bessarab
ff79527c7f docs/playgrounds: add links to SSO playground (#10877)
Added info about Grafana SSO playground to playgrounds docs.

---------

Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-27 19:43:19 +03:00
Max Kotliar
492419c2e8 docs: update flags with actual v1.141.0 binaries
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-27 14:38:27 +03:00
Max Kotliar
f42c56fc48 docs: bump version to v1.141.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-27 14:36:07 +03:00
Max Kotliar
684f96759f deployment/docker: bump version to v1.141.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-27 14:29:48 +03:00
Max Kotliar
5c3dc0f429 docs: forward port LTS v1.122.21 changelog to upstream
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-27 13:57:54 +03:00
Max Kotliar
ca5bc3a4c4 docs: forward port LTS v1.136.6 changelog to upstream
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-27 13:57:03 +03:00
Max Kotliar
2336c7e72f docs/changelog: fix upgrade alpine version
follow-up for
49a8dd4da6
2026-04-24 21:38:04 +03:00
Max Kotliar
b803a46e7f docs/changelog: cut release v1.141.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-24 19:41:27 +03:00
Max Kotliar
0e845e234f app/vmselect: run make vmui-update
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-24 19:37:52 +03:00
Max Kotliar
49a8dd4da6 deployment/docker: update base Alpine Docker image from 3.23.2 to 3.23.3
See
https://www.alpinelinux.org/posts/Alpine-3.20.10-3.21.7-3.22.4-3.23.4-released.html
2026-04-24 18:22:01 +03:00
Max Kotliar
2609a53e41 docs/changelog: chore 2026-04-24 15:59:34 +03:00
Nikolay
1ca4b3ba3c app/vmagent: properly attach tenant information to metadata (#10865)
Previously, vmagent ignored the tenant ID information obtained from the
`__tenant_id__` label for metrics metadata. This made it impossible to route
metrics metadata to the `/multitenant` endpoints. This commit adds the tenant ID to the metrics metadata.

It also fixes vmagent's multitenant ingestion endpoints. Previously, the tenant info defined there was not properly set on the metadata.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10828
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10865

---------

Signed-off-by: Nikolay <nik@victoriametrics.com>
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-24 14:36:35 +03:00
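The tenant routing described in the commit above can be sketched roughly as follows. This is a minimal illustration, not vmagent's actual implementation: the function name `tenantFromLabels` and the `accountID:projectID` parsing are assumptions for the sketch.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// tenantFromLabels is a hypothetical sketch: extract accountID/projectID
// from the special __tenant_id__ label so metadata can be routed to the
// matching /multitenant endpoint instead of being silently dropped.
func tenantFromLabels(labels map[string]string) (accountID, projectID uint32, ok bool) {
	v, ok := labels["__tenant_id__"]
	if !ok {
		return 0, 0, false
	}
	parts := strings.SplitN(v, ":", 2)
	a, err := strconv.ParseUint(parts[0], 10, 32)
	if err != nil {
		return 0, 0, false
	}
	var p uint64
	if len(parts) == 2 {
		p, err = strconv.ParseUint(parts[1], 10, 32)
		if err != nil {
			return 0, 0, false
		}
	}
	return uint32(a), uint32(p), true
}

func main() {
	a, p, ok := tenantFromLabels(map[string]string{"__tenant_id__": "42:7"})
	fmt.Println(a, p, ok) // 42 7 true
}
```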
Hui Wang
66b9890025 dashboards: add metadata ingestion row rate queries to vmagent&vmcluster dashboards (#10868)
Metadata is enabled by default since v1.137.0, and the metadata volume
can be a big contributor to resource usage and network traffic.

vmagent dashboard:
1. `Troubleshooting` section: rename the `Datapoints rate` panel to `Rows
rate` to include the metadata rate;
2. `Ingestion` section: add the metadata rate to the existing `Rows rate` panel.
(The difference between this panel and the one above is that this panel
only contains data from write requests, while the panel above also
includes scraping.)


vmcluster dashboard:
1. `vminsert` section: add `Rows rate` panel

Didn’t see a good place for it in the vmsingle dashboard, since it
doesn’t have a dedicated insert section, and I don’t want to add it to
`overview` yet.

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10868
2026-04-24 14:07:31 +03:00
Yury Moladau
2e7591d567 app/vmui: improve series color visibility (#10872)
### Describe Your Changes

Improve generated series colors to increase visibility and consistency
across light and dark themes.

Related issue: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10869
PR: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10872

| Before | After |
|---|---|
| <img width="758" height="469" alt="image"
src="https://github.com/user-attachments/assets/dfe879fc-c1ff-4128-923b-24dd0b829421"
/> | <img width="758" height="469" alt="image"
src="https://github.com/user-attachments/assets/7ea6f618-2d6d-43b6-b881-9525a2897ef6"
/> |
| <img width="758" height="469" alt="image"
src="https://github.com/user-attachments/assets/ab07e223-5ab5-43dc-8c3f-7ab28d4ab2b6"
/> | <img width="758" height="469" alt="image"
src="https://github.com/user-attachments/assets/988d19b6-ca16-4ca6-af8a-e043cfb066d3"
/> |

---------

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2026-04-24 13:36:26 +03:00
hagen1778
ca8d9d21a9 docs: mention accuracy issues for histogram aggregation
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-24 10:31:17 +02:00
hagen1778
0653b7c7b8 docs: mention histogram aggregation link
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-24 10:23:24 +02:00
Roman Khavronenko
569197d038 docs: update stream aggregation docs (#10871)
* add a visual mermaid diagram to demonstrate the aggregation concept;
* update Recording-rules-alternative:
* * recommend using rate_sum instead of total for better reliability
* * demonstrate how to calculate a sliding window, typical for recording
rules

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Pablo Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-24 10:20:29 +02:00
Max Kotliar
5f357e6a94 docs/changelog: chore update notes
force every update note to be on a new line
2026-04-23 20:36:42 +03:00
Artem Fetishev
c317e95ab8 lib/storage: support samples with future timestamps (#10718)
Add the support of storage and retrieval of samples with future
timestamps as requested in https://github.com/VictoriaMetrics/VictoriaMetrics/issues/827

What to expect:

- By default, the max future timestamp is still limited to `now+2d`. To
change it, set the `-futureRetention` flag in `vmstorage`. The max flag
value is currently limited to `100y`. It can be extended if we see a
demand for this, but it can't be more than `~ 290y` due to how the time
duration is implemented in Go. The flag value can't be less than `2d`.
- Downsampling and retention filters (available in the enterprise edition)
are currently not supported for future timestamps.
- If `vmstorage` restarts with a smaller value of `-futureRetention`
flag, any future partitions that are outside the new future retention
will be automatically deleted.
- Data ingestion, data retrieval, backup/restore, timeseries (soft)
deletion, and other operations work with future timestamps the same way
as with the historical timestamps.
- In the cluster version, the affected binaries are `vmstorage` and
`vmselect`. This means that the `vmselect` version must match the `vmstorage`
version if you want to query future timestamps. `vminsert` was not
affected, so it can run an older version.
- If you downgrade `vmstorage`, the data with future timestamps will
remain on disk and in memory (per-partition caches) but won't be available
for querying.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Signed-off-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-04-23 18:12:33 +02:00
Artem Fetishev
a875597b09 lib/timeutil: ensure parsed time is in allowed range (#10870)
Update `timeutil.ParseTimeAt` to check the time limits for all date/time formats, not just year.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-23 17:37:15 +02:00
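The fix described above moves the range check after layout matching, so it applies to every date/time format. A minimal sketch of that idea, with assumed layout list, limits, and function name (not the real `timeutil.ParseTimeAt`):

```go
package main

import (
	"fmt"
	"time"
)

// Assumed limits for the sketch; the real package defines its own bounds.
var (
	minAllowed = time.Date(1970, 1, 1, 0, 0, 0, 0, time.UTC)
	maxAllowed = time.Date(2262, 1, 1, 0, 0, 0, 0, time.UTC)
)

// parseTimeAt tries several layouts; whichever layout matches, the result
// is validated against the allowed range before being returned.
func parseTimeAt(s string) (time.Time, error) {
	layouts := []string{time.RFC3339, "2006-01-02", "2006-01-02T15:04:05"}
	for _, l := range layouts {
		t, err := time.Parse(l, s)
		if err != nil {
			continue
		}
		if t.Before(minAllowed) || t.After(maxAllowed) {
			return time.Time{}, fmt.Errorf("time %s is out of allowed range", t)
		}
		return t, nil
	}
	return time.Time{}, fmt.Errorf("cannot parse %q", s)
}

func main() {
	// Out of range regardless of which layout matched.
	_, err := parseTimeAt("1965-05-01")
	fmt.Println(err != nil)
}
```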
Max Kotliar
3062f4355d docs: forward port LTS v1.122.20 changelog to upstream
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-23 17:38:00 +03:00
Max Kotliar
aa206acd6f docs: forward port LTS v1.136.5 changelog to upstream
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-23 17:32:25 +03:00
Nikolay
9a74f71a5f app/vmauth: properly start backend health checks
Previously, starting backend URL health checks could produce a data race
and a race condition.

The following panic could be produced:
`panic: sync: WaitGroup is reused before previous Wait has returned`

It happened because a concurrent goroutine could process a request while
the configuration was reloaded and the stopHealthChecks method was called.

This commit adds a dedicated structure for backend health checks,
which protects against the data race with a mutex guard and prevents the
race condition with a boolean flag.

Fixes: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10806
2026-04-23 11:05:06 +02:00
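The mutex-plus-flag pattern described in the commit above can be sketched as follows. This is a hypothetical illustration of the technique, not vmauth's actual structure: the type name `healthChecker` and its fields are assumptions.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// healthChecker sketches the dedicated structure: a mutex guards concurrent
// access, and a boolean flag refuses new starts after stop, so the
// WaitGroup is never reused while Wait is in progress.
type healthChecker struct {
	mu      sync.Mutex
	stopped bool
	wg      sync.WaitGroup
	stopCh  chan struct{}
}

// start launches a periodic check goroutine unless stop was already called.
func (hc *healthChecker) start(check func()) bool {
	hc.mu.Lock()
	defer hc.mu.Unlock()
	if hc.stopped {
		return false // config was reloaded and checks were stopped
	}
	hc.wg.Add(1)
	go func() {
		defer hc.wg.Done()
		for {
			select {
			case <-hc.stopCh:
				return
			case <-time.After(10 * time.Millisecond):
				check()
			}
		}
	}()
	return true
}

// stop marks the checker stopped, signals goroutines, and waits for them.
func (hc *healthChecker) stop() {
	hc.mu.Lock()
	hc.stopped = true
	close(hc.stopCh)
	hc.mu.Unlock()
	hc.wg.Wait() // safe: no new Add can happen once stopped is set
}

func main() {
	hc := &healthChecker{stopCh: make(chan struct{})}
	hc.start(func() {})
	hc.stop()
	fmt.Println(hc.start(func() {})) // false: starts are refused after stop
}
```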
Roman Khavronenko
1dcf0f6826 github: update PR template
Visually outline that guideline message should be removed from
description before submitting the PR. This should prevent cases when PR
template was blending into the PRs description remaining unnoticed.
2026-04-23 11:04:29 +02:00
Max Kotliar
727abb0b57 go.mod: update metricsql to a version that fixes a bug in binary op evaluation ordering
The commit in metricsql
d0bc93816e
introduced a bug that changed the order of binary op evaluation. This
commit updates to a metricsql version that fixes the bug by reverting to
the previous behavior.

The bug was introduced in v1.140.0, v1.136.4, and v1.122.19 releases.

It was reported in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10856
2026-04-22 20:40:00 +03:00
cubic-dev-ai[bot]
2c262c5ef6 app/vmctl: return errors instead of silently skipping unexpected OpenTSDB responses
Previously, `GetData` in the OpenTSDB client returned an empty `Metric{}` with
a `nil` error for several conditions (multiple series returned, aggregate
tags present, `modifyData` failures), causing `vmctl opentsdb` to
silently drop series during migration.

This commit changes these silent return paths to return proper errors with
descriptive messages that include the query string, so operators can detect
and diagnose partial migrations.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10797
2026-04-22 11:28:55 +02:00
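The change described above can be sketched as follows. The `Metric` type and `getData` signature are stand-ins, not the real vmctl API; the sketch only shows the technique of turning silent-skip paths into errors that carry the query string.

```go
package main

import "fmt"

// Metric is a stand-in for the OpenTSDB client's series type (assumption).
type Metric struct {
	Name string
}

// getData sketches the fix: each previously-silent path now returns an
// error whose message includes the query string for diagnosis.
func getData(query string, series []Metric, aggregateTags []string) (Metric, error) {
	if len(aggregateTags) > 0 {
		return Metric{}, fmt.Errorf("query %q returned aggregate tags %v; refusing to merge series", query, aggregateTags)
	}
	if len(series) > 1 {
		return Metric{}, fmt.Errorf("query %q returned %d series, expected exactly 1", query, len(series))
	}
	if len(series) == 0 {
		return Metric{}, fmt.Errorf("query %q returned no series", query)
	}
	return series[0], nil
}

func main() {
	// An empty response is now an explicit error instead of Metric{}, nil.
	_, err := getData("sum:cpu.user", nil, nil)
	fmt.Println(err != nil)
}
```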
andriibeee
a3df0f890b lib/cgroup: support reading cpu/memory limits from systemd slices
cgroup v2 supports slices (aka a path hierarchy) for resource limits. Slices are mostly used by systemd
and container runtimes built on top of it.

This commit reads the subpath for systemd slices and traverses it, reading the minimal limit value.

Related docs:
https://docs.oracle.com/en/operating-systems/oracle-linux/9/systemd/SystemdMngCgroupsV2.html#SlicesServicesScopesHierarchy
https://www.freedesktop.org/software/systemd/man/latest/systemd.slice.html

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10635
2026-04-22 10:18:03 +02:00
Max Kotliar
0785d16711 docs/vmauth: add example for using TLS on public addr but keeping internal non-TLS (#10858)
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10793
2026-04-22 10:06:42 +02:00
Hui Wang
dc94aa9339 app/vmalert: properly remove labels with empty values
Previously, if a rule label value was set to an empty string, vmalert ignored this label when merging labels with those from the data source response. In contrast, Prometheus removes the data source label in this case as well, which allows performing a label delete operation.

This commit uses the same conflict-resolution logic as Prometheus and allows removing labels.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10766
2026-04-22 09:53:25 +02:00
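The Prometheus-style conflict resolution described above can be sketched roughly as follows. A minimal sketch with a hypothetical helper name, not vmalert's actual implementation:

```go
package main

import "fmt"

// mergeLabels sketches the rule: rule labels override data source labels,
// and an empty rule label value deletes the label entirely.
func mergeLabels(source, rule map[string]string) map[string]string {
	merged := make(map[string]string, len(source))
	for k, v := range source {
		merged[k] = v
	}
	for k, v := range rule {
		if v == "" {
			delete(merged, k) // empty value performs a label delete
			continue
		}
		merged[k] = v
	}
	return merged
}

func main() {
	src := map[string]string{"env": "prod", "job": "node"}
	rule := map[string]string{"env": "", "team": "infra"}
	// env is deleted, team is added, job passes through.
	fmt.Println(mergeLabels(src, rule)) // map[job:node team:infra]
}
```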
Max Kotliar
032f70e262 docs/changelog: add update note about bug in metricsql
Follow up to
7029283f7d
for LTS releases
2026-04-21 20:19:56 +03:00
Max Kotliar
7029283f7d docs/changelog: add update note about bug in metricsql
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10856

Bug introduced in https://github.com/VictoriaMetrics/metricsql/pull/63
via commit
08dd38d4a0
2026-04-21 20:15:47 +03:00
Fred Navruzov
6c1534c7b1 docs/vmanomaly: update visual assets and formulations (#10859)
Update vmanomaly visual assets and improve clarification on allowed
datasources
2026-04-21 19:46:44 +03:00
Roman Khavronenko
0c05b0b15b apptest: restore helper for default tenant
Helper `getTenant` was removed in
e0e01e46f0 on the assumption that
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10782 would
tolerate a missing tenantID in the path.

Since that change has not been merged yet, this restores the helper so the
tests remain functional.
2026-04-21 11:03:06 +02:00
Alexander Frolov
a2b1d1eb62 app/vminsert: account storageNodesBucket count in per-node buffer size
Follow-up for ceda0407fb, which introduced a regression that could
double vminsert memory usage.

This commit takes into account the second buffer per storageNode.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10725#issuecomment-4282256709
2026-04-20 21:28:14 +02:00
dependabot[bot]
e3cd3329d6 build(deps): bump github/codeql-action from 4 to 4.35.1 (#10844)
Bumps [github/codeql-action](https://github.com/github/codeql-action)
from 4 to 4.35.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v4.35.1</h2>
<ul>
<li>Fix incorrect minimum required Git version for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a>: it should have been 2.36.0, not 2.11.0. <a
href="https://redirect.github.com/github/codeql-action/pull/3781">#3781</a></li>
</ul>
<h2>v4.35.0</h2>
<ul>
<li>Reduced the minimum Git version required for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> from 2.38.0 to 2.11.0. <a
href="https://redirect.github.com/github/codeql-action/pull/3767">#3767</a></li>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.25.1">2.25.1</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3773">#3773</a></li>
</ul>
<h2>v4.34.1</h2>
<ul>
<li>Downgrade default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.3">2.24.3</a>
due to issues with a small percentage of Actions and JavaScript
analyses. <a
href="https://redirect.github.com/github/codeql-action/pull/3762">#3762</a></li>
</ul>
<h2>v4.34.0</h2>
<ul>
<li>Added an experimental change which disables TRAP caching when <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> is enabled, since improved incremental analysis
supersedes TRAP caching. This will improve performance and reduce
Actions cache usage. We expect to roll this change out to everyone in
March. <a
href="https://redirect.github.com/github/codeql-action/pull/3569">#3569</a></li>
<li>We are rolling out improved incremental analysis to C/C++ analyses
that use build mode <code>none</code>. We expect this rollout to be
complete by the end of April 2026. <a
href="https://redirect.github.com/github/codeql-action/pull/3584">#3584</a></li>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.25.0">2.25.0</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3585">#3585</a></li>
</ul>
<h2>v4.33.0</h2>
<ul>
<li>
<p>Upcoming change: Starting April 2026, the CodeQL Action will skip
collecting file coverage information on pull requests to improve
analysis performance. File coverage information will still be computed
on non-PR analyses. Pull request analyses will log a warning about this
upcoming change. <a
href="https://redirect.github.com/github/codeql-action/pull/3562">#3562</a></p>
<p>To opt out of this change:</p>
<ul>
<li><strong>Repositories owned by an organization:</strong> Create a
custom repository property with the name
<code>github-codeql-file-coverage-on-prs</code> and the type
&quot;True/false&quot;, then set this property to <code>true</code> in
the repository's settings. For more information, see <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">Managing
custom properties for repositories in your organization</a>.
Alternatively, if you are using an advanced setup workflow, you can set
the <code>CODEQL_ACTION_FILE_COVERAGE_ON_PRS</code> environment variable
to <code>true</code> in your workflow.</li>
<li><strong>User-owned repositories using default setup:</strong> Switch
to an advanced setup workflow and set the
<code>CODEQL_ACTION_FILE_COVERAGE_ON_PRS</code> environment variable to
<code>true</code> in your workflow.</li>
<li><strong>User-owned repositories using advanced setup:</strong> Set
the <code>CODEQL_ACTION_FILE_COVERAGE_ON_PRS</code> environment variable
to <code>true</code> in your workflow.</li>
</ul>
</li>
<li>
<p>Fixed <a
href="https://redirect.github.com/github/codeql-action/issues/3555">a
bug</a> which caused the CodeQL Action to fail loading repository
properties if a &quot;Multi select&quot; repository property was
configured for the repository. <a
href="https://redirect.github.com/github/codeql-action/pull/3557">#3557</a></p>
</li>
<li>
<p>The CodeQL Action now loads <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">custom
repository properties</a> on GitHub Enterprise Server, enabling the
customization of features such as
<code>github-codeql-disable-overlay</code> that was previously only
available on GitHub.com. <a
href="https://redirect.github.com/github/codeql-action/pull/3559">#3559</a></p>
</li>
<li>
<p>Once <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries</a> can be configured with OIDC-based authentication
for organizations, the CodeQL Action will now be able to accept such
configurations. <a
href="https://redirect.github.com/github/codeql-action/pull/3563">#3563</a></p>
</li>
<li>
<p>Fixed the retry mechanism for database uploads. Previously this would
fail with the error &quot;Response body object should not be disturbed
or locked&quot;. <a
href="https://redirect.github.com/github/codeql-action/pull/3564">#3564</a></p>
</li>
<li>
<p>A warning is now emitted if the CodeQL Action detects a repository
property whose name suggests that it relates to the CodeQL Action, but
which is not one of the properties recognised by the current version of
the CodeQL Action. <a
href="https://redirect.github.com/github/codeql-action/pull/3570">#3570</a></p>
</li>
</ul>
<h2>v4.32.6</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.3">2.24.3</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3548">#3548</a></li>
</ul>
<h2>v4.32.5</h2>
<ul>
<li>Repositories owned by an organization can now set up the
<code>github-codeql-disable-overlay</code> custom repository property to
disable <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis for CodeQL</a>. First, create a custom repository
property with the name <code>github-codeql-disable-overlay</code> and
the type &quot;True/false&quot; in the organization's settings. Then in
the repository's settings, set this property to <code>true</code> to
disable improved incremental analysis. For more information, see <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">Managing
custom properties for repositories in your organization</a>. This
feature is not yet available on GitHub Enterprise Server. <a
href="https://redirect.github.com/github/codeql-action/pull/3507">#3507</a></li>
<li>Added an experimental change so that when <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> fails on a runner — potentially due to
insufficient disk space — the failure is recorded in the Actions cache
so that subsequent runs will automatically skip improved incremental
analysis until something changes (e.g. a larger runner is provisioned or
a new CodeQL version is released). We expect to roll this change out to
everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3487">#3487</a></li>
<li>The minimum memory check for improved incremental analysis is now
skipped for CodeQL 2.24.3 and later, which has reduced peak RAM usage.
<a
href="https://redirect.github.com/github/codeql-action/pull/3515">#3515</a></li>
<li>Reduced log levels for best-effort private package registry
connection check failures to reduce noise from workflow annotations. <a
href="https://redirect.github.com/github/codeql-action/pull/3516">#3516</a></li>
<li>Added an experimental change which lowers the minimum disk space
requirement for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a>, enabling it to run on standard GitHub Actions
runners. We expect to roll this change out to everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3498">#3498</a></li>
<li>Added an experimental change which allows the
<code>start-proxy</code> action to resolve the CodeQL CLI version from
feature flags instead of using the linked CLI bundle version. We expect
to roll this change out to everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3512">#3512</a></li>
<li>The previously experimental changes from versions 4.32.3, 4.32.4,
3.32.3 and 3.32.4 are now enabled by default. <a
href="https://redirect.github.com/github/codeql-action/pull/3503">#3503</a>,
<a
href="https://redirect.github.com/github/codeql-action/pull/3504">#3504</a></li>
</ul>
<h2>v4.32.4</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.2">2.24.2</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3493">#3493</a></li>
<li>Added an experimental change which improves how certificates are
generated for the authentication proxy that is used by the CodeQL Action
in Default Setup when <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries are configured</a>. This is expected to generate more
widely compatible certificates and should have no impact on analyses
which are working correctly already. We expect to roll this change out
to everyone in February. <a
href="https://redirect.github.com/github/codeql-action/pull/3473">#3473</a></li>
<li>When the CodeQL Action is run <a
href="https://docs.github.com/en/code-security/how-tos/scan-code-for-vulnerabilities/troubleshooting/troubleshooting-analysis-errors/logs-not-detailed-enough#creating-codeql-debugging-artifacts-for-codeql-default-setup">with
debugging enabled in Default Setup</a> and <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries are configured</a>, the &quot;Setup proxy for
registries&quot; step will output additional diagnostic information that
can be used for troubleshooting. <a
href="https://redirect.github.com/github/codeql-action/pull/3486">#3486</a></li>
<li>Added a setting which allows the CodeQL Action to enable network
debugging for Java programs. This will help GitHub staff support
customers with troubleshooting issues in GitHub-managed CodeQL
workflows, such as Default Setup. This setting can only be enabled by
GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3485">#3485</a></li>
<li>Added a setting which enables GitHub-managed workflows, such as
Default Setup, to use a <a
href="https://github.com/dsp-testing/codeql-cli-nightlies">nightly
CodeQL CLI release</a> instead of the latest, stable release that is
used by default. This will help GitHub staff support customers whose
analyses for a given repository or organization require early access to
a change in an upcoming CodeQL CLI release. This setting can only be
enabled by GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3484">#3484</a></li>
</ul>
<h2>v4.32.3</h2>
<ul>
<li>Added experimental support for testing connections to <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries</a>. This feature is not currently enabled for any
analysis. In the future, it may be enabled by default for Default Setup.
<a
href="https://redirect.github.com/github/codeql-action/pull/3466">#3466</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c10b8064de"><code>c10b806</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3782">#3782</a>
from github/update-v4.35.1-d6d1743b8</li>
<li><a
href="c5ffd06837"><code>c5ffd06</code></a>
Update changelog for v4.35.1</li>
<li><a
href="d6d1743b8e"><code>d6d1743</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3781">#3781</a>
from github/henrymercer/update-git-minimum-version</li>
<li><a
href="65d2efa733"><code>65d2efa</code></a>
Add changelog note</li>
<li><a
href="2437b20ab3"><code>2437b20</code></a>
Update minimum git version for overlay to 2.36.0</li>
<li><a
href="ea5f71947c"><code>ea5f719</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3775">#3775</a>
from github/dependabot/npm_and_yarn/node-forge-1.4.0</li>
<li><a
href="45ceeea896"><code>45ceeea</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3777">#3777</a>
from github/mergeback/v4.35.0-to-main-b8bb9f28</li>
<li><a
href="24448c9843"><code>24448c9</code></a>
Rebuild</li>
<li><a
href="7c51060631"><code>7c51060</code></a>
Update changelog and version after v4.35.0</li>
<li><a
href="b8bb9f28b8"><code>b8bb9f2</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3776">#3776</a>
from github/update-v4.35.0-0078ad667</li>
<li>Additional commits viewable in <a
href="https://github.com/github/codeql-action/compare/v4...v4.35.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github/codeql-action&package-manager=github_actions&previous-version=4&new-version=4.35.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 15:55:02 +03:00
dependabot[bot]
6c57246940 build(deps): bump actions/cache from 4 to 5.0.4 (#10802)
Bumps [actions/cache](https://github.com/actions/cache) from 4 to 5.0.4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/releases">actions/cache's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.4</h2>
<h2>What's Changed</h2>
<ul>
<li>Add release instructions and update maintainer docs by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1696">actions/cache#1696</a></li>
<li>Potential fix for code scanning alert no. 52: Workflow does not
contain permissions by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1697">actions/cache#1697</a></li>
<li>Fix workflow permissions and cleanup workflow names / formatting by
<a href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1699">actions/cache#1699</a></li>
<li>docs: Update examples to use the latest version by <a
href="https://github.com/XZTDean"><code>@​XZTDean</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1690">actions/cache#1690</a></li>
<li>Fix proxy integration tests by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1701">actions/cache#1701</a></li>
<li>Fix cache key in examples.md for bun.lock by <a
href="https://github.com/RyPeck"><code>@​RyPeck</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1722">actions/cache#1722</a></li>
<li>Update dependencies &amp; patch security vulnerabilities by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1738">actions/cache#1738</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/XZTDean"><code>@​XZTDean</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1690">actions/cache#1690</a></li>
<li><a href="https://github.com/RyPeck"><code>@​RyPeck</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1722">actions/cache#1722</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v5...v5.0.4">https://github.com/actions/cache/compare/v5...v5.0.4</a></p>
<h2>v5.0.3</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump <code>@actions/cache</code> to v5.0.5 (Resolves: <a
href="https://github.com/actions/cache/security/dependabot/33">https://github.com/actions/cache/security/dependabot/33</a>)</li>
<li>Bump <code>@actions/core</code> to v2.0.3</li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v5...v5.0.3">https://github.com/actions/cache/compare/v5...v5.0.3</a></p>
<h2>v5.0.2</h2>
<h2>What's Changed</h2>
<p>When creating cache entries, 429s returned from the cache service
will not be retried.</p>
<h2>v5.0.1</h2>
<blockquote>
<p>[!IMPORTANT]
<strong><code>actions/cache@v5</code> runs on the Node.js 24 runtime and
requires a minimum Actions Runner version of
<code>2.327.1</code>.</strong></p>
<p>If you are using self-hosted runners, ensure they are updated before
upgrading.</p>
</blockquote>
<hr />
<h1>v5.0.1</h1>
<h2>What's Changed</h2>
<ul>
<li>fix: update <code>@​actions/cache</code> for Node.js 24 punycode
deprecation by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1685">actions/cache#1685</a></li>
<li>prepare release v5.0.1 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1686">actions/cache#1686</a></li>
</ul>
<h1>v5.0.0</h1>
<h2>What's Changed</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/blob/main/RELEASES.md">actions/cache's
changelog</a>.</em></p>
<blockquote>
<h1>Releases</h1>
<h2>How to prepare a release</h2>
<blockquote>
<p>[!NOTE]<br />
Relevant for maintainers with write access only.</p>
</blockquote>
<ol>
<li>Switch to a new branch from <code>main</code>.</li>
<li>Run <code>npm test</code> to ensure all tests are passing.</li>
<li>Update the version in <a
href="https://github.com/actions/cache/blob/main/package.json"><code>https://github.com/actions/cache/blob/main/package.json</code></a>.</li>
<li>Run <code>npm run build</code> to update the compiled files.</li>
<li>Update this <a
href="https://github.com/actions/cache/blob/main/RELEASES.md"><code>https://github.com/actions/cache/blob/main/RELEASES.md</code></a>
with the new version and changes in the <code>## Changelog</code>
section.</li>
<li>Run <code>licensed cache</code> to update the license report.</li>
<li>Run <code>licensed status</code> and resolve any warnings by
updating the <a
href="https://github.com/actions/cache/blob/main/.licensed.yml"><code>https://github.com/actions/cache/blob/main/.licensed.yml</code></a>
file with the exceptions.</li>
<li>Commit your changes and push your branch upstream.</li>
<li>Open a pull request against <code>main</code> and get it reviewed
and merged.</li>
<li>Draft a new release <a
href="https://github.com/actions/cache/releases">https://github.com/actions/cache/releases</a>
use the same version number used in <code>package.json</code>
<ol>
<li>Create a new tag with the version number.</li>
<li>Auto generate release notes and update them to match the changes you
made in <code>RELEASES.md</code>.</li>
<li>Toggle the set as the latest release option.</li>
<li>Publish the release.</li>
</ol>
</li>
<li>Navigate to <a
href="https://github.com/actions/cache/actions/workflows/release-new-action-version.yml">https://github.com/actions/cache/actions/workflows/release-new-action-version.yml</a>
<ol>
<li>There should be a workflow run queued with the same version
number.</li>
<li>Approve the run to publish the new version and update the major tags
for this action.</li>
</ol>
</li>
</ol>
<h2>Changelog</h2>
<h3>5.0.4</h3>
<ul>
<li>Bump <code>minimatch</code> to v3.1.5 (fixes ReDoS via globstar
patterns)</li>
<li>Bump <code>undici</code> to v6.24.1 (WebSocket decompression bomb
protection, header validation fixes)</li>
<li>Bump <code>fast-xml-parser</code> to v5.5.6</li>
</ul>
<h3>5.0.3</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v5.0.5 (Resolves: <a
href="https://github.com/actions/cache/security/dependabot/33">https://github.com/actions/cache/security/dependabot/33</a>)</li>
<li>Bump <code>@actions/core</code> to v2.0.3</li>
</ul>
<h3>5.0.2</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v5.0.3 <a
href="https://redirect.github.com/actions/cache/pull/1692">#1692</a></li>
</ul>
<h3>5.0.1</h3>
<ul>
<li>Update <code>@azure/storage-blob</code> to <code>^12.29.1</code> via
<code>@actions/cache@5.0.1</code> <a
href="https://redirect.github.com/actions/cache/pull/1685">#1685</a></li>
</ul>
<h3>5.0.0</h3>
<blockquote>
<p>[!IMPORTANT]
<code>actions/cache@v5</code> runs on the Node.js 24 runtime and
requires a minimum Actions Runner version of <code>2.327.1</code>.</p>
</blockquote>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="27d5ce7f10"><code>27d5ce7</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1747">#1747</a>
from actions/yacaovsnc/update-dependency</li>
<li><a
href="f280785d7b"><code>f280785</code></a>
licensed changes</li>
<li><a
href="619aeb1606"><code>619aeb1</code></a>
npm run build generated dist files</li>
<li><a
href="bcf16c2893"><code>bcf16c2</code></a>
Update ts-http-runtime to 0.3.5</li>
<li><a
href="668228422a"><code>6682284</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1738">#1738</a>
from actions/prepare-v5.0.4</li>
<li><a
href="e34039626f"><code>e340396</code></a>
Update RELEASES</li>
<li><a
href="8a67110529"><code>8a67110</code></a>
Add licenses</li>
<li><a
href="1865903e1b"><code>1865903</code></a>
Update dependencies &amp; patch security vulnerabilities</li>
<li><a
href="5656298164"><code>5656298</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1722">#1722</a>
from RyPeck/patch-1</li>
<li><a
href="4e380d19e1"><code>4e380d1</code></a>
Fix cache key in examples.md for bun.lock</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/cache/compare/v4...v5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/cache&package-manager=github_actions&previous-version=4&new-version=5.0.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 15:53:09 +03:00
andriibeee
05112e54e2 lib/netutil: fix IPv6 address corruption in proxy protocol v2 parser
The proxy protocol parser kept a sub-slice reference to a pooled bytesBuffer in readProxyProto:
```
 bb := bbPool.Get()
 defer bbPool.Put(bb)   // ← buffer returned to pool AFTER function returns
...
   IP:   bb.B[0:16],  // ← BUG: sub-slice of pooled buffer!
...
 ```

 This commit properly allocates a new slice for the IPv6 address and copies the buffer content into it.
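
A minimal Go sketch of the fix (names are hypothetical, not the actual lib/netutil code): copy the 16 address bytes into a freshly allocated slice, so the returned IP stays valid after the pooled buffer is reused.

```go
package main

import (
	"fmt"
	"net"
)

// parseIPv6 returns an IP that remains valid after buf is returned to
// a pool, by copying the 16 address bytes instead of sub-slicing buf.
func parseIPv6(buf []byte) net.IP {
	ip := make(net.IP, 16)
	copy(ip, buf[0:16]) // allocate and copy instead of aliasing the pooled buffer
	return ip
}

func main() {
	buf := make([]byte, 16)
	buf[15] = 1 // ::1
	ip := parseIPv6(buf)
	buf[15] = 0xff // simulate pool reuse overwriting the buffer
	fmt.Println(ip.String()) // still prints "::1"
}
```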

 Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10839
2026-04-20 12:11:04 +02:00
Andrii Chubatiuk
ce227fe7d9 lib/streamaggr: added vm_streamaggr_counter_resets_total counter (#10807)
### Describe Your Changes

Added `vm_streamaggr_counter_resets_total` metric for `rate*`, `total*`, and
`increase*` outputs, which is useful for investigating unpredictable output
behaviour.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Andrii Chubatiuk <andrew.chubatiuk@gmail.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2026-04-20 11:48:03 +02:00
hagen1778
e4524eb2fb deployment/alerts: move IndexDBRecordsDrop and TooManyTSIDMisses rules to storage-related files
`IndexDBRecordsDrop` and `TooManyTSIDMisses` were mistakenly placed in `alerts-health.yml`,
which is supposed to contain rules related to all VM components. But these two rules
apply only to storage components (vmstorage and vmsingle), so they are moved to the
corresponding files.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-20 11:43:21 +02:00
hagen1778
b9ba5dacc3 deployment/alerts: rename alerts.yml to alerts-single-node.yml
The change should reduce confusion for users where `alerts.yml`
belongs to. Before, developers could mistakenly assume that
`alerts.yml` applied to both single-node and cluster installations.
As a result, the rule `MetadataCacheUtilizationIsTooHigh` was added only
to `alerts.yml` and not copied to `alerts-cluster.yml`.

The rename brings more context into the file name
and should reduce confusion in the future.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-20 11:37:30 +02:00
hagen1778
1a8fe4f2f8 deployment/alerts: add MetadataCacheUtilizationIsTooHigh to cluster rules
Before, this rule was only part of the single-node rule set,
but it is applicable to both single-node and cluster installations.
Adding it to the cluster rules as well.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-20 11:31:43 +02:00
Roman Khavronenko
2dcfbd8e19 deployment/rules: add MetricNameStatsCacheUtilizationIsTooHigh alert (#10840)
The new rule `MetricNameStatsCacheUtilizationIsTooHigh` will signal
overutilization of the metric names usage stats tracker. See
https://docs.victoriametrics.com/victoriametrics/#track-ingested-metrics-usage

This rule can fire for deployments with high churn rate of metric names.
In cases like this, it is better to disable metric name tracking
completely, as it brings no use.

It might also fire for deployments that have been tracking metric names for
very long periods; in that case the alert is a good sign to reset the cache.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-20 11:29:53 +02:00
Max Kotliar
728269a5af docs/changelog: chore wording a bit; add a link 2026-04-17 19:34:39 +03:00
Jan Dittrich
eaf24ec631 docs: align the limit mentioned in the docs with actual flag -maxLabelsPerTimeseries value (#10826)
The docs currently wrongly state that vminsert applies a label limit
per time series of `30`. Currently, the limit is `40`, which is also
correctly stated in the vmcluster docs. This PR corrects this in the key
concepts docs.

```
  -maxLabelsPerTimeseries int
     The maximum number of labels per time series to be accepted. Series with superfluous labels are ignored. In this case the vm_rows_ignored_total{reason="too_many_labels"} metric at /metrics page is incremented (default 40)
```

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10826
2026-04-17 19:17:39 +03:00
Phuong Le
e47f7a9d4e docs/contributing: clarify test requirements in pull request checklist (#10781)
Clarify in the pull request checklist that tests are expected for
non-trivial changes and bug fixes must include tests unless a maintainer
explicitly agrees otherwise

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10781
2026-04-17 18:29:52 +03:00
Phuong Le
02279b8594 .github: shorten PR template (#10789)
After switching squash merges to use the PR title and description, the
PR template text started leaking into final commit messages and adding
noise.

This PR removes the template and documents what a PR title and PR
description should contain instead.

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10789
2026-04-17 18:18:12 +03:00
f41gh7
65a44bd9e5 docs: changelog add missing PR links
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-17 11:12:56 +02:00
f41gh7
431dda673e vendor: update metrics and metrisql libs 2026-04-17 11:10:07 +02:00
andriibeee
d66b7a2283 app/vmauth: properly close backend response body
Previously, after RoundTrip returned successfully (err == nil, res != nil), the code checked whether the original client request's context was canceled. If canceled, it returned immediately without closing res.Body.

There is a race window where:
1) RoundTrip completes successfully (res is non-nil)
2) The client cancels the request context (closes connection)
3) The context check at line 484 sees the cancellation
4) The function returns without closing res.Body

The response body holds a reference to the underlying TCP connection. Without closing it, the connection is permanently leaked along with the transport goroutines (readLoop + writeLoop or dialConnFor).

 The bug was introduced in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10233

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10833
2026-04-17 10:57:13 +02:00
Yury Moladau
fd45463b5f app/vmui: fix Alerting Rules page query link and time display
**"Run query" link params**  
Added correct params to "Run query" link on Alerting Rules page:
- `g0.step_input` - set to `group.interval` (in seconds)
- `g0.end_time` - set to `rule.lastEvaluation` / `alert.activeAt`
- `g0.relative_time=none` - to fix the time range

**Time display timezone**  
Changed `t.format(...)` to `t.tz().format(...)` to display time in the
user-selected timezone.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10366
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10827
2026-04-17 10:32:59 +02:00
andriibeee
153c5bb803 lib/handshake: ignore TCP healthchecks in VMSelect just like in VMInsert
TCP healthchecks on the clusternative port of vmselect log the following warning continuously:

    VictoriaMetrics/lib/vmselectapi/server.go:204 cannot complete vmselect handshake due to network error with client "10.129.30.27:43829": cannot read hello message : cannot read message with size 11: EOF; read only 0 bytes. Check vmselect logs for errors

This is in contrast to vminsert, where it seems like there's handling for these healthchecks:
```
 if errors.Is(err, io.EOF) {
 	// This is likely a TCP healthcheck, which must be ignored in order to prevent logs pollution.
 	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1762
 	return errTCPHealthcheck
```

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10786
2026-04-16 23:02:01 +02:00
Nikolay
a29229a877 lib/promscrape: prevent unbounded scrape error body read
Previously, on non-200 HTTP status codes, lib/promscrape performed an
unbounded body read, which could potentially result in OOM.

This commit adds a maxScrapeSize limit to error response body reads,
protecting against malicious or misbehaving metrics endpoints.
2026-04-16 22:50:08 +02:00
cubic-dev-ai[bot]
aa94652ec3 app/vminsert: correctly stop StopIngestionRateLimiter before vminsert.Stop in vmsingle shutdown
vmsingle shuts down vminsert before closing the ingestion rate limiter, even though the rate limiter API explicitly requires the opposite order to unblock callers. vminsert.Stop() waits for unmarshal workers, which can be blocked in ingestionRateLimiter.Register() when the limit is hit.
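
The required ordering can be shown with a toy limiter (hypothetical types, not the actual vminsert code): stopping the limiter first unblocks workers, so waiting for them afterwards cannot deadlock.

```go
package main

import (
	"fmt"
	"sync"
)

// limiter blocks Register() until Stop() is called (modeling the
// worst case where the ingestion limit is hit).
type limiter struct {
	stopCh chan struct{}
}

func (l *limiter) Register() { <-l.stopCh }
func (l *limiter) Stop()     { close(l.stopCh) }

func main() {
	l := &limiter{stopCh: make(chan struct{})}
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // an unmarshal worker blocked in the rate limiter
		defer wg.Done()
		l.Register()
	}()
	// Correct order: stop the limiter FIRST so blocked workers unblock,
	// then wait for the workers to finish.
	l.Stop()
	wg.Wait()
	fmt.Println("shutdown completed without deadlock")
}
```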
2026-04-16 22:49:11 +02:00
Yury Moladau
ad85524fb1 app/vmui: update package dependencies (#10831)
### Describe Your Changes

Update package versions in `app/vmui/packages/vmui/package.json`.

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2026-04-16 22:44:54 +02:00
cubic-dev-ai[bot]
3fe606770f fix: prevent deadlock in vmrestore worker pool on context cancellation
Workers in runParallelPerPathInternal check ctxLocal.Done() before processing each work item and exit early on cancellation — without sending a result to resultCh. However, the coordinator loop always waits for exactly len(perPath) results from resultCh. If cancellation occurs before all tasks report, the read blocks indefinitely.
2026-04-16 22:44:31 +02:00
Fred Navruzov
b3054bbadd docs/vmanomaly-v1.29.3 (#10832)
### Describe Your Changes

Update vmanomaly docs to v1.29.3

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-04-16 17:34:25 +03:00
Roman Khavronenko
443ea9cbc6 apptest: add support for specifying HTTP headers (#10830)
This change allows specifying headers for provided API calls. This
ability is required for proper testing of Tenant-via-Header feature in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10782

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-16 15:03:19 +02:00
andriibeee
a36395500b lib/awsapi: pre-populate credentials only for static creds without roleARN
0aaa741b5b introduced a regression in lib/awsapi/config.go that causes empty credentials to be returned on the very first call to getFreshAPICredentials() when using EKS Pod Identity (or any container credential mechanism with no static access key). These empty credentials are then used for SigV4 signing -> 403 Forbidden on every remote write request.
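
The decision the fix restores can be sketched like this (hypothetical predicate, not the actual lib/awsapi code): pre-populate only when static keys are present and no roleARN is set; everything else must be fetched lazily on first use.

```go
package main

import "fmt"

type config struct {
	accessKeyID     string
	secretAccessKey string
	roleARN         string
}

// needsLazyCredentials reports whether credentials must be fetched on
// first use (container / pod-identity / assume-role flows) instead of
// being pre-populated from static keys at config creation time.
func needsLazyCredentials(c *config) bool {
	hasStatic := c.accessKeyID != "" && c.secretAccessKey != ""
	return !hasStatic || c.roleARN != ""
}

func main() {
	podIdentity := &config{} // no static keys: EKS Pod Identity
	static := &config{accessKeyID: "key", secretAccessKey: "secret"}
	fmt.Println(needsLazyCredentials(podIdentity), needsLazyCredentials(static)) // true false
}
```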

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10815
2026-04-16 11:51:42 +02:00
Max Kotliar
cc3a14b16b docs/changelog: fix feature indention 2026-04-15 17:34:22 +03:00
Aliaksandr Valialkin
7ef08b1781 vendor: update github.com/VictoriaMetrics/VictoriaLogs from v1.50.1-0.20260415114444-d5b5febe4954 to github.com/VictoriaMetrics/VictoriaLogs v1.50.1-0.20260415124154-6b7a6357aec0
This is needed for vmalert, so it accepts LogsQL queries with 'limit' and 'offset' pipes.

See https://github.com/VictoriaMetrics/VictoriaLogs/issues/1296#issuecomment-4252036978
2026-04-15 14:45:01 +02:00
Aliaksandr Valialkin
969cb5b4ae vendor: run make vendor-update 2026-04-15 14:03:53 +02:00
Aliaksandr Valialkin
b9f0e614bd vendor: update github.com/VictoriaMetrics/VictoriaLogs from v0.0.0-20260218111324-95b48d57d032 to v1.50.1-0.20260415114444-d5b5febe4954 2026-04-15 13:54:46 +02:00
Aliaksandr Valialkin
ed44c08f5f docs/Makefile: avoid creating a docker image with docs server at make docs-update-version
Just run a simple bash command without the heavyweight Docker image

While at it, rely on TAG environment variable instead of PKG_TAG env variable
for `make docs-update-version`, in order to be consistent with other Make commands.
2026-04-15 13:24:52 +02:00
f41gh7
3ae44e734b docs: remove promscrape.dropOriginalLabels from relabeling-debug section
Follow-up for ef507d372b.

 It's no longer needed to manually set the promscrape.dropOriginalLabels
 flag, since it is false by default.
2026-04-15 12:34:07 +02:00
Pablo (Tomas) Fernandez
d3264bd78f docs/guides: fix broken links (#10800)
Fix broken or moved links in guides.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-04-15 10:18:48 +02:00
hagen1778
1f87faafec docs/articles: add new 3rd party article about stream aggregation
https://medium.com/airbnb-engineering/building-a-high-volume-metrics-pipeline-with-opentelemetry-and-vmagent-c714d6910b45
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-15 10:11:57 +02:00
hagen1778
521b73dfc5 docs/vmagent: move relabeling section higher
The change is needed to group splitting/sharding section of the documentation,
so they go one after another. This should improve readability.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-15 10:10:53 +02:00
hagen1778
61db79c10a docs/vmagent: mention ability to filter scrape targets
The previous description didn't mention that relabeling can be used
for filtering scrape targets. Adding this mention.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-15 10:08:31 +02:00
hagen1778
460ac6468c docs/relabeling: restore links to articles about relabeling internals
These links were removed in 134501bf99
without adding complete substitution to their content.

Restoring these links as they can be useful for readers to learn about relabeling.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-15 10:07:07 +02:00
hagen1778
c42023c586 docs/playgrounds: add aliases for old links
The old links were removed in #10754
under the mistaken assumption that Google didn't index them. However, it did, and users can get 404s
when searching Google for VM playgrounds.

Restoring the links via aliases. This means Hugo will serve the `/playgrounds` page when
a user requests `/playgrounds/victoriametrics/`.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-15 10:04:46 +02:00
Artem Fetishev
8a20ccf21d apptest: sync code between branches and fix backup/restore range queries (#10799)
Fix app tests:

1. Sync code between vmsingle and vmcluster: it must be the same because
apptest does not differentiate between branches, it just runs pre-built
binaries
2. Simplify range queries in backup/restore test so that it does not
depend on the interval between samples to work correctly.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-14 07:18:09 +02:00
Max Kotliar
1a01dbbec7 docs/changelog: fix unwanted release tag change
The tag v1.138.0 was unintentionally changed to v1.139.0 due to a bug in
the release script.

Reverting the change. The bug will be addressed separately.
2026-04-13 14:52:21 +03:00
f41gh7
630e413812 docs: update flags with actual v1.140.0 binaries
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:34:04 +02:00
f41gh7
b639e7e641 docs: bump version to v1.140.0
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:31:40 +02:00
f41gh7
858c318e1f deployment/docker: bump version to v1.140.0 2026-04-13 11:31:11 +02:00
f41gh7
b8327ce09c docs: mention new LTS releases
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:16:30 +02:00
Aliaksandr Valialkin
7514511c68 app/vmauth/main.go: clarify comments for bufferedBody struct a bit
This is a follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10677#discussion_r3064731250
2026-04-11 09:42:32 +02:00
f41gh7
33d524bf13 follow-up for d07c1c73d1
move bugfix into the current release
2026-04-10 19:37:14 +02:00
Alexander Frolov
d07c1c73d1 lib/writeconcurrencylimiter: prevent deadlock at IncConcurrency
Previously (*writeconcurrencylimiter.Reader).Read() could permanently leak concurrency tokens from the -maxConcurrentInserts semaphore.
 
 Consider the following example:
* GetReader() acquires a token, then PutReader() unconditionally releases it.
* Read() calls DecConcurrency() before the underlying I/O and IncConcurrency() after it. If IncConcurrency() returns an error, Read() returns without holding a token.
* Each such failure permanently removes one slot from the concurrencyLimitCh semaphore. Slots leak one by one until the channel is fully drained, at which point DecConcurrency() blocks forever, deadlocking ingestion on vmstorage.

 This commit adds tracking of obtained tokens to the reader, which prevents possible token leakage.
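
The token-tracking idea can be sketched with a toy limiter (hypothetical type, not the actual lib/writeconcurrencylimiter code): the reader remembers whether it holds a token, so unbalanced Inc/Dec pairs become no-ops instead of draining semaphore slots.

```go
package main

import "fmt"

// tokenLimiter tracks whether the caller currently holds a concurrency
// token, so unbalanced acquire/release pairs cannot leak slots from
// the semaphore. (Per-reader state; not safe for concurrent use.)
type tokenLimiter struct {
	sem      chan struct{}
	hasToken bool
}

func newTokenLimiter(n int) *tokenLimiter {
	return &tokenLimiter{sem: make(chan struct{}, n)}
}

func (l *tokenLimiter) acquire() {
	if l.hasToken {
		return // already holding a token
	}
	l.sem <- struct{}{}
	l.hasToken = true
}

func (l *tokenLimiter) release() {
	if !l.hasToken {
		return // nothing held; avoid draining a slot that was never taken
	}
	<-l.sem
	l.hasToken = false
}

func main() {
	l := newTokenLimiter(1)
	l.acquire()
	l.release()
	l.release() // a double release is a no-op instead of a leak
	l.acquire()
	fmt.Println("slots in use:", len(l.sem)) // slots in use: 1
}
```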

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10784
2026-04-10 19:35:59 +02:00
f41gh7
a896673c42 CHANGELOG.md: cut v1.140.0 release 2026-04-10 17:02:32 +02:00
f41gh7
c60ab2d57a make docs-update-version 2026-04-10 16:54:18 +02:00
f41gh7
49e51611d7 make vmui-update 2026-04-10 16:51:13 +02:00
Hui Wang
902ca83177 app/vmalert: adopt additional rule states in the list rules API
In grafana, the alert list panel can use VictoriaMetrics as datasource
and call `/api/v1/rules` api with [specific
states](https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#alert-instance-states).
See
https://play-grafana.victoriametrics.com/d/febljk0a32qyoa/3e68cf3?orgId=1&from=now-1h&to=now&timezone=browser&var-prometheus_datasource=P4169E866C3094E38&var-jaeger_datasource=P14D5514F5CCC0D1C&var-victorialogs_datasource=PD775F2863313E6C7&var-service_namespace=$__all&var-service_name=checkout&refresh=5m&editPanel=40.
Some states are already defined in vmalert, although with different
names. Others, such as "recovering", are currently undefined.
This pull request adopts all these states, rather than failing the request.

Above panel request also uses the `matcher` param to filter rules.
However,
[prometheus](https://prometheus.io/docs/prometheus/latest/querying/api/#rules)
also does not support this parameter and simply ignores it, so I don't
think vmalert needs to support it now.

JFYI, the grafana [Alerting
page](https://play-grafana.victoriametrics.com/alerting) does not
include any of the mentioned `state` or `matcher` parameters in rule
listing requests to the datasource. Filtering is handled by the Grafana
frontend, so most users are not affected by partial support for
filtering in backend products.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10778
2026-04-10 16:47:25 +02:00
Phuong Le
66e3f8736b ci: remove automatic Codecov reporting from test workflow (#10780)
This removes automatic Codecov reporting from VictoriaMetrics CI. This
change keeps local coverage generation available, but removes automatic
PR noise (such as
[this](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10625#issuecomment-4084390659))
and unnecessary CI overhead.
2026-04-10 16:45:11 +02:00
f41gh7
532fcc3dfe docs: remove reverted commit changelog
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-10 16:35:29 +02:00
Aliaksandr Valialkin
b003d6c6ae Revert "app/vmauth: align request body buffering flags"
This reverts commit b3c03c023c.

Reason for revert: the original logic was correct from the user's perspective:

- The -maxRequestBodySizeToRetry command-line flag controls the size of the request body,
  which could be retried on backend failure. The meaning of this flag wasn't changed after
  the introduction of the -requestBufferSize flag in the commit e31abfc25c
  (see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309 )

- The -requestBufferSize flag controls the size of the buffer for reading request body
  before sending it to the backend and before applying concurrency limits.

These flags are independent from the user's perspective. The fact that these flags share the implementation
shouldn't be known to the user - this is an implementation detail, which allows avoiding double buffering.

Both flags enable request buffering. If the user wants to disable all request buffering,
then both flags must be set to 0. That's why these flags are cross-mentioned in their -help descriptions.

Also the reverted commit had the following issues:

- It reduced the default value for the -requestBufferSize flag from 32KiB to 16KiB.
  The 32KiB value has been calculated and justified at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309 .
  It shouldn't increase vmagent memory usage too much for typical workloads.
  For example, if vmagent handles 10K concurrent requests, then the memory overhead for the request buffering
  will be 10K*32KiB=320MiB. This is a small price for being able to efficiently handle 10K concurrent requests.

- It added a dot to the end of the https://docs.victoriametrics.com/victoriametrics/vmauth/#request-body-buffering link
  in the description of the -requestBufferSize flag. This breaks clicking the link in some environments,
  since the trailing dot is considered as a part of the url.

- It added a superfluous whitespace in front of the 'Disabling request buffering' text inside the description
  for the -requestBufferSize flag.

- It introduced unnecessary complexity for the user by mentioning that the zero value
  at -maxBufferSize disables buffering for request retries (these things must be independent
  from the user's perspective).

- It changed the bufferedBody logic in non-trivial ways, which aren't related to the original issue.
  If these changes are needed, then they must be justified in a separate issue and must be prepared
  in a separate pull request / commit.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10675
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10677
2026-04-10 15:55:47 +02:00
Aliaksandr Valialkin
8fa0fae05a lib/protoparser/protoparserutil: fix encoding -> contentType in the description of the ReadUncompressedData function
This is a follow-up for the commit bed7cbd0a4
2026-04-10 15:20:27 +02:00
Aliaksandr Valialkin
3fe2ec7bde docs/victoriametrics/Articles.md: add https://medium.com/airbnb-engineering/building-a-high-volume-metrics-pipeline-with-opentelemetry-and-vmagent-c714d6910b45 2026-04-10 13:28:49 +02:00
Max Kotliar
6389979bce docs/changelog: add thank you for bugfix contribution 2026-04-10 13:08:28 +03:00
Max Kotliar
210fd0ae15 docs/changelog: add thank you for the contribution 2026-04-10 13:06:08 +03:00
Noureldin
f95b483a13 lib/storage: fixes data race at startFreeDiskSpaceWatcher
Previously, Storage.table was initialized after startFreeDiskSpaceWatcher was called.
This created a potential data race condition: if openTable took a long time to complete
and freed disk space during that window, the free disk space watcher could read an
uninitialized (or partially initialized) Storage.table, leading to an invalid memory
address or nil pointer dereference panic.

This commit properly initializes s.isReadOnly state during storage start and
starts FreeDiskSpaceWatcher after openTable.

Bug was introduced in github.com/VictoriaMetrics/VictoriaMetrics/commit/27b958ba8bc66578206ddac26ccf47b2cc3e8101

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10747
2026-04-10 08:33:49 +02:00
Hui Wang
b71c37e20a app/vmalert: align group evaluation time with the eval_offset option
Align group evaluation time with the `eval_offset` option to allow users
to manage group execution more effectively by understanding the exact
time each group will be scheduled, particularly in cases of spreading
rule execution within a window, chaining groups, or debugging data delay
issues.

If the group evaluation takes less than the group interval, but the
initial evaluation combined with the additional restore operation
exceeds the group interval, the evaluation time will be gradually
corrected in subsequent evaluations, as the interval ticker schedule
remains unchanged.

For groups without `eval_offset`, this change also ensures that all
evaluations follow the interval. Previously, the gap between the first
and second evaluations was larger than the interval. And the
`eval_delay` continues to help prevent partial responses.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10772.
2026-04-10 08:31:36 +02:00
Aliaksandr Valialkin
c27b5f5dfe docs/victoriametrics/vmauth.md: fix link to concurrency limiting chapter
The correct link must be https://docs.victoriametrics.com/victoriametrics/vmauth/#concurrency-limiting
instead of https://docs.victoriametrics.com/victoriametrics/vmauth/#concurrency-limits

The incorrect link has been introduced in the commit e31abfc25c
2026-04-09 19:37:40 +02:00
Max Kotliar
0a31eacb3d lib/{osinfo,appmetrics}: Move vm_os_info metric code to lib/appmetrics package (#10776)
Follow-up commit for
211fb08028

Address @f41gh7 review comments:
- Move code from `lib/osinfo` to `lib/appmetrics`.
- Make the logic private.
- Use metrics.WriteGaugeUint64 func.
- Remove registration logic from `app/xxx/main.go`.
- Remove `lib/osinfo` package.
2026-04-09 18:32:47 +03:00
Artem Fetishev
70b0115ea6 lib/storage: reuse nextDayMetricIDs during the first hour of the day (#10704)
At 00:00 UTC the ingested samples start to have timestamps for the new
day (the ingested samples are always recent). Even though there was a
next-day prefill of the per-day index during the last hour of the day,
some performance degradation is still possible.

For example, in https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10698
it is manifested as `vminsert-to-vmstorage connection saturation` peaks
right after midnight.

A possible hypothesis for why this is happening: at midnight,
currHourMetricIDs is empty and prevHourMetricIDs cannot be used because
it holds metricIDs for the previous day. So the ingestion logic hits
dateMetricIDsCache which may not have the metricID in its read-only
buffer and therefore must acquire a lock to check its prev read-only
buffer or read-write buffer. Which creates lock contention and therefore
raises ingestion request latency.

A solution to this could be re-using the nextDayMetricIDs during the
first hour of the day. During this time, it is equivalent to
currHourMetricIDs.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Signed-off-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-04-09 16:33:42 +02:00
Max Kotliar
dfafd14767 apptest: Improve TestSingleVMAgentDropOnOverload stability (#10774)
Previously the test could fail on resource-constrained runners because
the remoteWrite retry happens before the assertion in:

```
    waitFor(
        func() bool {
            return vmagent.RemoteWriteRequests(t, url1) == 1 &&
vmagent.RemoteWriteRequests(t, url2) == 1
        },
    )
```

Because of the retry, the metric jumps to two and the assertion is never satisfied.

The commit explicitly postpones retries so there is no race condition.

Failed CI job:

https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/24186679213/job/70593055140

PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10774

2026-04-09 16:57:39 +03:00
Max Kotliar
e3fdbc8341 docs/changelog: cleanup follow-up on e1a9901654
e1a9901654
2026-04-09 15:04:48 +03:00
Max Kotliar
1bf442537f docs/changelog: cleanup. follow-up on 211fb08028 commit
211fb08028
2026-04-09 15:01:44 +03:00
JAYICE
211fb08028 introduce os kernel version information metric (#10746)
The commit introduces the `vm_os_info` metric, which is exposed by all VM binaries by default. It provides visibility into the operating system version on which VictoriaMetrics is running, helping with troubleshooting environment-specific issues, like known kernel or fs bugs. 

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10481
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10746

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-09 14:43:25 +03:00
Yury Moladau
846124e280 app/vmui: generate CSV format using /api/v1/labels (#10771)
`Export query` button on `Raw Query` tab now fetches labels of executed query and composes export `format` based on that list of labels. It ensures that all query response labels are preserved in the CSV export. 

Also, commit removes the addition of the CSV header in the frontend. Now the header is added by the backend (see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10706).

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10667
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10771
Duplicate of: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10737

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Co-authored-by: lawrence3699 <lawrence3699@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-09 14:18:01 +03:00
andriibeee
e1a9901654 vmselect: add CSV header support for export/import (#10706)
Export (/api/v1/export/csv) now always writes a header row matching the requested format fields. Examples:

```
  # format=__timestamp__:unix_ms,__value__,job,instance
  __timestamp__:unix_ms,__value__,job,instance
  1704067200000,42.5,node,localhost:9090
```

Import (/api/v1/import/csv) gains auto-detection logic: the first row is skipped if any timestamp column fails timestamp parsing or any metric value column fails float parsing. If the first row is not detected as headers, it is parsed as data. This makes the import backward compatible. 

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10666
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10706
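A minimal Go sketch of the import auto-detection rule described above (simplified to unix-ms timestamps; function name is hypothetical, not the actual vmselect code): the first row is treated as a header if any timestamp or value column fails numeric parsing.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// isHeaderRow reports whether the first CSV row looks like a header for the
// given format fields: any timestamp column that fails integer parsing, or
// any __value__ column that fails float parsing, marks the row as a header.
func isHeaderRow(fields, row []string) bool {
	for i, f := range fields {
		if i >= len(row) {
			break
		}
		switch {
		case strings.HasPrefix(f, "__timestamp__"):
			if _, err := strconv.ParseInt(row[i], 10, 64); err != nil {
				return true
			}
		case f == "__value__":
			if _, err := strconv.ParseFloat(row[i], 64); err != nil {
				return true
			}
		}
	}
	return false
}

func main() {
	fields := []string{"__timestamp__:unix_ms", "__value__", "job"}
	header := []string{"__timestamp__:unix_ms", "__value__", "job"}
	data := []string{"1704067200000", "42.5", "node"}
	fmt.Println(isHeaderRow(fields, header)) // true: skip as header
	fmt.Println(isHeaderRow(fields, data))   // false: parse as data
}
```

If the first row is not detected as a header, it is parsed as data, which keeps the import backward compatible.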


---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-09 14:00:39 +03:00
dependabot[bot]
5d0cf1d4a5 build(deps): bump vite from 8.0.2 to 8.0.7 in /app/vmui/packages/vmui (#10761)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 8.0.2 to 8.0.7.

https://github.com/vitejs/vite/blob/v8.0.7/packages/vite/CHANGELOG.md

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-09 13:16:01 +03:00
Pablo (Tomas) Fernandez
cd3d297a3d docs: update playground page (#10754)
This change reverts part of the changes in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10686

Motivation: the docs added in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10686 are in most cases too verbose, AI-generated, and of low practical value.

The improvement goal: remove bloat from the docs and keep them practical and useful.

What it does:
- Completely removes items from the sidebar
- Moves the content of the most important playground pages to the
`/playground/` stub (README.md), using an H2 for each playground.
- Updates and cleans the text.
- Removes the individual child pages in the playground category (keeping
only the `/playgrounds/` page/stub).
- Removes items as these don't really need much introduction or aren't
playgrounds:
  - log to logsql: a conversion tool
  - sql to logsql: same
- Adds a Grafana playground section

Links to child pages will become invalid. We don't preserve them since this doc is quite new (one week in production) and is unlikely to have links persisted elsewhere already.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2026-04-09 12:08:48 +02:00
f41gh7
52f4d0f055 follow-up for 72c9e9377c
Move changelog entry to the upcoming release section
2026-04-09 11:38:36 +02:00
Hui Wang
72c9e9377c app/vmalert: expose remotewrite queue_size metrics
This commit adds new metrics `vmalert_remotewrite_queue_capacity` and
`vmalert_remotewrite_queue_size`. The latter is updated with each push, and its
update frequency depends on `-remoteWrite.concurrency` and
`-remoteWrite.flushInterval`.

It doesn't account for the pending data within each pusher's request, but it
should provide a general indication of queue usage.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10765
2026-04-09 11:22:38 +02:00
andriibeee
0aaa741b5b lib/awsapi: add support for named AWS profile to ec2_sd_config
Add support for named AWS profiles in ec2_sd_config, matching Prometheus behavior.

Example:

```text
~/.aws/config:
[profile account-one]
source_profile = root
role_arn = arn:aws:iam::000000000001:role/prometheus
```

```yaml
scrape_configs:
- job_name: ec2
  ec2_sd_configs:
    - profile: account-one
```

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1685
2026-04-09 11:17:17 +02:00
f41gh7
0e9870b7a9 vendor: run go get -u ./lib/...
go get -u ./app/...
go mod tidy -compat=1.26
go mod vendor
2026-04-09 09:34:03 +02:00
Artem Fetishev
accb06d131 lib/storage: refactor storage synctests
Extract repeated code from nextDayMetricIDs synctests into separate
funcs to make the code more readable.

The change was originally introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10704 and was
extracted into a separate PR to keep the original change simple.
2026-04-09 09:07:37 +02:00
f41gh7
1787bce6cb app/vminsert: optimise per-insert request memory buffer size
Previously, vminsert did not account for the ingest concurrency limit in buffer size calculation.
This could lead to excessively large buffers and OOM errors when the concurrency limit was reached.

 This commit fixes buffer size calculation by separating `insertCtx` and `storageNode` buffer size limits.

`storageNode` buffer size is set to a larger value, as it is allocated per configured `-storageNode`
and is independent of the concurrency limit.

`insertCtx` buffer size now accounts for the configured concurrency limit
and calculates the maximum buffer size accordingly.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10725
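A minimal Go sketch of the sizing idea described above (names and budget are hypothetical, not the actual vminsert code): the per-request `insertCtx` buffer limit is derived from a memory budget divided by the ingest concurrency limit, so all concurrent inserts together stay within the budget.

```go
package main

import "fmt"

// insertCtxBufferSize derives the per-request buffer limit from a fixed
// memory budget and the configured ingest concurrency limit, with a floor
// so tiny budgets still get a usable buffer.
func insertCtxBufferSize(memoryBudget, maxConcurrentInserts int) int {
	bufSize := memoryBudget / maxConcurrentInserts
	const minBufSize = 64 * 1024
	if bufSize < minBufSize {
		bufSize = minBufSize
	}
	return bufSize
}

func main() {
	// e.g. a 1GiB budget shared by 256 concurrent inserts
	fmt.Println(insertCtxBufferSize(1<<30, 256)) // 4194304 (4MiB per request)
}
```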
2026-04-09 08:48:37 +02:00
JAYICE
141febd413 app/vmselect: disable partial responses for cluster native requests
Previously, vmselect in cluster-native mode could return partial responses to upstream vmselect.
Since upstream vmselect expects full responses (mimicking vmstorage behavior),
partial responses must be disabled in cluster-native mode.
This prevents incomplete responses from being cached at the upstream vmselect level.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10678
2026-04-09 08:48:13 +02:00
0e4ef622
256eff061d docs/victoriametrics/stream-aggregation: fix rate_sum link (#10756)
### Describe Your Changes

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8349 updated the recommendation for histogram aggregation from `total` to `rate_sum`, but missed one of the links.

PR: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10756

2026-04-08 13:09:58 +03:00
Zakhar Bessarab
fa1dd0ec0a app/vmagent/remotewrite: automatically set series limits to MaxInt32 when setting value to -1 (#9614)
Automatically set daily and hourly series limits to `MaxInt32` when `remoteWrite.maxHourlySeries` or `remoteWrite.maxDailySeries` is set to `-1`.

This change addresses a usability issue with the cardinality limiter. Users may want to enable the limiter to observe its metrics before deciding on an appropriate limit. However, the underlying bloom filter only supports `int32`, so setting large values can lead to overflow.

With this PR:
* Setting either flag to `-1` is treated as “no practical limit” and internally mapped to `math.MaxInt32`
* Values exceeding `int32` are safely clamped to `MaxInt32` to prevent overflow

This allows users to enable the limiter for estimation purposes without risking invalid configurations or runtime issues.

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9614

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-04-08 12:54:27 +03:00
Max Kotliar
6337dfc472 deployment/docker: update Go builder from Go1.26.1 to Go1.26.2
See
https://github.com/golang/go/issues?q=milestone%3AGo1.26.2%20label%3ACherryPickApproved
2026-04-08 12:43:29 +03:00
JAYICE
0a256002e5 lib/promscrape: update last scrape result only when current scrape is successful
Previously, the last scrape result was unconditionally updated, despite possible scrape errors.

The commit updates the last scrape result only on a successful scrape. This properly accounts for the `scrape_series_added` metric and aligns it with the same metric in Prometheus.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10653
2026-04-06 17:14:47 +02:00
Nikolay
b3c03c023c app/vmauth: align request body buffering flags
Previously introduced flag `requestBufferSize` raised the default value for
the in-memory buffer from 16KB to 32KB. It could increase memory usage for
vmauth. It also made it unclear how to actually disable request buffering.

 This commit aligns the flag values to 16KB and disables request
buffering if either flag value is 0, as mentioned in the flag descriptions.
If either flag has a non-default value, that value is used as the max size
for the request buffer. If both flags are modified, the bigger value wins.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10675
2026-04-06 09:51:44 +02:00
Hui Wang
80e2f29761 app/vmalert: add random jitter to concurrent periodical flushers targeting the remote write destination
I expect the change to help in two ways:
1. Spreading remote write flushes over the flush interval to avoid
congestion at the remote write destination;
2. Enhance queue data consumption. Currently, all flushers may always
flush data simultaneously, resulting in periods where no flushers are
consuming data from the queue, which increases the risk of reaching the
queue limit `remoteWrite.maxQueueSize` even with an increased
`remoteWrite.concurrency`. By making the flushers more dispersed, it is
more likely that some flushers are consistently consuming data from the
queue, which should make queue management easier.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10729/
2026-04-06 09:50:20 +02:00
Hui Wang
df34ba3ba2 app/vmalert: expose new histograms to provide better visibility into remote write request sizes
The new histograms should help with debugging whether remote write
pushes are efficient (pushes can be underutilized due to a small flush
interval), like in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10693 and
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10536. This
enhanced visibility will allow related parameters such as
`-remoteWrite.maxBatchSize`, `-remoteWrite.maxQueueSize`,
`-remoteWrite.flushInterval` to be tuned accordingly.

Eventually, `vmalert_remotewrite_sent_rows_total`
and `vmalert_remotewrite_sent_bytes_total` could be deprecated, but it's also fine to leave
them as they are since they're small counters.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10727
2026-04-06 09:49:17 +02:00
sias32
10dd45c4fd dashboards: improve alert statistics (#10571)
Changes:

- Added the number of `pending alerts` and `firing alerts`
- Improved `transformations` for the panel - FIRING over time by group and rules
- Added sorting for the panel - FIRING over time by rule

Signed-off-by: sias32 <sias.32@yandex.ru>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-03 21:27:19 +03:00
Max Kotliar
a3294b5aa2 docs/guide: fix free space calculation factor in capacity planning formula
Replace 1.2 multiplier with 1.25 in disk space estimation formula.

1.2 only provides ~16.7% free space, while the docs recommend keeping
20%. Using 1.25 correctly accounts for 20% free space.

Inspired by
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10394
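The arithmetic behind the change can be sketched as follows: with multiplier m applied to the data size, the free-space share of the resulting volume is (m-1)/m.

```go
package main

import "fmt"

func main() {
	// Free-space share for each multiplier: (m-1)/m.
	// 1.2 yields only ~16.7% free space; 1.25 yields the recommended 20%.
	for _, m := range []float64{1.2, 1.25} {
		fmt.Printf("multiplier %.2f -> %.1f%% free space\n", m, (m-1)/m*100)
	}
}
```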
2026-04-03 21:20:01 +03:00
Zhu Jiekun
4438454567 vendor: update metrics package with fix unsupported metric type for summary (#10745)
Fix `unsupported` metric type display in exposed metric metadata for
summaries and quantiles by bumping `metrics` SDK version.

This `unsupported` type exists when a summary is not updated within a
certain time window. See https://github.com/VictoriaMetrics/metrics/issues/120 and pull
request https://github.com/VictoriaMetrics/metrics/pull/121 for details.

Signed-off-by: Zhu Jiekun <jiekun@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-04-03 16:06:47 +03:00
Max Kotliar
71af1ee5f1 .github: Set 21-day cooldown to dependabot updates (#10740)
Recent supply chain attacks on GitHub Actions and npm packages show the
risk of pulling dependency updates too quickly:
-
https://socket.dev/blog/trivy-under-attack-again-github-actions-compromise
-
https://www.stepsecurity.io/blog/axios-compromised-on-npm-malicious-versions-drop-remote-access-trojan
2026-04-03 15:51:07 +03:00
Evgeny
e00fb7e605 app/vmagent: add per-URL -remoteWrite.disableMetadata
Add per-URL `-remoteWrite.disableMetadata` flag to control metadata
sending for each remote storage independently.

After v1.137.0 enabled `-enableMetadata` by default, metadata is sent to
ALL remote write targets, even those with relabeling filters that drop
most metrics. This causes unnecessary growth in
`vmagent_remotewrite_requests_total` and a significant increase in
network load for heavily filtered remote write destinations.
2026-04-03 10:32:34 +02:00
Roman Khavronenko
5e2ee00504 app/vmauth: mention that vmauth can be used with other components
A cosmetic change to highlight that vmauth can be used with other
components besides VM only
2026-04-03 10:27:43 +02:00
JAYICE
de2bc4237a lib/backup/s3: retry the requests that failed with unexpected EOF
When the network between the client and the s3 server is unstable, the client may encounter temporary io.EOF errors when reading the response from the s3 server.
Currently, the s3 sdk in vmbackup uses the default retry policy. However, this default retry policy won't retry when the s3 sdk meets an unexpected EOF. This means that a temporary unexpected EOF error will cause the backup task to fail.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10699
2026-04-03 10:26:58 +02:00
Fred Navruzov
3e51f277bd docs/vmanomaly: v1.29.2 (#10741)
update docs to vmanomaly v1.29.2 release

Signed-off-by: Fred Navruzov <fred-navruzov@users.noreply.github.com>
2026-04-02 22:02:26 +03:00
Roman Khavronenko
5723339525 docs: mention https://victoriametrics.com/blog/victoriametrics-remote-write/ (#10726)
Add link to blogpost with detailed information about zstd+rw protocol.
This PR is based on question in community channel about implementation
details.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-02 16:30:10 +03:00
Max Kotliar
3b986ad326 docs/changelog: add thank for contribution 2026-04-02 15:58:23 +03:00
Max Kotliar
08dd38d4a0 vendor: update https://github.com/VictoriaMetrics/metricsql from v0.85.0 to v0.86.0
It contains https://github.com/VictoriaMetrics/metricsql/pull/63 that
reduce number of parentheses added.

It should improve the prettify functionality in vmui.
2026-04-02 15:42:01 +03:00
Aliaksandr Valialkin
815cc97952 vendor: update github.com/VictoriaMetrics/metrics from v1.42.0 to v1.43.0 2026-04-02 14:18:30 +02:00
Dmytro Kozlov
93d71e7106 vmctl: add thanos migration mode (#10659)
Implemented dedicated thanos migration mode for vmctl to migrate data from Thanos installations to VictoriaMetrics.

Key features:
1. Raw and downsampled blocks support: Reads both raw blocks
(resolution=0) and downsampled blocks (5m/1h resolution) directly from
Thanos snapshots
2. All aggregate types: Imports count, sum, min, max, and counter
aggregates from downsampled blocks as separate metrics with resolution
and type suffixes (e.g., metric_name:5m:count)
3. Dedicated flags: Uses `--thanos-*` prefixed flags (--thanos-snapshot,
--thanos-concurrency, --thanos-filter-time-start,
--thanos-filter-time-end, --thanos-filter-label,
--thanos-filter-label-value, --thanos-aggr-types)
4. Selective aggregate import: Use `--thanos-aggr-types` to import only
specific aggregates

Usage:
```
vmctl thanos --thanos-snapshot /path/to/thanos-data --vm-addr http://victoria-metrics:8428
```

Closes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9262

Signed-off-by: Dmytro Kozlov <d.kozlov@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2026-04-02 14:51:01 +03:00
Aliaksandr Valialkin
577b161343 docs/victoriametrics/changelog/CHANGELOG.md: add a description for the change in the commit dd2d6807e4 2026-04-02 13:18:38 +02:00
Mehrdad Banikian
dd2d6807e4 Add split phase metrics for filestream fsync operations (#10493)
## Summary

This PR implements split phase metrics for filestream operations as
requested in #10432.

### Changes

- Added `vm_filestream_fsync_duration_seconds_total` metric to track
fsync syscall duration separately
- Added `vm_filestream_fsync_calls_total` metric to count fsync calls
- Added `vm_filestream_write_syscall_duration_seconds_total` metric to
track write syscall duration (previously mixed with flush time)
- Refactored `MustClose()` and `MustFlush()` to use new `flush()` and
`sync()` helper methods
- Kept `vm_filestream_write_duration_seconds_total` for backward
compatibility

### Problem Solved

Previously, `vm_filestream_write_duration_seconds_total` was being
incremented in two places:
1. `statWriter.Write()` - triggered by `bw.Flush()` and `bw.Write()`
2. `Writer.MustFlush()` - which included the above process, leading to
double-counting

This made it impossible to distinguish between write syscall time and
fsync time, which is critical for diagnosing storage latency issues.

### Solution

The new metrics allow users to:
- Distinguish "flush got slower" vs "fsync got slower" using metrics
only
- No file path labels (bounded cardinality)
- No double-counting between metrics

### Testing

- Code compiles successfully
- All existing metrics are preserved for backward compatibility

Closes #10432

---------

Signed-off-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Signed-off-by: Aliaksandr Valialkin <valyala@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
2026-04-02 13:14:33 +02:00
Aliaksandr Valialkin
e38e25b756 app/vmagent/remotewrite: improve the readability of the parseRetryAfterHeader() function a bit
- Use a shorter name for its arg: retryAfterString -> s. This is OK to do because the function is small enough,
so it is easier to read 's' instead of 'retryAfterString' in multiple places of the function.

- Remove the name for the returned value - retryAfterDuration, since it only confuses the reader.

This is a follow-up for the commit 5319acb8ed , which introduced this function.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6097
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6124
2026-04-02 12:52:59 +02:00
Vadim Alekseev
bc708c8568 lib/timeutil: introduce backoff timer struct (#10714)
### Describe Your Changes

I noticed that the backoff timer logic is repeated across multiple
packages. I've implemented a universal wrapper to avoid duplicating this
logic. This structure is already [actively
used](2aa0ea10bb/app/vlagent/kubernetescollector/backoff_timer.go (L11))
for the Kubernetes Collector in vlagent and can be reused in vlagent's
remotewrite. I've also included a usage example in this PR so you can
evaluate its utility.

2026-04-02 12:31:28 +02:00
Aliaksandr Valialkin
cd73472a3e docs/victoriametrics/Articles.md: add https://mirastacklabs.ai/blog/chunk-split-caching/ 2026-04-01 22:42:15 +02:00
Aliaksandr Valialkin
527d09653a lib/storage: remove MetricNamesStatsResponse and MetricNamesStatsRecord types
These types hide public types from lib/storage/metricnamestats package.
These types do not resolve any practical issues. Instead, they add a level of indirection,
which complicates reading and understanding the code.

These types were introduced in the commit 795d3fe722
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6145
2026-04-01 22:25:50 +02:00
Aliaksandr Valialkin
28a87b90bb apptest: test apps with the built-in race detector enabled in order to catch data races 2026-04-01 22:11:17 +02:00
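The kind of bug a race-enabled build catches can be illustrated with a minimal, self-contained sketch (not code from this repository). Two goroutines increment a shared counter; the mutex is what keeps `go test -race` quiet, while removing the Lock/Unlock pair makes a race-enabled binary report a data race that a regular build may pass over silently:

```go
package main

import (
	"fmt"
	"sync"
)

// parallelCount increments a shared counter from n goroutines, iters times
// each. Without the mutex, this is a textbook data race on counter: a
// regular build may just lose increments, while -race reports it loudly.
func parallelCount(n, iters int) int {
	var mu sync.Mutex
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < iters; j++ {
				mu.Lock()
				counter++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(parallelCount(4, 1000)) // 4000
}
```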
Artem Fetishev
c445e7fcc0 docs: bump version to v1.139.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 15:22:06 +02:00
Artem Fetishev
9494ee103e deployment/docker: bump version to v1.139.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 15:19:03 +02:00
Artem Fetishev
94af588e92 docs: forward port LTS v1.122.18 changelog to upstream
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 14:24:20 +02:00
Artem Fetishev
e5c194cc10 docs: forward port LTS v1.136.3 changelog to upstream
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 14:22:51 +02:00
Artem Fetishev
7c65e3daca docs: fix changelog
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-01 13:17:16 +02:00
Pablo (Tomas) Fernandez
a6532c28b2 docs/guides: Add new guide "Set up Datasource-Managed Alerts with vmalert and Grafana" (#10691)
Create a guide to use datasource-managed alerts in Grafana

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10528

Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Mathias Palmersheim <mathias@victoriametrics.com>
2026-03-31 18:57:04 +03:00
Jose Gómez-Sellés
ec26ebb803 docs: raise cloud awareness in docs (#10716)
### Describe Your Changes

Some users may not know that VictoriaMetrics Cloud provides relevant
features for managing workloads. This change adds notes in relevant places
where users may find that a managed solution is what they need.

The intention is not to push users to Cloud, but to give them the information.
That's why it's always phrased like "If you don't want to do X, Cloud
can do it for you" instead of "Start for free, etc.". This is an Open
Source-first project, and shall remain as such.

After this gets proper review, VictoriaLogs and other repos may follow.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Jose Gómez-Sellés <14234281+jgomezselles@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-03-31 18:51:06 +03:00
1812 changed files with 144166 additions and 30165 deletions

.codex (new empty file, 0 lines)

@@ -4,6 +4,8 @@ updates:
directory: "/"
schedule:
interval: "daily"
cooldown:
default-days: 21
- package-ecosystem: "gomod"
directory: "/"
schedule:
@@ -23,6 +25,8 @@ updates:
directory: "/"
schedule:
interval: "daily"
cooldown:
default-days: 21
- package-ecosystem: "npm"
directory: "/app/vmui/packages/vmui"
schedule:


@@ -1,10 +1,3 @@
### Describe Your Changes
**PLEASE REMOVE LINE BELOW BEFORE SUBMITTING**
Please provide a brief description of the changes you made. Be as specific as possible to help others understand the purpose and impact of your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Before creating the PR, make sure you have read and followed the [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).


@@ -27,7 +27,7 @@ jobs:
- run: go version
- name: Cache Go artifacts
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: |
~/.cache/go-build


@@ -40,7 +40,7 @@ jobs:
- run: go version
- name: Cache Go artifacts
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: |
~/.cache/go-build
@@ -50,14 +50,14 @@ jobs:
restore-keys: go-artifacts-${{ runner.os }}-codeql-analyze-${{ steps.go.outputs.go-version }}-
- name: Initialize CodeQL
uses: github/codeql-action/init@v4
uses: github/codeql-action/init@v4.35.2
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v4
uses: github/codeql-action/autobuild@v4.35.2
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v4
uses: github/codeql-action/analyze@v4.35.2
with:
category: 'language:go'


@@ -47,7 +47,7 @@ jobs:
- run: go version
- name: Cache golangci-lint
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: |
~/.cache/golangci-lint
@@ -66,8 +66,8 @@ jobs:
strategy:
matrix:
scenario:
- 'test-full'
- 'test-full-386'
- 'test'
- 'test-386'
- 'test-pure'
steps:
@@ -88,11 +88,6 @@ jobs:
- name: Run tests
run: make ${{ matrix.scenario}}
- name: Publish coverage
uses: codecov/codecov-action@v6
with:
files: ./coverage.txt
apptest:
name: apptest
runs-on: apptest


@@ -457,6 +457,9 @@ test:
test-race:
go test -tags 'synctest' -race ./lib/... ./app/...
test-386:
GOARCH=386 go test -tags 'synctest' ./lib/... ./app/...
test-pure:
CGO_ENABLED=0 go test -tags 'synctest' ./lib/... ./app/...
@@ -467,10 +470,10 @@ test-full-386:
GOARCH=386 go test -tags 'synctest' -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
apptest:
$(MAKE) victoria-metrics vmagent vmalert vmauth vmctl vmbackup vmrestore
$(MAKE) victoria-metrics-race vmagent-race vmalert-race vmauth-race vmctl-race vmbackup-race vmrestore-race
go test ./apptest/... -skip="^Test(Cluster|Legacy).*"
apptest-legacy: victoria-metrics vmbackup vmrestore
apptest-legacy: victoria-metrics-race vmbackup-race vmrestore-race
OS=$$(uname | tr '[:upper:]' '[:lower:]'); \
ARCH=$$(uname -m | tr '[:upper:]' '[:lower:]' | sed 's/x86_64/amd64/'); \
VERSION=v1.132.0; \


@@ -4,7 +4,6 @@
[![Docker Pulls](https://img.shields.io/docker/pulls/victoriametrics/victoria-metrics?label=&logo=docker&logoColor=white&labelColor=2496ED&color=2496ED&link=https%3A%2F%2Fhub.docker.com%2Fr%2Fvictoriametrics%2Fvictoria-metrics)](https://hub.docker.com/u/victoriametrics)
[![Go Report](https://goreportcard.com/badge/github.com/VictoriaMetrics/VictoriaMetrics?link=https%3A%2F%2Fgoreportcard.com%2Freport%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics)](https://goreportcard.com/report/github.com/VictoriaMetrics/VictoriaMetrics)
[![Build Status](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/build.yml/badge.svg?branch=master&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Factions)](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/build.yml)
[![codecov](https://codecov.io/gh/VictoriaMetrics/VictoriaMetrics/branch/master/graph/badge.svg?link=https%3A%2F%2Fcodecov.io%2Fgh%2FVictoriaMetrics%2FVictoriaMetrics)](https://app.codecov.io/gh/VictoriaMetrics/VictoriaMetrics)
[![License](https://img.shields.io/github/license/VictoriaMetrics/VictoriaMetrics?labelColor=green&label=&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Fblob%2Fmaster%2FLICENSE)](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/LICENSE)
[![Join Slack](https://img.shields.io/badge/Join%20Slack-4A154B?logo=slack)](https://slack.victoriametrics.com)
[![X](https://img.shields.io/twitter/follow/VictoriaMetrics?style=flat&label=Follow&color=black&logo=x&labelColor=black&link=https%3A%2F%2Fx.com%2FVictoriaMetrics)](https://x.com/VictoriaMetrics/)


@@ -1,42 +1,4 @@
# Security Policy
## Supported Versions
You can find out about our security policy and VictoriaMetrics version support on the [security page](https://docs.victoriametrics.com/victoriametrics/#security) in the documentation.
The following versions of VictoriaMetrics receive regular security fixes:
| Version | Supported |
|--------------------------------------------------------------------------------|--------------------|
| [Latest release](https://docs.victoriametrics.com/victoriametrics/changelog/) | :white_check_mark: |
| [LTS releases](https://docs.victoriametrics.com/victoriametrics/lts-releases/) | :white_check_mark: |
| other releases | :x: |
See [this page](https://victoriametrics.com/security/) for more details.
## Software Bill of Materials (SBOM)
Every VictoriaMetrics container{{% available_from "#" %}} image published to
[Docker Hub](https://hub.docker.com/u/victoriametrics)
and [Quay.io](https://quay.io/organization/victoriametrics)
includes an [SPDX](https://spdx.dev/) SBOM attestation
generated automatically by BuildKit during
`docker buildx build`.
To inspect the SBOM for an image:
```sh
docker buildx imagetools inspect \
docker.io/victoriametrics/victoria-metrics:latest \
--format "{{ json .SBOM }}"
```
To scan an image using its SBOM attestation with
[Trivy](https://github.com/aquasecurity/trivy):
```sh
trivy image --sbom-sources oci \
docker.io/victoriametrics/victoria-metrics:latest
```
## Reporting a Vulnerability
Please report any security issues to <security@victoriametrics.com>


@@ -118,8 +118,8 @@ func main() {
logger.Fatalf("cannot stop the webservice: %s", err)
}
logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
vminsert.Stop()
vminsertcommon.StopIngestionRateLimiter()
vminsert.Stop()
vmstorage.Stop()
vmselect.Stop()


@@ -83,6 +83,9 @@ var (
maxLabelsPerTimeseries = flag.Int("maxLabelsPerTimeseries", 0, "The maximum number of labels per time series to be accepted. Series with superfluous labels are ignored. In this case the vm_rows_ignored_total{reason=\"too_many_labels\"} metric at /metrics page is incremented")
maxLabelNameLen = flag.Int("maxLabelNameLen", 0, "The maximum length of label names in the accepted time series. Series with longer label name are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_name\"} metric at /metrics page is incremented")
maxLabelValueLen = flag.Int("maxLabelValueLen", 0, "The maximum length of label values in the accepted time series. Series with longer label value are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_value\"} metric at /metrics page is incremented")
enableMultitenancyViaHeaders = flag.Bool("enableMultitenancyViaHeaders", false, "Enables multitenancy via HTTP headers. "+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#multitenancy")
)
var (
@@ -216,7 +219,7 @@ func getOpenTSDBHTTPInsertHandler() func(req *http.Request) error {
}
return func(req *http.Request) error {
path := strings.ReplaceAll(req.URL.Path, "//", "/")
at, err := getAuthTokenFromPath(path)
at, err := getAuthTokenFromPath(path, req.Header)
if err != nil {
return fmt.Errorf("cannot obtain auth token from path %q: %w", path, err)
}
@@ -224,8 +227,15 @@ func getOpenTSDBHTTPInsertHandler() func(req *http.Request) error {
}
}
func getAuthTokenFromPath(path string) (*auth.Token, error) {
p, err := httpserver.ParsePath(path)
func parsePath(path string, header http.Header) (*httpserver.Path, error) {
if *enableMultitenancyViaHeaders {
return httpserver.ParsePathAndHeaders(path, header)
}
return httpserver.ParsePath(path)
}
func getAuthTokenFromPath(path string, header http.Header) (*auth.Token, error) {
p, err := parsePath(path, header)
if err != nil {
return nil, fmt.Errorf("cannot parse multitenant path: %w", err)
}
@@ -559,14 +569,15 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
}
func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path string) bool {
p, err := httpserver.ParsePath(path)
p, err := parsePath(path, r.Header)
if err != nil {
// Cannot parse multitenant path. Skip it - probably it will be parsed later.
return false
}
if p.Prefix != "insert" {
httpserver.Errorf(w, r, `unsupported multitenant prefix: %q; expected "insert"`, p.Prefix)
return true
// processMultitenantRequest is called for all unmatched path variants,
// but we should try parsing only /insert prefixed to avoid catching all possible paths.
return false
}
at, err := auth.NewTokenPossibleMultitenant(p.AuthToken)
if err != nil {


@@ -77,16 +77,6 @@ func insertRows(at *auth.Token, tss []prompb.TimeSeries, mms []prompb.MetricMeta
var metadataTotal int
if prommetadata.IsEnabled() {
var accountID, projectID uint32
if at != nil {
accountID = at.AccountID
projectID = at.ProjectID
for i := range mms {
mm := &mms[i]
mm.AccountID = accountID
mm.ProjectID = projectID
}
}
ctx.WriteRequest.Metadata = mms
metadataTotal = len(mms)
}


@@ -75,11 +75,6 @@ func insertRows(at *auth.Token, rows []prometheus.Row, mms []prometheus.Metadata
Samples: samples[len(samples)-1:],
})
}
var accountID, projectID uint32
if at != nil {
accountID = at.AccountID
projectID = at.ProjectID
}
for i := range mms {
mm := &mms[i]
mmsDst = append(mmsDst, prompb.MetricMetadata{
@@ -88,8 +83,6 @@ func insertRows(at *auth.Token, rows []prometheus.Row, mms []prometheus.Metadata
Type: mm.Type,
// there is no unit in Prometheus exposition formats
AccountID: accountID,
ProjectID: projectID,
})
}
ctx.WriteRequest.Timeseries = tssDst


@@ -72,11 +72,6 @@ func insertRows(at *auth.Token, timeseries []prompb.TimeSeries, mms []prompb.Met
var metadataTotal int
if prommetadata.IsEnabled() {
var accountID, projectID uint32
if at != nil {
accountID = at.AccountID
projectID = at.ProjectID
}
for i := range mms {
mm := &mms[i]
mmsDst = append(mmsDst, prompb.MetricMetadata{
@@ -85,8 +80,8 @@ func insertRows(at *auth.Token, timeseries []prompb.TimeSeries, mms []prompb.Met
Type: mm.Type,
Unit: mm.Unit,
AccountID: accountID,
ProjectID: projectID,
AccountID: mm.AccountID,
ProjectID: mm.ProjectID,
})
}
ctx.WriteRequest.Metadata = mmsDst


@@ -13,6 +13,9 @@ import (
"sync/atomic"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/awsapi"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
@@ -21,10 +24,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
)
var (
@@ -290,7 +290,7 @@ func getAWSAPIConfig(argIdx int) (*awsapi.Config, error) {
accessKey := awsAccessKey.GetOptionalArg(argIdx)
secretKey := awsSecretKey.GetOptionalArg(argIdx)
service := awsService.GetOptionalArg(argIdx)
cfg, err := awsapi.NewConfig(ec2Endpoint, stsEndpoint, region, roleARN, accessKey, secretKey, service)
cfg, err := awsapi.NewConfig(ec2Endpoint, stsEndpoint, region, roleARN, accessKey, secretKey, service, "")
if err != nil {
return nil, err
}
@@ -405,8 +405,7 @@ func (c *client) newRequest(url string, body []byte) (*http.Request, error) {
// Otherwise, it tries sending the block to remote storage indefinitely.
func (c *client) sendBlockHTTP(block []byte) bool {
c.rl.Register(len(block))
maxRetryDuration := timeutil.AddJitterToDuration(c.retryMaxInterval)
retryDuration := timeutil.AddJitterToDuration(c.retryMinInterval)
bt := timeutil.NewBackoffTimer(c.retryMinInterval, c.retryMaxInterval)
retriesCount := 0
again:
@@ -415,19 +414,10 @@ again:
c.requestDuration.UpdateDuration(startTime)
if err != nil {
c.errorsCount.Inc()
retryDuration *= 2
if retryDuration > maxRetryDuration {
retryDuration = maxRetryDuration
}
remoteWriteRetryLogger.Warnf("couldn't send a block with size %d bytes to %q: %s; re-sending the block in %.3f seconds",
len(block), c.sanitizedURL, err, retryDuration.Seconds())
t := timerpool.Get(retryDuration)
select {
case <-c.stopCh:
timerpool.Put(t)
remoteWriteRetryLogger.Warnf("couldn't send a block with size %d bytes to %q: %s; re-sending the block in %s",
len(block), c.sanitizedURL, err, bt.CurrentDelay())
if !bt.Wait(c.stopCh) {
return false
case <-t.C:
timerpool.Put(t)
}
c.retriesCount.Inc()
goto again
@@ -493,7 +483,10 @@ again:
// Unexpected status code returned
retriesCount++
retryAfterHeader := parseRetryAfterHeader(resp.Header.Get("Retry-After"))
retryDuration = getRetryDuration(retryAfterHeader, retryDuration, maxRetryDuration)
// retryAfterDuration has the highest priority duration
if retryAfterHeader > 0 {
bt.SetDelay(retryAfterHeader)
}
// Handle response
body, err := io.ReadAll(resp.Body)
@@ -502,15 +495,10 @@ again:
logger.Errorf("cannot read response body from %q during retry #%d: %s", c.sanitizedURL, retriesCount, err)
} else {
logger.Errorf("unexpected status code received after sending a block with size %d bytes to %q during retry #%d: %d; response body=%q; "+
"re-sending the block in %.3f seconds", len(block), c.sanitizedURL, retriesCount, statusCode, body, retryDuration.Seconds())
"re-sending the block in %s", len(block), c.sanitizedURL, retriesCount, statusCode, body, bt.CurrentDelay())
}
t := timerpool.Get(retryDuration)
select {
case <-c.stopCh:
timerpool.Put(t)
if !bt.Wait(c.stopCh) {
return false
case <-t.C:
timerpool.Put(t)
}
c.retriesCount.Inc()
goto again
@@ -519,27 +507,6 @@ again:
var remoteWriteRejectedLogger = logger.WithThrottler("remoteWriteRejected", 5*time.Second)
var remoteWriteRetryLogger = logger.WithThrottler("remoteWriteRetry", 5*time.Second)
// getRetryDuration returns retry duration.
// retryAfterDuration has the highest priority.
// If retryAfterDuration is not specified, retryDuration gets doubled.
// retryDuration can't exceed maxRetryDuration.
//
// Also see: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6097
func getRetryDuration(retryAfterDuration, retryDuration, maxRetryDuration time.Duration) time.Duration {
// retryAfterDuration has the highest priority duration
if retryAfterDuration > 0 {
return timeutil.AddJitterToDuration(retryAfterDuration)
}
// default backoff retry policy
retryDuration *= 2
if retryDuration > maxRetryDuration {
retryDuration = maxRetryDuration
}
return retryDuration
}
// repackBlockFromZstdToSnappy repacks the given zstd-compressed block to snappy-compressed block.
//
// The input block may be corrupted, for example, if vmagent was shut down ungracefully and
@@ -570,24 +537,20 @@ func logBlockRejected(block []byte, sanitizedURL string, resp *http.Response) {
}
// parseRetryAfterHeader parses `Retry-After` value retrieved from HTTP response header.
// retryAfterString should be in either HTTP-date or a number of seconds.
// It will return time.Duration(0) if `retryAfterString` does not follow RFC 7231.
func parseRetryAfterHeader(retryAfterString string) (retryAfterDuration time.Duration) {
if retryAfterString == "" {
return retryAfterDuration
//
// s should be in either HTTP-date or a number of seconds.
// It returns time.Duration(0) if s does not follow RFC 7231.
func parseRetryAfterHeader(s string) time.Duration {
if s == "" {
return 0
}
defer func() {
v := retryAfterDuration.Seconds()
logger.Infof("'Retry-After: %s' parsed into %.2f second(s)", retryAfterString, v)
}()
// Retry-After could be in "Mon, 02 Jan 2006 15:04:05 GMT" format.
if parsedTime, err := time.Parse(http.TimeFormat, retryAfterString); err == nil {
if parsedTime, err := time.Parse(http.TimeFormat, s); err == nil {
return time.Duration(time.Until(parsedTime).Seconds()) * time.Second
}
// Retry-After could be in seconds.
if seconds, err := strconv.Atoi(retryAfterString); err == nil {
if seconds, err := strconv.Atoi(s); err == nil {
return time.Duration(seconds) * time.Second
}


@@ -6,66 +6,11 @@ import (
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
)
func TestCalculateRetryDuration(t *testing.T) {
// `testFunc` call `calculateRetryDuration` for `n` times
// and evaluate if the result of `calculateRetryDuration` is
// 1. >= expectMinDuration
// 2. <= expectMinDuration + 10% (see timeutil.AddJitterToDuration)
f := func(retryAfterDuration, retryDuration time.Duration, n int, expectMinDuration time.Duration) {
t.Helper()
for range n {
retryDuration = getRetryDuration(retryAfterDuration, retryDuration, time.Minute)
}
expectMaxDuration := helper(expectMinDuration)
expectMinDuration = expectMinDuration - (1000 * time.Millisecond) // Avoid edge case when calculating time.Until(now)
if retryDuration < expectMinDuration || retryDuration > expectMaxDuration {
t.Fatalf(
"incorrect retry duration, want (ms): [%d, %d], got (ms): %d",
expectMinDuration.Milliseconds(), expectMaxDuration.Milliseconds(),
retryDuration.Milliseconds(),
)
}
}
// Call calculateRetryDuration for 1 time.
{
// default backoff policy
f(0, time.Second, 1, 2*time.Second)
// default backoff policy exceed max limit"
f(0, 10*time.Minute, 1, time.Minute)
// retry after > default backoff policy
f(10*time.Second, 1*time.Second, 1, 10*time.Second)
// retry after < default backoff policy
f(1*time.Second, 10*time.Second, 1, 1*time.Second)
// retry after invalid and < default backoff policy
f(0, time.Second, 1, 2*time.Second)
}
// Call calculateRetryDuration for multiple times.
{
// default backoff policy 2 times
f(0, time.Second, 2, 4*time.Second)
// default backoff policy 3 times
f(0, time.Second, 3, 8*time.Second)
// default backoff policy N times exceed max limit
f(0, time.Second, 10, time.Minute)
// retry after 120s 1 times
f(120*time.Second, time.Second, 1, 120*time.Second)
// retry after 120s 2 times
f(120*time.Second, time.Second, 2, 120*time.Second)
}
}
func TestParseRetryAfterHeader(t *testing.T) {
f := func(retryAfterString string, expectResult time.Duration) {
t.Helper()
@@ -91,13 +36,6 @@ func TestParseRetryAfterHeader(t *testing.T) {
f(time.Now().Add(10*time.Second).Format("Mon, 02 Jan 2006 15:04:05 FAKETZ"), 0)
}
// helper calculate the max possible time duration calculated by timeutil.AddJitterToDuration.
func helper(d time.Duration) time.Duration {
dv := min(d/10, 10*time.Second)
return d + dv
}
func TestRepackBlockFromZstdToSnappy(t *testing.T) {
expectedPlainBlock := []byte(`foobar`)


@@ -211,6 +211,9 @@ func (wr *writeRequest) copyMetadata(dst, src *prompb.MetricMetadata) {
dst.Type = src.Type
dst.Unit = src.Unit
dst.AccountID = src.AccountID
dst.ProjectID = src.ProjectID
// Pre-allocate memory for all string fields.
neededBufLen := len(src.MetricFamilyName) + len(src.Help)
bufLen := len(wr.metadatabuf)


@@ -3,6 +3,7 @@ package remotewrite
import (
"flag"
"fmt"
"math"
"net/http"
"net/url"
"path/filepath"
@@ -11,6 +12,10 @@ import (
"sync/atomic"
"time"
"github.com/cespare/xxhash/v2"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bloomfilter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
@@ -23,6 +28,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/memory"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prommetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
@@ -30,8 +36,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
"github.com/VictoriaMetrics/metrics"
"github.com/cespare/xxhash/v2"
)
var (
@@ -80,10 +84,14 @@ var (
`This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. `+
`For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}`+
`Enabled sorting for labels can slow down ingestion performance a bit`)
maxHourlySeries = flag.Int("remoteWrite.maxHourlySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxDailySeries = flag.Int("remoteWrite.maxDailySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxHourlySeries = flag.Int64("remoteWrite.maxHourlySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. "+
fmt.Sprintf("Setting this flag to '-1' sets limit to maximum possible value (%d) which is useful in order to enable series tracking without enforcing limits. ", math.MaxInt32)+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxDailySeries = flag.Int64("remoteWrite.maxDailySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. "+
fmt.Sprintf("Setting this flag to '-1' sets limit to maximum possible value (%d) which is useful in order to enable series tracking without enforcing limits. ", math.MaxInt32)+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxIngestionRate = flag.Int("maxIngestionRate", 0, "The maximum number of samples vmagent can receive per second. Data ingestion is paused when the limit is exceeded. "+
"By default there are no limits on samples ingestion rate. See also -remoteWrite.rateLimit")
@@ -92,6 +100,8 @@ var (
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence . See also -remoteWrite.dropSamplesOnOverload")
dropSamplesOnOverload = flag.Bool("remoteWrite.dropSamplesOnOverload", false, "Whether to drop samples when -remoteWrite.disableOnDiskQueue is set and if the samples "+
"cannot be pushed into the configured -remoteWrite.url systems in a timely manner. See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence")
disableMetadataPerURL = flagutil.NewArrayBool("remoteWrite.disableMetadata", "Whether to disable sending metadata to the corresponding -remoteWrite.url. "+
"By default, metadata sending is controlled by the global -enableMetadata flag")
)
var (
@@ -157,8 +167,8 @@ func Init() {
if len(*remoteWriteURLs) == 0 {
logger.Fatalf("at least one `-remoteWrite.url` command-line flag must be set")
}
if *maxHourlySeries > 0 {
hourlySeriesLimiter = bloomfilter.NewLimiter(*maxHourlySeries, time.Hour)
if limit := getMaxHourlySeries(); limit > 0 {
hourlySeriesLimiter = bloomfilter.NewLimiter(limit, time.Hour)
_ = metrics.NewGauge(`vmagent_hourly_series_limit_max_series`, func() float64 {
return float64(hourlySeriesLimiter.MaxItems())
})
@@ -166,8 +176,8 @@ func Init() {
return float64(hourlySeriesLimiter.CurrentItems())
})
}
if *maxDailySeries > 0 {
dailySeriesLimiter = bloomfilter.NewLimiter(*maxDailySeries, 24*time.Hour)
if limit := getMaxDailySeries(); limit > 0 {
dailySeriesLimiter = bloomfilter.NewLimiter(limit, 24*time.Hour)
_ = metrics.NewGauge(`vmagent_daily_series_limit_max_series`, func() float64 {
return float64(dailySeriesLimiter.MaxItems())
})
@@ -275,6 +285,7 @@ func initRemoteWriteCtxs(urls []string) {
rwctxs[i] = newRemoteWriteCtx(i, remoteWriteURL, sanitizedURL)
rwctxIdx[i] = i
}
fs.RegisterPathFsMetrics(*tmpDataPath)
if *shardByURL {
consistentHashNodes := make([]string, 0, len(urls))
@@ -388,7 +399,7 @@ func tryPush(at *auth.Token, wr *prompb.WriteRequest, forceDropSamplesOnFailure
// Push metadata separately from time series, since it doesn't need sharding,
// relabeling, stream aggregation, deduplication, etc.
if !tryPushMetadataToRemoteStorages(rwctxs, mms, forceDropSamplesOnFailure) {
if !tryPushMetadataToRemoteStorages(at, rwctxs, mms, forceDropSamplesOnFailure) {
return false
}
@@ -526,11 +537,18 @@ func pushTimeSeriesToRemoteStoragesTrackDropped(tss []prompb.TimeSeries) {
}
}
func tryPushMetadataToRemoteStorages(rwctxs []*remoteWriteCtx, mms []prompb.MetricMetadata, forceDropSamplesOnFailure bool) bool {
func tryPushMetadataToRemoteStorages(at *auth.Token, rwctxs []*remoteWriteCtx, mms []prompb.MetricMetadata, forceDropSamplesOnFailure bool) bool {
if len(mms) == 0 {
// Nothing to push
return true
}
if at != nil {
for idx := range mms {
mm := &mms[idx]
mm.AccountID = at.AccountID
mm.ProjectID = at.ProjectID
}
}
// Do not shard metadata even if -remoteWrite.shardByURL is set, just replicate it among rwctxs.
// Since metadata is usually small and there is no guarantee that metadata can be sent to
// the same remote storage with the corresponding metrics.
@@ -540,6 +558,10 @@ func tryPushMetadataToRemoteStorages(rwctxs []*remoteWriteCtx, mms []prompb.Metr
var wg sync.WaitGroup
var anyPushFailed atomic.Bool
for _, rwctx := range rwctxs {
if !rwctx.enableMetadata {
// Skip remote storage with disabled metadata
continue
}
wg.Go(func() {
if !rwctx.tryPushMetadataInternal(mms) {
rwctx.pushFailures.Inc()
@@ -811,6 +833,11 @@ type remoteWriteCtx struct {
streamAggrKeepInput bool
streamAggrDropInput bool
// enableMetadata indicates whether metadata should be sent to this remote storage.
// It is determined by -remoteWrite.enableMetadata per-URL flag if set,
// otherwise by the global -enableMetadata flag.
enableMetadata bool
pss []*pendingSeries
pssNextIdx atomic.Uint64
@@ -822,6 +849,18 @@ type remoteWriteCtx struct {
rowsDroppedOnPushFailure *metrics.Counter
}
// isMetadataEnabledForURL returns true if metadata should be sent to the remote storage at argIdx.
// It checks the per-URL -remoteWrite.disableMetadata flag first.
// If not set, it falls back to the global -enableMetadata flag.
func isMetadataEnabledForURL(argIdx int) bool {
if disableMetadataPerURL.GetOptionalArg(argIdx) {
// Metadata is explicitly disabled for this URL
return false
}
// Use global -enableMetadata value
return prommetadata.IsEnabled()
}
func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, sanitizedURL string) *remoteWriteCtx {
// strip query params, otherwise changing params resets pq
pqURL := *remoteWriteURL
@@ -892,10 +931,11 @@ func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, sanitizedURL string)
}
rwctx := &remoteWriteCtx{
idx: argIdx,
fq: fq,
c: c,
pss: pss,
idx: argIdx,
fq: fq,
c: c,
pss: pss,
enableMetadata: isMetadataEnabledForURL(argIdx),
rowsPushedAfterRelabel: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_rows_pushed_after_relabel_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
rowsDroppedByRelabel: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_relabel_metrics_dropped_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
@@ -1116,3 +1156,21 @@ func newMapFromStrings(a []string) map[string]struct{} {
}
return m
}
func getMaxHourlySeries() int {
limit := *maxHourlySeries
if limit == -1 || limit > math.MaxInt32 {
return math.MaxInt32
}
return int(limit)
}
func getMaxDailySeries() int {
limit := *maxDailySeries
if limit == -1 || limit > math.MaxInt32 {
return math.MaxInt32
}
return int(limit)
}


@@ -222,6 +222,9 @@ func (r *Rule) Validate() error {
if r.Expr == "" {
return fmt.Errorf("expression can't be empty")
}
if _, ok := r.Labels["__name__"]; ok {
return fmt.Errorf("invalid rule label __name__")
}
return checkOverflow(r.XXX, "rule")
}


@@ -136,6 +136,9 @@ func TestRuleValidate(t *testing.T) {
if err := (&Rule{Alert: "alert"}).Validate(); err == nil {
t.Fatalf("expected empty expr error")
}
if err := (&Rule{Record: "record", Expr: "sum(test)", Labels: map[string]string{"__name__": "test"}}).Validate(); err == nil {
t.Fatalf("expected error for invalid rule label __name__")
}
if err := (&Rule{Alert: "alert", Expr: "test>0"}).Validate(); err != nil {
t.Fatalf("expected valid rule; got %s", err)
}

View File

@@ -87,6 +87,7 @@ func (m *Metric) DelLabel(key string) {
for i, l := range m.Labels {
if l.Name == key {
m.Labels = append(m.Labels[:i], m.Labels[i+1:]...)
break
}
}
}

View File

@@ -14,7 +14,7 @@ type Notifier interface {
Send(ctx context.Context, alerts []Alert, alertLabels [][]prompb.Label, notifierHeaders map[string]string) error
// Addr returns address where alerts are sent.
Addr() string
// LastError returns error, that occured during last attempt to send data
// LastError returns error, that occurred during last attempt to send data
LastError() string
// Close is a destructor for the Notifier
Close()

View File

@@ -13,14 +13,18 @@ import (
"sync"
"time"
"github.com/cespare/xxhash/v2"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -113,8 +117,10 @@ func NewClient(ctx context.Context, cfg Config) (*Client, error) {
input: make(chan prompb.TimeSeries, cfg.MaxQueueSize),
}
for range cc {
c.run(ctx)
for i := 0; i < cc; i++ {
c.wg.Go(func() {
c.run(ctx, i)
})
}
return c, nil
}
@@ -156,8 +162,7 @@ func (c *Client) Close() error {
return nil
}
func (c *Client) run(ctx context.Context) {
ticker := time.NewTicker(c.flushInterval)
func (c *Client) run(ctx context.Context, id int) {
wr := &prompb.WriteRequest{}
shutdown := func() {
lastCtx, cancel := context.WithTimeout(context.Background(), defaultWriteTimeout)
@@ -174,45 +179,72 @@ func (c *Client) run(ctx context.Context) {
cancel()
}
c.wg.Go(func() {
defer ticker.Stop()
for {
// add jitter to spread remote write flushes over the flush interval to avoid congestion at the remote write destination
h := xxhash.Sum64(bytesutil.ToUnsafeBytes(fmt.Sprintf("%d", id)))
randJitter := uint64(float64(c.flushInterval) * (float64(h) / (1 << 64)))
timer := time.NewTimer(time.Duration(randJitter))
addJitter:
for {
select {
case <-c.doneCh:
timer.Stop()
shutdown()
return
case <-ctx.Done():
timer.Stop()
shutdown()
return
case <-timer.C:
break addJitter
}
}
ticker := time.NewTicker(c.flushInterval)
defer ticker.Stop()
for {
select {
case <-c.doneCh:
shutdown()
return
case <-ctx.Done():
shutdown()
return
case <-ticker.C:
c.flush(ctx, wr)
// drain the potential stale tick to avoid small or empty flushes after a slow flush.
select {
case <-c.doneCh:
shutdown()
return
case <-ctx.Done():
shutdown()
return
case <-ticker.C:
default:
}
case ts, ok := <-c.input:
if !ok {
continue
}
wr.Timeseries = append(wr.Timeseries, ts)
if len(wr.Timeseries) >= c.maxBatchSize {
c.flush(ctx, wr)
// drain the potential stale tick to avoid small or empty flushes after a slow flush.
select {
case <-ticker.C:
default:
}
case ts, ok := <-c.input:
if !ok {
continue
}
wr.Timeseries = append(wr.Timeseries, ts)
if len(wr.Timeseries) >= c.maxBatchSize {
c.flush(ctx, wr)
}
}
}
})
}
}
var (
rwErrors = metrics.NewCounter(`vmalert_remotewrite_errors_total`)
rwTotal = metrics.NewCounter(`vmalert_remotewrite_total`)
sentRows = metrics.NewCounter(`vmalert_remotewrite_sent_rows_total`)
sentBytes = metrics.NewCounter(`vmalert_remotewrite_sent_bytes_total`)
droppedRows = metrics.NewCounter(`vmalert_remotewrite_dropped_rows_total`)
sendDuration = metrics.NewFloatCounter(`vmalert_remotewrite_send_duration_seconds_total`)
bufferFlushDuration = metrics.NewHistogram(`vmalert_remotewrite_flush_duration_seconds`)
// sentRows and sentBytes are historical counters that can now be replaced by flushedRows and flushedBytes histograms. They may be deprecated in the future after the new histograms have been adopted for some time.
sentRows = metrics.NewCounter(`vmalert_remotewrite_sent_rows_total`)
sentBytes = metrics.NewCounter(`vmalert_remotewrite_sent_bytes_total`)
flushedRows = metrics.NewHistogram(`vmalert_remotewrite_sent_rows`)
flushedBytes = metrics.NewHistogram(`vmalert_remotewrite_sent_bytes`)
droppedRows = metrics.NewCounter(`vmalert_remotewrite_dropped_rows_total`)
sendDuration = metrics.NewFloatCounter(`vmalert_remotewrite_send_duration_seconds_total`)
bufferFlushDuration = metrics.NewHistogram(`vmalert_remotewrite_flush_duration_seconds`)
remoteWriteQueueSize = metrics.NewHistogram(`vmalert_remotewrite_queue_size`)
_ = metrics.NewGauge(`vmalert_remotewrite_queue_capacity`, func() float64 {
return float64(*maxQueueSize)
})
_ = metrics.NewGauge(`vmalert_remotewrite_concurrency`, func() float64 {
return float64(*concurrency)
@@ -226,6 +258,7 @@ func GetDroppedRows() int { return int(droppedRows.Get()) }
// it to remote-write endpoint. Flush performs limited amount of retries
// if request fails.
func (c *Client) flush(ctx context.Context, wr *prompb.WriteRequest) {
remoteWriteQueueSize.Update(float64(len(c.input)))
if len(wr.Timeseries) < 1 {
return
}
@@ -235,10 +268,8 @@ func (c *Client) flush(ctx context.Context, wr *prompb.WriteRequest) {
data := wr.MarshalProtobuf(nil)
b := snappy.Encode(nil, data)
retryInterval, maxRetryInterval := *retryMinInterval, *retryMaxTime
if retryInterval > maxRetryInterval {
retryInterval = maxRetryInterval
}
maxRetryInterval := *retryMaxTime
bt := timeutil.NewBackoffTimer(*retryMinInterval, maxRetryInterval)
timeStart := time.Now()
defer func() {
sendDuration.Add(time.Since(timeStart).Seconds())
@@ -256,6 +287,8 @@ L:
if err == nil {
sentRows.Add(len(wr.Timeseries))
sentBytes.Add(len(b))
flushedRows.Update(float64(len(wr.Timeseries)))
flushedBytes.Update(float64(len(b)))
return
}
@@ -281,12 +314,11 @@ L:
break
}
if retryInterval > timeLeftForRetries {
retryInterval = timeLeftForRetries
if bt.CurrentDelay() > timeLeftForRetries {
bt.SetDelay(timeLeftForRetries)
}
// sleeping to prevent remote db hammering
time.Sleep(retryInterval)
retryInterval *= 2
bt.Wait(ctx.Done())
attempts++
}

View File

@@ -103,7 +103,10 @@ func TestClient_run_maxBatchSizeDuringShutdown(t *testing.T) {
// push time series to the client.
for range pushCnt {
if err = rwClient.Push(prompb.TimeSeries{}); err != nil {
if err = rwClient.Push(prompb.TimeSeries{
Labels: []prompb.Label{{Name: "__name__", Value: "m"}},
Samples: []prompb.Sample{{Value: 1, Timestamp: 1000}},
}); err != nil {
t.Fatalf("cannot push time series to the client: %s", err)
}
}

View File

@@ -312,9 +312,11 @@ type labelSet struct {
// On k conflicts in origin set, the original value is preferred and copied
// to processed with `exported_%k` key. The copy happens only if passed v isn't equal to origin[k] value.
func (ls *labelSet) add(k, v string) {
// do not add label with empty value, since it has no meaning.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984
// do not add label with empty value to the result, as it has no meaning:
// if the label already exists in the original query result, remove it to preserve compatibility with relabeling, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10766.
// otherwise, ignore the label, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984.
if v == "" {
delete(ls.processed, k)
return
}
ls.processed[k] = v
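The empty-value rule above has two effects: a new label with an empty value is ignored, while an empty value for an already-present label removes it from the result. A self-contained sketch of that semantics:

```go
package main

import "fmt"

// applyExtraLabel mirrors labelSet.add: an empty value removes the label
// from the result instead of producing a meaningless label="" pair.
func applyExtraLabel(processed map[string]string, k, v string) {
	if v == "" {
		delete(processed, k)
		return
	}
	processed[k] = v
}

func main() {
	labels := map[string]string{"pod": "vmalert-0", "job": "vmalert"}
	applyExtraLabel(labels, "team", "infra") // adds a new label
	applyExtraLabel(labels, "pod", "")       // removes pod from the result
	fmt.Println(labels)
}
```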

View File

@@ -1363,6 +1363,7 @@ func TestAlertingRule_ToLabels(t *testing.T) {
{Name: "instance", Value: "0.0.0.0:8800"},
{Name: "group", Value: "vmalert"},
{Name: "alertname", Value: "ConfigurationReloadFailure"},
{Name: "pod", Value: "vmalert-0"},
},
Values: []float64{1},
Timestamps: []int64{time.Now().UnixNano()},
@@ -1374,6 +1375,7 @@ func TestAlertingRule_ToLabels(t *testing.T) {
"group": "vmalert", // this shouldn't have effect since value in metric is equal
"invalid_label": "{{ .Values.mustRuntimeFail }}",
"empty_label": "", // this should be dropped
"pod": "", // this should remove the pod label from query result
},
Expr: "sum(vmalert_alerting_rules_error) by(instance, group, alertname) > 0",
Name: "AlertingRulesError",
@@ -1385,6 +1387,7 @@ func TestAlertingRule_ToLabels(t *testing.T) {
"group": "vmalert",
"alertname": "ConfigurationReloadFailure",
"alertgroup": "vmalert",
"pod": "vmalert-0",
"invalid_label": `error evaluating template: template: :1:298: executing "" at <.Values.mustRuntimeFail>: can't evaluate field Values in type notifier.tplData`,
}

View File

@@ -8,6 +8,7 @@ import (
"hash/fnv"
"maps"
"net/url"
"path"
"sync"
"time"
@@ -42,6 +43,9 @@ var (
"For example, if lookback=1h then range from now() to now()-1h will be scanned.")
maxStartDelay = flag.Duration("group.maxStartDelay", 5*time.Minute, "Defines the max delay before starting the group evaluation. Group's start is artificially delayed for random duration on interval"+
" [0..min(--group.maxStartDelay, group.interval)]. This helps smoothing out the load on the configured datasource, so evaluations aren't executed too close to each other.")
ruleStripFilePath = flag.Bool("rule.stripFilePath", false, "Whether to strip rule file paths in logs and all API responses, including /metrics. "+
"For example, file path '/path/to/tenant_id/rules.yml' will be stripped to 'groupHashID/rules.yml'. "+
"This flag may be useful for hiding sensitive information in file paths, such as S3 bucket details.")
)
// Group is an entity for grouping rules
@@ -147,6 +151,12 @@ func NewGroup(cfg config.Group, qb datasource.QuerierBuilder, defaultInterval ti
g.EvalDelay = &cfg.EvalDelay.D
}
g.id = g.CreateID()
// strip file path from group.File after generated group ID when ruleStripFilePath is set,
// so it won't be exposed in logs and api responses
if *ruleStripFilePath {
_, filename := path.Split(g.File)
g.File = fmt.Sprintf("%d/%s", g.id, filename)
}
for _, h := range cfg.Headers {
g.Headers[h.Key] = h.Value
}
@@ -381,7 +391,9 @@ func (g *Group) Start(ctx context.Context, rw remotewrite.RWClient, rr datasourc
if len(g.Rules) < 1 {
g.metrics.iterationDuration.UpdateDuration(start)
g.mu.Lock()
g.LastEvaluation = start
g.mu.Unlock()
return ts
}
@@ -395,7 +407,9 @@ func (g *Group) Start(ctx context.Context, rw remotewrite.RWClient, rr datasourc
}
}
g.metrics.iterationDuration.UpdateDuration(start)
g.mu.Lock()
g.LastEvaluation = start
g.mu.Unlock()
return ts
}
@@ -405,11 +419,14 @@ func (g *Group) Start(ctx context.Context, rw remotewrite.RWClient, rr datasourc
g.mu.Unlock()
defer g.evalCancel()
realEvalTS := eval(evalCtx, evalTS)
// start the interval ticker before the first evaluation,
// so that the evaluation timestamps of groups with the `eval_offset` option are also aligned,
// see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10773
t := time.NewTicker(g.Interval)
defer t.Stop()
realEvalTS := eval(evalCtx, evalTS)
// restore the rules state after the first evaluation
// so only active alerts can be restored.
if rr != nil {

View File

@@ -742,3 +742,64 @@ func parseTime(t *testing.T, s string) time.Time {
}
return tt
}
func TestRuleStripFilePath(t *testing.T) {
configG := config.Group{
Name: "group",
File: "/var/local/test/rules.yaml",
Type: config.NewRawType("prometheus"),
Concurrency: 1,
Rules: []config.Rule{
{
ID: 0,
Alert: "alert",
},
{
ID: 1,
Record: "record",
},
}}
qb := &datasource.FakeQuerier{}
g := NewGroup(configG, qb, 1*time.Minute, nil)
gID := g.id
if g.File != "/var/local/test/rules.yaml" {
t.Fatalf("expected file path to be unchanged; got %q instead", g.File)
}
for _, r := range g.Rules {
if ar, ok := r.(*AlertingRule); ok {
if ar.File != "/var/local/test/rules.yaml" {
t.Fatalf("expected rule file path to be unchanged; got %q instead", ar.File)
}
}
if rr, ok := r.(*RecordingRule); ok {
if rr.File != "/var/local/test/rules.yaml" {
t.Fatalf("expected rule file path to be unchanged; got %q instead", rr.File)
}
}
}
oldRuleStripFilePath := *ruleStripFilePath
*ruleStripFilePath = true
defer func() {
*ruleStripFilePath = oldRuleStripFilePath
}()
g = NewGroup(configG, qb, 1*time.Minute, nil)
if g.File != fmt.Sprintf("%d/rules.yaml", gID) {
t.Fatalf("expected file path to be stripped to %q; got %q instead", fmt.Sprintf("%d/rules.yaml", gID), g.File)
}
for _, r := range g.Rules {
if ar, ok := r.(*AlertingRule); ok {
if ar.File != fmt.Sprintf("%d/rules.yaml", gID) {
t.Fatalf("expected rule file path to be stripped; got %q instead", ar.File)
}
}
if rr, ok := r.(*RecordingRule); ok {
if rr.File != fmt.Sprintf("%d/rules.yaml", gID) {
t.Fatalf("expected rule file path to be stripped; got %q instead", rr.File)
}
}
}
}
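The stripping tested above reduces a rule file path to `<groupID>/<filename>` via `path.Split`. A minimal sketch of that transformation, with an illustrative function name:

```go
package main

import (
	"fmt"
	"path"
)

// stripRuleFilePath replaces the directory part of a rule file path with
// the group ID, as -rule.stripFilePath does, so sensitive path details
// (e.g. S3 bucket names) never reach logs or API responses.
func stripRuleFilePath(groupID uint64, file string) string {
	_, filename := path.Split(file)
	return fmt.Sprintf("%d/%s", groupID, filename)
}

func main() {
	fmt.Println(stripRuleFilePath(42, "/var/local/test/rules.yaml")) // 42/rules.yaml
}
```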

View File

@@ -293,9 +293,11 @@ func (rr *RecordingRule) toTimeSeries(m datasource.Metric) prompb.TimeSeries {
}
// add extra labels configured by user
for k := range rr.Labels {
// do not add label with empty value, since it has no meaning.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984
// do not add label with empty value to the result, as it has no meaning:
// if the label already exists in the original query result, remove it to preserve compatibility with relabeling, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10766.
// otherwise, ignore the label, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984.
if rr.Labels[k] == "" {
m.DelLabel(k)
continue
}
existingLabel := promrelabel.GetLabelByName(m.Labels, k)

View File

@@ -163,11 +163,13 @@ func TestRecordingRule_Exec(t *testing.T) {
f(&RecordingRule{
Name: "job:foo",
Labels: map[string]string{
"source": "test",
"source": "test",
"empty_label": "", // this should be dropped
"pod": "", // this should remove the pod label from query result
},
}, [][]datasource.Metric{{
metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo"),
metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar", "source", "origin"),
metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo", "pod", "vmalert-0"),
metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar", "source", "origin", "pod", "vmalert-1"),
metricWithValueAndLabels(t, 1, "__name__", "baz", "job", "baz", "source", "test"),
}}, [][]prompb.TimeSeries{{
newTimeSeries([]float64{2}, []int64{ts.UnixNano()}, []prompb.Label{

View File

@@ -252,6 +252,9 @@ func (r *ApiRule) ExtendState() {
// ToAPI returns ApiGroup representation of g
func (g *Group) ToAPI() *ApiGroup {
if g == nil {
return &ApiGroup{}
}
g.mu.RLock()
defer g.mu.RUnlock()
ag := ApiGroup{

View File

@@ -402,6 +402,20 @@ func templateFuncs() textTpl.FuncMap {
return t, nil
},
// formatTime formats the given Unix timestamp with the provided layout.
// For example: {{ now | formatTime "2006-01-02T15:04:05Z07:00" }}
"formatTime": func(layout string, i any) (string, error) {
v, err := toFloat64(i)
if err != nil {
return "", fmt.Errorf("formatTime: %w", err)
}
if math.IsNaN(v) || math.IsInf(v, 0) {
return "", fmt.Errorf("formatTime: cannot convert %v to time", v)
}
t := timeFromUnixTimestamp(v).Time().UTC()
return t.Format(layout), nil
},
/* URLs */
// externalURL returns value of `external.url` flag

View File

@@ -6,6 +6,7 @@ import (
"strings"
"testing"
textTpl "text/template"
"time"
)
func TestTemplateFuncs_StringConversion(t *testing.T) {
@@ -103,6 +104,26 @@ func TestTemplateFuncs_Formatting(t *testing.T) {
f("humanizeTimestamp", 1679055557, "2023-03-17 12:19:17 +0000 UTC")
}
func TestTemplateFuncs_FormatTime(t *testing.T) {
funcs := templateFuncs()
formatTime := funcs["formatTime"].(func(layout string, i any) (string, error))
f := func(layout string, input any, expected string) {
t.Helper()
result, err := formatTime(layout, input)
if err != nil {
t.Fatalf("unexpected error for formatTime(%q, %v): %s", layout, input, err)
}
if result != expected {
t.Fatalf("unexpected result for formatTime(%q, %v); got\n%s\nwant\n%s", layout, input, result, expected)
}
}
f(time.RFC3339, float64(1679055557), "2023-03-17T12:19:17Z")
f("2006-01-02T15:04:05", int64(1679055557), "2023-03-17T12:19:17")
f(time.RFC822, int(1679055557), "17 Mar 23 12:19 UTC")
}
func mkTemplate(current, replacement any) textTemplate {
tmpl := textTemplate{}
if current != nil {

View File

@@ -52,7 +52,13 @@ var (
"alert": rule.TypeAlerting,
"record": rule.TypeRecording,
}
ruleStates = []string{"ok", "nomatch", "inactive", "firing", "pending", "unhealthy"}
// The "recovering", "noData", "normal", "error" states are used by Grafana.
// Ignore "recovering" since it is not currently acknowledged by vmalert,
// treat "noData" as an alias for "nomatch",
// treat "normal" as an alias for "inactive",
// treat "error" as an alias for "unhealthy"
ruleStates = []string{"ok", "nomatch", "inactive", "firing", "pending", "unhealthy", "recovering", "noData", "normal", "error"}
)
type requestHandler struct {
@@ -363,6 +369,15 @@ func newRulesFilter(r *http.Request) (*rulesFilter, *httpserver.ErrorWithStatusC
if !slices.Contains(ruleStates, v) {
return nil, errResponse(fmt.Errorf(`invalid parameter "state": contains not supported value %q`, v), http.StatusBadRequest)
}
// Replace grafana states with supported internal states
switch v {
case "noData":
v = "nomatch"
case "normal":
v = "inactive"
case "error":
v = "unhealthy"
}
rf.states = append(rf.states, v)
}
}
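The switch above normalizes Grafana's state names into vmalert's internal ones before filtering. Extracted into a standalone function (the name is illustrative), the mapping looks like this:

```go
package main

import "fmt"

// normalizeRuleState maps Grafana's rule-state names onto vmalert's
// internal ones, as done in newRulesFilter; other states pass through.
func normalizeRuleState(s string) string {
	switch s {
	case "noData":
		return "nomatch"
	case "normal":
		return "inactive"
	case "error":
		return "unhealthy"
	}
	return s
}

func main() {
	for _, s := range []string{"noData", "normal", "error", "firing"} {
		fmt.Printf("%s -> %s\n", s, normalizeRuleState(s))
	}
}
```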

View File

@@ -362,40 +362,62 @@ func (up *URLPrefix) setLoadBalancingPolicy(loadBalancingPolicy string) error {
}
type backendURLs struct {
healthChecksContext context.Context
healthChecksCancel func()
healthChecksWG sync.WaitGroup
bhc backendHealthCheck
bus []*backendURL
}
type backendHealthCheck struct {
ctx context.Context
// mu protects fields below
cancel func()
mu sync.Mutex
isStopped bool
wg sync.WaitGroup
}
func (bhc *backendHealthCheck) run(hc func()) {
bhc.mu.Lock()
defer bhc.mu.Unlock()
if bhc.isStopped {
return
}
bhc.wg.Go(hc)
}
func (bhc *backendHealthCheck) stop() {
bhc.mu.Lock()
bhc.cancel()
bhc.isStopped = true
bhc.mu.Unlock()
bhc.wg.Wait()
}
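The run/stop pair above closes a classic race: without the `isStopped` check, a goroutine launched after `stop()` has begun waiting would make `wg.Wait()` unreliable. A sketch of the same pattern using the long-standing `wg.Add`/`wg.Done` idiom (the diff uses the newer `wg.Go` helper):

```go
package main

import (
	"fmt"
	"sync"
)

// stoppableRunner mirrors backendHealthCheck: run() refuses to start new
// work once stop() has been called, so stop() can safely Wait() without
// racing against late goroutine launches.
type stoppableRunner struct {
	mu      sync.Mutex
	stopped bool
	wg      sync.WaitGroup
}

// run starts task in a goroutine and reports whether it was started.
func (r *stoppableRunner) run(task func()) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.stopped {
		return false
	}
	r.wg.Add(1)
	go func() {
		defer r.wg.Done()
		task()
	}()
	return true
}

// stop forbids new tasks and waits for all running ones to finish.
func (r *stoppableRunner) stop() {
	r.mu.Lock()
	r.stopped = true
	r.mu.Unlock()
	r.wg.Wait()
}

func main() {
	var r stoppableRunner
	started := r.run(func() { fmt.Println("health check ran") })
	r.stop()
	fmt.Println("started before stop:", started)
	fmt.Println("started after stop:", r.run(func() {}))
}
```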
func newBackendURLs() *backendURLs {
ctx, cancel := context.WithCancel(context.Background())
return &backendURLs{
healthChecksContext: ctx,
healthChecksCancel: cancel,
bhc: backendHealthCheck{
ctx: ctx,
cancel: cancel,
},
}
}
func (bus *backendURLs) add(u *url.URL) {
bus.bus = append(bus.bus, &backendURL{
url: u,
healthCheckContext: bus.healthChecksContext,
healthCheckWG: &bus.healthChecksWG,
hasPlaceHolders: hasAnyPlaceholders(u),
url: u,
bhc: &bus.bhc,
hasPlaceHolders: hasAnyPlaceholders(u),
})
}
func (bus *backendURLs) stopHealthChecks() {
bus.healthChecksCancel()
bus.healthChecksWG.Wait()
bus.bhc.stop()
}
type backendURL struct {
broken atomic.Bool
healthCheckContext context.Context
healthCheckWG *sync.WaitGroup
bhc *backendHealthCheck
concurrentRequests atomic.Int32
@@ -410,7 +432,7 @@ func (bu *backendURL) isBroken() bool {
func (bu *backendURL) setBroken() {
if bu.broken.CompareAndSwap(false, true) {
bu.healthCheckWG.Go(func() {
bu.bhc.run(func() {
bu.runHealthCheck()
bu.broken.Store(false)
})
@@ -432,11 +454,11 @@ func (bu *backendURL) runHealthCheck() {
case <-t.C:
// Verify network connectivity via TCP dial before marking backend healthy.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9997
ctx, cancel := context.WithTimeout(bu.healthCheckContext, time.Second)
ctx, cancel := context.WithTimeout(bu.bhc.ctx, time.Second)
c, err := netutil.Dialer.DialContext(ctx, "tcp", addr)
cancel()
if err != nil {
if errors.Is(bu.healthCheckContext.Err(), context.Canceled) {
if errors.Is(bu.bhc.ctx.Err(), context.Canceled) {
return
}
logger.Warnf("ignoring the backend at %s for %s because of dial error: %s", addr, *failTimeout, err)
@@ -445,7 +467,7 @@ func (bu *backendURL) runHealthCheck() {
_ = c.Close()
return
case <-bu.healthCheckContext.Done():
case <-bu.bhc.ctx.Done():
return
}
}
@@ -588,6 +610,7 @@ func areEqualBackendURLs(a, b []*backendURL) bool {
}
// getFirstAvailableBackendURL returns the first available backendURL, which isn't broken.
// If all backendURLs are broken, then returns the first backendURL.
//
// backendURL.put() must be called on the returned backendURL after the request is complete.
func getFirstAvailableBackendURL(bus []*backendURL) *backendURL {
@@ -606,21 +629,22 @@ func getFirstAvailableBackendURL(bus []*backendURL) *backendURL {
return bu
}
}
return nil
// All backend URLs are unavailable, so return the first one; this may increase the success rate of the requests.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10837#issuecomment-4307050980.
bu.get()
return bu
}
// getLeastLoadedBackendURL returns a non-broken backendURL with the lowest number of concurrent requests.
// If all backendURLs are broken, then returns the first backendURL.
//
// backendURL.put() must be called on the returned backendURL after the request is complete.
func getLeastLoadedBackendURL(bus []*backendURL, atomicCounter *atomic.Uint32) *backendURL {
firstBu := bus[0]
if len(bus) == 1 {
// Fast path - return the only backend url.
bu := bus[0]
if bu.isBroken() {
return nil
}
bu.get()
return bu
firstBu.get()
return firstBu
}
// Slow path - select other backend urls.
@@ -658,7 +682,10 @@ func getLeastLoadedBackendURL(bus []*backendURL, atomicCounter *atomic.Uint32) *
}
buMin := bus[buMinIdx]
if buMin.isBroken() {
return nil
// If all backendURLs are broken, then returns the first backendURL.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10837#issuecomment-4307050980.
firstBu.get()
return firstBu
}
buMin.get()
atomicCounter.CompareAndSwap(n+1, buMinIdx+1)

View File

@@ -1031,6 +1031,33 @@ func TestLogRequest(t *testing.T) {
f("foo", 404, 10*time.Millisecond, `access_log request_host="localhost:8080" request_uri="" status_code=404 remote_addr="" user_agent="" referer="" duration_ms=10 username="foo"`)
}
func TestGetFirstAvailableBackend(t *testing.T) {
f := func(broken []bool, expectedIdx int) {
t.Helper()
bus := make([]*backendURL, len(broken))
for i := range broken {
bus[i] = &backendURL{
url: &url.URL{Host: fmt.Sprintf("server-%d", i)},
}
bus[i].broken.Store(broken[i])
}
bu := getFirstAvailableBackendURL(bus)
if bu == nil {
t.Fatalf("unexpected nil backend")
}
if bu.url.Host != fmt.Sprintf("server-%d", expectedIdx) {
t.Fatalf("unexpected backend, expected server-%d, got %s", expectedIdx, bu.url.Host)
}
}
f([]bool{false, false, false}, 0)
f([]bool{true, true, false}, 2)
// all backend are broken, then return the first one.
f([]bool{true, true, true}, 0)
f([]bool{true}, 0)
}
func getRegexs(paths []string) []*Regex {
var sps []*Regex
for _, path := range paths {

View File

@@ -51,7 +51,7 @@ var (
"This allows reducing the consumption of backend resources when processing requests from clients connected via slow networks. "+
"Set to 0 to disable request buffering. See https://docs.victoriametrics.com/victoriametrics/vmauth/#request-body-buffering")
maxRequestBodySizeToRetry = flagutil.NewBytes("maxRequestBodySizeToRetry", 16*1024, "The maximum request body size to buffer in memory for potential retries at other backends. "+
"Request bodies larger than this size cannot be retried if the backend fails. Zero or negative value disables request body buffering and retries. "+
"Request bodies larger than this size cannot be retried if the backend fails. Zero or negative value disables retries. "+
"See also -requestBufferSize")
maxConcurrentRequests = flag.Int("maxConcurrentRequests", 1000, "The maximum number of concurrent requests vmauth can process simultaneously. "+
@@ -357,6 +357,7 @@ func bufferRequestBody(ctx context.Context, r io.ReadCloser, userName string) (i
maxBufSize := max(requestBufferSize.IntN(), maxRequestBodySizeToRetry.IntN())
if maxBufSize <= 0 {
// Request buffering is disabled.
return r, nil
}
@@ -480,6 +481,9 @@ func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url
canRetry := !bbOK || bb.canRetry()
res, err := ui.rt.RoundTrip(req)
if err == nil {
defer func() { _ = res.Body.Close() }()
}
if errors.Is(r.Context().Err(), context.Canceled) {
// Do not retry canceled requests.
@@ -549,7 +553,6 @@ func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url
w.WriteHeader(res.StatusCode)
err = copyStreamToClient(w, res.Body)
_ = res.Body.Close()
if errors.Is(r.Context().Err(), context.Canceled) {
// Do not retry canceled requests.
@@ -763,7 +766,7 @@ var concurrentRequestsLimitReached = metrics.NewCounter("vmauth_concurrent_reque
func usage() {
const s = `
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics.
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics components or any other HTTP backends.
See the docs at https://docs.victoriametrics.com/victoriametrics/vmauth/ .
`
@@ -792,10 +795,11 @@ func handleConcurrencyLimitError(w http.ResponseWriter, r *http.Request, err err
}
// bufferedBody serves two purposes:
// 1. Enables request retries when the body size does not exceed maxBodySize
// by fully buffering the body in memory.
// 2. Prevents slow clients from reducing effective server capacity by
// buffering the request body before acquiring a per-user concurrency slot.
//
// 1. It enables request retries when the request body size does not exceed maxBufSize
// by fully buffering the request body in memory.
// 2. It prevents slow clients from reducing effective server capacity
// by buffering the request body before acquiring a per-user concurrency slot.
//
// See bufferRequestBody for details on how bufferedBody is used.
type bufferedBody struct {
@@ -819,7 +823,7 @@ func newBufferedBody(r io.ReadCloser, buf []byte, maxBufSize int) *bufferedBody
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8051
if len(buf) < maxBufSize {
// Read the full request body into buf.
// The full request body has been already read into buf.
r = nil
}
@@ -832,7 +836,7 @@ func newBufferedBody(r io.ReadCloser, buf []byte, maxBufSize int) *bufferedBody
// Read implements io.Reader interface.
func (bb *bufferedBody) Read(p []byte) (int, error) {
if bb.cannotRetry {
return 0, fmt.Errorf("cannot read already closed body")
return 0, fmt.Errorf("cannot read already closed request body")
}
if bb.bufOffset < len(bb.buf) {
n := copy(p, bb.buf[bb.bufOffset:])
@@ -846,14 +850,18 @@ func (bb *bufferedBody) Read(p []byte) (int, error) {
}
func (bb *bufferedBody) canRetry() bool {
return bb.r == nil
if bb.r != nil {
return false
}
maxRetrySize := maxRequestBodySizeToRetry.IntN()
return len(bb.buf) == 0 || (maxRetrySize > 0 && len(bb.buf) <= maxRetrySize)
}
// Close implements io.Closer interface.
func (bb *bufferedBody) Close() error {
bb.resetReader()
bb.cannotRetry = !bb.canRetry()
if bb.r != nil {
bb.cannotRetry = true
return bb.r.Close()
}
return nil

View File

@@ -19,6 +19,7 @@ import (
"os"
"path/filepath"
"sort"
"strconv"
"strings"
"sync/atomic"
"testing"
@@ -1831,7 +1832,7 @@ func (r *mockBody) Read(p []byte) (n int, err error) {
}
func TestBufferedBody_RetrySuccess(t *testing.T) {
f := func(s string, maxBodySize int) {
f := func(s string, maxSizeToRetry, bufferSize int) {
t.Helper()
defaultRequestBufferSize := requestBufferSize.String()
@@ -1840,7 +1841,7 @@ func TestBufferedBody_RetrySuccess(t *testing.T) {
t.Fatalf("cannot reset requestBufferSize: %s", err)
}
}()
if err := requestBufferSize.Set(fmt.Sprintf("%d", maxBodySize)); err != nil {
if err := requestBufferSize.Set(strconv.Itoa(bufferSize)); err != nil {
t.Fatalf("cannot set requestBufferSize: %s", err)
}
@@ -1850,7 +1851,7 @@ func TestBufferedBody_RetrySuccess(t *testing.T) {
t.Fatalf("cannot reset maxRequestBodySizeToRetry: %s", err)
}
}()
if err := maxRequestBodySizeToRetry.Set("0"); err != nil {
if err := maxRequestBodySizeToRetry.Set(strconv.Itoa(maxSizeToRetry)); err != nil {
t.Fatalf("cannot set maxRequestBodySizeToRetry: %s", err)
}
@@ -1879,16 +1880,20 @@ func TestBufferedBody_RetrySuccess(t *testing.T) {
}
}
f("", 0)
f("", -1)
f("", 100)
f("foo", 100)
f("foobar", 100)
f(newTestString(1000), 1001)
f("", 0, 2000)
f("", 0, 0)
f("", -1, 2000)
f("", 100, 2000)
f("foo", 100, 2000)
f("foobar", 100, 2000)
f("foobar", 100, 0)
f("foobar", 100, -1)
f(newTestString(1000), 1001, 2000)
f(newTestString(1000), 1001, 500)
}
func TestBufferedBody_RetrySuccessPartialRead(t *testing.T) {
f := func(s string, maxBodySize int) {
f := func(s string, maxSizeToRetry, bufferSize int) {
t.Helper()
// Check the case with partial read
@@ -1898,7 +1903,7 @@ func TestBufferedBody_RetrySuccessPartialRead(t *testing.T) {
t.Fatalf("cannot reset requestBufferSize: %s", err)
}
}()
if err := requestBufferSize.Set(fmt.Sprintf("%d", maxBodySize)); err != nil {
if err := requestBufferSize.Set(strconv.Itoa(bufferSize)); err != nil {
t.Fatalf("cannot set requestBufferSize: %s", err)
}
@@ -1908,7 +1913,7 @@ func TestBufferedBody_RetrySuccessPartialRead(t *testing.T) {
t.Fatalf("cannot reset maxRequestBodySizeToRetry: %s", err)
}
}()
if err := maxRequestBodySizeToRetry.Set("0"); err != nil {
if err := maxRequestBodySizeToRetry.Set(strconv.Itoa(maxSizeToRetry)); err != nil {
t.Fatalf("cannot set maxRequestBodySizeToRetry: %s", err)
}
@@ -1952,16 +1957,20 @@ func TestBufferedBody_RetrySuccessPartialRead(t *testing.T) {
}
}
f("", 0)
f("", -1)
f("", 100)
f("foo", 100)
f("foobar", 100)
f(newTestString(1000), 1001)
f("", 0, 2000)
f("", 0, 0)
f("", -1, 2000)
f("", 100, 2000)
f("foo", 100, 2000)
f("foobar", 100, 2000)
f("foobar", 100, 0)
f("foobar", 100, -1)
f(newTestString(1000), 1001, 2000)
f(newTestString(1000), 1001, 500)
}
func TestBufferedBody_RetryFailureTooBigBody(t *testing.T) {
f := func(s string, maxBodySize int) {
f := func(s string, maxSizeToRetry, bufferSize int) {
t.Helper()
defaultRequestBufferSize := requestBufferSize.String()
@@ -1970,7 +1979,7 @@ func TestBufferedBody_RetryFailureTooBigBody(t *testing.T) {
t.Fatalf("cannot reset requestBufferSize: %s", err)
}
}()
if err := requestBufferSize.Set("0"); err != nil {
if err := requestBufferSize.Set(strconv.Itoa(bufferSize)); err != nil {
t.Fatalf("cannot set requestBufferSize: %s", err)
}
@@ -1980,7 +1989,7 @@ func TestBufferedBody_RetryFailureTooBigBody(t *testing.T) {
t.Fatalf("cannot reset maxRequestBodySizeToRetry: %s", err)
}
}()
if err := maxRequestBodySizeToRetry.Set(fmt.Sprintf("%d", maxBodySize)); err != nil {
if err := maxRequestBodySizeToRetry.Set(strconv.Itoa(maxSizeToRetry)); err != nil {
t.Fatalf("cannot set maxRequestBodySizeToRetry: %s", err)
}
@@ -2025,12 +2034,17 @@ func TestBufferedBody_RetryFailureTooBigBody(t *testing.T) {
}
const maxBodySize = 1000
f(newTestString(maxBodySize+1), maxBodySize)
f(newTestString(2*maxBodySize), maxBodySize)
f(newTestString(maxBodySize+1), 0, 2*maxBodySize)
f(newTestString(maxBodySize+1), -1, 2*maxBodySize)
f(newTestString(maxBodySize+1), maxBodySize, 0)
f(newTestString(maxBodySize+1), maxBodySize, -1)
f(newTestString(maxBodySize+1), maxBodySize, maxBodySize)
f(newTestString(maxBodySize+1), maxBodySize, 2*maxBodySize)
f(newTestString(2*maxBodySize), maxBodySize, 0)
}
func TestBufferedBody_RetryFailureZeroOrNegativeMaxBodySize(t *testing.T) {
f := func(s string, maxBodySize int) {
func TestBufferedBody_RetryDisabledByMaxRequestBodySizeToRetry(t *testing.T) {
f := func(s string, maxSizeToRetry, bufferSize int) {
t.Helper()
defaultRequestBufferSize := requestBufferSize.String()
@@ -2039,10 +2053,20 @@ func TestBufferedBody_RetryFailureZeroOrNegativeMaxBodySize(t *testing.T) {
t.Fatalf("cannot reset requestBufferSize: %s", err)
}
}()
if err := requestBufferSize.Set(fmt.Sprintf("%d", maxBodySize)); err != nil {
if err := requestBufferSize.Set(strconv.Itoa(bufferSize)); err != nil {
t.Fatalf("cannot set requestBufferSize: %s", err)
}
defaultMaxRequestBodySizeToRetry := maxRequestBodySizeToRetry.String()
defer func() {
if err := maxRequestBodySizeToRetry.Set(defaultMaxRequestBodySizeToRetry); err != nil {
t.Fatalf("cannot reset maxRequestBodySizeToRetry: %s", err)
}
}()
if err := maxRequestBodySizeToRetry.Set(strconv.Itoa(maxSizeToRetry)); err != nil {
t.Fatalf("cannot set maxRequestBodySizeToRetry: %s", err)
}
ctx := context.Background()
rb, err := bufferRequestBody(ctx, io.NopCloser(bytes.NewBufferString(s)), "foo")
if err != nil {
@@ -2051,8 +2075,8 @@ func TestBufferedBody_RetryFailureZeroOrNegativeMaxBodySize(t *testing.T) {
bb, ok := rb.(*bufferedBody)
canRetry := !ok || bb.canRetry()
if !canRetry {
t.Fatalf("canRetry() must return true before reading anything")
if canRetry {
t.Fatalf("canRetry() must return false before reading anything")
}
data, err := io.ReadAll(rb)
if err != nil {
@@ -2066,19 +2090,19 @@ func TestBufferedBody_RetryFailureZeroOrNegativeMaxBodySize(t *testing.T) {
}
data, err = io.ReadAll(rb)
if err != nil {
t.Fatalf("unexpected error in io.ReadAll: %s", err)
if err == nil {
t.Fatalf("expecting non-nil error")
}
if string(data) != s {
t.Fatalf("unexpected data read\ngot\n%s\nwant\n%s", data, s)
if len(data) != 0 {
t.Fatalf("unexpected non-empty data read: %q", data)
}
}
f("foobar", 0)
f(newTestString(1000), 0)
f("foobar", 0, 2048)
f(newTestString(1000), 0, 2048)
f("foobar", -1)
f(newTestString(1000), -1)
f("foobar", -1, 2048)
f(newTestString(1000), -1, 2048)
}
func newTestString(sLen int) string {


@@ -21,6 +21,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/pushmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/snapshot"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/snapshot/snapshotutil"
)


@@ -416,6 +416,16 @@ const (
promTemporaryDirPath = "prom-tmp-dir-path"
)
const (
thanosSnapshot = "thanos-snapshot"
thanosConcurrency = "thanos-concurrency"
thanosFilterTimeStart = "thanos-filter-time-start"
thanosFilterTimeEnd = "thanos-filter-time-end"
thanosFilterLabel = "thanos-filter-label"
thanosFilterLabelValue = "thanos-filter-label-value"
thanosAggrTypes = "thanos-aggr-types"
)
var (
promFlags = []cli.Flag{
&cli.StringFlag{
@@ -451,6 +461,43 @@ var (
Value: os.TempDir(),
},
}
thanosFlags = []cli.Flag{
&cli.StringFlag{
Name: thanosSnapshot,
Usage: "Path to Thanos snapshot directory containing raw and/or downsampled blocks.",
Required: true,
},
&cli.IntFlag{
Name: thanosConcurrency,
Usage: "Number of concurrently running snapshot readers",
Value: 1,
},
&cli.StringFlag{
Name: thanosFilterTimeStart,
Usage: "The time filter in RFC3339 format to select timeseries with timestamp equal to or higher than the provided value. E.g. '2020-01-01T20:07:00Z'",
},
&cli.StringFlag{
Name: thanosFilterTimeEnd,
Usage: "The time filter in RFC3339 format to select timeseries with timestamp equal to or lower than the provided value. E.g. '2020-01-01T20:07:00Z'",
},
&cli.StringFlag{
Name: thanosFilterLabel,
Usage: "Thanos label name to filter timeseries by. E.g. '__name__' will filter timeseries by name.",
},
&cli.StringFlag{
Name: thanosFilterLabelValue,
Usage: fmt.Sprintf("Thanos regular expression to filter label from %q flag.", thanosFilterLabel),
Value: ".*",
},
&cli.StringSliceFlag{
Name: thanosAggrTypes,
Usage: "Aggregate types to import from Thanos downsampled blocks. Supported values: count, sum, min, max, counter. " +
"Each aggregate will be imported as a separate metric with the aggregate type as suffix (e.g., metric_name:5m:count). " +
"If not specified, all aggregate types will be imported from downsampled blocks.",
Value: nil,
},
}
)
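Taken together, these flags make a thanos-mode invocation look roughly like the following sketch. The snapshot path and the VictoriaMetrics endpoint are placeholders; `--vm-addr` comes from the shared vmFlags, and repeating `--thanos-aggr-types` to pass multiple values is the usual urfave/cli StringSliceFlag convention:

```shell
# Illustrative invocation only: paths and endpoint are placeholders.
./vmctl thanos \
  --thanos-snapshot=/path/to/thanos/snapshot \
  --thanos-concurrency=4 \
  --thanos-filter-time-start='2020-01-01T20:07:00Z' \
  --thanos-aggr-types=count --thanos-aggr-types=sum \
  --vm-addr=http://localhost:8428
```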
const (


@@ -27,6 +27,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/opentsdb"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/thanos"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
@@ -285,6 +286,7 @@ func main() {
if err != nil {
return fmt.Errorf("failed to create prometheus client: %s", err)
}
pp := prometheusProcessor{
cl: cl,
im: importer,
@@ -294,6 +296,59 @@ func main() {
return pp.run(ctx)
},
},
{
Name: "thanos",
Usage: "Migrate time series from Thanos blocks (supports raw and downsampled data)",
Flags: mergeFlags(globalFlags, thanosFlags, vmFlags),
Before: beforeFn,
Action: func(c *cli.Context) error {
fmt.Println("Thanos import mode")
vmCfg, err := initConfigVM(c)
if err != nil {
return fmt.Errorf("failed to init VM configuration: %s", err)
}
importer, err = vm.NewImporter(ctx, vmCfg)
if err != nil {
return fmt.Errorf("failed to create VM importer: %s", err)
}
thanosCfg := thanos.Config{
Snapshot: c.String(thanosSnapshot),
Filter: thanos.Filter{
TimeMin: c.String(thanosFilterTimeStart),
TimeMax: c.String(thanosFilterTimeEnd),
Label: c.String(thanosFilterLabel),
LabelValue: c.String(thanosFilterLabelValue),
},
}
cl, err := thanos.NewClient(thanosCfg)
if err != nil {
return fmt.Errorf("failed to create thanos client: %s", err)
}
var aggrTypes []thanos.AggrType
if aggrTypesStr := c.StringSlice(thanosAggrTypes); len(aggrTypesStr) > 0 {
for _, typeStr := range aggrTypesStr {
aggrType, err := thanos.ParseAggrType(typeStr)
if err != nil {
return fmt.Errorf("failed to parse aggregate type %q: %s", typeStr, err)
}
aggrTypes = append(aggrTypes, aggrType)
}
}
tp := thanosProcessor{
cl: cl,
im: importer,
cc: c.Int(thanosConcurrency),
isVerbose: c.Bool(globalVerbose),
aggrTypes: aggrTypes,
}
return tp.run(ctx)
},
},
{
Name: "vm-native",
Usage: "Migrate time series between VictoriaMetrics installations",


@@ -8,10 +8,10 @@ import (
"time"
vmetrics "github.com/VictoriaMetrics/metrics"
"github.com/cheggaaa/pb/v3"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/opentsdb"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
"github.com/cheggaaa/pb/v3"
)
type otsdbProcessor struct {
@@ -89,9 +89,6 @@ func (op *otsdbProcessor) run(ctx context.Context) error {
// we're going to make serieslist * queryRanges queries, so we should represent that in the progress bar
otsdbSeriesTotal.Add(len(serieslist) * queryRanges)
bar := pb.StartNew(len(serieslist) * queryRanges)
defer func(bar *pb.ProgressBar) {
bar.Finish()
}(bar)
var wg sync.WaitGroup
for range op.otsdbcc {
wg.Go(func() {
@@ -106,41 +103,22 @@ func (op *otsdbProcessor) run(ctx context.Context) error {
}
})
}
/*
Loop through all series for this metric, processing all retentions and time ranges
requested. This loop is our primary "collect data from OpenTSDB loop" and should
be async, sending data to VictoriaMetrics over time.
The idea with having the select at the inner-most loop is to ensure quick
short-circuiting on error.
*/
runErr := op.sendQueries(ctx, serieslist, seriesCh, errCh, startTime)
for _, series := range serieslist {
for _, rt := range op.oc.Retentions {
for _, tr := range rt.QueryRanges {
select {
case otsdbErr := <-errCh:
return fmt.Errorf("opentsdb error: %s", otsdbErr)
case vmErr := <-op.im.Errors():
otsdbErrorsTotal.Inc()
return fmt.Errorf("import process failed: %s", wrapErr(vmErr, op.isVerbose))
case seriesCh <- queryObj{
Tr: tr, StartTime: startTime,
Series: series, Rt: opentsdb.RetentionMeta{
FirstOrder: rt.FirstOrder, SecondOrder: rt.SecondOrder, AggTime: rt.AggTime}}:
}
}
}
}
// Drain channels per metric
// Always drain channels and wait for workers to prevent goroutine leaks
close(seriesCh)
wg.Wait()
close(errCh)
// check for any lingering errors on the query side
for otsdbErr := range errCh {
return fmt.Errorf("import process failed: \n%s", otsdbErr)
if runErr == nil {
runErr = fmt.Errorf("import process failed: \n%s", otsdbErr)
}
}
bar.Finish()
if runErr != nil {
return runErr
}
log.Print(op.im.Stats())
}
op.im.Close()
@@ -155,6 +133,34 @@ func (op *otsdbProcessor) run(ctx context.Context) error {
return nil
}
// sendQueries iterates over all series and retention ranges, sending queries to workers.
// It returns early if ctx is canceled or an error is received.
func (op *otsdbProcessor) sendQueries(ctx context.Context, serieslist []opentsdb.Meta, seriesCh chan<- queryObj, errCh <-chan error, startTime int64) error {
for _, series := range serieslist {
for _, rt := range op.oc.Retentions {
for _, tr := range rt.QueryRanges {
select {
case <-ctx.Done():
return fmt.Errorf("context canceled: %s", ctx.Err())
case otsdbErr := <-errCh:
otsdbErrorsTotal.Inc()
return fmt.Errorf("opentsdb error: %s", otsdbErr)
case vmErr := <-op.im.Errors():
return fmt.Errorf("import process failed: %s", wrapErr(vmErr, op.isVerbose))
case seriesCh <- queryObj{
Tr: tr, StartTime: startTime,
Series: series, Rt: opentsdb.RetentionMeta{
FirstOrder: rt.FirstOrder,
SecondOrder: rt.SecondOrder,
AggTime: rt.AggTime,
}}:
}
}
}
}
return nil
}
func (op *otsdbProcessor) do(s queryObj) error {
start := s.StartTime - s.Tr.Start
end := s.StartTime - s.Tr.End
@@ -163,6 +169,7 @@ func (op *otsdbProcessor) do(s queryObj) error {
return fmt.Errorf("failed to collect data for %v in %v:%v :: %v", s.Series, s.Rt, s.Tr, err)
}
if len(data.Timestamps) < 1 || len(data.Values) < 1 {
log.Printf("no data found for %v in %v:%v...skipping", s.Series, s.Rt, s.Tr)
return nil
}
labels := make([]vm.LabelPair, 0, len(data.Tags))


@@ -108,10 +108,10 @@ func (c Client) FindMetrics(q string) ([]string, error) {
if err != nil {
return nil, fmt.Errorf("failed to send GET request to %q: %s", q, err)
}
defer func() { _ = resp.Body.Close() }()
if resp.StatusCode != 200 {
return nil, fmt.Errorf("bad return from OpenTSDB: %d: %v", resp.StatusCode, resp)
}
defer func() { _ = resp.Body.Close() }()
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("could not retrieve metric data from %q: %s", q, err)
@@ -130,12 +130,12 @@ func (c Client) FindSeries(metric string) ([]Meta, error) {
q := fmt.Sprintf("%s/api/search/lookup?m=%s&limit=%d", c.Addr, metric, c.Limit)
resp, err := c.c.Get(q)
if err != nil {
return nil, fmt.Errorf("failed to set GET request to %q: %s", q, err)
return nil, fmt.Errorf("failed to send GET request to %q: %s", q, err)
}
defer func() { _ = resp.Body.Close() }()
if resp.StatusCode != 200 {
return nil, fmt.Errorf("bad return from OpenTSDB: %d: %v", resp.StatusCode, resp)
}
defer func() { _ = resp.Body.Close() }()
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("could not retrieve series data from %q: %s", q, err)
@@ -185,6 +185,7 @@ func (c Client) GetData(series Meta, rt RetentionMeta, start int64, end int64, m
if err != nil {
return Metric{}, fmt.Errorf("failed to send GET request to %q: %s", q, err)
}
defer func() { _ = resp.Body.Close() }()
/*
There are three potential failures here, none of which should kill the entire
migration run:
@@ -196,7 +197,6 @@ func (c Client) GetData(series Meta, rt RetentionMeta, start int64, end int64, m
log.Printf("bad response code from OpenTSDB query %v for %q...skipping", resp.StatusCode, q)
return Metric{}, nil
}
defer func() { _ = resp.Body.Close() }()
body, err := io.ReadAll(resp.Body)
if err != nil {
log.Println("couldn't read response body from OpenTSDB query...skipping")
@@ -239,27 +239,20 @@ func (c Client) GetData(series Meta, rt RetentionMeta, start int64, end int64, m
In all "bad" cases, we don't end the migration, we just don't process that particular message
*/
if len(output) < 1 {
// no results returned...return an empty object without error
return Metric{}, nil
}
if len(output) > 1 {
// multiple series returned for a single query. We can't process this right, so...
return Metric{}, nil
return Metric{}, fmt.Errorf("unexpected number of series returned: %d for query %q; expected 1", len(output), q)
}
if len(output[0].AggregateTags) > 0 {
// This failure means we've suppressed potential series somehow...
return Metric{}, nil
return Metric{}, fmt.Errorf("aggregate tags %v present in response for query %q; series may be suppressed", output[0].AggregateTags, q)
}
data := Metric{}
data.Metric = output[0].Metric
data.Tags = output[0].Tags
/*
We evaluate data for correctness before formatting the actual values
to skip a little bit of time if the series has invalid formatting
*/
data, err = modifyData(data, c.Normalize)
if err != nil {
return Metric{}, nil
return Metric{}, fmt.Errorf("failed to convert metric data for query %q: %w", q, err)
}
/*


@@ -32,7 +32,7 @@ func convertDuration(duration string) (time.Duration, error) {
var err error
var timeValue int
if strings.HasSuffix(duration, "y") {
timeValue, err = strconv.Atoi(strings.Trim(duration, "y"))
timeValue, err = strconv.Atoi(strings.TrimSuffix(duration, "y"))
if err != nil {
return 0, fmt.Errorf("invalid time range: %q", duration)
}
@@ -42,7 +42,7 @@ func convertDuration(duration string) (time.Duration, error) {
return 0, fmt.Errorf("invalid time range: %q", duration)
}
} else if strings.HasSuffix(duration, "w") {
timeValue, err = strconv.Atoi(strings.Trim(duration, "w"))
timeValue, err = strconv.Atoi(strings.TrimSuffix(duration, "w"))
if err != nil {
return 0, fmt.Errorf("invalid time range: %q", duration)
}
@@ -52,7 +52,7 @@ func convertDuration(duration string) (time.Duration, error) {
return 0, fmt.Errorf("invalid time range: %q", duration)
}
} else if strings.HasSuffix(duration, "d") {
timeValue, err = strconv.Atoi(strings.Trim(duration, "d"))
timeValue, err = strconv.Atoi(strings.TrimSuffix(duration, "d"))
if err != nil {
return 0, fmt.Errorf("invalid time range: %q", duration)
}
@@ -95,6 +95,9 @@ func convertRetention(retention string, offset int64, msecTime bool) (Retention,
if !msecTime {
queryLength = queryLength / 1000
}
if queryLength <= 0 {
return Retention{}, fmt.Errorf("ttl %q resolves to non-positive query range %d; use a larger duration", chunks[2], queryLength)
}
queryRange := queryLength
// bump by the offset so we don't look at empty ranges any time offset > ttl
queryLength += offset
@@ -138,16 +141,29 @@ func convertRetention(retention string, offset int64, msecTime bool) (Retention,
2. we discover the actual size of each "chunk"
This is second division step
*/
querySize = int64(queryRange / (queryRange / (rowLength * 4)))
divisor := queryRange / (rowLength * 4)
if divisor == 0 {
querySize = queryRange
} else {
querySize = queryRange / divisor
}
} else {
/*
Unless the aggTime (how long a range of data we're requesting per individual point)
is greater than the row size. Then we'll need to use that to determine
how big each individual query should be
*/
querySize = int64(queryRange / (queryRange / (aggTime * 4)))
divisor := queryRange / (aggTime * 4)
if divisor == 0 {
querySize = queryRange
} else {
querySize = queryRange / divisor
}
}
if querySize <= 0 {
return Retention{}, fmt.Errorf("computed non-positive querySize=%d for retention %q; check parameters", querySize, retention)
}
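The divisor guard protects against a runtime panic: `queryRange / (queryRange / unit)` divides by zero whenever the unit (rowLength*4 or aggTime*4) exceeds queryRange. A self-contained sketch of the guarded computation (function name illustrative):

```go
package main

import "fmt"

// chunkSize mirrors the guarded computation from convertRetention:
// queryRange / (queryRange / unit) panics with "integer divide by zero"
// when unit > queryRange, so the divisor is checked first.
func chunkSize(queryRange, unit int64) int64 {
	divisor := queryRange / unit
	if divisor == 0 {
		return queryRange // unit larger than the range: one chunk covers it all
	}
	return queryRange / divisor
}

func main() {
	fmt.Println(chunkSize(100, 40))  // divisor=2 -> 50
	fmt.Println(chunkSize(100, 400)) // divisor=0 -> 100, no panic
}
```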
var timeChunks []TimeRange
var i int64
for i = offset; i <= queryLength; i = i + querySize {


@@ -0,0 +1,233 @@
package thanos
import (
"encoding/binary"
"errors"
"fmt"
"github.com/prometheus/prometheus/tsdb/chunkenc"
)
// ChunkEncAggr is the top level encoding byte for the AggrChunk.
// It is defined by Thanos as 0xff to prevent collisions with Prometheus encodings.
const ChunkEncAggr = chunkenc.Encoding(0xff)
// AggrType represents an aggregation type in Thanos downsampled blocks.
type AggrType uint8
// AggrTypeNone indicates raw blocks with no aggregation.
// It is used as a sentinel to distinguish raw block processing from downsampled.
const AggrTypeNone AggrType = 255
// Valid aggregation types matching Thanos definitions.
const (
AggrCount AggrType = iota
AggrSum
AggrMin
AggrMax
AggrCounter
)
// AllAggrTypes contains all supported aggregation types.
var AllAggrTypes = []AggrType{AggrCount, AggrSum, AggrMin, AggrMax, AggrCounter}
func (t AggrType) String() string {
switch t {
case AggrCount:
return "count"
case AggrSum:
return "sum"
case AggrMin:
return "min"
case AggrMax:
return "max"
case AggrCounter:
return "counter"
}
return "<unknown>"
}
// ParseAggrType parses aggregate type from string.
func ParseAggrType(s string) (AggrType, error) {
switch s {
case "count":
return AggrCount, nil
case "sum":
return AggrSum, nil
case "min":
return AggrMin, nil
case "max":
return AggrMax, nil
case "counter":
return AggrCounter, nil
}
return 0, fmt.Errorf("unknown aggregate type: %q", s)
}
// ErrAggrNotExist is returned if a requested aggregation is not present in an AggrChunk.
var ErrAggrNotExist = errors.New("aggregate does not exist")
// AggrChunk is a chunk that is composed of a set of aggregates for the same underlying data.
// Not all aggregates must be present.
// This is a read-only implementation for decoding Thanos downsampled blocks.
type AggrChunk []byte
// IsAggrChunk checks if the encoding byte indicates this is an AggrChunk.
func IsAggrChunk(enc chunkenc.Encoding) bool {
return enc == ChunkEncAggr
}
// Get returns the sub-chunk for the given aggregate type if it exists.
func (c AggrChunk) Get(t AggrType) (chunkenc.Chunk, error) {
b := c[:]
var x []byte
for i := AggrType(0); i <= t; i++ {
l, n := binary.Uvarint(b)
if n < 1 {
return nil, errors.New("invalid size: failed to read uvarint")
}
if l > uint64(len(b[n:])) || l+1 > uint64(len(b[n:])) {
if l > 0 {
return nil, errors.New("invalid size: not enough bytes")
}
}
b = b[n:]
// If length is set to zero explicitly, that means the aggregate is unset.
if l == 0 {
if i == t {
return nil, ErrAggrNotExist
}
continue
}
chunkLen := int(l) + 1
x = b[:chunkLen]
b = b[chunkLen:]
}
if len(x) == 0 {
return nil, ErrAggrNotExist
}
return chunkenc.FromData(chunkenc.Encoding(x[0]), x[1:])
}
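Get walks a sequence of uvarint-length-prefixed sub-chunks, one per aggregate type, where a zero length marks an unset aggregate. The walk can be sketched without chunkenc as follows (function and layout details here are an illustrative simplification, not the exact Thanos format, which also carries a one-byte sub-chunk encoding):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// subChunkAt walks uvarint-length-prefixed payloads and returns the one at
// index want. A zero length means the aggregate at that slot is unset.
func subChunkAt(b []byte, want int) ([]byte, error) {
	for i := 0; ; i++ {
		l, n := binary.Uvarint(b)
		if n < 1 {
			return nil, fmt.Errorf("invalid uvarint at sub-chunk %d", i)
		}
		b = b[n:]
		if uint64(len(b)) < l {
			return nil, fmt.Errorf("not enough bytes for sub-chunk %d", i)
		}
		if i == want {
			if l == 0 {
				return nil, fmt.Errorf("sub-chunk %d is unset", i)
			}
			return b[:l], nil
		}
		b = b[l:] // skip this sub-chunk and continue
	}
}

func main() {
	// Two sub-chunks: payload {1, 2}, then an unset one (length 0).
	data := []byte{2, 1, 2, 0}
	p, err := subChunkAt(data, 0)
	fmt.Println(p, err) // [1 2] <nil>
}
```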
// Encoding returns the encoding type for AggrChunk.
func (c AggrChunk) Encoding() chunkenc.Encoding {
return ChunkEncAggr
}
// errIterator wraps a nop iterator but reports an error via Err().
// It embeds chunkenc.Iterator to inherit all methods (including Seek)
// which avoids go vet stdmethods warning about Seek signature.
type errIterator struct {
chunkenc.Iterator
err error
}
// Err returns the underlying error.
func (it *errIterator) Err() error {
return it.err
}
// newAggrChunkIterator creates a new iterator for the specified aggregate type.
// If the aggregate is not present in the chunk (ErrAggrNotExist), a nop iterator
// is returned without error — the caller will simply see zero samples.
// Real decoding/corruption errors are reported via the iterator's Err() method.
func newAggrChunkIterator(data []byte, aggrType AggrType) chunkenc.Iterator {
chunk := AggrChunk(data)
subChunk, err := chunk.Get(aggrType)
if err != nil {
if errors.Is(err, ErrAggrNotExist) {
return chunkenc.NewNopIterator()
}
return &errIterator{
Iterator: chunkenc.NewNopIterator(),
err: err,
}
}
return subChunk.Iterator(nil)
}
// AggrChunkWrapper wraps AggrChunk to implement chunkenc.Chunk interface.
// It delegates iteration to a specific aggregate type.
type AggrChunkWrapper struct {
data []byte
aggrType AggrType
}
// NewAggrChunkWrapper creates a new AggrChunk wrapper for the specified aggregate type.
func NewAggrChunkWrapper(data []byte, aggrType AggrType) *AggrChunkWrapper {
return &AggrChunkWrapper{
data: data,
aggrType: aggrType,
}
}
// Bytes returns the underlying byte slice.
func (c *AggrChunkWrapper) Bytes() []byte {
return c.data
}
// Encoding returns the AggrChunk encoding.
func (c *AggrChunkWrapper) Encoding() chunkenc.Encoding {
return ChunkEncAggr
}
// Appender returns an error since AggrChunk is read-only.
func (c *AggrChunkWrapper) Appender() (chunkenc.Appender, error) {
return nil, errors.New("AggrChunk is read-only")
}
// Iterator returns an iterator for the specified aggregate type.
func (c *AggrChunkWrapper) Iterator(it chunkenc.Iterator) chunkenc.Iterator {
return newAggrChunkIterator(c.data, c.aggrType)
}
// NumSamples returns the number of samples in the aggregate.
func (c *AggrChunkWrapper) NumSamples() int {
chunk := AggrChunk(c.data)
subChunk, err := chunk.Get(c.aggrType)
if err != nil {
return 0
}
return subChunk.NumSamples()
}
// Compact is a no-op for read-only AggrChunk.
func (c *AggrChunkWrapper) Compact() {}
// Reset resets the chunk with new data.
func (c *AggrChunkWrapper) Reset(stream []byte) {
c.data = stream
}
// AggrChunkPool is a custom Pool that understands AggrChunk encoding (0xff).
// It delegates standard encodings to the default pool and handles AggrChunk specially.
type AggrChunkPool struct {
defaultPool chunkenc.Pool
aggrType AggrType
}
// NewAggrChunkPool creates a new pool that handles AggrChunk encoding.
func NewAggrChunkPool(aggrType AggrType) *AggrChunkPool {
return &AggrChunkPool{
defaultPool: chunkenc.NewPool(),
aggrType: aggrType,
}
}
// Get returns a chunk for the given encoding and data.
func (p *AggrChunkPool) Get(e chunkenc.Encoding, b []byte) (chunkenc.Chunk, error) {
if e == ChunkEncAggr {
return NewAggrChunkWrapper(b, p.aggrType), nil
}
return p.defaultPool.Get(e, b)
}
// Put returns a chunk to the pool.
func (p *AggrChunkPool) Put(c chunkenc.Chunk) error {
if c.Encoding() == ChunkEncAggr {
// AggrChunk wrappers are not pooled
return nil
}
return p.defaultPool.Put(c)
}


@@ -0,0 +1,110 @@
package thanos
import (
"encoding/json"
"os"
"path/filepath"
)
// BlockMeta extends Prometheus BlockMeta with Thanos-specific fields.
type BlockMeta struct {
// Thanos-specific metadata
Thanos ThanosMeta `json:"thanos,omitempty"`
}
// ThanosMeta contains Thanos-specific block metadata.
type ThanosMeta struct {
// Labels are external labels identifying the producer.
Labels map[string]string `json:"labels,omitempty"`
// Downsample contains downsampling information.
Downsample ThanosDownsample `json:"downsample,omitempty"`
// Source indicates where the block came from.
Source string `json:"source,omitempty"`
// SegmentFiles contains list of segment files in the block.
SegmentFiles []string `json:"segment_files,omitempty"`
// Files contains metadata about files in the block.
Files []ThanosFile `json:"files,omitempty"`
}
// ThanosDownsample contains downsampling resolution info.
type ThanosDownsample struct {
// Resolution is the downsampling resolution in milliseconds.
// 0 means raw data (no downsampling).
// 300000 (5 minutes) or 3600000 (1 hour) for downsampled data.
Resolution int64 `json:"resolution"`
}
// ThanosFile contains metadata about a file in the block.
type ThanosFile struct {
RelPath string `json:"rel_path"`
SizeBytes int64 `json:"size_bytes,omitempty"`
}
// ResolutionLevel represents the downsampling resolution.
type ResolutionLevel int64
const (
// ResolutionRaw is for raw, non-downsampled data.
ResolutionRaw ResolutionLevel = 0
// Resolution5m is for 5-minute downsampled data (300000 ms).
Resolution5m ResolutionLevel = 300000
// Resolution1h is for 1-hour downsampled data (3600000 ms).
Resolution1h ResolutionLevel = 3600000
)
// String returns human-readable resolution string.
func (r ResolutionLevel) String() string {
switch r {
case ResolutionRaw:
return "raw"
case Resolution5m:
return "5m"
case Resolution1h:
return "1h"
default:
return "unknown"
}
}
// ReadBlockMeta reads Thanos-extended block metadata from meta.json.
func ReadBlockMeta(blockDir string) (*BlockMeta, error) {
metaPath := filepath.Join(blockDir, "meta.json")
data, err := os.ReadFile(metaPath)
if err != nil {
return nil, err
}
var meta BlockMeta
if err := json.Unmarshal(data, &meta); err != nil {
return nil, err
}
return &meta, nil
}
// IsDownsampled returns true if the block contains downsampled data.
func (m *BlockMeta) IsDownsampled() bool {
return m.Thanos.Downsample.Resolution > 0
}
// Resolution returns the block's downsampling resolution.
func (m *BlockMeta) Resolution() ResolutionLevel {
return ResolutionLevel(m.Thanos.Downsample.Resolution)
}
// ResolutionSuffix returns a suffix string for metric names based on resolution.
// For example: ":5m" or ":1h" for downsampled data, empty for raw data.
func (m *BlockMeta) ResolutionSuffix() string {
switch m.Resolution() {
case Resolution5m:
return ":5m"
case Resolution1h:
return ":1h"
default:
return ""
}
}
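Combined with an aggregate type, this suffix yields names like the `metric_name:5m:count` scheme described in the -thanos-aggr-types flag help. A small sketch of that naming (the function and its arguments are illustrative, not the importer's API):

```go
package main

import "fmt"

// metricName sketches how a downsampled series could be named:
// base name + resolution suffix + aggregate type.
func metricName(base, resSuffix, aggr string) string {
	if resSuffix == "" {
		return base // raw blocks keep the original metric name
	}
	return base + resSuffix + ":" + aggr
}

func main() {
	fmt.Println(metricName("http_requests_total", ":5m", "count")) // http_requests_total:5m:count
	fmt.Println(metricName("http_requests_total", "", "sum"))      // http_requests_total
}
```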


@@ -0,0 +1,83 @@
package thanos
import (
"fmt"
"io"
"os"
"path/filepath"
"github.com/prometheus/prometheus/tsdb"
"github.com/prometheus/prometheus/tsdb/chunkenc"
)
// BlockInfo contains information about a block including Thanos metadata.
type BlockInfo struct {
Block tsdb.BlockReader
Resolution ResolutionLevel
IsThanos bool
// Closer releases the block's resources (file descriptors, mmap).
// Must be called only after all queriers on this block have been closed.
Closer io.Closer
}
// OpenBlocksWithInfo opens all blocks and returns them with their metadata.
// snapshotDir must be a snapshot directory containing block directories.
func OpenBlocksWithInfo(snapshotDir string, aggrType AggrType) ([]BlockInfo, error) {
entries, err := os.ReadDir(snapshotDir)
if err != nil {
return nil, fmt.Errorf("failed to read snapshot directory: %w", err)
}
var blocks []BlockInfo
for _, entry := range entries {
if !entry.IsDir() {
continue
}
blockDir := filepath.Join(snapshotDir, entry.Name())
metaPath := filepath.Join(blockDir, "meta.json")
// Check if this is a valid block directory (has meta.json)
if _, err := os.Stat(metaPath); os.IsNotExist(err) {
continue
}
meta, err := ReadBlockMeta(blockDir)
if err != nil {
CloseBlocks(blocks)
return nil, fmt.Errorf("failed to read Thanos metadata for block %s: %w", blockDir, err)
}
var pool chunkenc.Pool
if meta.IsDownsampled() {
// Use AggrChunkPool for downsampled blocks
pool = NewAggrChunkPool(aggrType)
}
block, err := tsdb.OpenBlock(nil, blockDir, pool, nil)
if err != nil {
// Close previously opened blocks before returning error
CloseBlocks(blocks)
return nil, fmt.Errorf("failed to open block %s: %w", blockDir, err)
}
blocks = append(blocks, BlockInfo{
Block: block,
Resolution: meta.Resolution(),
IsThanos: true,
Closer: block,
})
}
return blocks, nil
}
// CloseBlocks closes all blocks in the slice.
// Must be called only after all queriers on these blocks have been closed.
func CloseBlocks(blocks []BlockInfo) {
for _, bi := range blocks {
if bi.Closer != nil {
_ = bi.Closer.Close()
}
}
}

app/vmctl/thanos/client.go

@@ -0,0 +1,198 @@
package thanos
import (
"context"
"fmt"
"time"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/storage"
"github.com/prometheus/prometheus/tsdb"
)
// Config contains parameters for reading Thanos snapshots.
type Config struct {
Snapshot string
Filter Filter
}
// Filter contains configuration for filtering the timeseries.
type Filter struct {
TimeMin string
TimeMax string
Label string
LabelValue string
}
// Client reads Thanos snapshot blocks, including downsampled blocks with AggrChunk encoding.
type Client struct {
snapshotPath string
filter filter
statsPrinted bool
}
type filter struct {
min, max int64
label string
labelValue string
}
func (f filter) inRange(minV, maxV int64) bool {
fmin, fmax := f.min, f.max
if fmin == 0 {
fmin = minV
}
if fmax == 0 {
fmax = maxV
}
return minV <= fmax && fmin <= maxV
}
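inRange is the standard closed-interval overlap test, with zero bounds treated as "unbounded" so an unset filter matches every block. A self-contained sketch of the same check (names illustrative):

```go
package main

import "fmt"

// overlaps reports whether the block range [minV, maxV] intersects the
// filter window [fmin, fmax], treating a zero bound as unbounded,
// mirroring filter.inRange.
func overlaps(fmin, fmax, minV, maxV int64) bool {
	if fmin == 0 {
		fmin = minV
	}
	if fmax == 0 {
		fmax = maxV
	}
	return minV <= fmax && fmin <= maxV
}

func main() {
	fmt.Println(overlaps(0, 150, 100, 200))   // true: block starts inside the window
	fmt.Println(overlaps(300, 400, 100, 200)) // false: disjoint ranges
}
```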
// NewClient creates a new Thanos snapshot client.
func NewClient(cfg Config) (*Client, error) {
minTime, maxTime, err := parseTime(cfg.Filter.TimeMin, cfg.Filter.TimeMax)
if err != nil {
return nil, fmt.Errorf("failed to parse time in filter: %s", err)
}
return &Client{
snapshotPath: cfg.Snapshot,
filter: filter{
min: minTime,
max: maxTime,
label: cfg.Filter.Label,
labelValue: cfg.Filter.LabelValue,
},
}, nil
}
// Explore fetches all available blocks from the snapshot with support for
// Thanos AggrChunk (downsampled blocks). It opens blocks with a custom pool
// that can decode AggrChunk encoding (0xff).
func (c *Client) Explore(aggrType AggrType) ([]BlockInfo, error) {
blockInfos, err := OpenBlocksWithInfo(c.snapshotPath, aggrType)
if err != nil {
return nil, fmt.Errorf("failed to open blocks: %w", err)
}
s := &Stats{
Filtered: c.filter.min != 0 || c.filter.max != 0 || c.filter.label != "",
Blocks: len(blockInfos),
}
var blocksToImport []BlockInfo
for _, bi := range blockInfos {
meta := bi.Block.Meta()
if s.MinTime == 0 || meta.MinTime < s.MinTime {
s.MinTime = meta.MinTime
}
if s.MaxTime == 0 || meta.MaxTime > s.MaxTime {
s.MaxTime = meta.MaxTime
}
if !c.filter.inRange(meta.MinTime, meta.MaxTime) {
s.SkippedBlocks++
if bi.Closer != nil {
_ = bi.Closer.Close()
}
continue
}
s.Samples += meta.Stats.NumSamples
s.Series += meta.Stats.NumSeries
blocksToImport = append(blocksToImport, bi)
}
if !c.statsPrinted {
fmt.Println(s)
c.statsPrinted = true
}
return blocksToImport, nil
}
// querierSeriesSet wraps a SeriesSet and its underlying Querier, ensuring
// the querier is closed once the SeriesSet has been fully consumed.
// This releases the querier's read reference on the block, which is required
// for Block.Close() to complete without hanging.
type querierSeriesSet struct {
storage.SeriesSet
q storage.Querier
closed bool
}
// Next advances the iterator. When the underlying SeriesSet is exhausted,
// it closes the querier to release resources.
func (s *querierSeriesSet) Next() bool {
if s.SeriesSet.Next() {
return true
}
if !s.closed {
_ = s.q.Close()
s.closed = true
}
return false
}
// Close explicitly closes the underlying querier.
// This must be called if iteration is stopped early (before Next returns false)
// to release block read references and prevent Block.Close() from hanging.
func (s *querierSeriesSet) Close() {
if !s.closed {
_ = s.q.Close()
s.closed = true
}
}
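The close-on-exhaustion idea behind querierSeriesSet generalizes to any iterator over a closable resource: release exactly once, either automatically when iteration ends or explicitly when the caller stops early. A self-contained sketch of the pattern (types and names illustrative):

```go
package main

import "fmt"

// closeOnceIter releases its resource exactly once: automatically when the
// items are exhausted, or via Close when iteration stops early.
type closeOnceIter struct {
	items  []int
	i      int
	closed bool
	closes int // counts releases; must end up at 1
}

func (it *closeOnceIter) Next() (int, bool) {
	if it.i < len(it.items) {
		v := it.items[it.i]
		it.i++
		return v, true
	}
	it.Close() // exhausted: release automatically
	return 0, false
}

func (it *closeOnceIter) Close() {
	if !it.closed {
		it.closed = true
		it.closes++
	}
}

func main() {
	it := &closeOnceIter{items: []int{1, 2}}
	defer it.Close() // safe even after the auto-close on exhaustion
	for {
		if _, ok := it.Next(); !ok {
			break
		}
	}
	fmt.Println(it.closes) // 1
}
```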
// ClosableSeriesSet extends storage.SeriesSet with a Close method for explicit cleanup.
type ClosableSeriesSet interface {
storage.SeriesSet
Close()
}
// Read reads the given BlockInfo according to configured time and label filters.
// The returned ClosableSeriesSet automatically closes the underlying querier when fully consumed,
// but Close() should be called explicitly (e.g., via defer) to handle early returns.
func (c *Client) Read(bi BlockInfo) (ClosableSeriesSet, error) {
minTime, maxTime := bi.Block.Meta().MinTime, bi.Block.Meta().MaxTime
if c.filter.min != 0 {
minTime = c.filter.min
}
if c.filter.max != 0 {
maxTime = c.filter.max
}
q, err := tsdb.NewBlockQuerier(bi.Block, minTime, maxTime)
if err != nil {
return nil, err
}
ss := q.Select(
context.Background(),
false,
nil,
labels.MustNewMatcher(labels.MatchRegexp, c.filter.label, c.filter.labelValue),
)
return &querierSeriesSet{
SeriesSet: ss,
q: q,
}, nil
}
func parseTime(start, end string) (int64, int64, error) {
var s, e int64
if start == "" && end == "" {
return 0, 0, nil
}
if start != "" {
v, err := time.Parse(time.RFC3339, start)
if err != nil {
return 0, 0, fmt.Errorf("failed to parse %q: %s", start, err)
}
s = v.UnixNano() / int64(time.Millisecond)
}
if end != "" {
v, err := time.Parse(time.RFC3339, end)
if err != nil {
return 0, 0, fmt.Errorf("failed to parse %q: %s", end, err)
}
e = v.UnixNano() / int64(time.Millisecond)
}
return s, e, nil
}

app/vmctl/thanos/stats.go

@@ -0,0 +1,38 @@
package thanos
import (
"fmt"
"time"
)
// Stats represents data migration stats for Thanos blocks.
type Stats struct {
Filtered bool
MinTime int64
MaxTime int64
Samples uint64
Series uint64
Blocks int
SkippedBlocks int
}
// String returns string representation for s.
func (s Stats) String() string {
str := fmt.Sprintf("Thanos snapshot stats:\n"+
" blocks found: %d;\n"+
" blocks skipped by time filter: %d;\n"+
" min time: %d (%v);\n"+
" max time: %d (%v);\n"+
" samples: %d;\n"+
" series: %d.",
s.Blocks, s.SkippedBlocks,
s.MinTime, time.Unix(s.MinTime/1e3, 0).Format(time.RFC3339),
s.MaxTime, time.Unix(s.MaxTime/1e3, 0).Format(time.RFC3339),
s.Samples, s.Series)
if s.Filtered {
str += "\n* Stats numbers are based on blocks meta info and don't account for applied filters."
}
return str
}


@@ -0,0 +1,309 @@
package main
import (
"context"
"fmt"
"log"
"strings"
"sync"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/barpool"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/thanos"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
)
type thanosProcessor struct {
cl *thanos.Client
im *vm.Importer
cc int
isVerbose bool
aggrTypes []thanos.AggrType
}
func (tp *thanosProcessor) run(ctx context.Context) error {
if len(tp.aggrTypes) == 0 {
tp.aggrTypes = thanos.AllAggrTypes
}
log.Printf("Processing blocks with aggregate types: %v", tp.aggrTypes)
// Use the first aggregate type to explore blocks (block list is the same for all types)
blocks, err := tp.cl.Explore(tp.aggrTypes[0])
if err != nil {
return fmt.Errorf("explore failed: %s", err)
}
if len(blocks) < 1 {
return fmt.Errorf("found no blocks to import")
}
// Separate blocks into raw (resolution=0) and downsampled (resolution>0)
var rawBlocks, downsampledBlocks []thanos.BlockInfo
for _, block := range blocks {
if block.Resolution == thanos.ResolutionRaw {
rawBlocks = append(rawBlocks, block)
} else {
downsampledBlocks = append(downsampledBlocks, block)
}
}
log.Printf("Found %d raw blocks and %d downsampled blocks", len(rawBlocks), len(downsampledBlocks))
question := fmt.Sprintf("Found %d blocks to import (%d raw + %d downsampled with %d aggregate types). Continue?",
len(blocks), len(rawBlocks), len(downsampledBlocks), len(tp.aggrTypes))
if !prompt(ctx, question) {
return nil
}
// Calculate total number of block processing passes for the progress bar:
// raw blocks are processed once, downsampled blocks are processed once per aggregate type.
totalPasses := len(rawBlocks) + len(downsampledBlocks)*len(tp.aggrTypes)
thanosBlocksTotal.Add(totalPasses)
bar := barpool.AddWithTemplate(fmt.Sprintf(barTpl, "Processing blocks"), totalPasses)
if err := barpool.Start(); err != nil {
return err
}
defer barpool.Stop()
tp.im.ResetStats()
type phaseStats struct {
name string
series uint64
samples uint64
}
var phases []phaseStats
// Process raw blocks first (no aggregate suffix)
if len(rawBlocks) > 0 {
log.Println("Processing raw blocks (resolution=0)...")
stats, err := tp.processBlocks(rawBlocks, thanos.AggrTypeNone, bar)
if err != nil {
return fmt.Errorf("migration failed for raw blocks: %s", err)
}
phases = append(phases, phaseStats{
name: "raw",
series: stats.series,
samples: stats.samples,
})
}
// Close blocks from the initial Explore. The querierSeriesSet wrapper
// has already released all querier read references, so Close won't hang.
thanos.CloseBlocks(blocks)
// Process downsampled blocks for each aggregate type.
// Each type needs its own AggrChunkPool, so we reopen blocks per type.
for _, aggrType := range tp.aggrTypes {
if len(downsampledBlocks) < 1 {
break
}
log.Printf("Processing downsampled blocks with aggregate type: %s", aggrType)
aggrBlocks, err := tp.cl.Explore(aggrType)
if err != nil {
return fmt.Errorf("explore failed for aggr type %s: %s", aggrType, err)
}
var downsampledOnly []thanos.BlockInfo
for _, block := range aggrBlocks {
if block.Resolution != thanos.ResolutionRaw {
downsampledOnly = append(downsampledOnly, block)
}
}
if len(downsampledOnly) < 1 {
log.Printf("No downsampled blocks found for aggregate type %s, skipping", aggrType)
thanos.CloseBlocks(aggrBlocks)
continue
}
log.Printf("Processing %d blocks for aggregate type: %s", len(downsampledOnly), aggrType)
stats, err := tp.processBlocks(downsampledOnly, aggrType, bar)
thanos.CloseBlocks(aggrBlocks)
if err != nil {
return fmt.Errorf("migration failed for aggr type %s: %s", aggrType, err)
}
phases = append(phases, phaseStats{
name: aggrType.String(),
series: stats.series,
samples: stats.samples,
})
}
// Print per-phase and total statistics
var totalSeries, totalSamples uint64
log.Printf("Migration statistics (%d raw blocks, %d downsampled blocks):", len(rawBlocks), len(downsampledBlocks))
for _, p := range phases {
log.Printf(" %s: %d series, %d samples", p.name, p.series, p.samples)
totalSeries += p.series
totalSamples += p.samples
}
log.Printf(" total: %d series, %d samples", totalSeries, totalSamples)
// Wait for all buffers to flush
tp.im.Close()
// Drain import errors channel
for vmErr := range tp.im.Errors() {
if vmErr.Err != nil {
thanosErrorsTotal.Inc()
return fmt.Errorf("import process failed: %s", wrapErr(vmErr, tp.isVerbose))
}
}
log.Println("Import finished!")
log.Println(tp.im.Stats())
return nil
}
// processBlocksStats holds statistics collected during block processing.
type processBlocksStats struct {
blocks uint64
series uint64
samples uint64
}
func (tp *thanosProcessor) processBlocks(blocks []thanos.BlockInfo, aggrType thanos.AggrType, bar barpool.Bar) (processBlocksStats, error) {
blockReadersCh := make(chan thanos.BlockInfo)
errCh := make(chan error, tp.cc)
var processedBlocks, totalSeries, totalSamples uint64
var mu sync.Mutex
var wg sync.WaitGroup
for i := range tp.cc {
workerID := i
wg.Go(func() {
for bi := range blockReadersCh {
seriesCount, samplesCount, err := tp.do(bi, aggrType)
if err != nil {
thanosErrorsTotal.Inc()
errCh <- fmt.Errorf("read failed for block %q with aggr %s: %s", bi.Block.Meta().ULID, aggrType, err)
return
}
mu.Lock()
processedBlocks++
totalSeries += seriesCount
totalSamples += samplesCount
log.Printf("[Worker %d] Block %s: %d series, %d samples | Total: %d/%d blocks, %d series, %d samples",
workerID, bi.Block.Meta().ULID.String()[:8], seriesCount, samplesCount,
processedBlocks, len(blocks), totalSeries, totalSamples)
mu.Unlock()
thanosBlocksProcessed.Inc()
bar.Increment()
}
})
}
// any error breaks the import
for _, bi := range blocks {
select {
case thanosErr := <-errCh:
close(blockReadersCh)
wg.Wait()
return processBlocksStats{}, fmt.Errorf("thanos error: %s", thanosErr)
case vmErr := <-tp.im.Errors():
close(blockReadersCh)
wg.Wait()
thanosErrorsTotal.Inc()
return processBlocksStats{}, fmt.Errorf("import process failed: %s", wrapErr(vmErr, tp.isVerbose))
case blockReadersCh <- bi:
}
}
close(blockReadersCh)
wg.Wait()
close(errCh)
for err := range errCh {
return processBlocksStats{}, fmt.Errorf("import process failed: %s", err)
}
return processBlocksStats{
blocks: processedBlocks,
series: totalSeries,
samples: totalSamples,
}, nil
}
func (tp *thanosProcessor) do(bi thanos.BlockInfo, aggrType thanos.AggrType) (uint64, uint64, error) {
ss, err := tp.cl.Read(bi)
if err != nil {
return 0, 0, fmt.Errorf("failed to read block: %s", err)
}
defer ss.Close() // Ensure querier is closed even on early returns
var it chunkenc.Iterator
var seriesCount, samplesCount uint64
for ss.Next() {
var name string
var labelPairs []vm.LabelPair
series := ss.At()
series.Labels().Range(func(label labels.Label) {
if label.Name == "__name__" {
name = label.Value
return
}
labelPairs = append(labelPairs, vm.LabelPair{
Name: strings.Clone(label.Name),
Value: strings.Clone(label.Value),
})
})
if name == "" {
return seriesCount, samplesCount, fmt.Errorf("failed to find `__name__` label in labelset for block %v", bi.Block.Meta().ULID)
}
// Add resolution and aggregate type suffix to metric name for downsampled blocks
if bi.Resolution != thanos.ResolutionRaw && aggrType != thanos.AggrTypeNone {
name = fmt.Sprintf("%s:%s:%s", name, bi.Resolution.String(), aggrType.String())
}
var timestamps []int64
var values []float64
it = series.Iterator(it)
for {
typ := it.Next()
if typ == chunkenc.ValNone {
break
}
if typ != chunkenc.ValFloat {
continue
}
t, v := it.At()
timestamps = append(timestamps, t)
values = append(values, v)
}
if err := it.Err(); err != nil {
return seriesCount, samplesCount, err
}
samplesCount += uint64(len(timestamps))
seriesCount++
ts := vm.TimeSeries{
Name: name,
LabelPairs: labelPairs,
Timestamps: timestamps,
Values: values,
}
if err := tp.im.Input(&ts); err != nil {
return seriesCount, samplesCount, err
}
}
return seriesCount, samplesCount, ss.Err()
}
var (
thanosBlocksTotal = metrics.NewCounter(`vmctl_thanos_migration_blocks_total`)
thanosBlocksProcessed = metrics.NewCounter(`vmctl_thanos_migration_blocks_processed`)
thanosErrorsTotal = metrics.NewCounter(`vmctl_thanos_migration_errors_total`)
)


@@ -262,6 +262,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
case "/api/v1/export":
exportRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.ExportHandler(startTime, w, r); err != nil {
exportErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
@@ -270,6 +271,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
case "/api/v1/export/csv":
exportCSVRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.ExportCSVHandler(startTime, w, r); err != nil {
exportCSVErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
@@ -278,6 +280,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
case "/api/v1/export/native":
exportNativeRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.ExportNativeHandler(startTime, w, r); err != nil {
exportNativeErrors.Inc()
httpserver.Errorf(w, r, "%s", err)


@@ -22,6 +22,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
)
@@ -1362,7 +1363,7 @@ func applyGraphiteRegexpFilter(filter string, ss []string) ([]string, error) {
const maxFastAllocBlockSize = 32 * 1024
// GetMetricNamesStats returns statistic for timeseries metric names usage.
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (storage.MetricNamesStatsResponse, error) {
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (metricnamestats.StatsResult, error) {
qt = qt.NewChild("get metric names usage statistics with limit: %d, less or equal to: %d, match pattern=%q", limit, le, matchPattern)
defer qt.Done()
return vmstorage.GetMetricNamesStats(qt, limit, le, matchPattern)


@@ -11,6 +11,16 @@
{% stripspace %}
{% func ExportCSVHeader(fieldNames []string) %}
{% if len(fieldNames) == 0 %}{% return %}{% endif %}
{%s= fieldNames[0] %}
{% for _, fieldName := range fieldNames[1:] %}
,
{%s= fieldName %}
{% endfor %}
{% newline %}
{% endfunc %}
{% func ExportCSVLine(xb *exportBlock, fieldNames []string) %}
{% if len(xb.timestamps) == 0 || len(fieldNames) == 0 %}{% return %}{% endif %}
{% for i, timestamp := range xb.timestamps %}

File diff suppressed because it is too large.

@@ -0,0 +1,132 @@
package prometheus
import (
"strings"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
func TestExportCSVHeader(t *testing.T) {
f := func(fieldNames []string, expected string) {
t.Helper()
got := ExportCSVHeader(fieldNames)
if got != expected {
t.Fatalf("ExportCSVHeader(%v): got %q; want %q", fieldNames, got, expected)
}
}
f(nil, "")
f([]string{}, "")
f([]string{"__value__"}, "__value__\n")
f([]string{"__timestamp__"}, "__timestamp__\n")
f([]string{"__timestamp__:rfc3339"}, "__timestamp__:rfc3339\n")
f([]string{"__name__"}, "__name__\n")
f([]string{"job"}, "job\n")
f([]string{"__timestamp__:rfc3339", "__value__"}, "__timestamp__:rfc3339,__value__\n")
f([]string{"__value__", "__timestamp__"}, "__value__,__timestamp__\n")
f([]string{"job", "instance"}, "job,instance\n")
f([]string{"__name__", "__value__", "__timestamp__:unix_s"}, "__name__,__value__,__timestamp__:unix_s\n")
f([]string{"job", "instance", "__value__", "__timestamp__:unix_ms"}, "job,instance,__value__,__timestamp__:unix_ms\n")
f([]string{"__timestamp__:custom:2006-01-02", "__value__", "host", "dc", "env"},
"__timestamp__:custom:2006-01-02,__value__,host,dc,env\n")
// duplicate fields
f([]string{"__value__", "__value__"}, "__value__,__value__\n")
f([]string{"__timestamp__", "__timestamp__:rfc3339"}, "__timestamp__,__timestamp__:rfc3339\n")
}
func TestExportCSVLine(t *testing.T) {
localBak := time.Local
time.Local = time.UTC
defer func() { time.Local = localBak }()
f := func(mn *storage.MetricName, timestamps []int64, values []float64, fieldNames []string, expected string) {
t.Helper()
xb := &exportBlock{
mn: mn,
timestamps: timestamps,
values: values,
}
got := ExportCSVLine(xb, fieldNames)
if got != expected {
t.Fatalf("ExportCSVLine: got %q; want %q", got, expected)
}
}
mn := &storage.MetricName{
MetricGroup: []byte("cpu_usage"),
Tags: []storage.Tag{
{Key: []byte("job"), Value: []byte("node")},
{Key: []byte("instance"), Value: []byte("localhost:9090")},
},
}
// empty inputs
f(mn, nil, nil, []string{"__value__"}, "")
f(mn, []int64{}, []float64{}, []string{"__value__"}, "")
f(mn, []int64{1000}, []float64{1.5}, nil, "")
f(mn, []int64{1000}, []float64{1.5}, []string{}, "")
f(mn, []int64{1000}, []float64{42.5}, []string{"__value__"}, "42.5\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__"}, "1704067200000\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:unix_s"}, "1704067200\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:unix_ms"}, "1704067200000\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:unix_ns"}, "1704067200000000000\n")
f(mn, []int64{1704067200000}, []float64{1}, []string{"__timestamp__:rfc3339"}, "2024-01-01T00:00:00Z\n")
f(mn, []int64{1000}, []float64{1}, []string{"__name__"}, "cpu_usage\n")
f(mn, []int64{1000}, []float64{1}, []string{"job"}, "node\n")
f(mn, []int64{1000}, []float64{1}, []string{"instance"}, "localhost:9090\n")
f(mn, []int64{1000}, []float64{1}, []string{"missing_label"}, "\n")
// multiple fields
f(mn, []int64{1704067200000}, []float64{99.9},
[]string{"__timestamp__:unix_s", "__value__", "job"},
"1704067200,99.9,node\n")
// multiple rows
f(mn, []int64{1000, 2000}, []float64{1.1, 2.2},
[]string{"__value__", "__timestamp__"},
"1.1,1000\n2.2,2000\n")
f(mn, []int64{1000, 2000, 3000}, []float64{10, 20, 30},
[]string{"__timestamp__:unix_s", "__value__"},
"1,10\n2,20\n3,30\n")
// escaping for special characters in tag values
f(&storage.MetricName{
MetricGroup: []byte("m"),
Tags: []storage.Tag{{Key: []byte("desc"), Value: []byte("a,b")}},
}, []int64{1000}, []float64{1}, []string{"desc"}, "\"a,b\"\n")
f(&storage.MetricName{
MetricGroup: []byte("m"),
Tags: []storage.Tag{{Key: []byte("desc"), Value: []byte(`say "hello"`)}},
}, []int64{1000}, []float64{1}, []string{"desc"}, "\"say \\\"hello\\\"\"\n")
f(&storage.MetricName{
MetricGroup: []byte("m"),
Tags: []storage.Tag{{Key: []byte("desc"), Value: []byte("line1\nline2")}},
}, []int64{1000}, []float64{1}, []string{"desc"}, "\"line1\\nline2\"\n")
// header and data line field counts must match
fieldNames := []string{"__name__", "job", "instance", "__value__", "__timestamp__:unix_s"}
header := ExportCSVHeader(fieldNames)
line := ExportCSVLine(&exportBlock{
mn: mn,
timestamps: []int64{1704067200000},
values: []float64{99.9},
}, fieldNames)
headerCommas := strings.Count(header, ",")
lineCommas := strings.Count(line, ",")
if headerCommas != lineCommas {
t.Fatalf("header has %d commas, data line has %d commas", headerCommas, lineCommas)
}
if headerCommas != len(fieldNames)-1 {
t.Fatalf("expected %d commas in header, got %d", len(fieldNames)-1, headerCommas)
}
}


@@ -175,6 +175,7 @@ func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Reques
w.Header().Set("Content-Type", "text/csv; charset=utf-8")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteExportCSVHeader(bw, fieldNames)
sw := newScalableWriter(bw)
writeCSVLine := func(xb *exportBlock, workerID uint) error {
if len(xb.timestamps) == 0 {
@@ -1222,11 +1223,7 @@ func getCommonParamsInternal(r *http.Request, startTime time.Time, requireNonEmp
if err != nil {
return nil, err
}
// Limit the `end` arg to the current time +2 days in the same way
// as it is limited during data ingestion.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/blob/ea06d2fd3ccbbb6aa4480ab3b04f7b671408be2a/lib/storage/table.go#L378
// This should fix possible timestamp overflow - see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2669
maxTS := startTime.UnixNano()/1e6 + 2*24*3600*1000
maxTS := int64(math.MaxInt64 / 1_000_000)
if end > maxTS {
end = maxTS
}


@@ -1,6 +1,7 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
) %}
{% stripspace %}
@@ -34,9 +35,9 @@ TSDBStatusResponse generates response for /api/v1/status/tsdb .
]
{% endfunc %}
{% func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) %}
{% func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) %}
{% code
queryStatsByMetricName := make(map[string]storage.MetricNamesStatsRecord,len(queryStats))
queryStatsByMetricName := make(map[string]metricnamestats.StatRecord,len(queryStats))
for _, record := range queryStats{
queryStatsByMetricName[record.MetricName] = record
}


@@ -8,228 +8,229 @@ package prometheus
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
)
// TSDBStatusResponse generates response for /api/v1/status/tsdb .
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
func StreamTSDBStatusResponse(qw422016 *qt422016.Writer, status *storage.TSDBStatus, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:8
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
qw422016.N().S(`{"status":"success","data":{"totalSeries":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:12
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
qw422016.N().DUL(status.TotalSeries)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:12
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
qw422016.N().S(`,"totalLabelValuePairs":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
qw422016.N().DUL(status.TotalLabelValuePairs)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
qw422016.N().S(`,"seriesCountByMetricName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
streamtsdbStatusMetricNameEntries(qw422016, status.SeriesCountByMetricName, status.SeriesQueryStatsByMetricName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
qw422016.N().S(`,"seriesCountByLabelName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
streamtsdbStatusEntries(qw422016, status.SeriesCountByLabelName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:15
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qw422016.N().S(`,"seriesCountByFocusLabelValue":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
streamtsdbStatusEntries(qw422016, status.SeriesCountByFocusLabelValue)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
qw422016.N().S(`,"seriesCountByLabelValuePair":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
streamtsdbStatusEntries(qw422016, status.SeriesCountByLabelValuePair)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:17
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
qw422016.N().S(`,"labelValueCountByLabelName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
//line app/vmselect/prometheus/tsdb_status_response.qtpl:19
streamtsdbStatusEntries(qw422016, status.LabelValueCountByLabelName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
//line app/vmselect/prometheus/tsdb_status_response.qtpl:19
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:20
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
qt.Done()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
//line app/vmselect/prometheus/tsdb_status_response.qtpl:22
streamdumpQueryTrace(qw422016, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
//line app/vmselect/prometheus/tsdb_status_response.qtpl:22
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
func WriteTSDBStatusResponse(qq422016 qtio422016.Writer, status *storage.TSDBStatus, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
StreamTSDBStatusResponse(qw422016, status, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
func TSDBStatusResponse(status *storage.TSDBStatus, qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
WriteTSDBStatusResponse(qb422016, status, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
func streamtsdbStatusEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
qw422016.N().S(`[`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:27
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
for i, e := range a {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:27
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
qw422016.N().S(`{"name":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:29
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
qw422016.N().Q(e.Name)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:29
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
qw422016.N().S(`,"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
//line app/vmselect/prometheus/tsdb_status_response.qtpl:31
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:30
//line app/vmselect/prometheus/tsdb_status_response.qtpl:31
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:32
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
if i+1 < len(a) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:32
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:32
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
//line app/vmselect/prometheus/tsdb_status_response.qtpl:34
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:33
//line app/vmselect/prometheus/tsdb_status_response.qtpl:34
qw422016.N().S(`]`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
func writetsdbStatusEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
streamtsdbStatusEntries(qw422016, a)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
func tsdbStatusEntries(a []storage.TopHeapEntry) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
writetsdbStatusEntries(qb422016, a)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:35
//line app/vmselect/prometheus/tsdb_status_response.qtpl:36
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:37
func streamtsdbStatusMetricNameEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:39
queryStatsByMetricName := make(map[string]storage.MetricNamesStatsRecord, len(queryStats))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:38
func streamtsdbStatusMetricNameEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:40
queryStatsByMetricName := make(map[string]metricnamestats.StatRecord, len(queryStats))
for _, record := range queryStats {
queryStatsByMetricName[record.MetricName] = record
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:43
//line app/vmselect/prometheus/tsdb_status_response.qtpl:44
qw422016.N().S(`[`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:45
//line app/vmselect/prometheus/tsdb_status_response.qtpl:46
for i, e := range a {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:45
//line app/vmselect/prometheus/tsdb_status_response.qtpl:46
qw422016.N().S(`{`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:48
//line app/vmselect/prometheus/tsdb_status_response.qtpl:49
entry, ok := queryStatsByMetricName[e.Name]
//line app/vmselect/prometheus/tsdb_status_response.qtpl:49
//line app/vmselect/prometheus/tsdb_status_response.qtpl:50
qw422016.N().S(`"name":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:50
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
qw422016.N().Q(e.Name)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:50
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
if !ok {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:51
qw422016.N().S(`"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:52
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:53
} else {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:53
if !ok {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:52
qw422016.N().S(`"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:54
//line app/vmselect/prometheus/tsdb_status_response.qtpl:53
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:54
} else {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:54
qw422016.N().S(`"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().S(`,"requestsCount":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().D(int(entry.RequestsCount))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:55
qw422016.N().S(`,"lastRequestTimestamp":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:56
qw422016.N().D(int(entry.RequestsCount))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:56
qw422016.N().S(`,"lastRequestTimestamp":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:57
qw422016.N().D(int(entry.LastRequestTs))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:57
//line app/vmselect/prometheus/tsdb_status_response.qtpl:58
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:57
//line app/vmselect/prometheus/tsdb_status_response.qtpl:58
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:59
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
if i+1 < len(a) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:59
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:59
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
//line app/vmselect/prometheus/tsdb_status_response.qtpl:61
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:60
//line app/vmselect/prometheus/tsdb_status_response.qtpl:61
qw422016.N().S(`]`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
func writetsdbStatusMetricNameEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
func writetsdbStatusMetricNameEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
streamtsdbStatusMetricNameEntries(qw422016, a, queryStats)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []storage.MetricNamesStatsRecord) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
func tsdbStatusMetricNameEntries(a []storage.TopHeapEntry, queryStats []metricnamestats.StatRecord) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
writetsdbStatusMetricNameEntries(qb422016, a, queryStats)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:62
//line app/vmselect/prometheus/tsdb_status_response.qtpl:63
}

View File

@@ -132,9 +132,20 @@ func (d *Deadline) String() string {
//
// {env="prod",team="devops",t1="v1",t2="v2"}
// {env=~"dev|staging",team!="devops",t1="v1",t2="v2"}
//
// Query args from the URL query string take precedence over POST form args.
func GetExtraTagFilters(r *http.Request) ([][]storage.TagFilter, error) {
var tagFilters []storage.TagFilter
for _, match := range r.Form["extra_label"] {
urlQueryValues := r.URL.Query()
getRequestParam := func(key string) []string {
// URL query params must always take precedence over form values
// in order to simplify security enforcement for extra_label and extra_filters policies
if uv, ok := urlQueryValues[key]; ok {
return uv
}
return r.Form[key]
}
for _, match := range getRequestParam("extra_label") {
tmp := strings.SplitN(match, "=", 2)
if len(tmp) != 2 {
return nil, fmt.Errorf("`extra_label` query arg must have the format `name=value`; got %q", match)
@@ -148,8 +159,8 @@ func GetExtraTagFilters(r *http.Request) ([][]storage.TagFilter, error) {
Value: []byte(tmp[1]),
})
}
extraFilters := append([]string{}, r.Form["extra_filters"]...)
extraFilters = append(extraFilters, r.Form["extra_filters[]"]...)
extraFilters := append([]string{}, getRequestParam("extra_filters")...)
extraFilters = append(extraFilters, getRequestParam("extra_filters[]")...)
if len(extraFilters) == 0 {
if len(tagFilters) == 0 {
return nil, nil

View File

@@ -20,6 +20,7 @@ func TestGetExtraTagFilters(t *testing.T) {
}
return &http.Request{
Form: q,
URL: &url.URL{RawQuery: q.Encode()},
}
}
f := func(t *testing.T, r *http.Request, want []string, wantErr bool) {
@@ -79,6 +80,24 @@ func TestGetExtraTagFilters(t *testing.T) {
nil,
false,
)
formValues, err := url.ParseQuery(`extra_label=env=prod&extra_label=job=vmsingle&extra_label=tenant=prod&extra_filters[]={foo="bar"}&extra_filters[]={tenant="prod"}`)
if err != nil {
t.Fatalf("BUG: cannot parse query: %s", err)
}
urlValues, err := url.ParseQuery(`extra_label=job=vmagent&extra_label=env=dev&extra_filters[]={tenant="dev"}`)
if err != nil {
t.Fatalf("BUG: cannot parse query: %s", err)
}
httpReqWithBothFormAndURLParams := &http.Request{
Form: formValues,
URL: &url.URL{
RawQuery: urlValues.Encode(),
},
}
f(t, httpReqWithBothFormAndURLParams,
[]string{`{tenant="dev",job="vmagent",env="dev"}`},
false)
}
func TestParseMetricSelectorSuccess(t *testing.T) {

View File

@@ -1,11 +1,11 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
) %}
{% stripspace %}
MetricNamesStatsResponse generates response for /api/v1/status/metric_names_stats .
{% func MetricNamesStatsResponse(stats *storage.MetricNamesStatsResponse, qt *querytracer.Tracer) %}
{% func MetricNamesStatsResponse(stats *metricnamestats.StatsResult, qt *querytracer.Tracer) %}
{
"status":"success",
"statsCollectedSince": {%dul= stats.CollectedSinceTs %},

View File

@@ -7,7 +7,7 @@ package stats
//line app/vmselect/stats/metric_names_usage_response.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
)
// MetricNamesStatsResponse generates response for /api/v1/status/metric_names_stats .
@@ -26,7 +26,7 @@ var (
)
//line app/vmselect/stats/metric_names_usage_response.qtpl:8
func StreamMetricNamesStatsResponse(qw422016 *qt422016.Writer, stats *storage.MetricNamesStatsResponse, qt *querytracer.Tracer) {
func StreamMetricNamesStatsResponse(qw422016 *qt422016.Writer, stats *metricnamestats.StatsResult, qt *querytracer.Tracer) {
//line app/vmselect/stats/metric_names_usage_response.qtpl:8
qw422016.N().S(`{"status":"success","statsCollectedSince":`)
//line app/vmselect/stats/metric_names_usage_response.qtpl:11
@@ -91,7 +91,7 @@ func StreamMetricNamesStatsResponse(qw422016 *qt422016.Writer, stats *storage.Me
}
//line app/vmselect/stats/metric_names_usage_response.qtpl:31
func WriteMetricNamesStatsResponse(qq422016 qtio422016.Writer, stats *storage.MetricNamesStatsResponse, qt *querytracer.Tracer) {
func WriteMetricNamesStatsResponse(qq422016 qtio422016.Writer, stats *metricnamestats.StatsResult, qt *querytracer.Tracer) {
//line app/vmselect/stats/metric_names_usage_response.qtpl:31
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/stats/metric_names_usage_response.qtpl:31
@@ -102,7 +102,7 @@ func WriteMetricNamesStatsResponse(qq422016 qtio422016.Writer, stats *storage.Me
}
//line app/vmselect/stats/metric_names_usage_response.qtpl:31
func MetricNamesStatsResponse(stats *storage.MetricNamesStatsResponse, qt *querytracer.Tracer) string {
func MetricNamesStatsResponse(stats *metricnamestats.StatsResult, qt *querytracer.Tracer) string {
//line app/vmselect/stats/metric_names_usage_response.qtpl:31
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/stats/metric_names_usage_response.qtpl:31

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -37,9 +37,9 @@
<meta property="og:title" content="UI for VictoriaMetrics">
<meta property="og:url" content="https://victoriametrics.com/">
<meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data">
<script type="module" crossorigin src="./assets/index-KEOgEEMl.js"></script>
<script type="module" crossorigin src="./assets/index-C7gvW_Zn.js"></script>
<link rel="modulepreload" crossorigin href="./assets/rolldown-runtime-COnpUsM8.js">
<link rel="modulepreload" crossorigin href="./assets/vendor-Mr0bmX1E.js">
<link rel="modulepreload" crossorigin href="./assets/vendor-C8Kwp93_.js">
<link rel="stylesheet" crossorigin href="./assets/vendor-CnsZ1jie.css">
<link rel="stylesheet" crossorigin href="./assets/index-D2OEy8Ra.css">
</head>

View File

@@ -5,6 +5,7 @@ import (
"flag"
"fmt"
"io"
"math"
"net/http"
"strconv"
"strings"
@@ -22,6 +23,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricnamestats"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/syncwg"
@@ -31,6 +33,8 @@ import (
var (
retentionPeriod = flagutil.NewRetentionDuration("retentionPeriod", "1M", "Data with timestamps outside the retentionPeriod is automatically deleted. The minimum retentionPeriod is 24h or 1d. "+
"See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#retention. See also -retentionFilter")
futureRetention = flagutil.NewRetentionDuration("futureRetention", "2d", "Data with timestamps bigger than now+futureRetention is automatically deleted. "+
"The minimum futureRetention is 2 days. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#retention")
snapshotAuthKey = flagutil.NewPassword("snapshotAuthKey", "authKey, which must be passed in query string to /snapshot* pages. It overrides -httpAuth.*")
forceMergeAuthKey = flagutil.NewPassword("forceMergeAuthKey", "authKey, which must be passed in query string to /internal/force_merge pages. It overrides -httpAuth.*")
forceFlushAuthKey = flagutil.NewPassword("forceFlushAuthKey", "authKey, which must be passed in query string to /internal/force_flush pages. It overrides -httpAuth.*")
@@ -55,11 +59,13 @@ var (
denyQueriesOutsideRetention = flag.Bool("denyQueriesOutsideRetention", false, "Whether to deny queries outside the configured -retentionPeriod. "+
"When set, then /api/v1/query_range would return '503 Service Unavailable' error for queries with 'from' value outside -retentionPeriod. "+
"This may be useful when multiple data sources with distinct retentions are hidden behind query-tee")
maxHourlySeries = flag.Int("storage.maxHourlySeries", 0, "The maximum number of unique series can be added to the storage during the last hour. "+
maxHourlySeries = flag.Int64("storage.maxHourlySeries", 0, "The maximum number of unique series that can be added to the storage during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cardinality-limiter . "+
fmt.Sprintf("Setting this flag to '-1' sets the limit to the maximum possible value (%d), which is useful for enabling series tracking without enforcing limits. ", math.MaxInt32)+
"See also -storage.maxDailySeries")
maxDailySeries = flag.Int("storage.maxDailySeries", 0, "The maximum number of unique series can be added to the storage during the last 24 hours. "+
maxDailySeries = flag.Int64("storage.maxDailySeries", 0, "The maximum number of unique series that can be added to the storage during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cardinality-limiter . "+
fmt.Sprintf("Setting this flag to '-1' sets the limit to the maximum possible value (%d), which is useful for enabling series tracking without enforcing limits. ", math.MaxInt32)+
"See also -storage.maxHourlySeries")
minFreeDiskSpaceBytes = flagutil.NewBytes("storage.minFreeDiskSpaceBytes", 100e6, "The minimum free disk space at -storageDataPath after which the storage stops accepting new data")
@@ -131,7 +137,12 @@ func Init(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
mergeset.SetDataBlocksSparseCacheSize(cacheSizeIndexDBDataBlocksSparse.IntN())
if retentionPeriod.Duration() < 24*time.Hour {
logger.Fatalf("-retentionPeriod cannot be smaller than a day; got %s", retentionPeriod)
logger.Fatalf("-retentionPeriod cannot be smaller than a day; got %s. "+
"See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#retention", retentionPeriod)
}
if futureRetention.Duration() < 2*24*time.Hour {
logger.Fatalf("-futureRetention cannot be smaller than 2 days; got %s. "+
"See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#retention", futureRetention)
}
if *idbPrefillStart > 23*time.Hour {
logger.Panicf("-storage.idbPrefillStart cannot exceed 23 hours; got %s", idbPrefillStart)
@@ -141,8 +152,9 @@ func Init(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
WG = syncwg.WaitGroup{}
opts := storage.OpenOptions{
Retention: retentionPeriod.Duration(),
MaxHourlySeries: *maxHourlySeries,
MaxDailySeries: *maxDailySeries,
FutureRetention: futureRetention.Duration(),
MaxHourlySeries: getMaxHourlySeries(),
MaxDailySeries: getMaxDailySeries(),
DisablePerDayIndex: *disablePerDayIndex,
TrackMetricNamesStats: *trackMetricNamesStats,
IDBPrefillStart: *idbPrefillStart,
@@ -168,6 +180,7 @@ func Init(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
writeStorageMetrics(w, strg)
})
metrics.RegisterSet(storageMetrics)
fs.RegisterPathFsMetrics(*DataPath)
}
var storageMetrics *metrics.Set
@@ -233,7 +246,7 @@ func DeleteSeries(qt *querytracer.Tracer, tfss []*storage.TagFilters, maxMetrics
}
// GetMetricNamesStats returns metric names usage stats with the given limit and le predicate
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (storage.MetricNamesStatsResponse, error) {
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (metricnamestats.StatsResult, error) {
WG.Add(1)
r := Storage.GetMetricNamesStats(qt, limit, le, matchPattern)
WG.Done()
@@ -602,10 +615,10 @@ func writeStorageMetrics(w io.Writer, strg *storage.Storage) {
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="big_timestamp"}`, m.TooBigTimestampRows)
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="small_timestamp"}`, m.TooSmallTimestampRows)
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="invalid_raw_metric_name"}`, m.InvalidRawMetricNames)
if *maxHourlySeries > 0 {
if getMaxHourlySeries() > 0 {
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="hourly_limit_exceeded"}`, m.HourlySeriesLimitRowsDropped)
}
if *maxDailySeries > 0 {
if getMaxDailySeries() > 0 {
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="daily_limit_exceeded"}`, m.DailySeriesLimitRowsDropped)
}
@@ -615,13 +628,13 @@ func writeStorageMetrics(w io.Writer, strg *storage.Storage) {
metrics.WriteCounterUint64(w, `vm_slow_row_inserts_total`, m.SlowRowInserts)
metrics.WriteCounterUint64(w, `vm_slow_per_day_index_inserts_total`, m.SlowPerDayIndexInserts)
if *maxHourlySeries > 0 {
if getMaxHourlySeries() > 0 {
metrics.WriteGaugeUint64(w, `vm_hourly_series_limit_current_series`, m.HourlySeriesLimitCurrentSeries)
metrics.WriteGaugeUint64(w, `vm_hourly_series_limit_max_series`, m.HourlySeriesLimitMaxSeries)
metrics.WriteCounterUint64(w, `vm_hourly_series_limit_rows_dropped_total`, m.HourlySeriesLimitRowsDropped)
}
if *maxDailySeries > 0 {
if getMaxDailySeries() > 0 {
metrics.WriteGaugeUint64(w, `vm_daily_series_limit_current_series`, m.DailySeriesLimitCurrentSeries)
metrics.WriteGaugeUint64(w, `vm_daily_series_limit_max_series`, m.DailySeriesLimitMaxSeries)
metrics.WriteCounterUint64(w, `vm_daily_series_limit_rows_dropped_total`, m.DailySeriesLimitRowsDropped)
@@ -746,3 +759,21 @@ func jsonResponseError(w http.ResponseWriter, err error) {
errStr := err.Error()
fmt.Fprintf(w, `{"status":"error","msg":%s}`, stringsutil.JSONString(errStr))
}
func getMaxHourlySeries() int {
limit := *maxHourlySeries
if limit == -1 || limit > math.MaxInt32 {
return math.MaxInt32
}
return int(limit)
}
func getMaxDailySeries() int {
limit := *maxDailySeries
if limit == -1 || limit > math.MaxInt32 {
return math.MaxInt32
}
return int(limit)
}

View File

@@ -1,4 +1,4 @@
FROM golang:1.26.1 AS build-web-stage
FROM golang:1.26.3 AS build-web-stage
COPY build /build
WORKDIR /build
@@ -6,7 +6,7 @@ COPY web/ /build/
RUN GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o web-amd64 github.com/VictoriMetrics/vmui/ && \
GOOS=windows GOARCH=amd64 CGO_ENABLED=0 go build -o web-windows github.com/VictoriMetrics/vmui/
FROM alpine:3.23.3
FROM alpine:3.23.4
USER root
COPY --from=build-web-stage /build/web-amd64 /app/web

File diff suppressed because it is too large

View File

@@ -23,14 +23,14 @@
"classnames": "^2.5.1",
"dayjs": "^1.11.20",
"lodash.debounce": "^4.0.8",
"marked": "^17.0.5",
"preact": "^10.29.0",
"qs": "^6.15.0",
"marked": "^18.0.2",
"preact": "^10.29.1",
"qs": "^6.15.1",
"react-input-mask": "^2.0.4",
"react-router-dom": "^7.13.2",
"react-router-dom": "^7.14.1",
"uplot": "^1.6.32",
"vite": "^8.0.2",
"web-vitals": "^5.1.0"
"vite": "^8.0.8",
"web-vitals": "^5.2.0"
},
"devDependencies": {
"@eslint/eslintrc": "^3.3.5",
@@ -39,24 +39,24 @@
"@testing-library/jest-dom": "^6.9.1",
"@testing-library/preact": "^3.2.4",
"@types/lodash.debounce": "^4.0.9",
"@types/node": "^25.5.0",
"@types/node": "^25.6.0",
"@types/qs": "^6.15.0",
"@types/react": "^19.2.14",
"@types/react-input-mask": "^3.0.6",
"@types/react-router-dom": "^5.3.3",
"@typescript-eslint/eslint-plugin": "^8.57.2",
"@typescript-eslint/parser": "^8.57.2",
"@typescript-eslint/eslint-plugin": "^8.58.2",
"@typescript-eslint/parser": "^8.58.2",
"cross-env": "^10.1.0",
"eslint": "^9.39.2",
"eslint-plugin-react": "^7.37.5",
"eslint-plugin-unused-imports": "^4.4.1",
"globals": "^17.4.0",
"globals": "^17.5.0",
"http-proxy-middleware": "^3.0.5",
"jsdom": "^29.0.1",
"postcss": "^8.5.8",
"sass-embedded": "^1.98.0",
"typescript": "^5.9.3",
"vitest": "^4.1.1"
"jsdom": "^29.0.2",
"postcss": "^8.5.10",
"sass-embedded": "^1.99.0",
"typescript": "^6.0.2",
"vitest": "^4.1.4"
},
"browserslist": {
"production": [

View File

@@ -16,23 +16,29 @@ export const getExportDataUrl = (server: string, query: string, period: TimePara
return `${server}/api/v1/export?${params}`;
};
export const getExportCSVDataUrl = (server: string, query: string[], period: TimeParams, reduceMemUsage: boolean): string => {
const getBaseParams = (period: TimeParams, query: string[]): URLSearchParams => {
const params = new URLSearchParams({
start: period.start.toString(),
end: period.end.toString(),
format: "__name__,__value__,__timestamp__:unix_ms",
});
query.forEach((q) => params.append("match[]", q));
return params;
};
export const getLabelsUrl = (server: string, query: string[], period: TimeParams): string => {
const params = getBaseParams(period, query);
return `${server}/api/v1/labels?${params}`;
};
export const getExportCSVDataUrl = (server: string, query: string[], period: TimeParams, reduceMemUsage: boolean, format: string): string => {
const params = getBaseParams(period, query);
params.set("format", format);
if (reduceMemUsage) params.set("reduce_mem_usage", "1");
return `${server}/api/v1/export/csv?${params}`;
};
export const getExportJSONDataUrl = (server: string, query: string[], period: TimeParams, reduceMemUsage: boolean): string => {
const params = new URLSearchParams({
start: period.start.toString(),
end: period.end.toString(),
});
query.forEach((q => params.append("match[]", q)));
const params = getBaseParams(period, query);
if (reduceMemUsage) params.set("reduce_mem_usage", "1");
return `${server}/api/v1/export?${params}`;
};

View File

@@ -0,0 +1,29 @@
import { describe, expect, it, vi } from "vitest";
import { fetchRawQueryCSVExport } from "./raw-query";
describe("fetchRawQueryCSVExport", () => {
it.skip("requests all label columns before exporting CSV data", async () => {
const fetchMock = vi.fn()
.mockResolvedValueOnce({
ok: true,
json: async () => ({ data: ["job", "__name__", "instance"] }),
})
.mockResolvedValueOnce({
ok: true,
text: async () => "up,localhost:9100,node_exporter,1,1710000000000",
});
const result = await fetchRawQueryCSVExport(
"http://localhost:8428",
["up"],
{ start: 1710000000, end: 1710000300, step: "15s", date: "2024-03-09T16:05:00Z" },
false,
fetchMock as unknown as typeof fetch,
);
expect(fetchMock).toHaveBeenCalledTimes(2);
expect(fetchMock.mock.calls[0][0]).toBe("http://localhost:8428/api/v1/labels?start=1710000000&end=1710000300&match%5B%5D=up");
expect(fetchMock.mock.calls[1][0]).toBe("http://localhost:8428/api/v1/export/csv?start=1710000000&end=1710000300&match%5B%5D=up&format=__name__%2Cinstance%2Cjob%2C__value__%2C__timestamp__%3Aunix_ms");
expect(result).toBe("up,localhost:9100,node_exporter,1,1710000000000");
});
});

View File

@@ -0,0 +1,31 @@
import { getExportCSVDataUrl, getLabelsUrl } from "./query-range";
import { TimeParams } from "../types";
import { getCSVExportColumns } from "../utils/csv";
interface LabelsResponse {
data?: string[];
}
export const fetchRawQueryCSVExport = async (
serverUrl: string,
query: string[],
period: TimeParams,
reduceMemUsage: boolean,
fetchFn: typeof fetch = fetch,
): Promise<string> => {
const labelsResponse = await fetchFn(getLabelsUrl(serverUrl, query, period));
if (!labelsResponse.ok) {
throw new Error(await labelsResponse.text());
}
const { data = [] } = (await labelsResponse.json()) as LabelsResponse;
const columns = getCSVExportColumns(data);
const format = columns.join(",");
const response = await fetchFn(getExportCSVDataUrl(serverUrl, query, period, reduceMemUsage, format));
if (!response.ok) {
throw new Error(await response.text());
}
return await response.text();
};

View File

@@ -1,7 +1,7 @@
import { useMemo } from "preact/compat";
import "./style.scss";
import { Alert as APIAlert } from "../../../types";
import { createSearchParams } from "react-router-dom";
import { Alert as APIAlert, Group } from "../../../types";
import { Link } from "react-router-dom";
import Button from "../../Main/Button/Button";
import Badges, { BadgeColor } from "../Badges";
import { formatEventTime } from "../helpers";
@@ -9,12 +9,14 @@ import {
SearchIcon,
} from "../../Main/Icons";
import CodeExample from "../../Main/CodeExample/CodeExample";
import router from "../../../router";
interface BaseAlertProps {
item: APIAlert;
group?: Group;
}
const BaseAlert = ({ item }: BaseAlertProps) => {
const BaseAlert = ({ item, group }: BaseAlertProps) => {
const query = item?.expression;
const alertLabels = item?.labels || {};
const alertLabelsItems = useMemo(() => {
@@ -24,13 +26,19 @@ const BaseAlert = ({ item }: BaseAlertProps) => {
}]));
}, [alertLabels]);
const openQueryLink = () => {
const params = {
const queryLink = useMemo(() => {
if (!group?.interval) return;
const params = new URLSearchParams({
"g0.expr": query,
"g0.end_time": ""
};
window.open(`#/?${createSearchParams(params).toString()}`, "_blank", "noopener noreferrer");
};
"g0.end_time": item.activeAt,
// group.interval is the Group's evaluation interval in seconds (float), as defined in the rules file. See: /app/vmalert/rule/web.go
"g0.step_input": `${group.interval}s`,
"g0.relative_time": "none",
});
return `${router.home}?${params.toString()}`;
}, [query, item.activeAt, group?.interval]);
return (
<div className="vm-explore-alerts-alert-item">
@@ -45,15 +53,22 @@ const BaseAlert = ({ item }: BaseAlertProps) => {
style={{ "text-align": "end" }}
colSpan={2}
>
<Button
size="small"
variant="outlined"
color="gray"
startIcon={<SearchIcon />}
onClick={openQueryLink}
>
<span className="vm-button-text">Run query</span>
</Button>
{queryLink && (
<Link
to={queryLink}
target={"_blank"}
rel="noreferrer"
>
<Button
size="small"
variant="outlined"
color="gray"
startIcon={<SearchIcon />}
>
<span className="vm-button-text">Run query</span>
</Button>
</Link>
)}
</td>
</tr>
<tr>

View File

@@ -1,19 +1,21 @@
import { useMemo } from "preact/compat";
import "./style.scss";
import { Rule as APIRule } from "../../../types";
import { useNavigate, createSearchParams } from "react-router-dom";
import { Group, Rule as APIRule } from "../../../types";
import { useNavigate, Link } from "react-router-dom";
import { SearchIcon, DetailsIcon } from "../../Main/Icons";
import Button from "../../Main/Button/Button";
import Alert from "../../Main/Alert/Alert";
import Badges, { BadgeColor } from "../Badges";
import { formatDuration, formatEventTime } from "../helpers";
import CodeExample from "../../Main/CodeExample/CodeExample";
import router from "../../../router";
interface BaseRuleProps {
item: APIRule;
group?: Group;
}
const BaseRule = ({ item }: BaseRuleProps) => {
const BaseRule = ({ item, group }: BaseRuleProps) => {
const query = item?.query;
const navigate = useNavigate();
const openAlertLink = (id: string) => {
@@ -33,13 +35,19 @@ const BaseRule = ({ item }: BaseRuleProps) => {
}]));
}, [ruleLabels]);
const openQueryLink = () => {
const params = {
const queryLink = useMemo(() => {
if (!group?.interval) return;
const params = new URLSearchParams({
"g0.expr": query,
"g0.end_time": ""
};
window.open(`#/?${createSearchParams(params).toString()}`, "_blank", "noopener noreferrer");
};
"g0.end_time": item.lastEvaluation,
// group.interval is the Group's evaluation interval in seconds (float), as defined in the rules file. See: /app/vmalert/rule/web.go
"g0.step_input": `${group.interval}s`,
"g0.relative_time": "none",
});
return `${router.home}?${params.toString()}`;
}, [query, item.lastEvaluation, group?.interval]);
return (
<div className="vm-explore-alerts-rule-item">
@@ -54,15 +62,22 @@ const BaseRule = ({ item }: BaseRuleProps) => {
style={{ "text-align": "end" }}
colSpan={2}
>
<Button
size="small"
variant="outlined"
color="gray"
startIcon={<SearchIcon />}
onClick={openQueryLink}
>
<span className="vm-button-text">Run query</span>
</Button>
{queryLink && (
<Link
to={queryLink}
target={"_blank"}
rel="noreferrer"
>
<Button
size="small"
variant="outlined"
color="gray"
startIcon={<SearchIcon />}
>
<span className="vm-button-text">Run query</span>
</Button>
</Link>
)}
</td>
</tr>
<tr>

View File

@@ -2,15 +2,16 @@ import { FC } from "preact/compat";
import ItemHeader from "../ItemHeader";
import Accordion from "../../Main/Accordion/Accordion";
import "./style.scss";
import { Rule as APIRule } from "../../../types";
import { Group, Rule as APIRule } from "../../../types";
import BaseRule from "../BaseRule";
interface RuleProps {
states: Record<string, number>;
rule: APIRule;
group: Group;
}
const Rule: FC<RuleProps> = ({ states, rule }) => {
const Rule: FC<RuleProps> = ({ states, rule, group }) => {
const state = Object.keys(states).length > 0 ? Object.keys(states)[0] : "ok";
return (
<div className={`vm-explore-alerts-rule vm-badge-item ${state.replace(" ", "-")}`}>
@@ -25,7 +26,10 @@ const Rule: FC<RuleProps> = ({ states, rule }) => {
name={rule.name}
/>}
>
<BaseRule item={rule} />
<BaseRule
item={rule}
group={group}
/>
</Accordion>
</div>
);

View File

@@ -50,7 +50,6 @@ const RulesHeader = ({
label="Rule type"
placeholder="Please select rule type"
onChange={onChangeRuleType}
autofocus={!!types.length && !isMobile}
includeAll
searchable
/>

View File

@@ -17,7 +17,7 @@ export const formatDuration = (raw: number) => {
export const formatEventTime = (raw: string) => {
const t = dayjs(raw);
return t.year() <= 1 ? "Never" : t.format("DD MMM YYYY HH:mm:ss");
return t.year() <= 1 ? "Never" : t.tz().format("DD MMM YYYY HH:mm:ss");
};
export const getStates = (rule: Rule) => {

View File

@@ -2,10 +2,11 @@ import Spinner from "../../components/Main/Spinner/Spinner";
import Alert from "../../components/Main/Alert/Alert";
import { useFetchItem } from "./hooks/useFetchItem";
import "./style.scss";
import { Alert as APIAlert } from "../../types";
import { Alert as APIAlert, Group as APIGroup } from "../../types";
import ItemHeader from "../../components/ExploreAlerts/ItemHeader";
import BaseAlert from "../../components/ExploreAlerts/BaseAlert";
import Modal from "../../components/Main/Modal/Modal";
import { useFetchGroup } from "./hooks/useFetchGroup";
interface ExploreAlertProps {
groupId: string;
@@ -17,10 +18,19 @@ interface ExploreAlertProps {
const ExploreAlert = ({ groupId, id, mode, onClose }: ExploreAlertProps) => {
const {
item,
isLoading,
error,
isLoading: isLoadingItem,
error: errorItem,
} = useFetchItem<APIAlert>({ groupId, id, mode });
const {
group,
isLoading: isLoadingGroup,
error: errorGroup,
} = useFetchGroup<APIGroup>({ id: groupId });
const error = errorItem || errorGroup;
const isLoading = isLoadingItem || isLoadingGroup;
if (isLoading) return (
<Spinner />
);
@@ -51,7 +61,12 @@ const ExploreAlert = ({ groupId, id, mode, onClose }: ExploreAlertProps) => {
onClose={onClose}
>
<div className="vm-explore-alerts">
{item && (<BaseAlert item={item} />) || (
{item ? (
<BaseAlert
item={item}
group={group}
/>
) : (
<Alert variant="info">{noItemFound}</Alert>
)}
</div>

View File

@@ -2,11 +2,12 @@ import Spinner from "../../components/Main/Spinner/Spinner";
import Alert from "../../components/Main/Alert/Alert";
import { useFetchItem } from "./hooks/useFetchItem";
import "./style.scss";
import { Rule as APIRule } from "../../types";
import { Group as APIGroup, Rule as APIRule } from "../../types";
import ItemHeader from "../../components/ExploreAlerts/ItemHeader";
import BaseRule from "../../components/ExploreAlerts/BaseRule";
import Modal from "../../components/Main/Modal/Modal";
import { getStates } from "../../components/ExploreAlerts/helpers";
import { useFetchGroup } from "./hooks/useFetchGroup";
interface ExploreRuleProps {
groupId: string;
@@ -18,10 +19,19 @@ interface ExploreRuleProps {
const ExploreRule = ({ groupId, id, mode, onClose }: ExploreRuleProps) => {
const {
item,
isLoading,
error,
isLoading: isLoadingItem,
error: errorItem,
} = useFetchItem<APIRule>({ groupId, id, mode });
const {
group,
isLoading: isLoadingGroup,
error: errorGroup,
} = useFetchGroup<APIGroup>({ id: groupId });
const error = errorItem || errorGroup;
const isLoading = isLoadingItem || isLoadingGroup;
if (isLoading) return (
<Spinner />
);
@@ -49,7 +59,12 @@ const ExploreRule = ({ groupId, id, mode, onClose }: ExploreRuleProps) => {
onClose={onClose}
>
<div className="vm-explore-alerts">
{item && (<BaseRule item={item} />) || (
{item ? (
<BaseRule
item={item}
group={group}
/>
) : (
<Alert variant="info">{noItemFound}</Alert>
)}
</div>

View File

@@ -132,7 +132,7 @@ const ExploreRules: FC = () => {
newParams.set("page_num", "1");
setSearchParams(newParams);
const changes = getChanges(title, states);
setStates(changes.length == allStates.length ? [] : changes);
setStates(changes.length === allStates.length ? [] : changes);
}, [states, searchParams]);
const handleChangeRuleType = useCallback((title: string) => {
@@ -186,6 +186,7 @@ const ExploreRules: FC = () => {
<Rule
key={`rule-${rule.id}`}
rule={rule}
group={group}
states={getStates(rule)}
/>
))}

View File

@@ -6,10 +6,11 @@ import { useTimeState } from "../../../state/time/TimeStateContext";
import { useAppState } from "../../../state/common/StateContext";
import { useCustomPanelState } from "../../../state/customPanel/CustomPanelStateContext";
import { isValidHttpUrl } from "../../../utils/url";
import { getExportCSVDataUrl, getExportDataUrl, getExportJSONDataUrl } from "../../../api/query-range";
import { getExportDataUrl, getExportJSONDataUrl } from "../../../api/query-range";
import { parseLineToJSON } from "../../../utils/json";
import { downloadCSV, downloadJSON } from "../../../utils/file";
import { useSnack } from "../../../contexts/Snackbar";
import { fetchRawQueryCSVExport } from "../../../api/raw-query";
interface FetchQueryParams {
hideQuery?: number[];
@@ -67,11 +68,8 @@ export const useFetchExport = ({ hideQuery, showAllSeries }: FetchQueryParams):
const getFilename = (format: ExportFormats) => `vmui_export_${query.join("_")}_${period.start}_${period.end}.${format}`;
return {
csv: async () => {
const url = getExportCSVDataUrl(serverUrl, query, period, reduceMemUsage);
const response = await fetch(url);
try {
let text = await response.text();
text = "name,value,timestamp\n" + text;
const text = await fetchRawQueryCSVExport(serverUrl, query, period, reduceMemUsage);
downloadCSV(text, getFilename("csv"));
} catch (e) {
console.error(e);

View File

@@ -1,47 +1,18 @@
import { ArrayRGB } from "../types";
export const baseContrastColors = [
"#e54040",
"#32a9dc",
"#2ee329",
"#7126a1",
"#e38f0f",
"#3d811a",
"#ffea00",
"#2d2d2d",
"#da42a6",
"#a44e0c",
"#e6194b", // red
"#4363d8", // blue
"#3cb44b", // green
"#911eb4", // purple
"#f58231", // orange
"#f032e6", // magenta
"#c8a200", // dark yellow
"#a65628", // brown
"#42d4f4", // cyan
"#a9a9a9", // gray
];
export const hexToRGB = (hex: string): string => {
if (hex.length != 7) return "0, 0, 0";
const r = parseInt(hex.slice(1, 3), 16);
const g = parseInt(hex.slice(3, 5), 16);
const b = parseInt(hex.slice(5, 7), 16);
return `${r}, ${g}, ${b}`;
};
export const getColorFromString = (text: string): string => {
const SEED = 16777215;
const FACTOR = 49979693;
let b = 1;
let d = 0;
let f = 1;
if (text.length > 0) {
for (let i = 0; i < text.length; i++) {
text[i].charCodeAt(0) > d && (d = text[i].charCodeAt(0));
f = parseInt(String(SEED / d));
b = (b + text[i].charCodeAt(0) * f * FACTOR) % SEED;
}
}
let hex = ((b * text.length) % SEED).toString(16);
hex = hex.padEnd(6, hex);
return `#${hex}`;
};
export const getContrastColor = (value: string) => {
let hex = value.replace("#", "").trim();
@@ -70,3 +41,109 @@ export const generateGradient = (start: ArrayRGB, end: ArrayRGB, steps: number)
}
return gradient.map(c => `rgb(${c})`);
};
const clamp = (n: number, min: number, max: number) => Math.min(max, Math.max(min, n));
const hexToRgb = (hex: string) => {
let value = hex.replace("#", "").trim();
if (value.length === 3) {
value = value.split("").map((c) => c + c).join("");
}
if (!/^[0-9a-fA-F]{6}$/.test(value)) {
throw new Error("Invalid HEX color.");
}
return {
r: parseInt(value.slice(0, 2), 16),
g: parseInt(value.slice(2, 4), 16),
b: parseInt(value.slice(4, 6), 16),
};
};
const rgbToHex = (r: number, g: number, b: number) =>
`#${[r, g, b].map((v) => clamp(Math.round(v), 0, 255).toString(16).padStart(2, "0")).join("")}`;
const rgbToHsl = (r: number, g: number, b: number) => {
r /= 255; g /= 255; b /= 255;
const max = Math.max(r, g, b);
const min = Math.min(r, g, b);
const l = (max + min) / 2;
const d = max - min;
let h = 0;
let s = 0;
if (d !== 0) {
s = d / (1 - Math.abs(2 * l - 1));
switch (max) {
case r: h = ((g - b) / d) % 6; break;
case g: h = (b - r) / d + 2; break;
case b: h = (r - g) / d + 4; break;
}
h *= 60;
if (h < 0) h += 360;
}
return { h, s: s * 100, l: l * 100 };
};
const hslToRgb = (h: number, s: number, l: number) => {
s /= 100;
l /= 100;
const c = (1 - Math.abs(2 * l - 1)) * s;
const x = c * (1 - Math.abs((h / 60) % 2 - 1));
const m = l - c / 2;
let r: number;
let g: number;
let b: number;
if (h < 60) [r, g, b] = [c, x, 0];
else if (h < 120) [r, g, b] = [x, c, 0];
else if (h < 180) [r, g, b] = [0, c, x];
else if (h < 240) [r, g, b] = [0, x, c];
else if (h < 300) [r, g, b] = [x, 0, c];
else [r, g, b] = [c, 0, x];
return {
r: (r + m) * 255,
g: (g + m) * 255,
b: (b + m) * 255,
};
};
const varyColor = (hex: string, variant: number) => {
const { r, g, b } = hexToRgb(hex);
const { h, s, l } = rgbToHsl(r, g, b);
const variants = [
{ ds: 0, dl: 0 },
{ ds: -20, dl: -16 },
{ ds: -16, dl: +16 },
{ ds: +14, dl: -20 },
];
const v = variants[variant % variants.length];
const nextS = clamp(s + v.ds, 35, 85);
const nextL = clamp(l + v.dl, 35, 70);
const rgb = hslToRgb(h, nextS, nextL);
return rgbToHex(rgb.r, rgb.g, rgb.b);
};
export const getSeriesColor = (index: number) => {
const baseCount = baseContrastColors.length;
const baseIndex = index % baseCount;
const variantIndex = Math.floor(index / baseCount);
const base = baseContrastColors[(baseIndex + variantIndex) % baseCount];
return varyColor(base, variantIndex);
};
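The palette cycling in `getSeriesColor` above can be sketched in isolation. This is a minimal sketch of the index arithmetic only (the HSL-based `varyColor` step is omitted); `baseCount = 10` assumes the 10-entry `baseContrastColors` palette shown above, and `pickBase` is a hypothetical helper name introduced here for illustration.

```typescript
// Sketch of the palette-cycling rule: the first 10 series take the 10 base
// colors; each subsequent pass shifts the starting base color by one slot
// and applies the next saturation/lightness variant (4 variants, cycling).
const baseCount = 10; // length of baseContrastColors in the diff above

const pickBase = (index: number) => {
  const baseIndex = index % baseCount;            // position within the palette
  const variantIndex = Math.floor(index / baseCount); // how many full passes done
  return {
    baseSlot: (baseIndex + variantIndex) % baseCount, // shifted base color slot
    variant: variantIndex % 4,                        // which ds/dl variant applies
  };
};

console.log(pickBase(0));  // first series: slot 0, unmodified variant 0
console.log(pickBase(10)); // second pass starts shifted: slot 1, variant 1
console.log(pickBase(21)); // third pass, second series: slot 3, variant 2
```

The shift by `variantIndex` keeps adjacent series on different hues even after the palette wraps around.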


@@ -1,5 +1,5 @@
import { describe, expect, it } from "vitest";
import { formatValueToCSV } from "./csv";
import { formatValueToCSV, getCSVExportColumns } from "./csv";
describe("formatValueToCSV", () => {
it("should wrap value in quotes if it contains a comma", () => {
@@ -32,3 +32,10 @@ describe("formatValueToCSV", () => {
expect(result).toBe("");
});
});
describe("getCSVExportColumns", () => {
it("should prepend metric name and append value and timestamp columns", () => {
const result = getCSVExportColumns(["instance", "__name__", "job", "instance"]);
expect(result.join(",")).toEqual("__name__,instance,job,__value__,__timestamp__:unix_ms");
});
});


@@ -2,3 +2,8 @@ export const formatValueToCSV= (value: string) =>
(value.includes(",") || value.includes("\n") || value.includes("\""))
? "\"" + value.replace(/"/g, "\"\"") + "\""
: value;
export const getCSVExportColumns = (labelNames: string[]) => {
const labels = Array.from(new Set(labelNames.filter((label) => label && label !== "__name__"))).sort();
return ["__name__", ...labels, "__value__", "__timestamp__:unix_ms"];
};
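The column-ordering rule added above is small enough to exercise standalone. This sketch repeats the `getCSVExportColumns` logic from the diff: label names are de-duplicated, `__name__` is filtered out of the middle and pinned first, the remaining labels are sorted, and the value/timestamp columns are appended.

```typescript
// Same logic as the getCSVExportColumns helper in the diff above.
const getCSVExportColumns = (labelNames: string[]): string[] => {
  const labels = Array.from(
    new Set(labelNames.filter((label) => label && label !== "__name__"))
  ).sort();
  return ["__name__", ...labels, "__value__", "__timestamp__:unix_ms"];
};

console.log(getCSVExportColumns(["instance", "__name__", "job", "instance"]).join(","));
// __name__,instance,job,__value__,__timestamp__:unix_ms
```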


@@ -2,7 +2,7 @@ import { MetricBase, MetricResult } from "../../api/types";
import uPlot, { Series as uPlotSeries } from "uplot";
import { getNameForMetric, promValueToNumber } from "../metric";
import { HideSeriesArgs, LegendItemType, SeriesItem } from "../../types";
import { baseContrastColors, getColorFromString } from "../color";
import { getSeriesColor } from "../color";
import { getMathStats } from "../math";
import { formatPrettyNumber } from "./helpers";
import { drawPoints } from "./scatter";
@@ -17,11 +17,10 @@ export const extractFields = (metric: MetricBase["metric"]): string => {
export const getSeriesItemContext = (data: MetricResult[], hideSeries: string[], alias: string[], showPoints?: boolean, isRawQuery?: boolean) => {
const colorState: {[key: string]: string} = {};
const maxColors = Math.min(data.length, baseContrastColors.length);
for (let i = 0; i < maxColors; i++) {
for (let i = 0; i < data.length; i++) {
const label = getNameForMetric(data[i], alias[data[i].group - 1]);
colorState[label] = baseContrastColors[i];
colorState[label] = getSeriesColor(i);
}
return (d: MetricResult): SeriesItem => {
@@ -32,7 +31,7 @@ export const getSeriesItemContext = (data: MetricResult[], hideSeries: string[],
label,
hasAlias: Boolean(aliasValue),
width: 1.4,
stroke: colorState[label] || getColorFromString(label),
stroke: colorState[label],
points: getPointsSeries(showPoints, isRawQuery),
spanGaps: false,
freeFormFields: d.metric,


@@ -15,13 +15,12 @@
"forceConsistentCasingInFileNames": true,
"noFallthroughCasesInSwitch": true,
"module": "esnext",
"moduleResolution": "node",
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"noEmit": true,
"jsx": "react-jsx",
"jsxImportSource": "preact",
"downlevelIteration": true,
"noUnusedLocals": true,
"paths": {
"react": ["./node_modules/preact/compat/"],
@@ -32,5 +31,8 @@
},
"include": [
"src"
],
"exclude": [
"scripts/**/*.ts"
]
}


@@ -2,6 +2,7 @@ package apptest
import (
"bytes"
"fmt"
"io"
"net"
"net/http"
@@ -33,37 +34,41 @@ func (c *Client) CloseConnections() {
c.httpCli.CloseIdleConnections()
}
// Get sends a HTTP GET request, returns
// Get sends an HTTP GET request, returns
// the response body and status code to the caller.
func (c *Client) Get(t *testing.T, url string) (string, int) {
func (c *Client) Get(t *testing.T, url string, headers http.Header) (string, int) {
t.Helper()
return c.do(t, http.MethodGet, url, "", nil)
return c.do(t, http.MethodGet, url, nil, headers)
}
// Post sends a HTTP POST request, returns
// Post sends an HTTP POST request, returns
// the response body and status code to the caller.
func (c *Client) Post(t *testing.T, url, contentType string, data []byte) (string, int) {
func (c *Client) Post(t *testing.T, url string, data []byte, headers http.Header) (string, int) {
t.Helper()
return c.do(t, http.MethodPost, url, contentType, data)
return c.do(t, http.MethodPost, url, data, headers)
}
// PostForm sends a HTTP POST request containing the POST-form data, returns
// PostForm sends an HTTP POST request containing the POST-form data with attached headers, returns
// the response body and status code to the caller.
func (c *Client) PostForm(t *testing.T, url string, data url.Values) (string, int) {
func (c *Client) PostForm(t *testing.T, url string, data url.Values, headers http.Header) (string, int) {
t.Helper()
return c.Post(t, url, "application/x-www-form-urlencoded", []byte(data.Encode()))
if headers == nil {
headers = make(http.Header)
}
headers.Set("Content-Type", "application/x-www-form-urlencoded")
return c.Post(t, url, []byte(data.Encode()), headers)
}
// Delete sends a HTTP DELETE request and returns the response body and status code
// Delete sends an HTTP DELETE request and returns the response body and status code
// to the caller.
func (c *Client) Delete(t *testing.T, url string) (string, int) {
t.Helper()
return c.do(t, http.MethodDelete, url, "", nil)
return c.do(t, http.MethodDelete, url, nil, nil)
}
// do prepares a HTTP request, sends it to the server, receives the response
// do prepares an HTTP request, sends it to the server, receives the response
// from the server, returns the response body and status code to the caller.
func (c *Client) do(t *testing.T, method, url, contentType string, data []byte) (string, int) {
func (c *Client) do(t *testing.T, method, url string, data []byte, headers http.Header) (string, int) {
t.Helper()
req, err := http.NewRequest(method, url, bytes.NewReader(data))
@@ -71,9 +76,7 @@ func (c *Client) do(t *testing.T, method, url, contentType string, data []byte)
t.Fatalf("could not create a HTTP request: %v", err)
}
if len(contentType) > 0 {
req.Header.Add("Content-Type", contentType)
}
req.Header = headers
res, err := c.httpCli.Do(req)
if err != nil {
t.Fatalf("could not send HTTP request: %v", err)
@@ -103,6 +106,35 @@ func (c *Client) Write(t *testing.T, address string, data []string) {
}
}
// getClusterPath returns a path in the cluster's URL format.
// Based on QueryOpts, it either puts the tenant ID into the URL
// or skips it when the tenant is set via HTTP headers.
func getClusterPath(addr, prefix, suffix string, o QueryOpts) string {
if o.Tenant != "" {
// QueryOpts.Tenant has priority over headers
return tenantViaURL(addr, prefix, o.Tenant, suffix)
}
h := o.getHeaders()
if h.Get("AccountID") != "" || h.Get("ProjectID") != "" {
return tenantViaHeaders(addr, prefix, suffix)
}
// tenant is missing in QueryOpts and in HTTP headers. Falling back to default 0:0 tenant in URL
return tenantViaURL(addr, prefix, "0:0", suffix)
}
// tenantViaURL returns path in cluster's URL format with tenant specified in URL
func tenantViaURL(addr, prefix, tenant, suffix string) string {
return fmt.Sprintf("http://%s/%s/%s/%s", addr, prefix, tenant, suffix)
}
// tenantViaHeaders returns path in cluster's URL format where tenant is omitted in URL
// Only supported if -enableMultitenancyViaHeaders is specified
func tenantViaHeaders(addr, prefix, suffix string) string {
return fmt.Sprintf("http://%s/%s/%s", addr, prefix, suffix)
}
// readAllAndClose reads everything from the response body and then closes it.
func readAllAndClose(t *testing.T, responseBody io.ReadCloser) string {
t.Helper()
@@ -135,7 +167,7 @@ func (app *ServesMetrics) GetIntMetric(t *testing.T, metricName string) int {
func (app *ServesMetrics) GetMetric(t *testing.T, metricName string) float64 {
t.Helper()
metrics, statusCode := app.cli.Get(t, app.metricsURL)
metrics, statusCode := app.cli.Get(t, app.metricsURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
@@ -161,7 +193,7 @@ func (app *ServesMetrics) GetMetricsByPrefix(t *testing.T, prefix string) []floa
values := []float64{}
metrics, statusCode := app.cli.Get(t, app.metricsURL)
metrics, statusCode := app.cli.Get(t, app.metricsURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
@@ -190,7 +222,7 @@ func (app *ServesMetrics) GetMetricsByRegexp(t *testing.T, re *regexp.Regexp) []
values := []float64{}
metrics, statusCode := app.cli.Get(t, app.metricsURL)
metrics, statusCode := app.cli.Get(t, app.metricsURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
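The tenant-path resolution added to the Go test client above has three branches worth seeing end to end. The following is a hedged TypeScript sketch mirroring the `getClusterPath` helper from the diff (not the real implementation, which lives in Go): an explicit `QueryOpts.Tenant` wins, then `AccountID`/`ProjectID` headers switch to the header-based URL form, otherwise the default `0:0` tenant goes into the URL.

```typescript
// Mirrors the branch order of the Go getClusterPath test helper above.
const getClusterPath = (
  addr: string,
  prefix: string,
  suffix: string,
  tenant: string,
  headers: Record<string, string>
): string => {
  if (tenant !== "") {
    // explicit tenant has priority over headers
    return `http://${addr}/${prefix}/${tenant}/${suffix}`;
  }
  if (headers["AccountID"] || headers["ProjectID"]) {
    // tenant is passed via HTTP headers, so it is omitted from the URL
    return `http://${addr}/${prefix}/${suffix}`;
  }
  // fall back to the default 0:0 tenant in the URL
  return `http://${addr}/${prefix}/0:0/${suffix}`;
};
```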


@@ -4,6 +4,7 @@ import (
"encoding/json"
"fmt"
"math"
"net/http"
"net/url"
"slices"
"sort"
@@ -88,6 +89,15 @@ type QueryOpts struct {
MaxLookback string
LatencyOffset string
Format string
NoCache string
Headers http.Header
}
func (qos *QueryOpts) getHeaders() http.Header {
if qos.Headers == nil {
qos.Headers = make(http.Header)
}
return qos.Headers
}
func (qos *QueryOpts) asURLValues() url.Values {
@@ -112,18 +122,11 @@ func (qos *QueryOpts) asURLValues() url.Values {
addNonEmpty("max_lookback", qos.MaxLookback)
addNonEmpty("latency_offset", qos.LatencyOffset)
addNonEmpty("format", qos.Format)
addNonEmpty("nocache", qos.NoCache)
return uv
}
// getTenant returns tenant with optional default value
func (qos *QueryOpts) getTenant() string {
if qos.Tenant == "" {
return "0"
}
return qos.Tenant
}
// PrometheusAPIV1QueryResponse is an inmemory representation of the
// /prometheus/api/v1/query or /prometheus/api/v1/query_range response.
type PrometheusAPIV1QueryResponse struct {


@@ -6,6 +6,7 @@ import (
"os"
"path"
"path/filepath"
"slices"
"sync"
"testing"
"time"
@@ -87,11 +88,11 @@ func (tc *TestCase) MustStartDefaultVmsingle() *Vmsingle {
}
// MustStartVmsingle is a test helper function that starts an instance of
// vmsingle located at ../../bin/victoria-metrics and fails the test if the app
// vmsingle located at ../../bin/victoria-metrics-race and fails the test if the app
// fails to start.
func (tc *TestCase) MustStartVmsingle(instance string, flags []string) *Vmsingle {
tc.t.Helper()
return tc.MustStartVmsingleAt(instance, "../../bin/victoria-metrics", flags)
return tc.MustStartVmsingleAt(instance, "../../bin/victoria-metrics-race", flags)
}
// MustStartVmsingleAt is a test helper function that starts an instance of
@@ -108,11 +109,11 @@ func (tc *TestCase) MustStartVmsingleAt(instance, binary string, flags []string)
}
// MustStartVmstorage is a test helper function that starts an instance of
// vmstorage located at ../../bin/vmstorage and fails the test if the app fails
// vmstorage located at ../../bin/vmstorage-race and fails the test if the app fails
// to start.
func (tc *TestCase) MustStartVmstorage(instance string, flags []string) *Vmstorage {
tc.t.Helper()
return tc.MustStartVmstorageAt(instance, "../../bin/vmstorage", flags)
return tc.MustStartVmstorageAt(instance, "../../bin/vmstorage-race", flags)
}
// MustStartVmstorageAt is a test helper function that starts an instance of
@@ -169,6 +170,18 @@ func (tc *TestCase) MustStartVmagent(instance string, flags []string, promScrape
return app
}
// MustStartDefaultRWVmagent is a test helper function that starts an instance of
// vmagent with defaults suitable for remote-write tests.
func (tc *TestCase) MustStartDefaultRWVmagent(instance string, flags []string) *Vmagent {
tc.t.Helper()
defaultFlags := []string{
"-remoteWrite.flushInterval=50ms",
}
defaultFlags = slices.Concat(defaultFlags, flags)
return tc.MustStartVmagent(instance, defaultFlags, ``)
}
// Vmcluster represents a typical cluster setup: several vmstorage replicas, one
// vminsert, and one vmselect.
//
@@ -293,12 +306,12 @@ func (tc *TestCase) MustStartCluster(opts *ClusterOptions) *Vmcluster {
tc.t.Helper()
if opts.Vmstorage1Binary == "" {
opts.Vmstorage1Binary = "../../bin/vmstorage"
opts.Vmstorage1Binary = "../../bin/vmstorage-race"
}
vmstorage1 := tc.MustStartVmstorageAt(opts.Vmstorage1Instance, opts.Vmstorage1Binary, opts.Vmstorage1Flags)
if opts.Vmstorage2Binary == "" {
opts.Vmstorage2Binary = "../../bin/vmstorage"
opts.Vmstorage2Binary = "../../bin/vmstorage-race"
}
vmstorage2 := tc.MustStartVmstorageAt(opts.Vmstorage2Instance, opts.Vmstorage2Binary, opts.Vmstorage2Flags)


@@ -28,7 +28,7 @@ func TestSingleBackupRestore(t *testing.T) {
return tc.MustStartVmsingle("vmsingle", []string{
"-storageDataPath=" + storageDataPath,
"-retentionPeriod=100y",
"-search.maxStalenessInterval=1m",
"-futureRetention=2y",
})
},
stopSUT: func() {
@@ -61,18 +61,18 @@ func TestClusterBackupRestore(t *testing.T) {
Vmstorage1Flags: []string{
"-storageDataPath=" + storage1DataPath,
"-retentionPeriod=100y",
"-futureRetention=2y",
},
Vmstorage2Instance: "vmstorage2",
Vmstorage2Flags: []string{
"-storageDataPath=" + storage2DataPath,
"-retentionPeriod=100y",
"-futureRetention=2y",
},
VminsertInstance: "vminsert",
VminsertFlags: []string{},
VmselectInstance: "vmselect",
VmselectFlags: []string{
"-search.maxStalenessInterval=1m",
},
VmselectFlags: []string{},
})
},
stopSUT: func() {
@@ -100,15 +100,20 @@ func TestClusterBackupRestore(t *testing.T) {
func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
t := tc.T()
const msecPerMinute = 60 * 1000
genData := func(count int, prefix string, start int64) (recs []string, wantSeries []map[string]string, wantQueryResults []*apptest.QueryResult) {
recs = make([]string, count)
wantSeries = make([]map[string]string, count)
wantQueryResults = make([]*apptest.QueryResult, count)
type data struct {
samples []string
wantSeries []map[string]string
wantQueryResults []*apptest.QueryResult
}
genData := func(count int, prefix string, start, step int64) data {
recs := make([]string, count)
wantSeries := make([]map[string]string, count)
wantQueryResults := make([]*apptest.QueryResult, count)
for i := range count {
name := fmt.Sprintf("%s_%03d", prefix, i)
value := float64(i)
timestamp := start + int64(i)*msecPerMinute
timestamp := start + int64(i)*step
recs[i] = fmt.Sprintf("%s %f %d", name, value, timestamp)
wantSeries[i] = map[string]string{"__name__": name}
@@ -117,7 +122,15 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
Samples: []*apptest.Sample{{Timestamp: timestamp, Value: value}},
}
}
return recs, wantSeries, wantQueryResults
return data{recs, wantSeries, wantQueryResults}
}
concatData := func(d1, d2 data) data {
var d data
d.samples = slices.Concat(d1.samples, d2.samples)
d.wantSeries = slices.Concat(d1.wantSeries, d2.wantSeries)
d.wantQueryResults = slices.Concat(d1.wantQueryResults, d2.wantQueryResults)
return d
}
backupBaseDir, err := filepath.Abs(filepath.Join(tc.Dir(), "backups"))
@@ -148,15 +161,17 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
// assertSeries retrieves all data from the storage and compares it with the
// expected result.
assertQueryResults := func(app apptest.PrometheusQuerier, query string, start, end int64, want []*apptest.QueryResult) {
assertQueryResults := func(app apptest.PrometheusQuerier, query string, start, end, step int64, want []*apptest.QueryResult) {
t.Helper()
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/query_range response",
Got: func() any {
return app.PrometheusAPIV1QueryRange(t, query, apptest.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
Step: "60s",
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
Step: fmt.Sprintf("%dms", step),
MaxLookback: fmt.Sprintf("%dms", step-1),
NoCache: "1",
})
},
Want: &apptest.PrometheusAPIV1QueryResponse{
@@ -167,7 +182,6 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
},
},
FailNow: true,
Retries: 300,
})
}
@@ -193,9 +207,20 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
// Use the same number of metrics and time range for all the data ingestions
// below.
const numMetrics = 1000
// With 1000 metrics (one per minute), the time range spans 2 months.
end := time.Date(2025, 3, 1, 10, 0, 0, 0, time.UTC).UnixMilli()
start := end - numMetrics*msecPerMinute
start := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
end := time.Date(2025, 3, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
step := (end - start) / numMetrics
batch1 := genData(numMetrics, "batch1", start, step)
batch2 := genData(numMetrics, "batch2", start, step)
batches12 := concatData(batch1, batch2)
now := time.Now().UTC()
startFuture := time.Date(now.Year()+1, 1, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
endFuture := time.Date(now.Year()+1, 3, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
stepFuture := (endFuture - startFuture) / numMetrics
batch1Future := genData(numMetrics, "batch1", startFuture, stepFuture)
batch2Future := genData(numMetrics, "batch2", startFuture, stepFuture)
batches12Future := concatData(batch1Future, batch2Future)
// Verify backup/restore:
//
@@ -209,23 +234,25 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
// - Start vmsingle
// - Ensure that the queries return batch1 data only.
batch1Data, wantBatch1Series, wantBatch1QueryResults := genData(numMetrics, "batch1", start)
batch2Data, wantBatch2Series, wantBatch2QueryResults := genData(numMetrics, "batch2", start)
wantBatch12Series := slices.Concat(wantBatch1Series, wantBatch2Series)
wantBatch12QueryResults := slices.Concat(wantBatch1QueryResults, wantBatch2QueryResults)
sut := opts.startSUT()
sut.PrometheusAPIV1ImportPrometheus(t, batch1Data, apptest.QueryOpts{})
sut.PrometheusAPIV1ImportPrometheus(t, batch1.samples, apptest.QueryOpts{})
sut.PrometheusAPIV1ImportPrometheus(t, batch1Future.samples, apptest.QueryOpts{})
sut.ForceFlush(t)
assertSeries(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1Series)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1QueryResults)
assertSeries(sut, `{__name__=~"batch1.*"}`, start, end, batch1.wantSeries)
assertSeries(sut, `{__name__=~"batch1.*"}`, startFuture, endFuture, batch1Future.wantSeries)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, step, batch1.wantQueryResults)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, startFuture, endFuture, stepFuture, batch1Future.wantQueryResults)
createBackup(sut, "batch1")
sut.PrometheusAPIV1ImportPrometheus(t, batch2Data, apptest.QueryOpts{})
sut.PrometheusAPIV1ImportPrometheus(t, batch2.samples, apptest.QueryOpts{})
sut.PrometheusAPIV1ImportPrometheus(t, batch2Future.samples, apptest.QueryOpts{})
sut.ForceFlush(t)
assertSeries(sut, `{__name__=~"batch(1|2).*"}`, start, end, wantBatch12Series)
assertQueryResults(sut, `{__name__=~"batch(1|2).*"}`, start, end, wantBatch12QueryResults)
assertSeries(sut, `{__name__=~"batch(1|2).*"}`, start, end, batches12.wantSeries)
assertSeries(sut, `{__name__=~"batch(1|2).*"}`, startFuture, endFuture, batches12Future.wantSeries)
assertQueryResults(sut, `{__name__=~"batch(1|2).*"}`, start, end, step, batches12.wantQueryResults)
assertQueryResults(sut, `{__name__=~"batch(1|2).*"}`, startFuture, endFuture, stepFuture, batches12Future.wantQueryResults)
createBackup(sut, "batch12")
opts.stopSUT()
@@ -234,6 +261,8 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
sut = opts.startSUT()
assertSeries(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1Series)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1QueryResults)
assertSeries(sut, `{__name__=~"batch(1|2).*"}`, start, end, batch1.wantSeries)
assertSeries(sut, `{__name__=~"batch(1|2).*"}`, startFuture, endFuture, batch1Future.wantSeries)
assertQueryResults(sut, `{__name__=~"batch(1|2).*"}`, start, end, step, batch1.wantQueryResults)
assertQueryResults(sut, `{__name__=~"batch(1|2).*"}`, startFuture, endFuture, stepFuture, batch1Future.wantQueryResults)
}


@@ -0,0 +1,211 @@
package tests
import (
"fmt"
"path/filepath"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/apptest"
)
func TestSingleFutureTimestamps(t *testing.T) {
tc := apptest.NewTestCase(t)
defer tc.Stop()
opts := testFutureTimestampsOpts{
start: func() apptest.PrometheusWriteQuerier {
return tc.MustStartVmsingle("vmsingle", []string{
"-storageDataPath=" + filepath.Join(tc.Dir(), "vmsingle"),
"-retentionPeriod=100y",
"-futureRetention=100y",
})
},
stop: func() {
tc.StopApp("vmsingle")
},
}
testFutureTimestamps(tc, opts)
}
func TestClusterFutureTimestamps(t *testing.T) {
tc := apptest.NewTestCase(t)
defer tc.Stop()
opts := testFutureTimestampsOpts{
start: func() apptest.PrometheusWriteQuerier {
return tc.MustStartCluster(&apptest.ClusterOptions{
Vmstorage1Instance: "vmstorage1",
Vmstorage1Flags: []string{
"-storageDataPath=" + filepath.Join(tc.Dir(), "vmstorage1"),
"-retentionPeriod=100y",
"-futureRetention=100y",
},
Vmstorage2Instance: "vmstorage2",
Vmstorage2Flags: []string{
"-storageDataPath=" + filepath.Join(tc.Dir(), "vmstorage2"),
"-retentionPeriod=100y",
"-futureRetention=100y",
},
VminsertInstance: "vminsert",
VminsertFlags: []string{},
VmselectInstance: "vmselect",
VmselectFlags: []string{},
})
},
stop: func() {
tc.StopApp("vminsert")
tc.StopApp("vmselect")
tc.StopApp("vmstorage1")
tc.StopApp("vmstorage2")
},
}
testFutureTimestamps(tc, opts)
}
type testFutureTimestampsOpts struct {
start func() apptest.PrometheusWriteQuerier
stop func()
}
func testFutureTimestamps(tc *apptest.TestCase, opts testFutureTimestampsOpts) {
t := tc.T()
// assertSeries retrieves the set of all metric names from the storage and
// compares it with the expected set.
assertSeries := func(app apptest.PrometheusQuerier, prefix string, start, end int64, want []map[string]string) {
t.Helper()
query := fmt.Sprintf(`{__name__=~"metric_%s.*"}`, prefix)
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/series response",
Got: func() any {
return app.PrometheusAPIV1Series(t, query, apptest.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
}).Sort()
},
Want: &apptest.PrometheusAPIV1SeriesResponse{
Status: "success",
Data: want,
},
FailNow: true,
})
}
// assertQueryResults retrieves all data from the storage and compares it with the
// expected result.
assertQueryResults := func(app apptest.PrometheusQuerier, prefix string, start, end, step int64, want []*apptest.QueryResult) {
t.Helper()
query := fmt.Sprintf(`{__name__=~"metric_%s.*"}`, prefix)
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/query_range response",
Got: func() any {
return app.PrometheusAPIV1QueryRange(t, query, apptest.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
Step: fmt.Sprintf("%dms", step),
MaxLookback: fmt.Sprintf("%dms", step-1),
NoCache: "1",
})
},
Want: &apptest.PrometheusAPIV1QueryResponse{
Status: "success",
Data: &apptest.QueryData{
ResultType: "matrix",
Result: want,
},
},
FailNow: true,
})
}
f := func(prefix string, startTime, endTime time.Time, wantEmpty bool) {
const numMetrics = 1000
start := startTime.UnixMilli()
end := endTime.UnixMilli()
step := (end - start) / numMetrics
data := genFutureTimestampsData(prefix, numMetrics, start, step)
if wantEmpty {
data.wantSeries = []map[string]string{}
data.wantQueryResults = []*apptest.QueryResult{}
}
// Ingest data and check query results.
sut := opts.start()
sut.PrometheusAPIV1ImportPrometheus(t, data.samples, apptest.QueryOpts{})
sut.ForceFlush(t)
assertSeries(sut, prefix, start, end, data.wantSeries)
assertQueryResults(sut, prefix, start, end, step, data.wantQueryResults)
// Ensure the queries work after restart.
opts.stop()
sut = opts.start()
assertSeries(sut, prefix, start, end, data.wantSeries)
assertQueryResults(sut, prefix, start, end, step, data.wantQueryResults)
opts.stop()
}
now := time.Now().UTC()
retentionLimit := 100 * 365 * 24 * time.Hour
var start, end time.Time
start = time.Date(now.Year(), now.Month(), now.Day()+1, 0, 0, 0, 0, time.UTC)
end = time.Date(now.Year(), now.Month(), now.Day()+2, 0, 0, 0, 0, time.UTC)
f("future_1d", start, end, false)
start = time.Date(now.Year(), now.Month()+1, 1, 0, 0, 0, 0, time.UTC)
end = time.Date(now.Year(), now.Month()+2, 1, 0, 0, 0, 0, time.UTC)
f("future_1m", start, end, false)
start = time.Date(now.Year()+1, 1, 1, 0, 0, 0, 0, time.UTC)
end = time.Date(now.Year()+2, 1, 1, 0, 0, 0, 0, time.UTC)
f("future_1y", start, end, false)
start = now.Add(retentionLimit - 24*time.Hour)
end = now.Add(retentionLimit)
f("future_1d_before_limit", start, end, false)
start = now.Add(retentionLimit + time.Minute)
end = now.Add(retentionLimit + 24*time.Hour)
f("future_1d_beyond_limit", start, end, true)
}
type futureTimestampsData struct {
samples []string
wantSeries []map[string]string
wantQueryResults []*apptest.QueryResult
}
func genFutureTimestampsData(prefix string, numMetrics, start, step int64) futureTimestampsData {
samples := make([]string, numMetrics)
wantSeries := make([]map[string]string, numMetrics)
wantQueryResults := make([]*apptest.QueryResult, numMetrics)
for i := range numMetrics {
metricName := fmt.Sprintf("metric_%s_%04d", prefix, i)
labelName := fmt.Sprintf("label_%s_%04d", prefix, i)
labelValue := fmt.Sprintf("value_%s_%04d", prefix, i)
value := i
timestamp := start + i*step
samples[i] = fmt.Sprintf(`%s{%s="value", label="%s"} %d %d`, metricName, labelName, labelValue, value, timestamp)
wantSeries[i] = map[string]string{
"__name__": metricName,
labelName: "value",
"label": labelValue,
}
wantQueryResults[i] = &apptest.QueryResult{
Metric: map[string]string{
"__name__": metricName,
labelName: "value",
"label": labelValue,
},
Samples: []*apptest.Sample{{Timestamp: timestamp, Value: float64(value)}},
}
}
return futureTimestampsData{samples, wantSeries, wantQueryResults}
}


@@ -1,7 +1,9 @@
package tests
import (
"fmt"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
@@ -9,6 +11,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/apptest"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
otlppb "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/pb"
)
func TestSingleIngestionProtocols(t *testing.T) {
@@ -295,6 +298,357 @@ func TestSingleIngestionProtocols(t *testing.T) {
},
})
// opentelemetry metrics protocol
tsNano := uint64(1707123456700 * 1e6) // 2024-02-05T08:57:36.700Z
otlpData := otlppb.MetricsData{
ResourceMetrics: []*otlppb.ResourceMetrics{
{
Resource: &otlppb.Resource{
Attributes: []*otlppb.KeyValue{
{
Key: "foo",
Value: &otlppb.AnyValue{StringValue: new("bar")},
},
},
},
ScopeMetrics: []*otlppb.ScopeMetrics{
{
Scope: &otlppb.InstrumentationScope{
Name: new("otlp"),
Version: new("v1"),
Attributes: []*otlppb.KeyValue{
{
Key: "scope_attribute",
Value: &otlppb.AnyValue{IntValue: new(int64(100))},
},
},
},
Metrics: []*otlppb.Metric{
{
Name: "otlp_series_gauge",
Gauge: &otlppb.Gauge{
DataPoints: []*otlppb.NumberDataPoint{
{IntValue: new(int64(10)), TimeUnixNano: tsNano},
{IntValue: new(int64(5)), TimeUnixNano: tsNano, Attributes: []*otlppb.KeyValue{{Key: "bar", Value: &otlppb.AnyValue{StringValue: new("foo")}}}},
},
},
},
{
Name: "otlp_series_counter",
Sum: &otlppb.Sum{
DataPoints: []*otlppb.NumberDataPoint{
{IntValue: new(int64(30)), TimeUnixNano: tsNano, Attributes: []*otlppb.KeyValue{{Key: "bar", Value: &otlppb.AnyValue{StringValue: new("foo")}}}},
},
},
},
},
},
{
Scope: &otlppb.InstrumentationScope{
Name: new("otlp2"),
Version: new("v2"),
},
Metrics: []*otlppb.Metric{
{
Name: "otlp_series_histogram",
Histogram: &otlppb.Histogram{
DataPoints: []*otlppb.HistogramDataPoint{
{
Count: 15,
Sum: new(float64(100)),
ExplicitBounds: []float64{0.1, 0.5, 1.0, 5.0},
BucketCounts: []uint64{0, 5, 10, 0, 0},
TimeUnixNano: tsNano,
Attributes: []*otlppb.KeyValue{
{Key: "baz", Value: &otlppb.AnyValue{ArrayValue: &otlppb.ArrayValue{Values: []*otlppb.AnyValue{
{StringValue: new("foo")},
{IntValue: new(int64(100))},
}}}},
},
},
},
},
},
},
},
},
},
{
ScopeMetrics: []*otlppb.ScopeMetrics{
{
Metrics: []*otlppb.Metric{
{
Name: "otlp_series_summary",
Summary: &otlppb.Summary{
DataPoints: []*otlppb.SummaryDataPoint{
{
Attributes: []*otlppb.KeyValue{},
TimeUnixNano: tsNano,
Sum: 17.5,
Count: 2,
QuantileValues: []*otlppb.ValueAtQuantile{
{
Quantile: 0.1,
Value: 7.5,
},
{
Quantile: 0.5,
Value: 10.0,
},
},
},
},
},
},
},
},
},
},
},
}
sut.OpentelemetryV1Metrics(t, otlpData, apptest.QueryOpts{})
sut.ForceFlush(t)
f(sut, &opts{
query: `{__name__=~"otlp.+"}`,
wantMetrics: []map[string]string{
{
"__name__": "otlp_series_counter",
"foo": "bar",
"bar": "foo",
"scope.attributes.scope_attribute": "100",
"scope.name": "otlp",
"scope.version": "v1",
},
{
"__name__": "otlp_series_gauge",
"foo": "bar",
"bar": "foo",
"scope.attributes.scope_attribute": "100",
"scope.name": "otlp",
"scope.version": "v1",
},
{
"__name__": "otlp_series_gauge",
"foo": "bar",
"scope.attributes.scope_attribute": "100",
"scope.name": "otlp",
"scope.version": "v1",
},
{
"__name__": "otlp_series_histogram_bucket",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
"le": "+Inf",
},
{
"__name__": "otlp_series_histogram_bucket",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
"le": "0.1",
},
{
"__name__": "otlp_series_histogram_bucket",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
"le": "0.5",
},
{
"__name__": "otlp_series_histogram_bucket",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
"le": "1",
},
{
"__name__": "otlp_series_histogram_bucket",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
"le": "5",
},
{
"__name__": "otlp_series_histogram_count",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
},
{
"__name__": "otlp_series_histogram_sum",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
},
{
"__name__": "otlp_series_summary",
"quantile": "0.1",
},
{
"__name__": "otlp_series_summary",
"quantile": "0.5",
},
{
"__name__": "otlp_series_summary_count",
},
{
"__name__": "otlp_series_summary_sum",
},
},
wantSamples: []*apptest.Sample{
{Timestamp: 1707123456700, Value: 30}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 5}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 10}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 15}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 0}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 5}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 15}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 15}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 15}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 100}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 7.5}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 10}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 2}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 17.5}, // 2024-02-05T08:57:36.700Z
},
})
}
func TestSingleCardinalityLimiter(t *testing.T) {
waitFor := func(f func() bool) {
const (
retries = 20
period = 100 * time.Millisecond
)
t.Helper()
for i := 0; i < retries; i++ {
if f() {
return
}
time.Sleep(period)
}
t.Fatalf("timed out after %d retries", retries)
}
tc := apptest.NewTestCase(t)
defer tc.Stop()
singleHourly := tc.MustStartVmsingle("vmsingle-hourly", []string{
"-retentionPeriod=100y",
"-storage.maxHourlySeries=1",
})
singleHourly.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
if v := singleHourly.GetIntMetric(t, "vm_hourly_series_limit_max_series"); v != 1 {
t.Fatalf("unexpected vm_hourly_series_limit_max_series value: %d", v)
}
if v := singleHourly.GetIntMetric(t, "vm_hourly_series_limit_current_series"); v != 1 {
t.Fatalf("unexpected vm_hourly_series_limit_current_series value: %d", v)
}
if v := singleHourly.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_hourly_series_limit_rows_dropped_total value: %d", v)
}
singleHourly.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar2 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
waitFor(
func() bool {
return singleHourly.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total") > 0
},
)
singleDaily := tc.MustStartVmsingle("vmsingle-daily", []string{
"-retentionPeriod=100y",
"-storage.maxDailySeries=1",
})
singleDaily.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
if v := singleDaily.GetIntMetric(t, "vm_daily_series_limit_max_series"); v != 1 {
t.Fatalf("unexpected vm_daily_series_limit_max_series value: %d", v)
}
if v := singleDaily.GetIntMetric(t, "vm_daily_series_limit_current_series"); v != 1 {
t.Fatalf("unexpected vm_daily_series_limit_current_series value: %d", v)
}
if v := singleDaily.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_daily_series_limit_rows_dropped_total value: %d", v)
}
singleDaily.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar2 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
waitFor(
func() bool {
return singleDaily.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total") > 0
},
)
singleUnlimited := tc.MustStartVmsingle("vmsingle-unlimited", []string{
"-retentionPeriod=100y",
"-storage.maxHourlySeries=-1",
"-storage.maxDailySeries=-1",
})
metrics := make([]string, 0, 100)
for i := range 100 {
metrics = append(metrics, fmt.Sprintf("foo_bar%d 1 1652169600000", i)) // 2022-05-10T08:00:00Z
}
singleUnlimited.PrometheusAPIV1ImportPrometheus(t, metrics, apptest.QueryOpts{})
waitFor(
func() bool {
return singleUnlimited.GetIntMetric(t, "vm_hourly_series_limit_current_series") > 0
},
)
if v := singleUnlimited.GetIntMetric(t, "vm_hourly_series_limit_max_series"); v == 0 {
t.Fatalf("unexpected vm_hourly_series_limit_max_series value: %d", v)
}
if v := singleUnlimited.GetIntMetric(t, "vm_hourly_series_limit_current_series"); v != 100 {
t.Fatalf("unexpected vm_hourly_series_limit_current_series value: %d", v)
}
if v := singleUnlimited.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_hourly_series_limit_rows_dropped_total value: %d", v)
}
if v := singleUnlimited.GetIntMetric(t, "vm_daily_series_limit_max_series"); v == 0 {
t.Fatalf("unexpected vm_daily_series_limit_max_series value: %d", v)
}
if v := singleUnlimited.GetIntMetric(t, "vm_daily_series_limit_current_series"); v != 100 {
t.Fatalf("unexpected vm_daily_series_limit_current_series value: %d", v)
}
if v := singleUnlimited.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_daily_series_limit_rows_dropped_total value: %d", v)
}
}
func TestClusterIngestionProtocols(t *testing.T) {
@@ -590,4 +944,371 @@ func TestClusterIngestionProtocols(t *testing.T) {
},
})
// opentelemetry metrics protocol
tsNano := uint64(1707123456700 * 1e6) // 2024-02-05T08:57:36.700Z
otlpData := otlppb.MetricsData{
ResourceMetrics: []*otlppb.ResourceMetrics{
{
Resource: &otlppb.Resource{
Attributes: []*otlppb.KeyValue{
{
Key: "foo",
Value: &otlppb.AnyValue{StringValue: new("bar")},
},
},
},
ScopeMetrics: []*otlppb.ScopeMetrics{
{
Scope: &otlppb.InstrumentationScope{
Name: new("otlp"),
Version: new("v1"),
Attributes: []*otlppb.KeyValue{
{
Key: "scope_attribute",
Value: &otlppb.AnyValue{IntValue: new(int64(100))},
},
},
},
Metrics: []*otlppb.Metric{
{
Name: "otlp_series_gauge",
Gauge: &otlppb.Gauge{
DataPoints: []*otlppb.NumberDataPoint{
{IntValue: new(int64(10)), TimeUnixNano: tsNano},
{IntValue: new(int64(5)), TimeUnixNano: tsNano, Attributes: []*otlppb.KeyValue{{Key: "bar", Value: &otlppb.AnyValue{StringValue: new("foo")}}}},
},
},
},
{
Name: "otlp_series_counter",
Sum: &otlppb.Sum{
DataPoints: []*otlppb.NumberDataPoint{
{IntValue: new(int64(30)), TimeUnixNano: tsNano, Attributes: []*otlppb.KeyValue{{Key: "bar", Value: &otlppb.AnyValue{StringValue: new("foo")}}}},
},
},
},
},
},
{
Scope: &otlppb.InstrumentationScope{
Name: new("otlp2"),
Version: new("v2"),
},
Metrics: []*otlppb.Metric{
{
Name: "otlp_series_histogram",
Histogram: &otlppb.Histogram{
DataPoints: []*otlppb.HistogramDataPoint{
{
Count: 15,
Sum: new(float64(100)),
ExplicitBounds: []float64{0.1, 0.5, 1.0, 5.0},
BucketCounts: []uint64{0, 5, 10, 0, 0},
TimeUnixNano: tsNano,
Attributes: []*otlppb.KeyValue{
{Key: "baz", Value: &otlppb.AnyValue{ArrayValue: &otlppb.ArrayValue{Values: []*otlppb.AnyValue{
{StringValue: new("foo")},
{IntValue: new(int64(100))},
}}}},
},
},
},
},
},
},
},
},
},
{
ScopeMetrics: []*otlppb.ScopeMetrics{
{
Metrics: []*otlppb.Metric{
{
Name: "otlp_series_summary",
Summary: &otlppb.Summary{
DataPoints: []*otlppb.SummaryDataPoint{
{
Attributes: []*otlppb.KeyValue{},
TimeUnixNano: tsNano,
Sum: 17.5,
Count: 2,
QuantileValues: []*otlppb.ValueAtQuantile{
{
Quantile: 0.1,
Value: 7.5,
},
{
Quantile: 0.5,
Value: 10.0,
},
},
},
},
},
},
},
},
},
},
},
}
vminsert.OpentelemetryV1Metrics(t, otlpData, apptest.QueryOpts{})
vmstorage.ForceFlush(t)
f(&opts{
query: `{__name__=~"otlp.+"}`,
wantMetrics: []map[string]string{
{
"__name__": "otlp_series_counter",
"foo": "bar",
"bar": "foo",
"scope.attributes.scope_attribute": "100",
"scope.name": "otlp",
"scope.version": "v1",
},
{
"__name__": "otlp_series_gauge",
"foo": "bar",
"bar": "foo",
"scope.attributes.scope_attribute": "100",
"scope.name": "otlp",
"scope.version": "v1",
},
{
"__name__": "otlp_series_gauge",
"foo": "bar",
"scope.attributes.scope_attribute": "100",
"scope.name": "otlp",
"scope.version": "v1",
},
{
"__name__": "otlp_series_histogram_bucket",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
"le": "+Inf",
},
{
"__name__": "otlp_series_histogram_bucket",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
"le": "0.1",
},
{
"__name__": "otlp_series_histogram_bucket",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
"le": "0.5",
},
{
"__name__": "otlp_series_histogram_bucket",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
"le": "1",
},
{
"__name__": "otlp_series_histogram_bucket",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
"le": "5",
},
{
"__name__": "otlp_series_histogram_count",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
},
{
"__name__": "otlp_series_histogram_sum",
"baz": `["foo",100]`,
"foo": "bar",
"scope.name": "otlp2",
"scope.version": "v2",
},
{
"__name__": "otlp_series_summary",
"quantile": "0.1",
},
{
"__name__": "otlp_series_summary",
"quantile": "0.5",
},
{
"__name__": "otlp_series_summary_count",
},
{
"__name__": "otlp_series_summary_sum",
},
},
wantSamples: []*apptest.Sample{
{Timestamp: 1707123456700, Value: 30}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 5}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 10}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 15}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 0}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 5}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 15}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 15}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 15}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 100}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 7.5}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 10}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 2}, // 2024-02-05T08:57:36.700Z
{Timestamp: 1707123456700, Value: 17.5}, // 2024-02-05T08:57:36.700Z
},
})
}
func TestClusterCardinalityLimiter(t *testing.T) {
waitFor := func(f func() bool) {
const (
retries = 20
period = 100 * time.Millisecond
)
t.Helper()
for i := 0; i < retries; i++ {
if f() {
return
}
time.Sleep(period)
}
t.Fatalf("timed out after %d retries", retries)
}
tc := apptest.NewTestCase(t)
defer tc.Stop()
// Test hourly series limit
vmstorageHourly := tc.MustStartVmstorage("vmstorage-hourly", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage-hourly",
"-retentionPeriod=100y",
"-storage.maxHourlySeries=1",
})
vminsertHourly := tc.MustStartVminsert("vminsert-hourly", []string{
"-storageNode=" + vmstorageHourly.VminsertAddr(),
})
vminsertHourly.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
if v := vmstorageHourly.GetIntMetric(t, "vm_hourly_series_limit_max_series"); v != 1 {
t.Fatalf("unexpected vm_hourly_series_limit_max_series value: %d", v)
}
if v := vmstorageHourly.GetIntMetric(t, "vm_hourly_series_limit_current_series"); v != 1 {
t.Fatalf("unexpected vm_hourly_series_limit_current_series value: %d", v)
}
if v := vmstorageHourly.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_hourly_series_limit_rows_dropped_total value: %d", v)
}
vminsertHourly.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar2 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
waitFor(
func() bool {
return vmstorageHourly.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total") > 0
},
)
// Test daily series limit
vmstorageDaily := tc.MustStartVmstorage("vmstorage-daily", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage-daily",
"-retentionPeriod=100y",
"-storage.maxDailySeries=1",
})
vminsertDaily := tc.MustStartVminsert("vminsert-daily", []string{
"-storageNode=" + vmstorageDaily.VminsertAddr(),
})
vminsertDaily.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
if v := vmstorageDaily.GetIntMetric(t, "vm_daily_series_limit_max_series"); v != 1 {
t.Fatalf("unexpected vm_daily_series_limit_max_series value: %d", v)
}
if v := vmstorageDaily.GetIntMetric(t, "vm_daily_series_limit_current_series"); v != 1 {
t.Fatalf("unexpected vm_daily_series_limit_current_series value: %d", v)
}
if v := vmstorageDaily.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_daily_series_limit_rows_dropped_total value: %d", v)
}
vminsertDaily.PrometheusAPIV1ImportPrometheus(t, []string{
"foo_bar2 1 1652169600000", // 2022-05-10T08:00:00Z
}, apptest.QueryOpts{})
waitFor(
func() bool {
return vmstorageDaily.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total") > 0
},
)
// Test unlimited series
vmstorageUnlimited := tc.MustStartVmstorage("vmstorage-unlimited", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage-unlimited",
"-retentionPeriod=100y",
"-storage.maxHourlySeries=-1",
"-storage.maxDailySeries=-1",
})
vminsertUnlimited := tc.MustStartVminsert("vminsert-unlimited", []string{
"-storageNode=" + vmstorageUnlimited.VminsertAddr(),
})
metrics := make([]string, 0, 100)
for i := range 100 {
metrics = append(metrics, fmt.Sprintf("foo_bar%d 1 1652169600000", i)) // 2022-05-10T08:00:00Z
}
vminsertUnlimited.PrometheusAPIV1ImportPrometheus(t, metrics, apptest.QueryOpts{})
waitFor(
func() bool {
return vmstorageUnlimited.GetIntMetric(t, "vm_hourly_series_limit_current_series") > 0
},
)
if v := vmstorageUnlimited.GetIntMetric(t, "vm_hourly_series_limit_max_series"); v == 0 {
t.Fatalf("unexpected vm_hourly_series_limit_max_series value: %d", v)
}
if v := vmstorageUnlimited.GetIntMetric(t, "vm_hourly_series_limit_current_series"); v != 100 {
t.Fatalf("unexpected vm_hourly_series_limit_current_series value: %d", v)
}
if v := vmstorageUnlimited.GetIntMetric(t, "vm_hourly_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_hourly_series_limit_rows_dropped_total value: %d", v)
}
if v := vmstorageUnlimited.GetIntMetric(t, "vm_daily_series_limit_max_series"); v == 0 {
t.Fatalf("unexpected vm_daily_series_limit_max_series value: %d", v)
}
if v := vmstorageUnlimited.GetIntMetric(t, "vm_daily_series_limit_current_series"); v != 100 {
t.Fatalf("unexpected vm_daily_series_limit_current_series value: %d", v)
}
if v := vmstorageUnlimited.GetIntMetric(t, "vm_daily_series_limit_rows_dropped_total"); v != 0 {
t.Fatalf("unexpected vm_daily_series_limit_rows_dropped_total value: %d", v)
}
}


@@ -3,7 +3,9 @@ package tests
import (
"fmt"
"math/rand/v2"
"net"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/apptest"
)
@@ -70,3 +72,162 @@ func TestClusterMultilevelSelect(t *testing.T) {
assertSeries(vmselectL1)
assertSeries(vmselectL2)
}
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10678.
func TestClusterMultilevelPartialResponse(t *testing.T) {
tc := apptest.NewTestCase(t)
defer tc.Stop()
// Set up the following multi-level cluster configuration:
//
// |--> available vmstorage
// | ------> vmselect1 --|
// | |--> available vmstorage
// global-vmselect -------|
// | |--> available vmstorage
// | ------> vmselect2 --|
// |--> unavailable vmstorage
vmstorage1 := tc.MustStartVmstorage("vmstorage1", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage1",
})
vmstorage2 := tc.MustStartVmstorage("vmstorage2", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage2",
})
regionalVmselect1 := tc.MustStartVmselect("regional-vmselect1", []string{
"-storageNode=" + vmstorage1.VmselectAddr() + "," + vmstorage2.VmselectAddr(),
})
regionalVmselect2 := tc.MustStartVmselect("regional-vmselect2", []string{
"-storageNode=" + vmstorage1.VmselectAddr() + "," + noopTCPServerAddr(t),
})
globalVmselect := tc.MustStartVmselect("global-vmselect", []string{
"-storageNode=" + regionalVmselect1.ClusternativeListenAddr() + "," + regionalVmselect2.ClusternativeListenAddr(),
})
// 1. /api/v1/query
qopts := apptest.QueryOpts{Tenant: "0"}
assertQuery := func(app *apptest.Vmselect, want *apptest.PrometheusAPIV1QueryResponse) {
t.Helper()
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/query response",
Got: func() any {
res := app.PrometheusAPIV1Query(t, `{__name__=~".*"}`, qopts)
res.Sort()
return res
},
Want: want,
})
}
// regional-vmselect1 should return full response.
assertQuery(regionalVmselect1, &apptest.PrometheusAPIV1QueryResponse{
Status: "success",
IsPartial: false,
Data: &apptest.QueryData{ResultType: "vector", Result: []*apptest.QueryResult{}},
})
// regional-vmselect2 should return partial response.
assertQuery(regionalVmselect2, &apptest.PrometheusAPIV1QueryResponse{
Status: "success",
IsPartial: true,
Data: &apptest.QueryData{ResultType: "vector", Result: []*apptest.QueryResult{}},
})
// global-vmselect should return partial response.
assertQuery(globalVmselect, &apptest.PrometheusAPIV1QueryResponse{
Status: "success",
IsPartial: true,
Data: &apptest.QueryData{ResultType: "vector", Result: []*apptest.QueryResult{}},
})
// 2. /api/v1/labels
start := time.Now().Unix()
assertLabel := func(app *apptest.Vmselect, want *apptest.PrometheusAPIV1LabelsResponse) {
t.Helper()
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/label response",
Got: func() any {
res := app.PrometheusAPIV1Labels(t, `{__name__="up"}`, apptest.QueryOpts{
Start: fmt.Sprintf("%d", start-100),
End: fmt.Sprintf("%d", start),
})
return res
},
Want: want,
})
}
// regional-vmselect1 should return full response.
assertLabel(regionalVmselect1, &apptest.PrometheusAPIV1LabelsResponse{
Status: "success",
IsPartial: false,
Data: make([]string, 0),
})
// regional-vmselect2 should return partial response.
assertLabel(regionalVmselect2, &apptest.PrometheusAPIV1LabelsResponse{
Status: "success",
IsPartial: true,
Data: make([]string, 0),
})
// global-vmselect should return partial response.
assertLabel(globalVmselect, &apptest.PrometheusAPIV1LabelsResponse{
Status: "success",
IsPartial: true,
Data: make([]string, 0),
})
// 3. /api/v1/label/%s/values
assertSeries := func(app *apptest.Vmselect, want *apptest.PrometheusAPIV1SeriesResponse) {
t.Helper()
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/series response",
Got: func() any {
res := app.PrometheusAPIV1Series(t, `{__name__="up"}`, apptest.QueryOpts{
Start: fmt.Sprintf("%d", start-100),
End: fmt.Sprintf("%d", start),
})
return res
},
Want: want,
})
}
// regional-vmselect1 should return full response.
assertSeries(regionalVmselect1, &apptest.PrometheusAPIV1SeriesResponse{
Status: "success",
IsPartial: false,
Data: make([]map[string]string, 0),
})
// regional-vmselect2 should return partial response.
assertSeries(regionalVmselect2, &apptest.PrometheusAPIV1SeriesResponse{
Status: "success",
IsPartial: true,
Data: make([]map[string]string, 0),
})
// global-vmselect should return partial response.
assertSeries(globalVmselect, &apptest.PrometheusAPIV1SeriesResponse{
Status: "success",
IsPartial: true,
Data: make([]map[string]string, 0),
})
}
// noopTCPServerAddr starts a local TCP server
// that immediately closes any incoming connection,
// and returns its address.
func noopTCPServerAddr(t *testing.T) string {
t.Helper()
ln, err := net.Listen("tcp", "localhost:0")
if err != nil {
t.Fatalf("failed to create listener: %v", err)
}
go func() {
for {
conn, err := ln.Accept()
if err != nil {
return
}
conn.Close()
}
}()
t.Cleanup(func() { ln.Close() })
return ln.Addr().String()
}


@@ -0,0 +1,313 @@
package tests
import (
"net/http"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/VictoriaMetrics/VictoriaMetrics/apptest"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
)
func TestClusterMultiTenantSelectViaHeaders(t *testing.T) {
fs.MustRemoveDir(t.Name())
cmpOpt := cmpopts.IgnoreFields(apptest.PrometheusAPIV1QueryResponse{}, "Status", "Data.ResultType")
cmpSROpt := cmpopts.IgnoreFields(apptest.PrometheusAPIV1SeriesResponse{}, "Status", "IsPartial")
tc := apptest.NewTestCase(t)
defer tc.Stop()
vmstorage := tc.MustStartVmstorage("vmstorage", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage",
"-retentionPeriod=100y",
})
vminsert := tc.MustStartVminsert("vminsert", []string{
"-storageNode=" + vmstorage.VminsertAddr(),
"-enableMultitenancyViaHeaders",
})
vmselect := tc.MustStartVmselect("vmselect", []string{
"-storageNode=" + vmstorage.VmselectAddr(),
"-search.tenantCacheExpireDuration=0",
"-enableMultitenancyViaHeaders",
})
multitenant := make(http.Header)
multitenant.Set("AccountID", "multitenant")
// test for empty tenants request
got := vmselect.PrometheusAPIV1Query(t, "foo_bar", apptest.QueryOpts{
Headers: multitenant,
Step: "5m",
Time: "2022-05-10T08:03:00.000Z",
})
want := apptest.NewPrometheusAPIV1QueryResponse(t, `{"data":{"result":[]}}`)
if diff := cmp.Diff(want, got, cmpOpt); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
// ingest per tenant data and verify it with search
samples := []string{
`foo_bar 1.00 1652169600000`, // 2022-05-10T08:00:00Z
`foo_bar 2.00 1652169660000`, // 2022-05-10T08:01:00Z
`foo_bar 3.00 1652169720000`, // 2022-05-10T08:02:00Z
}
tenantHeaders := []map[string]string{
{"AccountID": "1", "ProjectID": "1"},
{"AccountID": "1", "ProjectID": "15"},
{"AccountID": "2"},
{"ProjectID": "3"},
}
instantCT := "2022-05-10T08:05:00.000Z" // 1652169900 Unix seconds
for _, headers := range tenantHeaders {
h := make(http.Header)
for k, v := range headers {
h.Set(k, v)
}
vminsert.PrometheusAPIV1ImportPrometheus(t, samples, apptest.QueryOpts{Headers: h})
vmstorage.ForceFlush(t)
// verify tenants are searchable via tenantID in headers
got := vmselect.PrometheusAPIV1Query(t, "foo_bar", apptest.QueryOpts{
Headers: h, Time: instantCT,
})
want := apptest.NewPrometheusAPIV1QueryResponse(t, `{"data":{"result":[{"metric":{"__name__":"foo_bar"},"value":[1652169900,"3"]}]}}`)
if diff := cmp.Diff(want, got, cmpOpt); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
}
// verify all tenants searchable with multitenant header
// /api/v1/query
want = apptest.NewPrometheusAPIV1QueryResponse(t,
`{"data":
{"result":[
{"metric":{"__name__":"foo_bar","vm_account_id":"0","vm_project_id":"3"},"value":[1652169900,"3"]},
{"metric":{"__name__":"foo_bar","vm_account_id":"1","vm_project_id": "1"},"value":[1652169900,"3"]},
{"metric":{"__name__":"foo_bar","vm_account_id":"1","vm_project_id":"15"},"value":[1652169900,"3"]},
{"metric":{"__name__":"foo_bar","vm_account_id":"2","vm_project_id":"0"},"value":[1652169900,"3"]}
]
}
}`,
)
got = vmselect.PrometheusAPIV1Query(t, "foo_bar", apptest.QueryOpts{
Headers: multitenant,
Time: instantCT,
})
if diff := cmp.Diff(want, got, cmpOpt); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
// /api/v1/query_range aggregated by tenant labels
query := "sum(foo_bar) by(vm_account_id,vm_project_id)"
got = vmselect.PrometheusAPIV1QueryRange(t, query, apptest.QueryOpts{
Headers: multitenant,
Start: "2022-05-10T07:59:00.000Z",
End: "2022-05-10T08:05:00.000Z",
Step: "1m",
})
want = apptest.NewPrometheusAPIV1QueryResponse(t,
`{"data":
{"result": [
{"metric": {"vm_account_id": "0","vm_project_id":"3"}, "values": [[1652169600,"1"],[1652169660,"2"],[1652169720,"3"],[1652169780,"3"]]},
{"metric": {"vm_account_id": "1","vm_project_id":"1"}, "values": [[1652169600,"1"],[1652169660,"2"],[1652169720,"3"],[1652169780,"3"]]},
{"metric": {"vm_account_id": "1","vm_project_id":"15"}, "values": [[1652169600,"1"],[1652169660,"2"],[1652169720,"3"],[1652169780,"3"]]},
{"metric": {"vm_account_id": "2","vm_project_id":"0"}, "values": [[1652169600,"1"],[1652169660,"2"],[1652169720,"3"],[1652169780,"3"]]}
]
}
}`)
if diff := cmp.Diff(want, got, cmpOpt); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
// verify /api/v1/series response
wantSR := apptest.NewPrometheusAPIV1SeriesResponse(t,
`{"data": [
{"__name__":"foo_bar", "vm_account_id":"1", "vm_project_id":"1"},
{"__name__":"foo_bar", "vm_account_id":"1", "vm_project_id":"15"},
{"__name__":"foo_bar", "vm_account_id":"2", "vm_project_id":"0"},
{"__name__":"foo_bar", "vm_account_id":"0", "vm_project_id":"3"}
]
}`)
wantSR.Sort()
gotSR := vmselect.PrometheusAPIV1Series(t, "foo_bar", apptest.QueryOpts{
Headers: multitenant,
Start: "2022-05-10T08:03:00.000Z",
})
gotSR.Sort()
if diff := cmp.Diff(wantSR, gotSR, cmpSROpt); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
// test ingestion with multitenant header, tenants must be populated from labels
//
var tenantLabelsSamples = []string{
`foo_bar{vm_account_id="5"} 1.00 1652169720000`, // 2022-05-10T08:02:00Z'
`foo_bar{vm_project_id="10"} 2.00 1652169660000`, // 2022-05-10T08:01:00Z
`foo_bar{vm_account_id="5",vm_project_id="15"} 3.00 1652169720000`, // 2022-05-10T08:02:00Z
}
vminsert.PrometheusAPIV1ImportPrometheus(t, tenantLabelsSamples, apptest.QueryOpts{Headers: multitenant})
vmstorage.ForceFlush(t)
// /api/v1/query with query filters
want = apptest.NewPrometheusAPIV1QueryResponse(t,
`{"data":
{"result":[
{"metric":{"__name__":"foo_bar","vm_account_id":"5","vm_project_id": "0"},"value":[1652169900,"1"]},
{"metric":{"__name__":"foo_bar","vm_account_id":"5","vm_project_id":"15"},"value":[1652169900,"3"]}
]
}
}`,
)
got = vmselect.PrometheusAPIV1Query(t, `foo_bar{vm_account_id="5"}`, apptest.QueryOpts{
Time: instantCT,
Headers: multitenant,
})
if diff := cmp.Diff(want, got, cmpOpt); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
// /api/v1/series with extra_filters
wantSR = apptest.NewPrometheusAPIV1SeriesResponse(t,
`{"data": [
{"__name__":"foo_bar", "vm_account_id":"5", "vm_project_id":"15"},
{"__name__":"foo_bar", "vm_account_id":"1", "vm_project_id":"15"}
]
}`)
wantSR.Sort()
gotSR = vmselect.PrometheusAPIV1Series(t, "foo_bar", apptest.QueryOpts{
Start: "2022-05-10T08:00:00.000Z",
End: "2022-05-10T08:30:00.000Z",
ExtraFilters: []string{`{vm_project_id="15"}`},
Headers: multitenant,
})
gotSR.Sort()
if diff := cmp.Diff(wantSR, gotSR, cmpSROpt); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
// /api/v1/label/<labelName>/values with extra_filters
wantVR := apptest.NewPrometheusAPIV1LabelValuesResponse(t,
`{"data": [
"5"
]
}`)
// matchQuery is ignored for /api/v1/label/<labelName>/values lookups with multitenant token
gotVR := vmselect.PrometheusAPIV1LabelValues(t, "vm_account_id", "xxx", apptest.QueryOpts{
Start: "2022-05-10T08:00:00.000Z",
End: "2022-05-10T08:30:00.000Z",
ExtraFilters: []string{`{vm_account_id="5"}`},
Headers: multitenant,
})
if diff := cmp.Diff(wantVR, gotVR, cmpopts.IgnoreFields(apptest.PrometheusAPIV1LabelValuesResponse{}, "Status", "IsPartial")); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
// Delete series from specific tenant
tenantID := make(http.Header)
tenantID.Set("AccountID", "5")
tenantID.Set("ProjectID", "15")
vmselect.APIV1AdminTSDBDeleteSeries(t, "foo_bar", apptest.QueryOpts{
Headers: tenantID,
})
wantSR = apptest.NewPrometheusAPIV1SeriesResponse(t,
`{"data": [
{"__name__":"foo_bar", "vm_account_id":"0", "vm_project_id":"3"},
{"__name__":"foo_bar", "vm_account_id":"0", "vm_project_id":"10"},
{"__name__":"foo_bar", "vm_account_id":"1", "vm_project_id":"1"},
{"__name__":"foo_bar", "vm_account_id":"1", "vm_project_id":"15"},
{"__name__":"foo_bar", "vm_account_id":"2", "vm_project_id":"0"},
{"__name__":"foo_bar", "vm_account_id":"5", "vm_project_id":"0"}
]
}`)
wantSR.Sort()
gotSR = vmselect.PrometheusAPIV1Series(t, "foo_bar", apptest.QueryOpts{
Headers: multitenant,
Start: "2022-05-10T08:03:00.000Z",
})
gotSR.Sort()
if diff := cmp.Diff(wantSR, gotSR, cmpSROpt); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
// Delete series for multitenant with tenant filter
vmselect.APIV1AdminTSDBDeleteSeries(t, `foo_bar{vm_account_id="1"}`, apptest.QueryOpts{
Headers: multitenant,
})
wantSR = apptest.NewPrometheusAPIV1SeriesResponse(t,
`{"data": [
{"__name__":"foo_bar", "vm_account_id":"0", "vm_project_id":"3"},
{"__name__":"foo_bar", "vm_account_id":"0", "vm_project_id":"10"},
{"__name__":"foo_bar", "vm_account_id":"2", "vm_project_id":"0"},
{"__name__":"foo_bar", "vm_account_id":"5", "vm_project_id":"0"}
]
}`)
wantSR.Sort()
gotSR = vmselect.PrometheusAPIV1Series(t, `foo_bar`, apptest.QueryOpts{
Headers: multitenant,
Start: "2022-05-10T08:03:00.000Z",
})
gotSR.Sort()
if diff := cmp.Diff(wantSR, gotSR, cmpSROpt); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
if got := vmselect.GetIntMetric(t, `vm_cache_requests_total{type="multitenancy/tenants"}`); got != 0 {
t.Errorf("unexpected multitenancy tenants cache requests; got %d; want 0", got)
}
if got := vmselect.GetIntMetric(t, `vm_cache_misses_total{type="multitenancy/tenants"}`); got != 0 {
t.Errorf("unexpected multitenancy tenants cache misses; got %d; want 0", got)
}
if got := vmselect.GetIntMetric(t, `vm_cache_entries{type="multitenancy/tenants"}`); got != 0 {
t.Errorf("unexpected multitenancy tenants cache entries; got %d; want 0", got)
}
// verify that tenant in path has priority over tenant specified in headers
// /api/v1/import/prometheus
tenantInHeader := make(http.Header)
tenantInHeader.Set("AccountID", "42")
tenantInPath := "112"
vminsert.PrometheusAPIV1ImportPrometheus(t, samples, apptest.QueryOpts{
// tenants in header and path clash - path should have higher priority on ingestion
Headers: tenantInHeader,
Tenant: tenantInPath,
})
vmstorage.ForceFlush(t)
want = apptest.NewPrometheusAPIV1QueryResponse(t,
`{"data":
{"result":[
{"metric":{"__name__":"foo_bar"},"value":[1652169900,"3"]}
]
}
}`,
)
got = vmselect.PrometheusAPIV1Query(t, "foo_bar", apptest.QueryOpts{
// tenants in header and path clash - path should have higher priority on querying as well
Headers: multitenant,
Tenant: tenantInPath,
Time: instantCT,
})
if diff := cmp.Diff(want, got, cmpOpt); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
}


@@ -0,0 +1,186 @@
package tests
import (
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"sort"
"strconv"
"strings"
"testing"
)
// openTSDBPoint is a single data point served by the mock OpenTSDB server.
type openTSDBPoint struct {
Metric string
Tags map[string]string
Timestamp int64
Value float64
}
// openTSDBMockServer implements the minimal subset of the OpenTSDB HTTP API
// used by vmctl opentsdb: /api/suggest, /api/search/lookup, /api/query.
type openTSDBMockServer struct {
server *httptest.Server
points []openTSDBPoint
}
// newOpenTSDBMockServer starts an httptest server serving the given points.
func newOpenTSDBMockServer(t *testing.T, points []openTSDBPoint) *openTSDBMockServer {
t.Helper()
s := &openTSDBMockServer{points: points}
mux := http.NewServeMux()
mux.HandleFunc("/api/suggest", s.handleSuggest)
mux.HandleFunc("/api/search/lookup", s.handleLookup)
mux.HandleFunc("/api/query", s.handleQuery)
s.server = httptest.NewServer(mux)
return s
}
// close shuts down the server.
func (s *openTSDBMockServer) close() { s.server.Close() }
// httpAddr returns the server URL.
func (s *openTSDBMockServer) httpAddr() string { return s.server.URL }
// handleSuggest serves https://opentsdb.net/docs/build/html/api_http/suggest.html
func (s *openTSDBMockServer) handleSuggest(w http.ResponseWriter, r *http.Request) {
q := r.URL.Query().Get("q")
seen := make(map[string]bool, len(s.points))
var out []string
for _, p := range s.points {
if seen[p.Metric] {
continue
}
if q != "" && !strings.Contains(p.Metric, q) {
continue
}
seen[p.Metric] = true
out = append(out, p.Metric)
}
_ = json.NewEncoder(w).Encode(out)
}
// handleLookup serves https://opentsdb.net/docs/build/html/api_http/search/lookup.html
func (s *openTSDBMockServer) handleLookup(w http.ResponseWriter, r *http.Request) {
metric := r.URL.Query().Get("m")
type meta struct {
Metric string `json:"metric"`
Tags map[string]string `json:"tags"`
}
seen := make(map[string]bool, len(s.points))
var results []meta
for _, p := range s.points {
if p.Metric != metric {
continue
}
key := tagsKey(p.Tags)
if seen[key] {
continue
}
seen[key] = true
results = append(results, meta{p.Metric, p.Tags})
}
_ = json.NewEncoder(w).Encode(map[string]any{
"type": "LOOKUP",
"metric": metric,
"results": results,
})
}
// handleQuery serves https://opentsdb.net/docs/build/html/api_http/query/index.html
func (s *openTSDBMockServer) handleQuery(w http.ResponseWriter, r *http.Request) {
m := r.URL.Query().Get("m")
metric, tagFilter, ok := parseQuery(m)
if !ok {
http.Error(w, "bad query param", http.StatusBadRequest)
return
}
start, err := strconv.ParseInt(r.URL.Query().Get("start"), 10, 64)
if err != nil {
http.Error(w, "bad start param", http.StatusBadRequest)
return
}
end, err := strconv.ParseInt(r.URL.Query().Get("end"), 10, 64)
if err != nil {
http.Error(w, "bad end param", http.StatusBadRequest)
return
}
type resp struct {
Metric string `json:"metric"`
Tags map[string]string `json:"tags"`
AggregateTags []string `json:"aggregateTags"`
Dps map[string]float64 `json:"dps"`
}
grouped := make(map[string]*resp, len(s.points))
for _, p := range s.points {
if p.Metric != metric {
continue
}
if !matchTags(p.Tags, tagFilter) {
continue
}
if p.Timestamp < start || p.Timestamp > end {
continue
}
key := tagsKey(p.Tags)
if _, exists := grouped[key]; !exists {
grouped[key] = &resp{
Metric: p.Metric,
Tags: p.Tags,
AggregateTags: []string{},
Dps: map[string]float64{},
}
}
grouped[key].Dps[fmt.Sprintf("%d", p.Timestamp)] = p.Value
}
out := make([]*resp, 0, len(grouped))
for _, v := range grouped {
out = append(out, v)
}
_ = json.NewEncoder(w).Encode(out)
}
// parseQuery parses the OpenTSDB m= query parameter.
// Format: "<aggregator>:<downsample>:<metric>{k=v,k=v}", where <downsample> looks like "1m-avg-none".
func parseQuery(m string) (string, map[string]string, bool) {
parts := strings.SplitN(m, ":", 3)
if len(parts) != 3 {
return "", nil, false
}
metric, tagStr, _ := strings.Cut(parts[2], "{")
tags := make(map[string]string, 4)
tagStr = strings.TrimSuffix(tagStr, "}")
for _, kv := range strings.Split(tagStr, ",") {
if k, v, ok := strings.Cut(kv, "="); ok {
tags[k] = v
}
}
return metric, tags, true
}
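To see what `parseQuery` produces, here is a self-contained run on an illustrative m= string (the query below is made up for the example, not taken from the tests), with the function copied verbatim from above:

```go
package main

import (
	"fmt"
	"strings"
)

// parseQuery is copied from the mock server above: it splits an
// OpenTSDB m= parameter ("<aggregator>:<downsample>:<metric>{k=v,...}")
// into the metric name and a tag-filter map.
func parseQuery(m string) (string, map[string]string, bool) {
	parts := strings.SplitN(m, ":", 3)
	if len(parts) != 3 {
		return "", nil, false
	}
	metric, tagStr, _ := strings.Cut(parts[2], "{")
	tags := make(map[string]string, 4)
	tagStr = strings.TrimSuffix(tagStr, "}")
	for _, kv := range strings.Split(tagStr, ",") {
		if k, v, ok := strings.Cut(kv, "="); ok {
			tags[k] = v
		}
	}
	return metric, tags, true
}

func main() {
	metric, tags, ok := parseQuery("sum:1m-avg-none:sys.cpu.user{host=web01,dc=*}")
	fmt.Println(ok, metric, tags["host"], tags["dc"]) // true sys.cpu.user web01 *
}
```

Note that the aggregator and downsample parts are parsed but ignored by the mock; only the metric and tag filter influence which points are returned.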
// matchTags reports whether got satisfies filter; a "*" filter value matches any tag value.
func matchTags(got, filter map[string]string) bool {
for k, v := range filter {
if v == "*" {
continue
}
if got[k] != v {
return false
}
}
return true
}
// tagsKey returns a deterministic "k=v,k=v" key built from sorted tag names.
func tagsKey(tags map[string]string) string {
keys := make([]string, 0, len(tags))
for k := range tags {
keys = append(keys, k)
}
sort.Strings(keys)
parts := make([]string, 0, len(keys))
for _, k := range keys {
parts = append(parts, k+"="+tags[k])
}
return strings.Join(parts, ",")
}
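The wildcard semantics of the tag filter are the subtle part: a literal value must match exactly, `*` accepts anything, and a tag missing from the series never matches a literal. A small standalone check of those three cases, with `matchTags` copied from above:

```go
package main

import "fmt"

// matchTags is copied from the mock server above: literal filter
// values must match exactly, while "*" accepts any value for that tag.
func matchTags(got, filter map[string]string) bool {
	for k, v := range filter {
		if v == "*" {
			continue
		}
		if got[k] != v {
			return false
		}
	}
	return true
}

func main() {
	series := map[string]string{"host": "web01", "dc": "us-east"}
	fmt.Println(matchTags(series, map[string]string{"host": "*"}))     // true: wildcard matches any host
	fmt.Println(matchTags(series, map[string]string{"dc": "us-west"})) // false: literal mismatch
	fmt.Println(matchTags(series, map[string]string{"rack": "r1"}))    // false: missing tag never matches a literal
}
```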


@@ -0,0 +1,28 @@
{
"compaction": {
"level": 1,
"sources": [
"01KKS78P6B68DNJC87ZVPRGC3X"
]
},
"maxTime": 1735696740001,
"minTime": 1735689600000,
"stats": {
"numChunks": 8,
"numFloatSamples": 480,
"numSamples": 480,
"numSeries": 4
},
"thanos": {
"downsample": {
"resolution": 0
},
"labels": {
"prometheus": "test",
"replica": "0"
},
"source": "prometheus"
},
"ulid": "01KKS78P6B68DNJC87ZVPRGC3X",
"version": 1
}
