The outdated link to the slides for this talk has been dropped in the commit f0a147fdf7 .
The video recording for the talk is still available on YouTube ( https://www.youtube.com/watch?v=ZJQYW-cFOms ),
so put it on the articles page.
Enterprise version of VictoriaTraces isn't available yet, but it is better to mention it
at the https://docs.victoriametrics.com/victoriametrics/enterprise/ page for the sake of consistency.
While at it, consistently use absolute links, even if they point to the same document.
This simplifies moving the text between docs without breaking the links.
This change should clearly distinguish different multitenancy scenarios
for vmagent. It is expected to be easier to read and follow for users.
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Pablo Fernandez <46322567+TomFern@users.noreply.github.com>
The core `lib/promauth` already supports `usernameFile`
configs, but the CLI flags for vmagent remotewrite and vmalert
datasource/remotewrite/remoteread/notifier only expose
`basicAuth.username`.
This commit adds the corresponding `basicAuth.usernameFile` flags to match
the existing `basicAuth.passwordFile` pattern, closing the gap between
YAML and CLI configuration.
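A hedged usage sketch (the flag names follow the existing `basicAuth.passwordFile` pattern described above; URLs and file paths are illustrative):
```
./vmagent \
  -remoteWrite.url=https://vm:8428/api/v1/write \
  -remoteWrite.basicAuth.usernameFile=/etc/secrets/rw-username \
  -remoteWrite.basicAuth.passwordFile=/etc/secrets/rw-password
```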
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9436
This commit adds a warning message if `-memory.allowedBytes` has a value less than 1MB.
It should help debug possible issues if the app fails to start up due to a low memory limit.
For example, fastcache could panic at `-memory.allowedBytes=`
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10935
In most cases, vmalert is configured to write to VM components like
vminsert or vmagent, so using the VictoriaMetrics remote write protocol can
save network bandwidth.
The VictoriaMetrics remote write protocol is used by default, and the
protocol is downgraded from VictoriaMetrics to Prometheus remote write
if a request fails with a protocol error.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10929
Replace the pattern of `git checkout <tag> && make <binary>` with `git
worktree add /tmp/vm-* <tag>` so that flag updates no longer switch the
working tree of the current repository. Each variant (opensource,
enterprise, cluster) gets its own worktree, removing the need to restore
the original branch between steps.
Also normalize dynamic default values in vmctl prometheus flags
(-prom-tmp-dir-path) to `os.TempDir()` to reduce noisy diffs caused by
machine-specific temp paths.
Add '=' separator between label name and value when computing the hash
to prevent false collisions, like {a="bc"} and {ab="c"} hashing to the
same value.
getLabelsHashForShard is added to avoid sharding disruptions in vmagent
(-remoteWrite.shardByURL=true mode). The function preserves previous
behavior, without '=' between name and value.
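A minimal sketch of the idea, using `hash/fnv` from the standard library rather than the project's actual hash function:
```go
package main

import (
	"fmt"
	"hash/fnv"
)

type label struct{ Name, Value string }

// hashLabels writes '=' between each label name and value, so that
// {a="bc"} and {ab="c"} no longer produce identical byte streams.
func hashLabels(labels []label) uint64 {
	h := fnv.New64a()
	for _, l := range labels {
		h.Write([]byte(l.Name))
		h.Write([]byte{'='}) // the separator prevents false collisions
		h.Write([]byte(l.Value))
	}
	return h.Sum64()
}

func main() {
	a := hashLabels([]label{{Name: "a", Value: "bc"}})
	b := hashLabels([]label{{Name: "ab", Value: "c"}})
	fmt.Println(a == b) // false with the separator, true without it
}
```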
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10937
### Describe Your Changes
Fix stale `quantiles(...)` stream aggregation output for series without
samples in the current aggregation interval.
Previously, `quantilesAggrConfig` reused the `quantiles` buffer across
aggregation values. If `quantilesAggrValue.flush` was called for a
series without samples after another series had already calculated
quantiles, the stale quantile
values could be emitted for the empty series.
This could produce unrealistic `*_quantiles` output values and make the
same aggregated value appear across unrelated labelsets.
The PR skips `quantiles(...)` output when there is no histogram for the
current interval and adds a regression test for this case.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Co-authored-by: hagen1778 <roman@victoriametrics.com>
synctest runs the inner closure in a new goroutine, which makes the `t.Helper` instruction
useless for `t.Fatalf` checks. So when a test fails, we observe the log line where `t.Fatalf`
was called instead of where `f()` was called.
Moving the checks out of the synctest closure makes `t.Helper` useful again.
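A hedged sketch of the pattern (assuming Go's `testing/synctest` package; `runAggregation` and `assertEqual` are illustrative helpers):
```go
func TestAggregationFlush(t *testing.T) {
	var got string
	synctest.Test(t, func(t *testing.T) {
		// Do the time-sensitive work inside the synctest bubble,
		// but only collect the result here - don't assert.
		got = runAggregation()
	})
	// The assertion lives outside the closure, so a helper calling
	// t.Helper() + t.Fatalf reports this call site when the test fails.
	assertEqual(t, got, "expected")
}
```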
--
In the synctest we were waiting to ingest a new batch of samples for the aggregation interval.
Because of this, the new batch had a 50% chance of being ingested in the previous or the current
aggregation interval, depending on whether the Go runtime had initiated the flush() call or not.
This change waits an additional 1ms for the flush to happen. Locally, it stopped producing
flaky tests.
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
When a request contains both URL path query params and POST form values
for extra_label and extra_filters[], URL query params now take
precedence. This resolves the conflict between the two sources and
simplifies security enforcement for extra_label/extra_filters policies
via vmauth or any other http proxy.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10908
This commit introduces a new metric to expose fs type for the provided path.
For example:
```
vm_fs_info{path="/vmstorage-data", fs_type="xfs"}
```
Path must be registered with new method `fs.RegisterPathFsMetrics`.
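A hypothetical registration sketch (the exact signature of `fs.RegisterPathFsMetrics` may differ):
```go
import "github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"

func init() {
	// Register the storage data path so vm_fs_info{path="...", fs_type="..."} is exported for it.
	fs.RegisterPathFsMetrics("/vmstorage-data")
}
```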
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10482
Add a new `Kafka (Enterprise)` row to both vmagent dashboards:
- `dashboards/vmagent.json`
- `dashboards/vm/vmagent.json`
The row is placed before `Drilldown` and contains three Kafka-specific
panels:
- `Kafka bytes`
- `Kafka messages in/out`
- `Kafka and consumer errors`
The goal is to provide a compact Kafka-focused view for enterprise
vmagent deployments without duplicating the existing generic remote
write panels such as connection saturation and persistent queue size.
The new row helps distinguish:
- producer vs consumer throughput at the Kafka topic level
- message-rate shifts that may indicate smaller Kafka payloads and
higher per-message overhead
- producer-side Kafka errors vs consumer-side Kafka errors
Descriptions include links to the relevant Kafka documentation sections.
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10728
---------
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Bumps [github/codeql-action](https://github.com/github/codeql-action)
from 4.35.1 to 4.35.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v4.35.2</h2>
<ul>
<li>The undocumented TRAP cache cleanup feature that could be enabled
using the <code>CODEQL_ACTION_CLEANUP_TRAP_CACHES</code> environment
variable is deprecated and will be removed in May 2026. If you are
affected by this, we recommend disabling TRAP caching by passing the
<code>trap-caching: false</code> input to the <code>init</code> Action.
<a
href="https://redirect.github.com/github/codeql-action/pull/3795">#3795</a></li>
<li>The Git version 2.36.0 requirement for improved incremental analysis
now only applies to repositories that contain submodules. <a
href="https://redirect.github.com/github/codeql-action/pull/3789">#3789</a></li>
<li>Python analysis on GHES no longer extracts the standard library,
relying instead on models of the standard library. This should result in
significantly faster extraction and analysis times, while the effect on
alerts should be minimal. <a
href="https://redirect.github.com/github/codeql-action/pull/3794">#3794</a></li>
<li>Fixed a bug in the validation of OIDC configurations for private
registries that was added in CodeQL Action 4.33.0 / 3.33.0. <a
href="https://redirect.github.com/github/codeql-action/pull/3807">#3807</a></li>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.25.2">2.25.2</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3823">#3823</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's
changelog</a>.</em></p>
<blockquote>
<h2>4.35.2 - 15 Apr 2026</h2>
<ul>
<li>The undocumented TRAP cache cleanup feature that could be enabled
using the <code>CODEQL_ACTION_CLEANUP_TRAP_CACHES</code> environment
variable is deprecated and will be removed in May 2026. If you are
affected by this, we recommend disabling TRAP caching by passing the
<code>trap-caching: false</code> input to the <code>init</code> Action.
<a
href="https://redirect.github.com/github/codeql-action/pull/3795">#3795</a></li>
<li>The Git version 2.36.0 requirement for improved incremental analysis
now only applies to repositories that contain submodules. <a
href="https://redirect.github.com/github/codeql-action/pull/3789">#3789</a></li>
<li>Python analysis on GHES no longer extracts the standard library,
relying instead on models of the standard library. This should result in
significantly faster extraction and analysis times, while the effect on
alerts should be minimal. <a
href="https://redirect.github.com/github/codeql-action/pull/3794">#3794</a></li>
<li>Fixed a bug in the validation of OIDC configurations for private
registries that was added in CodeQL Action 4.33.0 / 3.33.0. <a
href="https://redirect.github.com/github/codeql-action/pull/3807">#3807</a></li>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.25.2">2.25.2</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3823">#3823</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="95e58e9a2c"><code>95e58e9</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3824">#3824</a>
from github/update-v4.35.2-d2e135a73</li>
<li><a
href="6f31bfe060"><code>6f31bfe</code></a>
Update changelog for v4.35.2</li>
<li><a
href="d2e135a73a"><code>d2e135a</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3823">#3823</a>
from github/update-bundle/codeql-bundle-v2.25.2</li>
<li><a
href="60abb65df0"><code>60abb65</code></a>
Add changelog note</li>
<li><a
href="5a0a562209"><code>5a0a562</code></a>
Update default bundle to codeql-bundle-v2.25.2</li>
<li><a
href="65216971a1"><code>6521697</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3820">#3820</a>
from github/dependabot/github_actions/dot-github/wor...</li>
<li><a
href="3c45af2dd2"><code>3c45af2</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3821">#3821</a>
from github/dependabot/npm_and_yarn/npm-minor-345b93...</li>
<li><a
href="f1c339364c"><code>f1c3393</code></a>
Rebuild</li>
<li><a
href="1024fc496c"><code>1024fc4</code></a>
Rebuild</li>
<li><a
href="9dd4cfed96"><code>9dd4cfe</code></a>
Bump the npm-minor group across 1 directory with 6 updates</li>
<li>Additional commits viewable in <a
href="https://github.com/github/codeql-action/compare/v4.35.1...v4.35.2">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This change introduces a helper `MustStartDefaultRWVmagent` that by
default sets `-remoteWrite.flushInterval=50ms`. This helper makes it
easier to set up RW tests, as all of them rely on frequent flushes. So
instead of overloading the flag, we can use a dedicated helper for that.
This helper was added after a newly added RW test became flaky because it
didn't have `-remoteWrite.flushInterval=50ms` set.
---------
Failing test
https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/25446725004/job/74769752869#step:5:71
Signed-off-by: hagen1778 <roman@victoriametrics.com>
This commit adds the possibility to omit the tenantID in the URL path. In this case,
the tenantID is fetched from the `AccountID` and `ProjectID` HTTP headers.
If the headers are missing too, the default `0:0` tenantID is used.
This functionality is enabled only if the -enableMultitenantHandlers
cmd-line flag is set on vminsert, vmselect or vmagent.
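A hypothetical request sketch (the exact ingestion path depends on the deployment; the headers are the ones named above):
```
# tenant 42:7 is taken from the headers because the URL path carries no tenantID;
# without the headers, the default 0:0 tenant is used
curl -X POST -H 'AccountID: 42' -H 'ProjectID: 7' \
  'http://vminsert:8480/api/v1/import/prometheus' \
  --data-binary 'foo{bar="baz"} 1'
```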
Motivation: this change makes the VM configuration for multitenancy
consistent with the VL configuration - see
https://docs.victoriametrics.com/victorialogs/#multitenancy. And it keeps
backward compatibility at the same time.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4241
Before, some of the template examples were wrongly rendered by Hugo.
For example:
```
http://vm-grafana.com/<dashboard-id>?viewPanel=<panel-id>&from={{($activeAt.Add (parseDurationTime \"-1h\")).UnixMilli}}&to={{($activeAt.Add (parseDurationTime \"1h\")).UnixMilli}}
```
was rendered like:
```
http://vm-grafana.com/ ?viewPanel=&from={{($activeAt.Add (parseDurationTime "-1h")).UnixMilli}}&to={{($activeAt.Add (parseDurationTime "1h")).UnixMilli}}
```
Wrapping the examples in backticks helps to render them raw.
While there, also fixed some examples.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
On 2025-12-16, Hetzner Cloud deprecated the `datacenter` field in their
Servers API and introduced a top-level `location` field carrying the
same data. The `datacenter` field will be removed after 2026-07-01.
Without this change, `__meta_hetzner_hcloud_datacenter_location` and
`__meta_hetzner_hcloud_datacenter_location_network_zone` would silently
become empty for the `hcloud` role after that date.
This mirrors the change made in Prometheus v3.11.0
([prometheus/prometheus#17850](https://github.com/prometheus/prometheus/pull/17850)).
## Changes
**`hcloud` role:**
- Add `HCloudLocation` struct and `Location` field on `HCloudServer`,
mapped to the new top-level `location` API field
- Emit two new canonical labels: `__meta_hetzner_hcloud_location` and
`__meta_hetzner_hcloud_location_network_zone`
- Keep the deprecated `__meta_hetzner_hcloud_datacenter_location` and
`__meta_hetzner_hcloud_datacenter_location_network_zone` labels, now
sourced from the new `location` field so they continue to work past
2026-07-01
- `__meta_hetzner_datacenter` (the datacenter name, e.g. `fsn1-dc14`) is
unaffected for this role — the datacenter name is a distinct concept
from location and is kept as-is (this will stop working starting
2026-07-01)
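For example, a hedged relabeling sketch that copies the new canonical labels into target labels (standard Prometheus-style relabeling assumed):
```yaml
relabel_configs:
  - source_labels: [__meta_hetzner_hcloud_location]
    target_label: location
  - source_labels: [__meta_hetzner_hcloud_location_network_zone]
    target_label: network_zone
```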
**`robot` role:**
- Add `__meta_hetzner_robot_datacenter` as the canonical replacement for
`__meta_hetzner_datacenter`; the old label is kept for backward
compatibility
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10909
A bug was introduced in v1.142.0 when changes from the OSS version were
back-ported into the Enterprise branch. It changed the order of storage
node discovery and resulted in:
* overwriting of discovered storage nodes
* duplication of per-storage-node metrics
This bug affects only the enterprise vminsert version.
Mention the -rule.stripFilePath cmd-line flag in the security recommendations,
so users can be aware of it.
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Haley Wang <haley@victoriametrics.com>
The change adds an `AI observability` section to the `AI tools` documentation.
It mentions the excellent @Amper articles describing these integrations in
full detail.
The doc change doesn't repeat the articles, but rather helps users
discover them.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The flag already exists in the ENT version. We decided to expose it in
OSS and strip the path from all public places, including all
APIs (including `/metrics`) and debug logs (it's minor info there).
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5625
Previously, if the `-tls` flag was provided, VictoriaMetrics components
produced the following error log entry on health checks:
http: TLS handshake error from 10.244.0.1:46556: EOF
Such health checks are common for many orchestration systems, such as
Consul or Kubernetes. The default HTTP server already suppresses such EOF
errors on health checks.
This commit adds the same suppression to the TLS server.
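A minimal sketch of the suppression logic (not the actual implementation; the logger call is illustrative):
```go
if err := tlsConn.HandshakeContext(ctx); err != nil {
	if errors.Is(err, io.EOF) {
		// A plain TCP health check closed the connection without sending
		// a TLS handshake - don't pollute the logs.
		return
	}
	logger.Errorf("http: TLS handshake error from %s: %s", conn.RemoteAddr(), err)
	return
}
```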
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10538
# What Changed
- Updated the operator installation procedure
- Updated the commands to match the rest of the guides
- Updated screenshots
- Reordered steps to make more sense of the process
- Fixed issues in the YAML
- Tested on actual OpenShift trial instance running on AWS
- Added steps to confirm log ingestion using VMUI
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10864
This PR fixes several broken links and anchors in the victoriametrics
docs.
Note about the link changes in the FAQ.md file: the links inside the paragraph
break navigation in the right-side menu. To fix this, an explicit anchor
definition has been added. The anchor is the same as before; setting it
explicitly fixes the sidebar links.
See https://github.com/VictoriaMetrics/vmdocs/issues/221 for the
up-to-date list once this PR is merged.
PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10874
Previously, metricMetadata was not properly reset during parsing of
metrics. It could result in the `Unit` suffix from a previously parsed
metric being added to the next metric without a Unit field.
For example, the metric `http_request` with `Unit` `seconds` is
converted into `http_request_seconds` and the `Unit` field holds `seconds`.
The next parsed metric `cpu_usage_ratio` has no `Unit`, so it would get the
previous `seconds` `Unit` -> `cpu_usage_ratio_seconds`.
This commit adds a metricMetadata reset call before parsing the next
metric.
Bug was introduced at 293d80910c
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10889
Iximiuz labs prepared a set of playgrounds for VictoriaMetrics. These
are interactive playgrounds backed by real Linux machines running
VictoriaMetrics software, allowing experimenting and investigating right
in the browser tab.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Metadata is enabled by default since v1.137.0, and the metadata volume
can be a big contributor to resource usage and network traffic.
vmagent dashboard:
1. `Troubleshooting` section: rename `Datapoints rate` panel to `Rows
rate` to include metadata rate;
2. `Ingestion` section: add metadata rate to existing `Rows rate` panel.
(The difference between this panel and the one above is that this panel
only contains data from write requests, while the above panel also
includes the scraping part.)
vmcluster dashboard:
1. `vminsert` section: add `Rows rate` panel
Didn’t see a good place for it in the vmsingle dashboard, since it
doesn’t have a dedicated insert section, and I don’t want to add it to
`overview` yet.
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10868
* add visual mermaid diagram to demonstrate aggregation concept;
* update Recording-rules-alternative:
  * recommend using rate_sum instead of total for better reliability
  * demonstrate how to calculate a sliding window, typical for recording
    rules
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Pablo Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Add support for storing and retrieving samples with future
timestamps, as requested in https://github.com/VictoriaMetrics/VictoriaMetrics/issues/827
What to expect:
- By default, the max future timestamp is still limited to `now+2d`. To
change it, set the `-futureRetention` flag in `vmstorage`. The max flag
value is currently limited to `100y`. It can be extended if we see a
demand for this, but it can't be more than `~ 290y` due to how the time
duration is implemented in Go. The flag value can't be less than `2d`.
- downsampling and retention filters (available in enterprise edition)
are currently not supported for future timestamps
- If `vmstorage` restarts with a smaller value of `-futureRetention`
flag, any future partitions that are outside the new future retention
will be automatically deleted.
- Data ingestion, data retrieval, backup/restore, timeseries (soft)
deletion, and other operations work with future timestamps the same way
as with the historical timestamps.
- In the cluster version, the affected binaries are `vmstorage` and
`vmselect`. This means that `vmselect` version must match `vmstorage`
version if you want to query future timestamps. `vminsert` was not
affected, so its version can be a lower one.
- If you downgrade the `vmstorage`, the data with future timestamps will
remain on disk and memory (per-partition caches) but won't be available
for querying.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Signed-off-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Update `timeutil.ParseTimeAt` to check the time limits for all date/time formats, not just year.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Previously, the backend URL health check start could produce a data race
and a race condition.
The following panic could be produced:
`panic: sync: WaitGroup is reused before previous Wait has returned`
It happened because a concurrent goroutine could process a request while
the configuration was being reloaded and the stopHealthChecks method was called.
This commit adds a dedicated structure for backend health checks,
which protects against the data race with a mutex guard and prevents the race
condition with a boolean flag.
Fixes: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10806
Visually outline that the guideline message should be removed from the
description before submitting the PR. This should prevent cases where the PR
template blends into the PR description and remains unnoticed.
The commit in metricsql
d0bc93816e
introduced a bug that changed the order of binary op evaluation. This
commit updates to a metricsql version that fixes the bug by reverting to
the previous behavior.
The bug was introduced in v1.140.0, v1.136.4, and v1.122.19 releases.
It was reported in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10856
Previously, `GetData` in the OpenTSDB client was returning an empty `Metric{}` with a
`nil` error for several conditions (multiple series returned, aggregate
tags present, `modifyData` failures), causing `vmctl opentsdb` to
silently drop series during migration.
This commit changes these silent return paths to return proper errors with
descriptive messages including the query string, so operators can detect
and diagnose partial migrations.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10797
Previously, if a rule label value was set to an empty string, vmalert ignored this label when merging labels with labels from the data source response. In contrast, Prometheus removes the data source label in this case as well, which allows performing a label delete operation.
This commit uses the same logic as Prometheus for resolving label conflicts and allows removing labels.
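A hedged vmalert rule sketch illustrating the new behavior:
```yaml
groups:
  - name: example
    rules:
      - record: instance:requests:rate5m
        expr: rate(requests_total[5m])
        labels:
          env: ""  # with this change, the `env` label coming from the datasource response is removed
```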
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10766
The proxy protocol parser kept a sub-slice reference to a pooled bytesBuffer in readProxyProto:
```
bb := bbPool.Get()
defer bbPool.Put(bb) // ← buffer returned to pool AFTER function returns
...
IP: bb.B[0:16], // ← BUG: sub-slice of pooled buffer!
...
```
This commit properly allocates a new slice for the IPv6 address and copies the buffer content into it.
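A minimal sketch of the fix (the target field name is illustrative):
```go
// Copy the address out of the pooled buffer instead of aliasing bb.B,
// so the value stays valid after bb is returned to the pool.
ip := make([]byte, net.IPv6len)
copy(ip, bb.B[0:16])
proxyInfo.IP = ip
```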
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10839
`IndexDBRecordsDrop` and `TooManyTSIDMisses` were mistakenly placed in `alerts-health.yml`,
which is supposed to contain rules related to all VM components. But these two rules
are related to storage components only (vmstorage and vmsingle). Moving them to the corresponding
files.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The change should reduce confusion about which installations `alerts.yml`
applies to. Before, developers could mistakenly assume that
`alerts.yml` was related to both single-node and cluster installations.
As a result, the rule `MetadataCacheUtilizationIsTooHigh` was added only
to `alerts.yml` and not copied to `alerts-cluster.yml`.
The rename should bring more context into the file name
and reduce confusion in the future.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Before, this rule was only part of the single-node rule set.
But it is applicable to both single-node and cluster installations.
Adding it to the cluster rule set as well.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The new rule `MetricNameStatsCacheUtilizationIsTooHigh` will signal
overutilization of the metric names usage stats tracker. See
https://docs.victoriametrics.com/victoriametrics/#track-ingested-metrics-usage
This rule can fire for deployments with a high churn rate of metric names.
In cases like this, it is better to disable metric name tracking
completely, as it brings no benefit.
It might also fire for deployments that have been tracking metric names for very
long periods; in that case the alert is a good sign that the cache should be reset.
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The docs currently wrongly state that vminsert applies a per-timeseries
label limit of `30`. The actual limit is `40`, which is also
correctly stated in the vmcluster docs. This PR corrects this in the key
concepts docs.
```
-maxLabelsPerTimeseries int
The maximum number of labels per time series to be accepted. Series with superfluous labels are ignored. In this case the vm_rows_ignored_total{reason="too_many_labels"} metric at /metrics page is incremented (default 40)
```
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10826
After switching squash merges to use the PR title and description, the
PR template text started leaking into final commit messages and adding
noise.
This PR removes the template and documents what a PR title and PR
description should contain instead.
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10789
Previously, after RoundTrip returned successfully (err == nil, res != nil), the code checked whether the original client request's context was canceled. If canceled, it returned immediately without closing res.Body.
There is a race window where:
1) RoundTrip completes successfully (res is non-nil)
2) The client cancels the request context (closes the connection)
3) The context check at line 484 sees the cancellation
4) The function returns without closing res.Body
The response body holds a reference to the underlying TCP connection. Without closing it, the connection is permanently leaked along with the transport goroutines (readLoop + writeLoop or dialConnFor).
The bug was introduced in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10233
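A minimal sketch of the fix (assuming the reverse-proxy code shape described above):
```go
res, err := transport.RoundTrip(req)
if err != nil {
	return err
}
select {
case <-req.Context().Done():
	// The client went away after a successful RoundTrip - release the body,
	// otherwise the underlying connection and its goroutines are leaked.
	_ = res.Body.Close()
	return req.Context().Err()
default:
}
```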
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10833
**"Run query" link params**
Added correct params to "Run query" link on Alerting Rules page:
- `g0.step_input` - set to `group.interval` (in seconds)
- `g0.end_time` - set to `rule.lastEvaluation` / `alert.activeAt`
- `g0.relative_time=none` - to fix the time range
**Time display timezone**
Changed `t.format(...)` to `t.tz().format(...)` to display time in the
user-selected timezone.
Related issues:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10366
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10827
TCP healthchecks on the clusternative port of vmselect log the following warning continuously:
VictoriaMetrics/lib/vmselectapi/server.go:204 cannot complete vmselect handshake due to network error with client "10.129.30.27:43829": cannot read hello message : cannot read message with size 11: EOF; read only 0 bytes. Check vmselect logs for errors
This is in contrast to vminsert, where it seems like there's handling for these healthchecks:
```
if errors.Is(err, io.EOF) {
	// This is likely a TCP healthcheck, which must be ignored in order to prevent logs pollution.
	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1762
	return errTCPHealthcheck
}
```
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10786
Previously, on non-200 HTTP status codes, lib/promscrape performed an
unbounded body read, which could potentially result in OOM.
This commit adds a maxScrapeSize limit to error response body reads,
protecting against malicious or misbehaving metrics endpoints.
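A minimal sketch of the bounded read (assuming `maxScrapeSize` holds the configured limit):
```go
// Read at most maxScrapeSize bytes of the error response body,
// so a misbehaving endpoint cannot trigger an unbounded allocation.
body, err := io.ReadAll(io.LimitReader(resp.Body, maxScrapeSize))
```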
vmsingle shuts down vminsert before closing the ingestion rate limiter, even though the rate limiter API explicitly requires the opposite order to unblock callers. vminsert.Stop() waits for unmarshal workers, which can be blocked in ingestionRateLimiter.Register() when the limit is hit.
Workers in runParallelPerPathInternal check ctxLocal.Done() before processing each work item and exit early on cancellation — without sending a result to resultCh. However, the coordinator loop always waits for exactly len(perPath) results from resultCh. If cancellation occurs before all tasks report, the read blocks indefinitely.
0aaa741b5b introduced a regression in lib/awsapi/config.go that causes empty credentials to be returned on the very first call to getFreshAPICredentials() when using EKS Pod Identity (or any container credential mechanism with no static access key). These empty credentials are then used for SigV4 signing -> 403 Forbidden on every remote write request.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10815
Just run a simple bash command without the heavyweight Docker image
While at it, rely on TAG environment variable instead of PKG_TAG env variable
for `make docs-update-version`, in order to be consistent with other Make commands.
The change is needed to group the splitting/sharding sections of the documentation,
so they go one after another. This should improve readability.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The previous description didn't mention that relabeling can be used
for filtering scrape targets. Adding this mention.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
These links were removed in 134501bf99
without adding complete substitution to their content.
Restoring these links as they can be useful for readers to learn about relabeling.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The old links were removed in #10754
on the mistaken assumption that Google didn't index them. However, it did, and users can get a 404
when searching Google for VM playgrounds.
Restoring the links via aliases. This means Hugo will serve the `/playgrounds` page when
a user requests `/playgrounds/victoriametrics/`.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Fix app tests:
1. Sync code between vmsingle and vmcluster: it must be the same because
apptest does not differentiate between branches, it just runs pre-built
binaries
2. Simplify range queries in backup/restore test so that it does not
depend on the interval between samples to work correctly.
---------
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Previously (*writeconcurrencylimiter.Reader).Read() could permanently leak concurrency tokens from the -maxConcurrentInserts semaphore.
Consider the following example:
* GetReader() acquires a token, then PutReader() unconditionally releases it.
* Read() calls DecConcurrency() before the underlying I/O and IncConcurrency() after it. If IncConcurrency() returns an error, Read() returns without holding a token.
* Each such failure permanently removes one slot from the concurrencyLimitCh semaphore. Slots leak one by one until the channel is fully drained, at which point DecConcurrency() blocks forever, deadlocking ingestion on vmstorage.
This commit adds tracking of obtained tokens to the reader, which prevents possible token leakage.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10784
This reverts commit b3c03c023c.
Reason for revert: the original logic was correct from the user's perspective:
- The -maxRequestBodySizeToRetry command-line flag controls the size of the request body,
which could be retried on backend failure. The meaning of this flag wasn't changed after
the introduction of the -requestBufferSize flag in the commit e31abfc25c
(see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309 )
- The -requestBufferSize flag controls the size of the buffer for reading the request body
before sending it to the backend and before applying concurrency limits.
These flags are independent from the user's perspective. The fact that these flags share the implementation
shouldn't be known to the user - this is an implementation detail, which allows avoiding double buffering.
Both flags enable request buffering. If the user wants to disable all request buffering,
then both flags must be set to 0. That's why these flags are cross-mentioned in their -help descriptions.
Also the reverted commit had the following issues:
- It reduced the default value for the -requestBufferSize flag from 32KiB to 16KiB.
The 32KiB value has been calculated and justified at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309 .
It shouldn't increase vmagent memory usage too much for typical workloads.
For example, if vmagent handles 10K concurrent requests, then the memory overhead for the request buffering
will be 10K*32KiB=320MiB. This is a small price for being able to efficiently handle 10K concurrent requests.
- It added a dot to the end of the https://docs.victoriametrics.com/victoriametrics/vmauth/#request-body-buffering link
in the description of the -requestBufferSize flag. This breaks clicking the link in some environments,
since the trailing dot is considered a part of the url.
- It added a superfluous whitespace in front of the 'Disabling request buffering' text inside the description
for the -requestBufferSize flag.
- It introduced unnecessary complexity for the user by mentioning that the zero value
at -maxBufferSize disables buffering for request retries (these things must be independent
from the user's perspective).
- It changed the bufferedBody logic in non-trivial ways, which aren't related to the original issue.
If these changes are needed, then they must be justified in a separate issue and must be prepared
in a separate pull request / commit.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10675
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10677
Previously, Storage.table was initialized after startFreeDiskSpaceWatcher was called.
This created a potential data race condition: if openTable took a long time to complete
and freed disk space during that window, the free disk space watcher could read an
uninitialized (or partially initialized) Storage.table, leading to an invalid memory
address or nil pointer dereference panic.
This commit properly initializes the s.isReadOnly state during storage start and
starts the FreeDiskSpaceWatcher after openTable.
The bug was introduced in github.com/VictoriaMetrics/VictoriaMetrics/commit/27b958ba8bc66578206ddac26ccf47b2cc3e8101
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10747
Align the group evaluation time with the `eval_offset` option to allow users
to manage group execution more effectively by understanding the exact
time each group will be scheduled, particularly in cases of spreading
rule execution within a window, chaining groups, or debugging data delay
issues.
If the group evaluation takes less than the group interval, but the
initial evaluation combined with the additional restore operation
exceeds the group interval, the evaluation time will be gradually
corrected in subsequent evaluations, as the interval ticker schedule
remains unchanged.
For groups without `eval_offset`, this change also ensures that all
evaluations follow the interval. Previously, the gap between the first
and second evaluations was larger than the interval. And the
`eval_delay` continues to help prevent partial responses.
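A hedged group config sketch showing the option this change aligns evaluation with:
```yaml
groups:
  - name: offset-example
    interval: 1m
    eval_offset: 10s  # evaluations now happen at a predictable offset within each interval
    rules:
      - record: job:up:count
        expr: count(up) by (job)
```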
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10772.
Follow-up commit for
211fb08028
Address @f41gh7 review comments:
- Move code from `lib/osinfo` to `lib/appmetrics`.
- Make the logic private.
- Use metrics.WriteGaugeUint64 func.
- Remove registration logic from `app/xxx/main.go`.
- Remove `lib/osinfo` package.
At 00:00 UTC the ingested samples start to have timestamps for the new
day (the ingested samples always have recent timestamps). Even though there was a
next-day prefill of the per-day index during the last hour of the day,
some performance degradation is still possible.
For example, in https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10698
it is manifested as `vminsert-to-vmstorage connection saturation` peaks
right after midnight.
A possible hypothesis for why this is happening: at midnight,
currHourMetricIDs is empty and prevHourMetricIDs cannot be used because
it holds metricIDs for the previous day. So the ingestion logic hits
dateMetricIDsCache, which may not have the metricID in its read-only
buffer and therefore must acquire a lock to check its prev read-only
buffer or read-write buffer. This creates lock contention and therefore
raises ingestion request latency.
A solution to this could be re-using the nextDayMetricIDs during the
first hour of the day. During this time, it is equivalent to
currHourMetricIDs.
---------
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Signed-off-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
This change reverts part of the changes in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10686
Motivation: the docs added in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10686 are in most cases too verbose, AI-generated, and of low practical value.
The improvement goal: remove bloat from the docs and keep them practical and useful.
What it does:
- Completely removes items from the sidebar
- Moves the content of the most important playground pages to the
`/playground/` stub (README.md), using H2s for each playground.
- Updates and cleans the text.
- Removes the individual children pages in the playground category (keeping
only the `/playgrounds/` page/stub).
- Removes items that don't really need much introduction or aren't
playgrounds:
  - log to logsql: a conversion tool
  - sql to logsql: same
- Adds a Grafana playground section
Links to child pages will become invalid. We don't preserve them, as this is a pretty new doc (1 week in prod) and links to it are unlikely to have persisted anywhere.
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
This commit adds new metrics `vmalert_remotewrite_queue_capacity` and `vmalert_remotewrite_queue_size`, which are updated with each push; the update
frequency depends on `-remoteWrite.concurrency` and
`-remoteWrite.flushInterval`.
It doesn't account for the pending data within each pusher's request, but it
should provide a general indication of the queue usage.
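For example, a rough queue-usage ratio can be derived from the two metrics:
```
vmalert_remotewrite_queue_size / vmalert_remotewrite_queue_capacity
```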
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10765
Extract repeated code from nextDayMetricIDs synctests into separate
funcs to make the code more readable.
The change was originally introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10704 and was
extracted into a separate PR to keep the original change simple.
Previously, vminsert did not account for the ingest concurrency limit in buffer size calculation.
This could lead to excessively large buffers and OOM errors when the concurrency limit was reached.
This commit fixes buffer size calculation by separating `insertCtx` and `storageNode` buffer size limits.
`storageNode` buffer size is set to a larger value, as it is allocated per configured `-storageNode`
and is independent of the concurrency limit.
`insertCtx` buffer size now accounts for the configured concurrency limit
and calculates the maximum buffer size accordingly.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10725
Previously, vmselect in cluster-native mode could return partial responses to upstream vmselect.
Since upstream vmselect expects full responses (mimicking vmstorage behavior),
partial responses must be disabled in cluster-native mode.
This prevents incomplete responses from being cached at the upstream vmselect level.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10678
Automatically set daily and hourly series limits to `MaxInt32` when `remoteWrite.maxHourlySeries` or `remoteWrite.maxDailySeries` is set to `-1`.
This change addresses a usability issue with the cardinality limiter. Users may want to enable the limiter to observe its metrics before deciding on an appropriate limit. However, the underlying bloom filter only supports `int32`, so setting large values can lead to overflow.
With this PR:
* Setting either flag to `-1` is treated as “no practical limit” and internally mapped to `math.MaxInt32`
* Values exceeding `int32` are safely clamped to `MaxInt32` to prevent overflow
This allows users to enable the limiter for estimation purposes without risking invalid configurations or runtime issues.
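A hedged usage sketch with the flags described above:
```
# enable the hourly/daily cardinality limiters only to observe their metrics,
# without enforcing any practical limit
./vmagent -remoteWrite.url=https://vm:8428/api/v1/write \
  -remoteWrite.maxHourlySeries=-1 \
  -remoteWrite.maxDailySeries=-1
```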
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9614
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Previously, the last scrape result was unconditionally updated, despite a possible scrape error.
The commit updates the last scrape result only on a successful scrape. This properly accounts for the `scrape_series_added` metric and aligns it with the same metric in Prometheus.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10653
The previously introduced flag `requestBufferSize` raised the default value of the
in-memory buffer from 16KB to 32KB. It could increase memory usage for
vmauth. It also made it unclear how to actually disable request buffering.
This commit aligns the flag value to 16KB and disables request
buffering if any of the flag values is 0, as mentioned in the flags' descriptions.
If any of the flags has a non-default value, that value is used as the max size
for the request buffer. If both flags are modified, the bigger value wins.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10675
I expect the change to help in two ways:
1. Spreading remote write flushes over the flush interval to avoid
congestion at the remote write destination;
2. Enhance queue data consumption. Currently, all flushers may always
flush data simultaneously, resulting in periods where no flushers are
consuming data from the queue, which increases the risk of reaching the
queue limit `remoteWrite.maxQueueSize` even with an increased
`remoteWrite.concurrency`. By making the flushers more dispersed, it is
more likely that some flushers are consistently consuming data from the
queue, which should make queue management easier.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10729/
Changes:
- Added the number of `pending alerts` and `firing alerts`
- Improved `transformations` for the panel - FIRING over time by group and rules
- Added sorting for the panel - FIRING over time by rule
Signed-off-by: sias32 <sias.32@yandex.ru>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Replace the 1.2 multiplier with 1.25 in the disk space estimation formula.
1.2 only provides ~16.7% free space, while the docs recommend keeping
20%. Using 1.25 correctly accounts for 20% free space.
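The arithmetic behind the multiplier (free-space fraction for a multiplier m):
```
free_fraction = 1 - 1/m
m = 1.20 -> 1 - 1/1.20 ≈ 0.167  (~16.7% free)
m = 1.25 -> 1 - 1/1.25 = 0.200  (20% free)
```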
Inspired by
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10394
Add per-URL `-remoteWrite.disableMetadata` flag to control metadata
sending for each remote storage independently.
After v1.137.0 enabled `-enableMetadata` by default, metadata is sent to
ALL remote write targets, even those with relabeling filters that drop
most metrics. This causes unnecessary growth in
`vmagent_remotewrite_requests_total` and a significant increase in
network load for heavily filtered remote write destinations.
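A hedged usage sketch (assuming the flag is matched to `-remoteWrite.url` values in order of appearance, like other per-URL vmagent flags):
```
./vmagent \
  -remoteWrite.url=https://primary/api/v1/write   -remoteWrite.disableMetadata=false \
  -remoteWrite.url=https://filtered/api/v1/write  -remoteWrite.disableMetadata=true
```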
When the network between the client and the s3 server is unstable, the client may encounter temporary io.EOF errors when reading the response from the s3 server.
Currently, the s3 sdk in vmbackup uses the default retry policy. However, this default retry policy won't retry when the s3 sdk meets an unexpected EOF. This means that a temporary unexpected EOF error will cause the backup task to fail.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10699
Add a link to the blog post with detailed information about the zstd+rw protocol.
This PR is based on a question in the community channel about the implementation
details.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Implemented dedicated thanos migration mode for vmctl to migrate data from Thanos installations to VictoriaMetrics.
Key features:
1. Raw and downsampled blocks support: Reads both raw blocks
(resolution=0) and downsampled blocks (5m/1h resolution) directly from
Thanos snapshots
2. All aggregate types: Imports count, sum, min, max, and counter
aggregates from downsampled blocks as separate metrics with resolution
and type suffixes (e.g., metric_name:5m:count)
3. Dedicated flags: Uses `--thanos-*` prefixed flags (--thanos-snapshot,
--thanos-concurrency, --thanos-filter-time-start,
--thanos-filter-time-end, --thanos-filter-label,
--thanos-filter-label-value, --thanos-aggr-types)
4. Selective aggregate import: Use `--thanos-aggr-types` to import only
specific aggregates
Usage:
```
vmctl thanos --thanos-snapshot /path/to/thanos-data --vm-addr http://victoria-metrics:8428
```
Closes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9262
Signed-off-by: Dmytro Kozlov <d.kozlov@victoriametrics.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
## Summary
This PR implements split phase metrics for filestream operations as
requested in #10432.
### Changes
- Added `vm_filestream_fsync_duration_seconds_total` metric to track
fsync syscall duration separately
- Added `vm_filestream_fsync_calls_total` metric to count fsync calls
- Added `vm_filestream_write_syscall_duration_seconds_total` metric to
track write syscall duration (previously mixed with flush time)
- Refactored `MustClose()` and `MustFlush()` to use new `flush()` and
`sync()` helper methods
- Kept `vm_filestream_write_duration_seconds_total` for backward
compatibility
### Problem Solved
Previously, `vm_filestream_write_duration_seconds_total` was being
incremented in two places:
1. `statWriter.Write()` - triggered by `bw.Flush()` and `bw.Write()`
2. `Writer.MustFlush()` - which included the above process, leading to
double-counting
This made it impossible to distinguish between write syscall time and
fsync time, which is critical for diagnosing storage latency issues.
### Solution
The new metrics allow users to:
- Distinguish "flush got slower" vs "fsync got slower" using metrics
only
- No file path labels (bounded cardinality)
- No double-counting between metrics
### Testing
- Code compiles successfully
- All existing metrics are preserved for backward compatibility
Closes #10432
---------
Signed-off-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Signed-off-by: Aliaksandr Valialkin <valyala@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
These types hide public types from lib/storage/metricnamestats package.
These types do not resolve any practical issues. Instead, they add a level of indirection,
which complicates reading and understanding the code.
These types were introduced in the commit 795d3fe722
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6145
### Describe Your Changes
Some users may not know that VictoriaMetrics Cloud provides relevant
features to manage workloads. This change adds notes in relevant places
where users may find that a managed solution is what they need.
The intention is not to push users to Cloud, but to give the information.
That's why it's always phrased like: "If you don't want to do X, Cloud
can do it for you", instead of "Start for free, etc". This is an Open
Source first project, and shall remain as such.
After this gets proper review, VictoriaLogs and other repos may follow.
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Signed-off-by: Jose Gómez-Sellés <14234281+jgomezselles@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Commit 83da33d8cf
removed NFS directory delete retries. It was made on the assumption that
only directory renames could cause such issues. However, both rename and
unlink use the same "silly rename" logic -
https://linux-nfs.org/wiki/index.php/Server-side_silly_rename
and the Linux kernel - `fs/nfs/dir.c` `nfs_unlink` and `nfs_rename`.
The NFS client may treat a file as still open, even if it
was properly closed by the application. Most probably it could be triggered because VictoriaMetrics may
open the same file multiple times (data reads and background merges).
There is no issue with VictoriaMetrics itself - it properly closes files. But the NFS client may have delays
or cache metadata information for the files, so it could trigger the silly rename behavior.
This commit restores the original behavior with deletion retries and brings
back the metrics for unsuccessful delete operations.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9842
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10680
We noticed that backup restores in our environment were much slower than
the hardware/bandwidth constraints would suggest and we traced this down
to a couple of bottlenecks. This PR attempts to address all of them.
#### Lack of pre-allocation of files
This was causing writes far into files to be quite slow, as new blocks
needed to be continually allocated. This was particularly bad on ext4
for us, but will likely be applicable to most disks and filesystems.
You'll see the impl here is Linux-specific, but this is mostly because I
don't have a test env for any other platform and didn't want to blindly
make changes without a validation env.
This comes with the downside of no longer being able to resume a restore
mid-file, and requiring the re-downloading of parts already in the file,
since the file will appear at full size from the very start. This is, I
think, _generally_ a good tradeoff for the restore speed gains. It is
definitely a tradeoff, so I've included a flag to disable the
pre-allocation behavior and fall back to the existing part diffing
logic.
#### Fsync after each part
With many small parts in relatively few files, or in high-concurrency
setups, the writerCloser fsync on each part (actually a double fsync,
since both `filestream.Writer.mustFlush` and
`filestream.Writer.mustClose` fsync) was causing slowdowns, since
we would be continually queuing fsyncs.
With the pre-allocation pattern the file is only "ready" once renamed,
so I moved to a per-file fsync after rename.
#### Concurrent read/write
The previous download pattern was to do a read from the remoteFs, with
whatever latency that entailed, then sequentially do a write, again with
whatever latency that entailed. This meant that throughput was limited
to `blockSize / (readLatency + writeLatency)`.
Similar to how `crossTypeCopy` is implemented in the backup process, we
can instead use `io.Pipe` to allow two goroutines to work in parallel
with a small buffer between them.
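A minimal sketch of that pattern (function and variable names are illustrative):
```go
pr, pw := io.Pipe()
go func() {
	// Download from remote storage into the pipe; the write side is closed
	// with the download error (or nil), so the reading side sees it.
	pw.CloseWithError(downloadPart(ctx, part, pw))
}()
// Write to the local file concurrently with the download.
_, err := io.Copy(dstFile, pr)
```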
#### Pagecache avoidance
`filestream.Writer` does quite a lot to avoid polluting the page cache,
but this is not relevant in a restore context, and with large sequential
block writes it's much more efficient to let the OS flush the pagecache
whenever it wants rather than doing a bunch of small buffer syscalls to
flush blocks.
Therefore this switches over to a much simpler directWriterCloser that
does direct file IO and lets the OS handle flushes mid-write.
### Performance
Before the changes we were seeing write speeds of only 100MBps. This
was a restore from EBS volumes (ext) with 1GB/s throughput:
<img width="1613" height="586" alt="Screenshot 2026-03-16 at 1 29 46 PM"
src="https://github.com/user-attachments/assets/5d54dcb7-cb59-43e0-9247-fda8c70feb2f"
/>
After these changes, in the same restore env, we're seeing flat rates
of 600MBps.
<img width="1611" height="471" alt="Screenshot 2026-03-16 at 1 31 33 PM"
src="https://github.com/user-attachments/assets/ea8e2eb7-533a-48fa-99e0-0b38286e5572"
/>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Remove shards as they only complicate things when the number of requests
per second is in the range of thousands.
Related to #10532.
---------
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
This commit allows performing JWT claim matching over one-dimensional arrays. It could
be useful from a practical standpoint, because permissions are usually assigned as a list of values.
For example, the following config allows admin access based on the list of roles assigned to the user:
```yaml
match_claims:
access.roles: "admin"
```
JWT token:
```json
{
"access": {
"roles": [
"read",
"write",
"admin"
]
}
}
```
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10647
RFC 7617 allows an empty password/username. Moreover, from the RFC's standpoint both values being empty is valid as well; it should just be encoded as `:`. So this commit relaxes the non-empty username restriction.
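For example, empty username and empty password encode as a bare colon:
```
Authorization: Basic Og==   # base64(":")
```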
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6956
There are cases when the key sizeBytes is much greater than the value
sizeBytes. Therefore it is important to include the key sizeBytes in
the total.
Also fix some code comments.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Bumps [flatted](https://github.com/WebReflection/flatted) from 3.3.3 to
3.4.2.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3bf09091c3"><code>3bf0909</code></a>
3.4.2</li>
<li><a
href="885ddcc33c"><code>885ddcc</code></a>
fix CWE-1321</li>
<li><a
href="0bdba705d1"><code>0bdba70</code></a>
added flatted-view to the benchmark</li>
<li><a
href="2a02dce7c6"><code>2a02dce</code></a>
3.4.1</li>
<li><a
href="fba4e8f2e1"><code>fba4e8f</code></a>
Merge pull request <a
href="https://redirect.github.com/WebReflection/flatted/issues/89">#89</a>
from WebReflection/python-fix</li>
<li><a
href="5fe86485e6"><code>5fe8648</code></a>
added "when in Rome" also a test for PHP</li>
<li><a
href="53517adbef"><code>53517ad</code></a>
some minor improvement</li>
<li><a
href="b3e2a0c387"><code>b3e2a0c</code></a>
Fixing recursion issue in Python too</li>
<li><a
href="c4b46dbcbf"><code>c4b46db</code></a>
Add SECURITY.md for security policy and reporting</li>
<li><a
href="f86d071e0f"><code>f86d071</code></a>
Create dependabot.yml for version updates</li>
<li>Additional commits viewable in <a
href="https://github.com/WebReflection/flatted/compare/v3.3.3...v3.4.2">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/VictoriaMetrics/VictoriaMetrics/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Due to a conflict with the VL FAQ page identifier,
the VM FAQ page stopped rendering.
This change adds a unique identifier to the VM FAQ page and fixes the issue.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Before, by mistake, the datasource was referenced by input name instead
of variable name. For an unknown reason, it worked well in the local setup
and on the playground.
This fix is confirmed by users and continues to work in the local setup
and on the playground.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Updated the [HA monitoring setup in Kubernetes via VictoriaMetrics
Cluster](https://docs.victoriametrics.com/guides/k8s-ha-monitoring-via-vm-cluster/)
guide.
Changes:
- Added an introduction explaining how HA works in this guide
- Updated and verified commands used in the guide
- Replaced Grafana UI usage with VMUI (Grafana was only used to run
queries; it's easier to use the built-in VMUI than to install Grafana
just for the Explore tab)
- Removed Grafana screenshots and replaced them with VMUI
- Tested on a modern version of GKE
- Added explanations for `replicationFactor`, de-duplication, and
`isPartial`
- Added next steps
- Added VMUI screenshots
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
This commit adds an RPC retry by dialing a new connection instead of
getting an old one from the connection pool when the previous RPC error
is `io.EOF`.
It helps prevent broken connections from remaining for too long and
causing failed requests and partial responses during the `vmstorage` rolling
restart period.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10314
Previously inmemoryPart refCount was not properly decremented.
Previous behavior:
* createInmemoryPart calls newPartWrapperFromInmemoryPart and returns a partWrapper with refCount=1
* multiple parts are merged in mustMergeInmemoryPartsFinal, which creates a new merged part
* the source partWrappers are never decRef'd
* Since refCount never reaches 0, putInmemoryPart and (*part).MustClose are never called
This commit properly decrements refCount at mustMergeInmemoryPartsFinal.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10086
This commit adds a new `folder_ids` field in
`yandexcloud_sd_configs` that allows users to specify Yandex Cloud
folder IDs directly, bypassing the organization->cloud->folder hierarchy
traversal.
Previously, the Yandex Cloud service discovery required traversing the
entire resource hierarchy (organizations -> clouds -> folders ->
instances) to discover instances. This works when the Service Account
has permissions at all levels. However, some Service Accounts may only
have permissions at the folder level, causing discovery to fail when it
cannot access organization or cloud resources.
With this change, users can now configure folder IDs directly:
```yaml
yandexcloud_sd_configs:
  - service: compute
    folder_ids:
      - folder-id-1
      - folder-id-2
```
When `folder_ids` is specified, the discovery skips the hierarchy
traversal and directly queries instances from the specified folders.
This is a backward-compatible change - when `folder_ids` is not
specified, the existing behavior is preserved.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10587
The new section is placed in the root directory and is supposed to promote
the following tools:
* MCP servers for Logs, Traces and Metrics
* List of available agentic skills
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
The change is supposed to make the text clearer and to draw
attention to the important things.
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
The new Grafana dashboard uses the following APIs:
- /api/v1/status/tsdb
- /api/v1/status/metric_names_stats
It shows the list of metric names, their request counts and the last time
they were "used". Clicking on a metric name allows exploring its
cardinality.
Based on https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9832
-----------
The PR contains a few unrelated changes:
* rename of folder for prometheus datasource to remove the duplicated
word
* fix for vmalert's access to the datasource, as before it wasn't able
to write/read properly
-------------
The dashboard screencast:
https://github.com/user-attachments/assets/01dda5d9-14e5-4f5a-b795-a838abec4f5e
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Haley Wang <haley@victoriametrics.com>
### Describe Your Changes
When a label was set as the focus label in the Cardinality Explorer, the
"Metric names with the highest number of series" table was hidden. This
change makes it visible alongside the focus label values table.
### How to reproduce
1. Go to Explore → Cardinality Explorer
2. Enter a selector like `{namespace!=""}` and set Focus label to
`namespace`
3. Click Execute Query
**Before:** Only "Values for 'namespace' label..." table is shown
**After:** "Metric names with the highest number of series" table is
also shown
<img width="1512" height="723"
alt="b2a8395a1577b31f58ae00f87e29eb87ca98eabfd0b3c0d9185be8f3a9789b5f"
src="https://github.com/user-attachments/assets/50c7f67a-1cfc-40d0-8e99-7750a933ee45"
/>
Fixes #10630
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Signed-off-by: Roshan1299 <banisettirosh@gmail.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
Bumps [minimatch](https://github.com/isaacs/minimatch). These
dependencies needed to be updated together.
Updates `minimatch` from 3.1.2 to 3.1.5
<details>
<summary>Commits</summary>
<ul>
<li><a
href="7bba97888a"><code>7bba978</code></a>
3.1.5</li>
<li><a
href="bd259425b2"><code>bd25942</code></a>
docs: add warning about ReDoS</li>
<li><a
href="1a9c27c757"><code>1a9c27c</code></a>
fix partial matching of globstar patterns</li>
<li><a
href="1a2e084af5"><code>1a2e084</code></a>
3.1.4</li>
<li><a
href="ae24656237"><code>ae24656</code></a>
update lockfile</li>
<li><a
href="b100374922"><code>b100374</code></a>
limit recursion for **, improve perf considerably</li>
<li><a
href="26ffeaa091"><code>26ffeaa</code></a>
lockfile update</li>
<li><a
href="9eca892a4e"><code>9eca892</code></a>
lock node version to 14</li>
<li><a
href="00c323b188"><code>00c323b</code></a>
3.1.3</li>
<li><a
href="30486b2048"><code>30486b2</code></a>
update CI matrix and actions</li>
<li>Additional commits viewable in <a
href="https://github.com/isaacs/minimatch/compare/v3.1.2...v3.1.5">compare
view</a></li>
</ul>
</details>
<br />
Updates `minimatch` from 9.0.5 to 9.0.9
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Poison varint: MaxUint64 encoded as a varint (0xFFFFFFFFFFFFFFFF).
The bounds check `uint64(nSize)+n` overflows to 9, bypassing the guard.
Then `int(MaxUint64)` = -1 produces `src[10:9]`, which panics.
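An illustrative Go sketch of this class of bug (hypothetical names, not the actual decoding code): adding an attacker-controlled uint64 to an offset can wrap around and defeat a length check:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

func main() {
	// A "poison" varint: MaxUint64 occupies 10 bytes when varint-encoded.
	var buf [binary.MaxVarintLen64]byte
	nSize := binary.PutUvarint(buf[:], math.MaxUint64) // nSize == 10
	src := buf[:nSize]

	n, _ := binary.Uvarint(src) // n == math.MaxUint64

	// Broken bounds check: uint64(nSize)+n wraps around to 9,
	// so it passes the "fits into src" comparison...
	if uint64(nSize)+n <= uint64(len(src)) {
		// ...and int(n) == -1 on 64-bit platforms, so the slice
		// expression below becomes src[10:9] and panics.
		_ = src[nSize : nSize+int(n)]
	}
	fmt.Println("never reached")
}
```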
Previously there was a data race when targetURL was concurrently
updated for the default URL route.
This commit fixes the data race and adds concurrency to the routing tests.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10626
The OAuth2 token source lib doesn't allow defining request headers explicitly.
This commit adds a custom transport to mitigate that. The new transport modifies the http.Request by making a shallow copy of it and setting the additional headers.
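A minimal sketch of such a transport (a hedged example; the actual implementation and header sources differ):

```go
package example

import "net/http"

// headerInjectingTransport adds extra headers to every outgoing request
// without mutating the caller's *http.Request: it works on a shallow copy
// with a cloned header map.
type headerInjectingTransport struct {
	base    http.RoundTripper
	headers map[string]string
}

func (t *headerInjectingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	reqCopy := new(http.Request)
	*reqCopy = *req // shallow copy of the request
	reqCopy.Header = req.Header.Clone()
	for k, v := range t.headers {
		reqCopy.Header.Set(k, v)
	}
	return t.base.RoundTrip(reqCopy)
}
```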
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8939
Previously vm_filestream_write_duration_seconds_total was increased in two places:
* statWriter.Write()
* Writer.MustFlush(), which eventually calls statWriter.Write(), hence double-counting vm_filestream_write_duration_seconds_total
For reference, vm_filestream_read_duration_seconds_total is increased only in statReader.Read to track the read syscall.
This commit removes latency tracking from the MustFlush method.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10564
Replace ambiguous button labels such as "Submit" and "Apply" with
clearer wording to indicate that these actions only preview results and
do not modify the deployment configuration.
Related issue: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10453
Add support for [OpenID Connect
Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html#IANA)
as an alternative way to obtain verification keys and rotate them
automatically.
`jwt` configuration should allow **exactly one** of the following
verification modes: `public_keys`, `oidc`, `skip_verify`. These options
must be mutually exclusive.
Example: OIDC configuration
```yaml
users:
  - jwt:
      oidc:
        issuer: http://identity-provider.com
```
When `oidc` is enabled:
1. On startup, `vmauth` fetches:
```
{issuer}/.well-known/openid-configuration
```
2. Extracts `jwks_uri`.
3. Fetches [JWK
keys](https://openid.net/specs/draft-jones-json-web-key-03.html#ExampleJWK)
from `jwks_uri`.
4. Uses discovered keys to verify JWT tokens.
Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10585
Failure handling:
* If discovery fails at startup:
* No keys are available.
* The user is skipped.
* Discovery runs periodically in background (e.g., every 1 minute).
* If keys become available later, authentication should start working
automatically.
* If keys were previously fetched and the identity provider becomes
unavailable:
* Cached keys must be preserved.
* Authentication continues using cached keys.
#### JWT Requirements in OIDC Mode
When `oidc` is enabled:
* `iss` claim becomes
[mandatory](https://openid.net/specs/openid-connect-core-1_0.html#IDToken).
* `iss` [must
match](https://openid.net/specs/openid-connect-core-1_0.html#RotateEncKeys):
* `oidc.issuer` from config.
* `issuer` returned in the OpenID configuration document.
* JWT header must contain `kid`.
* `kid` must be used to select the appropriate key from JWKS.
* Tokens without `kid` must be rejected.
* Tokens without `iss` must be rejected.
Rationale
* Enables automatic key rotation.
* Eliminates manual public key configuration.
* Maintains compatibility with standard OIDC providers.
---------
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
- VMUI Explore Metrics uses `rate` for histogram bucket queries, which
skips the first observation
in each bucket because `rate` requires two data points to calculate a
per-second rate.
- Replace `rate` with `increase_pure`, which assumes counters start from
0 and correctly shows
the first observation when a new bucket appears.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10365
The test should not fail now on systems with 1 cpu because partition
indexDBs are not rotated. See #8948.
Also removed two TODOs from the test to keep it simple.
Disabling is done by making the handlers for `/tags/tagSeries` and
`/tags/tagMultiSeries` return a `501 (Not Implemented)` status code
along with an error message saying that the API has been disabled and
will be removed in the future.
See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10544.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Now that indexDB is per-partition, the indexDB-related docs need to be
updated. Specifically, how the indexDB is cleaned up once it falls
outside the `-retentionPeriod`.
Follow-up for #8134.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Signed-off-by: Aliaksandr Valialkin <valyala@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
There are following main use cases for `eval_offset`:
1. To ensure rules are evaluated at an exact offset, so the results have
the exact timestamp the user wants.
2. The source data for a certain rule is delivered at a specific time
point, so rules need to be executed after that time point to get correct
results. For example, [chaining
groups](https://docs.victoriametrics.com/victoriametrics/vmalert/#chaining-groups).
3. A group contains some heavy rules that can take a few minutes to
finish. To guarantee a single evaluation can complete in time and not
delay the next run, the user may want to schedule the group to be
executed within [intervalStart, intervalEnd-avgTotalEvaluationDuration].
A negative value can be convenient for case 3, as users only need to set the
group's `eval_offset: -avgTotalEvaluationDuration` (a bigger value than the
real duration is better, to leave some buffer).
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10424
Due to a bug introduced in the initial datadog-sketches API implementation, the `host` label was incorrectly obtained from the `Tags` structure, while it is actually present directly at the root of the protobuf message.
This commit properly attaches the `host` label in this case.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10557
If the data flush to the remote write destination takes longer than the
periodic flush interval (default 2s), the ticker channel will contain a
stale tick, causing the ticker case to be selected too early with an
empty or small amount of data inside `wr`, resulting in a wasted remote
write request with one or two time series (if `ts, ok := <-c.input` was
also randomly selected beforehand).
We could also consider resetting the ticker after draining the stale tick
to ensure `wr` always accumulates data for the full flush interval, but
that seems more trivial to me.
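A hedged sketch of the drain-and-reset idea mentioned above (hypothetical `flushInterval` parameter, not the actual vmagent code):

```go
package example

import "time"

// resetAfterSlowFlush drops a stale tick (if any) that accumulated while a
// flush was in progress, then restarts the interval from "now" so the next
// flush happens a full flushInterval later.
func resetAfterSlowFlush(ticker *time.Ticker, flushInterval time.Duration) {
	select {
	case <-ticker.C:
		// discard the stale tick produced during the previous long flush
	default:
	}
	ticker.Reset(flushInterval)
}
```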
Add a new per-user option to print access logs. Such logs
contain a limited amount of information to prevent exposing
sensitive data.
Access logs can be enabled/disabled via hot reload and can
help locate clients that incorrectly use or abuse vmauth.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5936
Cluster apptests failed from time to time with the following error:
```
timed out while waiting for inserted rows to be sent to vmstorage
cluster
```
due to incorrect calculation of inserted row count before and after
insertion. This PR fixes it by putting the "before" count calculation
before the send() operation.
Previously, the client certificate was only refreshed during the TLS
handshake, which occurs when establishing a new connection. This meant
the remote HTTP server had to close the existing connection for the
client to pick up an updated (e.g. expired) certificate. As a
workaround, connection keep-alive could be disabled, but that
significantly increased request latency.
This commit adds a certificate check during HTTP RoundTrip. If the
client certificate has changed, the RoundTripper recreates the transport
and its connection pool. This behavior is already implemented for CA
certificate changes.
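A simplified sketch of the idea (hashing the certificate file and rebuilding the transport when it changes); this is an illustration, not the actual lib/promauth code:

```go
package example

import (
	"crypto/sha256"
	"crypto/tls"
	"net/http"
	"os"
	"sync"
)

// certReloadingTransport recreates the underlying transport (and thus its
// connection pool) whenever the client certificate file changes on disk,
// so renewed certificates are picked up without waiting for the server to
// close existing keep-alive connections.
type certReloadingTransport struct {
	certFile, keyFile string

	mu       sync.Mutex
	certHash [sha256.Size]byte
	tr       *http.Transport
}

func (t *certReloadingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	data, err := os.ReadFile(t.certFile)
	if err != nil {
		return nil, err
	}
	h := sha256.Sum256(data)

	t.mu.Lock()
	if t.tr == nil || h != t.certHash {
		cert, err := tls.LoadX509KeyPair(t.certFile, t.keyFile)
		if err != nil {
			t.mu.Unlock()
			return nil, err
		}
		if t.tr != nil {
			t.tr.CloseIdleConnections()
		}
		t.tr = &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
		}
		t.certHash = h
	}
	tr := t.tr
	t.mu.Unlock()

	return tr.RoundTrip(req)
}
```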
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10393
Per @valyala's request, rename storage cache methods to adhere the
following format:
```
get[Value]By[Key]FromCache
put[Value]By[Key]ToCache
```
Also move `s.metricIDCache` methods from `indexDB` to `Storage` because
this cache exists at the `Storage` level.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Enable BuildKit-native SPDX SBOM and provenance attestations by setting
`--sbom=true --provenance=true` in `docker buildx build` within
`publish-via-docker`.
- Set `--provenance=true --sbom=true` in `publish-via-docker` for both
Alpine and scratch variants
- Add SBOM section to SECURITY.md with inspection and Trivy scan
instructions
- Update Release-Guide.md
- Add changelog entry
Verified end-to-end: pushed test image to GHCR, confirmed SBOM
attestation via `docker buildx imagetools inspect`, and Trivy scan via
`trivy image --sbom-sources oci` succeeded (with 0 vulnerabilities :-)).
Fixes #10473
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Signed-off-by: John Allberg <john@ayoy.se>
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Samples in Mimir (or Prometheus) are stored in chunks, which are
compressed efficiently using algorithms rather than being stored as
independent samples, see details in [this
article](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/)
and [this talk](https://www.youtube.com/watch?v=b_pEevMAC3I).
When using a small `--remote-read-step-interval`, particularly `minute`,
a single chunk may contain samples that exceed the requested time
window, and all the returned chunks contain overlapping samples.
Consequently, vmctl will read and migrate many duplicate samples into
VictoriaMetrics.
In tests, `--remote-read-step-interval=minute
--remote-read-use-stream=true` with raw sample `scrape_interval: 10s`
and remote read time range of 24h can write ~20x duplication.
But I assume the minute interval is rarely used with a large time range
and duplicates are fine in VictoriaMetrics due to deduplication, so we
don't need to disallow using it.
```
## --remote-read-step-interval=minute --remote-read-use-stream=false
## total samples: **15696611(the real number)**
2026/02/26 22:10:25 VictoriaMetrics importer stats:
idle duration: 50.080851955s;
time spent while importing: 32.108903417s;
total samples: 15696611;
samples/s: 488855.41;
total bytes: 735.8 MB;
bytes/s: 22.9 MB;
import requests: 79;
import requests retries: 0;
2026/02/26 22:10:25 Total time: 32.112912208s
## --remote-read-step-interval=day --remote-read-use-stream=true
## total samples: 15878869
2026/02/26 22:20:37 VictoriaMetrics importer stats:
idle duration: 960.698874ms;
time spent while importing: 6.338309625s;
total samples: 15878869;
samples/s: 2505221.41;
total bytes: 278.6 MB;
bytes/s: 44.0 MB;
import requests: 80;
import requests retries: 0;
2026/02/26 22:20:37 Total time: 6.340023167s
## --remote-read-step-interval=hour --remote-read-use-stream=true
## total samples: 21824000
2026/02/26 22:13:14 VictoriaMetrics importer stats:
idle duration: 5.238827666s;
time spent while importing: 7.274528s;
total samples: 21824000;
samples/s: 3000057.19;
total bytes: 394.4 MB;
bytes/s: 54.2 MB;
import requests: 110;
import requests retries: 0;
2026/02/26 22:13:14 Total time: 7.278895084s
## --remote-read-step-interval=minute --remote-read-use-stream=true
## total samples: **353800724(353800724/15696611~22.5)**
2026/02/26 22:18:41 VictoriaMetrics importer stats:
idle duration: 1m45.09105431s;
time spent while importing: 1m51.716730125s;
total samples: 353800724;
samples/s: 3166944.86;
total bytes: 6.8 GB;
bytes/s: 61.3 MB;
import requests: 1769;
import requests retries: 0;
2026/02/26 22:18:41 Total time: 1m51.721834958s
```
### Describe Your Changes
Move OpenTelemetry-related documentation under docs/integrations and
docs/data-ingestion to establish a clear, scalable structure.
As OpenTelemetry support expands, we need a dedicated place to document
protocol details, implementation specifics, and known limitations, such
as:
- Delta temporality not working with downsampling. See
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10014#issuecomment-3697509266.
- Negative histogram buckets being discarded by VictoriaMetrics. See
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9896.
The new structure separates concerns:
- `docs/integrations/` — protocol overview, implementation details, and
limitations.
- `docs/data-ingestion/` — OpenTelemetry Collector configuration and
ingestion setup.
This aligns OpenTelemetry documentation with the existing structure used
across other integrations and ingestion methods.
New pages and links preserve backward compatibility.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
### Describe Your Changes
- Updated GKE version to a more current 1.34+
- Updated guide to more modern Helm and Kubectl versions
- Tested updated instructions on GKE 1.34.1-gke.3971001 (and a local k3s
instance) successfully
- Removed revision from Grafana values for helm chart (confirmed it
pulls the latest revision)
- Split the helm chart values (`guide-vmcluster-vmagent-values.yaml`)
into more readable chunks and added explanations next to each chunk
- Added and updated expected outputs. Some were missing and others were
outdated
- Updated Grafana dashboards screenshots since they changed from the
last revision
- Updated Grafana repo to use community org (old grafana chart was
deprecated
on Jan 30th -
[source](https://community.grafana.com/t/helm-repository-migration-grafana-community-charts/160983))
- Minor corrections and typo fixes. Improved flow
- Added a section at the end pointing readers where they can go next.
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Vadim Rutkovsky <vadim@vrutkovs.eu>
## Summary
Fix an invalid MetricsQL numeric literal in the vmctl monitoring
documentation.
## Problem
The PromQL/MetricsQL example query for monitoring vm-native migration
data transfer speed used `1Mb` as a divisor:
```promql
rate(vmctl_vm_native_migration_bytes_transferred_total[5m]) / 1Mb
```
However, `Mb` is **not** a valid MetricsQL numeric suffix. According to
the [MetricsQL
documentation](https://docs.victoriametrics.com/victoriametrics/metricsql/#numeric-values):
> Numeric values can have `K`, `Ki`, `M`, `Mi`, `G`, `Gi`, `T` and `Ti`
suffixes.
The suffix `Mb` does not exist — only `M` (mega, 10^6) and `Mi` (mebi,
2^20 = 1,048,576) are valid.
## Fix
Replace `1Mb` with `1Mi` (1 mebibyte = 1,048,576 bytes), which is the
standard binary unit for memory/storage transfer measurements in
computing, and update the comment to reflect `MiB/s` instead of `MB/s`.
## Files Changed
- `docs/victoriametrics/vmctl/vmctl.md`: fixed the invalid literal `1Mb`
→ `1Mi` and updated the comment
---------
Signed-off-by: Maxime Grenu <maxime.grenu@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: Vadim Alekseev <vadimaleksv@gmail.com>
Co-authored-by: Yury Moladau <yurymolodov@gmail.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
* add a meaningful description, it is required for publishing on grafana.com
* remove dependency on `victoriametrics-metrics-datasource` as it is not used
Signed-off-by: hagen1778 <roman@victoriametrics.com>
This commit improves compatibility with PromQL by introducing the missing function `histogram_fraction`.
histogram_fraction is a shortcut for `histogram_share(upperLe, buckets) - histogram_share(lowerLe, buckets)`.
histogram_count, histogram_sum and histogram_avg will not be added to MetricsQL, as they only operate on Prometheus native histograms, which don't have _count and _sum series like the classic histogram or the VictoriaMetrics histogram. For classic histograms, the _count and _sum series can be used directly.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5346.
This commit optimizes the storage of originalLabels. Previously, they
were stored as a clone of the discovered labels, which required many
small allocations and added high pressure on the garbage collector.
Now originalLabels are stored as zstd-compressed JSON ([]byte). Since
they are rarely requested, the overhead of zstd decompression and
json.Unmarshal is negligible.
This optimization reduces memory usage for storing originalLabels by 3x
and CPU usage by 2x.
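A rough sketch of the encode/decode round trip, assuming `github.com/klauspost/compress/zstd` as the compressor (the label type here is a simplified placeholder):

```go
package example

import (
	"encoding/json"

	"github.com/klauspost/compress/zstd"
)

type label struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

var (
	encoder, _ = zstd.NewWriter(nil)
	decoder, _ = zstd.NewReader(nil)
)

// compressLabels stores discovered labels as a single zstd-compressed JSON
// blob instead of many small string allocations.
func compressLabels(labels []label) ([]byte, error) {
	data, err := json.Marshal(labels)
	if err != nil {
		return nil, err
	}
	return encoder.EncodeAll(data, nil), nil
}

// decompressLabels restores the original labels; this path is rare, so the
// decompression and unmarshal overhead is negligible.
func decompressLabels(blob []byte) ([]label, error) {
	data, err := decoder.DecodeAll(blob, nil)
	if err != nil {
		return nil, err
	}
	var labels []label
	err = json.Unmarshal(data, &labels)
	return labels, err
}
```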
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9952
After Go 1.23 it's safe to ignore the timer.Reset return value.
According to the spec:
For a chan-based timer created with NewTimer, as of Go 1.23,
any receive from t.C after Reset has returned is guaranteed not
to receive a time value corresponding to the previous timer
settings;
If the program has not received from t.C already and the timer is
running, Reset is guaranteed to return true.
Before Go 1.23, the only safe way to use Reset was to call [Timer.Stop]
and explicitly drain the timer first.
Go 1.23 changed the timer implementation from asynchronous to synchronous,
which eliminates the case where a channel send and timer.Stop could happen
at the same time.
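A small sketch of the difference (hypothetical helper names, not the actual code):

```go
package example

import "time"

// Before Go 1.23: Reset was only safe after Stop plus an explicit drain.
func resetTimerPreGo123(t *time.Timer, d time.Duration) {
	if !t.Stop() {
		select {
		case <-t.C: // drain a possibly pending value
		default:
		}
	}
	t.Reset(d)
}

// Since Go 1.23: Reset alone is enough; its return value can be ignored and
// no stale value will be received from t.C afterwards.
func resetTimerGo123(t *time.Timer, d time.Duration) {
	t.Reset(d)
}
```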
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9721
# Investigation & Root Cause --- InfluxDB Line Protocol Parsing with Raw
Newline (`\n`)
This document describes the investigation process and root cause
analysis for Influx Line Protocol parsing errors in VictoriaMetrics when
a **raw newline (`\n`) byte appears inside a quoted field value**.
------------------------------------------------------------------------
## Background
According to the Influx Line Protocol specification:
- Each point must be represented as a single line.
- The newline character (`\n`) separates points.
- Literal newline bytes are not allowed inside quoted field values.
Therefore, any raw newline byte (`0x0A`) inside a quoted string makes
the line invalid.
------------------------------------------------------------------------
## Related Issue
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10067
------------------------------------------------------------------------
## Expected Behavior
VictoriaMetrics should reject Influx Line Protocol lines that contain a
raw newline inside a quoted field value, since this violates the
protocol specification.
The parsing failure itself is correct.
------------------------------------------------------------------------
## Actual Behavior
VictoriaMetrics rejects the line with the following error:
cannot parse field value for "...": missing closing quote for quoted
field value
While technically correct, the error message does not clearly indicate
that the root cause is a raw newline inside the quoted field value.
------------------------------------------------------------------------
## Minimal Reproducer
The issue can be reproduced without Telegraf or Jolokia:
``` bash
printf 'test value="hello
world"\n' | curl -X POST http://localhost:8428/write --data-binary @-
```
This produces:
cannot parse field value for "value": missing closing quote for quoted
field value
The failure occurs because the value contains an actual newline byte
(0x0A), not the escaped sequence `\n`.
------------------------------------------------------------------------
## Environment Setup
The issue was reproduced using the following stack:
- VictoriaMetrics v1.127.0
- InfluxDB 1.8
- Spring Boot + Jolokia
- Telegraf 1.36.2
Telegraf collects JVM `SystemProperties`, including:
``` json
"line.separator": "\n"
```
After JSON unmarshalling, this becomes a real newline byte in memory.
Detailed reproduction steps can be found here:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10067#issuecomment-3896175100
------------------------------------------------------------------------
## Observed Serialized Line
Using breakpoint debugging in:
lib/bytesutil/bytebuffer.go:58
The `ReadFrom` function reads and assembles an Influx line containing:
SystemProperties.line.separator="
",
The quoted field contains an actual newline byte before the closing
quote.
This breaks the single-line assumption of Influx Line Protocol.
VictoriaMetrics splits on `\n`, resulting in:
- A truncated first line
- A missing closing quote
- Parsing failure
------------------------------------------------------------------------
## Important Clarification
This issue is **not** caused by the escaped sequence `"\\n"`.
The failure occurs only when the serialized Influx line contains an
actual newline byte (`0x0A`) inside the quoted value.
Escaped `\n` (two characters: `\` and `n`) is valid.
------------------------------------------------------------------------
## Root Cause
- Telegraf serializes a field containing a real newline byte.
- Influx Line Protocol forbids literal newline characters inside
quoted fields.
- VictoriaMetrics correctly treats `\n` as a line separator.
- The parser then encounters an incomplete quoted field and reports
"missing closing quote".
The parsing behavior is correct per specification.
------------------------------------------------------------------------
## Proposed Improvement
The parsing logic should remain unchanged.
However, the error message can be improved to better indicate the root
cause.
Suggested error message:
invalid Influx line protocol: missing closing quote for quoted field
value;
this may be caused by a raw newline (`\n`) inside the quoted field value
This makes the failure immediately actionable and easier to diagnose.
------------------------------------------------------------------------
## Summary
- The failure is caused by a raw newline byte inside a quoted field
value.
- This violates the Influx Line Protocol specification.
- VictoriaMetrics correctly rejects the line.
- The error message should explicitly mention the possibility of a raw
newline (`\n`) inside the quoted field.
Signed-off-by: hklhai <hkhai@outlook.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
Previously, extra filters were ignored for
`/api/v1/label/vm_account_id/values` or
`/api/v1/label/vm_project_id/values` calls. As a result, even if a user's
visibility was limited by applying the
`?extra_filters[]={vm_account_id="1"}` param, they could get the list of
all available tenants in the system.
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit d2a033453e)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* remove ToC in the beginning, as it duplicates right-bar functionality
and is easier to make a mistake with. For example, it didn't have the
ZFS section in it
* simplify wording where it was possible
* reference new tools VM got in recent releases
* re-prioritize tips order based on personal experience
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Components like vmselect and vminsert rarely touch disk, so most of the
time their values are 0. Filtering out 0 values makes the panel cleaner.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
A refactoring that moves the uint64set.Set marshaling and unmarshaling from lib/storage/storage.go to lib/uint64set. Also added function docs and tests.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
It should prevent apptest timeouts due to runner saturation. When
apptests are run together with other tests and linters they do not have enough
CPU to complete in time and often time out.
If one re-runs the apptests shortly after, they are likely to pass
because the same runner has enough resources available (the other jobs
finished).
Remove GOGC=10, as the runner has enough memory (16GB) to run apptests.
I did some tests and observed a drop in overall test duration from 4.5m to
about 3m-3.5m.
The new section is supposed to contain OTel-related information for all
products, like VT, VM, VL.
It is also supposed to be visible to readers right away, without the need to
dig for info in each product.
It contains basic information and is supposed to act as a router to more
detailed info for each product.
While there, also updated the VM-related OTel info.
---------
Depends on
https://github.com/VictoriaMetrics/victoriametrics-datasource/pull/458
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
- Add an introduction with a brief explanation of the operator and its
benefits as an intro
- Make some steps more explicit, instead of just linking to the VM
cluster guide
- Separate config/chart values files from kubectl apply (instead of
using heredoc and in-line yaml)
- Update screenshots and add figcaptions where needed
- Update Kubernetes and tools versions to newer releases
- Remove revision numbers from the Grafana config to install the latest
revision
- Added a section to configure scraping of Kubernetes resources (nodes,
pods, etc.)
- Tested updated instructions on GKE 1.33 and 1.34 (and a local k3s
instance) successfully
- Added and updated expected outputs. Some were missing and others were
outdated
- Updated Grafana dashboards screenshots since they changed from the
last revision
- Minor corrections and typo fixes. Improved flow
- Added a section at the end pointing readers to where they can go next.
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
### Describe Your Changes
- Updated introduction
- Added proper steps
- Tested instructions on the Headlamp desktop version and the in-cluster web
UI
- Added images to guide user
- Mentioned that the test connection button does not work (it probes a
`-healthy` endpoint that is not supported by VM). The plugin still
works, it's just the test button that fails
- Added links to the single and cluster installation guides
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
### Describe Your Changes
- Rewrote the introduction
- Added list of endpoints for single node, cluster, and cloud
- Added tips for working with VictoriaMetrics running on Kubernetes
- Fleshed out explanations for each step
- Added reference links for all required endpoints
- Tested every command
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Commit 610b328e5a introduced a bug in the
date range search logic. If the first searched date for a given tenant
did not match, the search could proceed incorrectly.
This commit fixes the SearchTenants API by correctly advancing the date
passed to table.Seek.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10422
* add example of the produced log, so users could understand the impact;
* stress once again the sensitive data exposure when
dump_request_on_errors is enabled.
This change adds some context to the error returned when all backends failed. From
support cases it seems like without the context users might not know
what to do with this error message. The clarification advises them to check
the previous error messages.
Previously, the regex simplify function made an attempt to parse the string representation of the simplified regex,
and it could produce a runtime panic due to the std lib specification:
```
// Simplify returns a regexp equivalent to re but without counted repetitions
// and with various other simplifications, such as rewriting /(?:a+)+/ to /a+/.
// The resulting regexp will execute correctly but its string representation
// will not produce the same parse tree, because capturing parentheses
// may have been duplicated or removed.
```
This commit ignores the simplified regex parsing error and returns the original regex instead.
As a result, some niche regex patterns may miss the simplification.
But such cases are extremely rare in production, so the tradeoff is acceptable.
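A hedged sketch of the fallback logic (illustrative only, not the exact code in the regex utilities):

```go
package example

import "regexp/syntax"

// simplifyRegex returns a simplified form of expr, falling back to the
// original expression when the simplified string representation cannot be
// re-parsed (which the stdlib documentation explicitly allows to happen).
func simplifyRegex(expr string) string {
	re, err := syntax.Parse(expr, syntax.Perl)
	if err != nil {
		return expr
	}
	simplified := re.Simplify().String()
	if _, err := syntax.Parse(simplified, syntax.Perl); err != nil {
		// The simplified representation does not round-trip; keep the
		// original regex instead of risking a panic later on.
		return expr
	}
	return simplified
}
```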
Fixes victoriaMetrics/victoriaLogs/issues/1112
While server-side copies are always most efficient when the backup origin
and destination are the same, there are times when moving
between backup locations is required.
Right now vmbackup throws an error in these cases.
While it's true that a user could always do a fresh backup from a
snapshot rather than copy an old backup, this requires access to storage
data locations and a running vmstorage instance, something that is not
_generally_ required for otherwise moving backups around in remote
locations using vmbackup.
This is a small change that makes the moving of backups from one
location to another transparent to users, without having to consider if
those locations are the same or different. This both simplifies backup
migrations and unlocks using vmbackup for more complex operations.
Specifically this came up in my use case because we want to orchestrate
the down-scaling of EBS volumes backing our vmstorage cluster, which
requires some complex backup operations, one of which being taking a
backup from s3 to a local filesystem.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10401
Use the same sharded implementation as in metricIDCache. The change is
basically a copy-paste. The only difference is that the rotation period
remains `1h` instead of `1m` in order not to break the fix for #10064.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
This should reduce memory reallocations and fragmentation when reading large request bodies from slow clients.
This also should reduce memory usage a bit because of the reduced memory fragmentation.
Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042
### Describe Your Changes
The job ensures that:
- the draft release with the given `$(TAG)` exists
- the release has the expected `$(GITHUB_ASSETS_COUNT)` number of uploaded
assets
- all the assets were uploaded successfully.
It also adds a helper job `github-get-release`, which finds a draft release
by `$(TAG)` and stores it into the `/tmp/vm-github-release-$(TAG)` file.
The `github-delete-release` job is decoupled from the file produced by
the `github-create-release` job, so it can be run at any time from any
machine.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
### Describe Your Changes
Adds JWT authentication support to vmauth with signature verification
and tenant-based access control. For now, public_keys have to be set
explicitly in the config; OIDC discovery will be added in upcoming PRs.
Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10445
Key Features
- JWT Configuration: Added `jwt_token` field to user config supporting
RSA/ECDSA public keys or skip_verify mode (for testing purposes).
- Token Validation: Verifies JWT signatures, checks expiration, and
extracts vm_access claims
- Compatible with vmgateway: jwt tokens issued for vmgateway should work
with vmauth too.
Examples
```yaml
users:
  - jwt_token:
      public_keys:
        - |
          -----BEGIN PUBLIC KEY-----
          MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA...
          -----END PUBLIC KEY-----
    url_prefix: "http://victoria-metrics:8428/"
```
```yaml
users:
  - jwt_token:
      skip_verify: true
    url_prefix: "http://victoria-metrics:8428/"
```
Constraints
- JWT tokens cannot be mixed with other auth methods (bearer_token,
username, password)
- Requires at least one public key OR skip_verify=true
- Limited to single JWT user (multiple JWT users will be supported in
the future)
Next steps
- Multiple `jwt_token` support.
- Claim matching
- Claim based routing
- OIDC/JWKS support
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Co-authored-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Exploit uint64set data structure peculiarities (adjacent elements are
stored in
64KiB buckets) to optimize metricIDCache memory footprint.
As a result, the cache utilizes 87% less memory and is up to 90%
faster. See
[benchstat.txt](https://github.com/user-attachments/files/25294076/benchstat.txt).
Follow-up for #10388 and #10346.
Thanks to @valyala for the optimization idea.
---------
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Previously, on the last day of a month, storage could report empty
metrics for the last partition. This could happen if a new empty
partition was created in updateNextDayMetricIDs or if time series with
future timestamps were ingested.
This commit adds a check to ensure the last partition belongs to the
current month. Since this is typically the most actively used partition,
it should be treated as the last one.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10387
Ensure proper expansion and reset of `buf` size for OpenTelemetry
ingestion. This pull request does the following:
1. Flush data in `wctx` when `buf` is over 4MiB.
2. Do not return `wctx` to the pool when its `buf` is larger than 4MiB while the
actual in-use length is less than 1MiB.
Previously, when a small number of requests carried a large volume of
time series or labels, `buf` was over-expanded and recycled to the pool,
resulting in an excessive memory usage issue.
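A rough sketch of the pool discipline described above (hypothetical `writeContext` type and thresholds mirroring the description, not the exact code):

```go
package example

import "sync"

const (
	maxRetainedBufCap = 4 * 1024 * 1024 // 4MiB
	minRetainedBufLen = 1 * 1024 * 1024 // 1MiB
)

type writeContext struct {
	buf []byte
}

var wctxPool = sync.Pool{
	New: func() any { return &writeContext{} },
}

// putWriteContext returns wctx to the pool unless its buffer grew far beyond
// what the last request actually used; such over-expanded buffers are dropped
// so they can be garbage-collected instead of pinning memory inside the pool.
func putWriteContext(wctx *writeContext) {
	if cap(wctx.buf) > maxRetainedBufCap && len(wctx.buf) < minRetainedBufLen {
		return // drop the over-expanded, mostly unused buffer
	}
	wctx.buf = wctx.buf[:0]
	wctxPool.Put(wctx)
}
```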
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10378
This is part of the effort to upgrade and validate the [Guides in the
docs](https://docs.victoriametrics.com/guides/).
Doc page:
https://docs.victoriametrics.com/guides/getting-started-with-opentelemetry/
Functionally, nothing should change. Aside from the fix that prevented
one of the example applications to run, the rest of the commands in the
guide should be equivalent to the original.
Header anchor links do not change with this update. I added a few
headers but the existing headers anchors should remain unchanged to
prevent breaking existing links.
- Tested on a more modern version of GKE to validate it still works OK
(1.34.1-gke.3971001)
- Changed wording of some sections to improve flow and readability
- Added some missing steps/troubleshooting
- Add tips annotations for cardinality explorer and setup references to
make them stand apart from the main content
- Use `kubectl port-forward svc/...` instead of `kubectl port-forward
pod` (service selectors vs pod names) in some test commands to make
instructions simpler
- Updated OpenTelemetry version to fix error that prevented
`app.go-collector.example` sample code from running
- Replaced the "Visit these links" part in the second program (with the
fast/slow endpoints) with curl commands
- Updated the first VMUI test link to show a table instead of a graph while
testing OpenTelemetry ingestion (the default graph view can be confusing as
the metric value for `k8s_container_ready` doesn't really show any
values)
- Minor typos, grammar check, and consistency (Kubernetes vs kubernetes,
Helm vs helm, Collector vs collector, etc.)
This change only affects query trace. It correctly uses the branched
query trace in the callback function, so in the trace it is placed under
the right actions branch.
The bug was introduced in
c705da74f6
- Return the check that the size of the scraped response doesn't exceed the maxScrapeSize
at client.ReadData(). Without this check the scraped response may be truncated to maxScrapeSize+1
bytes, which can result in a decompression error. The decompression error in this case
hides the original error about the too big response size. This complicates troubleshooting by users.
- Stop decompressing the scraped response as soon as the decompressed response size exceeds maxScrapeSize.
This protects from excess memory usage needed for holding the decompressed response with sizes exceeding
the maxScrapeSize.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10320
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9481
lrucache causes huge CPU usage in some caches. See #10297.
There was a hypothesis that this was due to a too short TTL in lrucache.
Setting it to 1h (the default workingsetcache eviction period) did not
completely eliminate the problem: the CPU utilization was no longer huge but still high.
See #10416.
Thus reverting back to fix such deployments. This solution is temporary
because the cache consumes at least 32MB. There is one instance per
indexDB, which means that if the retention is 3y then the total memory
utilized by this cache will be over 1GB and most of it will be unused.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Go 1.26 has been recently released and was picked up by CI actions.
The tests and linter actions started to fail with:
GOEXPERIMENT=synctest go vet ./lib/...
go: unknown GOEXPERIMENT synctest
This happens because Go 1.26 removed the synctest experiment.
Changelog:
This package was first available in Go 1.24 under GOEXPERIMENT=synctest,
with a slightly different API. The experiment has now graduated to
general availability. The old API is still present if
GOEXPERIMENT=synctest is set, but will be removed in Go 1.26.
https://go.dev/doc/go1.25#library
### Describe Your Changes
* Add context-aware label autocomplete by applying existing label
matchers (e.g. namespace/job) when fetching labels and label values.
* Update `package.json` dependencies.
* Update `vite.config.ts` to ensure correct API requests in playground
mode (`start:playground`).
Related issue: #9269
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
The expected size of the data ingestion request body accepted by VictoriaMetrics / VictoriaLogs / VictoriaTraces
exceeds 64KiB, and is close to 1MiB. That's why it is better to re-use byte buffers with capacities up to 1MiB,
even if less than 25% of their capacity was used the last time.
This should reduce the number of GC cycles at high data ingestion rates when the request body sizes
are distributed at both sides of the 16KiB ... 64KiB range.
This is a follow-up for 09d2ce36e8
Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042
Previously `vm_deduplicated_samples_total{type="select"}` didn't take identical samples into account.
This commit takes them into account in the same way as the `vm_deduplicated_samples_total{type="merge"}` metric.
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10384.
Before, by mistake, the -remoteWrite.label flag was referenced in one part
of the doc as a per-remoteWrite-URL flag. In fact, -remoteWrite.label is
global and applies labels to all remoteWrite URLs unconditionally.
This commit tries to clarify it in the docs:
* update the life-of-a-sample diagram to change the label applying
logic
* add a hint on how to add a label via `extra_label`
* remove the duplicated description for the -remoteWrite.label flag
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10373
This is the first PR on a proposed series of updates to the guides.
I started with this one because:
It's on the top ten guides according to Google Analytics
It's a good starting point for me to get familiar with VM on Kubernetes
I plan to work through the rest of the guides in the following days
(coordinating the effort with JJ).
Changelog for this guide:
- Updated GKE version to a more current 1.34+
- Updated guide to more modern Helm and Kubectl versions
- Tested updated instructions on GKE 1.34.1-gke.3971001 (and a local k3s
instance) successfully
- Removed revision from Grafana values for helm chart (confirmed it
pulls the latest revision)
- Split the helm chart values into more readable chunks and added
explanations next to each chunk
- Added and updated expected outputs. Some were missing and others were
outdated
- Updated Grafana dashboards screenshots since they changed from the
last revision
- Updated Grafana repo to use community org (old grafana chart was
deprecated
on Jan 30th -
[source](https://community.grafana.com/t/helm-repository-migration-grafana-community-charts/160983))
- Minor corrections and typo fixes
- Added a section at the end pointing readers where they can go next.
### Describe Your Changes
Add a Source Code data link (a link from a bar or line in a graph) that
points directly to a source code file on GitHub. The `VictoriaMetrics -
cluster`, `VictoriaMetrics - single-node`, and `VictoriaMetrics -
vmagent` dashboards were updated. I did not add it to other panels since
they do not have a Drilldown section at all.
Also fixed a misplaced Drilldown link in the `VictoriaMetrics -
single-node` dashboard.
Proxy service code is here
https://github.com/VictoriaMetrics/location2source/
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
### Describe Your Changes
The new title better aligns with the code of
[writeconcurrencylimiter](d9dabea303/lib/writeconcurrencylimiter/concurrencylimiter.go (L140)),
the panel description and the metric used in the query.
Previously, the panel title suggested that it reflected only disk write
performance. During an incident investigation, this led to a wrong
assumption that the panel was unrelated to client-side performance.
In reality, the metric [includes the full write
path](98e320842c/lib/vminsertapi/server.go (L263)):
time spent reading data from the TCP connection, processing it, and
acknowledging the block. The updated title reflects this behavior more
accurately and reduces the risk of misinterpretation during incident
analysis.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Previously the error returned on timeout suggested a memory leak, which
could confuse a user. In reality a timeout could happen if vmselect is
overloaded or the query takes a lot of time to process. The commit
aligns rss.RunParallel with the query deadline set either via the flag
`-search.maxQueryDuration` or the `timeout` query argument. The logged
warn message is adjusted to suggest a resource increase or a timeout
increase.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8484
Signed-off-by: JAYICE <jayice.zhou@qq.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Previously, the Kafka consumer in vmagent committed offsets per message
(manual commit). At high message rates, this could overload the commit
path (coordinator, __consumer_offsets topic, and network).
This commit introduces time-based manual commits with a controlled window:
* enable.auto.commit remains false by default.
* After a successful TryPush (data accepted into the buffer before the
vmagent queue/backend), vmagent adds the message to pending offsets.
* Offsets are committed periodically (every second), as well as during
shutdown and partition rebalance.
This keeps the commit point tied to TryPush (stronger guarantees than
auto-commit) while significantly reducing commit QPS.
Auto-commit is also time-based, but it advances offsets based on poll()
delivery rather than application-level processing. This means offsets
may be committed before data is actually accepted by the vmagent
pipeline, slightly increasing the risk of data loss on crash or restart.
This change does not make the Kafka consumer fully transactional
end-to-end. Buffers in vmagent/vminsert/vmstorage still imply possible
data loss on hard stops. However, it provides stronger guarantees than
auto-commit, since commits are based on TryPush rather than poll().
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10395
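For illustration, a minimal sketch of the time-based commit idea described above; the `tp` key type and the `commit` callback are hypothetical stand-ins, not the actual Kafka client API used by vmagent:

```go
package kafkacommit

import (
	"log"
	"sync"
	"time"
)

// tp identifies a Kafka topic partition. The commit callback stands in for
// the client's offset-commit call.
type tp struct {
	topic     string
	partition int32
}

type offsetTracker struct {
	mu      sync.Mutex
	pending map[tp]int64
	commit  func(map[tp]int64) error
}

func newOffsetTracker(commit func(map[tp]int64) error) *offsetTracker {
	return &offsetTracker{pending: make(map[tp]int64), commit: commit}
}

// onPushed records an offset only after TryPush accepted the message data.
func (t *offsetTracker) onPushed(p tp, offset int64) {
	t.mu.Lock()
	if next := offset + 1; next > t.pending[p] {
		t.pending[p] = next
	}
	t.mu.Unlock()
}

// run commits pending offsets once per second and once more on shutdown.
func (t *offsetTracker) run(stopCh <-chan struct{}) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			t.flush()
		case <-stopCh:
			t.flush()
			return
		}
	}
}

func (t *offsetTracker) flush() {
	t.mu.Lock()
	pending := t.pending
	t.pending = make(map[tp]int64)
	t.mu.Unlock()
	if len(pending) == 0 {
		return
	}
	if err := t.commit(pending); err != nil {
		log.Printf("cannot commit Kafka offsets: %s", err)
	}
}
```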
Keeping such byte slices in the pool may increase memory usage when processing a small share of requests
with much bigger sizes than the average processed request.
This should help reduce memory usage at https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042
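A rough sketch of the general technique, assuming a hypothetical `maxPooledCap` threshold rather than the exact limits used in the repository:

```go
package bufpool

import "sync"

// maxPooledCap is an assumed threshold: buffers that grew beyond it are not
// returned to the pool, so a few huge requests don't pin large allocations.
const maxPooledCap = 256 * 1024

var pool = sync.Pool{
	New: func() any { return make([]byte, 0, 4*1024) },
}

// Get returns an empty buffer with some pre-allocated capacity.
func Get() []byte {
	return pool.Get().([]byte)[:0]
}

// Put returns the buffer to the pool unless it became unusually large.
func Put(b []byte) {
	if cap(b) > maxPooledCap {
		return // let oversized buffers be garbage collected
	}
	pool.Put(b)
}
```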
Respect the default value of http.DefaultTransport.Proxy. Previously,
it could be unintentionally overridden with a nil value.
This commit aligns Proxy configuration across all created transports.
Previously, the default `Proxy` was unconditionally replaced with the config value, which could be nil.
This made it impossible to use the default HTTP client proxy environment variables.
This commit adds a check in the OAuth HTTP client builder that only overwrites the
transport proxy if a custom proxy URL function is defined.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10385
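A minimal sketch of the described check; the `proxyURLFunc` parameter is an illustrative stand-in, and the actual builder in the repository may differ:

```go
package httpclient

import (
	"net/http"
	"net/url"
)

// newTransport clones the default transport and overrides Proxy only when a
// custom proxy function is configured; otherwise http.ProxyFromEnvironment
// (the default) stays in effect, so HTTP_PROXY/HTTPS_PROXY/NO_PROXY keep working.
func newTransport(proxyURLFunc func(*http.Request) (*url.URL, error)) *http.Transport {
	tr := http.DefaultTransport.(*http.Transport).Clone()
	if proxyURLFunc != nil {
		tr.Proxy = proxyURLFunc
	}
	return tr
}
```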
The bug was introduced at 5a587f2006, while porting a change from the cluster branch.
This commit properly reuses the `mms []metricsmetadata.Row` slice. Previously, every WriteMetadata call triggered a
slice allocation.
This shouldn't significantly impact overall performance, so I haven't
included benchmarks.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10392
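The general pattern is to keep the slice between calls and truncate it instead of reallocating; a simplified sketch with illustrative type and field names:

```go
package metadata

// Row is a placeholder for the actual metricsmetadata.Row type.
type Row struct {
	MetricName string
	Help       string
}

type writer struct {
	mms []Row // reused across WriteMetadata calls
}

func (w *writer) WriteMetadata(rows []Row) {
	// Truncate to zero length so the backing array is reused instead of
	// allocating a fresh slice on every call.
	w.mms = w.mms[:0]
	w.mms = append(w.mms, rows...)
	// ... process w.mms ...
}
```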
This should reduce CPU utilization while still eliminating the storage
connection saturation.
Follow-up for 6bc809813b (#10346)
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
This is needed in order to allow using lib/pushmetrics for vmctl as it does not use go native flags.
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
app/vmctl: add metrics for the migrations
- add flags to allow setting up metrics push
- add metrics to track progress of the migration for all modes
- add metrics for generic backoff and limiter packages
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
The current implementation has a bottleneck – a single mutex to access
`prev`/`next` metric sets. Each rotation results in storage utilization
spikes since lock-free `curr` is almost empty, and cache needs to
promote metrics from `prev` to `next`.
This is an attempt to reduce contention by splitting the cache into separate
shards.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10367
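A schematic example of the sharding approach; the shard count and hash function are illustrative, not the values used in the actual cache:

```go
package cache

import (
	"hash/fnv"
	"sync"
)

const shardCount = 128 // assumed power of two

type shard struct {
	mu sync.Mutex
	m  map[string]uint64
}

type shardedCache struct {
	shards [shardCount]shard
}

func newShardedCache() *shardedCache {
	c := &shardedCache{}
	for i := range c.shards {
		c.shards[i].m = make(map[string]uint64)
	}
	return c
}

// shardFor picks the shard by hashing the key, so concurrent callers with
// different keys usually contend on different mutexes.
func (c *shardedCache) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return &c.shards[h.Sum32()%shardCount]
}

func (c *shardedCache) Set(key string, v uint64) {
	s := c.shardFor(key)
	s.mu.Lock()
	s.m[key] = v
	s.mu.Unlock()
}

func (c *shardedCache) Get(key string) (uint64, bool) {
	s := c.shardFor(key)
	s.mu.Lock()
	v, ok := s.m[key]
	s.mu.Unlock()
	return v, ok
}
```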
This commit adds a new flag `disableScheduledBackups` for `vmbackupmanager`, which disables any scheduled backups. It can be useful for keeping vmbackupmanager running and serving API calls only.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10364
The new `ALERTS_FOR_STATE` series may be retrieved during restore when:
1. a group contains multiple heavy rules, alerting rule A may have
already been executed and its state metrics successfully uploaded to the
datasource by the time all rules within the group have finished
executing;
2. the datasource makes data queryable very quickly, for instance, when
users configure a small value for `-search.latencyOffset`.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10335
Previously, running vmalert with an empty notifier.url did not produce an error and resulted in a vmalert instance that would never send a notification successfully.
This commit properly validates empty notifier.url values.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10355
After changing the scrape interval from a smaller value (e.g., 30s) to a larger value (e.g., 60s), the changes() function starts to yield non-zero values even when the underlying values have not changed.
This commit keeps unchanged series values when a large gap occurs between samples or when the scrape interval decreases.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10280
Commit 83da33d8cf introduced a check to
detect directories partially removed via IsPartiallyRemovedDir.
However, the check was performed using the full path, while de.Name()
returns only the current entry name (without the path). As a result, the
check always succeeded and the function did not behave as intended.
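The fix boils down to building the full path from the parent directory and `de.Name()` before running the check; a hedged sketch with a stand-in for the real `IsPartiallyRemovedDir`:

```go
package fsutil

import (
	"os"
	"path/filepath"
)

// isPartiallyRemovedDir is a placeholder for the actual check in the repository.
func isPartiallyRemovedDir(path string) bool { return false }

func listUsableDirs(dir string) ([]string, error) {
	des, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var dirs []string
	for _, de := range des {
		if !de.IsDir() {
			continue
		}
		// de.Name() is only the entry name; build the full path before the check.
		fullPath := filepath.Join(dir, de.Name())
		if isPartiallyRemovedDir(fullPath) {
			continue
		}
		dirs = append(dirs, fullPath)
	}
	return dirs, nil
}
```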
This flag allows disabling the mincore() syscall introduced in
50fc48ac47. On older ZFS filesystems,
mincore() may trigger a bug related to ZFS's own in-memory cache. Mixing
reads from mmap()ed files and direct disk reads can corrupt the ZFS ARC
cache and lead to data read corruption.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10327
Commit f62893c151 added an attempt to fix
stats for `tagFiltersCache`, `metricIDCache`, and `dateMetricIDCache`.
Instead of aggregated stats, it returned the largest cache stats by
cache size.
This resulted in possible counter decreases for counter metric types. It
made aggregated metrics less usable.
This commit changes cache stats aggregation by metric type:
* size-related gauge metrics are returned based on max cache size usage
* metric counters are reported as a sum of all counters
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10275
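A simplified sketch of the aggregation rule; the struct fields are illustrative, not the actual cache stats type:

```go
package stats

// cacheStats is an illustrative subset of per-cache stats.
type cacheStats struct {
	EntriesCount uint64 // gauge
	SizeBytes    uint64 // gauge
	Requests     uint64 // counter
	Misses       uint64 // counter
}

// aggregate takes the maximum for gauge-like fields and sums the counters,
// so counter-typed metrics never decrease after aggregation.
func aggregate(all []cacheStats) cacheStats {
	var out cacheStats
	for _, s := range all {
		if s.SizeBytes > out.SizeBytes {
			out.SizeBytes = s.SizeBytes
		}
		if s.EntriesCount > out.EntriesCount {
			out.EntriesCount = s.EntriesCount
		}
		out.Requests += s.Requests
		out.Misses += s.Misses
	}
	return out
}
```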
### Describe Your Changes
In the Cardinality Explorer, when filtering, a "Percentage from total"
stat appears. This stat is documented as "the share of these series in
the total number of time series".
This works on pages for individual metrics. However, when using a filter
that returns *multiple* metrics, the value of "Percentage from total"
will only account for the size of the *first* metric. One can have a
filter that returns, say, 10k time series (out of, say, 100k in the VM
cluster), and if the first metric returned has 1k time series, then
"Percentage from total" will show 1%, not 10%.
This PR fixes that calculation.
Credits to @PleasingFungus for the original fix (PR #10288).
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: PleasingFungus <PleasingFungus@users.noreply.github.com>
Commit e35a9a366c changed the order of wg.Add calls in the Graphite transform package. Previously, all wg.Add calls were made upfront, but after that change it became possible for wg.Wait to exit earlier than expected.
This commit fixes the issue by spawning all background goroutines first and starting the goroutine that calls wg.Wait afterward.
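A generic illustration of the safe ordering (not the actual Graphite transform code): register all workers with wg.Add before any goroutine may call wg.Wait.

```go
package worker

import "sync"

// runAll spawns all workers before the goroutine that waits on them, so
// wg.Wait cannot observe a counter that temporarily dropped to zero while
// later wg.Add calls were still pending.
func runAll(tasks []func(), done chan<- struct{}) {
	var wg sync.WaitGroup
	wg.Add(len(tasks)) // register every worker up front
	for _, task := range tasks {
		go func(f func()) {
			defer wg.Done()
			f()
		}(task)
	}
	// Only now start the waiter.
	go func() {
		wg.Wait()
		close(done)
	}()
}
```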
Previously, vmagent had remoteWrite.queues as a global setting that was applied to every persistent queue. However, it can be useful to specify remoteWrite.queues per remoteWrite.url.
Since each remote write target might have a different workload (latency, throughput, and availability), tuning becomes more flexible if remoteWrite.queues can be set separately for a specific target.
This commit makes `-remoteWrite.queues` configurable per remoteWrite.url.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10270
The initial implementation of searchTenantsOnDate used an index scan for the given prefix (index prefix + tenant + date).
It did not check whether the date prefix was actually outside the current date.
This commit adds the missing date check and makes the tenant search results accurate.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10295
These metrics must be incremented when the request couldn't be processed because of the configured per-user concurrency limit.
The commit 76176ac1d3 moved the counter increase to the place where the current request
is put into the wait queue because the concurrency limit is reached. This is incorrect, since such requests
can still be successfully processed during -maxQueueDuration. This also contradicts the docs at https://docs.victoriametrics.com/victoriametrics/vmauth/#concurrency-limiting
There is little practical sense in counting the number of times the concurrency limit is reached
when the request is still successfully processed within -maxQueueDuration after that.
Add missing alerting rule for rejected unauthorized requests because of the concurrency limit.
Add missing grouping by instance for per-user counter of rejected queries because of the concurrency limit.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
This simplifies troubleshooting by investigating the vm_log_messages_total metric
when logs are unavailable. The logs may be unavailable when the -loggerLevel command-line
flag is set to a value other than INFO. The logs may also be unavailable when clients
use Monitoring of Monitoring service ( https://victoriametrics.com/products/mom/ ),
which provides metrics, but doesn't provide logs from VictoriaMetrics components
running at the client side.
Add `is_printed` label to the `vm_log_messages_total` metric in order to detect whether
the given log has been suppressed or printed.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10304
While at it, make the description for the TooManyLogs alert, which is based on
the vm_log_messages_total metric, more readable.
Also return the `level!="info"` filter instead of `level="error"` in the query
for this alerting rule, in order to be consistent with the queries
at the official dashboards for VictoriaMetrics components.
TODO: investigate too high warnings rate at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2760
and fix it at the source of these warnings instead of modifying the query
for the TooManyLogs alert.
Add fallback to legacy indexDB for stats search
After introducing the new partition index
(f97f627f79), storage stopped returning
stats for date ranges outside the partition index. This made the
migration backward incompatible, as there was no way to retrieve stats
for dates prior to the migration.
This change adds a fallback to the legacy indexDB search when the status
search on the current partition index returns zero series.
This is an imperfect solution: due to tag filters, the TSDB status
search may legitimately return empty results. However, the additional
overhead is small and acceptable.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10315
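Schematically, the fallback looks like this; the function signatures and result type are placeholders, not the actual storage API:

```go
package storage

import "context"

// tsdbStatus is a placeholder for the actual stats result type.
type tsdbStatus struct {
	TotalSeries uint64
}

// getTSDBStatus queries the partition index first and falls back to the
// legacy indexDB when the partition index returns zero series.
func getTSDBStatus(ctx context.Context, searchPartitionIndex, searchLegacyIndexDB func(context.Context) (*tsdbStatus, error)) (*tsdbStatus, error) {
	status, err := searchPartitionIndex(ctx)
	if err != nil {
		return nil, err
	}
	if status.TotalSeries > 0 {
		return status, nil
	}
	// Zero series may mean the requested date range predates the partition
	// index migration, so retry against the legacy indexDB.
	return searchLegacyIndexDB(ctx)
}
```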
vmanomaly has been moved to a separate repository. This means that the functionality related to vmanomaly is no longer needed in app/vmui located in the VictoriaMetrics repository.
This commit removes all the functionality and unnecessary abstractions related to vmanomaly from app/vmui. This should help improve long-term maintenance of the code.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9755
VictoriaMetrics is a fast, cost-saving, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
## Folder Structure
- `/app`: Contains the compilable binaries.
- `/lib`: Contains reusable Go libraries.
- `/docs/victoriametrics`: Contains documentation for the project.
- `/apptest/tests`: Contains integration tests.
## Libraries and Frameworks
- Backend: Golang, no framework. Use third-party libraries sparingly.
- Frontend: React.
## Code review guidelines
Ensure the feature or bugfix includes a changelog entry in /docs/victoriametrics/changelog/CHANGELOG.md.
Verify the entry is under the ## tip section and matches the structure and style of existing entries.
Chore-only changes may be omitted from the changelog.
Please provide a brief description of the changes you made. Be as specific as possible to help others understand the purpose and impact of your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Before creating the PR, make sure you have read and followed the [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
VictoriaMetrics is a fast, cost-saving, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
VictoriaMetrics is a fast, cost-effective, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
Here are some resources and information about VictoriaMetrics:
- Deployment types: [Single-node version](https://docs.victoriametrics.com/), [Cluster version](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/), and [Enterprise version](https://docs.victoriametrics.com/victoriametrics/enterprise/)
- Changelog: [CHANGELOG](https://docs.victoriametrics.com/victoriametrics/changelog/), and [How to upgrade](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-upgrade-victoriametrics)
- **Community**: [Slack](https://slack.victoriametrics.com/) (join via [Slack Inviter](https://slack.victoriametrics.com/)), [X (Twitter)](https://x.com/VictoriaMetrics), [YouTube](https://www.youtube.com/@VictoriaMetrics). See full list [here](https://docs.victoriametrics.com/victoriametrics/#community-and-contributions).
- **Changelog**: Project evolves fast - check the [CHANGELOG](https://docs.victoriametrics.com/victoriametrics/changelog/), and [How to upgrade](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-upgrade-victoriametrics).
- **Enterprise support:** [Contact us](mailto:info@victoriametrics.com) for commercial support with additional [enterprise features](https://docs.victoriametrics.com/victoriametrics/enterprise/).
- **Enterprise releases:** Enterprise and [long-term support releases (LTS)](https://docs.victoriametrics.com/victoriametrics/lts-releases/) are publicly available and can be evaluated for free
using a [free trial license](https://victoriametrics.com/products/enterprise/trial/).
- **Security:** we achieved [security certifications](https://victoriametrics.com/security/) for Database Software Development and Software-Based Monitoring Services.
Yes, we open-source both the single-node VictoriaMetrics and the cluster version.
You can find out about our security policy and VictoriaMetrics version support on the [security page](https://docs.victoriametrics.com/victoriametrics/#security) in the documentation.
The following versions of VictoriaMetrics receive regular security fixes:
maxLabelsPerTimeseries = flag.Int("maxLabelsPerTimeseries", 0, "The maximum number of labels per time series to be accepted. Series with superfluous labels are ignored. In this case the vm_rows_ignored_total{reason=\"too_many_labels\"} metric at /metrics page is incremented")
maxLabelNameLen = flag.Int("maxLabelNameLen", 0, "The maximum length of label names in the accepted time series. Series with longer label name are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_name\"} metric at /metrics page is incremented")
maxLabelValueLen = flag.Int("maxLabelValueLen", 0, "The maximum length of label values in the accepted time series. Series with longer label value are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_value\"} metric at /metrics page is incremented")
enableMultitenancyViaHeaders = flag.Bool("enableMultitenancyViaHeaders", false, "Enables multitenancy via HTTP headers. "+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#multitenancy")
fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/victoriametrics/vmagent/'>https://docs.victoriametrics.com/victoriametrics/vmagent/</a></br>")
"Multiple headers must be delimited by '^^': -remoteWrite.headers='header1:value1^^header2:value2'")
basicAuthUsername = flagutil.NewArrayString("remoteWrite.basicAuth.username", "Optional basic auth username to use for the corresponding -remoteWrite.url")
basicAuthUsernameFile = flagutil.NewArrayString("remoteWrite.basicAuth.usernameFile", "Optional path to basic auth username to use for the corresponding -remoteWrite.url. "+
"The file is re-read every second")
basicAuthPassword = flagutil.NewArrayString("remoteWrite.basicAuth.password", "Optional basic auth password to use for the corresponding -remoteWrite.url")
basicAuthPasswordFile = flagutil.NewArrayString("remoteWrite.basicAuth.passwordFile", "Optional path to basic auth password to use for the corresponding -remoteWrite.url. "+
unparsedLabelsGlobal = flagutil.NewArrayString("remoteWrite.label", "Optional label in the form 'name=value' to add to all the metrics before sending them to -remoteWrite.url."+
"Pass multiple -remoteWrite.label flags in order to add multiple labels to metrics before sending them to remote storage")
unparsedLabelsGlobal = flagutil.NewArrayString("remoteWrite.label", "Optional label in the form 'name=value' to add to all the metrics before sending them to all -remoteWrite.url.")
relabelConfigPathGlobal = flag.String("remoteWrite.relabelConfig", "", "Optional path to file with relabeling configs, which are applied "+
"to all the metrics before sending them to -remoteWrite.url. See also -remoteWrite.urlRelabelConfig. "+
"The path can point either to local file or to http url. "+
"See also -remoteWrite.maxDiskUsagePerURL and -remoteWrite.disableOnDiskQueue")
keepDanglingQueues = flag.Bool("remoteWrite.keepDanglingQueues", false, "Keep persistent queues contents at -remoteWrite.tmpDataPath in case there are no matching -remoteWrite.url. "+
"Useful when -remoteWrite.url is changed temporarily and persistent queue files will be needed later on.")
queues = flag.Int("remoteWrite.queues", cgroup.AvailableCPUs()*2, "The number of concurrent queues to each -remoteWrite.url. Set more queues if default number of queues "+
queues = flagutil.NewArrayInt("remoteWrite.queues", cgroup.AvailableCPUs()*2, "The number of concurrent queues to each -remoteWrite.url. Set more queues if default number of queues "+
"isn't enough for sending high volume of collected data to remote storage. "+
"Default value depends on the number of available CPU cores. It should work fine in most cases since it minimizes resource usage")
showRemoteWriteURL = flag.Bool("remoteWrite.showURL", false, "Whether to show -remoteWrite.url in the exported metrics. "+
@@ -80,10 +84,14 @@ var (
`This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. `+
`For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}`+
`Enabled sorting for labels can slow down ingestion performance a bit`)
maxHourlySeries = flag.Int("remoteWrite.maxHourlySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxDailySeries = flag.Int("remoteWrite.maxDailySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxHourlySeries = flag.Int64("remoteWrite.maxHourlySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. "+
fmt.Sprintf("Setting this flag to '-1' sets limit to maximum possible value (%d) which is useful in order to enable series tracking without enforcing limits. ",math.MaxInt32)+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxDailySeries = flag.Int64("remoteWrite.maxDailySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. "+
fmt.Sprintf("Setting this flag to '-1' sets limit to maximum possible value (%d) which is useful in order to enable series tracking without enforcing limits. ",math.MaxInt32)+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxIngestionRate = flag.Int("maxIngestionRate", 0, "The maximum number of samples vmagent can receive per second. Data ingestion is paused when the limit is exceeded. "+
"By default there are no limits on samples ingestion rate. See also -remoteWrite.rateLimit")
@@ -92,6 +100,8 @@ var (
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence . See also -remoteWrite.dropSamplesOnOverload")
dropSamplesOnOverload = flag.Bool("remoteWrite.dropSamplesOnOverload", false, "Whether to drop samples when -remoteWrite.disableOnDiskQueue is set and if the samples "+
"cannot be pushed into the configured -remoteWrite.url systems in a timely manner. See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence")
disableMetadataPerURL = flagutil.NewArrayBool("remoteWrite.disableMetadata", "Whether to disable sending metadata to the corresponding -remoteWrite.url. "+
"By default, metadata sending is controlled by the global -enableMetadata flag")
)
var (
@@ -157,8 +167,8 @@ func Init() {
if len(*remoteWriteURLs) == 0 {
logger.Fatalf("at least one `-remoteWrite.url` command-line flag must be set")
returnfmt.Errorf("interval shouldn't be lower than 0")
}
ifg.EvalOffset.Duration()<0{
returnfmt.Errorf("eval_offset shouldn't be lower than 0")
}
// if `eval_offset` is set, interval won't use global evaluationInterval flag and must bigger than offset.
ifg.EvalOffset.Duration()>g.Interval.Duration(){
returnfmt.Errorf("eval_offset should be smaller than interval; now eval_offset: %v, interval: %v",g.EvalOffset.Duration(),g.Interval.Duration())
// if `eval_offset` is set, the group interval must be specified explicitly(instead of inherited from global evaluationInterval flag) and must bigger than offset.
returnfmt.Errorf("the abs value of eval_offset should be smaller than interval; now eval_offset: %v, interval: %v",g.EvalOffset.Duration(),g.Interval.Duration())
}
ifg.EvalOffset!=nil&&g.EvalDelay!=nil{
returnfmt.Errorf("eval_offset cannot be used with eval_delay")
@@ -56,7 +56,7 @@ absolute path to all .tpl files in root.
-rule.templates="dir/**/*.tpl". Includes all the .tpl files in "dir" subfolders recursively.
`)
configCheckInterval = flag.Duration("configCheckInterval", 0, "Interval for checking for changes in '-rule' or '-notifier.config' files. "+
configCheckInterval = flag.Duration("configCheckInterval", 0, "Interval for checking for changes in '-rule', '-rule.templates' and '-notifier.config' files. "+
"By default, the checking is disabled. Send SIGHUP signal in order to force config check for changes.")
httpListenAddrs = flagutil.NewArrayString("httpListenAddr", "Address to listen for incoming http requests. See also -tls and -httpListenAddr.useProxyProtocol")
@@ -81,9 +81,7 @@ absolute path to all .tpl files in root.
dryRun = flag.Bool("dryRun", false, "Whether to check only config files without running vmalert. The rules file are validated. The -rule flag must be specified.")
)
var (
extURL *url.URL
)
var extURL *url.URL
func main() {
// Write flags and help message to stdout, since it is easier to grep or pipe.
// sentRows and sentBytes are historical counters that can now be replaced by flushedRows and flushedBytes histograms. They may be deprecated in the future after the new histograms have been adopted for some time.
logger.Infof("received unsupported media type or bad request from remote storage at %q. Re-packing the block to Prometheus remote write and retrying."+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol",req.URL.Redacted())
zstdBlockLen := len(data)
data, err = repackBlockFromZstdToSnappy(data)
if err == nil {
logger.Infof("received unsupported media type or bad request from remote storage at %q. Downgrading protocol from VictoriaMetrics to Prometheus remote write for all future requests. "+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol",req.URL.Redacted())
c.isVMRemoteWrite.Store(false)
return c.send(ctx, data)
}
logger.Warnf("failed to repack zstd block (%d bytes) to snappy: %s; The block will be rejected. "+
"Possible cause: ungraceful shutdown leading to persisted queue corruption.",
zstdBlockLen, err)
}
}
if resp.StatusCode != http.StatusTooManyRequests {
// MUST NOT retry write requests on HTTP 4xx responses other than 429
return &nonRetriableError{
@@ -354,3 +438,19 @@ type nonRetriableError struct {
func (e *nonRetriableError) Error() string {
return e.err.Error()
}
var (
writeRequestBufPool bytesutil.ByteBufferPool
compressBufPool bytesutil.ByteBufferPool
)
// repackBlockFromZstdToSnappy repacks the given zstd-compressed block to snappy-compressed block.
// On k conflicts in origin set, the original value is preferred and copied
// to processed with `exported_%k` key. The copy happens only if passed v isn't equal to origin[k] value.
func (ls *labelSet) add(k, v string) {
// do not add label with empty value, since it has no meaning.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984
// do not add label with empty value to the result, as it has no meaning:
// if the label already exists in the original query result, remove it to preserve compatibility with relabeling, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10766.
// otherwise, ignore the label, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984.
ifv==""{
delete(ls.processed,k)
return
}
ls.processed[k]=v
@@ -818,7 +820,9 @@ func (ar *AlertingRule) restore(ctx context.Context, q datasource.Querier, ts ti
"invalid_label":`error evaluating template: template: :1:298: executing "" at <.Values.mustRuntimeFail>: can't evaluate field Values in type notifier.tplData`,
ruleUpdateEntriesLimit = flag.Int("rule.updateEntriesLimit", 20, "Defines the max number of rule's state updates stored in-memory. "+
"Rule's updates are available on rule's Details page and are used for debugging purposes. The number of stored updates can be overridden per rule via update_entries_limit param.")
resendDelay = flag.Duration("rule.resendDelay", 0, "MiniMum amount of time to wait before resending an alert to notifier.")
maxResolveDuration = flag.Duration("rule.maxResolveDuration", 0, "Limits the maxiMum duration for automatic alert expiration, "+
resendDelay = flag.Duration("rule.resendDelay", 0, "Minimum amount of time to wait before resending an alert to notifier.")
maxResolveDuration = flag.Duration("rule.maxResolveDuration", 0, "Limits the maximum duration for automatic alert expiration, "+
"which by default is 4 times evaluationInterval of the parent group")
evalDelay = flag.Duration("rule.evalDelay", 30*time.Second, "Adjustment of the 'time' parameter for rule evaluation requests to compensate intentional data delay from the datasource. "+
"Normally, should be equal to '-search.latencyOffset' (cmd-line flag configured for VictoriaMetrics single-node or vmselect). "+
@@ -40,6 +43,9 @@ var (
"For example, if lookback=1h then range from now() to now()-1h will be scanned.")
maxStartDelay = flag.Duration("group.maxStartDelay", 5*time.Minute, "Defines the max delay before starting the group evaluation. Group's start is artificially delayed for random duration on interval"+
" [0..min(--group.maxStartDelay, group.interval)]. This helps smoothing out the load on the configured datasource, so evaluations aren't executed too close to each other.")
ruleStripFilePath = flag.Bool("rule.stripFilePath", false, "Whether to strip rule file paths in logs and all API responses, including /metrics. "+
"For example, file path '/path/to/tenant_id/rules.yml' will be stripped to 'groupHashID/rules.yml'. "+
"This flag may be useful for hiding sensitive information in file paths, such as S3 bucket details.")
// do not add label with empty value, since it has no meaning.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984
// do not add label with empty value to the result, as it has no meaning:
// if the label already exists in the original query result, remove it to preserve compatibility with relabeling, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10766.
// otherwise, ignore the label, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984.
{% if g.Unhealthy > 0 %}<span class="badge bg-danger" title="Number of rules with status Error">{%d g.Unhealthy %}</span> {% endif %}
{% if g.NoMatch > 0 %}<span class="badge bg-warning" title="Number of rules with status NoMatch">{%d g.NoMatch %}</span> {% endif %}
<span class="badge bg-success" title="Number of rules with status Ok">{%d g.Healthy %}</span>
{% if g.States["unhealthy"] > 0 %}<span class="badge bg-danger" title="Number of rules with status Error">{%d g.States["unhealthy"] %}</span> {% endif %}
{% if g.States["nomatch"] > 0 %}<span class="badge bg-warning" title="Number of rules with status NoMatch">{%d g.States["nomatch"] %}</span> {% endif %}
<span class="badge bg-success" title="Number of rules with status Ok">{%d g.States["ok"] %}</span>
returnnil,nil,fmt.Errorf("incorrect match claim, key=%q, value regex=%q: %w",ck,cv,err)
}
parsedClaims=append(parsedClaims,pc)
}
ui.JWT.parsedMatchClaims=parsedClaims
sort.Strings(sortedClaims)
claimsString=strings.Join(sortedClaims,",")
ifoldUI,ok:=uniqClaims[claimsString];ok{
returnnil,nil,fmt.Errorf("duplicate match claims=%q found for name=%q at idx=%d; the previous one is set for name=%q",claimsString,ui.Name,idx,oldUI.Name)
useProxyProtocol = flagutil.NewArrayBool("httpListenAddr.useProxyProtocol", "Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . "+
"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . "+
"With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing")
maxIdleConnsPerBackend = flag.Int("maxIdleConnsPerBackend", 100, "The maximum number of idle connections vmauth can open per each backend host. "+
"See also -maxConcurrentRequests")
idleConnTimeout = flag.Duration("idleConnTimeout", 50*time.Second, "The timeout for HTTP keep-alive connections to backend services. "+
maxIdleConnsPerBackend = flag.Int("maxIdleConnsPerBackend", 100, "The maximum number of idle connections vmauth can open per each backend host")
idleConnTimeout = flag.Duration("idleConnTimeout", 50*time.Second, "The timeout for HTTP keep-alive connections to backend services. "+
"It is recommended setting this value to values smaller than -http.idleConnTimeout set at backend services")
responseTimeout = flag.Duration("responseTimeout", 5*time.Minute, "The timeout for receiving a response from backend")
maxConcurrentRequests = flag.Int("maxConcurrentRequests", 1000, "The maximum number of concurrent requests vmauth can process. Other requests are rejected with "+
"'429 Too Many Requests' http status code. See also -maxQueueDuration, -maxConcurrentPerUserRequests and -maxIdleConnsPerBackend command-line options")
maxConcurrentPerUserRequests = flag.Int("maxConcurrentPerUserRequests", 300, "The maximum number of concurrent requests vmauth can process per each configured user. "+
"Other requests are rejected with '429 Too Many Requests' http status code. See also -maxQueueDuration and -maxConcurrentRequests command-line options "+
"and max_concurrent_requests option in per-user config")
maxQueueDuration = flag.Duration("maxQueueDuration", 10*time.Second, "The maximum duration the request waits for execution when the number of concurrently executed "+
"requests reach -maxConcurrentRequests or -maxConcurrentPerUserRequests before returning '429 Too Many Requests' error. "+
"This allows graceful handling of short spikes in the number of concurrent requests")
requestBufferSize = flagutil.NewBytes("requestBufferSize", 32*1024, "The size of the buffer for reading the request body before proxying the request to backends. "+
"This allows reducing the consumption of backend resources when processing requests from clients connected via slow networks. "+
"Set to 0 to disable request buffering. See https://docs.victoriametrics.com/victoriametrics/vmauth/#request-body-buffering")
maxRequestBodySizeToRetry = flagutil.NewBytes("maxRequestBodySizeToRetry", 16*1024, "The maximum request body size to buffer in memory for potential retries at other backends. "+
"Request bodies larger than this size cannot be retried if the backend fails. Zero or negative value disables retries. "+
"See also -requestBufferSize")
maxConcurrentRequests = flag.Int("maxConcurrentRequests", 1000, "The maximum number of concurrent requests vmauth can process simultaneously. "+
"Requests exceeding this limit are queued for up to -maxQueueDuration and then rejected with '429 Too Many Requests' http status code if the limit is still reached. "+
"This protects vmauth itself from overloading and out-of-memory (OOM) failures. See also -maxConcurrentPerUserRequests "+
maxConcurrentPerUserRequests = flag.Int("maxConcurrentPerUserRequests", 100, "The maximum number of concurrent requests vmauth can process per each configured user. "+
"Requests exceeding this limit are queued for up to -maxQueueDuration and then rejected with '429 Too Many Requests' http status code if the limit is still reached. "+
"This provides fairness and isolation between users, preventing a single user from consuming all the available resources. "+
"It works in conjunction with -maxConcurrentRequests, which sets the global limit across all users. "+
"This default can be overridden for individual users via max_concurrent_requests option in per-user config. "+
"See https://docs.victoriametrics.com/victoriametrics/vmauth/#concurrency-limiting")
maxQueueDuration = flag.Duration("maxQueueDuration", 10*time.Second, "The maximum duration to wait before rejecting incoming requests if concurrency limit "+
"specified via -maxConcurrentRequests or -maxConcurrentPerUserRequests command-line flags is reached. "+
"Requests are rejected with '429 Too Many Requests' http status code if the limit is still reached after the -maxQueueDuration duration. "+
"This allows graceful handling of short spikes in concurrent requests. See https://docs.victoriametrics.com/victoriametrics/vmauth/#concurrency-limiting")
reloadAuthKey = flagutil.NewPassword("reloadAuthKey", "Auth key for /-/reload http endpoint. It must be passed via authKey query arg. It overrides -httpAuth.*")
logInvalidAuthTokens = flag.Bool("logInvalidAuthTokens", false, "Whether to log requests with invalid auth tokens. "+
`Such requests are always counted at vmauth_http_request_errors_total{reason="invalid_auth_token"} metric, which is exposed at /metrics page`)
failTimeout = flag.Duration("failTimeout", 3*time.Second, "Sets a delay period for load balancing to skip a malfunctioning backend")
maxRequestBodySizeToRetry = flagutil.NewBytes("maxRequestBodySizeToRetry", 16*1024, "The maximum request body size, which can be cached and re-tried at other backends. "+
"Bigger values may require more memory. Zero or negative value disables caching of request body. This may be useful when proxying data ingestion requests")
failTimeout = flag.Duration("failTimeout", 3*time.Second, "Sets a delay period for load balancing to skip a malfunctioning backend")
backendTLSInsecureSkipVerify = flag.Bool("backend.tlsInsecureSkipVerify", false, "Whether to skip TLS verification when connecting to backends over HTTPS. "+
"See https://docs.victoriametrics.com/victoriametrics/vmauth/#backend-tls-setup")
backendTLSCAFile = flag.String("backend.TLSCAFile", "", "Optional path to TLS root CA file, which is used for TLS verification when connecting to backends over HTTPS. "+
// The -maxConcurrentRequests are executed. Wait until some of the requests are finished,
// so the current request could be executed.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
select {
case concurrencyLimitCh <- struct{}{}:
if err := ui.beginConcurrencyLimit(ctx); err != nil {
handleConcurrencyLimitError(w, r, err)
<-concurrencyLimitCh
return
}
return nil
case <-ctx.Done():
err := ctx.Err()
concurrentRequestsLimitReached.Inc()
if errors.Is(err, context.DeadlineExceeded) {
err = fmt.Errorf("cannot start executing the request during -maxQueueDuration=%s because -maxConcurrentRequests=%d concurrent requests are executed",
// The current request couldn't be executed until the request timeout.
concurrentRequestsLimitReached.Inc()
returnfmt.Errorf("cannot start executing the request during -maxQueueDuration=%s because -maxConcurrentRequests=%d concurrent requests are executed",
*maxQueueDuration,cap(concurrencyLimitCh))
handleConcurrencyLimitError(w,r,err)
return
}
err=fmt.Errorf("cannot start executing the request because -maxConcurrentRequests=%d concurrent requests are executed: %w",cap(concurrencyLimitCh),err)
handleConcurrencyLimitError(w,r,err)
return
returnfmt.Errorf("cannot start executing the request because -maxConcurrentRequests=%d concurrent requests are executed: %w",cap(concurrencyLimitCh),err)
Err:fmt.Errorf("all the %d backends for the user %q are unavailable",up.getBackendsCount(),ui.name()),
Err:fmt.Errorf("all the %d backends for the user %q are unavailable for proxying the request - check previous WARN logs to see the exact error for each failed backend",up.getBackendsCount(),ui.name()),
// If we get an error from the retry_status_codes list, but cannot execute retry,
// we consider such a request an error as well.
err:=&httpserver.ErrorWithStatusCode{
Err:fmt.Errorf("got response status code=%d from %s, but cannot retry the request at another backend, because the request has been already consumed",
Err:fmt.Errorf("got response status code=%d from %s, but cannot retry the request at another backend, because the request body has been already consumed",
// Retry requests at other backends if it matches retryStatusCodes.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4893
remoteAddr := httpserver.GetQuotedRemoteAddr(r)
// NOTE: do not use httpserver.GetRequestURI
// it explicitly reads request body, which may fail retries.
requestURI := httpserver.GetRequestURI(r)
logger.Warnf("remoteAddr: %s; requestURI: %s; request to %s failed, retrying the request at another backend because response status code=%d belongs to retry_status_codes=%d",