Compare commits

...

64 Commits

Author SHA1 Message Date
JAYICE
c248236526 Update docs/victoriametrics/changelog/CHANGELOG.md
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Signed-off-by: JAYICE <jayice.zhou@qq.com>
2026-04-22 17:17:43 +08:00
JAYICE
095db27338 Merge branch 'master' into issue-10507 2026-04-22 16:58:11 +08:00
andriibeee
a3df0f890b lib/cgroup: support reading cpu/memory limits from systemd slices
cgroup v2 supports slices (aka path hierarchies) for resource limits. Slices are mostly used by systemd
and container runtimes built on top of it.

This commit reads the subpath for systemd slices and traverses the hierarchy, keeping the minimal limit value.

Related docs:
https://docs.oracle.com/en/operating-systems/oracle-linux/9/systemd/SystemdMngCgroupsV2.html#SlicesServicesScopesHierarchy
https://www.freedesktop.org/software/systemd/man/latest/systemd.slice.html

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10635
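A minimal sketch of the traversal idea, assuming cgroup v2 is mounted at /sys/fs/cgroup; the function name and layout below are illustrative, not the actual lib/cgroup code:

```
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// minMemoryLimit walks from a cgroup subpath up to the cgroup v2 root,
// reading memory.max at every level and keeping the smallest configured
// limit, since systemd slices may impose limits at any level of the hierarchy.
func minMemoryLimit(subpath string) (limit int64, ok bool) {
	for p := subpath; ; p = filepath.Dir(p) {
		data, err := os.ReadFile(filepath.Join("/sys/fs/cgroup", p, "memory.max"))
		if err == nil {
			s := strings.TrimSpace(string(data))
			if s != "max" { // "max" means unlimited at this level
				if v, err := strconv.ParseInt(s, 10, 64); err == nil && (!ok || v < limit) {
					limit, ok = v, true
				}
			}
		}
		if p == "/" || p == "." {
			return limit, ok
		}
	}
}

func main() {
	if v, ok := minMemoryLimit("/system.slice/victoriametrics.service"); ok {
		fmt.Println("effective memory limit:", v)
	}
}
```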
2026-04-22 10:18:03 +02:00
Max Kotliar
0785d16711 docs/vmauth: add example for using TLS on public addr but keeping internal non-TLS (#10858)
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10793
2026-04-22 10:06:42 +02:00
Hui Wang
dc94aa9339 app/vmalert: properly remove empty labels value
Previously, if a rule label value was set to an empty string, vmalert ignored this label when merging labels with those from the data source response. In contrast, Prometheus removes the data source label in this case as well, which allows performing a label delete operation.

This commit uses the same logic as Prometheus for resolving label conflicts and allows removing labels.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10766
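A minimal sketch of the merge rule, using plain string maps rather than vmalert's actual label types:

```
package main

import "fmt"

// mergeLabels applies rule labels on top of labels from the data source
// response; an empty rule label value deletes the label, matching
// Prometheus behaviour.
func mergeLabels(src, rule map[string]string) map[string]string {
	out := make(map[string]string, len(src)+len(rule))
	for k, v := range src {
		out[k] = v
	}
	for k, v := range rule {
		if v == "" {
			delete(out, k) // empty value removes the label entirely
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	// "env" is dropped, "severity" is added, "job" is kept.
	fmt.Println(mergeLabels(
		map[string]string{"job": "node", "env": "prod"},
		map[string]string{"env": "", "severity": "warning"},
	))
}
```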
2026-04-22 09:53:25 +02:00
Max Kotliar
032f70e262 docs/changelog: add update note about bug in metricsql
Follow-up to 7029283f7d for LTS releases.
2026-04-21 20:19:56 +03:00
Max Kotliar
7029283f7d docs/changelog: add update note about bug in metricsql
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10856

Bug introduced in https://github.com/VictoriaMetrics/metricsql/pull/63
via commit
08dd38d4a0
2026-04-21 20:15:47 +03:00
Fred Navruzov
6c1534c7b1 docs/vmanomaly: update visual assets and formulations (#10859)
Update vmanomaly visual assets and clarify the allowed datasources.
2026-04-21 19:46:44 +03:00
Roman Khavronenko
0c05b0b15b apptest: restore helper for default tenant
Helper `getTenant` was removed in
e0e01e46f0 on the assumption that
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10782 would
tolerate a missing tenantID in the path.

While that change is not merged yet, the helper is restored so the tests
remain functional.
2026-04-21 11:03:06 +02:00
Alexander Frolov
a2b1d1eb62 app/vminsert: account storageNodesBucket count in per-node buffer size
Follow-up for ceda0407fb, which introduced a regression that could
double vminsert memory usage.

This commit takes into account the second buffer per storageNode.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10725#issuecomment-4282256709
2026-04-20 21:28:14 +02:00
dependabot[bot]
e3cd3329d6 build(deps): bump github/codeql-action from 4 to 4.35.1 (#10844)
Bumps [github/codeql-action](https://github.com/github/codeql-action)
from 4 to 4.35.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v4.35.1</h2>
<ul>
<li>Fix incorrect minimum required Git version for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a>: it should have been 2.36.0, not 2.11.0. <a
href="https://redirect.github.com/github/codeql-action/pull/3781">#3781</a></li>
</ul>
<h2>v4.35.0</h2>
<ul>
<li>Reduced the minimum Git version required for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> from 2.38.0 to 2.11.0. <a
href="https://redirect.github.com/github/codeql-action/pull/3767">#3767</a></li>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.25.1">2.25.1</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3773">#3773</a></li>
</ul>
<h2>v4.34.1</h2>
<ul>
<li>Downgrade default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.3">2.24.3</a>
due to issues with a small percentage of Actions and JavaScript
analyses. <a
href="https://redirect.github.com/github/codeql-action/pull/3762">#3762</a></li>
</ul>
<h2>v4.34.0</h2>
<ul>
<li>Added an experimental change which disables TRAP caching when <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> is enabled, since improved incremental analysis
supersedes TRAP caching. This will improve performance and reduce
Actions cache usage. We expect to roll this change out to everyone in
March. <a
href="https://redirect.github.com/github/codeql-action/pull/3569">#3569</a></li>
<li>We are rolling out improved incremental analysis to C/C++ analyses
that use build mode <code>none</code>. We expect this rollout to be
complete by the end of April 2026. <a
href="https://redirect.github.com/github/codeql-action/pull/3584">#3584</a></li>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.25.0">2.25.0</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3585">#3585</a></li>
</ul>
<h2>v4.33.0</h2>
<ul>
<li>
<p>Upcoming change: Starting April 2026, the CodeQL Action will skip
collecting file coverage information on pull requests to improve
analysis performance. File coverage information will still be computed
on non-PR analyses. Pull request analyses will log a warning about this
upcoming change. <a
href="https://redirect.github.com/github/codeql-action/pull/3562">#3562</a></p>
<p>To opt out of this change:</p>
<ul>
<li><strong>Repositories owned by an organization:</strong> Create a
custom repository property with the name
<code>github-codeql-file-coverage-on-prs</code> and the type
&quot;True/false&quot;, then set this property to <code>true</code> in
the repository's settings. For more information, see <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">Managing
custom properties for repositories in your organization</a>.
Alternatively, if you are using an advanced setup workflow, you can set
the <code>CODEQL_ACTION_FILE_COVERAGE_ON_PRS</code> environment variable
to <code>true</code> in your workflow.</li>
<li><strong>User-owned repositories using default setup:</strong> Switch
to an advanced setup workflow and set the
<code>CODEQL_ACTION_FILE_COVERAGE_ON_PRS</code> environment variable to
<code>true</code> in your workflow.</li>
<li><strong>User-owned repositories using advanced setup:</strong> Set
the <code>CODEQL_ACTION_FILE_COVERAGE_ON_PRS</code> environment variable
to <code>true</code> in your workflow.</li>
</ul>
</li>
<li>
<p>Fixed <a
href="https://redirect.github.com/github/codeql-action/issues/3555">a
bug</a> which caused the CodeQL Action to fail loading repository
properties if a &quot;Multi select&quot; repository property was
configured for the repository. <a
href="https://redirect.github.com/github/codeql-action/pull/3557">#3557</a></p>
</li>
<li>
<p>The CodeQL Action now loads <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">custom
repository properties</a> on GitHub Enterprise Server, enabling the
customization of features such as
<code>github-codeql-disable-overlay</code> that was previously only
available on GitHub.com. <a
href="https://redirect.github.com/github/codeql-action/pull/3559">#3559</a></p>
</li>
<li>
<p>Once <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries</a> can be configured with OIDC-based authentication
for organizations, the CodeQL Action will now be able to accept such
configurations. <a
href="https://redirect.github.com/github/codeql-action/pull/3563">#3563</a></p>
</li>
<li>
<p>Fixed the retry mechanism for database uploads. Previously this would
fail with the error &quot;Response body object should not be disturbed
or locked&quot;. <a
href="https://redirect.github.com/github/codeql-action/pull/3564">#3564</a></p>
</li>
<li>
<p>A warning is now emitted if the CodeQL Action detects a repository
property whose name suggests that it relates to the CodeQL Action, but
which is not one of the properties recognised by the current version of
the CodeQL Action. <a
href="https://redirect.github.com/github/codeql-action/pull/3570">#3570</a></p>
</li>
</ul>
<h2>v4.32.6</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.3">2.24.3</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3548">#3548</a></li>
</ul>
<h2>v4.32.5</h2>
<ul>
<li>Repositories owned by an organization can now set up the
<code>github-codeql-disable-overlay</code> custom repository property to
disable <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis for CodeQL</a>. First, create a custom repository
property with the name <code>github-codeql-disable-overlay</code> and
the type &quot;True/false&quot; in the organization's settings. Then in
the repository's settings, set this property to <code>true</code> to
disable improved incremental analysis. For more information, see <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">Managing
custom properties for repositories in your organization</a>. This
feature is not yet available on GitHub Enterprise Server. <a
href="https://redirect.github.com/github/codeql-action/pull/3507">#3507</a></li>
<li>Added an experimental change so that when <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> fails on a runner — potentially due to
insufficient disk space — the failure is recorded in the Actions cache
so that subsequent runs will automatically skip improved incremental
analysis until something changes (e.g. a larger runner is provisioned or
a new CodeQL version is released). We expect to roll this change out to
everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3487">#3487</a></li>
<li>The minimum memory check for improved incremental analysis is now
skipped for CodeQL 2.24.3 and later, which has reduced peak RAM usage.
<a
href="https://redirect.github.com/github/codeql-action/pull/3515">#3515</a></li>
<li>Reduced log levels for best-effort private package registry
connection check failures to reduce noise from workflow annotations. <a
href="https://redirect.github.com/github/codeql-action/pull/3516">#3516</a></li>
<li>Added an experimental change which lowers the minimum disk space
requirement for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a>, enabling it to run on standard GitHub Actions
runners. We expect to roll this change out to everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3498">#3498</a></li>
<li>Added an experimental change which allows the
<code>start-proxy</code> action to resolve the CodeQL CLI version from
feature flags instead of using the linked CLI bundle version. We expect
to roll this change out to everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3512">#3512</a></li>
<li>The previously experimental changes from versions 4.32.3, 4.32.4,
3.32.3 and 3.32.4 are now enabled by default. <a
href="https://redirect.github.com/github/codeql-action/pull/3503">#3503</a>,
<a
href="https://redirect.github.com/github/codeql-action/pull/3504">#3504</a></li>
</ul>
<h2>v4.32.4</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.2">2.24.2</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3493">#3493</a></li>
<li>Added an experimental change which improves how certificates are
generated for the authentication proxy that is used by the CodeQL Action
in Default Setup when <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries are configured</a>. This is expected to generate more
widely compatible certificates and should have no impact on analyses
which are working correctly already. We expect to roll this change out
to everyone in February. <a
href="https://redirect.github.com/github/codeql-action/pull/3473">#3473</a></li>
<li>When the CodeQL Action is run <a
href="https://docs.github.com/en/code-security/how-tos/scan-code-for-vulnerabilities/troubleshooting/troubleshooting-analysis-errors/logs-not-detailed-enough#creating-codeql-debugging-artifacts-for-codeql-default-setup">with
debugging enabled in Default Setup</a> and <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries are configured</a>, the &quot;Setup proxy for
registries&quot; step will output additional diagnostic information that
can be used for troubleshooting. <a
href="https://redirect.github.com/github/codeql-action/pull/3486">#3486</a></li>
<li>Added a setting which allows the CodeQL Action to enable network
debugging for Java programs. This will help GitHub staff support
customers with troubleshooting issues in GitHub-managed CodeQL
workflows, such as Default Setup. This setting can only be enabled by
GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3485">#3485</a></li>
<li>Added a setting which enables GitHub-managed workflows, such as
Default Setup, to use a <a
href="https://github.com/dsp-testing/codeql-cli-nightlies">nightly
CodeQL CLI release</a> instead of the latest, stable release that is
used by default. This will help GitHub staff support customers whose
analyses for a given repository or organization require early access to
a change in an upcoming CodeQL CLI release. This setting can only be
enabled by GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3484">#3484</a></li>
</ul>
<h2>v4.32.3</h2>
<ul>
<li>Added experimental support for testing connections to <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries</a>. This feature is not currently enabled for any
analysis. In the future, it may be enabled by default for Default Setup.
<a
href="https://redirect.github.com/github/codeql-action/pull/3466">#3466</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c10b8064de"><code>c10b806</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3782">#3782</a>
from github/update-v4.35.1-d6d1743b8</li>
<li><a
href="c5ffd06837"><code>c5ffd06</code></a>
Update changelog for v4.35.1</li>
<li><a
href="d6d1743b8e"><code>d6d1743</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3781">#3781</a>
from github/henrymercer/update-git-minimum-version</li>
<li><a
href="65d2efa733"><code>65d2efa</code></a>
Add changelog note</li>
<li><a
href="2437b20ab3"><code>2437b20</code></a>
Update minimum git version for overlay to 2.36.0</li>
<li><a
href="ea5f71947c"><code>ea5f719</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3775">#3775</a>
from github/dependabot/npm_and_yarn/node-forge-1.4.0</li>
<li><a
href="45ceeea896"><code>45ceeea</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3777">#3777</a>
from github/mergeback/v4.35.0-to-main-b8bb9f28</li>
<li><a
href="24448c9843"><code>24448c9</code></a>
Rebuild</li>
<li><a
href="7c51060631"><code>7c51060</code></a>
Update changelog and version after v4.35.0</li>
<li><a
href="b8bb9f28b8"><code>b8bb9f2</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3776">#3776</a>
from github/update-v4.35.0-0078ad667</li>
<li>Additional commits viewable in <a
href="https://github.com/github/codeql-action/compare/v4...v4.35.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github/codeql-action&package-manager=github_actions&previous-version=4&new-version=4.35.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 15:55:02 +03:00
dependabot[bot]
6c57246940 build(deps): bump actions/cache from 4 to 5.0.4 (#10802)
Bumps [actions/cache](https://github.com/actions/cache) from 4 to 5.0.4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/releases">actions/cache's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.4</h2>
<h2>What's Changed</h2>
<ul>
<li>Add release instructions and update maintainer docs by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1696">actions/cache#1696</a></li>
<li>Potential fix for code scanning alert no. 52: Workflow does not
contain permissions by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1697">actions/cache#1697</a></li>
<li>Fix workflow permissions and cleanup workflow names / formatting by
<a href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1699">actions/cache#1699</a></li>
<li>docs: Update examples to use the latest version by <a
href="https://github.com/XZTDean"><code>@​XZTDean</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1690">actions/cache#1690</a></li>
<li>Fix proxy integration tests by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1701">actions/cache#1701</a></li>
<li>Fix cache key in examples.md for bun.lock by <a
href="https://github.com/RyPeck"><code>@​RyPeck</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1722">actions/cache#1722</a></li>
<li>Update dependencies &amp; patch security vulnerabilities by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1738">actions/cache#1738</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/XZTDean"><code>@​XZTDean</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1690">actions/cache#1690</a></li>
<li><a href="https://github.com/RyPeck"><code>@​RyPeck</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1722">actions/cache#1722</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v5...v5.0.4">https://github.com/actions/cache/compare/v5...v5.0.4</a></p>
<h2>v5.0.3</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump <code>@actions/cache</code> to v5.0.5 (Resolves: <a
href="https://github.com/actions/cache/security/dependabot/33">https://github.com/actions/cache/security/dependabot/33</a>)</li>
<li>Bump <code>@actions/core</code> to v2.0.3</li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v5...v5.0.3">https://github.com/actions/cache/compare/v5...v5.0.3</a></p>
<h2>v5.0.2</h2>
<h2>What's Changed</h2>
<p>When creating cache entries, 429s returned from the cache service
will not be retried.</p>
<h2>v5.0.1</h2>
<blockquote>
<p>[!IMPORTANT]
<strong><code>actions/cache@v5</code> runs on the Node.js 24 runtime and
requires a minimum Actions Runner version of
<code>2.327.1</code>.</strong></p>
<p>If you are using self-hosted runners, ensure they are updated before
upgrading.</p>
</blockquote>
<hr />
<h1>v5.0.1</h1>
<h2>What's Changed</h2>
<ul>
<li>fix: update <code>@​actions/cache</code> for Node.js 24 punycode
deprecation by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1685">actions/cache#1685</a></li>
<li>prepare release v5.0.1 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1686">actions/cache#1686</a></li>
</ul>
<h1>v5.0.0</h1>
<h2>What's Changed</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/blob/main/RELEASES.md">actions/cache's
changelog</a>.</em></p>
<blockquote>
<h1>Releases</h1>
<h2>How to prepare a release</h2>
<blockquote>
<p>[!NOTE]<br />
Relevant for maintainers with write access only.</p>
</blockquote>
<ol>
<li>Switch to a new branch from <code>main</code>.</li>
<li>Run <code>npm test</code> to ensure all tests are passing.</li>
<li>Update the version in <a
href="https://github.com/actions/cache/blob/main/package.json"><code>https://github.com/actions/cache/blob/main/package.json</code></a>.</li>
<li>Run <code>npm run build</code> to update the compiled files.</li>
<li>Update this <a
href="https://github.com/actions/cache/blob/main/RELEASES.md"><code>https://github.com/actions/cache/blob/main/RELEASES.md</code></a>
with the new version and changes in the <code>## Changelog</code>
section.</li>
<li>Run <code>licensed cache</code> to update the license report.</li>
<li>Run <code>licensed status</code> and resolve any warnings by
updating the <a
href="https://github.com/actions/cache/blob/main/.licensed.yml"><code>https://github.com/actions/cache/blob/main/.licensed.yml</code></a>
file with the exceptions.</li>
<li>Commit your changes and push your branch upstream.</li>
<li>Open a pull request against <code>main</code> and get it reviewed
and merged.</li>
<li>Draft a new release <a
href="https://github.com/actions/cache/releases">https://github.com/actions/cache/releases</a>
use the same version number used in <code>package.json</code>
<ol>
<li>Create a new tag with the version number.</li>
<li>Auto generate release notes and update them to match the changes you
made in <code>RELEASES.md</code>.</li>
<li>Toggle the set as the latest release option.</li>
<li>Publish the release.</li>
</ol>
</li>
<li>Navigate to <a
href="https://github.com/actions/cache/actions/workflows/release-new-action-version.yml">https://github.com/actions/cache/actions/workflows/release-new-action-version.yml</a>
<ol>
<li>There should be a workflow run queued with the same version
number.</li>
<li>Approve the run to publish the new version and update the major tags
for this action.</li>
</ol>
</li>
</ol>
<h2>Changelog</h2>
<h3>5.0.4</h3>
<ul>
<li>Bump <code>minimatch</code> to v3.1.5 (fixes ReDoS via globstar
patterns)</li>
<li>Bump <code>undici</code> to v6.24.1 (WebSocket decompression bomb
protection, header validation fixes)</li>
<li>Bump <code>fast-xml-parser</code> to v5.5.6</li>
</ul>
<h3>5.0.3</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v5.0.5 (Resolves: <a
href="https://github.com/actions/cache/security/dependabot/33">https://github.com/actions/cache/security/dependabot/33</a>)</li>
<li>Bump <code>@actions/core</code> to v2.0.3</li>
</ul>
<h3>5.0.2</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v5.0.3 <a
href="https://redirect.github.com/actions/cache/pull/1692">#1692</a></li>
</ul>
<h3>5.0.1</h3>
<ul>
<li>Update <code>@azure/storage-blob</code> to <code>^12.29.1</code> via
<code>@actions/cache@5.0.1</code> <a
href="https://redirect.github.com/actions/cache/pull/1685">#1685</a></li>
</ul>
<h3>5.0.0</h3>
<blockquote>
<p>[!IMPORTANT]
<code>actions/cache@v5</code> runs on the Node.js 24 runtime and
requires a minimum Actions Runner version of <code>2.327.1</code>.</p>
</blockquote>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="27d5ce7f10"><code>27d5ce7</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1747">#1747</a>
from actions/yacaovsnc/update-dependency</li>
<li><a
href="f280785d7b"><code>f280785</code></a>
licensed changes</li>
<li><a
href="619aeb1606"><code>619aeb1</code></a>
npm run build generated dist files</li>
<li><a
href="bcf16c2893"><code>bcf16c2</code></a>
Update ts-http-runtime to 0.3.5</li>
<li><a
href="668228422a"><code>6682284</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1738">#1738</a>
from actions/prepare-v5.0.4</li>
<li><a
href="e34039626f"><code>e340396</code></a>
Update RELEASES</li>
<li><a
href="8a67110529"><code>8a67110</code></a>
Add licenses</li>
<li><a
href="1865903e1b"><code>1865903</code></a>
Update dependencies &amp; patch security vulnerabilities</li>
<li><a
href="5656298164"><code>5656298</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1722">#1722</a>
from RyPeck/patch-1</li>
<li><a
href="4e380d19e1"><code>4e380d1</code></a>
Fix cache key in examples.md for bun.lock</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/cache/compare/v4...v5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/cache&package-manager=github_actions&previous-version=4&new-version=5.0.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 15:53:09 +03:00
andriibeee
05112e54e2 lib/netutil: fix IPv6 address corruption in proxy protocol v2 parser
The proxy protocol parser kept a sub-slice reference to a pooled bytesBuffer in readProxyProto:
```
 bb := bbPool.Get()
 defer bbPool.Put(bb)   // ← buffer returned to pool AFTER function returns
...
   IP:   bb.B[0:16],  // ← BUG: sub-slice of pooled buffer!
...
 ```

This commit allocates a new slice for the IPv6 address and copies the buffer content into it.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10839
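A minimal, self-contained illustration of the fix, assuming the 16 address bytes arrive in a pooled buffer (names are illustrative):

```
package main

import (
	"fmt"
	"net"
)

// copyIP makes the parsed address independent of the pooled buffer:
// the 16 address bytes are copied into a fresh allocation instead of
// sub-slicing buf, which may be overwritten once returned to the pool.
func copyIP(buf []byte) net.IP {
	ip := make(net.IP, 16)
	copy(ip, buf[:16])
	return ip
}

func main() {
	pooled := make([]byte, 16) // stands in for bb.B from the pooled buffer
	pooled[15] = 1             // ::1
	ip := copyIP(pooled)
	pooled[15] = 2  // pool reuse overwrites the buffer...
	fmt.Println(ip) // ...but the copied address still prints ::1
}
```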
2026-04-20 12:11:04 +02:00
Andrii Chubatiuk
ce227fe7d9 lib/streamaggr: added vm_streamaggr_counter_resets_total counter (#10807)
Added the `vm_streamaggr_counter_resets_total` metric for `rate*`, `total*`, and
`increase*` outputs, which is useful for investigating unpredictable output
behaviour.

Signed-off-by: Andrii Chubatiuk <andrew.chubatiuk@gmail.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2026-04-20 11:48:03 +02:00
hagen1778
e4524eb2fb deployment/alerts: move IndexDBRecordsDrop and TooManyTSIDMisses rules to storage-related files
`IndexDBRecordsDrop` and `TooManyTSIDMisses` were mistakenly placed in `alerts-health.yml`,
which is supposed to contain rules related to all VM components. But these two rules
are related to storage components only (vmstorage and vmsingle), so they are moved
to the corresponding files.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-20 11:43:21 +02:00
hagen1778
b9ba5dacc3 deployment/alerts: rename alerts.yml to alerts-single-node.yml
The change should reduce confusion about which installation `alerts.yml`
belongs to. Before, developers could mistakenly assume that
`alerts.yml` was related to both single-node and cluster installations.
As a result, the rule `MetadataCacheUtilizationIsTooHigh` was added only
to `alerts.yml` and not copied to `alerts-cluster.yml`.

The rename brings more context into the file name
and should reduce confusion in the future.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-20 11:37:30 +02:00
hagen1778
1a8fe4f2f8 deployment/alerts: add MetadataCacheUtilizationIsTooHigh to cluster rules
Before, this rule was only part of the single-node rule set,
but it is applicable to both single-node and cluster installations.
Adding it to the cluster rules as well.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-20 11:31:43 +02:00
Roman Khavronenko
2dcfbd8e19 deployment/rules: add MetricNameStatsCacheUtilizationIsTooHigh alert (#10840)
The new rule `MetricNameStatsCacheUtilizationIsTooHigh` will signal
overutilization of the metric names usage stats tracker. See
https://docs.victoriametrics.com/victoriametrics/#track-ingested-metrics-usage

This rule can fire for deployments with a high churn rate of metric names.
In cases like this, it is better to disable metric name tracking
completely, as it brings no benefit.

It might also fire for deployments that have been tracking metric names for very
long periods; in that case the alert is a good sign to reset the cache.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-20 11:29:53 +02:00
Max Kotliar
728269a5af docs/changelog: polish wording a bit; add a link 2026-04-17 19:34:39 +03:00
Jan Dittrich
eaf24ec631 docs: align the limit mentioned in the docs with actual flag -maxLabelsPerTimeseries value (#10826)
The docs currently wrongly state that vminsert applies a per-timeseries label
limit of `30`. The actual limit is `40`, which is also
correctly stated in the vmcluster docs. This PR corrects this in the key
concepts docs.

```
  -maxLabelsPerTimeseries int
     The maximum number of labels per time series to be accepted. Series with superfluous labels are ignored. In this case the vm_rows_ignored_total{reason="too_many_labels"} metric at /metrics page is incremented (default 40)
```

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10826
2026-04-17 19:17:39 +03:00
Phuong Le
e47f7a9d4e docs/contributing: clarify test requirements in pull request checklist (#10781)
Clarify in the pull request checklist that tests are expected for
non-trivial changes and bug fixes must include tests unless a maintainer
explicitly agrees otherwise

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10781
2026-04-17 18:29:52 +03:00
Phuong Le
02279b8594 .github: shorten PR template (#10789)
After switching squash merges to use the PR title and description, the
PR template text started leaking into final commit messages and adding
noise.

This PR removes the template and documents what a PR title and PR
description should contain instead.

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10789
2026-04-17 18:18:12 +03:00
f41gh7
65a44bd9e5 docs: changelog add missing PR links
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-17 11:12:56 +02:00
f41gh7
431dda673e vendor: update metrics and metrisql libs 2026-04-17 11:10:07 +02:00
andriibeee
d66b7a2283 app/vmauth: properly close backend response body
Previously, after RoundTrip returned successfully (err == nil, res != nil), the code checked whether the original client request's context was canceled. If canceled, it returned immediately without closing res.Body.

There is a race window where:
1) RoundTrip completes successfully (res is non-nil)
2) The client cancels the request context (closes connection)
3) The context check at line 484 sees the cancellation
4) The function returns without closing res.Body

The response body holds a reference to the underlying TCP connection. Without closing it, the connection is permanently leaked along with the transport goroutines (readLoop + writeLoop or dialConnFor).

The bug was introduced in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10233

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10833
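A minimal sketch of the fixed control flow (illustrative names, not vmauth's actual code):

```
package proxysketch

import (
	"context"
	"net/http"
)

// doRoundTrip shows the safe ordering: once RoundTrip succeeds, the
// response body must be closed on every early-return path, including the
// one taken when the client context is already canceled; otherwise the
// underlying TCP connection and its transport goroutines leak.
func doRoundTrip(ctx context.Context, rt http.RoundTripper, req *http.Request) (*http.Response, error) {
	res, err := rt.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	select {
	case <-ctx.Done():
		_ = res.Body.Close() // release the connection before bailing out
		return nil, ctx.Err()
	default:
	}
	return res, nil
}
```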
2026-04-17 10:57:13 +02:00
Yury Moladau
fd45463b5f app/vmui: fix Alerting Rules page query link and time display
**"Run query" link params**  
Added correct params to "Run query" link on Alerting Rules page:
- `g0.step_input` - set to `group.interval` (in seconds)
- `g0.end_time` - set to `rule.lastEvaluation` / `alert.activeAt`
- `g0.relative_time=none` - to fix the time range

**Time display timezone**  
Changed `t.format(...)` to `t.tz().format(...)` to display time in the
user-selected timezone.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10366
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10827
2026-04-17 10:32:59 +02:00
andriibeee
153c5bb803 lib/handshake: ignore TCP healthchecks in VMSelect just like in VMInsert
TCP healthchecks on the clusternative port of vmselect log the following warning continuously:

    VictoriaMetrics/lib/vmselectapi/server.go:204 cannot complete vmselect handshake due to network error with client "10.129.30.27:43829": cannot read hello message : cannot read message with size 11: EOF; read only 0 bytes. Check vmselect logs for errors

This is in contrast to vminsert, where it seems like there's handling for these healthchecks:
```
 if errors.Is(err, io.EOF) {
 	// This is likely a TCP healthcheck, which must be ignored in order to prevent logs pollution.
 	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1762
 	return errTCPHealthcheck
```

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10786
2026-04-16 23:02:01 +02:00
Nikolay
a29229a877 lib/promscrape: prevent unbounded scrape error body read
Previously, on non-200 HTTP status codes, lib/promscrape performed an
unbounded body read, which could potentially result in OOM.

This commit adds a maxScrapeSize limit to error response body reads,
protecting against malicious or misbehaving metrics endpoints.
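A minimal sketch of the bounded read, assuming an io.LimitReader-style cap rather than the exact committed code:

```
package scrapesketch

import (
	"fmt"
	"io"
	"net/http"
)

// readErrorBody reads at most maxScrapeSize bytes of a non-200 response
// body for inclusion in the error message, so a misbehaving endpoint
// cannot force an unbounded allocation.
func readErrorBody(resp *http.Response, maxScrapeSize int64) string {
	body, err := io.ReadAll(io.LimitReader(resp.Body, maxScrapeSize))
	if err != nil {
		return fmt.Sprintf("cannot read error response body: %s", err)
	}
	return string(body)
}
```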
2026-04-16 22:50:08 +02:00
cubic-dev-ai[bot]
aa94652ec3 app/vminsert: correctly call StopIngestionRateLimiter before vminsert.Stop in vmsingle shutdown
vmsingle shuts down vminsert before closing the ingestion rate limiter, even though the rate limiter API explicitly requires the opposite order to unblock callers. vminsert.Stop() waits for unmarshal workers, which can be blocked in ingestionRateLimiter.Register() when the limit is hit.
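A minimal sketch of the corrected ordering, with illustrative function names:

```
package shutdownsketch

// shutdown stops the rate limiter first, releasing unmarshal workers
// blocked in Register(); only then can vminsert.Stop() safely wait for
// those workers to finish.
func shutdown(stopIngestionRateLimiter, stopVMInsert func()) {
	stopIngestionRateLimiter() // unblocks workers waiting on the limiter
	stopVMInsert()             // now safe: no worker is stuck in Register()
}
```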
2026-04-16 22:49:11 +02:00
Yury Moladau
ad85524fb1 app/vmui: update package dependencies (#10831)
Update package versions in `app/vmui/packages/vmui/package.json`.

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2026-04-16 22:44:54 +02:00
cubic-dev-ai[bot]
3fe606770f fix: prevent deadlock in vmrestore worker pool on context cancellation
Workers in runParallelPerPathInternal check ctxLocal.Done() before processing each work item and exit early on cancellation — without sending a result to resultCh. However, the coordinator loop always waits for exactly len(perPath) results from resultCh. If cancellation occurs before all tasks report, the read blocks indefinitely.
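A minimal sketch of one standard fix, with illustrative names rather than the actual vmrestore code: the coordinator selects on ctx.Done() instead of blocking for exactly len(perPath) results:

```
package poolsketch

import "context"

// collect waits for up to n worker results, but also watches for
// cancellation, since workers that observe ctx.Done() exit without
// sending anything to resultCh.
func collect(ctx context.Context, resultCh <-chan error, n int) error {
	for i := 0; i < n; i++ {
		select {
		case err := <-resultCh:
			if err != nil {
				return err
			}
		case <-ctx.Done():
			return ctx.Err() // stop waiting for results that will never arrive
		}
	}
	return nil
}
```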
2026-04-16 22:44:31 +02:00
Fred Navruzov
b3054bbadd docs/vmanomaly-v1.29.3 (#10832)
Update vmanomaly docs to v1.29.3

2026-04-16 17:34:25 +03:00
Roman Khavronenko
443ea9cbc6 apptest: add support for specifying HTTP headers (#10830)
This change allows specifying headers for provided API calls. This
ability is required for proper testing of Tenant-via-Header feature in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10782

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-16 15:03:19 +02:00
andriibeee
a36395500b lib/awsapi: pre-populate credentials only for static creds without roleARN
0aaa741b5b introduced a regression in lib/awsapi/config.go that causes empty credentials to be returned on the very first call to getFreshAPICredentials() when using EKS Pod Identity (or any container credential mechanism with no static access key). These empty credentials are then used for SigV4 signing, resulting in a 403 Forbidden on every remote write request.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10815
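A minimal sketch of the corrected condition, with assumed field names mirroring the commit message:

```
package awssketch

// apiConfig holds only the fields relevant to the decision; the real
// lib/awsapi config carries more state.
type apiConfig struct {
	accessKeyID     string
	secretAccessKey string
	roleARN         string
}

// shouldPrepopulateCredentials is true only for purely static creds:
// roleARN and container/pod-identity flows must defer to the first
// getFreshAPICredentials() call, which fetches real credentials before
// any request is signed.
func (c *apiConfig) shouldPrepopulateCredentials() bool {
	return c.accessKeyID != "" && c.secretAccessKey != "" && c.roleARN == ""
}
```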
2026-04-16 11:51:42 +02:00
Jayice
a6ff705771 improve documentation 2026-04-16 14:07:01 +08:00
Max Kotliar
cc3a14b16b docs/changelog: fix feature indentation 2026-04-15 17:34:22 +03:00
Aliaksandr Valialkin
7ef08b1781 vendor: update github.com/VictoriaMetrics/VictoriaLogs from v1.50.1-0.20260415114444-d5b5febe4954 to github.com/VictoriaMetrics/VictoriaLogs v1.50.1-0.20260415124154-6b7a6357aec0
This is needed for vmalert, so it accepts LogsQL queries with 'limit' and 'offset' pipes.

See https://github.com/VictoriaMetrics/VictoriaLogs/issues/1296#issuecomment-4252036978
2026-04-15 14:45:01 +02:00
Aliaksandr Valialkin
969cb5b4ae vendor: run make vendor-update 2026-04-15 14:03:53 +02:00
Aliaksandr Valialkin
b9f0e614bd vendor: update github.com/VictoriaMetrics/VictoriaLogs from v0.0.0-20260218111324-95b48d57d032 to v1.50.1-0.20260415114444-d5b5febe4954 2026-04-15 13:54:46 +02:00
Aliaksandr Valialkin
ed44c08f5f docs/Makefile: avoid creating a docker image with docs server at make docs-update-version
Just run a simple bash command instead of the heavyweight Docker image.

While at it, rely on TAG environment variable instead of PKG_TAG env variable
for `make docs-update-version`, in order to be consistent with other Make commands.
2026-04-15 13:24:52 +02:00
f41gh7
3ae44e734b docs: remove promscrape.dropOriginalLabels from relabeling-debug section
Follow-up for ef507d372b.

It is no longer needed to manually set the promscrape.dropOriginalLabels
flag, since it is false by default.
2026-04-15 12:34:07 +02:00
Jayice
0c5886012d only apply for shardByURL=true 2026-04-15 16:56:27 +08:00
Pablo (Tomas) Fernandez
d3264bd78f docs/guides: fix broken links (#10800)
Fix broken or moved links in guides.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-04-15 10:18:48 +02:00
hagen1778
1f87faafec docs/articles: add new 3rd party article about stream aggregation
https://medium.com/airbnb-engineering/building-a-high-volume-metrics-pipeline-with-opentelemetry-and-vmagent-c714d6910b45
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-15 10:11:57 +02:00
hagen1778
521b73dfc5 docs/vmagent: move relabeling section higher
The change is needed to group the splitting/sharding sections of the documentation
so they go one after another. This should improve readability.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-15 10:10:53 +02:00
hagen1778
61db79c10a docs/vmagent: mention ability to filter scrape targets
The previous description didn't mention that relabeling can be used
for filtering scrape targets. Adding this mention.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-15 10:08:31 +02:00
hagen1778
460ac6468c docs/relabeling: restore links to articles about relabeling internals
These links were removed in 134501bf99
without adding a complete substitute for their content.

Restoring these links, as they can be useful for readers to learn about relabeling.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-15 10:07:07 +02:00
hagen1778
c42023c586 docs/playgrounds: add aliases for old links
The old links were removed in #10754
on the mistaken assumption that Google didn't index them. However, it did, and users can get a 404
when searching Google for VM playgrounds.

Restoring the links via aliases: Hugo will serve the `/playgrounds` page when a
user requests `/playgrounds/victoriametrics/`.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-04-15 10:04:46 +02:00
Jayice
9c5fbe1a30 roll back test code 2026-04-15 15:13:33 +08:00
Jayice
653576d8b1 introduce a new flag -remoteWrite.enableRerouting to explicitly control the rerouting behavior 2026-04-15 13:47:59 +08:00
Artem Fetishev
8a20ccf21d apptest: sync code between branches and fix backup/restore range queries (#10799)
Fix app tests:

1. Sync code between vmsingle and vmcluster: it must be the same because
apptest does not differentiate between branches; it just runs pre-built
binaries
2. Simplify range queries in backup/restore test so that it does not
depend on the interval between samples to work correctly.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-04-14 07:18:09 +02:00
Max Kotliar
1a01dbbec7 docs/changelog: fix unwanted release tag change
The tag v1.138.0 was unintentionally changed to v1.139.0 due to a bug in
the release script.

Reverting the change. The bug will be addressed separately.
2026-04-13 14:52:21 +03:00
f41gh7
630e413812 docs: update flags with actual v1.140.0 binaries
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:34:04 +02:00
f41gh7
b639e7e641 docs: bump version to v1.140.0
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:31:40 +02:00
f41gh7
858c318e1f deployment/docker: bump version to v1.140.0 2026-04-13 11:31:11 +02:00
f41gh7
b8327ce09c docs: mention new LTS releases
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-04-13 11:16:30 +02:00
Aliaksandr Valialkin
7514511c68 app/vmauth/main.go: clarify comments for bufferedBody struct a bit
This is a follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10677#discussion_r3064731250
2026-04-11 09:42:32 +02:00
f41gh7
33d524bf13 follow-up for d07c1c73d1
move bugfix into the current release
2026-04-10 19:37:14 +02:00
Alexander Frolov
d07c1c73d1 lib/writeconcurrencylimiter: prevent deadlock at IncConcurrency
Previously (*writeconcurrencylimiter.Reader).Read() could permanently leak concurrency tokens from the -maxConcurrentInserts semaphore.
 
 Consider the following example:
* GetReader() acquires a token, then PutReader() unconditionally releases it.
* Read() calls DecConcurrency() before the underlying I/O and IncConcurrency() after it. If IncConcurrency() returns an error, Read() returns without holding a token.
* Each such failure permanently removes one slot from the concurrencyLimitCh semaphore. Slots leak one by one until the channel is fully drained, at which point DecConcurrency() blocks forever, deadlocking ingestion on vmstorage.

This commit adds tracking of obtained tokens to the reader, which prevents possible token leakage.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10784
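A minimal sketch of the token-tracking idea; semaphore semantics are assumed (acquire = send, release = receive on a buffered channel), and the names mirror the commit message rather than the actual implementation:

```
package limitersketch

// Reader records whether it currently holds a slot in the shared
// -maxConcurrentInserts semaphore, so a failed re-acquisition cannot
// leave the semaphore permanently drained.
type Reader struct {
	sem      chan struct{} // buffered with capacity maxConcurrentInserts
	hasToken bool
}

// DecConcurrency releases the slot around blocking I/O.
func (r *Reader) DecConcurrency() {
	if r.hasToken {
		<-r.sem
		r.hasToken = false
	}
}

// IncConcurrency re-acquires a slot after the I/O completes.
func (r *Reader) IncConcurrency() {
	r.sem <- struct{}{}
	r.hasToken = true
}

// PutReader releases the slot only if the reader still holds one,
// instead of releasing unconditionally.
func (r *Reader) PutReader() {
	if r.hasToken {
		<-r.sem
		r.hasToken = false
	}
}
```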
2026-04-10 19:35:59 +02:00
f41gh7
a896673c42 CHANGELOG.md: cut v1.140.0 release 2026-04-10 17:02:32 +02:00
f41gh7
c60ab2d57a make docs-update-version 2026-04-10 16:54:18 +02:00
f41gh7
49e51611d7 make vmui-update 2026-04-10 16:51:13 +02:00
Hui Wang
902ca83177 app/vmalert: adopt additional rule states in the list rules API
In Grafana, the alert list panel can use VictoriaMetrics as a datasource
and call the `/api/v1/rules` API with [specific
states](https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rule-evaluation/nodata-and-error-states/#alert-instance-states).
See
https://play-grafana.victoriametrics.com/d/febljk0a32qyoa/3e68cf3?orgId=1&from=now-1h&to=now&timezone=browser&var-prometheus_datasource=P4169E866C3094E38&var-jaeger_datasource=P14D5514F5CCC0D1C&var-victorialogs_datasource=PD775F2863313E6C7&var-service_namespace=$__all&var-service_name=checkout&refresh=5m&editPanel=40.
Some states are already defined in vmalert, although with different
names. Others, such as "recovering", are currently undefined.
This pull request adopts all of these states rather than failing the request.

The above panel request also uses the `matcher` param to filter rules.
However,
[Prometheus](https://prometheus.io/docs/prometheus/latest/querying/api/#rules)
does not support this parameter either and simply ignores it, so I don't
think vmalert needs to support it now.

JFYI, the Grafana [Alerting
page](https://play-grafana.victoriametrics.com/alerting) does not
include any of the mentioned `state` or `matcher` parameters in rule
listing requests to the datasource. Filtering is handled by the Grafana
frontend, so most users are not affected by partial support for
filtering in backend products.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10778
2026-04-10 16:47:25 +02:00
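For illustration only, a sketch of how a client might exercise the new state aliases against a running vmalert; the listen address is hypothetical. Per the handler change in the diff further down, "noData" is rewritten to the internal "nomatch" state before filtering:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// "noData" is a Grafana state name; vmalert now accepts it
	// as an alias for the internal "nomatch" state.
	url := "http://127.0.0.1:8880/api/v1/rules?state=noData&state=firing"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}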
Phuong Le
66e3f8736b ci: remove automatic Codecov reporting from test workflow (#10780)
This removes automatic Codecov reporting from VictoriaMetrics CI. This
change keeps local coverage generation available, but removes automatic
PR noise (such as
[this](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10625#issuecomment-4084390659))
and unnecessary CI overhead.
2026-04-10 16:45:11 +02:00
338 changed files with 6760 additions and 2942 deletions

.codex Normal file
View File

View File

@@ -1,10 +1 @@
### Describe Your Changes
Please provide a brief description of the changes you made. Be as specific as possible to help others understand the purpose and impact of your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Before creating the PR, please read [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist) and remove this line after confirming you understand and follow them.

View File

@@ -27,7 +27,7 @@ jobs:
- run: go version
- name: Cache Go artifacts
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: |
~/.cache/go-build

View File

@@ -40,7 +40,7 @@ jobs:
- run: go version
- name: Cache Go artifacts
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: |
~/.cache/go-build
@@ -50,14 +50,14 @@ jobs:
restore-keys: go-artifacts-${{ runner.os }}-codeql-analyze-${{ steps.go.outputs.go-version }}-
- name: Initialize CodeQL
uses: github/codeql-action/init@v4
uses: github/codeql-action/init@v4.35.1
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v4
uses: github/codeql-action/autobuild@v4.35.1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v4
uses: github/codeql-action/analyze@v4.35.1
with:
category: 'language:go'

View File

@@ -47,7 +47,7 @@ jobs:
- run: go version
- name: Cache golangci-lint
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: |
~/.cache/golangci-lint
@@ -66,8 +66,8 @@ jobs:
strategy:
matrix:
scenario:
- 'test-full'
- 'test-full-386'
- 'test'
- 'test-386'
- 'test-pure'
steps:
@@ -88,11 +88,6 @@ jobs:
- name: Run tests
run: make ${{ matrix.scenario}}
- name: Publish coverage
uses: codecov/codecov-action@v6
with:
files: ./coverage.txt
apptest:
name: apptest
runs-on: apptest

View File

@@ -457,6 +457,9 @@ test:
test-race:
go test -tags 'synctest' -race ./lib/... ./app/...
test-386:
GOARCH=386 go test -tags 'synctest' ./lib/... ./app/...
test-pure:
CGO_ENABLED=0 go test -tags 'synctest' ./lib/... ./app/...

View File

@@ -4,7 +4,6 @@
[![Docker Pulls](https://img.shields.io/docker/pulls/victoriametrics/victoria-metrics?label=&logo=docker&logoColor=white&labelColor=2496ED&color=2496ED&link=https%3A%2F%2Fhub.docker.com%2Fr%2Fvictoriametrics%2Fvictoria-metrics)](https://hub.docker.com/u/victoriametrics)
[![Go Report](https://goreportcard.com/badge/github.com/VictoriaMetrics/VictoriaMetrics?link=https%3A%2F%2Fgoreportcard.com%2Freport%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics)](https://goreportcard.com/report/github.com/VictoriaMetrics/VictoriaMetrics)
[![Build Status](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/build.yml/badge.svg?branch=master&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Factions)](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/build.yml)
[![codecov](https://codecov.io/gh/VictoriaMetrics/VictoriaMetrics/branch/master/graph/badge.svg?link=https%3A%2F%2Fcodecov.io%2Fgh%2FVictoriaMetrics%2FVictoriaMetrics)](https://app.codecov.io/gh/VictoriaMetrics/VictoriaMetrics)
[![License](https://img.shields.io/github/license/VictoriaMetrics/VictoriaMetrics?labelColor=green&label=&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Fblob%2Fmaster%2FLICENSE)](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/LICENSE)
[![Join Slack](https://img.shields.io/badge/Join%20Slack-4A154B?logo=slack)](https://slack.victoriametrics.com)
[![X](https://img.shields.io/twitter/follow/VictoriaMetrics?style=flat&label=Follow&color=black&logo=x&labelColor=black&link=https%3A%2F%2Fx.com%2FVictoriaMetrics)](https://x.com/VictoriaMetrics/)

View File

@@ -118,8 +118,8 @@ func main() {
logger.Fatalf("cannot stop the webservice: %s", err)
}
logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
vminsert.Stop()
vminsertcommon.StopIngestionRateLimiter()
vminsert.Stop()
vmstorage.Stop()
vmselect.Stop()

View File

@@ -102,6 +102,8 @@ var (
"cannot be pushed into the configured -remoteWrite.url systems in a timely manner. See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence")
disableMetadataPerURL = flagutil.NewArrayBool("remoteWrite.disableMetadata", "Whether to disable sending metadata to the corresponding -remoteWrite.url. "+
"By default, metadata sending is controlled by the global -enableMetadata flag")
enableRerouting = flag.Bool("remoteWrite.enableRerouting", false, "Whether to reroute samples to other available remote storage systems when a remote storage system's persistent queue cannot "+
"keep up with the data ingestion rate. If this flag is not set, it is calculated automatically based on -remoteWrite.disableOnDiskQueue. See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence")
)
var (
@@ -215,6 +217,10 @@ func Init() {
// to the remaining -remoteWrite.url and dropping them on the blocked queue.
dropSamplesOnFailureGlobal = *dropSamplesOnOverload || disableOnDiskQueueAny && len(*remoteWriteURLs) > 1
if *shardByURL && !flagutil.IsSet("remoteWrite.enableRerouting") {
*enableRerouting = disableOnDiskQueueAny
}
dropDanglingQueues()
// Start config reloader.
@@ -498,11 +504,13 @@ func tryPush(at *auth.Token, wr *prompb.WriteRequest, forceDropSamplesOnFailure
//
// calculateHealthyRwctxIdx will rely on the order of rwctx to be in ascending order.
func getEligibleRemoteWriteCtxs(tss []prompb.TimeSeries, forceDropSamplesOnFailure bool) ([]*remoteWriteCtx, bool) {
if !disableOnDiskQueueAny {
if (*shardByURL && !*enableRerouting) || !disableOnDiskQueueAny {
return rwctxsGlobal, true
}
// This code is applicable if at least a single remote storage has -disableOnDiskQueue
// This code is applicable when:
// 1. remoteWrite.shardByUrl is disabled and at least a single remote storage has -disableOnDiskQueue.
// 2. remoteWrite.shardByUrl is enabled and remoteWrite.enableRerouting is set to true.
rwctxs := make([]*remoteWriteCtx, 0, len(rwctxsGlobal))
for _, rwctx := range rwctxsGlobal {
if !rwctx.fq.IsWriteBlocked() {

View File

@@ -222,6 +222,9 @@ func (r *Rule) Validate() error {
if r.Expr == "" {
return fmt.Errorf("expression can't be empty")
}
if _, ok := r.Labels["__name__"]; ok {
return fmt.Errorf("invalid rule label __name__")
}
return checkOverflow(r.XXX, "rule")
}

View File

@@ -136,6 +136,9 @@ func TestRuleValidate(t *testing.T) {
if err := (&Rule{Alert: "alert"}).Validate(); err == nil {
t.Fatalf("expected empty expr error")
}
if err := (&Rule{Record: "record", Expr: "sum(test)", Labels: map[string]string{"__name__": "test"}}).Validate(); err == nil {
t.Fatalf("invalid rule label; got %s", err)
}
if err := (&Rule{Alert: "alert", Expr: "test>0"}).Validate(); err != nil {
t.Fatalf("expected valid rule; got %s", err)
}

View File

@@ -87,6 +87,7 @@ func (m *Metric) DelLabel(key string) {
for i, l := range m.Labels {
if l.Name == key {
m.Labels = append(m.Labels[:i], m.Labels[i+1:]...)
break
}
}
}
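The added break matters because the loop's range expression is evaluated once: after the in-place append, the remaining iterations read shifted elements from the stale backing array. A minimal sketch of the same single-deletion semantics using the standard slices package (assuming Go 1.21+, with an illustrative Label type):

package main

import (
	"fmt"
	"slices"
)

type Label struct {
	Name, Value string
}

// delLabel removes the first label with the given name and stops,
// mirroring the fixed DelLabel behavior.
func delLabel(labels []Label, key string) []Label {
	i := slices.IndexFunc(labels, func(l Label) bool { return l.Name == key })
	if i < 0 {
		return labels
	}
	return slices.Delete(labels, i, i+1)
}

func main() {
	labels := []Label{{"job", "foo"}, {"pod", "vmalert-0"}, {"source", "test"}}
	fmt.Println(delLabel(labels, "pod")) // [{job foo} {source test}]
}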

View File

@@ -312,9 +312,11 @@ type labelSet struct {
// On k conflicts in origin set, the original value is preferred and copied
// to processed with `exported_%k` key. The copy happens only if passed v isn't equal to origin[k] value.
func (ls *labelSet) add(k, v string) {
// do not add label with empty value, since it has no meaning.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984
// do not add label with empty value to the result, as it has no meaning:
// if the label already exists in the original query result, remove it to preserve compatibility with relabeling, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10766.
// otherwise, ignore the label, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984.
if v == "" {
delete(ls.processed, k)
return
}
ls.processed[k] = v

View File

@@ -1363,6 +1363,7 @@ func TestAlertingRule_ToLabels(t *testing.T) {
{Name: "instance", Value: "0.0.0.0:8800"},
{Name: "group", Value: "vmalert"},
{Name: "alertname", Value: "ConfigurationReloadFailure"},
{Name: "pod", Value: "vmalert-0"},
},
Values: []float64{1},
Timestamps: []int64{time.Now().UnixNano()},
@@ -1374,6 +1375,7 @@ func TestAlertingRule_ToLabels(t *testing.T) {
"group": "vmalert", // this shouldn't have effect since value in metric is equal
"invalid_label": "{{ .Values.mustRuntimeFail }}",
"empty_label": "", // this should be dropped
"pod": "", // this should remove the pod label from query result
},
Expr: "sum(vmalert_alerting_rules_error) by(instance, group, alertname) > 0",
Name: "AlertingRulesError",
@@ -1385,6 +1387,7 @@ func TestAlertingRule_ToLabels(t *testing.T) {
"group": "vmalert",
"alertname": "ConfigurationReloadFailure",
"alertgroup": "vmalert",
"pod": "vmalert-0",
"invalid_label": `error evaluating template: template: :1:298: executing "" at <.Values.mustRuntimeFail>: can't evaluate field Values in type notifier.tplData`,
}

View File

@@ -409,6 +409,9 @@ func (g *Group) Start(ctx context.Context, rw remotewrite.RWClient, rr datasourc
g.mu.Unlock()
defer g.evalCancel()
// start the interval ticker before the first evaluation,
// so that the evaluation timestamps of groups with the `eval_offset` option are also aligned,
// see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10773
t := time.NewTicker(g.Interval)
defer t.Stop()
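A hedged standalone sketch of why the ticker is created before the first evaluation (group names, intervals, and costs are illustrative): time.NewTicker pins the tick phase at creation, so groups created together keep aligned evaluation timestamps even when their first evaluations take different amounts of time.

package main

import (
	"fmt"
	"sync"
	"time"
)

func run(name string, interval, firstEvalCost time.Duration) {
	// Creating the ticker before the first evaluation pins the tick
	// schedule to "now", independent of how long that evaluation takes.
	t := time.NewTicker(interval)
	defer t.Stop()
	time.Sleep(firstEvalCost) // simulate the first evaluation
	for i := 0; i < 2; i++ {
		<-t.C
		fmt.Println(name, "evaluates at", time.Now().Format("15:04:05.0"))
	}
}

func main() {
	var wg sync.WaitGroup
	for _, cost := range []time.Duration{100 * time.Millisecond, 700 * time.Millisecond} {
		wg.Add(1)
		go func(c time.Duration) {
			defer wg.Done()
			run(fmt.Sprintf("group(first-eval=%s)", c), time.Second, c)
		}(c)
	}
	wg.Wait() // both groups tick at the same wall-clock instants
}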

View File

@@ -293,9 +293,11 @@ func (rr *RecordingRule) toTimeSeries(m datasource.Metric) prompb.TimeSeries {
}
// add extra labels configured by user
for k := range rr.Labels {
// do not add label with empty value, since it has no meaning.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984
// do not add label with empty value to the result, as it has no meaning:
// if the label already exists in the original query result, remove it to preserve compatibility with relabeling, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10766.
// otherwise, ignore the label, see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984.
if rr.Labels[k] == "" {
m.DelLabel(k)
continue
}
existingLabel := promrelabel.GetLabelByName(m.Labels, k)

View File

@@ -163,11 +163,13 @@ func TestRecordingRule_Exec(t *testing.T) {
f(&RecordingRule{
Name: "job:foo",
Labels: map[string]string{
"source": "test",
"source": "test",
"empty_label": "", // this should be dropped
"pod": "", // this should remove the pod label from query result
},
}, [][]datasource.Metric{{
metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo"),
metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar", "source", "origin"),
metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo", "pod", "vmalert-0"),
metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar", "source", "origin", "pod", "vmalert-1"),
metricWithValueAndLabels(t, 1, "__name__", "baz", "job", "baz", "source", "test"),
}}, [][]prompb.TimeSeries{{
newTimeSeries([]float64{2}, []int64{ts.UnixNano()}, []prompb.Label{

View File

@@ -52,7 +52,13 @@ var (
"alert": rule.TypeAlerting,
"record": rule.TypeRecording,
}
ruleStates = []string{"ok", "nomatch", "inactive", "firing", "pending", "unhealthy"}
// The "recovering", "noData", "normal", "error" states are used by Grafana.
// Ignore "recovering" since it is not currently acknowledged by vmalert,
// treat "noData" as an alias for "nomatch",
// treat "normal" as an alias for "inactive",
// treat "error" as an alias for "unhealthy"
ruleStates = []string{"ok", "nomatch", "inactive", "firing", "pending", "unhealthy", "recovering", "noData", "normal", "error"}
)
type requestHandler struct {
@@ -363,6 +369,15 @@ func newRulesFilter(r *http.Request) (*rulesFilter, *httpserver.ErrorWithStatusC
if !slices.Contains(ruleStates, v) {
return nil, errResponse(fmt.Errorf(`invalid parameter "state": contains not supported value %q`, v), http.StatusBadRequest)
}
// Replace grafana states with supported internal states
switch v {
case "noData":
v = "nomatch"
case "normal":
v = "inactive"
case "error":
v = "unhealthy"
}
rf.states = append(rf.states, v)
}
}

View File

@@ -357,6 +357,7 @@ func bufferRequestBody(ctx context.Context, r io.ReadCloser, userName string) (i
maxBufSize := max(requestBufferSize.IntN(), maxRequestBodySizeToRetry.IntN())
if maxBufSize <= 0 {
// Request buffering is disabled.
return r, nil
}
@@ -480,6 +481,9 @@ func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url
canRetry := !bbOK || bb.canRetry()
res, err := ui.rt.RoundTrip(req)
if err == nil {
defer func() { _ = res.Body.Close() }()
}
if errors.Is(r.Context().Err(), context.Canceled) {
// Do not retry canceled requests.
@@ -549,7 +553,6 @@ func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url
w.WriteHeader(res.StatusCode)
err = copyStreamToClient(w, res.Body)
_ = res.Body.Close()
if errors.Is(r.Context().Err(), context.Canceled) {
// Do not retry canceled requests.
@@ -792,10 +795,11 @@ func handleConcurrencyLimitError(w http.ResponseWriter, r *http.Request, err err
}
// bufferedBody serves two purposes:
// 1. Enables request retries when the body size does not exceed maxBodySize
// by fully buffering the body in memory.
// 2. Prevents slow clients from reducing effective server capacity by
// buffering the request body before acquiring a per-user concurrency slot.
//
// 1. It enables request retries when the request body size does not exceed maxBufSize
// by fully buffering the request body in memory.
// 2. It prevents slow clients from reducing effective server capacity
// by buffering the request body before acquiring a per-user concurrency slot.
//
// See bufferRequestBody for details on how bufferedBody is used.
type bufferedBody struct {
@@ -819,7 +823,7 @@ func newBufferedBody(r io.ReadCloser, buf []byte, maxBufSize int) *bufferedBody
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8051
if len(buf) < maxBufSize {
// Read the full request body into buf.
// The full request body has been already read into buf.
r = nil
}
@@ -832,7 +836,7 @@ func newBufferedBody(r io.ReadCloser, buf []byte, maxBufSize int) *bufferedBody
// Read implements io.Reader interface.
func (bb *bufferedBody) Read(p []byte) (int, error) {
if bb.cannotRetry {
return 0, fmt.Errorf("cannot read already closed body")
return 0, fmt.Errorf("cannot read already closed request body")
}
if bb.bufOffset < len(bb.buf) {
n := copy(p, bb.buf[bb.bufOffset:])
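A self-contained sketch of the buffering idea behind bufferedBody — simplified, not vmauth's actual type: the body is read into a buffer up to maxBufSize; if it fits entirely (mirroring the len(buf) < maxBufSize check above), the request can be replayed for a retry, otherwise the buffered prefix plus the live reader is served only once.

package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// bufBody is a simplified stand-in for vmauth's bufferedBody.
type bufBody struct {
	buf    []byte    // fully or partially buffered request body
	offset int       // read position inside buf
	rest   io.Reader // non-nil only if the body may not fit into buf
}

// newBufBody buffers up to maxBufSize bytes of r.
func newBufBody(r io.Reader, maxBufSize int) (*bufBody, error) {
	buf, err := io.ReadAll(io.LimitReader(r, int64(maxBufSize)))
	if err != nil {
		return nil, err
	}
	bb := &bufBody{buf: buf}
	if len(buf) == maxBufSize {
		// The body may be bigger than the buffer; keep the tail
		// reader. Such a request cannot be retried.
		bb.rest = r
	}
	return bb, nil
}

func (bb *bufBody) canRetry() bool { return bb.rest == nil }

// rewind resets the read position so a fully buffered body can be replayed.
func (bb *bufBody) rewind() { bb.offset = 0 }

func (bb *bufBody) Read(p []byte) (int, error) {
	if bb.offset < len(bb.buf) {
		n := copy(p, bb.buf[bb.offset:])
		bb.offset += n
		return n, nil
	}
	if bb.rest != nil {
		return bb.rest.Read(p)
	}
	return 0, io.EOF
}

func main() {
	bb, _ := newBufBody(strings.NewReader("metric 1 1700000000000"), 1024)
	first, _ := io.ReadAll(bb)
	bb.rewind() // a retry is possible because the body fit into the buffer
	second, _ := io.ReadAll(bb)
	fmt.Println(bb.canRetry(), bytes.Equal(first, second)) // true true
}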

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View File

@@ -23,14 +23,14 @@
"classnames": "^2.5.1",
"dayjs": "^1.11.20",
"lodash.debounce": "^4.0.8",
"marked": "^17.0.5",
"preact": "^10.29.0",
"qs": "^6.15.0",
"marked": "^18.0.0",
"preact": "^10.29.1",
"qs": "^6.15.1",
"react-input-mask": "^2.0.4",
"react-router-dom": "^7.13.2",
"react-router-dom": "^7.14.1",
"uplot": "^1.6.32",
"vite": "^8.0.7",
"web-vitals": "^5.1.0"
"vite": "^8.0.8",
"web-vitals": "^5.2.0"
},
"devDependencies": {
"@eslint/eslintrc": "^3.3.5",
@@ -39,24 +39,24 @@
"@testing-library/jest-dom": "^6.9.1",
"@testing-library/preact": "^3.2.4",
"@types/lodash.debounce": "^4.0.9",
"@types/node": "^25.5.0",
"@types/node": "^25.6.0",
"@types/qs": "^6.15.0",
"@types/react": "^19.2.14",
"@types/react-input-mask": "^3.0.6",
"@types/react-router-dom": "^5.3.3",
"@typescript-eslint/eslint-plugin": "^8.57.2",
"@typescript-eslint/parser": "^8.57.2",
"@typescript-eslint/eslint-plugin": "^8.58.2",
"@typescript-eslint/parser": "^8.58.2",
"cross-env": "^10.1.0",
"eslint": "^9.39.2",
"eslint-plugin-react": "^7.37.5",
"eslint-plugin-unused-imports": "^4.4.1",
"globals": "^17.4.0",
"globals": "^17.5.0",
"http-proxy-middleware": "^3.0.5",
"jsdom": "^29.0.1",
"postcss": "^8.5.8",
"sass-embedded": "^1.98.0",
"typescript": "^5.9.3",
"vitest": "^4.1.1"
"jsdom": "^29.0.2",
"postcss": "^8.5.10",
"sass-embedded": "^1.99.0",
"typescript": "^6.0.2",
"vitest": "^4.1.4"
},
"browserslist": {
"production": [

View File

@@ -1,7 +1,7 @@
import { useMemo } from "preact/compat";
import "./style.scss";
import { Alert as APIAlert } from "../../../types";
import { createSearchParams } from "react-router-dom";
import { Alert as APIAlert, Group } from "../../../types";
import { Link } from "react-router-dom";
import Button from "../../Main/Button/Button";
import Badges, { BadgeColor } from "../Badges";
import { formatEventTime } from "../helpers";
@@ -9,12 +9,14 @@ import {
SearchIcon,
} from "../../Main/Icons";
import CodeExample from "../../Main/CodeExample/CodeExample";
import router from "../../../router";
interface BaseAlertProps {
item: APIAlert;
group?: Group;
}
const BaseAlert = ({ item }: BaseAlertProps) => {
const BaseAlert = ({ item, group }: BaseAlertProps) => {
const query = item?.expression;
const alertLabels = item?.labels || {};
const alertLabelsItems = useMemo(() => {
@@ -24,13 +26,19 @@ const BaseAlert = ({ item }: BaseAlertProps) => {
}]));
}, [alertLabels]);
const openQueryLink = () => {
const params = {
const queryLink = useMemo(() => {
if (!group?.interval) return;
const params = new URLSearchParams({
"g0.expr": query,
"g0.end_time": ""
};
window.open(`#/?${createSearchParams(params).toString()}`, "_blank", "noopener noreferrer");
};
"g0.end_time": item.activeAt,
// Interval is the Group's evaluation interval in float seconds as present in the file. See: /app/vmalert/rule/web.go
"g0.step_input": `${group.interval}s`,
"g0.relative_time": "none",
});
return `${router.home}?${params.toString()}`;
}, [query, item.activeAt, group?.interval]);
return (
<div className="vm-explore-alerts-alert-item">
@@ -45,15 +53,22 @@ const BaseAlert = ({ item }: BaseAlertProps) => {
style={{ "text-align": "end" }}
colSpan={2}
>
<Button
size="small"
variant="outlined"
color="gray"
startIcon={<SearchIcon />}
onClick={openQueryLink}
>
<span className="vm-button-text">Run query</span>
</Button>
{queryLink && (
<Link
to={queryLink}
target={"_blank"}
rel="noreferrer"
>
<Button
size="small"
variant="outlined"
color="gray"
startIcon={<SearchIcon />}
>
<span className="vm-button-text">Run query</span>
</Button>
</Link>
)}
</td>
</tr>
<tr>

View File

@@ -1,19 +1,21 @@
import { useMemo } from "preact/compat";
import "./style.scss";
import { Rule as APIRule } from "../../../types";
import { useNavigate, createSearchParams } from "react-router-dom";
import { Group, Rule as APIRule } from "../../../types";
import { useNavigate, Link } from "react-router-dom";
import { SearchIcon, DetailsIcon } from "../../Main/Icons";
import Button from "../../Main/Button/Button";
import Alert from "../../Main/Alert/Alert";
import Badges, { BadgeColor } from "../Badges";
import { formatDuration, formatEventTime } from "../helpers";
import CodeExample from "../../Main/CodeExample/CodeExample";
import router from "../../../router";
interface BaseRuleProps {
item: APIRule;
group?: Group;
}
const BaseRule = ({ item }: BaseRuleProps) => {
const BaseRule = ({ item, group }: BaseRuleProps) => {
const query = item?.query;
const navigate = useNavigate();
const openAlertLink = (id: string) => {
@@ -33,13 +35,19 @@ const BaseRule = ({ item }: BaseRuleProps) => {
}]));
}, [ruleLabels]);
const openQueryLink = () => {
const params = {
const queryLink = useMemo(() => {
if (!group?.interval) return;
const params = new URLSearchParams({
"g0.expr": query,
"g0.end_time": ""
};
window.open(`#/?${createSearchParams(params).toString()}`, "_blank", "noopener noreferrer");
};
"g0.end_time": item.lastEvaluation,
// Interval is the Group's evaluation interval in float seconds as present in the file. See: /app/vmalert/rule/web.go
"g0.step_input": `${group.interval}s`,
"g0.relative_time": "none",
});
return `${router.home}?${params.toString()}`;
}, [query, item.lastEvaluation, group?.interval]);
return (
<div className="vm-explore-alerts-rule-item">
@@ -54,15 +62,22 @@ const BaseRule = ({ item }: BaseRuleProps) => {
style={{ "text-align": "end" }}
colSpan={2}
>
<Button
size="small"
variant="outlined"
color="gray"
startIcon={<SearchIcon />}
onClick={openQueryLink}
>
<span className="vm-button-text">Run query</span>
</Button>
{queryLink && (
<Link
to={queryLink}
target={"_blank"}
rel="noreferrer"
>
<Button
size="small"
variant="outlined"
color="gray"
startIcon={<SearchIcon />}
>
<span className="vm-button-text">Run query</span>
</Button>
</Link>
)}
</td>
</tr>
<tr>

View File

@@ -2,15 +2,16 @@ import { FC } from "preact/compat";
import ItemHeader from "../ItemHeader";
import Accordion from "../../Main/Accordion/Accordion";
import "./style.scss";
import { Rule as APIRule } from "../../../types";
import { Group, Rule as APIRule } from "../../../types";
import BaseRule from "../BaseRule";
interface RuleProps {
states: Record<string, number>;
rule: APIRule;
group: Group;
}
const Rule: FC<RuleProps> = ({ states, rule }) => {
const Rule: FC<RuleProps> = ({ states, rule, group }) => {
const state = Object.keys(states).length > 0 ? Object.keys(states)[0] : "ok";
return (
<div className={`vm-explore-alerts-rule vm-badge-item ${state.replace(" ", "-")}`}>
@@ -25,7 +26,10 @@ const Rule: FC<RuleProps> = ({ states, rule }) => {
name={rule.name}
/>}
>
<BaseRule item={rule} />
<BaseRule
item={rule}
group={group}
/>
</Accordion>
</div>
);

View File

@@ -50,7 +50,6 @@ const RulesHeader = ({
label="Rule type"
placeholder="Please select rule type"
onChange={onChangeRuleType}
autofocus={!!types.length && !isMobile}
includeAll
searchable
/>

View File

@@ -17,7 +17,7 @@ export const formatDuration = (raw: number) => {
export const formatEventTime = (raw: string) => {
const t = dayjs(raw);
return t.year() <= 1 ? "Never" : t.format("DD MMM YYYY HH:mm:ss");
return t.year() <= 1 ? "Never" : t.tz().format("DD MMM YYYY HH:mm:ss");
};
export const getStates = (rule: Rule) => {

View File

@@ -2,10 +2,11 @@ import Spinner from "../../components/Main/Spinner/Spinner";
import Alert from "../../components/Main/Alert/Alert";
import { useFetchItem } from "./hooks/useFetchItem";
import "./style.scss";
import { Alert as APIAlert } from "../../types";
import { Alert as APIAlert, Group as APIGroup } from "../../types";
import ItemHeader from "../../components/ExploreAlerts/ItemHeader";
import BaseAlert from "../../components/ExploreAlerts/BaseAlert";
import Modal from "../../components/Main/Modal/Modal";
import { useFetchGroup } from "./hooks/useFetchGroup";
interface ExploreAlertProps {
groupId: string;
@@ -17,10 +18,19 @@ interface ExploreAlertProps {
const ExploreAlert = ({ groupId, id, mode, onClose }: ExploreAlertProps) => {
const {
item,
isLoading,
error,
isLoading: isLoadingItem,
error: errorItem,
} = useFetchItem<APIAlert>({ groupId, id, mode });
const {
group,
isLoading: isLoadingGroup,
error: errorGroup,
} = useFetchGroup<APIGroup>({ id: groupId });
const error = errorItem || errorGroup;
const isLoading = isLoadingItem || isLoadingGroup;
if (isLoading) return (
<Spinner />
);
@@ -51,7 +61,12 @@ const ExploreAlert = ({ groupId, id, mode, onClose }: ExploreAlertProps) => {
onClose={onClose}
>
<div className="vm-explore-alerts">
{item && (<BaseAlert item={item} />) || (
{item ? (
<BaseAlert
item={item}
group={group}
/>
) : (
<Alert variant="info">{noItemFound}</Alert>
)}
</div>

View File

@@ -2,11 +2,12 @@ import Spinner from "../../components/Main/Spinner/Spinner";
import Alert from "../../components/Main/Alert/Alert";
import { useFetchItem } from "./hooks/useFetchItem";
import "./style.scss";
import { Rule as APIRule } from "../../types";
import { Group as APIGroup, Rule as APIRule } from "../../types";
import ItemHeader from "../../components/ExploreAlerts/ItemHeader";
import BaseRule from "../../components/ExploreAlerts/BaseRule";
import Modal from "../../components/Main/Modal/Modal";
import { getStates } from "../../components/ExploreAlerts/helpers";
import { useFetchGroup } from "./hooks/useFetchGroup";
interface ExploreRuleProps {
groupId: string;
@@ -18,10 +19,19 @@ interface ExploreRuleProps {
const ExploreRule = ({ groupId, id, mode, onClose }: ExploreRuleProps) => {
const {
item,
isLoading,
error,
isLoading: isLoadingItem,
error: errorItem,
} = useFetchItem<APIRule>({ groupId, id, mode });
const {
group,
isLoading: isLoadingGroup,
error: errorGroup,
} = useFetchGroup<APIGroup>({ id: groupId });
const error = errorItem || errorGroup;
const isLoading = isLoadingItem || isLoadingGroup;
if (isLoading) return (
<Spinner />
);
@@ -49,7 +59,12 @@ const ExploreRule = ({ groupId, id, mode, onClose }: ExploreRuleProps) => {
onClose={onClose}
>
<div className="vm-explore-alerts">
{item && (<BaseRule item={item} />) || (
{item ? (
<BaseRule
item={item}
group={group}
/>
) : (
<Alert variant="info">{noItemFound}</Alert>
)}
</div>

View File

@@ -132,7 +132,7 @@ const ExploreRules: FC = () => {
newParams.set("page_num", "1");
setSearchParams(newParams);
const changes = getChanges(title, states);
setStates(changes.length == allStates.length ? [] : changes);
setStates(changes.length === allStates.length ? [] : changes);
}, [states, searchParams]);
const handleChangeRuleType = useCallback((title: string) => {
@@ -186,6 +186,7 @@ const ExploreRules: FC = () => {
<Rule
key={`rule-${rule.id}`}
rule={rule}
group={group}
states={getStates(rule)}
/>
))}

View File

@@ -15,13 +15,12 @@
"forceConsistentCasingInFileNames": true,
"noFallthroughCasesInSwitch": true,
"module": "esnext",
"moduleResolution": "node",
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"noEmit": true,
"jsx": "react-jsx",
"jsxImportSource": "preact",
"downlevelIteration": true,
"noUnusedLocals": true,
"paths": {
"react": ["./node_modules/preact/compat/"],
@@ -32,5 +31,8 @@
},
"include": [
"src"
],
"exclude": [
"scripts/**/*.ts"
]
}

View File

@@ -33,37 +33,41 @@ func (c *Client) CloseConnections() {
c.httpCli.CloseIdleConnections()
}
// Get sends a HTTP GET request, returns
// Get sends an HTTP GET request, returns
// the response body and status code to the caller.
func (c *Client) Get(t *testing.T, url string) (string, int) {
func (c *Client) Get(t *testing.T, url string, headers http.Header) (string, int) {
t.Helper()
return c.do(t, http.MethodGet, url, "", nil)
return c.do(t, http.MethodGet, url, nil, headers)
}
// Post sends a HTTP POST request, returns
// Post sends an HTTP POST request, returns
// the response body and status code to the caller.
func (c *Client) Post(t *testing.T, url, contentType string, data []byte) (string, int) {
func (c *Client) Post(t *testing.T, url string, data []byte, headers http.Header) (string, int) {
t.Helper()
return c.do(t, http.MethodPost, url, contentType, data)
return c.do(t, http.MethodPost, url, data, headers)
}
// PostForm sends a HTTP POST request containing the POST-form data, returns
// PostForm sends an HTTP POST request containing the POST-form data with the attached headers, returns
// the response body and status code to the caller.
func (c *Client) PostForm(t *testing.T, url string, data url.Values) (string, int) {
func (c *Client) PostForm(t *testing.T, url string, data url.Values, headers http.Header) (string, int) {
t.Helper()
return c.Post(t, url, "application/x-www-form-urlencoded", []byte(data.Encode()))
if headers == nil {
headers = make(http.Header)
}
headers.Set("Content-Type", "application/x-www-form-urlencoded")
return c.Post(t, url, []byte(data.Encode()), headers)
}
// Delete sends a HTTP DELETE request and returns the response body and status code
// Delete sends an HTTP DELETE request and returns the response body and status code
// to the caller.
func (c *Client) Delete(t *testing.T, url string) (string, int) {
t.Helper()
return c.do(t, http.MethodDelete, url, "", nil)
return c.do(t, http.MethodDelete, url, nil, nil)
}
// do prepares a HTTP request, sends it to the server, receives the response
// do prepares an HTTP request, sends it to the server, receives the response
// from the server, returns the response body and status code to the caller.
func (c *Client) do(t *testing.T, method, url, contentType string, data []byte) (string, int) {
func (c *Client) do(t *testing.T, method, url string, data []byte, headers http.Header) (string, int) {
t.Helper()
req, err := http.NewRequest(method, url, bytes.NewReader(data))
@@ -71,9 +75,7 @@ func (c *Client) do(t *testing.T, method, url, contentType string, data []byte)
t.Fatalf("could not create a HTTP request: %v", err)
}
if len(contentType) > 0 {
req.Header.Add("Content-Type", contentType)
}
req.Header = headers
res, err := c.httpCli.Do(req)
if err != nil {
t.Fatalf("could not send HTTP request: %v", err)
@@ -135,7 +137,7 @@ func (app *ServesMetrics) GetIntMetric(t *testing.T, metricName string) int {
func (app *ServesMetrics) GetMetric(t *testing.T, metricName string) float64 {
t.Helper()
metrics, statusCode := app.cli.Get(t, app.metricsURL)
metrics, statusCode := app.cli.Get(t, app.metricsURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
@@ -161,7 +163,7 @@ func (app *ServesMetrics) GetMetricsByPrefix(t *testing.T, prefix string) []floa
values := []float64{}
metrics, statusCode := app.cli.Get(t, app.metricsURL)
metrics, statusCode := app.cli.Get(t, app.metricsURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
@@ -190,7 +192,7 @@ func (app *ServesMetrics) GetMetricsByRegexp(t *testing.T, re *regexp.Regexp) []
values := []float64{}
metrics, statusCode := app.cli.Get(t, app.metricsURL)
metrics, statusCode := app.cli.Get(t, app.metricsURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
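Under the new signatures, test code attaches arbitrary headers per request. A hedged standalone sketch of the same pattern using net/http directly (the AccountID header name is illustrative, and the server is a local stub, not a VictoriaMetrics component):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/url"
	"strings"
)

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		_ = r.ParseForm()
		fmt.Fprintf(w, "tenant=%s query=%s", r.Header.Get("AccountID"), r.Form.Get("query"))
	}))
	defer srv.Close()

	// Mirrors apptest's PostForm: a form-encoded body plus caller headers,
	// with Content-Type set by the helper rather than a dedicated argument.
	values := url.Values{"query": {"up"}}
	req, _ := http.NewRequest(http.MethodPost, srv.URL, strings.NewReader(values.Encode()))
	headers := make(http.Header)
	headers.Set("Content-Type", "application/x-www-form-urlencoded")
	headers.Set("AccountID", "1") // illustrative per-request header
	req.Header = headers

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}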

View File

@@ -4,6 +4,7 @@ import (
"encoding/json"
"fmt"
"math"
"net/http"
"net/url"
"slices"
"sort"
@@ -88,6 +89,23 @@ type QueryOpts struct {
MaxLookback string
LatencyOffset string
Format string
NoCache string
Headers http.Header
}
// getTenant returns tenant with optional default value
func (qos *QueryOpts) getTenant() string {
if qos.Tenant == "" {
return "0"
}
return qos.Tenant
}
func (qos *QueryOpts) getHeaders() http.Header {
if qos.Headers == nil {
qos.Headers = make(http.Header)
}
return qos.Headers
}
func (qos *QueryOpts) asURLValues() url.Values {
@@ -112,18 +130,11 @@ func (qos *QueryOpts) asURLValues() url.Values {
addNonEmpty("max_lookback", qos.MaxLookback)
addNonEmpty("latency_offset", qos.LatencyOffset)
addNonEmpty("format", qos.Format)
addNonEmpty("nocache", qos.NoCache)
return uv
}
// getTenant returns tenant with optional default value
func (qos *QueryOpts) getTenant() string {
if qos.Tenant == "" {
return "0"
}
return qos.Tenant
}
// PrometheusAPIV1QueryResponse is an inmemory representation of the
// /prometheus/api/v1/query or /prometheus/api/v1/query_range response.
type PrometheusAPIV1QueryResponse struct {

View File

@@ -28,7 +28,6 @@ func TestSingleBackupRestore(t *testing.T) {
return tc.MustStartVmsingle("vmsingle", []string{
"-storageDataPath=" + storageDataPath,
"-retentionPeriod=100y",
"-search.maxStalenessInterval=1m",
})
},
stopSUT: func() {
@@ -70,9 +69,7 @@ func TestClusterBackupRestore(t *testing.T) {
VminsertInstance: "vminsert",
VminsertFlags: []string{},
VmselectInstance: "vmselect",
VmselectFlags: []string{
"-search.maxStalenessInterval=1m",
},
VmselectFlags: []string{},
})
},
stopSUT: func() {
@@ -100,15 +97,14 @@ func TestClusterBackupRestore(t *testing.T) {
func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
t := tc.T()
const msecPerMinute = 60 * 1000
genData := func(count int, prefix string, start int64) (recs []string, wantSeries []map[string]string, wantQueryResults []*apptest.QueryResult) {
genData := func(count int, prefix string, start, step int64) (recs []string, wantSeries []map[string]string, wantQueryResults []*apptest.QueryResult) {
recs = make([]string, count)
wantSeries = make([]map[string]string, count)
wantQueryResults = make([]*apptest.QueryResult, count)
for i := range count {
name := fmt.Sprintf("%s_%03d", prefix, i)
value := float64(i)
timestamp := start + int64(i)*msecPerMinute
timestamp := start + int64(i)*step
recs[i] = fmt.Sprintf("%s %f %d", name, value, timestamp)
wantSeries[i] = map[string]string{"__name__": name}
@@ -148,15 +144,17 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
// assertSeries retrieves all data from the storage and compares it with the
// expected result.
assertQueryResults := func(app apptest.PrometheusQuerier, query string, start, end int64, want []*apptest.QueryResult) {
assertQueryResults := func(app apptest.PrometheusQuerier, query string, start, end, step int64, want []*apptest.QueryResult) {
t.Helper()
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/query_range response",
Got: func() any {
return app.PrometheusAPIV1QueryRange(t, query, apptest.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
Step: "60s",
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
Step: fmt.Sprintf("%dms", step),
MaxLookback: fmt.Sprintf("%dms", step-1),
NoCache: "1",
})
},
Want: &apptest.PrometheusAPIV1QueryResponse{
@@ -167,7 +165,6 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
},
},
FailNow: true,
Retries: 300,
})
}
@@ -194,8 +191,9 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
// below.
const numMetrics = 1000
// With 1000 metrics (one per minute), the time range spans 2 months.
end := time.Date(2025, 3, 1, 10, 0, 0, 0, time.UTC).UnixMilli()
start := end - numMetrics*msecPerMinute
start := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
end := time.Date(2025, 3, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
step := (end - start) / numMetrics
// Verify backup/restore:
//
@@ -209,8 +207,8 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
// - Start vmsingle
// - Ensure that the queries return batch1 data only.
batch1Data, wantBatch1Series, wantBatch1QueryResults := genData(numMetrics, "batch1", start)
batch2Data, wantBatch2Series, wantBatch2QueryResults := genData(numMetrics, "batch2", start)
batch1Data, wantBatch1Series, wantBatch1QueryResults := genData(numMetrics, "batch1", start, step)
batch2Data, wantBatch2Series, wantBatch2QueryResults := genData(numMetrics, "batch2", start, step)
wantBatch12Series := slices.Concat(wantBatch1Series, wantBatch2Series)
wantBatch12QueryResults := slices.Concat(wantBatch1QueryResults, wantBatch2QueryResults)
@@ -219,13 +217,14 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
sut.PrometheusAPIV1ImportPrometheus(t, batch1Data, apptest.QueryOpts{})
sut.ForceFlush(t)
assertSeries(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1Series)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1QueryResults)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, step, wantBatch1QueryResults)
createBackup(sut, "batch1")
sut.PrometheusAPIV1ImportPrometheus(t, batch2Data, apptest.QueryOpts{})
sut.ForceFlush(t)
assertSeries(sut, `{__name__=~"batch(1|2).*"}`, start, end, wantBatch12Series)
assertQueryResults(sut, `{__name__=~"batch(1|2).*"}`, start, end, wantBatch12QueryResults)
assertQueryResults(sut, `{__name__=~"batch(1|2).*"}`, start, end, step, wantBatch12QueryResults)
createBackup(sut, "batch12")
opts.stopSUT()
@@ -235,5 +234,5 @@ func testBackupRestore(tc *apptest.TestCase, opts testBackupRestoreOpts) {
sut = opts.startSUT()
assertSeries(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1Series)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, wantBatch1QueryResults)
assertQueryResults(sut, `{__name__=~"batch1.*"}`, start, end, step, wantBatch1QueryResults)
}
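The step arithmetic that replaces the fixed 60s interval, shown as a hedged standalone computation: with 1000 metrics spread evenly from 2025-01-01 to 2025-03-01, consecutive samples land exactly one step apart, and MaxLookback = step-1 guarantees each query_range point matches at most one sample regardless of the interval's absolute size.

package main

import (
	"fmt"
	"time"
)

func main() {
	const numMetrics = 1000
	start := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
	end := time.Date(2025, 3, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
	step := (end - start) / numMetrics // 59 days spread over 1000 samples
	fmt.Printf("step=%dms (~%.1f min), maxLookback=%dms\n",
		step, float64(step)/60000, step-1)
	// step=5097600ms (~85.0 min), maxLookback=5097599ms
}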

View File

@@ -76,11 +76,13 @@ func (app *Vmagent) APIV1ImportPrometheus(t *testing.T, records []string, opts Q
// Flushing may still be in progress on the function return.
//
// See https://docs.victoriametrics.com/victoriametrics/url-examples/#apiv1importprometheus
func (app *Vmagent) APIV1ImportPrometheusNoWaitFlush(t *testing.T, records []string, _ QueryOpts) {
func (app *Vmagent) APIV1ImportPrometheusNoWaitFlush(t *testing.T, records []string, opts QueryOpts) {
t.Helper()
data := []byte(strings.Join(records, "\n"))
_, statusCode := app.cli.Post(t, app.apiV1ImportPrometheusURL, "text/plain", data)
headers := opts.getHeaders()
headers.Set("Content-Type", "text/plain")
_, statusCode := app.cli.Post(t, app.apiV1ImportPrometheusURL, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}

View File

@@ -57,7 +57,7 @@ func StartVminsert(instance string, flags []string, cli *Client, output io.Write
extractREs = append(extractREs, regexp.MustCompile(logRecord))
}
app, stderrExtracts, err := startApp(instance, "../../bin/vminsert", flags, &appOptions{
app, stderrExtracts, err := startApp(instance, "../../bin/vminsert-race", flags, &appOptions{
defaultFlags: map[string]string{
"-httpListenAddr": "127.0.0.1:0",
"-clusternativeListenAddr": "127.0.0.1:0",
@@ -114,8 +114,10 @@ func (app *Vminsert) InfluxWrite(t *testing.T, records []string, opts QueryOpts)
}
data := []byte(strings.Join(records, "\n"))
headers := opts.getHeaders()
headers.Set("Content-Type", "text/plain")
app.sendBlocking(t, len(records), func() {
_, statusCode := app.cli.Post(t, url, "text/plain", data)
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -146,8 +148,10 @@ func (app *Vminsert) PrometheusAPIV1ImportCSV(t *testing.T, records []string, op
url += "?" + uvs
}
data := []byte(strings.Join(records, "\n"))
headers := opts.getHeaders()
headers.Set("Content-Type", "text/plain")
app.sendBlocking(t, len(records), func() {
_, statusCode := app.cli.Post(t, url, "text/plain", data)
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -168,8 +172,10 @@ func (app *Vminsert) PrometheusAPIV1ImportNative(t *testing.T, data []byte, opts
if len(uvs) > 0 {
url += "?" + uvs
}
headers := opts.getHeaders()
headers.Set("Content-Type", "text/plain")
app.sendBlocking(t, 1, func() {
_, statusCode := app.cli.Post(t, url, "text/plain", data)
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -191,8 +197,10 @@ func (app *Vminsert) OpenTSDBAPIPut(t *testing.T, records []string, opts QueryOp
url += "?" + uvs
}
data := []byte("[" + strings.Join(records, ",") + "]")
headers := opts.getHeaders()
headers.Set("Content-Type", "application/json")
app.sendBlocking(t, len(records), func() {
_, statusCode := app.cli.Post(t, url, "application/json", data)
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -211,8 +219,10 @@ func (app *Vminsert) PrometheusAPIV1Write(t *testing.T, wr prompb.WriteRequest,
if prommetadata.IsEnabled() {
recordsCount += len(wr.Metadata)
}
headers := opts.getHeaders()
headers.Set("Content-Type", "application/x-protobuf")
app.sendBlocking(t, recordsCount, func() {
_, statusCode := app.cli.Post(t, url, "application/x-protobuf", data)
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -237,8 +247,22 @@ func (app *Vminsert) PrometheusAPIV1ImportPrometheus(t *testing.T, records []str
data := []byte(strings.Join(records, "\n"))
var recordsCount int
var metadataRecords int
uniqueMetadataMetricNames := make(map[string]struct{})
for _, record := range records {
if strings.HasPrefix(record, "#") {
// metric metadata has the following format:
// # HELP importprometheus_series
// # TYPE importprometheus_series
// it results in a single metadata record
if strings.HasPrefix(record, "# ") {
metadataItems := strings.Split(record, " ")
if len(metadataItems) < 3 {
t.Fatalf("BUG: unexpected metadata format=%q", record)
}
metricName := metadataItems[2]
if _, ok := uniqueMetadataMetricNames[metricName]; ok {
continue
}
uniqueMetadataMetricNames[metricName] = struct{}{}
metadataRecords++
continue
}
@@ -247,8 +271,10 @@ func (app *Vminsert) PrometheusAPIV1ImportPrometheus(t *testing.T, records []str
if prommetadata.IsEnabled() {
recordsCount += metadataRecords
}
headers := opts.getHeaders()
headers.Set("Content-Type", "text/plain")
app.sendBlocking(t, recordsCount, func() {
_, statusCode := app.cli.Post(t, url, "text/plain", data)
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -268,8 +294,10 @@ func (app *Vminsert) ZabbixConnectorHistory(t *testing.T, records []string, opts
url += "?" + uvs
}
data := []byte(strings.Join(records, "\n"))
headers := opts.getHeaders()
headers.Set("Content-Type", "application/json")
app.sendBlocking(t, len(records), func() {
_, statusCode := app.cli.Post(t, url, "application/json", data)
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}

View File

@@ -25,7 +25,7 @@ type Vmselect struct {
// sets the default flags and populates the app instance state with runtime
// values extracted from the application log (such as httpListenAddr)
func StartVmselect(instance string, flags []string, cli *Client, output io.Writer) (*Vmselect, error) {
app, stderrExtracts, err := startApp(instance, "../../bin/vmselect", flags, &appOptions{
app, stderrExtracts, err := startApp(instance, "../../bin/vmselect-race", flags, &appOptions{
defaultFlags: map[string]string{
"-httpListenAddr": "127.0.0.1:0",
"-clusternativeListenAddr": "127.0.0.1:0",
@@ -76,7 +76,7 @@ func (app *Vmselect) PrometheusAPIV1Export(t *testing.T, query string, opts Quer
values := opts.asURLValues()
values.Add("match[]", query)
values.Add("format", "promapi")
res, _ := app.cli.PostForm(t, exportURL, values)
res, _ := app.cli.PostForm(t, exportURL, values, opts.Headers)
return NewPrometheusAPIV1QueryResponse(t, res)
}
@@ -92,7 +92,7 @@ func (app *Vmselect) PrometheusAPIV1ExportNative(t *testing.T, query string, opt
values := opts.asURLValues()
values.Add("match[]", query)
values.Add("format", "promapi")
res, _ := app.cli.PostForm(t, exportURL, values)
res, _ := app.cli.PostForm(t, exportURL, values, opts.Headers)
return []byte(res)
}
@@ -108,7 +108,7 @@ func (app *Vmselect) PrometheusAPIV1Query(t *testing.T, query string, opts Query
values := opts.asURLValues()
values.Add("query", query)
res, _ := app.cli.PostForm(t, queryURL, values)
res, _ := app.cli.PostForm(t, queryURL, values, opts.Headers)
return NewPrometheusAPIV1QueryResponse(t, res)
}
@@ -124,7 +124,7 @@ func (app *Vmselect) PrometheusAPIV1QueryRange(t *testing.T, query string, opts
values := opts.asURLValues()
values.Add("query", query)
res, _ := app.cli.PostForm(t, queryURL, values)
res, _ := app.cli.PostForm(t, queryURL, values, opts.Headers)
return NewPrometheusAPIV1QueryResponse(t, res)
}
@@ -139,7 +139,7 @@ func (app *Vmselect) PrometheusAPIV1Series(t *testing.T, matchQuery string, opts
values := opts.asURLValues()
values.Add("match[]", matchQuery)
res, _ := app.cli.PostForm(t, seriesURL, values)
res, _ := app.cli.PostForm(t, seriesURL, values, opts.Headers)
return NewPrometheusAPIV1SeriesResponse(t, res)
}
@@ -153,7 +153,7 @@ func (app *Vmselect) PrometheusAPIV1SeriesCount(t *testing.T, opts QueryOpts) *P
seriesURL := fmt.Sprintf("http://%s/select/%s/prometheus/api/v1/series/count", app.httpListenAddr, opts.getTenant())
values := opts.asURLValues()
res, _ := app.cli.PostForm(t, seriesURL, values)
res, _ := app.cli.PostForm(t, seriesURL, values, opts.Headers)
return NewPrometheusAPIV1SeriesCountResponse(t, res)
}
@@ -168,7 +168,7 @@ func (app *Vmselect) PrometheusAPIV1Labels(t *testing.T, matchQuery string, opts
values.Add("match[]", matchQuery)
queryURL := fmt.Sprintf("http://%s/select/%s/prometheus/api/v1/labels", app.httpListenAddr, opts.getTenant())
res, _ := app.cli.PostForm(t, queryURL, values)
res, _ := app.cli.PostForm(t, queryURL, values, opts.Headers)
return NewPrometheusAPIV1LabelsResponse(t, res)
}
@@ -183,7 +183,7 @@ func (app *Vmselect) PrometheusAPIV1LabelValues(t *testing.T, labelName, matchQu
values.Add("match[]", matchQuery)
queryURL := fmt.Sprintf("http://%s/select/%s/prometheus/api/v1/label/%s/values", app.httpListenAddr, opts.getTenant(), labelName)
res, _ := app.cli.PostForm(t, queryURL, values)
res, _ := app.cli.PostForm(t, queryURL, values, opts.Headers)
return NewPrometheusAPIV1LabelValuesResponse(t, res)
}
@@ -197,7 +197,7 @@ func (app *Vmselect) PrometheusAPIV1Metadata(t *testing.T, metric string, limit
values.Add("limit", strconv.Itoa(limit))
queryURL := fmt.Sprintf("http://%s/select/%s/prometheus/api/v1/metadata", app.httpListenAddr, opts.getTenant())
res, _ := app.cli.PostForm(t, queryURL, values)
res, _ := app.cli.PostForm(t, queryURL, values, opts.Headers)
return NewPrometheusAPIV1Metadata(t, res)
}
@@ -212,7 +212,7 @@ func (app *Vmselect) APIV1AdminTSDBDeleteSeries(t *testing.T, matchQuery string,
values := opts.asURLValues()
values.Add("match[]", matchQuery)
res, statusCode := app.cli.PostForm(t, queryURL, values)
res, statusCode := app.cli.PostForm(t, queryURL, values, opts.Headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusNoContent, res)
}
@@ -231,7 +231,7 @@ func (app *Vmselect) MetricNamesStats(t *testing.T, limit, le, matchPattern stri
values.Add("match_pattern", matchPattern)
queryURL := fmt.Sprintf("http://%s/select/%s/prometheus/api/v1/status/metric_names_stats", app.httpListenAddr, opts.getTenant())
res, statusCode := app.cli.PostForm(t, queryURL, values)
res, statusCode := app.cli.PostForm(t, queryURL, values, opts.Headers)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusOK, res)
}
@@ -251,7 +251,7 @@ func (app *Vmselect) MetricNamesStatsReset(t *testing.T, opts QueryOpts) {
values := opts.asURLValues()
queryURL := fmt.Sprintf("http://%s/admin/api/v1/admin/status/metric_names_stats/reset", app.httpListenAddr)
res, statusCode := app.cli.PostForm(t, queryURL, values)
res, statusCode := app.cli.PostForm(t, queryURL, values, opts.Headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusNoContent, res)
}
@@ -275,7 +275,7 @@ func (app *Vmselect) APIV1StatusTSDB(t *testing.T, matchQuery string, date strin
addNonEmpty("topN", topN)
addNonEmpty("date", date)
res, statusCode := app.cli.PostForm(t, seriesURL, values)
res, statusCode := app.cli.PostForm(t, seriesURL, values, opts.Headers)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusOK, res)
}
@@ -295,7 +295,7 @@ func (app *Vmselect) GraphiteMetricsIndex(t *testing.T, opts QueryOpts) Graphite
t.Helper()
seriesURL := fmt.Sprintf("http://%s/select/%s/graphite/metrics/index.json", app.httpListenAddr, opts.getTenant())
res, statusCode := app.cli.Get(t, seriesURL)
res, statusCode := app.cli.Get(t, seriesURL, opts.Headers)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusOK, res)
}
@@ -317,7 +317,7 @@ func (app *Vmselect) GraphiteTagsTagSeries(t *testing.T, record string, opts Que
values := opts.asURLValues()
values.Add("path", record)
_, statusCode := app.cli.PostForm(t, url, values)
_, statusCode := app.cli.PostForm(t, url, values, opts.Headers)
if got, want := statusCode, http.StatusNotImplemented; got != want {
t.Fatalf("unexpected status code: got %d, want %d", got, want)
}
@@ -332,7 +332,7 @@ func (app *Vmselect) GraphiteTagsTagMultiSeries(t *testing.T, records []string,
values.Add("path", rec)
}
_, statusCode := app.cli.PostForm(t, url, values)
_, statusCode := app.cli.PostForm(t, url, values, opts.Headers)
if got, want := statusCode, http.StatusNotImplemented; got != want {
t.Fatalf("unexpected status code: got %d, want %d", got, want)
}
@@ -343,7 +343,7 @@ func (app *Vmselect) APIV1AdminTenants(t *testing.T) *AdminTenantsResponse {
t.Helper()
tenantsURL := fmt.Sprintf("http://%s/admin/tenants", app.httpListenAddr)
res, statusCode := app.cli.Get(t, tenantsURL)
res, statusCode := app.cli.Get(t, tenantsURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusOK, res)
}

View File

@@ -98,7 +98,7 @@ func StartVmsingleAt(instance, binary string, flags []string, cli *Client, outpu
func (app *Vmsingle) ForceFlush(t *testing.T) {
t.Helper()
_, statusCode := app.cli.Get(t, app.forceFlushURL)
_, statusCode := app.cli.Get(t, app.forceFlushURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
@@ -108,7 +108,7 @@ func (app *Vmsingle) ForceFlush(t *testing.T) {
func (app *Vmsingle) ForceMerge(t *testing.T) {
t.Helper()
_, statusCode := app.cli.Get(t, app.forceMergeURL)
_, statusCode := app.cli.Get(t, app.forceMergeURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
@@ -130,8 +130,9 @@ func (app *Vmsingle) InfluxWrite(t *testing.T, records []string, opts QueryOpts)
if len(uvs) > 0 {
url += "?" + uvs
}
_, statusCode := app.cli.Post(t, url, "text/plain", data)
headers := opts.getHeaders()
headers.Set("Content-Type", "text/plain")
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -161,7 +162,9 @@ func (app *Vmsingle) PrometheusAPIV1ImportCSV(t *testing.T, records []string, op
url += "?" + uvs
}
data := []byte(strings.Join(records, "\n"))
_, statusCode := app.cli.Post(t, url, "text/plain", data)
headers := opts.getHeaders()
headers.Set("Content-Type", "text/plain")
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -181,7 +184,9 @@ func (app *Vmsingle) PrometheusAPIV1ImportNative(t *testing.T, data []byte, opts
if len(uvs) > 0 {
url += "?" + uvs
}
_, statusCode := app.cli.Post(t, url, "text/plain", data)
headers := opts.getHeaders()
headers.Set("Content-Type", "text/plain")
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -203,7 +208,9 @@ func (app *Vmsingle) OpenTSDBAPIPut(t *testing.T, records []string, opts QueryOp
url += "?" + uvs
}
data := []byte("[" + strings.Join(records, ",") + "]")
_, statusCode := app.cli.Post(t, url, "text/plain", data)
headers := opts.getHeaders()
headers.Set("Content-Type", "text/plain")
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -212,11 +219,13 @@ func (app *Vmsingle) OpenTSDBAPIPut(t *testing.T, records []string, opts QueryOp
// PrometheusAPIV1Write is a test helper function that inserts a
// collection of records in Prometheus remote-write format by sending a HTTP
// POST request to /prometheus/api/v1/write vmsingle endpoint.
func (app *Vmsingle) PrometheusAPIV1Write(t *testing.T, wr prompb.WriteRequest, _ QueryOpts) {
func (app *Vmsingle) PrometheusAPIV1Write(t *testing.T, wr prompb.WriteRequest, opts QueryOpts) {
t.Helper()
data := snappy.Encode(nil, wr.MarshalProtobuf(nil))
_, statusCode := app.cli.Post(t, app.prometheusAPIV1WriteURL, "application/x-protobuf", data)
headers := opts.getHeaders()
headers.Set("Content-Type", "application/x-protobuf")
_, statusCode := app.cli.Post(t, app.prometheusAPIV1WriteURL, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -237,9 +246,10 @@ func (app *Vmsingle) PrometheusAPIV1ImportPrometheus(t *testing.T, records []str
if len(uvs) > 0 {
url += "?" + uvs
}
headers := opts.getHeaders()
headers.Set("Content-Type", "text/plain")
data := []byte(strings.Join(records, "\n"))
_, statusCode := app.cli.Post(t, url, "text/plain", data)
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusNoContent)
}
@@ -256,7 +266,7 @@ func (app *Vmsingle) PrometheusAPIV1Export(t *testing.T, query string, opts Quer
values.Add("match[]", query)
values.Add("format", "promapi")
res, _ := app.cli.PostForm(t, app.prometheusAPIV1ExportURL, values)
res, _ := app.cli.PostForm(t, app.prometheusAPIV1ExportURL, values, opts.Headers)
return NewPrometheusAPIV1QueryResponse(t, res)
}
@@ -273,7 +283,7 @@ func (app *Vmsingle) PrometheusAPIV1ExportNative(t *testing.T, query string, opt
values.Add("match[]", query)
values.Add("format", "promapi")
res, _ := app.cli.PostForm(t, app.prometheusAPIV1ExportNativeURL, values)
res, _ := app.cli.PostForm(t, app.prometheusAPIV1ExportNativeURL, values, opts.Headers)
return []byte(res)
}
@@ -287,7 +297,7 @@ func (app *Vmsingle) PrometheusAPIV1Query(t *testing.T, query string, opts Query
values := opts.asURLValues()
values.Add("query", query)
res, _ := app.cli.PostForm(t, app.prometheusAPIV1QueryURL, values)
res, _ := app.cli.PostForm(t, app.prometheusAPIV1QueryURL, values, opts.Headers)
return NewPrometheusAPIV1QueryResponse(t, res)
}
@@ -302,7 +312,7 @@ func (app *Vmsingle) PrometheusAPIV1QueryRange(t *testing.T, query string, opts
values := opts.asURLValues()
values.Add("query", query)
res, _ := app.cli.PostForm(t, app.prometheusAPIV1QueryRangeURL, values)
res, _ := app.cli.PostForm(t, app.prometheusAPIV1QueryRangeURL, values, opts.Headers)
return NewPrometheusAPIV1QueryResponse(t, res)
}
@@ -316,7 +326,7 @@ func (app *Vmsingle) PrometheusAPIV1Series(t *testing.T, matchQuery string, opts
values := opts.asURLValues()
values.Add("match[]", matchQuery)
res, _ := app.cli.PostForm(t, app.prometheusAPIV1SeriesURL, values)
res, _ := app.cli.PostForm(t, app.prometheusAPIV1SeriesURL, values, opts.Headers)
return NewPrometheusAPIV1SeriesResponse(t, res)
}
@@ -330,7 +340,7 @@ func (app *Vmsingle) PrometheusAPIV1SeriesCount(t *testing.T, opts QueryOpts) *P
values := opts.asURLValues()
queryURL := fmt.Sprintf("http://%s/prometheus/api/v1/series/count", app.httpListenAddr)
res, _ := app.cli.PostForm(t, queryURL, values)
res, _ := app.cli.PostForm(t, queryURL, values, opts.Headers)
return NewPrometheusAPIV1SeriesCountResponse(t, res)
}
@@ -345,7 +355,7 @@ func (app *Vmsingle) PrometheusAPIV1Labels(t *testing.T, matchQuery string, opts
values.Add("match[]", matchQuery)
queryURL := fmt.Sprintf("http://%s/prometheus/api/v1/labels", app.httpListenAddr)
res, _ := app.cli.PostForm(t, queryURL, values)
res, _ := app.cli.PostForm(t, queryURL, values, opts.Headers)
return NewPrometheusAPIV1LabelsResponse(t, res)
}
@@ -360,7 +370,7 @@ func (app *Vmsingle) PrometheusAPIV1LabelValues(t *testing.T, labelName, matchQu
values.Add("match[]", matchQuery)
queryURL := fmt.Sprintf("http://%s/prometheus/api/v1/label/%s/values", app.httpListenAddr, labelName)
res, _ := app.cli.PostForm(t, queryURL, values)
res, _ := app.cli.PostForm(t, queryURL, values, opts.Headers)
return NewPrometheusAPIV1LabelValuesResponse(t, res)
}
@@ -374,7 +384,7 @@ func (app *Vmsingle) PrometheusAPIV1Metadata(t *testing.T, metric string, limit
values.Add("limit", strconv.Itoa(limit))
queryURL := fmt.Sprintf("http://%s/prometheus/api/v1/metadata", app.httpListenAddr)
res, _ := app.cli.PostForm(t, queryURL, values)
res, _ := app.cli.PostForm(t, queryURL, values, opts.Headers)
return NewPrometheusAPIV1Metadata(t, res)
}
@@ -389,7 +399,7 @@ func (app *Vmsingle) APIV1AdminTSDBDeleteSeries(t *testing.T, matchQuery string,
values := opts.asURLValues()
values.Add("match[]", matchQuery)
res, statusCode := app.cli.PostForm(t, queryURL, values)
res, statusCode := app.cli.PostForm(t, queryURL, values, opts.Headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusNoContent, res)
}
@@ -402,7 +412,7 @@ func (app *Vmsingle) GraphiteMetricsIndex(t *testing.T, _ QueryOpts) GraphiteMet
t.Helper()
seriesURL := fmt.Sprintf("http://%s/metrics/index.json", app.httpListenAddr)
res, statusCode := app.cli.Get(t, seriesURL)
res, statusCode := app.cli.Get(t, seriesURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusOK, res)
}
@@ -424,7 +434,7 @@ func (app *Vmsingle) GraphiteTagsTagSeries(t *testing.T, record string, opts Que
values := opts.asURLValues()
values.Add("path", record)
_, statusCode := app.cli.PostForm(t, url, values)
_, statusCode := app.cli.PostForm(t, url, values, opts.Headers)
if got, want := statusCode, http.StatusNotImplemented; got != want {
t.Fatalf("unexpected status code: got %d, want %d", got, want)
}
@@ -439,7 +449,7 @@ func (app *Vmsingle) GraphiteTagsTagMultiSeries(t *testing.T, records []string,
values.Add("path", rec)
}
_, statusCode := app.cli.PostForm(t, url, values)
_, statusCode := app.cli.PostForm(t, url, values, opts.Headers)
if got, want := statusCode, http.StatusNotImplemented; got != want {
t.Fatalf("unexpected status code: got %d, want %d", got, want)
}
@@ -458,7 +468,7 @@ func (app *Vmsingle) APIV1StatusMetricNamesStats(t *testing.T, limit, le, matchP
values.Add("match_pattern", matchPattern)
queryURL := fmt.Sprintf("http://%s/api/v1/status/metric_names_stats", app.httpListenAddr)
res, statusCode := app.cli.PostForm(t, queryURL, values)
res, statusCode := app.cli.PostForm(t, queryURL, values, opts.Headers)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusOK, res)
}
@@ -478,7 +488,7 @@ func (app *Vmsingle) APIV1AdminStatusMetricNamesStatsReset(t *testing.T, opts Qu
values := opts.asURLValues()
queryURL := fmt.Sprintf("http://%s/api/v1/admin/status/metric_names_stats/reset", app.httpListenAddr)
res, statusCode := app.cli.PostForm(t, queryURL, values)
res, statusCode := app.cli.PostForm(t, queryURL, values, opts.Headers)
if statusCode != http.StatusNoContent {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusNoContent, res)
}
@@ -491,7 +501,7 @@ func (app *Vmsingle) APIV1AdminStatusMetricNamesStatsReset(t *testing.T, opts Qu
func (app *Vmsingle) SnapshotCreate(t *testing.T) *SnapshotCreateResponse {
t.Helper()
data, statusCode := app.cli.Post(t, app.SnapshotCreateURL(), "", nil)
data, statusCode := app.cli.Post(t, app.SnapshotCreateURL(), nil, nil)
if got, want := statusCode, http.StatusOK; got != want {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", got, want, data)
}
@@ -517,7 +527,7 @@ func (app *Vmsingle) APIV1AdminTSDBSnapshot(t *testing.T) *APIV1AdminTSDBSnapsho
t.Helper()
queryURL := fmt.Sprintf("http://%s/api/v1/admin/tsdb/snapshot", app.httpListenAddr)
data, statusCode := app.cli.Post(t, queryURL, "", nil)
data, statusCode := app.cli.Post(t, queryURL, nil, nil)
if got, want := statusCode, http.StatusOK; got != want {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", got, want, data)
}
@@ -538,7 +548,7 @@ func (app *Vmsingle) SnapshotList(t *testing.T) *SnapshotListResponse {
t.Helper()
queryURL := fmt.Sprintf("http://%s/snapshot/list", app.httpListenAddr)
data, statusCode := app.cli.Get(t, queryURL)
data, statusCode := app.cli.Get(t, queryURL, nil)
if got, want := statusCode, http.StatusOK; got != want {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", got, want, data)
}
@@ -584,7 +594,7 @@ func (app *Vmsingle) SnapshotDeleteAll(t *testing.T) *SnapshotDeleteAllResponse
t.Helper()
queryURL := fmt.Sprintf("http://%s/snapshot/delete_all", app.httpListenAddr)
data, statusCode := app.cli.Get(t, queryURL)
data, statusCode := app.cli.Get(t, queryURL, nil)
if got, want := statusCode, http.StatusOK; got != want {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", got, want, data)
}
@@ -615,7 +625,7 @@ func (app *Vmsingle) APIV1StatusTSDB(t *testing.T, matchQuery string, date strin
addNonEmpty("topN", topN)
addNonEmpty("date", date)
res, statusCode := app.cli.PostForm(t, seriesURL, values)
res, statusCode := app.cli.PostForm(t, seriesURL, values, opts.Headers)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", statusCode, http.StatusOK, res)
}
@@ -641,7 +651,9 @@ func (app *Vmsingle) ZabbixConnectorHistory(t *testing.T, records []string, opts
url += "?" + uvs
}
data := []byte(strings.Join(records, "\n"))
_, statusCode := app.cli.Post(t, url, "application/json", data)
headers := opts.getHeaders()
headers.Set("Content-Type", "application/json")
_, statusCode := app.cli.Post(t, url, data, headers)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
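These helper changes all follow one pattern: the Content-Type moves out of the positional `Post` argument into an explicit headers map built from `QueryOpts`. For illustration, a minimal sketch of the same request shape sent by hand, assuming a single-node instance on localhost:8428 and a snappy-compressed protobuf payload already written to write_request.bin (a hypothetical file name):
```sh
# Sketch only: mirrors the request the updated PrometheusAPIV1Write helper sends.
curl -s -o /dev/null -w '%{http_code}\n' \
  -X POST \
  -H 'Content-Type: application/x-protobuf' \
  --data-binary @write_request.bin \
  http://localhost:8428/prometheus/api/v1/write
# A successful write prints 204 - the http.StatusNoContent checked above.
```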

View File

@@ -77,7 +77,7 @@ func (app *Vmstorage) ForceFlush(t *testing.T) {
t.Helper()
forceFlushURL := fmt.Sprintf("http://%s/internal/force_flush", app.httpListenAddr)
_, statusCode := app.cli.Get(t, forceFlushURL)
_, statusCode := app.cli.Get(t, forceFlushURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
@@ -88,7 +88,7 @@ func (app *Vmstorage) ForceMerge(t *testing.T) {
t.Helper()
forceMergeURL := fmt.Sprintf("http://%s/internal/force_merge", app.httpListenAddr)
_, statusCode := app.cli.Get(t, forceMergeURL)
_, statusCode := app.cli.Get(t, forceMergeURL, nil)
if statusCode != http.StatusOK {
t.Fatalf("unexpected status code: got %d, want %d", statusCode, http.StatusOK)
}
@@ -101,7 +101,7 @@ func (app *Vmstorage) ForceMerge(t *testing.T) {
func (app *Vmstorage) SnapshotCreate(t *testing.T) *SnapshotCreateResponse {
t.Helper()
data, statusCode := app.cli.Post(t, app.SnapshotCreateURL(), "", nil)
data, statusCode := app.cli.Post(t, app.SnapshotCreateURL(), nil, nil)
if got, want := statusCode, http.StatusOK; got != want {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", got, want, data)
}
@@ -127,7 +127,7 @@ func (app *Vmstorage) SnapshotList(t *testing.T) *SnapshotListResponse {
t.Helper()
queryURL := fmt.Sprintf("http://%s/snapshot/list", app.httpListenAddr)
data, statusCode := app.cli.Get(t, queryURL)
data, statusCode := app.cli.Get(t, queryURL, nil)
if got, want := statusCode, http.StatusOK; got != want {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", got, want, data)
}
@@ -173,7 +173,7 @@ func (app *Vmstorage) SnapshotDeleteAll(t *testing.T) *SnapshotDeleteAllResponse
t.Helper()
queryURL := fmt.Sprintf("http://%s/snapshot/delete_all", app.httpListenAddr)
data, statusCode := app.cli.Post(t, queryURL, "", nil)
data, statusCode := app.cli.Post(t, queryURL, nil, nil)
if got, want := statusCode, http.StatusOK; got != want {
t.Fatalf("unexpected status code: got %d, want %d, resp text=%q", got, want, data)
}
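The Vmstorage helpers above exercise plain GET/POST endpoints with no form values. A rough hand-run equivalent, assuming a vmstorage node reachable on localhost:8482 (the address is an assumption; substitute your -httpListenAddr):
```sh
# Sketch only: the endpoints hit by ForceFlush, ForceMerge and SnapshotList.
curl -s http://localhost:8482/internal/force_flush   # expects HTTP 200
curl -s http://localhost:8482/internal/force_merge   # expects HTTP 200
curl -s http://localhost:8482/snapshot/list          # expects HTTP 200 with a JSON body
```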

View File

@@ -1,9 +0,0 @@
# see https://docs.codecov.com/docs/common-recipe-list#set-non-blocking-status-checks
coverage:
status:
project:
default:
informational: true
patch:
default:
informational: true

View File

@@ -151,7 +151,7 @@ Some alerting rules thresholds are just recommendations and could require an adj
The list of alerting rules is the following:
* [alerts-health.yml](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/rules/alerts-health.yml):
alerting rules related to all VictoriaMetrics components for tracking their "health" state;
* [alerts.yml](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/rules/alerts.yml):
* [alerts-single-node.yml](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/rules/alerts-single-node.yml):
alerting rules related to [single-server VictoriaMetrics](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/) installation;
* [alerts-cluster.yml](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/rules/alerts-cluster.yml):
alerting rules related to [cluster version of VictoriaMetrics](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/);

View File

@@ -3,7 +3,7 @@ services:
# It scrapes targets defined in --promscrape.config
# And forwards them to --remoteWrite.url
vmagent:
image: victoriametrics/vmagent:v1.139.0
image: victoriametrics/vmagent:v1.140.0
depends_on:
- "vmauth"
ports:
@@ -42,14 +42,14 @@ services:
# vmstorage shards. Each shard receives 1/N of all metrics sent to vminserts,
# where N is the number of vmstorages (2 in this case).
vmstorage-1:
image: victoriametrics/vmstorage:v1.139.0-cluster
image: victoriametrics/vmstorage:v1.140.0-cluster
volumes:
- strgdata-1:/storage
command:
- "--storageDataPath=/storage"
restart: always
vmstorage-2:
image: victoriametrics/vmstorage:v1.139.0-cluster
image: victoriametrics/vmstorage:v1.140.0-cluster
volumes:
- strgdata-2:/storage
command:
@@ -59,7 +59,7 @@ services:
# vminsert is the ingestion frontend. It receives metrics pushed by vmagent,
# pre-processes them and distributes them across configured vmstorage shards.
vminsert-1:
image: victoriametrics/vminsert:v1.139.0-cluster
image: victoriametrics/vminsert:v1.140.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -68,7 +68,7 @@ services:
- "--storageNode=vmstorage-2:8400"
restart: always
vminsert-2:
image: victoriametrics/vminsert:v1.139.0-cluster
image: victoriametrics/vminsert:v1.140.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -80,7 +80,7 @@ services:
# vmselect is a query frontend. It serves read queries in MetricsQL or PromQL.
# vmselect collects results from configured `--storageNode` shards.
vmselect-1:
image: victoriametrics/vmselect:v1.139.0-cluster
image: victoriametrics/vmselect:v1.140.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -90,7 +90,7 @@ services:
- "--vmalert.proxyURL=http://vmalert:8880"
restart: always
vmselect-2:
image: victoriametrics/vmselect:v1.139.0-cluster
image: victoriametrics/vmselect:v1.140.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -105,7 +105,7 @@ services:
# read requests from Grafana, vmui, vmalert among vmselects.
# It can be used as an authentication proxy.
vmauth:
image: victoriametrics/vmauth:v1.139.0
image: victoriametrics/vmauth:v1.140.0
depends_on:
- "vmselect-1"
- "vmselect-2"
@@ -119,13 +119,13 @@ services:
# vmalert executes alerting and recording rules
vmalert:
image: victoriametrics/vmalert:v1.139.0
image: victoriametrics/vmalert:v1.140.0
depends_on:
- "vmauth"
ports:
- 8880:8880
volumes:
- ./rules/alerts-cluster.yml:/etc/alerts/alerts.yml
- ./rules/alerts-cluster.yml:/etc/alerts/alerts-cluster.yml
- ./rules/alerts-health.yml:/etc/alerts/alerts-health.yml
- ./rules/alerts-vmagent.yml:/etc/alerts/alerts-vmagent.yml
- ./rules/alerts-vmalert.yml:/etc/alerts/alerts-vmalert.yml

View File

@@ -3,7 +3,7 @@ services:
# It scrapes targets defined in --promscrape.config
# And forwards them to --remoteWrite.url
vmagent:
image: victoriametrics/vmagent:v1.139.0
image: victoriametrics/vmagent:v1.140.0
depends_on:
- "victoriametrics"
ports:
@@ -18,7 +18,7 @@ services:
# VictoriaMetrics instance, a single process responsible for
# storing metrics and serving read requests.
victoriametrics:
image: victoriametrics/victoria-metrics:v1.139.0
image: victoriametrics/victoria-metrics:v1.140.0
ports:
- 8428:8428
- 8089:8089
@@ -59,14 +59,14 @@ services:
# vmalert executes alerting and recording rules
vmalert:
image: victoriametrics/vmalert:v1.139.0
image: victoriametrics/vmalert:v1.140.0
depends_on:
- "victoriametrics"
- "alertmanager"
ports:
- 8880:8880
volumes:
- ./rules/alerts.yml:/etc/alerts/alerts.yml
- ./rules/alerts-single-node.yml:/etc/alerts/alerts-single-node.yml
- ./rules/alerts-health.yml:/etc/alerts/alerts-health.yml
- ./rules/alerts-vmagent.yml:/etc/alerts/alerts-vmagent.yml
- ./rules/alerts-vmalert.yml:/etc/alerts/alerts-vmalert.yml

View File

@@ -170,3 +170,57 @@ groups:
is saturated by more than 90% and vminsert won't be able to keep up.\n
This usually means that more vminsert or vmstorage nodes must be added to the cluster in order to increase
the total number of vminsert -> vmstorage links."
- alert: MetadataCacheUtilizationIsTooHigh
expr: |
vm_metrics_metadata_storage_size_bytes / vm_metrics_metadata_storage_max_size_bytes > 0.95
for: 15m
labels:
severity: warning
annotations:
summary: "Metadata cache capacity on {{ $labels.instance }} (job={{ $labels.job }}) is utilized for more than 95% for the last 15min"
description: "Metadata cache stores meta information about ingested time series - see https://docs.victoriametrics.com/victoriametrics/#metrics-metadata.
When cache is overutilized, the oldest entries will be dropped out automatically. It may result into incomplete
response for /api/v1/metadata API calls. It doesn't impact regular queries or alerts. Cache size is controlled
via -storage.maxMetadataStorageSize cmd-line flag."
- alert: MetricNameStatsCacheUtilizationIsTooHigh
expr: |
vm_cache_size_bytes{type="storage/metricNamesStatsTracker"} / vm_cache_size_max_bytes{type="storage/metricNamesStatsTracker"} > 0.95
for: 15m
labels:
severity: warning
annotations:
summary: "Cache capacity for tracking metric names usage on {{ $labels.instance }} (job={{ $labels.job }}) is utilized for more than 95% during the last 15min"
description: "Metric names usage cache stores information about unique metric names and how frequently they are queried - see https://docs.victoriametrics.com/victoriametrics/#track-ingested-metrics-usage.
When cache is overutilized, it will stop tracking the new metric names. It has no other negative impact.
Usually, the number of unique metric names is very limited (thousands). The cache can be overutilized only if metric names
are changing too frequently or if the cache size is too low. There are following ways to mitigate cache overutilization:
- disable cache via `--storage.trackMetricNamesStats=false` flag, so metric names usage will stop tracking
- increase the cache size via `--storage.cacheSizeMetricNamesStats` flag
- reset the cache (see docs for details)"
- alert: IndexDBRecordsDrop
expr: increase(vm_indexdb_items_dropped_total[5m]) > 0
labels:
severity: critical
annotations:
summary: "IndexDB skipped registering items during data ingestion with reason={{ $labels.reason }}."
description: |
VictoriaMetrics may skip registering new time series during ingestion if they fail the validation process.
For example, `reason=too_long_item` means that a time series cannot exceed 64KB. Please reduce the number
of labels or label values for such series, or enforce these limits via the `-maxLabelsPerTimeseries` and
`-maxLabelValueLen` command-line flags.
- alert: TooManyTSIDMisses
expr: increase(vm_missing_tsids_for_metric_id_total[5m]) > 0
for: 15m
labels:
severity: critical
annotations:
summary: "Unexpected TSID misses for job \"{{ $labels.job }}\" ({{ $labels.instance }}) for the last 15 minutes"
description: |
Unexpected TSID misses for \"{{ $labels.job }}\" ({{ $labels.instance }}) for the last 15 minutes.
If this happens after an unclean shutdown of the VictoriaMetrics process (via \"kill -9\", OOM or power off),
then this is OK - the alert should go away within a few minutes after the restart.
Otherwise, this may point to corruption of the index data.
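As referenced in the MetricNameStatsCacheUtilizationIsTooHigh mitigation list above, the cache can be reset over HTTP; the apptest helper APIV1AdminStatusMetricNamesStatsReset earlier in this diff posts to the same endpoint. A minimal sketch, assuming a single-node instance on localhost:8428:
```sh
# Sketch only: resets the metric names usage stats cache.
curl -s -o /dev/null -w '%{http_code}\n' \
  -X POST http://localhost:8428/api/v1/admin/status/metric_names_stats/reset
# Prints 204 on success, matching the status code the tests check.
```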

View File

@@ -82,19 +82,6 @@ groups:
Check the logs for the given target. Check also the \"location\" label at the vm_log_messages_total metric if -loggerLevel command-line flag is set to value other than INFO.
This label contains code locations responsible for generating log messages suppressed by -loggerLevel.
- alert: TooManyTSIDMisses
expr: increase(vm_missing_tsids_for_metric_id_total[5m]) > 0
for: 15m
labels:
severity: critical
annotations:
summary: "Unexpected TSID misses for job \"{{ $labels.job }}\" ({{ $labels.instance }}) for the last 15 minutes"
description: |
Unexpected TSID misses for \"{{ $labels.job }}\" ({{ $labels.instance }}) for the last 15 minutes.
If this happens after unclean shutdown of VictoriaMetrics process (via \"kill -9\", OOM or power off),
then this is OK - the alert must go away in a few minutes after the restart.
Otherwise this may point to the corruption of index data.
- alert: ConcurrentInsertsHitTheLimit
expr: avg_over_time(vm_concurrent_insert_current[1m]) >= vm_concurrent_insert_capacity
for: 15m
@@ -109,28 +96,6 @@ groups:
making write attempts. If vmagent's or vminsert's CPU usage and network saturation are at normal level, then
it might be worth adjusting `-maxConcurrentInserts` cmd-line flag.
- alert: IndexDBRecordsDrop
expr: increase(vm_indexdb_items_dropped_total[5m]) > 0
labels:
severity: critical
annotations:
summary: "IndexDB skipped registering items during data ingestion with reason={{ $labels.reason }}."
description: |
VictoriaMetrics could skip registering new timeseries during ingestion if they fail the validation process.
For example, `reason=too_long_item` means that time series cannot exceed 64KB. Please, reduce the number
of labels or label values for such series. Or enforce these limits via `-maxLabelsPerTimeseries` and
`-maxLabelValueLen` command-line flags.
- alert: RowsRejectedOnIngestion
expr: rate(vm_rows_ignored_total[5m]) > 0
for: 15m
labels:
severity: warning
annotations:
summary: "Some rows are rejected on \"{{ $labels.instance }}\" on ingestion attempt"
description: "Ingested rows on instance \"{{ $labels.instance }}\" are rejected due to the
following reason: \"{{ $labels.reason }}\""
- alert: TooHighQueryLoad
expr: increase(vm_concurrent_select_limit_timeout_total[5m]) > 0
for: 15m
@@ -148,3 +113,14 @@ groups:
* increase compute resources or number of replicas;
* adjust limits `-search.maxConcurrentRequests` and `-search.maxQueueDuration`.
See more at https://docs.victoriametrics.com/victoriametrics/troubleshooting/#slow-queries.
- alert: RowsRejectedOnIngestion
expr: rate(vm_rows_ignored_total[5m]) > 0
for: 15m
labels:
severity: warning
annotations:
summary: "Some rows are rejected on \"{{ $labels.instance }}\" on ingestion attempt"
description: "Ingested rows on instance \"{{ $labels.instance }}\" are rejected due to the
following reason: \"{{ $labels.reason }}\""

View File

@@ -148,4 +148,45 @@ groups:
description: "Metadata cache stores meta information about ingested time series - see https://docs.victoriametrics.com/victoriametrics/#metrics-metadata.
When cache is overutilized, the oldest entries will be dropped out automatically. It may result into incomplete
response for /api/v1/metadata API calls. It doesn't impact regular queries or alerts. Cache size is controlled
via -storage.maxMetadataStorageSize cmd-line flag."
via -storage.maxMetadataStorageSize cmd-line flag."
- alert: MetricNameStatsCacheUtilizationIsTooHigh
expr: |
vm_cache_size_bytes{type="storage/metricNamesStatsTracker"} / vm_cache_size_max_bytes{type="storage/metricNamesStatsTracker"} > 0.95
for: 15m
labels:
severity: warning
annotations:
summary: "Cache capacity for tracking metric names usage on {{ $labels.instance }} (job={{ $labels.job }}) is utilized for more than 95% during the last 15min"
description: "Metric names usage cache stores information about unique metric names and how frequently they are queried - see https://docs.victoriametrics.com/victoriametrics/#track-ingested-metrics-usage.
When cache is overutilized, it will stop tracking the new metric names. It has no other negative impact.
Usually, the number of unique metric names is very limited (thousands). The cache can be overutilized only if metric names
are changing too frequently or if the cache size is too low. There are following ways to mitigate cache overutilization:
- disable cache via `--storage.trackMetricNamesStats=false` flag, so metric names usage will stop tracking
- increase the cache size via `--storage.cacheSizeMetricNamesStats` flag
- reset the cache (see docs for details)"
- alert: IndexDBRecordsDrop
expr: increase(vm_indexdb_items_dropped_total[5m]) > 0
labels:
severity: critical
annotations:
summary: "IndexDB skipped registering items during data ingestion with reason={{ $labels.reason }}."
description: |
VictoriaMetrics may skip registering new time series during ingestion if they fail the validation process.
For example, `reason=too_long_item` means that a time series cannot exceed 64KB. Please reduce the number
of labels or label values for such series, or enforce these limits via the `-maxLabelsPerTimeseries` and
`-maxLabelValueLen` command-line flags.
- alert: TooManyTSIDMisses
expr: increase(vm_missing_tsids_for_metric_id_total[5m]) > 0
for: 15m
labels:
severity: critical
annotations:
summary: "Unexpected TSID misses for job \"{{ $labels.job }}\" ({{ $labels.instance }}) for the last 15 minutes"
description: |
Unexpected TSID misses for \"{{ $labels.job }}\" ({{ $labels.instance }}) for the last 15 minutes.
If this happens after an unclean shutdown of the VictoriaMetrics process (via \"kill -9\", OOM or power off),
then this is OK - the alert should go away within a few minutes after the restart.
Otherwise, this may point to corruption of the index data.

View File

@@ -1,6 +1,6 @@
services:
vmagent:
image: victoriametrics/vmagent:v1.139.0
image: victoriametrics/vmagent:v1.140.0
depends_on:
- "victoriametrics"
ports:
@@ -14,7 +14,7 @@ services:
restart: always
victoriametrics:
image: victoriametrics/victoria-metrics:v1.139.0
image: victoriametrics/victoria-metrics:v1.140.0
ports:
- 8428:8428
volumes:
@@ -40,7 +40,7 @@ services:
restart: always
vmalert:
image: victoriametrics/vmalert:v1.139.0
image: victoriametrics/vmalert:v1.140.0
depends_on:
- "victoriametrics"
ports:
@@ -59,7 +59,7 @@ services:
- '--external.alert.source=explore?orgId=1&left=["now-1h","now","VictoriaMetrics",{"expr": },{"mode":"Metrics"},{"ui":[true,true,true,"none"]}]'
restart: always
vmanomaly:
image: victoriametrics/vmanomaly:v1.29.2
image: victoriametrics/vmanomaly:v1.29.3
depends_on:
- "victoriametrics"
ports:

View File

@@ -41,18 +41,8 @@ docs-debug: docs docs-image
$(foreach dir,$(wildcard ./docs/$(dir)/*), -v ./docs/$(notdir $(dir)):/opt/docs/content/$(notdir $(dir))) \
vmdocs-docker-package
docs-update-version: docs-image
$(if $(filter v%,$(PKG_TAG)), \
docker run \
--rm \
--entrypoint /usr/bin/find \
--platform $(DOCKER_PLATFORM) \
--name vmdocs-docker-container \
-v ./docs:/opt/docs/content/victoriametrics vmdocs-docker-package \
content \
-regex ".*\.md" \
-exec sed -i 's/{{% available_from "#" %}}/{{% available_from "$(PKG_TAG)" %}}/g' {} \;, \
$(info "Skipping docs version update, invalid $$PKG_TAG: $(PKG_TAG)"))
docs-update-version:
find docs/victoriametrics/ -name '*.md' -exec sed -i 's/{{% available_from "#" %}}/{{% available_from "$(TAG)" %}}/g' {} \;
# Converts images at docs folder to webp format
# See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#images-in-documentation
@@ -342,4 +332,4 @@ endif
$(MAKE) docs-update-vmagent-flags && git checkout "$$orig_branch" && \
$(MAKE) docs-update-vmselect-flags && git checkout "$$orig_branch" && \
$(MAKE) docs-update-vminsert-flags && git checkout "$$orig_branch" && \
$(MAKE) docs-update-vmstorage-flags && git checkout "$$orig_branch"
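The simplified docs-update-version target above replaces the Docker-wrapped find with a direct find/sed over docs/victoriametrics/. A hypothetical invocation (the tag value is illustrative):
```sh
# Sketch only: stamps the release tag into available_from shortcodes.
make docs-update-version TAG=v1.140.0
```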

View File

@@ -14,6 +14,11 @@ aliases:
---
Please find the changelog for VictoriaMetrics Anomaly Detection below.
## v1.29.3
Released: 2026-04-16
- UI: Updated [vmanomaly UI](https://docs.victoriametrics.com/anomaly-detection/ui/) from [v1.6.0](https://docs.victoriametrics.com/anomaly-detection/ui/#v160) to [v1.6.1](https://docs.victoriametrics.com/anomaly-detection/ui/#v161), see respective [release notes](https://docs.victoriametrics.com/anomaly-detection/ui/#v161) for details.
## v1.29.2
Released: 2026-04-02

View File

@@ -48,13 +48,15 @@ Please see example graph illustrating this logic below:
## What data does vmanomaly operate on?
`vmanomaly` operates on timeseries (metrics) data, and supports both **VictoriaMetrics** and **VictoriaLogs** as data sources. Choose the source depending on the use case.
> [!NOTE]
> `vmanomaly` operates on time series (metrics) data, and supports both **VictoriaMetrics** and **VictoriaLogs/VictoriaTraces** as data sources to get metrics-compatible data. Choose the source depending on the use case. Single-node / Cluster and OpenSource / Enterprise datasources are supported as well; `vmanomaly` is compatible with both, yet itself requires an [Enterprise license](https://victoriametrics.com/products/enterprise/) to run.
**VictoriaMetrics (metrics):** use full [MetricsQL](https://docs.victoriametrics.com/victoriametrics/metricsql/) for selection, sampling, and processing; [global filters](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#prometheus-querying-api-enhancements) are also supported. See the [VmReader](https://docs.victoriametrics.com/anomaly-detection/components/reader/#vm-reader) for the details.
**VictoriaLogs (logs → metrics):** {{% available_from "v1.26.0" anomaly %}} use [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) via the [`VLogsReader`](https://docs.victoriametrics.com/anomaly-detection/components/reader/#vlogs-reader) to create log-derived metrics for anomaly detection (e.g., error rates, request latencies).
**VictoriaLogs (logs → metrics):** {{% available_from "v1.26.0" anomaly %}} use [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) via the [`VLogsReader`](https://docs.victoriametrics.com/anomaly-detection/components/reader/#vlogs-reader) to create log-derived or traces-derived metrics for anomaly detection (e.g., error rates, request latencies, error spans count).
> Please note that only LogsQL queries with [stats pipe](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe) functions [subset](https://docs.victoriametrics.com/anomaly-detection/components/reader/#valid-stats-functions) are supported, as they produce **numeric** time series.
> [!NOTE]
> Please note that only LogsQL queries with [stats pipe](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe) functions [subset](https://docs.victoriametrics.com/anomaly-detection/components/reader/#valid-stats-functions) are supported, as they produce **numeric** time series.
## Using offsets
@@ -421,7 +423,7 @@ services:
# ...
vmanomaly:
container_name: vmanomaly
image: victoriametrics/vmanomaly:v1.29.2
image: victoriametrics/vmanomaly:v1.29.3
# ...
restart: always
volumes:
@@ -639,7 +641,7 @@ options:
Here's an example of using the config splitter to divide configurations based on the `extra_filters` argument from the reader section:
```sh
docker pull victoriametrics/vmanomaly:v1.29.2 && docker image tag victoriametrics/vmanomaly:v1.29.2 vmanomaly
docker pull victoriametrics/vmanomaly:v1.29.3 && docker image tag victoriametrics/vmanomaly:v1.29.3 vmanomaly
```
```sh

View File

@@ -45,8 +45,8 @@ There are 2 types of compatibility to consider when migrating in stateful mode:
| Group start | Group end | Compatibility | Notes |
|---------|--------- |------------|-------|
| [v1.29.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1292) | Latest* | Fully Compatible | Just a placeholder for new releases |
| [v1.29.1](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1291) | [v1.29.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1292) | Fully Compatible | - |
| [v1.29.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1293) | Latest* | Fully Compatible | Just a placeholder for new releases |
| [v1.29.1](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1291) | [v1.29.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1293) | Fully Compatible | - |
| [v1.28.7](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1287) | [v1.29.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1290) | Partially compatible* | Dumped models of class [prophet](https://docs.victoriametrics.com/anomaly-detection/components/models/#prophet) and [seasonal quantile](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-seasonal-quantile) have problems with loading to [v1.29.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1290) due to dropped `pytz` library. **Upgrading directly from v1.28.7 to [v1.29.1](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1291) with a fix is suggested** |
| [v1.26.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1262) | [v1.28.7](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1287) | Fully Compatible | [v1.28.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1280) introduced [rolling](https://docs.victoriametrics.com/anomaly-detection/components/models/#rolling-models) model class drop in favor of [online](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-models) models (`rolling_quantile` and `std` models), however, it does not impact compatibility, as artifacts were not produced by default for rolling models. Also, offline `mad` and `zscore` models are redirecting to their respective online counterparts since [v1.28.4](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1284). |
| [v1.25.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1253) | [v1.26.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1270) | Partially Compatible* | [v1.25.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1253) introduced `forecast_at` argument for base [univariate](https://docs.victoriametrics.com/anomaly-detection/components/models/#univariate-models) and `Prophet` [models](https://docs.victoriametrics.com/anomaly-detection/components/models/#prophet), however, itself remains backward-reversible from newer states like [v1.26.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1262), [v1.27.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1270). (All models except `isolation_forest_multivariate` class will be dropped) |
@@ -81,4 +81,4 @@ In stateless mode, the migration process is almost straightforward as there are
# Other VmReader settings...
sampling_period: 1m
...
```

View File

@@ -122,7 +122,7 @@ Below are the steps to get `vmanomaly` up and running inside a Docker container:
1. Pull Docker image:
```sh
docker pull victoriametrics/vmanomaly:v1.29.2
docker pull victoriametrics/vmanomaly:v1.29.3
```
2. Create the license file with your license key.
@@ -142,7 +142,7 @@ docker run -it \
-v ./license:/license \
-v ./config.yaml:/config.yaml \
-p 8490:8490 \
victoriametrics/vmanomaly:v1.29.2 \
victoriametrics/vmanomaly:v1.29.3 \
/config.yaml \
--licenseFile=/license \
--loggerLevel=INFO \
@@ -159,7 +159,7 @@ docker run -it \
-e VMANOMALY_DATA_DUMPS_DIR=/tmp/vmanomaly/data \
-e VMANOMALY_MODEL_DUMPS_DIR=/tmp/vmanomaly/models \
-p 8490:8490 \
victoriametrics/vmanomaly:v1.29.2 \
victoriametrics/vmanomaly:v1.29.3 \
/config.yaml \
--licenseFile=/license \
--loggerLevel=INFO \
@@ -172,7 +172,7 @@ services:
# ...
vmanomaly:
container_name: vmanomaly
image: victoriametrics/vmanomaly:v1.29.2
image: victoriametrics/vmanomaly:v1.29.3
# ...
restart: always
volumes:

View File

@@ -9,14 +9,17 @@ sitemap:
In today's fast-paced and complex landscape of system monitoring, [VictoriaMetrics Anomaly Detection](https://victoriametrics.com/products/enterprise/anomaly-detection/) (`vmanomaly`), a part of our [Enterprise offering](https://victoriametrics.com/products/enterprise/), serves as an **observability layer** for SREs and DevOps teams atop collected data to **automate the detection of anomalies in time-series data**, reducing the manual effort required to identify abnormal system behavior.
Unlike traditional threshold-based alerting, which relies on **raw metric values** and requires constant tuning and maintenance of thresholds and alerting rules, `vmanomaly` introduces a **unified, interpretable [anomaly score](https://docs.victoriametrics.com/anomaly-detection/faq/#what-is-anomaly-score)** - a **de-trended, de-seasonalized metric** generated through machine learning. This approach eliminates the need for frequent manual adjustments by enabling **stable, long-term static thresholds (as simple as `anomaly_score > 1`)** that remain effective over time through continuous model retraining.
Unlike traditional threshold-based alerting, which relies on **raw metric values** and requires constant tuning and maintenance of thresholds and alerting rules, `vmanomaly` introduces a **unified, interpretable [anomaly score](https://docs.victoriametrics.com/anomaly-detection/faq/#what-is-anomaly-score)** - a **de-trended, de-seasonalized metric** generated through machine learning. This approach eliminates the need for frequent manual adjustments by enabling **stable, long-term static thresholds (as simple as `anomaly_score > 1`)** that remain effective over time through continuous model retraining and updates.
By shifting to anomaly-based detection, teams can **identify and respond to potential issues faster**, enhancing system reliability and operational efficiency while significantly **reducing the engineering effort spent on handcrafting and maintaining alerting rules**.
## What does it do?
`vmanomaly` is designed to **periodically analyze new data points** across selected metrics (either requested from [VictoriaMetrics TSDB](https://docs.victoriametrics.com/victoriametrics/) or produced by [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) metrics [endpoint](https://docs.victoriametrics.com/victorialogs/querying/#querying-log-range-stats)), generating a **unified metric** called [anomaly score](https://docs.victoriametrics.com/anomaly-detection/faq/#what-is-anomaly-score).
`vmanomaly` is designed to **periodically analyze new data points** across selected metrics - either requested from [VictoriaMetrics TSDB](https://docs.victoriametrics.com/victoriametrics/) or produced by [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) or [VictoriaTraces](https://docs.victoriametrics.com/victoriatraces/) metrics [endpoint](https://docs.victoriametrics.com/victorialogs/querying/#querying-log-range-stats) - to generate a **unified metric** called [anomaly score](https://docs.victoriametrics.com/anomaly-detection/faq/#what-is-anomaly-score).
> [!NOTE]
> `vmanomaly` can use both single-node and cluster versions of VictoriaMetrics/VictoriaLogs/VictoriaTraces as a data source, and is compatible with both OpenSource and Enterprise versions. However, `vmanomaly` itself requires an Enterprise license to run, and is part of our [Enterprise offering](https://victoriametrics.com/products/enterprise/).
Key functions:
- **Automated anomaly detection** - continuously scans time-series data to identify deviations from expected behavior.

View File

@@ -315,7 +315,7 @@ docker run -it --rm \
-e VMANOMALY_MCP_SERVER_URL=http://mcp-vmanomaly:8081/mcp \
-p 8080:8080 \
-p 8490:8490 \
victoriametrics/vmanomaly:v1.29.2 \
victoriametrics/vmanomaly:v1.29.3 \
vmanomaly_config.yaml
```
@@ -640,6 +640,23 @@ If the **results** look good and the **model configuration should be deployed in
## Changelog
### v1.6.1
Released: 2026-04-16
vmanomaly version: [v1.29.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1293)
- IMPROVEMENT: Consecutive anomalies (when the "streaks" option is enabled) are now grouped in the Visualization Panel as a single anomaly line instead of multiple dots, reducing visual noise and better representing prolonged anomalous periods, while still showing the exact anomaly score and labels on hover.
- IMPROVEMENT: Raw query results now refresh automatically after time range changes; anomaly detection results are preserved until the "Detect Anomalies" button is pressed again. This avoids recalculating anomalies on the new time range without explicit user action, which could be costly if the new time range is large and the model is complex.
- IMPROVEMENT: Table legend view is now enabled by default to allow sorting and filtering.
- BUGFIX: Generated config and example alert outputs now preserve configured fit/infer values correctly and avoid invalid float-based duration strings in generated YAML, which could lead to data validation errors if copied to a production configuration without adjustments.
- BUGFIX: Fixed multiple confusing anomaly UI behaviors around scheduler fields (fit_every, infer_every) and generated artifacts.
- BUGFIX: Chart y-axis range now updates after legend series selection (regression introduced in v1.6.0).
### v1.6.0
Released: 2026-04-02

3 binary image files changed; previews not shown.

View File

@@ -395,7 +395,7 @@ services:
restart: always
vmanomaly:
container_name: vmanomaly
image: victoriametrics/vmanomaly:v1.29.2
image: victoriametrics/vmanomaly:v1.29.3
depends_on:
- "victoriametrics"
ports:

33 binary image files changed; previews not shown.

View File

@@ -238,23 +238,23 @@ vmagent will write data into VictoriaMetrics single-node and cluster (with tenan
# compose.yaml
services:
vmsingle:
image: victoriametrics/victoria-metrics:v1.139.0
image: victoriametrics/victoria-metrics:v1.140.0
vmstorage:
image: victoriametrics/vmstorage:v1.139.0-cluster
image: victoriametrics/vmstorage:v1.140.0-cluster
vminsert:
image: victoriametrics/vminsert:v1.139.0-cluster
image: victoriametrics/vminsert:v1.140.0-cluster
command:
- -storageNode=vmstorage:8400
vmselect:
image: victoriametrics/vmselect:v1.139.0-cluster
image: victoriametrics/vmselect:v1.140.0-cluster
command:
- -storageNode=vmstorage:8401
vmagent:
image: victoriametrics/vmagent:v1.139.0
image: victoriametrics/vmagent:v1.140.0
volumes:
- ./scrape.yaml:/etc/vmagent/config.yaml
command:
@@ -306,7 +306,7 @@ Now add the vmauth service to `compose.yaml`:
# compose.yaml
services:
vmauth:
image: docker.io/victoriametrics/vmauth:v1.139.0
image: docker.io/victoriametrics/vmauth:v1.140.0
ports:
- 8427:8427
volumes:
@@ -420,7 +420,7 @@ Create two Prometheus datasources in Grafana with the following URLs: `http://vm
![Prometheus datasource](grafana-datasource-prometheus.webp)
You can also use the VictoriaMetrics [Grafana datasource](https://github.com/VictoriaMetrics/victoriametrics-datasource) plugin.
See installation instructions in [Grafana datasource - Installation](https://docs.victoriametrics.com/victoriametrics/victoriametrics-datasource/#installation).
See installation instructions in [Grafana datasource - Installation](https://docs.victoriametrics.com/victoriametrics/integrations/grafana/#victoriametrics-datasource).
Users with the `vm_access` claim will be able to query metrics from the specified tenant with extra filters applied.

View File

@@ -155,15 +155,15 @@ These services will store and query the metrics scraped by vmagent.
# compose.yaml
services:
vmstorage:
image: victoriametrics/vmstorage:v1.139.0-cluster
image: victoriametrics/vmstorage:v1.140.0-cluster
vminsert:
image: victoriametrics/vminsert:v1.139.0-cluster
image: victoriametrics/vminsert:v1.140.0-cluster
command:
- -storageNode=vmstorage:8400
vmselect:
image: victoriametrics/vmselect:v1.139.0-cluster
image: victoriametrics/vmselect:v1.140.0-cluster
command:
- -storageNode=vmstorage:8401
ports:
@@ -196,7 +196,7 @@ Add the vmauth service to `compose.yaml`:
# compose.yaml
services:
vmauth:
image: victoriametrics/vmauth:v1.139.0-enterprise
image: victoriametrics/vmauth:v1.140.0-enterprise
ports:
- 8427:8427
volumes:
@@ -251,7 +251,7 @@ Add the vmagent service to `compose.yaml` with OAuth2 configuration:
# compose.yaml
services:
vmagent:
image: victoriametrics/vmagent:v1.139.0
image: victoriametrics/vmagent:v1.140.0
volumes:
- ./scrape.yaml:/etc/vmagent/config.yaml
command:

View File

@@ -107,7 +107,7 @@ The final piece is the Docker Compose file. This ties all the services together
# compose.yml
services:
victoriametrics:
image: victoriametrics/victoria-metrics:v1.139.0
image: victoriametrics/victoria-metrics:v1.140.0
command:
- "--storageDataPath=/victoria-metrics-data"
- "--selfScrapeInterval=10s"
@@ -128,7 +128,7 @@ services:
- ./alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
vmalert:
image: victoriametrics/vmalert:v1.139.0
image: victoriametrics/vmalert:v1.140.0
depends_on:
- victoriametrics
- alertmanager
@@ -198,7 +198,7 @@ If you open the sidebar and select **Alerting** > **Alert rules**, you should be
Open the sidebar again and go to **Alerting** > **Active notifications** to see the active alert reported by Alertmanager.
![Screenshot of Grafana Active notifications Page](grafana-active-notifications.webp)
![Screenshot of Grafana Active notifications Page](grafana-notifications.webp)
You can also see the alerts in VMUI by opening the browser in `http://localhost:8428/vmui/?#/rules`. This is possible only when we have configured `-vmalert.proxyURL` in VictoriaMetrics.

View File

@@ -10,5 +10,11 @@ tags:
- logs
- traces
- playground
aliases:
- /playgrounds/victoriametrics/
- /playgrounds/victorialogs/
- /playgrounds/victoriatraces/
- /playgrounds/cloud/
- /playgrounds/vmanomaly/
---
{{% content "README.md" %}}

View File

@@ -113,6 +113,7 @@ See also [case studies](https://docs.victoriametrics.com/victoriametrics/casestu
* [FreeBSD: monitoring with VictoriaMetrics and Grafana](https://setevoy.medium.com/freebsd-monitoring-with-victoriametrics-and-grafana-f789904f2628)
* [QCon London 2026: Wrangling Telemetry at Scale, a Guide to Self-Hosted Observability](https://www.infoq.com/news/2026/03/self-hosted-observability/)
* [How We Made Telemetry Queries 10x Faster: Chunk-Split Caching for Metrics, Logs, and Traces](https://mirastacklabs.ai/blog/chunk-split-caching/)
* [Building a high-volume metrics pipeline with OpenTelemetry and vmagent](https://medium.com/airbnb-engineering/building-a-high-volume-metrics-pipeline-with-opentelemetry-and-vmagent-c714d6910b45)
## Third-party articles and slides about VictoriaLogs

View File

@@ -62,11 +62,13 @@ Pull requests requirements:
1. The pull request must conform to [VictoriaMetrics development goals](https://docs.victoriametrics.com/victoriametrics/goals/).
1. Don't use `master` branch for making PRs, as it makes it impossible for reviewers to modify the changes.
1. All commits need to be [signed](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits).
1. A commit message should contain clear and concise description of what was done and for what purpose.
Use the imperative, present tense: "change" not "changed" nor "changes". Read your commit message as "This commit will ..", don't capitalize the first letter.
Message should be prefixed with `<dir>/<component>:` to show what component has been changed, i.e. `app/vmalert: fix...`.
1. Pull request title should be prefixed with `<dir>/<component>:` to show what component has been changed, i.e. `app/vmalert: fix...`.
Pull request description should contain clear and concise description of what was done, why it is needed and for what purpose.
Use clear language, so reviewers can quickly understand the change and its impact.
1. A link to the issue(s) related to the change, if any. Use `Fixes [issue link]` if the PR resolves the issue, or `Related to [issue link]` for reference.
1. Tests proving that the change is effective. See [this style guide](https://itnext.io/f-tests-as-a-replacement-for-table-driven-tests-in-go-8814a8b19e9e) for tests.
1. Tests proving that the change is effective. Tests are expected for non-trivial new functionality or non-trivial modifications.
Bug fixes must include tests unless a maintainer explicitly agrees otherwise.
See [this style guide](https://itnext.io/f-tests-as-a-replacement-for-table-driven-tests-in-go-8814a8b19e9e) for tests.
To run tests and code checks locally, execute commands `make test-full` and `make check-all`.
1. Try to not extend the scope of the pull requests outside the issue, do not make unrelated changes.
1. Update [docs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/docs) if needed. For example, adding a new flag or changing behavior of existing flags or features

View File

@@ -556,7 +556,7 @@ See more details on [how to monitor VictoriaMetrics components](https://docs.vic
- `-storage.maxHourlySeries` is the limit on the number of [active time series](https://docs.victoriametrics.com/victoriametrics/faq/#what-is-an-active-time-series) during the last hour.
- `-storage.maxDailySeries` is the limit on the number of unique time series during the day. This limit can be used for limiting daily [time series churn rate](https://docs.victoriametrics.com/victoriametrics/faq/#what-is-high-churn-rate).
It is possible to use `-1` as a value for these flags{{% available_from "#" %}} in order to enable series tracking while setting the limit to the maximum possible value.
It is possible to use `-1` as a value for these flags{{% available_from "v1.140.0" %}} in order to enable series tracking while setting the limit to the maximum possible value.
This is useful in order to estimate the number of unique series written to `vmstorage` without enforcing limits.
Note that these limits are set and applied individually per each `vmstorage` node in the cluster. So, if the cluster has `N` `vmstorage` nodes, then the cluster-level limits will be `N` times bigger than the per-`vmstorage` limits.
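A minimal sketch of the `-1` sentinel described above, assuming a vmstorage binary on the local path (all other flags omitted for brevity):
```sh
# Sketch only: track hourly/daily unique series without enforcing limits,
# useful for estimating per-node series counts before choosing real limits.
/path/to/vmstorage \
  -storage.maxHourlySeries=-1 \
  -storage.maxDailySeries=-1
```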

View File

@@ -27,5 +27,5 @@ to [the latest available releases](https://docs.victoriametrics.com/victoriametr
## Currently supported LTS release lines
- v1.136.x - the latest one is [v1.136.3 LTS release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.136.3)
- v1.122.x - the latest one is [v1.122.18 LTS release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.122.18)
- v1.136.x - the latest one is [v1.136.4 LTS release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.136.4)
- v1.122.x - the latest one is [v1.122.19 LTS release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.122.19)

Some files were not shown because too many files have changed in this diff.