Compare commits

...

194 Commits

Author SHA1 Message Date
Andrii Chubatiuk
305f1c91f8 lib/{fs,filestream}: use single ParallelExecutor for fs and filestream tasks 2025-12-31 11:51:32 +02:00
JAYICE
74b03c93a6 makefile: support vmauth in docs-update-flags command (#10222)
Implements https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10221

2025-12-30 19:14:06 +02:00
Max Kotliar
0e9bb5a42d docs: sync flags in docs with actual binaries 2025-12-30 18:59:33 +02:00
Max Kotliar
f1a88e57cf docs/changelog: fix link to PR
follow up on
1792b6bd9a
2025-12-30 17:38:48 +02:00
Max Kotliar
76176ac1d3 app/vmauth: increase concurrency limit reached before waiting in queue
Follow up on
c9596a0364 (r173413964)

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
2025-12-30 17:23:10 +02:00
Max Kotliar
c08adb31bb docs: remove available_from placeholder from code block
The {{% available_from "#" %}} placeholder does not work inside code
blocks. Replace it with a hardcoded value.

Introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10168.

See comment
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10168/files#r2651440620
for more details.
2025-12-30 16:10:55 +02:00
Artem Fetishev
b49b0471ef lib/storage: move legacy code to legacy files (#10215)
Follow-up for f97f627 (#8134)

The code was moved as is; no changes were made to the moved code.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-30 13:16:24 +01:00
Artem Fetishev
13102045a7 changelog: update v1.132.0 release notes with a note on ungraceful shutdown
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-30 10:29:24 +01:00
Artem Fetishev
d226e5b95f lib/ingestserver: Actually close the first vminsert connection (#10224)
Since the first connection is not closed, vmstorage will never
terminate gracefully, which causes all caches to be reset on
start-up.

Follow-up for 244769a00d (#10136)

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-29 15:13:30 +01:00
Hui Wang
30bbb5660b docs: clarify recording rule labels do not support templating (#10186)
fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10183
2025-12-29 15:29:45 +02:00
Max Kotliar
1792b6bd9a docs/changelog: Add PR/issue links, fix typo in tip section 2025-12-29 12:58:07 +02:00
Artem Fetishev
f97f627f79 lib/storage: implement partition index (#8134)
This should reduce disk space occupied by indexDBs as they get deleted along
with the corresponding partitions once those partitions fall outside the
retention window.

- Motivation: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7599
- What to expect: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8134

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Andrei Baidarov <baidarov@nebius.com>
2025-12-24 18:53:49 +01:00
Phuong Le
785c1fd053 issues/question-template: fix typos (#947) 2025-12-24 11:37:34 +01:00
Aliaksandr Valialkin
697bfd5cee app/vmauth: properly verify whether the request has been canceled by the client in handleConcurrencyLimitError()
The `err` may contain information about request cancelation performed by the server code.
In such cases the error must be logged. The error must be ignored only if the client canceled the request.

This is a follow-up for the commit c9596a0364

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
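
A minimal sketch of the distinction described above, assuming the standard net/http behavior where the request context is canceled when the client goes away (names here are illustrative, not the actual vmauth code):

```
package sketch

import (
	"context"
	"errors"
	"log"
	"net/http"
)

// handleLimitError logs the error unless the *client* canceled the request;
// cancellations performed by the server code still must be logged.
func handleLimitError(w http.ResponseWriter, r *http.Request, err error) {
	// r.Context() is canceled when the client goes away, which distinguishes
	// client cancellation from server-side cancellation.
	if errors.Is(r.Context().Err(), context.Canceled) {
		return // the client is gone; nobody reads the response
	}
	log.Printf("cannot process request %q: %s", r.URL.Path, err)
	http.Error(w, err.Error(), http.StatusTooManyRequests)
}
```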
2025-12-24 11:31:36 +01:00
Artem Fetishev
f0ac6d9ac9 lib/storage: log the beginning and end of saving metric name usage stats to file (#10205)
This is to debug cases when the metric name tracker resets the tsid cache
after restart. It could be due to vmstorage not having enough time to stop
gracefully. Logs should provide this info.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-23 17:25:43 +01:00
Artem Fetishev
f0b251d967 lib/storage: fix per-idb cache stats (#10204)
This fixes the following corner case: if all instances of a cache have
zero size, the stats won't be set at all. This results in some weird
graphs if the cache is reset very often (such as tfssCache): the cache
sizeMaxBytes alternates between the actual value and zero.

Follow-up for f62893c151

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-23 17:06:10 +01:00
Nikolay
c3346ae8fd app/victoria-metrics: properly add prometheus metrics metadata (#10192)
Commit 5a587f2006 was not properly ported
to the single-node branch. Since single node is able to perform both
promscrape and self-scrape, it's required to add the metadata-add methods to
those paths.

 This commit fixes the missing metadata add to the storage.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10175
2025-12-23 13:57:19 +01:00
Jinlin
0ffb3fdfce lib/storage: fix log typo 2025-12-23 13:50:40 +01:00
Zakhar Bessarab
4e234ccbd1 docs/enterprise: add description of license key update (#10194)
- describe options of updating the enterprise license key
- fix a few typos
2025-12-23 13:37:36 +01:00
Alexander Frolov
943589ca31 lib/promscrape: fix isAutoMetric to recognize all auto-generated metrics
Previously, `scrape_labels_limit` was missing from the check.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10197
2025-12-23 13:36:50 +01:00
Aliaksandr Valialkin
c9596a0364 app/vmauth: add -maxQueueDuration command-line flag for graceful handling of short spikes in the number of concurrent requests
Previously a short spike in the number of concurrent requests immediately led to `429 Too Many Requests` errors
when the number of concurrent requests exceeded -maxConcurrentRequests or -maxConcurrentPerUserRequests.

This commit allows processing short spikes in the number of concurrent requests during the -maxQueueDuration timeout.
The requests are rejected only if they couldn't be served according to the concurrency limits during the -maxQueueDuration.

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10112
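
A minimal sketch of this queueing behavior, using a buffered channel as a counting semaphore (illustrative only, not the actual vmauth implementation):

```
package sketch

import (
	"fmt"
	"time"
)

// concurrencyLimiter lets a request wait up to maxQueueDuration for a free
// concurrency slot instead of rejecting it with 429 immediately.
type concurrencyLimiter struct {
	slots chan struct{} // buffered channel used as a counting semaphore
}

func newConcurrencyLimiter(maxConcurrent int) *concurrencyLimiter {
	return &concurrencyLimiter{slots: make(chan struct{}, maxConcurrent)}
}

// acquire returns nil if a slot was obtained within maxQueueDuration.
func (cl *concurrencyLimiter) acquire(maxQueueDuration time.Duration) error {
	select {
	case cl.slots <- struct{}{}:
		return nil // got a slot immediately
	default:
	}
	t := time.NewTimer(maxQueueDuration)
	defer t.Stop()
	select {
	case cl.slots <- struct{}{}:
		return nil // a slot freed up while waiting in the queue
	case <-t.C:
		return fmt.Errorf("couldn't obtain a concurrency slot in %s", maxQueueDuration)
	}
}

func (cl *concurrencyLimiter) release() { <-cl.slots }
```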
2025-12-22 16:39:01 +01:00
Aliaksandr Valialkin
e7b0a00493 app/vmauth: follow-up for the commit 7f689df824
- Introduce backendURLs struct, which holds all the backend urls and allows stopping
  all the health checkers across all the backend urls with a single call to backendURLs.stopHealthChecks().

- Immediately cancel the pending Dial call to the backend when backendURLs.stopHealthChecks() is called.
  Use lib/netutil.Dialer.DialContext() for this.

- Replace a fragile closing of stopHealthCheckCh channel via stopHealthCheckOnce.Do()
  with easier to maintain call of cancel() func for the corresponding healthChecksContext.

- Wait until health checker goroutines are finished before return from UserInfo.stopHealthChecks().
  Previously the health checker goroutines could run for some time trying to dial the backend
  after the return from UserInfo.stopHealthChecks().

- Try dialing the broken backend for https urls. It is better if the broken backend logs the error
  instead of routing client requests to the broken backend.

- Log dial errors to the broken backend, so users could troubleshoot the backend connectivity issue with more details.

- Refer to the correct issue - https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9997 -
  in the comments explaining why periodic dialing of the broken backend is needed.
  Previously https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9890 was incorrectly referenced.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9997
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10147
2025-12-22 15:20:51 +01:00
Hui Wang
be0fe546e5 vmauth: skip a redundant request if all backends are broken with least_loaded policy (#10202)
similar to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10170
2025-12-22 13:06:12 +01:00
Hui Wang
13911db316 vmauth: add new counters to track the number of user request errors
follow up https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10177

Add `vmauth_user_request_backend_requests_total` and
`vmauth_unauthorized_user_request_backend_requests_total` which track
the number of user request errors, and are aligned with
`vmauth_user_requests_total`.

The existing `vmauth_http_request_errors_total` currently only counts
requests with `invalid_auth_token`. Once authorization has passed, any
subsequent request errors are tracked under
`xxx_user_request_backend_requests_total`.
2025-12-22 13:05:54 +01:00
Artem Fetishev
0cb90f91fc lib/storage: follow-up for d9c07dbc0b (#10169) - fix changelog
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-19 08:44:10 +01:00
Alexander Frolov
bdf65dde88 app/vmagent: make sure vmagent_rows_inserted_total counts samples (#10191)
As vminsert does

4d9b69b5a6/app/vminsert/newrelic/request_handler.go (L68)

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10191
2025-12-18 16:37:37 +01:00
Max Kotliar
4d9b69b5a6 docs/changelog: add known issue note related to memory leak on OpenTelemetry parsing code. 2025-12-18 12:39:12 +02:00
Nikolay
692a9be5fa lib/storage: check indexDB refCount at MustClose
In order to gracefully stop indexDB, refCount must be checked during
storage graceful shutdown.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10063
2025-12-17 18:48:53 +01:00
Kirill Kobylyanskiy
c8742ab120 lib/promscrape: add global sampleLimit support
This commit introduces the global `sampleLimit` setting to restrict the number
of samples accepted per scrape target, mirroring the behavior of
Prometheus.

Motivation:
1) The existing `-promscrape.seriesLimitPerTarget` flag currently takes
precedence over any `sample_limit` setting defined directly on the
scrape target. The new `sampleLimit` implementation ensures that the
target configuration is able to override the global setting, allowing
users to define specific limits per target.
2) The existing series limit flag uses memory-intensive Bloom filters,
resulting in high RAM consumption under high-cardinality scraping
scenarios. The `sampleLimit` provides a much simpler, low-overhead
alternative.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10145
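
A sketch of the precedence and the low-overhead check described above, with hypothetical function names:

```
package sketch

import "fmt"

// effectiveSampleLimit applies the precedence described above: a per-target
// sample_limit, when set, overrides the global sampleLimit.
func effectiveSampleLimit(globalLimit, targetLimit int) int {
	if targetLimit > 0 {
		return targetLimit
	}
	return globalLimit
}

// checkSampleLimit rejects a scrape whose number of parsed samples exceeds
// the effective limit; a limit of 0 disables the check. Unlike the Bloom
// filter based series limiter, this needs only a single counter per scrape.
func checkSampleLimit(samplesScraped, limit int) error {
	if limit > 0 && samplesScraped > limit {
		return fmt.Errorf("scrape contains %d samples, which exceeds sample_limit=%d", samplesScraped, limit)
	}
	return nil
}
```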
2025-12-17 18:47:05 +01:00
Aliaksandr Valialkin
b6f8128273 Makefile: update golangci-lint from v2.4.0 to v2.7.2
See https://github.com/golangci/golangci-lint/releases/tag/v2.7.2
2025-12-17 16:59:02 +01:00
Aliaksandr Valialkin
bed7cbd0a4 all: consistently use encoding.DecompressZSTD* instead of zstd.Decompress* across the codebase
The encoding.DecompressZSTD* consistently updates the vm_zstd_block_decompress_calls_total metric.

Also make the following improvements after the commit 10f7cd2ffc:

- Add encoding.DecompressZSTDLimited() function and use it instead of zstd.DecompressLimited,
  so it properly updates vm_zstd_block_decompress_calls_total metric.

- Clarify description for the encoding.DecompressZSTD* and zstd.Decompress* functions.
2025-12-17 16:48:06 +01:00
Artem Fetishev
d9c07dbc0b lib/storage: rotate dateMetricIDCache instead of resetting (#10169)
Currently, `dateMetricIDCache` is reset when it is full, and it is never
reset when it is not full but the data it stores is no longer needed. This leads
to the following problems:
- During regular data ingestion the cache sizeBytes may exceed max
allowed size and the cache gets reset which may potentially slow down
data ingestion (see #10064)
- The cache is per-indexDB. This means that in partition index (#8134)
there will be as many instances of this cache as the number of
partitions. If someone performs a backfill across all partitions, this
will fill all caches and they will never get reset even if no more
historical data is ingested.

So the solution is to periodically rotate the cache. After the first
rotation the data is not deleted but moved to the `prev` storage. After
the second rotation `prev` gets deleted. This gives the cache an opportunity
to restore the `prev` data if it is still in use. Based on #10167. A sketch of
this rotation is shown below.
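
A minimal sketch of such a two-generation (curr/prev) rotation, assuming a plain map-backed cache (the real dateMetricIDCache stores more state per entry):

```
package sketch

import "sync"

// rotatingCache demotes the current generation to prev on rotate() instead
// of dropping it; entries still in use are promoted back to curr on access,
// while entries unused for two rotations are deleted.
type rotatingCache struct {
	mu   sync.Mutex
	curr map[uint64]struct{}
	prev map[uint64]struct{}
}

func newRotatingCache() *rotatingCache {
	return &rotatingCache{
		curr: make(map[uint64]struct{}),
		prev: make(map[uint64]struct{}),
	}
}

// rotate drops prev and demotes curr to prev.
func (c *rotatingCache) rotate() {
	c.mu.Lock()
	c.prev = c.curr
	c.curr = make(map[uint64]struct{})
	c.mu.Unlock()
}

func (c *rotatingCache) has(k uint64) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.curr[k]; ok {
		return true
	}
	if _, ok := c.prev[k]; ok {
		c.curr[k] = struct{}{} // restore a still-used entry into curr
		return true
	}
	return false
}
```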

This PR also removes the recently introduced
`-storage.cacheSizeIndexDBDateMetricID` flag (see #10135). This should
be safe since it is new and its use case is very niche, i.e. no one
would really use it.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-17 15:43:05 +01:00
Artem Fetishev
20ad9cd395 lib/storage: introduce metricIDCache
The cache serves the same purpose as `dateMetricIDCache` but is used for
caching metricIDs from the global index.
The cache was introduced in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10167; it has been decided to add it in a separate commit to reduce the diff.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10167
2025-12-17 13:31:11 +01:00
Hui Wang
8b3fe9cdec app/vmauth: add new counters to track the number of requests sent to backends
We have `vmauth_user_requests_total` and
`vmauth_unauthorized_user_requests_total` to track requests from the
user side. However, in scenarios such as request timeouts or when the
response code matches `retry_status_code`, a single request may be
retried across multiple backends.

Exposing counters `vmauth_user_request_backend_requests_total` and
`vmauth_unauthorized_user_request_backend_requests_total` that track the
number of requests sent to backends provides insight into the routing
logic and can help identify if requests are being consistently retried,
which may contribute to increased request duration.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10171
2025-12-17 13:27:08 +01:00
Hui Wang
e1e367b3cb app/vmauth: properly increment metric xxx_user_request_backend_errors_total
Currently, backendErrors may be counted twice if a request to the
backend fails due to context.DeadlineExceeded.

9bc7a17d80/app/vmauth/main.go (L328)

9bc7a17d80/app/vmauth/main.go (L294)

And we increment this counter in a way that is somewhat inconsistent.
Given that the counter's name is `xx_request_backend_errors_total`, it
should only increase when a backend request returns an error. This value
can exceed the user request error count if multiple backend requests
fail for a single user request.
The `xxx_request_backend_errors_total` counter should be used in
conjunction with the `xxx_request_backend_requests_total` introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10171.
2025-12-17 13:24:26 +01:00
Hui Wang
f40c6fcad1 app/vmauth: skip a redundant request if all backends are broken with first_available policy
There is no reason to send a request to the first backend if all
backends are marked as broken.
Also, per the code comment:
> // getFirstAvailableBackendURL returns the first available backendURL, which isn't broken.

The fix only skips a redundant request when all backends are
unavailable; it doesn't introduce any changes from the user's perspective,
so the changelog entry is skipped.
2025-12-17 13:22:37 +01:00
Aliaksandr Valialkin
b6bc186013 docs/victoriametrics/Articles.md: add https://developer-friendly.blog/blog/2024/06/17/unlocking-the-power-of-victoriametrics-a-prometheus-alternative/ 2025-12-16 15:46:23 +01:00
Aliaksandr Valialkin
9bc7a17d80 lib/protoparser/opentelemetry: typo fix: wince -> since
This is a follow-up for the commit 293d80910c
2025-12-15 20:13:45 +01:00
f41gh7
9ce548dcb5 docs: update release version to latest 2025-12-15 10:37:35 +01:00
f41gh7
82e583338d docs: update LTS releases 2025-12-15 10:34:43 +01:00
Aliaksandr Valialkin
19009836c7 vendor: update github.com/valyala/fastjson from v1.6.5 to v1.6.7 2025-12-14 23:09:43 +01:00
Max Kotliar
c2362ab670 docs: review links in changelogs 2025-12-12 19:43:15 +02:00
f41gh7
d04a42e846 make vmui-update 2025-12-12 12:50:13 +01:00
f41gh7
0d930dda16 CHANGELOG.md: cut v1.132.0 release 2025-12-12 12:45:34 +01:00
Artem Fetishev
e026215701 lib/storage: Document post-delete cache resets (#10158)
When time series deletion is performed, some of the storage caches
need to be reset and some do not. This PR reviews all storage caches,
documents why they are reset or not, and places all the resetting
logic (and comments) in one place.
2025-12-12 11:11:30 +01:00
JAYICE
34a542c324 lib/storage: include last sample when query at the last millisecond of the day
One millisecond shouldn't be subtracted from `tr.MaxTimestamp`;
related test cases are added.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9804
2025-12-12 11:01:06 +01:00
Fred Navruzov
ff0aaa38b7 docs/vmanomaly: release v1.28.2 (#10160)
Update docs and assets (visualizations) for /anomaly-detection section
with `v1.28.2` release

2025-12-11 20:56:59 +02:00
Max Kotliar
0e2f0ac95f lib/protoparser/opentelemetry: fix typo in code
#
github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/pb
lib/protoparser/opentelemetry/pb/pb.go:1683:19: undefined: lctx

Bug introduced in
1dc71212f8
2025-12-11 18:37:04 +02:00
Max Kotliar
7f689df824 app/vmauth: validate backend with a dial check before marking it healthy (#10147)
### Describe Your Changes

Previously, a backend was considered healthy as soon as its
`bu.brokenDeadline` deadline expired, even if it was still unavailable.
This caused avoidable request failures and retries.

Now vmauth performs a TCP dial (1s timeout) before restoring the backend
to the healthy pool. This avoids routing traffic to backends that are
still down.

The dial check also covers cases where a route to the backend cannot be
resolved. Without this check, user requests would hang until the
connection timeout, leading to long waits or errors. The new check fails
fast and doesn't impact real user requests.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9997
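
A minimal sketch of such a dial check, assuming plain net.DialTimeout (the commit also dials https backends; that part is omitted here):

```
package sketch

import (
	"net"
	"time"
)

// isBackendReachable verifies that a TCP connection to the backend can be
// established before it is returned to the healthy pool, so traffic isn't
// routed to a backend that is still down or unroutable.
func isBackendReachable(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, time.Second)
	if err != nil {
		return false // fail fast; keep the backend out of the pool
	}
	_ = conn.Close()
	return true
}
```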


2025-12-11 18:26:59 +02:00
Max Kotliar
bd725bdd69 dashboards: add useful links to dashboards
Dashboards:

- Add a link to the proper docs section
- Add a link to troubleshooting page
- Add links to community and enterprise support

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9904
2025-12-11 18:11:07 +02:00
Aliaksandr Valialkin
712b7cfeeb lib/promscrape: allow scraping targets with responses equal to c.maxScrapeSize
Return "too big response size" error only for responses bigger than c.maxScrapeSize
(this option can be set either via max_scrape_size option inside scrape config
or via -promscrape.maxScrapeSize command-line flag).

Previously responses with sizes equal to c.maxScrapeSize were incorrectly rejected.
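
The fix is a strict-inequality boundary check; a sketch with illustrative names:

```
package sketch

import "fmt"

// checkScrapeSize accepts a response of exactly maxScrapeSize bytes and
// rejects only strictly bigger responses.
func checkScrapeSize(responseSize, maxScrapeSize int64) error {
	if responseSize > maxScrapeSize { // previously `>=`, which rejected exact matches
		return fmt.Errorf("response size %d bytes exceeds the limit of %d bytes", responseSize, maxScrapeSize)
	}
	return nil
}
```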
2025-12-11 16:15:46 +01:00
Aliaksandr Valialkin
1dc71212f8 lib/protoparser/opentelemetry/pb: reset the decoderContext.ls.Labels length to zero after clearing all the references to the original byte slice
This is a follow-up for 25f49e6f54
2025-12-11 15:39:32 +01:00
Aliaksandr Valialkin
25f49e6f54 lib/protoparser/opentelemetry: explicitly clear all the references to the underlying byte slice at decoderContext.ls.Labels up to its capacity
This should prevent excess memory usage because of dangling source byte slices
referred by decoderContext.ls.Labels.

This is a change similar to 63a68edb05
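
A sketch of the cleanup pattern, assuming a hypothetical Label type holding sub-slices of the request body:

```
package sketch

// Label is a hypothetical label type whose fields reference sub-slices of
// the original request byte slice.
type Label struct {
	Name  []byte
	Value []byte
}

// clearLabels zeroes every entry up to the slice *capacity*, not just up to
// its length, so no element keeps a dangling reference to the source byte
// slice and the GC can reclaim it.
func clearLabels(labels []Label) []Label {
	labels = labels[:cap(labels)]
	for i := range labels {
		labels[i] = Label{}
	}
	return labels[:0]
}
```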
2025-12-11 15:33:13 +01:00
Max Kotliar
dcf9f0eb7b lib/promscrape: Add a warning to active targets panel if -dropOriginalLabels=true (some debug info not available)
Previously the original labels were preserved
(-dropOriginalLabels=false). In
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9772 the default
behavior was changed. Now vmagent/vmsingle drops original labels. The
change created some confusion related to the UI. For example, the debug
relabeling column is completely hidden when the labels are not available.
It created a stream of questions.

This commit adds a warning similar to the one we have at the "Discovered
targets" tab, and also always shows the "Debug relabeling" column. When
there is no info for it, "N/A" is printed.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9901
Follow-up https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9772
2025-12-11 16:24:50 +02:00
Aliaksandr Valialkin
606382178b lib/protoparser/protoparserutil: do not store too big buffers to the pool at ReadUncompressedData if only a small part of the buffer is used last time
This should prevent excess memory usage because of inefficiently used buffers.

This should help the case at https://github.com/VictoriaMetrics/VictoriaLogs/issues/869
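
A sketch of the pooling heuristic, with an assumed utilization threshold (the actual threshold in protoparserutil may differ):

```
package sketch

import (
	"bytes"
	"sync"
)

var bufPool sync.Pool

// putBuffer returns a buffer to the pool only if a reasonable share of its
// capacity was used last time; otherwise it is dropped, so a rare huge
// payload doesn't pin a big buffer in the pool forever.
func putBuffer(b *bytes.Buffer) {
	if b.Cap() > 4*1024 && b.Len() < b.Cap()/4 {
		return // under 25% utilized; let the GC reclaim it
	}
	b.Reset()
	bufPool.Put(b)
}

func getBuffer() *bytes.Buffer {
	if v := bufPool.Get(); v != nil {
		return v.(*bytes.Buffer)
	}
	return &bytes.Buffer{}
}
```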
2025-12-11 15:11:44 +01:00
Artem Fetishev
220249f023 lib/storage: use lrucache to implement tagFilters loops cache
The tagFilters loop cache is per-indexDB which means that currently
there are two instances, one for idbCurr and one for idbPrev. When the
partition index (#8134) is released, there will be as many instances of
this cache as there will be partitions.

The cache is implemented using workingsetcache, which occupies at least
30MB even when unused. Given that only the latest indexDB is used most
of the time, a lot of memory can be wasted.

Therefore the cache implementation is changed to lrucache because it
does not consume memory when it is unused and also has timeout-based
eviction.

This is a follow-up for 4cd727a511
(https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10072).
2025-12-11 08:42:58 +01:00
Max Kotliar
c6731f964c dashboards: add memory usage breakdown panels into Drilldown sections
Right now we have two separate panels: RSS memory % usage and RSS
anonymous memory % usage. This makes trend comparison difficult because
one has to visually correlate two independent panels. Another problem
is that these panels don't show Go runtime allocations at all. The same
applies to memory allocated in C. There are allocations in C (zstd) one
should account for, but there is not even a metric to expose them.

The commit adds a Memory usage breakdown panel into the Drilldown section. It
provides insight into Go Stack, Go Heap, Go Heap Released, Go Other,
Mmap: VM Cache, and File cache memory distribution.

It should help spot trend changes in memory by type or investigate
issues such as #10069 and #10028 more easily.

Panel info:
This panel shows memory usage by category.

How to use:
- Start from the high-level RSS panel.
- Identify an instance with unexpected or abnormal memory growth.
- Filter to that instance to inspect the detailed breakdown here.

Interpretation
- A steadily rising Go Heap usually indicates a memory leak. Collect
pprof memory profile.
- A growing Go Stack commonly points to a goroutine leak.

<img width="1508" height="628" alt="Screenshot 2025-12-08 at 13 18 44"
src="https://github.com/user-attachments/assets/0e794324-e86d-468e-b926-8bb11f5a2043"
/>
<img width="1503" height="674" alt="Screenshot 2025-12-08 at 13 19 34"
src="https://github.com/user-attachments/assets/62fc3fff-33b3-4dfe-ad3f-ad0526a8a606"
/>

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10139
2025-12-11 08:39:00 +01:00
Sinotov Vladimir
859435a8df lib/protoparser: added push data with zabbix connector (#6087)
Support receiving data from the Zabbix connector with API `/zabbixconnector/api/v1/history`

Labels:
    - The metric name is added to the `__name__` label.
    - The host name is added to the `host` label.
    - The visible name is added to the `hostname` label.

The returned response complies with the requirements of the Zabbix
connector [protocol](https://www.zabbix.com/documentation/current/en/manual/config/export/streaming).

Useful links:
- Zabbix Streaming to external systems
(https://www.zabbix.com/documentation/current/en/manual/config/export/streaming)
- Zabbix Newline-delimited JSON export
(https://www.zabbix.com/documentation/current/en/manual/appendix/protocols/real_time_export)

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6087
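
A sketch of the label mapping for one newline-delimited history line; the struct follows the Zabbix real-time export protocol as described above, but treat the exact field set as illustrative:

```
package sketch

import "encoding/json"

// zabbixHistoryItem models one NDJSON line sent by the Zabbix connector.
type zabbixHistoryItem struct {
	Host struct {
		Host string `json:"host"` // technical host name -> `host` label
		Name string `json:"name"` // visible name -> `hostname` label
	} `json:"host"`
	Name  string  `json:"name"` // item name -> `__name__` label
	Clock int64   `json:"clock"`
	Value float64 `json:"value"`
}

// labelsFromZabbixLine maps one history line to the label set described above.
func labelsFromZabbixLine(line []byte) (map[string]string, error) {
	var it zabbixHistoryItem
	if err := json.Unmarshal(line, &it); err != nil {
		return nil, err
	}
	return map[string]string{
		"__name__": it.Name,
		"host":     it.Host.Host,
		"hostname": it.Host.Name,
	}, nil
}
```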
2025-12-10 17:00:27 +01:00
Max Kotliar
5b12fd35d7 app/vminsert: improve slowness-based rerouting logic
Adjust slowness-based rerouting logic.

Rerouting now occurs only from the slowest node, and only if the cluster
as a whole has enough available capacity to handle the additional load.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9890
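
A sketch of the policy under a simplified capacity model (the real vminsert accounting is more involved):

```
package sketch

// shouldReroute reroutes only from the single slowest node, and only when
// the remaining nodes together have spare capacity to absorb its load.
func shouldReroute(nodeIdx, slowestIdx int, load, capacity []float64) bool {
	if nodeIdx != slowestIdx {
		return false // only the slowest node sheds load
	}
	var spare float64
	for i := range load {
		if i == slowestIdx {
			continue
		}
		spare += capacity[i] - load[i]
	}
	// reroute only if the rest of the cluster can take over this node's load
	return spare >= load[slowestIdx]
}
```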
2025-12-10 16:25:44 +01:00
Aliaksandr Valialkin
293d80910c lib/protoparser/opentelemetry: eliminate memory allocations during parsing of samples sent via OpenTelemetry protocol
This increases the parser performance by 4x-6x.

This commit uses the technique similar to https://github.com/VictoriaMetrics/VictoriaLogs/pull/720

goos: linux
goarch: amd64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/stream
cpu: AMD Ryzen 7 PRO 5850U with Radeon Graphics
                                                    │   old.txt    │               new.txt               │
                                                    │    sec/op    │   sec/op     vs base                │
ParseStream/default-metrics-labels-formatting-16      15.565µ ± 1%   2.150µ ± 3%  -86.19% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   24.228µ ± 2%   4.355µ ± 1%  -82.02% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          23.028µ ± 2%   3.395µ ± 1%  -85.26% (p=0.000 n=10)
geomean                                                20.55µ        3.168µ       -84.59%

                                                    │   old.txt    │                new.txt                 │
                                                    │     B/s      │      B/s       vs base                 │
ParseStream/default-metrics-labels-formatting-16      127.9Mi ± 1%    918.3Mi ± 3%  +617.82% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   82.19Mi ± 2%   453.32Mi ± 1%  +451.57% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          86.47Mi ± 2%   581.56Mi ± 1%  +572.52% (p=0.000 n=10)
geomean                                               96.88Mi         623.3Mi       +543.34%

                                                    │   old.txt    │                 new.txt                  │
                                                    │     B/op     │    B/op      vs base                     │
ParseStream/default-metrics-labels-formatting-16      12.53Ki ± 0%   0.00Ki ± 0%  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   21.15Ki ± 1%   0.00Ki ±  ?  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          20.74Ki ± 1%   0.00Ki ±  ?  -100.00% (p=0.000 n=10)
geomean                                               17.65Ki                     ?                       ¹ ²
¹ summaries must be >0 to compute geomean
² ratios must be >0 to compute geomean

                                                    │  old.txt   │                new.txt                 │
                                                    │ allocs/op  │ allocs/op  vs base                     │
ParseStream/default-metrics-labels-formatting-16      426.0 ± 0%    0.0 ± 0%  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   514.0 ± 0%    0.0 ± 0%  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          514.0 ± 0%    0.0 ± 0%  -100.00% (p=0.000 n=10)
geomean                                               482.8                   ?                       ¹ ²
2025-12-10 16:11:59 +01:00
Artem Fetishev
bc4d98b358 app/vmstorage: properly name dateMetricIDCache metrics
The following dmc metrics were given standard names, i.e.:

- vm_date_metric_id_cache_resets_total became
vm_cache_resets_total{type="indexdb/date_metricID"}
- vm_date_metric_id_cache_syncs_total became
vm_cache_syncs_total{type="indexdb/date_metricID"}

This change should be safe since these metrics are currently not used in
VictoriaMetrics Grafana dashboards.

Additionally, other cache metrics were organized within the code so that
they appear in a consistent order.
2025-12-10 14:57:52 +01:00
Alexander Frolov
ad153f72ef lib/storage: utilize persisted hourMetricIDs cache to avoid redundant indexDB lookups after vmstorage restart
This commit optimizes the performance of the storage by improving the utilization of persisted hourMetricIDs cache to avoid redundant indexDB lookups after vmstorage restart. The change refactors the hour-based cache checking logic using a switch statement to handle multiple hour scenarios more efficiently.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10114
2025-12-10 14:56:05 +01:00
Vadim Rutkovsky
f2578a9764 docs/victoriametrics: update LTS-releases.md (#10153)
Doc update to mention fresh patch releases - 1.122.10 and 1.110.25

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-12-10 15:11:26 +02:00
Aliaksandr Valialkin
d5e19717b7 Makefile: use the correct -trim_path at pprof-cpu
It shouldn't end with @.

The `PPROF_FILE=/path/to/cpu.pprof make pprof-cpu` is good for investigating profiles received from production builds.
2025-12-10 13:41:10 +01:00
Max Kotliar
5c40328e5f docs: mention Grafana panel that can help with swap related issues 2025-12-10 14:08:43 +02:00
Yury Moladau
1117437456 app/vmui: improve legend auto-collapse threshold, warning and toggle (#10140)
This PR improves the legend auto-collapse behavior in vmui:
- Increase the legend auto-collapse threshold from `20` to `100` series.
- Add a warning message when the legend is collapsed by default, showing
the actual series count.
- Add a user setting to disable automatic legend collapsing (enabled by
default).

Related issue: #10075

<img width="352" alt="image"
src="https://github.com/user-attachments/assets/22ee2ef9-6369-47a8-87a1-c63a0e17fccd"
/>
<img width="1618" height="197" alt="image"
src="https://github.com/user-attachments/assets/791eb9b6-4397-476d-ad44-5152e50d1975"
/>


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2025-12-10 13:59:16 +02:00
Aliaksandr Valialkin
094a7cf3f9 lib/protoparser/opentelemetry/stream: benchmark cases when prometheus-compatible naming for metrics and labels is enabled 2025-12-10 11:50:46 +01:00
Artem Fetishev
538e489497 docs: Update cache tuning section (#10149)
- Remove mentions of the `Caches` section in Grafana dashboards since this section does not exist anymore.
- Slightly rewrite the description of cache panels in the Troubleshooting section.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-12-10 11:42:33 +01:00
Aliaksandr Valialkin
744aa3fe9f lib/protoparser/opentelemetry/stream: make the BenchmarkParseStream closer to real production cases
- Add more metrics to the protobuf to parse.
- Measure the scan speed of the original protobuf in bytes/sec. Previously the number of ParseStream() calls per second was measured.
2025-12-10 11:21:57 +01:00
Aliaksandr Valialkin
44a3885f97 lib/protoparser/opentelemetry/stream: avoid memory allocations for bytes.NewBuffer() on every iteration of BenchmarkParseStream
Re-use benchReader for reading the same data on every iteration of BenchmarkParseStream.
2025-12-10 11:10:39 +01:00
Aliaksandr Valialkin
f43264f9f2 lib/ioutil: add missing package after the commit 2da010495c 2025-12-10 11:07:15 +01:00
Aliaksandr Valialkin
e07bc7a74e lib/prompb: move all the code related to WriteRequestUnmarshaler to a separate file - write_request_unmarshaler.go
This should improve code maintenance a bit.

This is a follow-up for the commit b98e592752
2025-12-10 10:45:59 +01:00
Aliaksandr Valialkin
d1680063f5 lib/prompb: rename MetricMetadataType to MetricType
Also rename MetricMetadata* constants to MetricType* constants.

This makes the code a bit more readable.

This is a follow-up for the commit 25cd5637bc

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2974
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9306
2025-12-10 01:18:44 +01:00
Aliaksandr Valialkin
2da010495c all: pool io.LimitedReader in order to save a memory allocation and reduce CPU usage a bit 2025-12-10 01:18:43 +01:00
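A minimal sketch of the io.LimitedReader pooling mentioned in the commit above, assuming a plain sync.Pool:

```
package sketch

import (
	"io"
	"sync"
)

var limitedReaderPool = sync.Pool{
	New: func() any { return &io.LimitedReader{} },
}

// getLimitedReader reuses io.LimitedReader objects instead of allocating a
// new one per request.
func getLimitedReader(r io.Reader, n int64) *io.LimitedReader {
	lr := limitedReaderPool.Get().(*io.LimitedReader)
	lr.R = r
	lr.N = n
	return lr
}

func putLimitedReader(lr *io.LimitedReader) {
	lr.R = nil // drop the reference so the wrapped reader can be collected
	lr.N = 0
	limitedReaderPool.Put(lr)
}
```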
Artem Fetishev
7c78f95f2e docs: Update flags (#10148)
Follow-up for dc5d7aa4ce
(https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10135)

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-09 17:48:40 +01:00
Kirill Yurkov
5bd67c5f49 docs: recommend disabling swap (#10113)
add swap disable commands in install recommendations to prevent
performance issues

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-12-09 15:50:13 +02:00
Max Kotliar
c618f471ca apptest: make results order stable in the special query regression test
Sometimes the test fails with the following error:

--- FAIL: TestClusterSpecialQueryRegression (15.57s)
special_query_regression_test.go:76: unexpected /api/v1/export
response (-want, +got):
          &apptest.PrometheusAPIV1QueryResponse{
          	... // 1 ignored field
          	Data: &apptest.QueryData{
          		... // 1 ignored field
          		Result: []*apptest.QueryResult{
          			&{
          				Metric: map[string]string{
          					"__name__":
"prometheus.sensitiveRegex",
- 					"label":
"SensitiveRegex",
+ 					"label":
"sensitiveRegex",
          				},
          				Sample:  nil,
          				Samples: {&{Timestamp:
1707123456700, Value: 10}},
          			},
          			&{
          				Metric: map[string]string{
          					"__name__":
"prometheus.sensitiveRegex",
- 					"label":
"sensitiveRegex",
+ 					"label":
"SensitiveRegex",
          				},
          				Sample:  nil,
          				Samples: {&{Timestamp:
1707123456700, Value: 10}},
          			},
          		},
          	},
          	ErrorType: "",
          	Error:     "",
          	IsPartial: false,
          }

FAIL
FAIL	github.com/VictoriaMetrics/VictoriaMetrics/apptest/tests
	18.676s
FAIL
2025-12-09 15:20:01 +02:00
Artem Fetishev
f62893c151 lib/storage: report per-idb cache stats only once
`tagFiltersCache` and `dateMetricIDCache` are now per-indexDB. Currently
we have 2 instances of indexDB (prev and curr) and therefore 2 instances
of each cache.

When the storage stats are collected, the stats of the individual caches are
added together. For example, if the `sizeMaxBytes` of each
tagFiltersCache is `100MB` and the `sizeBytes` of the instances is
`10MB` and `99MB`, then the resulting stats will be `sizeMaxBytes ==
200MB, sizeBytes == 109MB`.

While this is accurate, these stats hide a potential problem. They say
that the cache utilization is slightly above `50%` (109/200) and
everything seems to be okay. But in reality one of the caches is
utilized by 99% and will soon start evicting existing records to make
room for new ones, potentially slowing down data retrieval. Ops
won't see it and will not take the necessary action.

The solution is to report stats only for one instance of cache whose
utilization is the highest.

Alternatives considered:
- #10123. Might work, but breaks the encapsulation and can potentially
be slower
- Do not aggregate the stats and report them per-indexDB. This increases
the number of metrics and makes it dependent on the number of indexDB
instances (which can be many once #8134 is released).

Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134
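
A sketch of picking the single highest-utilization instance (field names are illustrative):

```
package sketch

type cacheStats struct {
	SizeBytes    uint64
	SizeMaxBytes uint64
}

// maxUtilizationStats reports the stats of the single cache instance with
// the highest utilization instead of summing all instances, so a nearly
// full cache isn't hidden by an idle one.
func maxUtilizationStats(all []cacheStats) cacheStats {
	var best cacheStats
	bestUtil := -1.0
	for _, s := range all {
		if s.SizeMaxBytes == 0 {
			continue
		}
		if u := float64(s.SizeBytes) / float64(s.SizeMaxBytes); u > bestUtil {
			best, bestUtil = s, u
		}
	}
	return best
}
```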
2025-12-09 12:43:50 +01:00
JAYICE
76f5def301 dashboard: fix page fault panel (#10141)
add `[$__rate_interval]` to fix page fault panel introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9977
2025-12-09 12:41:28 +01:00
Artem Fetishev
3be5ed0e32 Revert "lib/storage: after deleting series, reset tsid only once" (#10143)
This reverts commit dbe71700b5.

tsidCache is persistent and must be reset before deletedMetricID records
are added to the index. This is needed to handle ungraceful shutdowns
properly.
2025-12-09 10:57:08 +01:00
Aliaksandr Valialkin
4ac40d955b lib/prompb: use MetricMetadataType type for MetricMetadata.Type field
This eliminates the need of manual conversion between MetricMetadataType and uint32 / int32.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2974

This is a follow-up for the commit 5a587f2006
2025-12-08 20:31:29 +01:00
Artem Fetishev
dc5d7aa4ce lib/storage: properly report dateMetricIDCache stats
A number of changes to `dateMetricIDCache` stats and configuration:

1. Export `SizeMaxBytes` metric and make the size configurable via a
flag
2. Fix `EntriesCount` and `SizeBytes` stats. Previously the cache
reported these stats for its immutable part only, whereas there are cases
when the number of entries in its mutable part is comparable with the
number in the immutable part. The stats from the mutable part remain
invisible until it is sync'ed to the immutable part. It is also possible
that the cache gets reset after the sync because the cache size exceeds
the max allowed size. Reporting the stats for both mutable and immutable
parts should provide a clear picture of the cache utilization.

Together, SizeBytes and SizeMaxBytes should enable tracking the cache
utilization properly, and taking appropriate actions if necessary (such as
adjusting the memory resources and/or the cache size limit via a flag).

Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10064
2025-12-08 14:17:58 +01:00
JAYICE
244769a00d vmstorage: skip last sleep when closing vminsertSrv connections
After closing the last connection to vminsert, vmstorage will still wait for
an interval, causing the actual shutdown time to always be longer than
configured.

This commit just skips the last sleep.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10136
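
The fix amounts to skipping the sleep after the final iteration; a sketch:

```
package sketch

import "time"

// closeConnsGradually closes connections one by one with a pause between
// closings, but skips the pause after the last connection so shutdown
// doesn't overshoot the configured interval.
func closeConnsGradually(closeFuncs []func(), interval time.Duration) {
	for i, closeConn := range closeFuncs {
		closeConn()
		if i == len(closeFuncs)-1 {
			break // nothing left to close; skip the final sleep
		}
		time.Sleep(interval)
	}
}
```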
2025-12-08 14:10:17 +01:00
Max Kotliar
8e81d54851 Revert "dashboards: add memory usage breakdown panels into Drilldown sections"
This reverts commit 5117cde8bc.
2025-12-08 13:42:10 +02:00
Max Kotliar
5117cde8bc dashboards: add memory usage breakdown panels into Drilldown sections
Right now we have two separate panels: RSS memory % usage and RSS
anonymous memory % usage. This makes trend comparison difficult because
one has to visually correlate two independent panels. Another problem
is that these panels don't show Go runtime allocations at all. The same
applies to memory allocated in C. There are allocations in C (zstd) one
should account for, but there is not even a metric to expose them.

The commit adds a Memory usage breakdown panel into the Drilldown section. It
provides insight into Go Stack, Go Heap, Go Heap Released, Go Other,
Mmap: VM Cache, and File cache memory distribution.

It should help spot trend changes in memory by type or investigate
issues such as #10069 and #10028 more easily.

Panel info:
This panel shows memory usage by category.

How to use:
- Start from the high-level RSS panel.
- Identify an instance with unexpected or abnormal memory growth.
- Filter to that instance to inspect the detailed breakdown here.

Interpretation
- A steadily rising Go Heap usually indicates a memory leak. Collect
pprof memory profile.
- A growing Go Stack commonly points to a goroutine leak.
2025-12-08 13:39:34 +02:00
Artem Fetishev
85367cae38 Idb blockcache metrics unittest (#10050)
indexDB has 3 block caches. These caches export metrics. Storage collects
these metrics for each indexDB it has (currently prev and curr only).

There is a potential problem:
- These caches are shared by all indexDBs
- Each indexDB reports the block cache metrics.
- Storage collects the metrics of all indexDBs by adding them together.

I.e. it is possible to count block cache metrics several times.
This is not the case in the current implementation because the addition
of the metrics is intentionally not performed.

The added unit test 1) demonstrates that the resulting counts are reported
correctly and 2) protects from future unintentional changes in this
behavior.

Additionally, a code comment is added to explain why block cache metrics
are not summed up.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-06 18:14:52 +01:00
Aliaksandr Valialkin
159b71cabb lib/protoparser/influx: properly clean references to underlying byte slices from tagsPool and fieldsPool inside unmarshalContext
This should prevent memory leaks when unmarshalContext fields point to unused byte slices.
2025-12-06 11:52:57 +01:00
Aliaksandr Valialkin
78b8c773ae docs/victoriametrics/: remove misleading statement about extending ext4 partition to 16TB+
It is enough to recommend the given format options for disks with 1TB+ sizes
2025-12-05 23:00:47 +01:00
Nikolay
aab92d3c0f protoparser/influx: reduce memory allocation (#10109)
Previously, the influx parser allocated a new byte slice for
unescaping Row fields. This adds extra pressure on the GC and increases CPU
usage.

 This commit changes unescaping to in-place updates of the provided []byte.
Since the request for parsing is actually a []byte converted into a
string, it's safe to update it in-place. To be able to interact with
[]byte directly, this commit changes the parser API to accept []byte
instead of string.

Benchstat:
```
                                 │   before    │                after                │
                                 │   sec/op    │   sec/op     vs base                │
RowsUnmarshalUnescape-10           74.68n ± 4%   54.23n ± 5%  -27.38% (p=0.000 n=10)
RowsUnmarshalUnescapeNoEscape-10   40.41n ± 2%   42.59n ± 1%   +5.39% (p=0.000 n=10)
geomean                            54.93n        48.06n       -12.51%

                                 │    before    │                after                 │
                                 │     B/s      │     B/s       vs base                │
RowsUnmarshalUnescape-10           1.035Gi ± 4%   1.425Gi ± 5%  +37.72% (p=0.000 n=10)
RowsUnmarshalUnescapeNoEscape-10   1.613Gi ± 2%   1.531Gi ± 1%   -5.11% (p=0.000 n=10)
geomean                            1.292Gi        1.477Gi       +14.32%

                                 │   before    │                after                 │
                                 │    B/op     │    B/op     vs base                  │
RowsUnmarshalUnescape-10           149.00 ± 0%   96.00 ± 0%  -35.57% (p=0.000 n=10)
RowsUnmarshalUnescapeNoEscape-10    80.00 ± 0%   80.00 ± 0%        ~ (p=1.000 n=10) ¹
geomean                             109.2        87.64       -19.73%
¹ all samples are equal

                                 │   before   │                after                 │
                                 │ allocs/op  │ allocs/op   vs base                  │
RowsUnmarshalUnescape-10           5.000 ± 0%   1.000 ± 0%  -80.00% (p=0.000 n=10)
RowsUnmarshalUnescapeNoEscape-10   1.000 ± 0%   1.000 ± 0%        ~ (p=1.000 n=10) ¹
geomean                            2.236        1.000       -55.28%
```

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10053
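
A sketch of in-place unescaping for influx-style escapes (`\,`, `\=`, `\ `); the exact escape set here is illustrative:

```
package sketch

// unescapeInPlace collapses backslash escapes within the provided buffer
// instead of allocating a new byte slice, and returns the shortened slice.
// It is safe because the caller owns the underlying bytes.
func unescapeInPlace(b []byte) []byte {
	n := 0
	for i := 0; i < len(b); i++ {
		if b[i] == '\\' && i+1 < len(b) {
			switch b[i+1] {
			case ',', ' ', '=':
				i++ // drop the backslash, keep the escaped character
			}
		}
		b[n] = b[i]
		n++
	}
	return b[:n]
}
```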

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-12-05 18:18:07 +01:00
Nikolay
2bef26288e lib/memory: add validation for remaining system memory
Previously, if the user-defined value for the `memory.allowedBytes` flag
exceeded the system memory limit, the remaining memory could take a negative
value. This results in incorrect memory auto-detect calculations for
various components, such as the vmstorage unique timeseries limit and part
sizes.

 This commit adds a negative value check, and also logs the system memory
limit at start-up of VM components.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10083
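
A sketch of the clamp described above (function name and log wording are illustrative):

```
package sketch

import "log"

// remainingMemory clamps the remaining system memory to zero when the
// allowed bytes exceed the system limit, instead of propagating a negative
// value into downstream auto-detected limits, and logs the system limit.
func remainingMemory(systemMemory, allowedBytes int64) int64 {
	log.Printf("system memory limit: %d bytes", systemMemory)
	remaining := systemMemory - allowedBytes
	if remaining < 0 {
		log.Printf("-memory.allowedBytes=%d exceeds the system memory limit %d; assuming no remaining memory", allowedBytes, systemMemory)
		return 0
	}
	return remaining
}
```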
2025-12-05 18:14:04 +01:00
Hui Wang
c14dbad33b vmselect: disable rollup result cache for instant queries that contain rate function
Previously, in order to cache results for `rate`, we considered
`rate(m[d])` as `(increase(m[d]) / d)` and cached the `increase` result.
However, in MetricsQL, `rate(d) = (lastValue - firstValue) /
(lastTimestamp - firstTimestamp)`, so it does not equal
`increase(d)/d` if `d != (lastTimestamp - firstTimestamp)`.
Although the issue primarily arises when the time series samples are not
continuous, the discrepancy is hard to debug and can be confusing to
users. The range query doesn't use this optimization, causing
recording rule results to differ from raw query results in VMUI.
Therefore, it is better to disable the usage and only enable it when we
can cache it correctly.

fixes https://github.com/VictoriaMetrics/victoriaMetrics/issues/10098
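
A small worked example of the discrepancy on a window d=60s that contains samples only at t=10s (v=0) and t=20s (v=5): MetricsQL rate = (5-0)/(20s-10s) = 0.5/s, while increase(d)/d = (5-0)/60s ≈ 0.083/s. A sketch:

```
package sketch

// rateVsIncrease computes both definitions for the example above; the two
// agree only when the samples span the full window d.
func rateVsIncrease() (rate, increaseOverD float64) {
	firstTs, lastTs := 10.0, 20.0 // sample timestamps in seconds
	firstV, lastV := 0.0, 5.0
	d := 60.0 // window duration in seconds
	rate = (lastV - firstV) / (lastTs - firstTs) // 0.5 per second
	increaseOverD = (lastV - firstV) / d         // ~0.083 per second
	return rate, increaseOverD
}
```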
2025-12-05 17:38:32 +01:00
Artem Fetishev
dbe71700b5 lib/storage: after deleting series, reset tsid only once
As indexDBs became independent from each other, the tsidCache is now
reset more than once when the DeleteSeries() operation is performed. But
it needs to be performed only once. Thus, move the deletion from indexDB
to Storage.

Follow-up for 16d75ab0bd.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10119
2025-12-05 17:38:02 +01:00
Hui Wang
d4fa326659 vmselect: reset rollup result cache with -search.disableCache when necessary
There’s no need to call `c.Reset()` for rollup result cache if it’s not
persisted (`-cacheDataPath` not specified) or has already been cleared by
`-search.resetRollupResultCacheOnStartup`, as it is already newly
created.


Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10095
2025-12-05 17:37:30 +01:00
Andrei Baidarov
040ef931d1 vmalert: do not increment errors counter on cancel context errors
Follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10027

`vmalert_alerting_rules_errors_total` increments on any error


445f30a4a6/app/vmalert/rule/alerting.go (L455-L460)

while `vmalert_execution_errors_total` only on non-cancellation ones


445f30a4a6/app/vmalert/rule/group.go (L747-L756)

This commit ignores cancellation errors in
`vmalert_alerting_rules_errors_total` too
2025-12-05 17:36:42 +01:00
JAYICE
474009a7f1 dashboard: add page faults panel for vmsingle&vmcluster (#9977)
Add a page fault panel in the `Troubleshooting` section for vmcluster and
vmsingle. Fix
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9974

The query
```
sum(rate(process_minor_pagefaults_total{job=~"$job", instance=~"$instance"})) by (job,instance)

sum(rate(process_major_pagefaults_total{job=~"$job", instance=~"$instance"})) by (job,instance)
```

<img width="1088" height="306" alt="image"
src="https://github.com/user-attachments/assets/4b4ac884-5372-4141-a429-ac0b296dc926"
/>
2025-12-05 18:04:44 +02:00
Nikolay
1b1442d91b app/vmgateway: properly handle proxy request errors
Previously vmgateway didn't handle the http.Abort error.
It could lead to an unexpected panic at the webserver.

This commit adds panic recovery and prevents the app from crashing.
2025-12-05 16:32:50 +01:00
Aliaksandr Valialkin
3e359dc920 lib/protoparser/influx: remove IgnoreErrors field from Rows and replace it with the explicit skipInvalidLines arg at Rows.Unmarshal()
This improves the maintainability of the code, since the caller of Rows.Unmarshal() always knows
whether invalid lines must be skipped.

While at it, add missing error checks returned from Rows.Unmarshal().

This is a follow-up for the commit daa7183749

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7090
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7165
2025-12-05 16:24:54 +01:00
Hui Wang
e41f642a59 add flag description for -selectNode (#10022) 2025-12-05 14:53:06 +02:00
Artem Fetishev
7a2cc7fbad lib/storage: use deadline instead of is.deadline
This makes SearchTSIDs() consistent with SearchMetricNames().

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-05 02:08:14 +01:00
Aliaksandr Valialkin
a7b99dd164 vendor: update github.com/VictoriaMetrics/easyproto from v0.1.4 to v1.0.0 2025-12-04 21:47:20 +01:00
Andrii Chubatiuk
647f107576 vmui: always add /prometheus prefix while generating backend url 2025-12-04 18:09:47 +02:00
dependabot[bot]
04f8296c85 build(deps-dev): bump js-yaml from 4.1.0 to 4.1.1 in /app/vmui/packages/vmui (#10017)
Bumps [js-yaml](https://github.com/nodeca/js-yaml) from 4.1.0 to 4.1.1.

Changelog highlights for 4.1.1 (2025-11-12):
- Security: fix prototype pollution issue in the yaml merge (<<) operator.

Full diff: https://github.com/nodeca/js-yaml/compare/4.1.0...4.1.1

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-04 17:42:17 +02:00
dependabot[bot]
1c3e64e9ad build(deps): bump actions/checkout from 4 to 6 (#10082)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 6.

Highlights from the upstream release notes and changelog:
- v6.0.0: persist credentials to a separate file under `$RUNNER_TEMP` instead of the local git config (actions/checkout#2286); requires Actions Runner v2.329.0 or newer to access the persisted credentials in Docker container action scenarios.
- v5.0.0: run the action on Node.js 24 (actions/checkout#2226); requires Actions Runner v2.327.1 or newer.
- v5.0.1 / v4.3.1: port the v6 cleanup to v5 and v4 (actions/checkout#2301, actions/checkout#2305).
- v4.3.0 and earlier: documentation and dependency updates.

Full changelog: https://github.com/actions/checkout/compare/v4...v6

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-04 17:40:42 +02:00
Hui Wang
4212491031 vmalert: clarify templating in alerting rule labels (#10121)
follow up
38dd971f58.

Labels support only a limited set of templating variables (see
https://docs.victoriametrics.com/victoriametrics/vmalert/#templating),
including `$labels`, `$value` and `$expr`, to avoid breaking alert states
or causing cardinality issues with the results.
2025-12-04 17:35:27 +02:00
Zakhar Bessarab
f76bc956ca app/vmctl: respect context cancellation during user prompts
Previously, context cancellation was ignored while reading the user's
response to the prompt. This led to "Ctrl+C" and other termination
signals being ignored by vmctl until the user finished the input.

Fix that by properly propagating the context and respecting its
cancellation.
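
A minimal Go sketch of the pattern described above (not vmctl's actual code; the names are illustrative): the blocking stdin read runs in a goroutine so the caller can select on the context and react to Ctrl+C immediately.

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"os/signal"
)

// prompt reads one line from stdin while respecting ctx cancellation.
func prompt(ctx context.Context, question string) (string, error) {
	fmt.Print(question)
	ch := make(chan string, 1) // buffered so the reader goroutine never blocks on send
	go func() {
		s := bufio.NewScanner(os.Stdin)
		if s.Scan() {
			ch <- s.Text()
		}
	}()
	select {
	case <-ctx.Done():
		return "", ctx.Err() // Ctrl+C or another termination signal wins over the blocked read
	case answer := <-ch:
		return answer, nil
	}
}

func main() {
	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt)
	defer cancel()
	answer, err := prompt(ctx, "Continue? [y/N] ")
	if err != nil {
		fmt.Fprintln(os.Stderr, "aborted:", err)
		return
	}
	fmt.Println("got:", answer)
}
```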


Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-12-04 15:57:31 +04:00
Aliaksandr Valialkin
655074c3e0 lib/protoparser/opentelemetry/pb: remove code related to parsing logs in OTEL format
This code is no longer needed after the commit 4ffb74448d

See https://github.com/VictoriaMetrics/VictoriaLogs/pull/720
2025-12-04 00:49:54 +01:00
Aliaksandr Valialkin
5e95fdf23e docs/victoriametrics/FAQ.md: add a link to the guide on how to calculate the needed disk space for VictoriaLogs in the "why is the indexdb size so large?" chapter
This is a follow-up for 68f670cbc5
2025-12-03 15:51:40 +01:00
Aliaksandr Valialkin
ffcfb74b17 deployment: update Go builder from v1.25.4 to v1.25.5
See https://github.com/golang/go/issues?q=milestone%3AGo1.25.5%20label%3ACherryPickApproved
2025-12-03 15:20:11 +01:00
Max Kotliar
fe803bfc6e Capitalize titles in operator.json
Signed-off-by: d3spair <git@agrshv.dev>
2025-12-03 13:43:39 +02:00
Andrii Chubatiuk
8ee466ab06 dashboard: add panels for operator flags and global params 2025-12-03 13:28:59 +02:00
Sylvain Rabot
6ca48d5025 lib/vmbackup/s3backup: support custom SSE KMS key id and ACL
Add more S3 configuration options.

- SSES3KeyID allows pushing to a bucket that is in a different account
than the KMS key used to encrypt the data server-side.
- ACL allows configuring which permissions are granted to objects
uploaded to the bucket (useful when the bucket policy expects a given
permission such as `bucket-owner-full-control`).
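
For illustration, a hedged sketch of the same two options expressed via aws-sdk-go (this is not vmbackup's code; the bucket, key and KMS ARN are made up):

```go
package main

import (
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)
	_, err := svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String("my-backups"),       // hypothetical bucket
		Key:    aws.String("vm/backup/part-0"), // hypothetical key
		Body:   aws.ReadSeekCloser(strings.NewReader("backup payload")),
		// Encrypt server-side with a customer-managed KMS key,
		// possibly owned by another account.
		ServerSideEncryption: aws.String("aws:kms"),
		SSEKMSKeyId:          aws.String("arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"),
		// Grant the bucket owner full control over the uploaded object.
		ACL: aws.String("bucket-owner-full-control"),
	})
	if err != nil {
		panic(err)
	}
}
```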

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Andrii Chubatiuk <andrew.chubatiuk@gmail.com>
2025-12-03 10:06:57 +04:00
Fred Navruzov
70eb9d39d5 docs/vmanomaly: release v1.28.1 (#10111)
### Describe Your Changes

Updates of docs and examples to `vmanomaly` v1.28.1

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-12-02 21:08:17 +02:00
Zakhar Bessarab
1985c79a4d deployment: update references to the latest release 2025-12-01 21:16:14 +04:00
Zakhar Bessarab
f0dafacfd3 docs: update references to the latest release 2025-12-01 21:15:13 +04:00
Zakhar Bessarab
6c01f5d50f docs/changelog: backport LTS changelogs 2025-12-01 20:44:43 +04:00
f41gh7
84658e77da docs/changelog: sort changelog entries
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-12-01 11:11:54 +01:00
Zakhar Bessarab
4dc32ff1d7 app/vminsert/netstorage: fix list of nodes used for SD
Previously, vminsert was using the original list of addrs instead of
the discovered addrs. Properly use the discovered list of addrs.
2025-12-01 11:11:53 +01:00
Artem Fetishev
08a1b2e75c lib/lrucache: do not reset requests and misses after cache reset
Follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10072.

Do not reset the requests and misses metrics, since a cache reset
implies resetting only the storage.
2025-12-01 10:12:47 +01:00
Zakhar Bessarab
7e5b68fc1f docs/changelog: cut v1.131.0 2025-11-28 20:20:08 +04:00
Zakhar Bessarab
dcc130603c docs: update available from tags 2025-11-28 20:13:42 +04:00
Zakhar Bessarab
9842ad2299 app/vmselect: run make vmui-update 2025-11-28 20:01:08 +04:00
Aliaksandr Valialkin
63c0cf673f Makefile: generate quicktemplate output files only at lib and app directories
Previously the output files were incorrectly generated inside unexpected directories such as vendor
2025-11-28 16:07:22 +01:00
Nowa Ammerlaan
7f51bb4ce7 protoparser/influx: account for excess white spaces before timestamp
Some influx clients (such as the nimon monitoring client) add excess whitespace at the end of the influx line without setting a
timestamp. Since the Influx protocol requires whitespace before the timestamp only when the timestamp is set, a line may contain trailing whitespace without a timestamp, which confuses the parser.

This commit adds a check for the skipped timestamp and a test case for it.
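
A minimal sketch of the parsing idea (assumed, not the actual protoparser code): trim trailing spaces first, and only then decide whether a timestamp follows the fields section.

```go
package main

import (
	"fmt"
	"strings"
)

// splitFieldsAndTimestamp returns the fields part and the optional
// timestamp part of an Influx line protocol tail such as
// "value=42 1700000000000" or "value=42   " (trailing spaces, no timestamp).
func splitFieldsAndTimestamp(tail string) (fields, timestamp string) {
	tail = strings.TrimRight(tail, " ") // excess trailing whitespace must not imply a timestamp
	if n := strings.LastIndexByte(tail, ' '); n >= 0 {
		return tail[:n], tail[n+1:]
	}
	return tail, ""
}

func main() {
	f, ts := splitFieldsAndTimestamp("value=42   ")
	fmt.Printf("fields=%q timestamp=%q\n", f, ts) // fields="value=42" timestamp=""
}
```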

Fixes: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10049
2025-11-28 14:36:35 +01:00
Nikolay
38df52ea08 app/vmselect: improve performance for multi-level requests
Previously, the proxy vmselect (aka 1st-level vmselect) parsed each
MetricBlock received from vmstorage before forwarding it to the top-level vmselect. This required additional CPU and memory, which greatly slowed down query requests.

This commit changes the lib/vmselectapi iterator API: instead of a MetricBlock, it returns the encoded MetricBlock as a byte slice.
This saves CPU and memory at the proxy vmselect by eliminating the need to decode the MetricBlock received from storage.

In addition, it adds the following optimizations for the proxy vmselect:
* reduce memory allocations by using an iterator pool
* add a per-storageNode workerItem for the iterator

It also optimizes vmstorage: it no longer performs an extra memory copy of the MetricName for each MetricBlock.

The vmselect and vmstorage metrics vm_vmselect_metric_rows_read_total and vm_metric_rows_read_total were removed, since they are not used in any dashboards or rules and the new iterator API doesn't support them.
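
A hypothetical sketch of the idea behind the new iterator API (type and method names are assumed, not the real lib/vmselectapi definitions): the proxy treats each MetricBlock as an opaque encoded byte slice and forwards it without decoding.

```go
package main

import "fmt"

// blockIterator yields already-encoded MetricBlocks.
type blockIterator struct {
	blocks [][]byte
	i      int
}

// nextBlock returns the next encoded MetricBlock without decoding it.
func (it *blockIterator) nextBlock() ([]byte, bool) {
	if it.i >= len(it.blocks) {
		return nil, false
	}
	b := it.blocks[it.i]
	it.i++
	return b, true
}

func main() {
	it := &blockIterator{blocks: [][]byte{[]byte("block-1"), []byte("block-2")}}
	for b, ok := it.nextBlock(); ok; b, ok = it.nextBlock() {
		// Forward the encoded bytes as-is to the top-level vmselect.
		fmt.Printf("forwarding %d bytes\n", len(b))
	}
}
```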

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9899
2025-11-28 13:04:55 +01:00
Max Kotliar
023a13435c dashboards: make dashboards-sync 2025-11-27 16:52:45 +02:00
Max Kotliar
1ddcbed6d7 dashboards: Show "Disk space usage % by type" as stacked graph in Cluster dashboard. (#10089)
### Describe Your Changes

VictoriaMetrics - cluster dashboard.

vmstorage -> Disk space usage % by type pane.

Switch panel to 100% stacked view to show space distribution.

The goal is to highlight how space is split between datapoints and
indexdb types; simple time-series values made this hard to see. A 100%
stacked layout makes the distribution immediately visible.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9932

was: [screenshot omitted]

now: [screenshot omitted]


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-27 16:50:15 +02:00
Aliaksandr Valialkin
edd02cdb5b docs/victoriametrics/goals.md: clarify that bugs, which affect a small number of users at rare edge cases, can be fixed later 2025-11-27 14:29:17 +01:00
Artem Fetishev
4cd727a511 lib/storage: use lrucache for tfss cache (#10072)
The purpose of this PR is the same as #10000, except `lrucache` is used
for implementing tfss cache.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-27 14:18:03 +01:00
Andrii Chubatiuk
19c0477976 chore(app/vmui): conditionally render accordion children (#10068)
### Describe Your Changes

revert the change that was introduced in
483e00ffb9,
since rendering all nested children significantly impacts alerting
tab performance when there are many items.
@Loori-R @arturminchukov, what do you think about additionally using
react-virtuoso for the alerting tab to decrease DOM size?

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-27 14:31:34 +02:00
Ben Randall
4fdd8f0906 lib/protoparser/opentelemetry: use separate loggers for unsupported delta temporality/metric type logs (#10021)
A throttled logger will continue to log messages occasionally with a
suffix indicating how many similar logs were throttled. Using the same
logger for multiple log messages can result in certain logs being
entirely suppressed and invisible in the logs. This updates most of the
loggers used in `appendFromScopeMetrics` to use their own logger so that
"unsupported delta temporality/metric type" logs will be visible for all
metric types. Additionally, `skippedSampleLogger` is only used by
`appendSamplesFromHistogram`, so it was moved closer to that function.

Related to #9447
Related to #9498

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2025-11-27 14:19:43 +02:00
Andrii Chubatiuk
9897872ca9 lib/flagutil: clarify usage of quotes in array flag values 2025-11-27 14:17:07 +02:00
Hui Wang
b8bbb07431 dashboard: tidy vmauth panels (#10088)
before: [screenshot omitted]
after: [screenshot omitted]
which is more consistent with other dashboards.
2025-11-27 14:12:53 +02:00
Max Kotliar
eb1c8dd67d docs: add links to issues in changelog 2025-11-27 14:09:02 +02:00
Aliaksandr Valialkin
50fc48ac47 lib/fs: avoid Go runtime stalls on Linux when all the GOMAXPROCS threads are blocked in major pagefaults while reading the data from memory-mapped files
Go runtime executes all the goroutines on GOMAXPROCS operating system threads.
Go runtime cannot switch the OS thread to another goroutine if the current goroutine
is stuck in the major pagefault while reading the data from memory-mapped file,
because Go runtime doesn't distinguish between reading from regular memory and reading
from memory-mapped file. So the OS thread becomes stuck while waiting until the OS
reads the data from file at the requested memory address and returns back control to Go application.

In the worst case it is possible that all the GOMAXPROCS threads are stuck in major pagefaults,
so Go runtime pauses executing all the goroutines. This state is possible in environments
with small GOMAXPROCS and high-latency disks such as NFS or small HDD-based disks at AWS.

See https://valyala.medium.com/mmap-in-go-considered-harmful-d92a25cb161d for more details.

This commit protects from such stalls by verifying whether the given memory location from memory-mapped file
is already loaded in the OS page cache before reading from that memory.
If the location isn't in the OS page cache, then it falls back to pread() syscall for reading the data from file.
Go runtime allocates extra OS threads for long-running syscalls, so it can continue executing goroutines
across all the GOMAXPROCS threads while reading the data from slow storage via pread() syscall.

This commit uses mincore() syscall for detecting whether the given memory page is available in the OS page cache.
It also caches mincore() results for up to a minute in order to reduce the overhead for the mincore() syscall.

This commit reduces the increase rate for the process_major_pagefaults_total metric by multiple orders of magnitude
on systems with high-latency disks.
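
A minimal sketch of the technique described above (assumptions: Linux, golang.org/x/sys/unix; this is not the actual lib/fs code): use mincore() to check whether the memory-mapped region is resident in the OS page cache, and fall back to pread() when it is not, so Go can park the goroutine in a syscall instead of stalling an OS thread on a major page fault.

```go
package main

import (
	"os"

	"golang.org/x/sys/unix"
)

const pageSize = 4096

// isInPageCache reports whether all pages backing b are resident in the
// OS page cache according to mincore(2).
func isInPageCache(b []byte) bool {
	if len(b) == 0 {
		return true
	}
	vec := make([]byte, (len(b)+pageSize-1)/pageSize)
	if err := unix.Mincore(b, vec); err != nil {
		return false
	}
	for _, v := range vec {
		if v&1 == 0 { // low bit clear => page isn't resident
			return false
		}
	}
	return true
}

// readAt reads len(dst) bytes at off, preferring the mmapped region when
// it is already cached and falling back to pread() otherwise.
// The caller must ensure off+len(dst) fits into mmapped.
func readAt(f *os.File, mmapped []byte, dst []byte, off int64) error {
	src := mmapped[off : off+int64(len(dst))]
	if isInPageCache(src) {
		copy(dst, src) // fast path: no major page fault expected
		return nil
	}
	_, err := f.ReadAt(dst, off) // pread(2): blocks one thread, not the Go runtime
	return err
}

func main() {
	f, err := os.Open("/etc/hostname")
	if err != nil {
		return
	}
	defer f.Close()
	st, _ := f.Stat()
	data, err := unix.Mmap(int(f.Fd()), 0, int(st.Size()), unix.PROT_READ, unix.MAP_SHARED)
	if err != nil {
		return
	}
	defer unix.Munmap(data)
	dst := make([]byte, len(data))
	_ = readAt(f, data, dst, 0)
	os.Stdout.Write(dst)
}
```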
2025-11-26 20:52:27 +01:00
Artem Fetishev
3bd9c75acc lib/lrucache: use uint64 for SizeBytes() and SizeMaxBytes() (#10077)
Currently, `lrucache.Cache` `SizeBytes()` and `SizeMaxBytes()` return
type is `int`. The cache `Entry.SizeBytes()` also returns `int` value.
Changing the type to `uint64` will allow using `uint64set.Set` as the
cache entry type (see #10072).

Please note that using `uint64` regardless of the CPU architecture is
not entirely correct: on 32-bit systems the size will never exceed
`2^32`, so `uint64` is more than needed. However, the current type
(`int`) is not correct either, since it is signed and only allows
storing values up to `2^31`. Alternatively, all `SizeBytes()`
methods could return `uint`.
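
A hypothetical sketch of the size-accounting change (the interface shape is assumed, not the real lrucache API): cache entries report their size as `uint64`, so implementations such as `uint64set.Set` can be stored without hitting the signed-int limit on 32-bit platforms.

```go
package main

import "fmt"

// Entry is anything the cache can hold and account for by size.
type Entry interface {
	SizeBytes() uint64
}

type blob []byte

func (b blob) SizeBytes() uint64 { return uint64(len(b)) }

func main() {
	var e Entry = blob(make([]byte, 1024))
	var total uint64 // was int; uint64 avoids the 2^31 signed limit
	total += e.SizeBytes()
	fmt.Println(total)
}
```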

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-26 11:39:07 +01:00
Max Kotliar
9c0683f8d1 dashboards: run make dashboards-sync 2025-11-25 20:13:06 +02:00
Max Kotliar
bf4660912f .github: Add changelog tip linter 2025-11-25 13:41:48 +02:00
Yury Molodov
bb54b5e661 app/vmui: improve alert styles for better readability (#10012)
### Describe Your Changes

This PR improves vmui alert styles by adding borders between rows,
introducing a hover state for easier row identification, and aligning
badges to the left.

Related issue: #9856

| Before | After |
|--------|--------|
| [screenshot omitted] | [screenshot omitted] |

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-11-25 13:38:24 +02:00
Andrii Chubatiuk
200a729565 app/vmui: fixed ability to select multiple metrics in explore metrics tab (#10008)
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9995

a change in only the `Select` component leads to an infinite
ExploreMetricsGraphItem component refresh, since the array gets a
new reference each time

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-25 13:29:45 +02:00
Yury Molodov
7303495ae1 app/vmui: fix rendering of multiple points at the same timestamp (#10010)
### Describe Your Changes

1. Removed the *step* control from the **Raw Query** page, as it didn’t
affect chart rendering and caused confusion.
2. Fixed rendering of multiple points with the same timestamp -
previously, the second point was hidden.
3. Added proper visualization for points with the same timestamp and
identical values: such points are now shown as a square, and the tooltip
displays the number of duplicates.

**Example:**

```json
{
  "values": [1, 22, 10, 10, 5, 6],
  "timestamps": [
    1761955247950,
    1761955247950,
    1761955248960,
    1761955248960,
    1761955251980,
    1761955252990
  ]
}
```

[screenshots omitted]

Related issues: #9667 and #9666

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2025-11-25 11:00:00 +02:00
Andrei Baidarov
98b5288e9c vmselect: do not immediately fail request if vmstorage returns search.maxConcurrentRequests error (#10030)

If `vmstorage` is overloaded, it can return a maxConcurrentRequests
error. Previously, `vmselect` immediately failed the whole request even
if `replicationFactor` was set and other replicas could respond without
errors.

This PR treats them as regular errors, not fatal ones.

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-24 20:37:37 +02:00
Cancai Cai
d7f9cd971d docs/notes: fix syntax errors (#10019)
### Describe Your Changes

I'm not sure if this is a mistake.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: cancaicai <2356672992@qq.com>
2025-11-24 20:37:05 +02:00
cancaicai
2cb08095c6 docs/storage: fix typo
Signed-off-by: cancaicai <2356672992@qq.com>
2025-11-24 15:46:40 +02:00
dependabot[bot]
8bc41f4c79 build(deps): bump golang.org/x/crypto from 0.43.0 to 0.45.0 (#10052)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from
0.43.0 to 0.45.0.

Notable upstream commits:
- ssh: curb GSSAPI DoS risk by limiting the number of specified OIDs
- ssh/agent: prevent panic on malformed constraint
- acme/autocert: let automatic renewal work with short-lifetime certs
- acme: pass context to request
- ssh: fix error message on unsupported cipher
- ssh: allow binding to a hostname in remote forwarding

Full diff: https://github.com/golang/crypto/compare/v0.43.0...v0.45.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 15:13:50 +02:00
Zhu Jiekun
bb1e0d8f3b opentsdb: Avoid blocking when a connection doesn't send anything (#10045)
### Describe Your Changes

fix #9987 

Avoid blocking when a connection to `-opentsdbListenAddr` doesn't send
any data. This issue blocked other connections from being handled.

> This bug can be tested with:
> 1. Start VictoriaMetrics Single-node with `-opentsdbListenAddr=:4242`.
> 2. Run: `telnet 127.0.0.1 4242` without typing any data after
connection established.
> 3. Run (in another terminal, after step 2): `curl -H 'Content-Type:
application/json' -d
'{"metric":"x.y.z","value":2222222.34,"tags":{"t1":"v1","t2":"v2"}}'
http://localhost:4242/api/put`
> 
> Before the change:
> - Step 3 was blocked infinitely.
> 
> Expect result after the change:
> - Step 3 was executed.
> - Connection established by step 2 will be closed after 5 seconds.
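
A minimal sketch of the idea behind the fix (not the actual ingestserver code; the 5-second timeout matches the behavior described above): set a read deadline for the first read on a new connection, so an idle client cannot block the handler forever.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"time"
)

func handleConn(c net.Conn) {
	defer c.Close()
	// Give the client 5 seconds to send the first bytes; otherwise the
	// read fails with a timeout error and the connection is closed.
	_ = c.SetReadDeadline(time.Now().Add(5 * time.Second))
	r := bufio.NewReader(c)
	line, err := r.ReadString('\n')
	if err != nil {
		fmt.Println("closing idle or broken connection:", err)
		return
	}
	// Data arrived: clear the deadline and continue processing.
	_ = c.SetReadDeadline(time.Time{})
	fmt.Println("first line:", line)
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:4242")
	if err != nil {
		panic(err)
	}
	for {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		go handleConn(c)
	}
}
```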

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-11-24 14:31:19 +02:00
Mathias Palmersheim
70be2e7ea3 Remove threshold from available cpu panel (#10056)
### Describe Your Changes

fixes #9988 by removing the cpu threshold from the Available CPU panel

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-24 14:16:35 +02:00
Kirill Yurkov
61796e355a docs: link faq for large indexdb (#10061)
Clarified the index size note in
docs/guides/understand-your-setup-size/README.md to steer readers toward
the FAQ when indexdb feels oversized, noting typical ratios and
troubleshooting guidance.
2025-11-24 14:04:05 +02:00
dependabot[bot]
2ae3fd47eb build(deps): bump actions/checkout from 5 to 6 (#10060)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to
6. The upstream release notes repeat the ones quoted in the 4-to-6 bump
above: v6.0.0 persists credentials to a separate file under
`$RUNNER_TEMP` instead of the local git config (actions/checkout#2286)
and requires Actions Runner v2.329.0 or newer for Docker container
action scenarios.

Full changelog: https://github.com/actions/checkout/compare/v5...v6

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 14:00:22 +02:00
Max Kotliar
ebad7e5496 docs: Describe relation between slow inserts and unsorted labels. 2025-11-24 13:35:23 +02:00
Max Kotliar
e52de06ee5 docs: sync flags in docs with actual binaries 2025-11-24 13:16:22 +02:00
Aliaksandr Valialkin
38dd971f58 docs/victoriametrics/vmalert.md: clarify that templates can be used inside rule labels
Rule labels can contain templates in the same way as annotations.
See aad6ab009e/app/vmalert/rule/alerting_test.go (L1192)
and https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#templating

Document this, since users sometimes ask this question.
2025-11-24 10:50:55 +01:00
Artem Fetishev
aad6ab009e lib/storage: minor metricNameSearch fixes (#10065)
- Fix comment
- Re-use dst instead introducing a new variable.

This change has been requested to be in a separated PR during the
pt-index (#8134) code review.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-21 20:06:20 +01:00
Artem Fetishev
2c125e14c7 lib/storage: also create parts.json on partition creation (#10051)
Currently, when a partition is created, its corresponding parts.json file
is not created right away (see createNewParition()). Its creation is
delayed until the first part files are created on disk (see
swapSrcWithDstParts()). However, the parts.json file is created for a
possibly empty partition when an existing partition is opened (see
mustOpenPartition()) and when a partition snapshot is created (see
MustCreateSnapshotAt()).

I.e. `parts.json` is an important part of a partition, since it is an
artifact that describes the partition contents. So it should be created
on pt creation even if its contents are empty.

To be honest, this change is mostly a no-op for the current storage
implementation. It only makes the code consistent, i.e. the parts.json
is created along with the partition.

However, having it created when a partition is created becomes important
in pt-index (#7599, #8134), because pt-index allows having partitions
with no data and therefore without a parts.json file. Still not a big
deal, but the unit tests start failing.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-21 14:19:53 +01:00
Artem Fetishev
13dc60e257 lib/storage: refactoring - move dateMetricIDCache code to a separate file (#10055)
dateMetricIDCache does not belong to storage anymore since it has been
moved to indexDB. Instead of moving the cache to index_db.go, move it to
a separate file in order to navigate the code more easily.

No changes have been done to the code or tests.

Follow up for: #9983

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Alexander Frolov <9749087+fxrlv@users.noreply.github.com>
2025-11-21 13:52:33 +01:00
Artem Fetishev
ed64c90e7a lib/storage: fix comments related to nextDayMetricIDs
Follow-up for 49b0a4fb16

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-21 13:31:20 +01:00
Artem Fetishev
49b0a4fb16 lib/storage: refactoring - simplify nextDayMetricIDs data structure (#10058)
The data structure used for holding the nextDayMetricIDs is too complex
and can be simplified (flattened).

Follow up for: #9983

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-21 13:02:02 +01:00
Artem Fetishev
5141496c43 lib/storage: add overlapsWith() and contains() methods to TimeRange (#10059)
The change was introduced in pt-index PR (#8134) and is extracted into a
separate PR.

Currently used in partition_search and partition. If you see more places
like this, please let me know.
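
A hypothetical sketch of the two TimeRange helpers mentioned above (field names and millisecond timestamps are assumed; this is not the exact lib/storage code):

```go
package main

import "fmt"

// TimeRange is a [MinTimestamp, MaxTimestamp] interval in milliseconds.
type TimeRange struct {
	MinTimestamp int64
	MaxTimestamp int64
}

// overlapsWith reports whether tr and other share at least one instant.
func (tr TimeRange) overlapsWith(other TimeRange) bool {
	return tr.MinTimestamp <= other.MaxTimestamp && other.MinTimestamp <= tr.MaxTimestamp
}

// contains reports whether other lies entirely within tr.
func (tr TimeRange) contains(other TimeRange) bool {
	return tr.MinTimestamp <= other.MinTimestamp && other.MaxTimestamp <= tr.MaxTimestamp
}

func main() {
	a := TimeRange{MinTimestamp: 0, MaxTimestamp: 100}
	b := TimeRange{MinTimestamp: 50, MaxTimestamp: 150}
	fmt.Println(a.overlapsWith(b), a.contains(b)) // true false
}
```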

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-21 12:24:40 +01:00
Andrii Chubatiuk
24fac64875 docs: add warning blockquote regarding latest backup lifecycle policy (#10054)
Update formatting for warning text.

[screenshot omitted]
2025-11-20 13:46:34 +04:00
Aliaksandr Valialkin
8250f469a7 docs/victoriametrics/Articles.md: add https://medium.com/@kanakaraju896/backing-up-victoriametrics-data-a-complete-guide-24473c74450f 2025-11-20 08:36:59 +01:00
Aliaksandr Valialkin
7fb0f0e015 docs/victoriametrics/Articles.md: add https://blackmetalz.github.io/why-i-switched-to-victoriametrics-scaling-from-small-business-to-enterprise.html 2025-11-20 08:33:19 +01:00
Andrii Chubatiuk
563dbeaea1 app/vmalert: do not increment errors counter on cancel context
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10027
2025-11-19 13:32:16 +01:00
Nikolay
7e6468c1e3 lib/storage: properly increment missing tsids metric
Bug was introduced at 2380e4829d

Due to a typo, the vm_missing_tsids_for_metric_id_total metric was incremented instead of vm_missing_metric_names_for_metric_id_total when the metricName for a metricID was missing during search.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10041
2025-11-19 13:29:44 +01:00
Hui Wang
328f33202f chore: clarify vmalert -external.label usage (#10042)
To clarify that HA vmalert doesn't need to specify `-external.label`.
2025-11-19 13:28:36 +01:00
Fred Navruzov
951331db80 docs/vmanomaly: release v1.28.0 (#10031)
### Describe Your Changes

Upgraded vmanomaly docs & guides to release v1.28.0 (UI v1.2.0)

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-18 21:47:03 +02:00
Andrii Chubatiuk
e6139be8ba docs/vmbackupmanager: mention version since which -backupTypeTagName flag is available (#10038)
Mention version since which `backupTypeTagName` flag is available
2025-11-18 18:56:19 +04:00
Andrii Chubatiuk
77e5920014 app/vmbackupmanager: set backup type tag on backup's items
* app/vmbackupmanager: set VMBackupType tag on backup's items

* address review comments
2025-11-18 16:30:13 +04:00
Zakhar Bessarab
78049e991b docs/cluster: remove mention of select for metadata (#10034)
vmselect does not have a flag to enable metadata querying, remove
invalid reference to it from the docs.
2025-11-18 15:32:20 +04:00
Artem Fetishev
c972d70f00 docs: update VictoriaMetrics components version to v1.130.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-17 22:03:17 +01:00
Artem Fetishev
b947562f2b deployment/docker: update VM components version to v1.130.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-17 21:56:42 +01:00
Artem Fetishev
344a81fa20 docs: bump last LTS versions
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-17 20:14:07 +00:00
Artem Fetishev
4b022ea8a8 docs/CHANGELOG.md: update changelog with LTS release notes
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-17 20:08:16 +00:00
Artem Fetishev
04c24fc831 lib/workingsetcache: Fix bytesSize metric calculation (#10025)
Follow-up for 3e6fc445a9

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-17 13:49:17 +01:00
Artem Fetishev
d2f78e4b2b docs/CHANGELOG.md: cut v1.130.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-14 17:45:20 +00:00
Max Kotliar
3995837c58 docs: update latest version in docs to v1.130.0 2025-11-14 19:37:14 +02:00
Artem Fetishev
1d53496f98 make vmui-update
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-14 17:21:49 +00:00
Artem Fetishev
73a1ce2dd6 lib/storage: Move dateMetricIDCache to indexDB (#9983)
Looks like the `dateMetricIDCache` must be per indexDB:

- The use of this cache and `is.hasDateMetricID()` often go in pairs, so
  it makes sense to use this cache in that method.
- The same is true for `createPerDayIndexes()`: every time the index
  entry is created, a corresponding entry is added to the cache.
- As a result, the generation field is also removed from the cache.

Related to #7599 and #8134.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-14 16:03:28 +01:00
Aliaksandr Valialkin
daa88f6a43 docs/victoriametrics: cross-link rebalancing section at VictoriaMetrics cluster docs and the corresponding question at the FAQ page 2025-11-14 15:36:57 +01:00
Aliaksandr Valialkin
7bff73b0f7 docs/victoriametrics/Cluster-VictoriaMetrics.md: add rebalancing chapter, which explains how to rebalance data among vmstorage nodes
This is a very frequent question from new users of VictoriaMetrics who migrate from other solutions
with automatic data rebalancing among storage nodes, so it is a good idea to cover it in the docs.
2025-11-14 15:32:29 +01:00
Max Kotliar
bf3b1cf6b6 lib/storage/metricsmetadata: ensure deterministic sorting for identical metric names across tenants
Metrics metadata is loaded from a per-tenant storage map
(perTenantStorage map[uint64]map[string]*Row), so the order of result
rows is non-deterministic. The existing sortRows implementation only
sorts by metric name and ingestion time, which means rows that differ
only by tenant/account ID were still sorted non-deterministically.

This change updates `sortRows` to include account/project identifiers in
the comparison, ensuring stable and deterministic ordering for metadata
entries that share the same metric name and timestamp.
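
A hypothetical sketch of the deterministic multi-key comparison (field names are assumed from the commit message, not the real metricsmetadata types): equal metric names and timestamps are tie-broken by AccountID and ProjectID.

```go
package main

import (
	"fmt"
	"sort"
)

type Row struct {
	MetricName           string
	LastUpdateTimestamp  int64
	AccountID, ProjectID uint32
}

// sortRows orders rows deterministically, tie-breaking by tenant identifiers.
func sortRows(rows []Row) {
	sort.Slice(rows, func(i, j int) bool {
		a, b := rows[i], rows[j]
		if a.MetricName != b.MetricName {
			return a.MetricName < b.MetricName
		}
		if a.LastUpdateTimestamp != b.LastUpdateTimestamp {
			return a.LastUpdateTimestamp < b.LastUpdateTimestamp
		}
		if a.AccountID != b.AccountID {
			return a.AccountID < b.AccountID
		}
		return a.ProjectID < b.ProjectID
	})
}

func main() {
	rows := []Row{
		{"metric", 1, 1, 1},
		{"metric", 1, 0, 0},
	}
	sortRows(rows)
	fmt.Println(rows) // the tenant 0/0 row now always comes first
}
```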

First discovered as flaky test:

--- FAIL: TestStorageRead (0.00s)
    storage_test.go:337: unexpected rows get result (-want, +got):
          []*metricsmetadata.Row{
          	&{
          		... // 2 ignored and 1 identical fields
          		Help:      "uselesshelp1",
          		Unit:      "seconds1",
        - 		AccountID: 1,
        + 		AccountID: 0,
        - 		ProjectID: 1,
        + 		ProjectID: 0,
          		Type:      1,
          	},
          	&{
          		... // 2 ignored and 1 identical fields
          		Help:      "uselesshelp1",
          		Unit:      "seconds1",
        - 		AccountID: 0,
        + 		AccountID: 1,
        - 		ProjectID: 0,
        + 		ProjectID: 1,
          		Type:      1,
          	},
          }
FAIL

https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/actions/runs/19361594138/job/55394642029#step:4:133
2025-11-14 15:22:27 +02:00
Max Kotliar
a10ff67354 docs/changelog: Add links to changelog 2025-11-14 13:41:59 +02:00
Haley Wang
9a8463df42 lib/storage: add a value check for retentionFilter to ensure it does not exceed retentionPeriod 2025-11-14 12:50:46 +02:00
Max Kotliar
7e22b169f1 docs: Add metrics metadata how to use in docs
follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9487
2025-11-14 10:37:15 +01:00
f41gh7
80c1af5af1 apptest: add metrics metadata test for vmsingle
related issue github.com/VictoriaMetrics/VictoriaMetrics/issues/2974
2025-11-14 10:29:28 +01:00
f41gh7
5a587f2006 app/{vmstorage,vmselect,vminsert}: introduce metrics metadata storage
This commit adds the storage part and cluster RPC methods for metrics metadata.

Key concepts:
* vmstorage persists metadata in memory only.
* vmstorage evicts metadata records older than 1 hour.
* vmstorage stores only the last metadata value per time series metric name.
* vminsert opens an additional TCP connection to the vmstorage for
  metadata write requests.
* vmselect doesn't support `limit_per_metric_name`.

This feature is optional and must be enabled via the `-enableMetadata` flag provided to vminsert/vmsingle.

Fixes github.com/VictoriaMetrics/VictoriaMetrics/issues/2974
2025-11-14 10:24:38 +01:00
Aliaksandr Valialkin
847cd1e336 docs/guides/understand-your-setup-size/README.md: remove the misleading recommendation for having at least 2 vCPU cores per vmstorage node
vmstorage nodes work perfectly with one CPU core and even with 10% of a single CPU core
if the allocated CPU resources match their workload.

It is better to recommend allocating an integer number of CPU cores to vmstorage
in order to achieve optimal performance, since vmstorage allocates internal resources
according to the available CPU cores. With a fractional number of CPU cores,
the allocation of internal resources may be suboptimal.

A fractional number of CPU cores may also lead to increased latencies and stalls,
because some P threads in the Go runtime won't be able to run goroutines from their ready queues
in a timely manner due to the lack of CPU time. See https://victoriametrics.com/blog/kubernetes-cpu-go-gomaxprocs/
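
A minimal illustration of why integer cores matter: Go programs size internal resources from runtime.GOMAXPROCS, which is always an integer, so a fractional CPU limit cannot be matched exactly (illustrative sketch only):

```go
package main

import (
    "fmt"
    "runtime"
)

func main() {
    // GOMAXPROCS is always an integer. Worker pools and per-CPU shards
    // sized from it cannot match a fractional CPU limit (e.g. 1.5 cores),
    // so part of the allocation ends up starved of CPU time.
    workers := runtime.GOMAXPROCS(0)
    fmt.Printf("sizing internal resources for %d CPU cores\n", workers)
}
```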
2025-11-14 09:48:30 +01:00
Aliaksandr Valialkin
c86857b269 docs/victoriametrics/vmagent.md: mention that increasing the -maxConcurrentRequests command-line flag value isn't recommended in the general case
Too big values for the -maxConcurrentRequests command-line flag increase memory usage
and CPU overhead for processing incoming requests in most cases.
The only valid reason for increasing the -maxConcurrentRequests value
is when many clients send data to vmagent over a very slow network.
2025-11-14 09:40:31 +01:00
Hui Wang
c93937101c Improve vmalert UI tip (#9998) 2025-11-13 21:04:39 +01:00
Aliaksandr Valialkin
cca7380dd3 docs/victoriametrics: fix broken link to /api/v1/rules docs at Prometheus 2025-11-13 19:40:10 +01:00
Aliaksandr Valialkin
ca3b9b18b5 docs/victoriametrics/README.md: add context links to the FAQ entry describing why IndexDB size may be too large 2025-11-13 19:36:18 +01:00
Nikolay
10f7cd2ffc lib/encoding/zstd: properly apply size limits
Previously, the zstd Decoder didn't take into account the request size limits
applied by VictoriaMetrics components. In case of an incorrectly formed zstd block,
a VictoriaMetrics component could allocate extra memory, which may lead to OOM errors.

This commit makes ingest endpoints check the frame content size and window size headers against the max request size limits.
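
One way to enforce such limits, sketched with the github.com/klauspost/compress/zstd decoder (assuming that decoder is in use; the option names below come from that library):

```go
package main

import (
    "fmt"

    "github.com/klauspost/compress/zstd"
)

// decompressLimited decodes a zstd block while refusing to allocate more
// than maxSize bytes, so a malformed frame header cannot trigger an OOM.
func decompressLimited(src []byte, maxSize uint64) ([]byte, error) {
    dec, err := zstd.NewReader(nil,
        zstd.WithDecoderMaxMemory(maxSize), // caps the total decoded size
        zstd.WithDecoderMaxWindow(maxSize), // caps the declared window size
    )
    if err != nil {
        return nil, err
    }
    defer dec.Close()
    dst, err := dec.DecodeAll(src, nil)
    if err != nil {
        return nil, fmt.Errorf("zstd: decompress: %w", err)
    }
    return dst, nil
}
```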
2025-11-13 18:13:23 +01:00
Hui Wang
fa85726a82 vmalert: print the error message as value if templating fails in alerting rule
For users, if an alerting rule has a misconfigured annotation, it's more
important to deliver the alert when the rule triggers rather than skip
it with templating error logs.
Then users can see the faulty annotation in the alert message and fix it.

Note: the previous behavior is retained in replay mode because errors
there should be noticed immediately; hiding them could waste time,
resources and require a re-replay after fixes.
Also, the rule's status in the vmalert UI remains unhealthy if templating fails.
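
A minimal sketch of the resulting behavior (the helper name renderAnnotations is hypothetical; the real change lives in vmalert's notifier templating code):

```go
package notifier

// renderAnnotations renders each annotation template, and on failure puts
// the error text into the annotation value, so the alert is still delivered
// and the faulty annotation is visible in the alert message.
func renderAnnotations(annotations map[string]string, render func(string) (string, error)) map[string]string {
    r := make(map[string]string, len(annotations))
    for key, text := range annotations {
        s, err := render(text)
        if err != nil {
            r[key] = err.Error()
            continue
        }
        r[key] = s
    }
    return r
}
```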

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9853
2025-11-13 17:34:55 +01:00
Hui Wang
567c084d6d vmalert: drop labels with empty values in generated alerts and time series
In the Prometheus ecosystem, a label with an empty value equals no label,
since a query like `test{something=""}` matches all the series without
the label `something`.
So for vmalert, preserving empty-value labels in generated alerts or
time series is unnecessary and can cause alert hash mismatches during
[restore](https://docs.victoriametrics.com/victoriametrics/vmalert/#alerts-state-on-restarts).
Empty-value labels shouldn't come from the datasource response, since it
follows the same rule (omitting empty-value labels); they may come from
`-external.label` or rule labels, where an empty value could be caused by
occasional templating failures, which are hard to check there.
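
The resulting rule, sketched as a tiny helper (illustrative only):

```go
package rule

// addLabel copies k=v into dst, skipping labels with empty values:
// in the Prometheus data model `foo{bar=""}` selects the same series
// as `foo`, so keeping such labels only destabilizes alert hashes.
func addLabel(dst map[string]string, k, v string) {
    if v == "" {
        return
    }
    dst[k] = v
}
```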

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984
2025-11-13 17:24:27 +01:00
Hui Wang
12a1388fbc vmalert: fix a potential race condition in web api during rule hot reload
Group rules are not protected by
[m.groupsMu](03c784e3e3/app/vmalert/manager.go (L25)),
so they could be updated (during config hot reload) while `/api/v1/rule`,
`/api/v1/alert` and `/api/v1/alerts` API calls are in flight. This fix takes a
snapshot by calling `group.ToAPI()` first, making all reads safe.
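
The shape of the fix, sketched with simplified hypothetical types (the actual manager.go change appears in the diffs further down):

```go
package vmalertapi

import "sync"

// Group is a simplified stand-in for a vmalert rules group.
type Group struct{ Rules []string }

// Manager guards the live groups with an RWMutex.
type Manager struct {
    mu     sync.RWMutex
    groups map[uint64]*Group
}

// snapshot returns a detached copy of the group that stays valid after
// the lock is released, so a config hot reload cannot race with readers.
func (m *Manager) snapshot(id uint64) (Group, bool) {
    m.mu.RLock()
    defer m.mu.RUnlock()
    g, ok := m.groups[id]
    if !ok {
        return Group{}, false
    }
    return Group{Rules: append([]string(nil), g.Rules...)}, true
}
```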

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9551
2025-11-13 17:22:25 +01:00
JAYICE
62c19b386a lib/httputil: fix failure to access HTTP/2 SD services caused by a shadow copy of http.DefaultTransport
Cloning `http.DefaultTransport` and disabling HTTP/2 without resetting
`TLSClientConfig.NextProtos` in the shadow copy of
`http.DefaultTransport` causes requests to HTTP/2 servers to fail.
See https://github.com/golang/go/issues/39302.

To reproduce it, use a scrape config like:
```
scrape_configs:
  - job_name: test
    yandexcloud_sd_configs:
      - service: compute
        api_endpoint: https://api.cloud.yandex.net
```
Before the fix, access to the SD service would fail.

A solution is to specify `http/1.1` in `TLSClientConfig.NextProtos`.
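
A sketch of this fix using only the standard library:

```go
package main

import (
    "crypto/tls"
    "net/http"
)

// newHTTP1Transport clones http.DefaultTransport and pins the TLS ALPN
// list to "http/1.1", so disabling HTTP/2 does not break requests to
// servers that would otherwise negotiate "h2".
func newHTTP1Transport() *http.Transport {
    tr := http.DefaultTransport.(*http.Transport).Clone()
    tr.ForceAttemptHTTP2 = false
    tr.TLSClientConfig = &tls.Config{
        NextProtos: []string{"http/1.1"},
    }
    return tr
}
```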

Related golang issue: https://github.com/golang/go/issues/39302

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9981
2025-11-13 17:19:15 +01:00
331 changed files with 25086 additions and 13245 deletions

View File

@@ -5,7 +5,7 @@ body:
- type: textarea
id: describe-the-component
attributes:
label: Is your question request related to a specific component?
label: Is your question related to a specific component?
placeholder: |
VictoriaMetrics, vmagent, vmalert, vmui, etc...
validations:

48
.github/scripts/lint-changelog-tip.sh vendored Executable file
View File

@@ -0,0 +1,48 @@
#!/usr/bin/env sh
set -e

CHANGELOG_FILE="docs/victoriametrics/changelog/CHANGELOG.md"
GITHUB_BASE_REF=${GITHUB_BASE_REF:-"master"}
GIT_REMOTE=${GIT_REMOTE:-"origin"}

git diff "${GIT_REMOTE}/${GITHUB_BASE_REF}"...HEAD -- $CHANGELOG_FILE > diff.txt
if ! grep -q "^+" diff.txt; then
  echo "No additions in CHANGELOG.md"
  exit 0
fi

ADDED_LINES=$(grep "^+\S" diff.txt | sed 's/^+//')
START_TIP=$(grep -n "^## tip" "$CHANGELOG_FILE" | head -1 | cut -d: -f1)
if [ -z "$START_TIP" ]; then
  echo "ERROR: ${CHANGELOG_FILE} does not contain a ## tip section"
  exit 1
fi

END_TIP=$(awk "NR>$START_TIP && /^## / {print NR; exit}" "${CHANGELOG_FILE}")
if [ -z "$END_TIP" ]; then
  END_TIP=$(wc -l < "$CHANGELOG_FILE")
fi

BAD=0
while IFS= read -r line; do
  # Grep exact line inside the file and get line numbers
  MATCHES=$(grep -n -F "$line" "$CHANGELOG_FILE" | cut -d: -f1)
  for m in $MATCHES; do
    if [ "$m" -lt "$START_TIP" ] || [ "$m" -gt "$END_TIP" ]; then
      echo "'$line' on line ${m} is outside ## tip section (lines ${START_TIP}-${END_TIP})"
      BAD=1
    fi
  done
done << EOF
$ADDED_LINES
EOF

if [ "$BAD" -ne 0 ]; then
  echo "CHANGELOG modifications must be placed inside the ## tip section."
  exit 1
fi
echo "CHANGELOG modifications are valid."

View File

@@ -61,7 +61,7 @@ jobs:
arch: amd64
steps:
- name: Code checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Setup Go
id: go

19
.github/workflows/changelog-linter.yml vendored Normal file
View File

@@ -0,0 +1,19 @@
name: 'changelog-linter'
on:
  pull_request:
    paths:
      - "docs/victoriametrics/changelog/CHANGELOG.md"
jobs:
  tip-lint:
    runs-on: 'ubuntu-latest'
    steps:
      - uses: 'actions/checkout@v6'
        with:
          # needed for proper diff
          fetch-depth: 0
      - name: 'Validate that changelog changes are under ## tip'
        run: |
          GITHUB_BASE_REF=${{ github.base_ref }} ./.github/scripts/lint-changelog-tip.sh

View File

@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0 # we need full history for commit verification

View File

@@ -29,7 +29,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Set up Go
id: go

View File

@@ -16,12 +16,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
path: __vm
- name: Checkout private code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
repository: VictoriaMetrics/vmdocs
token: ${{ secrets.VM_BOT_GH_TOKEN }}

View File

@@ -32,7 +32,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Setup Go
id: go
@@ -71,7 +71,7 @@ jobs:
steps:
- name: Code checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Setup Go
id: go
@@ -97,7 +97,7 @@ jobs:
steps:
- name: Code checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Setup Go
id: go

View File

@@ -32,7 +32,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Setup Node
uses: actions/setup-node@v6

View File

@@ -17,7 +17,7 @@ EXTRA_GO_BUILD_TAGS ?=
GO_BUILDINFO = -X '$(PKG_PREFIX)/lib/buildinfo.Version=$(APP_NAME)-$(DATEINFO_TAG)-$(BUILDINFO_TAG)'
TAR_OWNERSHIP ?= --owner=1000 --group=1000
GOLANGCI_LINT_VERSION := 2.4.0
GOLANGCI_LINT_VERSION := 2.7.2
.PHONY: $(MAKECMDGOALS)
@@ -435,7 +435,7 @@ release-vmutils-windows-goarch: \
vmctl-windows-$(GOARCH)-prod.exe
pprof-cpu:
go tool pprof -trim_path=github.com/VictoriaMetrics/VictoriaMetrics@ $(PPROF_FILE)
go tool pprof -trim_path=github.com/VictoriaMetrics/VictoriaMetrics $(PPROF_FILE)
fmt:
gofmt -l -w -s ./lib
@@ -471,7 +471,23 @@ integration-test:
apptest:
$(MAKE) victoria-metrics vmagent vmalert vmauth vmctl vmbackup vmrestore
go test ./apptest/... -skip="^TestCluster.*"
go test ./apptest/... -skip="^Test(Cluster|Legacy).*"
integration-test-legacy: victoria-metrics vmbackup vmrestore
OS=$$(uname | tr '[:upper:]' '[:lower:]'); \
ARCH=$$(uname -m | tr '[:upper:]' '[:lower:]' | sed 's/x86_64/amd64/'); \
VERSION=v1.132.0; \
VMSINGLE=victoria-metrics-$${OS}-$${ARCH}-$${VERSION}.tar.gz; \
VMCLUSTER=victoria-metrics-$${OS}-$${ARCH}-$${VERSION}-cluster.tar.gz; \
URL=https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/$${VERSION}; \
DIR=/tmp/$${VERSION}; \
test -d $${DIR} || (mkdir $${DIR} && \
curl --output-dir /tmp -LO $${URL}/$${VMSINGLE} && tar xzf /tmp/$${VMSINGLE} -C $${DIR} && \
curl --output-dir /tmp -LO $${URL}/$${VMCLUSTER} && tar xzf /tmp/$${VMCLUSTER} -C $${DIR} \
); \
VM_LEGACY_VMSINGLE_PATH=$${DIR}/victoria-metrics-prod \
VM_LEGACY_VMSTORAGE_PATH=$${DIR}/vmstorage-prod \
go test ./apptest/tests -run="^TestLegacySingle.*"
benchmark:
GOEXPERIMENT=synctest go test -bench=. ./lib/...
@@ -500,7 +516,8 @@ app-local-windows-goarch:
CGO_ENABLED=0 GOOS=windows GOARCH=$(GOARCH) go build $(RACE) -ldflags "$(GO_BUILDINFO)" -tags "$(EXTRA_GO_BUILD_TAGS)" -o bin/$(APP_NAME)-windows-$(GOARCH)$(RACE).exe $(PKG_PREFIX)/app/$(APP_NAME)
quicktemplate-gen: install-qtc
qtc
qtc -dir=lib
qtc -dir=app
install-qtc:
which qtc || go install github.com/valyala/quicktemplate/qtc@latest

View File

@@ -10,9 +10,11 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prommetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
)
@@ -48,6 +50,7 @@ func selfScraper(scrapeInterval time.Duration) {
var bb bytesutil.ByteBuffer
var rows prometheus.Rows
var metadataRows prometheus.MetadataRows
var mrs []storage.MetricRow
var labels []prompb.Label
t := time.NewTicker(scrapeInterval)
@@ -57,8 +60,12 @@ func selfScraper(scrapeInterval time.Duration) {
appmetrics.WritePrometheusMetrics(&bb)
s := bytesutil.ToUnsafeString(bb.B)
rows.Reset()
// VictoriaMetrics components don't expose metadata yet, only need to parse samples
rows.UnmarshalWithErrLogger(s, nil)
// Parse metrics and optionally metadata when enabled
if prommetadata.IsEnabled() {
rows, metadataRows = prometheus.UnmarshalWithMetadata(rows, metadataRows, s, nil)
} else {
rows.UnmarshalWithErrLogger(s, nil)
}
mrs = mrs[:0]
for i := range rows.Rows {
r := &rows.Rows[i]
@@ -91,6 +98,19 @@ func selfScraper(scrapeInterval time.Duration) {
if err := vmstorage.AddRows(mrs); err != nil {
logger.Errorf("cannot store self-scraped metrics: %s", err)
}
if len(metadataRows.Rows) > 0 {
mms := make([]metricsmetadata.Row, 0, len(metadataRows.Rows))
for _, mm := range metadataRows.Rows {
mms = append(mms, metricsmetadata.Row{
MetricFamilyName: bytesutil.ToUnsafeBytes(mm.Metric),
Help: bytesutil.ToUnsafeBytes(mm.Help),
Type: mm.Type,
})
}
if err := vmstorage.AddMetadataRows(mms); err != nil {
logger.Errorf("cannot store self-scraped metrics metadata: %s", err)
}
}
}
for {
select {

View File

@@ -27,6 +27,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/promremotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/zabbixconnector"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
@@ -350,6 +351,17 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
}
firehose.WriteSuccessResponse(w, r)
return true
case "/zabbixconnector/api/v1/history":
zabbixconnectorHistoryRequests.Inc()
if err := zabbixconnector.InsertHandlerForHTTP(nil, r); err != nil {
zabbixconnectorHistoryErrors.Inc()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, `{"error":%q}`, err.Error())
return true
}
w.WriteHeader(http.StatusOK)
return true
case "/newrelic":
newrelicCheckRequest.Inc()
w.Header().Set("Content-Type", "application/json")
@@ -644,6 +656,17 @@ func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path stri
}
firehose.WriteSuccessResponse(w, r)
return true
case "zabbixconnector/api/v1/history":
zabbixconnectorHistoryRequests.Inc()
if err := zabbixconnector.InsertHandlerForHTTP(at, r); err != nil {
zabbixconnectorHistoryErrors.Inc()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, `{"error":%q}`, err.Error())
return true
}
w.WriteHeader(http.StatusOK)
return true
case "newrelic":
newrelicCheckRequest.Inc()
w.Header().Set("Content-Type", "application/json")
@@ -765,6 +788,9 @@ var (
opentelemetryPushRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/opentelemetry/v1/metrics", protocol="opentelemetry"}`)
opentelemetryPushErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/opentelemetry/v1/metrics", protocol="opentelemetry"}`)
zabbixconnectorHistoryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/zabbixconnector/api/v1/history", protocol="zabbixconnector"}`)
zabbixconnectorHistoryErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/zabbixconnector/api/v1/history", protocol="zabbixconnector"}`)
newrelicWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/newrelic/infra/v2/metrics/events/bulk", protocol="newrelic"}`)
newrelicWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/newrelic/infra/v2/metrics/events/bulk", protocol="newrelic"}`)

View File

@@ -78,7 +78,7 @@ func insertRows(at *auth.Token, rows []newrelic.Row, extraLabels []prompb.Label)
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
rowsInserted.Add(len(rows))
rowsInserted.Add(samplesCount)
if at != nil {
rowsTenantInserted.Get(at).Add(samplesCount)
}

View File

@@ -25,7 +25,7 @@ var (
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="opentelemetry"}`)
)
// InsertHandler processes metrics from given reader.
// InsertHandlerForReader processes metrics from given reader.
func InsertHandlerForReader(at *auth.Token, r io.Reader, encoding string) error {
return stream.ParseStream(r, encoding, nil, func(tss []prompb.TimeSeries, mms []prompb.MetricMetadata) error {
return insertRows(at, tss, mms, nil)

View File

@@ -15,7 +15,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/awsapi"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding/zstd"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
@@ -554,9 +553,9 @@ func getRetryDuration(retryAfterDuration, retryDuration, maxRetryDuration time.D
// For more details, see: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9417
func repackBlockFromZstdToSnappy(zstdBlock []byte) ([]byte, error) {
plainBlock := make([]byte, 0, len(zstdBlock)*2)
plainBlock, err := zstd.Decompress(plainBlock, zstdBlock)
plainBlock, err := encoding.DecompressZSTD(plainBlock, zstdBlock)
if err != nil {
return nil, fmt.Errorf("zstd: decompress: %s", err)
return nil, err
}
return snappy.Encode(nil, plainBlock), nil

View File

@@ -0,0 +1,80 @@
package zabbixconnector

import (
    "net/http"

    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/zabbixconnector"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/zabbixconnector/stream"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
    "github.com/VictoriaMetrics/metrics"
)

var (
    rowsInserted       = metrics.NewCounter(`vmagent_rows_inserted_total{type="zabbixconnector"}`)
    rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="zabbixconnector"}`)
    rowsPerInsert      = metrics.NewHistogram(`vmagent_rows_per_insert{type="zabbixconnector"}`)
)

// InsertHandlerForHTTP processes remote write for ZabbixConnector POST /zabbixconnector/api/v1/history request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
    extraLabels, err := protoparserutil.GetExtraLabels(req)
    if err != nil {
        return err
    }
    encoding := req.Header.Get("Content-Encoding")
    return stream.Parse(req.Body, encoding, func(rows []zabbixconnector.Row) error {
        return insertRows(at, rows, extraLabels)
    })
}

func insertRows(at *auth.Token, rows []zabbixconnector.Row, extraLabels []prompb.Label) error {
    ctx := common.GetPushCtx()
    defer common.PutPushCtx(ctx)

    rowsTotal := len(rows)
    tssDst := ctx.WriteRequest.Timeseries[:0]
    labels := ctx.Labels[:0]
    samples := ctx.Samples[:0]
    for i := range rows {
        r := &rows[i]
        labelsLen := len(labels)
        for j := range r.Tags {
            tag := &r.Tags[j]
            labels = append(labels, prompb.Label{
                Name:  bytesutil.ToUnsafeString(tag.Key),
                Value: bytesutil.ToUnsafeString(tag.Value),
            })
        }
        labels = append(labels, extraLabels...)
        samplesLen := len(samples)
        samples = append(samples, prompb.Sample{
            Value:     r.Value,
            Timestamp: r.Timestamp,
        })
        tssDst = append(tssDst, prompb.TimeSeries{
            Labels:  labels[labelsLen:],
            Samples: samples[samplesLen:],
        })
    }
    ctx.WriteRequest.Timeseries = tssDst
    ctx.Labels = labels
    ctx.Samples = samples
    if !remotewrite.TryPush(at, &ctx.WriteRequest) {
        return remotewrite.ErrQueueFullHTTPRetry
    }
    rowsInserted.Add(rowsTotal)
    if at != nil {
        rowsTenantInserted.Get(at).Add(rowsTotal)
    }
    rowsPerInsert.Update(float64(rowsTotal))
    return nil
}

View File

@@ -116,7 +116,7 @@ func TestParse_Failure(t *testing.T) {
f([]string{"testdata/rules/rules_interval_bad.rules"}, "eval_offset should be smaller than interval")
f([]string{"testdata/rules/rules0-bad.rules"}, "unexpected token")
f([]string{"testdata/dir/rules0-bad.rules"}, "error parsing annotation")
f([]string{"testdata/dir/rules0-bad.rules"}, "invalid annotations")
f([]string{"testdata/dir/rules1-bad.rules"}, "duplicate in file")
f([]string{"testdata/dir/rules2-bad.rules"}, "function \"unknown\" not defined")
f([]string{"testdata/dir/rules3-bad.rules"}, "either `record` or `alert` must be set")
@@ -343,7 +343,6 @@ func TestGroupValidate_Failure(t *testing.T) {
},
},
}, true, "bad prometheus expr")
}
func TestGroupValidate_Success(t *testing.T) {

View File

@@ -3,6 +3,7 @@ package main
import (
"context"
"fmt"
"strconv"
"sync"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
@@ -45,13 +46,15 @@ func (m *manager) ruleAPI(gID, rID uint64) (rule.ApiRule, error) {
m.groupsMu.RLock()
defer m.groupsMu.RUnlock()
g, ok := m.groups[gID]
group, ok := m.groups[gID]
if !ok {
return rule.ApiRule{}, fmt.Errorf("can't find group with id %d", gID)
}
g := group.ToAPI()
ruleID := strconv.FormatUint(rID, 10)
for _, r := range g.Rules {
if r.ID() == rID {
return r.ToAPI(), nil
if r.ID == ruleID {
return r, nil
}
}
return rule.ApiRule{}, fmt.Errorf("can't find rule with id %d in group %q", rID, g.Name)
@@ -62,17 +65,20 @@ func (m *manager) alertAPI(gID, aID uint64) (*rule.ApiAlert, error) {
m.groupsMu.RLock()
defer m.groupsMu.RUnlock()
g, ok := m.groups[gID]
group, ok := m.groups[gID]
if !ok {
return nil, fmt.Errorf("can't find group with id %d", gID)
}
g := group.ToAPI()
for _, r := range g.Rules {
ar, ok := r.(*rule.AlertingRule)
if !ok {
if r.Type != rule.TypeAlerting {
continue
}
if apiAlert := ar.AlertToAPI(aID); apiAlert != nil {
return apiAlert, nil
alertID := strconv.FormatUint(aID, 10)
for _, a := range r.Alerts {
if a.ID == alertID {
return a, nil
}
}
}
return nil, fmt.Errorf("can't find alert with id %d in group %q", aID, g.Name)

View File

@@ -166,8 +166,8 @@ func templateAnnotations(annotations map[string]string, data AlertTplData, tmpl
ctmpl, _ := tmpl.Clone()
ctmpl = ctmpl.Option("missingkey=zero")
if err := templateAnnotation(&buf, builder.String(), tData, ctmpl, execute); err != nil {
r[key] = text
eg.Add(fmt.Errorf("key %q, template %q: %w", key, text, err))
r[key] = err.Error()
eg.Add(fmt.Errorf("(key: %q, value: %q): %w", key, text, err))
continue
}
r[key] = buf.String()
@@ -184,13 +184,13 @@ type tplData struct {
func templateAnnotation(dst io.Writer, text string, data tplData, tpl *textTpl.Template, execute bool) error {
tpl, err := tpl.Parse(text)
if err != nil {
return fmt.Errorf("error parsing annotation template: %w", err)
return fmt.Errorf("error parsing template: %w", err)
}
if !execute {
return nil
}
if err = tpl.Execute(dst, data); err != nil {
return fmt.Errorf("error evaluating annotation template: %w", err)
return fmt.Errorf("error evaluating template: %w", err)
}
return nil
}

View File

@@ -3,6 +3,7 @@ package notifier
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"net/http"
@@ -86,6 +87,11 @@ func (am *AlertManager) Send(ctx context.Context, alerts []Alert, alertLabels []
err := am.send(ctx, alerts, alertLabels, headers)
am.metrics.alertsSendDuration.UpdateDuration(startTime)
if err != nil {
// the context can be cancelled on graceful shutdown
// or on group update. So no need to handle the error as usual.
if errors.Is(err, context.Canceled) {
return nil
}
am.metrics.alertsSendErrors.Add(len(alerts))
am.lastError = err.Error()
} else {

View File

@@ -2,6 +2,7 @@ package rule
import (
"context"
"errors"
"fmt"
"hash/fnv"
"math"
@@ -246,16 +247,6 @@ func (ar *AlertingRule) GetAlerts() []*notifier.Alert {
return alerts
}
// GetAlert returns alert if id exists
func (ar *AlertingRule) GetAlert(id uint64) *notifier.Alert {
ar.alertsMu.RLock()
defer ar.alertsMu.RUnlock()
if ar.alerts == nil {
return nil
}
return ar.alerts[id]
}
func (ar *AlertingRule) logDebugf(at time.Time, a *notifier.Alert, format string, args ...any) {
if !ar.Debug {
return
@@ -321,6 +312,11 @@ type labelSet struct {
// On k conflicts in origin set, the original value is preferred and copied
// to processed with `exported_%k` key. The copy happens only if passed v isn't equal to origin[k] value.
func (ls *labelSet) add(k, v string) {
// do not add label with empty value, since it has no meaning.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984
if v == "" {
return
}
ls.processed[k] = v
ov, ok := ls.origin[k]
if !ok {
@@ -355,9 +351,6 @@ func (ar *AlertingRule) toLabels(m datasource.Metric, qFn templates.QueryFn) (*l
Value: m.Values[0],
Expr: ar.Expr,
})
if err != nil {
return nil, fmt.Errorf("failed to expand labels: %w", err)
}
for k, v := range extraLabels {
ls.add(k, v)
}
@@ -368,7 +361,7 @@ func (ar *AlertingRule) toLabels(m datasource.Metric, qFn templates.QueryFn) (*l
if !*disableAlertGroupLabel && ar.GroupName != "" {
ls.add(alertGroupNameLabel, ar.GroupName)
}
return ls, nil
return ls, err
}
// execRange executes alerting rule on the given time range similarly to exec.
@@ -461,7 +454,7 @@ func (ar *AlertingRule) exec(ctx context.Context, ts time.Time, limit int) ([]pr
defer func() {
ar.state.add(curState)
if curState.Err != nil {
if curState.Err != nil && !errors.Is(curState.Err, context.Canceled) {
ar.metrics.errors.Inc()
}
}()
@@ -484,8 +477,9 @@ func (ar *AlertingRule) exec(ctx context.Context, ts time.Time, limit int) ([]pr
for i, m := range res.Data {
ls, err := ar.expandLabelTemplates(m, qFn)
if err != nil {
// only set error in current state, but do not break alert processing
curState.Err = err
return nil, curState.Err
logger.Errorf("got templating error in rule %s: %q", ar.Name, err)
}
at := ts
alertID := hash(ls.processed)
@@ -497,8 +491,9 @@ func (ar *AlertingRule) exec(ctx context.Context, ts time.Time, limit int) ([]pr
}
as, err := ar.expandAnnotationTemplates(m, qFn, at, ls)
if err != nil {
// only set error in current state, but do not break alert processing
curState.Err = err
return nil, curState.Err
logger.Errorf("got templating error in rule %s: %q", ar.Name, err)
}
expandedLabels[i] = ls
expandedAnnotations[i] = as
@@ -607,7 +602,7 @@ func (ar *AlertingRule) exec(ctx context.Context, ts time.Time, limit int) ([]pr
func (ar *AlertingRule) expandLabelTemplates(m datasource.Metric, qFn templates.QueryFn) (*labelSet, error) {
ls, err := ar.toLabels(m, qFn)
if err != nil {
return nil, fmt.Errorf("failed to expand label templates: %s", err)
return ls, fmt.Errorf("failed to expand label templates: %s", err)
}
return ls, nil
}
@@ -625,7 +620,7 @@ func (ar *AlertingRule) expandAnnotationTemplates(m datasource.Metric, qFn templ
}
as, err := notifier.ExecTemplate(qFn, ar.Annotations, tplData)
if err != nil {
return nil, fmt.Errorf("failed to expand annotation templates: %s", err)
return as, fmt.Errorf("failed to expand annotation templates: %s", err)
}
return as, nil
}

View File

@@ -1370,8 +1370,10 @@ func TestAlertingRule_ToLabels(t *testing.T) {
ar := &AlertingRule{
Labels: map[string]string{
"instance": "override", // this should override instance with new value
"group": "vmalert", // this shouldn't have effect since value in metric is equal
"instance": "override", // this should override instance with new value
"group": "vmalert", // this shouldn't have effect since value in metric is equal
"invalid_label": "{{ .Values.mustRuntimeFail }}",
"empty_label": "", // this should be dropped
},
Expr: "sum(vmalert_alerting_rules_error) by(instance, group, alertname) > 0",
Name: "AlertingRulesError",
@@ -1379,10 +1381,11 @@ func TestAlertingRule_ToLabels(t *testing.T) {
}
expectedOriginLabels := map[string]string{
"instance": "0.0.0.0:8800",
"group": "vmalert",
"alertname": "ConfigurationReloadFailure",
"alertgroup": "vmalert",
"instance": "0.0.0.0:8800",
"group": "vmalert",
"alertname": "ConfigurationReloadFailure",
"alertgroup": "vmalert",
"invalid_label": `error evaluating template: template: :1:268: executing "" at <.Values.mustRuntimeFail>: can't evaluate field Values in type notifier.tplData`,
}
expectedProcessedLabels := map[string]string{
@@ -1392,11 +1395,12 @@ func TestAlertingRule_ToLabels(t *testing.T) {
"exported_alertname": "ConfigurationReloadFailure",
"group": "vmalert",
"alertgroup": "vmalert",
"invalid_label": `error evaluating template: template: :1:268: executing "" at <.Values.mustRuntimeFail>: can't evaluate field Values in type notifier.tplData`,
}
ls, err := ar.toLabels(metric, nil)
if err != nil {
t.Fatalf("unexpected error: %s", err)
if err == nil || !strings.Contains(err.Error(), "error evaluating template") {
t.Fatalf("unexpected error %q", err.Error())
}
if !reflect.DeepEqual(ls.origin, expectedOriginLabels) {

View File

@@ -2,6 +2,7 @@ package rule
import (
"context"
"errors"
"fmt"
"strings"
"time"
@@ -197,7 +198,7 @@ func (rr *RecordingRule) exec(ctx context.Context, ts time.Time, limit int) ([]p
defer func() {
rr.state.add(curState)
if curState.Err != nil {
if curState.Err != nil && !errors.Is(curState.Err, context.Canceled) {
rr.metrics.errors.Inc()
}
}()
@@ -236,7 +237,8 @@ func (rr *RecordingRule) exec(ctx context.Context, ts time.Time, limit int) ([]p
Labels: stringToLabels(k),
Samples: []prompb.Sample{
{Value: decimal.StaleNaN, Timestamp: ts.UnixNano() / 1e6},
}})
},
})
}
rr.lastEvaluation = curEvaluation
return tss, nil
@@ -291,6 +293,11 @@ func (rr *RecordingRule) toTimeSeries(m datasource.Metric) prompb.TimeSeries {
}
// add extra labels configured by user
for k := range rr.Labels {
// do not add label with empty value, since it has no meaning.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984
if rr.Labels[k] == "" {
continue
}
existingLabel := promrelabel.GetLabelByName(m.Labels, k)
if existingLabel != nil { // there is a conflict between extra and existing label
if existingLabel.Value == rr.Labels[k] {

View File

@@ -209,15 +209,6 @@ func (ar *AlertingRule) AlertsToAPI() []*ApiAlert {
return alerts
}
// AlertToAPI generates apiAlert object from alert by its id(hash)
func (ar *AlertingRule) AlertToAPI(id uint64) *ApiAlert {
a := ar.GetAlert(id)
if a == nil {
return nil
}
return NewAlertAPI(ar, a)
}
// NewAlertAPI creates apiAlert for notifier.Alert
func NewAlertAPI(ar *AlertingRule, a *notifier.Alert) *ApiAlert {
aa := &ApiAlert{

View File

@@ -412,18 +412,18 @@ func (rh *requestHandler) groupAlerts() []rule.GroupAlerts {
defer rh.m.groupsMu.RUnlock()
var gAlerts []rule.GroupAlerts
for _, g := range rh.m.groups {
for _, group := range rh.m.groups {
var alerts []*rule.ApiAlert
g := group.ToAPI()
for _, r := range g.Rules {
a, ok := r.(*rule.AlertingRule)
if !ok {
if r.Type != rule.TypeAlerting {
continue
}
alerts = append(alerts, a.AlertsToAPI()...)
alerts = append(alerts, r.Alerts...)
}
if len(alerts) > 0 {
gAlerts = append(gAlerts, rule.GroupAlerts{
Group: g.ToAPI(),
Group: g,
Alerts: alerts,
})
}
@@ -444,12 +444,12 @@ func (rh *requestHandler) listAlerts(rf *rulesFilter) ([]byte, error) {
if !rf.matchesGroup(group) {
continue
}
for _, r := range group.Rules {
a, ok := r.(*rule.AlertingRule)
if !ok {
g := group.ToAPI()
for _, r := range g.Rules {
if r.Type != rule.TypeAlerting {
continue
}
lr.Data.Alerts = append(lr.Data.Alerts, a.AlertsToAPI()...)
lr.Data.Alerts = append(lr.Data.Alerts, r.Alerts...)
}
}

View File

@@ -602,11 +602,11 @@
<table class="table table-striped table-hover table-sm">
<thead>
<tr>
<th scope="col" title="The time when event was created">Updated at</th>
<th scope="col" title="The time when the rule was executed">Updated at</th>
<th scope="col" class="w-10 text-center" title="How many series expression returns. Each series will represent an alert.">Series returned</th>
{% if seriesFetchedEnabled %}<th scope="col" class="w-10 text-center" title="How many series were scanned by datasource during the evaluation">Series fetched</th>{% endif %}
<th scope="col" class="w-10 text-center" title="How many seconds request took">Duration</th>
<th scope="col" class="text-center" title="Time used for rule execution">Executed at</th>
<th scope="col" class="text-center" title="The time used in execution query request">Execution timestamp</th>
<th scope="col" class="text-center" title="cURL command with request example">cURL</th>
</tr>
</thead>

View File

@@ -1717,7 +1717,7 @@ func StreamRuleDetails(qw422016 *qt422016.Writer, r *http.Request, rule rule.Api
<table class="table table-striped table-hover table-sm">
<thead>
<tr>
<th scope="col" title="The time when event was created">Updated at</th>
<th scope="col" title="The time when the rule was executed">Updated at</th>
<th scope="col" class="w-10 text-center" title="How many series expression returns. Each series will represent an alert.">Series returned</th>
`)
//line app/vmalert/web.qtpl:607
@@ -1729,7 +1729,7 @@ func StreamRuleDetails(qw422016 *qt422016.Writer, r *http.Request, rule rule.Api
//line app/vmalert/web.qtpl:607
qw422016.N().S(`
<th scope="col" class="w-10 text-center" title="How many seconds request took">Duration</th>
<th scope="col" class="text-center" title="Time used for rule execution">Executed at</th>
<th scope="col" class="text-center" title="The time used in execution query request">Execution timestamp</th>
<th scope="col" class="text-center" title="cURL command with request example">cURL</th>
</tr>
</thead>

View File

@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"encoding/base64"
"errors"
"flag"
"fmt"
"math"
@@ -94,6 +95,8 @@ type UserInfo struct {
rt http.RoundTripper
requests *metrics.Counter
requestErrors *metrics.Counter
backendRequests *metrics.Counter
backendErrors *metrics.Counter
requestsDuration *metrics.Summary
}
@@ -105,13 +108,29 @@ type HeadersConf struct {
KeepOriginalHost *bool `yaml:"keep_original_host,omitempty"`
}
func (ui *UserInfo) beginConcurrencyLimit() error {
func (ui *UserInfo) beginConcurrencyLimit(ctx context.Context) error {
select {
case ui.concurrencyLimitCh <- struct{}{}:
return nil
default:
ui.concurrencyLimitReached.Inc()
return fmt.Errorf("cannot handle more than %d concurrent requests from user %s", ui.getMaxConcurrentRequests(), ui.name())
// The per-user limit for the number of concurrent requests is reached.
// Wait until the currently executed requests are finished, so the current request could be executed.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
select {
case ui.concurrencyLimitCh <- struct{}{}:
return nil
case <-ctx.Done():
err := ctx.Err()
if errors.Is(err, context.DeadlineExceeded) {
return fmt.Errorf("cannot start executing the request during -maxQueueDuration=%s because %d concurrent requests from the user %s are executed",
*maxQueueDuration, ui.getMaxConcurrentRequests(), ui.name())
}
return fmt.Errorf("cannot start executing the request because %d concurrent requests from the user %s are executed: %w",
ui.getMaxConcurrentRequests(), ui.name(), err)
}
}
}
@@ -127,6 +146,18 @@ func (ui *UserInfo) getMaxConcurrentRequests() int {
return mcr
}
func (ui *UserInfo) stopHealthChecks() {
if ui == nil {
return
}
if ui.URLPrefix == nil {
return
}
bus := ui.URLPrefix.bus.Load()
bus.stopHealthChecks()
}
// Header is `Name: Value` http header, which must be added to the proxied request.
type Header struct {
Name string
@@ -262,7 +293,7 @@ type URLPrefix struct {
// the list of backend urls
//
// the list can be dynamically updated if `discover_backend_ips` option is set.
bus atomic.Pointer[[]*backendURL]
bus atomic.Pointer[backendURLs]
// if this option is set, then backend ips for busOriginal are periodically re-discovered and put to bus.
discoverBackendIPs bool
@@ -286,21 +317,93 @@ func (up *URLPrefix) setLoadBalancingPolicy(loadBalancingPolicy string) error {
}
}
type backendURLs struct {
healthChecksContext context.Context
healthChecksCancel func()
healthChecksWG sync.WaitGroup
bus []*backendURL
}
func newBackendURLs() *backendURLs {
ctx, cancel := context.WithCancel(context.Background())
return &backendURLs{
healthChecksContext: ctx,
healthChecksCancel: cancel,
}
}
func (bus *backendURLs) add(u *url.URL) {
bus.bus = append(bus.bus, &backendURL{
url: u,
healthCheckContext: bus.healthChecksContext,
healthCheckWG: &bus.healthChecksWG,
})
}
func (bus *backendURLs) stopHealthChecks() {
bus.healthChecksCancel()
bus.healthChecksWG.Wait()
}
type backendURL struct {
brokenDeadline atomic.Uint64
broken atomic.Bool
healthCheckContext context.Context
healthCheckWG *sync.WaitGroup
concurrentRequests atomic.Int32
url *url.URL
}
func (bu *backendURL) isBroken() bool {
ct := fasttime.UnixTimestamp()
return ct < bu.brokenDeadline.Load()
return bu.broken.Load()
}
func (bu *backendURL) setBroken() {
deadline := fasttime.UnixTimestamp() + uint64((*failTimeout).Seconds())
bu.brokenDeadline.Store(deadline)
if bu.broken.CompareAndSwap(false, true) {
bu.healthCheckWG.Add(1)
go func() {
defer bu.healthCheckWG.Done()
bu.runHealthCheck()
bu.broken.Store(false)
}()
}
}
func (bu *backendURL) runHealthCheck() {
port := bu.url.Port()
if port == "" {
port = "80"
}
addr := net.JoinHostPort(bu.url.Hostname(), port)
t := time.NewTicker(*failTimeout)
defer t.Stop()
for {
select {
case <-t.C:
// Verify network connectivity via TCP dial before marking backend healthy.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9997
ctx, cancel := context.WithTimeout(bu.healthCheckContext, time.Second)
c, err := netutil.Dialer.DialContext(ctx, "tcp", addr)
cancel()
if err != nil {
if errors.Is(bu.healthCheckContext.Err(), context.Canceled) {
return
}
logger.Warnf("ignoring the backend at %s for %s becasue of dial error: %s", addr, *failTimeout, err)
continue
}
_ = c.Close()
return
case <-bu.healthCheckContext.Done():
return
}
}
}
func (bu *backendURL) get() {
@@ -312,8 +415,8 @@ func (bu *backendURL) put() {
}
func (up *URLPrefix) getBackendsCount() int {
pbus := up.bus.Load()
return len(*pbus)
bus := up.bus.Load()
return len(bus.bus)
}
// getBackendURL returns the backendURL depending on the load balance policy.
@@ -324,16 +427,15 @@ func (up *URLPrefix) getBackendsCount() int {
func (up *URLPrefix) getBackendURL() *backendURL {
up.discoverBackendAddrsIfNeeded()
pbus := up.bus.Load()
bus := *pbus
if len(bus) == 0 {
bus := up.bus.Load()
if len(bus.bus) == 0 {
return nil
}
if up.loadBalancingPolicy == "first_available" {
return getFirstAvailableBackendURL(bus)
return getFirstAvailableBackendURL(bus.bus)
}
return getLeastLoadedBackendURL(bus, &up.n)
return getLeastLoadedBackendURL(bus.bus, &up.n)
}
func (up *URLPrefix) discoverBackendAddrsIfNeeded() {
@@ -407,25 +509,24 @@ func (up *URLPrefix) discoverBackendAddrsIfNeeded() {
cancel()
// generate new backendURLs for the resolved IPs
var busNew []*backendURL
busNew := newBackendURLs()
for _, bu := range up.busOriginal {
host := bu.Hostname()
for _, addr := range hostToAddrs[host] {
buCopy := *bu
buCopy.Host = addr
busNew = append(busNew, &backendURL{
url: &buCopy,
})
busNew.add(&buCopy)
}
}
pbus := up.bus.Load()
if areEqualBackendURLs(*pbus, busNew) {
bus := up.bus.Load()
if areEqualBackendURLs(bus.bus, busNew.bus) {
return
}
// Store new backend urls
up.bus.Store(&busNew)
up.bus.Store(busNew)
bus.stopHealthChecks()
}
func areEqualBackendURLs(a, b []*backendURL) bool {
@@ -456,20 +557,23 @@ func getFirstAvailableBackendURL(bus []*backendURL) *backendURL {
for i := 1; i < len(bus); i++ {
if !bus[i].isBroken() {
bu = bus[i]
break
bu.get()
return bu
}
}
bu.get()
return bu
return nil
}
// getLeastLoadedBackendURL returns the backendURL with the minimum number of concurrent requests.
// getLeastLoadedBackendURL returns a non-broken backendURL with the lowest number of concurrent requests.
//
// backendURL.put() must be called on the returned backendURL after the request is complete.
func getLeastLoadedBackendURL(bus []*backendURL, atomicCounter *atomic.Uint32) *backendURL {
if len(bus) == 1 {
// Fast path - return the only backend url.
bu := bus[0]
if bu.isBroken() {
return nil
}
bu.get()
return bu
}
@@ -494,7 +598,7 @@ func getLeastLoadedBackendURL(bus []*backendURL, atomicCounter *atomic.Uint32) *
// Slow path - return the backend with the minimum number of concurrently executed requests.
buMinIdx := n % uint32(len(bus))
minRequests := bus[buMinIdx].concurrentRequests.Load()
for i := uint32(0); i < uint32(len(bus)); i++ {
for i := uint32(1); i < uint32(len(bus)); i++ {
idx := (n + i) % uint32(len(bus))
bu := bus[idx]
if bu.isBroken() {
@@ -508,6 +612,9 @@ func getLeastLoadedBackendURL(bus []*backendURL, atomicCounter *atomic.Uint32) *
}
}
buMin := bus[buMinIdx]
if buMin.isBroken() {
return nil
}
buMin.get()
atomicCounter.CompareAndSwap(n+1, buMinIdx+1)
return buMin
@@ -732,6 +839,11 @@ func reloadAuthConfigData(data []byte) (bool, error) {
acPrev := authConfig.Load()
if acPrev != nil {
acPrev.UnauthorizedUser.stopHealthChecks()
for i := range acPrev.Users {
acPrev.Users[i].stopHealthChecks()
}
metrics.UnregisterSet(acPrev.ms, true)
}
metrics.RegisterSet(ac.ms)
@@ -778,6 +890,8 @@ func parseAuthConfig(data []byte) (*AuthConfig, error) {
return nil, fmt.Errorf("cannot parse metric_labels for unauthorized_user: %w", err)
}
ui.requests = ac.ms.NewCounter(`vmauth_unauthorized_user_requests_total` + metricLabels)
ui.requestErrors = ac.ms.NewCounter(`vmauth_unauthorized_user_request_errors_total` + metricLabels)
ui.backendRequests = ac.ms.NewCounter(`vmauth_unauthorized_user_request_backend_requests_total` + metricLabels)
ui.backendErrors = ac.ms.NewCounter(`vmauth_unauthorized_user_request_backend_errors_total` + metricLabels)
ui.requestsDuration = ac.ms.NewSummary(`vmauth_unauthorized_user_request_duration_seconds` + metricLabels)
ui.concurrencyLimitCh = make(chan struct{}, ui.getMaxConcurrentRequests())
@@ -826,6 +940,8 @@ func parseAuthConfigUsers(ac *AuthConfig) (map[string]*UserInfo, error) {
return nil, fmt.Errorf("cannot parse metric_labels: %w", err)
}
ui.requests = ac.ms.GetOrCreateCounter(`vmauth_user_requests_total` + metricLabels)
ui.requestErrors = ac.ms.GetOrCreateCounter(`vmauth_user_request_errors_total` + metricLabels)
ui.backendRequests = ac.ms.GetOrCreateCounter(`vmauth_user_request_backend_requests_total` + metricLabels)
ui.backendErrors = ac.ms.GetOrCreateCounter(`vmauth_user_request_backend_errors_total` + metricLabels)
ui.requestsDuration = ac.ms.GetOrCreateSummary(`vmauth_user_request_duration_seconds` + metricLabels)
mcr := ui.getMaxConcurrentRequests()
@@ -1060,13 +1176,11 @@ func (up *URLPrefix) sanitizeAndInitialize() error {
}
// Initialize up.bus
bus := make([]*backendURL, len(up.busOriginal))
for i, bu := range up.busOriginal {
bus[i] = &backendURL{
url: bu,
}
bus := newBackendURLs()
for _, bu := range up.busOriginal {
bus.add(bu)
}
up.bus.Store(&bus)
up.bus.Store(bus)
return nil
}

View File

@@ -753,7 +753,7 @@ func TestGetLeastLoadedBackendURL(t *testing.T) {
up.loadBalancingPolicy = "least_loaded"
pbus := up.bus.Load()
bus := *pbus
bus := pbus.bus
fn := func(ns ...int) {
t.Helper()
@@ -825,7 +825,7 @@ func TestBrokenBackend(t *testing.T) {
})
up.loadBalancingPolicy = "least_loaded"
pbus := up.bus.Load()
bus := *pbus
bus := pbus.bus
// explicitly mark one of the backends as broken
bus[1].setBroken()
@@ -848,7 +848,7 @@ func TestDiscoverBackendIPsWithIPV6(t *testing.T) {
up.discoverBackendAddrsIfNeeded()
pbus := up.bus.Load()
bus := *pbus
bus := pbus.bus
if len(bus) != 1 {
t.Fatalf("expected url list to be of size 1; got %d instead", len(bus))
@@ -942,16 +942,14 @@ func mustParseURL(u string) *URLPrefix {
}
func mustParseURLs(us []string) *URLPrefix {
bus := make([]*backendURL, len(us))
bus := newBackendURLs()
urls := make([]*url.URL, len(us))
for i, u := range us {
pu, err := url.Parse(u)
if err != nil {
panic(fmt.Errorf("BUG: cannot parse %q: %w", u, err))
}
bus[i] = &backendURL{
url: pu,
}
bus.add(pu)
urls[i] = pu
}
up := &URLPrefix{}
@@ -960,7 +958,7 @@ func mustParseURLs(us []string) *URLPrefix {
} else {
up.vOriginal = us
}
up.bus.Store(&bus)
up.bus.Store(bus)
up.busOriginal = urls
return up
}

View File

@@ -44,12 +44,17 @@ var (
"See also -maxConcurrentRequests")
idleConnTimeout = flag.Duration("idleConnTimeout", 50*time.Second, "The timeout for HTTP keep-alive connections to backend services. "+
"It is recommended setting this value to values smaller than -http.idleConnTimeout set at backend services")
responseTimeout = flag.Duration("responseTimeout", 5*time.Minute, "The timeout for receiving a response from backend")
responseTimeout = flag.Duration("responseTimeout", 5*time.Minute, "The timeout for receiving a response from backend")
maxConcurrentRequests = flag.Int("maxConcurrentRequests", 1000, "The maximum number of concurrent requests vmauth can process. Other requests are rejected with "+
"'429 Too Many Requests' http status code. See also -maxConcurrentPerUserRequests and -maxIdleConnsPerBackend command-line options")
"'429 Too Many Requests' http status code. See also -maxQueueDuration, -maxConcurrentPerUserRequests and -maxIdleConnsPerBackend command-line options")
maxConcurrentPerUserRequests = flag.Int("maxConcurrentPerUserRequests", 300, "The maximum number of concurrent requests vmauth can process per each configured user. "+
"Other requests are rejected with '429 Too Many Requests' http status code. See also -maxConcurrentRequests command-line option and max_concurrent_requests option "+
"in per-user config")
"Other requests are rejected with '429 Too Many Requests' http status code. See also -maxQueueDuration and -maxConcurrentRequests command-line options "+
"and max_concurrent_requests option in per-user config")
maxQueueDuration = flag.Duration("maxQueueDuration", 10*time.Second, "The maximum duration the request waits for execution when the number of concurrently executed "+
"requests reach -maxConcurrentRequests or -maxConcurrentPerUserRequests before returning '429 Too Many Requests' error. "+
"This allows graceful handling of short spikes in the number of concurrent requests")
reloadAuthKey = flagutil.NewPassword("reloadAuthKey", "Auth key for /-/reload http endpoint. It must be passed via authKey query arg. It overrides -httpAuth.*")
logInvalidAuthTokens = flag.Bool("logInvalidAuthTokens", false, "Whether to log requests with invalid auth tokens. "+
`Such requests are always counted at vmauth_http_request_errors_total{reason="invalid_auth_token"} metric, which is exposed at /metrics page`)
@@ -151,7 +156,6 @@ func requestHandlerWithInternalRoutes(w http.ResponseWriter, r *http.Request) bo
}
func requestHandler(w http.ResponseWriter, r *http.Request) bool {
ats := getAuthTokensFromRequest(r)
if len(ats) == 0 {
// Process requests for unauthorized users
@@ -208,20 +212,45 @@ func processUserRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
ui.requests.Inc()
ctx, cancel := context.WithTimeout(r.Context(), *maxQueueDuration)
defer cancel()
// Limit the concurrency of requests to backends
concurrencyLimitOnce.Do(concurrencyLimitInit)
select {
case concurrencyLimitCh <- struct{}{}:
if err := ui.beginConcurrencyLimit(); err != nil {
if err := ui.beginConcurrencyLimit(ctx); err != nil {
handleConcurrencyLimitError(w, r, err)
<-concurrencyLimitCh
return
}
default:
concurrentRequestsLimitReached.Inc()
err := fmt.Errorf("cannot serve more than -maxConcurrentRequests=%d concurrent requests", cap(concurrencyLimitCh))
handleConcurrencyLimitError(w, r, err)
return
// The -maxConcurrentRequests are executed. Wait until some of the requests are finished,
// so the current request could be executed.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
select {
case concurrencyLimitCh <- struct{}{}:
if err := ui.beginConcurrencyLimit(ctx); err != nil {
handleConcurrencyLimitError(w, r, err)
<-concurrencyLimitCh
return
}
case <-ctx.Done():
err := ctx.Err()
concurrentRequestsLimitReached.Inc()
if errors.Is(err, context.DeadlineExceeded) {
err = fmt.Errorf("cannot start executing the request during -maxQueueDuration=%s because -maxConcurrentRequests=%d concurrent requests are executed",
*maxQueueDuration, cap(concurrencyLimitCh))
handleConcurrencyLimitError(w, r, err)
return
}
err = fmt.Errorf("cannot start executing the request because -maxConcurrentRequests=%d concurrent requests are executed: %w", cap(concurrencyLimitCh), err)
handleConcurrencyLimitError(w, r, err)
return
}
}
processRequest(w, r, ui)
ui.endConcurrencyLimit()
@@ -285,16 +314,18 @@ func processRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
return
}
bu.setBroken()
ui.backendErrors.Inc()
}
err := &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("all the %d backends for the user %q are unavailable", up.getBackendsCount(), ui.name()),
StatusCode: http.StatusBadGateway,
}
httpserver.Errorf(w, r, "%s", err)
ui.backendErrors.Inc()
ui.requestErrors.Inc()
}
func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url.URL, hc HeadersConf, retryStatusCodes []int, ui *UserInfo) (bool, bool) {
ui.backendRequests.Inc()
req := sanitizeRequestHeaders(r)
req.URL = targetURL
@@ -325,7 +356,6 @@ func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url
if errors.Is(err, context.DeadlineExceeded) {
// Timed out request must be counted as errors, since this usually means that the backend is slow.
logger.Warnf("remoteAddr: %s; requestURI: %s; timeout while proxying the response from %s: %s", remoteAddr, requestURI, targetURL, err)
ui.backendErrors.Inc()
}
return false, false
}
@@ -337,6 +367,7 @@ func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url
}
httpserver.Errorf(w, r, "%s", err)
ui.backendErrors.Inc()
ui.requestErrors.Inc()
return true, false
}
if netutil.IsTrivialNetworkError(err) {
@@ -344,11 +375,11 @@ func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url
return false, true
}
// Retry the request if its body wasn't read yet. This usually means that the backend isn't reachable.
// Request body wasn't read yet, this usually means that the backend isn't reachable; retry the request at another backend
remoteAddr := httpserver.GetQuotedRemoteAddr(r)
// NOTE: do not use httpserver.GetRequestURI
// it explicitly reads request body, which may fail retries.
logger.Warnf("remoteAddr: %s; requestURI: %s; retrying the request to %s because of response error: %s", remoteAddr, req.URL, targetURL, err)
logger.Warnf("remoteAddr: %s; requestURI: %s; request to %s failed: %s, retrying the request at another backend", remoteAddr, req.URL, targetURL, err)
return false, false
}
if slices.Contains(retryStatusCodes, res.StatusCode) {
@@ -357,12 +388,13 @@ func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url
// If we get an error from the retry_status_codes list, but cannot execute retry,
// we consider such a request an error as well.
err := &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("got response status code=%d from %s, but cannot retry the request on another backend, because the request has been already consumed",
Err: fmt.Errorf("got response status code=%d from %s, but cannot retry the request at another backend, because the request has been already consumed",
res.StatusCode, targetURL),
StatusCode: http.StatusServiceUnavailable,
}
httpserver.Errorf(w, r, "%s", err)
ui.backendErrors.Inc()
ui.requestErrors.Inc()
return true, false
}
// Retry requests at other backends if it matches retryStatusCodes.
@@ -370,7 +402,7 @@ func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url
remoteAddr := httpserver.GetQuotedRemoteAddr(r)
// NOTE: do not use httpserver.GetRequestURI
// it explicitly reads request body, which may fail retries.
logger.Warnf("remoteAddr: %s; requestURI: %s; retrying the request to %s because response status code=%d belongs to retry_status_codes=%d",
logger.Warnf("remoteAddr: %s; requestURI: %s; request to %s failed, retrying the request at another backend because response status code=%d belongs to retry_status_codes=%d",
remoteAddr, req.URL, targetURL, res.StatusCode, retryStatusCodes)
return false, false
}
@@ -386,6 +418,7 @@ func tryProcessingRequest(w http.ResponseWriter, r *http.Request, targetURL *url
requestURI := httpserver.GetRequestURI(r)
logger.Warnf("remoteAddr: %s; requestURI: %s; error when proxying response body from %s: %s", remoteAddr, requestURI, targetURL, err)
ui.requestErrors.Inc()
return true, false
}
return true, false
@@ -596,6 +629,13 @@ func handleMissingAuthorizationError(w http.ResponseWriter) {
}
func handleConcurrencyLimitError(w http.ResponseWriter, r *http.Request, err error) {
ctx := r.Context()
if errors.Is(ctx.Err(), context.Canceled) {
// Do not return any response for the request canceled by the client,
// since the connection to the client is already closed.
return
}
w.Header().Add("Retry-After", "10")
err = &httpserver.ErrorWithStatusCode{
Err: err,
@@ -652,6 +692,7 @@ type zeroReader struct{}
func (r *zeroReader) Read(_ []byte) (int, error) {
return 0, io.EOF
}
func (r *zeroReader) Close() error {
return nil
}

View File

@@ -212,7 +212,7 @@ func newSrcFS() (*fslocal.FS, error) {
}
func newDstFS(ctx context.Context) (common.RemoteFS, error) {
fs, err := actions.NewRemoteFS(ctx, *dst)
fs, err := actions.NewRemoteFS(ctx, *dst, nil)
if err != nil {
return nil, fmt.Errorf("cannot parse `-dst`=%q: %w", *dst, err)
}
@@ -255,7 +255,7 @@ func newOriginFS(ctx context.Context) (common.OriginFS, error) {
if len(*origin) == 0 {
return &fsnil.FS{}, nil
}
fs, err := actions.NewRemoteFS(ctx, *origin)
fs, err := actions.NewRemoteFS(ctx, *origin, nil)
if err != nil {
return nil, fmt.Errorf("cannot parse `-origin`=%q: %w", *origin, err)
}
@@ -266,7 +266,7 @@ func newRemoteOriginFS(ctx context.Context) (common.RemoteFS, error) {
if len(*origin) == 0 {
return nil, fmt.Errorf("-origin cannot be empty when -snapshotName and -snapshot.createURL aren't set")
}
fs, err := actions.NewRemoteFS(ctx, *origin)
fs, err := actions.NewRemoteFS(ctx, *origin, nil)
if err != nil {
return nil, fmt.Errorf("cannot parse `-origin`=%q: %w", *origin, err)
}

View File

@@ -1,6 +1,7 @@
package main
import (
"context"
"fmt"
"io"
"log"
@@ -37,7 +38,7 @@ func newInfluxProcessor(ic *influx.Client, im *vm.Importer, cc int, separator st
}
}
func (ip *influxProcessor) run() error {
func (ip *influxProcessor) run(ctx context.Context) error {
series, err := ip.ic.Explore()
if err != nil {
return fmt.Errorf("explore query failed: %s", err)
@@ -47,7 +48,7 @@ func (ip *influxProcessor) run() error {
}
question := fmt.Sprintf("Found %d timeseries to import. Continue?", len(series))
if !prompt(question) {
if !prompt(ctx, question) {
return nil
}

View File

@@ -103,7 +103,7 @@ func main() {
}
otsdbProcessor := newOtsdbProcessor(otsdbClient, importer, c.Int(otsdbConcurrency), c.Bool(globalVerbose))
return otsdbProcessor.run()
return otsdbProcessor.run(ctx)
},
},
{
@@ -164,7 +164,7 @@ func main() {
c.Bool(influxSkipDatabaseLabel),
c.Bool(influxPrometheusMode),
c.Bool(globalVerbose))
return processor.run()
return processor.run(ctx)
},
},
{
@@ -279,7 +279,7 @@ func main() {
cc: c.Int(promConcurrency),
isVerbose: c.Bool(globalVerbose),
}
return pp.run()
return pp.run(ctx)
},
},
{

View File

@@ -1,6 +1,7 @@
package main
import (
"context"
"fmt"
"log"
"sync"
@@ -37,7 +38,7 @@ func newOtsdbProcessor(oc *opentsdb.Client, im *vm.Importer, otsdbcc int, verbos
}
}
func (op *otsdbProcessor) run() error {
func (op *otsdbProcessor) run(ctx context.Context) error {
log.Println("Loading all metrics from OpenTSDB for filters: ", op.oc.Filters)
var metrics []string
for _, filter := range op.oc.Filters {
@@ -53,7 +54,7 @@ func (op *otsdbProcessor) run() error {
}
question := fmt.Sprintf("Found %d metrics to import. Continue?", len(metrics))
if !prompt(question) {
if !prompt(ctx, question) {
return nil
}
op.im.ResetStats()

View File

@@ -1,6 +1,7 @@
package main
import (
"context"
"fmt"
"log"
"sync"
@@ -30,7 +31,7 @@ type prometheusProcessor struct {
isVerbose bool
}
func (pp *prometheusProcessor) run() error {
func (pp *prometheusProcessor) run(ctx context.Context) error {
blocks, err := pp.cl.Explore()
if err != nil {
return fmt.Errorf("explore failed: %s", err)
@@ -39,7 +40,7 @@ func (pp *prometheusProcessor) run() error {
return fmt.Errorf("found no blocks to import")
}
question := fmt.Sprintf("Found %d blocks to import. Continue?", len(blocks))
if !prompt(question) {
if !prompt(ctx, question) {
return nil
}

View File

@@ -47,7 +47,7 @@ func (rrp *remoteReadProcessor) run(ctx context.Context) error {
question := fmt.Sprintf("Selected time range %q - %q will be split into %d ranges according to %q step. Continue?",
rrp.filter.timeStart.String(), rrp.filter.timeEnd.String(), len(ranges), rrp.filter.chunk)
if !prompt(question) {
if !prompt(ctx, question) {
return nil
}

View File

@@ -2,6 +2,7 @@ package main
import (
"bufio"
"context"
"fmt"
"os"
"strings"
@@ -15,7 +16,7 @@ const barTpl = `{{ blue "%s:" }} {{ counters . }} {{ bar . "[" "█" (cycle . "
// isSilent should be inited in main
var isSilent bool
func prompt(question string) bool {
func prompt(ctx context.Context, question string) bool {
if isSilent {
return true
}
@@ -25,15 +26,32 @@ func prompt(question string) bool {
}
reader := bufio.NewReader(os.Stdin)
fmt.Print(question, " [Y/n] ")
answer, err := reader.ReadString('\n')
if err != nil {
answerCh := make(chan string, 1)
errCh := make(chan error, 1)
go func() {
answer, err := reader.ReadString('\n')
if err != nil {
errCh <- err
return
}
answerCh <- answer
}()
select {
case <-ctx.Done():
fmt.Println("\nCanceled.")
return false
case err := <-errCh:
panic(err)
case answer := <-answerCh:
answer = strings.TrimSpace(strings.ToLower(answer))
if answer == "" || answer == "yes" || answer == "y" {
return true
}
return false
}
answer = strings.TrimSpace(strings.ToLower(answer))
if answer == "" || answer == "yes" || answer == "y" {
return true
}
return false
}
func wrapErr(vmErr *vm.ImportError, verbose bool) error {

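One trade-off of the rewritten prompt above is worth spelling out: cancellation does not unblock the goroutine parked on os.Stdin; it simply stays blocked until the process exits, which is acceptable for a short-lived CLI. A standalone sketch of the same pattern:

package main

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"strings"
	"time"
)

// confirm mirrors prompt() above: the blocking stdin read runs in a
// goroutine so the caller can select on ctx cancellation. The buffered
// channel lets the goroutine finish its send even if nobody is listening.
func confirm(ctx context.Context, question string) bool {
	fmt.Print(question, " [Y/n] ")
	answerCh := make(chan string, 1)
	go func() {
		answer, err := bufio.NewReader(os.Stdin).ReadString('\n')
		if err != nil {
			answerCh <- "" // treat read errors as "no"
			return
		}
		answerCh <- answer
	}()
	select {
	case <-ctx.Done():
		fmt.Println("\nCanceled.")
		return false
	case answer := <-answerCh:
		answer = strings.TrimSpace(strings.ToLower(answer))
		return answer == "" || answer == "y" || answer == "yes"
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(confirm(ctx, "Continue?"))
}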

@@ -79,7 +79,7 @@ func (p *vmNativeProcessor) run(ctx context.Context) error {
return fmt.Errorf("failed to get tenants: %w", err)
}
question := fmt.Sprintf("The following tenants were discovered: %s.\n Continue?", tenants)
if !prompt(question) {
if !prompt(ctx, question) {
return nil
}
}
@@ -233,7 +233,7 @@ func (p *vmNativeProcessor) runBackfilling(ctx context.Context, tenantID string,
// do not prompt for intercluster because there could be many tenants,
// and we don't want to interrupt the process when moving to the next tenant.
question := foundSeriesMsg + ". Continue?"
if !prompt(question) {
if !prompt(ctx, question) {
return nil
}
} else {


@@ -11,9 +11,11 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
)
@@ -50,8 +52,9 @@ var (
type InsertCtx struct {
Labels sortedLabels
mrs []storage.MetricRow
metricNamesBuf []byte
mrs []storage.MetricRow
mms []metricsmetadata.Row
metricNameBuf []byte
relabelCtx relabel.Ctx
streamAggrCtx streamAggrCtx
@@ -73,8 +76,13 @@ func (ctx *InsertCtx) Reset(rowsLen int) {
}
mrs = slicesutil.SetLength(mrs, rowsLen)
ctx.mrs = mrs[:0]
mms := ctx.mms
for i := range mms {
cleanMetricMetadata(&mms[i])
}
ctx.mms = mms[:0]
ctx.metricNamesBuf = ctx.metricNamesBuf[:0]
ctx.metricNameBuf = ctx.metricNameBuf[:0]
ctx.relabelCtx.Reset()
ctx.streamAggrCtx.Reset()
ctx.skipStreamAggr = false
@@ -84,11 +92,20 @@ func cleanMetricRow(mr *storage.MetricRow) {
mr.MetricNameRaw = nil
}
func cleanMetricMetadata(mm *metricsmetadata.Row) {
mm.MetricFamilyName = nil
mm.Unit = nil
mm.Help = nil
mm.Type = 0
mm.ProjectID = 0
mm.AccountID = 0
}
func (ctx *InsertCtx) marshalMetricNameRaw(prefix []byte, labels []prompb.Label) []byte {
start := len(ctx.metricNamesBuf)
ctx.metricNamesBuf = append(ctx.metricNamesBuf, prefix...)
ctx.metricNamesBuf = storage.MarshalMetricNameRaw(ctx.metricNamesBuf, labels)
metricNameRaw := ctx.metricNamesBuf[start:]
start := len(ctx.metricNameBuf)
ctx.metricNameBuf = append(ctx.metricNameBuf, prefix...)
ctx.metricNameBuf = storage.MarshalMetricNameRaw(ctx.metricNameBuf, labels)
metricNameRaw := ctx.metricNameBuf[start:]
return metricNameRaw[:len(metricNameRaw):len(metricNameRaw)]
}
@@ -143,7 +160,7 @@ func (ctx *InsertCtx) addRow(metricNameRaw []byte, timestamp int64, value float6
mr.MetricNameRaw = metricNameRaw
mr.Timestamp = timestamp
mr.Value = value
if len(ctx.metricNamesBuf) > 16*1024*1024 {
if len(ctx.metricNameBuf) > 16*1024*1024 {
if err := ctx.FlushBufs(); err != nil {
return err
}
@@ -151,6 +168,55 @@ func (ctx *InsertCtx) addRow(metricNameRaw []byte, timestamp int64, value float6
return nil
}
// WriteMetadata writes the given Prometheus protobuf metadata into the storage.
func (ctx *InsertCtx) WriteMetadata(mmpbs []prompb.MetricMetadata) error {
if len(mmpbs) == 0 {
return nil
}
mms := ctx.mms
mms = slicesutil.SetLength(mms, len(mmpbs))
for idx, mmpb := range mmpbs {
mm := &mms[idx]
mm.MetricFamilyName = bytesutil.ToUnsafeBytes(mmpb.MetricFamilyName)
mm.Help = bytesutil.ToUnsafeBytes(mmpb.Help)
mm.Type = mmpb.Type
mm.Unit = bytesutil.ToUnsafeBytes(mmpb.Unit)
}
err := vmstorage.AddMetadataRows(mms)
if err != nil {
return &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("cannot store metrics metadata: %w", err),
StatusCode: http.StatusServiceUnavailable,
}
}
return nil
}
// WritePromMetadata writes the given Prometheus metric metadata into the storage.
func (ctx *InsertCtx) WritePromMetadata(mmps []prometheus.Metadata) error {
if len(mmps) == 0 {
return nil
}
mms := ctx.mms
mms = slicesutil.SetLength(mms, len(mmps))
for idx, mmpb := range mmps {
mm := &mms[idx]
mm.MetricFamilyName = bytesutil.ToUnsafeBytes(mmpb.Metric)
mm.Help = bytesutil.ToUnsafeBytes(mmpb.Help)
mm.Type = mmpb.Type
}
err := vmstorage.AddMetadataRows(mms)
if err != nil {
return &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("cannot store prometheus metrics metadata: %w", err),
StatusCode: http.StatusServiceUnavailable,
}
}
return nil
}
// AddLabelBytes adds (name, value) label to ctx.Labels.
//
// name and value must exist until ctx.Labels is used.
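The zero-copy conversions above (bytesutil.ToUnsafeBytes in WriteMetadata, and the lifetime rule stated for AddLabelBytes) share one constraint: the resulting slices alias request-scoped memory, which is why cleanMetricMetadata nils them out before the rows are pooled for reuse. A minimal sketch of the aliasing behavior, using the standard-library equivalent of the conversion (assumed here, since bytesutil internals are not shown in this diff):

package main

import (
	"fmt"
	"unsafe"
)

// toUnsafeBytes returns a []byte sharing memory with s, with no copy,
// analogous to bytesutil.ToUnsafeBytes. The result must not outlive
// the data backing s.
func toUnsafeBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	name := "http_requests_total" // imagine this backs a request buffer
	b := toUnsafeBytes(name)
	fmt.Printf("aliased without copy: %s\n", b)
	// Before a pooled row is reused, aliases like b must be dropped so
	// the request buffer is not pinned; cleanMetricMetadata does exactly
	// this by setting the byte-slice fields to nil.
	b = nil
	_ = b
}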


@@ -27,6 +27,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/promremotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/zabbixconnector"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
@@ -231,6 +232,17 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
}
firehose.WriteSuccessResponse(w, r)
return true
case "zabbixconnector/api/v1/history":
zabbixconnectorHistoryRequests.Inc()
if err := zabbixconnector.InsertHandlerForHTTP(r); err != nil {
zabbixconnectorHistoryErrors.Inc()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, `{"error":%q}`, err.Error())
return true
}
w.WriteHeader(http.StatusAccepted)
return true
case "/newrelic":
newrelicCheckRequest.Inc()
w.Header().Set("Content-Type", "application/json")
@@ -423,6 +435,9 @@ var (
opentelemetryPushRequests = metrics.NewCounter(`vm_http_requests_total{path="/opentelemetry/v1/metrics", protocol="opentelemetry"}`)
opentelemetryPushErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/opentelemetry/v1/metrics", protocol="opentelemetry"}`)
zabbixconnectorHistoryRequests = metrics.NewCounter(`vm_http_requests_total{path="/zabbixconnector/api/v1/history", protocol="zabbixconnector"}`)
zabbixconnectorHistoryErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/zabbixconnector/api/v1/history", protocol="zabbixconnector"}`)
newrelicWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/newrelic/infra/v2/metrics/events/bulk", protocol="newrelic"}`)
newrelicWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/newrelic/infra/v2/metrics/events/bulk", protocol="newrelic"}`)


@@ -6,6 +6,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prommetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/stream"
@@ -14,8 +15,9 @@ import (
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="opentelemetry"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="opentelemetry"}`)
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="opentelemetry"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="opentelemetry"}`)
metadataInserted = metrics.NewCounter(`vm_metadata_rows_inserted_total{type="opentelemetry"}`)
)
// InsertHandler processes opentelemetry metrics.
@@ -33,12 +35,12 @@ func InsertHandler(req *http.Request) error {
return fmt.Errorf("json encoding isn't supported for opentelemetry format. Use protobuf encoding")
}
}
return stream.ParseStream(req.Body, encoding, processBody, func(tss []prompb.TimeSeries, _ []prompb.MetricMetadata) error {
return insertRows(tss, extraLabels)
return stream.ParseStream(req.Body, encoding, processBody, func(tss []prompb.TimeSeries, mms []prompb.MetricMetadata) error {
return insertRows(tss, mms, extraLabels)
})
}
func insertRows(tss []prompb.TimeSeries, extraLabels []prompb.Label) error {
func insertRows(tss []prompb.TimeSeries, mms []prompb.MetricMetadata, extraLabels []prompb.Label) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)
@@ -75,5 +77,14 @@ func insertRows(tss []prompb.TimeSeries, extraLabels []prompb.Label) error {
}
rowsInserted.Add(rowsTotal)
rowsPerInsert.Update(float64(rowsTotal))
return ctx.FlushBufs()
if err := ctx.FlushBufs(); err != nil {
return fmt.Errorf("cannot flush metric bufs: %w", err)
}
if prommetadata.IsEnabled() {
if err := ctx.WriteMetadata(mms); err != nil {
return err
}
metadataInserted.Add(len(mms))
}
return nil
}


@@ -1,6 +1,7 @@
package prometheusimport
import (
"fmt"
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
@@ -15,8 +16,9 @@ import (
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="prometheus"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="prometheus"}`)
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="prometheus"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="prometheus"}`)
metadataInserted = metrics.NewCounter(`vm_metadata_rows_inserted_total{type="prometheus"}`)
)
// InsertHandler processes `/api/v1/import/prometheus` request.
@@ -30,14 +32,14 @@ func InsertHandler(req *http.Request) error {
return err
}
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, defaultTimestamp, encoding, true, prommetadata.IsEnabled(), func(rows []prometheus.Row, _ []prometheus.Metadata) error {
return insertRows(rows, extraLabels)
return stream.Parse(req.Body, defaultTimestamp, encoding, true, prommetadata.IsEnabled(), func(rows []prometheus.Row, mms []prometheus.Metadata) error {
return insertRows(rows, mms, extraLabels)
}, func(s string) {
httpserver.LogError(req, s)
})
}
func insertRows(rows []prometheus.Row, extraLabels []prompb.Label) error {
func insertRows(rows []prometheus.Row, mms []prometheus.Metadata, extraLabels []prompb.Label) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)
@@ -64,5 +66,15 @@ func insertRows(rows []prometheus.Row, extraLabels []prompb.Label) error {
}
rowsInserted.Add(len(rows))
rowsPerInsert.Update(float64(len(rows)))
return ctx.FlushBufs()
if err := ctx.FlushBufs(); err != nil {
return fmt.Errorf("cannot flush metric bufs: %w", err)
}
if prommetadata.IsEnabled() {
if err := ctx.WritePromMetadata(mms); err != nil {
return err
}
metadataInserted.Add(len(mms))
}
return nil
}
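
Since metadata parsing is gated on prommetadata.IsEnabled(), `# HELP` and `# TYPE` comments in the exposition text are now ingested alongside samples when the feature is on. A quick sketch of exercising the endpoint, assuming a single-node instance listening on localhost:8428 with the metadata feature enabled:

package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	body := `# HELP http_requests_total Total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{job="app"} 42
`
	resp, err := http.Post(
		"http://localhost:8428/api/v1/import/prometheus",
		"text/plain",
		strings.NewReader(body),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // the import endpoints reply 204 No Content on success
}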


@@ -4,13 +4,15 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prommetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="promscrape"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="promscrape"}`)
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="promscrape"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="promscrape"}`)
metadataRowsInserted = metrics.NewCounter(`vm_metadata_rows_inserted_total{type="promscrape"}`)
)
const maxRowsPerBlock = 10000
@@ -41,6 +43,13 @@ func Push(wr *prompb.WriteRequest) {
}
push(ctx, tssBlock)
}
if prommetadata.IsEnabled() {
if err := ctx.WriteMetadata(wr.Metadata); err != nil {
logger.Errorf("cannot write promscrape metrics metadata to storage: %s", err)
} else {
metadataRowsInserted.Add(len(wr.Metadata))
}
}
}
func push(ctx *common.InsertCtx, tss []prompb.TimeSeries) {


@@ -1,10 +1,12 @@
package promremotewrite
import (
"fmt"
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prommetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/promremotewrite/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
@@ -12,8 +14,9 @@ import (
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="promremotewrite"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="promremotewrite"}`)
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="promremotewrite"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="promremotewrite"}`)
metadataInserted = metrics.NewCounter(`vm_metadata_rows_inserted_total{type="promremotewrite"}`)
)
// InsertHandler processes remote write for prometheus.
@@ -23,12 +26,12 @@ func InsertHandler(req *http.Request) error {
return err
}
isVMRemoteWrite := req.Header.Get("Content-Encoding") == "zstd"
return stream.Parse(req.Body, isVMRemoteWrite, func(tss []prompb.TimeSeries, _ []prompb.MetricMetadata) error {
return insertRows(tss, extraLabels)
return stream.Parse(req.Body, isVMRemoteWrite, func(tss []prompb.TimeSeries, mms []prompb.MetricMetadata) error {
return insertRows(tss, mms, extraLabels)
})
}
func insertRows(timeseries []prompb.TimeSeries, extraLabels []prompb.Label) error {
func insertRows(timeseries []prompb.TimeSeries, mms []prompb.MetricMetadata, extraLabels []prompb.Label) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)
@@ -68,5 +71,15 @@ func insertRows(timeseries []prompb.TimeSeries, extraLabels []prompb.Label) erro
}
rowsInserted.Add(rowsTotal)
rowsPerInsert.Update(float64(rowsTotal))
return ctx.FlushBufs()
if err := ctx.FlushBufs(); err != nil {
return fmt.Errorf("cannot flush metric bufs: %w", err)
}
if prommetadata.IsEnabled() {
if err := ctx.WriteMetadata(mms); err != nil {
return err
}
metadataInserted.Add(len(mms))
}
return nil
}


@@ -0,0 +1,67 @@
package zabbixconnector
import (
"net/http"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/zabbixconnector"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/zabbixconnector/stream"
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="zabbixconnector"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="zabbixconnector"}`)
)
// InsertHandlerForHTTP processes ZabbixConnector history data sent via POST to /zabbixconnector/api/v1/history.
func InsertHandlerForHTTP(req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(rows []zabbixconnector.Row) error {
return insertRows(rows, extraLabels)
})
}
func insertRows(rows []zabbixconnector.Row, extraLabels []prompb.Label) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)
rowsTotal := len(rows)
ctx.Reset(rowsTotal)
hasRelabeling := relabel.HasRelabeling()
for i := range rows {
r := &rows[i]
ctx.Labels = ctx.Labels[:0]
for k := range r.Tags {
t := &r.Tags[k]
ctx.AddLabelBytes(t.Key, t.Value)
}
for k := range extraLabels {
label := &extraLabels[k]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
continue
}
ctx.SortLabelsIfNeeded()
if err := ctx.WriteDataPoint(nil, ctx.Labels, r.Timestamp, r.Value); err != nil {
return err
}
}
rowsInserted.Add(rowsTotal)
rowsPerInsert.Update(float64(rowsTotal))
return ctx.FlushBufs()
}
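
The new handler accepts the history export stream produced by Zabbix connectors. A sketch of posting to it, assuming a vminsert-compatible instance on localhost:8428; the payload itself is elided because it must match the Zabbix connector history export format parsed by lib/protoparser/zabbixconnector:

package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// payload must be a Zabbix connector history export document as
	// understood by lib/protoparser/zabbixconnector; elided here.
	var payload []byte

	// extra_label query args become labels on every ingested row,
	// matching protoparserutil.GetExtraLabels in the handler above.
	url := "http://localhost:8428/zabbixconnector/api/v1/history?extra_label=source=zabbix"
	resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // the handler replies 202 Accepted on success
}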


@@ -104,7 +104,7 @@ func newDstFS() (*fslocal.FS, error) {
}
func newSrcFS(ctx context.Context) (common.RemoteFS, error) {
fs, err := actions.NewRemoteFS(ctx, *src)
fs, err := actions.NewRemoteFS(ctx, *src, nil)
if err != nil {
return nil, fmt.Errorf("cannot parse `-src`=%q: %w", *src, err)
}


@@ -421,6 +421,16 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
}
w.WriteHeader(http.StatusNoContent)
return true
case "/api/v1/metadata":
// Serve metric metadata, see https://prometheus.io/docs/prometheus/latest/querying/api/#querying-metric-metadata
metadataRequests.Inc()
if err := prometheus.MetadataHandler(qt, startTime, w, r); err != nil {
metadataErrors.Inc()
httpserver.SendPrometheusError(w, r, err)
return true
}
return true
default:
return false
}
@@ -574,12 +584,6 @@ func handleStaticAndSimpleRequests(w http.ResponseWriter, r *http.Request, path
w.Header().Set("Content-Type", "application/json")
fmt.Fprint(w, `{"status":"success","data":{"notifiers":[]}}`)
return true
case "/api/v1/metadata":
// Return dumb placeholder for https://prometheus.io/docs/prometheus/latest/querying/api/#querying-metric-metadata
metadataRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, "%s", `{"status":"success","data":{}}`)
return true
case "/api/v1/status/buildinfo":
buildInfoRequests.Inc()
w.Header().Set("Content-Type", "application/json")
@@ -708,7 +712,9 @@ var (
alertsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/alerts"}`)
notifiersRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/notifiers"}`)
metadataRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/metadata"}`)
metadataRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/metadata"}`)
metadataErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/metadata"}`)
buildInfoRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/buildinfo"}`)
queryExemplarsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/query_exemplars"}`)


@@ -20,6 +20,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
)
var (
@@ -865,6 +866,23 @@ func LabelValues(qt *querytracer.Tracer, labelName string, sq *storage.SearchQue
return labelValues, nil
}
// GetMetricsMetadata returns metric metadata for time series metric names matching the given args
func GetMetricsMetadata(qt *querytracer.Tracer, limit int, metricName string) ([]*metricsmetadata.Row, error) {
qt = qt.NewChild("get metrics metadata: limit=%d, metric_name=%q", limit, metricName)
defer qt.Done()
metadata := vmstorage.Storage.GetMetadataRows(qt, limit, metricName)
sort.Slice(metadata, func(i, j int) bool {
return string(metadata[i].MetricFamilyName) < string(metadata[j].MetricFamilyName)
})
if limit > 0 && len(metadata) >= limit {
metadata = metadata[:limit]
}
return metadata, nil
}
// GraphiteTagValues returns tag values for the given tagName until the given deadline.
func GraphiteTagValues(qt *querytracer.Tracer, tagName, filter string, limit int, deadline searchutil.Deadline) ([]string, error) {
qt = qt.NewChild("get graphite tag values for tagName=%s, filter=%s, limit=%d", tagName, filter, limit)


@@ -0,0 +1,35 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
) %}
{% stripspace %}
MetadataResponse generates response for /api/v1/metadata
See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-metric-metadata
{% func MetadataResponse( result []*metricsmetadata.Row, qt *querytracer.Tracer) %}
{
"status":"success",
"data": {
{% code
mapItems := len(result)
currentItem := 0
%}
{% for _, row := range result %}
"{%s string(row.MetricFamilyName) %}": [
{
"type": {%q= row.Type.String() %},
{% if len(row.Unit) > 0 -%}
"unit": {%q= string(row.Unit) %},
{% endif -%}
"help": {%q= string(row.Help) %}
}
]
{% if currentItem != mapItems-1 %},{% endif %}
{% code currentItem++ %}
{% endfor %}
}
{%= dumpQueryTrace(qt) %}
}
{% endfunc %}
{% endstripspace %}


@@ -0,0 +1,108 @@
// Code generated by qtc from "metadata_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vmselect/prometheus/metadata_response.qtpl:1
package prometheus
//line app/vmselect/prometheus/metadata_response.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
)
// MetadataResponse generates response for /api/v1/metadata. See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-metric-metadata
//line app/vmselect/prometheus/metadata_response.qtpl:9
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/metadata_response.qtpl:9
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/metadata_response.qtpl:9
func StreamMetadataResponse(qw422016 *qt422016.Writer, result []*metricsmetadata.Row, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/metadata_response.qtpl:9
qw422016.N().S(`{"status":"success","data": {`)
//line app/vmselect/prometheus/metadata_response.qtpl:14
mapItems := len(result)
currentItem := 0
//line app/vmselect/prometheus/metadata_response.qtpl:17
for _, row := range result {
//line app/vmselect/prometheus/metadata_response.qtpl:17
qw422016.N().S(`"`)
//line app/vmselect/prometheus/metadata_response.qtpl:18
qw422016.E().S(string(row.MetricFamilyName))
//line app/vmselect/prometheus/metadata_response.qtpl:18
qw422016.N().S(`": [{"type":`)
//line app/vmselect/prometheus/metadata_response.qtpl:20
qw422016.N().Q(row.Type.String())
//line app/vmselect/prometheus/metadata_response.qtpl:20
qw422016.N().S(`,`)
//line app/vmselect/prometheus/metadata_response.qtpl:21
if len(row.Unit) > 0 {
//line app/vmselect/prometheus/metadata_response.qtpl:21
qw422016.N().S(`"unit":`)
//line app/vmselect/prometheus/metadata_response.qtpl:22
qw422016.N().Q(string(row.Unit))
//line app/vmselect/prometheus/metadata_response.qtpl:22
qw422016.N().S(`,`)
//line app/vmselect/prometheus/metadata_response.qtpl:23
}
//line app/vmselect/prometheus/metadata_response.qtpl:23
qw422016.N().S(`"help":`)
//line app/vmselect/prometheus/metadata_response.qtpl:24
qw422016.N().Q(string(row.Help))
//line app/vmselect/prometheus/metadata_response.qtpl:24
qw422016.N().S(`}]`)
//line app/vmselect/prometheus/metadata_response.qtpl:27
if currentItem != mapItems-1 {
//line app/vmselect/prometheus/metadata_response.qtpl:27
qw422016.N().S(`,`)
//line app/vmselect/prometheus/metadata_response.qtpl:27
}
//line app/vmselect/prometheus/metadata_response.qtpl:28
currentItem++
//line app/vmselect/prometheus/metadata_response.qtpl:29
}
//line app/vmselect/prometheus/metadata_response.qtpl:29
qw422016.N().S(`}`)
//line app/vmselect/prometheus/metadata_response.qtpl:31
streamdumpQueryTrace(qw422016, qt)
//line app/vmselect/prometheus/metadata_response.qtpl:31
qw422016.N().S(`}`)
//line app/vmselect/prometheus/metadata_response.qtpl:33
}
//line app/vmselect/prometheus/metadata_response.qtpl:33
func WriteMetadataResponse(qq422016 qtio422016.Writer, result []*metricsmetadata.Row, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/metadata_response.qtpl:33
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/metadata_response.qtpl:33
StreamMetadataResponse(qw422016, result, qt)
//line app/vmselect/prometheus/metadata_response.qtpl:33
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/metadata_response.qtpl:33
}
//line app/vmselect/prometheus/metadata_response.qtpl:33
func MetadataResponse(result []*metricsmetadata.Row, qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/metadata_response.qtpl:33
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/metadata_response.qtpl:33
WriteMetadataResponse(qb422016, result, qt)
//line app/vmselect/prometheus/metadata_response.qtpl:33
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/metadata_response.qtpl:33
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/metadata_response.qtpl:33
return qs422016
//line app/vmselect/prometheus/metadata_response.qtpl:33
}


@@ -639,6 +639,37 @@ func LabelsHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseW
return nil
}
// MetadataHandler processes /api/v1/metadata request.
//
// See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-metric-metadata
func MetadataHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error {
limit, err := httputil.GetInt(r, "limit")
if err != nil {
return err
}
if limit < 0 {
limit = 0
}
metricName := r.FormValue("metric")
metadata, err := netstorage.GetMetricsMetadata(qt, limit, metricName)
if err != nil {
return fmt.Errorf("cannot get metadata: %w", err)
}
qt.Done()
w.Header().Set("Content-Type", "application/json")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteMetadataResponse(bw, metadata, qt)
if err := bw.Flush(); err != nil {
return fmt.Errorf("cannot send metadata response to remote client: %w", err)
}
return nil
}
var labelsDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/labels"}`)
// SeriesCountHandler processes /api/v1/series/count request.

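The handler honors the standard `metric` and `limit` parameters from the Prometheus metadata API. A small sketch of querying it, assuming a vmselect or single-node instance on localhost:8428:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// limit caps the number of returned entries; metric filters by name.
	resp, err := http.Get("http://localhost:8428/api/v1/metadata?limit=5&metric=http_requests_total")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Per the template above, the body looks like:
	// {"status":"success","data":{"http_requests_total":[{"type":"counter","help":"..."}]}}
	fmt.Println(string(body))
}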

@@ -1169,60 +1169,6 @@ func evalInstantRollup(qt *querytracer.Tracer, ec *EvalConfig, funcName string,
},
}
return evalExpr(qt, ec, be)
case "rate":
if iafc != nil {
if !strings.EqualFold(iafc.ae.Name, "sum") {
qt.Printf("do not apply instant rollup optimization for incremental aggregate %s()", iafc.ae.Name)
return evalAt(qt, timestamp, window)
}
qt.Printf("optimized calculation for sum(rate(m[d])) as (sum(increase(m[d])) / d)")
afe := expr.(*metricsql.AggrFuncExpr)
fe := afe.Args[0].(*metricsql.FuncExpr)
feIncrease := *fe
feIncrease.Name = "increase"
// copy RollupExpr to drop possible offset,
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9762
newArg := copyRollupExpr(fe.Args[0].(*metricsql.RollupExpr))
newArg.Offset = nil
feIncrease.Args = []metricsql.Expr{newArg}
d := newArg.Window.Duration(ec.Step)
if d == 0 {
d = ec.Step
}
afeIncrease := *afe
afeIncrease.Args = []metricsql.Expr{&feIncrease}
be := &metricsql.BinaryOpExpr{
Op: "/",
KeepMetricNames: true,
Left: &afeIncrease,
Right: &metricsql.NumberExpr{
N: float64(d) / 1000,
},
}
return evalExpr(qt, ec, be)
}
qt.Printf("optimized calculation for instant rollup rate(m[d]) as (increase(m[d]) / d)")
fe := expr.(*metricsql.FuncExpr)
feIncrease := *fe
feIncrease.Name = "increase"
// copy RollupExpr to drop possible offset,
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9762
newArg := copyRollupExpr(fe.Args[0].(*metricsql.RollupExpr))
newArg.Offset = nil
feIncrease.Args = []metricsql.Expr{newArg}
d := newArg.Window.Duration(ec.Step)
if d == 0 {
d = ec.Step
}
be := &metricsql.BinaryOpExpr{
Op: "/",
KeepMetricNames: fe.KeepMetricNames,
Left: &feIncrease,
Right: &metricsql.NumberExpr{
N: float64(d) / 1000,
},
}
return evalExpr(qt, ec, be)
case "max_over_time":
if iafc != nil {
if !strings.EqualFold(iafc.ae.Name, "max") {

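For reference, the branch deleted above relied on the standard rewrite of rates into increases, where d is the lookbehind window in seconds (the code divides by 1000 because the window duration is returned in milliseconds):

rate(m[d]) = increase(m[d]) / d,  and therefore  sum(rate(m[d])) = sum(increase(m[d])) / d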

@@ -132,7 +132,7 @@ func InitRollupResultCache(cachePath string) {
c = workingsetcache.New(cacheSize)
rollupResultCacheKeyPrefix.Store(newRollupResultCacheKeyPrefix())
}
if *disableCache {
if *disableCache && len(rollupResultCachePath) > 0 && !*resetRollupResultCacheOnStartup {
c.Reset()
}

File diffs suppressed for six files because one or more lines are too long

@@ -37,10 +37,10 @@
<meta property="og:title" content="UI for VictoriaMetrics">
<meta property="og:url" content="https://victoriametrics.com/">
<meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data">
<script type="module" crossorigin src="./assets/index-zpalCSif.js"></script>
<link rel="modulepreload" crossorigin href="./assets/vendor-DY9kCvzk.js">
<script type="module" crossorigin src="./assets/index-Clpj_g75.js"></script>
<link rel="modulepreload" crossorigin href="./assets/vendor-D5YL0cqB.js">
<link rel="stylesheet" crossorigin href="./assets/vendor-D1GxaB_c.css">
<link rel="stylesheet" crossorigin href="./assets/index-CBxdwuZH.css">
<link rel="stylesheet" crossorigin href="./assets/index-jEWkrqzO.css">
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>


@@ -22,6 +22,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/syncwg"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
@@ -90,6 +91,9 @@ var (
"In most cases, this value should not be changed. The maximum allowed value is 23h.")
logNewSeriesAuthKey = flagutil.NewPassword("logNewSeriesAuthKey", "authKey, which must be passed in query string to /internal/log_new_series. It overrides -httpAuth.*")
metadataStorageSize = flagutil.NewBytes("storage.maxMetadataStorageSize", 0, "Overrides the maximum size of the in-memory storage for metrics metadata entries. "+
"If set to 0 or a negative value, defaults to 1% of the allowed memory.")
)
// CheckTimeRange returns true if the given tr is denied for querying.
@@ -114,12 +118,13 @@ func Init(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
}
resetResponseCacheIfNeeded = resetCacheIfNeeded
storage.SetRetentionTimezoneOffset(*retentionTimezoneOffset)
storage.LegacySetRetentionTimezoneOffset(*retentionTimezoneOffset)
storage.SetFreeDiskSpaceLimit(minFreeDiskSpaceBytes.N)
storage.SetTSIDCacheSize(cacheSizeStorageTSID.IntN())
storage.SetTagFiltersCacheSize(cacheSizeIndexDBTagFilters.IntN())
storage.SetMetricNamesStatsCacheSize(cacheSizeMetricNamesStats.IntN())
storage.SetMetricNameCacheSize(cacheSizeStorageMetricName.IntN())
storage.SetMetadataStorageSize(metadataStorageSize.IntN())
mergeset.SetIndexBlocksCacheSize(cacheSizeIndexDBIndexBlocks.IntN())
mergeset.SetDataBlocksCacheSize(cacheSizeIndexDBDataBlocks.IntN())
mergeset.SetDataBlocksSparseCacheSize(cacheSizeIndexDBDataBlocksSparse.IntN())
@@ -194,6 +199,19 @@ func AddRows(mrs []storage.MetricRow) error {
return nil
}
// AddMetadataRows adds mms to the storage.
//
// The caller should limit the number of concurrent calls to AddMetadataRows() in order to limit memory usage.
func AddMetadataRows(mms []metricsmetadata.Row) error {
if Storage.IsReadOnly() {
return errReadOnly
}
WG.Add(1)
Storage.AddMetadataRows(mms)
WG.Done()
return nil
}
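
The doc comment above asks callers to bound concurrency. A minimal sketch of doing that with a buffered-channel semaphore (the limit of 8 is an arbitrary illustration, not a tuned value):

package main

// sem caps the number of concurrent metadata writes.
var sem = make(chan struct{}, 8)

// withWriteLimit wraps a write call such as vmstorage.AddMetadataRows
// so that at most cap(sem) invocations run at once.
func withWriteLimit(write func() error) error {
	sem <- struct{}{}        // acquire a slot
	defer func() { <-sem }() // release it on return
	return write()
}

func main() {
	_ = withWriteLimit(func() error {
		// call vmstorage.AddMetadataRows(mms) here
		return nil
	})
}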
var errReadOnly = errors.New("the storage is in read-only mode; check -storage.minFreeDiskSpaceBytes command-line flag value")
// RegisterMetricNames registers all the metrics from mrs in the storage.
@@ -482,7 +500,7 @@ func writeStorageMetrics(w io.Writer, strg *storage.Storage) {
var m storage.Metrics
strg.UpdateMetrics(&m)
tm := &m.TableMetrics
idbm := &m.IndexDBMetrics
idbm := &m.TableMetrics.IndexDBMetrics
metrics.WriteGaugeUint64(w, fmt.Sprintf(`vm_free_disk_space_bytes{path=%q}`, *DataPath), fs.MustGetFreeSpace(*DataPath))
metrics.WriteGaugeUint64(w, fmt.Sprintf(`vm_free_disk_space_limit_bytes{path=%q}`, *DataPath), uint64(minFreeDiskSpaceBytes.N))
@@ -610,75 +628,82 @@ func writeStorageMetrics(w io.Writer, strg *storage.Storage) {
metrics.WriteCounterUint64(w, `vm_missing_metric_names_for_metric_id_total`, idbm.MissingMetricNamesForMetricID)
metrics.WriteCounterUint64(w, `vm_date_metric_id_cache_syncs_total`, m.DateMetricIDCacheSyncsCount)
metrics.WriteCounterUint64(w, `vm_date_metric_id_cache_resets_total`, m.DateMetricIDCacheResetsCount)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/tsid"}`, m.TSIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/metricIDs"}`, m.MetricIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/metricName"}`, m.MetricNameCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/date_metricID"}`, m.DateMetricIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/hour_metric_ids"}`, m.HourMetricIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/next_day_metric_ids"}`, m.NextDayMetricIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/regexps"}`, uint64(storage.RegexpCacheSize()))
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/regexpPrefixes"}`, uint64(storage.RegexpPrefixesCacheSize()))
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/dataBlocksSparse"}`, idbm.DataBlocksSparseCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/metricID"}`, idbm.MetricIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/date_metricID"}`, idbm.DateMetricIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/regexps"}`, uint64(storage.RegexpCacheSize()))
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/regexpPrefixes"}`, uint64(storage.RegexpPrefixesCacheSize()))
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/tsid"}`, m.TSIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/metricIDs"}`, m.MetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/metricName"}`, m.MetricNameCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/hour_metric_ids"}`, m.HourMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/next_day_metric_ids"}`, m.NextDayMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/regexps"}`, storage.RegexpCacheSizeBytes())
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/regexpPrefixes"}`, storage.RegexpPrefixesCacheSizeBytes())
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/metricID"}`, idbm.MetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/date_metricID"}`, idbm.DateMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/dataBlocksSparse"}`, idbm.DataBlocksSparseCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/date_metricID"}`, m.DateMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/hour_metric_ids"}`, m.HourMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/next_day_metric_ids"}`, m.NextDayMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/regexps"}`, uint64(storage.RegexpCacheSizeBytes()))
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/regexpPrefixes"}`, uint64(storage.RegexpPrefixesCacheSizeBytes()))
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/tsid"}`, m.TSIDCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/metricIDs"}`, m.MetricIDCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/metricName"}`, m.MetricNameCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/regexps"}`, storage.RegexpCacheMaxSizeBytes())
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/regexpPrefixes"}`, storage.RegexpPrefixesCacheMaxSizeBytes())
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="indexdb/dataBlocksSparse"}`, idbm.DataBlocksSparseCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/regexps"}`, uint64(storage.RegexpCacheMaxSizeBytes()))
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/regexpPrefixes"}`, uint64(storage.RegexpPrefixesCacheMaxSizeBytes()))
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/indexBlocks"}`, tm.IndexBlocksCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/tsid"}`, m.TSIDCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/metricIDs"}`, m.MetricIDCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/metricName"}`, m.MetricNameCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/indexBlocks"}`, tm.IndexBlocksCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/regexps"}`, storage.RegexpCacheRequests())
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/regexpPrefixes"}`, storage.RegexpPrefixesCacheRequests())
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="indexdb/dataBlocksSparse"}`, idbm.DataBlocksSparseCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/regexps"}`, storage.RegexpCacheRequests())
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/regexpPrefixes"}`, storage.RegexpPrefixesCacheRequests())
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/indexBlocks"}`, tm.IndexBlocksCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/tsid"}`, m.TSIDCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/metricIDs"}`, m.MetricIDCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/metricName"}`, m.MetricNameCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/indexBlocks"}`, tm.IndexBlocksCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/regexps"}`, storage.RegexpCacheMisses())
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/regexpPrefixes"}`, storage.RegexpPrefixesCacheMisses())
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="indexdb/dataBlocksSparse"}`, idbm.DataBlocksSparseCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/regexps"}`, storage.RegexpCacheMisses())
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/regexpPrefixes"}`, storage.RegexpPrefixesCacheMisses())
metrics.WriteCounterUint64(w, `vm_deleted_metrics_total{type="indexdb"}`, m.DeletedMetricsCount)
metrics.WriteCounterUint64(w, `vm_cache_resets_total{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheResets)
metrics.WriteCounterUint64(w, `vm_cache_collisions_total{type="storage/tsid"}`, m.TSIDCacheCollisions)
metrics.WriteCounterUint64(w, `vm_cache_collisions_total{type="storage/metricName"}`, m.MetricNameCacheCollisions)
metrics.WriteCounterUint64(w, `vm_cache_syncs_total{type="indexdb/metricID"}`, idbm.MetricIDCacheSyncsCount)
metrics.WriteCounterUint64(w, `vm_cache_syncs_total{type="indexdb/date_metricID"}`, idbm.DateMetricIDCacheSyncsCount)
metrics.WriteCounterUint64(w, `vm_cache_rotations_total{type="indexdb/metricID"}`, idbm.MetricIDCacheRotationsCount)
metrics.WriteCounterUint64(w, `vm_cache_rotations_total{type="indexdb/date_metricID"}`, idbm.DateMetricIDCacheRotationsCount)
metrics.WriteCounterUint64(w, `vm_deleted_metrics_total{type="indexdb"}`, m.DeletedMetricsCount)
metrics.WriteGaugeUint64(w, `vm_next_retention_seconds`, m.NextRetentionSeconds)
if *trackMetricNamesStats {
@@ -689,6 +714,11 @@ func writeStorageMetrics(w io.Writer, strg *storage.Storage) {
metrics.WriteGaugeUint64(w, `vm_downsampling_partitions_scheduled`, tm.ScheduledDownsamplingPartitions)
metrics.WriteGaugeUint64(w, `vm_downsampling_partitions_scheduled_size_bytes`, tm.ScheduledDownsamplingPartitionsSize)
metrics.WriteGaugeUint64(w, `vm_metrics_metadata_storage_items`, m.MetadataStorageItemsCurrent)
metrics.WriteGaugeUint64(w, `vm_metrics_metadata_storage_size_bytes`, m.MetadataStorageCurrentSizeBytes)
metrics.WriteGaugeUint64(w, `vm_metrics_metadata_storage_max_size_bytes`, m.MetadataStorageMaxSizeBytes)
}
func jsonResponseError(w http.ResponseWriter, err error) {


@@ -1,4 +1,4 @@
FROM golang:1.25.4 AS build-web-stage
FROM golang:1.25.5 AS build-web-stage
COPY build /build
WORKDIR /build


@@ -5194,9 +5194,9 @@
"license": "MIT"
},
"node_modules/js-yaml": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.0.tgz",
"integrity": "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==",
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz",
"integrity": "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==",
"dev": true,
"license": "MIT",
"dependencies": {


@@ -20,6 +20,7 @@ export interface ChartTooltipProps {
info?: ReactNode;
marker?: string;
show?: boolean;
duplicateCount?: number;
onClose?: (id: string) => void;
}
@@ -35,6 +36,7 @@ const ChartTooltip: FC<ChartTooltipProps> = ({
statsFormatted,
isSticky,
marker,
duplicateCount = 0,
onClose
}) => {
const tooltipRef = useRef<HTMLDivElement>(null);
@@ -156,6 +158,7 @@ const ChartTooltip: FC<ChartTooltipProps> = ({
<p className="vm-chart-tooltip-data__value">
<b>{value}</b>{unit}
</p>
{duplicateCount > 1 && <p>(overlapping points: {duplicateCount})</p>}
</div>
{statsFormatted && (
<table className="vm-chart-tooltip-stats">


@@ -8,7 +8,8 @@ import { useHideDuplicateFields } from "./hooks/useHideDuplicateFields";
import Accordion from "../../../Main/Accordion/Accordion";
import { useLegendGroup } from "./hooks/useLegendGroup";
import useCopyToClipboard from "../../../../hooks/useCopyToClipboard";
import { DEFAULT_MAX_SERIES } from "../../../../constants/graph";
import { LEGEND_COLLAPSE_SERIES_LIMIT } from "../../../../constants/graph";
import { getFromStorage } from "../../../../utils/storage";
export type LegendProps = {
labels: LegendItemType[];
@@ -38,17 +39,26 @@ const LegendGroup: FC<LegendGroupProps> = ({ labels, group, isAnomalyView, onCha
const Content = isTableView ? LegendTable : LegendLines;
const disableAutoCollapse = getFromStorage("LEGEND_AUTO_COLLAPSE") === "false"
const defaultExpanded = disableAutoCollapse ? true : sortedLabels.length <= LEGEND_COLLAPSE_SERIES_LIMIT
const expandedWarning = (
<span className="vm-legend-group-header__warning">
Legend collapsed by default ({sortedLabels.length} series); click to expand.
</span>
)
return (
<div
className="vm-legend-group"
key={group}
>
<Accordion
defaultExpanded={sortedLabels.length < DEFAULT_MAX_SERIES.chart}
defaultExpanded={defaultExpanded}
title={(
<div className="vm-legend-group-header">
<div className="vm-legend-group-header-title">
Group by{groupByLabel ? "" : " query"}: <b>{group}</b>
Group by{groupByLabel ? "" : " query"}: <b>{group}</b> {!defaultExpanded && expandedWarning}
</div>
{!!duplicateFields.length && (
<div className="vm-legend-group-header-labels">


@@ -32,6 +32,14 @@
}
}
&__warning {
flex-grow: 1;
text-align: right;
padding-right: calc($padding-large * 2);
font-size: $font-size-small;
color: $color-warning;
}
&-labels {
display: flex;
flex-wrap: wrap;


@@ -1,15 +1,17 @@
import { forwardRef, useCallback, useImperativeHandle, useState } from "preact/compat";
import { forwardRef, useCallback, useEffect, useImperativeHandle, useState } from "preact/compat";
import { DisplayType, ErrorTypes } from "../../../../types";
import TextField from "../../../Main/TextField/TextField";
import Tooltip from "../../../Main/Tooltip/Tooltip";
import { InfoIcon, RestartIcon } from "../../../Main/Icons";
import Button from "../../../Main/Button/Button";
import { DEFAULT_MAX_SERIES } from "../../../../constants/graph";
import { DEFAULT_MAX_SERIES, LEGEND_COLLAPSE_SERIES_LIMIT } from "../../../../constants/graph";
import "./style.scss";
import classNames from "classnames";
import useDeviceDetect from "../../../../hooks/useDeviceDetect";
import { ChildComponentHandle } from "../GlobalSettings";
import { useCustomPanelDispatch, useCustomPanelState } from "../../../../state/customPanel/CustomPanelStateContext";
import Switch from "../../../Main/Switch/Switch";
import { getFromStorage, saveToStorage } from "../../../../utils/storage";
interface ServerConfiguratorProps {
onClose: () => void
@@ -27,6 +29,9 @@ const LimitsConfigurator = forwardRef<ChildComponentHandle, ServerConfiguratorPr
const { seriesLimits } = useCustomPanelState();
const customPanelDispatch = useCustomPanelDispatch();
const storageCollapse = getFromStorage("LEGEND_AUTO_COLLAPSE")
const [legendCollapse, setLegendCollapse] = useState(storageCollapse ? storageCollapse === "true" : true);
const [limits, setLimits] = useState(seriesLimits);
const [error, setError] = useState({
table: "",
@@ -52,6 +57,10 @@ const LimitsConfigurator = forwardRef<ChildComponentHandle, ServerConfiguratorPr
onClose();
}, [limits]);
useEffect(() => {
saveToStorage("LEGEND_AUTO_COLLAPSE", `${legendCollapse}`)
}, [legendCollapse]);
useImperativeHandle(ref, () => ({ handleApply }), [handleApply]);
return (
@@ -97,6 +106,19 @@ const LimitsConfigurator = forwardRef<ChildComponentHandle, ServerConfiguratorPr
</div>
))}
</div>
<div className="vm-graph-settings-row">
<span className="vm-graph-settings-row__label">Auto-collapse legend</span>
<Switch
value={legendCollapse}
onChange={setLegendCollapse}
label={legendCollapse ? "Enabled" : "Disabled"}
fullWidth={isMobile}
/>
<span className="vm-legend-configs-item__info">
Collapses the legend when the series count exceeds {LEGEND_COLLAPSE_SERIES_LIMIT} to reduce UI load.
</span>
</div>
</div>
);
});


@@ -18,6 +18,7 @@
align-items: center;
justify-content: space-between;
gap: $padding-global;
margin-bottom: $padding-global;
&_mobile {
gap: $padding-small;


@@ -132,7 +132,7 @@ const BaseRule = ({ item }: BaseRuleProps) => {
<th>Series returned</th>
<th>Series fetched</th>
<th>Duration</th>
<th>Executed at</th>
<th>Execution timestamp</th>
</tr>
</thead>
<tbody>
@@ -154,7 +154,7 @@ const BaseRule = ({ item }: BaseRuleProps) => {
{!!item?.alerts?.length && (
<>
<span className="vm-alerts-title">Alerts</span>
<table>
<table className="vm-alerts-table">
<colgroup>
<col className="vm-col-sm"/>
<col className="vm-col-sm"/>
@@ -190,7 +190,7 @@ const BaseRule = ({ item }: BaseRuleProps) => {
</td>
<td>
<Badges
align="center"
align="start"
items={Object.fromEntries(Object.entries(alert.labels || {}).map(([name, value]) => [name, {
color: "passive",
value: value,


@@ -44,6 +44,7 @@
word-break: break-word;
table-layout: fixed;
width: 100%;
td, th {
line-height: 30px;
padding: 4px $padding-small;
@@ -52,15 +53,33 @@
overflow: hidden;
text-overflow: ellipsis;
}
th {
white-space: nowrap;
}
td.align-center {
text-align: center
}
th {
font-weight: bold;
padding: 0 $padding-small;
}
}
.vm-alerts-table {
tr {
border-bottom: $border-divider;
&:hover {
background: $color-background-hover;
}
}
td {
vertical-align: top;
padding-block: $padding-small;
}
}
}


@@ -46,9 +46,6 @@
.vm-text-field__input {
padding: 11px 28px;
}
.vm-text-field__icon-start {
height: 42px;
}
}
&__clear-icon {


@@ -8,7 +8,7 @@
flex-direction: column;
position: relative;
&:has(>details[open]) {
&:has(>.vm-accordion-header_open) {
background-color: $color-background-item;
}


@@ -61,7 +61,7 @@ const RulesHeader: FC<RulesHeaderProps> = ({
value={states}
list={allStates}
label="State"
placeholder="Please rule state"
placeholder="Please select rule state"
onChange={onChangeStates}
noOptionsText={noStateText}
includeAll


@@ -26,9 +26,6 @@
.vm-text-field__input {
padding: 11px 28px;
}
.vm-text-field__icon-start {
height: 42px;
}
}
&__clear-icon {


@@ -34,7 +34,7 @@
position: relative;
border-radius: $border-radius-small;
&:has(>details[open]) {
&:has(>.vm-accordion-header_open) {
background-color: $color-background-item;
}


@@ -43,7 +43,7 @@ const ExploreMetricItem: FC<ExploreMetricItemGraphProps> = ({
const step = isHeatmap && customStep === defaultStep ? heatmapStep : customStep;
const query = useMemo(() => {
const queries = useMemo(() => {
const params = Object.entries({ job, instance })
.filter(val => val[1])
.map(([key, val]) => `${key}=${JSON.stringify(val)}`);
@@ -55,19 +55,19 @@ const ExploreMetricItem: FC<ExploreMetricItemGraphProps> = ({
const base = `{${params.join(",")}}`;
if (isBucket) {
return `sum(rate(${base})) by (vmrange, le)`;
return [`sum(rate(${base})) by (vmrange, le)`];
}
const queryBase = rateEnabled ? `rollup_rate(${base})` : `rollup(${base})`;
return `
return [`
with (q = ${queryBase}) (
alias(min(label_match(q, "rollup", "min")), "min"),
alias(max(label_match(q, "rollup", "max")), "max"),
alias(avg(label_match(q, "rollup", "avg")), "avg"),
)`;
)`];
}, [name, job, instance, rateEnabled, isBucket]);
const { isLoading, graphData, error, queryErrors, warning, isHistogram } = useFetchQuery({
predefinedQuery: [query],
predefinedQuery: queries,
visible: true,
customStep: step,
showAllSeries
@@ -98,7 +98,7 @@ with (q = ${queryBase}) (
{warning && (
<WarningLimitSeries
warning={warning}
query={[query]}
query={queries}
onChange={setShowAllSeries}
/>
)}
@@ -107,7 +107,7 @@ with (q = ${queryBase}) (
data={graphData}
period={period}
customStep={step}
query={[query]}
query={queries}
yaxis={yaxis}
setYaxisLimits={setYaxisLimits}
setPeriod={setPeriod}


@@ -1,4 +1,5 @@
import { FC, useState, useEffect } from "preact/compat";
import classNames from "classnames";
import { JSX } from "preact";
import { ArrowDownIcon } from "../Icons";
import "./style.scss";
@@ -31,9 +32,12 @@ const Accordion: FC<AccordionProps> = ({
event.preventDefault();
return; // If the text is selected, cancel the execution of toggle.
}
const details = event.currentTarget.parentElement as HTMLDetailsElement;
onChange && onChange(details.open);
setIsOpen(details.open);
setIsOpen((prev) => {
const newState = !prev;
onChange && onChange(newState);
return newState;
});
};
useEffect(() => {
@@ -42,23 +46,32 @@ const Accordion: FC<AccordionProps> = ({
return (
<>
<details
className="vm-accordion-section"
key="content"
open={isOpen}
<header
className={classNames({
"vm-accordion-header": true,
"vm-accordion-header_open": isOpen,
})}
onClick={toggleOpen}
id={id}
>
<summary
className="vm-accordion-header"
onClick={toggleOpen}
{title}
<div
className={classNames({
"vm-accordion-header__arrow": true,
"vm-accordion-header__arrow_open": isOpen,
})}
>
{title}
<div className="vm-accordion-header__arrow">
<ArrowDownIcon />
</div>
</summary>
{children}
</details>
<ArrowDownIcon />
</div>
</header>
{isOpen && (
<section
className="vm-accordion-section"
key="content"
>
{children}
</section>
)}
</>
);
};


@@ -17,6 +17,10 @@
transform: rotate(0);
transition: transform 200ms ease-in-out;
&_open {
transform: rotate(180deg);
}
svg {
width: 14px;
height: auto;
@@ -24,14 +28,6 @@
}
}
.vm-accordion-section[open] > summary {
& > .vm-accordion-header {
&__arrow {
transform: rotate(180deg);
}
}
}
.accordion-section {
overflow: hidden;
}


@@ -137,6 +137,7 @@ const Select: FC<SelectProps> = ({
"vm-select_disabled": disabled
})}
>
{label && <span className="vm-text-field__label">{label}</span>}
<div
className="vm-select-input"
onClick={handleToggleList}
@@ -150,7 +151,7 @@ const Select: FC<SelectProps> = ({
onRemoveItem={handleSelected}
/>
)}
{!hideInput && !selectedValues?.length && (
{!hideInput && (
<input
value={textFieldValue}
type="text"
@@ -164,7 +165,6 @@ const Select: FC<SelectProps> = ({
/>
)}
</div>
{label && <span className="vm-text-field__label">{label}</span>}
{clearable && value && (
<div
className="vm-select-input__icon"


@@ -1,6 +1,8 @@
@use "src/styles/variables" as *;
.vm-select {
position: relative;
display: grid;
&-input {
position: relative;
display: flex;


@@ -89,8 +89,8 @@ const GraphView: FC<GraphViewProps> = ({
const [legendValue, setLegendValue] = useState<ChartTooltipProps | null>(null);
const getSeriesItem = useMemo(() => {
return getSeriesItemContext(data, hideSeries, alias, showAllPoints, isAnomalyView);
}, [data, hideSeries, alias, showAllPoints, isAnomalyView]);
return getSeriesItemContext(data, hideSeries, alias, showAllPoints, isAnomalyView, isRawQuery);
}, [data, hideSeries, alias, showAllPoints, isAnomalyView, isRawQuery]);
const setLimitsYaxis = (minVal: number, maxVal: number) => {
let min = Number.isFinite(minVal) ? minVal : 0;
@@ -144,8 +144,8 @@ const GraphView: FC<GraphViewProps> = ({
useEffect(() => {
const dLen = data.length;
const tsAnchor = data?.[0]?.values?.[0]?.[0]
const tsSet = new Set<number>([])
const tsAnchor = data?.[0]?.values?.[0]?.[0];
const tsArray: number[] = [];
const tempLegend = new Array<LegendItemType>(dLen);
const tempSeries = new Array<uPlotSeries>(dLen + 1);
tempSeries[0] = {};
@@ -162,7 +162,7 @@ const GraphView: FC<GraphViewProps> = ({
const vals = d.values;
for (let j = 0, vLen = vals.length; j < vLen; j++) {
const v = vals[j];
if (isRawQuery) tsSet.add(v[0])
if (isRawQuery) tsArray.push(v[0]);
const num = promValueToNumber(v[1]);
if (Number.isFinite(num)) {
if (num < minVal) minVal = num;
@@ -171,12 +171,12 @@ const GraphView: FC<GraphViewProps> = ({
}
}
const dpr = window.devicePixelRatio || 1
const dpr = window.devicePixelRatio || 1;
const widthPx = containerSize.width || window.innerWidth || 4096;
const pixels = Math.max(1, Math.floor(widthPx * Math.max(1, dpr)));
const timeSeries = isRawQuery
? Array.from(tsSet).sort((a,b) => a - b)
? tsArray.sort((a, b) => a - b)
: getTimeSeries(currentStep, period, pixels, tsAnchor);
const timeDataSeries: (number | null)[][] = data.map(d => {
@@ -195,6 +195,8 @@ const GraphView: FC<GraphViewProps> = ({
// Treat special values as nulls in order to satisfy uPlot.
// Otherwise it may draw unexpected graphs.
v = Number.isFinite(num) ? num : null;
// Advance to next value
j++;
}
results[k] = v;
}
@@ -281,7 +283,7 @@ const GraphView: FC<GraphViewProps> = ({
height={height}
isAnomalyView={isAnomalyView}
spanGaps={spanGaps}
showAllPoints={showAllPoints}
showAllPoints={isRawQuery ? true : showAllPoints}
/>
)}
{isHistogram && (

View File

@@ -8,6 +8,8 @@ export const DEFAULT_MAX_SERIES = {
code: 1000,
};
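// Presumably the series-count threshold above which the legend is collapsed
// by default (cf. the LEGEND_AUTO_COLLAPSE storage key added below).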
export const LEGEND_COLLAPSE_SERIES_LIMIT = 100;
export const GRAPH_SIZES: GraphSize[] = [
{
id: "small",

View File

@@ -49,6 +49,26 @@ const useLineTooltip = ({ u, metrics, series, unit, isAnomalyView }: LineTooltip
const max = u?.scales?.[1]?.max || 1;
const date = u?.data?.[0]?.[dataIdx] || 0;
let duplicateCount = 1;
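// Count how many samples in the hovered series share the exact same
// (timestamp, value) pair, so the tooltip can report overlapping points.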
if (u && seriesIdx > 0 && dataIdx >= 0) {
const xs = u.data[0] as (number | null)[];
const ys = u.data[seriesIdx] as (number | null)[];
const xVal = xs[dataIdx];
const yVal = ys[dataIdx];
if (xVal != null && yVal != null) {
duplicateCount = 0;
for (let i = 0; i < xs.length; i++) {
if (xs[i] === xVal && ys[i] === yVal) {
duplicateCount++;
}
}
}
}
const point = {
top: u ? u.valToPos((value || 0), seriesItem?.scale || "1") : 0,
left: u ? u.valToPos(date, "x") : 0,
@@ -65,6 +85,7 @@ const useLineTooltip = ({ u, metrics, series, unit, isAnomalyView }: LineTooltip
info: getMetricName(metricItem, seriesItem),
statsFormatted: seriesItem?.statsFormatted,
marker: `${seriesItem?.stroke}`,
duplicateCount,
};
}, [u, tooltipIdx, metrics, series, unit, isAnomalyView]);

View File

@@ -1,5 +1,4 @@
import { FC, useEffect, useState } from "preact/compat";
import { useLocation } from "react-router";
import { FC, useState } from "preact/compat";
import { useNotifiersSetQueryParams as useSetQueryParams } from "./hooks/useSetQueryParams";
import Spinner from "../../components/Main/Spinner/Spinner";
import Alert from "../../components/Main/Alert/Alert";
@@ -33,37 +32,6 @@ const ExploreNotifiers: FC = () => {
search: searchInput,
});
const location = useLocation();
const pageLoaded = !isLoading && !error && !!notifiers?.length;
const savedScrollTop = localStorage.getItem("scrollTop");
useEffect(() => {
if (!pageLoaded) return;
if (location.hash) {
const target = document.querySelector(location.hash);
if (target) {
let parent = target.closest("details");
while (parent) {
parent.open = true;
if (!parent?.parentElement) return;
parent = parent.parentElement.closest("details");
}
target.scrollIntoView();
}
} else {
if (savedScrollTop) {
window.scrollTo(0, parseInt(savedScrollTop));
}
const handleBeforeUnload = () => {
localStorage.setItem("scrollTop", (window.scrollY || 0).toString());
};
window.addEventListener("beforeunload", handleBeforeUnload);
return () => {
window.removeEventListener("beforeunload", handleBeforeUnload);
};
}
}, [location, savedScrollTop, pageLoaded]);
const handleChangeSearch = (input: string) => {
if (!input) {
setSearchInput("");

View File

@@ -1,5 +1,5 @@
import { FC, useEffect, useMemo, useState, useCallback } from "preact/compat";
import { useNavigate, useLocation, useSearchParams } from "react-router";
import { useSearchParams } from "react-router";
import { useRulesSetQueryParams as useSetQueryParams } from "./hooks/useSetQueryParams";
import Spinner from "../../components/Main/Spinner/Spinner";
import Alert from "../../components/Main/Alert/Alert";
@@ -33,16 +33,9 @@ const ExploreRules: FC = () => {
const [modalOpen, setModalOpen] = useState(true);
const [searchParams, setSearchParams] = useSearchParams();
const navigate = useNavigate();
const location = useLocation();
useEffect(() => {
if (!location.hash && groupId) {
setModalOpen(true);
} else {
setModalOpen(false);
}
}, [location.hash, groupId]);
setModalOpen(!!groupId);
}, [groupId]);
useSetQueryParams({
types: types.join("&"),
@@ -62,29 +55,29 @@ const ExploreRules: FC = () => {
}, [searchInput]);
const getModal = () => {
if (ruleId !== "") {
if (ruleId) {
return (
<ExploreRule
groupId={groupId}
id={ruleId}
mode={ruleId !== "" ? "rule" : "alert"}
onClose={handleClose(`rule-${ruleId}`)}
mode={ruleId ? "rule" : "alert"}
onClose={handleClose}
/>
);
} else if (alertId !== "") {
} else if (alertId) {
return (
<ExploreAlert
groupId={groupId}
id={alertId}
mode={ruleId !== "" ? "rule" : "alert"}
onClose={handleClose(`alert-${alertId}`)}
mode={ruleId ? "rule" : "alert"}
onClose={handleClose}
/>
);
} else if (groupId !== "") {
} else if (groupId) {
return (
<ExploreGroup
id={groupId}
onClose={handleClose(`group-${groupId}`)}
onClose={handleClose}
/>
);
}
@@ -92,18 +85,13 @@ const ExploreRules: FC = () => {
const noRuleFound = "No rules found!";
const handleClose = (id: string) => {
return () => {
const newParams = new URLSearchParams(searchParams);
newParams.delete("group_id");
newParams.delete("rule_id");
newParams.delete("alert_id");
setSearchParams(newParams);
setModalOpen(false);
navigate({
hash: `#${id}`,
});
};
const handleClose = () => {
const newParams = new URLSearchParams(searchParams);
newParams.delete("group_id");
newParams.delete("rule_id");
newParams.delete("alert_id");
setSearchParams(newParams);
setModalOpen(false);
};
const {
@@ -112,36 +100,6 @@ const ExploreRules: FC = () => {
error,
} = useFetchGroups({ blockFetch: modalOpen });
const pageLoaded = !isLoading && !error && !!groups?.length;
const savedScrollTop = localStorage.getItem("scrollTop");
useEffect(() => {
if (!pageLoaded) return;
if (location.hash) {
const target = document.querySelector(location.hash);
if (target) {
let parent = target.closest("details");
while (parent) {
parent.open = true;
if (!parent?.parentElement) return;
parent = parent.parentElement.closest("details");
}
target.scrollIntoView();
}
} else {
if (savedScrollTop) {
window.scrollTo(0, parseInt(savedScrollTop));
}
const updateScrollPosition = () => {
localStorage.setItem("scrollTop", (window.scrollY || 0).toString());
};
window.addEventListener("scroll", updateScrollPosition);
return () => {
window.removeEventListener("scroll", updateScrollPosition);
};
}
}, [location, savedScrollTop, pageLoaded]);
const { filteredGroups, allTypes, allStates } = useMemo(
() => filterGroups(groups || [], types, states, searchInput),
[groups, types, states, searchInput]

View File

@@ -1,11 +1,8 @@
@use "src/styles/variables" as *;
.vm-explore-alert-group {
content-visibility: auto;
width: 100%;
&:has(.vm-accordion-header_open) {
border: $border-divider;
border-radius: $border-radius-small;
}
}
.vm-explore-alerts.vm-modal {

View File

@@ -71,7 +71,15 @@ export const routerOptions: { [key: string]: RouterOptions } = {
[router.home]: getDefaultOptions(APP_TYPE),
[router.rawQuery]: {
title: "Raw query",
...routerOptionsDefault,
header: {
tenant: true,
stepControl: false,
timeSelector: true,
executionControls: {
tooltip: "Refresh dashboard",
useAutorefresh: true,
}
},
},
[router.metrics]: {
title: "Explore Prometheus metrics",

View File

@@ -20,7 +20,7 @@ describe("test server urls", () => {
it("https://play.vm.com/#/rules?q=test", () => {
const result = getDefaultURL("https://play.vm.com/#/rules?q=test");
expect(result).toBe("https://play.vm.com");
expect(result).toBe("https://play.vm.com/prometheus");
});
});
});

View File

@@ -4,7 +4,7 @@ import { APP_TYPE, AppType } from "../constants/appType";
import { getFromStorage } from "./storage";
export const getDefaultURL = (u: string) => {
return u.replace(/(\/(?:prometheus\/)?(?:graph|vmui)\/.*|\/#\/.*)/, "").replace(/(\/select\/[^/]+)$/, "$1/prometheus");
return u.replace(/(\/(?:prometheus\/)?(?:graph|vmui)\/.*|\/#\/.*)/, "/prometheus");
};
export const getDefaultServer = (tenantId?: string): string => {
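Editor's note: to make the behavioral change above concrete, here is how the updated rewrite behaves on a few representative URLs (the host names are made up; the regex is copied from the hunk):

const getDefaultURL = (u: string) =>
  u.replace(/(\/(?:prometheus\/)?(?:graph|vmui)\/.*|\/#\/.*)/, "/prometheus");

// Hash-based vmui routes now map to the /prometheus API prefix
// instead of being stripped down to the bare origin:
getDefaultURL("https://play.vm.com/#/rules?q=test");
// -> "https://play.vm.com/prometheus"

// Paths under /vmui/ or /prometheus/graph/ collapse to the same prefix:
getDefaultURL("https://host/vmui/");             // -> "https://host/prometheus"
getDefaultURL("https://host/prometheus/graph/"); // -> "https://host/prometheus"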

View File

@@ -8,6 +8,7 @@ export type StorageKeys = "AUTOCOMPLETE"
| "NO_CACHE"
| "QUERY_TRACING"
| "SERIES_LIMITS"
| "LEGEND_AUTO_COLLAPSE"
| "TABLE_COMPACT"
| "TIMEZONE"
| "DISABLED_DEFAULT_TIMEZONE"

View File

@@ -0,0 +1,111 @@
import uPlot, { OrientCallback } from "uplot";
const deg360 = 2 * Math.PI;
// Base point size in CSS pixels (scaled by uPlot.pxRatio to device pixels below)
const BASE_POINT_SIZE = 4;
// Square size scale relative to circle size
const SQUARE_SIZE_SCALE = 1.2;
export const drawPoints = (u: uPlot, seriesIdx: number) => {
const size = BASE_POINT_SIZE * uPlot.pxRatio;
const r = size / 2;
const squareSize = size * SQUARE_SIZE_SCALE;
const squareHalf = squareSize / 2;
const orientCallback: OrientCallback = (
series,
dataX,
dataY,
scaleX,
scaleY,
valToPosX,
valToPosY,
xOff,
yOff,
xDim,
yDim,
_moveTo,
_lineTo,
rect,
arc,
) => {
const stroke = series?.stroke as unknown;
if (typeof stroke === "function") {
u.ctx.fillStyle = (stroke as () => string)();
}
const circlesPath = new Path2D();
const squaresPath = new Path2D();
const xMin = Number(scaleX.min);
const xMax = Number(scaleX.max);
const yMin = Number(scaleY.min);
const yMax = Number(scaleY.max);
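// First pass: count exact (x, y) duplicates so they can be drawn
// as squares rather than circles in the second pass.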
const counts = new Map<string, number>();
const len = dataX.length;
for (let i = 0; i < len; i++) {
const xv = dataX[i];
const yv = dataY[i];
if (xv == null || yv == null) continue;
const xVal = Number(xv);
const yVal = Number(yv);
if (!Number.isFinite(xVal) || !Number.isFinite(yVal)) continue;
const key = `${xVal}|${yVal}`;
counts.set(key, (counts.get(key) ?? 0) + 1);
}
const duplicates = new Set<string>();
for (const [key, count] of counts) {
if (count > 1) duplicates.add(key);
}
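// Second pass: draw every in-range point, using outlined squares for
// duplicated samples and filled circles for unique ones.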
for (let i = 0; i < len; i++) {
const xv = dataX[i];
const yv = dataY[i];
if (xv == null || yv == null) continue;
const xVal = Number(xv);
const yVal = Number(yv);
if (
!Number.isFinite(xVal) ||
!Number.isFinite(yVal) ||
xVal < xMin || xVal > xMax ||
yVal < yMin || yVal > yMax
) {
continue;
}
const cx = valToPosX(xVal, scaleX, xDim, xOff);
const cy = valToPosY(yVal, scaleY, yDim, yOff);
const key = `${xVal}|${yVal}`;
const isDuplicate = duplicates.has(key);
if (isDuplicate) {
rect(squaresPath, cx - squareHalf, cy - squareHalf, squareSize, squareSize);
} else {
circlesPath.moveTo(cx + r, cy);
arc(circlesPath, cx, cy, r, 0, deg360);
}
}
u.ctx.fill(circlesPath);
u.ctx.lineWidth = 1.4 * uPlot.pxRatio;
u.ctx.strokeStyle = u.ctx.fillStyle;
u.ctx.stroke(squaresPath);
};
uPlot.orient(u, seriesIdx, orientCallback);
return null;
};
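Editor's note: this renderer is wired in as a custom uPlot paths function for raw-query series (see paths: isRawQuery ? drawPoints : undefined in the series helpers further down). A rough sketch of how such a series could be configured; the label is illustrative, and the points options mirror the raw-query branch of getPointsSeries:

import uPlot from "uplot";
import { drawPoints } from "./scatter"; // relative path assumed

const rawSeries: uPlot.Series = {
  label: "raw_samples", // illustrative name
  scale: "1",
  // uPlot calls paths() once per series; drawPoints paints the circles and
  // squares directly on u.ctx and returns null, so no line path is stroked.
  paths: drawPoints,
  points: { show: true, size: 0, filter: null },
};

Collecting all circles into a single Path2D and filling once keeps the draw cost at a couple of canvas calls per series, regardless of how many points it has.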

View File

@@ -5,6 +5,7 @@ import { ForecastType, HideSeriesArgs, LegendItemType, SeriesItem } from "../../
import { anomalyColors, baseContrastColors, getColorFromString } from "../color";
import { getMathStats } from "../math";
import { formatPrettyNumber } from "./helpers";
import { drawPoints } from "./scatter";
// Helper function to extract freeFormFields values as a comma-separated string
export const extractFields = (metric: MetricBase["metric"]): string => {
@@ -31,7 +32,7 @@ export const isForecast = (metric: MetricBase["metric"]): ForecastMetricInfo =>
};
};
export const getSeriesItemContext = (data: MetricResult[], hideSeries: string[], alias: string[], showPoints?: boolean, isAnomalyUI?: boolean) => {
export const getSeriesItemContext = (data: MetricResult[], hideSeries: string[], alias: string[], showPoints?: boolean, isAnomalyUI?: boolean, isRawQuery?: boolean) => {
const colorState: {[key: string]: string} = {};
const maxColors = isAnomalyUI ? 0 : Math.min(data.length, baseContrastColors.length);
@@ -51,13 +52,14 @@ export const getSeriesItemContext = (data: MetricResult[], hideSeries: string[],
dash: getDashSeries(metricInfo),
width: getWidthSeries(metricInfo),
stroke: getStrokeSeries({ metricInfo, label, isAnomalyUI, colorState }),
points: getPointsSeries(metricInfo, showPoints),
points: getPointsSeries(metricInfo, showPoints, isRawQuery),
spanGaps: false,
forecast: metricInfo?.value,
forecastGroup: metricInfo?.group,
freeFormFields: d.metric,
show: !includesHideSeries(label, hideSeries),
scale: "1",
paths: isRawQuery ? drawPoints : undefined,
...getSeriesStatistics(d),
};
};
@@ -118,10 +120,10 @@ export const delSeries = (u: uPlot) => {
}
};
export const addSeries = (u: uPlot, series: uPlotSeries[], spanGaps = false, showPoints = false) => {
export const addSeries = (u: uPlot, series: uPlotSeries[], spanGaps = false, showPoints = false, isRawQuery?: boolean) => {
series.forEach((s,i) => {
if (s.label) s.spanGaps = spanGaps;
if (s.points) s.points.filter = showPoints ? undefined : filterPoints;
if (s.points) s.points.filter = showPoints || isRawQuery ? undefined : filterPoints;
i && u.addSeries(s);
});
};
@@ -157,17 +159,17 @@ const getWidthSeries = (metricInfo: ForecastMetricInfo | null): number => {
return 1.4;
};
const getPointsSeries = (metricInfo: ForecastMetricInfo | null, showPoints: boolean = false): uPlotSeries.Points => {
const getPointsSeries = (metricInfo: ForecastMetricInfo | null, showPoints: boolean = false, isRawQuery?: boolean): uPlotSeries.Points => {
const isAnomalyMetric = metricInfo?.value === ForecastType.anomaly;
if (isAnomalyMetric) {
return { size: 8, width: 4, space: 0 };
}
return {
size: 4,
size: isRawQuery ? 0 : 4,
width: 0,
show: true,
filter: showPoints ? null : filterPoints,
filter: showPoints || isRawQuery ? null : filterPoints,
};
};

View File

@@ -25,6 +25,7 @@ type PrometheusQuerier interface {
PrometheusAPIV1Labels(t *testing.T, query string, opts QueryOpts) *PrometheusAPIV1LabelsResponse
PrometheusAPIV1LabelValues(t *testing.T, labelName, query string, opts QueryOpts) *PrometheusAPIV1LabelValuesResponse
PrometheusAPIV1ExportNative(t *testing.T, query string, opts QueryOpts) []byte
PrometheusAPIV1Metadata(t *testing.T, metric string, limit int, opts QueryOpts) *PrometheusAPIV1Metadata
APIV1AdminTSDBDeleteSeries(t *testing.T, matchQuery string, opts QueryOpts)
@@ -37,7 +38,7 @@ type PrometheusQuerier interface {
// Writer contains methods for writing new data
type Writer interface {
// Prometheus APIs
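// PrometheusAPIV1Write accepts a whole WriteRequest, so tests can attach
// MetricMetadata alongside the time series (see the new signature below).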
PrometheusAPIV1Write(t *testing.T, records []prompb.TimeSeries, opts QueryOpts)
PrometheusAPIV1Write(t *testing.T, wr prompb.WriteRequest, opts QueryOpts)
PrometheusAPIV1ImportPrometheus(t *testing.T, records []string, opts QueryOpts)
PrometheusAPIV1ImportCSV(t *testing.T, records []string, opts QueryOpts)
PrometheusAPIV1ImportNative(t *testing.T, data []byte, opts QueryOpts)
@@ -350,6 +351,33 @@ func NewPrometheusAPIV1LabelValuesResponse(t *testing.T, s string) *PrometheusAP
return res
}
// PrometheusAPIV1Metadata is an in-memory representation of the
// /prometheus/api/v1/metadata response.
type PrometheusAPIV1Metadata struct {
Status string
IsPartial bool
Data map[string][]MetadataEntry
Trace *Trace
}
type MetadataEntry struct {
Type string
Help string
Unit string
}
// NewPrometheusAPIV1Metadata is a test helper function that creates a new
// instance of PrometheusAPIV1Metadata by unmarshalling a json string.
func NewPrometheusAPIV1Metadata(t *testing.T, s string) *PrometheusAPIV1Metadata {
t.Helper()
res := &PrometheusAPIV1Metadata{}
if err := json.Unmarshal([]byte(s), res); err != nil {
t.Fatalf("could not unmarshal metadata response data:\n%s\nerr: %v", s, err)
}
return res
}
// Trace provides the description and the duration of some unit of work that has
// been performed during the request processing.
type Trace struct {

View File

@@ -99,37 +99,39 @@ func testDeduplication(tc *apptest.TestCase, sut apptest.PrometheusWriteQuerier,
ts3 := start.Add(3 * time.Second).UnixMilli()
ts5 := start.Add(5 * time.Second).UnixMilli()
ts10 := start.Add(10 * time.Second).UnixMilli()
data := []prompb.TimeSeries{
{
Labels: []prompb.Label{{Name: "__name__", Value: "metric1"}},
Samples: []prompb.Sample{
{Timestamp: ts1, Value: 3},
{Timestamp: ts3, Value: 10},
{Timestamp: ts5, Value: 5},
data := prompb.WriteRequest{
Timeseries: []prompb.TimeSeries{
{
Labels: []prompb.Label{{Name: "__name__", Value: "metric1"}},
Samples: []prompb.Sample{
{Timestamp: ts1, Value: 3},
{Timestamp: ts3, Value: 10},
{Timestamp: ts5, Value: 5},
},
},
},
{
Labels: []prompb.Label{{Name: "__name__", Value: "metric2"}},
Samples: []prompb.Sample{
{Timestamp: ts1, Value: 3},
{Timestamp: ts3, Value: decimal.StaleNaN},
{Timestamp: ts5, Value: 5},
{
Labels: []prompb.Label{{Name: "__name__", Value: "metric2"}},
Samples: []prompb.Sample{
{Timestamp: ts1, Value: 3},
{Timestamp: ts3, Value: decimal.StaleNaN},
{Timestamp: ts5, Value: 5},
},
},
},
{
Labels: []prompb.Label{{Name: "__name__", Value: "metric3"}},
Samples: []prompb.Sample{
{Timestamp: ts10, Value: 30},
{Timestamp: ts10, Value: 100},
{Timestamp: ts10, Value: 50},
{
Labels: []prompb.Label{{Name: "__name__", Value: "metric3"}},
Samples: []prompb.Sample{
{Timestamp: ts10, Value: 30},
{Timestamp: ts10, Value: 100},
{Timestamp: ts10, Value: 50},
},
},
},
{
Labels: []prompb.Label{{Name: "__name__", Value: "metric4"}},
Samples: []prompb.Sample{
{Timestamp: ts10, Value: 30},
{Timestamp: ts10, Value: decimal.StaleNaN},
{Timestamp: ts10, Value: 50},
{
Labels: []prompb.Label{{Name: "__name__", Value: "metric4"}},
Samples: []prompb.Sample{
{Timestamp: ts10, Value: 30},
{Timestamp: ts10, Value: decimal.StaleNaN},
{Timestamp: ts10, Value: 50},
},
},
},
}

View File

@@ -158,7 +158,11 @@ func TestSingleIngestionProtocols(t *testing.T) {
// prometheus text exposition format
sut.PrometheusAPIV1ImportPrometheus(t, []string{
`importprometheus_series 10 1707123456700`, // 2024-02-05T08:57:36.700Z
`# HELP importprometheus_series some help message`,
`# TYPE importprometheus_series gauge`,
`importprometheus_series 10 1707123456700`, // 2024-02-05T08:57:36.700Z
`# HELP importprometheus_series2 some help message second one`,
`# TYPE importprometheus_series2 gauge`,
`importprometheus_series2{label="foo",label1="value1"} 20 1707123456800`, // 2024-02-05T08:57:36.800Z
}, apptest.QueryOpts{
ExtraLabels: []string{"el1=elv1", "el2=elv2"},
@@ -187,42 +191,58 @@ func TestSingleIngestionProtocols(t *testing.T) {
})
// prometheus remote write format
pbData := []prompb.TimeSeries{
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "prometheusrw_series",
pbData := prompb.WriteRequest{
Timeseries: []prompb.TimeSeries{
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "prometheusrw_series",
},
},
Samples: []prompb.Sample{
{
Value: 10,
Timestamp: 1707123456700, // 2024-02-05T08:57:36.700Z
},
},
},
Samples: []prompb.Sample{
{
Value: 10,
Timestamp: 1707123456700, // 2024-02-05T08:57:36.700Z
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "prometheusrw_series2",
},
{
Name: "label",
Value: "foo2",
},
{
Name: "label1",
Value: "value1",
},
},
Samples: []prompb.Sample{
{
Value: 20,
Timestamp: 1707123456800, // 2024-02-05T08:57:36.800Z
},
},
},
},
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "prometheusrw_series2",
},
{
Name: "label",
Value: "foo2",
},
{
Name: "label1",
Value: "value1",
},
Metadata: []prompb.MetricMetadata{
{
Type: 1,
MetricFamilyName: "prometheusrw_series",
Help: "some help",
Unit: "",
},
Samples: []prompb.Sample{
{
Value: 20,
Timestamp: 1707123456800, // 2024-02-05T08:57:36.800Z
},
{
Type: 1,
MetricFamilyName: "prometheusrw_series2",
Help: "some help2",
Unit: "",
},
},
}
@@ -245,7 +265,6 @@ func TestSingleIngestionProtocols(t *testing.T) {
{Timestamp: 1707123456800, Value: 20}, // 2024-02-05T08:57:36.800Z
},
})
}
func TestClusterIngestionProtocols(t *testing.T) {
@@ -297,7 +316,11 @@ func TestClusterIngestionProtocols(t *testing.T) {
// prometheus text exposition format
vminsert.PrometheusAPIV1ImportPrometheus(t, []string{
`importprometheus_series 10 1707123456700`, // 2024-02-05T08:57:36.700Z
`# HELP importprometheus_series some help message`,
`# TYPE importprometheus_series gauge`,
`importprometheus_series 10 1707123456700`, // 2024-02-05T08:57:36.700Z
`# HELP importprometheus_series2 some help message second one`,
`# TYPE importprometheus_series2 gauge`,
`importprometheus_series2{label="foo",label1="value1"} 20 1707123456800`, // 2024-02-05T08:57:36.800Z
}, apptest.QueryOpts{
ExtraLabels: []string{"el1=elv1", "el2=elv2"},
@@ -434,42 +457,58 @@ func TestClusterIngestionProtocols(t *testing.T) {
})
// prometheus remote write format
pbData := []prompb.TimeSeries{
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "prometheusrw_series",
pbData := prompb.WriteRequest{
Timeseries: []prompb.TimeSeries{
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "prometheusrw_series",
},
},
Samples: []prompb.Sample{
{
Value: 10,
Timestamp: 1707123456700, // 2024-02-05T08:57:36.700Z
},
},
},
Samples: []prompb.Sample{
{
Value: 10,
Timestamp: 1707123456700, // 2024-02-05T08:57:36.700Z
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "prometheusrw_series2",
},
{
Name: "label",
Value: "foo2",
},
{
Name: "label1",
Value: "value1",
},
},
Samples: []prompb.Sample{
{
Value: 20,
Timestamp: 1707123456800, // 2024-02-05T08:57:36.800Z
},
},
},
},
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "prometheusrw_series2",
},
{
Name: "label",
Value: "foo2",
},
{
Name: "label1",
Value: "value1",
},
Metadata: []prompb.MetricMetadata{
{
Type: 1,
MetricFamilyName: "prometheusrw_series",
Help: "some help",
Unit: "",
},
Samples: []prompb.Sample{
{
Value: 20,
Timestamp: 1707123456800, // 2024-02-05T08:57:36.800Z
},
{
Type: 1,
MetricFamilyName: "prometheusrw_series2",
Help: "some help2",
Unit: "",
},
},
}

View File

@@ -0,0 +1,898 @@
package tests
import (
"fmt"
"os"
"path/filepath"
"slices"
"testing"
"time"
at "github.com/VictoriaMetrics/VictoriaMetrics/apptest"
)
var (
legacyVmsinglePath = os.Getenv("VM_LEGACY_VMSINGLE_PATH")
legacyVmstoragePath = os.Getenv("VM_LEGACY_VMSTORAGE_PATH")
)
type testLegacyDeleteSeriesOpts struct {
startLegacySUT func() at.PrometheusWriteQuerier
startNewSUT func() at.PrometheusWriteQuerier
stopLegacySUT func()
stopNewSUT func()
}
func TestLegacySingleDeleteSeries(t *testing.T) {
tc := at.NewTestCase(t)
defer tc.Stop()
storageDataPath := filepath.Join(tc.Dir(), "vmsingle")
opts := testLegacyDeleteSeriesOpts{
startLegacySUT: func() at.PrometheusWriteQuerier {
return tc.MustStartVmsingleAt("vmsingle-legacy", legacyVmsinglePath, []string{
"-storageDataPath=" + storageDataPath,
"-retentionPeriod=100y",
"-search.maxStalenessInterval=1m",
})
},
startNewSUT: func() at.PrometheusWriteQuerier {
return tc.MustStartVmsingle("vmsingle-new", []string{
"-storageDataPath=" + storageDataPath,
"-retentionPeriod=100y",
"-search.maxStalenessInterval=1m",
})
},
stopLegacySUT: func() {
tc.StopApp("vmsingle-legacy")
},
stopNewSUT: func() {
tc.StopApp("vmsingle-new")
},
}
testLegacyDeleteSeries(tc, opts)
}
func TestLegacyClusterDeleteSeries(t *testing.T) {
tc := at.NewTestCase(t)
defer tc.Stop()
storage1DataPath := filepath.Join(tc.Dir(), "vmstorage1")
storage2DataPath := filepath.Join(tc.Dir(), "vmstorage2")
opts := testLegacyDeleteSeriesOpts{
startLegacySUT: func() at.PrometheusWriteQuerier {
return tc.MustStartCluster(&at.ClusterOptions{
Vmstorage1Instance: "vmstorage1-legacy",
Vmstorage1Binary: legacyVmstoragePath,
Vmstorage1Flags: []string{
"-storageDataPath=" + storage1DataPath,
"-retentionPeriod=100y",
},
Vmstorage2Instance: "vmstorage2-legacy",
Vmstorage2Binary: legacyVmstoragePath,
Vmstorage2Flags: []string{
"-storageDataPath=" + storage2DataPath,
"-retentionPeriod=100y",
},
VminsertInstance: "vminsert",
VminsertFlags: []string{},
VmselectInstance: "vmselect",
VmselectFlags: []string{
"-search.maxStalenessInterval=1m",
},
})
},
startNewSUT: func() at.PrometheusWriteQuerier {
return tc.MustStartCluster(&at.ClusterOptions{
Vmstorage1Instance: "vmstorage1-new",
Vmstorage1Flags: []string{
"-storageDataPath=" + storage1DataPath,
"-retentionPeriod=100y",
},
Vmstorage2Instance: "vmstorage2-new",
Vmstorage2Flags: []string{
"-storageDataPath=" + storage2DataPath,
"-retentionPeriod=100y",
},
VminsertInstance: "vminsert",
VminsertFlags: []string{},
VmselectInstance: "vmselect",
VmselectFlags: []string{
"-search.maxStalenessInterval=1m",
},
})
},
stopLegacySUT: func() {
tc.StopApp("vminsert")
tc.StopApp("vmselect")
tc.StopApp("vmstorage1-legacy")
tc.StopApp("vmstorage2-legacy")
},
stopNewSUT: func() {
tc.StopApp("vminsert")
tc.StopApp("vmselect")
tc.StopApp("vmstorage1-new")
tc.StopApp("vmstorage2-new")
},
}
testLegacyDeleteSeries(tc, opts)
}
func testLegacyDeleteSeries(tc *at.TestCase, opts testLegacyDeleteSeriesOpts) {
t := tc.T()
type want struct {
series []map[string]string
queryResults []*at.QueryResult
}
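// genData produces one sample per step in [start, end) together with the
// matching expected /api/v1/series and /api/v1/query_range results.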
genData := func(prefix string, start, end, step int64, value float64) (recs []string, w *want) {
count := (end - start) / step
recs = make([]string, count)
w = &want{
series: make([]map[string]string, count),
queryResults: make([]*at.QueryResult, count),
}
for i := range count {
name := fmt.Sprintf("%s_%03d", prefix, i)
timestamp := start + int64(i)*step
recs[i] = fmt.Sprintf("%s %f %d", name, value, timestamp)
w.series[i] = map[string]string{"__name__": name}
w.queryResults[i] = &at.QueryResult{
Metric: map[string]string{"__name__": name},
Samples: []*at.Sample{{Timestamp: timestamp, Value: value}},
}
}
return recs, w
}
assertSearchResults := func(app at.PrometheusQuerier, query string, start, end int64, step string, want *want) {
t.Helper()
tc.Assert(&at.AssertOptions{
Msg: "unexpected /api/v1/series response",
Got: func() any {
return app.PrometheusAPIV1Series(t, query, at.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
}).Sort()
},
Want: &at.PrometheusAPIV1SeriesResponse{
Status: "success",
Data: want.series,
},
FailNow: true,
})
tc.Assert(&at.AssertOptions{
Msg: "unexpected /api/v1/query_range response",
Got: func() any {
return app.PrometheusAPIV1QueryRange(t, query, at.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
Step: step,
})
},
Want: &at.PrometheusAPIV1QueryResponse{
Status: "success",
Data: &at.QueryData{
ResultType: "matrix",
Result: want.queryResults,
},
},
FailNow: true,
})
}
// - start legacy vmsingle
// - insert data1
// - confirm that metric names and samples are searchable
// - stop legacy vmsingle
const step = 24 * 3600 * 1000 // 24h
start1 := time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC).UnixMilli()
end1 := time.Date(2000, 1, 10, 0, 0, 0, 0, time.UTC).UnixMilli()
data1, want1 := genData("metric", start1, end1, step, 1)
legacySUT := opts.startLegacySUT()
legacySUT.PrometheusAPIV1ImportPrometheus(t, data1, at.QueryOpts{})
legacySUT.ForceFlush(t)
assertSearchResults(legacySUT, `{__name__=~".*"}`, start1, end1, "1d", want1)
opts.stopLegacySUT()
// - start new vmsingle
// - confirm that data1 metric names and samples are searchable
// - delete data1
// - confirm that data1 metric names and samples are no longer searchable
// - insert data2 (same metric names, different dates)
// - confirm that metric names become searchable again
// - confirm that data1 samples are not searchable and data2 samples are searchable
newSUT := opts.startNewSUT()
assertSearchResults(newSUT, `{__name__=~".*"}`, start1, end1, "1d", want1)
newSUT.APIV1AdminTSDBDeleteSeries(t, `{__name__=~".*"}`, at.QueryOpts{})
wantNoResults := &want{
series: []map[string]string{},
queryResults: []*at.QueryResult{},
}
assertSearchResults(newSUT, `{__name__=~".*"}`, start1, end1, "1d", wantNoResults)
start2 := time.Date(2000, 1, 11, 0, 0, 0, 0, time.UTC).UnixMilli()
end2 := time.Date(2000, 1, 20, 0, 0, 0, 0, time.UTC).UnixMilli()
data2, want2 := genData("metric", start2, end2, step, 2)
newSUT.PrometheusAPIV1ImportPrometheus(t, data2, at.QueryOpts{})
newSUT.ForceFlush(t)
assertSearchResults(newSUT, `{__name__=~".*"}`, start1, end2, "1d", want2)
// - restart new vmsingle
// - confirm that metric names are still searchable, data1 samples are not
// searchable, and data2 samples are searchable
opts.stopNewSUT()
newSUT = opts.startNewSUT()
assertSearchResults(newSUT, `{__name__=~".*"}`, start1, end2, "1d", want2)
opts.stopNewSUT()
}
type testLegacyBackupRestoreOpts struct {
startLegacySUT func() at.PrometheusWriteQuerier
startNewSUT func() at.PrometheusWriteQuerier
stopLegacySUT func()
stopNewSUT func()
storageDataPaths []string
snapshotCreateURLs func(at.PrometheusWriteQuerier) []string
}
func TestLegacySingleBackupRestore(t *testing.T) {
tc := at.NewTestCase(t)
defer tc.Stop()
storageDataPath := filepath.Join(tc.Dir(), "vmsingle")
opts := testLegacyBackupRestoreOpts{
startLegacySUT: func() at.PrometheusWriteQuerier {
return tc.MustStartVmsingleAt("vmsingle-legacy", legacyVmsinglePath, []string{
"-storageDataPath=" + storageDataPath,
"-retentionPeriod=100y",
"-search.disableCache=true",
"-search.maxStalenessInterval=1m",
})
},
startNewSUT: func() at.PrometheusWriteQuerier {
return tc.MustStartVmsingle("vmsingle-new", []string{
"-storageDataPath=" + storageDataPath,
"-retentionPeriod=100y",
"-search.disableCache=true",
"-search.maxStalenessInterval=1m",
})
},
stopLegacySUT: func() {
tc.StopApp("vmsingle-legacy")
},
stopNewSUT: func() {
tc.StopApp("vmsingle-new")
},
storageDataPaths: []string{
storageDataPath,
},
snapshotCreateURLs: func(sut at.PrometheusWriteQuerier) []string {
return []string{
sut.(*at.Vmsingle).SnapshotCreateURL(),
}
},
}
testLegacyBackupRestore(tc, opts)
}
func TestLegacyClusterBackupRestore(t *testing.T) {
tc := at.NewTestCase(t)
defer tc.Stop()
storage1DataPath := filepath.Join(tc.Dir(), "vmstorage1")
storage2DataPath := filepath.Join(tc.Dir(), "vmstorage2")
opts := testLegacyBackupRestoreOpts{
startLegacySUT: func() at.PrometheusWriteQuerier {
return tc.MustStartCluster(&at.ClusterOptions{
Vmstorage1Instance: "vmstorage1-legacy",
Vmstorage1Binary: legacyVmstoragePath,
Vmstorage1Flags: []string{
"-storageDataPath=" + storage1DataPath,
"-retentionPeriod=100y",
},
Vmstorage2Instance: "vmstorage2-legacy",
Vmstorage2Binary: legacyVmstoragePath,
Vmstorage2Flags: []string{
"-storageDataPath=" + storage2DataPath,
"-retentionPeriod=100y",
},
VminsertInstance: "vminsert",
VminsertFlags: []string{},
VmselectInstance: "vmselect",
VmselectFlags: []string{
"-search.disableCache=true",
"-search.maxStalenessInterval=1m",
},
})
},
startNewSUT: func() at.PrometheusWriteQuerier {
return tc.MustStartCluster(&at.ClusterOptions{
Vmstorage1Instance: "vmstorage1-new",
Vmstorage1Flags: []string{
"-storageDataPath=" + storage1DataPath,
"-retentionPeriod=100y",
},
Vmstorage2Instance: "vmstorage2-new",
Vmstorage2Flags: []string{
"-storageDataPath=" + storage2DataPath,
"-retentionPeriod=100y",
},
VminsertInstance: "vminsert",
VmselectInstance: "vmselect",
VmselectFlags: []string{
"-search.disableCache=true",
"-search.maxStalenessInterval=1m",
},
})
},
stopLegacySUT: func() {
tc.StopApp("vminsert")
tc.StopApp("vmselect")
tc.StopApp("vmstorage1-legacy")
tc.StopApp("vmstorage2-legacy")
},
stopNewSUT: func() {
tc.StopApp("vminsert")
tc.StopApp("vmselect")
tc.StopApp("vmstorage1-new")
tc.StopApp("vmstorage2-new")
},
storageDataPaths: []string{
storage1DataPath,
storage2DataPath,
},
snapshotCreateURLs: func(sut at.PrometheusWriteQuerier) []string {
c := sut.(*at.Vmcluster)
return []string{
c.Vmstorages[0].SnapshotCreateURL(),
c.Vmstorages[1].SnapshotCreateURL(),
}
},
}
testLegacyBackupRestore(tc, opts)
}
func testLegacyBackupRestore(tc *at.TestCase, opts testLegacyBackupRestoreOpts) {
t := tc.T()
const msecPerMinute = 60 * 1000
// Use the same number of metrics and time range for all the data ingestions
// below.
const numMetrics = 1000
start := time.Date(2025, 3, 1, 10, 0, 0, 0, time.UTC).Add(-numMetrics * time.Minute).UnixMilli()
end := time.Date(2025, 3, 1, 10, 0, 0, 0, time.UTC).UnixMilli()
genData := func(prefix string) (recs []string, wantSeries []map[string]string, wantQueryResults []*at.QueryResult) {
recs = make([]string, numMetrics)
wantSeries = make([]map[string]string, numMetrics)
wantQueryResults = make([]*at.QueryResult, numMetrics)
for i := range numMetrics {
name := fmt.Sprintf("%s_%03d", prefix, i)
value := float64(i)
timestamp := start + int64(i)*msecPerMinute
recs[i] = fmt.Sprintf("%s %f %d", name, value, timestamp)
wantSeries[i] = map[string]string{"__name__": name}
wantQueryResults[i] = &at.QueryResult{
Metric: map[string]string{"__name__": name},
Samples: []*at.Sample{{Timestamp: timestamp, Value: value}},
}
}
return recs, wantSeries, wantQueryResults
}
backupBaseDir, err := filepath.Abs(filepath.Join(tc.Dir(), "backups"))
if err != nil {
t.Fatalf("could not get absolute path for the backup base dir: %v", err)
}
// assertQueries issues various queries to the app and compares the query
// results with the expected ones.
assertQueries := func(app at.PrometheusQuerier, query string, wantSeries []map[string]string, wantQueryResults []*at.QueryResult) {
t.Helper()
tc.Assert(&at.AssertOptions{
Msg: "unexpected /api/v1/series response",
Got: func() any {
return app.PrometheusAPIV1Series(t, query, at.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
}).Sort()
},
Want: &at.PrometheusAPIV1SeriesResponse{
Status: "success",
Data: wantSeries,
},
FailNow: true,
})
tc.Assert(&at.AssertOptions{
Msg: "unexpected /api/v1/query_range response",
Got: func() any {
return app.PrometheusAPIV1QueryRange(t, query, at.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
Step: "60s",
})
},
Want: &at.PrometheusAPIV1QueryResponse{
Status: "success",
Data: &at.QueryData{
ResultType: "matrix",
Result: wantQueryResults,
},
},
Retries: 300,
FailNow: true,
})
}
createBackup := func(sut at.PrometheusWriteQuerier, name string) {
t.Helper()
for i, storageDataPath := range opts.storageDataPaths {
replica := fmt.Sprintf("replica-%d", i)
instance := fmt.Sprintf("vmbackup-%s-%s", name, replica)
snapshotCreateURL := opts.snapshotCreateURLs(sut)[i]
backupPath := "fs://" + filepath.Join(backupBaseDir, name, replica)
tc.MustStartVmbackup(instance, storageDataPath, snapshotCreateURL, backupPath)
}
}
restoreFromBackup := func(name string) {
t.Helper()
for i, storageDataPath := range opts.storageDataPaths {
replica := fmt.Sprintf("replica-%d", i)
instance := fmt.Sprintf("vmrestore-%s-%s", name, replica)
backupPath := "fs://" + filepath.Join(backupBaseDir, name, replica)
tc.MustStartVmrestore(instance, backupPath, storageDataPath)
}
}
legacy1Data, wantLegacy1Series, wantLegacy1QueryResults := genData("legacy1")
legacy2Data, wantLegacy2Series, wantLegacy2QueryResults := genData("legacy2")
new1Data, wantNew1Series, wantNew1QueryResults := genData("new1")
new2Data, wantNew2Series, wantNew2QueryResults := genData("new2")
wantLegacy12Series := slices.Concat(wantLegacy1Series, wantLegacy2Series)
wantLegacy12QueryResults := slices.Concat(wantLegacy1QueryResults, wantLegacy2QueryResults)
wantLegacy1New1Series := slices.Concat(wantLegacy1Series, wantNew1Series)
wantLegacy1New1QueryResults := slices.Concat(wantLegacy1QueryResults, wantNew1QueryResults)
wantLegacy1New12Series := slices.Concat(wantLegacy1New1Series, wantNew2Series)
wantLegacy1New12QueryResults := slices.Concat(wantLegacy1New1QueryResults, wantNew2QueryResults)
var legacySUT, newSUT at.PrometheusWriteQuerier
// Verify backup/restore with legacy SUT.
// Start legacy SUT with empty storage data dir.
legacySUT = opts.startLegacySUT()
// Ingest legacy1 records, ensure the queries return legacy1, and create
// legacy1 backup.
legacySUT.PrometheusAPIV1ImportPrometheus(t, legacy1Data, at.QueryOpts{})
legacySUT.ForceFlush(t)
assertQueries(legacySUT, `{__name__=~".*"}`, wantLegacy1Series, wantLegacy1QueryResults)
createBackup(legacySUT, "legacy1")
// Ingest legacy2 records, ensure the queries return legacy1+legacy2, and
// create legacy1+legacy2 backup.
legacySUT.PrometheusAPIV1ImportPrometheus(t, legacy2Data, at.QueryOpts{})
legacySUT.ForceFlush(t)
assertQueries(legacySUT, `{__name__=~"legacy.*"}`, wantLegacy12Series, wantLegacy12QueryResults)
createBackup(legacySUT, "legacy12")
// Stop legacy SUT and restore legacy1 data.
// Start legacy SUT and ensure the queries return legacy1.
opts.stopLegacySUT()
restoreFromBackup("legacy1")
legacySUT = opts.startLegacySUT()
assertQueries(legacySUT, `{__name__=~".*"}`, wantLegacy1Series, wantLegacy1QueryResults)
opts.stopLegacySUT()
// Verify backup/restore with new SUT.
// Start new SUT (with partition indexDBs) with storage containing legacy1
// data and ensure that queries return legacy1 data.
newSUT = opts.startNewSUT()
assertQueries(newSUT, `{__name__=~".*"}`, wantLegacy1Series, wantLegacy1QueryResults)
// Ingest new1 records, ensure that queries now return legacy1+new1, and
// create the legacy1+new1 backup.
newSUT.PrometheusAPIV1ImportPrometheus(t, new1Data, at.QueryOpts{})
newSUT.ForceFlush(t)
assertQueries(newSUT, `{__name__=~"(legacy|new).*"}`, wantLegacy1New1Series, wantLegacy1New1QueryResults)
createBackup(newSUT, "legacy1-new1")
// Ingest new2 records, ensure that queries now return legacy1+new1+new2,
// and create the legacy1+new1+new2 backup.
newSUT.PrometheusAPIV1ImportPrometheus(t, new2Data, at.QueryOpts{})
newSUT.ForceFlush(t)
assertQueries(newSUT, `{__name__=~"(legacy|new1|new2).*"}`, wantLegacy1New12Series, wantLegacy1New12QueryResults)
createBackup(newSUT, "legacy1-new12")
// Stop new SUT and restore legacy1+new1 data.
// Start new SUT and ensure queries return legacy1+new1 data.
opts.stopNewSUT()
restoreFromBackup("legacy1-new1")
newSUT = opts.startNewSUT()
assertQueries(newSUT, `{__name__=~".*"}`, wantLegacy1New1Series, wantLegacy1New1QueryResults)
opts.stopNewSUT()
// Verify backup/restore with legacy SUT again.
// Start legacy SUT with storage containing legacy1+new1 data.
//
// Ensure that the /series and /query_range queries return legacy1 data only.
// new1 data is not returned because legacy vmsingle does not know about
// partition indexDBs.
legacySUT = opts.startLegacySUT()
assertQueries(legacySUT, `{__name__=~".*"}`, wantLegacy1Series, wantLegacy1QueryResults)
// Stop legacy SUT and restore legacy1+legacy2 data.
// Start legacy SUT and ensure that queries now return legacy1+legacy2 data.
opts.stopLegacySUT()
restoreFromBackup("legacy12")
legacySUT = opts.startLegacySUT()
assertQueries(legacySUT, `{__name__=~".*"}`, wantLegacy12Series, wantLegacy12QueryResults)
opts.stopLegacySUT()
// Verify backup/restore with new vmsingle again.
// Start new vmsingle with storage containing legacy1+legacy2 data and
// ensure that queries return legacy1+legacy2 data.
newSUT = opts.startNewSUT()
assertQueries(newSUT, `{__name__=~".*"}`, wantLegacy12Series, wantLegacy12QueryResults)
// Stop new SUT and restore legacy1+new1+new2 data.
// Start new SUT and ensure that queries return legacy1+new1+new2 data.
opts.stopNewSUT()
restoreFromBackup("legacy1-new12")
newSUT = opts.startNewSUT()
assertQueries(newSUT, `{__name__=~"(legacy|new).*"}`, wantLegacy1New12Series, wantLegacy1New12QueryResults)
opts.stopNewSUT()
}
type testLegacyDowngradeOpts struct {
startLegacySUT func() at.PrometheusWriteQuerier
startNewSUT func() at.PrometheusWriteQuerier
stopLegacySUT func()
stopNewSUT func()
}
func TestLegacySingleDowngrade(t *testing.T) {
tc := at.NewTestCase(t)
defer tc.Stop()
storageDataPath := filepath.Join(tc.Dir(), "vmsingle")
opts := testLegacyDowngradeOpts{
startLegacySUT: func() at.PrometheusWriteQuerier {
return tc.MustStartVmsingleAt("vmsingle-legacy", legacyVmsinglePath, []string{
"-storageDataPath=" + storageDataPath,
"-retentionPeriod=100y",
"-search.disableCache=true",
"-search.maxStalenessInterval=1m",
})
},
startNewSUT: func() at.PrometheusWriteQuerier {
return tc.MustStartVmsingle("vmsingle-new", []string{
"-storageDataPath=" + storageDataPath,
"-retentionPeriod=100y",
"-search.disableCache=true",
"-search.maxStalenessInterval=1m",
})
},
stopLegacySUT: func() {
tc.StopApp("vmsingle-legacy")
},
stopNewSUT: func() {
tc.StopApp("vmsingle-new")
},
}
testLegacyDowngrade(tc, opts)
}
func TestLegacyClusterDowngrade(t *testing.T) {
tc := at.NewTestCase(t)
defer tc.Stop()
storage1DataPath := filepath.Join(tc.Dir(), "vmstorage1")
storage2DataPath := filepath.Join(tc.Dir(), "vmstorage2")
opts := testLegacyDowngradeOpts{
startLegacySUT: func() at.PrometheusWriteQuerier {
return tc.MustStartCluster(&at.ClusterOptions{
Vmstorage1Instance: "vmstorage1-legacy",
Vmstorage1Binary: legacyVmstoragePath,
Vmstorage1Flags: []string{
"-storageDataPath=" + storage1DataPath,
"-retentionPeriod=100y",
},
Vmstorage2Instance: "vmstorage2-legacy",
Vmstorage2Binary: legacyVmstoragePath,
Vmstorage2Flags: []string{
"-storageDataPath=" + storage2DataPath,
"-retentionPeriod=100y",
},
VminsertInstance: "vminsert",
VminsertFlags: []string{},
VmselectInstance: "vmselect",
VmselectFlags: []string{
"-search.disableCache=true",
"-search.maxStalenessInterval=1m",
},
})
},
startNewSUT: func() at.PrometheusWriteQuerier {
return tc.MustStartCluster(&at.ClusterOptions{
Vmstorage1Instance: "vmstorage1-new",
Vmstorage1Flags: []string{
"-storageDataPath=" + storage1DataPath,
"-retentionPeriod=100y",
},
Vmstorage2Instance: "vmstorage2-new",
Vmstorage2Flags: []string{
"-storageDataPath=" + storage2DataPath,
"-retentionPeriod=100y",
},
VminsertInstance: "vminsert",
VminsertFlags: []string{},
VmselectInstance: "vmselect",
VmselectFlags: []string{
"-search.disableCache=true",
"-search.maxStalenessInterval=1m",
},
})
},
stopLegacySUT: func() {
tc.StopApp("vminsert")
tc.StopApp("vmselect")
tc.StopApp("vmstorage1-legacy")
tc.StopApp("vmstorage2-legacy")
},
stopNewSUT: func() {
tc.StopApp("vminsert")
tc.StopApp("vmselect")
tc.StopApp("vmstorage1-new")
tc.StopApp("vmstorage2-new")
},
}
testLegacyDowngrade(tc, opts)
}
func testLegacyDowngrade(tc *at.TestCase, opts testLegacyDowngradeOpts) {
t := tc.T()
type want struct {
series []map[string]string
labels []string
labelValues []string
queryResults []*at.QueryResult
queryRangeResults []*at.QueryResult
}
uniq := func(s []string) []string {
slices.Sort(s)
return slices.Compact(s)
}
mergeWant := func(want1, want2 want) want {
var result want
result.series = slices.Concat(want1.series, want2.series)
result.labels = uniq(slices.Concat(want1.labels, want2.labels))
result.labelValues = slices.Concat(want1.labelValues, want2.labelValues)
result.queryResults = slices.Concat(want1.queryResults, want2.queryResults)
result.queryRangeResults = slices.Concat(want1.queryRangeResults, want2.queryRangeResults)
return result
}
// Use the same number of metrics and time range for all the data batches below.
const numMetrics = 1000
const labelName = "prefix"
start := time.Date(2025, 3, 1, 10, 0, 0, 0, time.UTC).UnixMilli()
end := start
genData := func(prefix string) (recs []string, want want) {
labelValue := prefix
recs = make([]string, numMetrics)
want.series = make([]map[string]string, numMetrics)
want.labels = []string{"__name__", labelName}
want.labelValues = []string{labelValue}
want.queryResults = make([]*at.QueryResult, numMetrics)
want.queryRangeResults = make([]*at.QueryResult, numMetrics)
for i := range numMetrics {
name := fmt.Sprintf("%s_%03d", prefix, i)
value := float64(i)
timestamp := start
recs[i] = fmt.Sprintf("%s{%s=\"%s\"} %f %d", name, labelName, labelValue, value, timestamp)
want.series[i] = map[string]string{"__name__": name, labelName: labelValue}
want.queryResults[i] = &at.QueryResult{
Metric: map[string]string{"__name__": name, labelName: labelValue},
Sample: &at.Sample{Timestamp: timestamp, Value: value},
}
want.queryRangeResults[i] = &at.QueryResult{
Metric: map[string]string{"__name__": name, labelName: labelValue},
Samples: []*at.Sample{{Timestamp: timestamp, Value: value}},
}
}
return recs, want
}
// assertQueries issues various queries to the app and compares the query
// results with the expected ones.
assertQueries := func(app at.PrometheusQuerier, query string, want want, wantSeriesCount uint64) {
t.Helper()
tc.Assert(&at.AssertOptions{
Msg: "unexpected /api/v1/series response",
Got: func() any {
return app.PrometheusAPIV1Series(t, query, at.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
}).Sort()
},
Want: &at.PrometheusAPIV1SeriesResponse{
Status: "success",
Data: want.series,
},
FailNow: true,
})
tc.Assert(&at.AssertOptions{
Msg: "unexpected /api/v1/series/count response",
Got: func() any {
return app.PrometheusAPIV1SeriesCount(t, at.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
})
},
Want: &at.PrometheusAPIV1SeriesCountResponse{
Status: "success",
Data: []uint64{wantSeriesCount},
},
FailNow: true,
})
tc.Assert(&at.AssertOptions{
Msg: "unexpected /api/v1/labels response",
Got: func() any {
return app.PrometheusAPIV1Labels(t, query, at.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
})
},
Want: &at.PrometheusAPIV1LabelsResponse{
Status: "success",
Data: want.labels,
},
FailNow: true,
})
tc.Assert(&at.AssertOptions{
Msg: "unexpected /api/v1/label/../values response",
Got: func() any {
return app.PrometheusAPIV1LabelValues(t, labelName, query, at.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
})
},
Want: &at.PrometheusAPIV1LabelValuesResponse{
Status: "success",
Data: want.labelValues,
},
FailNow: true,
})
tc.Assert(&at.AssertOptions{
Msg: "unexpected /api/v1/query response",
Got: func() any {
return app.PrometheusAPIV1Query(t, query, at.QueryOpts{
Time: fmt.Sprintf("%d", start),
Step: "10m",
})
},
Want: &at.PrometheusAPIV1QueryResponse{
Status: "success",
Data: &at.QueryData{
ResultType: "vector",
Result: want.queryResults,
},
},
Retries: 300,
FailNow: true,
})
tc.Assert(&at.AssertOptions{
Msg: "unexpected /api/v1/query_range response",
Got: func() any {
return app.PrometheusAPIV1QueryRange(t, query, at.QueryOpts{
Start: fmt.Sprintf("%d", start),
End: fmt.Sprintf("%d", end),
Step: "60s",
})
},
Want: &at.PrometheusAPIV1QueryResponse{
Status: "success",
Data: &at.QueryData{
ResultType: "matrix",
Result: want.queryRangeResults,
},
},
Retries: 300,
FailNow: true,
})
}
wantEmpty := want{
series: []map[string]string{},
labels: []string{"__name__"},
labelValues: []string{},
queryResults: []*at.QueryResult{},
queryRangeResults: []*at.QueryResult{},
}
legacy1Data, wantLegacy1 := genData("legacy1")
legacy2Data, wantLegacy2 := genData("legacy2")
new1Data, wantNew1 := genData("new1")
wantLegacy1New1 := mergeWant(wantLegacy1, wantNew1)
wantLegacy2New1 := mergeWant(wantLegacy2, wantNew1)
var legacySUT, newSUT at.PrometheusWriteQuerier
// Start legacy SUT with empty storage data dir.
// Ingest legacy1 records, ensure the queries return legacy1
legacySUT = opts.startLegacySUT()
legacySUT.PrometheusAPIV1ImportPrometheus(t, legacy1Data, at.QueryOpts{})
legacySUT.ForceFlush(t)
assertQueries(legacySUT, `{__name__=~".*"}`, wantLegacy1, numMetrics)
opts.stopLegacySUT()
// Start new SUT (with partition indexDBs) with storage containing legacy1
// data and ensure that queries return new1 and legacy1 data.
newSUT = opts.startNewSUT()
newSUT.PrometheusAPIV1ImportPrometheus(t, new1Data, at.QueryOpts{})
newSUT.ForceFlush(t)
assertQueries(newSUT, `{__name__=~".*"}`, wantLegacy1New1, 2*numMetrics)
opts.stopNewSUT()
// Downgrade to legacy SUT, ensure the queries return only legacy1.
// Delete all series, ensure that queries return no series.
// Ingest legacy2 records, ensure the queries return only legacy2.
legacySUT = opts.startLegacySUT()
assertQueries(legacySUT, `{__name__=~".*"}`, wantLegacy1, numMetrics)
legacySUT.APIV1AdminTSDBDeleteSeries(t, `{__name__=~".*"}`, at.QueryOpts{})
assertQueries(legacySUT, `{__name__=~".*"}`, wantEmpty, numMetrics)
legacySUT.PrometheusAPIV1ImportPrometheus(t, legacy2Data, at.QueryOpts{})
legacySUT.ForceFlush(t)
// series count includes deleted metrics
assertQueries(legacySUT, `{__name__=~".*"}`, wantLegacy2, 2*numMetrics)
opts.stopLegacySUT()
// Upgrade to new SUT, ensure the queries return the recently ingested legacy2
// data and new1 (the legacy SUT could not delete series it cannot see).
// Delete all series, ensure that queries return no series.
newSUT = opts.startNewSUT()
// series count includes deleted metrics
assertQueries(newSUT, `{__name__=~".*"}`, wantLegacy2New1, 3*numMetrics)
newSUT.APIV1AdminTSDBDeleteSeries(t, `{__name__=~".*"}`, at.QueryOpts{})
// series count includes deleted metrics
assertQueries(newSUT, `{__name__=~".*"}`, wantEmpty, 3*numMetrics)
opts.stopNewSUT()
}

View File

@@ -0,0 +1,225 @@
package tests
import (
"fmt"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/VictoriaMetrics/VictoriaMetrics/apptest"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
)
func TestSingleMetricsMetadata(t *testing.T) {
fs.MustRemoveDir(t.Name())
tc := apptest.NewTestCase(t)
defer tc.Stop()
sut := tc.MustStartVmsingle("vmsingle", []string{
"-storageDataPath=" + tc.Dir(),
"-retentionPeriod=100y",
"-enableMetadata",
})
// verify empty stats
resp := sut.PrometheusAPIV1Metadata(t, "", 0, apptest.QueryOpts{})
if len(resp.Data) != 0 {
t.Fatalf("unexpected number of metadata entries: got %d, want %d", len(resp.Data), 0)
}
const ingestTimestamp = 1707123456700
prometheusTextDataSet := []string{
`# HELP metric_name_1 some help message`,
`# TYPE metric_name_1 gauge`,
`metric_name_1{label="foo"} 10`,
`metric_name_1{label="bar"} 10`,
`metric_name_1{label="baz"} 10`,
`# HELP metric_name_2 some help message`,
`# TYPE metric_name_2 counter`,
`metric_name_2{label="baz"} 20`,
`# HELP metric_name_3 some help message`,
`# TYPE metric_name_3 gauge`,
`metric_name_3{label="baz"} 30`,
}
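// For remote write, metadata does not ride inside the exposition text;
// it travels in the separate WriteRequest.Metadata field.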
prometheusRemoteWriteDataSet := prompb.WriteRequest{
Timeseries: []prompb.TimeSeries{
{Labels: []prompb.Label{{Name: "__name__", Value: "metric_name_4"}}, Samples: []prompb.Sample{{Value: 40, Timestamp: ingestTimestamp}}},
{Labels: []prompb.Label{{Name: "__name__", Value: "metric_name_5"}}, Samples: []prompb.Sample{{Value: 40, Timestamp: ingestTimestamp}}},
{Labels: []prompb.Label{{Name: "__name__", Value: "metric_name_6"}}, Samples: []prompb.Sample{{Value: 40, Timestamp: ingestTimestamp}}},
},
Metadata: []prompb.MetricMetadata{
{MetricFamilyName: "metric_name_4", Help: "some help message", Type: prompb.MetricTypeSummary},
{MetricFamilyName: "metric_name_5", Help: "some help message", Type: prompb.MetricTypeSummary},
{MetricFamilyName: "metric_name_6", Help: "some help message", Type: prompb.MetricTypeStateset},
},
}
sut.PrometheusAPIV1ImportPrometheus(t, prometheusTextDataSet, apptest.QueryOpts{})
sut.PrometheusAPIV1Write(t, prometheusRemoteWriteDataSet, apptest.QueryOpts{})
sut.ForceFlush(t)
expected := &apptest.PrometheusAPIV1Metadata{
Status: "success",
Data: map[string][]apptest.MetadataEntry{
"metric_name_1": {{Help: "some help message", Type: "gauge"}},
"metric_name_2": {{Help: "some help message", Type: "counter"}},
"metric_name_3": {{Help: "some help message", Type: "gauge"}},
"metric_name_4": {{Help: "some help message", Type: "summary"}},
"metric_name_5": {{Help: "some help message", Type: "summary"}},
"metric_name_6": {{Help: "some help message", Type: "stateset"}},
},
}
gotStats := sut.PrometheusAPIV1Metadata(t, "", 0, apptest.QueryOpts{})
if diff := cmp.Diff(expected, gotStats); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
// check query metric name filter
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/metadata response",
Got: func() any {
return sut.PrometheusAPIV1Metadata(t, "metric_name_4", 0, apptest.QueryOpts{})
},
Want: &apptest.PrometheusAPIV1Metadata{
Status: "success",
Data: map[string][]apptest.MetadataEntry{
"metric_name_4": {{Help: "some help message", Type: "summary"}},
},
},
})
// check query limit filter
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/metadata response",
Got: func() any {
return sut.PrometheusAPIV1Metadata(t, "", 3, apptest.QueryOpts{})
},
Want: &apptest.PrometheusAPIV1Metadata{
Status: "success",
Data: map[string][]apptest.MetadataEntry{
"metric_name_1": {{Help: "some help message", Type: "gauge"}},
"metric_name_2": {{Help: "some help message", Type: "counter"}},
"metric_name_3": {{Help: "some help message", Type: "gauge"}},
},
},
})
}
func TestClusterMetricsMetadata(t *testing.T) {
fs.MustRemoveDir(t.Name())
tc := apptest.NewTestCase(t)
defer tc.Stop()
vmstorage1 := tc.MustStartVmstorage("vmstorage-1", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage-1",
"-retentionPeriod=100y",
})
vmstorage2 := tc.MustStartVmstorage("vmstorage-2", []string{
"-storageDataPath=" + tc.Dir() + "/vmstorage-2",
"-retentionPeriod=100y",
})
vminsert1 := tc.MustStartVminsert("vminsert-1", []string{
fmt.Sprintf("-storageNode=%s,%s", vmstorage1.VminsertAddr(), vmstorage2.VminsertAddr()),
"-enableMetadata",
})
vminsert2 := tc.MustStartVminsert("vminsert-2", []string{
fmt.Sprintf("-storageNode=%s,%s", vmstorage1.VminsertAddr(), vmstorage2.VminsertAddr()),
"-enableMetadata",
})
vminsertGlobal := tc.MustStartVminsert("vminsert-global", []string{
fmt.Sprintf("-storageNode=%s,%s", vminsert1.ClusternativeListenAddr(), vminsert2.ClusternativeListenAddr()),
"-enableMetadata",
})
vmselect := tc.MustStartVmselect("vmselect", []string{
fmt.Sprintf("-storageNode=%s,%s", vmstorage1.VmselectAddr(), vmstorage2.VmselectAddr()),
})
// verify empty stats
resp := vmselect.PrometheusAPIV1Metadata(t, "", 0, apptest.QueryOpts{Tenant: "0:0"})
if len(resp.Data) != 0 {
t.Fatalf("unexpected number of metadata entries: got %d, want %d", len(resp.Data), 0)
}
const ingestTimestamp = 1707123456700
prometheusTextDataSet := []string{
`# HELP metric_name_1 some help message`,
`# TYPE metric_name_1 gauge`,
`metric_name_1{label="foo"} 10`,
`metric_name_1{label="bar"} 10`,
`metric_name_1{label="baz"} 10`,
`# HELP metric_name_2 some help message`,
`# TYPE metric_name_2 counter`,
`metric_name_2{label="baz"} 20`,
`# HELP metric_name_3 some help message`,
`# TYPE metric_name_3 gauge`,
`metric_name_3{label="baz"} 30`,
}
prometheusRemoteWriteDataSet := prompb.WriteRequest{
Timeseries: []prompb.TimeSeries{
{Labels: []prompb.Label{{Name: "__name__", Value: "metric_name_4"}}, Samples: []prompb.Sample{{Value: 40, Timestamp: ingestTimestamp}}},
{Labels: []prompb.Label{{Name: "__name__", Value: "metric_name_5"}}, Samples: []prompb.Sample{{Value: 40, Timestamp: ingestTimestamp}}},
{Labels: []prompb.Label{{Name: "__name__", Value: "metric_name_6"}}, Samples: []prompb.Sample{{Value: 40, Timestamp: ingestTimestamp}}},
},
Metadata: []prompb.MetricMetadata{
{MetricFamilyName: "metric_name_4", Help: "some help message", Type: prompb.MetricTypeSummary},
{MetricFamilyName: "metric_name_5", Help: "some help message", Type: prompb.MetricTypeSummary},
{MetricFamilyName: "metric_name_6", Help: "some help message", Type: prompb.MetricTypeStateset},
},
}
assertMetadataIngestOn := func(t *testing.T, vminsert *apptest.Vminsert, tenantID string) {
t.Helper()
vminsert.PrometheusAPIV1ImportPrometheus(t, prometheusTextDataSet, apptest.QueryOpts{Tenant: tenantID})
vminsert.PrometheusAPIV1Write(t, prometheusRemoteWriteDataSet, apptest.QueryOpts{Tenant: tenantID})
vmstorage1.ForceFlush(t)
vmstorage2.ForceFlush(t)
expected := &apptest.PrometheusAPIV1Metadata{
Status: "success",
Data: map[string][]apptest.MetadataEntry{
"metric_name_1": {{Help: "some help message", Type: "gauge"}},
"metric_name_2": {{Help: "some help message", Type: "counter"}},
"metric_name_3": {{Help: "some help message", Type: "gauge"}},
"metric_name_4": {{Help: "some help message", Type: "summary"}},
"metric_name_5": {{Help: "some help message", Type: "summary"}},
"metric_name_6": {{Help: "some help message", Type: "stateset"}},
},
}
gotStats := vmselect.PrometheusAPIV1Metadata(t, "", 0, apptest.QueryOpts{Tenant: tenantID})
if diff := cmp.Diff(expected, gotStats); diff != "" {
t.Errorf("unexpected response (-want, +got):\n%s", diff)
}
}
assertMetadataIngestOn(t, vminsert1, "2:2")
assertMetadataIngestOn(t, vminsert2, "3:3")
assertMetadataIngestOn(t, vminsertGlobal, "5:5")
// check query metric name filter
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/metadata response",
Got: func() any {
return vmselect.PrometheusAPIV1Metadata(t, "metric_name_4", 0, apptest.QueryOpts{Tenant: "multitenant"})
},
Want: &apptest.PrometheusAPIV1Metadata{
Status: "success",
Data: map[string][]apptest.MetadataEntry{
"metric_name_4": {{Help: "some help message", Type: "summary"}},
},
},
})
// check query limit filter
tc.Assert(&apptest.AssertOptions{
Msg: "unexpected /api/v1/metadata response",
Got: func() any {
return vmselect.PrometheusAPIV1Metadata(t, "", 3, apptest.QueryOpts{Tenant: "5:5"})
},
Want: &apptest.PrometheusAPIV1Metadata{
Status: "success",
Data: map[string][]apptest.MetadataEntry{
"metric_name_1": {{Help: "some help message", Type: "gauge"}},
"metric_name_2": {{Help: "some help message", Type: "counter"}},
"metric_name_3": {{Help: "some help message", Type: "gauge"}},
},
},
})
}

View File

@@ -47,14 +47,16 @@ func TestClusterInstantQuery(t *testing.T) {
}
func testInstantQueryWithUTFNames(t *testing.T, sut apptest.PrometheusWriteQuerier) {
data := []prompb.TimeSeries{
{
Labels: []prompb.Label{
{Name: "__name__", Value: "3fooµ¥"},
{Name: "3👋tfにちは", Value: "漢©®€£"},
},
Samples: []prompb.Sample{
{Value: 1, Timestamp: millis("2024-01-01T00:01:00Z")},
data := prompb.WriteRequest{
Timeseries: []prompb.TimeSeries{
{
Labels: []prompb.Label{
{Name: "__name__", Value: "3fooµ¥"},
{Name: "3👋tfにちは", Value: "漢©®€£"},
},
Samples: []prompb.Sample{
{Value: 1, Timestamp: millis("2024-01-01T00:01:00Z")},
},
},
},
}
@@ -89,23 +91,25 @@ func testInstantQueryWithUTFNames(t *testing.T, sut apptest.PrometheusWriteQueri
fn(`{"3👋tfにちは"="漢©®€£"}`)
}
var staleNaNsData = func() []prompb.TimeSeries {
return []prompb.TimeSeries{
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "metric",
var staleNaNsData = func() prompb.WriteRequest {
return prompb.WriteRequest{
Timeseries: []prompb.TimeSeries{
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "metric",
},
},
},
Samples: []prompb.Sample{
{
Value: 1,
Timestamp: millis("2024-01-01T00:01:00Z"),
},
{
Value: decimal.StaleNaN,
Timestamp: millis("2024-01-01T00:02:00Z"),
Samples: []prompb.Sample{
{
Value: 1,
Timestamp: millis("2024-01-01T00:01:00Z"),
},
{
Value: decimal.StaleNaN,
Timestamp: millis("2024-01-01T00:02:00Z"),
},
},
},
},
@@ -185,21 +189,23 @@ func testInstantQueryDoesNotReturnStaleNaNs(t *testing.T, sut apptest.Prometheus
// However, conversion of math.NaN to int64 could behave differently depending on platform and Go version.
// Hence, this test could succeed for some platforms even if fix is rolled back.
func testQueryRangeWithAtModifier(t *testing.T, sut apptest.PrometheusWriteQuerier) {
data := []prompb.TimeSeries{
{
Labels: []prompb.Label{
{Name: "__name__", Value: "up"},
data := prompb.WriteRequest{
Timeseries: []prompb.TimeSeries{
{
Labels: []prompb.Label{
{Name: "__name__", Value: "up"},
},
Samples: []prompb.Sample{
{Value: 1, Timestamp: millis("2025-01-01T00:01:00Z")},
},
},
Samples: []prompb.Sample{
{Value: 1, Timestamp: millis("2025-01-01T00:01:00Z")},
},
},
{
Labels: []prompb.Label{
{Name: "__name__", Value: "metricNaN"},
},
Samples: []prompb.Sample{
{Value: decimal.StaleNaN, Timestamp: millis("2025-01-01T00:01:00Z")},
{
Labels: []prompb.Label{
{Name: "__name__", Value: "metricNaN"},
},
Samples: []prompb.Sample{
{Value: decimal.StaleNaN, Timestamp: millis("2025-01-01T00:01:00Z")},
},
},
},
}

View File

@@ -139,41 +139,43 @@ func TestSingleIngestionWithRelabeling(t *testing.T) {
},
})
pbData := []prompb.TimeSeries{
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "prometheusrw_series",
pbData := prompb.WriteRequest{
Timeseries: []prompb.TimeSeries{
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "prometheusrw_series",
},
{
Name: "label",
Value: "foo2",
},
},
{
Name: "label",
Value: "foo2",
},
},
Samples: []prompb.Sample{
{
Value: 10,
Timestamp: 1707123456700, // 2024-02-05T08:57:36.700Z
Samples: []prompb.Sample{
{
Value: 10,
Timestamp: 1707123456700, // 2024-02-05T08:57:36.700Z
},
},
},
},
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "must_drop_series",
{
Labels: []prompb.Label{
{
Name: "__name__",
Value: "must_drop_series",
},
{
Name: "label",
Value: "foo2",
},
},
{
Name: "label",
Value: "foo2",
},
},
Samples: []prompb.Sample{
{
Value: 20,
Timestamp: 1707123456800, // 2024-02-05T08:57:36.800Z
Samples: []prompb.Sample{
{
Value: 20,
Timestamp: 1707123456800, // 2024-02-05T08:57:36.800Z
},
},
},
},

View File

@@ -973,7 +973,7 @@ func testGroupSkipSlowReplicas(tc *apptest.TestCase, opts *testGroupReplicationO
// The data is replicated across N groups of M nodes. Replication factor is
// globalRF. There is no replication across the nodes within each group or
//it is unknown it there is one.
// it is unknown if there is one.
//
// Max number of nodes to skip is M*(globalRF-1). This corresponds to the
// case when N-globalRF+1 groups have received the response from all of

Some files were not shown because too many files have changed in this diff.