Compare commits


1091 Commits

Author SHA1 Message Date
Artem Fetishev
1f1c619abb port-rebase fixes
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-25 12:30:35 +01:00
Artem Fetishev
217d116c2c bump roaring bitmap version to 2.14.4
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-25 12:28:22 +01:00
Artem Fetishev
449d4ff1a1 byte size benchmark
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-25 12:25:04 +01:00
Artem Fetishev
6128134e84 lib/uint64set: Add roaring64 bitmap to vendors and use it in benchmarks
The uint64set has been temporarily replaced in benchmarks with roaring64.Bitmap
in order to compare the performance with uint64set.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-25 12:24:57 +01:00
hagen1778
d467faf739 docs: add change lines after 673b2ca7db
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-25 11:27:53 +01:00
sias32
673b2ca7db dashboards/deployment: add links for vmalert (#10509)
### Describe Your Changes

1. Dashboard: Adding a link to an alert for quick access to it
(alert-statisticl)
2. Rules: Replace localhost with $externalURL to take the address from
the --external.url flag

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: sias32 <sias.32@yandex.ru>
2026-02-25 11:26:44 +01:00
hagen1778
40ccf0c333 app/vmalert: fix typo Minium => Minimum
Follow-up after a6200cc83d

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-25 09:28:04 +01:00
hklhai
fe341a4204 vmagent: Improve Influx parsing error message when raw newline (\n) appears inside quoted field (#10524)
# Investigation & Root Cause --- InfluxDB Line Protocol Parsing with Raw
Newline (`\n`)

This document describes the investigation process and root cause
analysis for Influx Line Protocol parsing errors in VictoriaMetrics when
a **raw newline (`\n`) byte appears inside a quoted field value**.

------------------------------------------------------------------------

## Background

According to the Influx Line Protocol specification:

-   Each point must be represented as a single line.
-   The newline character (`\n`) separates points.
-   Literal newline bytes are not allowed inside quoted field values.

Therefore, any raw newline byte (`0x0A`) inside a quoted string makes
the line invalid.

------------------------------------------------------------------------

## Related Issue

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10067

------------------------------------------------------------------------

## Expected Behavior

VictoriaMetrics should reject Influx Line Protocol lines that contain a
raw newline inside a quoted field value, since this violates the
protocol specification.

The parsing failure itself is correct.

------------------------------------------------------------------------

## Actual Behavior

VictoriaMetrics rejects the line with the following error:

cannot parse field value for "...": missing closing quote for quoted
field value

While technically correct, the error message does not clearly indicate
that the root cause is a raw newline inside the quoted field value.

------------------------------------------------------------------------

## Minimal Reproducer

The issue can be reproduced without Telegraf or Jolokia:

``` bash
printf 'test value="hello
world"\n' | curl -X POST http://localhost:8428/write --data-binary @-
```

This produces:

cannot parse field value for "value": missing closing quote for quoted
field value

The failure occurs because the value contains an actual newline byte
(0x0A), not the escaped sequence `\n`.

------------------------------------------------------------------------

## Environment Setup

The issue was reproduced using the following stack:

-   VictoriaMetrics v1.127.0
-   InfluxDB 1.8
-   Spring Boot + Jolokia
-   Telegraf 1.36.2

Telegraf collects JVM `SystemProperties`, including:

``` json
"line.separator": "\n"
```

After JSON unmarshalling, this becomes a real newline byte in memory.
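The decoding step can be seen in a minimal Go sketch (the key name comes from the Telegraf example above; `decodeSeparator` is a hypothetical helper, not part of the actual code):

``` go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeSeparator unmarshals a JSON object and returns the value of the
// "line.separator" key. The two-character JSON escape sequence \n becomes
// a single raw newline byte (0x0A) in the decoded Go string.
func decodeSeparator(data []byte) string {
	var props map[string]string
	if err := json.Unmarshal(data, &props); err != nil {
		panic(err)
	}
	return props["line.separator"]
}

func main() {
	v := decodeSeparator([]byte(`{"line.separator": "\n"}`))
	fmt.Printf("len=%d byte=%#x\n", len(v), v[0]) // len=1 byte=0xa
}
```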

Detailed reproduction steps can be found here:

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10067#issuecomment-3896175100

------------------------------------------------------------------------

## Observed Serialized Line

Using breakpoint debugging in:

    lib/bytesutil/bytebuffer.go:58

The `ReadFrom` function reads and assembles an Influx line containing:

    SystemProperties.line.separator="
    ",

The quoted field contains an actual newline byte before the closing
quote.

This breaks the single-line assumption of Influx Line Protocol.

VictoriaMetrics splits on `\n`, resulting in:

-   A truncated first line
-   A missing closing quote
-   Parsing failure

------------------------------------------------------------------------

## Important Clarification

This issue is **not** caused by the escaped sequence `"\\n"`.

The failure occurs only when the serialized Influx line contains an
actual newline byte (`0x0A`) inside the quoted value.

Escaped `\n` (two characters: `\` and `n`) is valid.

------------------------------------------------------------------------

## Root Cause

-   Telegraf serializes a field containing a real newline byte.
-   Influx Line Protocol forbids literal newline characters inside
    quoted fields.
-   VictoriaMetrics correctly treats `\n` as a line separator.
-   The parser then encounters an incomplete quoted field and reports
    "missing closing quote".

The parsing behavior is correct per specification.

------------------------------------------------------------------------

## Proposed Improvement

The parsing logic should remain unchanged.

However, the error message can be improved to better indicate the root
cause.

Suggested error message:

invalid Influx line protocol: missing closing quote for quoted field
value;
this may be caused by a raw newline (`\n`) inside the quoted field value

This makes the failure immediately actionable and easier to diagnose.

------------------------------------------------------------------------

## Summary

-   The failure is caused by a raw newline byte inside a quoted field
    value.
-   This violates the Influx Line Protocol specification.
-   VictoriaMetrics correctly rejects the line.
-   The error message should explicitly mention the possibility of a raw
    newline (`\n`) inside the quoted field.

Signed-off-by: hklhai <hkhai@outlook.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2026-02-24 20:42:43 +02:00
Max Kotliar
83ebf00659 app/vmstorage: increase min free disk space from 10M to 100M (#10529)
### Describe Your Changes

The free disk space check is not continuous but occurs periodically. In
high-load environments with large ingestion rates, the system can exceed
the remaining 10MB between checks. This can lead to a situation where
disk space is exhausted before the next check occurs, causing panic.

Increase the default value 10x to cover the case.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9561
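The window between checks can be sketched with hypothetical numbers (the actual check interval and ingestion rate are deployment-specific and not stated in this commit):

``` go
package main

import "fmt"

func main() {
	// Hypothetical figures for illustration only.
	checkIntervalSec := 2.0 // seconds between free-space checks
	writeRateMBs := 10.0    // MB/s ingestion rate
	worstCaseMB := checkIntervalSec * writeRateMBs
	fmt.Printf("up to %.0f MB may be written between checks\n", worstCaseMB)
	// With these figures a 10 MB margin can be exhausted between two
	// consecutive checks, while a 100 MB margin still holds.
}
```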

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-24 18:06:55 +02:00
Roman Khavronenko
5e602726f5 app/vmselect: properly apply extra filters for tenant tokens for /api/v1/label/../values (#10503)
Previously, extra filters were ignored for
`/api/v1/label/vm_account_id/values` or
`/api/v1/label/vm_project_id/values` calls. As a result, even if a user's
visibility was limited by applying the
`?extra_filters[]={vm_account_id="1"}` param, they could get the list of
all available tenants in the system.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>

(cherry picked from commit d2a033453e)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-24 15:42:13 +01:00
hagen1778
a6200cc83d app/vmalert: rename MiniMum => Minimum
Follow-up after a5811d3c3b

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-24 15:37:02 +01:00
Fedor Kanin
a5811d3c3b docs/vmalert: fix a typo by replacing maxiMum with maximum (#10516)
### Describe Your Changes

Fix a typo by replacing `maxiMum` with `maximum` in Markdown docs and
CLI flags help.

Resolve #10515 

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-24 15:34:41 +01:00
JAYICE
5962b47c31 document: enrich the description of buckets_limit (#10465)
### Describe Your Changes

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10417

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-24 15:33:16 +01:00
Roman Khavronenko
9a4edc738a docs: re-visit Troubleshooting docs (#10512)
* remove ToC in the beginning, as it duplicates right-bar functionality
and is easier to make a mistake with. For example, it didn't have the
ZFS section in it
* simplify wording where it was possible
* reference new tools VM got in recent releases
* re-prioritize tips order based on personal experience

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
2026-02-24 15:30:31 +01:00
Roman Khavronenko
30d01e9cae dashboards: filter out zero value for Major page faults panel (#10517)
Components like vmselect and vminsert rarely touch disk, so most of the
time their values are 0. Filtering out 0 values makes the panel cleaner.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-24 15:30:05 +01:00
Artem Fetishev
6b46f3920c lib/uint64set: move set un/marshal methods from Storage to uint64set (#10521)
A refactoring that moves the uint64set.Set marshaling and unmarshaling from lib/storage/storage.go to lib/uint64set. Also added function docs and tests.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-24 11:15:49 +01:00
Zhu Jiekun
97b11146ee flaky test: disable GC during sync.Pool test (#10523)
Disable GC when testing sync.Pool `Get` and `Put` logic, so the items in the pool won't be recycled too fast.

Follow-up for 785daff65d.
2026-02-24 10:19:04 +01:00
Fred Navruzov
2ef74bd6ea docs/vmanomaly - strip bad chars from filenames (#10525)
### Describe Your Changes

Strip spaces and `=` from filenames as suggested in #10522 

now
``` shell
find ./docs |egrep '[ =]'
```
returns no such files

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-24 10:05:48 +02:00
Max Kotliar
845161e377 .github: Run apptests on separate pool of runners
It should prevent apptest timeouts due to runner saturation. When
apptests are run together with other tests and linters, they do not have
enough CPU to complete in time and often time out.

If one re-runs the apptests shortly after, they are likely to pass
because the same runner has enough resources available (the other jobs
have finished).

Remove GOGC=10, as the runner has enough memory (16Gb) to run apptests.

I did some tests and observed a drop in overall test duration from 4.5m
to 3.30-3m.
2026-02-23 14:16:40 +02:00
Vadim Rutkovsky
f176a6624a dashboards: operator dashboard should extract version from metrics (#10502)
### Describe Your Changes

Use vm_app_version to determine operator version instead of static text

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Vadim Rutkovsky <vadim@vrutkovs.eu>
2026-02-23 13:32:14 +02:00
Roman Khavronenko
4d06e34b66 docs: add dedicated opentelemetry section to docs (#10491)
The new section is supposed to contain otel-related information for all
products, like VT, VM, VL.

It is also supposed to be visible to readers right away, without the
need to dig for info in each product.

It contains basic information and is supposed to act as a router to more
detailed info in each product.

While there, also updated VM-related otel info.


---------

Depends on
https://github.com/VictoriaMetrics/victoriametrics-datasource/pull/458

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-23 10:24:01 +01:00
Aliaksandr Valialkin
6d8ddcb9ed vendor: update github.com/valyala/fastjson from v1.6.9 to v1.6.10
This fixes the issue mentioned at https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042#issuecomment-3936084518
2026-02-21 13:20:45 +01:00
Pablo (Tomas) Fernandez
dd4167709a Docs: Update guide "Getting started with VM Operator" (#10429)
### Describe Your Changes

- Add an introduction with a brief explanation of the operator and its
benefits as an intro
- Make some steps more explicit, instead of just linking to the VM
cluster guide
- Separate config/chart values files from kubectl apply (instead of
using heredoc and in-line yaml)
- Update screenshots and add figcaptions where needed
- Update Kubernetes and tools versions to newer releases
- Remove revision numbers from the Grafana config to install the latest
revision
- Added a section to configure scraping of Kubernetes resources (nodes,
pods, etc.)
- Tested updated instructions on GKE 1.33 and 1.34 (and a local k3s
instance) successfully
- Added and updated expected outputs. Some were missing and others were
outdated
- Updated Grafana dashboards screenshots since they changed from the
last revision
- Minor corrections and typo fixes. Improved flow
- Added a section at the end pointing readers to where they can go next.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-20 22:38:11 +02:00
Pablo (Tomas) Fernandez
71e253e1f0 Docs: update guide "Headlamp Kubernetes UI and VictoriaMetrics" (#10462)
### Describe Your Changes

- Updated introduction
- Added proper steps
- Tested instructions on the Headlamp desktop version and the in-cluster
web UI
- Added images to guide user
- Mentioned that the test connection button does not work (it probes a
`-healthy` endpoint that is not supported by VM). The plugin still
works, it's just the test button that fails
- Added links to the single and cluster installation guides

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-20 22:37:59 +02:00
Pablo (Tomas) Fernandez
9e155ffd9e Docs: Update Guide "How to delete or replace metrics in VictoriaMetrics" (#10500)
### Describe Your Changes

- Rewrote the introduction
- Added list of endpoints for single node, cluster, and cloud
- Added tips for working with VictoriaMetrics running on Kubernetes
- Fleshed out explanations for each step
- Added reference links for all required endpoints
- Tested every command

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-20 22:37:51 +02:00
Max Kotliar
2e9e40dc75 docs/changelog: add regexp example to bugfix description 2026-02-20 16:27:59 +02:00
Max Kotliar
10d4294f9b docs: tiny corrections 2026-02-20 16:21:06 +02:00
Max Kotliar
5e77771668 docs/changelog: chore changelog 2026-02-20 13:23:09 +02:00
Nikolay
dda5545078 lib/storage: properly search tenants
Commit 610b328e5a introduced a bug in the
date range search logic. If the first searched date for a given tenant
did not match, the search could proceed incorrectly.

This commit fixes the SearchTenants API by correctly advancing the date
passed to table.Seek.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10422
2026-02-20 12:03:00 +01:00
Roman Khavronenko
087efbc451 docs: clarify details on dump_request_on_errors
* add an example of the produced log, so users can understand the impact;
* stress once again the sensitive data exposure risk when
dump_request_on_errors is enabled.
2026-02-20 11:54:09 +01:00
Roman Khavronenko
68e64536b1 app/vmauth: clarify the error message for all failed backends
This change adds some context to the error when all backends have
failed. From support cases it seems that without this context users
might not know what to do with the error message. The clarification
advises them to check the previous error messages.
2026-02-20 11:53:16 +01:00
Yury Moladau
6e3ce4d55c app/vmui: fix label escaping for cardinality and autocomplete (#10498)
This PR fixes handling of label names containing special characters
(e.g. `.`, `/`, `-`).

Changes:
- Fixed escaping logic for cardinality requests.
- Fixed autocomplete insertion to escape label names in query selectors.

Related issue: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10485
2026-02-20 11:52:30 +01:00
Vadim Alekseev
8d1b88f985 lib/regexutil: prevent panic error parsing regexp: expression nests too deeply
Previously the regex simplify function attempted to parse the string representation of the simplified regex.
This could produce a runtime panic due to the std lib specification:

```
// Simplify returns a regexp equivalent to re but without counted repetitions
// and with various other simplifications, such as rewriting /(?:a+)+/ to /a+/.
// The resulting regexp will execute correctly but its string representation
// will not produce the same parse tree, because capturing parentheses
// may have been duplicated or removed.
```
 
This commit ignores the simplified regex parsing error and returns the original regex instead.
This may result in missing simplification for some niche regex patterns,
but such cases are extremely rare in production, so the tradeoff is acceptable.
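A minimal sketch of the defensive pattern described here (not the actual lib/regexutil code; `simplifyRegex` is a hypothetical helper): re-parse the simplified string and fall back to the original expression on error:

``` go
package main

import (
	"fmt"
	"regexp/syntax"
)

// simplifyRegex returns a simplified form of expr, or expr unchanged when
// parsing fails or the simplified string does not round-trip.
func simplifyRegex(expr string) string {
	re, err := syntax.Parse(expr, syntax.Perl)
	if err != nil {
		return expr
	}
	simplified := re.Simplify().String()
	// The simplified string representation may not produce the same parse
	// tree; if it cannot be parsed back, keep the original expression
	// instead of panicking.
	if _, err := syntax.Parse(simplified, syntax.Perl); err != nil {
		return expr
	}
	return simplified
}

func main() {
	fmt.Println(simplifyRegex("(?:a+)+")) // a+
}
```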

Fixes victoriaMetrics/victoriaLogs/issues/1112
2026-02-20 11:51:42 +01:00
Max Kotliar
3d3c057d52 docs: make docs-update-flags should rely on git tag (#10490)
### Describe Your Changes

As requested by @valyala, changing the behavior of `make
docs-update-flags` from relying on the git worktree and specific git
remotes to git tags, the same way `make publish-release` works.
### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-19 18:51:43 +02:00
Max Kotliar
94622fef29 lib/prommetadata: enable metrics metadata ingestion and storing by default (#10489)
### Describe Your Changes

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2974

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-19 18:45:44 +02:00
Aliaksandr Valialkin
804d77ffc5 all: run go fix -reflecttypefor 2026-02-19 14:05:06 +01:00
Aliaksandr Valialkin
79b18e9742 vendor: update github.com/valyala/fastjson from v1.6.8 to v1.6.9
This should help reduce memory usage at https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042
2026-02-19 13:28:41 +01:00
Benjamin Nichols-Farquhar
3404a47a6d lib/backup implement cross-type backup copies
While server-side copies, when using the same backup origin and
destination, are always most efficient, there are times when moving
between backup locations is required.

Right now vmbackup throws an error in these cases. 

While it's true that a user could always do a fresh backup from a
snapshot rather than copy an old backup, this requires access to storage
data locations and a running vmstorage instance, something that is not
_generally_ required for otherwise moving backups around in remote
locations using vmbackup.

This is a small change that makes the moving of backups from one
location to another transparent to users, without having to consider if
those locations are the same or different. This both simplifies backup
migrations and unlocks using vmbackup for more complex operations.

Specifically this came up in my use case because we want to orchestrate
the down-scaling of EBS volumes backing our vmstorage cluster, which
requires some complex backup operations, one of which being taking a
backup from s3 to a local filesystem.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10401
2026-02-18 21:45:28 +01:00
Aliaksandr Valialkin
0b8205ef46 lib/httpserver: escape the error string before sending it in the response to the client
See https://github.com/VictoriaMetrics/VictoriaMetrics/security/code-scanning/353
2026-02-18 20:39:52 +01:00
Aliaksandr Valialkin
53514febdc vendor: update github.com/VictoriaMetrics/VictoriaLogs from v0.0.0-20260125191521-bc89d84cd61d to v0.0.0-20260218111324-95b48d57d032 2026-02-18 20:39:19 +01:00
Aliaksandr Valialkin
8531d86da0 lib/timeutil: avoid losing the precision at decimalExp when converting it from int64 to int
This fixes https://github.com/VictoriaMetrics/VictoriaMetrics/security/code-scanning/354
2026-02-18 20:08:47 +01:00
Aliaksandr Valialkin
a47d32e129 vendor: run make vendor-update 2026-02-18 19:46:18 +01:00
Aliaksandr Valialkin
df96f4d3ab all: run go fix -omitzero 2026-02-18 19:37:07 +01:00
Aliaksandr Valialkin
84dc5453ad all: run go fix -minmax 2026-02-18 19:24:27 +01:00
Aliaksandr Valialkin
8093d98c0e all: run go fix -newexpr 2026-02-18 19:05:59 +01:00
Aliaksandr Valialkin
809f9471df all: run go fix -fmtappendf 2026-02-18 18:21:02 +01:00
Aliaksandr Valialkin
f9d6d2e428 all: run go fix -mapsloop 2026-02-18 18:17:20 +01:00
Aliaksandr Valialkin
32eac31416 all: run go fix -slicescontains 2026-02-18 18:17:20 +01:00
Artem Fetishev
4d4c1ff72e lib/storage: shard dateMetricIDCache (#10486)
Use the same sharded implementation as in metricIDCache. The change is
basically a copy-paste. The only difference is that the rotation period
remains `1h` instead of `1m` in order not to break the fix for #10064.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-18 18:16:37 +01:00
Aliaksandr Valialkin
645ce2b6b3 all: run go fix -slicessort 2026-02-18 15:00:56 +01:00
Aliaksandr Valialkin
89600bd229 all: run go fix -any 2026-02-18 14:58:01 +01:00
Aliaksandr Valialkin
9b3a60efee lib/protoparser/protoparserutil: read request body to chunked buffer instead of contiguous byte slice
This should reduce memory reallocations and fragmentation when reading large request bodies from slow clients.
This should also reduce memory usage a bit because of the reduced memory fragmentation.

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042
2026-02-18 14:50:31 +01:00
Aliaksandr Valialkin
a8c5934d1b vendor: update github.com/VictoriaMetrics/fastcache from v1.13.2 to v1.13.3 2026-02-18 14:28:34 +01:00
Aliaksandr Valialkin
43544fdb63 vendor: update github.com/valyala/fastjson from v1.6.7 to v1.6.8 2026-02-18 14:28:33 +01:00
Aliaksandr Valialkin
7a4df5755a go.mod: update github.com/VictoriaMetrics/metrics from v1.41.1 to v1.41.2, and github.com/VictoriaMetrics/metricsql from v0.84.10 to v0.85.0 2026-02-18 14:28:33 +01:00
Aliaksandr Valialkin
83bcbc43d1 app/vmauth: consistently use for i := range N instead of for i := 0; i < N; i++ 2026-02-18 14:28:32 +01:00
Aliaksandr Valialkin
79921cf434 app/vmctl: run go fix -rangeint 2026-02-18 14:28:32 +01:00
Aliaksandr Valialkin
40402fdac3 lib: run go fix -rangeint 2026-02-18 14:28:31 +01:00
Aliaksandr Valialkin
05943abc11 lib/persistentqueue: run go fix -rangeint 2026-02-18 14:28:31 +01:00
Aliaksandr Valialkin
e66e71c87e lib/streamaggr: run go fix -rangeint 2026-02-18 14:28:30 +01:00
Aliaksandr Valialkin
7f682c4c76 lib/promscrape: run go fix -rangeint 2026-02-18 14:28:30 +01:00
Aliaksandr Valialkin
4947cd7f14 lib/encoding: run go fix -rangeint 2026-02-18 14:28:30 +01:00
Aliaksandr Valialkin
5ea7314912 lib/mergeset: run go fix -rangeint 2026-02-18 14:28:29 +01:00
Aliaksandr Valialkin
655f0e9c1d lib/storage: run go fix -rangeint 2026-02-18 14:28:29 +01:00
Aliaksandr Valialkin
2ffd25a120 apptest: run go fix -rangeint 2026-02-18 14:28:28 +01:00
Aliaksandr Valialkin
175fcf6676 app/vmalert-tool: run go fix -rangeint 2026-02-18 14:28:28 +01:00
Aliaksandr Valialkin
c05516afbe app/vmselect: run go fix -rangeint 2026-02-18 14:28:27 +01:00
Aliaksandr Valialkin
6b12684e56 app/vmauth: run go fix -rangeint 2026-02-18 14:28:27 +01:00
Aliaksandr Valialkin
8f7c94f512 app/vmalert: run go fix -rangeint 2026-02-18 14:28:26 +01:00
Aliaksandr Valialkin
4a6259a9b2 app/vmagent: run go fix -rangeint 2026-02-18 14:28:26 +01:00
Max Kotliar
d5b9d3e641 dashboards/vmauth: Add Client request buffering latency panel (#10412)
### Describe Your Changes

In https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10310 ability
to [buffer request
body](https://docs.victoriametrics.com/victoriametrics/vmauth/#request-body-buffering)
was added to `vmauth`. This PR adds a new panel `Request body buffering
latency` to `vmauth` dashboard.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309

<img width="1504" height="680" alt="Screenshot 2026-02-07 at 00 28 46"
src="https://github.com/user-attachments/assets/ba98b06f-de2c-4d4c-96bb-e5c20049cebc"
/>

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Hui Wang <haley@victoriametrics.com>
2026-02-18 15:25:38 +02:00
Max Kotliar
6863de2c0e package/release: Add github-verify-release job (#10476)
### Describe Your Changes

The job ensures that:
- the draft release with the given `$(TAG)` exists
- the release has the expected `$(GITHUB_ASSETS_COUNT)` number of
uploaded assets
- all the assets were uploaded successfully.

It also adds a helper job `github-get-release`, which finds a draft
release by `$(TAG)` and stores it into the file
`/tmp/vm-github-release-$(TAG)`.

The `github-delete-release` job is decoupled from the file produced by
the `github-create-release` job, so it can be run at any time from any
machine.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-18 15:05:50 +02:00
Artem Fetishev
51a3e4e27a lib/storage: metricIDCache cache follow-up for e5c8581bad (#10468) (#10479)
This is a follow-up PR for e5c8581bad (#10468):

- Extract the bucket size into a constant and document it
- Make benchmark constant metricIDCache-specific
- Add the same benchmark for dateMetricIDCache to compare it with metricIDCache.  See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10479 for benchmark results.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-17 16:10:32 +01:00
Max Kotliar
d7046d6e19 go.mod: update metrics module (#10470)
### Describe Your Changes

VictoriaMetrics binaries will now expose some process-level metrics when
run on macOS.

See:
- https://github.com/VictoriaMetrics/metrics/issues/75
- https://github.com/VictoriaMetrics/metrics/pull/107

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-16 19:52:14 +02:00
Max Kotliar
7e6c03e9c6 docs/changelog: correctly place feater into tip section 2026-02-16 19:43:01 +02:00
Max Kotliar
5267f35104 app/vmauth: authenticate by jwt token (#10435)
### Describe Your Changes

Adds JWT authentication support to vmauth with signature verification
and tenant-based access control. For now, public_keys have to be set
explicitly in the config; OIDC discovery will be added in upcoming PRs.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10445

Key Features

- JWT Configuration: Added `jwt_token` field to user config supporting
RSA/ECDSA public keys or skip_verify mode (for testing purposes).
- Token Validation: Verifies JWT signatures, checks expiration, and
extracts vm_access claims
- Compatible with vmgateway: jwt tokens issued for vmgateway should work
with vmauth too.

Examples

```yaml
users:
- jwt_token:
    public_keys:
    - |
      -----BEGIN PUBLIC KEY-----
      MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA...
      -----END PUBLIC KEY-----
  url_prefix: "http://victoria-metrics:8428/"
```

```yaml
users:
- jwt_token:
    skip_verify: true
  url_prefix: "http://victoria-metrics:8428/"
```


Constraints

- JWT tokens cannot be mixed with other auth methods (bearer_token,
username, password)
- Requires at least one public key OR skip_verify=true
- Limited to single JWT user (multiple JWT users will be supported in
the future)

Next steps
- Multiple `jwt_token` support. 
- Claim matching
- Claim based routing
- OIDC/JWKS support

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Pablo (Tomas) Fernandez <46322567+TomFern@users.noreply.github.com>
2026-02-16 19:40:54 +02:00
Max Kotliar
172ff84299 docs: start v1.136 lts line 2026-02-16 19:18:47 +02:00
Max Kotliar
a3f955dd84 docs: bump version to v1.136.0 2026-02-16 17:43:31 +02:00
Max Kotliar
19e7d986fe deployment/docker: bump version to v1.136.0
Signed-off-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-16 17:36:53 +02:00
Max Kotliar
db2ad6f900 docs/changelog: update changelog with LTS release notes 2026-02-16 17:31:02 +02:00
Max Kotliar
db1f3f4ab8 deployment/docker: Fix publish final fips images from rc 2026-02-16 14:18:55 +02:00
Max Kotliar
7386a35942 docs/changelog: cut v1.136.0 2026-02-13 19:58:15 +02:00
Max Kotliar
6be2d89008 app/vmselect: run make vmui-update 2026-02-13 19:44:54 +02:00
Artem Fetishev
e5c8581bad lib/storage: optimize metricIDCache sharding (#10468)
Exploit uint64set data structure peculiarities (adjacent elements are
stored in 64KiB buckets) to optimize metricIDCache memory footprint.

As a result, the cache utilizes 87% less memory and is up to 90%
faster. See
[benchstat.txt](https://github.com/user-attachments/files/25294076/benchstat.txt).

Follow-up for #10388 and #10346.

Thanks to @valyala for the optimization idea.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-13 18:29:48 +02:00
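The optimization above relies on uint64set keeping adjacent metricIDs in 64KiB buckets. A minimal sketch of bucket-aligned shard selection, assuming the bucket is keyed by the high bits of the metricID (shard count and function name are hypothetical; the actual VictoriaMetrics code differs):

```go
package main

import "fmt"

const shardCount = 16 // assumed power-of-two shard count

// shardIdx picks a shard from the bucket key (metricID >> 16), so all
// metricIDs sharing one 64KiB uint64set bucket land in the same shard
// and bucket metadata is never duplicated across shards.
func shardIdx(metricID uint64) uint64 {
	return (metricID >> 16) % shardCount
}

func main() {
	// Two metricIDs from the same bucket map to the same shard.
	fmt.Println(shardIdx(0x12340000), shardIdx(0x1234FFFF)) // prints "4 4"
}
```

Bucket-aligned sharding keeps the per-shard bucket sets disjoint, which is where the memory saving comes from.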
Nikolay
14bc51554b lib/storage: properly report metrics for the last partition
Previously, on the last day of a month, storage could report empty
metrics for the last partition. This could happen if a new empty
partition was created in updateNextDayMetricIDs or if time series with
future timestamps were ingested.

This commit adds a check to ensure the last partition belongs to the
current month. Since this is typically the most actively used partition,
it should be treated as the last one.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10387
2026-02-13 11:21:20 +01:00
Max Kotliar
7db81d062c docs/changelog: chore tip before release 2026-02-13 10:32:42 +02:00
f41gh7
ad62fe88ed go.mod: update metricsql
It contains fix for https://github.com/VictoriaMetrics/metricsql/issues/60

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-02-12 23:49:54 +01:00
Artem Fetishev
40b85eb211 Makefile: rename integration-test to apptest (#10461)
Follow-up for 73015bccb9

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-12 19:01:16 +01:00
Roman Khavronenko
88b2464fe8 docs: simplify wording in the top section (#10451)
The purpose of the change is to make a better first impression on readers
by removing all unnecessary verbosity. As with status pages, try to
increase the density of useful information.

The initial idea was borrowed from @func25

---------------

<img width="961" height="649" alt="image"
src="https://github.com/user-attachments/assets/2a91ded5-17cf-49ad-a589-45b634af991a"
/>

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: Roman Khavronenko <hagen1778@gmail.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-12 19:27:46 +02:00
Max Kotliar
e4221f97a7 docs: mention top query by memory usage
Follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10391
2026-02-12 17:54:27 +02:00
Stephan Burns
d40696a2f2 Add restarts annotation to remaining dashboards (#10439)
### Describe Your Changes

Added annotation to show restarts.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Stephan Burns <34520077+Sleuth56@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-02-12 16:39:38 +02:00
Aliaksandr Valialkin
b2a74ec494 dashboards/vm/vmauth.json: run make dashboards-sync after the commit 9774fe8df1 according to dashboards/README.md
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10437
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10438
2026-02-12 14:24:09 +01:00
Mathias Palmersheim
9774fe8df1 Change user count query so it accounts for multiple replicas of vmauth (#10438)
### Describe Your Changes

Fixes issue where multiple replicas of vmauth cause the user count to be
inflated for vmauth see #10437

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-12 14:22:22 +01:00
Artem Fetishev
efd3b66609 Makefile: make vet and golangci-lint to also check synctests
Follow-up for 3d6f353430

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-12 13:16:55 +01:00
Zhu Jiekun
785daff65d vminsert: proper reset labelsBuf for OpenTelemetry ingestion to avoid high memory usage
Ensure proper expansion and reset of `buf` size for OpenTelemetry
ingestion. This pull request does:
1. Flush data in `wctx` when `buf` is over 4MiB.
2. Do not return to the pool a `wctx` with `buf` larger than 4MiB while the actual
in-use length is less than 1MiB.

Previously, when a small number of requests carried a large volume of
time series or labels, `buf` was over-expanded and recycled to the pool,
resulting in an excessive memory usage issue.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10378
2026-02-12 12:49:05 +01:00
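The two rules above can be sketched as a single recycling predicate. The 4MiB and 1MiB thresholds come from the commit message; the function and constant names are hypothetical:

```go
package main

import "fmt"

const (
	flushThreshold = 4 << 20 // 4MiB: flush wctx once buf grows past this (per the commit message)
	minInUse       = 1 << 20 // 1MiB: smallest in-use length that justifies keeping a big buf
)

// shouldRecycle reports whether a wctx buffer may go back to the pool.
// A buffer that grew past 4MiB while the request actually used less
// than 1MiB is dropped, so a few large requests cannot pin over-expanded
// buffers in the pool indefinitely.
func shouldRecycle(capacity, inUse int) bool {
	return !(capacity > flushThreshold && inUse < minInUse)
}

func main() {
	fmt.Println(shouldRecycle(8<<20, 512<<10)) // false: over-grown and mostly unused
	fmt.Println(shouldRecycle(2<<20, 512<<10)) // true: within limits
}
```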
Roman Khavronenko
e3a57a3d80 docs: fix the broken image for single-node (#10460)
See
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10449#issuecomment-3890326179

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-12 12:47:26 +01:00
Pablo (Tomas) Fernandez
161633158c Docs: Update guide "How to use OpenTelemetry with VictoriaMetrics and VictoriaLogs" (#10396)
This is part of the effort to update and validate the [Guides in the
docs](https://docs.victoriametrics.com/guides/).

Doc page:
https://docs.victoriametrics.com/guides/getting-started-with-opentelemetry/

Functionally, nothing should change. Aside from the fix that prevented
one of the example applications to run, the rest of the commands in the
guide should be equivalent to the original.

Header anchor links do not change with this update. I added a few
headers but the existing headers anchors should remain unchanged to
prevent breaking existing links.

- Tested on a more modern version of GKE to validate it still works OK
(1.34.1-gke.3971001)
- Changed wording of some sections to improve flow and readability
- Added some missing steps/troubleshooting
- Add tips annotations for cardinality explorer and setup references to
make them stand apart from the main content
- Use `kubectl port-forward svc/...` instead of `kubectl port-forward
pod` (service selectors vs pod names) in some test commands to make
instructions simpler
- Updated OpenTelemetry version to fix error that prevented
`app.go-collector.example` sample code from running
- Replaced the "Visit these links" part in the second program (with the
fast/slow endpoints) with curl commands
- Updated the first VMUI test link to show table instead of graph while
testing OpenTelemetry ingestion (the default graph view can be confusing as
the metric value for `k8s_container_ready` doesn't really show any
values)
- Minor typos, grammar check, and consistency (Kubernetes vs kubernetes,
Helm vs helm, Collector vs collector, etc)
2026-02-12 12:47:06 +01:00
Aliaksandr Valialkin
7b708a8947 .github/workflows/test.yml: use Go version in the cache key for golangci-lint
This should fix issues like in the https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/21943547755/job/63375204688 :

    package requires newer Go version go1.26 (application built with go1.25)
2026-02-12 12:20:48 +01:00
Roman Khavronenko
16d5f281fe lib/storage: use child trace during index searches
This change only affects query trace. It correctly uses the branched
query trace in callback function, so in trace it is placed in the right
actions branch.

Bug was introduced in
c705da74f6
2026-02-12 12:19:00 +01:00
JAYICE
6846ca09cb document: add description about time-based kafka commit
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10420
2026-02-12 12:08:53 +01:00
Roman Khavronenko
71997bc754 docs: mention Perses on integrations list (#10442)
While there, attempted to simplify wording in perses doc.
2026-02-12 12:08:27 +01:00
Roman Khavronenko
2ec6fafed0 docs: add diagrams for single and cluster components (#10449)
This PR adds a diagram for single-node and updates the diagram for the cluster
version. Both diagrams come with the excalidraw source attached, so they can be
updated in the future.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10398
2026-02-12 12:08:06 +01:00
Roman Khavronenko
a8ac5dfae5 docs: excalidraw vmagent diagram
Source vmagent diagram to excalidraw, so it can be easily updated in
future.

-----------------

<img width="936" height="671" alt="image"
src="https://github.com/user-attachments/assets/1dfc9cb5-0323-4e0d-881c-3c76ccda578f"
/>

<img width="922" height="706" alt="image"
src="https://github.com/user-attachments/assets/42297ede-5986-451c-83fc-c11dba9560e3"
/>

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2026-02-12 12:07:36 +01:00
Phuong Le
6292d5fefa ci: scope Go artifact cache restore fallback by Go version
Fixes
https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/21921172620/job/63301435721
2026-02-12 12:07:14 +01:00
Zhu Jiekun
2a09f25f78 docs: mentioning VictoriaTraces in vmalert's doc (#10457) 2026-02-12 12:06:50 +01:00
Aliaksandr Valialkin
6824ade224 lib/promscrape: follow-up for the commit 22696f378c
- Return back the check that the size of the scraped response doesn't exceed the maxScrapeSize
  at the client.ReadData(). Without this check the scraped response may be truncated to maxScrapeSize+1
  bytes, which can result in a decompression error. The decompression error in this case
  hides the original error about the too-big response size. This complicates troubleshooting for users.

- Stop decompressing the scraped response as soon as the decompressed response size exceeds maxScrapeSize.
  This protects from excess memory usage needed for holding the decompressed response with sizes exceeding
  the maxScrapeSize.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10320
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9481
2026-02-12 11:39:51 +01:00
Aliaksandr Valialkin
3a3c2084d3 vendor: run make vendor-update 2026-02-11 17:52:42 +01:00
Aliaksandr Valialkin
3d6f353430 deployment/docker/Makefile: update Go builder from Go1.25.7 to Go1.26.0
See https://go.dev/doc/go1.26
2026-02-11 17:35:56 +01:00
Vadim Alekseev
b1f333093b .github/workflows: use Go version from go.mod (#1092) 2026-02-11 16:10:53 +01:00
Artem Fetishev
19403b9cd1 lib/storage: use workingsetcache for tfss loops cache again (#10427)
lrucache causes huge cpu usage in some caches. See #10297.

There was a hypothesis that this was due to a too short ttl in lrucache.
Setting it to 1h (the default workingsetcache eviction period) did not
completely eliminate the problem: the CPU utilization was no longer huge but still high.
See #10416.

Thus reverting back to fix such deployments. This solution is temporary
because the cache consumes at least 32MB. There is one instance per
indexDB which means that if the retention is 3y then the total memory
utilized by this cache will be over 1GB and most of it will be unused.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-11 15:18:34 +01:00
Max Kotliar
4edff7eae2 .github: pin go version to 1.25 to fix CI (#10448)
Go1.26 has been recently released and was picked up by CI actions.

The tests and linter actions start to fail with:

GOEXPERIMENT=synctest go vet ./lib/...
go: unknown GOEXPERIMENT synctest

This happens because Go 1.26 removes the synctest experiment.

Changelog:
This package was first available in Go 1.24 under GOEXPERIMENT=synctest,
with a slightly different API. The experiment has now graduated to
general availability. The old API is still present if
GOEXPERIMENT=synctest is set, but will be removed in Go 1.26.

https://go.dev/doc/go1.25#library


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-11 15:43:10 +02:00
Max Kotliar
ce4b131816 docs/changelog: cleanup after merge 2026-02-11 15:01:27 +02:00
Yury Moladau
cf69c56bb7 app/vmui: add label autocomplete context-aware by applying existing label matchers (#10399)
### Describe Your Changes

* Add context-aware label autocomplete by applying existing label
matchers (e.g. namespace/job) when fetching labels and label values.
* Update `package.json` dependencies.
* Update `vite.config.ts` to ensure correct API requests in playground
mode (`start:playground`).

Related issue: #9269

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2026-02-11 15:00:13 +02:00
JAYICE
42ec981fe9 vmui: add Queries with most memory to execute section in Top Queries page (#10391)
### Describe Your Changes

fix  https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9330

<img width="5088" height="1674" alt="image"
src="https://github.com/user-attachments/assets/4364cfae-8c56-417d-9d1c-6a219fa8802c"
/>


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: JAYICE <1185430411@qq.com>
2026-02-11 14:53:57 +02:00
Hui Wang
35e287d740 docs: remove incorrect description on -search.logSlowQueryStats (#10447)
>Query statistics logging is enabled by default {{% available_from
"v1.129.0" %}} with a threshold of 5s.
2026-02-11 14:49:22 +02:00
Fred Navruzov
9df9a77169 docs/vmanomaly: fix-non-canonical-url-reader-docs (#10444)
### Describe Your Changes

fix non-canonical link to MetricsQL

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-11 13:06:04 +02:00
Max Kotliar
17c514d2fa docs: use canonical link in life of sample diagram 2026-02-11 12:48:06 +02:00
Max Kotliar
c12512bdd7 lib/jwt: address code review comments (#10428)
### Describe Your Changes

Addressing code review comments from
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10426, kept them
separate to isolate the copy-paste change from follow-up changes

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-10 18:57:17 +02:00
Max Kotliar
a108da8215 lib/jwt: opensource jwt library (#10426)
### Describe Your Changes

It was
[decided](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9439#issuecomment-3612299461)
that OIDC authentication in vmauth will be part of open source repo.

That requires opensourcing lib/jwt. PR does not contain any changes in
logic, just copy-paste from enterprise repository.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9439

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-10 18:49:23 +02:00
Aliaksandr Valialkin
4e7606f669 lib/backup/actions: properly validate the size for the last part during the restoring from backup
This issue has been found by https://www.cubic.dev/codebase-scan/7b15eebd-abc2-4604-9523-7f9bec5f67f6?violationId=324521b6-50fb-502d-8981-980bd9fd44ab
2026-02-10 15:16:21 +01:00
Aliaksandr Valialkin
060d7f6ed1 lib/protoparser/protoparserutil: limit the maximum size of the snappy-encoded data block, which can be read from the remote client
This is a follow-up for the commit 51b44afd34

This issue has been found by https://www.cubic.dev/codebase-scan/7b15eebd-abc2-4604-9523-7f9bec5f67f6?violationId=5a8fb3b7-1086-5d11-bb06-1f0864bd56ff
2026-02-10 15:03:34 +01:00
Aliaksandr Valialkin
b3c1b00e4d lib/protoparser/protoparserutil: re-use byte buffers in readUncompressedData() with the capacity up to 1MiB
The expected size of the data ingestion request body accepted by VictoriaMetrics / VictoriaLogs / VictoriaTraces
exceeds 64KiB, and is close to 1MiB. That's why it is better to re-use byte buffers with capacities up to 1MiB,
even if less than 25% of their capacity was used the last time.

This should reduce the number of GC cycles at high data ingestion rate when the request body sizes
are distributed on both sides of the 16KiB ... 64KiB range.
This is a follow-up for 09d2ce36e8

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042
2026-02-10 13:04:47 +01:00
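The rule above, combined with the 75%-unused rule from commit 09d2ce36e8 that it follows up on, can be sketched as a single predicate — the thresholds come from the commit messages, the function name is hypothetical:

```go
package main

import "fmt"

// keepInPool reports whether a byte slice should be returned to the
// pool: slices with capacity up to 1MiB are always kept (typical
// ingestion bodies approach 1MiB), while bigger slices are kept only
// if at least 25% of their capacity was actually used.
func keepInPool(b []byte) bool {
	return cap(b) <= 1<<20 || len(b)*4 >= cap(b)
}

func main() {
	fmt.Println(keepInPool(make([]byte, 16<<10, 512<<10))) // true: under the 1MiB exemption
	fmt.Println(keepInPool(make([]byte, 16<<10, 4<<20)))   // false: more than 75% of 4MiB unused
}
```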
Fred Navruzov
a65f693649 docs/vmanomaly: fix iframe params (#10421)
### Describe Your Changes

fix iframe params in embedded playgrounds on /anomaly-detection/ui/ ,
anomaly-detection/quickstart/ pages

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-10 12:41:38 +02:00
Hui Wang
6285bc4179 app/vmselect: properly count vm_deduplicated_samples_total{type="select"} metric
Previously `vm_deduplicated_samples_total{type="select"}` didn't take into account identical samples.

This commit takes it into account in the same way as the `vm_deduplicated_samples_total{type="merge"}` metric.

Related to  https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10384.
2026-02-10 10:23:40 +01:00
Roman Khavronenko
e89f131e34 docs: update metadata API reference across the docs
* mention support of multitenancy in metadata
* add a basic alerting rule for tracking cache utilization
* clarify cleanup policy of metadata cache
2026-02-10 10:19:19 +01:00
Roman Khavronenko
493c1d410f app/vmagent: clarify global nature of remoteWrite.label cmd-line flag
Before, by mistake, -remoteWrite.label flag was referenced in one part
of the doc as per-remoteWrite-url flag. In fact, -remoteWrite.label is
global and applies labels to all remoteWrite URLs unconditionally.

This commit tries to clarify it in docs:
* update the life-of-a-sample diagram to change the labels applying
logic
* add hint how to add a label via `extra_label`
* removes duplicated description for -remoteWrite.label flag

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10373
2026-02-10 10:18:56 +01:00
Pablo (Tomas) Fernandez
b0029ee933 Update guide/k8s-monitoring-via-vm-single (#10372)
This is the first PR on a proposed series of updates to the guides.

I started with this one because:

It's in the top ten guides according to Google Analytics.
It's a good starting point for me to get familiar with VM on Kubernetes.
I plan to work through the rest of the guides in the following days
(coordinating the effort with JJ).

Changelog for this guide:

- Updated GKE version to a more current 1.34+
- Updated guide to more modern Helm and Kubectl versions
- Tested updated instructions on GKE 1.34.1-gke.3971001 (and a local k3s
instance) successfully
- Removed revision from Grafana values for helm chart (confirmed it
pulls the latest revision)
- Split the helm chart values into more readable chunks and added
explanations next to each chunk
- Added and updated expected outputs. Some were missing and others were
outdated
- Updated Grafana dashboards screenshots since they changed from the
last revision
- Updated Grafana repo to use community org (old grafana chart was
deprecated
on Jan 30th -
[source](https://community.grafana.com/t/helm-repository-migration-grafana-community-charts/160983))
- Minor corrections and typo fixes
- Added a section at the end pointing readers where they can go next.
2026-02-10 10:17:58 +01:00
Alexander Frolov
97e1308386 vmselect: handle NaN values when merging blocks
`vmselect` merges samples from multiple replicas using an optimistic
deduplication path.

c7f52992e7/app/vmselect/netstorage/netstorage.go (L593-L595)

This is useful when `replicationFactor > 1`. However, identical series
containing NaN values from different replicas are treated as different
(due to `NaN != NaN`), forcing the slower fallback path unnecessarily.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10384
2026-02-10 10:16:17 +01:00
Max Kotliar
a279517034 dashboards: add source code data link to logging rate panel (#10406)
### Describe Your Changes

Add a Source Code data link (a link on a bar or line in a graph) that
points directly to a source code file on GitHub. `VictoriaMetrics -
cluster`, `VictoriaMetrics - single-node`, and `VictoriaMetrics -
vmagent` dashboards were updated. I did not add it to other panels since
they do not have a Drilldown section at all.

Also, fixed a misplaced Drilldown link in `VictoriaMetrics -
single-node` dashboard.

Proxy service code is here
https://github.com/VictoriaMetrics/location2source/

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-10 10:25:46 +02:00
Fred Navruzov
f7ba76a59d docs/vmanomaly: v1.28.6-1.28.7 (#10419)
### Describe Your Changes

- Updated docs to reflect v1.28.6-v1.28.7 changes
- Fixed typos and misaligned section content
- Embedded playgrounds into documentation (data querying, vmanomaly
experiment)

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-10 10:20:43 +02:00
Max Kotliar
60dbd5a97e dashboards: Rename "Concurrent flushes on disk" panel to "Concurrent inserts" (#10409)
### Describe Your Changes

The new title better aligns with the code of
[writeconcurrencylimiter](d9dabea303/lib/writeconcurrencylimiter/concurrencylimiter.go (L140)),
the panel description and the metric used in the query.

Previously, the panel title suggested that it reflected only disk write
performance. During an incident investigation, this led to a wrong
assumption that the panel was unrelated to client-side performance.

In reality, the metric [includes the full write
path](98e320842c/lib/vminsertapi/server.go (L263)):
time spent reading data from the TCP connection, processing it, and
acknowledging the block. The updated title reflects this behavior more
accurately and reduces the risk of misinterpretation during incident
analysis.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-02-09 19:40:41 +02:00
Aliaksandr Valialkin
32ddfa973b docs/victoriametrics/Single-server-VictoriaMetrics.md: add https://docs.victoriametrics.com/VictoriaMetrics.html seen in the wild according to the 404 pages report in Google Analytics 2026-02-09 16:59:26 +01:00
Aliaksandr Valialkin
d9554a3a22 docs/victoriametrics/Cluster-VictoriaMetrics.md: add https://docs.victoriametrics.com/Cluster-VictoriaMetrics/ alias seen in wild according to the 404 pages report in Google Analytics 2026-02-09 16:58:05 +01:00
Aliaksandr Valialkin
fbab6403dc docs/victoriametrics/MetricsQL.md: add https://docs.victoriametrics.com/MetricsQL/ alias seen in wild according to the 404 pages report in Google Analytics 2026-02-09 16:56:49 +01:00
Jayice
07dd79608b app/vmselect: align graphite render API process timeout to query deadline
Previously the error returned on timeout suggested a memory leak, which
could confuse a user. In reality a timeout could happen if vmselect is
overloaded or the query takes a lot of time to process. The commit
aligns rss.RunParallel with the query deadline set either via the flag
`-search.maxQueryDuration` or the `timeout` query argument. The logged
warn message is adjusted to suggest resource increase or timeout
increase.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8484

Signed-off-by: JAYICE <jayice.zhou@qq.com>
Signed-off-by: Max Kotliar <kotlyar.maksim@gmail.com>
2026-02-09 14:03:26 +02:00
Artem Fetishev
5915c57b46 docs/changelog: add known issue to v1.132.0 release notes
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-09 11:03:19 +01:00
JAYICE
f36e1857c0 app/vmagent: improve kafka consumer performance
Previously, the Kafka consumer in vmagent committed offsets per message
(manual commit). At high message rates, this could overload the commit
path (coordinator, __consumer_offsets topic, and network).

This commit introduces time-based manual commits with a controlled window:
* enable.auto.commit remains false by default.
* After a successful TryPush (data accepted into the buffer before the
  vmagent queue/backend), vmagent adds the message to pending offsets.
* Offsets are committed periodically (every second), as well as during
  shutdown and partition rebalance.

This keeps the commit point tied to TryPush (stronger guarantees than
auto-commit) while significantly reducing commit QPS.

Auto-commit is also time-based, but it advances offsets based on poll()
delivery rather than application-level processing. This means offsets
may be committed before data is actually accepted by the vmagent
pipeline, slightly increasing the risk of data loss on crash or restart.

This change does not make the Kafka consumer fully transactional
end-to-end. Buffers in vmagent/vminsert/vmstorage still imply possible
data loss on hard stops. However, it provides stronger guarantees than
auto-commit, since commits are based on TryPush rather than poll().

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10395
2026-02-06 13:17:14 +01:00
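The commit flow above (record offsets on successful TryPush, commit them on a timer, on shutdown, and on rebalance) can be sketched with a hypothetical tracker; no Kafka client API is shown, only the offset bookkeeping:

```go
package main

import (
	"fmt"
	"sync"
)

// offsetTracker accumulates the highest pushed offset per partition and
// hands back a snapshot to commit on a timer, instead of committing
// once per message.
type offsetTracker struct {
	mu      sync.Mutex
	pending map[int32]int64 // partition -> next offset to commit
}

func newOffsetTracker() *offsetTracker {
	return &offsetTracker{pending: map[int32]int64{}}
}

// markPushed records a message as accepted by TryPush.
func (t *offsetTracker) markPushed(partition int32, offset int64) {
	t.mu.Lock()
	if offset+1 > t.pending[partition] {
		t.pending[partition] = offset + 1
	}
	t.mu.Unlock()
}

// drain returns the offsets to commit and resets the pending set; the
// caller invokes it from a 1s ticker, on shutdown, and on rebalance.
func (t *offsetTracker) drain() map[int32]int64 {
	t.mu.Lock()
	out := t.pending
	t.pending = map[int32]int64{}
	t.mu.Unlock()
	return out
}

func main() {
	t := newOffsetTracker()
	t.markPushed(0, 10)
	t.markPushed(0, 12)
	t.markPushed(1, 3)
	fmt.Println(t.drain()) // map[0:13 1:4]
}
```

Batching commits this way trades per-message durability for far lower commit QPS while keeping the commit point tied to TryPush.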
Aliaksandr Valialkin
04f4a28cf4 docs/victoriametrics/integrations/zabbixconnector.md: add an alias - https://docs.victoriametrics.com/victoriametrics/integrations/zabbix/ - seen in the Internet
Visits to this page are seen in Google Analytics reports.
2026-02-05 23:52:08 +01:00
Aliaksandr Valialkin
7f3d370244 deployment/docker: update base Alpine Docker image from 3.23.2 to 3.23.3
See https://www.alpinelinux.org/posts/Alpine-3.20.9-3.21.6-3.22.3-3.23.3-released.html
2026-02-05 19:48:54 +01:00
Aliaksandr Valialkin
c89b7f7ad5 deployment/docker: update Go builder from Go1.25.6 to Go1.25.7
See https://github.com/golang/go/issues?q=milestone%3AGo1.25.7%20label%3ACherryPickApproved
2026-02-05 19:46:56 +01:00
Aliaksandr Valialkin
d9dabea303 docs/victoriametrics: add links on how to tune VictoriaMetrics for IoT and industrial monitoring cases with low churn rate for time series
The link is https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#index-tuning-for-low-churn-rate
Put this link to the docs which mention IoT and industrial monitoring, so users could figure out
how to optimize VictoriaMetrics for these cases.
2026-02-05 17:23:10 +01:00
Aliaksandr Valialkin
09d2ce36e8 lib/protoparser/protoparserutil: do not store byte slices with more than 75% of unused space in the pool
Keeping such byte slices in the pool may increase memory usage when processing a small share of requests
with much bigger sizes than the average processed request.

This should help reducing memory usage at https://github.com/VictoriaMetrics/VictoriaLogs/issues/1042
2026-02-04 15:31:21 +01:00
Max Kotliar
08755c838b docs: update changelog with LTS release notes 2026-02-02 18:45:07 +02:00
Max Kotliar
d2e438ef41 docs: bump version to v1.135.0 2026-02-02 18:38:03 +02:00
Max Kotliar
e508fa5fe2 deployment/docker: bump version to v1.135.0 2026-02-02 18:27:28 +02:00
f41gh7
9a7deca207 follow-up for 60cadfbad1
Respect the default value of http.DefaultTransport.Proxy. Previously,
it could be unintentionally overridden with a nil value.

This commit aligns Proxy configuration across all created transports.
2026-02-02 16:39:52 +01:00
Zane DeGraffenried
60cadfbad1 lib/promauth: fix oauth http client overwriting default proxy with nil
Previously, the default `Proxy` was unconditionally replaced with the config value, which could be nil.
This made it impossible to use the default http client proxy env variables.

This commit adds a check in the oauth http client builder that only overwrites the
transport proxy if a custom proxy url function is defined.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10385
2026-02-02 15:39:46 +01:00
Vadim Alekseev
b36c8b1110 app/vminsert/common: reduce allocations when writing metadata
Bug was introduced at 5a587f2006, while porting the change from the cluster branch.

This commit properly reuses the `mms []metricsmetadata.Row` slice.
Previously, every WriteMetadata call triggered a slice allocation.
This shouldn't significantly impact overall performance, so I haven't
included benchmarks.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10392
2026-02-02 15:36:33 +01:00
Nikolay
90f0405b11 lib/promscrape: properly expose kubernetes_sd dialer metrics (#10381)
Commit 35b31f904d introduced a bug, where
dialer metrics for Kubernetes discovery were overwritten.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10382
2026-02-02 14:47:50 +01:00
Nikolay
eac0a7ed86 docs: mention downsampling export API behavior
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10326
2026-02-02 14:44:40 +01:00
Artem Fetishev
a8a99105b1 lib/storage: reduce number of shards in metricIDCache (#10388)
This should reduce cpu utilization while still removing the storage
connection saturation.

Follow-up for 6bc809813b (#10346)

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-02-01 18:13:44 +01:00
Max Kotliar
c7f52992e7 docs/changelog: cut v1.135.0 2026-01-30 14:12:05 +02:00
Max Kotliar
5fe14e5479 docs: run make docs-update-flags 2026-01-30 14:09:42 +02:00
Max Kotliar
c7ef079eba docs: run make docs-update-flags 2026-01-30 14:04:39 +02:00
Max Kotliar
424d007a39 app/vmselect: run make vmui-update 2026-01-30 13:58:57 +02:00
Zakhar Bessarab
ad4562cd56 lib/pushmetrics: allow enabling push metrics via config
This is needed in order to allow using lib/pushmetrics for vmctl as it does not use go native flags.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>

app/vmctl: add metrics for the migrations

- add flags to allow setting up metrics push
- add metrics to track progress of the migration for all modes
- add metrics for generic backoff and limiter packages

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2026-01-30 13:02:13 +02:00
f41gh7
f9895d7e5e follow-up for a2271284
Remove duplicate line at app/vmui/Makefile
2026-01-30 11:28:50 +01:00
Andrei Baidarov
6bc809813b lib/storage: shard metricIdCache
The current implementation has a bottleneck – a single mutex to access
`prev`/`next` metric sets. Each rotation results in storage utilization
spikes since lock-free `curr` is almost empty, and cache needs to
promote metrics from `prev` to `next`.

This is an attempt to reduce contention by splitting the cache into separate
shards.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10367
2026-01-30 11:19:40 +01:00
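A minimal sketch of the sharding idea from the commit: split one mutex-protected metricID set into N shards selected by a cheap function of the metricID, so concurrent goroutines rarely contend on the same lock. The names, shard count, and modulo selection are illustrative, not the actual lib/storage implementation.

```go
package main

import (
	"fmt"
	"sync"
)

const shardCount = 8 // illustrative; the real shard count is tuned separately

type shard struct {
	mu  sync.Mutex
	ids map[uint64]struct{}
}

type shardedCache struct {
	shards [shardCount]shard
}

func newShardedCache() *shardedCache {
	c := &shardedCache{}
	for i := range c.shards {
		c.shards[i].ids = make(map[uint64]struct{})
	}
	return c
}

// shardFor picks a shard from the metricID, spreading lock contention
// across shardCount independent mutexes.
func (c *shardedCache) shardFor(metricID uint64) *shard {
	return &c.shards[metricID%shardCount]
}

func (c *shardedCache) Add(metricID uint64) {
	s := c.shardFor(metricID)
	s.mu.Lock()
	s.ids[metricID] = struct{}{}
	s.mu.Unlock()
}

func (c *shardedCache) Has(metricID uint64) bool {
	s := c.shardFor(metricID)
	s.mu.Lock()
	_, ok := s.ids[metricID]
	s.mu.Unlock()
	return ok
}

func main() {
	c := newShardedCache()
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(base uint64) {
			defer wg.Done()
			for id := base; id < base+1000; id++ {
				c.Add(id)
			}
		}(uint64(i) * 1000)
	}
	wg.Wait()
	fmt.Println(c.Has(42), c.Has(3999), c.Has(4000))
}
```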
Hui Wang
9b40fd00e0 app/vmalert: do not skip sending alert notifications to -notifier.url if remote write requests fail
Note: remote write request won't fail immediately if `-remoteWrite.url`
is unreachable, as vmalert maintains a remote write queue (with capacity
controlled by `-remoteWrite.maxQueueSize(default 1e5)`) and uses a
separate process to batch and push queued data.

vmalert uses error group to print error messages associated with a
single group together, which should assist the group owner in reviewing
relevant error messages.
With this pull request, the error message would be like:
```
2026-01-30T08:26:46.641Z	error	app/vmalert/rule/group.go:395	group "group2": errors(3): 
rule "rule1": remote write failure: failed to push timeseries - queue is full (1 entries). Queue size is controlled by -remoteWrite.maxQueueSize flag
rule "rule1": notifier failure: failed to send alerts to addr "http://non-existing-alertmanager-1/api/v2/alerts": invalid SC 502 from "http://non-existing-alertmanager-1/api/v2/alerts"; response body: 
rule "rule1": notifier failure: failed to send alerts to addr "http://non-existing-alertmanager-2/api/v2/alerts": invalid SC 502 from "http://non-existing-alertmanager-2/api/v2/alerts"; response body: 
2026-01-30T08:26:46.641Z	error	app/vmalert/rule/group.go:395	group "group2": errors(3): 
rule "rule2": remote write failure: failed to push timeseries - queue is full (1 entries). Queue size is controlled by -remoteWrite.maxQueueSize flag
rule "rule2": notifier failure: failed to send alerts to addr "http://non-existing-alertmanager-2/api/v2/alerts": invalid SC 502 from "http://non-existing-alertmanager-2/api/v2/alerts"; response body: 
rule "rule2": notifier failure: failed to send alerts to addr "http://non-existing-alertmanager-1/api/v2/alerts": invalid SC 502 from "http://non-existing-alertmanager-1/api/v2/alerts"; response body: 
2026-01-30T08:26:52.229Z	error	app/vmalert/rule/group.go:395	group "group1": errors(3): 
rule "rule1": remote write failure: failed to push timeseries - queue is full (1 entries). Queue size is controlled by -remoteWrite.maxQueueSize flag
rule "rule1": notifier failure: failed to send alerts to addr "http://non-existing-alertmanager-1/api/v2/alerts": invalid SC 502 from "http://non-existing-alertmanager-1/api/v2/alerts"; response body: 
rule "rule1": notifier failure: failed to send alerts to addr "http://non-existing-alertmanager-2/api/v2/alerts": invalid SC 502 from "http://non-existing-alertmanager-2/api/v2/alerts"; response body: 
2026-01-30T08:26:52.229Z	error	app/vmalert/rule/group.go:395	group "group1": errors(3): 
rule "rule2": remote write failure: failed to push timeseries - queue is full (1 entries). Queue size is controlled by -remoteWrite.maxQueueSize flag
rule "rule2": notifier failure: failed to send alerts to addr "http://non-existing-alertmanager-2/api/v2/alerts": invalid SC 502 from "http://non-existing-alertmanager-2/api/v2/alerts"; response body: 
rule "rule2": notifier failure: failed to send alerts to addr "http://non-existing-alertmanager-1/api/v2/alerts": invalid SC 502 from "http://non-existing-alertmanager-1/api/v2/alerts"; response body: 
```

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10376
2026-01-30 11:14:27 +01:00
JAYICE
9d59a31290 expose topN average memory bytes consumption queries in /api/v1/status/top_queries (#10350)
### Describe Your Changes

part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9330

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: JAYICE <1185430411@qq.com>
2026-01-30 10:47:31 +02:00
Zakhar Bessarab
8391be18be app/vmbackupmanager: allow disabling scheduled backups
This commit adds a new flag `disableScheduledBackups` for `vmbackupmanager`, which disables any scheduled backups. It could be useful to keep vmbackupmanager running and serving API calls only.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10364
2026-01-29 13:46:00 +01:00
Vadim Rutkovsky
8feb8c17aa docs: update examples and documentation after nodes/proxy permission removed
Updated helm-charts and operators no longer come with nodes/proxy
permissions for vmagent/vmsingle roles. In the examples using kubelet's
proxy endpoint we should explicitly create ClusterRoles /
ClusterRoleBinding to grant access.

See https://github.com/VictoriaMetrics/operator/pull/1754 and
https://github.com/VictoriaMetrics/helm-charts/pull/2676

Ref: https://github.com/VictoriaMetrics/operator/issues/1753
2026-01-29 13:20:12 +01:00
Hui Wang
634b4d035d app/vmalert: ensure alert restore retrieves the correct previous alert state if the group takes a long time to evaluate
The new `ALERTS_FOR_STATE` series may be retrieved during restore when:
1. a group contains multiple heavy rules, alerting rule A may have
already been executed and its state metrics successfully uploaded to the
datasource by the time all rules within the group have finished
executing;
2. the datasource makes data queryable very quickly, for instance, when
users configure a small value for `-search.latencyOffset`.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10335
2026-01-29 13:18:32 +01:00
Hui Wang
1db7597e45 vmalert: disallow setting the -notifier.url command-line flag to a null value
Previously, running vmalert with an empty notifier.url did not produce an error and resulted in a vmalert instance that would never send a notification successfully.

This commit properly validates that notifier.url is not empty.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10355
2026-01-28 14:09:18 +01:00
Artem Fetishev
23fe7db35c lib/storage: follow-up for making searchAndMerge profile-friendly
Follow-up for c705da74f6
2026-01-28 14:07:26 +01:00
Hui Wang
817f2dc9e7 app/vmselect/promql: fix gaps at changes() functions
After changing the scrape interval from a smaller value (e.g., 30s) to a larger value (e.g., 60s), the changes() function starts to yield non-zero values even when the underlying values have not changed.

 This commit keeps unchanged series values when a large gap occurs between samples or when the scrape interval decreases.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10280
2026-01-28 14:06:51 +01:00
Max Kotliar
731ba17962 docs: Update vmctl flags in docs with a command (#10357)
### Describe Your Changes

The commit extends make docs-update-flags command so it updates vmctl
flags as well. It creates one md file with global flags and several
files per supported mode.


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-28 14:12:45 +02:00
Max Kotliar
bb163692ba docs: add available_from to request body buffering vmauth doc
Follow-up for
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10310 and
e31abfc25c
2026-01-28 12:50:25 +02:00
Nikolay
952ef51cd1 lib/fs: properly check for partially deleted directories (#10342)
Commit 83da33d8cf introduced a check to
detect directories partially removed via IsPartiallyRemovedDir.

However, the check was performed using the full path, while de.Name()
returns only the current entry name (without the path). As a result, the
check always succeeded and the function did not behave as intended.
2026-01-28 10:30:35 +01:00
Nikolay
1fc548b63a lib/fs: add fs.disableMincore flag
This flag allows disabling the mincore() syscall introduced in
50fc48ac47. On older ZFS filesystems,
mincore() may trigger a bug related to ZFSÕs own in-memory cache. Mixing
reads from mmap()ed files and direct disk reads can corrupt the ZFS ARC
cache and lead to data read corruption.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10327
2026-01-27 20:29:01 +01:00
Nikolay
aa5236877c lib/storage: properly aggregate per IndexedDB cache stats
Commit f62893c151 added an attempt to fix
stats for `tagFiltersCache`, `metricIDCache`, and `dateMetricIDCache`.
Instead of aggregated stats, it returned the largest cache stats by
cache size.

This resulted in possible counter decreases for counter metric types. It
made aggregated metrics less usable.

This commit changes cache stats aggregation by metric type:
* size-related gauge metrics are returned based on max cache size usage
* metric counters are reported as a sum of all counters

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10275
2026-01-27 20:27:41 +01:00
Artem Fetishev
c705da74f6 lib/storage: make pt and legacy idbs visible in golang profiles
Rewrite searchAndMerge so that golang profiles can show exactly
how many resources are consumed by each idb type.
2026-01-27 20:26:39 +01:00
Aliaksandr Valialkin
2e9bda2bff lib/{mergeset,storage}: add a comment explaining why the strange construct with anonymous function is needed
This is a follow-up for the commit 2a0e382a99

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/1020
2026-01-27 19:44:49 +01:00
Jiekun
e1413536fc chore: add build version information to the home page for consistency with other projects
The build version added to:
- victoria-metrics
- vmagent
- vmalert

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10249

Co-authored-by: Hui Wang <haley@victoriametrics.com>
Signed-off-by: Zhu Jiekun <jiekun@victoriametrics.com>
2026-01-27 18:28:15 +02:00
Jayice
1a438a04ba introduce new alert for vmagent persistentqueue capacity 2026-01-27 18:14:02 +02:00
Aliaksandr Valialkin
4ad47d6fe3 docs/victoriametrics/README.md: remove obsolete docs about staleness markers during deduplication after the commit 7bd5d19f62
Staleness markers are ignored on the deduplication interval if other numeric samples exist on that interval.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10196
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5587
2026-01-27 16:08:45 +01:00
Aliaksandr Valialkin
879443f915 lib/storage/dedup.go: remove obsolete comment from DeduplicateSamples - it doesn't keep stale NaNs on purpose after the commit 7bd5d19f62
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5587
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10196
2026-01-27 16:08:44 +01:00
Max Kotliar
bd6788cb8f docs/changelog: fix ordering after merging pr.
related pr https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10320
2026-01-27 16:37:49 +02:00
Jayice
22696f378c lib/promscrape: apply promscrape.maxScrapeSize to decompressed data
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9481
2026-01-27 16:30:38 +02:00
Artur Minchukou
7205f479aa app/vmui: fix build of vmui by handling playground env variable correctly (#10354)
### Describe Your Changes

Fixed build of vmui by handling playground env variable correctly.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-27 16:24:18 +02:00
Yury Moladau
5c7031c000 vmui: fix "Percentage from total" for multiple metrics in Cardinality Explorer (#10323)
### Describe Your Changes

In the Cardinality Explorer, when filtering, a "Percentage from total"
stat appears. This stat is documented as "the share of these series in
the total number of time series".

This works for pages for individual metrics. However, if using a filter
that returns *multiple* metrics, the value of "Percentage from total"
will only account for the size of the *first* metric. One can have a
filter that returns, say, 10k time series (out of, say, 100k in the VM
cluster), and if the first metric returned has 1k time series, then
"Percentage from total" will show 1%, not 10%.

This PR fixes that calculation.

Credits to @PleasingFungus for the original fix (PR #10288).

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: PleasingFungus <PleasingFungus@users.noreply.github.com>
2026-01-27 15:37:34 +02:00
Artur Minchukou
a227128467 app/vmui: move node from ci to docker and update build steps (#10299)
### Describe Your Changes

Moved node from CI to make command and update build steps.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-27 15:23:35 +02:00
Nikolay
777c8913b3 follow-up after e35a9a366c
Commit e35a9a366c changed the order of wg.Add calls in the Graphite transform package. Previously, all wg.Add calls were made upfront, but after that change it became possible for wg.Wait to exit earlier than expected.

This commit fixes the issue by spawning all background goroutines first and starting the goroutine that calls wg.Wait afterward.
2026-01-27 13:50:07 +01:00
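The safe ordering the follow-up commit restores can be sketched as: register and spawn every worker first, and only then start the goroutine that calls wg.Wait. If the waiter starts before all wg.Add calls, Wait can return while the counter is transiently zero and work gets lost. This is a generic illustration, not the Graphite transform code itself.

```go
package main

import (
	"fmt"
	"sync"
)

// runWorkers spawns n workers and returns the sum of their results,
// demonstrating the safe Add-before-Wait ordering.
func runWorkers(n int) int {
	var wg sync.WaitGroup
	results := make(chan int, n)

	// 1. Register and spawn all workers up front.
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			results <- v
		}(i)
	}

	// 2. Only now start the waiter that closes the channel.
	go func() {
		wg.Wait()
		close(results)
	}()

	sum := 0
	for v := range results {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(runWorkers(3)) // 0+1+2
}
```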
Aliaksandr Valialkin
e35a9a366c all: consistently use sync.WaitGroup.Go() instead of sync.WaitGroup.Add(1) + sync.WaitGroup.Done()
This improves code readability a bit.
2026-01-27 00:29:47 +01:00
JAYICE
6bbc03ecf8 app/vmagent: support configuring different -remoteWrite-queues per url
Previously vmagent had remoteWrite.queues as a global setting that was applied to every persistentqueue. However, it could be useful to specify remoteWrite.queues per remoteWrite.url.

Since each remote write target might have a different workload (latency, throughput, and availability), tuning is more flexible if remoteWrite.queues can be set separately for a specific target.

This commit makes `-remoteWrite.queues` configurable per remoteWrite.url.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10270
2026-01-26 20:09:35 +01:00
Max Kotliar
ca34ae48b4 docs/changelog: chore changelog
- rename `these docs` link to a more explicit link
- add a thank-you for the contribution
2026-01-26 18:45:04 +02:00
Max Kotliar
f18fd37433 docs: run make docs-update-flags 2026-01-26 18:43:35 +02:00
Zhu Jiekun
f191a052dc lib/promscrape: ceiling the last scrape size
Ceil the last scrape size to an integer in bytes or kilobytes to
avoid misleading decimal dots.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10307
2026-01-26 12:46:24 +01:00
Max Kotliar
0fdd5cb435 app/vmauth: fix backend healthcheck for url prefixes defined inside url_map
Previously health checks for url prefixes defined inside `url_map` were
not properly stopped. See STR in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10334#issuecomment-3791401822

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10334
2026-01-26 11:47:43 +01:00
f41gh7
76dd8f4adb lib/storage: properly search searchTenantsOnDate
The initial implementation of searchTenantsOnDate used an index scan for the given prefix (index prefix + tenant + date).
It did not check whether the date prefix was actually outside the current date.

This commit adds the missing date check and makes the tenant search results accurate.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10295
2026-01-26 11:35:27 +01:00
Aliaksandr Valialkin
e31abfc25c app/vmauth: allow buffering request body before proxying it to the backend
This should help reduce load on backends when many concurrent clients
send requests over slow networks (for example, when many IoT devices send metrics
to vmauth over slow connections).

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10309

This commit is based on top of https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10310
Thanks to @makasim for the initial idea.
2026-01-26 03:02:32 +01:00
Aliaksandr Valialkin
ac6d9d632f app/vmauth: properly increment vmauth_user_concurrent_requests_limit_reached_total and vmauth_unauthorized_user_concurrent_requests_limit_reached_total metrics when the request is rejected because of the concurrency limit
These metrics must be incremented when the request couldn't be processed because of the configured per-user concurrency limit.
The commit 76176ac1d3 moved the counter increment to the place where the current request
is put into the wait queue because the concurrency limit is reached. This is incorrect, since such requests
can still be successfully processed within -maxQueueDuration . This also contradicts the docs at https://docs.victoriametrics.com/victoriametrics/vmauth/#concurrency-limiting

There is little practical sense in counting the number of times the concurrency limit is reached
while the request is still successfully processed within the -maxQueueDuration after that.

Add missing alerting rule for rejected unauthorized requests because of the concurrency limit.

Add missing grouping by instance for per-user counter of rejected queries because of the concurrency limit.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
2026-01-25 21:43:44 +01:00
Aliaksandr Valialkin
e43de2a2b3 app/vmauth: put comments into the correct places after the commit 5f67f04f6b 2026-01-25 21:19:01 +01:00
Aliaksandr Valialkin
efe4a3b2dd vendor: update github.com/VictoriaMetrics/VictoriaLogs from v1.36.2-0.20251008164716-21c0fb3de84d to v0.0.0-20260125191521-bc89d84cd61d 2026-01-25 20:24:04 +01:00
Aliaksandr Valialkin
3bf5f0297b LICENSE: update the end copyright year from 2025 to 2026 2026-01-25 20:14:07 +01:00
Aliaksandr Valialkin
5632ccc64a lib/logger: count both printed and suppressed logs at vm_log_messages_total metric
This simplifies troubleshooting by investigating the vm_log_messages_total metric
when logs are unavailable. The logs may be unavailable when the -loggerLevel command-line
flag is set to value other than INFO. The logs may be unavailable when clients
use Monitoring of Monitoring service ( https://victoriametrics.com/products/mom/ ),
which provides metrics, but doesn't provide logs from VictoriaMetrics components
running at the client side.

Add `is_printed` label to the `vm_log_messages_total` metric in order to detect whether
the given log has been suppressed or printed.

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10304

While at it, make more readable the description for the TooManyLogs alert,
which is based on the vm_log_messages_total metric.
Also return back the `level!="info"` instead of `level="error"` filter
in the query for this alerting rule, in order to be consistent with queries
at the official dashboards for VictoriaMetrics components.
TODO: investigate too high warnings rate at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2760
and fix it at the source of these warnings instead of modifying the query
for the TooManyLogs alert.
2026-01-25 17:43:20 +01:00
Nikolay
446452857c lib/storage: tsdb stats fallback to legacy idb
Add fallback to legacy indexDB for stats search

After introducing the new partition index
(f97f627f79), storage stopped returning
stats for date ranges outside the partition index. This made the
migration backward incompatible, as there was no way to retrieve stats
for dates prior to the migration.

This change adds a fallback to the legacy indexDB search when the status
search on the current partition index returns zero series.

This is an imperfect solution: due to tag filters, the TSDB status
search may legitimately return empty results. However, the additional
overhead is small and acceptable.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10315
2026-01-23 16:43:13 +01:00
Max Kotliar
4ff409eb27 docs/changelog: mention already fixed bug fix in vmauth
The bug was fixed in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10233 before we
realized it was a bug; at that time we considered it an improvement (do
fewer retries). But later, in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10318, we
realized that we had actually fixed a bug.

Adding a post-factum record about the bugfix to the changelog.
2026-01-22 16:57:58 +02:00
Phuong Le
1b7f0172d2 fsutil: fix a typo related to default concurrent goroutines working with files
s/265/256
2026-01-20 21:46:38 +01:00
Andrii Chubatiuk
1c77ee9527 app/vmui: removed anomaly ui (#10316)
The vmanomaly has been moved to a separate repository. This means that the functionality related to vmanomaly is no longer needed in the app/vmui located in the VictoriaMetrics repository.

This commit removes all the functionality and unnecessary abstractions related to vmanomaly from the app/vmui repository. This should help improve long-term maintenance of the code.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9755
2026-01-20 21:46:09 +01:00
Phuong Le
2a0e382a99 lib/storage, lib/mergeset: avoid deadlock on panic while merging
Related to
https://github.com/VictoriaMetrics/VictoriaLogs/issues/1020#issuecomment-3763912067
2026-01-20 21:43:12 +01:00
Max Kotliar
02c8ea5a48 docs/changelog: fix typo in security upgrade 2026-01-20 21:53:52 +02:00
Aliaksandr Valialkin
34f242a6b8 vendor: run make vendor-update 2026-01-19 15:29:25 +01:00
f41gh7
bc8f6c5688 docs: point examples to the v1.134.0 release 2026-01-19 14:28:00 +01:00
f41gh7
c0fe67c2db docs: cut LTS releases v1.110.28 and v1.122.13
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2026-01-19 14:26:36 +01:00
Fred Navruzov
ede1c2cde9 docs/vmanomaly: release v1.28.5 (#10311)
### Describe Your Changes

- Adjusted vmanomaly docs for v1.28.5
- Added missing `server` page at /anomaly-detection/components/server

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-17 21:52:48 +02:00
Aliaksandr Valialkin
ad34a5eb53 lib/protoparser/protoparserutil: reduce memory usage in ReadUncompressedData() when processing big number of incoming connections
Wait for the first byte from the reader passed to ReadUncompressedData()
before obtaining concurrency token from -maxConcurrentInserts and before allocating
buffers needed for reading the request body in memory.
This should limit the amounts of memory needed for processing a big number of concurrent
HTTP requests via Prometheus remote_write protocol and via other HTTP-based data ingestion
protocols where every request contains a single block of data to process.
Now the maximum memory usage is limited by -maxConcurrentInserts, while the server
can process much more than -maxConcurrentInserts concurrent HTTP requests by pausing the excess requests.

Previously the memory usage wasn't limited by -maxConcurrentInserts, since buffers for reading the data
from concurrent connections were allocated before obtaining the concurrency token from -maxConcurrentInserts.

While at it, use protoparserutil.ReadUncompressedData() in lib/protoparser/promremotewrite/stream.Parse()
for the sake of consistency across parsers for protocols, which send the full block of data per every incoming HTTP request.

This is a follow-up for the commit d107dee9c7
2026-01-17 15:49:53 +01:00
f41gh7
eaf7a68c92 CHANGELOG.md: cut v1.134.0 release 2026-01-16 20:49:31 +01:00
Max Kotliar
c5e43e1c91 docs: use canonical link 2026-01-16 19:09:49 +02:00
f41gh7
b343f541f0 make vmui-update 2026-01-16 16:46:26 +01:00
f41gh7
a23a902953 deployment: update Go builder from v1.25.5 to v1.25.6
See https://github.com/golang/go/issues?q=milestone%3AGo1.25.6%20label%3ACherryPickApproved
2026-01-16 16:26:27 +01:00
Hui Wang
54c60706ca lib/streamaggr: prefer numerical values over stale markers when samples share the same timestamp during deduplication (#10300)
follow up
7bd5d19f62,
apply the same logic in stream aggregation.
2026-01-16 16:14:09 +01:00
Nikolay
cd2e11b7cf lib/storage: increase rotation time for daily metricID cache
This is follow-up for c5713a09d3

Originally, the dateMetricID cache was fully rotated every 20 minutes. This
made daily-index pre-creation less efficient and caused CPU usage spikes
for index record lookups at midnight.

The storage pre-fills index records for the next day during the hour before
midnight, but this rotation left only the last 20 minutes before midnight
visible in the cache.

This commit changes the rotation period from 20 minutes to 2 hours (1 hour
tick interval). While it could slightly increase cache memory usage, in
practice this shouldn't be noticeable, and it prevents CPU usage spikes.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10064
2026-01-16 16:12:59 +01:00
Aliaksandr Valialkin
5423d5e93a docs/victoriametrics/relabeling.md: add an alias seen in wild - https://docs.victoriametrics.com/victoria-metrics/relabeling/
Google sends users to this alias according to the report on 404 pages.
2026-01-16 15:46:55 +01:00
Aliaksandr Valialkin
48819b6781 docs/victoriametrics/CaseStudies.md: added alias for this page seen on the Internet - https://docs.victoriametrics.com/casestudies.html
Google sends users to this url according to the report on 404 pages.
2026-01-16 15:32:40 +01:00
JAYICE
c4bff27f46 lib/storage: properly search for LabelNames and LabelValues
The issue was introduced in commit d6ef8a807b.

Due to variable shadowing, if a filter matched more than 100_000 metricIDs, the search fell back to the indexDB scan.
But because of a typo, the `filter` value was not properly updated, which triggered incorrect results.

This commit fixes the typo and adds a test to verify this case.


fixes  https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10294
2026-01-16 13:53:29 +01:00
Max Kotliar
432b313a48 docs: cleanup changelog a bit before release 2026-01-16 12:51:08 +02:00
Haley Wang
7bd5d19f62 lib/storage: prefer numerical values over stale markers when samples share the same timestamp during deduplication
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10196

Prefer the non-StaleNaN value when StaleNaN and non-StaleNaN samples share the same timestamp during deduplication (downsampling). The scenario can occur when:
1. Multiple vmagent instances scrape the same target (without the -promscrape.cluster.name flag), and one instance fails to scrape due to issues such as network problems while others succeed.
2. Multiple vmalert instances evaluate the same recording rule, with one instance receiving a partial response while others receive a complete response.

In both cases, since the samples share the same timestamp and represent the metric state at that moment,
the non-StaleNaN value is entirely valid, whereas the StaleNaN could be caused by other unknown issues.
Therefore, it is reasonable to prioritize the non-StaleNaN value.
2026-01-16 09:25:11 +02:00
Haley Wang
8d18bc288f vmselect: use the last 20 raw samples to auto-calculate the lookbehind during range query
Previously, the first 20 raw samples were used for the calculation.
But compared to the first 20 samples, the last 20 samples represent the latest state of the metrics,
so the lookbehind window calculated from them should be more accurate when applied to the most recent samples,
resulting in better query results for recent time ranges.

For example, if the scrape interval changes at day4 and the query range is set to the last 7 days,
applying the window derived from the first 20 samples (the old scrape interval) to new samples could result in consistently incorrect results from day4 through day11.
Conversely, applying the window derived from the last 20 samples (the new scrape interval) could lead to incorrect results for [day0-day4),
which are old states and generally less important.

This pull request does not address any specific bug, but changes the general behavior, so there is no changelog entry.

Inspired by https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10280, but not the fix for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10280.
2026-01-16 09:00:51 +02:00
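The last-N-samples idea can be sketched as below: derive the lookbehind window from the intervals between the last N raw sample timestamps instead of the first N, so the window reflects the current scrape interval after a config change. The median-interval formula is an assumption for illustration, not the exact vmselect heuristic.

```go
package main

import (
	"fmt"
	"sort"
)

// lookbehindFromLastN estimates a lookbehind window (in the same units
// as the timestamps) from the median interval of the last n samples.
func lookbehindFromLastN(timestamps []int64, n int) int64 {
	if len(timestamps) < 2 {
		return 0
	}
	if len(timestamps) > n {
		timestamps = timestamps[len(timestamps)-n:] // keep only the last n samples
	}
	intervals := make([]int64, 0, len(timestamps)-1)
	for i := 1; i < len(timestamps); i++ {
		intervals = append(intervals, timestamps[i]-timestamps[i-1])
	}
	sort.Slice(intervals, func(i, j int) bool { return intervals[i] < intervals[j] })
	return intervals[len(intervals)/2] // median interval
}

func main() {
	// Scrape interval changed from 30s to 60s; the last samples dominate.
	ts := []int64{0, 30, 60, 90, 120, 180, 240, 300, 360, 420}
	fmt.Println(lookbehindFromLastN(ts, 5))
}
```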
Nikolay
ff6e5c2983 app/vmstorage: reduce default value for storage.vminsertConnsShutdownDuration
This commit reduces the default value for the
`storage.vminsertConnsShutdownDuration` flag from `25s` to `10s`.
This change should help reduce the probability of ungraceful storage
shutdown in Kubernetes-based environments, which have a 30-second default
graceful termination period (terminationGracePeriodSeconds).

Related issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10063
2026-01-15 17:26:12 +01:00
Yury Moladau
23af0086d8 app/vmui: fix heatmap rendering for uniform or sparse histogram buckets (#10292)
* Fixed a heatmap crash that happened when all visible cells had the
same value (division by zero produced invalid color indices).
* Improved how histogram buckets are chosen for display when the data is
very sparse, so the heatmap doesn’t look empty or drop the only
meaningful bucket.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10240
2026-01-15 16:55:24 +01:00
Yury Moladau
8657470068 app/vmui: bump package versions (#10291)
### Describe Your Changes

Updated project dependencies to the latest versions.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2026-01-15 16:53:23 +02:00
Aliaksandr Valialkin
3f16bc7cb2 docs/victoriametrics/Articles.md: add https://www.keyvalue.systems/blog/kubernetes-observability-with-victoriametrics-loki-grafana/ 2026-01-15 13:53:26 +01:00
Aliaksandr Valialkin
655a0eb0c3 app/vmstorage/main.go: typo fix after the commit 7cbd2a8600: partition -> snapshot 2026-01-15 12:50:24 +01:00
Aliaksandr Valialkin
7cbd2a8600 app/vmstorage: delete just created snapshot if the client canceled the request for creating the snapshot
It is better to delete the snapshot, since the client is no longer interested in it.
This should prevent creating many unused snapshots when clients cancel snapshot creation
because of timeouts. This is a real production case from one of the VictoriaMetrics users:
the disk IO subsystem became very slow, so creating a snapshot took a long time, and vmbackup
canceled the snapshot creation because of the timeout. But vmstorage still continued
creating the snapshot. This resulted in an increasing number of created but unused snapshots.
2026-01-15 12:36:48 +01:00
Max Kotliar
5f67f04f6b app/vmauth: measure client cancelled requests
Without measuring this, we have a blind spot. Exposing it as a metric
improves visibility and should save time during future debugging
sessions.

Inspired by review commit
c9596a0364 (r173621968)
2026-01-15 12:13:35 +01:00
Nikolay
2056e5b46d lib/mergeset: do not cache inmemoryBlock with a single item
The indexDB mergeset has an edge case for a single-item inmemoryBlock: such
blocks are stored in memory in the block header's firstItem, so there is no
need to perform on-disk reads or to store a copy of them in the cache.

Caching may also produce incorrect search results: an inmemoryBlock with a
single item always has a zero index block offset, which causes collisions
if it is cached together with the next index block of the part.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10239
Probably fixes
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10063
2026-01-15 12:12:08 +01:00
Hui Wang
4d1f262ec4 vmalert: add support for $isPartial variable in alerting rule annotation templating
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4531
2026-01-15 12:10:07 +01:00
Vadim Rutkovsky
afca599a46 app/vmauth: fix typo in auth config warning message 2026-01-15 12:09:36 +01:00
Yury Moladau
d667f694bc app/vmui: fix tenant ID handling via URL path (#10287)
**Problem**

* VMUI had two tenant ID sources:

  * URL path: `/select/<accountID>/vmui/`
  * Query param: `tenantID`
* These could differ, causing confusion and inconsistent behavior.

**Solution**

* Removed the legacy `tenantID` query parameter.
* Use the URL path as the single source of truth for tenant ID.
* Changing the tenant in the UI now updates the URL path.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10232
2026-01-15 12:05:12 +01:00
Aliaksandr Valialkin
fe2c60c79b dashboards: follow-up for the commit 36460f6297
Use $__range duration instead of 1h duration for the 'Retention errors' stats panel
in the similar way it was done in the commit 36460f6297
for the 'Backup errors' stats panel.

While at it, run `make dashboards-sync` in order to sync the dashboards
in the dashboards/vm/ folder. See https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/dashboards/README.md
for details.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10279
2026-01-14 23:30:08 +01:00
Stephan Burns
36460f6297 Make stats panel use the range specified in grafana (#10279)
### Describe Your Changes

The Backup errors panel uses a hard-coded rate interval. When looking over a
large period of time, this number would likely stay low due to the hard-coded
rate, while in reality the number of errors is much larger.

This change addresses this by using the __rate variable in Grafana so
the rate will align with the date/time range in Grafana.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-14 23:21:16 +01:00
Aliaksandr Valialkin
d107dee9c7 lib/writeconcurrencylimiter: remove Reader.DecConcurrency() method
Call decConcurrency() inside Reader.Read() before calling the Read() at the underlying reader.
This reduces chances of improper use of the writeconcurrencylimiter.Reader by callers.

While at it, move the creation of writeconcurrencylimiter.GetReader() to the top of stream parser functions
at lib/protoparser/* packages, and call incConcurrency() inside GetReader() call.
This reduces the frequency of decConcurrency() / incConcurrency() calls
for typical buffered reads when parsing the incoming data. This, in turn,
reduces the contention on the concurrencyLimitCh.
2026-01-14 22:55:17 +01:00
Max Kotliar
b33d7c3ef9 dashboards: remove timezone from vmagent dashboard
The bug was introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10267 and breaks
Helm chart customization, see the discussion
415ff27c74 (r174600675)
2026-01-14 13:28:35 +02:00
JAYICE
d3848f6802 vmagent: fix calculation of vm_persistentqueue_free_disk_space_bytes (#10271)
### Describe Your Changes

follow up https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10242,
see discussion in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10267#issuecomment-3729577415
for more context

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-13 20:12:31 +02:00
Jayice
415ff27c74 dashboards: add Persistent queue Full ETA panel to the Drilldown section in vmagent dashboard 2026-01-13 20:03:43 +02:00
Max Kotliar
90f59383b2 docs: Add docs-update-flags step to release. (#10284)
### Describe Your Changes

Previously, we had to manually update flags in documentation whenever we
made flag-related changes in source code. Someone did it by hand, others
compiled enterprise binaries, executed them with `-help` flag, and
copied and pasted output to the documentation.

In https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9632 a
command `make docs-update-flags` was introduced. It automates the whole
process: it compiles the binaries, runs them with `-help`, and syncs the
output to the docs automatically.

Now, we can **omit updating doc flags in the PR** and do it once before
releasing a new version.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-13 17:09:49 +02:00
Fred Navruzov
8fec7005d0 docs/vmanomaly-release-v1.28.4 (#10283)
### Describe Your Changes

Docs upgrades, including v1.28.4 adjustments, some diagrams refinement
and deprecations

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-13 15:32:03 +02:00
Max Kotliar
4d42b291e5 docs: run make docs-update-flags 2026-01-13 10:56:14 +02:00
Max Kotliar
50f4fbf28e lib/flagutil: Add explicit month duration unit (M) for -retentionPeriod.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10181
2026-01-13 10:37:02 +02:00
Max Kotliar
a5da6afb88 docs: run make docs-update-flags 2026-01-13 10:28:16 +02:00
Max Kotliar
71f9e7f2c4 app/vmctl: fix link to documentation
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10268

Co-authored-by: Danijel Tasov <data@consol.de>
Signed-off-by: Danijel Tasov <data@consol.de>
2026-01-12 20:53:35 +02:00
Max Kotliar
eb7c5df65e dashboards: run make dashboards-sync 2026-01-09 19:07:03 +02:00
Max Kotliar
5af493297a docs/changelog: move 2025 changes to CHANGELOG_2025.md, create CHANGELOG_2026.md 2026-01-09 13:24:47 +02:00
JAYICE
2f61fa867e Makefile: Move enterprise-only flags to a separate block in document (#10241)
### Describe Your Changes

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10218.

Improve `docs-update-flags` command to move enterprise-only flags to a
separate block.

<img width="936" height="964" alt="image"
src="https://github.com/user-attachments/assets/f96a3515-4acc-4a65-94b1-55e01fab6e25"
/>


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-08 19:42:33 +02:00
Max Kotliar
729b1099d8 dashboards: Enhance VictoriaMetrics - single-node dashboard stats row. (#10260)
### Describe Your Changes

Currently, the stats are small and hard to read (see screenshot in the
PR). In addition, the version and uptime panels work well for a single
vmsingle, but become inconvenient when multiple instances are present,
since only one is visible.

This PR changes the version and uptime panels from single stat to time
series, aligning them with the VictoriaMetrics – cluster dashboard. It
also enlarges the remaining stats so the values are easier to read,
consistent with the cluster dashboard (see screenshot in the PR).

Follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10187 and
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10132

Before:
<img width="1512" height="364" alt="Screenshot 2026-01-07 at 21 38 17"
src="https://github.com/user-attachments/assets/8d8baa86-b31b-4c58-ae22-cef94a1607e6"
/>

After:
<img width="1512" height="670" alt="Screenshot 2026-01-07 at 22 07 10"
src="https://github.com/user-attachments/assets/9e60596d-72ec-4060-af11-a69ce554d3b1"
/>

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Co-authored-by: Hui Wang <haley@victoriametrics.com>
2026-01-08 13:49:46 +02:00
Yury Moladau
945ca569b9 app/vmui: add localStorage availability checks
* Added browser `localStorage` availability checks with user-facing
error reporting.
* Introduced `VMUI:`-prefixed `localStorage` keys to avoid key
collisions.
* Added migration logic for existing unprefixed `localStorage` keys.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10085
2026-01-08 11:13:38 +01:00
Hui Wang
7fb8a8a0b2 vmalert: skip alert annotation templating in replay mode
In alerting rules, annotations are only attached to alert messages that
are sent to the notifier (such as Alertmanager). These annotations
typically contain human-readable information, such as instructions for
resolving the alert.

In [replay
mode](https://docs.victoriametrics.com/victoriametrics/vmalert/#rules-backfilling),
vmalert does not send alert messages to the notifier at all (no notifier
is configured), as these alerts are outdated. Therefore, it does not
need to template the annotations in this mode.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10262
2026-01-08 11:09:21 +01:00
JAYICE
89f95f74ed vmagent: add metric for persistentqueue capacity
This commit adds a new metric, `vm_persistentqueue_free_disk_space_bytes`, which helps
to track free disk space for the persistent queue.

part of implementation for
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10193
2026-01-08 11:07:28 +01:00
Hui Wang
46e13fe0ca vmselect: expose vm_rollup_result_cache_requests_total metric
which tracks the number of requests to the query rollup cache


As described in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10117, when
retrieving cached data from the rollup result cache, there can be mixed
`get()` and `getBig()` calls to the underlaying fastcache. And it's
unpredictable how many times `getBig()` will call `get()`, so the
metrics from fastcache cannot be used to indicate query cache miss
ratio.
Exposing a new counter `vm_rollup_result_cache_requests_total` to track
the number of requests to the query rollup cache, together with the
existing `vm_rollup_result_cache_miss_total`, allows for monitoring the
rollup cache miss rate per query (or subquery), which is more
user-facing.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10117
related to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5056
2026-01-08 11:05:25 +01:00
Fred Navruzov
50d8ad6733 docs/vmanomaly-release-v1.28.3 (#10258)
### Describe Your Changes

Docs update for vmanomaly v1.28.3 release + `retention` doc section for
model artifacts

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-07 20:40:32 +02:00
JAYICE
3b8550adb1 dashboard: refine vmsingle dashboard and align it to vmcluster dashboard (#10187)
### Describe Your Changes

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10132

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-01-07 12:58:49 +02:00
Max Kotliar
1708b73312 lib/promscrape: show (N/A) instead of hiding target response link when original labels are dropped (#10244)
Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10237,
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9901

When `-promscrape.dropOriginalLabels=true` is enabled, original target
labels are unavailable. These labels are required to compute the target
ID used by the /target_response endpoint, so the response link cannot be
generated. See

7a5003212e/lib/promscrape/targetstatus.qtpl (L236)

Previously, the link silently disappeared from the UI. Now the UI shows
(N/A) Not available, explicitly indicating that required data is
missing.
2026-01-06 12:31:18 +01:00
f41gh7
57defe7ab4 apptest: add zabbixconnector integration test
follow-up for 859435a8df
2026-01-06 12:27:01 +01:00
Sinotov Vladimir
d58cfb7f36 app/vminsert: properly route zabbixconnector requests
Previously, in the VictoriaMetrics single-node version, requests to

`http://<victoriametrics-addr>:8428/zabbixconnector/api/v1/history`

resulted in a missing path error. The issue was introduced while
back-porting changes from vmagent.

Additionally, the HTTP response was fixed: Zabbix expects a 200 status
code during normal operation.

 Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10214
2026-01-06 11:17:09 +01:00
Cancai Cai
a244750bc6 doc/table: fix typo (#10243)
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: cancaicai <2356672992@qq.com>
2026-01-05 21:42:01 +02:00
Max Kotliar
f06e7f9a6e app/vmagent: replace go.yaml.in/yaml/v3 package with gopkg.in.yaml.v2
It addresses the comment:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10213/files#r2662305818

The reasons:
- It was decided to use v2 for now and not upgrade to v3.
- The latter package is used in more places, so it is better to use it
here too.
2026-01-05 21:34:08 +02:00
Artem Fetishev
7a5003212e docs: bump VictoriaMetrics components version
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-01-05 19:22:53 +01:00
Artem Fetishev
846392405e deployment/docker: bump VictoriaMetrics component version
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-01-05 19:18:49 +01:00
Artem Fetishev
37c3d8c26b CHANGELOG.md: fix issue links
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-01-05 19:15:50 +01:00
Artem Fetishev
8bc0475ee7 docs: update LTS releases
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-01-05 18:42:21 +01:00
Zhu Jiekun
89414062bf bugfix: allow reloading when init with empty remote write relabeling flags (#10213)
### Describe Your Changes

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10211

This pull request adds `flagSet bool` field to `relabelConfigs` struct.
And use this flagSet value as the result of `isSet()` function.

The reloading should be available when at least one of the command-line
flags `-remoteWrite.relabelConfig` / `-remoteWrite.urlRelabelConfig` is
set.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Hui Wang <haley@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2026-01-05 12:53:52 +02:00
JAYICE
67c51b009d document: guide users to use --data-binary in curl when import multi lines influx data (#10198)
### Describe Your Changes

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10165.

Refer to [curl docs](https://curl.se/docs/manpage.html#--data).

> When --data is told to read from a file like that, carriage returns,
newlines and null bytes are stripped out

If users import multiple lines of data from a file via `/api/v2/write`, they
may follow the example we gave and use `-d` with curl; then the newlines
will be stripped out, hence the parse error in VictoriaMetrics.

It's not a VictoriaMetrics bug, but it is better to guide users to
use `--data-binary`, just like
[/api/v1/import](https://docs.victoriametrics.com/victoriametrics/url-examples/#apiv1import)
does.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Zhu Jiekun <jiekun@victoriametrics.com>
2026-01-05 10:54:47 +02:00
Artem Fetishev
e8160fc8fb CHANGELOG.md: cut v1.133.0 release
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-01-02 12:17:38 +00:00
Artem Fetishev
e3a4ceaef3 deployment/docker: upgrade base docker image (Alpine) from 3.22.2 to 3.23.2
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2026-01-02 10:22:41 +00:00
Andrii Chubatiuk
e9cedca8c8 docs: replace old grafana datasource page with links to a new one (#10231)
### Describe Your Changes

fixes https://github.com/VictoriaMetrics/vmdocs/issues/192

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2026-01-01 18:54:52 +02:00
Andrii Chubatiuk
b720e55c13 vmsingle: properly proxy requests to all supported vmalert paths (#10179)
### Describe Your Changes

Modify the initial request path to a proper value before sending the request
to vmalert.
Sync the vmalert proxy implementation with the one in the cluster branch.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10178

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Hui Wang <haley@victoriametrics.com>
2026-01-01 16:28:55 +02:00
Artem Fetishev
ab1429c896 lib/storage: fix tagFiltersCache stats collection (#10230)
Since the cache may be reset too often, using sizeBytes as an
indicator that this is the first-met indexDB for collecting tfssCache stats
is unreliable, because it can often be zero for all indexDB instances. Use
the Requests metric instead, because it is never reset.

Follow-up for #10204.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-31 13:35:54 +01:00
JAYICE
74b03c93a6 makefile: support vmauth in docs-update-flags command (#10222)
### Describe Your Changes

implement
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10221

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-12-30 19:14:06 +02:00
Max Kotliar
0e9bb5a42d docs: sync flags in docs with actual binaries 2025-12-30 18:59:33 +02:00
Max Kotliar
f1a88e57cf docs/changelog: fix link to PR
follow up on
1792b6bd9a
2025-12-30 17:38:48 +02:00
Max Kotliar
76176ac1d3 app/vmauth: increase concurrency limit reached before waiting in queue
Follow up on
c9596a0364 (r173413964)

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
2025-12-30 17:23:10 +02:00
Max Kotliar
c08adb31bb docs: remove available_from placeholder from code block
The {{% available_from "#" %}} placeholder does not work inside code
blocks. Replace it with a hard-coded value.

Introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10168.

See comment
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10168/files#r2651440620
for more details.
2025-12-30 16:10:55 +02:00
Artem Fetishev
b49b0471ef lib/storage: move legacy code to legacy files (#10215)
Follow-up for f97f627 (#8134)

The code was moved as is, no changes were made to moved code.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-30 13:16:24 +01:00
Artem Fetishev
13102045a7 changelog: update v1.132.0 release notes with a note on ungraceful shutdown
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-30 10:29:24 +01:00
Artem Fetishev
d226e5b95f lib/ingestserver: Actually close the first vminsert connection (#10224)
Since the first connection is not closed, the vmstorage will never
terminate gracefully, which causes all caches to be reset on
start-up.

Follow-up for 244769a00d (#10136)

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-29 15:13:30 +01:00
Hui Wang
30bbb5660b docs: clarify recording rule labels do not support templating (#10186)
fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10183
2025-12-29 15:29:45 +02:00
Max Kotliar
1792b6bd9a docs/changelog: Add PR\issue links, fix typo in tip section 2025-12-29 12:58:07 +02:00
Artem Fetishev
f97f627f79 lib/storage: implement partition index (#8134)
This should reduce the disk space occupied by indexDBs, as they get deleted along
with the corresponding partitions once those partitions fall outside the
retention window.

- Motivation: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7599
- What to expect: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8134

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Andrei Baidarov <baidarov@nebius.com>
2025-12-24 18:53:49 +01:00
Phuong Le
785c1fd053 issues/question-template: fix typos (#947) 2025-12-24 11:37:34 +01:00
Aliaksandr Valialkin
697bfd5cee app/vmauth: properly verify whether the request has been canceled by the client in handleConcurrecnyLimitError()
The `err` may contain information about request cancelation performed by the server code.
In such cases the error must be logged. The error must be ignored only if the client canceled the request.

This is a follow-up for the commit c9596a0364

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
2025-12-24 11:31:36 +01:00
Artem Fetishev
f0ac6d9ac9 lib/storage: log the beginning and end of saving metric name usage stats to file (#10205)
This is to debug cases when the metric name tracker resets the tsid cache
after restart. It could be due to vmstorage not having enough time to stop
gracefully. The logs should provide this info.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-23 17:25:43 +01:00
Artem Fetishev
f0b251d967 lib/storage: fix per-idb cache stats (#10204)
This fixes the following corner case: if all instances of a cache have
zero size, the stats won't be set at all. This results in some weird
graphs if the cache is reset very often (such as tfssCache): the cache
sizeMaxBytes alternates between the actual value and zero.

Follow-up for f62893c151

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-23 17:06:10 +01:00
Nikolay
c3346ae8fd app/victoria-metrics: properly add prometheus metrics metadata (#10192)
Commit 5a587f2006 was not properly ported
to the single-node branch. Since the single-node version is able to perform both
promscrape and self-scrape, it is required to add the metadata-add methods to
those code paths.

This commit fixes the missing metadata additions to the storage.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10175
2025-12-23 13:57:19 +01:00
Jinlin
0ffb3fdfce lib/storage: fix log typo 2025-12-23 13:50:40 +01:00
Zakhar Bessarab
4e234ccbd1 docs/enterprise: add description of license key update (#10194)
Describe Your Changes:

- describe options of updating the enterprise license key
- fix a few typos
2025-12-23 13:37:36 +01:00
Alexander Frolov
943589ca31 lib/promscrape: fix isAutoMetric to recognize all auto-generated metrics
Previously, `scrape_labels_limit` was missing from the check.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10197
2025-12-23 13:36:50 +01:00
Aliaksandr Valialkin
c9596a0364 app/vmauth: add -maxQueueDuration command-line flag for graceful handling of short spikes in the number of concurrent requests
Previously a short spike in the number of concurrent requests immediately led to `429 Too Many Requests` errors
when the number of concurrent requests exceeded -maxConcurrentRequests or -maxConcurrentPerUserRequests.

This commit allows processing short spikes in the number of concurrent requests during the -maxQueueDuration timeout.
Requests are rejected only if they couldn't be served according to the concurrency limits within -maxQueueDuration.

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10078
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10112
2025-12-22 16:39:01 +01:00
Aliaksandr Valialkin
e7b0a00493 app/vmauth: follow-up for the commit 7f689df824
- Introduce backendURLs struct, which holds all the backend urls and allows stopping
  all the health checkers across all the backend urls with a single call to backendURLs.stopHealthChecks().

- Immediately cancel the pending Dial call to the backend when backendURLs.stopHealthChecks() is called.
  Use lib/netutil.Dialer.DialContext() for this.

- Replace a fragile closing of stopHealthCheckCh channel via stopHealthCheckOnce.Do()
  with easier to maintain call of cancel() func for the corresponding healthChecksContext.

- Wait until health checker goroutines are finished before return from UserInfo.stopHealthChecks().
  Previously the health checker goroutines could run for some time trying to dial the backend
  after the return from UserInfo.stopHealthChecks().

- Try dialing the broken backend for https urls. It is better if the broken backend logs the error
  instead of routing client requests to the broken backend.

- Log dial errors to the broken backend, so users could troubleshoot the backend connectivity issue with more details.

- Refer to the correct issue - https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9997 -
  in the comments explaining why periodic dialing of the broken backend is needed.
  Previously https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9890 was incorrectly referred to.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9997
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10147
2025-12-22 15:20:51 +01:00
Hui Wang
be0fe546e5 vmauth: skip a redundant request if all backends are broken with least_loaded policy (#10202)
similar to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10170
2025-12-22 13:06:12 +01:00
Hui Wang
13911db316 vmauth: add new counters to track the number of user request errors
follow up https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10177

Add `vmauth_user_request_backend_requests_total` and
`vmauth_unauthorized_user_request_backend_requests_total`, which track
the number of user request errors and are aligned with
`vmauth_user_requests_total`.

The existing `vmauth_http_request_errors_total` currently only counts
requests with `invalid_auth_token`. Once authorization has passed, any
subsequent request errors are tracked under
`xxx_user_request_backend_requests_total`.
2025-12-22 13:05:54 +01:00
Artem Fetishev
0cb90f91fc lib/storage: follow-up for d9c07dbc0b (#10169) - fix changelog
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-19 08:44:10 +01:00
Alexander Frolov
bdf65dde88 app/vmagent: make sure vmagent_rows_inserted_total counts samples (#10191)
As vminsert does

4d9b69b5a6/app/vminsert/newrelic/request_handler.go (L68)

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10191
2025-12-18 16:37:37 +01:00
Max Kotliar
4d9b69b5a6 docs/changelog: add known issue note related to memory leak on OpenTelemetry parsing code. 2025-12-18 12:39:12 +02:00
Nikolay
692a9be5fa lib/storage: check indexDB refCount at MustClose
In order to gracefully stop indexDB, refCount must be checked during
storage graceful shutdown.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10063
2025-12-17 18:48:53 +01:00
Kirill Kobylyanskiy
c8742ab120 lib/promscrape: add global sampleLimit support
This commit introduces the global `sampleLimit` setting to restrict the number
of samples accepted per scrape target, mirroring the behavior of
Prometheus.

Motivation:
1) The existing `-promscrape.seriesLimitPerTarget` flag currently takes
precedence over any `sample_limit` setting defined directly on the
scrape target. The new `sampleLimit` implementation ensures that the
target configuration is able to override the global setting, allowing
users to define specific limits per target.
2) The existing series limit flag uses memory-intensive Bloom filters,
resulting in high RAM consumption under high-cardinality scraping
scenarios. The `sampleLimit` provides a much simpler, low-overhead
alternative.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10145
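
The precedence rule described in the motivation can be sketched as follows (effectiveSampleLimit and acceptScrape are hypothetical helper names, not the actual promscrape API):

```go
package main

import "fmt"

// effectiveSampleLimit returns the per-target sample_limit when set (>0),
// otherwise the global sampleLimit. 0 means "no limit".
func effectiveSampleLimit(global, perTarget int) int {
	if perTarget > 0 {
		return perTarget
	}
	return global
}

// acceptScrape reports whether a scrape with n samples fits the limit.
func acceptScrape(n, limit int) bool {
	return limit <= 0 || n <= limit
}

func main() {
	// The target-level setting overrides the global one.
	limit := effectiveSampleLimit(1000, 50)
	fmt.Println(limit, acceptScrape(200, limit))
}
```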
2025-12-17 18:47:05 +01:00
Aliaksandr Valialkin
b6f8128273 Makefile: update golangci-lint from v2.4.0 to v2.7.2
See https://github.com/golangci/golangci-lint/releases/tag/v2.7.2
2025-12-17 16:59:02 +01:00
Aliaksandr Valialkin
bed7cbd0a4 all: consistently use encoding.DecompressZSTD* instead of zstd.Decompress* across the codebase
The encoding.DecompressZSTD* consistently updates the vm_zstd_block_decompress_calls_total metric.

Also make the following improvements after the commit 10f7cd2ffc:

- Add encoding.DecompressZSTDLimited() function and use it instead of zstd.DecompressLimited,
  so it properly updates vm_zstd_block_decompress_calls_total metric.

- Clarify description for the encoding.DecompressZSTD* and zstd.Decompress* functions.
2025-12-17 16:48:06 +01:00
Artem Fetishev
d9c07dbc0b lib/storage: rotate dateMetricIDCache instead of resetting (#10169)
Currently, `dateMetricIDCache` is reset when it is full, but it is never
reset when it is not full even though the data it stores is no longer needed. This leads
to the following problems:
- During regular data ingestion the cache sizeBytes may exceed max
allowed size and the cache gets reset which may potentially slow down
data ingestion (see #10064)
- The cache is per-indexDB. This means that in partition index (#8134)
there will be as many instances of this cache as the number of
partitions. If someone performs a backfill across all partitions, this
will fill all caches and they will never get reset even if no more
historical data is ingested.

So the solution is to periodically rotate the cache. After the first
rotation the data is not deleted but moved to the `prev` storage. After the
second rotation `prev` gets deleted. This gives the cache an opportunity
to restore the `prev` data if it is still in use. Based on #10167.

This PR also removes the recently introduced
`-storage.cacheSizeIndexDBDateMetricID` flag (see #10135). This should
be safe since it is new and its use case is very niche, i.e. no one
would really use it.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-17 15:43:05 +01:00
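
The two-generation rotation described above can be sketched as follows. This is an illustrative sketch with hypothetical names, not the actual `dateMetricIDCache` implementation: on each rotation `curr` becomes `prev` and the old `prev` is dropped, so an entry survives as long as it is touched at least once per rotation interval:

```go
package main

import "fmt"

// rotatingCache keeps two generations of entries. Lookups that hit prev
// restore the entry into curr, so data still in use survives rotations;
// unused data disappears after two rotations.
type rotatingCache struct {
	curr map[uint64]struct{}
	prev map[uint64]struct{}
}

func newRotatingCache() *rotatingCache {
	return &rotatingCache{curr: map[uint64]struct{}{}, prev: map[uint64]struct{}{}}
}

func (c *rotatingCache) Set(k uint64) { c.curr[k] = struct{}{} }

func (c *rotatingCache) Has(k uint64) bool {
	if _, ok := c.curr[k]; ok {
		return true
	}
	if _, ok := c.prev[k]; ok {
		c.curr[k] = struct{}{} // still in use: restore into curr
		return true
	}
	return false
}

// Rotate moves curr to prev and deletes the old prev generation.
func (c *rotatingCache) Rotate() {
	c.prev = c.curr
	c.curr = map[uint64]struct{}{}
}

func main() {
	c := newRotatingCache()
	c.Set(42)
	c.Rotate()
	fmt.Println(c.Has(42)) // true: restored from prev
	c.Rotate()
	c.Rotate()
	fmt.Println(c.Has(42)) // false: unused for two rotations
}
```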
Artem Fetishev
20ad9cd395 lib/storage: introduce metricIDCache
The cache serves the same purpose as `dateMetricIDCache` but is used for
caching metricIDs from global index.
The cache was introduced in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10167 and it was decided to add it in a separate commit to reduce the diff.

Related  PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10167
2025-12-17 13:31:11 +01:00
Hui Wang
8b3fe9cdec app/vmauth: add new counters to track the number of requests sent to backends
We have `vmauth_user_requests_total` and
`vmauth_unauthorized_user_requests_total` to track requests from the
user side. However, in scenarios such as request timeouts or when the
response code matches `retry_status_code`, a single request may be
retried across multiple backends.

Exposing counters `vmauth_user_request_backend_requests_total` and
`vmauth_unauthorized_user_request_backend_requests_total` that track the
number of requests sent to backends provides insight into the routing
logic and can help identify if requests are being consistently retried,
which may contribute to increased request duration.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10171
2025-12-17 13:27:08 +01:00
Hui Wang
e1e367b3cb app/vmauth: properly increment metric xxx_user_request_backend_errors_total
Currently, backendErrors may be counted twice if a request to the
backend fails due to context.DeadlineExceeded.

9bc7a17d80/app/vmauth/main.go (L328)

9bc7a17d80/app/vmauth/main.go (L294)

And we increment this counter in a way that is somewhat inconsistent.
Given that the counter's name is `xx_request_backend_errors_total`, it
should only increase when a backend request returns an error. This value
can exceed the user request error count if multiple backend requests
fail for a single user request.
The `xxx_request_backend_errors_total` counter should be used in
conjunction with the `xxx_request_backend_requests_total` introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10171.
2025-12-17 13:24:26 +01:00
Hui Wang
f40c6fcad1 app/vmauth: skip a redundant request if all backends are broken with first_available policy
There is no reason to send a request to the first backend if all
backends are marked as broken.
Also,
> // getFirstAvailableBackendURL returns the first available backendURL,
> // which isn't broken.

The fix only skips a redundant request when all backends are
unavailable; it doesn't introduce any changes from the user's perspective,
so I skipped the changelog entry.
2025-12-17 13:22:37 +01:00
Aliaksandr Valialkin
b6bc186013 docs/victoriametrics/Articles.md: add https://developer-friendly.blog/blog/2024/06/17/unlocking-the-power-of-victoriametrics-a-prometheus-alternative/ 2025-12-16 15:46:23 +01:00
Aliaksandr Valialkin
9bc7a17d80 lib/protoparser/opentelemetry: typo fix: wince -> since
This is a follow-up for the commit 293d80910c
2025-12-15 20:13:45 +01:00
f41gh7
9ce548dcb5 docs: update release version to latest 2025-12-15 10:37:35 +01:00
f41gh7
82e583338d docs: update LTS releases 2025-12-15 10:34:43 +01:00
Aliaksandr Valialkin
19009836c7 vendor: update github.com/valyala/fastjson from v1.6.5 to v1.6.7 2025-12-14 23:09:43 +01:00
Max Kotliar
c2362ab670 docs: review links in changelogs 2025-12-12 19:43:15 +02:00
f41gh7
d04a42e846 make vmui-update 2025-12-12 12:50:13 +01:00
f41gh7
0d930dda16 CHANGELOG.md: cut v1.132.0 release 2025-12-12 12:45:34 +01:00
Artem Fetishev
e026215701 lib/storage: Document post-delete cache resets (#10158)
When the time series deletion is performed some of the storage caches
need to be reset but some not. This PR reviews all storage caches and
documents why there are reset or not and also places all the resetting
logic (and comments) in one place.
2025-12-12 11:11:30 +01:00
JAYICE
34a542c324 lib/storage: include last sample when query at the last millisecond of the day
One millisecond shouldn't be subtracted from the `tr.MaxTimestamp`, and
related test cases will be added

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9804
2025-12-12 11:01:06 +01:00
Fred Navruzov
ff0aaa38b7 docs/vmanomaly: release v1.28.2 (#10160)
### Describe Your Changes

Update docs and assets (visualizations) for /anomaly-detection section
with `v1.28.2` release

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-12-11 20:56:59 +02:00
Max Kotliar
0e2f0ac95f lib/protoparser/opentelemetry: fix typo in code
#
github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/pb
lib/protoparser/opentelemetry/pb/pb.go:1683:19: undefined: lctx

Bug introduced in
1dc71212f8
2025-12-11 18:37:04 +02:00
Max Kotliar
7f689df824 app/vmauth: validate backend with a dial check before marking it healthy (#10147)
### Describe Your Changes

Previously, a backend was considered healthy as soon as its
'bu.brokenDeadline' deadline expired, even if it was still unavailable.
This caused avoidable request failures and retries.

Now vmauth performs a TCP dial (1s timeout) before restoring the backend
to the healthy
pool. This avoids routing traffic to backends that are still down.

The dial check also covers cases where a route to the backend cannot be
resolved. Without this check, user requests would hang until the
connection timeout, leading to long waits
or errors. The new check fails fast and doesn't impact real user
requests.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9997


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-12-11 18:26:59 +02:00
Max Kotliar
bd725bdd69 dashboards: add useful links to dashboards
Dashboards:

- Add a link to proper docs section
- Add a link to troubleshooting page
- Add links to community and enterprise support

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9904
2025-12-11 18:11:07 +02:00
Aliaksandr Valialkin
712b7cfeeb lib/promscrape: allow scraping targets with responses equal to c.maxScrapeSize
Return "too big response size" error only for responses bigger than c.maxScrapeSize
(this option can be set either via max_scrape_size option inside scrape config
or via -promscrape.maxScrapeSize command-line flag).

Previously responses with sizes equal to c.maxScrapeSize were incorrectly rejected.
2025-12-11 16:15:46 +01:00
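
The fixed boundary condition boils down to a strict comparison. A minimal sketch (the function name is illustrative; only the flag name comes from the commit above):

```go
package main

import "fmt"

// checkScrapeSize rejects a response only when it is strictly larger than
// maxScrapeSize, so a response of exactly maxScrapeSize bytes is accepted.
// Previously an inclusive comparison (>=) incorrectly rejected the equal case.
func checkScrapeSize(respSize, maxScrapeSize int64) error {
	if respSize > maxScrapeSize {
		return fmt.Errorf("too big response size %d; exceeds -promscrape.maxScrapeSize=%d", respSize, maxScrapeSize)
	}
	return nil
}

func main() {
	fmt.Println(checkScrapeSize(1024, 1024)) // equal size is accepted
	fmt.Println(checkScrapeSize(1025, 1024)) // bigger size is rejected
}
```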
Aliaksandr Valialkin
1dc71212f8 lib/protoparser/opentelemetry/pb: reset the decoderContext.ls.Labels length to zero after clearing all the references to the original byte slice
This is a follow-up for 25f49e6f54
2025-12-11 15:39:32 +01:00
Aliaksandr Valialkin
25f49e6f54 lib/protoparser/opentelemetry: explicitly clear all the references to the underlying byte slice at decoderContext.ls.Labels up to its capacity
This should prevent from the excess memory usage because of dangling source byte slices
referred by decoderContext.ls.Labels.

This is a change similar to 63a68edb05
2025-12-11 15:33:13 +01:00
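
The technique in the two commits above — clearing slice elements up to the slice's full capacity so no stale element keeps the large source buffer alive — can be sketched like this (the `Label`/`resetLabels` names are illustrative, not the actual decoderContext code):

```go
package main

import "fmt"

// Label holds references into the original request byte slice.
type Label struct {
	Name, Value []byte
}

// resetLabels clears every element up to the slice's full capacity (not
// just its length), dropping all references to the underlying byte slice
// so GC can reclaim it, then truncates to zero length for reuse.
func resetLabels(labels []Label) []Label {
	labels = labels[:cap(labels)]
	for i := range labels {
		labels[i] = Label{} // drop references to the source buffer
	}
	return labels[:0]
}

func main() {
	src := []byte("metric_name,job=app")
	labels := make([]Label, 0, 8)
	labels = append(labels, Label{Name: src[:11], Value: src[12:]})
	labels = resetLabels(labels)
	fmt.Println(len(labels), cap(labels)) // length reset, capacity preserved for reuse
}
```

Truncating with `labels[:0]` alone would keep the old elements (and their references) alive in the backing array, which is exactly the dangling-reference problem the commits fix.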
Max Kotliar
dcf9f0eb7b lib/promscrape: Add a warning to active targets panel if -dropOriginalLabels=true (some debug info not available)
Previously the original labels were preserved
(-dropOriginalLabels=false). In
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9772 the default
behavior was changed: now vmagent/vmsingle drops the original labels. The
change created some confusion in the UI. For example, the debug relabeling
column is completely hidden when the labels are not available. It created
a stream of questions.

This commit adds a warning similar to the one on the "Discovered
targets" tab, and also always shows the "Debug relabeling" column. When
there is no info for it, "N/A" is printed.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9901
Follow-up https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9772
2025-12-11 16:24:50 +02:00
Aliaksandr Valialkin
606382178b lib/protoparser/protoparserutil: do not store too big buffers to the pool at ReadUncompressedData if only a small part of the buffer is used last time
This should prevent from excess memory usage because of inefficiently used buffers.

This should help the case at https://github.com/VictoriaMetrics/VictoriaLogs/issues/869
2025-12-11 15:11:44 +01:00
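
The pooling heuristic described above can be sketched with `sync.Pool`: return a buffer to the pool only if a reasonable fraction of its capacity was actually used; otherwise drop it so one huge request doesn't pin a large buffer forever. The 50% threshold below is an illustrative assumption, not the actual value used in `protoparserutil`:

```go
package main

import (
	"fmt"
	"sync"
)

var bufPool = &sync.Pool{New: func() any { return new([]byte) }}

// putBuf returns the buffer to the pool only when at least half of its
// capacity was used last time; otherwise it drops the buffer so GC can
// reclaim the memory. Returns whether the buffer was kept.
func putBuf(b *[]byte) bool {
	if cap(*b) > 2*len(*b) {
		return false // buffer is mostly unused: let GC reclaim it
	}
	*b = (*b)[:0]
	bufPool.Put(b)
	return true
}

func main() {
	small := make([]byte, 100, 128)
	fmt.Println(putBuf(&small)) // kept: well utilized

	huge := make([]byte, 100, 1<<20)
	fmt.Println(putBuf(&huge)) // dropped: only a tiny part of 1MiB was used
}
```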
Artem Fetishev
220249f023 lib/storage: use lrucache to implement tagFilters loops cache
The tagFilters loop cache is per-indexDB which means that currently
there are two instances, one for idbCurr and one for idbPrev. When the
partition index (#8134) is released, there will be as many instances of
this cache as there will be partitions.

The cache is implemented using workingsetcache. Which occupies at least
30MB even when unused. Given that only the latest indexDB is used most
of the time, a lot of memory can be wasted.

Therefore the cache implementation is changed to lrucache because it
does not consume memory when it is unused and also has timeout-based
eviction.

This is a follow-up for 4cd727a511
(https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10072).
2025-12-11 08:42:58 +01:00
Max Kotliar
c6731f964c dashboards: add memory usage breakdown panels into Drilldown sections
Right now we have two separate panels: RSS memory % usage and RSS
anonymous memory % usage. This makes trend comparison difficult because
one has to visually correlate two independent panels. Another problem
is that these panels don't show Go runtime allocations at all. The same
applies to memory allocated in C: there are allocations in C (zstd) one
should account for, but there is not even a metric to expose them.

The commit adds a Memory usage breakdown panel into the Drilldown section. It
provides insight into the Go Stack, Go Heap, Go Heap Released, Go Other,
Mmap: VM Cache, and File cache memory distribution.

It should help spot trend changes in memory by type and make investigating
issues such as #10069 and #10028 easier.

Panel info:
This panel shows memory usage by category.

How to use:
- Start from the high-level RSS panel.
- Identify an instance with unexpected or abnormal memory growth.
- Filter to that instance to inspect the detailed breakdown here.

Interpretation
- A steadily rising Go Heap usually indicates a memory leak. Collect
pprof memory profile.
- A growing Go Stack commonly points to a goroutine leak.

<img width="1508" height="628" alt="Screenshot 2025-12-08 at 13 18 44"
src="https://github.com/user-attachments/assets/0e794324-e86d-468e-b926-8bb11f5a2043"
/>
<img width="1503" height="674" alt="Screenshot 2025-12-08 at 13 19 34"
src="https://github.com/user-attachments/assets/62fc3fff-33b3-4dfe-ad3f-ad0526a8a606"
/>

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10139
2025-12-11 08:39:00 +01:00
Sinotov Vladimir
859435a8df lib/protoparser: added push data with zabbix connector (#6087)
Support receiving data from the Zabbix connector with API `/zabbixconnector/api/v1/history`

Labels:
    - The metric name is added to the `__name__` label.
    - Host name to `host` label.
    - Visible name to `hostname` label.

The returned response complies with the requirements of the Zabbix
connector [protocol](https://www.zabbix.com/documentation/current/en/manual/config/export/streaming).

Useful links:
- Zabbix Streaming to external systems
(https://www.zabbix.com/documentation/current/en/manual/config/export/streaming)
- Zabbix Newline-delimited JSON export
(https://www.zabbix.com/documentation/current/en/manual/appendix/protocols/real_time_export)

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6087
2025-12-10 17:00:27 +01:00
Max Kotliar
5b12fd35d7 app/vminsert: improve slowness-based rerouting logic
Adjust slowness-based rerouting logic.

Rerouting now occurs only from the slowest node, and only if the cluster
as a whole has enough available capacity to handle the additional load.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9890
2025-12-10 16:25:44 +01:00
Aliaksandr Valialkin
293d80910c lib/protoparser/opentelemetry: eliminate memory allocations during parsing of samples send via OpenTelemetry protocol
This increases the parser performance by 4x-6x.

This commit uses the technique similar to https://github.com/VictoriaMetrics/VictoriaLogs/pull/720

goos: linux
goarch: amd64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/stream
cpu: AMD Ryzen 7 PRO 5850U with Radeon Graphics
                                                    │   old.txt    │               new.txt               │
                                                    │    sec/op    │   sec/op     vs base                │
ParseStream/default-metrics-labels-formatting-16      15.565µ ± 1%   2.150µ ± 3%  -86.19% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   24.228µ ± 2%   4.355µ ± 1%  -82.02% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          23.028µ ± 2%   3.395µ ± 1%  -85.26% (p=0.000 n=10)
geomean                                                20.55µ        3.168µ       -84.59%

                                                    │   old.txt    │                new.txt                 │
                                                    │     B/s      │      B/s       vs base                 │
ParseStream/default-metrics-labels-formatting-16      127.9Mi ± 1%    918.3Mi ± 3%  +617.82% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   82.19Mi ± 2%   453.32Mi ± 1%  +451.57% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          86.47Mi ± 2%   581.56Mi ± 1%  +572.52% (p=0.000 n=10)
geomean                                               96.88Mi         623.3Mi       +543.34%

                                                    │   old.txt    │                 new.txt                  │
                                                    │     B/op     │    B/op      vs base                     │
ParseStream/default-metrics-labels-formatting-16      12.53Ki ± 0%   0.00Ki ± 0%  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   21.15Ki ± 1%   0.00Ki ±  ?  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          20.74Ki ± 1%   0.00Ki ±  ?  -100.00% (p=0.000 n=10)
geomean                                               17.65Ki                     ?                       ¹ ²
¹ summaries must be >0 to compute geomean
² ratios must be >0 to compute geomean

                                                    │  old.txt   │                new.txt                 │
                                                    │ allocs/op  │ allocs/op  vs base                     │
ParseStream/default-metrics-labels-formatting-16      426.0 ± 0%    0.0 ± 0%  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   514.0 ± 0%    0.0 ± 0%  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          514.0 ± 0%    0.0 ± 0%  -100.00% (p=0.000 n=10)
geomean                                               482.8                   ?                       ¹ ²
2025-12-10 16:11:59 +01:00
Artem Fetishev
bc4d98b358 app/vmstorage: properly name dateMetricIDCache metrics
The following dmc metrics were given standard names, i.e.:

- vm_date_metric_id_cache_resets_total became
vm_cache_resets_total{type="indexdb/date_metricID"}
- vm_date_metric_id_cache_syncs_total became
vm_cache_syncs_total{type="indexdb/date_metricID"}

This change should be safe since these metrics are currently not used in
VictoriaMetrics Grafana dashboards.

Additionally, other cache metrics were organized within the code so that
the metrics appear in a consistent order.
2025-12-10 14:57:52 +01:00
Alexander Frolov
ad153f72ef lib/storage: utilize persisted hourMetricIDs cache to avoid redundant indexDB lookups after vmstorage restart
This commit optimizes the performance of the storage by improving the utilization of persisted hourMetricIDs cache to avoid redundant indexDB lookups after vmstorage restart. The change refactors the hour-based cache checking logic using a switch statement to handle multiple hour scenarios more efficiently.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10114
2025-12-10 14:56:05 +01:00
Vadim Rutkovsky
f2578a9764 docs/victoriametrics: update LTS-releases.md (#10153)
### Describe Your Changes

Doc update to mention fresh patch releases - 1.122.10 and 1.110.25

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-12-10 15:11:26 +02:00
Aliaksandr Valialkin
d5e19717b7 Makefile: use the correct -trim_path at pprof-cpu
It shouldn't end with @.

The `PPROF_FILE=/path/to/cpu.pprof make pprof-cpu` is good for investigating profiles received from production builds.
2025-12-10 13:41:10 +01:00
Max Kotliar
5c40328e5f docs: mention Grafana panel that can help with swap related issues 2025-12-10 14:08:43 +02:00
Yury Moladau
1117437456 app/vmui: improve legend auto-collapse threshold, warning and toggle (#10140)
### Describe Your Changes

This PR improves the legend auto-collapse behavior in vmui:
- Increase the legend auto-collapse threshold from `20` to `100` series.
- Add a warning message when the legend is collapsed by default, showing
the actual series count.
- Add a user setting to disable automatic legend collapsing (enabled by
default).

Related issue: #10075

<img width="352" alt="image"
src="https://github.com/user-attachments/assets/22ee2ef9-6369-47a8-87a1-c63a0e17fccd"
/>
<img width="1618" height="197" alt="image"
src="https://github.com/user-attachments/assets/791eb9b6-4397-476d-ad44-5152e50d1975"
/>


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2025-12-10 13:59:16 +02:00
Aliaksandr Valialkin
094a7cf3f9 lib/protoparser/opentelemetry/stream: benchmark cases when prometheus-compatible naming for metrics and labels is enabled 2025-12-10 11:50:46 +01:00
Artem Fetishev
538e489497 docs: Update cache tuning section (#10149)
- Remove mentions of the `Caches` section in Grafana dashboards since this section does not exist anymore.
- Slightly rewrite the description of the cache panels in the Troubleshooting section.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-12-10 11:42:33 +01:00
Aliaksandr Valialkin
744aa3fe9f lib/protoparser/opentelemetry/stream: make the BenchmarkParseStream closer to real production cases
- Add more metrics to the protobuf to parse.
- Measure scan speed of the original protobuf at bytes/sec. Previously the number of ParseStream() calls per second was measured.
2025-12-10 11:21:57 +01:00
Aliaksandr Valialkin
44a3885f97 lib/protoparser/opentelemetry/stream: avoid memory allocations for bytes.NewBuffer() on every iteration of BenchmarkParseStream
Re-use benchReader for reading the same data on every iteration of BenchmarkParseStream.
2025-12-10 11:10:39 +01:00
Aliaksandr Valialkin
f43264f9f2 lib/ioutil: add missing package after the commit 2da010495c 2025-12-10 11:07:15 +01:00
Aliaksandr Valialkin
e07bc7a74e lib/prompb: move all the code related to WriteRequestUnmarshaler to a separate file - write_request_unmarshaler.go
This should improve code maintenance a bit.

This is a follow-up for the commit b98e592752
2025-12-10 10:45:59 +01:00
Aliaksandr Valialkin
d1680063f5 lib/prompb: rename MetricMetadataType to MetricType
Also rename MetricMetadata* constants to MetricType* constants.

This makes the code a bit more readable.

This is a follow-up for the commit 25cd5637bc

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2974
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9306
2025-12-10 01:18:44 +01:00
Aliaksandr Valialkin
2da010495c all: pool io.LimitedReader in order to save a memory allocation and reduce CPU usage a bit 2025-12-10 01:18:43 +01:00
Artem Fetishev
7c78f95f2e docs: Update flags (#10148)
Follow-up for dc5d7aa4ce
(https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10135)

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-09 17:48:40 +01:00
Kirill Yurkov
5bd67c5f49 docs: recommend disabling swap (#10113)
add swap disable commands in install recommendations to prevent
performance issues

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-12-09 15:50:13 +02:00
Max Kotliar
c618f471ca apptest: make results order stable in test special query regression
Sometimes test fails with error:

--- FAIL: TestClusterSpecialQueryRegression (15.57s)
special_query_regression_test.go:76: unexpected /api/v1/export
response (-want, +got):
          &apptest.PrometheusAPIV1QueryResponse{
          	... // 1 ignored field
          	Data: &apptest.QueryData{
          		... // 1 ignored field
          		Result: []*apptest.QueryResult{
          			&{
          				Metric: map[string]string{
          					"__name__":
"prometheus.sensitiveRegex",
- 					"label":
"SensitiveRegex",
+ 					"label":
"sensitiveRegex",
          				},
          				Sample:  nil,
          				Samples: {&{Timestamp:
1707123456700, Value: 10}},
          			},
          			&{
          				Metric: map[string]string{
          					"__name__":
"prometheus.sensitiveRegex",
- 					"label":
"sensitiveRegex",
+ 					"label":
"SensitiveRegex",
          				},
          				Sample:  nil,
          				Samples: {&{Timestamp:
1707123456700, Value: 10}},
          			},
          		},
          	},
          	ErrorType: "",
          	Error:     "",
          	IsPartial: false,
          }

FAIL
FAIL	github.com/VictoriaMetrics/VictoriaMetrics/apptest/tests
	18.676s
FAIL
2025-12-09 15:20:01 +02:00
Artem Fetishev
f62893c151 lib/storage: report per-idb cache stats only once
`tagFiltersCache` and `dateMetricIDCache` are now per-indexDB. Currently
we have 2 instances of indexDB (prev and curr) and therefore 2 instances
of each cache.

When the storage stats are collected, the stats of the individual caches are
added together. For example, if the `sizeMaxBytes` of each
tagFiltersCache is `100MB` and the `sizeBytes` of each instance is
`10MB` and `99MB`, then the resulting stats will be `sizeMaxBytes ==
200MB, sizeBytes == 109MB`.

While this is accurate, these stats hide a potential problem. They say
that the cache utilization is slightly above `50%` (109/200) and
everything seems to be okay. But in reality one of the caches is
99% utilized and will soon start evicting existing records to make
room for new ones, potentially slowing down data retrieval. Ops
won't see it and will not take the necessary action.

The solution is to report stats only for the cache instance whose
utilization is the highest.

Alternatives considered:
- #10123. Might work, but breaks the encapsulation and can potentially
be slower
- Do not aggregate the stats and report them per-indexDB. This increases
the number of metrics and makes it dependent on the number of indexDB
instances (which can be many once #8134 is released).

Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134
2025-12-09 12:43:50 +01:00
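
The reporting approach chosen above can be sketched as follows (type and function names are illustrative, not the actual storage code): instead of summing per-indexDB cache stats, report only the instance with the highest utilization, so a 99%-full cache isn't hidden behind an idle one:

```go
package main

import "fmt"

type cacheStats struct {
	SizeBytes    uint64
	SizeMaxBytes uint64
}

// maxUtilizationStats returns the stats of the cache instance with the
// highest utilization instead of the sum across all instances.
func maxUtilizationStats(all []cacheStats) cacheStats {
	var best cacheStats
	bestUtil := -1.0
	for _, s := range all {
		if s.SizeMaxBytes == 0 {
			continue
		}
		u := float64(s.SizeBytes) / float64(s.SizeMaxBytes)
		if u > bestUtil {
			bestUtil, best = u, s
		}
	}
	return best
}

func main() {
	prev := cacheStats{SizeBytes: 10 << 20, SizeMaxBytes: 100 << 20}
	curr := cacheStats{SizeBytes: 99 << 20, SizeMaxBytes: 100 << 20}
	s := maxUtilizationStats([]cacheStats{prev, curr})
	// Summing would report ~55% utilization; the max-utilization cache is at 99%.
	fmt.Printf("%.0f%%\n", 100*float64(s.SizeBytes)/float64(s.SizeMaxBytes))
}
```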
JAYICE
76f5def301 dashboard: fix page fault panel (#10141)
add `[$__rate_interval]` to fix page fault panel introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9977
2025-12-09 12:41:28 +01:00
Artem Fetishev
3be5ed0e32 Revert "lib/storage: after deleting series, reset tsid only once" (#10143)
This reverts commit dbe71700b5.

tsidCache is persistent and must be reset before deletedMetricID records
are added to the index. This is needed to handle ungraceful shutdowns
properly.
2025-12-09 10:57:08 +01:00
Aliaksandr Valialkin
4ac40d955b lib/prompb: use MetricMetadataType type for MetricMetadata.Type field
This eliminates the need of manual conversion between MetricMetadataType and uint32 / int32.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2974

This is a follow-up for the commit 5a587f2006
2025-12-08 20:31:29 +01:00
Artem Fetishev
dc5d7aa4ce lib/storage: properly report dateMetricIDCache stats
A number of changes to `dateMetricIDCache` stats and configuration:

1. Export `SizeMaxBytes` metric and make the size configurable via a
flag
2. Fix `EntriesCount` and `SizeBytes` stats. Previously the cache
reported these stats for its immutable part only, whereas there are cases
when the number of entries in its mutable part is comparable with the
number in the immutable part. The stats from the mutable part remain
invisible until it is sync'ed to the immutable part. It is also possible
that the cache gets reset after the sync because the cache size exceeds
the max allowed size. Reporting the stats for both mutable and immutable
parts should provide a clear picture of the cache utilization.

Together, SizeBytes and SizeMaxBytes should enable tracking the cache
utilization properly and taking appropriate actions if necessary (such as
adjusting the memory resources and/or the cache size limit via a flag).

Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10064
2025-12-08 14:17:58 +01:00
JAYICE
244769a00d vmstorage: skip last sleep when closing vminsertSrv connections
After closing the last connection to vminsert, vmstorage will still wait for
an interval, causing the actual shutdown time to always be longer than
configured.

This commit just skips the last sleep.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10136
2025-12-08 14:10:17 +01:00
Max Kotliar
8e81d54851 Revert "dashboards: add memory usage breakdown panels into Drilldown sections"
This reverts commit 5117cde8bc.
2025-12-08 13:42:10 +02:00
Max Kotliar
5117cde8bc dashboards: add memory usage breakdown panels into Drilldown sections
Right now we have two separate panels: RSS memory % usage and RSS
anonymous memory % usage. This makes trend comparison difficult because
one has to visually correlate two independent panels. Another problem
is that these panels don't show Go runtime allocations at all. The same
applies to memory allocated in C: there are allocations in C (zstd) one
should account for, but there is not even a metric to expose them.

The commit adds a Memory usage breakdown panel into the Drilldown section. It
provides insight into the Go Stack, Go Heap, Go Heap Released, Go Other,
Mmap: VM Cache, and File cache memory distribution.

It should help spot trend changes in memory by type and make investigating
issues such as #10069 and #10028 easier.

Panel info:
This panel shows memory usage by category.

How to use:
- Start from the high-level RSS panel.
- Identify an instance with unexpected or abnormal memory growth.
- Filter to that instance to inspect the detailed breakdown here.

Interpretation
- A steadily rising Go Heap usually indicates a memory leak. Collect
pprof memory profile.
- A growing Go Stack commonly points to a goroutine leak.
2025-12-08 13:39:34 +02:00
Artem Fetishev
85367cae38 Idb blockcache metrics unittest (#10050)
indexDB has 3 block caches. These caches export metrics. Storage
collects these metrics for each indexDB it has (currently prev and curr
only).

There is a potential problem:
- These caches are shared by all indexDBs
- Each indexDB reports the block cache metrics.
- Storage collects the metrics of all indexDBs by adding them together.

I.e. it is possible to count block cache metrics several times.
This is not the case in the current implementation because the addition of
the metrics is intentionally not performed.

The added unit test 1) demonstrates that the resulting counts are
reported correctly and 2) protects from future unintentional changes in
this behavior.

Additionally a code comment is added to explain why block cache metrics
are not summed up.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-06 18:14:52 +01:00
Aliaksandr Valialkin
159b71cabb lib/protoparser/influx: properly clean references to underlying byte slices from tagsPool and fieldsPool inside unmarshalContext
This should prevent from memory leaks when unmarshalContext fields point to unused byte slices.
2025-12-06 11:52:57 +01:00
Aliaksandr Valialkin
78b8c773ae docs/victoriametrics/: remove misleading statement about extending ext4 partition to 16TB+
It is enough to recommend the given format options for disks with 1TB+ sizes
2025-12-05 23:00:47 +01:00
Nikolay
aab92d3c0f protoparser/influx: reduce memory allocation (#10109)
Previously, the influx parser allocated a new byte slice for
unescaping Row fields. This adds extra pressure on the GC and increases CPU
usage.

This commit changes unescaping to in-place updates of the provided []byte.
Since the request being parsed is actually a []byte converted into a
string, it's safe to update it in-place. To be able to interact with
[]byte directly, this commit changes the parser API to accept []byte
instead of string.

Benchstat:
```
                                 │   before    │                after                │
                                 │   sec/op    │   sec/op     vs base                │
RowsUnmarshalUnescape-10           74.68n ± 4%   54.23n ± 5%  -27.38% (p=0.000 n=10)
RowsUnmarshalUnescapeNoEscape-10   40.41n ± 2%   42.59n ± 1%   +5.39% (p=0.000 n=10)
geomean                            54.93n        48.06n       -12.51%

                                 │    before    │                after                 │
                                 │     B/s      │     B/s       vs base                │
RowsUnmarshalUnescape-10           1.035Gi ± 4%   1.425Gi ± 5%  +37.72% (p=0.000 n=10)
RowsUnmarshalUnescapeNoEscape-10   1.613Gi ± 2%   1.531Gi ± 1%   -5.11% (p=0.000 n=10)
geomean                            1.292Gi        1.477Gi       +14.32%

                                 │   before    │                after                 │
                                 │    B/op     │    B/op     vs base                  │
RowsUnmarshalUnescape-10           149.00 ± 0%   96.00 ± 0%  -35.57% (p=0.000 n=10)
RowsUnmarshalUnescapeNoEscape-10    80.00 ± 0%   80.00 ± 0%        ~ (p=1.000 n=10) ¹
geomean                             109.2        87.64       -19.73%
¹ all samples are equal

                                 │   before   │                after                 │
                                 │ allocs/op  │ allocs/op   vs base                  │
RowsUnmarshalUnescape-10           5.000 ± 0%   1.000 ± 0%  -80.00% (p=0.000 n=10)
RowsUnmarshalUnescapeNoEscape-10   1.000 ± 0%   1.000 ± 0%        ~ (p=1.000 n=10) ¹
geomean                            2.236        1.000       -55.28%
```

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10053

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-12-05 18:18:07 +01:00
Nikolay
2bef26288e lib/memory: add validation for remaining system memory
Previously, if the user-defined value for the `memory.allowedBytes` flag
exceeded the system memory limit, the remaining memory could become
negative. This resulted in incorrect memory auto-detection in various
components, such as the vmstorage unique time series limit and part
sizes.

This commit adds a check for negative values and also logs the system
memory limit at start-up of VM components.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10083
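The guard described above amounts to clamping the remaining-memory calculation at zero. A minimal sketch, assuming a function name and log message of my own choosing (the real lib/memory code differs):

```go
package main

import (
	"fmt"
	"log"
)

// remainingMemory returns the system memory left after reserving
// allowedBytes, guarding against a negative result when the configured
// value exceeds the system limit.
// Hypothetical sketch; names differ from the actual lib/memory code.
func remainingMemory(systemLimit, allowedBytes int64) int64 {
	remaining := systemLimit - allowedBytes
	if remaining < 0 {
		log.Printf("allowedBytes=%d exceeds system memory limit %d; assuming 0 remaining", allowedBytes, systemLimit)
		return 0
	}
	return remaining
}

func main() {
	fmt.Println(remainingMemory(8<<30, 4<<30)) // 4 GiB remaining
	fmt.Println(remainingMemory(4<<30, 8<<30)) // clamped to 0
}
```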
2025-12-05 18:14:04 +01:00
Hui Wang
c14dbad33b vmselect: disable rollup result cache for instant queries that contain rate function
Previously, in order to cache results for `rate`, we treated
`rate(m[d])` as `(increase(m[d]) / d)` and cached the `increase` result.
However, in MetricsQL, `rate(d) = (lastValue - firstValue) /
(lastTimestamp - firstTimestamp)`, which does not equal
`increase(d)/d` when `d != (lastTimestamp - firstTimestamp)`.
The issue primarily arises when the time series samples are not
continuous, but the discrepancy is hard to debug and can be confusing to
users: since range queries don't use this optimization, recording rule
results could differ from raw query results in VMUI.
Therefore, it is better to disable this optimization until results can
be cached correctly.

fixes https://github.com/VictoriaMetrics/victoriaMetrics/issues/10098
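The discrepancy is easy to demonstrate numerically. The sketch below (illustrative only; not the MetricsQL implementation) computes `rate` from the actual timestamp span of the samples and compares it against `increase/d` when the samples cover only part of the nominal window:

```go
package main

import "fmt"

// rate follows the MetricsQL definition: the value delta divided by the
// actual timestamp span of the samples, not the nominal window d.
// Illustrative sketch only.
func rate(values, timestamps []float64) float64 {
	n := len(values)
	return (values[n-1] - values[0]) / (timestamps[n-1] - timestamps[0])
}

func main() {
	// Samples cover only 30s of a nominal 60s window d.
	vals := []float64{100, 130}
	ts := []float64{10, 40}
	d := 60.0

	increase := vals[len(vals)-1] - vals[0] // 30
	fmt.Println(rate(vals, ts))             // 1: 30 over a 30s span
	fmt.Println(increase / d)               // 0.5: caching increase/d would be wrong
}
```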
2025-12-05 17:38:32 +01:00
Artem Fetishev
dbe71700b5 lib/storage: after deleting series, reset tsid only once
As indexDBs became independent from each other, the tsidCache is now
reset more than once when the DeleteSeries() operation is performed,
but it needs to be reset only once. Thus, move the deletion from indexDB
to Storage.

Follow-up for 16d75ab0bd.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10119
2025-12-05 17:38:02 +01:00
Hui Wang
d4fa326659 vmselect: reset rollup result cache with -search.disableCache when necessary
There’s no need to call `c.Reset()` for the rollup result cache if it is
not persisted (`-cacheDataPath` not specified) or has already been
cleared by `-search.resetRollupResultCacheOnStartup`, since in both
cases the cache is newly created.


Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10095
2025-12-05 17:37:30 +01:00
Andrei Baidarov
040ef931d1 vmalert: do not increment errors counter on cancel context errors
Follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10027

`vmalert_alerting_rules_errors_total` increments on any error


445f30a4a6/app/vmalert/rule/alerting.go (L455-L460)

while `vmalert_execution_errors_total` only on non-cancellation ones


445f30a4a6/app/vmalert/rule/group.go (L747-L756)

This commit ignores cancellation errors in
`vmalert_alerting_rules_errors_total` too
2025-12-05 17:36:42 +01:00
JAYICE
474009a7f1 dashboard: add page faults panel for vmsingle&vmcluster (#9977)
### Describe Your Changes
add page fault panel in `Troubleshooting`section for vmcluster and
vmsingle. fix
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9974

The query
```
sum(rate(process_minor_pagefaults_total{job=~"$job", instance=~"$instance"})) by (job,instance)

sum(rate(process_major_pagefaults_total{job=~"$job", instance=~"$instance"})) by (job,instance)
```

<img width="1088" height="306" alt="image"
src="https://github.com/user-attachments/assets/4b4ac884-5372-4141-a429-ac0b296dc926"
/>
2025-12-05 18:04:44 +02:00
Nikolay
1b1442d91b app/vmgateway: properly handle proxy request errors
Previously vmgateway didn't handle the http.ErrAbortHandler panic.
It could lead to an unexpected panic crashing the web server.

This commit adds a panic recover and prevents the app from crashing.
2025-12-05 16:32:50 +01:00
Aliaksandr Valialkin
3e359dc920 lib/protoparser/influx: remove IgnoreErrors field from Rows and replace it with the explicit skipInvalidLines arg at Rows.Unmarshal()
This improves the maintainability of the code, since the caller of Rows.Unmarshal() always knows
whether invalid lines must be skipped.

While at it, add missing error checks returned from Rows.Unmarshal().

This is a follow-up for the commit daa7183749

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7090
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7165
2025-12-05 16:24:54 +01:00
Hui Wang
e41f642a59 add flag description for -selectNode (#10022) 2025-12-05 14:53:06 +02:00
Artem Fetishev
7a2cc7fbad lib/storage: use deadline instead of is.deadline
This makes SearchTSIDs() consistent with SearchMetricNames().

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-12-05 02:08:14 +01:00
Aliaksandr Valialkin
a7b99dd164 vendor: update github.com/VictoriaMetrics/easyproto from v0.1.4 to v1.0.0 2025-12-04 21:47:20 +01:00
Andrii Chubatiuk
647f107576 vmui: always add /prometheus prefix while generating backend url 2025-12-04 18:09:47 +02:00
dependabot[bot]
04f8296c85 build(deps-dev): bump js-yaml from 4.1.0 to 4.1.1 in /app/vmui/packages/vmui (#10017)
Bumps [js-yaml](https://github.com/nodeca/js-yaml) from 4.1.0 to 4.1.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/nodeca/js-yaml/blob/master/CHANGELOG.md">js-yaml's
changelog</a>.</em></p>
<blockquote>
<h2>[4.1.1] - 2025-11-12</h2>
<h3>Security</h3>
<ul>
<li>Fix prototype pollution issue in yaml merge (&lt;&lt;)
operator.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="cc482e7759"><code>cc482e7</code></a>
4.1.1 released</li>
<li><a
href="50968b862e"><code>50968b8</code></a>
dist rebuild</li>
<li><a
href="d092d86603"><code>d092d86</code></a>
lint fix</li>
<li><a
href="383665ff42"><code>383665f</code></a>
fix prototype pollution in merge (&lt;&lt;)</li>
<li><a
href="0d3ca7a27b"><code>0d3ca7a</code></a>
README.md: HTTP =&gt; HTTPS (<a
href="https://redirect.github.com/nodeca/js-yaml/issues/678">#678</a>)</li>
<li><a
href="49baadd52a"><code>49baadd</code></a>
doc: 'empty' style option for !!null</li>
<li><a
href="ba3460eb9d"><code>ba3460e</code></a>
Fix demo link (<a
href="https://redirect.github.com/nodeca/js-yaml/issues/618">#618</a>)</li>
<li>See full diff in <a
href="https://github.com/nodeca/js-yaml/compare/4.1.0...4.1.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=js-yaml&package-manager=npm_and_yarn&previous-version=4.1.0&new-version=4.1.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/VictoriaMetrics/VictoriaMetrics/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-04 17:42:17 +02:00
dependabot[bot]
1c3e64e9ad build(deps): bump actions/checkout from 4 to 6 (#10082)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to
6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update README to include Node.js 24 support details and requirements
by <a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2248">actions/checkout#2248</a></li>
<li>Persist creds to a separate file by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2286">actions/checkout#2286</a></li>
<li>v6-beta by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2298">actions/checkout#2298</a></li>
<li>update readme/changelog for v6 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2311">actions/checkout#2311</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v5.0.0...v6.0.0">https://github.com/actions/checkout/compare/v5.0.0...v6.0.0</a></p>
<h2>v6-beta</h2>
<h2>What's Changed</h2>
<p>Updated persist-credentials to store the credentials under
<code>$RUNNER_TEMP</code> instead of directly in the local git
config.</p>
<p>This requires a minimum Actions Runner version of <a
href="https://github.com/actions/runner/releases/tag/v2.329.0">v2.329.0</a>
to access the persisted credentials for <a
href="https://docs.github.com/en/actions/tutorials/use-containerized-services/create-a-docker-container-action">Docker
container action</a> scenarios.</p>
<h2>v5.0.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Port v6 cleanup to v5 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2301">actions/checkout#2301</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v5...v5.0.1">https://github.com/actions/checkout/compare/v5...v5.0.1</a></p>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
<li>Prepare v5.0.0 release by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2238">actions/checkout#2238</a></li>
</ul>
<h2>⚠️ Minimum Compatible Runner Version</h2>
<p><strong>v2.327.1</strong><br />
<a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Release
Notes</a></p>
<p>Make sure your runner is updated to this version or newer to use this
release.</p>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v5.0.0">https://github.com/actions/checkout/compare/v4...v5.0.0</a></p>
<h2>v4.3.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Port v6 cleanup to v4 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2305">actions/checkout#2305</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v4.3.1">https://github.com/actions/checkout/compare/v4...v4.3.1</a></p>
<h2>v4.3.0</h2>
<h2>What's Changed</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>V6.0.0</h2>
<ul>
<li>Persist creds to a separate file by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2286">actions/checkout#2286</a></li>
<li>Update README to include Node.js 24 support details and requirements
by <a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2248">actions/checkout#2248</a></li>
</ul>
<h2>V5.0.1</h2>
<ul>
<li>Port v6 cleanup to v5 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2301">actions/checkout#2301</a></li>
</ul>
<h2>V5.0.0</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
</ul>
<h2>V4.3.1</h2>
<ul>
<li>Port v6 cleanup to v4 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2305">actions/checkout#2305</a></li>
</ul>
<h2>V4.3.0</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<h2>v4.2.2</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<h2>v4.2.1</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>v4.2.0</h2>
<ul>
<li>Add Ref and Commit outputs by <a
href="https://github.com/lucacome"><code>@​lucacome</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1180">actions/checkout#1180</a></li>
<li>Dependency updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>- <a
href="https://redirect.github.com/actions/checkout/pull/1777">actions/checkout#1777</a>,
<a
href="https://redirect.github.com/actions/checkout/pull/1872">actions/checkout#1872</a></li>
</ul>
<h2>v4.1.7</h2>
<ul>
<li>Bump the minor-npm-dependencies group across 1 directory with 4
updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1739">actions/checkout#1739</a></li>
<li>Bump actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1697">actions/checkout#1697</a></li>
<li>Check out other refs/* by commit by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1774">actions/checkout#1774</a></li>
<li>Pin actions/checkout's own workflows to a known, good, stable
version. by <a href="https://github.com/jww3"><code>@​jww3</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1776">actions/checkout#1776</a></li>
</ul>
<h2>v4.1.6</h2>
<ul>
<li>Check platform to set archive extension appropriately by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1732">actions/checkout#1732</a></li>
</ul>
<h2>v4.1.5</h2>
<ul>
<li>Update NPM dependencies by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1703">actions/checkout#1703</a></li>
<li>Bump github/codeql-action from 2 to 3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1694">actions/checkout#1694</a></li>
<li>Bump actions/setup-node from 1 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1696">actions/checkout#1696</a></li>
<li>Bump actions/upload-artifact from 2 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1695">actions/checkout#1695</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1af3b93b68"><code>1af3b93</code></a>
update readme/changelog for v6 (<a
href="https://redirect.github.com/actions/checkout/issues/2311">#2311</a>)</li>
<li><a
href="71cf2267d8"><code>71cf226</code></a>
v6-beta (<a
href="https://redirect.github.com/actions/checkout/issues/2298">#2298</a>)</li>
<li><a
href="069c695914"><code>069c695</code></a>
Persist creds to a separate file (<a
href="https://redirect.github.com/actions/checkout/issues/2286">#2286</a>)</li>
<li><a
href="ff7abcd0c3"><code>ff7abcd</code></a>
Update README to include Node.js 24 support details and requirements (<a
href="https://redirect.github.com/actions/checkout/issues/2248">#2248</a>)</li>
<li><a
href="08c6903cd8"><code>08c6903</code></a>
Prepare v5.0.0 release (<a
href="https://redirect.github.com/actions/checkout/issues/2238">#2238</a>)</li>
<li><a
href="9f265659d3"><code>9f26565</code></a>
Update actions checkout to use node 24 (<a
href="https://redirect.github.com/actions/checkout/issues/2226">#2226</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/checkout/compare/v4...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=4&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-04 17:40:42 +02:00
Hui Wang
4212491031 vmalert: clarify templating in alerting rule labels (#10121)
follow up
38dd971f58.

Labels only support limited templating variables in
https://docs.victoriametrics.com/victoriametrics/vmalert/#templating,
including `$labels`, `$value` and `expr`, to avoid breaking alert states
or causing cardinality issues with results.
2025-12-04 17:35:27 +02:00
Zakhar Bessarab
f76bc956ca app/vmctl: respect context cancellation during user prompts
Previously, context cancellation was ignored while reading the user's
response to the prompt. This caused "Ctrl+C" and other termination
signals to be ignored by vmctl until the user finished the input.

Fix that by properly propagating the context and respecting its
cancellation.


Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-12-04 15:57:31 +04:00
Aliaksandr Valialkin
655074c3e0 lib/protoparser/opentelemetry/pb: remove code related to parsing logs in OTEL format
This code is no longer needed after the commit 4ffb74448d

See https://github.com/VictoriaMetrics/VictoriaLogs/pull/720
2025-12-04 00:49:54 +01:00
Aliaksandr Valialkin
5e95fdf23e docs/victoriametrics/FAQ.md: add a link to the guide on how to calculate the needed disk space at VictoriaLogs at why indexdb size is so large? chapter
This is a follow-up for 68f670cbc5
2025-12-03 15:51:40 +01:00
Aliaksandr Valialkin
ffcfb74b17 deployment: update Go builder from v1.25.4 to v1.25.5
See https://github.com/golang/go/issues?q=milestone%3AGo1.25.5%20label%3ACherryPickApproved
2025-12-03 15:20:11 +01:00
Max Kotliar
fe803bfc6e Capitalize titles in operator.json
Signed-off-by: d3spair <git@agrshv.dev>
2025-12-03 13:43:39 +02:00
Andrii Chubatiuk
8ee466ab06 dashboard: add panels for operator flags and global params 2025-12-03 13:28:59 +02:00
Sylvain Rabot
6ca48d5025 lib/vmbackup/s3backup: support custom SSE KMS key id and ACL
Add more S3 configurations.

- SSES3KeyID allows pushing to a bucket in a different account than the
KMS key used to encrypt the data server-side.
- ACL allows configuring which permissions are given to objects
uploaded to the bucket (useful when the bucket policy expects a given
permission such as `bucket-owner-full-control`).

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Andrii Chubatiuk <andrew.chubatiuk@gmail.com>
2025-12-03 10:06:57 +04:00
Fred Navruzov
70eb9d39d5 docs/vmanomaly: release v1.28.1 (#10111)
### Describe Your Changes

Updates of docs and examples to `vmanomaly` v1.28.1

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-12-02 21:08:17 +02:00
Zakhar Bessarab
1985c79a4d deployment: update references to the latest release 2025-12-01 21:16:14 +04:00
Zakhar Bessarab
f0dafacfd3 docs: update references to the latest release 2025-12-01 21:15:13 +04:00
Zakhar Bessarab
6c01f5d50f docs/changelog: backport LTS changelogs 2025-12-01 20:44:43 +04:00
f41gh7
84658e77da docs/changelog: sort changelog entries
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-12-01 11:11:54 +01:00
Zakhar Bessarab
4dc32ff1d7 app/vminsert/netstorage: fix list of nodes used for SD
Previously, vminsert was using the original list of addrs instead of the
discovered addrs. Properly use the discovered list of addrs.
2025-12-01 11:11:53 +01:00
Artem Fetishev
08a1b2e75c lib/lrucache: do not reset requests and misses after cache reset
Follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10072.

Do not reset requests and misses metrics since cache reset implies the
reset of the storage only.
2025-12-01 10:12:47 +01:00
Zakhar Bessarab
7e5b68fc1f docs/changelog: cut v1.131.0 2025-11-28 20:20:08 +04:00
Zakhar Bessarab
dcc130603c docs: update available from tags 2025-11-28 20:13:42 +04:00
Zakhar Bessarab
9842ad2299 app/vmselect: run make vmui-update 2025-11-28 20:01:08 +04:00
Aliaksandr Valialkin
63c0cf673f Makefile: generate quicktemplate output files only at lib and app directories
Previously the output files were incorrectly generated inside unexpected directories such as vendor
2025-11-28 16:07:22 +01:00
Nowa Ammerlaan
7f51bb4ce7 protoparser/influx: account for excess white spaces before timestamp
Some influx clients (such as the nimon monitoring client) add excess
white space to the influx line without setting a timestamp. Since the
Influx protocol requires whitespace before the timestamp only when the
timestamp is set, a line may end with whitespace but contain no
timestamp, which confused the parser.

This commit adds a check for the omitted timestamp, plus a test case for it.

Fixes: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10049
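The essence of the fix is to trim trailing whitespace before deciding whether the final field of the line is a timestamp. A hypothetical sketch of such a check (the function name and logic are mine, not the actual parser's):

```go
package main

import (
	"fmt"
	"strings"
)

// hasTimestamp reports whether an Influx line still contains a
// timestamp after the fields, tolerating trailing whitespace that some
// clients append even when the timestamp is omitted.
// Hypothetical sketch of the check, not the actual parser code.
func hasTimestamp(line string) bool {
	line = strings.TrimRight(line, " ")
	i := strings.LastIndexByte(line, ' ')
	if i < 0 {
		return false
	}
	tail := line[i+1:]
	if tail == "" {
		return false
	}
	for _, c := range tail {
		if c < '0' || c > '9' {
			return false // the last token is a field set, not a timestamp
		}
	}
	return true
}

func main() {
	fmt.Println(hasTimestamp("cpu value=1 1700000000000")) // true
	fmt.Println(hasTimestamp("cpu value=1   "))            // false: trailing spaces only
}
```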
2025-11-28 14:36:35 +01:00
Nikolay
38df52ea08 app/vmselect: improve performance for multi-level requests
Previously, the proxy vmselect (aka 1st-level vmselect) parsed each
MetricBlock received from vmstorage before forwarding it to the top vmselect. This required additional CPU and memory, which greatly slowed down query requests.

This commit changes the lib/vmselectapi iterator API: instead of a MetricBlock, it returns the encoded MetricBlock as a byte slice.
This saves CPU and memory at the proxy vmselect by eliminating the need to decode MetricBlocks received from storage.

In addition, it adds the following optimizations for the proxy vmselect:
* reduce memory allocations by using an iterator pool
* add a per-storageNode workerItem for the iterator

It also adds an optimization for vmstorage: it no longer performs an extra memory copy of the MetricName for a MetricBlock.

The vmselect and vmstorage metrics vm_vmselect_metric_rows_read_total and vm_metric_rows_read_total were removed; they are not used in any dashboards or rules, and the new iterator API doesn't support them.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9899
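The shape of such a pass-through iterator can be sketched as below. The interface and names here are assumptions for illustration; the real lib/vmselectapi API differs:

```go
package main

import "fmt"

// blockIterator yields encoded MetricBlocks as raw byte slices so a
// proxy vmselect can forward them without decoding.
// Hypothetical interface sketch; not the real lib/vmselectapi API.
type blockIterator interface {
	// NextBlock returns the next encoded block, or false when done.
	NextBlock() ([]byte, bool)
}

type sliceIterator struct {
	blocks [][]byte
	i      int
}

func (it *sliceIterator) NextBlock() ([]byte, bool) {
	if it.i >= len(it.blocks) {
		return nil, false
	}
	b := it.blocks[it.i]
	it.i++
	return b, true
}

// forward streams encoded blocks to the next hop as-is and returns the
// number of blocks forwarded.
func forward(it blockIterator, send func([]byte)) int {
	n := 0
	for {
		b, ok := it.NextBlock()
		if !ok {
			return n
		}
		send(b) // no decode step: bytes pass through untouched
		n++
	}
}

func main() {
	it := &sliceIterator{blocks: [][]byte{{1, 2}, {3}}}
	fmt.Println(forward(it, func([]byte) {})) // 2
}
```

The key design point is that the proxy treats each block as opaque bytes, so decoding cost is paid only once, at the top-level vmselect.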
2025-11-28 13:04:55 +01:00
Max Kotliar
023a13435c dashboards: make dashboards-sync 2025-11-27 16:52:45 +02:00
Max Kotliar
1ddcbed6d7 dashboards: Show "Disk space usage % by type" as stacked graph in Cluster dashboard. (#10089)
### Describe Your Changes

VictoriaMetrics - cluster dashboard.

vmstorage -> Disk space usage % by type pane.

Switch panel to 100% stacked view to show space distribution.

The goal is to highlight how space is split between datapoints and
indexdb types; simple time-series values made this hard to see. A 100%
stacked layout makes the distribution immediately visible.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9932

was: <img width="1201" height="609" alt="Image"
src="https://github.com/user-attachments/assets/1d199e65-5a20-4c63-a251-b7087020f42a"
/>


now: 
<img width="1208" height="608" alt="Screenshot 2025-11-27 at 13 14 51"
src="https://github.com/user-attachments/assets/96aa32f3-1243-486b-bac8-2d3c0f4bdb7a"
/>


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-27 16:50:15 +02:00
Aliaksandr Valialkin
edd02cdb5b docs/victoriametrics/goals.md: clarify that bugs, which affect a small number of users at rare edge cases, can be fixed later 2025-11-27 14:29:17 +01:00
Artem Fetishev
4cd727a511 lib/storage: use lrucache for tfss cache (#10072)
The purpose of this PR is the same as #10000, except `lrucache` is used
for implementing tfss cache.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-27 14:18:03 +01:00
Andrii Chubatiuk
19c0477976 chore(app/vmui): conditionally render accordion children (#10068)
### Describe Your Changes

Revert the change introduced in
483e00ffb9,
since rendering all nested children significantly impacts alerting
tab performance when there are many items.
@Loori-R @arturminchukov, what do you think about additionally using
react-virtuoso for the alerting tab to decrease DOM size?

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-27 14:31:34 +02:00
Ben Randall
4fdd8f0906 lib/protoparser/opentelemetry: use separate loggers for unsupported delta temporality/metric type logs (#10021)
A throttled logger will continue to log messages occasionally with a
suffix indicating how many similar logs were throttled. Using the same
logger for multiple log messages can result in certain logs being
entirely suppressed and invisible in the logs. This updates most of the
loggers used in `appendFromScopeMetrics` to be their own logger so that
"unsupported delta temporality/metric type" logs will be visible for all
metric types. Additionally, `skippedSampleLogger` is only used by
`appendSamplesFromHistogram` so this was moved closer to that function.

Related to #9447
Related to #9498

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [X] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2025-11-27 14:19:43 +02:00
Andrii Chubatiuk
9897872ca9 lib/flagutil: clarify usage of quotes in array flag values 2025-11-27 14:17:07 +02:00
Hui Wang
b8bbb07431 dashboard: tidy vmauth panels (#10088)
before:
<img width="2498" height="1042" alt="image"
src="https://github.com/user-attachments/assets/0bbd7cc2-7062-494f-827b-96d86133537f"
/>
after:
<img width="2497" height="968" alt="image"
src="https://github.com/user-attachments/assets/6256ccc2-2f8f-40ea-a23b-a1a20e242b3c"
/>
which is more consistent with other dashboards.
2025-11-27 14:12:53 +02:00
Max Kotliar
eb1c8dd67d docs: add links to issues in changelog 2025-11-27 14:09:02 +02:00
Aliaksandr Valialkin
50fc48ac47 lib/fs: avoid Go runtime stalls on Linux when all the GOMAXPROCS threads are blocked in major pagefaults while reading the data from memory-mapped files
Go runtime executes all the goroutines on GOMAXPROCS operating system threads.
Go runtime cannot switch the OS thread to another goroutine if the current goroutine
is stuck in a major pagefault while reading data from a memory-mapped file,
because Go runtime doesn't distinguish between reading from regular memory and reading
from a memory-mapped file. So the OS thread remains stuck until the OS
reads the data from the file at the requested memory address and returns control to the Go application.

In the worst case it is possible that all the GOMAXPROCS threads are stuck in major pagefaults,
so Go runtime pauses executing all the goroutines. This state is possible in environments
with small GOMAXPROCS and high-latency disks such as NFS or small HDD-based disks at AWS.

See https://valyala.medium.com/mmap-in-go-considered-harmful-d92a25cb161d for more details.

This commit protects from such stalls by verifying whether the given memory location from memory-mapped file
is already loaded in the OS page cache before reading from that memory.
If the location isn't in the OS page cache, then it falls back to pread() syscall for reading the data from file.
Go runtime allocates extra OS threads for long-running syscalls, so it can continue executing goroutines
across all the GOMAXPROCS threads while reading the data from slow storage via pread() syscall.

This commit uses mincore() syscall for detecting whether the given memory page is available in the OS page cache.
It also caches mincore() results for up to a minute in order to reduce the overhead for the mincore() syscall.

This commit reduces the increase rate for the process_major_pagefaults_total metric by multiple orders of magnitude
on systems with high-latency disks.
2025-11-26 20:52:27 +01:00
Artem Fetishev
3bd9c75acc lib/lrucache: use uint64 for SizeBytes() and SizeMaxBytes() (#10077)
Currently, `lrucache.Cache` `SizeBytes()` and `SizeMaxBytes()` return
type is `int`. The cache `Entry.SizeBytes()` also returns `int` value.
Changing the type to `uint64` will allow using `uint64set.Set` as the
cache entry type (see #10072).

Please note that using `uint64` regardless of the CPU architecture is not
entirely correct, because on 32-bit systems the size can never exceed
`2^32`, so `uint64` is wider than necessary. However, the current
type (`int`) is not correct either, since it is signed and only
allows storing values up to `2^31`. Alternatively, all `SizeBytes()`
methods could return `uint`.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-26 11:39:07 +01:00
Max Kotliar
9c0683f8d1 dashboards: run make dashboards-sync 2025-11-25 20:13:06 +02:00
Max Kotliar
bf4660912f .github: Add changelog tip linter 2025-11-25 13:41:48 +02:00
Yury Molodov
bb54b5e661 app/vmui: improve alert styles for better readability (#10012)
### Describe Your Changes

This PR improves vmui alert styles by adding borders between rows,
introducing a hover state for easier row identification, and aligning
badges to the left.

Related issue: #9856

| Before | After |
|--------|--------|
| <img width="1427" height="1310" alt="image"
src="https://github.com/user-attachments/assets/68f3469e-95df-449f-a85d-1c0285520e2d"
/> | <img width="1427" height="1310" alt="Image"
src="https://github.com/user-attachments/assets/89501efb-c66f-402a-9d14-01c86930a5e2"
/> |

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-11-25 13:38:24 +02:00
Andrii Chubatiuk
200a729565 app/vmui: fixed ability to select multiple metrics in explore metrics tab (#10008)
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9995

a change in the `Select` component alone leads to an infinite
ExploreMetricsGraphItem component refresh, since the array gets a
new reference each time


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-25 13:29:45 +02:00
Yury Molodov
7303495ae1 app/vmui: fix rendering of multiple points at the same timestamp (#10010)
### Describe Your Changes

1. Removed the *step* control from the **Raw Query** page, as it didn’t
affect chart rendering and caused confusion.
2. Fixed rendering of multiple points with the same timestamp -
previously, the second point was hidden.
3. Added proper visualization for points with the same timestamp and
identical values: such points are now shown as a square, and the tooltip
displays the number of duplicates.

**Example:**

```json
{
  "values": [1, 22, 10, 10, 5, 6],
  "timestamps": [
    1761955247950,
    1761955247950,
    1761955248960,
    1761955248960,
    1761955251980,
    1761955252990
  ]
}
```

<img width="500" height="1120" alt="image"
src="https://github.com/user-attachments/assets/192aa43e-8008-4f03-8966-00f59e52ec40"
/>
<img width="300" height="676" alt="image"
src="https://github.com/user-attachments/assets/8e361cb3-1286-452a-a687-b6b40ba7807b"
/>

Related issues: #9667 and #9666

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2025-11-25 11:00:00 +02:00
Andrei Baidarov
98b5288e9c vmselect: do not immediately fail request if vmstorage returns search.maxConcurrentRequests error (#10030)

If `vmstorage` is currently overloaded, it can return a
maxConcurrentRequests error. Currently, `vmselect` immediately fails the
whole request even if `replicationFactor` is set and other replicas
could respond without errors.

This PR treats such errors as regular errors, not fatal ones.

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-24 20:37:37 +02:00
Cancai Cai
d7f9cd971d docs/notes: fix syntax errors (#10019)
### Describe Your Changes

I'm not sure if this is a mistake.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: cancaicai <2356672992@qq.com>
2025-11-24 20:37:05 +02:00
cancaicai
2cb08095c6 docs/storage: fix typo
Signed-off-by: cancaicai <2356672992@qq.com>
2025-11-24 15:46:40 +02:00
dependabot[bot]
8bc41f4c79 build(deps): bump golang.org/x/crypto from 0.43.0 to 0.45.0 (#10052)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from
0.43.0 to 0.45.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4e0068c009"><code>4e0068c</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="e79546e28b"><code>e79546e</code></a>
ssh: curb GSSAPI DoS risk by limiting number of specified OIDs</li>
<li><a
href="f91f7a7c31"><code>f91f7a7</code></a>
ssh/agent: prevent panic on malformed constraint</li>
<li><a
href="2df4153a03"><code>2df4153</code></a>
acme/autocert: let automatic renewal work with short lifetime certs</li>
<li><a
href="bcf6a849ef"><code>bcf6a84</code></a>
acme: pass context to request</li>
<li><a
href="b4f2b62076"><code>b4f2b62</code></a>
ssh: fix error message on unsupported cipher</li>
<li><a
href="79ec3a51fc"><code>79ec3a5</code></a>
ssh: allow to bind to a hostname in remote forwarding</li>
<li><a
href="122a78f140"><code>122a78f</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="c0531f9c34"><code>c0531f9</code></a>
all: eliminate vet diagnostics</li>
<li><a
href="0997000b45"><code>0997000</code></a>
all: fix some comments</li>
<li>Additional commits viewable in <a
href="https://github.com/golang/crypto/compare/v0.43.0...v0.45.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=golang.org/x/crypto&package-manager=go_modules&previous-version=0.43.0&new-version=0.45.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/VictoriaMetrics/VictoriaMetrics/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 15:13:50 +02:00
Zhu Jiekun
bb1e0d8f3b opentsdb: Avoid blocking when a connection doesn't send anything (#10045)
### Describe Your Changes

fix #9987 

Avoid blocking when a connection to `-opentsdbListenAddr` doesn't send
any data. This issue blocked other connections from being handled.

> This bug can be tested with:
> 1. Start VictoriaMetrics Single-node with `-opentsdbListenAddr=:4242`.
> 2. Run: `telnet 127.0.0.1 4242` without typing any data after
connection established.
> 3. Run (in another terminal, after step 2): `curl -H 'Content-Type:
application/json' -d
'{"metric":"x.y.z","value":2222222.34,"tags":{"t1":"v1","t2":"v2"}}'
http://localhost:4242/api/put`
> 
> Before the change:
> - Step 3 was blocked infinitely.
> 
> Expect result after the change:
> - Step 3 was executed.
> - Connection established by step 2 will be closed after 5 seconds.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-11-24 14:31:19 +02:00
Mathias Palmersheim
70be2e7ea3 Remove threshold from available cpu panel (#10056)
### Describe Your Changes

fixes #9988 by removing the cpu threshold from the Available CPU panel

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-24 14:16:35 +02:00
Kirill Yurkov
61796e355a docs: link faq for large indexdb (#10061)
Clarified the index size note in
docs/guides/understand-your-setup-size/README.md to steer readers toward
the FAQ when indexdb feels oversized, noting typical ratios and
troubleshooting guidance.
2025-11-24 14:04:05 +02:00
dependabot[bot]
2ae3fd47eb build(deps): bump actions/checkout from 5 to 6 (#10060)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to
6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update README to include Node.js 24 support details and requirements
by <a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2248">actions/checkout#2248</a></li>
<li>Persist creds to a separate file by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2286">actions/checkout#2286</a></li>
<li>v6-beta by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2298">actions/checkout#2298</a></li>
<li>update readme/changelog for v6 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2311">actions/checkout#2311</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v5.0.0...v6.0.0">https://github.com/actions/checkout/compare/v5.0.0...v6.0.0</a></p>
<h2>v6-beta</h2>
<h2>What's Changed</h2>
<p>Updated persist-credentials to store the credentials under
<code>$RUNNER_TEMP</code> instead of directly in the local git
config.</p>
<p>This requires a minimum Actions Runner version of <a
href="https://github.com/actions/runner/releases/tag/v2.329.0">v2.329.0</a>
to access the persisted credentials for <a
href="https://docs.github.com/en/actions/tutorials/use-containerized-services/create-a-docker-container-action">Docker
container action</a> scenarios.</p>
<h2>v5.0.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Port v6 cleanup to v5 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2301">actions/checkout#2301</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v5...v5.0.1">https://github.com/actions/checkout/compare/v5...v5.0.1</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>V6.0.0</h2>
<ul>
<li>Persist creds to a separate file by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2286">actions/checkout#2286</a></li>
<li>Update README to include Node.js 24 support details and requirements
by <a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2248">actions/checkout#2248</a></li>
</ul>
<h2>V5.0.1</h2>
<ul>
<li>Port v6 cleanup to v5 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2301">actions/checkout#2301</a></li>
</ul>
<h2>V5.0.0</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
</ul>
<h2>V4.3.1</h2>
<ul>
<li>Port v6 cleanup to v4 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2305">actions/checkout#2305</a></li>
</ul>
<h2>V4.3.0</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<h2>v4.2.2</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<h2>v4.2.1</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>v4.2.0</h2>
<ul>
<li>Add Ref and Commit outputs by <a
href="https://github.com/lucacome"><code>@​lucacome</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1180">actions/checkout#1180</a></li>
<li>Dependency updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>- <a
href="https://redirect.github.com/actions/checkout/pull/1777">actions/checkout#1777</a>,
<a
href="https://redirect.github.com/actions/checkout/pull/1872">actions/checkout#1872</a></li>
</ul>
<h2>v4.1.7</h2>
<ul>
<li>Bump the minor-npm-dependencies group across 1 directory with 4
updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1739">actions/checkout#1739</a></li>
<li>Bump actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1697">actions/checkout#1697</a></li>
<li>Check out other refs/* by commit by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1774">actions/checkout#1774</a></li>
<li>Pin actions/checkout's own workflows to a known, good, stable
version. by <a href="https://github.com/jww3"><code>@​jww3</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1776">actions/checkout#1776</a></li>
</ul>
<h2>v4.1.6</h2>
<ul>
<li>Check platform to set archive extension appropriately by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1732">actions/checkout#1732</a></li>
</ul>
<h2>v4.1.5</h2>
<ul>
<li>Update NPM dependencies by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1703">actions/checkout#1703</a></li>
<li>Bump github/codeql-action from 2 to 3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1694">actions/checkout#1694</a></li>
<li>Bump actions/setup-node from 1 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1696">actions/checkout#1696</a></li>
<li>Bump actions/upload-artifact from 2 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1695">actions/checkout#1695</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1af3b93b68"><code>1af3b93</code></a>
update readme/changelog for v6 (<a
href="https://redirect.github.com/actions/checkout/issues/2311">#2311</a>)</li>
<li><a
href="71cf2267d8"><code>71cf226</code></a>
v6-beta (<a
href="https://redirect.github.com/actions/checkout/issues/2298">#2298</a>)</li>
<li><a
href="069c695914"><code>069c695</code></a>
Persist creds to a separate file (<a
href="https://redirect.github.com/actions/checkout/issues/2286">#2286</a>)</li>
<li><a
href="ff7abcd0c3"><code>ff7abcd</code></a>
Update README to include Node.js 24 support details and requirements (<a
href="https://redirect.github.com/actions/checkout/issues/2248">#2248</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/checkout/compare/v5...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=5&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 14:00:22 +02:00
Max Kotliar
ebad7e5496 docs: Describe relation between slow inserts and unsorted labels. 2025-11-24 13:35:23 +02:00
Max Kotliar
e52de06ee5 docs: sync flags in docs with actual binaries 2025-11-24 13:16:22 +02:00
Aliaksandr Valialkin
38dd971f58 docs/victoriametrics/vmalert.md: clarify that templates can be used inside rule labels
Rule labels can contain templates in the same way as annotations.
See aad6ab009e/app/vmalert/rule/alerting_test.go (L1192)
and https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#templating

Document this, since users sometimes ask this question.
2025-11-24 10:50:55 +01:00
Artem Fetishev
aad6ab009e lib/storage: minor metricNameSearch fixes (#10065)
- Fix comment
- Re-use dst instead introducing a new variable.

This change has been requested to be in a separated PR during the
pt-index (#8134) code review.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-21 20:06:20 +01:00
Artem Fetishev
2c125e14c7 lib/storage: also create parts.json on partition creation (#10051)
Currently, when a partition is created its corresponding parts.json file
is not created right away (see createNewParition()). Its creation is
delayed until the first part files are created on disk (see
swapSrcWithDstParts()). However, the parts.json file is created for a
possibly empty partition when an existing partition is opened (see
mustOpenPartition()) and when a partition snapshot is created (see
MustCreateSnapshotAt()).

I.e. `parts.json` is an important part of a partition, since it is an
artifact that describes the partition contents, and it should be created
on partition creation even if the partition is empty.

To be honest, this change is mostly a no-op for the current storage
implementation. It only makes the code consistent, i.e. the parts.json
is created along with the partition.

However, having it created when a partition is created becomes important in
pt-index (#7599, #8134), because pt-index allows partitions with no
data and therefore without a parts.json file. Still not a big deal, but the
unit tests start failing.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-21 14:19:53 +01:00
Artem Fetishev
13dc60e257 lib/storage: refactoring - move dateMetricIDCache code to a separate file (#10055)
dateMetricIDCache does not belong to storage anymore, since it has been
moved to indexDB. Instead of moving the code to index_db.go, move it to a
separate file in order to make the code easier to navigate.

No changes have been done to the code or tests.

Follow up for: #9983

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Alexander Frolov <9749087+fxrlv@users.noreply.github.com>
2025-11-21 13:52:33 +01:00
Artem Fetishev
ed64c90e7a lib/storage: fix comments related to nextDayMetricIDs
Follow-up for 49b0a4fb16

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-21 13:31:20 +01:00
Artem Fetishev
49b0a4fb16 lib/storage: refactoring - simplify nextDayMetricIDs data structure (#10058)
The data structure used for holding the nextDayMetricIDs is too complex
and can be simplified (flattened).

Follow up for: #9983

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-21 13:02:02 +01:00
Artem Fetishev
5141496c43 lib/storage: add overlapsWith() and contains() methods to TimeRange (#10059)
The change was introduced in pt-index PR (#8134) and is extracted into a
separate PR.

Currently used in partition_search and partition. If you see more places
like this, please let me know.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-21 12:24:40 +01:00
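The two methods added by the commit above can be sketched on an inclusive `[MinTimestamp, MaxTimestamp]` range. The field names follow VictoriaMetrics' `TimeRange`, but the method bodies below are an assumption about the usual semantics, not the exact upstream implementation.

```go
package main

import "fmt"

// TimeRange is an inclusive range of millisecond timestamps, as used
// throughout lib/storage.
type TimeRange struct {
	MinTimestamp int64
	MaxTimestamp int64
}

// overlapsWith reports whether tr and other share at least one timestamp.
func (tr TimeRange) overlapsWith(other TimeRange) bool {
	return tr.MinTimestamp <= other.MaxTimestamp && other.MinTimestamp <= tr.MaxTimestamp
}

// contains reports whether ts falls inside tr (bounds included).
func (tr TimeRange) contains(ts int64) bool {
	return tr.MinTimestamp <= ts && ts <= tr.MaxTimestamp
}

func main() {
	pt := TimeRange{MinTimestamp: 0, MaxTimestamp: 999}
	q := TimeRange{MinTimestamp: 500, MaxTimestamp: 1500}
	fmt.Println(pt.overlapsWith(q)) // true: [500..999] is shared
	fmt.Println(pt.contains(1000))  // false: 1000 is past MaxTimestamp
}
```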
Andrii Chubatiuk
24fac64875 docs: add warning blockquote regarding latest backup lifecycle policy (#10054)
Update formatting for warning text.

<img width="732" height="432" alt="image"
src="https://github.com/user-attachments/assets/1549e69a-fc65-445f-b567-9b5e4e1a8617"
/>
2025-11-20 13:46:34 +04:00
Aliaksandr Valialkin
8250f469a7 docs/victoriametrics/Articles.md: add https://medium.com/@kanakaraju896/backing-up-victoriametrics-data-a-complete-guide-24473c74450f 2025-11-20 08:36:59 +01:00
Aliaksandr Valialkin
7fb0f0e015 docs/victoriametrics/Articles.md: add https://blackmetalz.github.io/why-i-switched-to-victoriametrics-scaling-from-small-business-to-enterprise.html 2025-11-20 08:33:19 +01:00
Andrii Chubatiuk
563dbeaea1 app/vmalert: do not increment errors counter on canceled context
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/10027
2025-11-19 13:32:16 +01:00
Nikolay
7e6468c1e3 lib/storage: properly increment missing tsids metric
Bug was introduced at 2380e4829d

Due to typo vm_missing_tsids_for_metric_id_total metric was incremented instead of vm_missing_metric_names_for_metric_id_total for missing metricName for metricID search.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/10041
2025-11-19 13:29:44 +01:00
Hui Wang
328f33202f chore: clarify vmalert -external.label usage (#10042)
To clarify that HA vmalert doesn't need to specify `-external.label`.
2025-11-19 13:28:36 +01:00
Fred Navruzov
951331db80 docs/vmanomaly: release v1.28.0 (#10031)
### Describe Your Changes

Upgraded vmanomaly docs & guides to release v1.28.0 (UI v1.2.0)

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-18 21:47:03 +02:00
Andrii Chubatiuk
e6139be8ba docs/vmbackupmanager: mention version since which -backupTypeTagName flag is available (#10038)
Mention version since which `backupTypeTagName` flag is available
2025-11-18 18:56:19 +04:00
Andrii Chubatiuk
77e5920014 app/vmbackupmanager: set backup type tag on backup's items
* app/vmbackupmanager: set VMBackupType tag on backup's items

* address review comments
2025-11-18 16:30:13 +04:00
Zakhar Bessarab
78049e991b docs/cluster: remove mention of select for metadata (#10034)
vmselect does not have a flag to enable metadata querying, remove
invalid reference to it from the docs.
2025-11-18 15:32:20 +04:00
Artem Fetishev
c972d70f00 docs: update VictoriaMetrics components version to v1.130.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-17 22:03:17 +01:00
Artem Fetishev
b947562f2b deployment/docker: update VM components version to v1.130.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-17 21:56:42 +01:00
Artem Fetishev
344a81fa20 docs: bump last LTS versions
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-17 20:14:07 +00:00
Artem Fetishev
4b022ea8a8 docs/CHANGELOG.md: update changelog with LTS release notes
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-17 20:08:16 +00:00
Artem Fetishev
04c24fc831 lib/workingsetcache: Fix bytesSize metric calculation (#10025)
Follow-up for 3e6fc445a9

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-17 13:49:17 +01:00
Artem Fetishev
d2f78e4b2b docs/CHANGELOG.md: cut v1.130.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-14 17:45:20 +00:00
Max Kotliar
3995837c58 docs: update latest version in docs to v1.130.0 2025-11-14 19:37:14 +02:00
Artem Fetishev
1d53496f98 make vmui-update
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-14 17:21:49 +00:00
Artem Fetishev
73a1ce2dd6 lib/storage: Move dateMetricIDCache to indexDB (#9983)
Looks like the `dateMetricIDCache` must be per indexDB:

- the use of this cache and `is.hasDateMetricID()` often go in pairs, so it
  makes sense to use this cache in that method.
- The same is true for `createPerDayIndexes()`: every time the index entry
  is created, a corresponding entry is added to the cache.
- As a result, the generation field is also removed from the cache.

Related to #7599 and #8134.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-14 16:03:28 +01:00
Aliaksandr Valialkin
daa88f6a43 docs/victoriametrics: cross-link rebalancing section at VictoriaMetrics cluster docs and the corresponding question at the FAQ page 2025-11-14 15:36:57 +01:00
Aliaksandr Valialkin
7bff73b0f7 docs/victoriametrics/Cluster-VictoriaMetrics.md: add rebalancing chapter, which explains how to rebalance data among vmstorage nodes
This is a very frequent question from new users of VictoriaMetrics who migrate from other solutions
with automatic data rebalancing among storage nodes, so it is a good idea to cover it in the docs.
2025-11-14 15:32:29 +01:00
Max Kotliar
bf3b1cf6b6 lib/storage/metricsmetadata: ensure deterministic sorting for identical metric names across tenants
Metrics metadata is loaded from a per-tenant storage map
(perTenantStorage map[uint64]map[string]*Row), so the order of result rows is
non-deterministic. The existing sortRows implementation only sorts by
metric name and ingestion time, which means rows that differ only by
tenant/account ID are still sorted non-deterministically.

This change updates `sortRows` to include account/project identifiers in
the comparison, ensuring stable and deterministic ordering for metadata
entries that share the same metric name and timestamp.

First discovered as flaky test:

--- FAIL: TestStorageRead (0.00s)
    storage_test.go:337: unexpected rows get result (-want, +got):
          []*metricsmetadata.Row{
          	&{
          		... // 2 ignored and 1 identical fields
          		Help:      "uselesshelp1",
          		Unit:      "seconds1",
        - 		AccountID: 1,
        + 		AccountID: 0,
        - 		ProjectID: 1,
        + 		ProjectID: 0,
          		Type:      1,
          	},
          	&{
          		... // 2 ignored and 1 identical fields
          		Help:      "uselesshelp1",
          		Unit:      "seconds1",
        - 		AccountID: 0,
        + 		AccountID: 1,
        - 		ProjectID: 0,
        + 		ProjectID: 1,
          		Type:      1,
          	},
          }
FAIL

https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/actions/runs/19361594138/job/55394642029#step:4:133
2025-11-14 15:22:27 +02:00
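The fix described above can be sketched as a multi-key comparison with tenant tie-breakers (the `Row` struct and field names here are illustrative stand-ins, not the actual metadata types):

```go
package main

import (
	"fmt"
	"sort"
)

// Row is a hypothetical stand-in for the metadata row type; only the
// fields relevant to ordering are shown.
type Row struct {
	MetricName           string
	LastUpdateTime       int64
	AccountID, ProjectID uint32
}

// sortRows orders rows by metric name and update time, then falls back to
// the tenant identifiers, so rows loaded from an unordered per-tenant map
// always come out in the same order.
func sortRows(rows []*Row) {
	sort.Slice(rows, func(i, j int) bool {
		a, b := rows[i], rows[j]
		if a.MetricName != b.MetricName {
			return a.MetricName < b.MetricName
		}
		if a.LastUpdateTime != b.LastUpdateTime {
			return a.LastUpdateTime < b.LastUpdateTime
		}
		if a.AccountID != b.AccountID {
			return a.AccountID < b.AccountID
		}
		return a.ProjectID < b.ProjectID
	})
}

func main() {
	rows := []*Row{
		{MetricName: "m", LastUpdateTime: 1, AccountID: 1, ProjectID: 1},
		{MetricName: "m", LastUpdateTime: 1, AccountID: 0, ProjectID: 0},
	}
	sortRows(rows)
	fmt.Println(rows[0].AccountID, rows[1].AccountID) // 0 1
}
```

Without the last two comparisons, rows that tie on name and time keep whatever order map iteration produced, which is exactly the flakiness the test caught.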
Max Kotliar
a10ff67354 docs/changelog: Add links to changelog 2025-11-14 13:41:59 +02:00
Haley Wang
9a8463df42 lib/storage: add a value check for retentionFilter to ensure it does not exceed retentionPeriod 2025-11-14 12:50:46 +02:00
Max Kotliar
7e22b169f1 docs: Add metrics metadata how to use in docs
follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9487
2025-11-14 10:37:15 +01:00
f41gh7
80c1af5af1 apptest: add metrics metadata test for vmsingle
related issue github.com/VictoriaMetrics/VictoriaMetrics/issues/2974
2025-11-14 10:29:28 +01:00
f41gh7
5a587f2006 app/{vmstorage,vmselect,vminsert}: introduce metrics metadata storage
This commits adds storage part and cluster RPC methods for metrics metadata.

 Key concepts:
* vmstorage persists metadata in-memory only.
* vmstorage evicts metadata records older than 1 hour.
* vmstorage stores only the last metadata value per time series metric name.
* vminsert opens an additional TCP connection to the vmstorage for
  metadata write requests.
* vmselect doesn't support `limit_per_metric_name`.

This feature is optional and must be enabled via the `-enableMetadata` flag provided to vminsert/vmsingle.

Fixes github.com/VictoriaMetrics/VictoriaMetrics/issues/2974
2025-11-14 10:24:38 +01:00
Aliaksandr Valialkin
847cd1e336 docs/guides/understand-your-setup-size/README.md: remove the misleading recommendation for having at least 2vCPU cores per each vmstorage node
vmstorage nodes work perfectly with one CPU core and even with 10% of a single CPU core
if the allocated CPU resources match their workload.

It is better to recommend allocating an integer number of CPU cores to vmstorage
in order to achieve optimal performance, since vmstorage allocates internal resources
according to the available CPU cores. If there is a fractional number of CPU cores,
then the allocation of internal resources may not be optimal.

A fractional number of CPU cores may also lead to increased latencies and stalls,
because some P threads in the Go runtime won't be able to run goroutines from their ready queues
in a timely manner because of the lack of CPU time. See https://victoriametrics.com/blog/kubernetes-cpu-go-gomaxprocs/
2025-11-14 09:48:30 +01:00
Aliaksandr Valialkin
c86857b269 docs/victoriametrics/vmagent.md: mention that it isn't recommended increasing the -maxConcurrentRequests command-line flag value in general case
Too big values for the -maxConcurrentRequests command-line flag increase memory usage
and CPU overhead for processing incoming requests in most cases.
The only valid reason for increasing the value for -maxConcurrentRequests command-line flag
is when many clients send data to vmagent over very slow network.
2025-11-14 09:40:31 +01:00
Hui Wang
c93937101c Improve vmalert UI tip (#9998) 2025-11-13 21:04:39 +01:00
Aliaksandr Valialkin
cca7380dd3 docs/victoriametrics: fix broken link to /api/v1/rules docs at Prometheus 2025-11-13 19:40:10 +01:00
Aliaksandr Valialkin
ca3b9b18b5 docs/victoriametrics/README.md: add context links to the FAQ entry describing why IndexDB size may be too large 2025-11-13 19:36:18 +01:00
Nikolay
10f7cd2ffc lib/encoding/zstd: properly apply size limits
Previously, the zstd Decoder didn't take into account the Request Size limits
applied by VictoriaMetrics components. In case of an incorrectly formed zstd block,
a VictoriaMetrics component could allocate extra memory, which could lead to OOM errors.

This commit makes ingest endpoints check frame content size and window size headers based on MaxRequest Limits.
2025-11-13 18:13:23 +01:00
Hui Wang
fa85726a82 vmalert: print the error message as value if templating fails in alerting rule
For users, if an alerting rule has a misconfigured annotation, it's more
important to deliver the alert when the rule triggers rather than skip
it with templating error logs.
Then users can see the faulty annotation in alert message and fix it.

Note: the previous behavior is retained in replay mode because errors
there should be noticed immediately; hiding them could waste time,
resources and require a re-replay after fixes.
Also the rule's status in the vmalert UI remains unhealthy if templating
failed.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9853
2025-11-13 17:34:55 +01:00
Hui Wang
567c084d6d vmalert: drop labels with empty values in generated alerts and time series
In prometheus ecosystem, a label with an empty value equals no label,
since a query like `test{something=""}` matches all the series without
label `something`.
So for vmalert, preserving empty-value labels in generated alerts or
time series is unnecessary and can cause alert hash mismatches during
[restore](https://docs.victoriametrics.com/victoriametrics/vmalert/#alerts-state-on-restarts).
Empty-value labels shouldn't come from the datasource response, since it
follows the same rule (omit empty-value labels). They may come from
`-external.label` or rule labels, where an empty value could be caused by
occasional templating failures, which are hard to check there.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9984
2025-11-13 17:24:27 +01:00
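A minimal sketch of dropping empty-value labels before hashing or exporting an alert (the `Label` type is a hypothetical stand-in, not the actual vmalert type):

```go
package main

import "fmt"

type Label struct{ Name, Value string }

// dropEmptyLabels removes labels with empty values in place, since in the
// Prometheus data model `foo{bar=""}` is equivalent to `foo` without the
// `bar` label at all.
func dropEmptyLabels(labels []Label) []Label {
	dst := labels[:0]
	for _, l := range labels {
		if l.Value != "" {
			dst = append(dst, l)
		}
	}
	return dst
}

func main() {
	labels := []Label{{"alertname", "HighLoad"}, {"dc", ""}, {"env", "prod"}}
	fmt.Println(dropEmptyLabels(labels))
}
```

Filtering before the alert hash is computed keeps the hash stable across restarts even if a templating failure produced an empty label value.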
Hui Wang
12a1388fbc vmalert: fix a potential race condition in web api during rule hot reload
Group rules are not protected by
[m.groupsMu](03c784e3e3/app/vmalert/manager.go (L25)),
they could be updated (with config hot reload) during `/api/v1/rule`,
`/api/v1/alert` and `/api/v1/alerts` API calls. This fix takes a
snapshot by calling `group.ToAPI()` first, making all reads safe.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9551
2025-11-13 17:22:25 +01:00
JAYICE
62c19b386a lib/httputl: fix failing to access http2 sd service by the shadow copy of http.DefaultTransport
Cloning `http.DefaultTransport` and disabling HTTP2 without resetting
`TLSClientConfig.NextProtos` in the shadow copy of
`http.DefaultTransport` causes requests to an HTTP/2 server to fail.
See https://github.com/golang/go/issues/39302.

To reproduce it, use a scrape config like:
```
scrape_configs:
  - job_name: test
    yandexcloud_sd_configs:
      - service: compute
        api_endpoint: https://api.cloud.yandex.net
```
Before the fix, access to the SD service would fail.

A solution is to specify `http/1.1` in `TLSClientConfig.NextProtos`.

Related golang issue: https://github.com/golang/go/issues/39302

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9981
2025-11-13 17:19:15 +01:00
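A minimal sketch of the described fix using only the stdlib `net/http` and `crypto/tls` packages (the function name is illustrative):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// newHTTP1Transport clones http.DefaultTransport and pins TLS ALPN
// negotiation to HTTP/1.1. Simply disabling HTTP2 on the clone is not
// enough: if the clone still advertises "h2" via TLSClientConfig.NextProtos,
// an HTTP/2 server selects a protocol the client then refuses to speak
// (see https://github.com/golang/go/issues/39302).
func newHTTP1Transport() *http.Transport {
	tr := http.DefaultTransport.(*http.Transport).Clone()
	if tr.TLSClientConfig == nil {
		tr.TLSClientConfig = &tls.Config{}
	}
	tr.TLSClientConfig.NextProtos = []string{"http/1.1"}
	// Also drop ForceAttemptHTTP2 so the transport doesn't re-enable h2.
	tr.ForceAttemptHTTP2 = false
	return tr
}

func main() {
	tr := newHTTP1Transport()
	fmt.Println(tr.TLSClientConfig.NextProtos) // [http/1.1]
}
```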
Andrii Chubatiuk
a84586a246 docs: update grafana plugin links, move root file to plugins repo (#10001)
### Describe Your Changes

update victorialogs grafana plugin links, moved root plugin file to
plugin repo

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-13 15:00:14 +02:00
Zhu Jiekun
24867a042b docs: mention VictoriaTraces playground in doc (#9999)
### Describe Your Changes

Add VictoriaTraces playground in doc.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-13 12:50:54 +02:00
Andrii Chubatiuk
9baade2898 docs: added perses section, move grafana datasource to integrations (#9994)
### Describe Your Changes

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9888

additionally adds grafana datasource into integrations section and
excludes previous location from menu and search

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-13 12:49:16 +02:00
Zhu Jiekun
6117b2ead9 VMUI/relabel debug: Allow labels textarea input without curly braces (#9950)
### Describe Your Changes

Fixed #9900

relax the validation for the labels text area. It now accepts input
labels without being enclosed in curly braces.

The following input format should be supported now:


```
	metric_name
	metric_name{label1="value1"}
	{__name__="metric_name", label1="value1"}
	__name__="metric_name", label1="value1"
	label1="value1"
```

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-11-13 12:41:38 +02:00
Aliaksandr Valialkin
0df2993cf4 vendor: update github.com/valyala/gozstd from v1.23.2 to v1.24.0
This is needed for being able to use the DecompressLimited() function for limiting
the size of decompressed data.

See https://github.com/valyala/gozstd/pull/75
Updates https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/issues/958
2025-11-12 21:10:51 +01:00
Aliaksandr Valialkin
14f1bda8fc docs/victoriametrics/vmalert.md: remove "👉" char from the Common mistakes chapter, since this looks like AI-generated content
While at it, fix a typo `&step` -> `step`.

This is a follow-up for the commit 40ab285fb9

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9343
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9373
2025-11-12 18:51:21 +01:00
Max Kotliar
f96f4709f6 docs/changelog: move changelog line to tip. polish it a bit 2025-11-12 19:40:18 +02:00
Samarth Bagga
96c1392b45 Add log which will report dropped log count (#9752)
### Describe Your Changes

I have added a counter for the throttled logs which gets logged every
minute.
Fixes #9498

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Hui Wang <haley@victoriametrics.com>
2025-11-12 19:32:33 +02:00
Max Kotliar
8dd905c7a9 lib/envflag: apply -secret.flags inside envflag.Parse function (2nd attempt) (#9963)
### Describe Your Changes

The PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9942 was
reverted in
c90c7c3123
because of the import cycle in the enterprise VM. Needs more work.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-12 19:23:51 +02:00
Hui Wang
1c7abd3137 docs: fix flag type in descriptions (#9979)
Do not use backticks in command-line flag description, it pollutes the
flag type in descriptions.
2025-11-12 13:51:48 +02:00
Aliaksandr Valialkin
e8114806aa docs/victoriametrics: add a case study from Spotify based on the https://www.youtube.com/watch?v=87koDlpKDR4 2025-11-12 10:49:36 +01:00
Aliaksandr Valialkin
8f1c1cc7c9 docs/victoriametrics/FAQ.md: mention that disabling per-day index may reduce the growth rate of indexdb for static time series over time 2025-11-11 16:58:10 +01:00
Aliaksandr Valialkin
68f670cbc5 docs/victoriametrics/FAQ.md: add Why IndexDB is so large? chapter, since this is quite frequent question from VictoriaMetrics users 2025-11-11 16:48:03 +01:00
Aliaksandr Valialkin
dac7e8d554 docs/victoriametrics/FAQ.md: add trailing slashes to links to posts about VictoriaMetrics components
Trailing slashes are needed to make the URLs canonical and avoid redirects.

This is a follow-up for d4aefcecc4
2025-11-11 15:59:20 +01:00
Aliaksandr Valialkin
3db6c40b70 deployment: update Go builder from v1.25.3 to v1.25.4
See https://github.com/golang/go/issues?q=milestone%3AGo1.25.4%20label%3ACherryPickApproved
2025-11-11 12:15:12 +01:00
Aliaksandr Valialkin
90c69a07a9 docs/victoriametrics/stream-aggregation: fix broken links to https://docs.victoriametrics.com/victoriametrics/stream-aggregation/configuration/#stream-aggregation-config ( was https://docs.victoriametrics.com/victoriametrics/stream-aggregation/configuration/#aggregation-config )
This is a follow-up after the commit f385e36b96
2025-11-11 02:01:04 +01:00
Aliaksandr Valialkin
f5b1092e07 lib/workingsetcache: prevent from duplicate misleading log messages when reading the cache from file
While at it, improve logging when reading workingsetcache from file and saving it to file.
This should simplify troubleshooting various issues related to the workingsetcache.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9750
2025-11-11 00:02:09 +01:00
Aliaksandr Valialkin
3e6fc445a9 lib/workingsetcache: properly update cache stats
This is a follow-up for the commit 1130adebad .

The EntriesCount, BytesSize and MaxBytesSize metrics must take into account the data
stored in both prev and curr caches, since this data occupies memory and it is expected
that the exposed metrics - vm_cache_entries, vm_cache_size_bytes and vm_cache_size_max_bytes -
take into account all the memory occupied by the corresponding caches.

The GetCalls, SetCalls, Collisions and Corruptions metrics must take into account stats
from the curr cache only, since the corresponding stats for the prev cache is already taken
during the rotation (when moving curr to prev and resetting the previous prev).

The Misses metric must take into account only misses in the prev cache, since these misses
mean that the given entry is missing from both the curr and the prev cache.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9553
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9715
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9657

While at it, make sure that the cache mode and cache stats are always read and updated under the c.mu lock.
This may help resolving races similar to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9921
2025-11-10 22:15:50 +01:00
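The accounting rules above can be sketched like this (the `stats` struct is a simplified stand-in for the real fastcache stats, not the actual type):

```go
package main

import "fmt"

// stats is a hypothetical subset of fastcache-style counters.
type stats struct {
	EntriesCount, BytesSize    uint64
	GetCalls, SetCalls, Misses uint64
}

// combineStats illustrates the rules from the commit message: size-related
// metrics sum both prev and curr caches, call counters are taken from curr
// only (prev's were accounted during rotation), and misses only count when
// the prev cache also missed.
func combineStats(prev, curr stats) stats {
	return stats{
		EntriesCount: prev.EntriesCount + curr.EntriesCount,
		BytesSize:    prev.BytesSize + curr.BytesSize,
		GetCalls:     curr.GetCalls,
		SetCalls:     curr.SetCalls,
		Misses:       prev.Misses,
	}
}

func main() {
	s := combineStats(
		stats{EntriesCount: 10, Misses: 2},
		stats{EntriesCount: 5, GetCalls: 100, Misses: 7},
	)
	fmt.Println(s.EntriesCount, s.GetCalls, s.Misses) // 15 100 2
}
```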
Aliaksandr Valialkin
1130adebad Revert "lib/workingsetcache: properly count workingsetcache metrics "
This reverts commit 89fd27c922.

Reason for revert: this commit adds scalability bottleneck in the fast path - Cache.Get() -
in the form of c.getCalls.Add(). This call doesn't scale on systems with a big number of CPU cores,
since it needs to atomically update shared memory from a big number of CPU cores.

The Cache.Get() is called per every ingested sample when obtaining TSID by MetricName from the cache
at lib/storage.Storage.get(), so this can be a major bottleneck on systems with many CPU cores.

The solution for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9553
is to properly track cache requests and misses: cache requests must be taken into account
only at the curr cache, while cache misses must be taken into account only at the prev cache.
This will be implemented in the follow-up commit.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9657
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9715
2025-11-10 21:43:33 +01:00
Aliaksandr Valialkin
1fb3f105c3 Revert "lib/storage: Introduce vm_cache_eviction_bytes_total metric"
This reverts commit 994dadb4d5.

Reason for revert: the introduced metrics have zero practical applicability.

The lib/workingsetcache doesn't need manual tuning in most cases - its size
is automatically adjusted to the given working set, if the working set is smaller
than the cache size limit set at the cache creation time. The limit just prevents
unbounded cache growth for large working sets.

If the working set exceeds the given limit, then the cache may become inefficient
because of the increased cache miss rate. The introduced metrics do not help determining
the needed cache size.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9293
2025-11-10 16:49:51 +01:00
Aliaksandr Valialkin
38d3033e66 go.mod: update github.com/VictoriaMetrics/fastcache from v1.13.1 to v1.13.2
This is needed for removing the EvictedBytes metric from the fastcache.

See the description of f6080737bb for details.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9293
Updates https://github.com/VictoriaMetrics/fastcache/pull/93
2025-11-10 16:43:33 +01:00
Aliaksandr Valialkin
cfba80ed4d lib/workingsetcache: replace Cache.Save() with Cache.MustSave()
If the cache cannot be saved to the given file, this is a fatal error.
It is better to log this fatal error inside Cache.MustSave() and then exit
instead of returning it to the caller. This makes the code clearer at the caller side.
2025-11-10 16:00:59 +01:00
Aliaksandr Valialkin
ee0eff0ca2 lib/workingsetcache: improve log messages for various expected cases when reading the cache from files
The improved log messages should help users understand the logged cases
without asking VictoriaMetrics developers about them.
2025-11-10 14:54:22 +01:00
Aliaksandr Valialkin
181f95aaf6 deployment/docker/rules/alerts-health.yml: clarify the description of the TooManyTSIDMisses alert after the commit 30641b201b
It is expected that the number of TSID misses over the last 5 minutes is zero in steady state.
If it is non-zero, then something is going wrong. That's why it is better to use the increase() function
instead of rate() for this alert.
2025-11-10 14:35:36 +01:00
Aliaksandr Valialkin
58485df033 lib/storage: consistently rename searchMetricNameWithCache() to SearchTSIDs() across comments after the commit 90d23d7c9f
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9765
2025-11-10 14:30:55 +01:00
Aliaksandr Valialkin
30641b201b deployment/docker/rules/alerts-health.yml: clarify the description for the TooManyTSIDMisses alert
This alert is expected after unclean shutdown (OOM, power off, kill -9) of VictoriaMetrics.
It should go away in a few minutes after the restart while VictoriaMetrics deletes metricIDs
for the missing MetricID->TSID entries which were created for the newly registered time series
just before unclean shutdown. It is OK to delete such metricIDs, since the corresponding time series
will be re-registered again. See the commit 20812008a7 .

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3502
2025-11-10 14:25:22 +01:00
Aliaksandr Valialkin
b2cd3bf1f2 lib/workingsetcache: properly initialize new cache when the stored cache has unexpected size
This is a follow-up for 9bc541587b
2025-11-10 12:49:04 +01:00
Artem Fetishev
5336091785 lib/storage: Fix data race in containsTimeRange() (#9965)
When one goroutine attempts to update the min timestamp under the lock, it
could have been updated already by another goroutine with a smaller
timestamp. As a result the goroutine will update the timestamp with a
bigger value.

A simple unit test (included in this commit) demonstrates that.

Additionally, use a simple Mutex instead of RWMutex. RWMutexes only
introduce unnecessary overhead for operations as simple as retrieving
a value from a map, and a regular Mutex should be preferred.

Thanks to @valyala for spotting a bug and the advice on RWMutexes.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-11-07 15:20:29 +01:00
Yury Molodov
5b11f6f384 app/vmui: improve chart performance and fix median calculation
This commit improves overall performance and stability of chart rendering,
refines time series generation, and fixes incorrect median calculation
in metric series.
JavaScript execution time improved by up to ×6 on large datasets.

**Changes:**

* Reworked `getTimeSeries` - one point per pixel.
* Added legend auto-collapse when >20 items.
* Switched median algorithm to Quickselect (Floyd–Rivest).
* Unified array stats functions (`min`, `max`, `avg`, `median`) into a
single pass.
* Removed unused `last` value from series.
* Renamed `roundToMilliseconds` to `roundToThousandths` and moved to
`utils/math`.
* Replaced `isSupportedDuration` with `parseSupportedDuration`, added
fractional duration support.

Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9699
Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9926
2025-11-07 16:19:56 +03:00
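The median change above relies on quickselect: finding the k-th element in expected O(n) without fully sorting. The VMUI commit uses the Floyd–Rivest variant; this is a plain Lomuto-partition sketch of the same idea (for even n it returns the upper middle element):

```go
package main

import "fmt"

// medianQuickselect reorders a in place and returns the element that would
// sit at index len(a)/2 in sorted order.
func medianQuickselect(a []float64) float64 {
	k := len(a) / 2
	lo, hi := 0, len(a)-1
	for lo < hi {
		p := a[hi] // pivot: last element of the current range
		i := lo
		for j := lo; j < hi; j++ {
			if a[j] < p {
				a[i], a[j] = a[j], a[i]
				i++
			}
		}
		a[i], a[hi] = a[hi], a[i] // place the pivot at its final index i
		switch {
		case i == k:
			return a[k]
		case i < k:
			lo = i + 1 // the k-th element is in the right part
		default:
			hi = i - 1 // the k-th element is in the left part
		}
	}
	return a[k]
}

func main() {
	fmt.Println(medianQuickselect([]float64{9, 1, 7, 3, 5})) // 5
}
```

Unlike a full sort, only the partition containing index k is ever recursed into, which is what makes single-pass stats over large series cheap.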
Zhu Jiekun
e8975e560d lib/promscrape: prevent early exit when one of multiple service discovery configs fails
When multiple service discovery configs of the same type exist (e.g.,
`hetzner_sd_config`), vmagent currently behaves as follows:
1. Attempts to request each config.
2. Exits immediately if any config returns an error.
3. Skips the remaining configs and falls back to the previous service
discovery result.

The correct behavior—more compatible with Prometheus—should be:
1. Attempt to request each config.
2. Collect all valid results.
3. Use the valid results if there's at least one; otherwise (all
failed), fall back to the previous SD result.

Scrape example:

```yaml
scrape_configs:
  - job_name: hetzner-default
    hetzner_sd_configs:
      - role: "hcloud"
        authorization:
          credentials: "some_valid_value"
      - role: "hcloud"
        authorization:
          credentials: "some_wrong_value"
```

Expected outcome: 
- At least targets from `credentials: "some_valid_value"` should appear
in the service discovery result.

Current outcome:
- the error from `credentials: "some_wrong_value"` leads to an **empty**
result.

This issue should affect service discovery mechanisms which use the
`getScrapeWorkGeneric` function:

- `azure_sd_config`
- `consul_sd_config`
- `consulagent_sd_config`
- `digitalocean_sd_config`
- `dns_sd_config`
- `docker_sd_config`
- `dockerswarm_sd_config`
- `ec2_sd_config`
- `eureka_sd_config`
- `gce_sd_config`
- `hetzner_sd_config`
- `http_sd_config`
- `kuma_sd_config`
- `marathon_sd_config`
- `nomad_sd_config`
- `openstack_sd_config`
- `ovhcloud_sd_config`
- `puppetdb_sd_config`
- `vultr_sd_config`
- `yandexcloud_sd_config`

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9375
2025-11-07 16:18:11 +03:00
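The corrected discovery loop can be sketched as follows (the `Target` type and function shapes are illustrative, not the actual `getScrapeWorkGeneric` code):

```go
package main

import (
	"errors"
	"fmt"
)

type Target struct{ Addr string }

var errAuth = errors.New("401 unauthorized")

// discoverAll queries every SD config, keeps the results of the configs
// that succeeded, and falls back to prev only when all of them failed -
// instead of discarding everything on the first error.
func discoverAll(configs []func() ([]Target, error), prev []Target) []Target {
	var targets []Target
	ok := false
	for _, discover := range configs {
		ts, err := discover()
		if err != nil {
			continue // log and keep going instead of returning early
		}
		targets = append(targets, ts...)
		ok = true
	}
	if !ok {
		return prev // all configs failed: reuse the previous SD result
	}
	return targets
}

func main() {
	good := func() ([]Target, error) { return []Target{{Addr: "10.0.0.1:9100"}}, nil }
	bad := func() ([]Target, error) { return nil, errAuth }
	fmt.Println(discoverAll([]func() ([]Target, error){good, bad}, nil))
}
```

With this shape, the valid `hcloud` config from the example above still contributes its targets even when the sibling config with bad credentials errors out.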
Yury Molodov
3b5822398c app/vmui: fix points display; add option to show all points
* Fix rendering of isolated points at gaps.
* Add toggle to always show all points (even when connected by a line).

Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9666
2025-11-07 16:09:28 +03:00
Yury Molodov
742a3384dc app/vmui: fix incorrect median value calculation in series (#9926)
Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2025-11-07 13:42:22 +02:00
Clément Nussbaumer
8a1beef46d lib/promscrape/kubernetes: add namespace metadata discovery
permits attaching namespace metadata to pods, services, ingresses,
endpoints and endpointslices for kubernetes service-discovery.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7486
2025-11-07 11:49:08 +03:00
Fred Navruzov
22158b7272 docs/vmanomaly: patch release v1.27.1 (#9964)
### Describe Your Changes

Patch release doc updates (v1.27.1)

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-06 14:17:18 +02:00
Aliaksandr Valialkin
4a18f4e49d docs/victoriametrics/Articles.md: add https://www.tigrisdata.com/blog/billing-prometheus/ 2025-11-06 11:44:16 +01:00
Aliaksandr Valialkin
85cbbfe0bc lib/logstorage: verify that nobody holds references to parts when closing the partition
This is needed in order to detect and prevent cases of improper usage of partitions
while they are closed.

This is a follow-up for the commit 9725ee50ec .
2025-11-06 11:37:44 +01:00
Max Kotliar
d5705a9647 docs/guides: use canonical link 2025-11-05 21:09:19 +02:00
Max Kotliar
c90c7c3123 Revert "lib/envflag: apply -secret.flags inside envflag.Parse function (#9942)"
This reverts commit 1b11031ec8.

There is an import cycle because of the change in the enterprise version of VM
2025-11-05 21:01:48 +02:00
Max Kotliar
1b11031ec8 lib/envflag: apply -secret.flags inside envflag.Parse function (#9942)
### Describe Your Changes

Follow up on PR:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9839, which
addresses review comment

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9839#discussion_r2477729886

Alex: 
```
this design decision isn't good, since it will lead to potential security issues over time when we'll forget adding ApplySecretFlags() call after the flag.Parse() call or add it at the wrong place. BTW, we do not call flag.Parse() explicitly - instead envflag.Parse() is called. So it is natural to call ApplySecretFlags() inside this call. Are there restrictions which prevent from doing this? If there are no restrictions, then there is no need in making this function public - it will be called explicitly inside envflag.Parse().
```

There is no changelog entry as there is no change in user-visible
behavior.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-05 20:53:43 +02:00
Max Kotliar
e7b7015eb1 docs: clarify why we advise 50% free RAM. Add link to discussion (#9943)
### Describe Your Changes

Based on answer
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9895#issuecomment-3442491150

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-05 20:51:17 +02:00
Max Kotliar
6ee1edeb4d docs: fix links in VictoriaMetrics topologies guide 2025-11-05 20:45:32 +02:00
Aliaksandr Valialkin
9725ee50ec lib/mergeset: verify that Table parts are no longer used at Table.MustClose()
This should catch possible errors related to improper release of Table parts.
Fix such an error at TestTableCreateSnapshotAt by properly closing all the initialized
TableSearch instances.

Thanks to @rtm0 for pointing to this issue.
2025-11-05 13:25:16 +01:00
f41gh7
780cb1bf05 docs: mention latest v1.129.1 release 2025-11-04 18:07:57 +03:00
f41gh7
a34d0d6056 docs: mention latest LTS releases
v1.110.23 and v1.122.8
2025-11-04 18:07:57 +03:00
f41gh7
5e98e0cff5 CHANGELOG.md: cut v1.129.1 release 2025-11-04 13:15:37 +03:00
Nikolay
51b44afd34 lib: properly apply snappy Decode limits
Previously, the snappy decoder didn't take into account the request size limits
applied by VictoriaMetrics components. In case of an incorrectly formed snappy
block, a VictoriaMetrics component could allocate excessive memory, which may lead to OOM errors.

This commit makes ingest endpoints check the snappy block size header against the configured max request size limits.
2025-11-04 13:04:27 +03:00
Fred Navruzov
5a8d7984ca docs/vmanomaly: release v1.27.0 (#9954)
### Describe Your Changes

Docs update to follow vmanomaly's release v1.27.0, including:
- UI page update (changelogs, auth, new screenshots)
- Migration guide addition
- Cross-references of the above and version bumps

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-11-04 10:22:20 +04:00
Zakhar Bessarab
57752ca2c0 docs: update VM version to v1.129.0
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-11-03 16:55:32 +04:00
Zakhar Bessarab
171cdf0614 deployment/docker: update VM version to v1.129.0
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-11-03 16:53:21 +04:00
Artem Fetishev
7d19ec2e4d lib/storage: extract storage file names into constants (#9944)
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-10-31 20:32:16 +01:00
Zakhar Bessarab
a9b5033d50 docs/changelog: backport LTS changelog
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-10-31 22:41:17 +04:00
Zakhar Bessarab
a783b2048f docs/changelog: cut v1.129.0
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-10-31 20:04:01 +04:00
Zakhar Bessarab
fd48c72a83 docs: update version tooltips
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-10-31 19:59:27 +04:00
Zakhar Bessarab
05e52fa05a app/vmselect: run make vmui-update
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-10-31 19:47:16 +04:00
Zakhar Bessarab
b49e8d032f make: add vmutils target for linux/s390x for CI builds
Currently, CI builds are ignoring linux/s390x due to a missing target. See an example: https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/actions/runs/18971198439/job/54179142369

Follow-up for 255a0cf6, 73b10d76

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-10-31 15:45:33 +04:00
Zakhar Bessarab
73b10d7621 make: include s390x binaries into release artifacts (#9941)
Previously, it was possible to build s390x binaries with make targets, but
those builds were not included in release artifacts. Update the release
targets to include the s390x builds in release artifacts.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9697

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-10-31 15:16:07 +04:00
hagen1778
18268c3d13 docs: re-qualify load-balancing optimization to feature
Motivation: the change updates the load-balancing logic, enhancing it rather than fixing
a critical bug. Such an enhancement should not be ported to LTS versions.

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9712

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-31 09:52:34 +01:00
hagen1778
bfb49c55af docs: add changelog for vmalert UI fix
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9892
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9909
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-31 09:34:29 +01:00
Kirill Yurkov
bd7fed9b41 docs: address review comments from PR #9919 (#9940)
- Standardize all sections to use 'Recommended for:' instead of mixed
'For whom:' and 'Target audience:'
- Fix wording: 'Query evaluation is always local' 

Addresses comments in #9919
2025-10-31 09:21:52 +01:00
Roman Khavronenko
a85c5830c1 app/vmalert: limit delayBeforeStart up to 5min (#9930)
vmalert spreads the moment each group starts its evaluation across the
`[0..group.interval]` duration. This approach avoids the thundering herd
problem where all groups execute their rules simultaneously on vmalert
start. It was introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/724

While this works great for most configs, for groups with big evaluation
intervals (30min, 60min) the first evaluation can be delayed
significantly.
This change introduces a start delay limit via the new flag
`--group.maxStartDelay` (5m default).
It limits the `[0..group.interval]` start delay to
`[0..math.min(--group.maxStartDelay, group.interval)]`.
So all groups will start within the first 5m or earlier.

The --group.maxStartDelay is ignored if the user sets `eval_offset`.

The 5m default was picked high enough not to affect users with
relatively short evaluation intervals.

-----------

Based on https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9929

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-31 09:20:46 +01:00
Andrii Chubatiuk
009ddb9ce1 vmui: wrap annotations in alerting (#9909)
similar to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9892
but for alerting tab in vmui
2025-10-31 09:13:32 +01:00
Zakhar Bessarab
91bce8d3b4 app/vmbackupmanager: enforce newline at the end of CLI result (#956)
* app/vmbackupmanager: enforce newline at the end of CLI result

Previously, vmbackupmanager only printed the response from the API, which did not include a newline character. That led to issues with rendering the next command prompt when using a shell.

Always append a newline character to avoid breaking shell formatting when using CLI mode.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>

* Update docs/victoriametrics/changelog/CHANGELOG.md

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-10-31 11:52:53 +04:00
Andrii Chubatiuk
a7d69cc51e app/vmbackupmanager: create vm_backup_last_created_at metric for latest backup (#954) 2025-10-31 11:52:47 +04:00
Aliaksandr Valialkin
2e6f42bff8 lib/promauth/config.go: typo fix: It -> If
The typo was spotted in https://github.com/VictoriaMetrics/VictoriaLogs/pull/765/files#r2434359693
2025-10-30 18:20:00 +01:00
Aliaksandr Valialkin
7ae1fd9614 docs/victoriametrics/Articles.md: add a link to https://medium.com/@vijayrauniyar1818/how-we-eliminated-10k-year-in-aws-cross-zone-data-transfer-costs-with-zone-aware-kubernetes-09fff0c2435b 2025-10-30 17:24:17 +01:00
Aliaksandr Valialkin
51d1c16230 app/vmauth: make load distribution more even among backends which execute queries with varying durations
The load distribution could be uneven when short queries arrive at vmauth while a part of the backends are busy
with long-running queries. In this case the major load goes to the backend that follows a row of busy backends.

Suppose we have four backends - b1, b2, b3 and b4. The first two backends are busy with a bigger number
of long-running queries than b3 and b4. Then 75% of short queries will go to b3, while only 25%
of short queries will go to b4.

The new algorithm makes the distribution more even in these cases by storing the backend
after the chosen backend as the candidate for the next query (its index is stored in the atomicCounter).
Races when updating the atomicCounter from concurrently executed queries are avoided by using CompareAndSwap() -
if a concurrent query updated it first, then the current query won't overwrite it with the outdated value.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9712
2025-10-30 14:37:48 +01:00
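The commit message above can be illustrated with a minimal sketch: pick the least-loaded backend starting from a stored candidate index, then advance that index via CompareAndSwap. The types and field names here are hypothetical, not vmauth's actual code:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type backend struct{ inflight atomic.Int64 }

type balancer struct {
	backends []*backend
	next     atomic.Uint32 // candidate index for the next query
}

// pick scans the backends starting from the stored candidate, chooses the
// least-loaded one, and stores the slot after the chosen backend as the
// candidate for the next query. CompareAndSwap ensures a concurrent query
// that already advanced the counter isn't overwritten with a stale value.
func (b *balancer) pick() *backend {
	start := b.next.Load()
	n := uint32(len(b.backends))
	bestIdx := start % n
	for i := uint32(0); i < n; i++ {
		idx := (start + i) % n
		if b.backends[idx].inflight.Load() < b.backends[bestIdx].inflight.Load() {
			bestIdx = idx
		}
	}
	b.next.CompareAndSwap(start, (bestIdx+1)%n)
	return b.backends[bestIdx]
}

func main() {
	// b1 and b2 are busy with long-running queries; short queries now
	// alternate between b3 and b4 instead of mostly landing on b3.
	b := &balancer{backends: []*backend{{}, {}, {}, {}}}
	b.backends[0].inflight.Store(5)
	b.backends[1].inflight.Store(5)
	first := b.pick()
	second := b.pick()
	fmt.Println(first == b.backends[2], second == b.backends[3]) // true true
}
```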
Max Kotliar
874f8b31a3 docs/changelog: fix pr link in v1.122.5 tag 2025-10-30 15:07:39 +02:00
Artem Fetishev
d3ac2473c0 lib/storage: Add data ingestion benchmarks for various data patterns
Data patterns considered:

- Same series, same date
- Same series, different dates
- Different series, same date
- Different series, different dates

To make sure that the pattern condition holds, a new storage instance is
started every benchmark iteration.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9912
2025-10-30 14:39:28 +03:00
Zakhar Bessarab
3f45690342 app/vmctl/remote-read: allow providing multiple label filters
Previously, vmctl only accepted one label for filtering. Extend this to
allow providing multiple filters at once. This is useful when migrating
large volumes of data, as it allows narrowing down the migration scope
for a single run so that the source side is not overwhelmed by the
migration.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9917/
2025-10-30 14:38:30 +03:00
Roman Khavronenko
3abd442742 app/vmalert: properly show last evaluation with 0 value
Before, rules that hadn't been evaluated yet were showing weird values in
vmalert's UI. It was happening because of the
`time.Since(r.LastEvaluation).Seconds()` expression when
`r.LastEvaluation` had a zero value.

With this change, rules that haven't been evaluated yet show `Never` in
the Updated column instead.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9924
2025-10-30 14:37:34 +03:00
Zhu Jiekun
5cec04d8e2 lib/httpserver: revert HTTP/2 support
This commit reverts commit
d6bbfaf164 for the following reasons:

1. HTTP/2 carries security risks.
2. Most components in the VictoriaMetrics stack do not require HTTP/2
support.
3. While HTTP/2 support was available only as an option in the previous
commit, there remains a potential risk of misusing this option and
enabling HTTP/2 inadvertently.

Components (e.g., VictoriaTraces) that require HTTP/2 support
should currently build an HTTP server manually with the built-in packages,
instead of using `lib/httpserver` from VictoriaMetrics. If the mentioned
issue is resolved in the future and more components need HTTP/2, this
support can be reintroduced into `lib/httpserver`.
 
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9927
2025-10-30 14:36:48 +03:00
Aliaksandr Valialkin
60f777620f lib/fs/fsutil: set the default value for -fs.maxConcurrency depending on the number of available CPU cores
This should reduce the need to tune this flag on systems with different numbers of CPU cores.
16 concurrent file operations per CPU should give quite low Go scheduling latency (~10ms)
according to https://github.com/VictoriaMetrics/VictoriaLogs/issues/774#issuecomment-3456814064

This is a follow-up for the commit 8a9a40dbdd
2025-10-30 11:57:10 +01:00
Aliaksandr Valialkin
8a9a40dbdd lib/fs/fsutil: add -fs.maxConcurrency command-line flag for tuning concurrent operations with files
This flag helps trade off Go scheduling latency on systems with a small number of CPU cores
against data ingestion performance on systems with high-latency storage such as NFS or Ceph.

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/774
Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/517
2025-10-30 11:46:27 +01:00
hagen1778
fde4b4013a docs: rm extra line
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-30 11:02:16 +01:00
hagen1778
bf69b0d686 docs: order recent changes by components
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-30 11:01:43 +01:00
hagen1778
a866474918 dashboards: run make dashboards-sync
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-30 10:58:58 +01:00
hagen1778
22f6cb6339 docs: mention PR author for dashboard change
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-30 10:56:08 +01:00
Samarth Bagga
0d7b7649bf dashboards: enable search for non default flags panel (#9928)
### Describe Your Changes

Added search for non-default flags by editing the Grafana configs.
Resolves #9910

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-10-30 10:53:36 +01:00
nemobis
74611ce6f2 docs: Update RELEX figures (#9931)

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-10-30 10:20:27 +01:00
Roman Khavronenko
a5dd0324a9 app/vmalert: simplify delayBeforeStart func (#9929)
It is a cosmetic change: it simplifies the function signature by making it a
method of the Group struct.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-30 10:19:05 +01:00
hagen1778
45c0d40127 docs: fix typo after 3e0aa46
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-30 10:18:28 +01:00
Andrii Chubatiuk
fc978c95af app/vmalert: use search expression to match group and file names (#9920)
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9886

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-10-30 10:01:25 +01:00
Kirill Yurkov
8e99efe0fa docs: add VictoriaMetrics architectures guide from startups to hypers… (#9919)
The new guide section about architecture from scratch to hyperscale!
2025-10-30 09:55:48 +01:00
hagen1778
3e0aa46fdb docs: reorder template functions alphabetically
follow-up after ea41fea453

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-30 09:54:40 +01:00
Anh-Dung Nguyen
ea41fea453 app/vmalert: add now template (#9913)
### Describe Your Changes

Related to #9864, add "now" as a template function in vmalert rules templating and
update the docs. I haven't been able to test the docs change as I can't
run make docs-debug locally, so if anyone knows how to do it locally,
please let me know!

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Hui Wang <haley@victoriametrics.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2025-10-30 09:52:48 +01:00
hagen1778
0165108a8f docs: move change line to the right place
Follow-up after 2652a7c762

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-30 09:52:17 +01:00
Hui Wang
2652a7c762 vmalert: support alert_relabel_configs per each notifier in -notifier.config file (#9736)
fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5980
2025-10-30 09:47:40 +01:00
Stephan Burns
82aacc5b75 vmagent/docs: grammar changes (#9863)
### Describe Your Changes

Lots of small changes to grammar to make the docs flow nicer.

I'm sorry that this ended up being such a large PR. I will split these
up in the future.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Co-authored-by: Phuong Le <39565248+func25@users.noreply.github.com>
2025-10-28 16:56:03 +02:00
dependabot[bot]
0544bb12e0 build(deps): bump vite from 7.1.5 to 7.1.11 in /app/vmui/packages/vmui (#9885)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite)
from 7.1.5 to 7.1.11.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vitejs/vite/releases">vite's
releases</a>.</em></p>
<blockquote>
<h2>v7.1.11</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.11/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.10</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.10/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.9</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.9/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.8</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.8/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.7</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.7/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.6</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.6/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/vitejs/vite/blob/main/packages/vite/CHANGELOG.md">vite's
changelog</a>.</em></p>
<blockquote>
<h2><!-- raw HTML omitted --><a
href="https://github.com/vitejs/vite/compare/v7.1.10...v7.1.11">7.1.11</a>
(2025-10-20)<!-- raw HTML omitted --></h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>dev:</strong> trim trailing slash before
<code>server.fs.deny</code> check (<a
href="https://redirect.github.com/vitejs/vite/issues/20968">#20968</a>)
(<a
href="f479cc57c4">f479cc5</a>)</li>
</ul>
<h3>Miscellaneous Chores</h3>
<ul>
<li><strong>deps:</strong> update all non-major dependencies (<a
href="https://redirect.github.com/vitejs/vite/issues/20966">#20966</a>)
(<a
href="6fb41a260b">6fb41a2</a>)</li>
</ul>
<h3>Code Refactoring</h3>
<ul>
<li>use subpath imports for types module reference (<a
href="https://redirect.github.com/vitejs/vite/issues/20921">#20921</a>)
(<a
href="d0094af639">d0094af</a>)</li>
</ul>
<h3>Build System</h3>
<ul>
<li>remove cjs reference in files field (<a
href="https://redirect.github.com/vitejs/vite/issues/20945">#20945</a>)
(<a
href="ef411cee26">ef411ce</a>)</li>
<li>remove hash from built filenames (<a
href="https://redirect.github.com/vitejs/vite/issues/20946">#20946</a>)
(<a
href="a81730754d">a817307</a>)</li>
</ul>
<h2><!-- raw HTML omitted --><a
href="https://github.com/vitejs/vite/compare/v7.1.9...v7.1.10">7.1.10</a>
(2025-10-14)<!-- raw HTML omitted --></h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>css:</strong> avoid duplicate style for server rendered
stylesheet link and client inline style during dev (<a
href="https://redirect.github.com/vitejs/vite/issues/20767">#20767</a>)
(<a
href="3a92bc79b3">3a92bc7</a>)</li>
<li><strong>css:</strong> respect emitAssets when cssCodeSplit=false (<a
href="https://redirect.github.com/vitejs/vite/issues/20883">#20883</a>)
(<a
href="d3e7eeefa9">d3e7eee</a>)</li>
<li><strong>deps:</strong> update all non-major dependencies (<a
href="879de86935">879de86</a>)</li>
<li><strong>deps:</strong> update all non-major dependencies (<a
href="https://redirect.github.com/vitejs/vite/issues/20894">#20894</a>)
(<a
href="3213f90ff0">3213f90</a>)</li>
<li><strong>dev:</strong> allow aliases starting with <code>//</code>
(<a
href="https://redirect.github.com/vitejs/vite/issues/20760">#20760</a>)
(<a
href="b95fa2aa75">b95fa2a</a>)</li>
<li><strong>dev:</strong> remove timestamp query consistently (<a
href="https://redirect.github.com/vitejs/vite/issues/20887">#20887</a>)
(<a
href="6537d15591">6537d15</a>)</li>
<li><strong>esbuild:</strong> inject esbuild helpers correctly for
esbuild 0.25.9+ (<a
href="https://redirect.github.com/vitejs/vite/issues/20906">#20906</a>)
(<a
href="446eb38632">446eb38</a>)</li>
<li>normalize path before calling <code>fileToBuiltUrl</code> (<a
href="https://redirect.github.com/vitejs/vite/issues/20898">#20898</a>)
(<a
href="73b6d243e0">73b6d24</a>)</li>
<li>preserve original sourcemap file field when combining sourcemaps (<a
href="https://redirect.github.com/vitejs/vite/issues/20926">#20926</a>)
(<a
href="c714776aa1">c714776</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>correct <code>WebSocket</code> spelling (<a
href="https://redirect.github.com/vitejs/vite/issues/20890">#20890</a>)
(<a
href="29e98dc3ef">29e98dc</a>)</li>
</ul>
<h3>Miscellaneous Chores</h3>
<ul>
<li><strong>deps:</strong> update rolldown-related dependencies (<a
href="https://redirect.github.com/vitejs/vite/issues/20923">#20923</a>)
(<a
href="a5e3b064fa">a5e3b06</a>)</li>
</ul>
<h2><!-- raw HTML omitted --><a
href="https://github.com/vitejs/vite/compare/v7.1.8...v7.1.9">7.1.9</a>
(2025-10-03)<!-- raw HTML omitted --></h2>
<h3>Reverts</h3>
<ul>
<li><strong>server:</strong> drain stdin when not interactive (<a
href="https://redirect.github.com/vitejs/vite/issues/20885">#20885</a>)
(<a
href="12d72b0538">12d72b0</a>)</li>
</ul>
<h2><!-- raw HTML omitted --><a
href="https://github.com/vitejs/vite/compare/v7.1.7...v7.1.8">7.1.8</a>
(2025-10-02)<!-- raw HTML omitted --></h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>css:</strong> improve url escape characters handling (<a
href="https://redirect.github.com/vitejs/vite/issues/20847">#20847</a>)
(<a
href="24a61a3f54">24a61a3</a>)</li>
<li><strong>deps:</strong> update all non-major dependencies (<a
href="https://redirect.github.com/vitejs/vite/issues/20855">#20855</a>)
(<a
href="788a183afc">788a183</a>)</li>
<li><strong>deps:</strong> update artichokie to 0.4.2 (<a
href="https://redirect.github.com/vitejs/vite/issues/20864">#20864</a>)
(<a
href="e670799e12">e670799</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8b69c9e32c"><code>8b69c9e</code></a>
release: v7.1.11</li>
<li><a
href="f479cc57c4"><code>f479cc5</code></a>
fix(dev): trim trailing slash before <code>server.fs.deny</code> check
(<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20968">#20968</a>)</li>
<li><a
href="6fb41a260b"><code>6fb41a2</code></a>
chore(deps): update all non-major dependencies (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20966">#20966</a>)</li>
<li><a
href="a81730754d"><code>a817307</code></a>
build: remove hash from built filenames (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20946">#20946</a>)</li>
<li><a
href="ef411cee26"><code>ef411ce</code></a>
build: remove cjs reference in files field (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20945">#20945</a>)</li>
<li><a
href="d0094af639"><code>d0094af</code></a>
refactor: use subpath imports for types module reference (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20921">#20921</a>)</li>
<li><a
href="ed4a0dc913"><code>ed4a0dc</code></a>
release: v7.1.10</li>
<li><a
href="c714776aa1"><code>c714776</code></a>
fix: preserve original sourcemap file field when combining sourcemaps
(<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20926">#20926</a>)</li>
<li><a
href="446eb38632"><code>446eb38</code></a>
fix(esbuild): inject esbuild helpers correctly for esbuild 0.25.9+ (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20906">#20906</a>)</li>
<li><a
href="879de86935"><code>879de86</code></a>
fix(deps): update all non-major dependencies</li>
<li>Additional commits viewable in <a
href="https://github.com/vitejs/vite/commits/v7.1.11/packages/vite">compare
view</a></li>
</ul>
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-28 16:45:07 +02:00
Max Kotliar
70c293467a app/vmselect: Enable log slow query stats with 5s default
Rationale: Having query stats logging enabled by default can greatly
help in investigating incidents.

Currently, it is disabled by default, so many users don’t enable it, and
when issues occur there are no stats available.

After discussion with the team, a 5s threshold was agreed upon as a
reasonable default to capture meaningful slow query data without
excessive logging.
2025-10-28 16:43:04 +02:00
Max Kotliar
90b4c84ad5 docs/changelog: put misplaced changelog entries to tip 2025-10-28 16:37:15 +02:00
Hui Wang
0a194d067a stream aggregation: change the behavior when both `streamAggr.dropInp… (#9877)
stream aggregation: change the behavior when both `streamAggr.dropInput`
and `streamAggr.keepInput` are set to true

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9724,
making dropInput and keepInput work separately.

<img width="744" height="366" alt="image"
src="https://github.com/user-attachments/assets/7ebb3d1e-872f-4789-8dd1-c4e3f80a84de"
/>

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-10-28 16:19:33 +02:00
Hui Wang
9ffe965063 vmagent: add `/remotewrite-relabel-config` and `/remotewrite-url-relabel-config` APIs (#9722)
The new APIs return the `-promscrape.config` and
`-remoteWrite.relabelConfig` flag values

part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9504

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-10-27 13:52:46 +01:00
f41gh7
775ee71fad app/{vminsert,vmstorage}: implement RPC protocol for vmstorage-vminsert communication
This commit adds a new RPC protocol for vminsert-vmstorage communication;
it acts in the same way as the vmselect-vmstorage RPC.

It's implemented with new handshake hello methods in a backward-compatible
way. The server attempts to parse the RPC only if the client sends the new
Hello message, while the client falls back to the old Hello message if the
server closes the connection.

This change is needed for the new metrics metadata forwarded from vminsert
into vmstorage.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2974
Changes extracted from PR:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9487
2025-10-27 09:56:07 +01:00
Artem Fetishev
75b597e727 lib/storage: fix loading nextDayMetricIDs cache from a file for different indexDB generation (#9911)
Previously, if a storage started with a curr indexDB different from the one
stored in the nextDayMetricIDs cache file, the cache would still be loaded
into memory, possibly affecting the next-day prefill.

This is an unlikely case but it is still possible when:

- A programmer makes a mistake in the code and uses something else
instead of idbCurr.generation.
- Downgrading from pt-index to previous version

Related to #7599 and #8134.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-10-26 19:43:59 +01:00
Artem Fetishev
f3b0f4292d lib/storage: add a unit test for next day idb prefill (#9906)
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-10-25 18:16:32 +02:00
Roman Khavronenko
5fa87af6be app/vmalert/datasource: explicitly check response type during replay (#9868)
This change validates that the QueryRange() method for the Prometheus
datasource receives a response with the `matrix` data type. It throws an
error otherwise.

The change is needed to avoid confusions like in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9779.

The fix is not elegant, but it should be simple from a code-support
perspective: each API has its own parsing function, even if some
processing code is repeated.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Andrii Chubatiuk <achubatiuk@victoriametrics.com>
2025-10-24 16:53:01 +02:00
Roman Khavronenko
b9a3369254 app/vmalert: preserve html formatting in annotations (#9892)
The change is purely visual. It preserves HTML formatting in annotations
when rendering them on the rule or alert details page. The CSS change is
clumsy, but demonstrates the point.

--------

Before:
<img width="1179" height="297" alt="image"
src="https://github.com/user-attachments/assets/c30e2222-7b0f-4f28-bf6e-c546cc5bb2fc"
/>


After:
<img width="1196" height="321" alt="image"
src="https://github.com/user-attachments/assets/2c6d9530-7ae9-47fd-b4ba-87fe6f44c625"
/>


-------

p.s. @AndrewChubatiuk I sure know that you could make this change in
more elegant or stylish way than I did. Please do so, if you want.
Please port this change to vmui too. Thanks!

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-24 16:39:09 +02:00
Max Kotliar
159f990c8e docs/changelog: fix typo in update note 2025-10-24 13:27:49 +03:00
Max Kotliar
b5c3e93f7e docs/changelog: fix typo in LTS link 2025-10-24 13:17:11 +03:00
Zakhar Bessarab
7a6139416e lib/backup/s3remote: properly extend http client if it is present
fb1344b5 replaced the HTTP client unconditionally, which overrode
configurations loaded by the AWS SDK. This led to AWS env
variables being overwritten.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9858

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9898
2025-10-24 10:30:59 +02:00
Zhu Jiekun
d6bbfaf164 httpserver: add http2 option
Currently, the `httpserver` disables HTTP/2 support by design, because:
```
// Disable http/2, since it doesn't give any advantages for VictoriaMetrics services.
```

As VictoriaLogs and VictoriaTraces rely on `httpserver`, an option to
enable HTTP/2 is required in order to support gRPC over HTTP/2.


Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9881
2025-10-24 10:30:27 +02:00
Nikolay
11f488d8ff lib/streamaggr: concurrently push timeseries to aggregators
Previously all timeseries pushed into aggregators were added
sequentially. This could delay data ingestion, and it was not
possible to use all available CPU cores.

 This commit adds concurrency based on available CPU cores.

Also, it adds new generic Buffer and BufferPool into slicesutil.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9878
2025-10-24 10:29:48 +02:00
Aliaksandr Valialkin
d0f8773f4b app/vmauth: log the real cause for timed out requests to vmauth
Previously a misleading random error could be logged for canceled and/or timed out requests to vmauth.
Consistently log the request timeout error for timed out requests.

While at it, do not log errors for requests canceled by the remote client, since such logs aren't actionable
and just pollute error logs generated by vmauth.
2025-10-21 15:59:05 +02:00
Max Kotliar
7ec6f28a7c docs/changelog: add links to related PRs. 2025-10-21 16:11:21 +03:00
Max Kotliar
46ef5460a9 docs: run make docs-update-flags
sync flags with actual values in binaries
2025-10-21 16:04:20 +03:00
Max Kotliar
1a68d4ac8a docs/CHANGELOG.md: update changelog with LTS release notes; bump LTS versions 2025-10-21 11:23:01 +03:00
Max Kotliar
be7039429d docs: bump latest version in docs 2025-10-21 10:58:47 +03:00
Max Kotliar
76e5cd2cd4 deployment/docker: bump version 2025-10-21 10:55:35 +03:00
Max Kotliar
f91789eebd docs/CHANGELOG.md: cut v1.128.0 2025-10-17 14:53:46 +03:00
Max Kotliar
9b7fee13d6 docs: update version tooltips 2025-10-17 14:50:54 +03:00
Max Kotliar
c89825f64d app/{vmselect,vlselect}: run make vmui-update 2025-10-17 14:44:39 +03:00
Stephan Burns
2d0d96855b docs: grammatical changes (#9862)
### Describe Your Changes

Some small grammatical changes.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-17 14:09:05 +03:00
Rishub
e32b1ba6a5 Fix docker pull command for vmanomaly images (#9870)
### Describe Your Changes

The Docker Hub image URL was also set to quay.io; changed it back to
Docker Hub.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-17 14:06:49 +03:00
Nikolay
41267decd3 lib/protoparser: add flag opentelemetry.convertMetricNamesToPrometheus (#9875)
Introduce a new flag which converts only metric names into Prometheus-compatible
format and keeps label names in their original form.

Keeping labels in their original form is useful
for correlation with other telemetry sources, such as logs or
traces.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9830

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-10-17 13:42:30 +03:00
Andrii Chubatiuk
75fef7a011 lib/streamaggr: disable impact of flush_on_shutdown on aggregated series flush time (#9852)
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9642

Additionally renamed a variable, since `!flushOnShutdown` and
`skipIncompleteFlush` do not have the same meaning.

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-10-17 13:03:29 +03:00
Andrii Chubatiuk
b87c515d14 {dashboards,rules}: update storage ETA calculations in both dashboards and rules (#9848)
Currently the index file size is calculated as an average across all storages
from all clusters; updated it to produce more accurate calculations. The PR
also [fixes a helm chart
issue](https://github.com/VictoriaMetrics/helm-charts/issues/2474).

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-17 12:45:10 +03:00
Zakhar Bessarab
fb1344b5bf lib/backup/s3remote: use http client from AWS instead of custom implementation (#9869)
AWS SDK does not modify custom http client configuration if it was provided. This leads to
additional configuration such as environment variables being ignored.

Use AWS http client builder instead of custom implementation and
override DialContext to preserve metrics exposed by custom transport.

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9858

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-10-17 12:29:16 +04:00
Nikolay
168ee75a3c app/vmagent/kafka: add opentelemetry consumer format
This commit adds opentelemetry format for kafka consumer

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9734
2025-10-14 22:28:20 +02:00
dependabot[bot]
e3a5e29a02 build(deps): bump actions/setup-node from 4 to 6 (#9857)
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4
to 6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/setup-node/releases">actions/setup-node's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2>What's Changed</h2>
<p><strong>Breaking Changes</strong></p>
<ul>
<li>Limit automatic caching to npm, update workflows and documentation
by <a
href="https://github.com/priyagupta108"><code>@​priyagupta108</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/1374">actions/setup-node#1374</a></li>
</ul>
<p><strong>Dependency Upgrades</strong></p>
<ul>
<li>Upgrade ts-jest from 29.1.2 to 29.4.1 and document breaking changes
in v5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1336">#1336</a></li>
<li>Upgrade prettier from 2.8.8 to 3.6.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1334">#1334</a></li>
<li>Upgrade actions/publish-action from 0.3.0 to 0.4.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1362">#1362</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-node/compare/v5...v6.0.0">https://github.com/actions/setup-node/compare/v5...v6.0.0</a></p>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<h3>Breaking Changes</h3>
<ul>
<li>Enhance caching in setup-node with automatic package manager
detection by <a
href="https://github.com/priya-kinthali"><code>@​priya-kinthali</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/1348">actions/setup-node#1348</a></li>
</ul>
<p>This update, introduces automatic caching when a valid
<code>packageManager</code> field is present in your
<code>package.json</code>. This aims to improve workflow performance and
make dependency management more seamless.
To disable this automatic caching, set <code>package-manager-cache:
false</code></p>
<pre lang="yaml"><code>steps:
- uses: actions/checkout@v5
- uses: actions/setup-node@v5
  with:
    package-manager-cache: false
</code></pre>
<ul>
<li>Upgrade action to use node24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/setup-node/pull/1325">actions/setup-node#1325</a></li>
</ul>
<p>Make sure your runner is on version v2.327.1 or later to ensure
compatibility with this release. <a
href="https://github.com/actions/runner/releases/tag/v2.327.1">See
Release Notes</a></p>
<h3>Dependency Upgrades</h3>
<ul>
<li>Upgrade <code>@​octokit/request-error</code> and
<code>@​actions/github</code> by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1227">actions/setup-node#1227</a></li>
<li>Upgrade uuid from 9.0.1 to 11.1.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1273">actions/setup-node#1273</a></li>
<li>Upgrade undici from 5.28.5 to 5.29.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1295">actions/setup-node#1295</a></li>
<li>Upgrade form-data to bring in fix for critical vulnerability by <a
href="https://github.com/gowridurgad"><code>@​gowridurgad</code></a> in
<a
href="https://redirect.github.com/actions/setup-node/pull/1332">actions/setup-node#1332</a></li>
<li>Upgrade actions/checkout from 4 to 5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1345">actions/setup-node#1345</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/priya-kinthali"><code>@​priya-kinthali</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/setup-node/pull/1348">actions/setup-node#1348</a></li>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/setup-node/pull/1325">actions/setup-node#1325</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-node/compare/v4...v5.0.0">https://github.com/actions/setup-node/compare/v4...v5.0.0</a></p>
<h2>v4.4.0</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2028fbc5c2"><code>2028fbc</code></a>
Limit automatic caching to npm, update workflows and documentation (<a
href="https://redirect.github.com/actions/setup-node/issues/1374">#1374</a>)</li>
<li><a
href="13427813f7"><code>1342781</code></a>
Bump actions/publish-action from 0.3.0 to 0.4.0 (<a
href="https://redirect.github.com/actions/setup-node/issues/1362">#1362</a>)</li>
<li><a
href="89d709d423"><code>89d709d</code></a>
Bump prettier from 2.8.8 to 3.6.2 (<a
href="https://redirect.github.com/actions/setup-node/issues/1334">#1334</a>)</li>
<li><a
href="cd2651c462"><code>cd2651c</code></a>
Bump ts-jest from 29.1.2 to 29.4.1 (<a
href="https://redirect.github.com/actions/setup-node/issues/1336">#1336</a>)</li>
<li><a
href="a0853c2454"><code>a0853c2</code></a>
Bump actions/checkout from 4 to 5 (<a
href="https://redirect.github.com/actions/setup-node/issues/1345">#1345</a>)</li>
<li><a
href="b7234cc9fe"><code>b7234cc</code></a>
Upgrade action to use node24 (<a
href="https://redirect.github.com/actions/setup-node/issues/1325">#1325</a>)</li>
<li><a
href="d7a11313b5"><code>d7a1131</code></a>
Enhance caching in setup-node with automatic package manager detection
(<a
href="https://redirect.github.com/actions/setup-node/issues/1348">#1348</a>)</li>
<li><a
href="5e2628c959"><code>5e2628c</code></a>
Bumps form-data (<a
href="https://redirect.github.com/actions/setup-node/issues/1332">#1332</a>)</li>
<li><a
href="65beceff8e"><code>65becef</code></a>
Bump undici from 5.28.5 to 5.29.0 (<a
href="https://redirect.github.com/actions/setup-node/issues/1295">#1295</a>)</li>
<li><a
href="7e24a656e1"><code>7e24a65</code></a>
Bump uuid from 9.0.1 to 11.1.0 (<a
href="https://redirect.github.com/actions/setup-node/issues/1273">#1273</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/setup-node/compare/v4...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/setup-node&package-manager=github_actions&previous-version=4&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-14 17:32:28 +03:00
Max Kotliar
7dbe569fe7 deployment/docker: update Go builder from Go1.25.2 to Go1.25.3 (#9859)
Release https://go.dev/doc/devel/release#go1.25.3

Changes: https://github.com/golang/go/issues?q=milestone%3AGo1.25.3%20label%3ACherryPickApproved

Follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9843
2025-10-14 17:09:01 +03:00
Roman Khavronenko
8f3e96fa38 deployment: bump alpine image to 3.22.2 (#9855)
See
https://www.alpinelinux.org/posts/Alpine-3.19.9-3.20.8-3.21.5-3.22.2-released.html

Addresses CVE-2025-9230, CVE-2025-9231, CVE-2025-9232.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-13 18:56:51 +03:00
Nikolay
cd9d5619d9 lib/promscrape/config: change promscrape.dropOriginalLabels default value to true
Tracking original labels requires storing a copy of labels obtained
from service discovery. It adds extra garbage collection pressure and, as
a result, increases CPU usage.

While dropOriginalLabels has almost no impact on test and small
installations, the impact grows with scale, and is especially noticeable in
Kubernetes-based installations.

In addition, this flag is disabled by default for the `k8s-stack` helm
chart, which is our main Kubernetes monitoring solution.

Also, the vmagent optimisation guide recommends disabling original
labels storing.

This commit changes the default value to true and disables tracking of
dropped targets by default. For debugging, it can easily be enabled
back by setting the flag to `false`:
`promscrape.dropOriginalLabels`. It should improve resource usage out of
the box, at the cost of a reduced debugging experience for a minority of users.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9665

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9772
2025-10-13 11:05:24 +02:00
Andrii Chubatiuk
fb60857b8f app/vmui: make TextField box height constant regardless error message presence
The TextField component can show an error message, and the text
field height changes depending on its presence, which may cause visibility
issues if this field is vertically aligned with neighbouring
components. This PR makes the TextField height constant and its input box
horizontally symmetrical

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9693
2025-10-13 11:03:16 +02:00
Andrii Chubatiuk
93f72c487d app/vmui: fixed code color value in light mode
Fixed typo in code color variable value in light mode

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9851
2025-10-13 11:02:36 +02:00
Zakhar Bessarab
0eb465782a deps: unpin AWS dependencies and add workaround for S3 compatibility (#9844)
Updates:
- unpin AWS dependencies and run `make vendor-update`
- add config options to enable checksums only if required by storage in
order to preserve backwards compatibility

Related issues:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9748
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8622

Tested with: AWS S3, self-hosted MinIO, Linode object storage as it was
failing previously with multi-part uploads (reported here -
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8630#issuecomment-2772185033).
An updated library allows (PR with the
fix - https://github.com/aws/aws-sdk-go-v2/pull/3151) overriding
multi-part upload configurations so that compatibility can be preserved.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-10-10 18:35:37 +04:00
Zakhar Bessarab
b703768e9c lib/backup/actions: improve progress logging (#9836)
Currently, it is hard to make sense of progress based on logging as it
requires manual calculation of progress and ETA.
Solve this by:
- making data units humanly readable
- adding an estimation of completion for the operation

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-10-10 18:34:01 +04:00
dependabot[bot]
4e86ddfdaa build(deps): bump github/codeql-action from 3 to 4 (#9827)
Bumps [github/codeql-action](https://github.com/github/codeql-action)
from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v3.30.7</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.30.7 - 06 Oct 2025</h2>
<p>No user facing changes.</p>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.30.7/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.30.6</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.30.6 - 02 Oct 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.2. <a
href="https://redirect.github.com/github/codeql-action/pull/3168">#3168</a></li>
</ul>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.30.6/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.30.5</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.30.5 - 26 Sep 2025</h2>
<ul>
<li>We fixed a bug that was introduced in <code>3.30.4</code> with
<code>upload-sarif</code> which resulted in files without a
<code>.sarif</code> extension not getting uploaded. <a
href="https://redirect.github.com/github/codeql-action/pull/3160">#3160</a></li>
</ul>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.30.5/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.30.4</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.30.4 - 25 Sep 2025</h2>
<ul>
<li>We have improved the CodeQL Action's ability to validate that the
workflow it is used in does not use different versions of the CodeQL
Action for different workflow steps. Mixing different versions of the
CodeQL Action in the same workflow is unsupported and can lead to
unpredictable results. A warning will now be emitted from the
<code>codeql-action/init</code> step if different versions of the CodeQL
Action are detected in the workflow file. Additionally, an error will
now be thrown by the other CodeQL Action steps if they load a
configuration file that was generated by a different version of the
<code>codeql-action/init</code> step. <a
href="https://redirect.github.com/github/codeql-action/pull/3099">#3099</a>
and <a
href="https://redirect.github.com/github/codeql-action/pull/3100">#3100</a></li>
<li>We added support for reducing the size of dependency caches for Java
analyses, which will reduce cache usage and speed up workflows. This
will be enabled automatically at a later time. <a
href="https://redirect.github.com/github/codeql-action/pull/3107">#3107</a></li>
<li>You can now run the latest CodeQL nightly bundle by passing
<code>tools: nightly</code> to the <code>init</code> action. In general,
the nightly bundle is unstable and we only recommend running it when
directed by GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3130">#3130</a></li>
<li>Update default CodeQL bundle version to 2.23.1. <a
href="https://redirect.github.com/github/codeql-action/pull/3118">#3118</a></li>
</ul>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.30.4/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.30.3</h2>
<h1>CodeQL Action Changelog</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's
changelog</a>.</em></p>
<blockquote>
<h2>3.29.4 - 23 Jul 2025</h2>
<p>No user facing changes.</p>
<h2>3.29.3 - 21 Jul 2025</h2>
<p>No user facing changes.</p>
<h2>3.29.2 - 30 Jun 2025</h2>
<ul>
<li>Experimental: When the <code>quality-queries</code> input for the
<code>init</code> action is provided with an argument, separate
<code>.quality.sarif</code> files are produced and uploaded for each
language with the results of the specified queries. Do not use this in
production as it is part of an internal experiment and subject to change
at any time. <a
href="https://redirect.github.com/github/codeql-action/pull/2935">#2935</a></li>
</ul>
<h2>3.29.1 - 27 Jun 2025</h2>
<ul>
<li>Fix bug in PR analysis where user-provided <code>include</code>
query filter fails to exclude non-included queries. <a
href="https://redirect.github.com/github/codeql-action/pull/2938">#2938</a></li>
<li>Update default CodeQL bundle version to 2.22.1. <a
href="https://redirect.github.com/github/codeql-action/pull/2950">#2950</a></li>
</ul>
<h2>3.29.0 - 11 Jun 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.22.0. <a
href="https://redirect.github.com/github/codeql-action/pull/2925">#2925</a></li>
<li>Bump minimum CodeQL bundle version to 2.16.6. <a
href="https://redirect.github.com/github/codeql-action/pull/2912">#2912</a></li>
</ul>
<h2>3.28.21 - 28 July 2025</h2>
<p>No user facing changes.</p>
<h2>3.28.20 - 21 July 2025</h2>
<ul>
<li>Remove support for combining SARIF files from a single upload for
GHES 3.18, see <a
href="https://github.blog/changelog/2024-05-06-code-scanning-will-stop-combining-runs-from-a-single-upload/">the
changelog post</a>. <a
href="https://redirect.github.com/github/codeql-action/pull/2959">#2959</a></li>
</ul>
<h2>3.28.19 - 03 Jun 2025</h2>
<ul>
<li>The CodeQL Action no longer includes its own copy of the extractor
for the <code>actions</code> language, which is currently in public
preview.
The <code>actions</code> extractor has been included in the CodeQL CLI
since v2.20.6. If your workflow has enabled the <code>actions</code>
language <em>and</em> you have pinned
your <code>tools:</code> property to a specific version of the CodeQL
CLI earlier than v2.20.6, you will need to update to at least CodeQL
v2.20.6 or disable
<code>actions</code> analysis.</li>
<li>Update default CodeQL bundle version to 2.21.4. <a
href="https://redirect.github.com/github/codeql-action/pull/2910">#2910</a></li>
</ul>
<h2>3.28.18 - 16 May 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.21.3. <a
href="https://redirect.github.com/github/codeql-action/pull/2893">#2893</a></li>
<li>Skip validating SARIF produced by CodeQL for improved performance.
<a
href="https://redirect.github.com/github/codeql-action/pull/2894">#2894</a></li>
<li>The number of threads and amount of RAM used by CodeQL can now be
set via the <code>CODEQL_THREADS</code> and <code>CODEQL_RAM</code>
runner environment variables. If set, these environment variables
override the <code>threads</code> and <code>ram</code> inputs
respectively. <a
href="https://redirect.github.com/github/codeql-action/pull/2891">#2891</a></li>
</ul>
<h2>3.28.17 - 02 May 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.21.2. <a
href="https://redirect.github.com/github/codeql-action/pull/2872">#2872</a></li>
</ul>
<h2>3.28.16 - 23 Apr 2025</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="aac66ec793"><code>aac66ec</code></a>
Remove <code>update-proxy-release</code> workflow</li>
<li><a
href="91a63dc72c"><code>91a63dc</code></a>
Remove <code>undefined</code> values from results of
<code>unsafeEntriesInvariant</code></li>
<li><a
href="d25fa60a90"><code>d25fa60</code></a>
ESLint: Disable <code>no-unused-vars</code> for parameters starting with
<code>_</code></li>
<li><a
href="3adb1ff7b8"><code>3adb1ff</code></a>
Reorder supported tags in descending order</li>
<li>See full diff in <a
href="https://github.com/github/codeql-action/compare/v3...v4">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github/codeql-action&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-10 15:56:37 +03:00
Max Kotliar
7569d21123 docs: add CVE handling policy (#9847)
### Describe Your Changes

Add a CVE handling policy. 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-10 15:53:41 +03:00
hagen1778
5d36616d02 Revert "docs: vmstorage 2 cpu requirement (#9846)"
This reverts commit 772ac8803e.

The reason for revert is that this recommendation should not be strict.
Installations with <= 1 vCPU will continue working efficiently. The load
from reads, writes and background merges will be evenly spread by Go runtime.

cc @Sleuth56 @tiny-pangolin
2025-10-10 14:16:08 +02:00
Roman Khavronenko
e31e0657c8 app/vmalert: uniformly populate error messages with URL context (#9845)
Before, `req.URL.Redacted` info was present in some error messages and
empty in others. This change uniformly adds it to the errors context.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-10 14:09:49 +02:00
Andrii Chubatiuk
50af991677 docs: mention ignore_first_sample_interval in stream aggregation docs (#9834)
### Describe Your Changes

related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9615

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-10 14:09:24 +02:00
Roman Khavronenko
5617b24cef docs: fix typo in queryRequestsCount field name (#9829)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-10 14:09:12 +02:00
Roman Khavronenko
ece3ee2427 docs: mention -metricNamesStatsResetAuthKey in Security (#9828)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-10 14:09:02 +02:00
Andrii Chubatiuk
4add451701 app/vmui: store query hidden state in query args (#9826)
### Describe Your Changes

Save queries' hidden state in the
`expr.hide:<hidden_query_idx_1,..,hidden_query_idx_n>` query argument

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-10-10 14:08:37 +02:00
Andrii Chubatiuk
1c7d8d030f app/vmui: pass query arg to search input, reset dropdown state during update (#9825)
- fixed ignored `search` query argument in `Notifiers` and `Rules` tabs
- added dropdown state reset, if other filters were updated and selected
state is not a subset of available items
- proxy requests to config.json for a local setup

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-10-10 14:03:45 +02:00
Yury Molodov
80b4ca6367 app/vmui: rename "Reset" to "Reset filters" and disable when no filters are modified (#9821)
### Describe Your Changes

This PR updates the **Cardinality Explorer** page in `vmui`:

* Renames the `Reset` button to `Reset filters`.
* Disables the button when no filters are modified.

Related issue: #9609

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2025-10-10 13:57:42 +02:00
Yury Molodov
fafc754c11 app/vmui: prevent removing other query params when updating one (#9818)
### Describe Your Changes

Prevents removal of unrelated query parameters when updating a single
one in `vmui`.
Previously, changing one search parameter could unintentionally clear
others.

Related issue: #9816

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2025-10-10 13:52:54 +02:00
Vadim Rutkovsky
e1e2b69b2c docs/guides: add Headlamp setup guide (#9817)
### Describe Your Changes

This adds a guide on how to configure Headlamp k8s UI to display k8s
metrics from VictoriaMetrics

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-10 13:51:38 +02:00
hagen1778
bc2b822d59 docs: fix link in build badge on github readme page
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-10 13:50:25 +02:00
hagen1778
d9b0c5a268 docs: restore build badge on github readme page
Based on https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9809

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-10 13:49:19 +02:00
Stephan Burns
772ac8803e docs: vmstorage 2 cpu requirement (#9846)
### Describe Your Changes



### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Mathias Palmersheim <mathias@victoriametrics.com>
2025-10-09 16:24:34 -04:00
Zakhar Bessarab
4d35b359bd deployment/docker: update Go builder from Go1.25.1 to Go1.25.2 (#9843)
See
https://github.com/golang/go/issues?q=milestone%3AGo1.25.2%20label%3ACherryPickApproved

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-10-09 16:53:00 +04:00
Max Kotliar
8c4faba658 app/{vmagent,vmalert}: add -secret.flags to configure flag to be hidd… (#9839)
This is a refined version of
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6940, with all
work completed by @truepele.

---

### Describe Your Changes

Fixes #6938 

introduce -secret.flags to configure flag names to be hidden in logs and
on /metrics

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Co-authored-by: andrii <truepele@gmail.com>
2025-10-09 15:07:46 +03:00
Fred Navruzov
5a29174b87 docs/vmanomaly: release v1.26.2 (#9841)
### Describe Your Changes

update the docs to forthcoming v1.26.2 patch release
⚠️ please do not merge upon approval before respective image tags
are live

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-09 10:06:58 +02:00
Fred Navruzov
ab89bbba5d docs/vmanomaly: ui page formatting fixes (#9840)
### Describe Your Changes

ui page formatting fixes

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-08 20:15:05 +02:00
Fred Navruzov
748c49cc9d docs/vmanomaly: release v1.26.1 (#9833)
### Describe Your Changes

release v1.26.1 docs updates

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-08 12:55:41 +02:00
Artem Fetishev
a88851f4a0 docs: bump latest version in docs
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-10-07 17:56:32 +02:00
Artem Fetishev
e0a76e2552 deployment/docker: bump version
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-10-07 17:50:17 +02:00
Artem Fetishev
a569224dad docs: bump last LTS versions
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-10-07 17:47:26 +02:00
Artem Fetishev
ef21fd6e2c docs/CHANGELOG.md: update changelog with LTS release notes
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-10-07 17:41:03 +02:00
hagen1778
57b16f0976 docs: update formatting for stream aggregation header
While there, remove excessive relabeling info and point users to Routing section.
The Routing section should explain how to build flexible processing pipelines.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-07 17:02:38 +02:00
hagen1778
4fbf26df02 deployment: revert upgrade libcrypto3 and libssl3 to 3.5.4-r0
Reverts 17ca1ba8c4

The reasons for the revert are the following:
1. The fix relies on release candidates of specific libraries
2. The real fix would be to update Alpine version, which is not released yet
3. It makes the fix partially done, as it would require follow-up in future to
switch from release candidates to stable versions, or to update Alpine version.
4. The fix is not effective, as it doesn't update the base image cached by Docker.
The real fix will be to host&update the base image separately like in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9811.
5. VM binaries aren't vulnerable to the mentioned vulnerabilities.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-07 16:55:05 +02:00
Zhu Jiekun
91f0738e67 docs: clarify how sharding in vmagent works when srv url is used (#9787)
### Describe Your Changes

clarify how sharding in vmagent works when srv url is used

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-07 16:45:15 +02:00
Roman Khavronenko
c70fc288f1 docs: fix small typos in changelog (#9798)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-07 16:43:56 +02:00
Vadim Rutkovsky
921e6df382 lib/promb: restore 1000 iterations for write request benchmark (#9812)
### Describe Your Changes

Initially, this benchmark was running 1000 iterations. Later, in
c005245741, it was refactored and bumped to
10_000. This triggered alerts in continuous benchmark systems, so this commit brings
it back to 1000
### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-07 16:42:54 +02:00
Yury Molodov
e5dd794db5 app/vmui: refactor Header.tsx and fix styles (#9823)
### Describe Your Changes

- Centered the logo in the header (minor visual tweak; see screenshots
below)
- Simplified sidebar visibility conditions in `Header.tsx` for
readability/maintainability

_No changelog entry - minor visual tweak and internal refactor; no
functional impact._

### Before / After

| Before | After |
|--------|-------|
| <img width="404" height="81" alt="Before"
src="https://github.com/user-attachments/assets/21fad295-8c90-4c03-8837-d335923e645c"
/> | <img width="404" height="81" alt="After"
src="https://github.com/user-attachments/assets/8c3d7a34-fd9c-4326-aea8-ef82eade8b72"
/> |



### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2025-10-07 16:42:20 +02:00
Vadim Rutkovsky
cc87505f9d docs/victoriametrics/data-ingestion: add OpenShift guide (#9790)
### Describe Your Changes

This guide describes remote write configuration for OpenShift and
includes several useful tricks to make it efficient.

Relates to #8573

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
2025-10-07 16:41:34 +02:00
Fred Navruzov
e198753f0d docs/vmanomaly: canonical link formatting (p2) (#9822)
### Describe Your Changes

canonical link formatting (p2)

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-07 16:08:45 +02:00
Fred Navruzov
6b9fa21d30 docs/vmanomaly: fix links to canonical form (1) (#9819)
### Describe Your Changes

1st iteration of improving remaining non-canonical links

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-07 15:43:06 +02:00
Fred Navruzov
e7abdce0de docs/vmanomaly: update schemas to v1.26.0 (#9813)
### Describe Your Changes

- update component schemas to v1.26.0; 
- fix old ?highlight=... links format

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-07 08:48:45 +02:00
Artem Fetishev
31adbf9094 docs/CHANGELOG.md: cut v1.127.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-10-03 18:27:29 +02:00
Artem Fetishev
dd508b9542 make vmui-update
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-10-03 16:09:33 +00:00
Roman Khavronenko
f151668188 apptest: tolerate time drift (#9801)
Shift time selectors by 1s to prevent time drifting in tests. Because of
the drift, the test was flapping depending on when it was run. See
https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/18186734439/job/51824319927

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9770

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-03 16:54:16 +02:00
Roman Khavronenko
17ca1ba8c4 deployment: upgrade libcrypto3 and libssl3 to 3.5.4-r0 (#9805)
Addresses CVE-2025-9230, CVE-2025-9231, CVE-2025-9232.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-03 14:22:03 +02:00
Fred Navruzov
7654536aab docs/vmanomaly: typos and 404 after 1.26.0 release (#9800)
### Describe Your Changes

fixing broken links in UI/FAQ pages

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-02 21:04:19 +02:00
Fred Navruzov
901de22894 change page title from GUI to UI (#9799)
### Describe Your Changes

Fixing a typo in page name

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-02 17:40:42 +02:00
Fred Navruzov
96e514439d docs/vmanomaly: release v1.26.0 (#9793)
### Describe Your Changes

Docs update for v1.26.0 release of `vmanomaly`, including vmui-like GUI
docs and `VLogsReader`

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-10-02 17:34:03 +02:00
Aliaksandr Valialkin
43e189a56c docs/victoriametrics/enterprise.md: typo fix in the 'VictoriaLogs Enterprise features' chapter: VictoriaMetrics -> VictoriaLogs 2025-10-02 13:00:49 +02:00
Aliaksandr Valialkin
4dbf899d89 docs/victoriametrics/enterprise.md: properly state that Monitoring of Monitoring feature at VictoriaLogs enterprise helps preventing issues in VictoriaLogs setups, not VictoriaMetrics setups 2025-10-02 12:58:26 +02:00
Evgeny
2fcbf75539 app/vmalert: restore usage of query template in labels
- Fixes regression from https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9543
- Issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9783
2025-10-02 09:45:13 +02:00
Aliaksandr Valialkin
676a88793a Revert "integration test: prevent GetMetric from interrupting the test when metric not found"
This reverts commit ccf97a4143.

reason for revert: this change may break tests, which expect that ServesMetrics.GetMetric() fails
when the given metric doesn't exist in the output.

It is better to add a 'TryGetMetric() (float64, bool)' function, which would return '(0, false)'
when the given metric doesn't exist, so the caller could decide what to do next.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9773
2025-10-02 02:43:00 +02:00
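The helper proposed in the revert message above could look like this minimal Go sketch. The `metricsPage` map and the metric names are hypothetical stand-ins; the real apptest `ServesMetrics` type scrapes an actual `/metrics` page.

```go
package main

import "fmt"

// metricsPage stands in for the scraped /metrics output; both the
// variable and its contents are hypothetical.
var metricsPage = map[string]float64{
	"vm_rows_inserted_total": 42,
}

// tryGetMetric returns (0, false) when the metric is absent, so the
// caller can decide whether to fail the test or retry later.
func tryGetMetric(name string) (float64, bool) {
	v, ok := metricsPage[name]
	return v, ok
}

func main() {
	if v, ok := tryGetMetric("vm_rows_inserted_total"); ok {
		fmt.Println("found:", v)
	}
	if _, ok := tryGetMetric("vm_missing_metric"); !ok {
		fmt.Println("absent, caller may retry")
	}
}
```

Unlike a `GetMetric` that calls `t.Fatalf`, the boolean return keeps retry loops in assertions usable.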
Aliaksandr Valialkin
8d3e9d1dac app/vmui/packages/vmui: deny indexing vmui page by Google and other web crawlers
The vmui page has zero interesting content for indexing.
2025-10-01 13:53:40 +02:00
hagen1778
09251f0a1e docs: fix markdown formatting typo
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-01 13:34:11 +02:00
Hui Wang
4ea5f8a84d vmselect: prevent duplicate offset modifier when instant query uses rollup functions rate() and avg_over_time() with cache available (#9770)

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9762
2025-10-01 13:31:08 +02:00
Roman Khavronenko
cd52978096 vendor: update metrics package to v1.40.2 (#9780)
Restore sorting order of summary and quantile metrics exposed by
VictoriaMetrics components on `/metrics` page.

https://github.com/VictoriaMetrics/metrics/pull/105

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-01 13:28:46 +02:00
Roman Khavronenko
f65e24b2ab docs: add Life of a sample section to vmagent docs (#9719)
The routing section aims to describe the processing flow to the user in its
exact order. It replaces the previous incomplete and verbose
routing documentation in the Stream Aggregation docs
https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#routing

The processing order is taken from picture in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9646#issue-3367074827

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: func25 <phuongle0205@gmail.com>
Co-authored-by: Phuong Le <39565248+func25@users.noreply.github.com>
2025-10-01 13:20:53 +02:00
Andrii Chubatiuk
0579e68409 dashboards: add adhoc filter to query stats and operator (#9774)
Add ad-hoc filters to query stats and operator dashboards.
These filters are useful for exploring non-uniform metrics sets
without distinct job/instance filters.
2025-10-01 13:19:37 +02:00
Roman Khavronenko
f2aea8532f docs: clarify how vmagent addresses multi-level ingestion shortcomings (#9785)
The previous text didn't contain links to vmagent's capabilities.
Instead, it contained misleading multitenancy-mode link that doesn't
seem to be related to the subject.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-01 13:17:05 +02:00
hagen1778
94473ed262 docs: rm unreachable link
https://www.vultr.com/docs/install-and-configure-victoriametrics-on-debian is not reachable anymore.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-10-01 13:13:58 +02:00
Roman Khavronenko
c646a66b60 app/{vmbackup/vmrestore}: push metrics on shutdown
Push metrics on shutdown if `-pushmetrics.url` is configured. Before,
metrics reporting might have been skipped because of shutdown.

Obsoletes https://github.com/VictoriaMetrics/metrics/pull/103

--------------

To test:
1. Run local VictoriaMetrics instance
2. Build and run vmbackup or vmrestore:
```
make vmbackup && ./bin/vmbackup -storageDataPath=victoria-metrics-data -snapshot.createURL="http://user:pass@localhost:8428/snapshot/create?authKey=foobar" -dst=fs:////vmbackup/dir -pushmetrics.url=http://localhost:8428/api/v1/import/prometheus,http://127.0.0.1:8428/api/v1/import/prometheus
```
3. Try playing with `-pushmetrics.url` (good/bad/many addresses) and
observe logs

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9767
2025-09-30 09:28:25 +02:00
Zhu Jiekun
ccf97a4143 integration test: prevent GetMetric from interrupting the test when metric not found
Previously, `GetMetric` called `t.Fatalf` immediately when the target metric
did not exist on the `/metrics` page.

However, some metrics may start to appear only after the process has been
running for a while. `t.Fatalf` invalidates the retry mechanism of
assertions: if the metric is not found the first time, the test case
will terminate.

This commit changes `t.Fatalf` to `t.Logf` (instead of `t.Errorf`,
because error output may be considered a test case failure in some
scenarios).

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9773
2025-09-30 09:27:03 +02:00
Nikolay
66df8a5003 lib/workingsetcache: add runtime finalizer to the cache
Follow-up for cea9505bab

fastcache.Cache allocates off-heap memory, which must be explicitly
returned back to the pool with a Reset method call.

After the changes made in the commit above, during the cache transition from
whole to split mode, it's possible that the current cache is still referenced
by the Cache.Get or Cache.Call atomic pointers. This leads to potential memory
leaks, since we don't have any memory synchronization for
atomic.Pointer.Store calls.

This commit adds a `Finalizer` to the `fastcache.Cache` instances.
It properly releases the memory when the cache is no longer reachable.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9769
2025-09-30 09:25:57 +02:00
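The finalizer pattern from the commit above can be sketched as follows. The `offHeapCache` type is a simplified stand-in for `fastcache.Cache`; the real workingsetcache wiring is not shown.

```go
package main

import (
	"fmt"
	"runtime"
)

// offHeapCache stands in for fastcache.Cache: it owns memory the Go GC
// cannot reclaim on its own, so Reset must eventually be called.
type offHeapCache struct{ released bool }

// Reset returns the off-heap memory back to the pool (simulated here
// by a flag).
func (c *offHeapCache) Reset() { c.released = true }

// newCache attaches a finalizer, so Reset is invoked once the cache
// value becomes unreachable and the GC notices it.
func newCache() *offHeapCache {
	c := &offHeapCache{}
	runtime.SetFinalizer(c, (*offHeapCache).Reset)
	return c
}

func main() {
	c := newCache()
	fmt.Println("released:", c.released) // still reachable, so false
	// Once c becomes unreachable, a later GC cycle runs the finalizer.
	// Timing is not guaranteed, which is why an explicit Reset is still
	// preferred when the lifetime is known.
}
```

The finalizer is a safety net for the case described above, where atomic pointers may still reference a cache whose owner has moved on.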
Nikolay
cea9505bab lib/workingsetcache: properly transition cache state
Previously, the cache state transition from split to whole mode could leave
the cache in a broken state if the Reset cache method was called in switching
mode.

Also, cache Reset didn't start background workers and didn't change the
cache size.

This commit properly checks the mode during the cache transition. In addition,
it no longer stops background workers after the whole-mode transition and
always starts workers during start-up.

Access to the prev, curr and mode Cache fields is properly locked
in order to mitigate possible race conditions.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9769
2025-09-29 14:54:17 +02:00
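The mode checking and locking described in the commit above can be sketched like this. The mode constants and struct fields are illustrative; the real workingsetcache type differs.

```go
package main

import (
	"fmt"
	"sync"
)

// Illustrative cache modes; the real workingsetcache defines its own.
const (
	split = iota
	switching
	whole
)

// cache guards prev, curr and mode with one mutex, so Reset can never
// observe (or leave behind) a half-finished mode transition.
type cache struct {
	mu         sync.Mutex
	mode       int
	prev, curr map[string][]byte
}

// Reset checks the mode under the lock before clearing state.
func (c *cache) Reset() {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.mode == switching {
		// A transition is in flight; fall back to a known-good mode
		// instead of leaving the cache broken.
		c.mode = split
	}
	c.prev = map[string][]byte{}
	c.curr = map[string][]byte{}
}

func main() {
	c := &cache{mode: switching}
	c.Reset()
	fmt.Println("mode after Reset:", c.mode)
}
```

Holding one mutex across all three fields is the simplest way to rule out the race between Reset and a concurrent mode switch.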
Hui Wang
30ac8cd3fa vmalert: add -rule.resultLimit command-line flag to allow limiting the number of alerts or recording results a single rule can produce (#9737)

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5792
2025-09-29 13:07:37 +02:00
hagen1778
a1f0b792af apptest: remove vlogs related code
VictoriaLogs has a new home for integration tests
https://github.com/VictoriaMetrics/VictoriaLogs/tree/master/apptest

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-29 12:57:51 +02:00
hagen1778
50f75d751f lib/streamaggr: prevent compiler from over-optimizing testing path
It seems the Go compiler skipped computations and allocations for samples,
as they weren't used afterwards. Sinking the results into a global variable removes
these optimizations, and the benchmark starts showing allocations within the `pushSamples` fn.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-29 12:52:35 +02:00
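The sink pattern from the commit above can be sketched as follows. `pushSamples` here is a hypothetical stand-in for the streamaggr code under benchmark, not the actual function.

```go
package main

import (
	"fmt"
	"testing"
)

// Sink is a package-level variable: assigning benchmark results to it
// marks them as "used", so the compiler cannot elide the work.
var Sink []float64

// pushSamples stands in for the benchmarked code; it allocates a fresh
// slice on every call.
func pushSamples(n int) []float64 {
	s := make([]float64, 0, n)
	for i := 0; i < n; i++ {
		s = append(s, float64(i)*0.5)
	}
	return s
}

// BenchmarkPushSamples shows the sink pattern: without the assignment
// to Sink, allocations inside pushSamples may not show up in allocs/op.
func BenchmarkPushSamples(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Sink = pushSamples(1000)
	}
}

func main() {
	Sink = pushSamples(4)
	fmt.Println(len(Sink))
}
```

Run with `go test -bench=PushSamples -benchmem` to see the allocations reported per operation.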
hagen1778
27f7bc81e0 docs: fix links leading to legacy anchors
Change link to point to up-to-date documents instead
of pointing to legacy links.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-29 12:37:53 +02:00
Artem Fetishev
90d23d7c9f lib/storage: refactor tsid search (#9765)
- Make SearchTSIDs look similar to SearchMetricNames, i.e. search for metricIDs within the method
- Make the corresponding corrupted index test look similar to one for metric names search

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-09-26 15:44:02 +02:00
Artem Fetishev
f68c028673 lib/storage: remove unused storage field from Search type
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-09-25 16:18:58 +02:00
hagen1778
f24bf391a4 deployment/docker: update Go builder from Go1.25.0 to Go1.25.1
See https://github.com/golang/go/issues?q=milestone%3AGo1.25.1%20label%3ACherryPickApproved

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-25 11:41:10 +02:00
hagen1778
bc64ecfa3d deployment: bump Grafana to v12.2.0
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-25 11:34:22 +02:00
hagen1778
f0bbf6ec15 deployment: bump node-exporter to v1.9.1
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-25 11:33:34 +02:00
hagen1778
cff4bde4d6 deployment: bump alertmanager to v0.28.1
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-25 11:32:51 +02:00
hagen1778
1716f11677 deployment: drop vlogs-example-alerts
It was moved to VictoriaLogs repo https://github.com/VictoriaMetrics/VictoriaLogs/blob/master/deployment/docker/vlogs-example-alerts.yml

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-25 11:31:44 +02:00
Phuong Le
b4932ed2da docs: update internal vendoring contribution guide (#9739)
### Describe Your Changes

Consistently use the `v0.0.0-YYYYMMDDHHMMSS-commit_hash` reference for
the internal deps such as `github.com/VictoriaMetrics/VictoriaMetrics`
dependency, since it allows referring any commit without waiting for the
release tag.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-25 10:40:32 +02:00
hagen1778
77f2ab139f fix a small typo
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-25 10:39:49 +02:00
Hui Wang
5537140074 lib/protoparser: remove error log when marshaling an invalid comment or an empty HELP metadata line (#9732)
fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9710

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-09-25 10:36:58 +02:00
i2blind
5d766bf7f1 docs/stream-aggregation: streamAggr.dedupInterval comments on old samples (#9731)
### Describe Your Changes

- Add comments to stream-aggregation README.md to clarify the effect
that the flag will have on old samples
- Fix a spelling error with peridically to periodically in several files
that codespell-check caught.

Related to [#6775]

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
2025-09-25 10:32:02 +02:00
Andrii Chubatiuk
5907239181 app/vmui: reset select values, when 'ALL' selected (#9702)
### Describe Your Changes

resetting the Select component's selected items when all items are selected;
this should speed up filtering on the alerting page in VMUI

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Artur Minchukou <aminchukov@victoriametrics.com>
2025-09-25 10:13:15 +02:00
Andrii Chubatiuk
720c2bfa1d app/vmui: fix disabled state for select, textfield and datetimepicker components (#9698)
### Describe Your Changes

Select and TextField components look confusing while disabled: it's
impossible to guess whether they are disabled before interacting with them.
Updated the colors for these components when they are disabled.



### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-25 10:12:41 +02:00
Arie Heinrich
e971e6102e docs: markdown, grammar and spelling (#9695)
### Describe Your Changes

This pull request consists of the following:

1. Markdown fixes
    following https://www.markdownguide.org/basic-syntax/
and https://github.com/markdownlint/markdownlint/blob/main/docs/RULES.md

- Add empty lines after headers or lists
- Remove extra lines between paragraphs
- Remove extra spaces at the end of a line
- Add language to code quote
- Consistent lists (don't mix asterisks and dashes in the same file; choose one
and be consistent in the same file)
- Proper URL links
- Use meaningful link text for URLs instead of "here".

2. Concise language

3. Grammar fixes

- removing extra spaces between words
- there are multiple, but I picked the basic ones that caught my
eye :)

4. Spelling fixes

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-09-25 10:12:01 +02:00
Andrii Chubatiuk
5cd6d7cfba app/vmui: add minDate and maxDate parameters for DatePicker to allow limiting available dates to select (#9694)
Add the ability to limit the dates available in the DatePicker using the
`minDate` and `maxDate` parameters. Dates before `minDate` and after `maxDate`
cannot be picked. The lower and upper bounds can be set independently.

These `minDate` and `maxDate` parameters aren't set by default in vmui.
The DatePicker component with these params is re-used elsewhere.
2025-09-25 09:48:47 +02:00
hagen1778
907aa1973a docs: add question about old and out-of-order metrics to FAQ
The change also explicitly mentions the `out-of-order` phrase, as it is commonly
used in the Prometheus ecosystem.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-25 09:31:39 +02:00
Artem Fetishev
d6dacd9771 lib/storage: introduce metricNameSearch type
Searching a metricName by metricID happens many times during a single API
call. This requires getting the current set of idbs before those calls
happen. That is fine but requires propagating idbs across the code
base. It is also fine in the case of the OSS version, as it is used in Search
only.

Propagating idbs across the code base becomes a problem in the Enterprise
version, as it is used in at least 3 places. As a result, it becomes very
difficult to merge things from OSS to Ent.

Localizing all the dependencies in one metricNameSearch type and
reusing this type everywhere should make things simpler.

Related enterprise changes:
https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/compare/search-metric-name-ent?expand=1

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9756
2025-09-24 15:25:48 +02:00
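The idea of localizing dependencies in one type, as described above, can be sketched like this. Field and method names are illustrative, not the real storage API.

```go
package main

import "fmt"

// metricNameSearch bundles every dependency of a metricName-by-metricID
// lookup (here just an idb snapshot taken once) into one value, so
// callers don't have to thread idbs through the code base.
type metricNameSearch struct {
	idbs map[uint64]string // metricID -> metricName snapshot
}

// newMetricNameSearch captures the current idb set once, up front.
func newMetricNameSearch(idbs map[uint64]string) *metricNameSearch {
	return &metricNameSearch{idbs: idbs}
}

// lookup can now be called many times during one API call without
// re-acquiring the idb set.
func (s *metricNameSearch) lookup(metricID uint64) (string, bool) {
	name, ok := s.idbs[metricID]
	return name, ok
}

func main() {
	s := newMetricNameSearch(map[uint64]string{1: "http_requests_total"})
	name, ok := s.lookup(1)
	fmt.Println(name, ok)
}
```

Every call site then depends on one small type instead of on Storage internals, which is what makes the OSS-to-Enterprise merges simpler.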
Artem Fetishev
5bb67a7f00 lib/storage: Move searchTSIDs to Storage
A small refactoring that reduces Search dependency on Storage:

- Move searchTSIDs() from Search to Storage because this method does not
depend on anything Search-specific but does depend on Storage.
- Use metricsTracker instead of storage.metricTracker.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9754
2025-09-24 15:17:21 +02:00
Artem Fetishev
8c1c92d4c9 lib/storage: rewrite search benchmarks to allow to make it easy adding new cases (#9691)
Benchmarking the storage search API requires taking into account many
parameters, such as:
- data configuration: how many series, deleted series, search time range
- where the index data resides: the prev and/or curr indexDB
- which search operation to measure

Meanwhile, adding a new benchmark use case involves a lot of boilerplate code.

This PR implements a framework for testing storage search ops that can
be relatively easily extended. This comes in especially handy when adding
new cases for the partition index.

The current set of params will result in a lot of benchmarks being run,
which most probably does not make sense because:
- it will take a lot of time, and
- the output data is hard to compare manually.

However, these benchmarks are very useful when only a small set of params
is of interest. For example, if I want to compare the search of 100k
metric names when the index data resides in the prevOnly, currOnly, or
prevAndCurr indexDBs, this would translate into the following command:

```shell
go test ./lib/storage --loggerLevel=ERROR -run=^$ -bench=^BenchmarkSearch/MetricNames/.*/VariableSeries/100000$
```
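The `-bench` regexp above works because each configuration dimension is encoded in the sub-benchmark name. A minimal sketch of that naming scheme, with illustrative dimension values taken from the commit message (the helper name and exact layout strings are assumptions, not the actual framework code):

```go
package main

import "fmt"

// benchNames composes the full sub-benchmark names for every combination
// of index layout and series count. In a real *_test.go file each name
// would be passed to b.Run, making it selectable with -bench=^NAME$.
func benchNames(op string, layouts []string, seriesCounts []int) []string {
	var names []string
	for _, layout := range layouts {
		for _, n := range seriesCounts {
			names = append(names,
				fmt.Sprintf("BenchmarkSearch/%s/%s/VariableSeries/%d", op, layout, n))
		}
	}
	return names
}

func main() {
	layouts := []string{"prevOnly", "currOnly", "prevAndCurr"}
	for _, name := range benchNames("MetricNames", layouts, []int{100000}) {
		fmt.Println(name)
	}
}
```

With names built this way, a single regexp selects exactly the comparable slice of cases while skipping the rest of the matrix.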

Why this change:
- I often need to run benchmarks with configs that I did not have
before, which requires either modifying an existing one or writing a new one.
It is easy to get lost and make benchmarks non-comparable.
- I need some way to make the legacy and pt index benchmarks comparable.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-09-22 15:05:32 +02:00
Roman Khavronenko
95ca45d05a docs: replace link to WITH templates playground (#9729)
The new link is shorter and has nice UI.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-22 13:52:42 +02:00
Nils K
828a2aaf17 docs: fix typo of dree -> free in formula (#9743) 2025-09-22 13:52:13 +02:00
hagen1778
007ae5a3f0 docs: fix a few typos in the changelog
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-22 13:51:40 +02:00
Zakhar Bessarab
dcd23da4ba docs/vmbackupmanager: add docs to clarify unsafe usage of lifecycle rules (#9728)
- state that it is unsafe to use lifecycle rules and describe the reason
- update formatting according to the latest changes in docs


---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-09-22 11:56:48 +04:00
Roman Khavronenko
e33dbaf3d2 docs: update vmagent diagram image (#9727)
The original image seems outdated by now.
Replacing it with the updated and more detailed version from
https://victoriametrics.com/blog/vmagent-key-features-explained/

Picture is created by @func25

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: func25 <phuongle0205@gmail.com>
2025-09-17 16:34:52 +03:00
Arie Heinrich
c68973a247 Markdown, grammar and spelling (#9692)
### Describe Your Changes

This pull request consists of the following:

1. Markdown fixes
    following https://www.markdownguide.org/basic-syntax/
and https://github.com/markdownlint/markdownlint/blob/main/docs/RULES.md
- Add empty lines after headers or lists
- Remove extra lines between paragraphs
- Remove extra spaces at the end of a line
- Add language to code quote
- Consistent lists (don't mix asterisks and dashes in the same file; choose one
and be consistent)
- Proper URL links
- Use meaningful context to URLs instead of "here".

2. Concise language

3. Grammar fixes

- removing extra spaces between words
- there are multiple ones, but I picked the basic ones that caught my
eye :)

4. Spelling fixes

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-17 13:52:43 +03:00
Aliaksandr Valialkin
2c72ef0f38 app/vmauth: follow-up for 8ce4636bc0
- Rename copyStream to copyStreamToClient in order to make it clearer
  that the stream must be copied from the backend to the client.

- Make sure that the client implements net/http.Flusher interface.
  It is a programming error (BUG) if the client passed to copyStreamToClient
  doesn't implement net/http.Flusher interface.

- Do not write zero-length data to the backend.

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/667
2025-09-17 10:26:40 +02:00
Roman Khavronenko
bd0551da3b deployment: drop logs-benchmark (#9726)
It has a new home now - see
https://github.com/VictoriaMetrics/VictoriaLogs/tree/master/deployment/logs-benchmark

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-16 17:14:33 +03:00
Dima Shur
9f52c40b0b Improvements for backup description and configuration for single node, cluster , quick start (#9459)
### Describe Your Changes

Updating backup-related documentation:
vmbackup, single node, cluster node, quick start to increase clarity and
improve doc structure

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-09-16 16:41:54 +03:00
dependabot[bot]
ba3b50df1d build(deps): bump vite from 7.0.4 to 7.1.5 in /app/vmui/packages/vmui (#9706)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite)
from 7.0.4 to 7.1.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vitejs/vite/releases">vite's
releases</a>.</em></p>
<blockquote>
<h2>v7.1.5</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.5/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.4</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.4/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.3</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.3/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.2</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.2/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.1</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.1/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>create-vite@7.1.1</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/create-vite@7.1.1/packages/create-vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>plugin-legacy@7.1.0</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/plugin-legacy@7.1.0/packages/plugin-legacy/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>create-vite@7.1.0</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/create-vite@7.1.0/packages/create-vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.0</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.0/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.0-beta.1</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.0-beta.1/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.1.0-beta.0</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.1.0-beta.0/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.0.7</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.0.7/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.0.6</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.0.6/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
<h2>v7.0.5</h2>
<p>Please refer to <a
href="https://github.com/vitejs/vite/blob/v7.0.5/packages/vite/CHANGELOG.md">CHANGELOG.md</a>
for details.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/vitejs/vite/blob/main/packages/vite/CHANGELOG.md">vite's
changelog</a>.</em></p>
<blockquote>
<h2><!-- raw HTML omitted --><a
href="https://github.com/vitejs/vite/compare/v7.1.4...v7.1.5">7.1.5</a>
(2025-09-08)<!-- raw HTML omitted --></h2>
<h3>Bug Fixes</h3>
<ul>
<li>apply <code>fs.strict</code> check to HTML files (<a
href="https://redirect.github.com/vitejs/vite/issues/20736">#20736</a>)
(<a
href="14015d794f">14015d7</a>)</li>
<li><strong>deps:</strong> update all non-major dependencies (<a
href="https://redirect.github.com/vitejs/vite/issues/20732">#20732</a>)
(<a
href="122bfbabeb">122bfba</a>)</li>
<li>upgrade sirv to 3.0.2 (<a
href="https://redirect.github.com/vitejs/vite/issues/20735">#20735</a>)
(<a
href="09f2b52e8d">09f2b52</a>)</li>
</ul>
<h2><!-- raw HTML omitted --><a
href="https://github.com/vitejs/vite/compare/v7.1.3...v7.1.4">7.1.4</a>
(2025-09-01)<!-- raw HTML omitted --></h2>
<h3>Bug Fixes</h3>
<ul>
<li>add missing awaits (<a
href="https://redirect.github.com/vitejs/vite/issues/20697">#20697</a>)
(<a
href="79d10ed634">79d10ed</a>)</li>
<li><strong>deps:</strong> update all non-major dependencies (<a
href="https://redirect.github.com/vitejs/vite/issues/20676">#20676</a>)
(<a
href="5a274b29df">5a274b2</a>)</li>
<li><strong>deps:</strong> update all non-major dependencies (<a
href="https://redirect.github.com/vitejs/vite/issues/20709">#20709</a>)
(<a
href="0401feba17">0401feb</a>)</li>
<li>pass rollup watch options when building in watch mode (<a
href="https://redirect.github.com/vitejs/vite/issues/20674">#20674</a>)
(<a
href="f367453ca2">f367453</a>)</li>
</ul>
<h3>Miscellaneous Chores</h3>
<ul>
<li>remove unused constants entry from rolldown.config.ts (<a
href="https://redirect.github.com/vitejs/vite/issues/20710">#20710</a>)
(<a
href="537fcf9186">537fcf9</a>)</li>
</ul>
<h3>Code Refactoring</h3>
<ul>
<li>remove unnecessary <code>minify</code> parameter from
<code>finalizeCss</code> (<a
href="https://redirect.github.com/vitejs/vite/issues/20701">#20701</a>)
(<a
href="8099582e53">8099582</a>)</li>
</ul>
<h2><!-- raw HTML omitted --><a
href="https://github.com/vitejs/vite/compare/v7.1.2...v7.1.3">7.1.3</a>
(2025-08-19)<!-- raw HTML omitted --></h2>
<h3>Features</h3>
<ul>
<li><strong>cli:</strong> add Node.js version warning for unsupported
versions (<a
href="https://redirect.github.com/vitejs/vite/issues/20638">#20638</a>)
(<a
href="a1be1bf090">a1be1bf</a>)</li>
<li>generate code frame for parse errors thrown by terser (<a
href="https://redirect.github.com/vitejs/vite/issues/20642">#20642</a>)
(<a
href="a9ba0174a5">a9ba017</a>)</li>
<li>support long lines in <code>generateCodeFrame</code> (<a
href="https://redirect.github.com/vitejs/vite/issues/20640">#20640</a>)
(<a
href="1559577317">1559577</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>deps:</strong> update all non-major dependencies (<a
href="https://redirect.github.com/vitejs/vite/issues/20634">#20634</a>)
(<a
href="4851cab3ba">4851cab</a>)</li>
<li><strong>optimizer:</strong> incorrect incompatible error (<a
href="https://redirect.github.com/vitejs/vite/issues/20439">#20439</a>)
(<a
href="446fe83033">446fe83</a>)</li>
<li>support multiline new URL(..., import.meta.url) expressions (<a
href="https://redirect.github.com/vitejs/vite/issues/20644">#20644</a>)
(<a
href="9ccf142764">9ccf142</a>)</li>
</ul>
<h3>Performance Improvements</h3>
<ul>
<li><strong>cli:</strong> dynamically import <code>resolveConfig</code>
(<a
href="https://redirect.github.com/vitejs/vite/issues/20646">#20646</a>)
(<a
href="f691f57e46">f691f57</a>)</li>
</ul>
<h3>Miscellaneous Chores</h3>
<ul>
<li><strong>deps:</strong> update rolldown-related dependencies (<a
href="https://redirect.github.com/vitejs/vite/issues/20633">#20633</a>)
(<a
href="98b92e8c4b">98b92e8</a>)</li>
</ul>
<h3>Code Refactoring</h3>
<ul>
<li>replace startsWith with strict equality (<a
href="https://redirect.github.com/vitejs/vite/issues/20603">#20603</a>)
(<a
href="42816dee0e">42816de</a>)</li>
<li>use <code>import</code> in worker threads (<a
href="https://redirect.github.com/vitejs/vite/issues/20641">#20641</a>)
(<a
href="530687a344">530687a</a>)</li>
</ul>
<h3>Tests</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="564754061e"><code>5647540</code></a>
release: v7.1.5</li>
<li><a
href="09f2b52e8d"><code>09f2b52</code></a>
fix: upgrade sirv to 3.0.2 (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20735">#20735</a>)</li>
<li><a
href="14015d794f"><code>14015d7</code></a>
fix: apply <code>fs.strict</code> check to HTML files (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20736">#20736</a>)</li>
<li><a
href="122bfbabeb"><code>122bfba</code></a>
fix(deps): update all non-major dependencies (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20732">#20732</a>)</li>
<li><a
href="bcc31449c0"><code>bcc3144</code></a>
release: v7.1.4</li>
<li><a
href="0401feba17"><code>0401feb</code></a>
fix(deps): update all non-major dependencies (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20709">#20709</a>)</li>
<li><a
href="537fcf9186"><code>537fcf9</code></a>
chore: remove unused constants entry from rolldown.config.ts (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20710">#20710</a>)</li>
<li><a
href="79d10ed634"><code>79d10ed</code></a>
fix: add missing awaits (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20697">#20697</a>)</li>
<li><a
href="8099582e53"><code>8099582</code></a>
refactor: remove unnecessary <code>minify</code> parameter from
<code>finalizeCss</code> (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20701">#20701</a>)</li>
<li><a
href="f367453ca2"><code>f367453</code></a>
fix: pass rollup watch options when building in watch mode (<a
href="https://github.com/vitejs/vite/tree/HEAD/packages/vite/issues/20674">#20674</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/vitejs/vite/commits/v7.1.5/packages/vite">compare
view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by [GitHub Actions](<a
href="https://www.npmjs.com/~GitHub">https://www.npmjs.com/~GitHub</a>
Actions), a new releaser for vite since your current version.</p>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=vite&package-manager=npm_and_yarn&previous-version=7.0.4&new-version=7.1.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/VictoriaMetrics/VictoriaMetrics/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-16 15:56:42 +03:00
Aliaksandr Valialkin
3cfeae7f1a app/vmauth: do not log requests canceled by the client, since this is an expected condition
See https://github.com/VictoriaMetrics/VictoriaLogs/issues/667#issuecomment-3297270128
2025-09-16 11:59:06 +02:00
Max Kotliar
32da04725b docs: use canonical link 2025-09-16 12:42:59 +03:00
Aliaksandr Valialkin
8ce4636bc0 app/vmauth: flush data chunks from backends to clients as soon as possible without buffering them on the vmauth side
This allows the proper live tailing of responses from backends
such as VictoriaLogs live tailing - https://docs.victoriametrics.com/victorialogs/querying/#live-tailing

See https://github.com/VictoriaMetrics/VictoriaLogs/issues/667

Thanks to @func25 for the initial pull request at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9723
2025-09-16 11:10:38 +02:00
Andrii Chubatiuk
6167ce655e lib/timerpool: removed unneeded code, unified package usage (#9735)
### Describe Your Changes

Since Go 1.23 it is enough to just stop the timer; there is no need to drain
its channel.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9721 (this PR
is not a fix for it).

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-16 09:55:33 +02:00
Max Kotliar
f1e294aa2b docs: use canonical links 2025-09-16 10:19:20 +03:00
Max Kotliar
b72bf6961d docs: bump latest version in docs 2025-09-15 14:21:07 +03:00
Max Kotliar
2b880fe7db deployment/docker: bump version 2025-09-15 14:08:19 +03:00
Max Kotliar
9898743fbd docs: bump last LTS versions 2025-09-15 12:28:56 +03:00
Max Kotliar
ca372168ae docs/CHANGELOG.md: update changelog with LTS release notes 2025-09-15 12:24:34 +03:00
Max Kotliar
323974164b docs: correct the "available from" version 2025-09-15 10:46:41 +03:00
Max Kotliar
d0b948289b docs/changelog: fix link; chore a bit 2025-09-12 20:22:37 +03:00
Max Kotliar
aa429631a6 docs/CHANGELOG.md: cut v1.126.0 2025-09-12 15:57:42 +03:00
Max Kotliar
9e3cf9ab64 docs: update version help tooltips 2025-09-12 15:54:53 +03:00
Max Kotliar
94601365ca security: do not mention exact lts versions
Provide a link to the LTS page where the version is updated.
2025-09-12 15:47:07 +03:00
Max Kotliar
02b5849d92 app/vmselect: run make vmui-update 2025-09-12 15:30:12 +03:00
Zakhar Bessarab
933f5b39d6 app/vmbackupmanager: use full backup path for restore mark (#939)
1beb629b removed the logic that kept the full backup
location path in the restore mark file. Because of this, backups created
with a short name (e.g. `vmbackupmanager restore create
daily/2025-09-12`) will fail to restore, as the backup location is not prepended.

Fix that by properly constructing the full backup name from the parsed canonical
values.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-09-12 14:51:18 +03:00
Max Kotliar
367cdb089f vendor: update metrics package to v1.40.1 (#9725)
### Describe Your Changes

Includes fix https://github.com/VictoriaMetrics/metrics/pull/99

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-12 14:14:49 +03:00
Andrii Chubatiuk
2a1b3866e1 app/vmui: fixed backend URL for multitenant endpoints (#9703)
### Describe Your Changes

vmui builds an incorrect endpoint when using the multitenant API. The bug was
introduced in PR #8989.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-09-12 13:50:11 +03:00
Hui Wang
327a103367 fix automatic issuing of TLS certificates (#935)
* fix automatic issuing of TLS certificates
2025-09-12 13:27:37 +03:00
Aliaksandr Valialkin
5a80d4c552 lib/fs: sync the directory scheduled for removal after removing the deleteDirFilename file
This should help remove various metadata in the directory, which may be left
behind by some exotic filesystems such as OSSFS2.

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9709
and https://github.com/VictoriaMetrics/VictoriaLogs/issues/649 for details.
2025-09-11 16:57:43 +02:00
dependabot[bot]
30c2868ff8 build(deps): bump actions/setup-go from 5 to 6 (#9688)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to
6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/setup-go/releases">actions/setup-go's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2>What's Changed</h2>
<h3>Breaking Changes</h3>
<ul>
<li>Improve toolchain handling to ensure more reliable and consistent
toolchain selection and management by <a
href="https://github.com/matthewhughes934"><code>@​matthewhughes934</code></a>
in <a
href="https://redirect.github.com/actions/setup-go/pull/460">actions/setup-go#460</a></li>
<li>Upgrade Nodejs runtime from node20 to node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/setup-go/pull/624">actions/setup-go#624</a></li>
</ul>
<p>Make sure your runner is on version v2.327.1 or later to ensure
compatibility with this release. <a
href="https://github.com/actions/runner/releases/tag/v2.327.1">See
Release Notes</a></p>
<h3>Dependency Upgrades</h3>
<ul>
<li>Upgrade <code>@​types/jest</code> from 29.5.12 to 29.5.14 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-go/pull/589">actions/setup-go#589</a></li>
<li>Upgrade <code>@​actions/tool-cache</code> from 2.0.1 to 2.0.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-go/pull/591">actions/setup-go#591</a></li>
<li>Upgrade <code>@​typescript-eslint/parser</code> from 8.31.1 to
8.35.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-go/pull/590">actions/setup-go#590</a></li>
<li>Upgrade undici from 5.28.5 to 5.29.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-go/pull/594">actions/setup-go#594</a></li>
<li>Upgrade typescript from 5.4.2 to 5.8.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-go/pull/538">actions/setup-go#538</a></li>
<li>Upgrade eslint-plugin-jest from 28.11.0 to 29.0.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-go/pull/603">actions/setup-go#603</a></li>
<li>Upgrade <code>form-data</code> to bring in fix for critical
vulnerability by <a
href="https://github.com/matthewhughes934"><code>@​matthewhughes934</code></a>
in <a
href="https://redirect.github.com/actions/setup-go/pull/618">actions/setup-go#618</a></li>
<li>Upgrade actions/checkout from 4 to 5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-go/pull/631">actions/setup-go#631</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/matthewhughes934"><code>@​matthewhughes934</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/setup-go/pull/618">actions/setup-go#618</a></li>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/setup-go/pull/624">actions/setup-go#624</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-go/compare/v5...v6.0.0">https://github.com/actions/setup-go/compare/v5...v6.0.0</a></p>
<h2>v5.5.0</h2>
<h2>What's Changed</h2>
<h3>Bug fixes:</h3>
<ul>
<li>Update self-hosted environment validation by <a
href="https://github.com/priyagupta108"><code>@​priyagupta108</code></a>
in <a
href="https://redirect.github.com/actions/setup-go/pull/556">actions/setup-go#556</a></li>
<li>Add manifest validation and improve error handling by <a
href="https://github.com/priyagupta108"><code>@​priyagupta108</code></a>
in <a
href="https://redirect.github.com/actions/setup-go/pull/586">actions/setup-go#586</a></li>
<li>Update template link by <a
href="https://github.com/jsoref"><code>@​jsoref</code></a> in <a
href="https://redirect.github.com/actions/setup-go/pull/527">actions/setup-go#527</a></li>
</ul>
<h3>Dependency  updates:</h3>
<ul>
<li>Upgrade <code>@​action/cache</code> from 4.0.2 to 4.0.3 by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-go/pull/574">actions/setup-go#574</a></li>
<li>Upgrade <code>@​actions/glob</code> from 0.4.0 to 0.5.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-go/pull/573">actions/setup-go#573</a></li>
<li>Upgrade ts-jest from 29.1.2 to 29.3.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-go/pull/582">actions/setup-go#582</a></li>
<li>Upgrade eslint-plugin-jest from 27.9.0 to 28.11.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-go/pull/537">actions/setup-go#537</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/jsoref"><code>@​jsoref</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/setup-go/pull/527">actions/setup-go#527</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-go/compare/v5...v5.5.0">https://github.com/actions/setup-go/compare/v5...v5.5.0</a></p>
<h2>v5.4.0</h2>
<h2>What's Changed</h2>
<h3>Dependency updates :</h3>
<ul>
<li>Upgrade semver from 7.6.0 to 7.6.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-go/pull/535">actions/setup-go#535</a></li>
<li>Upgrade eslint-config-prettier from 8.10.0 to 10.0.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-go/pull/536">actions/setup-go#536</a></li>
<li>Upgrade <code>@​action/cache</code> from 4.0.0 to 4.0.2 by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-go/pull/568">actions/setup-go#568</a></li>
<li>Upgrade undici from 5.28.4 to 5.28.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-go/pull/541">actions/setup-go#541</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4469467582"><code>4469467</code></a>
Bump actions/checkout from 4 to 5 (<a
href="https://redirect.github.com/actions/setup-go/issues/631">#631</a>)</li>
<li><a
href="e093d1e9bb"><code>e093d1e</code></a>
Node 24 upgrade (<a
href="https://redirect.github.com/actions/setup-go/issues/624">#624</a>)</li>
<li><a
href="1d76b952eb"><code>1d76b95</code></a>
Improve toolchain handling (<a
href="https://redirect.github.com/actions/setup-go/issues/460">#460</a>)</li>
<li><a
href="e75c3e80bc"><code>e75c3e8</code></a>
Bump <code>form-data</code> to bring in fix for critical vulnerability
(<a
href="https://redirect.github.com/actions/setup-go/issues/618">#618</a>)</li>
<li><a
href="8e57b58e57"><code>8e57b58</code></a>
Bump eslint-plugin-jest from 28.11.0 to 29.0.1 (<a
href="https://redirect.github.com/actions/setup-go/issues/603">#603</a>)</li>
<li><a
href="7c0b336c9a"><code>7c0b336</code></a>
Bump typescript from 5.4.2 to 5.8.3 (<a
href="https://redirect.github.com/actions/setup-go/issues/538">#538</a>)</li>
<li><a
href="6f26dcc668"><code>6f26dcc</code></a>
Bump undici from 5.28.5 to 5.29.0 (<a
href="https://redirect.github.com/actions/setup-go/issues/594">#594</a>)</li>
<li><a
href="8d4083a006"><code>8d4083a</code></a>
Bump <code>@​typescript-eslint/parser</code> from 5.62.0 to 8.32.0 (<a
href="https://redirect.github.com/actions/setup-go/issues/590">#590</a>)</li>
<li><a
href="fa96338abe"><code>fa96338</code></a>
Bump <code>@​actions/tool-cache</code> from 2.0.1 to 2.0.2 (<a
href="https://redirect.github.com/actions/setup-go/issues/591">#591</a>)</li>
<li><a
href="4de67c04ab"><code>4de67c0</code></a>
Bump <code>@​types/jest</code> from 29.5.12 to 29.5.14 (<a
href="https://redirect.github.com/actions/setup-go/issues/589">#589</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/setup-go/compare/v5...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/setup-go&package-manager=github_actions&previous-version=5&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-11 12:20:38 +02:00
Zhu Jiekun
d9ffef486d docs: add contributing guide for vendor package
Some VictoriaMetrics organization repos vendor each other. To avoid
pull requests like
https://github.com/VictoriaMetrics/VictoriaLogs/pull/658, this pull
request adds a contributing guide for the vendor package.

Related: https://github.com/VictoriaMetrics/VictoriaLogs/issues/659
2025-09-11 12:12:40 +02:00
Nikolay
cf6a1017bd app/vmagent: respect enable.auto.commit for kafka consumer
Previously, vmagent always set enable.auto.commit to false and manually
committed messages. This added extra pressure on the Kafka brokers and could
slow down data consumption.

This commit allows vmagent to skip the manual commit and use auto-commit
based on the provided configuration, which may improve message read throughput.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/pull/931
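The gist of the change can be sketched with a dependency-free toy model (all
type and field names here are hypothetical; the real vmagent code drives a
Kafka client library, where enable.auto.commit makes the library commit
offsets periodically in the background):

```go
package main

import "fmt"

// consumer is a hypothetical stand-in for the relevant part of a Kafka
// consumer; it only models the commit decision described above.
type consumer struct {
	autoCommit bool // mirrors the enable.auto.commit setting
	committed  int  // number of manual commits issued
}

func (c *consumer) processMessage(msg string) {
	// ... push the message to remote storage ...
	if !c.autoCommit {
		// Previous behavior: always commit manually after each message,
		// which adds extra load on the brokers.
		c.committed++
	}
	// With enable.auto.commit=true, no per-message commit is issued here;
	// the client library commits offsets in the background.
}

func main() {
	manual := &consumer{autoCommit: false}
	auto := &consumer{autoCommit: true}
	for _, m := range []string{"a", "b", "c"} {
		manual.processMessage(m)
		auto.processMessage(m)
	}
	fmt.Println(manual.committed, auto.committed) // 3 0
}
```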
2025-09-11 12:09:27 +02:00
Max Kotliar
84fc71e876 docs: reduce redirects in docs (#9711)
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-10 21:04:14 +03:00
Max Kotliar
26c920e738 docs: reduce redirect in docs 2025-09-10 14:20:37 +03:00
Max Kotliar
ebd736a30f docs: add slash at the end to avoid redirect (#9705)
### Describe Your Changes

Add a slash at the end of the link to avoid redirects. Remove `.html` in
links.

P.S. While working on this one, I found that anchors to guides are
broken. I'll address them in a separate PR.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-09 20:15:32 +03:00
Max Kotliar
76fcd96aec docs: use direct correct links (#9704)
### Describe Your Changes

Don't use legacy pages; use direct links to the proper pages and avoid
redirects or alias (aka `.html`) pages.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-09 17:26:36 +03:00
Max Kotliar
4f82250845 docs: drop highlight query param from links
Text highlighting in the docs used to work, but it no longer does.
Removing it makes indexing the docs a bit more convenient.
2025-09-09 13:59:14 +03:00
Max Kotliar
cfff64295d Makefile: Add docs-update-flags command that syncs docs flags from the actual binaries (#9632)
### Describe Your Changes

This PR introduces a `make docs-update-flags` command that updates flags
in the documentation using the actual binaries compiled from the latest
`enterprise-single-node` and `enterprise-cluster` branches (hardcoded
for now). The command also normalizes the output format.

It can be run from any branch. All work happens inside temporary
directories under /tmp. The script checks out the required branch,
builds the binaries, and updates the documentation. The current Git
repository is not touched.

The command adjusts default values to more meaningful ones, such as
changing `-maxConcurrentInserts` (default 20) to (default
2*cgroup.AvailableCPUs()).

Currently the logic is implemented only for vminsert, vmstorage,
vmselect, vmagent, vmalert, and victoria-metrics (aka single).

The goal is to make it easy to keep documentation synchronized with real
binaries.

_**Note:** Please ignore xxx_flags.md files for now. Review flags in
`README.md` and `Cluster-VictoriaMetrics.md`, and `vmagent.md`,
`vmalert.md` only. Once we agree on the changes in those files, I'll
replace the flags with the `{{% content "xxx_flags.md" %}}`._

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-09 13:29:34 +03:00
Aliaksandr Valialkin
89464789dc docs/victoriametrics/Articles.md: add new third-party articles about VictoriaMetrics 2025-09-08 17:56:10 +02:00
hagen1778
f8859574de docs: clarify details of data migration
* stress the requirement to have an empty destination folder for copying;
* remove extra verbosity from the docs;
* remove the list of vmctl migration options as it became unsynced; instead of
  syncing, refer to the vmctl docs;
* fix typos.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-08 14:22:22 +02:00
Yury Molodov
9d4a8ed799 app/vmui: display vmselect version in footer (#9690)
### Describe Your Changes

* Updated `useFetchAppConfig` to respect the provided `serverUrl`.
* Added `vmselect` version display in the footer for easier debugging
and support.
<img width="1449" height="71" alt="image"
src="https://github.com/user-attachments/assets/228b4ed5-89c2-4e95-9436-ee464a7fd40b"
/>

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-09-05 17:09:09 +02:00
Roman Khavronenko
dfcfacd04f app/vmselect: encode application version into manifest (#9654)
The application version can then be displayed in the vmui. Showing the
application version in vmui should make it easier to determine the currently
used VM version (at least the vmselect version).

------------

@Loori-R it would be good to add the app version in vmui in a follow-up
PR or by pushing a commit to this branch.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-05 17:02:11 +02:00
Roman Khavronenko
28da18282b vmalert: re-factoring follow-up (#9683)
Minor changes after the follow-up in
85f556f53e

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-05 17:01:20 +02:00
Arie Heinrich
db8e40f26c docs: markdown, grammar and spelling (#9686)
### Describe Your Changes

This pull request consists of the following:

1. Markdown fixes
following https://www.markdownguide.org/basic-syntax/
and https://github.com/markdownlint/markdownlint/blob/main/docs/RULES.md

* Add empty lines after headers or lists
* Remove extra lines between paragraphs
* Remove extra spaces at the end of a line
* Add a language to code blocks
* Consistent lists (don't mix asterisks and dashes in the same file; choose
one and be consistent)
* Proper URL links
* Use meaningful link text instead of "here".

2. Concise language

3. Grammar fixes

* removing extra spaces between words
* there are multiple issues, but I picked the basic ones that caught my eye :)

4. Spelling fixes

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: Zhu Jiekun <jiekun@victoriametrics.com>
2025-09-05 17:01:05 +02:00
Andrii Chubatiuk
e958d488b0 app/vmui: fixed alerting page on mobile devices (#9678)
### Describe Your Changes

Fixed visualisation issues when opening the alerting page on mobile devices.

before:
<img width="334" height="68" alt="image"
src="https://github.com/user-attachments/assets/fb085c46-5e01-430e-b109-46971e377a48"
/>
<img width="337" height="452" alt="image"
src="https://github.com/user-attachments/assets/871affb8-c4dc-4d23-9958-fba9f77a5612"
/>
<img width="318" height="509" alt="image"
src="https://github.com/user-attachments/assets/a66c8634-3e3e-4bd7-abc8-ec1a7fa92318"
/>


after:
<img width="334" height="74" alt="image"
src="https://github.com/user-attachments/assets/8ad127f2-cc61-4297-97fa-d54910f31761"
/>
<img width="337" height="419" alt="image"
src="https://github.com/user-attachments/assets/15e9fb04-0873-4967-aa59-1370f2b0adaf"
/>
<img width="305" height="501" alt="image"
src="https://github.com/user-attachments/assets/8233a43a-70ce-4b15-afb2-d64a6b696038"
/>



### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-05 17:00:27 +02:00
Andrii Chubatiuk
569f045728 app/vmui: minor main components improvements (#9644)
`Table` component:
- add a `format` property for table columns, which allows applying custom
formatting depending on the column type
- add a `rowClasses` table property, which allows passing a function that
customizes the row CSS class depending on the row value
- add a `rowAction` table property, which allows executing an action when
clicking on a table row

`Popper` component:
- add `classes` to specify additional CSS classes for the popper to
differentiate it from other poppers, since it's mounted to the DOM root

`Switch` component:
- use gap instead of left-margin

`DateTimeInput` component:
- add a `dateOnly` property to allow accepting only a date in the input

additional fixes:
- fix TopQuery header fields alignment

<img width="1279" height="125" alt="image"
src="https://github.com/user-attachments/assets/08ad4dbc-19e5-47f5-9ccd-a9fb222335a4"
/>

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-05 16:59:03 +02:00
Max Kotliar
2bb42cfb16 go.mod: update metricsql lib to v0.84.8
https://github.com/VictoriaMetrics/metricsql/releases/tag/v0.84.8
2025-09-05 12:42:45 +03:00
Max Kotliar
e227eb82bd docs/url-examples: align export example format with import example (#9681)
### Describe Your Changes

The export/import format can be confusing for users unfamiliar with its
syntax. To make matters worse, the format shown in [export
examples](https://docs.victoriametrics.com/victoriametrics/url-examples/#apiv1exportcsv)
was incompatible with the one used in [import
examples](https://docs.victoriametrics.com/victoriametrics/url-examples/#apiv1importcsv).

This PR updates the examples so they are compatible, allowing users to
follow the export and import steps to complete a full data cycle.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-04 14:27:09 +03:00
hagen1778
3fff181e2c docs: add change after 5854d9df72
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-04 12:32:57 +02:00
wbwren-eric
5854d9df72 app/vmui: set rateEnabled default value to false for probe_success (#9648)
### Describe Your Changes

Set rateEnabled to false for probe_success in VMUI

Fix issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9655

Problem:
probe_success is incorrectly initialized with rateEnabled = true because
the regex detecting counters (`/_sum?|_total?|_count?/`) matches partial
strings like `_su`. This causes probe_success (a gauge) to be treated as a
counter, producing slightly misleading graphs. For example, when
rateEnabled is set to true, probe_success often shows as 0 in VMUI when
the probe is actually succeeding.
It is not intuitive for users to have to disable rateEnabled manually
just to get the correct value for probe_success in VMUI.

Solution:
Update the regex to strictly match suffixes:
`/_sum$|_total$|_count$/`

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: William Wren <william.wren@ericsson.com>
2025-09-04 12:29:01 +02:00
Hui Wang
85f556f53e vmalert: move the web types into sub-packages (#9560)
fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9551.

To avoid the group update blocking API calls for irrelevant resources
for too long, we don't lock the `m.groupsMu` during the [group
updates](fd928a0f5b/app/vmalert/manager.go (L100)).
And to avoid group changes during related API calls, a
[DeepCopy](61c5e8185c/app/vmalert/web_types.go (L341))
was used to copy needed group info, but it was not implemented correctly
and can't be implemented efficiently.
This pull request splits rule-related web types into sub-packages, which
should be clearer and easier to maintain.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-09-04 12:27:48 +02:00
Aliaksandr Valialkin
42984e3413 docs/victoriametrics/vmauth.md: update ./vmauth -help output with the newly added -mergeQueryArgs command-line flag at 272f6b2a46 2025-09-04 11:14:43 +02:00
Hui Wang
08c835e79f dashboards: add panel Storage full ETA in the vmstorage section (#9670) 2025-09-04 09:12:27 +02:00
Andrii Chubatiuk
4c23f6913e app/vmui: make sidebar scrollable and its items collapsible (#9662)
after adding the Alerting section, not all menu items can be displayed in
the sidebar on mobile devices. This PR:

- makes the sidebar scrollable when its content overflows the screen
- makes sidebar items collapsible
- fixes the menu layout on mobile devices with big screens

before:

<img width="1074" height="57" alt="image"
src="https://github.com/user-attachments/assets/6ae69487-d89a-4aaa-985b-de788be06cff"
/>

<img width="198" height="490" alt="image"
src="https://github.com/user-attachments/assets/0a494c52-6db7-4160-a04d-df69b88604dc"
/>


after:

<img width="1170" height="55" alt="image"
src="https://github.com/user-attachments/assets/57909536-0353-4be2-8d8f-4302b3bfe338"
/>

<img width="199" height="509" alt="image"
src="https://github.com/user-attachments/assets/43f33536-86eb-41b1-91d8-5b8ca95faeca"
/>



### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-04 09:09:36 +02:00
Arie Heinrich
8411675d55 docs: markdown, grammar and spelling (#9675)
### Describe Your Changes

This pull request consists of the following:

1. Markdown fixes
following https://www.markdownguide.org/basic-syntax/
and https://github.com/markdownlint/markdownlint/blob/main/docs/RULES.md

- Add empty lines after headers or lists
- Remove extra lines between paragraphs
- Remove extra spaces at the end of a line
- Add a language to code blocks
- Consistent lists (don't mix asterisks and dashes in the same file; choose
one and be consistent)
- Proper URL links
- Use meaningful link text instead of "here".

2. Concise language

3. Grammar fixes

- removing extra spaces between words
- there are multiple issues, but I picked the basic ones that caught my eye :)

4. Spelling fixes

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-04 09:08:05 +02:00
hagen1778
ba5cacbe60 docs: change update note to known issues for consistency
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-04 09:03:00 +02:00
hagen1778
1d2d0c49cc docs: update changelog with fixes in recent releases
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-04 08:59:03 +02:00
f41gh7
a0a33f0ce1 docs: add v1.125.1 and v1.110.18 releases
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-09-03 22:10:27 +02:00
Aliaksandr Valialkin
9327a426e0 docs/victoriametrics/vmauth.md: typo fix: sepcified -> specified 2025-09-03 15:57:48 +02:00
Aliaksandr Valialkin
272f6b2a46 app/vmauth: add an ability to merge the given client query args with the query args specified at the backend url
This is needed for VictoriaLogs, which allows limiting query results with the given set of extra filters
specified via extra_filters query arg. The request url can contain multiple extra_filters query args -
they are all applied with AND logic to the query. See https://docs.victoriametrics.com/victorialogs/querying/#extra-filters

The merge_query_args option at vmauth allows merging the extra_filters provided by the client
(such as Grafana plugin for VictoriaLogs or built-in web UI) with the extra_filters specified in the backend
url at vmauth config.

This is needed for https://github.com/VictoriaMetrics/VictoriaLogs/issues/106
2025-09-03 15:50:46 +02:00
f41gh7
5f559b7307 CHANGELOG.md: cut v1.125.1 release 2025-09-03 15:40:05 +02:00
f41gh7
c06d499bf1 make vmui-update 2025-09-03 15:32:24 +02:00
Artem Fetishev
89fd27c922 lib/workingsetcache: properly count workingsetcache metrics
`workingsetcache` is built on top of two
[fastcache](https://github.com/VictoriaMetrics/fastcache) instances
(curr and prev) that are rotated periodically (configurable via
`-cacheExpireDuration` flag). During the rotation, curr becomes prev, the
old prev is discarded, and the new curr is empty. If an entry is not found in
curr then the prev cache is checked, and if the entry is found there it
is copied to curr.

`workingsetcache` also exports metrics, such as `EntriesCount`,
`GetCalls`, `SetCalls`, and `Misses` counts. These metrics are currently
implemented as the sum of the same metrics in prev and curr `fastcache`
instances. Given the rotation logic, these counts can be incorrect:

1. `EntriesCount`. It is the sum of prev and curr entry counts. If an
entry is not found in curr but found in prev (and therefore copied
from prev to curr), the resulting entry count will be incorrect, i.e. it
will count copied entries twice.
2. `GetCalls`. It is the sum of prev and curr get calls. If an entry is
not found in curr, the logic will attempt to retrieve it from prev, which
results in double counting, while it is actually one get call to
`workingsetcache`.
3. `SetCalls`. It is the sum of prev and curr set calls. If an entry is
not found in curr but found in prev, it will be copied to curr, resulting
in a set call to curr, while from the `workingsetcache` perspective
there hasn't been any set operation at all.
4. `Misses`. It is the sum of prev and curr misses. If an entry is not
found in curr, it is recorded as a miss. If it is then found in prev,
the entry is returned to the caller, but that cache miss remains. If it
is not found in prev, then there will be 2 misses for 1
`workingsetcache` get call.

This PR introduces `GetCalls`, `SetCalls`, and `Misses` counts at the
`workingsetcache` level in order to count the calls correctly. It also
excludes duplicates from `EntriesCount`.
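Wrapper-level counting can be sketched with a toy model (hypothetical types,
not the actual implementation, which wraps two fastcache instances):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// cache is a toy stand-in for a fastcache instance.
type cache struct{ m map[string][]byte }

func (c *cache) get(k string) ([]byte, bool) { v, ok := c.m[k]; return v, ok }
func (c *cache) set(k string, v []byte)      { c.m[k] = v }

// workingSetCache counts getCalls/misses at the wrapper level, so a
// curr-miss followed by a prev-hit is one get call and zero misses.
type workingSetCache struct {
	curr, prev       *cache
	getCalls, misses atomic.Uint64
}

func (w *workingSetCache) Get(k string) ([]byte, bool) {
	w.getCalls.Add(1) // one get call per wrapper-level lookup
	if v, ok := w.curr.get(k); ok {
		return v, true
	}
	if v, ok := w.prev.get(k); ok {
		w.curr.set(k, v) // copy to curr; not counted as a wrapper SetCall
		return v, true
	}
	w.misses.Add(1) // a miss only when the entry is in neither cache
	return nil, false
}

func main() {
	w := &workingSetCache{
		curr: &cache{m: map[string][]byte{}},
		prev: &cache{m: map[string][]byte{"a": []byte("1")}},
	}
	w.Get("a") // prev hit, copied to curr: no miss recorded
	w.Get("b") // true miss
	fmt.Println(w.getCalls.Load(), w.misses.Load()) // 2 1
}
```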

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9553
2025-09-03 15:22:56 +02:00
Phuong Le
ec4ec4c2be flag: introduce new flag ExtendedDuration
Related: https://github.com/VictoriaMetrics/VictoriaLogs/issues/50
2025-09-03 15:21:21 +02:00
Yury Molodov
d0993058b1 vmui: fix useSearchParamsFromObject not updating searchParams
Fix bug in `useSearchParamsFromObject` hook that prevented filtering on
the *Explore Cardinality* page.

The bug was introduced in 483e00ffb9

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9674
2025-09-03 15:16:14 +02:00
Felix Yan
f7ee52c245 docs: correct a typo in vmalert.md (#9668)
### Describe Your Changes

Correct a typo in vmalert.md

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-03 11:50:46 +08:00
Andrii Chubatiuk
63a6b9b863 app/vmui: reuse codeexample component in alerts tab (#9649)
### Describe Your Changes

reuse codeexample component in vmui alerts page

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-02 14:38:26 +02:00
Dmytro Kozlov
fd23f6bfb3 benchmark: add gnuplot to show write speed (#9490)
### Describe Your Changes

Implemented the script that generates graphs using `gnuplot`. 
Those graphs show the write speed to the db. 
How to use it:
1. From the root run `make tsbs`;
2. The file `/tmp/tsbs-load-100000-2025-07-22T00:00:00Z-2025-07-23T00:00:00Z-80s.csv`
will be generated automatically;
3. From the root run `make tsbs-plot-load` and observe the result;
4. If you have two files with the `tsbs_load_victoriametrics` output, just
define the second one in
`TSBS_LOAD_RESULT_CSV_FILE_COMPARE=/tmp/tsbs-load-100000-2025-07-22T01:00:00Z-2025-07-23T01:00:00Z-80s.csv`.

To plot the measurements from some other benchmark, run
`make tsbs-plot-load TSBS_LOAD_RESULT_CSV_FILE=/path/to/file.csv`

To plot the measurements from two benchmarks, run
`make tsbs-plot-load TSBS_LOAD_RESULT_CSV_FILE=/path/to/file1.csv
TSBS_LOAD_RESULT_CSV_FILE_COMPARE=/path/to/file2.csv`

This command should generate a graph like the one shown in the picture below.

<img width="638" height="578" alt="Screenshot 2025-07-25 at 15 35 42"
src="https://github.com/user-attachments/assets/900b05ab-0b98-4f7f-8f2c-18d28ad2eab6"
/>


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
2025-09-02 14:37:18 +02:00
Max Kotliar
1b8dc8a94c docs/stream-aggregation: Add deduplication common mistake (#9659)
### Describe Your Changes

Fix a stream aggregation pitfall when deduplication intervals differ
between storage and vmagent.

Follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9581

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-02 14:35:53 +02:00
Phuong Le
9109e2e7c3 docs: fix localhost link (#9661) 2025-09-02 14:34:53 +02:00
hagen1778
bc75bbfbe7 docs: re-organize changelog lines by priority and components
This helps to improve readability of changes, so users
can see more important changes first, and see changes related
to the same component one after another.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-02 14:33:12 +02:00
hagen1778
dd19a17ef6 dashboards: update descriptions for resource usage panel
The new description content is courtesy of @func25

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-02 14:29:58 +02:00
Max Kotliar
03d93cc413 app/vmalert-tool: Force ipv4 binding in vmalert unit test (#9558)
### Describe Your Changes

Previously, the mock storage's `net.Listen("tcp", …)` could succeed even if
another process was bound to the same port, due to dual-stack behavior
(`[::]:port` vs `0.0.0.0:port`). That led to strange test results that were
hard to trace back to port misuse: tests queried not the mock server but
whatever was running on that port.

Switched to `"tcp4"` to ensure conflicts are detected correctly.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-02 15:04:35 +03:00
f41gh7
46d4635b08 app/vmselect: properly route requests for config.json
Bug was introduced during back-porting changes from single-node to the
cluster branch.

Follow-up after: 7f15e9f64c
2025-09-01 21:52:16 +02:00
Max Kotliar
ea5bf24676 .github/workflow: add check commit signed action (#9639)
### Describe Your Changes

Add a GitHub Action to verify commit signatures.

This action checks commit signatures, accepting G (good) and E (signed
but the key is not available for full verification).

Note: This is not a 100% accurate check. The CI mainly targets unsigned
commits from external contributors.

Reference:
https://git-scm.com/docs/pretty-formats#Documentation/pretty-formats.txt-G

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-01 18:41:39 +03:00
minxinyi
8a7b572ff4 refactor: use the built-in max/min to simplify the code (#9525)
### Describe Your Changes

use the built-in max/min to simplify the code
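For reference, this is the kind of simplification the built-ins enable
(available since Go 1.21):

```go
package main

import "fmt"

func main() {
	// Before Go 1.21, code like this needed a hand-written helper:
	//
	//	func maxInt(a, b int) int {
	//		if a > b {
	//			return a
	//		}
	//		return b
	//	}
	//
	// Since Go 1.21, the built-ins max and min work on any ordered type:
	fmt.Println(max(3, 7), min(3, 7)) // 7 3
	fmt.Println(max(1.5, 2.5))        // 2.5
}
```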


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: minxinyi <minxinyi6@outlook.com>
2025-09-01 18:40:57 +03:00
hagen1778
611e96d875 docs: update flag description for Kafka related flags
Follow-up after 0278bc5d9a

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-01 16:37:13 +02:00
Roman Khavronenko
0278bc5d9a docs: move vmagent's Kafka integration to /integrations page (#9658)
This change requires a follow-up commit to update cmd-line flags in ENT
version.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-01 16:28:05 +02:00
Andrii Chubatiuk
a585d95365 docs: exclude updated files from rendering and from sitemap.xml (#9616)
### Describe Your Changes

fixes https://github.com/VictoriaMetrics/vmdocs/issues/164

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-01 16:27:03 +02:00
Arie Heinrich
b5578fcac2 docs: markdown, grammar and spelling (#9650)
### Describe Your Changes

As there are quite a few files, and each file might have multiple
changes, I limited the PR to 5 files at a time to make it easy to review.

I suggest you take a look at markdownlint and add it as part of your CI,
similar to
https://github.com/MicrosoftDocs/PowerShell-Docs/blob/main/.markdownlint.yaml.
And while at it, take a look at cspell and how it's used in their repo,
and replace the python one you have in your current implementation (I
might open a PR with it after all the fix PRs).

This pull request consists of the following:

1. Markdown fixes
    following https://www.markdownguide.org/basic-syntax/
and https://github.com/markdownlint/markdownlint/blob/main/docs/RULES.md

   - Add empty lines after headers or lists
   - Remove extra lines between paragraphs
   - Remove extra spaces at the end of a line
   - Add a language to code blocks
   - Consistent lists (don't mix asterisks and dashes in the same file;
     choose one and be consistent)
   - Proper URL links
   - Use meaningful link text instead of "here".

2. Concise language

3. Grammar fixes
    - removing extra spaces between words
    - there are multiple issues, but I picked the basic ones that caught
      my eye :)

4. Spelling fixes

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-09-01 16:18:25 +02:00
Roman Khavronenko
86334534f6 docs: move vmagent's pubsub integration to /integrations page (#9656)
This change requires a follow-up commit to update cmd-line flags in ENT
version.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-09-01 16:09:58 +02:00
f41gh7
944af7b049 docs: replace v1.124.0 with v1.125.0 release
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-09-01 12:57:32 +02:00
f41gh7
3f98af6a0b docs: mention LTS releases
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-09-01 12:54:26 +02:00
Aliaksandr Valialkin
7967ad661e lib/fs: remove fsync for the parent directory from MustMkdirIfNotExist(), MustMkdirFailIfExist(), MustHardLinkFiles() and MustCopyDirectory()
This allows performing a single MustFsyncPath() for the parent directory after multiple calls to these functions.
This clarifies code paths, which call these functions, and makes them more maintainable.

This also removes a redundant fsync() call for the parent directory when creating a file-based part.
Previously the first fsync() was indirectly called when the directory was created via MustMkdirFailIfExist()
and the second fsync() was called via MustSyncPathAndParentDir() after all the data is written to the part.
2025-08-30 01:53:54 +02:00
Aliaksandr Valialkin
1aa72ecbfd lib/persistentqueue/persistentqueue.go: remove fs.MustSyncPath() call after fs.MustWriteSync()
The fs.MustWriteSync() already fsyncs the created file, so there is no need for an additional fsync() call.

While at it, add missing fsync for the parent directory after creating a directory for persistent queue.
2025-08-30 01:47:10 +02:00
Aliaksandr Valialkin
3b656147ef lib/backup/fsremote/fsremote.go: remove unneeded fsync for the hard-linked file
The source file contents should already be fsynced to disk before creating a hard link,
so there is no point in calling fsync() on the created hard link.
2025-08-30 01:44:21 +02:00
Aliaksandr Valialkin
3c4004673e docs/victoriametrics/sd_configs.md: fix internal links to different Kubernetes service discovery roles
This is a follow-up for the commit 51aebcd061
2025-08-29 16:26:31 +02:00
Aliaksandr Valialkin
45c9f31987 docs/victoriametrics/Articles.md: add https://amir-shams.medium.com/why-victoriametrics-a-practical-guide-to-scalable-and-faster-monitoring-than-prometheus-54ef21f10465 2025-08-29 16:24:24 +02:00
f41gh7
37013d36c0 follow-up after 24aef8ea90 fixes single-node branch build 2025-08-29 14:45:51 +02:00
f41gh7
c9b3088c9c CHANGELOG.md: cut v1.125.0 release 2025-08-29 14:36:08 +02:00
Charles-Antoine Mathieu
24aef8ea90 app/vmselect/graphite: enforce search.maxQueryLen for Graphite queries
This commit ensures that the -search.maxQueryLen flag applies to Graphite
queries, matching the behavior already present for Prometheus queries.
Previously, Graphite queries could bypass this limit, creating an
inconsistency and a potential vector for resource exhaustion.

Key changes:

Added getMaxQueryLen() to access the global query length limit.
Enforced query length validation in execExpr() for Graphite queries.
Added comprehensive tests for the new validation logic and edge cases.
Error messages are consistent with Prometheus query validation.
The default limit is 16KB (configurable via -search.maxQueryLen).
Setting the limit to 0 disables validation.
This change closes the gap where Graphite queries could exceed
configured length limits, providing consistent protection against
excessively long queries across both query APIs.

Follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9534
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9600
2025-08-29 14:29:37 +02:00
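The query length validation described in the commit above can be sketched as a small checker. The `-search.maxQueryLen` flag name and the "0 disables validation" semantics come from the commit message; the function itself is a simplified illustration, not the actual VictoriaMetrics code.

```go
package main

import "fmt"

// checkQueryLen rejects queries longer than maxQueryLen bytes.
// A limit of 0 disables validation, matching -search.maxQueryLen semantics.
func checkQueryLen(query string, maxQueryLen int) error {
	if maxQueryLen > 0 && len(query) > maxQueryLen {
		return fmt.Errorf("too long query; got %d bytes; mustn't exceed -search.maxQueryLen=%d bytes",
			len(query), maxQueryLen)
	}
	return nil
}

func main() {
	const limit = 16 * 1024 // the default 16KB limit
	fmt.Println(checkQueryLen("sumSeries(app.*.requests)", limit) == nil)
	long := string(make([]byte, limit+1))
	fmt.Println(checkQueryLen(long, limit) == nil)
	fmt.Println(checkQueryLen(long, 0) == nil) // limit 0 disables validation
}
```

Applying the same checker in both the Prometheus and Graphite handlers is what keeps the error messages and limits consistent across the two query APIs.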
f41gh7
e540e5e381 make vmui-update 2025-08-29 14:16:58 +02:00
Aliaksandr Valialkin
51aebcd061 docs/victoriametrics/sd_configs.md: add titles per every target role in service discovery configs
This allows referring to per-role docs via direct links to the corresponding sub-chapters with the given titles
2025-08-29 13:28:47 +02:00
Artem Fetishev
df7b752c7a lib/storage: fix double counting in vm_deleted_metrics_total
The vm_deleted_metrics_total metric value represents the number of
metricIDs stored in deletedMetricIDs cache. This cache lives at the
storage level and stores the deleted metrics from both prev and curr
idbs. However, the metric is populated at the idb level. Since there are
always 2 idbs (prev and curr), the value is populated twice. Hence the
doubled value of the metric.

The fix is to populate the metric value at the storage level.

Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9602
2025-08-29 11:18:07 +02:00
Andrii Chubatiuk
6f74b139cc app/vmui: craft UI configuration on backend instead of using /flags endpoint and static config.json file
- load and parse the static `/vmui/config.json`, modify it according to
runtime values, and use it as a replacement for the static config.json
- stop using the `/flags` endpoint for checking features that should be
enabled in VMUI

 Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9635
2025-08-29 10:44:38 +02:00
Andrii Chubatiuk
e49609cbc2 app/vmui: removed home page hack
`router.home` represents the `/` path, which is the same for all UI apps,
but the content and title for the root path differ depending on the
application type. Added a `getDefaultOptions` function, which returns the
proper home route configuration depending on the application type and
allows removing the renamings in the respective layouts

 Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9641
2025-08-29 10:25:55 +02:00
Hui Wang
2e655a91bc lib/httpserver: properly issue automatic TLS certificates
The bug was introduced in commit 93ad502d6dcb4724e8ec40a4a0351b0316853af0

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/pull/930
2025-08-29 10:08:51 +02:00
Max Kotliar
1e927b2e53 lib/prommetadata: Extract -enableMetadata flag to separate package, avoid pulling in promscrape discovery flags into vminsert
The commit
25cd5637bc
introduced the `-enableMetadata` flag and the
`promscrape.IsMetadataEnabled()` function, which is now used in multiple
places, including the `app/vminsert/prometheusimport` [request
handler](b24b76ff08/app/vminsert/prometheusimport/request_handler.go (L36)).

Because of the use of the `promscrape` package, vminsert registered all
`-promscrape.*` service discovery flags, which were not relevant for
`vminsert`.

This change moves the metadata flag logic into a dedicated package,
preventing vminsert from unintentionally loading unrelated promscrape
flags.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9631
2025-08-29 10:07:16 +02:00
Hui Wang
21963a1cad fix vmcluster docker-compose example (#9643)
1. fix vmcluster docker-compose example: vminsert scrape job and vmagent
remote write authorization.
2. upgrade grafana to v12.1.1
2025-08-29 14:36:46 +08:00
hagen1778
87b291debe deployment/docker: replace single-node image on transparent background version
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-28 20:17:13 +02:00
hagen1778
cce1cdcb6d deployment/docker: strip victorialogs images from excalidraw sources
VictoriaLogs excalidraw images should be stored in VictoriaLogs repo

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-28 20:12:19 +02:00
hagen1778
03e003c828 deployment/docker: use light and dark images for github markdown for cluster images
This is an attempt to adjust image styles to GitHub themes, because
existing images with a transparent background become unreadable on the dark theme.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-28 20:08:19 +02:00
hagen1778
ad9d11ba3f deployment/docker: use light and dark images for github markdown
This is an attempt to adjust image styles to GitHub themes, because
existing images with a transparent background become unreadable on the dark theme.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-28 20:05:42 +02:00
hagen1778
5c2ed99dab deployment/docker: rm victorialogs images
The vlogs images were moved to the VictoriaLogs GitHub repo
and aren't needed here anymore.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-28 20:04:24 +02:00
f41gh7
eaec80b7f3 follow-up after 76eb654e7e
mention the change in the changelog
2025-08-28 17:08:31 +02:00
Oron Sharabi
d6ef8a807b lib/storage: improve searchLabels and searchLabelValues performance
When the labels API receives a `match` filter on the `__name__` key alone,
it hits the max series limit for a high-cardinality metric name.
Instead, we can skip looking up by `metricIDs` and fall back to an inverted
index scan with a `composite key`, since we only have a `__name__` and
a label name.

 Common requests for optimisations are:
1) /api/v1/labels?match=up or /api/v1/labels?extra_filters=up
2) /api/v1/label/job/values?match=up or /api/v1/labels?extra_filters=up

 It's widely used by Grafana variables.

 Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9489
2025-08-28 17:08:25 +02:00
f41gh7
c0318a84f0 deployment/docker: switch from musl to glibc
This should remove the vertical scalability limit for data ingestion in VictoriaMetrics running on machines with a large number of CPU cores.

Related issue: https://github.com/VictoriaMetrics/VictoriaLogs/issues/517
2025-08-28 16:25:18 +02:00
Artem Fetishev
5a056321af lib/uint64set: Optimize subtract operation
a.Subtract(b) performance degrades as b becomes bigger than a. For
example, if len(b2) == 10×len(b1), then time(a.Subtract(b2)) == 10×
time(a.Subtract(b1)).

A quick fix is to iterate over a's elements when len(b) > len(a). Iterating
over a's elements while deleting at the same time should be safe, since no
elements are actually deleted (i.e. no memory is freed); deletion here
means setting the corresponding bit from 1 to 0.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9602
2025-08-28 16:22:43 +02:00
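The idea behind the Subtract fix above can be sketched with a toy map-based set. This is a simplification: the real uint64set stores buckets of bitmaps and "deletes" by clearing bits, but the cost argument (iterate over the smaller side) is the same.

```go
package main

import "fmt"

// Set is a toy uint64 set used to illustrate the subtract optimization.
type Set map[uint64]struct{}

// Subtract removes all elements of b from a. Iterating over the smaller
// side bounds the cost by O(min(len(a), len(b))) instead of O(len(b)).
func (a Set) Subtract(b Set) {
	if len(a) <= len(b) {
		// Deleting from a map while ranging over it is safe in Go.
		for x := range a {
			if _, ok := b[x]; ok {
				delete(a, x)
			}
		}
		return
	}
	for x := range b {
		delete(a, x)
	}
}

func main() {
	a := Set{1: {}, 2: {}, 3: {}}
	b := Set{2: {}, 3: {}, 4: {}, 5: {}}
	a.Subtract(b)
	fmt.Println(len(a)) // only element 1 remains
}
```

With this change, growing b past len(a) no longer makes Subtract slower, since the loop switches to iterating over a.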
Max Kotliar
686289c02b lib/flagutil: fix flag description. 2025-08-27 20:08:24 +03:00
Max Kotliar
9ae10247bb Revert "docs: sync documented flags with binaries"
This reverts commit 7c0c8cc702.
2025-08-27 19:10:31 +03:00
Aliaksandr Valialkin
06ce3f1496 go.mod: update github.com/valyala/gozstd from v1.22.0 to v1.23.2 2025-08-27 14:28:44 +02:00
Artem Fetishev
d0690ba15f benchmarks: support for all query types in TSBS (#9630)
### Describe Your Changes

Add the support of all standard TSDB query types that can be executed
against VictoriaMetrics. `double-groupby-all` is commented out as it
attempts to retrieve all 1B samples and fails. While this can be fixed
by setting the `-search.maxSamplesPerQuery` this query is left disabled
anyway because it will consume way too much memory and cpu time.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-08-27 13:49:35 +02:00
Andrii Chubatiuk
483e00ffb9 vmui: replace VMAlert proxy with Alerting tab in VMUI (#8989)
### Describe Your Changes

Rules page header + content
<img width="1235" height="520" alt="image"
src="https://github.com/user-attachments/assets/bb0c5818-c44a-46e6-bc47-e6718be34016"
/>
Expanded rule without alert
<img width="1418" alt="image"
src="https://github.com/user-attachments/assets/ae0b265f-24fe-4549-8913-b1be8e7c2862"
/>
Expanded rule with alert
<img width="1418" alt="image"
src="https://github.com/user-attachments/assets/8a138403-0712-4de2-bfa5-467da3a979dd"
/>
Notifiers page
<img width="1419" alt="image"
src="https://github.com/user-attachments/assets/557c2831-e960-44ec-9b93-f1ebfeb1fbb0"
/>

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8330
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6091
fixes https://github.com/VictoriaMetrics/VictoriaLogs/issues/90

VMUI:
- Added `Alerting -> Rules` and `Alerting -> Notifiers` pages for
VictoriaMetrics
- Support includeAll option in Select component

VMAlert:
- added `/api/v1/group`, useful to get information about a certain group
- added `lastError` to `/api/v1/notifiers` for each target to see
information about failed notifiers

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-08-27 13:48:56 +02:00
Artem Fetishev
06f969a4a7 lib/storage: Follow-up for 9517f5cf1 - use 100k series in all benchmarks, fix benchmark names
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-08-27 11:53:43 +02:00
Artem Fetishev
9517f5cf1a lib/storage: new storage search benchmarks (#9620)
### Describe Your Changes

New benchmarks for storage search (data and index):
- Use the same dataset that accounts for prev and curr indexDBs and
deleted series
- The code is more structured
- Account for various numbers of series in response including higher
numbers (>10k) as this appears to be a quite common use case.

These benchmarks were used for investigating the #9602 performance issue and
helped discover that prefetching metric names needed to be restored
in #9619.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-08-27 11:19:29 +02:00
Alexander Frolov
e62e0685dc vmctl: inconsistent vm-native logs (#9607)
### Describe Your Changes

Some messages were written to `stdout` using `fmt.Printf` and
`fmt.Println`, while the other messages like import statistics were
written to `stderr` through the `log` package.

This led to ordering problems where the `Import finished!` +
`VictoriaMetrics importer stats` messages, which are expected to be the last
messages, appeared before the `Continue import process with filter`
messages, creating confusing output for users.

```
2025/08/20 13:07:26 Import finished!
2025/08/20 13:07:26 VictoriaMetrics importer stats:
  time spent while importing: 20h49m10.8497184s;
  total bytes: 277.1 GB;
  bytes/s: 3.7 MB;
  requests: 7978614;
  requests retries: 0;
2025/08/20 13:07:26 Total time: 20h49m10.851006088s
Continue import process with filter
        filter: match[]={__name__!=""}
        start: 2025-08-08T00:00:00Z
        end: 2025-08-15T00:00:00Z:
Continue import process with filter
        filter: match[]={__name__!=""}
        start: 2025-08-15T00:00:00Z
        end: 2025-08-19T16:18:15Z:
```


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-26 18:53:59 +03:00
Max Kotliar
df92e617db Revert "app/{vminsert,vmagent}: added flags for periodical relabel and stream aggregation configs check (#9598)"
This reverts commit 07291c1d62 and partly
7c0c8cc702.

The reasons explained in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9598#issuecomment-3223766551
2025-08-26 14:42:35 +03:00
Max Kotliar
7c0c8cc702 docs: sync documented flags with binaries 2025-08-26 10:53:43 +03:00
Andrii Chubatiuk
07291c1d62 app/{vminsert,vmagent}: added flags for periodical relabel and stream aggregation configs check (#9598)
related issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9590

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-08-26 09:46:44 +03:00
Alexander Frolov
7c0015b836 app/vmagent/remotewrite: restore protocol downgrade logic (#9621)
### Describe Your Changes

It seems db39f045e1 accidentally reverted
#9419 changes.
```patch
--- a/app/vmagent/remotewrite/client.go
+++ b/app/vmagent/remotewrite/client.go
@@ -448,7 +448,8 @@ again:
 	}
 
 	metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_requests_total{url=%q, status_code="%d"}`, c.sanitizedURL, statusCode)).Inc()
-	if statusCode == 409 {
+	switch statusCode {
+	case 409:
 		logBlockRejected(block, c.sanitizedURL, resp)
 
 		// Just drop block on 409 status code like Prometheus does.
@@ -461,7 +462,13 @@ again:
 		// - Remote Write v2 specification explicitly specifies a `415 Unsupported Media Type` for unsupported encodings.
 		// - Real-world implementations of v1 use both 400 and 415 status codes.
 		// See more in research: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786918054
-	} else if statusCode == 415 || statusCode == 400 {
+	case 415, 400:
+		if c.canDowngradeVMProto.Swap(false) {
+			logger.Infof("received unsupported media type or bad request from remote storage at %q. Downgrading protocol from VictoriaMetrics to Prometheus remote write for all future requests. "+
+				"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
+			c.useVMProto.Store(false)
+		}
+
 		if encoding.IsZstd(block) {
 			logger.Infof("received unsupported media type or bad request from remote storage at %q. Re-packing the block to Prometheus remote write and retrying."+
 				"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
```

cc @makasim

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-26 09:17:53 +03:00
Hui Wang
06e52a99fd lib/prompb: replace fields hardcoded hex values with their correspond… (#9617)
…ing bitwise operations

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9608
2025-08-26 09:03:36 +03:00
f41gh7
f5840951a4 app/vmagent: pubsub properly handle ingestion error
Previously, if the pushBlockPubSub function returned an error, vmagent
stopped the remote write worker thread assigned to it. The expected behavior
for this scenario is to retry on error inside the pushBlockPubSub function.
It must return only on vmagent shutdown.

 This commit properly handles this error and prevents ingestion from
stopping.
2025-08-24 21:37:30 +02:00
Aliaksandr Valialkin
9ca5a8d0f4 lib/netutil: return tls.Conn from TCPListener.Accept for TLS connections
This is needed because the servers, which may use the TCPListener, such as net/http.Server,
expect to get tls.Conn for TLS connections in order to properly fill various fields such as net/http.Request.TLS.
If the listener returns some other net.Conn, then these fields aren't filled properly,
and this may prevent proper mTLS-based authorization and request routing
such as https://docs.victoriametrics.com/victoriametrics/vmauth/#mtls-based-request-routing

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/29
2025-08-22 20:25:40 +02:00
Aliaksandr Valialkin
894b22590d docs/victoriametrics/enterprise.md: mention VictoriaLogs enterprise
Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/120
2025-08-22 18:31:51 +02:00
hagen1778
f85fd161e4 docs: reword -vmalert.proxyURL usage in vmalert
Make it clear that `-vmalert.proxyURL` needs to be applied to
VM single or vmselect.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-22 09:49:13 +02:00
Max Kotliar
7d552dbd9a metricsql: improve timestamp function compatibility with Prometheus when used with sub-expressions (#9603)
### Describe Your Changes

Fixes
[#9527](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9527)
Related PR: https://github.com/VictoriaMetrics/metricsql/pull/55

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-21 17:38:12 +03:00
Max Kotliar
795c3deaee ib/appmetrics: revert accidental change 2025-08-21 17:30:12 +03:00
Max Kotliar
cb44353a36 docs/changelog: add update note 2025-08-21 17:29:32 +03:00
Andrii Chubatiuk
7e05200c60 deployment/rules: set proper job filters for rules (#9587)
### Describe Your Changes

related issue https://github.com/VictoriaMetrics/helm-charts/issues/2350

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-21 15:26:36 +02:00
hagen1778
a2f033ce6c docs: refresh vmui description
* add missing features
* re-organize text without breaking links to improve clarity

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-21 15:25:49 +02:00
Artur Minchukou
78b217d70c app/vmui: add export functionality for Query and RawQuery tabs with CSV/JSON support (#9463)
### Describe Your Changes

Related issue: #9332 
- add export functionality for Query and RawQuery tabs with CSV/JSON
support;
 - replace unused icons and update `DebugIcon` usage in `DownloadReport`

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-08-21 14:37:27 +02:00
Aliaksandr Valialkin
c9b23de9ce lib/httpserver: add missing whitespace after the dot in the description for the -tlsAutocertEmail command-line flag
This is a follow-up for 1d80e8f860
2025-08-21 11:02:43 +02:00
Andrii Chubatiuk
16a75129be docs: exclude files from rendering by hugo (#9591)
required for https://github.com/VictoriaMetrics/vmdocs/issues/164

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-20 12:04:06 +03:00
Nikolay
68bdb5e4d3 go.mod: unpin cloud.google.com/go/storage
Add the `disable_grpc_modules` build tag for vmbackup, vmrestore and
vmbackupmanager. The binary size increases by only 3MB with it. That's an
acceptable trade-off for security and feature updates.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8008
2025-08-19 12:21:54 +02:00
Fred Navruzov
4360d10962 docs/vmanomaly: release v1.25.3 (#9597)
### Describe Your Changes

Update docs to vmanomaly release v1.25.3

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-19 10:24:48 +04:00
Roman Khavronenko
ce9c868f59 benchmarks: update makefile commands
* check if the built binary is present for `make tsbs-build`. Before, if
the build failed, the command stopped working.
* make ENV variables configurable from the command line, so `TSBS_STEP=15s
make tsbs-generate-data` respects the configured step.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-18 22:47:16 +02:00
Arie Heinrich
212ce1baf0 Spelling and Markdown Standards
Another batch of documentation improvements

Fix Spelling in:
- Comments in code
- Displayed strings

One change was in a JSON file used for the anomaly dashboard in Docker;
no other code was changed.

Some Markdown changes, related to standards:
- URLs
- List numbering
- Empty spaces at the end of a line
2025-08-18 22:46:34 +02:00
Corporte Gadfly
1a091e5831 fix typo in sentence 2025-08-18 22:41:47 +02:00
Zakhar Bessarab
bac186fc65 deployment: update image tags to the latest release
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-08-18 16:08:35 +04:00
Zakhar Bessarab
15ce9e5e49 docs: update references to the latest releases
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-08-18 16:08:12 +04:00
Zakhar Bessarab
2c1596ea84 docs/changelog: backport LTS release notes
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-08-18 15:36:43 +04:00
f41gh7
21d4f844ab synctest: replace deprecated Run call with Test
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-08-17 21:00:17 +02:00
f41gh7
da0002ce66 Makefile: upgrade golangci-lint from 2.2.1 to 2.4.0
Changelog https://golangci-lint.run/docs/product/changelog/#240
2025-08-17 20:36:14 +02:00
f41gh7
f35b9ed36d deployment/docker: update Go builder from 1.24.6 to 1.25.0
Changes https://tip.golang.org/doc/go1.25
2025-08-17 20:31:30 +02:00
Zakhar Bessarab
b4dc67cba6 docs/CHANGELOG.md: cut v1.124.0
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-08-15 15:00:46 +04:00
Zakhar Bessarab
70afdd0285 docs: update version tooltips
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-08-15 14:51:10 +04:00
Zakhar Bessarab
51efd2c32b app/vmselect: run make vmui-update
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-08-15 14:46:49 +04:00
Max Kotliar
1e208a8c79 .github: add copilot instruction (#9586)
### Describe Your Changes

Trying to teach Copilot correct changelog changes, such as a misplaced
entry
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9306#issuecomment-3185126897

I couldn’t test this properly because Copilot doesn’t pick up
instructions from the PR itself. They must be on the master branch. The
instruction needs to be merged first, then tested. Please review.

If it doesn’t work, I’ll remove it.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-14 19:50:55 +03:00
Andrei Baidarov
e49027df8f app/vmagent: properly apply dropOnOverload condition
Previously, vmagent treated the following configurations differently:

1) ./bin/vmagent --remoteWrite.url=url-0 --remoteWrite.url=url-1 --remoteWrite.disableOndiskQueue

 and

2)./bin/vmagent --remoteWrite.url=url-0 --remoteWrite.url=url-1 --remoteWrite.disableOndiskQueue=true,true

In the first case, it could produce duplicates and block ingestion requests if one of the remote write targets was not accessible.
In the second case, it implicitly set --remoteWrite.dropSamplesOnOverload to true and silently dropped samples for the inaccessible target.

 This commit treats these configurations the same and silently drops samples in both cases to mitigate possible duplicates.

 It is expected that vmagent provides delivery guarantees only if it has a single remote write target and the flag remoteWrite.disableOndiskQueue=true is set.


Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9565
2025-08-14 16:11:08 +02:00
Andrii Chubatiuk
a518a4a904 lib/backup: added checksum algorithm for all S3 PutObject requests (#9549)
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9532
set the checksum algorithm to SHA256; not sure whether this property should be
configurable

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-08-14 17:50:03 +04:00
Artem Fetishev
ad46fce7d4 lib/storage: fix searchMetricName() (#9582)
While working on #9431, two bugs related to indexDB.searchMetricName()
were introduced:

1. During the search, the index records are unconditionally placed in the
sparse index
2. If the search touches index records in both the prev and curr indexDBs,
metricIDs can be unintentionally removed by the
`wasMetricIDMissingBefore()` logic

Additionally, the PR moves searchMetricName from indexDB and Search
to Storage, which simplifies the code and makes it possible to reuse the
function as-is in enterprise code.

Follow up for #9431.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-08-14 10:10:21 +02:00
Max Kotliar
7cc13ee1cc docs/changelog: move metadata changelog record to tip
Follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9306
2025-08-13 21:58:49 +03:00
dependabot[bot]
74fcd10d2e build(deps): bump actions/checkout from 4 to 5 (#9574)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to
5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
<li>Prepare v5.0.0 release by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2238">actions/checkout#2238</a></li>
</ul>
<h2>⚠️ Minimum Compatible Runner Version</h2>
<p><strong>v2.327.1</strong><br />
<a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Release
Notes</a></p>
<p>Make sure your runner is updated to this version or newer to use this
release.</p>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v5.0.0">https://github.com/actions/checkout/compare/v4...v5.0.0</a></p>
<h2>v4.3.0</h2>
<h2>What's Changed</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
<li>Prepare release v4.3.0 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2237">actions/checkout#2237</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/motss"><code>@​motss</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li><a href="https://github.com/mouismail"><code>@​mouismail</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li><a href="https://github.com/benwells"><code>@​benwells</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li><a href="https://github.com/nebuk89"><code>@​nebuk89</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v4.3.0">https://github.com/actions/checkout/compare/v4...v4.3.0</a></p>
<h2>v4.2.2</h2>
<h2>What's Changed</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4.2.1...v4.2.2">https://github.com/actions/checkout/compare/v4.2.1...v4.2.2</a></p>
<h2>v4.2.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Jcambass"><code>@​Jcambass</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/1919">actions/checkout#1919</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4.2.0...v4.2.1">https://github.com/actions/checkout/compare/v4.2.0...v4.2.1</a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>V5.0.0</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
</ul>
<h2>V4.3.0</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<h2>v4.2.2</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<h2>v4.2.1</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>v4.2.0</h2>
<ul>
<li>Add Ref and Commit outputs by <a
href="https://github.com/lucacome"><code>@​lucacome</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1180">actions/checkout#1180</a></li>
<li>Dependency updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>- <a
href="https://redirect.github.com/actions/checkout/pull/1777">actions/checkout#1777</a>,
<a
href="https://redirect.github.com/actions/checkout/pull/1872">actions/checkout#1872</a></li>
</ul>
<h2>v4.1.7</h2>
<ul>
<li>Bump the minor-npm-dependencies group across 1 directory with 4
updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1739">actions/checkout#1739</a></li>
<li>Bump actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1697">actions/checkout#1697</a></li>
<li>Check out other refs/* by commit by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1774">actions/checkout#1774</a></li>
<li>Pin actions/checkout's own workflows to a known, good, stable
version. by <a href="https://github.com/jww3"><code>@​jww3</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1776">actions/checkout#1776</a></li>
</ul>
<h2>v4.1.6</h2>
<ul>
<li>Check platform to set archive extension appropriately by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1732">actions/checkout#1732</a></li>
</ul>
<h2>v4.1.5</h2>
<ul>
<li>Update NPM dependencies by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1703">actions/checkout#1703</a></li>
<li>Bump github/codeql-action from 2 to 3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1694">actions/checkout#1694</a></li>
<li>Bump actions/setup-node from 1 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1696">actions/checkout#1696</a></li>
<li>Bump actions/upload-artifact from 2 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1695">actions/checkout#1695</a></li>
<li>README: Suggest <code>user.email</code> to be
<code>41898282+github-actions[bot]@users.noreply.github.com</code> by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1707">actions/checkout#1707</a></li>
</ul>
<h2>v4.1.4</h2>
<ul>
<li>Disable <code>extensions.worktreeConfig</code> when disabling
<code>sparse-checkout</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1692">actions/checkout#1692</a></li>
<li>Add dependabot config by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1688">actions/checkout#1688</a></li>
<li>Bump the minor-actions-dependencies group with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1693">actions/checkout#1693</a></li>
<li>Bump word-wrap from 1.2.3 to 1.2.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1643">actions/checkout#1643</a></li>
</ul>
<h2>v4.1.3</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="08c6903cd8"><code>08c6903</code></a>
Prepare v5.0.0 release (<a
href="https://redirect.github.com/actions/checkout/issues/2238">#2238</a>)</li>
<li><a
href="9f265659d3"><code>9f26565</code></a>
Update actions checkout to use node 24 (<a
href="https://redirect.github.com/actions/checkout/issues/2226">#2226</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/checkout/compare/v4...v5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=4&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-13 19:01:59 +03:00
Zakhar Bessarab
59007cda51 docs: update examples to use proper license flags (#9579)
`-eula` was deprecated and made no-op in v1.123.0, so examples with
`-eula` will no longer work.
Replace those with proper license configuration.

While at it, remove license flags from vmbackupmanager CLI commands as
it is not required when using CLI.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-08-13 19:12:19 +04:00
hagen1778
5869a39e7b metricsql: return a proper error message for scalar arguments
Follow-up for 8b92af9d45

The initial PR contained the change for the getScalar function - see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9548
But the change was dropped during an incorrect rebase.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-13 13:22:13 +02:00
Max Kotliar
c3c802a61c apptest: Fix flaky TestSingleVMAuthRouterWithAuth (#9575)
### Describe Your Changes

Do not check vmauth_config_last_reload_success_timestamp_seconds since
it may contain the timestamp < time.Now() due to how lib/fasttime works.

Instead, compare the number of config reloads.

follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9369 and
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9572

Also, split the config update and reload into two separate functions. 

master:
```
$gotest -race ./apptest/tests/ -run=TestSingleVMAuthRouterWithInternalAddr -count=40
ok  	github.com/VictoriaMetrics/VictoriaMetrics/apptest/tests	90.176s
```

pr:
```
$gotest -race ./apptest/tests/ -run=TestSingleVMAuthRouterWithInternalAddr -count=40
ok  	github.com/VictoriaMetrics/VictoriaMetrics/apptest/tests	46.130s
```

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-13 13:05:15 +02:00
Hui Wang
8b92af9d45 metricsql: return a proper error message when the function argument is expected to be a string (#9548)

In MetricsQL, functions like
[count_values](https://docs.victoriametrics.com/victoriametrics/metricsql/#count_values),
[label_replace](https://docs.victoriametrics.com/victoriametrics/metricsql/#label_replace)
expect string arguments, and `getString()` checks if the result from a
string expr query.
Previously, error messages were not intuitive, now
`label_replace("","","","",up)` and `label_replace("","","","",1)`
should return clearer error message.
2025-08-13 13:05:00 +02:00
Hui Wang
e313874d01 vmalert: fix the {{ $activeAt }} variable value in annotation templating when the alert has already triggered (#9576)

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9543, 
bug was introduced in
[v1.101.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.101.0)
with
a84491324d.
2025-08-13 12:59:00 +02:00
Hui Wang
58a4e48901 vmalert: fix potential data race and missing firing states when replaying alerting rule with `-replay.ruleEvaluationConcurrency>1` (#9559)
2025-08-13 12:56:27 +02:00
Andrei Baidarov
16d75ab0bd lib/storage: remove extDB from indexDB, search indexDBs independently (#9431)
Removing extDB from indexDB makes prev, curr, and next indexDBs independent.
I.e. the search is performed independently in prev and curr, the results are
then merged.
    
Additionally, since no search is now performed in extDB:
- all indexDB search methods now return the original maps used for populating
  the result, without intermediate conversion to slices.
- `NoExtDB` suffix has been removed from method names
  
This has been extracted from #8134.
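In rough Go terms, the independent-search-then-merge step might look like this (hypothetical names and result types, not the actual lib/storage code):

```go
package main

import "fmt"

// mergeMetricIDs sketches the new search flow: prev and curr indexDBs are
// queried independently, and their result maps are merged directly, with
// no extDB chaining and no intermediate conversion to slices.
func mergeMetricIDs(prev, curr map[uint64]struct{}) map[uint64]struct{} {
	merged := make(map[uint64]struct{}, len(prev)+len(curr))
	for id := range prev {
		merged[id] = struct{}{}
	}
	for id := range curr {
		merged[id] = struct{}{}
	}
	return merged
}

func main() {
	prev := map[uint64]struct{}{1: {}, 2: {}}
	curr := map[uint64]struct{}{2: {}, 3: {}}
	fmt.Println(len(mergeMetricIDs(prev, curr))) // 3
}
```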
    
Signed-off-by: Andrei Baidarov <baidarov@nebius.com>
Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
2025-08-13 07:36:09 +02:00
Dmytro Kozlov
fe0afc3fea benchmark: update date calculation for the benchmark script (#9563)
### Describe Your Changes

Updated date calculation for the TSBS benchmark. Previously it required
installing `coreutils` when running the benchmarks on macOS; now nothing
extra needs to be installed.
`make tsbs` should work correctly on both Linux and macOS.

Checked on both systems, it works correctly:
1. MacOS 
<img width="1292" height="372" alt="Screenshot 2025-08-08 at 11 45 03"
src="https://github.com/user-attachments/assets/609a797d-c54a-40d3-abe2-270c173ff9c3"
/>

2. Linux
<img width="1440" height="283" alt="Screenshot 2025-08-08 at 11 46 33"
src="https://github.com/user-attachments/assets/e9f094a1-40cc-4cd2-afd5-55c5678c041f"
/>

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-12 16:43:24 +02:00
Roman Khavronenko
f99e49c15d dashboards/victoriametrics-cluster: show max 99th percentile on vmselect panels (#9555)
Before, we showed the summarized 99th percentile for query complexity across
all available instances. This doesn't make much sense, as it doesn't
answer the following questions:
2. What are the most expensive queries

The change is to use `max` instead of `sum`, to show only the outliers, i.e. the
heaviest served queries. The update should help answer the questions
above.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-12 16:43:07 +02:00
Andrii Chubatiuk
1ba994970b metricsql: fixed gaps in histogram_quantile calculation, when first bucket contains NaNs (#9547)
Fixes the case when the `histogram_quantile` result contains gaps occurring
in the same time range where NaNs are present in the first bucket of a
histogram.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-08-12 16:41:58 +02:00
Hui Wang
25cd5637bc app/vmagent: add time series metadata support
By default, `vmagent` doesn't parse
[metadata](https://github.com/prometheus/docs/blob/main/docs/instrumenting/exposition_formats.md)
when scraping targets, and drops metadata received via [Prometheus remote write v1](https://prometheus.io/docs/specs/prw/remote_write_spec/) or the
[OpenTelemetry protocol](https://github.com/open-telemetry/opentelemetry-proto/blob/v1.7.0/opentelemetry/proto/metrics/v1/metrics.proto).

To enable parsing metadata when scraping and sending metadata to the
configured `-remoteWrite.url`, set `-enableMetadata=true`.

Besides native metadata fields, vmagent also adds tenant info to
metadata when `-enableMultitenantHandlers` is enabled and data is sent
via the multitenant endpoints (/insert/<accountID>/<suffix>), allowing
storing metadata under different tenants in VictoriaMetrics cluster.
However, if `vm_account_id` or `vm_project_id` labels are added directly
in metric labels and sent to the [vminsert multitenant endpoints](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#multitenancy-via-labels),
tenant info won't be attached in the metadata, and it will be stored in
the default tenant of VictoriaMetrics cluster.

part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2974
2025-08-12 15:19:50 +02:00
Max Kotliar
00c7533095 apptest: fix flaky single vmauth router with auth test
Fix flaky integration test `TestSingleVMAuthRouterWithAuth`.
The flakiness is caused by the
`vmauth_config_last_reload_success_timestamp_seconds` metric, which
reports time with second-level precision.
Update the test to account for this when verifying that the config
reloads correctly.
2025-08-12 11:35:58 +02:00
Nikolay
f668e5d9c6 lib/storage: cardinality limiter prevent performance degradation on limit hit
Previously, if the limit was reached for the cardinality limiter, vmstorage
started to perform index lookups for any series exceeding the limit. Since
the storage must skip index creation for such series, it's not possible to
cache the result. This led to the opposite effect of the cardinality limiter -
instead of reducing resource usage, it increased it.

 This commit changes the cardinality limit calculation from metricID to the
hash of the raw metricName. It could slightly increase CPU usage if the
cardinality limiter is configured, since the hash must be calculated for
each metricName row. But it mitigates excessive CPU and memory usage on
limit hit.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9554
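A minimal sketch of the described approach, assuming a simple map-based limiter and stdlib FNV hashing (the real implementation's hash function and data structures may differ):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// seriesLimiter is a hypothetical cardinality limiter keyed by a hash of
// the raw metricName instead of an internal metricID, so the check works
// even for series whose index entries are intentionally never created.
type seriesLimiter struct {
	maxSeries int
	seen      map[uint64]struct{}
}

func newLimiter(maxSeries int) *seriesLimiter {
	return &seriesLimiter{maxSeries: maxSeries, seen: make(map[uint64]struct{})}
}

// Add returns false when accepting metricName would exceed the limit.
func (l *seriesLimiter) Add(metricName []byte) bool {
	h := fnv.New64a()
	h.Write(metricName)
	key := h.Sum64()
	if _, ok := l.seen[key]; ok {
		return true // already counted; no index lookup needed
	}
	if len(l.seen) >= l.maxSeries {
		return false
	}
	l.seen[key] = struct{}{}
	return true
}

func main() {
	l := newLimiter(2)
	fmt.Println(l.Add([]byte(`cpu_usage{host="a"}`))) // true
	fmt.Println(l.Add([]byte(`cpu_usage{host="b"}`))) // true
	fmt.Println(l.Add([]byte(`cpu_usage{host="a"}`))) // true (already seen)
	fmt.Println(l.Add([]byte(`cpu_usage{host="c"}`))) // false (limit hit)
}
```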
2025-08-12 11:32:31 +02:00
Nikolay
f4548a46a7 docs: add vmselect group and vmstorage node auto-discovery 2025-08-12 11:31:48 +02:00
Max Kotliar
7048de8d20 docs: add available from hint for -rpc.handshakeTimeout flag
follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9541
2025-08-12 10:12:02 +03:00
Max Kotliar
cba4b2f0df lib/handshake: set deadline for whole handshake; change deadline (1s per op to 3s whole process) (#9541)
The current one-second timeout for individual read or write operations
during the handshake phase has proven to be insufficient in some
scenarios
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9345. For
example, short-lived CPU spikes lasting a few seconds can cause
handshake failures due to the low timeout threshold.

While a small timeout may work well in environments with fast and
reliable networking, such as within a single datacenter, it becomes
problematic in more complex setups—particularly in a [multi-level
cluster
setup](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#multi-level-cluster-setup)
where the top-level vmselect may reside in a different availability zone
and work on a less reliable network.

Another issue with the per-operation timeout approach is that it allows
the total time for a handshake to accumulate significantly in the
worst-case scenario. If each operation experiences a delay just under
the timeout threshold, the entire handshake process could take up to 6s,
which accounts for 60% of `-search.maxQueueDuration` and leaves only 4s
for the actual query.

Introducing a single timeout for the entire handshake process would
provide more predictable behavior and improve usability from a
configuration standpoint. The timeout for the whole handshake op is also
easier to understand from the operator's point of view. Increasing the
timeout value and providing a configuration option for it would make the
system more resilient to transient conditions like CPU contention and
better suited for use cases involving cross-AZ communication.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9345

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-11 19:35:55 +03:00
Max Kotliar
068d5a4b07 .github/workflows: Run cross builds and tests in parallel (#9443)
### Describe Your Changes

The commit changes CI behavior:
- Run build in parallel for different os\arch
- Run unit\integration\lint in parallel
- Remove the custom Go cache step in favor of the logic provided in
`actions/setup-go`. The custom cache was used to build key based on
go.sum and makefiles. This logic is preserved.
- Introduce cache for golangci-lint. 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-11 15:55:36 +03:00
Max Kotliar
e392cbbba3 apptest: Add vmauth use proxy protocol integration test (#9556)
### Describe Your Changes

Add an integration test that verifies that vmauth works with
`-httpListenAddr.useProxyProtocol=true` enabled and the x-forwarded-for
header is propagated correctly.

Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9546

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-11 15:49:38 +03:00
Aliaksandr Valialkin
06f590ee63 lib/envtemplate: allow referring to non-existing environment variables in config files and in command-line flags
A few users reported unexpected errors when environment variables referred to other environment variables
at VictoriaMetrics startup. This resulted in the following fatal error on startup:

    cannot expand "..." env var value "...%{SOME_NON_EXISTING_ENV_VAR}..."

Fix this by leaving placeholders with non-existing env vars as is.
This improves the general usability of environment variables by VictoriaMetrics components
inside command-line flags and inside config files. User can easily notice placeholders with non-existing
environment variables by looking at the corresponding command-line flag or at the corresponding config option value.

While at it, replace duplicate docs about environment variables at the https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#environment-variables
with the link to the same docs at https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#environment-variables .

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3999
2025-08-09 21:05:13 +02:00
Aliaksandr Valialkin
5eef1d66e0 go.sum: run go mod tidy after 1f2c14260c 2025-08-08 20:23:57 +02:00
Aliaksandr Valialkin
5a572387cf deployment/docker: update Go builder from Go1.24.5 to Go1.24.6
See https://github.com/golang/go/issues?q=milestone%3AGo1.24.6+label%3ACherryPickApproved
2025-08-08 20:21:57 +02:00
Charles-Antoine Mathieu
c6b165ecba app/vmselect: truncate graphite excessive pathExpression field
vmselect is experiencing memory exhaustion and OOM kills
when processing complex Graphite queries with nested functions and large
numbers of label selectors (30k+ values).

The root cause was unbounded growth of the pathExpression field.

 This commit adds configurable truncation for Graphite pathExpression fields to
prevent memory exhaustion while preserving query functionality:

- New flag: `-search.maxGraphitePathExpressionLen=1024` (default 1024 characters)
- Safe truncation: long expressions are truncated with a "..." suffix
- Zero disables: set to 0 to disable truncation entirely
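A sketch of the truncation logic (hypothetical helper, not the actual vmselect code):

```go
package main

import (
	"fmt"
	"strings"
)

// maxGraphitePathExpressionLen mirrors the default of the
// -search.maxGraphitePathExpressionLen flag described above.
const maxGraphitePathExpressionLen = 1024

// truncatePathExpression caps pathExpression growth by appending a "..."
// suffix to over-long expressions; maxLen<=0 disables truncation entirely.
func truncatePathExpression(s string, maxLen int) string {
	if maxLen <= 0 || len(s) <= maxLen {
		return s
	}
	return s[:maxLen] + "..."
}

func main() {
	long := strings.Repeat("a", 2000)
	fmt.Println(len(truncatePathExpression(long, maxGraphitePathExpressionLen))) // prints 1027
}
```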

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9534/
2025-08-08 13:44:29 +02:00
Max Kotliar
fd928a0f5b lib/netutil: fix linter issues in proxy protocol tests 2025-08-07 14:35:35 +03:00
Nikolay
c207c32c44 lib/netutil: properly accept proxy protocol
Previously, the tcp listener performed a synchronous proxy protocol header
read during connection accept. It could significantly reduce vmauth
performance and lead to timeouts when serving http requests.

 This commit changes this logic and performs proxy protocol header
parsing during the first Read request from the connection or a RemoteAddr
method call. It significantly improves performance and reduces a possible
bottleneck in the connection accept method.

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9546/
2025-08-07 12:25:35 +02:00
f41gh7
1f2c14260c go.mod: update fastcache to v1.13.0 2025-08-06 18:29:05 +02:00
Max Kotliar
87604e6df6 lib/prompb: fix review comments after merging prompbmarshal into prompb
- Rename WriteRequestUnmarshaller to WriteRequestUnmarshaler
- Add a description to WriteRequestUnmarshaler struct

Review comments
b98e592752 (r163365472)

Follow up on
b98e592752
2025-08-06 19:23:50 +03:00
Alexander Frolov
1beb1f69d5 vmselect: properly release tmp blocks for /federate
The `/federate` endpoint handler might return early before calling
`rss.RunParallel()`, which causes temporary block files to not be closed
properly.

Related PR: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9536
2025-08-06 18:20:06 +02:00
Andrii Chubatiuk
5266bf1f3b docs: override canonical url of pages, that have multiple copies (#9550)
### Describe Your Changes

Multiple pages reference the same document via the `{% content %}`
shortcode: same content, but different canonical URLs. Added a `canonical`
parameter to override the default URL.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-06 16:13:20 +02:00
Roman Khavronenko
d4aefcecc4 docs: mention series of articles on VM internals in FAQ (#9528)
While there, mention https://victoriametrics.com/blog in the articles
section, as it seems not being mentioned anywhere.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Mathias Palmersheim <mathias@victoriametrics.com>
2025-08-06 16:11:10 +02:00
Zakhar Bessarab
93c373d55a dashboards/vmagent: fix expression for samples rate (#9530)
In case vmagent does not scrape any metrics, the left part of the expression
will be evaluated as empty, resulting in the right part being skipped.

Before:
<details>
<img width="1401" height="1080" alt="image"
src="https://github.com/user-attachments/assets/c242593f-8503-4bd2-b6a7-85c1dcc54d0f"
/>
</details> 

After:
<details>
<img width="1416" height="1128" alt="image"
src="https://github.com/user-attachments/assets/45565c28-a731-4f5d-af54-1ab3daf75778"
/>
</details>

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-08-06 16:08:02 +02:00
Hui Wang
58bc05ce56 vmalert-tool: fix panic when rule execution fails (#9540)
fix https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9526, 
bug was introduced in **v1.114.0**.

Please note, the rule execution failure should only happen if there is a
bad template or a duplicated alert (rare case); added a test case to cover
the template.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-08-06 16:02:22 +02:00
Roman Khavronenko
516a454f0a docs: update monitoring section (#9538)
* remove duplicated content between single and cluster versions
* mention recommendation to group component types by jobs in scrape
config
* link the example of scrape configs
* update wording

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-06 16:00:25 +02:00
Jamie Wiebe
9fd9de7ab4 vmui: fix typo in "returned too many series" message (#9533)
A few simple grammar changes on messages presented to the user
2025-08-06 16:00:12 +02:00
Max Kotliar
5a75b93535 docs/changelog: remove mention of latest Docker tag deprecation, clarify stable tag removal 2025-08-05 18:48:18 +03:00
f41gh7
787bf8ffed docs/cluster: follow-up after 33392e1135
Mention the new logNewSeriesAuthKey flag in the docs
2025-08-04 17:09:11 +02:00
f41gh7
f4bbb83b6a docs/changelog: add v1.110.15 and v1.122.1 changes
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-08-04 17:07:28 +02:00
f41gh7
b0409910dc docs: update LTS releases versions
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-08-04 17:05:25 +02:00
f41gh7
b421f43532 docs: mention v1.123.0 release at examples
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-08-04 17:04:21 +02:00
Aliaksandr Valialkin
847398b356 lib/fs/fs.go: added missing lock for the diskSpaceMapLock inside MustGetTotalSpace() function
This is a follow-up for 7da45924e2

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9523
Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/513
2025-08-04 10:12:14 +02:00
Aliaksandr Valialkin
53d8e99987 app/vmstorage: expose vm_total_disk_space_bytes metric, which shows disk volume size for -storageDataPath directory
This metric can be used for building alerts and graphs of the free disk space usage percentage via the following MetricsQL query:

    100 * (vm_free_disk_space_bytes / vm_total_disk_space_bytes)
2025-08-04 10:05:45 +02:00
Phuong Le
7da45924e2 lib/fs: Add total disk space retrieval (#9523)
Extends the disk space monitoring functionality by adding support for
retrieving total disk capacity in addition to free space.

Related: https://github.com/VictoriaMetrics/VictoriaLogs/issues/513

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-04 09:58:19 +02:00
Aliaksandr Valialkin
ddadfd6d58 vendor: run make vendor-update 2025-08-03 22:10:54 +02:00
Aliaksandr Valialkin
c025993e8a vendor: update github.com/VictoriaMetrics/metrics from v1.38.0 to v1.39.1 2025-08-03 22:06:19 +02:00
f41gh7
fbe5ddcc2b CHANGELOG.md: cut v1.123.0 release 2025-08-01 14:54:19 +02:00
f41gh7
6607711f45 make vmui-update 2025-08-01 14:42:56 +02:00
Zakhar Bessarab
8f6b7c2e8c apptest/tests: only flush component logs on test failure (#9491)
Update tests to only print component output in integration tests if the
test case is failing.

An example of pipeline failure:
https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/16473633711/job/46569913168?pr=9491

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-08-01 14:21:14 +02:00
Aliaksandr Valialkin
c1cde61f20 docs/victoriametrics/changelog/CHANGELOG_2024.md: typo fix: steaming -> streaming
This is a follow-up for 8a7045e206
2025-08-01 13:33:46 +02:00
hagen1778
b8e82eef72 docs: fix copy&paste typo in stream aggregation
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-01 09:19:36 +02:00
Alexander Frolov
d0045ba51d lib/encoding/zstd: use sync stream decoder (purego) (#9518)
### Describe Your Changes

By default `zstd.Reader` creates multiple goroutines to process a single
connection:
- It doesn't match cgo behavior, which works synchronously, and creates
a lot more concurrent goroutines (0.5k -> 5k on my workload)
- It results in non-zero `vm_tcpdialer_errors_total{type="read"}` errors
on vmselect because an underlying connection is closed while a goroutine
is still reading from it. The goroutine created by
`zstd.NewReader`/`zstd.Reset`

abb348e4db/lib/handshake/buffered_conn.go (L113-L120)
- vmselect (and vmagent) doesn't benefit from async mode since it has
multiple readers in use at the same time, whose number usually exceeds the
number of CPU cores

Partly related to #9218

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-31 20:40:17 +03:00
Dima Shur
8a7045e206 Fixed typo (#9524)
### Describe Your Changes

Fixed typo (steaming aggregation -> streaming aggregation)
Updated vmalert doc - there were counterintuitive links, made it clearer

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-31 17:52:00 +03:00
leiwingqueen
33392e1135 app/vmstorage: introduce /internal/log_new_series API
This commit introduces a new storage API: `/internal/log_new_series`.
It helps to dynamically debug newly created series. Changing the `-logNewSeries` value requires a storage restart,
which may introduce downtime and is not recommended for production deployments.

In addition, this commit adds the `-logNewSeriesAuthKey` flag, which protects the newly added API.

Fixes: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8879
2025-07-31 13:07:44 +02:00
Aliaksandr Valialkin
656bcb23d3 docs/victoriametrics/Articles.md: add https://itnext.io/kubernetes-monitoring-a-complete-solution-part-1-architecture-eb5b998658d5 2025-07-31 11:56:03 +02:00
Aliaksandr Valialkin
c582b3a485 docs/victoriametrics/Articles.md: add https://medium.com/@isasamor/optimizing-datadog-costs-with-victoriametrics-a-practical-step-by-step-guide-c984d32c7423 2025-07-31 11:32:56 +02:00
Aliaksandr Valialkin
b2d8fc4b97 docs/victoriametrics/Articles.md: add https://medium.com/@heliodevhub/construindo-uma-stack-de-monitoramento-escal%C3%A1vel-e-econ%C3%B4mica-na-aws-com-victoria-metrics-532d535dcfb6 2025-07-31 11:31:39 +02:00
Aliaksandr Valialkin
7c3679075e docs/victoriametrics/Articles.md: add https://davidhernandez21.github.io/posts/Victoriametrics-k8s-stack-gotchas/ 2025-07-31 10:54:05 +02:00
dstevensson
7dd9407b94 lib/promscrape/discovery/gce: add support for ipv6 in metadata labels
This change will expose any IPv6 addresses assigned to an instance under
the meta labels:
* `__meta_gce_public_ipv6` - native IPv6 address, globally routed
* `__meta_gce_internal_ipv6` - unique local address (ULA).

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9370
2025-07-31 09:31:33 +02:00
Yury Molodov
59486f0bc1 vmui: always show tenant selector if tenant list is not empty
Previously, the tenant selector was hidden when only one tenant
was returned, making it impossible to run queries in multi-tenant mode.
Now, the selector is always shown as long as at least one tenant exists.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9396
2025-07-31 09:20:53 +02:00
Artem Fetishev
1229ed230d lib/storage: remove prefetching metric names as it does not improve performance anyway
Benchmark results:

Machine Type | Benchmark Log | Brief Conclusion
--- | --- | ---
`GCP e2-standard-8 (AMD)` | [e2-standard-8-no-prefetch-metric-names.log](https://github.com/user-attachments/files/21481629/e2-standard-8-no-prefetch-metric-names.log) | slight degradation < 5%
`GCP n2-standard-8 (Intel)` | [n2-standard-8-no-prefetch-metric-names.log](https://github.com/user-attachments/files/21481631/n2-standard-8-no-prefetch-metric-names.log) | slight improvement < 5%
`GCP n2d-standard-8 (AMD)` | [n2d-standard-8-no-prefetch-metric-names.log](https://github.com/user-attachments/files/21481630/n2d-standard-8-no-prefetch-metric-names.log) | slight improvement < 5%, slight degradation < 5%



Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9137
---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-31 09:19:44 +02:00
Nikolay
98659633cc app/vmauth: properly set useProxyProtocol for httpInternalListenAddr
Commit e77df5d00b introduced an unintentional change, which prevented
using httpInternalListenAddr, which is designed for clients that do
not support the proxy protocol.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9515
2025-07-31 09:18:24 +02:00
Fred Navruzov
676e22b65c docs/vmanomaly: v1.25.2 update
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9520
2025-07-31 09:18:00 +02:00
Max Kotliar
b98e592752 lib/prompb: Merge prompbmarshal logic into prompb
The prompb and prompbmarshal packages share exactly the same models and provide
marshal and unmarshal capabilities for them. This creates duplication
(changes in one model have to be made in the other, as happened with metadata) and
confusion where, for example, you compare identical-looking models but Go
says they are not the same (because of the type).

This commit merges the prompbmarshal logic into prompb so the rest of the
code is aligned on the prompb models.

Moves samplesPool and labelsPool to WriteRequestUnmarshaller.
Makes the WriteRequest struct free of unmarshal logic.

The benchmark shows no significant changes:

$benchstat prompbmarshal.bench prompb2.bench
goos: darwin
goarch: arm64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb
cpu: Apple M1 Pro
                                 │ prompbmarshal.bench │           prompb2.bench            │
                                 │       sec/op        │   sec/op     vs base               │
WriteRequestUnmarshalProtobuf-10           189.2µ ± 5%   190.8µ ± 8%       ~ (p=0.579 n=10)
WriteRequestMarshalProtobuf-10             145.3µ ± 7%   143.6µ ± 2%       ~ (p=0.143 n=10)
geomean                                    165.8µ        165.5µ       -0.14%

                                 │ prompbmarshal.bench │            prompb2.bench            │
                                 │         B/s         │     B/s       vs base               │
WriteRequestUnmarshalProtobuf-10          50.42Mi ± 5%   49.99Mi ± 8%       ~ (p=0.593 n=10)
WriteRequestMarshalProtobuf-10            65.64Mi ± 7%   66.39Mi ± 2%       ~ (p=0.143 n=10)
geomean                                   57.53Mi        57.61Mi       +0.14%

                                 │ prompbmarshal.bench │            prompb2.bench             │
                                 │        B/op         │     B/op       vs base               │
WriteRequestUnmarshalProtobuf-10         27.70Ki ±  4%   26.90Ki ±  7%       ~ (p=0.190 n=10)
WriteRequestMarshalProtobuf-10           3.267Ki ± 12%   3.273Ki ± 12%       ~ (p=0.971 n=10)
geomean                                  9.514Ki         9.383Ki        -1.38%

                                 │ prompbmarshal.bench │            prompb2.bench            │
                                 │      allocs/op      │ allocs/op   vs base                 │
WriteRequestUnmarshalProtobuf-10          0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=10) ¹
WriteRequestMarshalProtobuf-10            0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=10) ¹
geomean                                              ²               +0.00%                ²
¹ all samples are equal
² summaries must be >0 to compute geomean
2025-07-31 01:04:11 +03:00
Ivan Dudin
2f4cc0b699 docs: fix sentence duplication in "ascent_over_time" description (#9506)
### Describe Your Changes

Fixed sentence duplication in MetricsQL function description

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-30 19:41:38 +03:00
Max Kotliar
9c80b54df3 docs: upd changelog message for idbPrefillStart flag
follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9461
2025-07-30 19:37:55 +03:00
Max Kotliar
229ba50c8f docs: upd changelog message for idbPrefillStart flag
follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9461
2025-07-30 19:35:38 +03:00
Max Kotliar
b1321e0294 docs: fix link in changelog
introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9461
2025-07-30 19:28:59 +03:00
Nikolay
752723198c lib/storage: make indexDB prefill variable configurable (#9461)
This commit introduces a new flag: `-storage.idbPrefillStart` with a
default value of `1h`. It allows adjusting the start time of the prefill
process for the data written into the next indexDB.

By default, VictoriaMetrics starts prefilling the indexDB at 3 A.M. UTC, while
the indexDB rotates at 4 A.M. UTC. It could be useful to change the start time
from 3 A.M. to 1 A.M. or 00:00 A.M. This should smooth the overall resource
usage.

However, setting the value to more than 4 hours doesn't make much sense for
default installations (e.g. 11 P.M. of the day before rotation),
since VictoriaMetrics maintains daily indexes and would have
to repopulate them twice.

But it could be useful in conjunction with `-retentionTimezoneOffset`,
which can delay index rotation for the current day and give more time
to the prefill process.

As an example, `-retentionTimezoneOffset=4h` adds an additional 4 hours
to the rotation time and `-storage.idbPrefillStart` could be changed
accordingly.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9393

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-07-30 19:15:05 +03:00
Zakhar Bessarab
62ab06d635 docs/changelog: follow-up for 11cfdb8d
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-07-29 13:52:43 +04:00
Zakhar Bessarab
396cc14473 app/vmctl: allow overriding tmp files location for prometheus migration (#9513)
Previously, vmctl tried to create a tmp directory in the directory with
the Prometheus snapshot. This is not always possible, as the snapshot can be
mounted in read-only mode.

Use the system default temporary location and allow the user to customize the
tmp path in order to avoid issues with custom tmp locations.

Closes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9505

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-07-29 13:45:57 +04:00
Roman Khavronenko
f385e36b96 docs: move stream aggregation config to a sub-section (#9500)
https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#configuration
section is the biggest section in the stream aggregation docs. Moving it
to a subsection should make the page easier to read.

In the future, new sub-sections are planned: `quick start` and
`examples`.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-28 19:10:40 +03:00
f41gh7
11cfdb8d20 docs/changelog: mention change 69760f1d0c
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-07-28 16:27:39 +02:00
Alexander Frolov
69760f1d0c discovery/kubernetes: avoid unnecessary blocking during cleanup
The `groupWatchersCleaner` iterates through all watchers, attempting to
lock them sequentially while holding the global `groupWatchersLock`.
Therefore, a large number of group watchers can cause the
`groupWatchersLock.Lock()` to block for a noticeable period.

Proposing to use `TryLock` as an optimistic fast path, because if the watcher
lock is held, it's more likely that the watcher is still in use, so no
further cleanup is required.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9354#issuecomment-3089108889
2025-07-28 15:41:40 +02:00
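The optimistic TryLock cleanup described above can be sketched in Go (simplified, hypothetical types; not the actual discovery code):

```go
package main

import (
	"fmt"
	"sync"
)

// groupWatcher is a simplified stand-in for the discovery watcher type.
type groupWatcher struct {
	mu    sync.Mutex
	inUse bool
}

// cleanupWatchers removes unused watchers. TryLock is the optimistic fast
// path: if a watcher's lock is already held, it is most likely still in use,
// so it is skipped instead of blocking while the global lock is held.
func cleanupWatchers(watchers map[string]*groupWatcher) int {
	removed := 0
	for key, gw := range watchers {
		if !gw.mu.TryLock() {
			continue // busy; skip instead of blocking
		}
		if !gw.inUse {
			delete(watchers, key) // deleting during range is safe in Go
			removed++
		}
		gw.mu.Unlock()
	}
	return removed
}

func main() {
	ws := map[string]*groupWatcher{
		"idle": {},
		"busy": {inUse: true},
	}
	locked := &groupWatcher{}
	locked.mu.Lock() // simulate a watcher currently held by another goroutine
	ws["locked"] = locked

	fmt.Println(cleanupWatchers(ws)) // 1 (only "idle" is removed)
	fmt.Println(len(ws))             // 2
}
```

The cleaner never waits on a busy watcher, so the time spent under the global `groupWatchersLock` stays bounded even with many watchers.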
Leonardo Taccari
69e8a0e247 docs/changelog: properly spell NetBSD
One byte typo fix for properly spelling NetBSD.

NetBSD is spelled with the uppercase N.
2025-07-28 15:35:34 +02:00
Roman Khavronenko
2d5b196e05 docs: mention that single-node supports HA in cluster recommendation
See context here
https://www.reddit.com/r/VictoriaMetrics/comments/1mb60ck/ha_setup_on_onprem_cluster/
Apparently, user was confused with inability of VM single-node to run in
HA.
2025-07-28 15:24:08 +02:00
Aliaksandr Valialkin
045c537a16 vendor: update github.com/VictoriaMetrics/VictoriaLogs to v0.0.0-20250727175446-3ac9ad9e7935 2025-07-28 14:46:14 +02:00
Andrii Chubatiuk
dee6bb0066 vmui: removed codeql, added reporter for eslint (#9468)
### Describe Your Changes

removed CodeQL and added ESLint annotations

<img width="1102" height="286" alt="image"
src="https://github.com/user-attachments/assets/0dc8f7de-b062-4b46-9490-82d2908da045"
/>

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-28 15:43:16 +03:00
Max Kotliar
82920393da app/vmselect/netstorage: do not retry "cannot obtain connection from the pool" errors (#9484)
Currently, all errors that occur during the handshake and dial phases
(except for timeouts) are retried in
[execOnConnWithPossibleRetry](0c4062b727/app/vmselect/netstorage/netstorage.go (L2431)).
However, such errors typically result from network issues or CPU
exhaustion on the storage side. In both cases, retrying is unlikely to
succeed and may instead contribute to additional, unnecessary load on
the system.

This PR disables retries for all errors encountered during the handshake
and dial process. The goal is to avoid redundant retry attempts in
scenarios where they are unlikely to help and may worsen the underlying
problem.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9345

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-28 15:31:25 +03:00
Max Kotliar
7c44036aa2 docs: do not mention boringcrypto in fips docs (#9507)
### Describe Your Changes

BoringCrypto was deprecated and is not used since Go 1.24, see
https://go.dev/blog/fips140

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-28 15:26:44 +03:00
Aliaksandr Valialkin
65251c8fe2 lib/{mergeset,storage}: open files inside parts in parallel
This should reduce the time needed for opening the parts on high-latency storage systems such as NFS or Ceph.

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/517
2025-07-28 13:41:02 +02:00
Aliaksandr Valialkin
bbda00fec5 vendor: update github.com/VictoriaMetrics/VictoriaLogs to v0.0.0-20250727175446-3ac9ad9e7935 2025-07-27 20:02:31 +02:00
Aliaksandr Valialkin
8e155da0ac lib/{mergeset,storage}: close in parallel part files opened for reading
This should reduce the time needed for closing the part on high-latency
storage systems such as NFS or Ceph.

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/517
2025-07-27 19:23:55 +02:00
Aliaksandr Valialkin
cc58d659c3 lib/{fs,filestream}: move lib/filestream.MustCloseWritersParallel() to lib/fs.MustCloseParallel()
The lib/fs.MustCloseParallel() accepts a slice of MustWriter items, which must implement only
a single method - MustWrite(). The previous lib/filestream.MustCloseWritersParallel() was
accepting CloseWriter items, which must implement Write() and Path() methods additionally
to MustClose() method. This was adding artificial restrictions on the applicability
of the MustCloseWritersParallel() method. Remove these restrictions.
2025-07-27 19:12:34 +02:00
Aliaksandr Valialkin
a05e4cf67b lib/{mergeset,storage}: store files inside in-memory parts to the persistent storage in parallel
This should reduce the time needed for converting in-memory parts to file-based parts on high-latency
storage systems such as NFS or Ceph.

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/517
2025-07-27 18:19:22 +02:00
Aliaksandr Valialkin
3c7c3a5b0d lib/{mergeset,storage}: flush and close files inside the newly created parts in parallel
This should reduce the time needed for closing the newly created parts on high-latency
storage systems such as NFS or Ceph.

This should help https://github.com/VictoriaMetrics/VictoriaLogs/issues/517
2025-07-27 17:51:56 +02:00
Aliaksandr Valialkin
068eecf1c6 lib/fs: remove directory entries in parallel at MustRemoveDir()
This should reduce the time needed for the deletion of VictoriaLogs parts
with a big number of files, which are created when wide events are ingested into VictoriaLogs
(e.g. logs with a big number of log fields).

This may help improve the scalability of VictoriaLogs on systems with a big number of CPU cores
that store data on high-latency storage such as Ceph or NFS.
See https://github.com/VictoriaMetrics/VictoriaLogs/issues/517
2025-07-27 17:25:17 +02:00
Aliaksandr Valialkin
4806fe02d8 lib/fs: add common path into MustSyncPath() for disabled fsync - just check that the given path exists
This removes the need of fsutil.IsFsyncDisabled() checks inside arch-specific implementations
of the mustSyncPath() function.
2025-07-27 17:14:46 +02:00
Aliaksandr Valialkin
73015bccb9 apptest: do not use "at" and "pb" import aliases for apptest and prompbmarshal packages
The import aliases may complicate maintenance of the code in the long term
if they aren't used consistently, e.g. if one file imports apptest under the default name
while another file imports it under the "at" name.

The aliases also complicate grepping the code for apptest.* or prompbmarshal.* usages.
2025-07-26 01:04:50 +02:00
Aliaksandr Valialkin
da5c065f29 go.mod: update github.com/VictoriaMetrics/VictoriaLogs to v0.0.0-20250725215216-8de283002ba8 2025-07-26 00:04:13 +02:00
Aliaksandr Valialkin
eb2235d354 app/vmalert: consistently use lib/fs.MustRemoveDir() instead of os.RemoveAll() 2025-07-25 20:29:44 +02:00
Aliaksandr Valialkin
90a84f2526 apptest: consistently use lib/fs.MustRemoveDir() instead of os.RemoveAll()
This reduces the amount of boilerplate code needed for error handling
2025-07-25 20:28:53 +02:00
Aliaksandr Valialkin
a092901e26 Makefile: add make apptest command - alias for the make integration-test
The `make apptest` is more natural because integration tests are located in the apptest directory.
2025-07-25 20:27:34 +02:00
Aliaksandr Valialkin
83da33d8cf lib/fs: simplify the code for directory removal and make it compatible with object storage (S3) and NFS
- Drop the code needed for asynchronous removal of the directory on NFS shares.
  This code was needed when VictoriaMetrics could keep open files after their deletion
  or renaming. This is no longer the case after the commit 43b24164ef .
  Now files are deleted only after all the readers close them.
  This updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/61

- Unify MustRemoveAll() and MustRemoveDirAtomic() into MustRemoveDir() and MustRemovePath()
  functions:

  - The MustRemoveDir() deletes the given directory with all its contents, in an "atomic" way:
    it creates a special `.delete-this-dir` file in the directory, then removes all its contents
    except of this file, and later removes the `.delete-this-dir` file together with the directory
    itself. This makes possible easily determining whether the given directory needs to be deleted
    after unclean shutdown - if it contains the `.delete-this-dir` file or if it is empty, it must be deleted.
    Add the IsPartiallyRemovedDir() function, which can be used for detecting whether the given directory must be removed
    at startup.

    Previously the MustRemoveDirAtomic() was using a "trick" for atomic directory removal: it was "atomically" renaming
    the directory to a temporary directory with '.must-remove.' marker in the directory name, and after that it
    was removing the renamed directory. On startup all the directories with the `.must-remove.` marker were deleted
    if they are left after unclean shutdown. This "trick" doesn't work for NFS and object storage such as S3,
    since these storage systems do not support atomic renaming of directories with multiple entries inside.
    The new MustRemoveDir() function doesn't use this "trick", so it can be safely used in NFS and S3-like storage systems.

    This is based on the pull request from @func25 - https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9486/files .

  - The MustRemovePath() deletes the given file or an empty directory.

- Delete the existing parts and partitions at startup if they were partially deleted.

- Consistently use fs.MustRemoveDir() and fs.MustRemovePath() instead of os.RemoveAll() across the codebase.
  This reduces the amount of boilerplate code related to error handling.

- Consistently use fs.MustWriteSync() instead of os.WriteFile() across the codebase.
2025-07-25 19:54:03 +02:00
Aliaksandr Valialkin
e8c622766b lib/storage/metricnamestats: consistently use lib/fs helpers instead of re-implementing them again
- Consistently use fs.IsPathExist() for checking whether the given path exists on the filesystem.

- Consistently use fs.MustWriteAtomic() for atomic store of the serialized state into file.

- Consistently panic with 'BUG:' prefix on unexpected errors.

- Read and write the state file contents in one go. This simplifies the code for loading and storing the state.
  This shouldn't increase memory usage too much, since the parsed state is already stored in RAM.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9074
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9102
2025-07-25 15:46:31 +02:00
Fred Navruzov
7c99b710aa docs/vmanomaly: release v1.25.1 (#9496)
### Describe Your Changes

Documentation updates for vmanomaly release v1.25.1

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-24 21:28:03 +02:00
Dmytro Kozlov
690f7b1827 Makefile: add TSBS build targets (#9353)
Added a simple implementation of the TSBS benchmark
2025-07-24 13:47:09 +02:00
Artem Fetishev
ad053682ab lib/storage: Minor renamings requested in #8134 (#9483)
The following renamings were requested in #8134 to be done in a separate
PR:

- Rename putTSIDToCache(s) to storeTSIDToCache(s). This is because
get/put prefixes are normally reserved for pools and ref counters. It is
unclear, though, whether the name `getTSIDFromCache` is okay in this context.
- Rename `addPartitionNolock` to `addPartitionLocked`, because the
`Locked` suffix is used everywhere else
- Rename deleteMetricIDs to saveDeletedMetricIDs, because no deletion is
actually happening. The deleted metric ids are still present in the
index. One needs to filter the deleted metric ids out after the
retrieval.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-23 12:15:39 +02:00
Max Kotliar
dda649dd2c docs: fix "reduce CPU usage" feature entry misattributed in the changelog. 2025-07-22 17:42:50 +03:00
f41gh7
d083ff790a docs/changelog: mention netBSD builds
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-07-21 18:41:53 +02:00
Leonardo Taccari
55b9b8dfee Support NetBSD builds
Previously, it was not possible to compile a NetBSD binary due to missing OS constraints in the lib/fs and lib/filestream packages.

This commit fixes it by:

* applying proper constraints in lib/filestream
* introducing statfs_t and statfs() to abstract unix internals: NetBSD
  needs to use unix.Statvfs_t and unix.Statvfs(), unlike other Unix-es.
* applying the proper constraint for the vmctl terminal package

Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9473
2025-07-21 18:37:40 +02:00
hagen1778
783803ee32 docs: fix typos
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-21 18:29:58 +02:00
Artem Fetishev
05a8657c65 docs: bump latest LTS versions
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-21 10:32:12 +00:00
Artem Fetishev
6d8ae28442 docs: Bump VictoriaMetrics version to v1.122.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-21 10:23:22 +00:00
Aliaksandr Valialkin
dd9a113890 docs/victoriametrics/Articles.md: move the article from TiDB upper in the list of articles, since it has higher priority 2025-07-21 12:17:54 +02:00
Aliaksandr Valialkin
46535bcf4c docs/victoriametrics/Articles.md: move articles about VictoriaLogs to https://docs.victoriametrics.com/victorialogs/articles/
See cf309d7feb
2025-07-21 12:13:46 +02:00
Artem Fetishev
2fb38b0333 Bump image version to v1.122.0 in docker compose configs
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-21 10:07:57 +00:00
Roman Khavronenko
5bcb67e508 dashboards: remove victorialogs related dashboards (#9471)
The new home for vlogs dashboards is
https://github.com/VictoriaMetrics/VictoriaLogs/tree/master/dashboards

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-19 14:59:19 +02:00
hagen1778
282d6fa998 docs: move vlogs articles to a sub-section
While there, add a bunch of vlogs articles we'd previously missed.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-19 14:58:20 +02:00
Mathias Palmersheim
2027233b33 added alert statistics dashboard (#9427)
### Describe Your Changes

adds alert statistics dashboard #8593

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-07-18 12:50:24 -05:00
Aliaksandr Valialkin
21c8ae4d02 docs/victoriametrics/Articles.md: add a link to https://www.truefoundry.com/blog/victorialogs-vs-loki 2025-07-18 17:46:41 +02:00
Artem Fetishev
f5fcfd8e46 docs/CHANGELOG.md: cut v1.122.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-18 13:59:10 +00:00
Aliaksandr Valialkin
9d9bea0348 lib/{storage,mergeset}: make sure the newly created data part is visible in the parent directory before storing it in parts.json
The newly created data part could go missing after an unclean shutdown (such as a hardware power-off),
since the contents of the parent directory weren't synced to disk before the newly created data part was recorded in the parts.json file.

Fix this by syncing the parent directory contents before recording the newly created part in the parts.json file.

This commit is based on https://github.com/VictoriaMetrics/VictoriaLogs/pull/507
2025-07-18 14:56:06 +02:00
Artem Fetishev
cfcb74381e deployment: update base Docker image from alpine:3.22.0 to alpine:3.22.1
See https://www.alpinelinux.org/posts/Alpine-3.19.8-3.20.7-3.21.4-3.22.1-released.html

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-18 12:19:55 +00:00
hagen1778
84f95f362a docs: add Scaling Observability: Why TiDB Moved from Prometheus to VictoriaMetrics
https://www.pingcap.com/blog/tidb-observability-migrating-prometheus-victoriametrics/
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-18 11:27:27 +02:00
Nikolay
eb1164278e lib/mergeset: reduce memory allocations on blockcache misses
This commit adds temporary in-memory index and data block buffers for
index search requests. This reduces memory allocations on block
cache misses, since the block cache stores a block only after a
configured number of cache misses.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9324
2025-07-18 10:47:30 +02:00
Max Kotliar
01088cc513 lib/netutil: reuse idle connections more actively (#9464)
Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9345. While it
doesn't directly fix the issue, it should help improve the situation. We
should do fewer dials and handshakes, hence fewer chances to make an
error.

- Keep idle connection for a longer time, 30s -> 120s.
- Keep at least two established connections at all times.

Some stats from our internal deployments (note that the screenshot
reflects a minimum of four connections, whereas the final version uses
only two):

1:
<img width="1508" height="623" alt="Screenshot 2025-07-17 at 16 46 42"
src="https://github.com/user-attachments/assets/ce5c4007-8d4d-40d0-b8bc-c890e8f97208"
/>
<img width="1510" height="625" alt="Screenshot 2025-07-17 at 16 46 54"
src="https://github.com/user-attachments/assets/ed7592b8-4123-4c57-8131-0c392216477f"
/>

2:
<img width="1497" height="488" alt="Screenshot 2025-07-17 at 16 47 16"
src="https://github.com/user-attachments/assets/c1ff33ca-1cc0-4d71-ae37-c6ff192b9525"
/>
<img width="1496" height="489" alt="Screenshot 2025-07-17 at 16 47 25"
src="https://github.com/user-attachments/assets/688f136c-67a4-44a4-9855-b9a6a369cd49"
/>

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-18 10:52:09 +03:00
Andrei Baidarov
4d0cdf1dc6 lib/storage: non-empty extDB in search tests/benchmarks (#9446)
Current tests do not cover search from extDB
2025-07-18 09:00:45 +02:00
Max Kotliar
11bb47a03b dashboards: TCP connections shows spikes on small time ranges (#9465)
### Describe Your Changes

When `$__interval` is small and close to the scrape interval, we start
losing some data points, which can cause the sum to show misleading
spikes. This change ensures that the minimum interval does not go below
1m to avoid such issues.

The query works fine for long time ranges like 7d or 30d, where the
interval is bigger than 1m.

The query:
```
sum(max_over_time(vm_tcplistener_conns{job=~"$job", instance=~"$instance"}[$__interval])) by(job)
```

Before:
<img width="1511" height="665" alt="Screenshot 2025-07-17 at 17 22 23"
src="https://github.com/user-attachments/assets/b15dd3db-cb5e-4c9d-9ce4-4a665c38273e"
/>

After:
<img width="1510" height="665" alt="Screenshot 2025-07-17 at 17 22 44"
src="https://github.com/user-attachments/assets/3f82f6d2-53e4-4a20-830e-2073255f6c01"
/>

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-18 08:23:05 +03:00
f41gh7
2f8ad3aaf4 docs: update changelog description
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-07-17 20:55:42 +02:00
Alexander Frolov
617e19a6d7 lib/cgroup: properly set process_cpu_cores_available metric
This commit respects the online CPU count for the process_cpu_cores_available metric, since the CPU quota may exceed it.


Detailed description:

There might be a case when `getCPUQuota()` returns a value bigger than
the number of available logical CPU cores. In this case the lower value should be used.

These changes also reflect upcoming go1.25 changes
https://tip.golang.org/doc/go1.25#container-aware-gomaxprocs
> If the CPU bandwidth limit is lower than the number of logical CPUs
available, GOMAXPROCS will default to the lower limit

In practice that happens with [CPU
Manager](https://kubernetes.io/blog/2022/12/27/cpumanager-ga/) enabled.
It requires setting `/sys/devices/system/cpu/online` which shows the
total number of cores on a node. The container doesn't have any cgroup
limits, but it's pinned to a limited subset of cores.
```
root@vmselect:/# cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
-1
root@vmselect:/# cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
root@vmselect:/# cat /sys/devices/system/cpu/online
0-255
```

go1.25 and go.uber.org/automaxprocs don't look into
`/sys/devices/system/cpu/online`, VictoriaMetrics does
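
The rule above — take the lower of the CPU quota and the online core count — can be sketched as follows. `availableCPUCores` is an illustrative name, not the lib/cgroup API.

```go
package main

import "fmt"

// availableCPUCores picks the effective CPU count: if the cgroup CPU quota
// is unset (<= 0) or exceeds the number of online logical cores, the lower
// online-core count wins. This mirrors the go1.25 container-aware
// GOMAXPROCS rule quoted in the commit message.
func availableCPUCores(quota float64, onlineCores int) float64 {
	if quota <= 0 || quota > float64(onlineCores) {
		return float64(onlineCores)
	}
	return quota
}

func main() {
	// cpu.cfs_quota_us = -1 (no limit), container pinned to 8 cores.
	fmt.Println(availableCPUCores(-1, 8)) // 8
	// Quota of 2.5 cores on an 8-core node: quota wins.
	fmt.Println(availableCPUCores(2.5, 8)) // 2.5
	// Quota larger than the online cores (the CPU Manager case): cores win.
	fmt.Println(availableCPUCores(16, 8)) // 8
}
```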
2025-07-17 20:46:49 +02:00
Alexander Frolov
8d11b9f4a6 lib/promscrape: chunkedbuffer double-free
The chunkedbuffer is released twice, leading to concurrent use of the
buffer after it's acquired from a pool by two different goroutines.


The issue was introduced in v1.115.0 by commit 5b87aff830
2025-07-17 20:37:39 +02:00
Andrei Baidarov
0a2213dbd6 apptest: support more API (#9462)
Extracted from #8134
2025-07-17 18:43:26 +02:00
Fred Navruzov
509cbe7b05 docs/vmanomaly: release v1.25.0 (#9455)
### Describe Your Changes

Doc updates to release v1.25.0, including
- config hot reload
- environment variable config placeholders
- cross-linking VM Operator support
- and other connected docs changes

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-17 16:11:25 +02:00
Max Kotliar
bf0dc413bf Use empty string when EXTRA_DOCKER_TAG_SUFFIX not provided. (#9458)
### Describe Your Changes

Previously, running `make publish-vmstorage` composed the tag name
like:

```
docker.io/victoriametrics/vmstorage:heads-tcpdialer-increase-idle-timeout-0-g1b20c2dbd3-dirty-84b03c0eEXTRA_DOCKER_TAG_SUFFIX
```

It worked correctly only if an empty `EXTRA_DOCKER_TAG_SUFFIX=` env var
was provided explicitly.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-17 17:03:21 +03:00
Nikolay
5794cf46d7 app/vmstorage: enable metric name stats tracker by default (#9457)
This feature helps to understand cardinality contribution by metric
names and it doesn't require extra resources. It's advisable to enable
it by default.


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-07-17 12:30:00 +02:00
Phuong Le
6da0390946 lib/storage: add test case for deduplication to keep first and last samples (#9376)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-07-17 12:21:00 +02:00
Roman Khavronenko
704b65499e docs/vmalert: mention a common mistake of using dynamic label values (#9441)
The new tip added based on recent support case.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-17 12:20:42 +02:00
Artur Minchukou
582606716d app/vmui: add CI workflow for vmui with lint, typecheck, and testing (#9435)
### Describe Your Changes

Added CI workflow for vmui with lint, typecheck, and testing.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Andrii Chubatiuk <achubatiuk@victoriametrics.com>
2025-07-17 12:19:54 +02:00
Roman Khavronenko
194df32a30 Revert "app/vmselect: respect staleness markers when calculating rate and increase functions" (#9451)
This reverts commit 63e1bf5d97. The reason
for revert is that the change makes increase/rate behavior less
predictable for users. See
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9420.

Another negative impact of this change is that it negatively affects
recording rules that calculate increases or rates over time series on
big intervals. For example,
`increase(http_errors_total{instance="foo"}[30d])` would stop producing
results if `http_errors_total{instance="foo"}` was marked as stale. But
this is logically incorrect, as `http_errors_total{instance="foo"}` over
last 30d should still return results.

-------------

This change doesn't revert
63e1bf5d97
completely. It keeps changes made to `removeCounterResets` in order to
preserve original `staleNaN` values. Before, `staleNaN` were compared
with float values and producing `NaN` instead. Which could confuse us in
future. So keeping the fix and tests. It shouldn't have any negative
effect.


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-17 12:15:10 +02:00
nemobis
cb83ab7379 docs: Fix typo in changelog (#9456)
### Describe Your Changes
Typo fix.
### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-17 12:14:52 +02:00
Andrii Chubatiuk
c6e107c2e8 app/vmselect invalid notifiers and metadata paths (#9421)
fix vmselect /api/v1/notifiers and metadata paths in the cluster branch;
due to these inconsistencies, both endpoints are unavailable

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-07-16 16:38:26 +04:00
Andrii Chubatiuk
a46d2727cf lib/license: do not use -eula flag for license verification (#914)
* made -acceptEULA flag no-op

* Update lib/license/copyrights.go

Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>

* Update lib/license/copyrights.go

* docs/changelog: add an update note for the change

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>

* docs/changelog: update formatting

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-07-16 16:28:46 +04:00
Zakhar Bessarab
b167f3f270 docs/changelog: fix order of items after 8d511e8d
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-07-16 16:15:50 +04:00
Zakhar Bessarab
8d511e8d09 app/vmstorage: delete old snapshots after 3 days by default (#9438)
Previously, snapshots were never deleted automatically. This led to
wasted disk space when snapshots were left behind and never deleted,
which happened when backups failed. Cleaning them up required manual
investigation.

This change enables removal of snapshots older than 3 days by default.
This should give enough time to upload backups while making sure
old snapshots do not waste disk space.

Closes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9344

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-07-16 16:09:58 +04:00
Artem Fetishev
834457f0c6 Makefile: remove vlogs build targets (#9434)
### Describe Your Changes

VictoriaLogs has been moved to a separate repository. Remove remaining
vlogs build rules from VictoriaMetrics.
 
### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-15 15:50:33 +03:00
Max Kotliar
0308880418 lib/filesyste: make golangci-lint pass on macos (fields are used under build tag) 2025-07-15 14:50:32 +03:00
Andrii Chubatiuk
db39f045e1 ci: golangci-lint 1.6.x -> 2.2.1 2025-07-15 14:27:15 +03:00
Andrii Chubatiuk
336e4abc5c ci: do not run go actions on UI changes (#9440)

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-15 14:08:46 +03:00
Max Kotliar
7d75e0353b app/vmagent/remotewrite: Prevent panic during block re-pack on protocol downgrade. (#9419)
### Describe Your Changes

The protocol is now downgraded only if vmagent can re-pack the block
successfully. This prevents an accidental downgrade on a corrupted
block.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9417

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-15 13:43:30 +03:00
Andrii Chubatiuk
4576b67c0f ci: free disk space before main workflow (#9432)
Removing unneeded packages before running build to prevent build failures like this one - https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/actions/runs/16220839709/job/45803120572
2025-07-15 08:42:37 +04:00
Aliaksandr Valialkin
e41714f213 go.mod: do not use "replace" directive for overriding the version of the github.com/VictoriaMetrics/VictoriaLogs package
Refer to the needed package directly in the "require" directive.
This should make the file-location prefixes in the output logs
generated by VictoriaMetrics components more useful.

See https://github.com/VictoriaMetrics/VictoriaLogs/issues/431#issuecomment-3071313506 for details.
2025-07-15 01:17:48 +02:00
Aliaksandr Valialkin
15242a70a7 vendor: run make vendor-update 2025-07-15 00:26:39 +02:00
Aliaksandr Valialkin
86cf5b47e1 lib/querytracer: compare time.Time vars with time.Time.Equal() instead of plain comparison with ==
See https://pkg.go.dev/time#Time on why the "t1 == t2" comparison may result to issues.
2025-07-15 00:20:51 +02:00
Aliaksandr Valialkin
4aa179e4f5 lib/netutil: do not use unnamed field of type connMetrics inside TCPListener struct, since it is too confusing
Give it a name - cm - and refer to its fields explicitly.
2025-07-15 00:15:57 +02:00
Aliaksandr Valialkin
4f915b0278 all: consistently start error fmt.Errorf() messages with small letter 2025-07-15 00:12:40 +02:00
Aliaksandr Valialkin
dc14513009 all: replace strings.Replace(..., -1) with strings.ReplaceAll(...) 2025-07-14 23:58:39 +02:00
Aliaksandr Valialkin
02e15f1cc5 app/vmselect/vmui: run make vmui-update after the commit 7dc79a5d85
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9428
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9388
2025-07-14 21:38:53 +02:00
Aliaksandr Valialkin
1a8e9b3b15 app/vmui/Makefile: remove make vmui-logs-update command, since it became obsolete after 7dc79a5d85
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9428
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9388
2025-07-14 21:33:48 +02:00
Artur Minchukou
7dc79a5d85 app/vmui: remove all the code related to VictoriaLogs (#9428)
### Describe Your Changes

Related issue: #9388

- removed all the code related to VictoriaLogs
- updated dependencies to the latest versions
- removed unnecessary `React` import from components
- removed deprecated dependencies such as `lodash.get`,
`@babel/plugin-proposal-nullish-coalescing-operator`,
`@babel/plugin-proposal-private-property-in-object`
- removed unused packages
- fixed proxy for local playground development (after refresh the page
app cannot properly parse the `/#/` in the path)

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-14 21:29:14 +02:00
Roman Khavronenko
40ab285fb9 docs: update troubleshooting docs for vmalert (#9373)
Based on https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9343


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-14 20:48:50 +02:00
Max Kotliar
5b0c861915 docs/changelog: Clarify component (vmagent) in update note. 2025-07-14 15:39:50 +03:00
Andrei Baidarov
1ae28356cc lib/storage: add TestMustOpenIndexDBTables_* (#9418)
Add unit tests that verify the filesystem hierarchy of the indexdb after
opening the storage and after rotation. The tests were originally added
in #8134 but are still applicable to the current version and will also
reduce the diff.

Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-10 15:42:11 +02:00
f41gh7
22f34284be deployment: update Go builder from Go1.24.4 to Go1.24.5
See https://github.com/golang/go/issues?q=milestone%3AGo1.24.5+label%3ACherryPickApproved
2025-07-10 13:56:21 +02:00
Andrei Baidarov
a00d5cf4b5 lib/storage: move methods from Storage to indexDB (#9416)
- Move `prefetchMetricNames`, `SearchMetricNames`, and
`SearchLabelValues` from `Storage` to `indexDB`
- Rename `searchLabelValuesWithFiltersOnTimeRange` to
`searchLabelValuesOnTimeRange`
- Rename `searchLabelValuesWithFiltersOnDate` to
`searchLabelValuesOnDate`

Extracted from #8134.

Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-10 13:16:29 +02:00
Max Kotliar
dde08ad92c docs/changelog: Enhance the release guide (#9405)
### Describe Your Changes

- Split the release process into two steps.
- Added some links.
- Described in more detail how to check that branches are in sync.
- Added some small checks, like running tests and testing the final
release on sandbox.
- Changed the order of some actions (like building the final image
before publishing the release).

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-07-10 12:14:17 +03:00
Andrei Baidarov
01a2d43c4c lib/storage: refactor TestStorageRotateIndexDB_* tests (#9414)
Extracted from #8134
2025-07-09 15:29:03 +02:00
Andrei Baidarov
3a5356ee6f lib/storage: add DebugFlush methods for partition/table (#9413)
Extracted from #8134
2025-07-09 15:20:35 +02:00
Andrei Baidarov
678786ef7e lib/storage: make create(Global|PerDay)Indexes idb methods (#9412)
Extracted from #8134
2025-07-09 15:03:58 +02:00
Andrei Baidarov
25a1428329 lib/storage: pass timestamp explicitly in tests (#9411)
Extracted from #8134
2025-07-09 14:44:40 +02:00
Andrei Baidarov
28c0f41d1a lib/storage: move SearchGraphitePaths to indexDB (#9410)
`SearchGraphitePaths` does not depend on storage, so move it with dependencies to `index_db.go`.
Extracted from #8134.
2025-07-09 14:19:11 +02:00
Andrei Baidarov
0a6f6d5793 apptest: rename DeleteSeries to APIV1AdminTSDBDeleteSeries and implement it for vmsingle (#9409)
Extracted changes from #8134
2025-07-09 13:42:57 +02:00
Andrei Baidarov
b93ea6eba6 do not depend on platform, use maxInt64 explicitly (#9406)
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134
2025-07-08 13:26:04 +02:00
Andrei Baidarov
98efb80727 lib/storage: rewrite TestStorageDeleteSeries_TooManyTimeseries
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134
2025-07-08 13:24:57 +02:00
Andrei Baidarov
a1323196e1 lib/storage: refactor TestIndexDB
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134
2025-07-08 11:03:19 +02:00
Andrei Baidarov
d25995aeb0 lib/storage: add TestStorageDeletePendingSeries
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134
2025-07-08 11:01:38 +02:00
Andrei Baidarov
cc0ebec5fb lib/storage: rewrite TestStorageDeleteSeries_CachesAreUpdatedOrReset
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134
2025-07-08 11:00:15 +02:00
Andrei Baidarov
627c536dfc lib/storage: use explicit TimeRange{0, math.MaxInt} for all time period search
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134
2025-07-08 10:56:21 +02:00
Andrei Baidarov
2f4fea1d11 lib/storage: add and use testCreatePartition helper
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134
2025-07-08 10:55:13 +02:00
Andrei Baidarov
dfb4e3f5bb lib/storage: use real storage in TestUpdateCurrHourMetricIDs
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134
2025-07-08 10:54:20 +02:00
Dmytro Kozlov
b77904fabe deployment/docker: fix datasource provision file for cluster
When you run `make docker-cluster-up` and try to check metrics on the
Explore page of Grafana, you will see a basic auth request. If you enter
the correct login and password from `auth-vm-cluster.yml`, it will keep
asking for the username and password. This happens because the
datasource is configured to request data from vmauth, but no auth
information is provided in the configuration.

Added the basic auth info to the datasource provision file
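
A sketch of what the provisioning fix looks like; the field names follow Grafana's datasource provisioning format, while the datasource name, URL and credentials below are placeholders rather than the actual values from the repository.

```yaml
# Illustrative Grafana datasource provisioning entry for the cluster setup.
# The basicAuth credentials must match those in auth-vm-cluster.yml.
apiVersion: 1
datasources:
  - name: VictoriaMetrics Cluster
    type: prometheus
    access: proxy
    url: http://vmauth:8427/select/0/prometheus
    basicAuth: true
    basicAuthUser: foo          # placeholder
    secureJsonData:
      basicAuthPassword: bar    # placeholder
```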
2025-07-08 10:53:44 +02:00
Dima Shur
7c5cae9efc docs/backups: add notes for snapshots troubleshooting, custom hostname usage (#9341)
Add a note for the -dst config and an additional reference for snapshot troubleshooting for better accessibility.

This makes the documentation easier to use, following customer issues.
2025-07-08 10:34:20 +04:00
Max Kotliar
f56d7c4045 docs: mention new LTS releases 2025-07-07 15:31:30 +03:00
Max Kotliar
ccc093e0f1 docs: update VictoriaMetrics Docker image tags to v1.121.0 2025-07-07 15:20:59 +03:00
Max Kotliar
f8fcc58396 deployment: update VictoriaMetrics Docker image tags to v1.121.0 2025-07-07 15:13:13 +03:00
Artem Fetishev
d81cebe287 apptest: Do not configure cluster storageDataPath and retentionPeriod by default (#9391)
Defaults are set only in StartDefaultCluster()

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-07 13:37:59 +02:00
Artem Fetishev
cdfdf791a4 apptest: Fix TestClusterMultiTenantSelect in master
TestClusterMultiTenantSelect passes in cluster branch but fails in master.
Integration tests must pass in any branch since they do not distinguish
between master and cluster.

The test fails because the multitenant_test.go is different in master and
cluster. This commit makes the file identical in both branches.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-07 11:41:33 +02:00
Andrii Chubatiuk
bee6926a89 apptest: removed victorialogs tests (#9389)
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9387
2025-07-07 10:45:02 +02:00
Matt Cocking
c6ac25bd56 docs/victoriametrics: fix vmalert activeAt example so from is before to (#9381)
### Describe Your Changes

The example for using the `activeAt` template in `vmalert.md` was wrong,
with the `from` field being before the `to` field. This switches them
round to be correct.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-07-07 11:42:53 +08:00
Aliaksandr Valialkin
3babb0f276 deployment/docker: remove all the code related to VictoriaLogs, since it has been migrated to https://github.com/VictoriaMetrics/VictoriaLogs/ 2025-07-07 03:43:05 +02:00
Aliaksandr Valialkin
d1d98f39b5 lib/{logstorage,prefixfilter}: remove these packages, since they have been moved to https://github.com/VictoriaMetrics/VictoriaLogs/ repository 2025-07-07 03:25:25 +02:00
Aliaksandr Valialkin
54cc6fc1df app/vlogsgenerator: remove the code here, since it has been moved to https://github.com/VictoriaMetrics/VictoriaLogs/ 2025-07-07 03:15:33 +02:00
Aliaksandr Valialkin
0b5f53fe7b app: remove VictoriaLogs-related apps, since they are moved to a separate repository - https://github.com/VictoriaMetrics/VictoriaLogs/ 2025-07-07 02:58:54 +02:00
Aliaksandr Valialkin
624497a954 docs/victorialogs: remove VictoriaLogs, since now they are located in the https://github.com/VictoriaMetrics/VictoriaLogs/ repository
Docs at https://github.com/VictoriaMetrics/VictoriaLogs/ are automatically synced to https://docs.victoriametrics.com/victorialogs/
2025-07-07 02:53:04 +02:00
Aliaksandr Valialkin
6d3d731851 docs/victoriametrics/query-stats.md: manually fix the improper available_from construction added in the commit 6751e43975
The `available_from` construction must have the form {{% available_from "#" %}}
instead of {{% available_from "tip" %}} in order to be able to automatically update before the release
with the `make docs-update-version` command.
See https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist
and https://docs.victoriametrics.com/victoriametrics/release-guide/#release-version-and-docker-images
2025-07-06 20:34:00 +02:00
Aliaksandr Valialkin
dc38e17f79 docs/victoriametrics/CONTRIBUTING.md: mention that the pull request must conform development goals from https://docs.victoriametrics.com/victoriametrics/goals/ 2025-07-06 20:10:09 +02:00
Aliaksandr Valialkin
9146c90ae1 .github/pull_request_template.md: add a link to https://docs.victoriametrics.com/victoriametrics/goals/
Users who submit pull requests must be aware of https://docs.victoriametrics.com/victoriametrics/goals/
and the pull request must be aligned with this document.
2025-07-06 20:08:28 +02:00
Aliaksandr Valialkin
a93ff1a6ec docs/Makefile: move the comments for make docs-images-to-webp command into the correct place
The comments weren't moved to the correct place in the commit 36bc458e9e

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6825

P.S. it was a strange decision to use a bloated docker image from VictoriaMetrics/vmdocs repository
for the conversion of the `*.png` files to `*.webp` files instead of the specialized slim image - https://hub.docker.com/r/elswork/cwebp .
2025-07-06 19:55:55 +02:00
Aliaksandr Valialkin
8646b73efc lib/timeutil: put TryParseUnixTimestamp function here
This allows avoiding precision loss at VictoriaLogs query time when parsing fractional unix timestamps with millisecond, microsecond and nanosecond precisions.

This is a follow-up for 1d0e96c8d2

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8767#discussion_r2051657518
2025-07-06 17:17:07 +02:00
Aliaksandr Valialkin
6452ff8624 app/vmauth: fix tests after the commit 524f58c78c
The remoteAddr and requestURI aren't sent to the client in the error response after the commit 524f58c78c .
They are logged locally instead. This increases security by not exposing potentially sensitive information (remoteAddr and requestURI) to the client.
This doesn't reduce debuggability, since remoteAddr and requestURI are logged locally together with the error message.
2025-07-06 16:36:22 +02:00
Aliaksandr Valialkin
1fa5ecd981 lib/logstorage: optimize tryParseUint64() a bit 2025-07-06 16:31:26 +02:00
Aliaksandr Valialkin
1d0e96c8d2 lib/logstorage: rewrite TryParseUnixTimestamp(), so it doesn't lose precision when parsing fractional timestamps with millisecond, microsecond and nanosecond precision
- Optimize TryParseUnixTimestamp() by using custom parsing logic instead of generic strconv.Parse* functions.
- Add benchmarks for TryParseUnixTimestamp().
- Add more tests for edge cases with various unix timestamps.

See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8767#discussion_r2051657518

This is a follow-up for 7294ffcdfc
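
Detecting timestamp precision from the value's magnitude, as described above, can be sketched like this. The exact thresholds here are an assumption for illustration; the real logic lives in `TryParseUnixTimestamp()`.

```go
package main

import "fmt"

// unixToNanos guesses the precision of an integer unix timestamp from its
// magnitude and normalizes it to nanoseconds. Thresholds are illustrative:
// values below 1e12 are treated as seconds, below 1e15 as milliseconds,
// below 1e18 as microseconds, and anything larger as nanoseconds.
func unixToNanos(ts int64) int64 {
	switch {
	case ts < 1e12: // seconds
		return ts * 1e9
	case ts < 1e15: // milliseconds
		return ts * 1e6
	case ts < 1e18: // microseconds
		return ts * 1e3
	default: // already nanoseconds
		return ts
	}
}

func main() {
	// The same instant expressed at four precisions normalizes identically.
	fmt.Println(unixToNanos(1751760000))          // seconds
	fmt.Println(unixToNanos(1751760000000))       // milliseconds
	fmt.Println(unixToNanos(1751760000000000))    // microseconds
	fmt.Println(unixToNanos(1751760000000000000)) // nanoseconds
}
```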
2025-07-06 16:10:59 +02:00
Aliaksandr Valialkin
bb2945b944 docs/victorialogs/CHANGELOG.md: move the description of the format "<time:...>" improvement into the correct place
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8767
This is a follow-up for 7294ffcdfc
2025-07-06 12:38:32 +02:00
Vadim Alekseev
7294ffcdfc lib/logstorage: add support for parsing Unix timestamp in format pipe (#8767)
### Describe Your Changes

This PR adds support for parsing Unix timestamps (both integer and
float) in the format pipe using the `time:` prefix.
The timestamp precision (seconds, milliseconds, microseconds, or
nanoseconds) is automatically determined based on the value.

It might be worth creating a new prefix for this rather than reusing
`time:`, but I haven't found any compelling reason to extend the syntax.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
2025-07-06 12:33:13 +02:00
Aliaksandr Valialkin
524f58c78c lib/httpserver: log the client address additionally to the requestURI in the SendPrometheusError()
This allows detecting the client that triggered the error logged via the
SendPrometheusError() function, which improves troubleshooting of the logged errors.

This is a follow-up for c9bb4ddeed
2025-07-05 23:51:23 +02:00
f41gh7
e4fc202ab8 docs/guides: add missing receiver to opentelemetry guide
Previously, the configuration example was missing the mandatory receiver for the logs pipeline

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-07-05 12:02:16 +02:00
Max Kotliar
b1003c0e65 docs/CHANGELOG.md: cut v1.121.0 2025-07-04 13:07:59 +00:00
Max Kotliar
4c5cde71eb app/vmselect: run make vmui-update 2025-07-04 12:44:00 +00:00
Max Kotliar
62611ee384 docs/victoriametrics/changelog: fix typo 2025-07-04 11:18:35 +00:00
Max Kotliar
3d7420d135 docs/victoriametrics/changelog: Add an update note on -retryMaxTime flag deprecation 2025-07-04 10:53:42 +00:00
f41gh7
9e9d697a31 docs/changelog: adjust changelog entry ordering
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-07-04 10:35:25 +02:00
Andrii Chubatiuk
f0859403da lib/promscrape: added label_limit global and per job scrape param (#7795)
This commit introduces a new scrape config option - `label_limit` - which defines a limit on the number of labels per time series.
The whole scrape is discarded in case of a limit breach.
This option can also be defined with the `__label_limit__` label during the scrape relabeling process, and globally for all scrape targets.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7660 and https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3233
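The limit check itself can be sketched as follows (a minimal illustration with hypothetical names, not the actual lib/promscrape code):

```go
package main

import "fmt"

// exceedsLabelLimit reports whether any scraped series has more labels than
// limit; limit <= 0 disables the check (sketch only; names are hypothetical).
func exceedsLabelLimit(seriesLabels [][]string, limit int) bool {
	if limit <= 0 {
		return false
	}
	for _, labels := range seriesLabels {
		if len(labels) > limit {
			return true
		}
	}
	return false
}

func main() {
	scrape := [][]string{
		{"__name__", "job", "instance"},
		{"__name__", "job", "instance", "path", "method"},
	}
	// The second series has 5 labels, exceeding the limit of 4,
	// so the whole scrape would be discarded.
	fmt.Println(exceedsLabelLimit(scrape, 4))
}
```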
2025-07-04 10:18:23 +02:00
Benjamin Nichols-Farquhar
63e133fe04 app/vmalert: introduce new flag replay.ruleEvaluationConcurrency
As discussed in https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7387 we want a way to speed up execution within the replay of a single rule in vmalert.

This commit adds the flag `replay.singleRuleEvaluationConcurrency` which
limits the concurrency for requests within the replay of a single
rule.

Related to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7387
2025-07-04 10:13:43 +02:00
leiwingqueen
e049bdcfbd app/vmagent: rename flag remoteWrite.retryMaxTime
This commit renames the flag `remoteWrite.retryMaxTime` into `remoteWrite.retryMaxInterval`. The new name aligns with the corresponding `MinInterval` flag. The previous flag name can still be used, but vmagent will log a warning message with a suggested migration.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9169
2025-07-03 18:28:51 +02:00
Vadim Alekseev
53b2082f53 deployment/docker: add configuration examples of vlagent for Docker
Modified the existing HA setups to use vlagent instead of writing to two
sources

Related to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9328
2025-07-03 18:18:54 +02:00
Max Kotliar
d6a7dc0ea0 lib/fs: enhance MustReadAt panic message
enhance the `MustReadAt` panic message to include the filename for easier debugging of out-of-range reads

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9106
2025-07-03 18:17:54 +02:00
f41gh7
28fb87baea lib/fs: make linter happy for windows build
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-07-03 18:12:16 +02:00
Andrei Baidarov
1799478438 apptest: Add an integration test for /graphite/metrics/index.json (#9357)
While working on #8134 it has been discovered that
`/graphite/index.json` always returns an empty result. This PR adds a
test and a fix. The fix is a no-op for master, but it is added here in
order to reduce the diff in #8134.

Co-authored-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
2025-07-03 15:14:11 +02:00
Olexandr88
dd40ed2456 docs: add links to codecov, release, A+, license icons (#9335)
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-07-03 11:40:35 +02:00
Hui Wang
420f6b322e vmalert: fix alerts state restoration for alerting rule using templating in the labels (#9348)

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9305

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-07-03 11:16:00 +02:00
Andrii Chubatiuk
8eb88798ca lib/streamaggr: properly clean quantiles output state on flush (#9351)
### Describe Your Changes

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9350

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-07-03 10:28:30 +02:00
Hui Wang
6a71c53c41 vmalert: respect group order defined in the rule file during replay mode to allow chained groups if needed (#9349)

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9334
2025-07-03 10:26:09 +02:00
Phuong Le
746cea61d0 VictoriaLogs: Add metrics to track the retention.maxDiskSpaceUsageBytes flag (#9319)
- `vl_max_disk_space_usage_bytes{path}` reports the configured maximum
disk space usage limit for the storage path. It helps to show the limit
on the dashboard.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9320
2025-07-03 08:29:26 +02:00
Phuong Le
ef88302975 VictoriaLogs: add message processor metrics (#9316)
Add metrics to track the behavior of the message processor:

- `vl_insert_processors_count` tracks the current number of active log
message processors.
- `vl_insert_flush_duration_seconds{type="protocol"}` measures the time
taken to flush log data from recently parsed logs to the underlying
storage system (in-memory buffering, remote storage nodes, or local
storage).

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9320

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-03 08:07:39 +02:00
Phuong Le
857dd8edec VictoriaLogs: fix rate_sum() inconsistency between normal queries and vmalert recording rules (#9304)
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9303

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-07-03 07:25:08 +02:00
Mac G
fedb5bbc9a docs/readme: fix typo in metric name
Fix typo in metric name mentioned in doc: vm_rows_ignored_total
2025-07-02 19:37:16 +04:00
Zakhar Bessarab
96633a3c4e app/vmbackupmanager: use "lib/snapshot" for snapshot operations
Re-use existing `lib/snapshot` for snapshot operations in order to reduce code duplicates and unify supported options and error handling.

Previously, vmbackupmanager did not support `snapshot.tls*` options and did not handle errors in the same way as vmbackup does.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9340
2025-07-02 14:44:23 +02:00
Max Kotliar
1a1e4fe2d2 lib/license: improve error message when an empty license is provided (#911) 2025-07-02 14:06:36 +02:00
Yury Molodov
d455c8afa1 vmui: disable autocomplete popup on initial page load
When the page loads, the query field automatically receives focus, which
is expected. However, this also immediately opens the autocomplete
popup, triggering an unnecessary request and showing suggestions before
the user starts typing.

 This commit prevents the autocomplete popup from opening on initial page
load.

Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9342
2025-07-02 13:53:55 +02:00
Nikolay
93843a1faf docs/release-guide: mention release candidate tagging process
This commit changes the release process by introducing a new intermediate
docker image tag with a `-rcY` suffix. It's needed to properly test
releases for possible regressions without the need to erase release docker
images, which could be cached by some proxy and actually used in
production.

 The release candidate image must be re-tagged into the final release via a make
command.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9136
2025-07-02 13:48:04 +02:00
Benjamin Nichols-Farquhar
994dadb4d5 lib/storage: Introduce vm_cache_eviction_bytes_total metric
This commit adds the following new metric `vm_cache_eviction_bytes_total` for tsid, metricName and metricIDs caches backed by workingset cache.

The problem this is attempting to solve is fine-grained tuning of
caches on large, high-churn, high-query-load VictoriaMetrics deployments.

Cache performance in those scenarios is critical for cluster
performance; however, for stability reasons it's imperative not to provide
_too much_ cache space compared to available memory, or there is a
risk of OOM which can easily destabilize a cluster.

Given the cost of memory, especially for large deployments, there is
therefore a need to optimize cache usage to be near the minimum required
to provide acceptable performance under churn, ingestion and query
loads.

VM provides an excellent set of flags for configuring this cache
performance (size, expiry and removal percentage).

However, specifically when tuning `prevCacheRemovalPercent` and
`cacheExpireDuration`, it can be hard to know exactly what impact
they're having. Sometimes it's very clear that cacheExpireDuration is
involved when there's cyclical behavior, but in other scenarios it can be
very unclear why the cache miss rate goes up.

Specifically when attempting to debug spikes in cache miss rate there
are questions that are hard to answer:

* Did a new workload come in that was poorly cached?
* Were caches rotated due to expiry and the `prev` cache contained many
active entries due to limited cache size?
* Were the caches rotated due to miss percentage because our tuning is too
aggressive?

In many of these debugging scenarios it would be greatly helpful to have
explicit metrics on how many bytes of key caches were evicted and for
what reasons; it then becomes trivial to associate spikes in eviction
for a specific reason with a miss-rate spike that causes poor
performance. The appropriate tuning then becomes much more obvious.

Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9293
2025-07-02 13:41:54 +02:00
hagen1778
469b337707 docs: refresh FAQ.md
Partially based on https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9244

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-01 14:20:42 +02:00
dependabot[bot]
2b131c13b5 build(deps): bump github.com/go-viper/mapstructure/v2 from 2.2.1 to 2.3.0 (#9299)
Bumps
[github.com/go-viper/mapstructure/v2](https://github.com/go-viper/mapstructure)
from 2.2.1 to 2.3.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/go-viper/mapstructure/releases">github.com/go-viper/mapstructure/v2's
releases</a>.</em></p>
<blockquote>
<h2>v2.3.0</h2>
<h2>What's Changed</h2>
<ul>
<li>build(deps): bump actions/checkout from 4.1.7 to 4.2.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/46">go-viper/mapstructure#46</a></li>
<li>build(deps): bump golangci/golangci-lint-action from 6.1.0 to 6.1.1
by <a href="https://github.com/dependabot"><code>@​dependabot</code></a>
in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/47">go-viper/mapstructure#47</a></li>
<li>[enhancement] Add check for <code>reflect.Value</code> in
<code>ComposeDecodeHookFunc</code> by <a
href="https://github.com/mahadzaryab1"><code>@​mahadzaryab1</code></a>
in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/52">go-viper/mapstructure#52</a></li>
<li>build(deps): bump actions/setup-go from 5.0.2 to 5.1.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/51">go-viper/mapstructure#51</a></li>
<li>build(deps): bump actions/checkout from 4.2.0 to 4.2.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/50">go-viper/mapstructure#50</a></li>
<li>build(deps): bump actions/setup-go from 5.1.0 to 5.2.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/55">go-viper/mapstructure#55</a></li>
<li>build(deps): bump actions/setup-go from 5.2.0 to 5.3.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/58">go-viper/mapstructure#58</a></li>
<li>ci: add Go 1.24 to the test matrix by <a
href="https://github.com/sagikazarmark"><code>@​sagikazarmark</code></a>
in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/74">go-viper/mapstructure#74</a></li>
<li>build(deps): bump golangci/golangci-lint-action from 6.1.1 to 6.5.0
by <a href="https://github.com/dependabot"><code>@​dependabot</code></a>
in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/72">go-viper/mapstructure#72</a></li>
<li>build(deps): bump golangci/golangci-lint-action from 6.5.0 to 6.5.1
by <a href="https://github.com/dependabot"><code>@​dependabot</code></a>
in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/76">go-viper/mapstructure#76</a></li>
<li>build(deps): bump actions/setup-go from 5.3.0 to 5.4.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/78">go-viper/mapstructure#78</a></li>
<li>feat: add decode hook for netip.Prefix by <a
href="https://github.com/tklauser"><code>@​tklauser</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/85">go-viper/mapstructure#85</a></li>
<li>Updates by <a
href="https://github.com/sagikazarmark"><code>@​sagikazarmark</code></a>
in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/86">go-viper/mapstructure#86</a></li>
<li>build(deps): bump github/codeql-action from 2.13.4 to 3.28.15 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/87">go-viper/mapstructure#87</a></li>
<li>build(deps): bump actions/setup-go from 5.4.0 to 5.5.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/93">go-viper/mapstructure#93</a></li>
<li>build(deps): bump github/codeql-action from 3.28.15 to 3.28.17 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/92">go-viper/mapstructure#92</a></li>
<li>build(deps): bump github/codeql-action from 3.28.17 to 3.28.19 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/97">go-viper/mapstructure#97</a></li>
<li>build(deps): bump ossf/scorecard-action from 2.4.1 to 2.4.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/96">go-viper/mapstructure#96</a></li>
<li>Update README.md by <a
href="https://github.com/peczenyj"><code>@​peczenyj</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/90">go-viper/mapstructure#90</a></li>
<li>Add omitzero tag. by <a
href="https://github.com/Crystalix007"><code>@​Crystalix007</code></a>
in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/98">go-viper/mapstructure#98</a></li>
<li>Use error structs instead of duplicated strings by <a
href="https://github.com/m1k1o"><code>@​m1k1o</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/102">go-viper/mapstructure#102</a></li>
<li>build(deps): bump github/codeql-action from 3.28.19 to 3.29.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/101">go-viper/mapstructure#101</a></li>
<li>feat: add common error interface by <a
href="https://github.com/sagikazarmark"><code>@​sagikazarmark</code></a>
in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/105">go-viper/mapstructure#105</a></li>
<li>update linter by <a
href="https://github.com/sagikazarmark"><code>@​sagikazarmark</code></a>
in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/106">go-viper/mapstructure#106</a></li>
<li>Feature allow unset pointer by <a
href="https://github.com/rostislaved"><code>@​rostislaved</code></a> in
<a
href="https://redirect.github.com/go-viper/mapstructure/pull/80">go-viper/mapstructure#80</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/tklauser"><code>@​tklauser</code></a>
made their first contribution in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/85">go-viper/mapstructure#85</a></li>
<li><a href="https://github.com/peczenyj"><code>@​peczenyj</code></a>
made their first contribution in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/90">go-viper/mapstructure#90</a></li>
<li><a
href="https://github.com/Crystalix007"><code>@​Crystalix007</code></a>
made their first contribution in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/98">go-viper/mapstructure#98</a></li>
<li><a
href="https://github.com/rostislaved"><code>@​rostislaved</code></a>
made their first contribution in <a
href="https://redirect.github.com/go-viper/mapstructure/pull/80">go-viper/mapstructure#80</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/go-viper/mapstructure/compare/v2.2.1...v2.3.0">https://github.com/go-viper/mapstructure/compare/v2.2.1...v2.3.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8c61ec1924"><code>8c61ec1</code></a>
Merge pull request <a
href="https://redirect.github.com/go-viper/mapstructure/issues/80">#80</a>
from rostislaved/feature-allow-unset-pointer</li>
<li><a
href="df765f469a"><code>df765f4</code></a>
Merge pull request <a
href="https://redirect.github.com/go-viper/mapstructure/issues/106">#106</a>
from go-viper/update-linter</li>
<li><a
href="5f34b05aa1"><code>5f34b05</code></a>
update linter</li>
<li><a
href="36de1e1d74"><code>36de1e1</code></a>
Merge pull request <a
href="https://redirect.github.com/go-viper/mapstructure/issues/105">#105</a>
from go-viper/error-refactor</li>
<li><a
href="6a283a390e"><code>6a283a3</code></a>
chore: update error type doc</li>
<li><a
href="599cb73236"><code>599cb73</code></a>
Merge pull request <a
href="https://redirect.github.com/go-viper/mapstructure/issues/101">#101</a>
from go-viper/dependabot/github_actions/github/codeql...</li>
<li><a
href="ed3f921815"><code>ed3f921</code></a>
feat: remove value from error messages</li>
<li><a
href="a3f8b227dc"><code>a3f8b22</code></a>
revert: error message change</li>
<li><a
href="9661f6d07c"><code>9661f6d</code></a>
feat: add common error interface</li>
<li><a
href="f12f6c76fe"><code>f12f6c7</code></a>
Merge pull request <a
href="https://redirect.github.com/go-viper/mapstructure/issues/102">#102</a>
from m1k1o/prettify-errors2</li>
<li>Additional commits viewable in <a
href="https://github.com/go-viper/mapstructure/compare/v2.2.1...v2.3.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/go-viper/mapstructure/v2&package-manager=go_modules&previous-version=2.2.1&new-version=2.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/VictoriaMetrics/VictoriaMetrics/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-01 12:59:21 +02:00
Olexandr88
aa342cea3b docs: add a link to the main, twitter, reddit icon (#9331)

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-07-01 12:55:00 +02:00
Roman Khavronenko
26a2f4c25c deployment/docker: add troubleshooting section (#9329)
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9323


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-07-01 12:53:02 +02:00
Andrii Chubatiuk
8a3a5e6867 app/vmalert: removed inline styles from UI, which are not working with recommended CSP (#9295)
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9236


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-07-01 12:50:49 +02:00
Phuong Le
9f9a1e1996 document/data-ingestion/journald: fix small typo (#9325)
s/`-journlad.streamFields`/`-journald.streamFields`
2025-07-01 10:46:05 +02:00
Roman Khavronenko
2d27404639 app/vmselect/promql: respect window interval for rollup functions when removing resets (#9273)
removeCounterResets operated only on stalenessInterval (when set
by the user) and didn't account for the user-specified [window] in rollup
functions. Since rollup functions in MetricsQL can look behind for up
to the [window] interval, they can capture datapoints that weren't
accounted for in removeCounterResets. With this change, resets are removed
over the stalenessInterval+window interval to avoid this corner case.

Fixes
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8935#issuecomment-2978728661


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-01 10:17:39 +02:00
Nils K
91a05697fd docs: Update journald ingestion docs regarding suggested fields (#9275)
### Describe Your Changes

Point out a some things to watch out for with the default stream field
set and explain what the `_TRANSPORT` field is useful for

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

Signed-off-by: Septatrix <24257556+septatrix@users.noreply.github.com>
2025-07-01 10:14:01 +02:00
hagen1778
e9571576b8 docs: add alias for vlogs keyConcepts page
GA noticed requests to /victorialogs/keyConcepts/ not being served properly

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-07-01 10:11:58 +02:00
Nikolay
314fed78cc app/vmagent: introduce concurrency setting for kafka producer (#906)
This commit adds a new option `concurrency` for kafka producer, which adds additional workers
for publishing messages into the kafka cluster.

 This setting can be useful on networks with high latency, and it increases write throughput.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9249
2025-06-30 18:21:47 +02:00
f41gh7
6751e43975 app/vmselect/promql: support search.logSlowQueryStatsHeaders for whitelisting header keys
Headers sent together with queries could contain important information
about the source of the request. This could help to find sources that send specific queries.

Logging all headers could expose sensitive info in logs.

So introducing `-search.logSlowQueryStatsHeaders` to whitelist headers that need
to be logged. See test case for example.
2025-06-30 18:21:39 +02:00
Andrii Chubatiuk
aaefe469c1 app/vmselect: proxy /api/v1/notifiers requests to vmalert (#9267)
Proxy /api/v1/notifiers requests to vmalert in the same manner, how it's done for other vmalert endpoints

Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9267
2025-06-30 17:48:41 +02:00
Andrii Chubatiuk
fc33411d87 lib/awsapi: fixed static AWS credentials precedence
Previously, lib/awsapi used incorrect precedence of authorization methods.

 This commit aligns this behaviour with the aws-sdk.

 Also, it fixes usage of the `AWS_ROLE_ARN` env variable: it can only be used with a web identity file and must be ignored for any other configurations. 32873b942d/config/resolve_credentials.go (L121)

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9168
2025-06-30 17:47:02 +02:00
Nikolay
07574bdf7f lib/storage: properly optimise .+|^$ regex
Previously, the regex `.+|^$` was optimised into `.+`. In fact, it must be
optimised into `.*`, since this regex matches any non-empty string as well as
the empty value, which together cover any input.

 This commit adds a special case to the isDotStar check, which covers this
expression.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9290
2025-06-30 17:27:58 +02:00
Nikolay
e553a41fa0 app: add vlagent component
This commit introduces new component - VictoriaLogs Agent (vlagent).

It accepts logs data via any data ingestion protocol supported by
VictoriaLogs and forwards it to the provided remote storages.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8766
2025-06-30 17:01:05 +02:00
Alexander Frolov
f9aa74c367 lib/mergeset: misuse of unsafe.Pointer
It appears that `lib/mergeset.Item` violates the 3rd rule
https://pkg.go.dev/unsafe#Pointer

> (3) Conversion of a Pointer to a uintptr and back, with arithmetic.
> 
> Unlike in C, it is not valid to advance a pointer just beyond the end
of its original allocation:
> ```
> // INVALID: end points outside allocated space.
> b := make([]byte, n)
> end = unsafe.Pointer(uintptr(unsafe.Pointer(&b[0])) + uintptr(n))
> ```


`CGO_ENABLED=0 go test -trimpath -gcflags=all=-d=checkptr -run
TestTableAddItemsSerial`
```
fatal error: checkptr: pointer arithmetic result points to invalid allocation
```

 This commit adds a special path to item encoding that prevents invalid memory references.

 See this PR for details:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9264
2025-06-30 16:56:28 +02:00
hagen1778
21878f0760 docs: mention vmui changes separately in vlogs and vm changelogs
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-30 15:02:38 +02:00
hagen1778
aca321f235 docs: add changelog after 71eef98b0e
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-30 14:56:20 +02:00
hagen1778
0b2e976706 dashboards: make dashboards-sync
follow-up after dc08b590c4

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-30 14:27:17 +02:00
Vadim Rutkovsky
dc08b590c4 dashboards: set equal heights for each row (#9298)
### Describe Your Changes
Several rows in cluster dashboard have 7 height, although the rest are 8
height. Setting all rows to 8 to be consistent


![Untitled](https://github.com/user-attachments/assets/16ba9601-9005-45f2-a01f-4b5fd59cd5b0)


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

Depends on: #9253
2025-06-30 14:26:32 +02:00
Artur Minchukou
71eef98b0e app/vmui: add crossorigin attribute to manifest links
The change should solve the problem with loading the manifest when vmui is behind a reverse proxy
with basic auth enabled.
2025-06-30 14:25:40 +02:00
Artem Navoiev
6bc48ce669 docs: enterprise - clarify FIPS builds (#9310)
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-06-30 14:24:20 +02:00
hagen1778
482ae8135f app/vmalert/notifier: follow-up a9b641531b
* reduce wait time on notifiers update
* deterministically sort the notifiers() call result. This improves tests and output for users

a9b641531b
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-30 14:22:54 +02:00
Andrii Chubatiuk
393398996a docs/anomaly-detection: mark import_json_path as deprecated (#9283)
### Describe Your Changes

As noticed by @f41gh7 in the [operator
PR](https://github.com/VictoriaMetrics/operator/pull/1427#discussion_r2164171944),
the vm writer option `import_json_path` became deprecated starting from
[v1.19.2](cc2cd0e084).
This information is now added to the docs.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-27 15:51:53 +02:00
Mikhail
7703a95709 docs: fix typo in multi-region setup guide (#9297)
### Describe Your Changes

Just a typo correction in docs
2025-06-27 15:51:26 +02:00
Max Kotliar
114d657670 docs/victoriametrics/changelog: Add changelog for merged PR "Log errors during cache restore from file" (#9296)
### Describe Your Changes

Follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8952

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-27 15:49:56 +02:00
Hui Wang
a9b641531b vmalert: fix duplicated metrics for dynamically discovered notifiers via Consul and DNS (#9285)

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9260.

Bug was introduced in
[v1.114.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.114.0).
2025-06-27 15:49:10 +02:00
hagen1778
536df52682 dashboards: warn about correlation between concurrent requests and mem usage in panel description
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-27 11:36:05 +02:00
Zakhar Bessarab
5143ad7674 lib/backup/s3remote: retry ExpiredToken errors (#9286)
Tolerate token expiration, as it might be handled automatically by token
rotation when using automatic token rotation with EKS Pod Identity
and similar options.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9280

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2025-06-27 13:06:55 +04:00
Roman Khavronenko
6e7df578c4 app/vmctl: print import error before pipe error (#9288)
See
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9220#issuecomment-3001880833:
```
2025-06-24T17:27:47.062Z      error VictoriaMetrics/app/vmctl/backoff/backoff.go:60 got error: failed to write into "http://vminsert-vmcluster.metrics.svc.cluster.local:8480/insert/0/prometheus": io: read/write on closed pipe on attempt: 1; will retry in 2s
```

This error could happen only if the pipe-reader was closed before we tried
to write into the pipe-writer. But in that case we won't see the error
explaining why the pipe-reader was closed. In this change, we check whether
there was a pipe-reader error before checking the pipe-writer error.

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-27 09:23:12 +02:00
hagen1778
4a7b4ae852 docs: whitelist /admin/ requests in vmauth config for cluster VM
Besides `/select/`-prefixed requests vmauth also does `/admin/` requests
to fetch tenant IDs. Without whitelisting, such requests will fail to route.

I am adding this change only to generic vmauth configs where /admin/ access
is expected by default.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-27 09:18:54 +02:00
Jose Gómez-Sellés
65087f08c4 Migrate docs to cloud repo (#9230)
This is ready, since https://github.com/VictoriaMetrics/cloud/pull/3229
is merged.

This PR deletes the cloud docs folder after moving it into the private
cloud repo. The main reason is to keep things tidy and handle reviews in
a better way, as the helm charts repo is doing.

Future updates should be protected since the github actions file with
rsync command should not be messing with this folder when updating
everything.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-26 12:55:25 +02:00
Artur Minchukou
ca0479fff3 app/vmui/logs: update the color of the VictoriaLogs favicon to make the color different from VictoriaMetrics (#9270)
### Describe Your Changes

Updated the favicon color for Victoria Logs so that the icon can be
distinguished from Victoria Metrics. Used the same color as in the
victorialogs-datasource plugin.

![image](https://github.com/user-attachments/assets/521fc747-8ea7-43c7-a719-5434fe39ab06)


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-26 10:39:15 +02:00
Yury Molodov
644c7a97c8 victorialogs/docs: fix invalid stats example in "Comments" section (#9272)
### Describe Your Changes

The example in the VictoriaLogs docs (Comments section) is currently
missing an aggregate function in the `stats` pipe:

```logsql
error                       # find logs with `error` word
  | stats by (_stream) logs # then count the number of logs per `_stream` label
  | sort by (logs) desc     # then sort by the found logs in descending order
  | limit 5                 # and show top 5 streams with the biggest number of logs
```

However, `stats by (_stream) logs` is invalid syntax - `logs` is
interpreted as a function name, which causes a parsing error.

This fix replaces it with a valid version using `count()`:

```logsql
| stats by (_stream) count() as logs
```

Without `count()`, the query fails with:

```
 cannot parse 'stats' pipe: unknown stats func "logs"
```

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2025-06-26 10:34:57 +02:00
Roman Khavronenko
098cba5b73 dashboards: fix adhoc filters for vmalert and vmagent (#9271)
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8657

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-26 10:33:55 +02:00
Hui Wang
5d0e8c0d1b vmalert: fix data race in replay ut (#9278)
see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9265
2025-06-26 15:15:35 +08:00
Aliaksandr Valialkin
442bfa6c35 lib/logstorage: add tests, which verify that NaN and Inf values cannot be parsed by tryParseFloat64
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8474
2025-06-25 23:00:56 +02:00
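For context on why such tests matter: Go's standard `strconv.ParseFloat` happily accepts NaN and Inf spellings, so a parser that must reject them needs an explicit check. A standalone sketch of that check (illustrative only, not the actual `tryParseFloat64` code):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// parseFiniteFloat parses s as a float64 but rejects NaN and Inf values,
// which strconv.ParseFloat alone would accept.
func parseFiniteFloat(s string) (float64, bool) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil || math.IsNaN(f) || math.IsInf(f, 0) {
		return 0, false
	}
	return f, true
}

func main() {
	for _, s := range []string{"1.5", "NaN", "+Inf", "-Inf"} {
		f, ok := parseFiniteFloat(s)
		fmt.Printf("%q -> %v %v\n", s, f, ok)
	}
}
```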
Aliaksandr Valialkin
ee031b21a7 docs/victorialogs/LogsQL.md: document that sum() and avg() return NaN when all the field values are non-numeric
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8474
2025-06-25 22:51:42 +02:00
hagen1778
deec361a64 lib/prombpmarshal: make linter happy
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 22:06:45 +02:00
Roman Khavronenko
fd4dce81ce lib/prombp{marshal}: support metadata in remote write protocol (#9124)
This change adds support for parsing and sending Metadata field in
Prometheus Remote Write protocol. It implements the first step for
Metadata support.

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2974

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 22:00:34 +02:00
hagen1778
5431696d83 docs: fix various typos and grammar errors
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 21:45:55 +02:00
hagen1778
1b71184bfb docs: fix unresolved link in cluster docs
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 21:39:35 +02:00
Phuong Le
29c06b9543 VictoriaLogs: add a High Availability section to the cluster documentation. (#9247) 2025-06-24 18:53:24 +02:00
hagen1778
369f3f0da1 docs: rm integrations section from single-node readme
* move netdata and carbon-api integrations to a dedicated `Integrations` page
* drop https://github.com/aorfanos/vmalert-cli as it seems stale

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 16:58:56 +02:00
hagen1778
6f93f0e1a7 docs: minor typo fix
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 16:51:29 +02:00
Roman Khavronenko
96b773198f docs: update vmctl docs (#9257)
* split migration modes into separate sub-sections. This removes conflicting
#-anchors and makes it easier to read & modify in the future
* remove duplicating wording
* simplify texts

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8964

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 16:48:13 +02:00
Andrii Chubatiuk
006381266b app/vmalert: add /api/v1/notifiers endpoint and datasource_type query argument filter for /api/v1/rules and /api/v1/alerts endpoints (#9046)
### Describe Your Changes

added /api/v1/notifiers endpoint and `datasource_type` query argument
for `/api/v1/rules` and `/api/v1/alerts` API endpoints to filter groups
and rules by datasource type. required for
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8989

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8537

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 16:41:38 +02:00
Max Kotliar
ae1dffe5d3 deployment/docker: Update grafana image version due to CVE issue (#9258)
### Describe Your Changes

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9207

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-24 16:31:53 +02:00
Nikolay
80e508eac7 lib/promscrape: remove duplicate targets from service-discovery
Previously, if an annotation was updated on an object, it could result in
duplicate targets being registered on the dropped-targets service-discovery page.

 Mostly this affects endpoint annotation updates. An Endpoint holds an
annotation with the last update time. If any pod that belongs to the given
endpoint changed, it caused duplicated targets for all pods backed by
the endpoint. This makes the service-discovery debug page hard to use.

 This commit excludes `__metadata_kubernetes_*_annotation_` labels from key
generation for the dropped-targets map. Instead, it updates the target with
the new label values.

  This may lead to some target collisions, but
since the hash function could already produce collisions, it should not be a
problem.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8626
2025-06-24 11:32:29 +02:00
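The key-generation change described above can be sketched as follows. The label names, helper name, and key format are illustrative (using the conventional `__meta_kubernetes_` prefix), not the exact code in lib/promscrape:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// dropTargetKey builds a deduplication key from target labels while skipping
// Kubernetes annotation labels, so that annotation-only updates do not
// register duplicate entries in the dropped-targets map.
func dropTargetKey(labels map[string]string) string {
	pairs := make([]string, 0, len(labels))
	for name, value := range labels {
		if strings.HasPrefix(name, "__meta_kubernetes_") && strings.Contains(name, "_annotation_") {
			continue // annotation labels are excluded from key generation
		}
		pairs = append(pairs, name+"="+value)
	}
	sort.Strings(pairs)
	return strings.Join(pairs, ",")
}

func main() {
	a := map[string]string{
		"__address__": "10.0.0.1:8080",
		"__meta_kubernetes_endpoint_annotation_updated": "t1",
	}
	b := map[string]string{
		"__address__": "10.0.0.1:8080",
		"__meta_kubernetes_endpoint_annotation_updated": "t2",
	}
	// An annotation-only change yields the same key, so no duplicate target appears.
	fmt.Println(dropTargetKey(a) == dropTargetKey(b))
}
```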
f41gh7
86d8095417 docs: mention v1.120.0 release
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-06-23 16:36:30 +02:00
f41gh7
df847e62c0 docs: mention new LTS releases
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-06-23 16:32:35 +02:00
4746 changed files with 714004 additions and 352663 deletions


@@ -5,7 +5,7 @@ body:
- type: textarea
id: describe-the-component
attributes:
label: Is your question request related to a specific component?
label: Is your question related to a specific component?
placeholder: |
VictoriaMetrics, vmagent, vmalert, vmui, etc...
validations:

.github/copilot-instructions.md vendored Normal file

@@ -0,0 +1,23 @@
# Project Overview
VictoriaMetrics is a fast, cost-saving, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
## Folder Structure
- `/app`: Contains the compilable binaries.
- `/lib`: Contains the golang reusable libraries
- `/docs/victoriametrics`: Contains documentation for the project.
- `/apptest/tests`: Contains integration tests.
## Libraries and Frameworks
- Backend: Golang, no framework. Use third-party libraries sparingly.
- Frontend: React.
## Code review guidelines
Ensure the feature or bugfix includes a changelog entry in /docs/victoriametrics/changelog/CHANGELOG.md.
Verify the entry is under the ## tip section and matches the structure and style of existing entries.
Chore-only changes may be omitted from the changelog.


@@ -7,3 +7,4 @@ Please provide a brief description of the changes you made. Be as specific as po
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development goals](https://docs.victoriametrics.com/victoriametrics/goals/).

.github/scripts/lint-changelog-tip.sh vendored Executable file

@@ -0,0 +1,48 @@
#!/usr/bin/env sh
set -e
CHANGELOG_FILE="docs/victoriametrics/changelog/CHANGELOG.md"
GITHUB_BASE_REF=${GITHUB_BASE_REF:-"master"}
GIT_REMOTE=${GIT_REMOTE:-"origin"}
git diff "${GIT_REMOTE}/${GITHUB_BASE_REF}"...HEAD -- $CHANGELOG_FILE > diff.txt
if ! grep -q "^+" diff.txt; then
echo "No additions in CHANGELOG.md"
exit 0
fi
ADDED_LINES=$(grep "^+\S" diff.txt | sed 's/^+//')
START_TIP=$(grep -n "^## tip" "$CHANGELOG_FILE" | head -1 | cut -d: -f1)
if [ -z "$START_TIP" ]; then
echo "ERROR: ${CHANGELOG_FILE} does not contain a ## tip section"
exit 1
fi
END_TIP=$(awk "NR>$START_TIP && /^## / {print NR; exit}" "${CHANGELOG_FILE}")
if [ -z "$END_TIP" ]; then
END_TIP=$(wc -l < "$CHANGELOG_FILE")
fi
BAD=0
while IFS= read -r line; do
# Grep exact line inside the file and get line numbers
MATCHES=$(grep -n -F "$line" "$CHANGELOG_FILE" | cut -d: -f1)
for m in $MATCHES; do
if [ "$m" -lt "$START_TIP" ] || [ "$m" -gt "$END_TIP" ]; then
echo "'$line' on line ${m} is outside ## tip section (lines ${START_TIP}-${END_TIP})"
BAD=1
fi
done
done << EOF
$ADDED_LINES
EOF
if [ "$BAD" -ne 0 ]; then
echo "CHANGELOG modifications must be placed inside the ## tip section."
exit 1
fi
echo "CHANGELOG modifications are valid."


@@ -7,16 +7,20 @@ on:
- master
paths:
- '**.go'
- '**/Dockerfile*' # The trailing * is for app/vmui/Dockerfile-*.
- '**/Dockerfile'
- '**/Makefile'
- '!app/vmui/**'
- '.github/workflows/build.yml'
pull_request:
branches:
- cluster
- master
paths:
- '**.go'
- '**/Dockerfile*' # The trailing * is for app/vmui/Dockerfile-*.
- '**/Dockerfile'
- '**/Makefile'
- '!app/vmui/**'
- '.github/workflows/build.yml'
permissions:
contents: read
@@ -27,28 +31,51 @@ concurrency:
jobs:
build:
name: Build
name: ${{ matrix.os }}-${{ matrix.arch }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
- os: linux
arch: 386
- os: linux
arch: amd64
- os: linux
arch: arm64
- os: linux
arch: arm
- os: linux
arch: ppc64le
- os: linux
arch: s390x
- os: darwin
arch: amd64
- os: darwin
arch: arm64
- os: freebsd
arch: amd64
- os: openbsd
arch: amd64
- os: windows
arch: amd64
steps:
- name: Code checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Go
id: go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: stable
cache: false
cache-dependency-path: |
go.sum
Makefile
app/**/Makefile
go-version-file: 'go.mod'
- run: go version
- name: Cache Go artifacts
uses: actions/cache@v4
with:
path: |
~/.cache/go-build
~/go/bin
~/go/pkg/mod
key: go-artifacts-${{ runner.os }}-crossbuild-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
restore-keys: go-artifacts-${{ runner.os }}-crossbuild-
- name: Build victoria-metrics for ${{ matrix.os }}-${{ matrix.arch }}
run: make victoria-metrics-${{ matrix.os }}-${{ matrix.arch }}
- name: Run crossbuild
run: make crossbuild
- name: Build vmutils for ${{ matrix.os }}-${{ matrix.arch }}
run: make vmutils-${{ matrix.os }}-${{ matrix.arch }}

.github/workflows/changelog-linter.yml vendored Normal file

@@ -0,0 +1,19 @@
name: 'changelog-linter'
on:
pull_request:
paths:
- "docs/victoriametrics/changelog/CHANGELOG.md"
jobs:
tip-lint:
runs-on: 'ubuntu-latest'
steps:
- uses: 'actions/checkout@v6'
with:
# needed for proper diff
fetch-depth: 0
- name: 'Validate that changelog changes are under ## tip'
run: |
GITHUB_BASE_REF=${{ github.base_ref }} ./.github/scripts/lint-changelog-tip.sh


@@ -0,0 +1,37 @@
name: check-commit-signed
on:
pull_request:
jobs:
check-commit-signed:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6
with:
fetch-depth: 0 # we need full history for commit verification
- name: Check commit signatures
run: |
if [ "${{ github.event_name }}" != "pull_request" ]; then
echo "Not a PR event, skipping signature check"
exit 0
fi
RANGE="${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}"
echo "Checking commits in PR range: $RANGE"
if [ -z "$(git rev-list $RANGE)" ]; then
echo "No new commits in this PR, skipping signature check"
exit 0
fi
unsigned=$(git log --pretty="%H %G?" $RANGE | grep -vE " (G|E)$" || true)
if [ -n "$unsigned" ]; then
echo "Found unsigned commits:"
echo "$unsigned"
exit 1
fi
echo "All commits in PR are signed (G or E)"


@@ -19,11 +19,13 @@ jobs:
- name: Setup Go
id: go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: stable
go-version-file: 'go.mod'
cache: false
- run: go version
- name: Cache Go artifacts
uses: actions/cache@v4
with:
@@ -32,7 +34,7 @@ jobs:
~/go/pkg/mod
~/go/bin
key: go-artifacts-${{ runner.os }}-check-licenses-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
restore-keys: go-artifacts-${{ runner.os }}-check-licenses-
restore-keys: go-artifacts-${{ runner.os }}-check-licenses-${{ steps.go.outputs.go-version }}-
- name: Check License
run: make check-licenses


@@ -29,14 +29,15 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Set up Go
id: go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
cache: false
go-version: stable
go-version-file: 'go.mod'
- run: go version
- name: Cache Go artifacts
uses: actions/cache@v4
@@ -46,17 +47,17 @@ jobs:
~/go/bin
~/go/pkg/mod
key: go-artifacts-${{ runner.os }}-codeql-analyze-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
restore-keys: go-artifacts-${{ runner.os }}-codeql-analyze-
restore-keys: go-artifacts-${{ runner.os }}-codeql-analyze-${{ steps.go.outputs.go-version }}-
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@v4
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v3
uses: github/codeql-action/autobuild@v4
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@v4
with:
category: 'language:go'


@@ -1,46 +0,0 @@
name: 'CodeQL JS/TS'
on:
push:
branches:
- cluster
- master
paths:
- '**.js'
- '**.ts'
- '**.tsx'
pull_request:
branches:
- cluster
- master
paths:
- '**.js'
- '**.ts'
- '**.tsx'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: javascript-typescript
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: 'language:js/ts'


@@ -16,12 +16,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
path: __vm
- name: Checkout private code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
repository: VictoriaMetrics/vmdocs
token: ${{ secrets.VM_BOT_GH_TOKEN }}


@@ -1,122 +0,0 @@
name: main
on:
push:
branches:
- cluster
- master
paths:
- '**.go'
pull_request:
branches:
- cluster
- master
paths:
- '**.go'
permissions:
contents: read
concurrency:
cancel-in-progress: true
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
jobs:
lint:
name: lint
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v4
- name: Setup Go
id: go
uses: actions/setup-go@v5
with:
cache: false
go-version: stable
- name: Cache Go artifacts
uses: actions/cache@v4
with:
path: |
~/.cache/go-build
~/go/bin
~/go/pkg/mod
key: go-artifacts-${{ runner.os }}-check-all-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
restore-keys: go-artifacts-${{ runner.os }}-check-all-
- name: Run check-all
run: |
make check-all
git diff --exit-code
test:
name: test
needs: lint
runs-on: ubuntu-latest
strategy:
matrix:
scenario:
- 'test-full'
- 'test-full-386'
- 'test-pure'
steps:
- name: Code checkout
uses: actions/checkout@v4
- name: Setup Go
id: go
uses: actions/setup-go@v5
with:
cache: false
go-version: stable
- name: Cache Go artifacts
uses: actions/cache@v4
with:
path: |
~/.cache/go-build
~/go/bin
~/go/pkg/mod
key: go-artifacts-${{ runner.os }}-${{ matrix.scenario }}-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
restore-keys: go-artifacts-${{ runner.os }}-${{ matrix.scenario }}-
- name: Run tests
run: GOGC=10 make ${{ matrix.scenario}}
- name: Publish coverage
uses: codecov/codecov-action@v5
with:
files: ./coverage.txt
integration-test:
name: integration-test
needs: [lint, test]
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v4
- name: Setup Go
id: go
uses: actions/setup-go@v5
with:
cache: false
go-version: stable
- name: Cache Go artifacts
uses: actions/cache@v4
with:
path: |
~/.cache/go-build
~/go/bin
~/go/pkg/mod
key: go-artifacts-${{ runner.os }}-${{ matrix.scenario }}-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
restore-keys: go-artifacts-${{ runner.os }}-${{ matrix.scenario }}-
- name: Run integration tests
run: make integration-test

.github/workflows/test.yml vendored Normal file

@@ -0,0 +1,116 @@
name: test
on:
push:
branches:
- cluster
- master
paths:
- '**.go'
- 'go.*'
- '.github/workflows/main.yml'
pull_request:
branches:
- cluster
- master
paths:
- '**.go'
- 'go.*'
- '.github/workflows/main.yml'
permissions:
contents: read
concurrency:
cancel-in-progress: true
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
jobs:
lint:
name: lint
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v6
- name: Setup Go
id: go
uses: actions/setup-go@v6
with:
cache-dependency-path: |
go.sum
Makefile
app/**/Makefile
go-version-file: 'go.mod'
- run: go version
- name: Cache golangci-lint
uses: actions/cache@v4
with:
path: |
~/.cache/golangci-lint
~/go/bin
key: golangci-lint-${{ runner.os }}-${{ steps.go.outputs.go-version }}-${{ hashFiles('.golangci.yml') }}
- name: Run check-all
run: |
make check-all
git diff --exit-code
unit:
name: unit
runs-on: ubuntu-latest
strategy:
matrix:
scenario:
- 'test-full'
- 'test-full-386'
- 'test-pure'
steps:
- name: Code checkout
uses: actions/checkout@v6
- name: Setup Go
id: go
uses: actions/setup-go@v6
with:
cache-dependency-path: |
go.sum
Makefile
app/**/Makefile
go-version-file: 'go.mod'
- run: go version
- name: Run tests
run: make ${{ matrix.scenario}}
- name: Publish coverage
uses: codecov/codecov-action@v5
with:
files: ./coverage.txt
apptest:
name: apptest
runs-on: apptest
steps:
- name: Code checkout
uses: actions/checkout@v6
- name: Setup Go
id: go
uses: actions/setup-go@v6
with:
cache-dependency-path: |
go.sum
Makefile
app/**/Makefile
go-version-file: 'go.mod'
- run: go version
- name: Run app tests
run: make apptest

.github/workflows/vmui.yml vendored Normal file

@@ -0,0 +1,88 @@
name: vmui
on:
push:
branches:
- cluster
- master
paths:
- 'app/vmui/packages/vmui/**'
- '.github/workflows/vmui.yml'
pull_request:
branches:
- cluster
- master
paths:
- 'app/vmui/packages/vmui/**'
- '.github/workflows/vmui.yml'
permissions:
contents: read
packages: read
pull-requests: read
checks: write
concurrency:
cancel-in-progress: true
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
jobs:
vmui-checks:
name: VMUI Checks (lint, test, typecheck)
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v6
- name: Cache node_modules
id: cache
uses: actions/cache@v5
with:
path: app/vmui/packages/vmui/node_modules
key: vmui-deps-${{ runner.os }}-${{ hashFiles('app/vmui/packages/vmui/package-lock.json', 'app/vmui/Dockerfile-build') }}
restore-keys: |
vmui-deps-${{ runner.os }}-
- name: Install dependencies
if: steps.cache.outputs.cache-hit != 'true'
run: make vmui-install
- name: Run lint
id: lint
run: make vmui-lint
continue-on-error: true
env:
VMUI_SKIP_INSTALL: true
- name: Run tests
id: test
run: make vmui-test
continue-on-error: true
env:
VMUI_SKIP_INSTALL: true
- name: Run typecheck
id: typecheck
run: make vmui-typecheck
continue-on-error: true
env:
VMUI_SKIP_INSTALL: true
- name: Annotate Code Linting Results
uses: ataylorme/eslint-annotate-action@v3
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
report-json: app/vmui/packages/vmui/vmui-lint-report.json
- name: Check overall status
run: |
echo "Lint status: ${{ steps.lint.outcome }}"
echo "Test status: ${{ steps.test.outcome }}"
echo "Typecheck status: ${{ steps.typecheck.outcome }}"
if [[ "${{ steps.lint.outcome }}" == "failure" || "${{ steps.test.outcome }}" == "failure" || "${{ steps.typecheck.outcome }}" == "failure" ]]; then
echo "One or more checks failed"
exit 1
else
echo "All checks passed"
fi

.gitignore vendored

@@ -12,6 +12,7 @@
/victoria-logs-data
/victoria-metrics-data
/vmagent-remotewrite-data
/vlagent-remotewritewrite
/vmstorage-data
/vmselect-cache
/package/temp-deb-*


@@ -1,22 +1,29 @@
run:
timeout: 2m
version: "2"
linters:
enable:
- revive
issues:
exclude-rules:
- linters:
- staticcheck
text: "SA(4003|1019|5011):"
include:
- EXC0012
- EXC0014
linters-settings:
errcheck:
exclude-functions:
- "fmt.Fprintf"
- "fmt.Fprint"
- "(net/http.ResponseWriter).Write"
settings:
errcheck:
exclude-functions:
- fmt.Fprintf
- fmt.Fprint
- (net/http.ResponseWriter).Write
exclusions:
generated: lax
presets:
- common-false-positives
- legacy
- std-error-handling
rules:
- linters:
- staticcheck
text: 'SA(4003|1019|5011):'
paths:
- third_party$
- builtin$
- examples$
formatters:
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$


@@ -175,7 +175,7 @@
END OF TERMS AND CONDITIONS
Copyright 2019-2025 VictoriaMetrics, Inc.
Copyright 2019-2026 VictoriaMetrics, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

Makefile

@@ -11,9 +11,14 @@ ifeq ($(PKG_TAG),)
PKG_TAG := $(BUILDINFO_TAG)
endif
EXTRA_DOCKER_TAG_SUFFIX ?=
EXTRA_GO_BUILD_TAGS ?=
GO_BUILDINFO = -X '$(PKG_PREFIX)/lib/buildinfo.Version=$(APP_NAME)-$(DATEINFO_TAG)-$(BUILDINFO_TAG)'
TAR_OWNERSHIP ?= --owner=1000 --group=1000
GOLANGCI_LINT_VERSION := 2.9.0
.PHONY: $(MAKECMDGOALS)
include app/*/Makefile
@@ -22,11 +27,10 @@ include docs/Makefile
include deployment/*/Makefile
include dashboards/Makefile
include package/release/Makefile
include benchmarks/Makefile
all: \
victoria-metrics-prod \
victoria-logs-prod \
vlogscli-prod \
vmagent-prod \
vmalert-prod \
vmalert-tool-prod \
@@ -50,8 +54,6 @@ publish: \
package: \
package-victoria-metrics \
package-victoria-logs \
package-vlogscli \
package-vmagent \
package-vmalert \
package-vmalert-tool \
@@ -123,6 +125,15 @@ vmutils-linux-ppc64le: \
vmrestore-linux-ppc64le \
vmctl-linux-ppc64le
vmutils-linux-s390x: \
vmagent-linux-s390x \
vmalert-linux-s390x \
vmalert-tool-linux-s390x \
vmauth-linux-s390x \
vmbackup-linux-s390x \
vmrestore-linux-s390x \
vmctl-linux-s390x
vmutils-darwin-amd64: \
vmagent-darwin-amd64 \
vmalert-darwin-amd64 \
@@ -168,9 +179,11 @@ vmutils-windows-amd64: \
vmrestore-windows-amd64 \
vmctl-windows-amd64
# When adding a new crossbuild target, please also add it to the .github/workflows/build.yml
crossbuild:
$(MAKE_PARALLEL) victoria-metrics-crossbuild vmutils-crossbuild
# When adding a new crossbuild target, please also add it to the .github/workflows/build.yml
victoria-metrics-crossbuild: \
victoria-metrics-linux-386 \
victoria-metrics-linux-amd64 \
@@ -183,6 +196,7 @@ victoria-metrics-crossbuild: \
victoria-metrics-openbsd-amd64 \
victoria-metrics-windows-amd64
# When adding a new crossbuild target, please also add it to the .github/workflows/build.yml
vmutils-crossbuild: \
vmutils-linux-386 \
vmutils-linux-amd64 \
@@ -195,6 +209,31 @@ vmutils-crossbuild: \
vmutils-openbsd-amd64 \
vmutils-windows-amd64
publish-final-images:
PKG_TAG=$(TAG) APP_NAME=victoria-metrics $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmagent $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmalert $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmalert-tool $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmauth $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmbackup $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmrestore $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmctl $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-cluster APP_NAME=vminsert $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-cluster APP_NAME=vmselect $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-cluster APP_NAME=vmstorage $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=victoria-metrics $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmagent $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmalert $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmauth $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmbackup $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmrestore $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise-cluster APP_NAME=vminsert $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise-cluster APP_NAME=vmselect $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise-cluster APP_NAME=vmstorage $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmgateway $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmbackupmanager $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) $(MAKE) publish-latest
publish-latest:
PKG_TAG=$(TAG) APP_NAME=victoria-metrics $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmagent $(MAKE) publish-via-docker-latest && \
@@ -210,10 +249,6 @@ publish-latest:
PKG_TAG=$(TAG)-enterprise APP_NAME=vmgateway $(MAKE) publish-via-docker-latest
PKG_TAG=$(TAG)-enterprise APP_NAME=vmbackupmanager $(MAKE) publish-via-docker-latest
publish-victoria-logs-latest:
PKG_TAG=$(TAG) APP_NAME=victoria-logs $(MAKE) publish-via-docker-latest
PKG_TAG=$(TAG) APP_NAME=vlogscli $(MAKE) publish-via-docker-latest
publish-release:
rm -rf bin/*
git checkout $(TAG) && $(MAKE) release && $(MAKE) publish && \
@@ -231,6 +266,7 @@ release-victoria-metrics: \
release-victoria-metrics-linux-amd64 \
release-victoria-metrics-linux-arm \
release-victoria-metrics-linux-arm64 \
release-victoria-metrics-linux-s390x \
release-victoria-metrics-darwin-amd64 \
release-victoria-metrics-darwin-arm64 \
release-victoria-metrics-freebsd-amd64 \
@@ -249,6 +285,9 @@ release-victoria-metrics-linux-arm:
release-victoria-metrics-linux-arm64:
GOOS=linux GOARCH=arm64 $(MAKE) release-victoria-metrics-goos-goarch
release-victoria-metrics-linux-s390x:
GOOS=linux GOARCH=s390x $(MAKE) release-victoria-metrics-goos-goarch
release-victoria-metrics-darwin-amd64:
GOOS=darwin GOARCH=amd64 $(MAKE) release-victoria-metrics-goos-goarch
@@ -283,133 +322,12 @@ release-victoria-metrics-windows-goarch: victoria-metrics-windows-$(GOARCH)-prod
cd bin && rm -rf \
victoria-metrics-windows-$(GOARCH)-prod.exe
release-victoria-logs-bundle: \
release-victoria-logs \
release-vlogscli
publish-victoria-logs-bundle: \
publish-victoria-logs \
publish-vlogscli
release-victoria-logs:
$(MAKE_PARALLEL) release-victoria-logs-linux-386 \
release-victoria-logs-linux-amd64 \
release-victoria-logs-linux-arm \
release-victoria-logs-linux-arm64 \
release-victoria-logs-darwin-amd64 \
release-victoria-logs-darwin-arm64 \
release-victoria-logs-freebsd-amd64 \
release-victoria-logs-openbsd-amd64 \
release-victoria-logs-windows-amd64
release-victoria-logs-linux-386:
GOOS=linux GOARCH=386 $(MAKE) release-victoria-logs-goos-goarch
release-victoria-logs-linux-amd64:
GOOS=linux GOARCH=amd64 $(MAKE) release-victoria-logs-goos-goarch
release-victoria-logs-linux-arm:
GOOS=linux GOARCH=arm $(MAKE) release-victoria-logs-goos-goarch
release-victoria-logs-linux-arm64:
GOOS=linux GOARCH=arm64 $(MAKE) release-victoria-logs-goos-goarch
release-victoria-logs-darwin-amd64:
GOOS=darwin GOARCH=amd64 $(MAKE) release-victoria-logs-goos-goarch
release-victoria-logs-darwin-arm64:
GOOS=darwin GOARCH=arm64 $(MAKE) release-victoria-logs-goos-goarch
release-victoria-logs-freebsd-amd64:
GOOS=freebsd GOARCH=amd64 $(MAKE) release-victoria-logs-goos-goarch
release-victoria-logs-openbsd-amd64:
GOOS=openbsd GOARCH=amd64 $(MAKE) release-victoria-logs-goos-goarch
release-victoria-logs-windows-amd64:
GOARCH=amd64 $(MAKE) release-victoria-logs-windows-goarch
release-victoria-logs-goos-goarch: victoria-logs-$(GOOS)-$(GOARCH)-prod
cd bin && \
tar $(TAR_OWNERSHIP) --transform="flags=r;s|-$(GOOS)-$(GOARCH)||" -czf victoria-logs-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
victoria-logs-$(GOOS)-$(GOARCH)-prod \
&& sha256sum victoria-logs-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
victoria-logs-$(GOOS)-$(GOARCH)-prod \
| sed s/-$(GOOS)-$(GOARCH)-prod/-prod/ > victoria-logs-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt
cd bin && rm -rf victoria-logs-$(GOOS)-$(GOARCH)-prod
release-victoria-logs-windows-goarch: victoria-logs-windows-$(GOARCH)-prod
cd bin && \
zip victoria-logs-windows-$(GOARCH)-$(PKG_TAG).zip \
victoria-logs-windows-$(GOARCH)-prod.exe \
&& sha256sum victoria-logs-windows-$(GOARCH)-$(PKG_TAG).zip \
victoria-logs-windows-$(GOARCH)-prod.exe \
> victoria-logs-windows-$(GOARCH)-$(PKG_TAG)_checksums.txt
cd bin && rm -rf \
victoria-logs-windows-$(GOARCH)-prod.exe
release-vlogscli:
$(MAKE_PARALLEL) release-vlogscli-linux-386 \
release-vlogscli-linux-amd64 \
release-vlogscli-linux-arm \
release-vlogscli-linux-arm64 \
release-vlogscli-darwin-amd64 \
release-vlogscli-darwin-arm64 \
release-vlogscli-freebsd-amd64 \
release-vlogscli-openbsd-amd64 \
release-vlogscli-windows-amd64
release-vlogscli-linux-386:
GOOS=linux GOARCH=386 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-linux-amd64:
GOOS=linux GOARCH=amd64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-linux-arm:
GOOS=linux GOARCH=arm $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-linux-arm64:
GOOS=linux GOARCH=arm64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-darwin-amd64:
GOOS=darwin GOARCH=amd64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-darwin-arm64:
GOOS=darwin GOARCH=arm64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-freebsd-amd64:
GOOS=freebsd GOARCH=amd64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-openbsd-amd64:
GOOS=openbsd GOARCH=amd64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-windows-amd64:
GOARCH=amd64 $(MAKE) release-vlogscli-windows-goarch
release-vlogscli-goos-goarch: vlogscli-$(GOOS)-$(GOARCH)-prod
cd bin && \
tar $(TAR_OWNERSHIP) --transform="flags=r;s|-$(GOOS)-$(GOARCH)||" -czf vlogscli-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
vlogscli-$(GOOS)-$(GOARCH)-prod \
&& sha256sum vlogscli-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
vlogscli-$(GOOS)-$(GOARCH)-prod \
| sed s/-$(GOOS)-$(GOARCH)-prod/-prod/ > vlogscli-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt
cd bin && rm -rf vlogscli-$(GOOS)-$(GOARCH)-prod
release-vlogscli-windows-goarch: vlogscli-windows-$(GOARCH)-prod
cd bin && \
zip vlogscli-windows-$(GOARCH)-$(PKG_TAG).zip \
vlogscli-windows-$(GOARCH)-prod.exe \
&& sha256sum vlogscli-windows-$(GOARCH)-$(PKG_TAG).zip \
vlogscli-windows-$(GOARCH)-prod.exe \
> vlogscli-windows-$(GOARCH)-$(PKG_TAG)_checksums.txt
cd bin && rm -rf \
vlogscli-windows-$(GOARCH)-prod.exe
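Each release target above writes a `*_checksums.txt` next to the archive via `sha256sum`. A downloaded artifact can be checked against that file with `sha256sum -c`; `--ignore-missing` skips the entries for the per-platform binaries that are removed after packaging. A sketch with illustrative file names:

```shell
# Create a stand-in artifact and checksum file, then verify it,
# the same way a release tarball is checked against its checksums file.
echo 'demo payload' > victoria-demo.tar.gz
sha256sum victoria-demo.tar.gz > victoria-demo_checksums.txt
sha256sum -c --ignore-missing victoria-demo_checksums.txt   # prints: victoria-demo.tar.gz: OK
```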
release-vmutils: \
release-vmutils-linux-386 \
release-vmutils-linux-amd64 \
release-vmutils-linux-arm64 \
release-vmutils-linux-arm \
release-vmutils-linux-s390x \
release-vmutils-darwin-amd64 \
release-vmutils-darwin-arm64 \
release-vmutils-freebsd-amd64 \
@@ -428,6 +346,9 @@ release-vmutils-linux-arm64:
release-vmutils-linux-arm:
GOOS=linux GOARCH=arm $(MAKE) release-vmutils-goos-goarch
release-vmutils-linux-s390x:
GOOS=linux GOARCH=s390x $(MAKE) release-vmutils-goos-goarch
release-vmutils-darwin-amd64:
GOOS=darwin GOARCH=amd64 $(MAKE) release-vmutils-goos-goarch
@@ -514,7 +435,7 @@ release-vmutils-windows-goarch: \
vmctl-windows-$(GOARCH)-prod.exe
pprof-cpu:
go tool pprof -trim_path=github.com/VictoriaMetrics/VictoriaMetrics@ $(PPROF_FILE)
go tool pprof -trim_path=github.com/VictoriaMetrics/VictoriaMetrics $(PPROF_FILE)
fmt:
gofmt -l -w -s ./lib
@@ -522,7 +443,7 @@ fmt:
gofmt -l -w -s ./apptest
vet:
GOEXPERIMENT=synctest go vet ./lib/...
go vet -tags 'synctest' ./lib/...
go vet ./app/...
go vet ./apptest/...
@@ -531,61 +452,79 @@ check-all: fmt vet golangci-lint govulncheck
clean-checkers: remove-golangci-lint remove-govulncheck
test:
GOEXPERIMENT=synctest go test ./lib/... ./app/...
go test -tags 'synctest' ./lib/... ./app/...
test-race:
GOEXPERIMENT=synctest go test -race ./lib/... ./app/...
go test -tags 'synctest' -race ./lib/... ./app/...
test-pure:
GOEXPERIMENT=synctest CGO_ENABLED=0 go test ./lib/... ./app/...
CGO_ENABLED=0 go test -tags 'synctest' ./lib/... ./app/...
test-full:
GOEXPERIMENT=synctest go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
go test -tags 'synctest' -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
test-full-386:
GOEXPERIMENT=synctest GOARCH=386 go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
GOARCH=386 go test -tags 'synctest' -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
integration-test: victoria-metrics vmagent vmalert vmauth vmctl vmbackup vmrestore victoria-logs
go test ./apptest/... -skip="^TestCluster.*"
apptest:
$(MAKE) victoria-metrics vmagent vmalert vmauth vmctl vmbackup vmrestore
go test ./apptest/... -skip="^Test(Cluster|Legacy).*"
apptest-legacy: victoria-metrics vmbackup vmrestore
OS=$$(uname | tr '[:upper:]' '[:lower:]'); \
ARCH=$$(uname -m | tr '[:upper:]' '[:lower:]' | sed 's/x86_64/amd64/'); \
VERSION=v1.132.0; \
VMSINGLE=victoria-metrics-$${OS}-$${ARCH}-$${VERSION}.tar.gz; \
VMCLUSTER=victoria-metrics-$${OS}-$${ARCH}-$${VERSION}-cluster.tar.gz; \
URL=https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/$${VERSION}; \
DIR=/tmp/$${VERSION}; \
test -d $${DIR} || (mkdir $${DIR} && \
curl --output-dir /tmp -LO $${URL}/$${VMSINGLE} && tar xzf /tmp/$${VMSINGLE} -C $${DIR} && \
curl --output-dir /tmp -LO $${URL}/$${VMCLUSTER} && tar xzf /tmp/$${VMCLUSTER} -C $${DIR} \
); \
VM_LEGACY_VMSINGLE_PATH=$${DIR}/victoria-metrics-prod \
VM_LEGACY_VMSTORAGE_PATH=$${DIR}/vmstorage-prod \
go test ./apptest/tests -run="^TestLegacySingle.*"
benchmark:
GOEXPERIMENT=synctest go test -bench=. ./lib/...
go test -bench=. ./app/...
go test -run=NO_TESTS -bench=. ./lib/...
go test -run=NO_TESTS -bench=. ./app/...
benchmark-pure:
GOEXPERIMENT=synctest CGO_ENABLED=0 go test -bench=. ./lib/...
CGO_ENABLED=0 go test -bench=. ./app/...
CGO_ENABLED=0 go test -run=NO_TESTS -bench=. ./lib/...
CGO_ENABLED=0 go test -run=NO_TESTS -bench=. ./app/...
vendor-update:
go get -u ./lib/...
go get -u ./app/...
go mod tidy -compat=1.24
go mod tidy -compat=1.26
go mod vendor
app-local:
CGO_ENABLED=1 go build $(RACE) -ldflags "$(GO_BUILDINFO)" -o bin/$(APP_NAME)$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
CGO_ENABLED=1 go build $(RACE) -ldflags "$(GO_BUILDINFO)" -tags "$(EXTRA_GO_BUILD_TAGS)" -o bin/$(APP_NAME)$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
app-local-pure:
CGO_ENABLED=0 go build $(RACE) -ldflags "$(GO_BUILDINFO)" -o bin/$(APP_NAME)-pure$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
CGO_ENABLED=0 go build $(RACE) -ldflags "$(GO_BUILDINFO)" -tags "$(EXTRA_GO_BUILD_TAGS)" -o bin/$(APP_NAME)-pure$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
app-local-goos-goarch:
CGO_ENABLED=$(CGO_ENABLED) GOOS=$(GOOS) GOARCH=$(GOARCH) go build $(RACE) -ldflags "$(GO_BUILDINFO)" -o bin/$(APP_NAME)-$(GOOS)-$(GOARCH)$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
CGO_ENABLED=$(CGO_ENABLED) GOOS=$(GOOS) GOARCH=$(GOARCH) go build $(RACE) -ldflags "$(GO_BUILDINFO)" -tags "$(EXTRA_GO_BUILD_TAGS)" -o bin/$(APP_NAME)-$(GOOS)-$(GOARCH)$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
app-local-windows-goarch:
CGO_ENABLED=0 GOOS=windows GOARCH=$(GOARCH) go build $(RACE) -ldflags "$(GO_BUILDINFO)" -o bin/$(APP_NAME)-windows-$(GOARCH)$(RACE).exe $(PKG_PREFIX)/app/$(APP_NAME)
CGO_ENABLED=0 GOOS=windows GOARCH=$(GOARCH) go build $(RACE) -ldflags "$(GO_BUILDINFO)" -tags "$(EXTRA_GO_BUILD_TAGS)" -o bin/$(APP_NAME)-windows-$(GOARCH)$(RACE).exe $(PKG_PREFIX)/app/$(APP_NAME)
quicktemplate-gen: install-qtc
qtc
qtc -dir=lib
qtc -dir=app
install-qtc:
which qtc || go install github.com/valyala/quicktemplate/qtc@latest
golangci-lint: install-golangci-lint
GOEXPERIMENT=synctest golangci-lint run
golangci-lint run --build-tags 'synctest'
install-golangci-lint:
which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.64.7
which golangci-lint && (golangci-lint --version | grep -q $(GOLANGCI_LINT_VERSION)) || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v$(GOLANGCI_LINT_VERSION)
remove-golangci-lint:
rm -rf `which golangci-lint`
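The install-golangci-lint recipe above uses a `which && version-check || install` chain, so the installer runs only when the binary is missing or reports a version different from GOLANGCI_LINT_VERSION. The same pattern in isolation, with a hypothetical tool name:

```shell
# Run the installer only when `mytool` is absent or at the wrong version.
WANT_VERSION=2.0.0
command -v mytool >/dev/null 2>&1 && mytool --version | grep -q "$WANT_VERSION" \
  || echo "installing mytool $WANT_VERSION"
```

When `mytool` is not on PATH, the left side of `||` fails and the install branch runs.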

View File

@@ -1,14 +1,14 @@
# VictoriaMetrics
![Latest Release](https://img.shields.io/github/v/release/VictoriaMetrics/VictoriaMetrics?sort=semver&label=&filter=!*-victorialogs&logo=github&labelColor=gray&color=gray&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Freleases%2Flatest)
[![Latest Release](https://img.shields.io/github/v/release/VictoriaMetrics/VictoriaMetrics?sort=semver&label=&filter=!*-victorialogs&logo=github&labelColor=gray&color=gray&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Freleases%2Flatest)](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)
![Docker Pulls](https://img.shields.io/docker/pulls/victoriametrics/victoria-metrics?label=&logo=docker&logoColor=white&labelColor=2496ED&color=2496ED&link=https%3A%2F%2Fhub.docker.com%2Fr%2Fvictoriametrics%2Fvictoria-metrics)
![Go Report](https://goreportcard.com/badge/github.com/VictoriaMetrics/VictoriaMetrics?link=https%3A%2F%2Fgoreportcard.com%2Freport%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics)
![Build Status](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/main.yml/badge.svg?branch=master&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Factions)
![codecov](https://codecov.io/gh/VictoriaMetrics/VictoriaMetrics/branch/master/graph/badge.svg?link=https%3A%2F%2Fcodecov.io%2Fgh%2FVictoriaMetrics%2FVictoriaMetrics)
![License](https://img.shields.io/github/license/VictoriaMetrics/VictoriaMetrics?labelColor=green&label=&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Fblob%2Fmaster%2FLICENSE)
[![Go Report](https://goreportcard.com/badge/github.com/VictoriaMetrics/VictoriaMetrics?link=https%3A%2F%2Fgoreportcard.com%2Freport%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics)](https://goreportcard.com/report/github.com/VictoriaMetrics/VictoriaMetrics)
[![Build Status](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/build.yml/badge.svg?branch=master&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Factions)](https://github.com/VictoriaMetrics/VictoriaMetrics/actions/workflows/build.yml)
[![codecov](https://codecov.io/gh/VictoriaMetrics/VictoriaMetrics/branch/master/graph/badge.svg?link=https%3A%2F%2Fcodecov.io%2Fgh%2FVictoriaMetrics%2FVictoriaMetrics)](https://app.codecov.io/gh/VictoriaMetrics/VictoriaMetrics)
[![License](https://img.shields.io/github/license/VictoriaMetrics/VictoriaMetrics?labelColor=green&label=&link=https%3A%2F%2Fgithub.com%2FVictoriaMetrics%2FVictoriaMetrics%2Fblob%2Fmaster%2FLICENSE)](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/LICENSE)
![Slack](https://img.shields.io/badge/Join-4A154B?logo=slack&link=https%3A%2F%2Fslack.victoriametrics.com)
![X](https://img.shields.io/twitter/follow/VictoriaMetrics?style=flat&label=Follow&color=black&logo=x&labelColor=black&link=https%3A%2F%2Fx.com%2FVictoriaMetrics)
![Reddit](https://img.shields.io/reddit/subreddit-subscribers/VictoriaMetrics?style=flat&label=Join&labelColor=red&logoColor=white&logo=reddit&link=https%3A%2F%2Fwww.reddit.com%2Fr%2FVictoriaMetrics)
[![X](https://img.shields.io/twitter/follow/VictoriaMetrics?style=flat&label=Follow&color=black&logo=x&labelColor=black&link=https%3A%2F%2Fx.com%2FVictoriaMetrics)](https://x.com/VictoriaMetrics/)
[![Reddit](https://img.shields.io/reddit/subreddit-subscribers/VictoriaMetrics?style=flat&label=Join&labelColor=red&logoColor=white&logo=reddit&link=https%3A%2F%2Fwww.reddit.com%2Fr%2FVictoriaMetrics)](https://www.reddit.com/r/VictoriaMetrics/)
<picture>
<source srcset="docs/victoriametrics/logo_white.webp" media="(prefers-color-scheme: dark)">
@@ -16,16 +16,21 @@
<img src="docs/victoriametrics/logo.webp" width="300" alt="VictoriaMetrics logo">
</picture>
VictoriaMetrics is a fast, cost-saving, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
VictoriaMetrics is a fast, cost-effective, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
Here are some resources and information about VictoriaMetrics:
- Documentation: [docs.victoriametrics.com](https://docs.victoriametrics.com)
- Case studies: [Grammarly, Roblox, Wix,...](https://docs.victoriametrics.com/victoriametrics/casestudies/).
- Available: [Binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest), docker images [Docker Hub](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and [Quay](https://quay.io/repository/victoriametrics/victoria-metrics), [Source code](https://github.com/VictoriaMetrics/VictoriaMetrics)
- Deployment types: [Single-node version](https://docs.victoriametrics.com/), [Cluster version](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/), and [Enterprise version](https://docs.victoriametrics.com/victoriametrics/enterprise/)
- Changelog: [CHANGELOG](https://docs.victoriametrics.com/victoriametrics/changelog/), and [How to upgrade](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-upgrade-victoriametrics)
- Community: [Slack](https://slack.victoriametrics.com/), [X (Twitter)](https://x.com/VictoriaMetrics), [LinkedIn](https://www.linkedin.com/company/victoriametrics/), [YouTube](https://www.youtube.com/@VictoriaMetrics)
- **Case studies**: [Grammarly, Roblox, Wix, Spotify,...](https://docs.victoriametrics.com/victoriametrics/casestudies/).
- **Available**: [Binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest), Docker images on [Docker Hub](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and [Quay](https://quay.io/repository/victoriametrics/victoria-metrics), [Source code](https://github.com/VictoriaMetrics/VictoriaMetrics).
- **Deployment types**: [Single-node version](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/) and [Cluster version](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/) under [Apache License 2.0](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/LICENSE).
- **Getting started:** Read [key concepts](https://docs.victoriametrics.com/victoriametrics/keyconcepts/) and follow the
[quick start guide](https://docs.victoriametrics.com/victoriametrics/quick-start/).
- **Community**: [Slack](https://slack.victoriametrics.com/) (join via [Slack Inviter](https://slack.victoriametrics.com/)), [X (Twitter)](https://x.com/VictoriaMetrics), [YouTube](https://www.youtube.com/@VictoriaMetrics). See full list [here](https://docs.victoriametrics.com/victoriametrics/#community-and-contributions).
- **Changelog**: The project evolves fast, so check the [CHANGELOG](https://docs.victoriametrics.com/victoriametrics/changelog/) and [How to upgrade](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-upgrade-victoriametrics).
- **Enterprise support:** [Contact us](mailto:info@victoriametrics.com) for commercial support with additional [enterprise features](https://docs.victoriametrics.com/victoriametrics/enterprise/).
- **Enterprise releases:** Enterprise and [long-term support releases (LTS)](https://docs.victoriametrics.com/victoriametrics/lts-releases/) are publicly available and can be evaluated for free
using a [free trial license](https://victoriametrics.com/products/enterprise/trial/).
- **Security:** we achieved [security certifications](https://victoriametrics.com/security/) for Database Software Development and Software-Based Monitoring Services.
Yes, we open-source both the single-node VictoriaMetrics and the cluster version.

View File

@@ -4,12 +4,11 @@
The following versions of VictoriaMetrics receive regular security fixes:
| Version | Supported |
|---------|--------------------|
| [latest release](https://docs.victoriametrics.com/victoriametrics/changelog/) | :white_check_mark: |
| v1.102.x [LTS line](https://docs.victoriametrics.com/victoriametrics/lts-releases/) | :white_check_mark: |
| v1.110.x [LTS line](https://docs.victoriametrics.com/victoriametrics/lts-releases/) | :white_check_mark: |
| other releases | :x: |
| Version | Supported |
|--------------------------------------------------------------------------------|--------------------|
| [Latest release](https://docs.victoriametrics.com/victoriametrics/changelog/) | :white_check_mark: |
| [LTS releases](https://docs.victoriametrics.com/victoriametrics/lts-releases/) | :white_check_mark: |
| other releases | :x: |
See [this page](https://victoriametrics.com/security/) for more details.

View File

@@ -1,113 +0,0 @@
# All these commands must run from repository root.
victoria-logs:
APP_NAME=victoria-logs $(MAKE) app-local
victoria-logs-race:
APP_NAME=victoria-logs RACE=-race $(MAKE) app-local
victoria-logs-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker
victoria-logs-pure-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-pure
victoria-logs-linux-amd64-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-linux-amd64
victoria-logs-linux-arm-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-linux-arm
victoria-logs-linux-arm64-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-linux-arm64
victoria-logs-linux-ppc64le-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-linux-ppc64le
victoria-logs-linux-386-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-linux-386
victoria-logs-darwin-amd64-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-darwin-amd64
victoria-logs-darwin-arm64-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-darwin-arm64
victoria-logs-freebsd-amd64-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-freebsd-amd64
victoria-logs-openbsd-amd64-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-openbsd-amd64
victoria-logs-windows-amd64-prod:
APP_NAME=victoria-logs $(MAKE) app-via-docker-windows-amd64
package-victoria-logs:
APP_NAME=victoria-logs $(MAKE) package-via-docker
package-victoria-logs-pure:
APP_NAME=victoria-logs $(MAKE) package-via-docker-pure
package-victoria-logs-amd64:
APP_NAME=victoria-logs $(MAKE) package-via-docker-amd64
package-victoria-logs-arm:
APP_NAME=victoria-logs $(MAKE) package-via-docker-arm
package-victoria-logs-arm64:
APP_NAME=victoria-logs $(MAKE) package-via-docker-arm64
package-victoria-logs-ppc64le:
APP_NAME=victoria-logs $(MAKE) package-via-docker-ppc64le
package-victoria-logs-386:
APP_NAME=victoria-logs $(MAKE) package-via-docker-386
publish-victoria-logs:
APP_NAME=victoria-logs $(MAKE) publish-via-docker
victoria-logs-linux-amd64:
APP_NAME=victoria-logs CGO_ENABLED=1 GOOS=linux GOARCH=amd64 $(MAKE) app-local-goos-goarch
victoria-logs-linux-arm:
APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=linux GOARCH=arm $(MAKE) app-local-goos-goarch
victoria-logs-linux-arm64:
APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=linux GOARCH=arm64 $(MAKE) app-local-goos-goarch
victoria-logs-linux-ppc64le:
APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le $(MAKE) app-local-goos-goarch
victoria-logs-linux-s390x:
APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=linux GOARCH=s390x $(MAKE) app-local-goos-goarch
victoria-logs-linux-loong64:
APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=linux GOARCH=loong64 $(MAKE) app-local-goos-goarch
victoria-logs-linux-386:
APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=linux GOARCH=386 $(MAKE) app-local-goos-goarch
victoria-logs-darwin-amd64:
APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 $(MAKE) app-local-goos-goarch
victoria-logs-darwin-arm64:
APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 $(MAKE) app-local-goos-goarch
victoria-logs-freebsd-amd64:
APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 $(MAKE) app-local-goos-goarch
victoria-logs-openbsd-amd64:
APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=openbsd GOARCH=amd64 $(MAKE) app-local-goos-goarch
victoria-logs-windows-amd64:
GOARCH=amd64 APP_NAME=victoria-logs $(MAKE) app-local-windows-goarch
victoria-logs-pure:
APP_NAME=victoria-logs $(MAKE) app-local-pure
run-victoria-logs:
mkdir -p victoria-logs-data
DOCKER_OPTS='-v $(shell pwd)/victoria-logs-data:/victoria-logs-data' \
APP_NAME=victoria-logs \
ARGS='' \
$(MAKE) run-via-docker

View File

@@ -0,0 +1 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).

View File

@@ -1,8 +0,0 @@
ARG base_image=non-existing
FROM $base_image
EXPOSE 9428
ENTRYPOINT ["/victoria-logs-prod"]
ARG src_binary=non-existing
COPY $src_binary ./victoria-logs-prod

View File

@@ -1,113 +0,0 @@
package main
import (
"flag"
"fmt"
"net/http"
"os"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlselect"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/pushmetrics"
)
var (
httpListenAddrs = flagutil.NewArrayString("httpListenAddr", "TCP address to listen for incoming http requests. See also -httpListenAddr.useProxyProtocol")
useProxyProtocol = flagutil.NewArrayBool("httpListenAddr.useProxyProtocol", "Whether to use proxy protocol for connections accepted at the given -httpListenAddr . "+
"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . "+
"With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing")
)
func main() {
// Write flags and help message to stdout, since it is easier to grep or pipe.
flag.CommandLine.SetOutput(os.Stdout)
flag.Usage = usage
envflag.Parse()
buildinfo.Init()
logger.Init()
listenAddrs := *httpListenAddrs
if len(listenAddrs) == 0 {
listenAddrs = []string{":9428"}
}
logger.Infof("starting VictoriaLogs at %q...", listenAddrs)
startTime := time.Now()
vlstorage.Init()
vlselect.Init()
insertutil.SetLogRowsStorage(&vlstorage.Storage{})
vlinsert.Init()
go httpserver.Serve(listenAddrs, requestHandler, httpserver.ServeOptions{
UseProxyProtocol: useProxyProtocol,
})
logger.Infof("started VictoriaLogs in %.3f seconds; see https://docs.victoriametrics.com/victorialogs/", time.Since(startTime).Seconds())
pushmetrics.Init()
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
pushmetrics.Stop()
logger.Infof("gracefully shutting down webservice at %q", listenAddrs)
startTime = time.Now()
if err := httpserver.Stop(listenAddrs); err != nil {
logger.Fatalf("cannot stop the webservice: %s", err)
}
logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
vlinsert.Stop()
vlselect.Stop()
vlstorage.Stop()
fs.MustStopDirRemover()
logger.Infof("VictoriaLogs has been stopped in %.3f seconds", time.Since(startTime).Seconds())
}
func requestHandler(w http.ResponseWriter, r *http.Request) bool {
if r.URL.Path == "/" {
if r.Method != http.MethodGet {
return false
}
w.Header().Add("Content-Type", "text/html; charset=utf-8")
fmt.Fprintf(w, "<h2>Single-node VictoriaLogs</h2></br>")
fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/victorialogs/'>https://docs.victoriametrics.com/victorialogs/</a></br>")
fmt.Fprintf(w, "Useful endpoints:</br>")
httpserver.WriteAPIHelp(w, [][2]string{
{"select/vmui", "Web UI for VictoriaLogs"},
{"metrics", "available service metrics"},
{"flags", "command-line flags"},
})
return true
}
if vlinsert.RequestHandler(w, r) {
return true
}
if vlselect.RequestHandler(w, r) {
return true
}
if vlstorage.RequestHandler(w, r) {
return true
}
return false
}
func usage() {
const s = `
victoria-logs is a log management and analytics service.
See the docs at https://docs.victoriametrics.com/victorialogs/
`
flagutil.Usage(s)
}

View File

@@ -1,12 +0,0 @@
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
ARG certs_image=non-existing
ARG root_image=non-existing
FROM $certs_image AS certs
RUN apk update && apk upgrade && apk --update --no-cache add ca-certificates
FROM $root_image
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
EXPOSE 9428
ENTRYPOINT ["/victoria-logs-prod"]
ARG TARGETARCH
COPY victoria-logs-linux-${TARGETARCH}-prod ./victoria-logs-prod

View File

@@ -27,6 +27,9 @@ victoria-metrics-linux-ppc64le-prod:
victoria-metrics-linux-386-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker-linux-386
victoria-metrics-linux-s390x-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker-linux-s390x
victoria-metrics-darwin-amd64-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker-darwin-amd64

View File

@@ -17,7 +17,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
@@ -125,8 +124,6 @@ func main() {
vmstorage.Stop()
vmselect.Stop()
fs.MustStopDirRemover()
logger.Infof("VictoriaMetrics has been stopped in %.3f seconds", time.Since(startTime).Seconds())
}
@@ -137,6 +134,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
}
w.Header().Add("Content-Type", "text/html; charset=utf-8")
fmt.Fprintf(w, "<h2>Single-node VictoriaMetrics</h2></br>")
fmt.Fprintf(w, "Version %s<br>", buildinfo.Version)
fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/'>https://docs.victoriametrics.com/</a></br>")
fmt.Fprintf(w, "Useful endpoints:</br>")
httpserver.WriteAPIHelp(w, [][2]string{
@@ -172,7 +170,7 @@ func usage() {
const s = `
victoria-metrics is a time series database and monitoring solution.
See the docs at https://docs.victoriametrics.com/
See the docs at https://docs.victoriametrics.com/victoriametrics/
`
flagutil.Usage(s)
}

View File

@@ -10,9 +10,11 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prommetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage/metricsmetadata"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
)
@@ -27,11 +29,9 @@ var selfScraperWG sync.WaitGroup
func startSelfScraper() {
selfScraperStopCh = make(chan struct{})
selfScraperWG.Add(1)
go func() {
defer selfScraperWG.Done()
selfScraperWG.Go(func() {
selfScraper(*selfScrapeInterval)
}()
})
}
func stopSelfScraper() {
@@ -48,8 +48,9 @@ func selfScraper(scrapeInterval time.Duration) {
var bb bytesutil.ByteBuffer
var rows prometheus.Rows
var metadataRows prometheus.MetadataRows
var mrs []storage.MetricRow
var labels []prompbmarshal.Label
var labels []prompb.Label
t := time.NewTicker(scrapeInterval)
f := func(currentTime time.Time, sendStaleMarkers bool) {
currentTimestamp := currentTime.UnixNano() / 1e6
@@ -57,7 +58,12 @@ func selfScraper(scrapeInterval time.Duration) {
appmetrics.WritePrometheusMetrics(&bb)
s := bytesutil.ToUnsafeString(bb.B)
rows.Reset()
rows.Unmarshal(s)
// Parse metrics and optionally metadata when enabled
if prommetadata.IsEnabled() {
rows, metadataRows = prometheus.UnmarshalWithMetadata(rows, metadataRows, s, nil)
} else {
rows.UnmarshalWithErrLogger(s, nil)
}
mrs = mrs[:0]
for i := range rows.Rows {
r := &rows.Rows[i]
@@ -90,6 +96,19 @@ func selfScraper(scrapeInterval time.Duration) {
if err := vmstorage.AddRows(mrs); err != nil {
logger.Errorf("cannot store self-scraped metrics: %s", err)
}
if len(metadataRows.Rows) > 0 {
mms := make([]metricsmetadata.Row, 0, len(metadataRows.Rows))
for _, mm := range metadataRows.Rows {
mms = append(mms, metricsmetadata.Row{
MetricFamilyName: bytesutil.ToUnsafeBytes(mm.Metric),
Help: bytesutil.ToUnsafeBytes(mm.Help),
Type: mm.Type,
})
}
if err := vmstorage.AddMetadataRows(mms); err != nil {
logger.Errorf("cannot store self-scraped metrics metadata: %s", err)
}
}
}
for {
select {
@@ -104,11 +123,11 @@ func selfScraper(scrapeInterval time.Duration) {
}
}
func addLabel(dst []prompbmarshal.Label, key, value string) []prompbmarshal.Label {
func addLabel(dst []prompb.Label, key, value string) []prompb.Label {
if len(dst) < cap(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, prompbmarshal.Label{})
dst = append(dst, prompb.Label{})
}
lb := &dst[len(dst)-1]
lb.Name = key

View File

@@ -33,13 +33,13 @@ func PopulateTimeTpl(b []byte, tGlobal time.Time) []byte {
}
switch strings.TrimSpace(parts[0]) {
case `TIME_S`:
return []byte(fmt.Sprintf("%d", t.Unix()))
return fmt.Appendf(nil, "%d", t.Unix())
case `TIME_MSZ`:
return []byte(fmt.Sprintf("%d", t.Unix()*1e3))
return fmt.Appendf(nil, "%d", t.Unix()*1e3)
case `TIME_MS`:
return []byte(fmt.Sprintf("%d", timeToMillis(t)))
return fmt.Appendf(nil, "%d", timeToMillis(t))
case `TIME_NS`:
return []byte(fmt.Sprintf("%d", t.UnixNano()))
return fmt.Appendf(nil, "%d", t.UnixNano())
default:
log.Fatalf("unknown time pattern %s in %s", parts[0], repl)
}
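The replacements above swap `[]byte(fmt.Sprintf(...))` for `fmt.Appendf(nil, ...)`, which formats directly into a byte slice without allocating an intermediate string (fmt.Appendf is available since Go 1.19). A minimal sketch with an illustrative helper name:

```go
package main

import "fmt"

// appendUnixSeconds formats ts as decimal seconds directly into dst,
// mirroring the fmt.Appendf(nil, "%d", t.Unix()) calls in the diff above.
func appendUnixSeconds(dst []byte, ts int64) []byte {
	return fmt.Appendf(dst, "%d", ts)
}

func main() {
	buf := appendUnixSeconds(nil, 1700000000)
	fmt.Println(string(buf)) // prints: 1700000000
}
```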

app/vlagent/README.md Normal file
View File

@@ -0,0 +1 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).

app/vlinsert/README.md Normal file
View File

@@ -0,0 +1 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).

View File

@@ -1,260 +0,0 @@
package datadog
import (
"bytes"
"fmt"
"net/http"
"strconv"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var (
datadogStreamFields = flagutil.NewArrayString("datadog.streamFields", "Comma-separated list of fields to use as log stream fields for logs ingested via DataDog protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#stream-fields")
datadogIgnoreFields = flagutil.NewArrayString("datadog.ignoreFields", "Comma-separated list of fields to ignore for logs ingested via DataDog protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#dropping-fields")
maxRequestSize = flagutil.NewBytes("datadog.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single DataDog request")
)
var parserPool fastjson.ParserPool
// RequestHandler processes Datadog insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
case "/insert/datadog/api/v1/validate":
fmt.Fprintf(w, `{}`)
return true
case "/insert/datadog/api/v2/logs":
return datadogLogsIngestion(w, r)
default:
return false
}
}
func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
w.Header().Add("Content-Type", "application/json")
startTime := time.Now()
v2LogsRequestsTotal.Inc()
var ts int64
if tsValue := r.Header.Get("dd-message-timestamp"); tsValue != "" && tsValue != "0" {
var err error
ts, err = strconv.ParseInt(tsValue, 10, 64)
if err != nil {
httpserver.Errorf(w, r, "could not parse dd-message-timestamp header value: %s", err)
return true
}
ts *= 1e6
} else {
ts = startTime.UnixNano()
}
cp, err := insertutil.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
if len(cp.StreamFields) == 0 {
cp.StreamFields = *datadogStreamFields
}
if len(cp.IgnoreFields) == 0 {
cp.IgnoreFields = *datadogIgnoreFields
}
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("datadog", false)
err := readLogsRequest(ts, data, lmp)
lmp.MustClose()
return err
})
if err != nil {
httpserver.Errorf(w, r, "cannot read DataDog protocol data: %s", err)
return true
}
// Update v2LogsRequestDuration only for successfully parsed requests.
// There is no need to update v2LogsRequestDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
v2LogsRequestDuration.UpdateDuration(startTime)
w.WriteHeader(http.StatusAccepted)
fmt.Fprintf(w, `{}`)
return true
}
var (
v2LogsRequestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/datadog/api/v2/logs"}`)
v2LogsRequestDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/datadog/api/v2/logs"}`)
)
// datadog message field has two formats:
// - regular log message with string text
// - nested json format for serverless plugins
// which has the following format:
// {"message": {"message": "text","lamdba": {"arn": "string","requestID": "string"}, "timestamp": int64} }
//
// See https://github.com/DataDog/datadog-lambda-extension/blob/28b90c7e4e985b72d60b5f5a5147c69c7ac693c4/bottlecap/src/logs/lambda/mod.rs#L24
func appendMsgFields(fields []logstorage.Field, v *fastjson.Value) ([]logstorage.Field, error) {
switch v.Type() {
case fastjson.TypeString:
val := v.GetStringBytes()
fields = append(fields, logstorage.Field{
Name: "_msg",
Value: bytesutil.ToUnsafeString(val),
})
case fastjson.TypeObject:
var firstErr error
v.GetObject().Visit(func(k []byte, v *fastjson.Value) {
if firstErr != nil {
return
}
switch bytesutil.ToUnsafeString(k) {
case "message":
val := v.GetStringBytes()
fields = append(fields, logstorage.Field{
Name: "_msg",
Value: bytesutil.ToUnsafeString(val),
})
case "status":
val := v.GetStringBytes()
fields = append(fields, logstorage.Field{
Name: "status",
Value: bytesutil.ToUnsafeString(val),
})
case "lamdba":
obj, err := v.Object()
if err != nil {
firstErr = fmt.Errorf("unexpected lambda value type for %q:%q; want object", k, v)
return
}
obj.Visit(func(k []byte, v *fastjson.Value) {
if firstErr != nil {
return
}
val, err := v.StringBytes()
if err != nil {
firstErr = fmt.Errorf("unexpected lambda label value type for %q:%q; want string", k, v)
return
}
fields = append(fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(k),
Value: bytesutil.ToUnsafeString(val),
})
})
}
})
default:
return fields, fmt.Errorf("unsupported message type %q", v.Type().String())
}
return fields, nil
}
// readLogsRequest parses data according to DataDog logs format
// https://docs.datadoghq.com/api/latest/logs/#send-logs
func readLogsRequest(ts int64, data []byte, lmp insertutil.LogMessageProcessor) error {
p := parserPool.Get()
defer parserPool.Put(p)
v, err := p.ParseBytes(data)
if err != nil {
return fmt.Errorf("cannot parse JSON request body: %w", err)
}
records, err := v.Array()
if err != nil {
return fmt.Errorf("cannot extract array from parsed JSON: %w", err)
}
var fields []logstorage.Field
for _, r := range records {
o, err := r.Object()
if err != nil {
return fmt.Errorf("could not extract log record: %w", err)
}
o.Visit(func(k []byte, v *fastjson.Value) {
if err != nil {
return
}
switch bytesutil.ToUnsafeString(k) {
case "message":
fields, err = appendMsgFields(fields, v)
if err != nil {
return
}
case "timestamp":
val, e := v.Int64()
if e != nil {
err = fmt.Errorf("failed to parse timestamp for %q:%q", k, v)
return
}
if val > 0 {
ts = val * 1e6
}
case "ddtags":
// https://docs.datadoghq.com/getting_started/tagging/
val, e := v.StringBytes()
if e != nil {
err = fmt.Errorf("unexpected label value type for %q:%q; want string", k, v)
return
}
var pair []byte
idx := 0
for idx >= 0 {
idx = bytes.IndexByte(val, ',')
if idx < 0 {
pair = val
} else {
pair = val[:idx]
val = val[idx+1:]
}
if len(pair) > 0 {
n := bytes.IndexByte(pair, ':')
if n < 0 {
// No tag value.
fields = append(fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(pair),
Value: "no_label_value",
})
continue
}
fields = append(fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(pair[:n]),
Value: bytesutil.ToUnsafeString(pair[n+1:]),
})
}
}
default:
val, e := v.StringBytes()
if e != nil {
err = fmt.Errorf("unexpected label value type for %q:%q; want string", k, v)
return
}
fields = append(fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(k),
Value: bytesutil.ToUnsafeString(val),
})
}
})
if err != nil {
return err
}
lmp.AddRow(ts, fields, nil)
fields = fields[:0]
}
return nil
}
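The ddtags handling in readLogsRequest is the trickiest part of the parser: comma-separated pairs, where "name:value" becomes a field, a bare "name" gets the placeholder value "no_label_value", and empty pairs from leading or trailing commas are skipped. A minimal standalone sketch of that behavior, using strings.Split/strings.Cut instead of the byte-level scanning above (splitDDTags is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"strings"
)

// splitDDTags parses a DataDog ddtags string such as "tag1:value1,tag2:value2"
// into field name/value pairs, mirroring the rules in readLogsRequest.
func splitDDTags(tags string) map[string]string {
	fields := make(map[string]string)
	for _, pair := range strings.Split(tags, ",") {
		if pair == "" {
			continue // skip empty pairs from leading/trailing commas
		}
		name, value, ok := strings.Cut(pair, ":")
		if !ok {
			value = "no_label_value" // tag without a value
		}
		fields[name] = value
	}
	return fields
}

func main() {
	fmt.Println(splitDDTags(",tag1:value1,tag2:value2,env"))
	// map[env:no_label_value tag1:value1 tag2:value2]
}
```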


@@ -1,104 +0,0 @@
package datadog
import (
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestReadLogsRequestFailure(t *testing.T) {
f := func(data string) {
t.Helper()
ts := time.Now().UnixNano()
lmp := &insertutil.TestLogMessageProcessor{}
if err := readLogsRequest(ts, []byte(data), lmp); err == nil {
t.Fatalf("expecting non-empty error")
}
if err := lmp.Verify(nil, ""); err != nil {
t.Fatalf("unexpected error: %s", err)
}
}
f("foobar")
f(`{}`)
f(`["create":{}]`)
f(`{"create":{}}
foobar`)
}
func TestReadLogsRequestSuccess(t *testing.T) {
f := func(data string, rowsExpected int, resultExpected string) {
t.Helper()
ts := time.Now().UnixNano()
var timestampsExpected []int64
for i := 0; i < rowsExpected; i++ {
timestampsExpected = append(timestampsExpected, ts)
}
lmp := &insertutil.TestLogMessageProcessor{}
if err := readLogsRequest(ts, []byte(data), lmp); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := lmp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatalf("unexpected error: %s", err)
}
}
// Verify non-empty data
data := `[
{
"ddsource":"nginx",
"ddtags":"tag1:value1,tag2:value2",
"hostname":"127.0.0.1",
"message":"bar",
"service":"test"
}, {
"ddsource":"nginx",
"ddtags":"tag1:value1,tag2:value2",
"hostname":"127.0.0.1",
"message":{"message": "nested"},
"service":"test"
}, {
"ddsource":"nginx",
"ddtags":"tag1:value1,tag2:value2",
"hostname":"127.0.0.1",
"message":"foobar",
"service":"test"
}, {
"ddsource":"nginx",
"ddtags":"tag1:value1,tag2:value2",
"hostname":"127.0.0.1",
"message":"baz",
"service":"test"
}, {
"ddsource":"nginx",
"ddtags":"tag1:value1,tag2:value2",
"hostname":"127.0.0.1",
"message":"xyz",
"service":"test"
}, {
"ddsource": "nginx",
"ddtags":"tag1:value1,tag2:value2,",
"hostname":"127.0.0.1",
"message":"xyz",
"service":"test"
}, {
"ddsource":"nginx",
"ddtags":",tag1:value1,tag2:value2",
"hostname":"127.0.0.1",
"message":"xyz",
"service":"test"
}
]`
rowsExpected := 7
resultExpected := `{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"bar","service":"test"}
{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"nested","service":"test"}
{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"foobar","service":"test"}
{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"baz","service":"test"}
{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"xyz","service":"test"}
{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"xyz","service":"test"}
{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"xyz","service":"test"}`
f(data, rowsExpected, resultExpected)
}


@@ -1,20 +0,0 @@
{% stripspace %}
{% func BulkResponse(n int, tookMs int64) %}
{
"took":{%dl tookMs %},
"errors":false,
"items":[
{% for i := 0; i < n; i++ %}
{
"create":{
"status":201
}
}
{% if i+1 < n %},{% endif %}
{% endfor %}
]
}
{% endfunc %}
{% endstripspace %}


@@ -1,69 +0,0 @@
// Code generated by qtc from "bulk_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vlinsert/elasticsearch/bulk_response.qtpl:3
package elasticsearch
//line app/vlinsert/elasticsearch/bulk_response.qtpl:3
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:3
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:3
func StreamBulkResponse(qw422016 *qt422016.Writer, n int, tookMs int64) {
//line app/vlinsert/elasticsearch/bulk_response.qtpl:3
qw422016.N().S(`{"took":`)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:5
qw422016.N().DL(tookMs)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:5
qw422016.N().S(`,"errors":false,"items":[`)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:8
for i := 0; i < n; i++ {
//line app/vlinsert/elasticsearch/bulk_response.qtpl:8
qw422016.N().S(`{"create":{"status":201}}`)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:14
if i+1 < n {
//line app/vlinsert/elasticsearch/bulk_response.qtpl:14
qw422016.N().S(`,`)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:14
}
//line app/vlinsert/elasticsearch/bulk_response.qtpl:15
}
//line app/vlinsert/elasticsearch/bulk_response.qtpl:15
qw422016.N().S(`]}`)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
}
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
func WriteBulkResponse(qq422016 qtio422016.Writer, n int, tookMs int64) {
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
StreamBulkResponse(qw422016, n, tookMs)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
qt422016.ReleaseWriter(qw422016)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
}
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
func BulkResponse(n int, tookMs int64) string {
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
WriteBulkResponse(qb422016, n, tookMs)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
qs422016 := string(qb422016.B)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
return qs422016
//line app/vlinsert/elasticsearch/bulk_response.qtpl:18
}


@@ -1,248 +0,0 @@
package elasticsearch
import (
"flag"
"fmt"
"io"
"net/http"
"strings"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bufferedwriter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
)
var (
elasticsearchVersion = flag.String("elasticsearch.version", "8.9.0", "Elasticsearch version to report to client")
)
// RequestHandler processes Elasticsearch insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
w.Header().Add("Content-Type", "application/json")
// This header is needed for Logstash
w.Header().Set("X-Elastic-Product", "Elasticsearch")
if strings.HasPrefix(path, "/insert/elasticsearch/_ilm/policy") {
// Return fake response for Elasticsearch ilm request.
fmt.Fprintf(w, `{}`)
return true
}
if strings.HasPrefix(path, "/insert/elasticsearch/_index_template") {
// Return fake response for Elasticsearch index template request.
fmt.Fprintf(w, `{}`)
return true
}
if strings.HasPrefix(path, "/insert/elasticsearch/_ingest") {
// Return fake response for Elasticsearch ingest pipeline request.
// See: https://www.elastic.co/guide/en/elasticsearch/reference/8.8/put-pipeline-api.html
fmt.Fprintf(w, `{}`)
return true
}
if strings.HasPrefix(path, "/insert/elasticsearch/_nodes") {
// Return fake response for Elasticsearch nodes discovery request.
// See: https://www.elastic.co/guide/en/elasticsearch/reference/8.8/cluster.html
fmt.Fprintf(w, `{}`)
return true
}
if strings.HasPrefix(path, "/insert/elasticsearch/logstash") || strings.HasPrefix(path, "/insert/elasticsearch/_logstash") {
// Return fake response for Logstash APIs requests.
// See: https://www.elastic.co/guide/en/elasticsearch/reference/8.8/logstash-apis.html
fmt.Fprintf(w, `{}`)
return true
}
switch path {
// some clients may omit trailing slash
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8353
case "/insert/elasticsearch/", "/insert/elasticsearch":
switch r.Method {
case http.MethodGet:
// Return fake response for Elasticsearch ping request.
// See the latest available version for Elasticsearch at https://github.com/elastic/elasticsearch/releases
fmt.Fprintf(w, `{
"version": {
"number": %q
}
}`, *elasticsearchVersion)
case http.MethodHead:
// Return empty response for Logstash ping request.
}
return true
case "/insert/elasticsearch/_license":
// Return fake response for Elasticsearch license request.
fmt.Fprintf(w, `{
"license": {
"uid": "cbff45e7-c553-41f7-ae4f-9205eabd80xx",
"type": "oss",
"status": "active",
"expiry_date_in_millis" : 4000000000000
}
}`)
return true
case "/insert/elasticsearch/_bulk":
startTime := time.Now()
bulkRequestsTotal.Inc()
cp, err := insertutil.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
lmp := cp.NewLogMessageProcessor("elasticsearch_bulk", true)
encoding := r.Header.Get("Content-Encoding")
streamName := fmt.Sprintf("remoteAddr=%s, requestURI=%q", httpserver.GetQuotedRemoteAddr(r), r.RequestURI)
n, err := readBulkRequest(streamName, r.Body, encoding, cp.TimeFields, cp.MsgFields, lmp)
lmp.MustClose()
if err != nil {
logger.Warnf("cannot decode log message #%d in /_bulk request: %s, stream fields: %s", n, err, cp.StreamFields)
return true
}
tookMs := time.Since(startTime).Milliseconds()
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteBulkResponse(bw, n, tookMs)
_ = bw.Flush()
// Update bulkRequestDuration only for successfully parsed requests.
// There is no need to update bulkRequestDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
bulkRequestDuration.UpdateDuration(startTime)
return true
default:
return false
}
}
var (
bulkRequestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/elasticsearch/_bulk"}`)
bulkRequestDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/elasticsearch/_bulk"}`)
)
func readBulkRequest(streamName string, r io.Reader, encoding string, timeFields, msgFields []string, lmp insertutil.LogMessageProcessor) (int, error) {
// See https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html
reader, err := protoparserutil.GetUncompressedReader(r, encoding)
if err != nil {
return 0, fmt.Errorf("cannot decode Elasticsearch protocol data: %w", err)
}
defer protoparserutil.PutUncompressedReader(reader)
wcr := writeconcurrencylimiter.GetReader(reader)
defer writeconcurrencylimiter.PutReader(wcr)
lr := insertutil.NewLineReader(streamName, wcr)
n := 0
for {
ok, err := readBulkLine(lr, timeFields, msgFields, lmp)
wcr.DecConcurrency()
if err != nil || !ok {
return n, err
}
n++
}
}
func readBulkLine(lr *insertutil.LineReader, timeFields, msgFields []string, lmp insertutil.LogMessageProcessor) (bool, error) {
var line []byte
// Read the command, must be "create" or "index"
for len(line) == 0 {
if !lr.NextLine() {
err := lr.Err()
return false, err
}
line = lr.Line
}
lineStr := bytesutil.ToUnsafeString(line)
if !strings.Contains(lineStr, `"create"`) && !strings.Contains(lineStr, `"index"`) {
return false, fmt.Errorf(`unexpected command %q; expecting "create" or "index"`, line)
}
// Decode log message
if !lr.NextLine() {
if err := lr.Err(); err != nil {
return false, err
}
return false, fmt.Errorf(`missing log message after the "create" or "index" command`)
}
line = lr.Line
if len(line) == 0 {
// Special case - the line could be too long, so it was skipped.
// Continue parsing next lines.
return true, nil
}
p := logstorage.GetJSONParser()
if err := p.ParseLogMessage(line); err != nil {
return false, fmt.Errorf("cannot parse json-encoded log entry: %w", err)
}
ts, err := extractTimestampFromFields(timeFields, p.Fields)
if err != nil {
return false, fmt.Errorf("cannot parse timestamp: %w", err)
}
if ts == 0 {
ts = time.Now().UnixNano()
}
logstorage.RenameField(p.Fields, msgFields, "_msg")
lmp.AddRow(ts, p.Fields, nil)
logstorage.PutJSONParser(p)
return true, nil
}
func extractTimestampFromFields(timeFields []string, fields []logstorage.Field) (int64, error) {
for _, timeField := range timeFields {
for i := range fields {
f := &fields[i]
if f.Name != timeField {
continue
}
timestamp, err := parseElasticsearchTimestamp(f.Value)
if err != nil {
return 0, err
}
f.Value = ""
return timestamp, nil
}
}
return 0, nil
}
func parseElasticsearchTimestamp(s string) (int64, error) {
if s == "0" || s == "" {
// Special case - zero or empty timestamp must be substituted
// with the current time by the caller.
return 0, nil
}
if len(s) < len("YYYY-MM-DD") || s[len("YYYY")] != '-' {
// Try parsing timestamp in seconds or milliseconds
return insertutil.ParseUnixTimestamp(s)
}
if len(s) == len("YYYY-MM-DD") {
t, err := time.Parse("2006-01-02", s)
if err != nil {
return 0, fmt.Errorf("cannot parse date %q: %w", s, err)
}
return t.UnixNano(), nil
}
nsecs, ok := logstorage.TryParseTimestampRFC3339Nano(s)
if !ok {
return 0, fmt.Errorf("cannot parse timestamp %q", s)
}
return nsecs, nil
}
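parseElasticsearchTimestamp above dispatches on the string's shape: numeric unix timestamps, a bare "YYYY-MM-DD" date, or RFC3339. A standalone sketch of that branching, using only the standard library — the original delegates to insertutil.ParseUnixTimestamp and logstorage.TryParseTimestampRFC3339Nano, and the 1e12 seconds-vs-milliseconds threshold here is an assumption, not the original heuristic:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseTimestampSketch returns nanoseconds for the timestamp formats
// accepted by the Elasticsearch bulk handler (0 means "use current time").
func parseTimestampSketch(s string) (int64, error) {
	if s == "" || s == "0" {
		return 0, nil // caller substitutes the current time
	}
	if len(s) < len("YYYY-MM-DD") || s[len("YYYY")] != '-' {
		// Numeric unix timestamp in seconds or milliseconds.
		n, err := strconv.ParseInt(s, 10, 64)
		if err != nil {
			return 0, err
		}
		if n < 1e12 {
			return n * 1e9, nil // seconds
		}
		return n * 1e6, nil // milliseconds
	}
	if len(s) == len("YYYY-MM-DD") {
		t, err := time.Parse("2006-01-02", s)
		if err != nil {
			return 0, err
		}
		return t.UnixNano(), nil
	}
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		return 0, err
	}
	return t.UnixNano(), nil
}

func main() {
	for _, s := range []string{"2023-06-06", "1686026893735", "2023-06-06T04:48:11.735Z"} {
		ns, err := parseTimestampSketch(s)
		fmt.Println(s, ns, err)
	}
}
```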


@@ -1,128 +0,0 @@
package elasticsearch
import (
"bytes"
"fmt"
"github.com/golang/snappy"
"github.com/klauspost/compress/gzip"
"github.com/klauspost/compress/zlib"
"github.com/klauspost/compress/zstd"
"io"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestReadBulkRequest_Failure(t *testing.T) {
f := func(data string) {
t.Helper()
tlp := &insertutil.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
rows, err := readBulkRequest("test", r, "", []string{"_time"}, []string{"_msg"}, tlp)
if err == nil {
t.Fatalf("expecting non-empty error")
}
if rows != 0 {
t.Fatalf("unexpected non-zero rows=%d", rows)
}
}
f("foobar")
f(`{}`)
f(`{"create":{}}`)
f(`{"creat":{}}
{}`)
f(`{"create":{}}
foobar`)
}
func TestReadBulkRequest_Success(t *testing.T) {
f := func(data, encoding, timeField, msgField string, timestampsExpected []int64, resultExpected string) {
t.Helper()
timeFields := []string{"non_existing_foo", timeField, "non_existing_bar"}
msgFields := []string{"non_existing_foo", msgField, "non_existing_bar"}
tlp := &insertutil.TestLogMessageProcessor{}
// Read the request without compression
r := bytes.NewBufferString(data)
rows, err := readBulkRequest("test", r, "", timeFields, msgFields, tlp)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if rows != len(timestampsExpected) {
t.Fatalf("unexpected rows read; got %d; want %d", rows, len(timestampsExpected))
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
// Read the request with compression
tlp = &insertutil.TestLogMessageProcessor{}
if encoding != "" {
data = compressData(data, encoding)
}
r = bytes.NewBufferString(data)
rows, err = readBulkRequest("test", r, encoding, timeFields, msgFields, tlp)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if rows != len(timestampsExpected) {
t.Fatalf("unexpected rows read; got %d; want %d", rows, len(timestampsExpected))
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatalf("verification failure after compression: %s", err)
}
}
// Verify an empty data
f("", "gzip", "_time", "_msg", nil, "")
f("\n", "gzip", "_time", "_msg", nil, "")
f("\n\n", "gzip", "_time", "_msg", nil, "")
// Verify non-empty data
data := `{"create":{"_index":"filebeat-8.8.0"}}
{"@timestamp":"2023-06-06T04:48:11.735Z","log":{"offset":71770,"file":{"path":"/var/log/auth.log"}},"message":"foobar"}
{"create":{"_index":"filebeat-8.8.0"}}
{"@timestamp":"2023-06-06 04:48:12.735+01:00","message":"baz"}
{"index":{"_index":"filebeat-8.8.0"}}
{"message":"xyz","@timestamp":"1686026893735","x":"y"}
{"create":{"_index":"filebeat-8.8.0"}}
{"message":"qwe rty","@timestamp":"1686026893"}
{"create":{"_index":"filebeat-8.8.0"}}
{"message":"qwe rty float","@timestamp":"1686026123.62"}
`
timeField := "@timestamp"
msgField := "message"
timestampsExpected := []int64{1686026891735000000, 1686023292735000000, 1686026893735000000, 1686026893000000000, 1686026123620000000}
resultExpected := `{"log.offset":"71770","log.file.path":"/var/log/auth.log","_msg":"foobar"}
{"_msg":"baz"}
{"_msg":"xyz","x":"y"}
{"_msg":"qwe rty"}
{"_msg":"qwe rty float"}`
f(data, "zstd", timeField, msgField, timestampsExpected, resultExpected)
}
func compressData(s string, encoding string) string {
var bb bytes.Buffer
var zw io.WriteCloser
switch encoding {
case "gzip":
zw = gzip.NewWriter(&bb)
case "zstd":
zw, _ = zstd.NewWriter(&bb)
case "snappy":
return string(snappy.Encode(nil, []byte(s)))
case "deflate":
zw = zlib.NewWriter(&bb)
default:
panic(fmt.Errorf("%q encoding is not supported", encoding))
}
if _, err := zw.Write([]byte(s)); err != nil {
panic(fmt.Errorf("unexpected error when compressing data: %w", err))
}
if err := zw.Close(); err != nil {
panic(fmt.Errorf("unexpected error when closing compression writer: %w", err))
}
return bb.String()
}
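The gzip branch of compressData above can be verified with a round trip. A standalone sketch using only the standard library's compress/gzip (the test file imports the compatible github.com/klauspost/compress implementation):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// gzipCompress mirrors the gzip branch of compressData: write, then Close
// to flush the final block before taking the buffer contents.
func gzipCompress(s string) string {
	var bb bytes.Buffer
	zw := gzip.NewWriter(&bb)
	if _, err := zw.Write([]byte(s)); err != nil {
		panic(err)
	}
	if err := zw.Close(); err != nil {
		panic(err)
	}
	return bb.String()
}

// gzipDecompress inverts gzipCompress, as the request readers do for
// Content-Encoding: gzip payloads.
func gzipDecompress(s string) string {
	zr, err := gzip.NewReader(bytes.NewReader([]byte(s)))
	if err != nil {
		panic(err)
	}
	data, err := io.ReadAll(zr)
	if err != nil {
		panic(err)
	}
	return string(data)
}

func main() {
	fmt.Println(gzipDecompress(gzipCompress("hello")) == "hello") // true
}
```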


@@ -1,59 +0,0 @@
package elasticsearch
import (
"bytes"
"fmt"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
)
func BenchmarkReadBulkRequest(b *testing.B) {
b.Run("encoding:none", func(b *testing.B) {
benchmarkReadBulkRequest(b, "")
})
b.Run("encoding:gzip", func(b *testing.B) {
benchmarkReadBulkRequest(b, "gzip")
})
b.Run("encoding:zstd", func(b *testing.B) {
benchmarkReadBulkRequest(b, "zstd")
})
b.Run("encoding:deflate", func(b *testing.B) {
benchmarkReadBulkRequest(b, "deflate")
})
b.Run("encoding:snappy", func(b *testing.B) {
benchmarkReadBulkRequest(b, "snappy")
})
}
func benchmarkReadBulkRequest(b *testing.B, encoding string) {
data := `{"create":{"_index":"filebeat-8.8.0"}}
{"@timestamp":"2023-06-06T04:48:11.735Z","log":{"offset":71770,"file":{"path":"/var/log/auth.log"}},"message":"foobar"}
{"create":{"_index":"filebeat-8.8.0"}}
{"@timestamp":"2023-06-06T04:48:12.735Z","message":"baz"}
{"create":{"_index":"filebeat-8.8.0"}}
{"message":"xyz","@timestamp":"2023-06-06T04:48:13.735Z","x":"y"}
`
if encoding != "" {
data = compressData(data, encoding)
}
dataBytes := bytesutil.ToUnsafeBytes(data)
timeFields := []string{"@timestamp"}
msgFields := []string{"message"}
blp := &insertutil.BenchmarkLogMessageProcessor{}
b.ReportAllocs()
b.SetBytes(int64(len(data)))
b.RunParallel(func(pb *testing.PB) {
r := &bytes.Reader{}
for pb.Next() {
r.Reset(dataBytes)
_, err := readBulkRequest("test", r, encoding, timeFields, msgFields, blp)
if err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}
})
}


@@ -1,335 +0,0 @@
package insertutil
import (
"flag"
"fmt"
"net/http"
"strconv"
"strings"
"sync"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
)
var (
defaultMsgValue = flag.String("defaultMsgValue", "missing _msg field; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field",
"Default value for _msg field if the ingested log entry doesn't contain it; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field")
)
// CommonParams contains common HTTP parameters used by log ingestion APIs.
//
// See https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters
type CommonParams struct {
TenantID logstorage.TenantID
TimeFields []string
MsgFields []string
StreamFields []string
IgnoreFields []string
DecolorizeFields []string
ExtraFields []logstorage.Field
IsTimeFieldSet bool
Debug bool
DebugRequestURI string
DebugRemoteAddr string
}
// GetCommonParams returns CommonParams from r.
func GetCommonParams(r *http.Request) (*CommonParams, error) {
// Extract tenantID
tenantID, err := logstorage.GetTenantIDFromRequest(r)
if err != nil {
return nil, err
}
var isTimeFieldSet bool
timeFields := []string{"_time"}
if tfs := httputil.GetArray(r, "_time_field", "VL-Time-Field"); len(tfs) > 0 {
isTimeFieldSet = true
timeFields = tfs
}
msgFields := httputil.GetArray(r, "_msg_field", "VL-Msg-Field")
streamFields := httputil.GetArray(r, "_stream_fields", "VL-Stream-Fields")
ignoreFields := httputil.GetArray(r, "ignore_fields", "VL-Ignore-Fields")
decolorizeFields := httputil.GetArray(r, "decolorize_fields", "VL-Decolorize-Fields")
extraFields, err := getExtraFields(r)
if err != nil {
return nil, err
}
debug := false
if dv := httputil.GetRequestValue(r, "debug", "VL-Debug"); dv != "" {
debug, err = strconv.ParseBool(dv)
if err != nil {
return nil, fmt.Errorf("cannot parse debug=%q: %w", dv, err)
}
}
debugRequestURI := ""
debugRemoteAddr := ""
if debug {
debugRequestURI = httpserver.GetRequestURI(r)
debugRemoteAddr = httpserver.GetQuotedRemoteAddr(r)
}
cp := &CommonParams{
TenantID: tenantID,
TimeFields: timeFields,
MsgFields: msgFields,
StreamFields: streamFields,
IgnoreFields: ignoreFields,
DecolorizeFields: decolorizeFields,
ExtraFields: extraFields,
IsTimeFieldSet: isTimeFieldSet,
Debug: debug,
DebugRequestURI: debugRequestURI,
DebugRemoteAddr: debugRemoteAddr,
}
return cp, nil
}
func getExtraFields(r *http.Request) ([]logstorage.Field, error) {
efs := httputil.GetArray(r, "extra_fields", "VL-Extra-Fields")
if len(efs) == 0 {
return nil, nil
}
extraFields := make([]logstorage.Field, len(efs))
for i, ef := range efs {
n := strings.Index(ef, "=")
if n <= 0 || n == len(ef)-1 {
return nil, fmt.Errorf(`invalid extra_field format: %q; must be in the form "field=value"`, ef)
}
extraFields[i] = logstorage.Field{
Name: ef[:n],
Value: ef[n+1:],
}
}
return extraFields, nil
}
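getExtraFields above accepts only strings of the form "field=value", rejecting a missing '=', an empty name, or an empty value. A minimal sketch of that validation for a single entry (parseExtraField is a hypothetical helper name, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// parseExtraField validates one extra_fields entry: the '=' must exist
// and neither the name nor the value may be empty.
func parseExtraField(ef string) (name, value string, err error) {
	n := strings.Index(ef, "=")
	if n <= 0 || n == len(ef)-1 {
		return "", "", fmt.Errorf(`invalid extra_field format: %q; must be in the form "field=value"`, ef)
	}
	return ef[:n], ef[n+1:], nil
}

func main() {
	name, value, err := parseExtraField("env=prod")
	fmt.Println(name, value, err) // env prod <nil>
	_, _, err = parseExtraField("=oops")
	fmt.Println(err != nil) // true
}
```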
// GetCommonParamsForSyslog returns common params needed for parsing syslog messages and storing them to the given tenantID.
func GetCommonParamsForSyslog(tenantID logstorage.TenantID, streamFields, ignoreFields, decolorizeFields []string, extraFields []logstorage.Field) *CommonParams {
// See https://docs.victoriametrics.com/victorialogs/logsql/#unpack_syslog-pipe
if streamFields == nil {
streamFields = []string{
"hostname",
"app_name",
"proc_id",
}
}
cp := &CommonParams{
TenantID: tenantID,
TimeFields: []string{
"timestamp",
},
MsgFields: []string{
"message",
},
StreamFields: streamFields,
IgnoreFields: ignoreFields,
DecolorizeFields: decolorizeFields,
ExtraFields: extraFields,
}
return cp
}
// LogRowsStorage is an interface for ingesting logs into the storage.
type LogRowsStorage interface {
// MustAddRows must add lr to the underlying storage.
MustAddRows(lr *logstorage.LogRows)
// CanWriteData must return a non-nil error if logs cannot be added to the underlying storage.
CanWriteData() error
}
var logRowsStorage LogRowsStorage
// SetLogRowsStorage sets the storage for writing data to via LogMessageProcessor.
//
// This function must be called before using LogMessageProcessor and CanWriteData from this package.
func SetLogRowsStorage(storage LogRowsStorage) {
logRowsStorage = storage
}
// CanWriteData returns non-nil error if data cannot be written to the underlying storage.
func CanWriteData() error {
return logRowsStorage.CanWriteData()
}
// LogMessageProcessor is an interface for log message processors.
type LogMessageProcessor interface {
// AddRow must add row to the LogMessageProcessor with the given timestamp and fields.
//
// If streamFields is non-nil, then the given streamFields must be used as log stream fields instead of pre-configured fields.
//
// The LogMessageProcessor implementation cannot hold references to fields, since the caller can reuse them.
AddRow(timestamp int64, fields, streamFields []logstorage.Field)
// MustClose() must flush all the remaining fields and free up resources occupied by LogMessageProcessor.
MustClose()
}
type logMessageProcessor struct {
mu sync.Mutex
wg sync.WaitGroup
stopCh chan struct{}
lastFlushTime time.Time
cp *CommonParams
lr *logstorage.LogRows
rowsIngestedTotal *metrics.Counter
bytesIngestedTotal *metrics.Counter
}
func (lmp *logMessageProcessor) initPeriodicFlush() {
lmp.lastFlushTime = time.Now()
lmp.wg.Add(1)
go func() {
defer lmp.wg.Done()
d := timeutil.AddJitterToDuration(time.Second)
ticker := time.NewTicker(d)
defer ticker.Stop()
for {
select {
case <-lmp.stopCh:
return
case <-ticker.C:
lmp.mu.Lock()
if time.Since(lmp.lastFlushTime) >= d {
lmp.flushLocked()
}
lmp.mu.Unlock()
}
}
}()
}
// AddRow adds new log message to lmp with the given timestamp and fields.
//
// If streamFields is non-nil, then it is used as log stream fields instead of the pre-configured stream fields.
func (lmp *logMessageProcessor) AddRow(timestamp int64, fields, streamFields []logstorage.Field) {
lmp.rowsIngestedTotal.Inc()
n := logstorage.EstimatedJSONRowLen(fields)
lmp.bytesIngestedTotal.Add(n)
if len(fields) > *MaxFieldsPerLine {
line := logstorage.MarshalFieldsToJSON(nil, fields)
logger.Warnf("dropping log line with %d fields; it exceeds -insert.maxFieldsPerLine=%d; %s", len(fields), *MaxFieldsPerLine, line)
rowsDroppedTotalTooManyFields.Inc()
return
}
lmp.mu.Lock()
defer lmp.mu.Unlock()
lmp.lr.MustAdd(lmp.cp.TenantID, timestamp, fields, streamFields)
if lmp.cp.Debug {
s := lmp.lr.GetRowString(0)
lmp.lr.ResetKeepSettings()
logger.Infof("remoteAddr=%s; requestURI=%s; ignoring log entry because of `debug` arg: %s", lmp.cp.DebugRemoteAddr, lmp.cp.DebugRequestURI, s)
rowsDroppedTotalDebug.Inc()
return
}
if lmp.lr.NeedFlush() {
lmp.flushLocked()
}
}
// InsertRowProcessor is used by native data ingestion protocol parser.
type InsertRowProcessor interface {
// AddInsertRow must add r to the underlying storage.
AddInsertRow(r *logstorage.InsertRow)
}
// AddInsertRow adds r to lmp.
func (lmp *logMessageProcessor) AddInsertRow(r *logstorage.InsertRow) {
lmp.rowsIngestedTotal.Inc()
n := logstorage.EstimatedJSONRowLen(r.Fields)
lmp.bytesIngestedTotal.Add(n)
if len(r.Fields) > *MaxFieldsPerLine {
line := logstorage.MarshalFieldsToJSON(nil, r.Fields)
logger.Warnf("dropping log line with %d fields; it exceeds -insert.maxFieldsPerLine=%d; %s", len(r.Fields), *MaxFieldsPerLine, line)
rowsDroppedTotalTooManyFields.Inc()
return
}
lmp.mu.Lock()
defer lmp.mu.Unlock()
lmp.lr.MustAddInsertRow(r)
if lmp.cp.Debug {
s := lmp.lr.GetRowString(0)
lmp.lr.ResetKeepSettings()
logger.Infof("remoteAddr=%s; requestURI=%s; ignoring log entry because of `debug` arg: %s", lmp.cp.DebugRemoteAddr, lmp.cp.DebugRequestURI, s)
rowsDroppedTotalDebug.Inc()
return
}
if lmp.lr.NeedFlush() {
lmp.flushLocked()
}
}
// flushLocked must be called under locked lmp.mu.
func (lmp *logMessageProcessor) flushLocked() {
lmp.lastFlushTime = time.Now()
logRowsStorage.MustAddRows(lmp.lr)
lmp.lr.ResetKeepSettings()
}
// MustClose flushes the remaining data to the underlying storage and closes lmp.
func (lmp *logMessageProcessor) MustClose() {
close(lmp.stopCh)
lmp.wg.Wait()
lmp.flushLocked()
logstorage.PutLogRows(lmp.lr)
lmp.lr = nil
}
// NewLogMessageProcessor returns new LogMessageProcessor for the given cp.
//
// MustClose() must be called on the returned LogMessageProcessor when it is no longer needed.
func (cp *CommonParams) NewLogMessageProcessor(protocolName string, isStreamMode bool) LogMessageProcessor {
lr := logstorage.GetLogRows(cp.StreamFields, cp.IgnoreFields, cp.DecolorizeFields, cp.ExtraFields, *defaultMsgValue)
rowsIngestedTotal := metrics.GetOrCreateCounter(fmt.Sprintf("vl_rows_ingested_total{type=%q}", protocolName))
bytesIngestedTotal := metrics.GetOrCreateCounter(fmt.Sprintf("vl_bytes_ingested_total{type=%q}", protocolName))
lmp := &logMessageProcessor{
cp: cp,
lr: lr,
rowsIngestedTotal: rowsIngestedTotal,
bytesIngestedTotal: bytesIngestedTotal,
stopCh: make(chan struct{}),
}
if isStreamMode {
lmp.initPeriodicFlush()
}
return lmp
}
var (
rowsDroppedTotalDebug = metrics.NewCounter(`vl_rows_dropped_total{reason="debug"}`)
rowsDroppedTotalTooManyFields = metrics.NewCounter(`vl_rows_dropped_total{reason="too_many_fields"}`)
)


@@ -1,17 +0,0 @@
package insertutil
import (
"flag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
)
var (
// MaxLineSizeBytes is the maximum length of a single line for /insert/* handlers
MaxLineSizeBytes = flagutil.NewBytes("insert.maxLineSizeBytes", 256*1024, "The maximum size of a single line, which can be read by /insert/* handlers; "+
"see https://docs.victoriametrics.com/victorialogs/faq/#what-length-a-log-record-is-expected-to-have")
// MaxFieldsPerLine is the maximum number of fields per line for /insert/* handlers
MaxFieldsPerLine = flag.Int("insert.maxFieldsPerLine", 1000, "The maximum number of log fields per line, which can be read by /insert/* handlers; "+
"see https://docs.victoriametrics.com/victorialogs/faq/#how-many-fields-a-single-log-entry-may-contain")
)


@@ -1,158 +0,0 @@
package insertutil
import (
"bytes"
"errors"
"fmt"
"io"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
)
// LineReader reads newline-delimited lines from the underlying reader
type LineReader struct {
// Line contains the next line read after the call to NextLine
//
// The Line contents are valid until the next call to NextLine.
Line []byte
// name is the LineReader name
name string
// r is the underlying reader to read data from
r io.Reader
// buf is a buffer for reading the next line
buf []byte
// bufOffset is the offset at buf to read the next line from
bufOffset int
// err is the last error when reading data from r
err error
// eofReached is set to true when all the data is read from r
eofReached bool
}
// NewLineReader returns LineReader for r.
func NewLineReader(name string, r io.Reader) *LineReader {
return &LineReader{
name: name,
r: r,
}
}
// NextLine reads the next line from the underlying reader.
//
// It returns true if the next line is successfully read into Line.
// If the line length exceeds MaxLineSizeBytes, then this line is skipped
// and an empty line is returned instead.
//
// If false is returned, then there are no more lines left to read from r.
// Check Err() in this case.
func (lr *LineReader) NextLine() bool {
for {
lr.Line = nil
if lr.bufOffset >= len(lr.buf) {
if lr.err != nil || lr.eofReached {
return false
}
if !lr.readMoreData() {
return false
}
if lr.bufOffset >= len(lr.buf) && lr.eofReached {
return false
}
}
buf := lr.buf[lr.bufOffset:]
if n := bytes.IndexByte(buf, '\n'); n >= 0 {
lr.Line = buf[:n]
lr.bufOffset += n + 1
return true
}
if lr.eofReached {
lr.Line = buf
lr.bufOffset += len(buf)
return true
}
if !lr.readMoreData() {
return false
}
}
}
// Err returns the last error after NextLine call.
func (lr *LineReader) Err() error {
if lr.err == nil {
return nil
}
return fmt.Errorf("%s: %s", lr.name, lr.err)
}
func (lr *LineReader) readMoreData() bool {
if lr.bufOffset > 0 {
lr.buf = append(lr.buf[:0], lr.buf[lr.bufOffset:]...)
lr.bufOffset = 0
}
bufLen := len(lr.buf)
if bufLen >= MaxLineSizeBytes.IntN() {
ok, skippedBytes := lr.skipUntilNextLine()
logger.Warnf("%s: the line length exceeds -insert.maxLineSizeBytes=%d; skipping it; total skipped bytes=%d",
lr.name, MaxLineSizeBytes.IntN(), skippedBytes)
tooLongLinesSkipped.Inc()
return ok
}
lr.buf = slicesutil.SetLength(lr.buf, MaxLineSizeBytes.IntN())
n, err := lr.r.Read(lr.buf[bufLen:])
lr.buf = lr.buf[:bufLen+n]
if err != nil {
if errors.Is(err, io.EOF) {
lr.eofReached = true
return true
}
lr.err = fmt.Errorf("cannot read the next line: %s", err)
}
return n > 0
}
var tooLongLinesSkipped = metrics.NewCounter("vl_too_long_lines_skipped_total")
func (lr *LineReader) skipUntilNextLine() (bool, int) {
// Initialize skipped bytes count with MaxLineSizeBytes because
// we've already read that many bytes without encountering a newline,
// indicating the line size exceeds the maximum allowed limit.
skipSizeBytes := MaxLineSizeBytes.IntN()
for {
lr.buf = slicesutil.SetLength(lr.buf, MaxLineSizeBytes.IntN())
n, err := lr.r.Read(lr.buf)
skipSizeBytes += n
lr.buf = lr.buf[:n]
if err != nil {
if errors.Is(err, io.EOF) {
lr.eofReached = true
lr.buf = lr.buf[:0]
return true, skipSizeBytes
}
lr.err = fmt.Errorf("cannot skip the current line: %s", err)
return false, skipSizeBytes
}
if n := bytes.IndexByte(lr.buf, '\n'); n >= 0 {
// Include skipped bytes before \n, including the newline itself.
skipSizeBytes += n + 1 - len(lr.buf)
// Include \n in the buf, so the too-long line is replaced with an empty line.
// This is needed for maintaining synchronization consistency between lines
// in protocols such as Elasticsearch bulk import.
lr.buf = append(lr.buf[:0], lr.buf[n:]...)
return true, skipSizeBytes
}
}
}


@@ -1,164 +0,0 @@
package insertutil
import (
"bytes"
"fmt"
"io"
"reflect"
"testing"
)
func TestLineReader_Success(t *testing.T) {
f := func(data string, linesExpected []string) {
t.Helper()
r := bytes.NewBufferString(data)
lr := NewLineReader("foo", r)
var lines []string
for lr.NextLine() {
lines = append(lines, string(lr.Line))
}
if err := lr.Err(); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if lr.NextLine() {
t.Fatalf("expecting error on the second call to NextLine()")
}
if len(lr.Line) > 0 {
t.Fatalf("unexpected non-empty line after failed NextLine(): %q", lr.Line)
}
if !reflect.DeepEqual(lines, linesExpected) {
t.Fatalf("unexpected lines\ngot\n%q\nwant\n%q", lines, linesExpected)
}
}
f("", nil)
f("\n", []string{""})
f("\n\n", []string{"", ""})
f("foo", []string{"foo"})
f("foo\n", []string{"foo"})
f("\nfoo", []string{"", "foo"})
f("foo\n\n", []string{"foo", ""})
f("foo\nbar", []string{"foo", "bar"})
f("foo\nbar\n", []string{"foo", "bar"})
f("\nfoo\n\nbar\n\n", []string{"", "foo", "", "bar", ""})
}
func TestLineReader_SkipUntilNextLine(t *testing.T) {
f := func(data string, linesExpected []string) {
t.Helper()
r := bytes.NewBufferString(data)
lr := NewLineReader("foo", r)
var lines []string
for lr.NextLine() {
lines = append(lines, string(lr.Line))
}
if err := lr.Err(); err != nil {
t.Fatalf("unexpected error for data=%q: %s", data, err)
}
if lr.NextLine() {
t.Fatalf("expecting error on the second call to NextLine()")
}
if !reflect.DeepEqual(lines, linesExpected) {
t.Fatalf("unexpected lines for data=%q\ngot\n%q\nwant\n%q", data, lines, linesExpected)
}
}
for _, overflow := range []int{0, 100, MaxLineSizeBytes.IntN(), MaxLineSizeBytes.IntN() + 1, 2 * MaxLineSizeBytes.IntN()} {
longLineLen := MaxLineSizeBytes.IntN() + overflow
longLine := string(make([]byte, longLineLen))
// Single long line
data := longLine
f(data, nil)
// Multiple long lines
data = longLine + "\n" + longLine
f(data, []string{""})
data = longLine + "\n" + longLine + "\n"
f(data, []string{"", ""})
// Long line in the middle
data = "foo\n" + longLine + "\nbar"
f(data, []string{"foo", "", "bar"})
// Multiple long lines in the middle
data = "foo\n" + longLine + "\n" + longLine + "\nbar"
f(data, []string{"foo", "", "", "bar"})
// Long line in the end
data = "foo\n" + longLine
f(data, []string{"foo"})
// Long line in the end
data = "foo\n" + longLine + "\n"
f(data, []string{"foo", ""})
}
}
func TestLineReader_Failure(t *testing.T) {
f := func(data string, linesExpected []string) {
t.Helper()
fr := &failureReader{
r: bytes.NewBufferString(data),
}
lr := NewLineReader("foo", fr)
var lines []string
for lr.NextLine() {
lines = append(lines, string(lr.Line))
}
if err := lr.Err(); err == nil {
t.Fatalf("expecting non-nil error")
}
if lr.NextLine() {
t.Fatalf("expecting error on the second call to NextLine()")
}
if err := lr.Err(); err == nil {
t.Fatalf("expecting non-nil error on the second call")
}
if !reflect.DeepEqual(lines, linesExpected) {
t.Fatalf("unexpected lines\ngot\n%q\nwant\n%q", lines, linesExpected)
}
}
f("", nil)
f("foo", nil)
f("foo\n", []string{"foo"})
f("\n", []string{""})
f("foo\nbar", []string{"foo"})
f("foo\nbar\n", []string{"foo", "bar"})
f("\nfoo\nbar\n\n", []string{"", "foo", "bar", ""})
// long line
longLineLen := MaxLineSizeBytes.IntN()
for _, overflow := range []int{0, 100, MaxLineSizeBytes.IntN(), MaxLineSizeBytes.IntN() + 1, 2 * MaxLineSizeBytes.IntN()} {
longLine := string(make([]byte, longLineLen+overflow))
data := longLine
f(data, nil)
data = "foo\n" + longLine
f(data, []string{"foo"})
data = longLine + "\nfoo"
f(data, []string{""})
data = longLine + "\nfoo\n"
f(data, []string{"", "foo"})
}
}
type failureReader struct {
r io.Reader
}
func (r *failureReader) Read(p []byte) (int, error) {
n, _ := r.r.Read(p)
if n > 0 {
return n, nil
}
return 0, fmt.Errorf("some error")
}


@@ -1,56 +0,0 @@
package insertutil
import (
"fmt"
"reflect"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
// TestLogMessageProcessor implements LogMessageProcessor for testing.
type TestLogMessageProcessor struct {
timestamps []int64
rows []string
}
// AddRow adds row with the given timestamp and fields to tlp
func (tlp *TestLogMessageProcessor) AddRow(timestamp int64, fields, streamFields []logstorage.Field) {
if streamFields != nil {
panic(fmt.Errorf("BUG: streamFields must be nil; got %v", streamFields))
}
tlp.timestamps = append(tlp.timestamps, timestamp)
tlp.rows = append(tlp.rows, string(logstorage.MarshalFieldsToJSON(nil, fields)))
}
// MustClose closes tlp.
func (tlp *TestLogMessageProcessor) MustClose() {
}
// Verify verifies the number of rows, timestamps and results after AddRow calls.
func (tlp *TestLogMessageProcessor) Verify(timestampsExpected []int64, resultExpected string) error {
result := strings.Join(tlp.rows, "\n")
if len(tlp.rows) != len(timestampsExpected) {
return fmt.Errorf("unexpected rows read; got %d; want %d;\nrows read:\n%s\nrows wanted\n%s", len(tlp.rows), len(timestampsExpected), result, resultExpected)
}
if !reflect.DeepEqual(tlp.timestamps, timestampsExpected) {
return fmt.Errorf("unexpected timestamps;\ngot\n%d\nwant\n%d", tlp.timestamps, timestampsExpected)
}
if result != resultExpected {
return fmt.Errorf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
}
return nil
}
// BenchmarkLogMessageProcessor implements LogMessageProcessor for benchmarks.
type BenchmarkLogMessageProcessor struct{}
// AddRow implements LogMessageProcessor interface.
func (blp *BenchmarkLogMessageProcessor) AddRow(_ int64, _, _ []logstorage.Field) {
}
// MustClose implements LogMessageProcessor interface.
func (blp *BenchmarkLogMessageProcessor) MustClose() {
}


@@ -1,106 +0,0 @@
package insertutil
import (
"fmt"
"math"
"strconv"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
// ExtractTimestampFromFields extracts a timestamp in nanoseconds from the first field in fields whose name matches one of timeFields.
//
// The value of the matched time field is set to an empty string after the function returns,
// so it can be ignored during data ingestion.
//
// The current timestamp is returned if fields do not contain any of the timeFields or if the time field value is empty.
func ExtractTimestampFromFields(timeFields []string, fields []logstorage.Field) (int64, error) {
for _, timeField := range timeFields {
for i := range fields {
f := &fields[i]
if f.Name != timeField {
continue
}
nsecs, err := parseTimestamp(f.Value)
if err != nil {
return 0, fmt.Errorf("cannot parse timestamp from field %q: %s", f.Name, err)
}
f.Value = ""
if nsecs == 0 {
nsecs = time.Now().UnixNano()
}
return nsecs, nil
}
}
return time.Now().UnixNano(), nil
}
func parseTimestamp(s string) (int64, error) {
// "-" is a nil timestamp value, if the syslog
// application is incapable of obtaining system time
// https://datatracker.ietf.org/doc/html/rfc5424#section-6.2.3
if s == "" || s == "0" || s == "-" {
return time.Now().UnixNano(), nil
}
if len(s) <= len("YYYY") || s[len("YYYY")] != '-' {
return ParseUnixTimestamp(s)
}
nsecs, ok := logstorage.TryParseTimestampRFC3339Nano(s)
if !ok {
return 0, fmt.Errorf("cannot unmarshal rfc3339 timestamp %q", s)
}
return nsecs, nil
}
// ParseUnixTimestamp parses s as unix timestamp in seconds, milliseconds, microseconds or nanoseconds and returns the parsed timestamp in nanoseconds.
func ParseUnixTimestamp(s string) (int64, error) {
if strings.IndexByte(s, '.') >= 0 {
// Parse timestamp as floating-point value
f, err := strconv.ParseFloat(s, 64)
if err != nil {
return 0, fmt.Errorf("cannot parse unix timestamp from %q: %w", s, err)
}
if f < (1<<31) && f >= (-1<<31) {
// The timestamp is in seconds.
return int64(f * 1e9), nil
}
if f < 1e3*(1<<31) && f >= 1e3*(-1<<31) {
// The timestamp is in milliseconds.
return int64(f * 1e6), nil
}
if f < 1e6*(1<<31) && f >= 1e6*(-1<<31) {
// The timestamp is in microseconds.
return int64(f * 1e3), nil
}
// The timestamp is in nanoseconds
if f > math.MaxInt64 {
return 0, fmt.Errorf("too big timestamp in nanoseconds: %v; mustn't exceed %v", f, int64(math.MaxInt64))
}
if f < math.MinInt64 {
return 0, fmt.Errorf("too small timestamp in nanoseconds: %v; must be bigger than or equal to %v", f, int64(math.MinInt64))
}
return int64(f), nil
}
// Parse timestamp as integer
n, err := strconv.ParseInt(s, 10, 64)
if err != nil {
return 0, fmt.Errorf("cannot parse unix timestamp from %q: %w", s, err)
}
if n < (1<<31) && n >= (-1<<31) {
// The timestamp is in seconds.
return n * 1e9, nil
}
if n < 1e3*(1<<31) && n >= 1e3*(-1<<31) {
// The timestamp is in milliseconds.
return n * 1e6, nil
}
if n < 1e6*(1<<31) && n >= 1e6*(-1<<31) {
// The timestamp is in microseconds.
return n * 1e3, nil
}
// The timestamp is in nanoseconds
return n, nil
}


@@ -1,185 +0,0 @@
package insertutil
import (
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
func TestParseUnixTimestamp_Success(t *testing.T) {
f := func(s string, timestampExpected int64) {
t.Helper()
timestamp, err := ParseUnixTimestamp(s)
if err != nil {
t.Fatalf("unexpected error in ParseUnixTimestamp(%q): %s", s, err)
}
if timestamp != timestampExpected {
t.Fatalf("unexpected timestamp returned from ParseUnixTimestamp(%q); got %d; want %d", s, timestamp, timestampExpected)
}
}
f("0", 0)
// nanoseconds
f("-1234567890123456789", -1234567890123456789)
f("1234567890123456789", 1234567890123456789)
// microseconds
f("-1234567890123456", -1234567890123456000)
f("1234567890123456", 1234567890123456000)
f("1234567890123456.789", 1234567890123456768)
// milliseconds
f("-1234567890123", -1234567890123000000)
f("1234567890123", 1234567890123000000)
f("1234567890123.456", 1234567890123456000)
// seconds
f("-1234567890", -1234567890000000000)
f("1234567890", 1234567890000000000)
f("-1234567890.123456", -1234567890123456000)
}
func TestParseUnixTimestamp_Failure(t *testing.T) {
f := func(s string) {
t.Helper()
_, err := ParseUnixTimestamp(s)
if err == nil {
t.Fatalf("expecting non-nil error in ParseUnixTimestamp(%q)", s)
}
}
// non-numeric timestamp
f("")
f("foobar")
f("foo.bar")
// too big timestamp
f("12345678901234567890")
f("-12345678901234567890")
f("12345678901234567890.235424")
f("-12345678901234567890.235424")
}
func TestExtractTimestampFromFields_Success(t *testing.T) {
f := func(timeField string, fields []logstorage.Field, nsecsExpected int64) {
t.Helper()
nsecs, err := ExtractTimestampFromFields([]string{timeField}, fields)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if nsecs != nsecsExpected {
t.Fatalf("unexpected nsecs; got %d; want %d", nsecs, nsecsExpected)
}
for _, f := range fields {
if f.Name == timeField {
if f.Value != "" {
t.Fatalf("unexpected value for field %s; got %q; want %q", timeField, f.Value, "")
}
}
}
}
// UTC time
f("time", []logstorage.Field{
{Name: "foo", Value: "bar"},
{Name: "time", Value: "2024-06-18T23:37:20Z"},
}, 1718753840000000000)
// Time with timezone
f("time", []logstorage.Field{
{Name: "foo", Value: "bar"},
{Name: "time", Value: "2024-06-18T23:37:20+08:00"},
}, 1718725040000000000)
// SQL datetime format
f("time", []logstorage.Field{
{Name: "foo", Value: "bar"},
{Name: "time", Value: "2024-06-18 23:37:20.123-05:30"},
}, 1718773640123000000)
// Time with nanosecond precision
f("time", []logstorage.Field{
{Name: "time", Value: "2024-06-18T23:37:20.123456789-05:30"},
{Name: "foo", Value: "bar"},
}, 1718773640123456789)
// Unix timestamp in nanoseconds
f("time", []logstorage.Field{
{Name: "foo", Value: "bar"},
{Name: "time", Value: "1718773640123456789"},
}, 1718773640123456789)
// Unix timestamp in microseconds
f("time", []logstorage.Field{
{Name: "foo", Value: "bar"},
{Name: "time", Value: "1718773640123456"},
}, 1718773640123456000)
// Unix timestamp in milliseconds
f("time", []logstorage.Field{
{Name: "foo", Value: "bar"},
{Name: "time", Value: "1718773640123"},
}, 1718773640123000000)
// Unix timestamp in seconds
f("time", []logstorage.Field{
{Name: "foo", Value: "bar"},
{Name: "time", Value: "1718773640"},
}, 1718773640000000000)
}
func TestExtractTimestampFromFields_Now(t *testing.T) {
f := func(timeField string, fields []logstorage.Field) {
t.Helper()
nsecs, err := ExtractTimestampFromFields([]string{timeField}, fields)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if nsecs < 1 {
t.Fatalf("expecting a positive generated timestamp; got %d", nsecs)
}
}
// RFC5424 allows `-` for nil timestamp (log ingestion time)
f("time", []logstorage.Field{
{Name: "time", Value: "-"},
})
f("time", []logstorage.Field{
{Name: "time", Value: ""},
})
f("time", []logstorage.Field{
{Name: "time", Value: "0"},
})
}
func TestExtractTimestampFromFields_Error(t *testing.T) {
f := func(s string) {
t.Helper()
fields := []logstorage.Field{
{Name: "time", Value: s},
}
nsecs, err := ExtractTimestampFromFields([]string{"time"}, fields)
if err == nil {
t.Fatalf("expecting non-nil error")
}
if nsecs != 0 {
t.Fatalf("unexpected nsecs; got %d; want %d", nsecs, 0)
}
}
// invalid time
f("foobar")
// incomplete time
f("2024-06-18")
f("2024-06-18T23:37")
}


@@ -1,88 +0,0 @@
package internalinsert
import (
"fmt"
"net/http"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage/netinsert"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var (
maxRequestSize = flagutil.NewBytes("internalinsert.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single request, which can be accepted at /internal/insert HTTP endpoint")
)
// RequestHandler processes /internal/insert requests.
func RequestHandler(w http.ResponseWriter, r *http.Request) {
startTime := time.Now()
if r.Method != "POST" {
w.WriteHeader(http.StatusMethodNotAllowed)
return
}
version := r.FormValue("version")
if version != netinsert.ProtocolVersion {
httpserver.Errorf(w, r, "unsupported protocol version=%q; want %q", version, netinsert.ProtocolVersion)
return
}
requestsTotal.Inc()
cp, err := insertutil.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("internalinsert", false)
irp := lmp.(insertutil.InsertRowProcessor)
err := parseData(irp, data)
lmp.MustClose()
return err
})
if err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "cannot parse internal insert request: %s", err)
return
}
requestDuration.UpdateDuration(startTime)
}
func parseData(irp insertutil.InsertRowProcessor, data []byte) error {
r := logstorage.GetInsertRow()
src := data
i := 0
for len(src) > 0 {
tail, err := r.UnmarshalInplace(src)
if err != nil {
return fmt.Errorf("cannot parse row #%d: %s", i, err)
}
src = tail
i++
irp.AddInsertRow(r)
}
logstorage.PutInsertRow(r)
return nil
}
var (
requestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/internal/insert"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/internal/insert"}`)
requestDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/internal/insert"}`)
)


@@ -1,377 +0,0 @@
package journald
import (
"bytes"
"encoding/binary"
"errors"
"flag"
"fmt"
"io"
"net/http"
"slices"
"strconv"
"strings"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
// See https://github.com/systemd/systemd/blob/main/src/libsystemd/sd-journal/journal-file.c#L1703
const maxFieldNameLen = 64
var (
journaldStreamFields = flagutil.NewArrayString("journald.streamFields", "Comma-separated list of fields to use as log stream fields for logs ingested over journald protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#stream-fields")
journaldIgnoreFields = flagutil.NewArrayString("journald.ignoreFields", "Comma-separated list of fields to ignore for logs ingested over journald protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#dropping-fields")
journaldTimeField = flag.String("journald.timeField", "__REALTIME_TIMESTAMP", "Field to use as a log timestamp for logs ingested via journald protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#time-field")
journaldTenantID = flag.String("journald.tenantID", "0:0", "TenantID for logs ingested via the Journald endpoint. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#multitenancy")
journaldIncludeEntryMetadata = flag.Bool("journald.includeEntryMetadata", false, "Include Journald fields with double underscore prefixes")
)
func getCommonParams(r *http.Request) (*insertutil.CommonParams, error) {
cp, err := insertutil.GetCommonParams(r)
if err != nil {
return nil, err
}
if cp.TenantID.AccountID == 0 && cp.TenantID.ProjectID == 0 {
tenantID, err := logstorage.ParseTenantID(*journaldTenantID)
if err != nil {
return nil, fmt.Errorf("cannot parse -journald.tenantID=%q for journald: %w", *journaldTenantID, err)
}
cp.TenantID = tenantID
}
if !cp.IsTimeFieldSet {
cp.TimeFields = []string{*journaldTimeField}
}
if len(cp.StreamFields) == 0 {
cp.StreamFields = getStreamFields()
}
if len(cp.IgnoreFields) == 0 {
cp.IgnoreFields = *journaldIgnoreFields
}
cp.MsgFields = []string{"MESSAGE"}
return cp, nil
}
func getStreamFields() []string {
if len(*journaldStreamFields) > 0 {
return *journaldStreamFields
}
return defaultStreamFields
}
var defaultStreamFields = []string{
"_MACHINE_ID",
"_HOSTNAME",
"_SYSTEMD_UNIT",
}
// RequestHandler processes Journald Export insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
case "/insert/journald/upload":
if r.Header.Get("Content-Type") != "application/vnd.fdo.journal" {
httpserver.Errorf(w, r, "only application/vnd.fdo.journal encoding is supported for Journald")
return true
}
handleJournald(r, w)
return true
default:
return false
}
}
// handleJournald parses Journal binary entries
func handleJournald(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsTotal.Inc()
cp, err := getCommonParams(r)
if err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
return
}
if err := insertutil.CanWriteData(); err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "%s", err)
return
}
encoding := r.Header.Get("Content-Encoding")
reader, err := protoparserutil.GetUncompressedReader(r.Body, encoding)
if err != nil {
errorsTotal.Inc()
logger.Errorf("cannot decode journald request: %s", err)
return
}
lmp := cp.NewLogMessageProcessor("journald", true)
streamName := fmt.Sprintf("remoteAddr=%s, requestURI=%q", httpserver.GetQuotedRemoteAddr(r), r.RequestURI)
err = processStreamInternal(streamName, reader, lmp, cp)
protoparserutil.PutUncompressedReader(reader)
lmp.MustClose()
if err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "cannot read journald protocol data: %s", err)
return
}
// systemd starting from release v258 supports compression, which starts working after negotiation: it expects the list of supported
// compression algorithms in the Accept-Encoding response header in the format "<algorithm_1>[:<priority_1>][;<algorithm_2>:<priority_2>]"
// See https://github.com/systemd/systemd/pull/34822
w.Header().Set("Accept-Encoding", "zstd")
// update requestDuration only for successfully parsed requests
// There is no need in updating requestDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
requestDuration.UpdateDuration(startTime)
}
var (
requestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/journald/upload"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/journald/upload"}`)
requestDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/journald/upload"}`)
)
func processStreamInternal(streamName string, r io.Reader, lmp insertutil.LogMessageProcessor, cp *insertutil.CommonParams) error {
wcr := writeconcurrencylimiter.GetReader(r)
defer writeconcurrencylimiter.PutReader(wcr)
lr := insertutil.NewLineReader("journald", wcr)
for {
err := readJournaldLogEntry(streamName, lr, lmp, cp)
wcr.DecConcurrency()
if err != nil {
if errors.Is(err, io.EOF) {
return nil
}
return fmt.Errorf("%s: %w", streamName, err)
}
}
}
type fieldsBuf struct {
fields []logstorage.Field
buf []byte
name []byte
value []byte
}
func (fb *fieldsBuf) reset() {
fb.fields = fb.fields[:0]
fb.buf = fb.buf[:0]
fb.name = fb.name[:0]
fb.value = fb.value[:0]
}
func (fb *fieldsBuf) addField(name, value string) {
bufLen := len(fb.buf)
fb.buf = append(fb.buf, name...)
nameCopy := bytesutil.ToUnsafeString(fb.buf[bufLen:])
bufLen = len(fb.buf)
fb.buf = append(fb.buf, value...)
valueCopy := bytesutil.ToUnsafeString(fb.buf[bufLen:])
fb.fields = append(fb.fields, logstorage.Field{
Name: nameCopy,
Value: valueCopy,
})
}
func (fb *fieldsBuf) appendNextLineToValue(lr *insertutil.LineReader) error {
if !lr.NextLine() {
if err := lr.Err(); err != nil {
return err
}
return fmt.Errorf("unexpected end of stream")
}
fb.value = append(fb.value, lr.Line...)
fb.value = append(fb.value, '\n')
return nil
}
func getFieldsBuf() *fieldsBuf {
fb := fieldsBufPool.Get()
if fb == nil {
return &fieldsBuf{}
}
return fb.(*fieldsBuf)
}
func putFieldsBuf(fb *fieldsBuf) {
fb.reset()
fieldsBufPool.Put(fb)
}
var fieldsBufPool sync.Pool
// readJournaldLogEntry reads a single log entry in Journald format.
//
// See https://systemd.io/JOURNAL_EXPORT_FORMATS/#journal-export-format
func readJournaldLogEntry(streamName string, lr *insertutil.LineReader, lmp insertutil.LogMessageProcessor, cp *insertutil.CommonParams) error {
var ts int64
var name, value string
fb := getFieldsBuf()
defer putFieldsBuf(fb)
if !lr.NextLine() {
if err := lr.Err(); err != nil {
return fmt.Errorf("cannot read the first field: %w", err)
}
return io.EOF
}
for {
line := lr.Line
if len(line) == 0 {
// The end of a single log entry. Write it to the storage
if len(fb.fields) > 0 {
if ts == 0 {
ts = time.Now().UnixNano()
}
lmp.AddRow(ts, fb.fields, nil)
}
return nil
}
// line could be either "key=value" or "key"
// according to https://systemd.io/JOURNAL_EXPORT_FORMATS/#journal-export-format
if n := bytes.IndexByte(line, '='); n >= 0 {
// line = "key=value"
fb.name = append(fb.name[:0], line[:n]...)
name = bytesutil.ToUnsafeString(fb.name)
fb.value = append(fb.value[:0], line[n+1:]...)
value = bytesutil.ToUnsafeString(fb.value)
} else {
// line = "key"
// Parse the binary-encoded value from the next line according to "key\n<little_endian_size_64>value\n" format
fb.name = append(fb.name[:0], line...)
name = bytesutil.ToUnsafeString(fb.name)
fb.value = fb.value[:0]
for len(fb.value) < 8 {
if err := fb.appendNextLineToValue(lr); err != nil {
return fmt.Errorf("cannot read value size: %w", err)
}
}
size := binary.LittleEndian.Uint64(fb.value[:8])
// Read the value until its length exceeds the given size - the last char in the read value will always be '\n'
// because it is appended by appendNextLineToValue().
for uint64(len(fb.value[8:])) <= size {
if err := fb.appendNextLineToValue(lr); err != nil {
return fmt.Errorf("cannot read %q value with size %d bytes; read only %d bytes: %w", fb.name, size, len(fb.value[8:]), err)
}
}
value = bytesutil.ToUnsafeString(fb.value[8 : len(fb.value)-1])
if uint64(len(value)) != size {
return fmt.Errorf("unexpected %q value size; got %d bytes; want %d bytes; value: %q", fb.name, len(value), size, value)
}
}
if !lr.NextLine() {
if err := lr.Err(); err != nil {
return fmt.Errorf("cannot read the next log field: %w", err)
}
// add the last log field below before the return
}
if len(name) > maxFieldNameLen {
logger.Errorf("%s: field name size should not exceed %d bytes; got %d bytes: %q; skipping this field", streamName, maxFieldNameLen, len(name), name)
continue
}
if !isValidFieldName(name) {
logger.Errorf("%s: invalid field name %q; it must consist of `A-Z0-9_` chars and must start from non-digit char; skipping this field", streamName, name)
continue
}
if slices.Contains(cp.TimeFields, name) {
t, err := strconv.ParseInt(value, 10, 64)
if err != nil {
logger.Errorf("%s: cannot parse timestamp from the field %q: %s; using the current timestamp", streamName, name, err)
ts = 0
} else {
// Convert journald microsecond timestamp to nanoseconds
ts = t * 1e3
}
continue
}
if slices.Contains(cp.MsgFields, name) {
name = "_msg"
}
if name == "PRIORITY" {
priority := journaldPriorityToLevel(value)
fb.addField("level", priority)
}
if !strings.HasPrefix(name, "__") || *journaldIncludeEntryMetadata {
fb.addField(name, value)
}
}
}
func journaldPriorityToLevel(priority string) string {
// See https://wiki.archlinux.org/title/Systemd/Journal#Priority_level
// and https://grafana.com/docs/grafana/latest/explore/logs-integration/#log-level
switch priority {
case "0":
return "emerg"
case "1":
return "alert"
case "2":
return "critical"
case "3":
return "error"
case "4":
return "warning"
case "5":
return "notice"
case "6":
return "info"
case "7":
return "debug"
default:
return priority
}
}
func isValidFieldName(s string) bool {
if len(s) == 0 {
return false
}
c := s[0]
if !(c >= 'A' && c <= 'Z' || c == '_') {
return false
}
for i := 1; i < len(s); i++ {
c := s[i]
if !(c >= 'A' && c <= 'Z' || c >= '0' && c <= '9' || c == '_') {
return false
}
}
return true
}


@@ -1,166 +0,0 @@
package journald
import (
"bytes"
"net/http"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestIsValidFieldName(t *testing.T) {
f := func(name string, resultExpected bool) {
t.Helper()
result := isValidFieldName(name)
if result != resultExpected {
t.Fatalf("unexpected result for isValidFieldName(%q); got %v; want %v", name, result, resultExpected)
}
}
f("", false)
f("a", false)
f("1", false)
f("_", true)
f("X", true)
f("Xa", false)
f("X_343", true)
f("X_0123456789_AZ", true)
f("SDDFD sdf", false)
}
func TestGetCommonParams_TimeField(t *testing.T) {
f := func(timeFieldHeader, expectedTimeField string) {
t.Helper()
req, err := http.NewRequest("POST", "/insert/journald/upload", nil)
if err != nil {
t.Fatalf("unexpected error creating request: %s", err)
}
if timeFieldHeader != "" {
req.Header.Set("VL-Time-Field", timeFieldHeader)
}
cp, err := getCommonParams(req)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if len(cp.TimeFields) != 1 || cp.TimeFields[0] != expectedTimeField {
t.Fatalf("unexpected TimeFields; got %v; want [%s]", cp.TimeFields, expectedTimeField)
}
}
// Test default behavior - when no custom time field is specified, journald uses __REALTIME_TIMESTAMP
f("", "__REALTIME_TIMESTAMP")
// Test custom time field - when a custom time field is specified via HTTP header, it's respected
f("custom_time", "custom_time")
}
func TestPushJournald_Success(t *testing.T) {
f := func(src string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &insertutil.TestLogMessageProcessor{}
r, err := http.NewRequest("GET", "https://foo.bar/baz", nil)
if err != nil {
t.Fatalf("cannot create request: %s", err)
}
cp, err := getCommonParams(r)
if err != nil {
t.Fatalf("cannot create commonParams: %s", err)
}
buf := bytes.NewBufferString(src)
if err := processStreamInternal("test", buf, tlp, cp); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
// Single event
f("__REALTIME_TIMESTAMP=91723819283\nMESSAGE=Test message\n\n",
[]int64{91723819283000},
"{\"_msg\":\"Test message\"}",
)
// Multiple events
f("__REALTIME_TIMESTAMP=91723819283\nPRIORITY=3\nMESSAGE=Test message\n\n__REALTIME_TIMESTAMP=91723819284\nMESSAGE=Test message2\n",
[]int64{91723819283000, 91723819284000},
"{\"level\":\"error\",\"PRIORITY\":\"3\",\"_msg\":\"Test message\"}\n{\"_msg\":\"Test message2\"}",
)
// Parse binary data
f("__CURSOR=s=e0afe8412a6a49d2bfcf66aa7927b588;i=1f06;b=f778b6e2f7584a77b991a2366612a7b5;m=300bdfd420;t=62526e1182354;x=930dc44b370963b7\nE=JobStateChanged\n__REALTIME_TIMESTAMP=1729698775704404\n__MONOTONIC_TIMESTAMP=206357648416\n__SEQNUM=7942\n__SEQNUM_ID=e0afe8412a6a49d2bfcf66aa7927b588\n_BOOT_ID=f778b6e2f7584a77b991a2366612a7b5\n_UID=0\n_GID=0\n_MACHINE_ID=a4a970370c30a925df02a13c67167847\n_HOSTNAME=ecd5e4555787\n_RUNTIME_SCOPE=system\n_TRANSPORT=journal\n_CAP_EFFECTIVE=1ffffffffff\n_SYSTEMD_CGROUP=/init.scope\n_SYSTEMD_UNIT=init.scope\n_SYSTEMD_SLICE=-.slice\nCODE_FILE=<stdin>\nCODE_LINE=1\nCODE_FUNC=<module>\nSYSLOG_IDENTIFIER=python3\n_COMM=python3\n_EXE=/usr/bin/python3.12\n_CMDLINE=python3\nMESSAGE\n\x13\x00\x00\x00\x00\x00\x00\x00foo\nbar\n\n\nasda\nasda\n_PID=2763\n_SOURCE_REALTIME_TIMESTAMP=1729698775704375\n\n",
[]int64{1729698775704404000},
"{\"E\":\"JobStateChanged\",\"_BOOT_ID\":\"f778b6e2f7584a77b991a2366612a7b5\",\"_UID\":\"0\",\"_GID\":\"0\",\"_MACHINE_ID\":\"a4a970370c30a925df02a13c67167847\",\"_HOSTNAME\":\"ecd5e4555787\",\"_RUNTIME_SCOPE\":\"system\",\"_TRANSPORT\":\"journal\",\"_CAP_EFFECTIVE\":\"1ffffffffff\",\"_SYSTEMD_CGROUP\":\"/init.scope\",\"_SYSTEMD_UNIT\":\"init.scope\",\"_SYSTEMD_SLICE\":\"-.slice\",\"CODE_FILE\":\"\\u003cstdin>\",\"CODE_LINE\":\"1\",\"CODE_FUNC\":\"\\u003cmodule>\",\"SYSLOG_IDENTIFIER\":\"python3\",\"_COMM\":\"python3\",\"_EXE\":\"/usr/bin/python3.12\",\"_CMDLINE\":\"python3\",\"_msg\":\"foo\\nbar\\n\\n\\nasda\\nasda\",\"_PID\":\"2763\",\"_SOURCE_REALTIME_TIMESTAMP\":\"1729698775704375\"}",
)
// Parse binary data with trailing newline
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x14\x00\x00\x00\x00\x00\x00\x00foo\nbar\n\n\nasda\nasda\n\n_PID=2763\n\n",
[]int64{1729698775704404000},
`{"_CMDLINE":"python3","_msg":"foo\nbar\n\n\nasda\nasda\n","_PID":"2763"}`,
)
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x00\x00\x00\x00\x00\x00\x00\x00\n_PID=2763\n\n",
[]int64{1729698775704404000},
`{"_CMDLINE":"python3","_PID":"2763"}`,
)
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x0A\x00\x00\x00\x00\x00\x00\x00123456789\n\n_PID=2763\n\n",
[]int64{1729698775704404000},
`{"_CMDLINE":"python3","_msg":"123456789\n","_PID":"2763"}`,
)
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x0A\x00\x00\x00\x00\x00\x00\x001234567890\n_PID=2763\n\n",
[]int64{1729698775704404000},
`{"_CMDLINE":"python3","_msg":"1234567890","_PID":"2763"}`,
)
// Empty field name must be ignored
f("__REALTIME_TIMESTAMP=91723819283\na=b\n=Test message", nil, "")
f("__REALTIME_TIMESTAMP=91723819284\nMESSAGE=Test message2\n\n__REALTIME_TIMESTAMP=91723819283\n=Test message\n", []int64{91723819284000}, `{"_msg":"Test message2"}`)
// field name starting with number must be ignored
f("__REALTIME_TIMESTAMP=91723819283\n1incorrect=Test message\n\n__REALTIME_TIMESTAMP=91723819284\nMESSAGE=Test message2\n\n", []int64{91723819284000}, `{"_msg":"Test message2"}`)
// field name exceeding 64 bytes limit must be ignored
f("__REALTIME_TIMESTAMP=91723819283\ntoolooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooongcorrecooooooooooooong=Test message\n", nil, "")
// field name with invalid chars must be ignored
f("__REALTIME_TIMESTAMP=91723819283\nbadC!@$!@$as=Test message\n", nil, "")
}
func TestPushJournald_Failure(t *testing.T) {
f := func(data string) {
t.Helper()
tlp := &insertutil.TestLogMessageProcessor{}
r, err := http.NewRequest("GET", "https://foo.bar/baz", nil)
if err != nil {
t.Fatalf("cannot create request: %s", err)
}
cp, err := getCommonParams(r)
if err != nil {
t.Fatalf("cannot create commonParams: %s", err)
}
buf := bytes.NewBufferString(data)
if err := processStreamInternal("test", buf, tlp, cp); err == nil {
t.Fatalf("expecting non-nil error")
}
}
// too short binary encoded message
f("__CURSOR=s=e0afe8412a6a49d2bfcf66aa7927b588;i=1f06;b=f778b6e2f7584a77b991a2366612a7b5;m=300bdfd420;t=62526e1182354;x=930dc44b370963b7\n__REALTIME_TIMESTAMP=1729698775704404\nMESSAGE\n\x13\x00\x00\x00\x00\x00\x00\x00foo\nbar\n\n\nasdaasd")
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x00\x00\x00\x00\x00\x00\x00\x00_PID=2763\n\n")
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x0A\x00\x00\x00\x00\x00\x00\x001234567890_PID=2763\n\n")
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x0A\x00\x00\x00\x00\x00\x00\x00123456789\n_PID=2763\n\n")
// too long binary encoded message
f("__CURSOR=s=e0afe8412a6a49d2bfcf66aa7927b588;i=1f06;b=f778b6e2f7584a77b991a2366612a7b5;m=300bdfd420;t=62526e1182354;x=930dc44b370963b7\n__REALTIME_TIMESTAMP=1729698775704404\nMESSAGE\n\x13\x00\x00\x00\x00\x00\x00\x00foo\nbar\n\n\nasdaasdakljlsfd")
}


@@ -1,82 +0,0 @@
package journald
import (
"bytes"
"encoding/binary"
"fmt"
"strings"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func BenchmarkIsValidFieldName(b *testing.B) {
b.ReportAllocs()
b.SetBytes(int64(len(benchmarkFields)))
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
for _, field := range benchmarkFields {
if !isValidFieldName(field) {
panic(fmt.Errorf("cannot validate field %q", field))
}
}
}
})
}
var benchmarkFields = strings.Split(
"E,_BOOT_ID,_UID,_GID,_MACHINE_ID,_HOSTNAME,_RUNTIME_SCOPE,_TRANSPORT,_CAP_EFFECTIVE,_SYSTEMD_CGROUP,_SYSTEMD_UNIT,"+
"_SYSTEMD_SLICE,CODE_FILE,CODE_LINE,CODE_FUNC,SYSLOG_IDENTIFIER,_COMM,_EXE,_CMDLINE,MESSAGE,_PID,_SOURCE_REALTIME_TIMESTAMP,_REALTIME_TIMESTAMP",
",")
func BenchmarkPushJournaldPerformance(b *testing.B) {
cp := &insertutil.CommonParams{
TimeFields: []string{"__REALTIME_TIMESTAMP"},
MsgFields: []string{"MESSAGE"},
}
const dataChunkSize = 1024 * 1024
data := generateJournaldData(dataChunkSize)
b.ReportAllocs()
b.SetBytes(int64(len(data)))
b.RunParallel(func(pb *testing.PB) {
r := &bytes.Reader{}
blp := &insertutil.BenchmarkLogMessageProcessor{}
for pb.Next() {
r.Reset(data)
if err := processStreamInternal("performance_test", r, blp, cp); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}
})
}
func generateJournaldData(size int) []byte {
var buf []byte
timestamp := time.Now().UnixMicro()
binaryMsg := []byte("binary message data for performance test")
var sizeBuf [8]byte
for len(buf) < size {
timestamp++
var entry string
// Generate a mix of simple and binary messages
if timestamp%10 == 0 {
// Generate binary message
binary.LittleEndian.PutUint64(sizeBuf[:], uint64(len(binaryMsg)))
entry = fmt.Sprintf("__REALTIME_TIMESTAMP=%d\nMESSAGE\n%s%s\n\n",
timestamp,
sizeBuf[:],
binaryMsg,
)
} else {
// Generate simple message
entry = fmt.Sprintf("__REALTIME_TIMESTAMP=%d\nMESSAGE=Performance test message %d\n\n", timestamp, timestamp)
}
buf = append(buf, entry...)
}
return buf
}


@@ -1,123 +0,0 @@
package jsonline
import (
"fmt"
"io"
"net/http"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
// RequestHandler processes jsonline insert requests
func RequestHandler(w http.ResponseWriter, r *http.Request) {
startTime := time.Now()
w.Header().Add("Content-Type", "application/json")
if r.Method != "POST" {
w.WriteHeader(http.StatusMethodNotAllowed)
return
}
requestsTotal.Inc()
cp, err := insertutil.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
encoding := r.Header.Get("Content-Encoding")
reader, err := protoparserutil.GetUncompressedReader(r.Body, encoding)
if err != nil {
logger.Errorf("cannot decode jsonline request: %s", err)
return
}
defer protoparserutil.PutUncompressedReader(reader)
lmp := cp.NewLogMessageProcessor("jsonline", true)
streamName := fmt.Sprintf("remoteAddr=%s, requestURI=%q", httpserver.GetQuotedRemoteAddr(r), r.RequestURI)
err = processStreamInternal(streamName, reader, cp.TimeFields, cp.MsgFields, lmp)
lmp.MustClose()
if err != nil {
httpserver.Errorf(w, r, "cannot process jsonline request; error: %s", err)
return
}
requestDuration.UpdateDuration(startTime)
}
func processStreamInternal(streamName string, r io.Reader, timeFields, msgFields []string, lmp insertutil.LogMessageProcessor) error {
wcr := writeconcurrencylimiter.GetReader(r)
defer writeconcurrencylimiter.PutReader(wcr)
lr := insertutil.NewLineReader(streamName, wcr)
n := 0
errors := 0
var lastError error
for {
ok, err := readLine(lr, timeFields, msgFields, lmp)
wcr.DecConcurrency()
if err != nil {
lastError = err
errors++
logger.Warnf("jsonline: cannot read line #%d in /jsonline request: %s", n, err)
}
if !ok {
break
}
n++
}
errorsTotal.Add(errors)
if errors > 0 && n == errors {
// Return an error if no logs were processed and there were errors
return lastError
}
return nil
}
func readLine(lr *insertutil.LineReader, timeFields, msgFields []string, lmp insertutil.LogMessageProcessor) (bool, error) {
var line []byte
for len(line) == 0 {
if !lr.NextLine() {
err := lr.Err()
return false, err
}
line = lr.Line
}
p := logstorage.GetJSONParser()
defer logstorage.PutJSONParser(p)
if err := p.ParseLogMessage(line); err != nil {
return true, fmt.Errorf("%s; line contents: %q", err, line)
}
ts, err := insertutil.ExtractTimestampFromFields(timeFields, p.Fields)
if err != nil {
return true, fmt.Errorf("%s; line contents: %q", err, line)
}
logstorage.RenameField(p.Fields, msgFields, "_msg")
lmp.AddRow(ts, p.Fields, nil)
return true, nil
}
var (
requestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/jsonline"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/jsonline"}`)
requestDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/jsonline"}`)
)


@@ -1,97 +0,0 @@
package jsonline
import (
"bytes"
"strings"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestProcessStreamInternalSuccess(t *testing.T) {
f := func(data, timeField, msgField string, timestampsExpected []int64, resultExpected string) {
t.Helper()
timeFields := []string{timeField}
msgFields := []string{msgField}
tlp := &insertutil.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
if err := processStreamInternal("test", r, timeFields, msgFields, tlp); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
data := `{"@timestamp":"2023-06-06T04:48:11.735Z","log":{"offset":71770,"file":{"path":"/var/log/auth.log"}},"message":"foobar"}
{"@timestamp":"2023-06-06T04:48:12.735+01:00","message":"baz"}
{"message":"xyz","@timestamp":"2023-06-06 04:48:13.735Z","x":"y"}
`
timeField := "@timestamp"
msgField := "message"
timestampsExpected := []int64{1686026891735000000, 1686023292735000000, 1686026893735000000}
resultExpected := `{"log.offset":"71770","log.file.path":"/var/log/auth.log","_msg":"foobar"}
{"_msg":"baz"}
{"_msg":"xyz","x":"y"}`
f(data, timeField, msgField, timestampsExpected, resultExpected)
// Non-existing msgField
data = `{"@timestamp":"2023-06-06T04:48:11.735Z","log":{"offset":71770,"file":{"path":"/var/log/auth.log"}},"message":"foobar"}
{"@timestamp":"2023-06-06T04:48:12.735+01:00","message":"baz"}
`
timeField = "@timestamp"
msgField = "foobar"
timestampsExpected = []int64{1686026891735000000, 1686023292735000000}
resultExpected = `{"log.offset":"71770","log.file.path":"/var/log/auth.log","message":"foobar"}
{"message":"baz"}`
f(data, timeField, msgField, timestampsExpected, resultExpected)
// invalid lines among valid lines
data = `
dsfodmasd
{"time":"2023-06-06T04:48:11.735Z","log":{"offset":71770,"file":{"path":"/var/log/auth.log"}},"message":"foobar"}
invalid line
{"time":"2023-06-06T04:48:12.735+01:00","message":"baz"}
asbsdf
`
timeField = "time"
msgField = "message"
timestampsExpected = []int64{1686026891735000000, 1686023292735000000}
resultExpected = `{"log.offset":"71770","log.file.path":"/var/log/auth.log","_msg":"foobar"}
{"_msg":"baz"}`
f(data, timeField, msgField, timestampsExpected, resultExpected)
}
func TestProcessStreamInternalFailure(t *testing.T) {
f := func(data string) {
t.Helper()
tlp := &insertutil.TestLogMessageProcessor{}
r := strings.NewReader(data)
if err := processStreamInternal("test", r, []string{"time"}, nil, tlp); err == nil {
t.Fatalf("expected error, got nil")
}
if err := tlp.Verify(nil, ""); err != nil {
t.Fatalf("unexpected error: %s", err)
}
}
// invalid json
f("foobar")
f(`foo
bar`)
f(`
foo
`)
// invalid timestamp field
f(`{"time":"foobar"}`)
}


@@ -1,86 +0,0 @@
package loki
import (
"flag"
"fmt"
"net/http"
"strconv"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
var disableMessageParsing = flag.Bool("loki.disableMessageParsing", false, "Whether to disable automatic parsing of JSON-encoded log fields inside Loki log message into distinct log fields")
// RequestHandler processes Loki insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
case "/insert/loki/api/v1/push":
handleInsert(r, w)
return true
case "/insert/loki/ready":
// See https://grafana.com/docs/loki/latest/api/#identify-ready-loki-instance
w.WriteHeader(http.StatusOK)
w.Write([]byte("ready"))
return true
default:
return false
}
}
// See https://grafana.com/docs/loki/latest/api/#push-log-entries-to-loki
func handleInsert(r *http.Request, w http.ResponseWriter) {
contentType := r.Header.Get("Content-Type")
switch contentType {
case "application/json":
handleJSON(r, w)
default:
// Protobuf request body should be handled by default according to https://grafana.com/docs/loki/latest/api/#push-log-entries-to-loki
handleProtobuf(r, w)
}
}
type commonParams struct {
cp *insertutil.CommonParams
// Whether to parse JSON inside plaintext log message.
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8486
parseMessage bool
}
func getCommonParams(r *http.Request) (*commonParams, error) {
cp, err := insertutil.GetCommonParams(r)
if err != nil {
return nil, err
}
// If parsed tenant is (0,0) it is likely to be default tenant
// Try parsing tenant from Loki headers
if cp.TenantID.AccountID == 0 && cp.TenantID.ProjectID == 0 {
org := r.Header.Get("X-Scope-OrgID")
if org != "" {
tenantID, err := logstorage.ParseTenantID(org)
if err != nil {
return nil, err
}
cp.TenantID = tenantID
}
}
parseMessage := !*disableMessageParsing
if rv := httputil.GetRequestValue(r, "disable_message_parsing", "VL-Loki-Disable-Message-Parsing"); rv != "" {
bv, err := strconv.ParseBool(rv)
if err != nil {
return nil, fmt.Errorf("cannot parse disable_message_parsing=%q: %s", rv, err)
}
parseMessage = !bv
}
return &commonParams{
cp: cp,
parseMessage: parseMessage,
}, nil
}


@@ -1,225 +0,0 @@
package loki
import (
"fmt"
"net/http"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var maxRequestSize = flagutil.NewBytes("loki.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single Loki request")
var parserPool fastjson.ParserPool
func handleJSON(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsJSONTotal.Inc()
cp, err := getCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
return
}
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.cp.NewLogMessageProcessor("loki_json", false)
useDefaultStreamFields := len(cp.cp.StreamFields) == 0
err := parseJSONRequest(data, lmp, cp.cp.MsgFields, useDefaultStreamFields, cp.parseMessage)
lmp.MustClose()
return err
})
if err != nil {
httpserver.Errorf(w, r, "cannot read Loki json data: %s", err)
return
}
// update requestJSONDuration only for successfully parsed requests
// There is no need in updating requestJSONDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
requestJSONDuration.UpdateDuration(startTime)
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8505
w.WriteHeader(http.StatusNoContent)
}
var (
requestsJSONTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="json"}`)
requestJSONDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="json"}`)
)
func parseJSONRequest(data []byte, lmp insertutil.LogMessageProcessor, msgFields []string, useDefaultStreamFields, parseMessage bool) error {
p := parserPool.Get()
defer parserPool.Put(p)
v, err := p.ParseBytes(data)
if err != nil {
return fmt.Errorf("cannot parse JSON request body: %w", err)
}
streamsV := v.Get("streams")
if streamsV == nil {
return fmt.Errorf("missing `streams` item in the parsed JSON")
}
streams, err := streamsV.Array()
if err != nil {
return fmt.Errorf("`streams` item in the parsed JSON must contain an array; got %q", streamsV)
}
fields := getFields()
defer putFields(fields)
var msgParser *logstorage.JSONParser
if parseMessage {
msgParser = logstorage.GetJSONParser()
defer logstorage.PutJSONParser(msgParser)
}
currentTimestamp := time.Now().UnixNano()
for _, stream := range streams {
// populate common labels from `stream` dict
fields.fields = fields.fields[:0]
labelsV := stream.Get("stream")
var labels *fastjson.Object
if labelsV != nil {
o, err := labelsV.Object()
if err != nil {
return fmt.Errorf("`stream` item in the parsed JSON must contain an object; got %q", labelsV)
}
labels = o
}
labels.Visit(func(k []byte, v *fastjson.Value) {
vStr, errLocal := v.StringBytes()
if errLocal != nil {
err = fmt.Errorf("unexpected label value type for %q:%q; want string", k, v)
return
}
fields.fields = append(fields.fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(k),
Value: bytesutil.ToUnsafeString(vStr),
})
})
if err != nil {
return fmt.Errorf("error when parsing `stream` object: %w", err)
}
// populate messages from `values` array
linesV := stream.Get("values")
if linesV == nil {
return fmt.Errorf("missing `values` item in the parsed `stream` object %q", stream)
}
lines, err := linesV.Array()
if err != nil {
return fmt.Errorf("`values` item in the parsed JSON must contain an array; got %q", linesV)
}
commonFieldsLen := len(fields.fields)
for _, line := range lines {
fields.fields = fields.fields[:commonFieldsLen]
lineA, err := line.Array()
if err != nil {
return fmt.Errorf("unexpected contents of `values` item; want array; got %q", line)
}
if len(lineA) < 2 || len(lineA) > 3 {
return fmt.Errorf("unexpected number of values in `values` item array %q; got %d want 2 or 3", line, len(lineA))
}
// parse timestamp
timestamp, err := lineA[0].StringBytes()
if err != nil {
return fmt.Errorf("unexpected log timestamp type for %q; want string", lineA[0])
}
ts, err := parseLokiTimestamp(bytesutil.ToUnsafeString(timestamp))
if err != nil {
return fmt.Errorf("cannot parse log timestamp %q: %w", timestamp, err)
}
if ts == 0 {
ts = currentTimestamp
}
// parse structured metadata - see https://grafana.com/docs/loki/latest/reference/loki-http-api/#ingest-logs
if len(lineA) > 2 {
structuredMetadata, err := lineA[2].Object()
if err != nil {
return fmt.Errorf("unexpected structured metadata type for %q; want JSON object", lineA[2])
}
structuredMetadata.Visit(func(k []byte, v *fastjson.Value) {
vStr, errLocal := v.StringBytes()
if errLocal != nil {
err = fmt.Errorf("unexpected label value type for %q:%q; want string", k, v)
return
}
fields.fields = append(fields.fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(k),
Value: bytesutil.ToUnsafeString(vStr),
})
})
if err != nil {
return fmt.Errorf("error when parsing `structuredMetadata` object: %w", err)
}
}
// parse log message
msg, err := lineA[1].StringBytes()
if err != nil {
return fmt.Errorf("unexpected log message type for %q; want string", lineA[1])
}
allowMsgRenaming := false
fields.fields, allowMsgRenaming = addMsgField(fields.fields, msgParser, bytesutil.ToUnsafeString(msg))
var streamFields []logstorage.Field
if useDefaultStreamFields {
streamFields = fields.fields[:commonFieldsLen]
}
if allowMsgRenaming {
logstorage.RenameField(fields.fields[commonFieldsLen:], msgFields, "_msg")
}
lmp.AddRow(ts, fields.fields, streamFields)
}
}
return nil
}
func addMsgField(dst []logstorage.Field, msgParser *logstorage.JSONParser, msg string) ([]logstorage.Field, bool) {
	if msgParser != nil && len(msg) >= 2 && msg[0] == '{' && msg[len(msg)-1] == '}' {
		if err := msgParser.ParseLogMessage(bytesutil.ToUnsafeBytes(msg)); err == nil {
			return append(dst, msgParser.Fields...), true
		}
	}
	return append(dst, logstorage.Field{
		Name:  "_msg",
		Value: msg,
	}), false
}
func parseLokiTimestamp(s string) (int64, error) {
if s == "" {
// Special case - an empty timestamp must be substituted with the current time by the caller.
return 0, nil
}
return insertutil.ParseUnixTimestamp(s)
}


@@ -1,169 +0,0 @@
package loki
import (
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestParseJSONRequest_Failure(t *testing.T) {
f := func(s string) {
t.Helper()
tlp := &insertutil.TestLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err == nil {
t.Fatalf("expecting non-nil error")
}
if err := tlp.Verify(nil, ""); err != nil {
t.Fatalf("unexpected error: %s", err)
}
}
f(``)
// Invalid json
f(`{}`)
f(`[]`)
f(`"foo"`)
f(`123`)
// invalid type for `streams` item
f(`{"streams":123}`)
// Missing `values` item
f(`{"streams":[{}]}`)
// Invalid type for `values` item
f(`{"streams":[{"values":"foobar"}]}`)
// Invalid type for `stream` item
f(`{"streams":[{"stream":[],"values":[]}]}`)
// Invalid type for `values` individual item
f(`{"streams":[{"values":[123]}]}`)
// Invalid length of `values` individual item
f(`{"streams":[{"values":[[]]}]}`)
f(`{"streams":[{"values":[["123"]]}]}`)
f(`{"streams":[{"values":[["123","456","789","8123"]]}]}`)
// Invalid type for timestamp inside `values` individual item
f(`{"streams":[{"values":[[123,"456"]}]}`)
// Invalid type for log message
f(`{"streams":[{"values":[["123",1234]]}]}`)
// invalid structured metadata type
f(`{"streams":[{"values":[["1577836800000000001", "foo bar", ["metadata_1", "md_value"]]]}]}`)
// structured metadata with unexpected value type
f(`{"streams":[{"values":[["1577836800000000001", "foo bar", {"metadata_1": 1}]] }]}`)
}
func TestParseJSONRequest_Success(t *testing.T) {
f := func(s string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &insertutil.TestLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
// Empty streams
f(`{"streams":[]}`, nil, ``)
f(`{"streams":[{"values":[]}]}`, nil, ``)
f(`{"streams":[{"stream":{},"values":[]}]}`, nil, ``)
f(`{"streams":[{"stream":{"foo":"bar"},"values":[]}]}`, nil, ``)
// Empty stream labels
f(`{"streams":[{"values":[["1577836800000000001", "foo bar"]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
f(`{"streams":[{"stream":{},"values":[["1577836800000000001", "foo bar"]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
// Non-empty stream labels
f(`{"streams":[{"stream":{
"label1": "value1",
"label2": "value2"
},"values":[
["1577836800000000001", "foo bar"],
["1686026123.62", "abc"],
["147.78369e9", "foobar"]
]}]}`, []int64{1577836800000000001, 1686026123620000000, 147783690000000000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
{"label1":"value1","label2":"value2","_msg":"abc"}
{"label1":"value1","label2":"value2","_msg":"foobar"}`)
// Multiple streams
f(`{
"streams": [
{
"stream": {
"foo": "bar",
"a": "b"
},
"values": [
["1577836800000000001", "foo bar"],
["1577836900005000002", "abc"]
]
},
{
"stream": {
"x": "y"
},
"values": [
["1877836900005000002", "yx"]
]
}
]
}`, []int64{1577836800000000001, 1577836900005000002, 1877836900005000002}, `{"foo":"bar","a":"b","_msg":"foo bar"}
{"foo":"bar","a":"b","_msg":"abc"}
{"x":"y","_msg":"yx"}`)
// values with metadata
f(`{"streams":[{"values":[["1577836800000000001", "foo bar", {"metadata_1": "md_value"}]]}]}`, []int64{1577836800000000001}, `{"metadata_1":"md_value","_msg":"foo bar"}`)
f(`{"streams":[{"values":[["1577836800000000001", "foo bar", {}]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
}
func TestParseJSONRequest_ParseMessage(t *testing.T) {
f := func(s string, msgFields []string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &insertutil.TestLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, msgFields, false, true); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
f(`{
"streams": [
{
"stream": {
"foo": "bar",
"a": "b"
},
"values": [
["1577836800000000001", "{\"user_id\":\"123\"}"],
["1577836900005000002", "abc", {"trace_id":"pqw"}],
["1577836900005000003", "{def}"]
]
},
{
"stream": {
"x": "y"
},
"values": [
["1877836900005000004", "{\"trace_id\":\"111\",\"parent_id\":\"abc\"}"]
]
}
]
}`, []string{"a", "trace_id"}, []int64{1577836800000000001, 1577836900005000002, 1577836900005000003, 1877836900005000004}, `{"foo":"bar","a":"b","user_id":"123"}
{"foo":"bar","a":"b","trace_id":"pqw","_msg":"abc"}
{"foo":"bar","a":"b","_msg":"{def}"}
{"x":"y","_msg":"111","parent_id":"abc"}`)
}


@@ -1,78 +0,0 @@
package loki
import (
"fmt"
"strconv"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func BenchmarkParseJSONRequest(b *testing.B) {
for _, streams := range []int{5, 10} {
for _, rows := range []int{100, 1000} {
for _, labels := range []int{10, 50} {
b.Run(fmt.Sprintf("streams_%d/rows_%d/labels_%d", streams, rows, labels), func(b *testing.B) {
benchmarkParseJSONRequest(b, streams, rows, labels)
})
}
}
}
}
func benchmarkParseJSONRequest(b *testing.B, streams, rows, labels int) {
blp := &insertutil.BenchmarkLogMessageProcessor{}
b.ReportAllocs()
b.SetBytes(int64(streams * rows))
b.RunParallel(func(pb *testing.PB) {
data := getJSONBody(streams, rows, labels)
for pb.Next() {
if err := parseJSONRequest(data, blp, nil, false, true); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}
})
}
func getJSONBody(streams, rows, labels int) []byte {
body := append([]byte{}, `{"streams":[`...)
now := time.Now().UnixNano()
valuePrefix := fmt.Sprintf(`["%d","value_`, now)
for i := 0; i < streams; i++ {
body = append(body, `{"stream":{`...)
for j := 0; j < labels; j++ {
body = append(body, `"label_`...)
body = strconv.AppendInt(body, int64(j), 10)
body = append(body, `":"value_`...)
body = strconv.AppendInt(body, int64(j), 10)
body = append(body, '"')
if j < labels-1 {
body = append(body, ',')
}
}
body = append(body, `}, "values":[`...)
for j := 0; j < rows; j++ {
body = append(body, valuePrefix...)
body = strconv.AppendInt(body, int64(j), 10)
body = append(body, `"]`...)
if j < rows-1 {
body = append(body, ',')
}
}
body = append(body, `]}`...)
if i < streams-1 {
body = append(body, ',')
}
}
body = append(body, `]}`...)
return body
}


@@ -1,218 +0,0 @@
package loki
import (
"fmt"
"net/http"
"strconv"
"strings"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
var (
pushReqsPool sync.Pool
)
func handleProtobuf(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsProtobufTotal.Inc()
cp, err := getCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
return
}
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
encoding := r.Header.Get("Content-Encoding")
if encoding == "" {
// Loki protocol uses snappy compression by default.
// See https://grafana.com/docs/loki/latest/reference/loki-http-api/#ingest-logs
encoding = "snappy"
}
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.cp.NewLogMessageProcessor("loki_protobuf", false)
useDefaultStreamFields := len(cp.cp.StreamFields) == 0
err := parseProtobufRequest(data, lmp, cp.cp.MsgFields, useDefaultStreamFields, cp.parseMessage)
lmp.MustClose()
return err
})
if err != nil {
httpserver.Errorf(w, r, "cannot read Loki protobuf data: %s", err)
return
}
// update requestProtobufDuration only for successfully parsed requests.
// There is no need to update requestProtobufDuration for failed requests,
// since their timings are usually much smaller than the timings of successfully parsed requests.
requestProtobufDuration.UpdateDuration(startTime)
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8505
w.WriteHeader(http.StatusNoContent)
}
var (
requestsProtobufTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="protobuf"}`)
requestProtobufDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="protobuf"}`)
)
func parseProtobufRequest(data []byte, lmp insertutil.LogMessageProcessor, msgFields []string, useDefaultStreamFields, parseMessage bool) error {
req := getPushRequest()
defer putPushRequest(req)
err := req.UnmarshalProtobuf(data)
if err != nil {
return fmt.Errorf("cannot parse request body: %w", err)
}
fields := getFields()
defer putFields(fields)
var msgParser *logstorage.JSONParser
if parseMessage {
msgParser = logstorage.GetJSONParser()
defer logstorage.PutJSONParser(msgParser)
}
streams := req.Streams
currentTimestamp := time.Now().UnixNano()
for i := range streams {
stream := &streams[i]
// stream.Labels contains labels for the stream.
// Labels are the same for all entries in the stream.
fields.fields, err = parsePromLabels(fields.fields[:0], stream.Labels)
if err != nil {
return fmt.Errorf("cannot parse stream labels %q: %w", stream.Labels, err)
}
commonFieldsLen := len(fields.fields)
entries := stream.Entries
for j := range entries {
e := &entries[j]
fields.fields = fields.fields[:commonFieldsLen]
for _, lp := range e.StructuredMetadata {
fields.fields = append(fields.fields, logstorage.Field{
Name: lp.Name,
Value: lp.Value,
})
}
allowMsgRenaming := false
fields.fields, allowMsgRenaming = addMsgField(fields.fields, msgParser, e.Line)
ts := e.Timestamp.UnixNano()
if ts == 0 {
ts = currentTimestamp
}
var streamFields []logstorage.Field
if useDefaultStreamFields {
streamFields = fields.fields[:commonFieldsLen]
}
if allowMsgRenaming {
logstorage.RenameField(fields.fields[commonFieldsLen:], msgFields, "_msg")
}
lmp.AddRow(ts, fields.fields, streamFields)
}
}
return nil
}
func getFields() *fields {
v := fieldsPool.Get()
if v == nil {
return &fields{}
}
return v.(*fields)
}
func putFields(f *fields) {
f.fields = f.fields[:0]
fieldsPool.Put(f)
}
var fieldsPool sync.Pool
type fields struct {
fields []logstorage.Field
}
// parsePromLabels parses stream labels in Prometheus text exposition format from s, appends them to dst and returns the result.
//
// See test data of promtail for examples: https://github.com/grafana/loki/blob/a24ef7b206e0ca63ee74ca6ecb0a09b745cd2258/pkg/push/types_test.go
func parsePromLabels(dst []logstorage.Field, s string) ([]logstorage.Field, error) {
// Make sure s is wrapped into `{...}`
s = strings.TrimSpace(s)
if len(s) < 2 {
return nil, fmt.Errorf("too short string to parse: %q", s)
}
if s[0] != '{' {
return nil, fmt.Errorf("missing `{` at the beginning of %q", s)
}
if s[len(s)-1] != '}' {
return nil, fmt.Errorf("missing `}` at the end of %q", s)
}
s = s[1 : len(s)-1]
for len(s) > 0 {
// Parse label name
n := strings.IndexByte(s, '=')
if n < 0 {
return nil, fmt.Errorf("cannot find `=` char for label value at %s", s)
}
name := s[:n]
s = s[n+1:]
// Parse label value
qs, err := strconv.QuotedPrefix(s)
if err != nil {
return nil, fmt.Errorf("cannot parse value for label %q at %s: %w", name, s, err)
}
s = s[len(qs):]
value, err := strconv.Unquote(qs)
if err != nil {
return nil, fmt.Errorf("cannot unquote value %q for label %q: %w", qs, name, err)
}
// Append the found field to dst.
dst = append(dst, logstorage.Field{
Name: name,
Value: value,
})
// Check whether there are other labels remaining
if len(s) == 0 {
break
}
if !strings.HasPrefix(s, ",") {
return nil, fmt.Errorf("missing `,` char at %s", s)
}
s = s[1:]
s = strings.TrimPrefix(s, " ")
}
return dst, nil
}
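The parsing loop in parsePromLabels above can be sketched as a standalone helper. `parseLabelsLite` below is a hypothetical, simplified version (map output instead of `[]logstorage.Field`, coarser error messages) showing the core idea: find `=`, take the quoted value with strconv.QuotedPrefix, and decode escapes with strconv.Unquote:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLabelsLite parses `{k="v", ...}` into a map, mirroring the
// QuotedPrefix/Unquote loop of parsePromLabels.
func parseLabelsLite(s string) (map[string]string, error) {
	s = strings.TrimSpace(s)
	if len(s) < 2 || s[0] != '{' || s[len(s)-1] != '}' {
		return nil, fmt.Errorf("labels must be wrapped into {...}: %q", s)
	}
	s = s[1 : len(s)-1]
	m := map[string]string{}
	for len(s) > 0 {
		n := strings.IndexByte(s, '=')
		if n < 0 {
			return nil, fmt.Errorf("missing `=` at %q", s)
		}
		name := s[:n]
		s = s[n+1:]
		// QuotedPrefix returns the leading quoted string, quotes included.
		qs, err := strconv.QuotedPrefix(s)
		if err != nil {
			return nil, fmt.Errorf("cannot parse value for label %q: %w", name, err)
		}
		s = s[len(qs):]
		value, err := strconv.Unquote(qs)
		if err != nil {
			return nil, fmt.Errorf("cannot unquote value %q: %w", qs, err)
		}
		m[name] = value
		s = strings.TrimPrefix(strings.TrimPrefix(s, ","), " ")
	}
	return m, nil
}

func main() {
	m, err := parseLabelsLite(`{foo="bar", baz="x\n"}`)
	fmt.Println(m["foo"], err) // bar <nil>
}
```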
func getPushRequest() *PushRequest {
v := pushReqsPool.Get()
if v == nil {
return &PushRequest{}
}
return v.(*PushRequest)
}
func putPushRequest(req *PushRequest) {
req.reset()
pushReqsPool.Put(req)
}
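The getPushRequest/putPushRequest pair above follows the usual sync.Pool discipline: allocate on a pool miss, and reset state before Put so a reused object never leaks rows from the previous request. A minimal sketch of the same pattern, using a hypothetical `buffer` type in place of `PushRequest`:

```go
package main

import (
	"fmt"
	"sync"
)

type buffer struct{ data []byte }

var bufPool sync.Pool

// getBuffer returns a pooled buffer, allocating a fresh one on a miss.
func getBuffer() *buffer {
	v := bufPool.Get()
	if v == nil {
		return &buffer{}
	}
	return v.(*buffer)
}

// putBuffer resets the buffer (keeping its capacity) before returning
// it to the pool, so stale contents never reach the next user.
func putBuffer(b *buffer) {
	b.data = b.data[:0]
	bufPool.Put(b)
}

func main() {
	b := getBuffer()
	b.data = append(b.data, "hello"...)
	putBuffer(b)
	b2 := getBuffer() // may reuse b's backing array
	fmt.Println(len(b2.data)) // 0
}
```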


@@ -1,218 +0,0 @@
package loki
import (
"fmt"
"strings"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
type testLogMessageProcessor struct {
pr PushRequest
}
func (tlp *testLogMessageProcessor) AddRow(timestamp int64, fields, streamFields []logstorage.Field) {
if streamFields != nil {
panic(fmt.Errorf("unexpected non-nil streamFields: %v", streamFields))
}
msg := ""
for _, f := range fields {
if f.Name == "_msg" {
msg = f.Value
}
}
var a []string
for _, f := range fields {
if f.Name == "_msg" {
continue
}
item := fmt.Sprintf("%s=%q", f.Name, f.Value)
a = append(a, item)
}
labels := "{" + strings.Join(a, ", ") + "}"
tlp.pr.Streams = append(tlp.pr.Streams, Stream{
Labels: labels,
Entries: []Entry{
{
Timestamp: time.Unix(0, timestamp),
Line: strings.Clone(msg),
},
},
})
}
func (tlp *testLogMessageProcessor) MustClose() {
}
func TestParseProtobufRequest_Success(t *testing.T) {
f := func(s string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &testLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if len(tlp.pr.Streams) != len(timestampsExpected) {
t.Fatalf("unexpected number of streams; got %d; want %d", len(tlp.pr.Streams), len(timestampsExpected))
}
data := tlp.pr.MarshalProtobuf(nil)
tlp2 := &insertutil.TestLogMessageProcessor{}
if err := parseProtobufRequest(data, tlp2, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp2.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
// Empty streams
f(`{"streams":[]}`, nil, ``)
f(`{"streams":[{"values":[]}]}`, nil, ``)
f(`{"streams":[{"stream":{},"values":[]}]}`, nil, ``)
f(`{"streams":[{"stream":{"foo":"bar"},"values":[]}]}`, nil, ``)
// Empty stream labels
f(`{"streams":[{"values":[["1577836800000000001", "foo bar"]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
f(`{"streams":[{"stream":{},"values":[["1577836800000000001", "foo bar"]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
// Non-empty stream labels
f(`{"streams":[{"stream":{
"label1": "value1",
"label2": "value2"
},"values":[
["1577836800000000001", "foo bar"],
["1477836900005000002", "abc"],
["147.78369e9", "foobar"]
]}]}`, []int64{1577836800000000001, 1477836900005000002, 147783690000000000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
{"label1":"value1","label2":"value2","_msg":"abc"}
{"label1":"value1","label2":"value2","_msg":"foobar"}`)
// Multiple streams
f(`{
"streams": [
{
"stream": {
"foo": "bar",
"a": "b"
},
"values": [
["1577836800000000001", "foo bar"],
["1577836900005000002", "abc"]
]
},
{
"stream": {
"x": "y"
},
"values": [
["1877836900005000002", "yx"]
]
}
]
}`, []int64{1577836800000000001, 1577836900005000002, 1877836900005000002}, `{"foo":"bar","a":"b","_msg":"foo bar"}
{"foo":"bar","a":"b","_msg":"abc"}
{"x":"y","_msg":"yx"}`)
}
func TestParseProtobufRequest_ParseMessage(t *testing.T) {
f := func(s string, msgFields []string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &testLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if len(tlp.pr.Streams) != len(timestampsExpected) {
t.Fatalf("unexpected number of streams; got %d; want %d", len(tlp.pr.Streams), len(timestampsExpected))
}
data := tlp.pr.MarshalProtobuf(nil)
tlp2 := &insertutil.TestLogMessageProcessor{}
if err := parseProtobufRequest(data, tlp2, msgFields, false, true); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp2.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
f(`{
"streams": [
{
"stream": {
"foo": "bar",
"a": "b"
},
"values": [
["1577836800000000001", "{\"user_id\":\"123\"}"],
["1577836900005000002", "abc", {"trace_id":"pqw"}],
["1577836900005000003", "{def}"]
]
},
{
"stream": {
"x": "y"
},
"values": [
["1877836900005000004", "{\"trace_id\":\"432\",\"parent_id\":\"qwerty\"}"]
]
}
]
}`, []string{"a", "trace_id"}, []int64{1577836800000000001, 1577836900005000002, 1577836900005000003, 1877836900005000004}, `{"foo":"bar","a":"b","user_id":"123"}
{"foo":"bar","a":"b","trace_id":"pqw","_msg":"abc"}
{"foo":"bar","a":"b","_msg":"{def}"}
{"x":"y","_msg":"432","parent_id":"qwerty"}`)
}
func TestParsePromLabels_Success(t *testing.T) {
f := func(s string) {
t.Helper()
fields, err := parsePromLabels(nil, s)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
var a []string
for _, f := range fields {
a = append(a, fmt.Sprintf("%s=%q", f.Name, f.Value))
}
result := "{" + strings.Join(a, ", ") + "}"
if result != s {
t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, s)
}
}
f("{}")
f(`{foo="bar"}`)
f(`{foo="bar", baz="x", y="z"}`)
f(`{foo="ba\"r\\z\n", a="", b="\"\\"}`)
}
func TestParsePromLabels_Failure(t *testing.T) {
f := func(s string) {
t.Helper()
fields, err := parsePromLabels(nil, s)
if err == nil {
t.Fatalf("expecting non-nil error")
}
if len(fields) > 0 {
t.Fatalf("unexpected non-empty fields: %s", fields)
}
}
f("")
f("{")
f(`{foo}`)
f(`{foo=bar}`)
f(`{foo="bar}`)
f(`{foo="ba\",r}`)
f(`{foo="bar" baz="aa"}`)
f(`foobar`)
f(`foo{bar="baz"}`)
}


@@ -1,80 +0,0 @@
package loki
import (
"fmt"
"strconv"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
)
func BenchmarkParseProtobufRequest(b *testing.B) {
for _, streams := range []int{5, 10} {
for _, rows := range []int{100, 1000} {
for _, labels := range []int{10, 50} {
b.Run(fmt.Sprintf("streams_%d/rows_%d/labels_%d", streams, rows, labels), func(b *testing.B) {
benchmarkParseProtobufRequest(b, streams, rows, labels)
})
}
}
}
}
func benchmarkParseProtobufRequest(b *testing.B, streams, rows, labels int) {
blp := &insertutil.BenchmarkLogMessageProcessor{}
b.ReportAllocs()
b.SetBytes(int64(streams * rows))
b.RunParallel(func(pb *testing.PB) {
body := getProtobufBody(streams, rows, labels)
for pb.Next() {
if err := parseProtobufRequest(body, blp, nil, false, true); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}
})
}
func getProtobufBody(streamsCount, rowsCount, labelsCount int) []byte {
var b []byte
var entries []Entry
streams := make([]Stream, streamsCount)
for i := range streams {
b = b[:0]
b = append(b, '{')
for j := 0; j < labelsCount; j++ {
b = append(b, "label_"...)
b = strconv.AppendInt(b, int64(j), 10)
b = append(b, `="value_`...)
b = strconv.AppendInt(b, int64(j), 10)
b = append(b, '"')
if j < labelsCount-1 {
b = append(b, ',')
}
}
b = append(b, '}')
labels := string(b)
var rowsBuf []byte
entriesLen := len(entries)
for j := 0; j < rowsCount; j++ {
rowsBufLen := len(rowsBuf)
rowsBuf = append(rowsBuf, "value_"...)
rowsBuf = strconv.AppendInt(rowsBuf, int64(j), 10)
entries = append(entries, Entry{
Timestamp: time.Now(),
Line: bytesutil.ToUnsafeString(rowsBuf[rowsBufLen:]),
})
}
st := &streams[i]
st.Labels = labels
st.Entries = entries[entriesLen:]
}
pr := PushRequest{
Streams: streams,
}
return pr.MarshalProtobuf(nil)
}


@@ -1,302 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: push_request.proto
// source: https://raw.githubusercontent.com/grafana/loki/main/pkg/push/push_request.proto
// Licensed under the Apache License, Version 2.0 (the "License");
// https://github.com/grafana/loki/blob/main/pkg/push/LICENSE
package loki
import (
"fmt"
"time"
"github.com/VictoriaMetrics/easyproto"
)
var mp easyproto.MarshalerPool
// PushRequest represents Loki PushRequest
//
// See https://github.com/grafana/loki/blob/ada4b7b8713385fbe9f5984a5a0aaaddf1a7b851/pkg/push/push.proto#L14
type PushRequest struct {
Streams []Stream
entriesBuf []Entry
labelPairBuf []LabelPair
}
func (pr *PushRequest) reset() {
pr.Streams = pr.Streams[:0]
pr.entriesBuf = pr.entriesBuf[:0]
pr.labelPairBuf = pr.labelPairBuf[:0]
}
// UnmarshalProtobuf unmarshals pr from protobuf message at src.
//
// pr remains valid until src is modified.
func (pr *PushRequest) UnmarshalProtobuf(src []byte) error {
pr.reset()
var err error
pr.entriesBuf, pr.labelPairBuf, err = pr.unmarshalProtobuf(pr.entriesBuf, pr.labelPairBuf, src)
return err
}
// MarshalProtobuf marshals pr to protobuf message, appends it to dst and returns the result.
func (pr *PushRequest) MarshalProtobuf(dst []byte) []byte {
m := mp.Get()
pr.marshalProtobuf(m.MessageMarshaler())
dst = m.Marshal(dst)
mp.Put(m)
return dst
}
func (pr *PushRequest) marshalProtobuf(mm *easyproto.MessageMarshaler) {
for _, s := range pr.Streams {
s.marshalProtobuf(mm.AppendMessage(1))
}
}
func (pr *PushRequest) unmarshalProtobuf(entriesBuf []Entry, labelPairBuf []LabelPair, src []byte) ([]Entry, []LabelPair, error) {
// message PushRequest {
// repeated Stream streams = 1;
// }
var err error
var fc easyproto.FieldContext
for len(src) > 0 {
src, err = fc.NextField(src)
if err != nil {
return entriesBuf, labelPairBuf, fmt.Errorf("cannot read next field in PushRequest: %w", err)
}
switch fc.FieldNum {
case 1:
data, ok := fc.MessageData()
if !ok {
return entriesBuf, labelPairBuf, fmt.Errorf("cannot read Stream data")
}
pr.Streams = append(pr.Streams, Stream{})
s := &pr.Streams[len(pr.Streams)-1]
entriesBuf, labelPairBuf, err = s.unmarshalProtobuf(entriesBuf, labelPairBuf, data)
if err != nil {
return entriesBuf, labelPairBuf, fmt.Errorf("cannot unmarshal Stream: %w", err)
}
}
}
return entriesBuf, labelPairBuf, nil
}
// Stream represents Loki stream.
//
// See https://github.com/grafana/loki/blob/ada4b7b8713385fbe9f5984a5a0aaaddf1a7b851/pkg/push/push.proto#L23
type Stream struct {
Labels string
Entries []Entry
}
func (s *Stream) marshalProtobuf(mm *easyproto.MessageMarshaler) {
mm.AppendString(1, s.Labels)
for _, e := range s.Entries {
e.marshalProtobuf(mm.AppendMessage(2))
}
}
func (s *Stream) unmarshalProtobuf(entriesBuf []Entry, labelPairBuf []LabelPair, src []byte) ([]Entry, []LabelPair, error) {
// message Stream {
// string labels = 1;
// repeated Entry entries = 2;
// }
var err error
var fc easyproto.FieldContext
entriesBufLen := len(entriesBuf)
for len(src) > 0 {
src, err = fc.NextField(src)
if err != nil {
return entriesBuf, labelPairBuf, fmt.Errorf("cannot read next field in Stream: %w", err)
}
switch fc.FieldNum {
case 1:
labels, ok := fc.String()
if !ok {
return entriesBuf, labelPairBuf, fmt.Errorf("cannot read labels")
}
s.Labels = labels
case 2:
data, ok := fc.MessageData()
if !ok {
return entriesBuf, labelPairBuf, fmt.Errorf("cannot read Entry data")
}
entriesBuf = append(entriesBuf, Entry{})
e := &entriesBuf[len(entriesBuf)-1]
labelPairBuf, err = e.unmarshalProtobuf(labelPairBuf, data)
if err != nil {
return entriesBuf, labelPairBuf, fmt.Errorf("cannot unmarshal Entry: %w", err)
}
}
}
s.Entries = entriesBuf[entriesBufLen:]
return entriesBuf, labelPairBuf, nil
}
// Entry represents Loki entry.
//
// See https://github.com/grafana/loki/blob/ada4b7b8713385fbe9f5984a5a0aaaddf1a7b851/pkg/push/push.proto#L38
type Entry struct {
Timestamp time.Time
Line string
StructuredMetadata []LabelPair
}
func (e *Entry) marshalProtobuf(mm *easyproto.MessageMarshaler) {
marshalTime(mm, 1, e.Timestamp)
mm.AppendString(2, e.Line)
for _, lp := range e.StructuredMetadata {
lp.marshalProtobuf(mm.AppendMessage(3))
}
}
func (e *Entry) unmarshalProtobuf(labelPairBuf []LabelPair, src []byte) ([]LabelPair, error) {
// message Entry {
// Timestamp timestamp = 1;
// string line = 2;
// repeated LabelPair structuredMetadata = 3;
// }
var err error
var fc easyproto.FieldContext
labelPairBufLen := len(labelPairBuf)
for len(src) > 0 {
src, err = fc.NextField(src)
if err != nil {
return labelPairBuf, fmt.Errorf("cannot read next field in Entry: %w", err)
}
switch fc.FieldNum {
case 1:
data, ok := fc.MessageData()
if !ok {
return labelPairBuf, fmt.Errorf("cannot read Timestamp data")
}
timestamp, err := unmarshalTime(data)
if err != nil {
return labelPairBuf, fmt.Errorf("cannot unmarshal Timestamp: %w", err)
}
e.Timestamp = timestamp
case 2:
line, ok := fc.String()
if !ok {
return labelPairBuf, fmt.Errorf("cannot read Line")
}
e.Line = line
case 3:
data, ok := fc.MessageData()
if !ok {
return labelPairBuf, fmt.Errorf("cannot read StructuredMetadata")
}
labelPairBuf = append(labelPairBuf, LabelPair{})
lp := &labelPairBuf[len(labelPairBuf)-1]
if err := lp.unmarshalProtobuf(data); err != nil {
return labelPairBuf, fmt.Errorf("cannot unmarshal StructuredMetadata: %w", err)
}
}
}
e.StructuredMetadata = labelPairBuf[labelPairBufLen:]
return labelPairBuf, nil
}
// LabelPair represents Loki label pair.
//
// See https://github.com/grafana/loki/blob/ada4b7b8713385fbe9f5984a5a0aaaddf1a7b851/pkg/push/push.proto#L33
type LabelPair struct {
Name string
Value string
}
func (lp *LabelPair) marshalProtobuf(mm *easyproto.MessageMarshaler) {
mm.AppendString(1, lp.Name)
mm.AppendString(2, lp.Value)
}
func (lp *LabelPair) unmarshalProtobuf(src []byte) (err error) {
// message LabelPair {
// string name = 1;
// string value = 2;
// }
var fc easyproto.FieldContext
for len(src) > 0 {
src, err = fc.NextField(src)
if err != nil {
return fmt.Errorf("cannot read next field in LabelPair: %w", err)
}
switch fc.FieldNum {
case 1:
name, ok := fc.String()
if !ok {
return fmt.Errorf("cannot read name")
}
lp.Name = name
case 2:
value, ok := fc.String()
if !ok {
return fmt.Errorf("cannot read value")
}
lp.Value = value
}
}
return nil
}
func marshalTime(mm *easyproto.MessageMarshaler, fieldNum uint32, timestamp time.Time) {
nsecs := timestamp.UnixNano()
ts := Timestamp{
Seconds: nsecs / 1e9,
Nanos: int32(nsecs % 1e9),
}
ts.marshalProtobuf(mm.AppendMessage(fieldNum))
}
func unmarshalTime(src []byte) (time.Time, error) {
var ts Timestamp
if err := ts.unmarshalProtobuf(src); err != nil {
return time.Time{}, err
}
timestamp := time.Unix(ts.Seconds, int64(ts.Nanos)).UTC()
return timestamp, nil
}
// Timestamp is protobuf well-known timestamp type.
type Timestamp struct {
Seconds int64
Nanos int32
}
func (ts *Timestamp) marshalProtobuf(mm *easyproto.MessageMarshaler) {
mm.AppendInt64(1, ts.Seconds)
mm.AppendInt32(2, ts.Nanos)
}
func (ts *Timestamp) unmarshalProtobuf(src []byte) (err error) {
// message Timestamp {
// int64 seconds = 1;
// int32 nanos = 2;
// }
var fc easyproto.FieldContext
for len(src) > 0 {
src, err = fc.NextField(src)
if err != nil {
return fmt.Errorf("cannot read next field in Timestamp: %w", err)
}
switch fc.FieldNum {
case 1:
seconds, ok := fc.Int64()
if !ok {
return fmt.Errorf("cannot read Seconds")
}
ts.Seconds = seconds
case 2:
nanos, ok := fc.Int32()
if !ok {
return fmt.Errorf("cannot read Nanos")
}
ts.Nanos = nanos
}
}
return nil
}


@@ -1,88 +0,0 @@
package vlinsert
import (
"flag"
"fmt"
"net/http"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/datadog"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/elasticsearch"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/internalinsert"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/journald"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/jsonline"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/loki"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/opentelemetry"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/syslog"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
)
var (
disableInsert = flag.Bool("insert.disable", false, "Whether to disable /insert/* HTTP endpoints")
disableInternal = flag.Bool("internalinsert.disable", false, "Whether to disable /internal/insert HTTP endpoint")
)
// Init initializes vlinsert
func Init() {
syslog.MustInit()
}
// Stop stops vlinsert
func Stop() {
syslog.MustStop()
}
// RequestHandler handles insert requests for VictoriaLogs
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
path := strings.ReplaceAll(r.URL.Path, "//", "/")
if strings.HasPrefix(path, "/insert/") {
if *disableInsert {
httpserver.Errorf(w, r, "requests to /insert/* are disabled with -insert.disable command-line flag")
return true
}
return insertHandler(w, r, path)
}
if path == "/internal/insert" {
if *disableInternal || *disableInsert {
httpserver.Errorf(w, r, "requests to /internal/insert are disabled with -internalinsert.disable or -insert.disable command-line flag")
return true
}
internalinsert.RequestHandler(w, r)
return true
}
return false
}
func insertHandler(w http.ResponseWriter, r *http.Request, path string) bool {
switch path {
case "/insert/jsonline":
jsonline.RequestHandler(w, r)
return true
case "/insert/ready":
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(200)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
}
switch {
// Some clients may omit the trailing slash in the Elasticsearch protocol.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8353
case strings.HasPrefix(path, "/insert/elasticsearch"):
return elasticsearch.RequestHandler(path, w, r)
case strings.HasPrefix(path, "/insert/loki/"):
return loki.RequestHandler(path, w, r)
case strings.HasPrefix(path, "/insert/opentelemetry/"):
return opentelemetry.RequestHandler(path, w, r)
case strings.HasPrefix(path, "/insert/journald/"):
return journald.RequestHandler(path, w, r)
case strings.HasPrefix(path, "/insert/datadog/"):
return datadog.RequestHandler(path, w, r)
}
return false
}


@@ -1,153 +0,0 @@
package opentelemetry
import (
"fmt"
"net/http"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/pb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
var maxRequestSize = flagutil.NewBytes("opentelemetry.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single OpenTelemetry request")
// RequestHandler processes Opentelemetry insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
// use the same path as opentelemetry collector
// https://opentelemetry.io/docs/specs/otlp/#otlphttp-request
case "/insert/opentelemetry/v1/logs":
if r.Header.Get("Content-Type") == "application/json" {
httpserver.Errorf(w, r, "json encoding isn't supported for opentelemetry format. Use protobuf encoding")
return true
}
handleProtobuf(r, w)
return true
default:
return false
}
}
func handleProtobuf(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsProtobufTotal.Inc()
cp, err := insertutil.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
return
}
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("opentelemetry_protobuf", false)
useDefaultStreamFields := len(cp.StreamFields) == 0
err := pushProtobufRequest(data, lmp, cp.MsgFields, useDefaultStreamFields)
lmp.MustClose()
return err
})
if err != nil {
httpserver.Errorf(w, r, "cannot read OpenTelemetry protocol data: %s", err)
return
}
// update requestProtobufDuration only for successfully parsed requests.
// There is no need to update requestProtobufDuration for failed requests,
// since their timings are usually much smaller than the timings of successfully parsed requests.
requestProtobufDuration.UpdateDuration(startTime)
}
var (
requestsProtobufTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
requestProtobufDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
)
func pushProtobufRequest(data []byte, lmp insertutil.LogMessageProcessor, msgFields []string, useDefaultStreamFields bool) error {
var req pb.ExportLogsServiceRequest
if err := req.UnmarshalProtobuf(data); err != nil {
errorsTotal.Inc()
return fmt.Errorf("cannot unmarshal request from %d bytes: %w", len(data), err)
}
var commonFields []logstorage.Field
for _, rl := range req.ResourceLogs {
commonFields = commonFields[:0]
commonFields = appendKeyValues(commonFields, rl.Resource.Attributes, "")
commonFieldsLen := len(commonFields)
for _, sc := range rl.ScopeLogs {
commonFields = pushFieldsFromScopeLogs(&sc, commonFields[:commonFieldsLen], lmp, msgFields, useDefaultStreamFields)
}
}
return nil
}
func pushFieldsFromScopeLogs(sc *pb.ScopeLogs, commonFields []logstorage.Field, lmp insertutil.LogMessageProcessor, msgFields []string, useDefaultStreamFields bool) []logstorage.Field {
fields := commonFields
for _, lr := range sc.LogRecords {
fields = fields[:len(commonFields)]
if lr.Body.KeyValueList != nil {
fields = appendKeyValues(fields, lr.Body.KeyValueList.Values, "")
logstorage.RenameField(fields[len(commonFields):], msgFields, "_msg")
} else {
fields = append(fields, logstorage.Field{
Name: "_msg",
Value: lr.Body.FormatString(true),
})
}
fields = appendKeyValues(fields, lr.Attributes, "")
if len(lr.TraceID) > 0 {
fields = append(fields, logstorage.Field{
Name: "trace_id",
Value: lr.TraceID,
})
}
if len(lr.SpanID) > 0 {
fields = append(fields, logstorage.Field{
Name: "span_id",
Value: lr.SpanID,
})
}
fields = append(fields, logstorage.Field{
Name: "severity",
Value: lr.FormatSeverity(),
})
var streamFields []logstorage.Field
if useDefaultStreamFields {
streamFields = commonFields
}
lmp.AddRow(lr.ExtractTimestampNano(), fields, streamFields)
}
return fields
}
func appendKeyValues(fields []logstorage.Field, kvs []*pb.KeyValue, parentField string) []logstorage.Field {
for _, attr := range kvs {
fieldName := attr.Key
if parentField != "" {
fieldName = parentField + "." + fieldName
}
if attr.Value.KeyValueList != nil {
fields = appendKeyValues(fields, attr.Value.KeyValueList.Values, fieldName)
} else {
fields = append(fields, logstorage.Field{
Name: fieldName,
Value: attr.Value.FormatString(true),
})
}
}
return fields
}
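appendKeyValues above flattens nested OpenTelemetry attribute lists by joining key segments with ".", recursing into each KeyValueList. The same recursion, sketched with a hypothetical `node` type standing in for `pb.KeyValue`/`pb.KeyValueList` and a map as the destination:

```go
package main

import "fmt"

// node stands in for pb.KeyValue: a key with either a scalar value
// or a nested list of children (pb.KeyValueList).
type node struct {
	key      string
	value    string
	children []node
}

// flatten joins nested keys with ".", as appendKeyValues does for
// OpenTelemetry attributes, e.g. error -> caused_by -> type becomes
// "error.caused_by.type".
func flatten(dst map[string]string, nodes []node, parent string) {
	for _, n := range nodes {
		name := n.key
		if parent != "" {
			name = parent + "." + name
		}
		if n.children != nil {
			flatten(dst, n.children, name)
		} else {
			dst[name] = n.value
		}
	}
}

func main() {
	m := map[string]string{}
	flatten(m, []node{
		{key: "error", children: []node{
			{key: "type", value: "document_parsing_exception"},
			{key: "caused_by", children: []node{
				{key: "type", value: "x_content_parse_exception"},
			}},
		}},
	}, "")
	fmt.Println(m["error.type"], m["error.caused_by.type"])
}
```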


@@ -1,208 +0,0 @@
package opentelemetry
import (
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/pb"
)
func TestPushProtoOk(t *testing.T) {
f := func(src []pb.ResourceLogs, timestampsExpected []int64, resultExpected string) {
t.Helper()
lr := pb.ExportLogsServiceRequest{
ResourceLogs: src,
}
pData := lr.MarshalProtobuf(nil)
tlp := &insertutil.TestLogMessageProcessor{}
if err := pushProtobufRequest(pData, tlp, nil, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
// single line without resource attributes
f([]pb.ResourceLogs{
{
ScopeLogs: []pb.ScopeLogs{
{
LogRecords: []pb.LogRecord{
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1234, SeverityNumber: 1, Body: pb.AnyValue{StringValue: ptrTo("log-line-message")}},
},
},
},
},
},
[]int64{1234},
`{"_msg":"log-line-message","severity":"Trace"}`,
)
// severities mapping
f([]pb.ResourceLogs{
{
ScopeLogs: []pb.ScopeLogs{
{
LogRecords: []pb.LogRecord{
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1234, SeverityNumber: 1, Body: pb.AnyValue{StringValue: ptrTo("log-line-message")}},
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1234, SeverityNumber: 13, Body: pb.AnyValue{StringValue: ptrTo("log-line-message")}},
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1234, SeverityNumber: 24, Body: pb.AnyValue{StringValue: ptrTo("log-line-message")}},
},
},
},
},
},
[]int64{1234, 1234, 1234},
`{"_msg":"log-line-message","severity":"Trace"}
{"_msg":"log-line-message","severity":"Warn"}
{"_msg":"log-line-message","severity":"Fatal4"}`,
)
// multi-line with resource attributes
f([]pb.ResourceLogs{
{
Resource: pb.Resource{
Attributes: []*pb.KeyValue{
{Key: "logger", Value: &pb.AnyValue{StringValue: ptrTo("context")}},
{Key: "instance_id", Value: &pb.AnyValue{IntValue: ptrTo[int64](10)}},
{Key: "node_taints", Value: &pb.AnyValue{KeyValueList: &pb.KeyValueList{
Values: []*pb.KeyValue{
{Key: "role", Value: &pb.AnyValue{StringValue: ptrTo("dev")}},
{Key: "cluster_load_percent", Value: &pb.AnyValue{DoubleValue: ptrTo(0.55)}},
},
}}},
},
},
ScopeLogs: []pb.ScopeLogs{
{
LogRecords: []pb.LogRecord{
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1234, SeverityNumber: 1, Body: pb.AnyValue{StringValue: ptrTo("log-line-message")}},
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1235, SeverityNumber: 25, Body: pb.AnyValue{StringValue: ptrTo("log-line-message-msg-2")}},
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1236, SeverityNumber: -1, Body: pb.AnyValue{StringValue: ptrTo("log-line-message-msg-2")}},
},
},
},
},
},
[]int64{1234, 1235, 1236},
`{"logger":"context","instance_id":"10","node_taints.role":"dev","node_taints.cluster_load_percent":"0.55","_msg":"log-line-message","severity":"Trace"}
{"logger":"context","instance_id":"10","node_taints.role":"dev","node_taints.cluster_load_percent":"0.55","_msg":"log-line-message-msg-2","severity":"Unspecified"}
{"logger":"context","instance_id":"10","node_taints.role":"dev","node_taints.cluster_load_percent":"0.55","_msg":"log-line-message-msg-2","severity":"Unspecified"}`,
)
// multi-scope with resource attributes and multi-line
f([]pb.ResourceLogs{
{
Resource: pb.Resource{
Attributes: []*pb.KeyValue{
{Key: "logger", Value: &pb.AnyValue{StringValue: ptrTo("context")}},
{Key: "instance_id", Value: &pb.AnyValue{IntValue: ptrTo[int64](10)}},
{Key: "node_taints", Value: &pb.AnyValue{KeyValueList: &pb.KeyValueList{
Values: []*pb.KeyValue{
{Key: "role", Value: &pb.AnyValue{StringValue: ptrTo("dev")}},
{Key: "cluster_load_percent", Value: &pb.AnyValue{DoubleValue: ptrTo(0.55)}},
},
}}},
},
},
ScopeLogs: []pb.ScopeLogs{
{
LogRecords: []pb.LogRecord{
{TimeUnixNano: 1234, SeverityNumber: 1, Body: pb.AnyValue{StringValue: ptrTo("log-line-message")}},
{TimeUnixNano: 1235, SeverityNumber: 5, Body: pb.AnyValue{StringValue: ptrTo("log-line-message-msg-2")}},
},
},
},
},
{
ScopeLogs: []pb.ScopeLogs{
{
LogRecords: []pb.LogRecord{
{TimeUnixNano: 2345, SeverityNumber: 10, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-0-0")}},
{TimeUnixNano: 2346, SeverityNumber: 10, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-0-1")}},
},
},
{
LogRecords: []pb.LogRecord{
{TimeUnixNano: 2347, SeverityNumber: 12, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-0")}},
{TraceID: "1234", SpanID: "45", ObservedTimeUnixNano: 2348, SeverityNumber: 12, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-1")}},
{TraceID: "4bf92f3577b34da6a3ce929d0e0e4736", SpanID: "00f067aa0ba902b7", ObservedTimeUnixNano: 3333, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-2")}},
},
},
},
},
},
[]int64{1234, 1235, 2345, 2346, 2347, 2348, 3333},
`{"logger":"context","instance_id":"10","node_taints.role":"dev","node_taints.cluster_load_percent":"0.55","_msg":"log-line-message","severity":"Trace"}
{"logger":"context","instance_id":"10","node_taints.role":"dev","node_taints.cluster_load_percent":"0.55","_msg":"log-line-message-msg-2","severity":"Debug"}
{"_msg":"log-line-resource-scope-1-0-0","severity":"Info2"}
{"_msg":"log-line-resource-scope-1-0-1","severity":"Info2"}
{"_msg":"log-line-resource-scope-1-1-0","severity":"Info4"}
{"_msg":"log-line-resource-scope-1-1-1","trace_id":"1234","span_id":"45","severity":"Info4"}
{"_msg":"log-line-resource-scope-1-1-2","trace_id":"4bf92f3577b34da6a3ce929d0e0e4736","span_id":"00f067aa0ba902b7","severity":"Unspecified"}`,
)
// nested fields
f([]pb.ResourceLogs{
{
ScopeLogs: []pb.ScopeLogs{
{
LogRecords: []pb.LogRecord{
{
TimeUnixNano: 1234,
Body: pb.AnyValue{StringValue: ptrTo("nested fields")},
Attributes: []*pb.KeyValue{
{Key: "error", Value: &pb.AnyValue{KeyValueList: &pb.KeyValueList{Values: []*pb.KeyValue{
{
Key: "type",
Value: &pb.AnyValue{StringValue: ptrTo("document_parsing_exception")},
},
{
Key: "reason",
Value: &pb.AnyValue{StringValue: ptrTo("failed to parse field [_msg] of type [text]")},
},
{
Key: "caused_by",
Value: &pb.AnyValue{KeyValueList: &pb.KeyValueList{Values: []*pb.KeyValue{
{
Key: "type",
Value: &pb.AnyValue{StringValue: ptrTo("x_content_parse_exception")},
},
{
Key: "reason",
Value: &pb.AnyValue{StringValue: ptrTo("unexpected end-of-input in VALUE_STRING")},
},
{
Key: "caused_by",
Value: &pb.AnyValue{KeyValueList: &pb.KeyValueList{Values: []*pb.KeyValue{
{
Key: "type",
Value: &pb.AnyValue{StringValue: ptrTo("json_e_o_f_exception")},
},
{
Key: "reason",
Value: &pb.AnyValue{StringValue: ptrTo("eof")},
},
}}},
},
}}},
},
}}}},
},
},
},
},
},
},
}, []int64{1234},
`{"_msg":"nested fields","error.type":"document_parsing_exception","error.reason":"failed to parse field [_msg] of type [text]",`+
`"error.caused_by.type":"x_content_parse_exception","error.caused_by.reason":"unexpected end-of-input in VALUE_STRING",`+
`"error.caused_by.caused_by.type":"json_e_o_f_exception","error.caused_by.caused_by.reason":"eof","severity":"Unspecified"}`)
}
func ptrTo[T any](s T) *T {
return &s
}


@@ -1,78 +0,0 @@
package opentelemetry
import (
"fmt"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/pb"
)
func BenchmarkParseProtobufRequest(b *testing.B) {
for _, scopes := range []int{1, 2} {
for _, rows := range []int{100, 1000} {
for _, attributes := range []int{5, 10} {
b.Run(fmt.Sprintf("scopes_%d/rows_%d/attributes_%d", scopes, rows, attributes), func(b *testing.B) {
benchmarkParseProtobufRequest(b, scopes, rows, attributes)
})
}
}
}
}
func benchmarkParseProtobufRequest(b *testing.B, scopes, rows, attributes int) {
blp := &insertutil.BenchmarkLogMessageProcessor{}
b.ReportAllocs()
b.SetBytes(int64(scopes * rows))
b.RunParallel(func(pb *testing.PB) {
body := getProtobufBody(scopes, rows, attributes)
for pb.Next() {
if err := pushProtobufRequest(body, blp, nil, false); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}
})
}
func getProtobufBody(scopesCount, rowsCount, attributesCount int) []byte {
msg := "12345678910"
attrValues := []*pb.AnyValue{
{StringValue: ptrTo("string-attribute")},
{IntValue: ptrTo[int64](12345)},
{DoubleValue: ptrTo(3.14)},
}
attrs := make([]*pb.KeyValue, attributesCount)
for j := 0; j < attributesCount; j++ {
attrs[j] = &pb.KeyValue{
Key: fmt.Sprintf("key-%d", j),
Value: attrValues[j%3],
}
}
entries := make([]pb.LogRecord, rowsCount)
for j := 0; j < rowsCount; j++ {
entries[j] = pb.LogRecord{
TimeUnixNano: 12345678910, ObservedTimeUnixNano: 12345678910, Body: pb.AnyValue{StringValue: &msg},
}
}
scopes := make([]pb.ScopeLogs, scopesCount)
for j := 0; j < scopesCount; j++ {
scopes[j] = pb.ScopeLogs{
LogRecords: entries,
}
}
pr := pb.ExportLogsServiceRequest{
ResourceLogs: []pb.ResourceLogs{
{
Resource: pb.Resource{
Attributes: attrs,
},
ScopeLogs: scopes,
},
},
}
return pr.MarshalProtobuf(nil)
}


@@ -1,607 +0,0 @@
package syslog
import (
"bufio"
"crypto/tls"
"encoding/json"
"errors"
"flag"
"fmt"
"io"
"net"
"sort"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
var (
syslogTimezone = flag.String("syslog.timezone", "Local", "Timezone to use when parsing timestamps in RFC3164 syslog messages. Timezone must be a valid IANA Time Zone. "+
"For example: America/New_York, Europe/Berlin, Etc/GMT+3 . See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")
streamFieldsTCP = flagutil.NewArrayString("syslog.streamFields.tcp", "Fields to use as log stream labels for logs ingested via the corresponding -syslog.listenAddr.tcp. "+
`See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#stream-fields`)
streamFieldsUDP = flagutil.NewArrayString("syslog.streamFields.udp", "Fields to use as log stream labels for logs ingested via the corresponding -syslog.listenAddr.udp. "+
`See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#stream-fields`)
ignoreFieldsTCP = flagutil.NewArrayString("syslog.ignoreFields.tcp", "Fields to ignore in logs ingested via the corresponding -syslog.listenAddr.tcp. "+
`See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#dropping-fields`)
ignoreFieldsUDP = flagutil.NewArrayString("syslog.ignoreFields.udp", "Fields to ignore in logs ingested via the corresponding -syslog.listenAddr.udp. "+
`See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#dropping-fields`)
decolorizeFieldsTCP = flagutil.NewArrayString("syslog.decolorizeFields.tcp", "Fields to remove ANSI color codes across logs ingested via the corresponding -syslog.listenAddr.tcp. "+
`See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#decolorizing-fields`)
decolorizeFieldsUDP = flagutil.NewArrayString("syslog.decolorizeFields.udp", "Fields to remove ANSI color codes across logs ingested via the corresponding -syslog.listenAddr.udp. "+
`See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#decolorizing-fields`)
extraFieldsTCP = flagutil.NewArrayString("syslog.extraFields.tcp", "Fields to add to logs ingested via the corresponding -syslog.listenAddr.tcp. "+
`See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#adding-extra-fields`)
extraFieldsUDP = flagutil.NewArrayString("syslog.extraFields.udp", "Fields to add to logs ingested via the corresponding -syslog.listenAddr.udp. "+
`See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#adding-extra-fields`)
tenantIDTCP = flagutil.NewArrayString("syslog.tenantID.tcp", "TenantID for logs ingested via the corresponding -syslog.listenAddr.tcp. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#multitenancy")
tenantIDUDP = flagutil.NewArrayString("syslog.tenantID.udp", "TenantID for logs ingested via the corresponding -syslog.listenAddr.udp. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#multitenancy")
listenAddrTCP = flagutil.NewArrayString("syslog.listenAddr.tcp", "Comma-separated list of TCP addresses to listen to for Syslog messages. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")
listenAddrUDP = flagutil.NewArrayString("syslog.listenAddr.udp", "Comma-separated list of UDP addresses to listen to for Syslog messages. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")
tlsEnable = flagutil.NewArrayBool("syslog.tls", "Whether to enable TLS for receiving syslog messages at the corresponding -syslog.listenAddr.tcp. "+
"The corresponding -syslog.tlsCertFile and -syslog.tlsKeyFile must be set if -syslog.tls is set. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
tlsCertFile = flagutil.NewArrayString("syslog.tlsCertFile", "Path to file with TLS certificate for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. "+
"Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
tlsKeyFile = flagutil.NewArrayString("syslog.tlsKeyFile", "Path to file with TLS key for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. "+
"The provided key file is automatically re-read every second, so it can be dynamically updated. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
tlsCipherSuites = flagutil.NewArrayString("syslog.tlsCipherSuites", "Optional list of TLS cipher suites for -syslog.listenAddr.tcp if -syslog.tls is set. "+
"See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants . "+
"See also https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
tlsMinVersion = flag.String("syslog.tlsMinVersion", "TLS13", "The minimum TLS version to use for -syslog.listenAddr.tcp if -syslog.tls is set. "+
"Supported values: TLS10, TLS11, TLS12, TLS13. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
compressMethodTCP = flagutil.NewArrayString("syslog.compressMethod.tcp", "Compression method for syslog messages received at the corresponding -syslog.listenAddr.tcp. "+
"Supported values: none, zstd, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression")
compressMethodUDP = flagutil.NewArrayString("syslog.compressMethod.udp", "Compression method for syslog messages received at the corresponding -syslog.listenAddr.udp. "+
"Supported values: none, zstd, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression")
useLocalTimestampTCP = flagutil.NewArrayBool("syslog.useLocalTimestamp.tcp", "Whether to use local timestamp instead of the original timestamp for the ingested syslog messages "+
"at the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps")
useLocalTimestampUDP = flagutil.NewArrayBool("syslog.useLocalTimestamp.udp", "Whether to use local timestamp instead of the original timestamp for the ingested syslog messages "+
"at the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps")
)
// MustInit initializes syslog parser at the given -syslog.listenAddr.tcp and -syslog.listenAddr.udp ports
//
// This function must be called after flag.Parse().
//
// MustStop() must be called in order to free up resources occupied by the initialized syslog parser.
func MustInit() {
if workersStopCh != nil {
logger.Panicf("BUG: MustInit() called twice without MustStop() call")
}
workersStopCh = make(chan struct{})
for argIdx, addr := range *listenAddrTCP {
workersWG.Add(1)
go func(addr string, argIdx int) {
runTCPListener(addr, argIdx)
workersWG.Done()
}(addr, argIdx)
}
for argIdx, addr := range *listenAddrUDP {
workersWG.Add(1)
go func(addr string, argIdx int) {
runUDPListener(addr, argIdx)
workersWG.Done()
}(addr, argIdx)
}
currentYear := time.Now().Year()
globalCurrentYear.Store(int64(currentYear))
workersWG.Add(1)
go func() {
ticker := time.NewTicker(time.Minute)
for {
select {
case <-workersStopCh:
ticker.Stop()
workersWG.Done()
return
case <-ticker.C:
currentYear := time.Now().Year()
globalCurrentYear.Store(int64(currentYear))
}
}
}()
if *syslogTimezone != "" {
tz, err := time.LoadLocation(*syslogTimezone)
if err != nil {
logger.Fatalf("cannot parse -syslog.timezone=%q: %s", *syslogTimezone, err)
}
globalTimezone = tz
} else {
globalTimezone = time.Local
}
}
var (
globalCurrentYear atomic.Int64
globalTimezone *time.Location
)
var (
workersWG sync.WaitGroup
workersStopCh chan struct{}
)
// MustStop stops syslog parser initialized via MustInit()
func MustStop() {
close(workersStopCh)
workersWG.Wait()
workersStopCh = nil
}
func runUDPListener(addr string, argIdx int) {
ln, err := net.ListenPacket(netutil.GetUDPNetwork(), addr)
if err != nil {
logger.Fatalf("cannot start UDP syslog server at %q: %s", addr, err)
}
tenantIDStr := tenantIDUDP.GetOptionalArg(argIdx)
tenantID, err := logstorage.ParseTenantID(tenantIDStr)
if err != nil {
logger.Fatalf("cannot parse -syslog.tenantID.udp=%q for -syslog.listenAddr.udp=%q: %s", tenantIDStr, addr, err)
}
compressMethod := compressMethodUDP.GetOptionalArg(argIdx)
checkCompressMethod(compressMethod, addr, "udp")
useLocalTimestamp := useLocalTimestampUDP.GetOptionalArg(argIdx)
streamFieldsStr := streamFieldsUDP.GetOptionalArg(argIdx)
streamFields, err := parseFieldsList(streamFieldsStr)
if err != nil {
logger.Fatalf("cannot parse -syslog.streamFields.udp=%q for -syslog.listenAddr.udp=%q: %s", streamFieldsStr, addr, err)
}
ignoreFieldsStr := ignoreFieldsUDP.GetOptionalArg(argIdx)
ignoreFields, err := parseFieldsList(ignoreFieldsStr)
if err != nil {
logger.Fatalf("cannot parse -syslog.ignoreFields.udp=%q for -syslog.listenAddr.udp=%q: %s", ignoreFieldsStr, addr, err)
}
decolorizeFieldsStr := decolorizeFieldsUDP.GetOptionalArg(argIdx)
decolorizeFields, err := parseFieldsList(decolorizeFieldsStr)
if err != nil {
logger.Fatalf("cannot parse -syslog.decolorizeFields.udp=%q for -syslog.listenAddr.udp=%q: %s", decolorizeFieldsStr, addr, err)
}
extraFieldsStr := extraFieldsUDP.GetOptionalArg(argIdx)
extraFields, err := parseExtraFields(extraFieldsStr)
if err != nil {
logger.Fatalf("cannot parse -syslog.extraFields.udp=%q for -syslog.listenAddr.udp=%q: %s", extraFieldsStr, addr, err)
}
doneCh := make(chan struct{})
go func() {
serveUDP(ln, tenantID, compressMethod, useLocalTimestamp, streamFields, ignoreFields, decolorizeFields, extraFields)
close(doneCh)
}()
logger.Infof("started accepting syslog messages at -syslog.listenAddr.udp=%q", addr)
<-workersStopCh
if err := ln.Close(); err != nil {
logger.Fatalf("syslog: cannot close UDP listener at %s: %s", addr, err)
}
<-doneCh
logger.Infof("finished accepting syslog messages at -syslog.listenAddr.udp=%q", addr)
}
func runTCPListener(addr string, argIdx int) {
var tlsConfig *tls.Config
if tlsEnable.GetOptionalArg(argIdx) {
certFile := tlsCertFile.GetOptionalArg(argIdx)
keyFile := tlsKeyFile.GetOptionalArg(argIdx)
tc, err := netutil.GetServerTLSConfig(certFile, keyFile, *tlsMinVersion, *tlsCipherSuites)
if err != nil {
logger.Fatalf("cannot load TLS cert from -syslog.tlsCertFile=%q, -syslog.tlsKeyFile=%q, -syslog.tlsMinVersion=%q, -syslog.tlsCipherSuites=%q: %s",
certFile, keyFile, *tlsMinVersion, *tlsCipherSuites, err)
}
tlsConfig = tc
}
ln, err := netutil.NewTCPListener("syslog", addr, false, tlsConfig)
if err != nil {
logger.Fatalf("syslog: cannot start TCP listener at %s: %s", addr, err)
}
tenantIDStr := tenantIDTCP.GetOptionalArg(argIdx)
tenantID, err := logstorage.ParseTenantID(tenantIDStr)
if err != nil {
logger.Fatalf("cannot parse -syslog.tenantID.tcp=%q for -syslog.listenAddr.tcp=%q: %s", tenantIDStr, addr, err)
}
compressMethod := compressMethodTCP.GetOptionalArg(argIdx)
checkCompressMethod(compressMethod, addr, "tcp")
useLocalTimestamp := useLocalTimestampTCP.GetOptionalArg(argIdx)
streamFieldsStr := streamFieldsTCP.GetOptionalArg(argIdx)
streamFields, err := parseFieldsList(streamFieldsStr)
if err != nil {
logger.Fatalf("cannot parse -syslog.streamFields.tcp=%q for -syslog.listenAddr.tcp=%q: %s", streamFieldsStr, addr, err)
}
ignoreFieldsStr := ignoreFieldsTCP.GetOptionalArg(argIdx)
ignoreFields, err := parseFieldsList(ignoreFieldsStr)
if err != nil {
logger.Fatalf("cannot parse -syslog.ignoreFields.tcp=%q for -syslog.listenAddr.tcp=%q: %s", ignoreFieldsStr, addr, err)
}
decolorizeFieldsStr := decolorizeFieldsTCP.GetOptionalArg(argIdx)
decolorizeFields, err := parseFieldsList(decolorizeFieldsStr)
if err != nil {
logger.Fatalf("cannot parse -syslog.decolorizeFields.tcp=%q for -syslog.listenAddr.tcp=%q: %s", decolorizeFieldsStr, addr, err)
}
extraFieldsStr := extraFieldsTCP.GetOptionalArg(argIdx)
extraFields, err := parseExtraFields(extraFieldsStr)
if err != nil {
logger.Fatalf("cannot parse -syslog.extraFields.tcp=%q for -syslog.listenAddr.tcp=%q: %s", extraFieldsStr, addr, err)
}
doneCh := make(chan struct{})
go func() {
serveTCP(ln, tenantID, compressMethod, useLocalTimestamp, streamFields, ignoreFields, decolorizeFields, extraFields)
close(doneCh)
}()
logger.Infof("started accepting syslog messages at -syslog.listenAddr.tcp=%q", addr)
<-workersStopCh
if err := ln.Close(); err != nil {
logger.Fatalf("syslog: cannot close TCP listener at %s: %s", addr, err)
}
<-doneCh
logger.Infof("finished accepting syslog messages at -syslog.listenAddr.tcp=%q", addr)
}
func checkCompressMethod(compressMethod, addr, protocol string) {
switch compressMethod {
case "", "none", "zstd", "gzip", "deflate":
return
default:
logger.Fatalf("unsupported -syslog.compressMethod.%s=%q for -syslog.listenAddr.%s=%q; supported values: 'none', 'zstd', 'gzip', 'deflate'", protocol, compressMethod, protocol, addr)
}
}
func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, encoding string, useLocalTimestamp bool, streamFields, ignoreFields, decolorizeFields []string, extraFields []logstorage.Field) {
gomaxprocs := cgroup.AvailableCPUs()
var wg sync.WaitGroup
localAddr := ln.LocalAddr()
for i := 0; i < gomaxprocs; i++ {
wg.Add(1)
go func() {
defer wg.Done()
cp := insertutil.GetCommonParamsForSyslog(tenantID, streamFields, ignoreFields, decolorizeFields, extraFields)
var bb bytesutil.ByteBuffer
bb.B = bytesutil.ResizeNoCopyNoOverallocate(bb.B, 64*1024)
for {
bb.Reset()
bb.B = bb.B[:cap(bb.B)]
n, remoteAddr, err := ln.ReadFrom(bb.B)
if err != nil {
udpErrorsTotal.Inc()
var ne net.Error
if errors.As(err, &ne) {
if ne.Temporary() {
logger.Errorf("syslog: temporary error when listening for UDP at %q: %s", localAddr, err)
time.Sleep(time.Second)
continue
}
if strings.Contains(err.Error(), "use of closed network connection") {
break
}
}
logger.Errorf("syslog: cannot read UDP data from %s at %s: %s", remoteAddr, localAddr, err)
continue
}
bb.B = bb.B[:n]
udpRequestsTotal.Inc()
if err := processStream("udp", bb.NewReader(), encoding, useLocalTimestamp, cp); err != nil {
logger.Errorf("syslog: cannot process UDP data from %s at %s: %s", remoteAddr, localAddr, err)
}
}
}()
}
wg.Wait()
}
func serveTCP(ln net.Listener, tenantID logstorage.TenantID, encoding string, useLocalTimestamp bool, streamFields, ignoreFields, decolorizeFields []string, extraFields []logstorage.Field) {
var cm ingestserver.ConnsMap
cm.Init("syslog")
var wg sync.WaitGroup
addr := ln.Addr()
for {
c, err := ln.Accept()
if err != nil {
var ne net.Error
if errors.As(err, &ne) {
if ne.Temporary() {
logger.Errorf("syslog: temporary error when listening for TCP addr %q: %s", addr, err)
time.Sleep(time.Second)
continue
}
if strings.Contains(err.Error(), "use of closed network connection") {
break
}
logger.Fatalf("syslog: unrecoverable error when accepting TCP connections at %q: %s", addr, err)
}
logger.Fatalf("syslog: unexpected error when accepting TCP connections at %q: %s", addr, err)
}
if !cm.Add(c) {
_ = c.Close()
break
}
wg.Add(1)
go func() {
cp := insertutil.GetCommonParamsForSyslog(tenantID, streamFields, ignoreFields, decolorizeFields, extraFields)
if err := processStream("tcp", c, encoding, useLocalTimestamp, cp); err != nil {
logger.Errorf("syslog: cannot process TCP data at %q: %s", addr, err)
}
cm.Delete(c)
_ = c.Close()
wg.Done()
}()
}
cm.CloseAll(0)
wg.Wait()
}
// processStream parses a stream of syslog messages from r and ingests them into vlstorage.
func processStream(protocol string, r io.Reader, encoding string, useLocalTimestamp bool, cp *insertutil.CommonParams) error {
if err := insertutil.CanWriteData(); err != nil {
return err
}
lmp := cp.NewLogMessageProcessor("syslog_"+protocol, true)
err := processStreamInternal(r, encoding, useLocalTimestamp, lmp)
lmp.MustClose()
return err
}
func processStreamInternal(r io.Reader, encoding string, useLocalTimestamp bool, lmp insertutil.LogMessageProcessor) error {
reader, err := protoparserutil.GetUncompressedReader(r, encoding)
if err != nil {
return fmt.Errorf("cannot decode syslog data: %w", err)
}
defer protoparserutil.PutUncompressedReader(reader)
return processUncompressedStream(reader, useLocalTimestamp, lmp)
}
func processUncompressedStream(r io.Reader, useLocalTimestamp bool, lmp insertutil.LogMessageProcessor) error {
wcr := writeconcurrencylimiter.GetReader(r)
defer writeconcurrencylimiter.PutReader(wcr)
slr := getSyslogLineReader(wcr)
defer putSyslogLineReader(slr)
n := 0
for {
ok := slr.nextLine()
wcr.DecConcurrency()
if !ok {
break
}
currentYear := int(globalCurrentYear.Load())
err := processLine(slr.line, currentYear, globalTimezone, useLocalTimestamp, lmp)
if err != nil {
errorsTotal.Inc()
return fmt.Errorf("cannot read line #%d: %w", n, err)
}
n++
}
return slr.Error()
}
type syslogLineReader struct {
line []byte
br *bufio.Reader
err error
}
func (slr *syslogLineReader) reset(r io.Reader) {
slr.line = slr.line[:0]
slr.br.Reset(r)
slr.err = nil
}
// Error returns the last error that occurred in slr.
func (slr *syslogLineReader) Error() error {
if slr.err == nil || slr.err == io.EOF {
return nil
}
return slr.err
}
// nextLine reads the next syslog line from slr and stores it at slr.line.
//
// false is returned if the next line cannot be read. Error() must be called in this case
// in order to verify whether an error occurred or the slr stream has simply finished.
func (slr *syslogLineReader) nextLine() bool {
if slr.err != nil {
return false
}
again:
prefix, err := slr.br.ReadSlice(' ')
if err != nil {
if err != io.EOF {
slr.err = fmt.Errorf("cannot read message frame prefix: %w", err)
return false
}
if len(prefix) == 0 {
slr.err = err
return false
}
}
// skip empty lines
for len(prefix) > 0 && prefix[0] == '\n' {
prefix = prefix[1:]
}
if len(prefix) == 0 {
// An empty prefix or a prefix with empty lines - try reading yet another prefix.
goto again
}
if prefix[0] >= '0' && prefix[0] <= '9' {
// This is octet-counting method. See https://www.ietf.org/archive/id/draft-gerhards-syslog-plain-tcp-07.html#msgxfer
msgLenStr := bytesutil.ToUnsafeString(prefix[:len(prefix)-1])
msgLen, err := strconv.ParseUint(msgLenStr, 10, 64)
if err != nil {
slr.err = fmt.Errorf("cannot parse message length from %q: %w", msgLenStr, err)
return false
}
if maxMsgLen := insertutil.MaxLineSizeBytes.IntN(); msgLen > uint64(maxMsgLen) {
slr.err = fmt.Errorf("cannot read message longer than %d bytes; msgLen=%d", maxMsgLen, msgLen)
return false
}
slr.line = slicesutil.SetLength(slr.line, int(msgLen))
if _, err := io.ReadFull(slr.br, slr.line); err != nil {
slr.err = fmt.Errorf("cannot read message with size %d bytes: %w", msgLen, err)
return false
}
return true
}
// This is octet-stuffing method. See https://www.ietf.org/archive/id/draft-gerhards-syslog-plain-tcp-07.html#octet-stuffing-legacy
slr.line = append(slr.line[:0], prefix...)
for {
line, err := slr.br.ReadSlice('\n')
if err == nil {
slr.line = append(slr.line, line[:len(line)-1]...)
return true
}
if err == io.EOF {
slr.line = append(slr.line, line...)
return true
}
if err == bufio.ErrBufferFull {
slr.line = append(slr.line, line...)
continue
}
slr.err = fmt.Errorf("cannot read message in octet-stuffing method: %w", err)
return false
}
}
func getSyslogLineReader(r io.Reader) *syslogLineReader {
v := syslogLineReaderPool.Get()
if v == nil {
br := bufio.NewReaderSize(r, 64*1024)
return &syslogLineReader{
br: br,
}
}
slr := v.(*syslogLineReader)
slr.reset(r)
return slr
}
func putSyslogLineReader(slr *syslogLineReader) {
syslogLineReaderPool.Put(slr)
}
var syslogLineReaderPool sync.Pool
func processLine(line []byte, currentYear int, timezone *time.Location, useLocalTimestamp bool, lmp insertutil.LogMessageProcessor) error {
p := logstorage.GetSyslogParser(currentYear, timezone)
lineStr := bytesutil.ToUnsafeString(line)
p.Parse(lineStr)
var ts int64
if useLocalTimestamp {
ts = time.Now().UnixNano()
} else {
nsecs, err := insertutil.ExtractTimestampFromFields(timeFields, p.Fields)
if err != nil {
return fmt.Errorf("cannot get timestamp from syslog line %q: %w", line, err)
}
ts = nsecs
}
logstorage.RenameField(p.Fields, msgFields, "_msg")
lmp.AddRow(ts, p.Fields, nil)
logstorage.PutSyslogParser(p)
return nil
}
var timeFields = []string{"timestamp"}
var msgFields = []string{"message"}
var (
errorsTotal = metrics.NewCounter(`vl_errors_total{type="syslog"}`)
udpRequestsTotal = metrics.NewCounter(`vl_udp_reqests_total{type="syslog"}`)
udpErrorsTotal = metrics.NewCounter(`vl_udp_errors_total{type="syslog"}`)
)
func parseFieldsList(s string) ([]string, error) {
if s == "" {
return nil, nil
}
var a []string
err := json.Unmarshal([]byte(s), &a)
return a, err
}
func parseExtraFields(s string) ([]logstorage.Field, error) {
if s == "" {
return nil, nil
}
var m map[string]string
if err := json.Unmarshal([]byte(s), &m); err != nil {
return nil, err
}
fields := make([]logstorage.Field, 0, len(m))
for k, v := range m {
fields = append(fields, logstorage.Field{
Name: k,
Value: v,
})
}
sort.Slice(fields, func(i, j int) bool {
return fields[i].Name < fields[j].Name
})
return fields, nil
}


@@ -1,129 +0,0 @@
package syslog
import (
"bytes"
"reflect"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestSyslogLineReader_Success(t *testing.T) {
f := func(data string, linesExpected []string) {
t.Helper()
r := bytes.NewBufferString(data)
slr := getSyslogLineReader(r)
defer putSyslogLineReader(slr)
var lines []string
for slr.nextLine() {
lines = append(lines, string(slr.line))
}
if err := slr.Error(); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if !reflect.DeepEqual(lines, linesExpected) {
t.Fatalf("unexpected lines read;\ngot\n%q\nwant\n%q", lines, linesExpected)
}
}
f("", nil)
f("\n", nil)
f("\n\n\n", nil)
f("foobar", []string{"foobar"})
f("foobar\n", []string{"foobar\n"})
f("\n\nfoo\n\nbar\n\n", []string{"foo\n\nbar\n\n"})
f(`Jun 3 12:08:33 abcd systemd: Starting Update the local ESM caches...`, []string{"Jun 3 12:08:33 abcd systemd: Starting Update the local ESM caches..."})
f(`Jun 3 12:08:33 abcd systemd: Starting Update the local ESM caches...
48 <165>Jun 4 12:08:33 abcd systemd[345]: abc defg<123>1 2023-06-03T17:42:12.345Z mymachine.example.com appname 12345 ID47 [exampleSDID@32473 iut="3" eventSource="Application 123 = ] 56" eventID="11211"] This is a test message with structured data.
`, []string{
"Jun 3 12:08:33 abcd systemd: Starting Update the local ESM caches...",
"<165>Jun 4 12:08:33 abcd systemd[345]: abc defg",
`<123>1 2023-06-03T17:42:12.345Z mymachine.example.com appname 12345 ID47 [exampleSDID@32473 iut="3" eventSource="Application 123 = ] 56" eventID="11211"] This is a test message with structured data.`,
})
}
func TestSyslogLineReader_Failure(t *testing.T) {
f := func(data string) {
t.Helper()
r := bytes.NewBufferString(data)
slr := getSyslogLineReader(r)
defer putSyslogLineReader(slr)
if slr.nextLine() {
t.Fatalf("expecting failure to read the first line")
}
if err := slr.Error(); err == nil {
t.Fatalf("expecting non-nil error")
}
}
// invalid format for message size
f("12foo bar")
// too big message size
f("123 aa")
f("1233423432 abc")
}
func TestProcessStreamInternal_Success(t *testing.T) {
f := func(data string, currentYear int, timestampsExpected []int64, resultExpected string) {
t.Helper()
MustInit()
defer MustStop()
globalTimezone = time.UTC
globalCurrentYear.Store(int64(currentYear))
tlp := &insertutil.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
if err := processStreamInternal(r, "", false, tlp); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
data := `Jun 3 12:08:33 abcd systemd: Starting Update the local ESM caches...
48 <165>Jun 4 12:08:33 abcd systemd[345]: abc defg<123>1 2023-06-03T17:42:12.345Z mymachine.example.com appname 12345 ID47 [exampleSDID@32473 iut="3" eventSource="Application 123 = ] 56" eventID="11211"] This is a test message with structured data.
`
currentYear := 2023
timestampsExpected := []int64{1685794113000000000, 1685880513000000000, 1685814132345000000}
resultExpected := `{"format":"rfc3164","hostname":"abcd","app_name":"systemd","_msg":"Starting Update the local ESM caches..."}
{"priority":"165","facility_keyword":"local4","level":"notice","facility":"20","severity":"5","format":"rfc3164","hostname":"abcd","app_name":"systemd","proc_id":"345","_msg":"abc defg"}
{"priority":"123","facility_keyword":"solaris-cron","level":"error","facility":"15","severity":"3","format":"rfc5424","hostname":"mymachine.example.com","app_name":"appname","proc_id":"12345","msg_id":"ID47","exampleSDID@32473.iut":"3","exampleSDID@32473.eventSource":"Application 123 = ] 56","exampleSDID@32473.eventID":"11211","_msg":"This is a test message with structured data."}`
f(data, currentYear, timestampsExpected, resultExpected)
}
func TestProcessStreamInternal_Failure(t *testing.T) {
f := func(data string) {
t.Helper()
MustInit()
defer MustStop()
tlp := &insertutil.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
if err := processStreamInternal(r, "", false, tlp); err == nil {
t.Fatalf("expecting non-nil error")
}
}
// invalid format for message size
f("12foo bar")
// too big message size
f("123 foo")
f("123456789 bar")
}


@@ -1,109 +0,0 @@
# All these commands must run from repository root.
vlogscli:
APP_NAME=vlogscli $(MAKE) app-local
vlogscli-race:
APP_NAME=vlogscli RACE=-race $(MAKE) app-local
vlogscli-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker
vlogscli-pure-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-pure
vlogscli-linux-amd64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-linux-amd64
vlogscli-linux-arm-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-linux-arm
vlogscli-linux-arm64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-linux-arm64
vlogscli-linux-ppc64le-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-linux-ppc64le
vlogscli-linux-386-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-linux-386
vlogscli-darwin-amd64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-darwin-amd64
vlogscli-darwin-arm64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-darwin-arm64
vlogscli-freebsd-amd64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-freebsd-amd64
vlogscli-openbsd-amd64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-openbsd-amd64
vlogscli-windows-amd64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-windows-amd64
package-vlogscli:
APP_NAME=vlogscli $(MAKE) package-via-docker
package-vlogscli-pure:
APP_NAME=vlogscli $(MAKE) package-via-docker-pure
package-vlogscli-amd64:
APP_NAME=vlogscli $(MAKE) package-via-docker-amd64
package-vlogscli-arm:
APP_NAME=vlogscli $(MAKE) package-via-docker-arm
package-vlogscli-arm64:
APP_NAME=vlogscli $(MAKE) package-via-docker-arm64
package-vlogscli-ppc64le:
APP_NAME=vlogscli $(MAKE) package-via-docker-ppc64le
package-vlogscli-386:
APP_NAME=vlogscli $(MAKE) package-via-docker-386
publish-vlogscli:
APP_NAME=vlogscli $(MAKE) publish-via-docker
vlogscli-linux-amd64:
APP_NAME=vlogscli CGO_ENABLED=1 GOOS=linux GOARCH=amd64 $(MAKE) app-local-goos-goarch
vlogscli-linux-arm:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=arm $(MAKE) app-local-goos-goarch
vlogscli-linux-arm64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=arm64 $(MAKE) app-local-goos-goarch
vlogscli-linux-ppc64le:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le $(MAKE) app-local-goos-goarch
vlogscli-linux-s390x:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=s390x $(MAKE) app-local-goos-goarch
vlogscli-linux-loong64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=loong64 $(MAKE) app-local-goos-goarch
vlogscli-linux-386:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=386 $(MAKE) app-local-goos-goarch
vlogscli-darwin-amd64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 $(MAKE) app-local-goos-goarch
vlogscli-darwin-arm64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 $(MAKE) app-local-goos-goarch
vlogscli-freebsd-amd64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 $(MAKE) app-local-goos-goarch
vlogscli-openbsd-amd64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=openbsd GOARCH=amd64 $(MAKE) app-local-goos-goarch
vlogscli-windows-amd64:
GOARCH=amd64 APP_NAME=vlogscli $(MAKE) app-local-windows-goarch
vlogscli-pure:
APP_NAME=vlogscli $(MAKE) app-local-pure
run-vlogscli:
APP_NAME=vlogscli $(MAKE) run-via-docker


@@ -1,5 +1 @@
# vlogscli
Command-line utility for querying [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/).
See [these docs](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/).
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).


@@ -1,6 +0,0 @@
ARG base_image=non-existing
FROM $base_image
ENTRYPOINT ["/vlogscli-prod"]
ARG src_binary=non-existing
COPY $src_binary ./vlogscli-prod


@@ -1,245 +0,0 @@
package main
import (
"bufio"
"encoding/json"
"fmt"
"io"
"sort"
"strings"
"sync"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
type outputMode int
const (
outputModeJSONMultiline = outputMode(0)
outputModeJSONSingleline = outputMode(1)
outputModeLogfmt = outputMode(2)
outputModeCompact = outputMode(3)
)
func getOutputFormatter(outputMode outputMode) func(w io.Writer, fields []logstorage.Field) error {
switch outputMode {
case outputModeJSONMultiline:
return func(w io.Writer, fields []logstorage.Field) error {
return writeJSONObject(w, fields, true)
}
case outputModeJSONSingleline:
return func(w io.Writer, fields []logstorage.Field) error {
return writeJSONObject(w, fields, false)
}
case outputModeLogfmt:
return writeLogfmtObject
case outputModeCompact:
return writeCompactObject
default:
panic(fmt.Errorf("BUG: unexpected outputMode=%d", outputMode))
}
}
type jsonPrettifier struct {
r io.ReadCloser
formatter func(w io.Writer, fields []logstorage.Field) error
d *json.Decoder
pr *io.PipeReader
pw *io.PipeWriter
bw *bufio.Writer
wg sync.WaitGroup
}
func newJSONPrettifier(r io.ReadCloser, outputMode outputMode) *jsonPrettifier {
d := json.NewDecoder(r)
pr, pw := io.Pipe()
bw := bufio.NewWriter(pw)
formatter := getOutputFormatter(outputMode)
jp := &jsonPrettifier{
r: r,
formatter: formatter,
d: d,
pr: pr,
pw: pw,
bw: bw,
}
jp.wg.Add(1)
go func() {
defer jp.wg.Done()
err := jp.prettifyJSONLines()
jp.closePipesWithError(err)
}()
return jp
}
func (jp *jsonPrettifier) closePipesWithError(err error) {
_ = jp.pr.CloseWithError(err)
_ = jp.pw.CloseWithError(err)
}
func (jp *jsonPrettifier) prettifyJSONLines() error {
for jp.d.More() {
fields, err := readNextJSONObject(jp.d)
if err != nil {
return err
}
sort.Slice(fields, func(i, j int) bool {
return fields[i].Name < fields[j].Name
})
if err := jp.formatter(jp.bw, fields); err != nil {
return err
}
// Flush bw after every output line in order to show results as soon as they appear.
if err := jp.bw.Flush(); err != nil {
return err
}
}
return nil
}
func (jp *jsonPrettifier) Close() error {
jp.closePipesWithError(io.ErrUnexpectedEOF)
err := jp.r.Close()
jp.wg.Wait()
return err
}
func (jp *jsonPrettifier) Read(p []byte) (int, error) {
return jp.pr.Read(p)
}
func readNextJSONObject(d *json.Decoder) ([]logstorage.Field, error) {
t, err := d.Token()
if err != nil {
return nil, fmt.Errorf("cannot read '{': %w", err)
}
delim, ok := t.(json.Delim)
if !ok || delim.String() != "{" {
return nil, fmt.Errorf("unexpected token read; got %q; want '{'", delim)
}
var fields []logstorage.Field
for {
// Read object key
t, err := d.Token()
if err != nil {
return nil, fmt.Errorf("cannot read JSON object key or closing brace: %w", err)
}
delim, ok := t.(json.Delim)
if ok {
if delim.String() == "}" {
return fields, nil
}
return nil, fmt.Errorf("unexpected delimiter read; got %q; want '}'", delim)
}
key, ok := t.(string)
if !ok {
return nil, fmt.Errorf("unexpected token read for object key: %v; want string or '}'", t)
}
// read object value
t, err = d.Token()
if err != nil {
return nil, fmt.Errorf("cannot read JSON object value: %w", err)
}
value, ok := t.(string)
if !ok {
return nil, fmt.Errorf("unexpected token read for object value: %v; want string", t)
}
fields = append(fields, logstorage.Field{
Name: key,
Value: value,
})
}
}
func writeLogfmtObject(w io.Writer, fields []logstorage.Field) error {
data := logstorage.MarshalFieldsToLogfmt(nil, fields)
_, err := fmt.Fprintf(w, "%s\n", data)
return err
}
func writeCompactObject(w io.Writer, fields []logstorage.Field) error {
if len(fields) == 1 {
// Just write field value as is without name
_, err := fmt.Fprintf(w, "%s\n", fields[0].Value)
return err
}
if len(fields) == 2 && (fields[0].Name == "_time" || fields[1].Name == "_time") {
// Write _time\tfieldValue as is
if fields[0].Name == "_time" {
_, err := fmt.Fprintf(w, "%s\t%s\n", fields[0].Value, fields[1].Value)
return err
}
_, err := fmt.Fprintf(w, "%s\t%s\n", fields[1].Value, fields[0].Value)
return err
}
// Fall back to logfmt
return writeLogfmtObject(w, fields)
}
func writeJSONObject(w io.Writer, fields []logstorage.Field, isMultiline bool) error {
if len(fields) == 0 {
fmt.Fprintf(w, "{}\n")
return nil
}
fmt.Fprintf(w, "{")
writeNewlineIfNeeded(w, isMultiline)
if err := writeJSONObjectKeyValue(w, fields[0], isMultiline); err != nil {
return err
}
for _, f := range fields[1:] {
fmt.Fprintf(w, ",")
writeNewlineIfNeeded(w, isMultiline)
if err := writeJSONObjectKeyValue(w, f, isMultiline); err != nil {
return err
}
}
writeNewlineIfNeeded(w, isMultiline)
fmt.Fprintf(w, "}\n")
return nil
}
func writeNewlineIfNeeded(w io.Writer, isMultiline bool) {
if isMultiline {
fmt.Fprintf(w, "\n")
}
}
func writeJSONObjectKeyValue(w io.Writer, f logstorage.Field, isMultiline bool) error {
key := getJSONString(f.Name)
value := getJSONString(f.Value)
if isMultiline {
_, err := fmt.Fprintf(w, " %s: %s", key, value)
return err
}
_, err := fmt.Fprintf(w, "%s:%s", key, value)
return err
}
func getJSONString(s string) string {
data, err := json.Marshal(s)
if err != nil {
panic(fmt.Errorf("unexpected error when marshaling string to JSON: %w", err))
}
return jsonHTMLReplacer.Replace(string(data))
}
var jsonHTMLReplacer = strings.NewReplacer(
`\u003c`, "\u003c",
`\u003e`, "\u003e",
`\u0026`, "\u0026",
)


@@ -1,123 +0,0 @@
package main
import (
"errors"
"fmt"
"io"
"os"
"os/exec"
"os/signal"
"sync"
"syscall"
"github.com/mattn/go-isatty"
)
func isTerminal() bool {
return isatty.IsTerminal(os.Stdout.Fd()) && isatty.IsTerminal(os.Stderr.Fd())
}
func readWithLess(r io.Reader, disableColors, wrapLongLines bool) error {
if !isTerminal() {
// Just write everything to stdout if no terminal is available.
_, err := io.Copy(os.Stdout, r)
if err != nil && !isErrPipe(err) {
return fmt.Errorf("error when forwarding data to stdout: %w", err)
}
if err := os.Stdout.Sync(); err != nil {
return fmt.Errorf("cannot sync data to stdout: %w", err)
}
return nil
}
pr, pw, err := os.Pipe()
if err != nil {
return fmt.Errorf("cannot create pipe: %w", err)
}
defer func() {
_ = pr.Close()
_ = pw.Close()
}()
// Ignore Ctrl+C in the current process, so 'less' could handle it properly
cancel := ignoreSignals(os.Interrupt)
defer cancel()
// Start 'less' process
path, err := exec.LookPath("less")
if err != nil {
return fmt.Errorf("cannot find 'less' command: %w", err)
}
opts := []string{"less", "-F", "-X"}
if !disableColors {
opts = append(opts, "-R")
}
if !wrapLongLines {
opts = append(opts, "-S")
}
p, err := os.StartProcess(path, opts, &os.ProcAttr{
Env: append(os.Environ(), "LESSCHARSET=utf-8"),
Files: []*os.File{pr, os.Stdout, os.Stderr},
})
if err != nil {
return fmt.Errorf("cannot start 'less' process: %w", err)
}
// Close pr after 'less' finishes in a parallel goroutine
// in order to unblock forwarding data to stopped 'less' below.
waitch := make(chan *os.ProcessState)
go func() {
// Wait for 'less' process to finish.
ps, err := p.Wait()
if err != nil {
fatalf("unexpected error when waiting for 'less' process: %s", err)
}
_ = pr.Close()
waitch <- ps
}()
// Forward data from r to 'less'
_, err = io.Copy(pw, r)
_ = pw.Sync()
_ = pw.Close()
// Wait until 'less' finished
ps := <-waitch
// Verify 'less' status.
if !ps.Success() {
return fmt.Errorf("'less' finished with unexpected code %d", ps.ExitCode())
}
if err != nil && !isErrPipe(err) {
return fmt.Errorf("error when forwarding data to 'less': %w", err)
}
return nil
}
func isErrPipe(err error) bool {
return errors.Is(err, syscall.EPIPE) || errors.Is(err, io.ErrClosedPipe)
}
func ignoreSignals(sigs ...os.Signal) func() {
ch := make(chan os.Signal, 1)
signal.Notify(ch, sigs...)
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
for {
_, ok := <-ch
if !ok {
return
}
}
}()
return func() {
signal.Stop(ch)
close(ch)
wg.Wait()
}
}


@@ -1,457 +0,0 @@
package main
import (
"context"
"errors"
"flag"
"fmt"
"io"
"io/fs"
"net/http"
"net/url"
"os"
"os/signal"
"strconv"
"strings"
"syscall"
"time"
"github.com/ergochat/readline"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
var (
datasourceURL = flag.String("datasource.url", "http://localhost:9428/select/logsql/query", "URL for querying VictoriaLogs; "+
"see https://docs.victoriametrics.com/victorialogs/querying/#querying-logs . See also -tail.url")
tailURL = flag.String("tail.url", "", "URL for live tailing queries to VictoriaLogs; see https://docs.victoriametrics.com/victorialogs/querying/#live-tailing ."+
"The url is automatically detected from -datasource.url by replacing /query with /tail at the end if -tail.url is empty")
historyFile = flag.String("historyFile", "vlogscli-history", "Path to file with command history")
header = flagutil.NewArrayString("header", "Optional header to pass in request -datasource.url in the form 'HeaderName: value'")
accountID = flag.Int("accountID", 0, "Account ID to query; see https://docs.victoriametrics.com/victorialogs/#multitenancy")
projectID = flag.Int("projectID", 0, "Project ID to query; see https://docs.victoriametrics.com/victorialogs/#multitenancy")
)
const (
firstLinePrompt = ";> "
nextLinePrompt = ""
)
func main() {
// Write flags and help message to stdout, since it is easier to grep or pipe.
flag.CommandLine.SetOutput(os.Stdout)
flag.Usage = usage
envflag.Parse()
buildinfo.Init()
logger.InitNoLogFlags()
hes, err := parseHeaders(*header)
if err != nil {
fatalf("cannot parse -header command-line flag: %s", err)
}
headers = hes
incompleteLine := ""
cfg := &readline.Config{
Prompt: firstLinePrompt,
DisableAutoSaveHistory: true,
Listener: func(line []rune, pos int, _ rune) ([]rune, int, bool) {
incompleteLine = string(line)
return line, pos, false
},
}
rl, err := readline.NewFromConfig(cfg)
if err != nil {
fatalf("cannot initialize readline: %s", err)
}
fmt.Fprintf(rl, "sending queries to -datasource.url=%s\n", *datasourceURL)
fmt.Fprintf(rl, `type ? and press enter to see available commands`+"\n")
runReadlineLoop(rl, &incompleteLine)
if err := rl.Close(); err != nil {
fatalf("cannot close readline: %s", err)
}
}
func runReadlineLoop(rl *readline.Instance, incompleteLine *string) {
historyLines, err := loadFromHistory(*historyFile)
if err != nil {
fatalf("cannot load query history: %s", err)
}
for _, line := range historyLines {
if err := rl.SaveToHistory(line); err != nil {
fatalf("cannot initialize query history: %s", err)
}
}
outputMode := outputModeJSONMultiline
disableColors := true
wrapLongLines := false
s := ""
for {
line, err := rl.ReadLine()
if err != nil {
switch err {
case io.EOF:
if s != "" {
// This is non-interactive query execution.
executeQuery(context.Background(), rl, s, outputMode, disableColors, wrapLongLines)
}
return
case readline.ErrInterrupt:
if s == "" && *incompleteLine == "" {
fmt.Fprintf(rl, "interrupted\n")
os.Exit(128 + int(syscall.SIGINT))
}
// Default handling for Ctrl+C: clear the prompt and store the incompletely entered line into history
s += *incompleteLine
historyLines = pushToHistory(rl, historyLines, s)
s = ""
rl.SetPrompt(firstLinePrompt)
continue
default:
fatalf("unexpected error in readline: %s", err)
}
}
s += line
if s == "" {
// Skip empty lines
continue
}
if isQuitCommand(s) {
fmt.Fprintf(rl, "bye!\n")
_ = pushToHistory(rl, historyLines, s)
return
}
if isHelpCommand(s) {
printCommandsHelp(rl)
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\s` {
fmt.Fprintf(rl, "singleline json output mode\n")
outputMode = outputModeJSONSingleline
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\m` {
fmt.Fprintf(rl, "multiline json output mode\n")
outputMode = outputModeJSONMultiline
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\c` {
fmt.Fprintf(rl, "compact output mode\n")
outputMode = outputModeCompact
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\logfmt` {
fmt.Fprintf(rl, "logfmt output mode\n")
outputMode = outputModeLogfmt
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\wrap_long_lines` {
if wrapLongLines {
wrapLongLines = false
fmt.Fprintf(rl, "wrapping of long lines is disabled\n")
} else {
wrapLongLines = true
fmt.Fprintf(rl, "wrapping of long lines is enabled\n")
}
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\disable_colors` {
if !disableColors {
disableColors = true
fmt.Fprintf(rl, `disabled colors in compact output mode; enter \enable_colors for enabling it`+"\n")
}
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\enable_colors` {
if disableColors {
disableColors = false
fmt.Fprintf(rl, `enabled colors in compact output mode; type \disable_colors for disabling it`+"\n")
}
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if line != "" && !strings.HasSuffix(line, ";") {
// Assume the query is incomplete and allow the user to finish it on the next line
s += "\n"
rl.SetPrompt(nextLinePrompt)
continue
}
// Execute the query
ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt)
executeQuery(ctx, rl, s, outputMode, disableColors, wrapLongLines)
cancel()
historyLines = pushToHistory(rl, historyLines, s)
s = ""
rl.SetPrompt(firstLinePrompt)
}
}
func pushToHistory(rl *readline.Instance, historyLines []string, s string) []string {
s = strings.TrimSpace(s)
if len(historyLines) == 0 || historyLines[len(historyLines)-1] != s {
historyLines = append(historyLines, s)
if len(historyLines) > 500 {
historyLines = historyLines[len(historyLines)-500:]
}
if err := saveToHistory(*historyFile, historyLines); err != nil {
fatalf("cannot save query history: %s", err)
}
}
if err := rl.SaveToHistory(s); err != nil {
fatalf("cannot update query history: %s", err)
}
return historyLines
}
func loadFromHistory(filePath string) ([]string, error) {
data, err := os.ReadFile(filePath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) {
return nil, nil
}
return nil, err
}
linesQuoted := strings.Split(string(data), "\n")
lines := make([]string, 0, len(linesQuoted))
i := 0
for _, lineQuoted := range linesQuoted {
i++
if lineQuoted == "" {
continue
}
line, err := strconv.Unquote(lineQuoted)
if err != nil {
return nil, fmt.Errorf("cannot parse line #%d at %s: %w; line: [%s]", i, filePath, err, lineQuoted)
}
lines = append(lines, line)
}
return lines, nil
}
func saveToHistory(filePath string, lines []string) error {
linesQuoted := make([]string, len(lines))
for i, line := range lines {
lineQuoted := strconv.Quote(line)
linesQuoted[i] = lineQuoted
}
data := strings.Join(linesQuoted, "\n")
return os.WriteFile(filePath, []byte(data), 0600)
}
func isQuitCommand(s string) bool {
switch s {
case `\q`, "q", "quit", "exit":
return true
default:
return false
}
}
func isHelpCommand(s string) bool {
switch s {
case `\h`, "h", "help", "?":
return true
default:
return false
}
}
func printCommandsHelp(w io.Writer) {
fmt.Fprintf(w, "%s", `Available commands:
\q - quit
\h - show this help
\s - singleline json output mode
\m - multiline json output mode
\c - compact output mode
\logfmt - logfmt output mode
\wrap_long_lines - toggles wrapping long lines
\enable_colors - enable ANSI colors in compact output mode
\disable_colors - disable ANSI colors in compact output mode
\tail <query> - live tail <query> results
See https://docs.victoriametrics.com/victorialogs/querying/vlogscli/ for more details
`)
}
func executeQuery(ctx context.Context, output io.Writer, qStr string, outputMode outputMode, disableColors, wrapLongLines bool) {
if strings.HasPrefix(qStr, `\tail `) {
tailQuery(ctx, output, qStr, outputMode)
return
}
respBody := getQueryResponse(ctx, output, qStr, outputMode, *datasourceURL)
if respBody == nil {
return
}
defer func() {
_ = respBody.Close()
}()
if err := readWithLess(respBody, disableColors, wrapLongLines); err != nil {
fmt.Fprintf(output, "error when reading query response: %s\n", err)
return
}
}
func tailQuery(ctx context.Context, output io.Writer, qStr string, outputMode outputMode) {
qStr = strings.TrimPrefix(qStr, `\tail `)
qURL, err := getTailURL()
if err != nil {
fmt.Fprintf(output, "%s\n", err)
return
}
respBody := getQueryResponse(ctx, output, qStr, outputMode, qURL)
if respBody == nil {
return
}
defer func() {
_ = respBody.Close()
}()
if _, err := io.Copy(output, respBody); err != nil {
if !errors.Is(err, context.Canceled) && !isErrPipe(err) {
fmt.Fprintf(output, "error when live tailing query response: %s\n", err)
}
fmt.Fprintf(output, "\n")
return
}
}
func getTailURL() (string, error) {
if *tailURL != "" {
return *tailURL, nil
}
u, err := url.Parse(*datasourceURL)
if err != nil {
return "", fmt.Errorf("cannot parse -datasource.url=%q: %w", *datasourceURL, err)
}
if !strings.HasSuffix(u.Path, "/query") {
return "", fmt.Errorf("cannot find /query suffix in -datasource.url=%q", *datasourceURL)
}
u.Path = u.Path[:len(u.Path)-len("/query")] + "/tail"
return u.String(), nil
}
func getQueryResponse(ctx context.Context, output io.Writer, qStr string, outputMode outputMode, qURL string) io.ReadCloser {
// Parse the query and convert it to canonical view.
qStr = strings.TrimSuffix(qStr, ";")
q, err := logstorage.ParseQuery(qStr)
if err != nil {
fmt.Fprintf(output, "cannot parse query: %s\n", err)
return nil
}
qStr = q.String()
fmt.Fprintf(output, "executing [%s]...", qStr)
// Prepare HTTP request for qURL
args := make(url.Values)
args.Set("query", qStr)
data := strings.NewReader(args.Encode())
req, err := http.NewRequestWithContext(ctx, "POST", qURL, data)
if err != nil {
panic(fmt.Errorf("BUG: cannot prepare request to server: %w", err))
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
for _, h := range headers {
req.Header.Set(h.Name, h.Value)
}
req.Header.Set("AccountID", strconv.Itoa(*accountID))
req.Header.Set("ProjectID", strconv.Itoa(*projectID))
// Execute HTTP request at qURL
startTime := time.Now()
resp, err := httpClient.Do(req)
queryDuration := time.Since(startTime)
fmt.Fprintf(output, "; duration: %.3fs\n", queryDuration.Seconds())
if err != nil {
if errors.Is(err, context.Canceled) {
fmt.Fprintf(output, "\n")
} else {
fmt.Fprintf(output, "cannot execute query: %s\n", err)
}
return nil
}
// Verify response code
if resp.StatusCode != http.StatusOK {
body, err := io.ReadAll(resp.Body)
if err != nil {
body = []byte(fmt.Sprintf("cannot read response body: %s", err))
}
fmt.Fprintf(output, "unexpected status code: %d; response body:\n%s\n", resp.StatusCode, body)
return nil
}
// Prettify the response body
jp := newJSONPrettifier(resp.Body, outputMode)
return jp
}
var httpClient = &http.Client{}
var headers []headerEntry
type headerEntry struct {
Name string
Value string
}
func parseHeaders(a []string) ([]headerEntry, error) {
hes := make([]headerEntry, len(a))
for i, s := range a {
parts := strings.SplitN(s, ":", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("cannot parse header=%q; it must contain at least one ':'; for example, 'Cookie: foo'", s)
}
hes[i] = headerEntry{
Name: strings.TrimSpace(parts[0]),
Value: strings.TrimSpace(parts[1]),
}
}
return hes, nil
}
func fatalf(format string, args ...any) {
fmt.Fprintf(os.Stderr, format+"\n", args...)
os.Exit(1)
}
func usage() {
const s = `
vlogscli is a command-line tool for querying VictoriaLogs.
See the docs at https://docs.victoriametrics.com/victorialogs/querying/vlogscli/
`
flagutil.Usage(s)
}


@@ -1,11 +0,0 @@
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
ARG certs_image=non-existing
ARG root_image=non-existing
FROM $certs_image AS certs
RUN apk update && apk upgrade && apk --update --no-cache add ca-certificates
FROM $root_image
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ENTRYPOINT ["/vlogscli-prod"]
ARG TARGETARCH
COPY vlogscli-linux-${TARGETARCH}-prod ./vlogscli-prod


@@ -1,7 +0,0 @@
# All these commands must run from repository root.
vlogsgenerator:
APP_NAME=vlogsgenerator $(MAKE) app-local
vlogsgenerator-race:
APP_NAME=vlogsgenerator RACE=-race $(MAKE) app-local


@@ -1,158 +1 @@
# vlogsgenerator
Logs generator for [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/).
## How to build vlogsgenerator?
Run `make vlogsgenerator` from the repository root. This builds `bin/vlogsgenerator` binary.
## How to run vlogsgenerator?
`vlogsgenerator` generates logs in [JSON line format](https://jsonlines.org/) suitable for the ingestion
via [`/insert/jsonline` endpoint at VictoriaLogs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api).
By default it writes the generated logs to `stdout`. For example, the following command writes generated logs to `stdout`:
```
bin/vlogsgenerator
```
It is possible to redirect the generated logs to a file. For example, the following command writes the generated logs to the `logs.json` file:
```
bin/vlogsgenerator > logs.json
```
The generated logs in the `logs.json` file can be inspected with the following command:
```
head logs.json | jq .
```
Below is an example output:
```json
{
"_time": "2024-05-08T14:34:00.854Z",
"_msg": "message for the stream 8 and worker 0; ip=185.69.136.129; uuid=b4fe8f1a-c93c-dea3-ba11-5b9f0509291e; u64=8996587920687045253",
"host": "host_8",
"worker_id": "0",
"run_id": "f9b3deee-e6b6-7f56-5deb-1586e4e81725",
"const_0": "some value 0 8",
"const_1": "some value 1 8",
"const_2": "some value 2 8",
"var_0": "some value 0 12752539384823438260",
"dict_0": "warn",
"dict_1": "info",
"u8_0": "6",
"u16_0": "35202",
"u32_0": "1964973739",
"u64_0": "4810489083243239145",
"float_0": "1.868",
"ip_0": "250.34.75.125",
"timestamp_0": "1799-03-16T01:34:18.311Z",
"json_0": "{\"foo\":\"bar_3\",\"baz\":{\"a\":[\"x\",\"y\"]},\"f3\":NaN,\"f4\":32}"
}
{
"_time": "2024-05-08T14:34:00.854Z",
"_msg": "message for the stream 9 and worker 0; ip=164.244.254.194; uuid=7e8373b1-ce0d-1ce7-8e96-4bcab8955598; u64=13949903463741076522",
"host": "host_9",
"worker_id": "0",
"run_id": "f9b3deee-e6b6-7f56-5deb-1586e4e81725",
"const_0": "some value 0 9",
"const_1": "some value 1 9",
"const_2": "some value 2 9",
"var_0": "some value 0 5371555382075206134",
"dict_0": "INFO",
"dict_1": "FATAL",
"u8_0": "219",
"u16_0": "31459",
"u32_0": "3918836777",
"u64_0": "6593354256620219850",
"float_0": "1.085",
"ip_0": "253.151.88.158",
"timestamp_0": "2042-10-05T16:42:57.082Z",
"json_0": "{\"foo\":\"bar_5\",\"baz\":{\"a\":[\"x\",\"y\"]},\"f3\":NaN,\"f4\":27}"
}
```
The `run_id` field uniquely identifies every `vlogsgenerator` invocation.
### How to write logs to VictoriaLogs?
The generated logs can be written directly to VictoriaLogs by passing the address of [`/insert/jsonline` endpoint](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api)
to `-addr` command-line flag. For example, the following command writes the generated logs to VictoriaLogs running at `localhost`:
```
bin/vlogsgenerator -addr=http://localhost:9428/insert/jsonline
```
### Configuration
`vlogsgenerator` accepts various command-line flags, which can be used for configuring the number and the shape of the generated logs.
These flags can be inspected by running `vlogsgenerator -help`. Below are the most interesting flags:
* `-start` - starting timestamp for generating logs. Logs are evenly generated on the [`-start` ... `-end`] interval.
* `-end` - ending timestamp for generating logs. Logs are evenly generated on the [`-start` ... `-end`] interval.
* `-activeStreams` - the number of active [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) to generate.
* `-logsPerStream` - the number of log entries to generate per each log stream. Log entries are evenly distributed on the [`-start` ... `-end`] interval.
The total number of generated logs can be calculated as `-activeStreams` * `-logsPerStream`.
For example, the following command generates `1_000_000` log entries on the time range `[2024-01-01 - 2024-02-01]` across `100`
[log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields), where every log stream contains `10_000` log entries,
and writes them to `http://localhost:9428/insert/jsonline`:
```
bin/vlogsgenerator \
-start=2024-01-01 -end=2024-02-01 \
-activeStreams=100 \
-logsPerStream=10_000 \
-addr=http://localhost:9428/insert/jsonline
```
### Churn rate
It is possible to generate churn rate for active [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
by setting the `-totalStreams` command-line flag to a value bigger than `-activeStreams`. For example, the following command generates
logs for `1000` total streams, while the number of active streams equals `100`. This means that at any given time there are logs for `100` streams,
but these streams change over the [`-start` ... `-end`] time range, so the total number of streams over that range reaches `1000`:
```
bin/vlogsgenerator \
-start=2024-01-01 -end=2024-02-01 \
-activeStreams=100 \
-totalStreams=1_000 \
-logsPerStream=10_000 \
-addr=http://localhost:9428/insert/jsonline
```
In this case the total number of generated logs equals `-totalStreams` * `-logsPerStream` = `10_000_000`.
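As a quick sanity check, the total above follows from simple multiplication (plain Go, only illustrating the arithmetic):

```go
package main

import "fmt"

func main() {
	// Churn-rate example from above: 1000 total streams, 10_000 logs per stream.
	totalStreams := 1_000
	logsPerStream := 10_000
	fmt.Println(totalStreams * logsPerStream) // 10000000
}
```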
### Benchmark tuning
By default `vlogsgenerator` generates and writes logs with a single worker. This may limit the maximum data ingestion rate during benchmarks.
The number of workers can be changed via the `-workers` command-line flag. For example, the following command generates and writes logs with `16` workers:
```
bin/vlogsgenerator \
-start=2024-01-01 -end=2024-02-01 \
-activeStreams=100 \
-logsPerStream=10_000 \
-addr=http://localhost:9428/insert/jsonline \
-workers=16
```
### Output statistics
Every 10 seconds `vlogsgenerator` writes statistics about the generated logs to `stderr`. The frequency of these statistics can be adjusted via the `-statInterval` command-line flag.
For example, the following command writes statistics every 2 seconds:
```
bin/vlogsgenerator \
-start=2024-01-01 -end=2024-02-01 \
-activeStreams=100 \
-logsPerStream=10_000 \
-addr=http://localhost:9428/insert/jsonline \
-statInterval=2s
```
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).


@@ -1,354 +0,0 @@
package main
import (
"bufio"
"flag"
"fmt"
"io"
"math"
"math/rand"
"net/http"
"net/url"
"os"
"strconv"
"sync"
"sync/atomic"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
)
var (
addr = flag.String("addr", "stdout", "HTTP address to push the generated logs to; if it is set to stdout, then logs are generated to stdout")
workers = flag.Int("workers", 1, "The number of workers to use to push logs to -addr")
start = newTimeFlag("start", "-1d", "Generated logs start from this time; see https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#timestamp-formats")
end = newTimeFlag("end", "0s", "Generated logs end at this time; see https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#timestamp-formats")
activeStreams = flag.Int("activeStreams", 100, "The number of active log streams to generate; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields")
totalStreams = flag.Int("totalStreams", 0, "The number of total log streams; if -totalStreams > -activeStreams, then some active streams are substituted with new streams "+
"during data generation")
logsPerStream = flag.Int64("logsPerStream", 1_000, "The number of log entries to generate per each log stream. Log entries are evenly distributed between -start and -end")
constFieldsPerLog = flag.Int("constFieldsPerLog", 3, "The number of fields with constant values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
varFieldsPerLog = flag.Int("varFieldsPerLog", 1, "The number of fields with variable values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
dictFieldsPerLog = flag.Int("dictFieldsPerLog", 2, "The number of fields with up to 8 different values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
u8FieldsPerLog = flag.Int("u8FieldsPerLog", 1, "The number of fields with uint8 values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
u16FieldsPerLog = flag.Int("u16FieldsPerLog", 1, "The number of fields with uint16 values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
u32FieldsPerLog = flag.Int("u32FieldsPerLog", 1, "The number of fields with uint32 values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
u64FieldsPerLog = flag.Int("u64FieldsPerLog", 1, "The number of fields with uint64 values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
i64FieldsPerLog = flag.Int("i64FieldsPerLog", 1, "The number of fields with int64 values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
floatFieldsPerLog = flag.Int("floatFieldsPerLog", 1, "The number of fields with float64 values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
ipFieldsPerLog = flag.Int("ipFieldsPerLog", 1, "The number of fields with IPv4 values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
timestampFieldsPerLog = flag.Int("timestampFieldsPerLog", 1, "The number of fields with ISO8601 timestamps per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
jsonFieldsPerLog = flag.Int("jsonFieldsPerLog", 1, "The number of JSON fields to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
statInterval = flag.Duration("statInterval", 10*time.Second, "The interval between publishing the stats")
)
func main() {
// Write flags and help message to stdout, since it is easier to grep or pipe.
flag.CommandLine.SetOutput(os.Stdout)
envflag.Parse()
buildinfo.Init()
logger.Init()
var remoteWriteURL *url.URL
if *addr != "stdout" {
urlParsed, err := url.Parse(*addr)
if err != nil {
logger.Fatalf("cannot parse -addr=%q: %s", *addr, err)
}
qs, err := url.ParseQuery(urlParsed.RawQuery)
if err != nil {
logger.Fatalf("cannot parse query string in -addr=%q: %s", *addr, err)
}
qs.Set("_stream_fields", "host,worker_id")
urlParsed.RawQuery = qs.Encode()
remoteWriteURL = urlParsed
}
if start.nsec >= end.nsec {
logger.Fatalf("-start=%s must be smaller than -end=%s", start, end)
}
if *activeStreams <= 0 {
logger.Fatalf("-activeStreams must be bigger than 0; got %d", *activeStreams)
}
if *logsPerStream <= 0 {
logger.Fatalf("-logsPerStream must be bigger than 0; got %d", *logsPerStream)
}
if *totalStreams < *activeStreams {
*totalStreams = *activeStreams
}
cfg := &workerConfig{
url: remoteWriteURL,
activeStreams: *activeStreams,
totalStreams: *totalStreams,
}
// Divide the total and active streams among workers; integer division drops any remainder.
if *workers <= 0 {
logger.Fatalf("-workers must be bigger than 0; got %d", *workers)
}
if *workers > *activeStreams {
logger.Fatalf("-workers=%d cannot exceed -activeStreams=%d", *workers, *activeStreams)
}
cfg.activeStreams /= *workers
cfg.totalStreams /= *workers
logger.Infof("start -workers=%d workers for ingesting -logsPerStream=%d log entries per each -totalStreams=%d (-activeStreams=%d) on a time range -start=%s, -end=%s to -addr=%s",
*workers, *logsPerStream, *totalStreams, *activeStreams, toRFC3339(start.nsec), toRFC3339(end.nsec), *addr)
startTime := time.Now()
var wg sync.WaitGroup
for i := 0; i < *workers; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
generateAndPushLogs(cfg, workerID)
}(i)
}
go func() {
prevEntries := uint64(0)
prevBytes := uint64(0)
ticker := time.NewTicker(*statInterval)
for range ticker.C {
currEntries := logEntriesCount.Load()
deltaEntries := currEntries - prevEntries
rateEntries := float64(deltaEntries) / statInterval.Seconds()
currBytes := bytesGenerated.Load()
deltaBytes := currBytes - prevBytes
rateBytes := float64(deltaBytes) / statInterval.Seconds()
logger.Infof("generated %dK log entries (%dK total) at %.0fK entries/sec, %dMB (%dMB total) at %.0fMB/sec",
deltaEntries/1e3, currEntries/1e3, rateEntries/1e3, deltaBytes/1e6, currBytes/1e6, rateBytes/1e6)
prevEntries = currEntries
prevBytes = currBytes
}
}()
wg.Wait()
dSecs := time.Since(startTime).Seconds()
currEntries := logEntriesCount.Load()
currBytes := bytesGenerated.Load()
rateEntries := float64(currEntries) / dSecs
rateBytes := float64(currBytes) / dSecs
logger.Infof("ingested %dK log entries (%dMB) in %.3f seconds; avg ingestion rate: %.0fK entries/sec, %.0fMB/sec", currEntries/1e3, currBytes/1e6, dSecs, rateEntries/1e3, rateBytes/1e6)
}
var logEntriesCount atomic.Uint64
var bytesGenerated atomic.Uint64
type workerConfig struct {
url *url.URL
activeStreams int
totalStreams int
}
type statWriter struct {
w io.Writer
}
func (sw *statWriter) Write(p []byte) (int, error) {
bytesGenerated.Add(uint64(len(p)))
return sw.w.Write(p)
}
func generateAndPushLogs(cfg *workerConfig, workerID int) {
pr, pw := io.Pipe()
sw := &statWriter{
w: pw,
}
// The 1MB write buffer increases data ingestion performance by reducing the number of send() syscalls
bw := bufio.NewWriterSize(sw, 1024*1024)
doneCh := make(chan struct{})
go func() {
generateLogs(bw, workerID, cfg.activeStreams, cfg.totalStreams)
_ = bw.Flush()
_ = pw.Close()
close(doneCh)
}()
if cfg.url == nil {
_, err := io.Copy(os.Stdout, pr)
if err != nil {
logger.Fatalf("unexpected error when writing logs to stdout: %s", err)
}
return
}
req, err := http.NewRequest("POST", cfg.url.String(), pr)
if err != nil {
logger.Fatalf("cannot create request to %q: %s", cfg.url, err)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
logger.Fatalf("cannot perform request to %q: %s", cfg.url, err)
}
defer resp.Body.Close()
if resp.StatusCode/100 != 2 {
logger.Fatalf("unexpected status code from %q: %d; want 2xx", cfg.url, resp.StatusCode)
}
// Wait until the generateLogs goroutine is finished.
<-doneCh
}
func generateLogs(bw *bufio.Writer, workerID, activeStreams, totalStreams int) {
streamLifetime := int64(float64(end.nsec-start.nsec) * (float64(activeStreams) / float64(totalStreams)))
streamStep := int64(float64(end.nsec-start.nsec) / float64(totalStreams-activeStreams+1))
// Guard against division by zero when -logsPerStream=1.
step := streamLifetime
if *logsPerStream > 1 {
step = streamLifetime / (*logsPerStream - 1)
}
currNsec := start.nsec
for currNsec < end.nsec {
firstStreamID := int((currNsec - start.nsec) / streamStep)
generateLogsAtTimestamp(bw, workerID, currNsec, firstStreamID, activeStreams)
currNsec += step
}
}
var runID = toUUID(rand.Uint64(), rand.Uint64())
func generateLogsAtTimestamp(bw *bufio.Writer, workerID int, ts int64, firstStreamID, activeStreams int) {
streamID := firstStreamID
timeStr := toRFC3339(ts)
for i := 0; i < activeStreams; i++ {
ip := toIPv4(rand.Uint32())
uuid := toUUID(rand.Uint64(), rand.Uint64())
fmt.Fprintf(bw, `{"_time":"%s","_msg":"message for the stream %d and worker %d; ip=%s; uuid=%s; u64=%d","host":"host_%d","worker_id":"%d"`,
timeStr, streamID, workerID, ip, uuid, rand.Uint64(), streamID, workerID)
fmt.Fprintf(bw, `,"run_id":"%s"`, runID)
for j := 0; j < *constFieldsPerLog; j++ {
fmt.Fprintf(bw, `,"const_%d":"some value %d %d"`, j, j, streamID)
}
for j := 0; j < *varFieldsPerLog; j++ {
fmt.Fprintf(bw, `,"var_%d":"some value %d %d"`, j, j, rand.Uint64())
}
for j := 0; j < *dictFieldsPerLog; j++ {
fmt.Fprintf(bw, `,"dict_%d":"%s"`, j, dictValues[rand.Intn(len(dictValues))])
}
for j := 0; j < *u8FieldsPerLog; j++ {
fmt.Fprintf(bw, `,"u8_%d":"%d"`, j, uint8(rand.Uint32()))
}
for j := 0; j < *u16FieldsPerLog; j++ {
fmt.Fprintf(bw, `,"u16_%d":"%d"`, j, uint16(rand.Uint32()))
}
for j := 0; j < *u32FieldsPerLog; j++ {
fmt.Fprintf(bw, `,"u32_%d":"%d"`, j, rand.Uint32())
}
for j := 0; j < *u64FieldsPerLog; j++ {
fmt.Fprintf(bw, `,"u64_%d":"%d"`, j, rand.Uint64())
}
for j := 0; j < *i64FieldsPerLog; j++ {
fmt.Fprintf(bw, `,"i64_%d":"%d"`, j, int64(rand.Uint64()))
}
for j := 0; j < *floatFieldsPerLog; j++ {
fmt.Fprintf(bw, `,"float_%d":"%v"`, j, math.Round(10_000*rand.Float64())/1000)
}
for j := 0; j < *ipFieldsPerLog; j++ {
ip := toIPv4(rand.Uint32())
fmt.Fprintf(bw, `,"ip_%d":"%s"`, j, ip)
}
for j := 0; j < *timestampFieldsPerLog; j++ {
timestamp := toISO8601(int64(rand.Uint64()))
fmt.Fprintf(bw, `,"timestamp_%d":"%s"`, j, timestamp)
}
for j := 0; j < *jsonFieldsPerLog; j++ {
fmt.Fprintf(bw, `,"json_%d":"{\"foo\":\"bar_%d\",\"baz\":{\"a\":[\"x\",\"y\"]},\"f3\":NaN,\"f4\":%d}"`, j, rand.Intn(10), rand.Intn(100))
}
fmt.Fprintf(bw, "}\n")
logEntriesCount.Add(1)
streamID++
}
}
var dictValues = []string{
"debug",
"info",
"warn",
"error",
"fatal",
"ERROR",
"FATAL",
"INFO",
}
func newTimeFlag(name, defaultValue, description string) *timeFlag {
var tf timeFlag
if err := tf.Set(defaultValue); err != nil {
logger.Panicf("invalid defaultValue=%q for flag %q: %s", defaultValue, name, err)
}
flag.Var(&tf, name, description)
return &tf
}
type timeFlag struct {
s string
nsec int64
}
func (tf *timeFlag) Set(s string) error {
msec, err := timeutil.ParseTimeMsec(s)
if err != nil {
return fmt.Errorf("cannot parse time from %q: %w", s, err)
}
tf.s = s
tf.nsec = msec * 1e6
return nil
}
func (tf *timeFlag) String() string {
return tf.s
}
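`timeFlag` implements the standard `flag.Value` interface (`Set` and `String`), which lets `flag.Var` register a custom type as a command-line flag. A self-contained sketch of the same pattern; it uses `time.Parse` with RFC3339 instead of `timeutil.ParseTimeMsec` to avoid the VictoriaMetrics dependency, so it accepts a narrower set of time formats than the original:

```go
package main

import (
	"flag"
	"fmt"
	"time"
)

// timeFlag stores a timestamp as Unix nanoseconds, parsed from RFC3339.
type timeFlag struct {
	s    string
	nsec int64
}

// Set implements flag.Value.
func (tf *timeFlag) Set(s string) error {
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		return fmt.Errorf("cannot parse time from %q: %w", s, err)
	}
	tf.s = s
	tf.nsec = t.UnixNano()
	return nil
}

// String implements flag.Value.
func (tf *timeFlag) String() string { return tf.s }

func main() {
	fs := flag.NewFlagSet("demo", flag.ContinueOnError)
	var tf timeFlag
	fs.Var(&tf, "start", "start time in RFC3339 format")
	fs.Parse([]string{"-start", "2024-01-01T00:00:00Z"})
	fmt.Println(tf.nsec)
}
```

Note that `newTimeFlag` above additionally calls `Set` with the default value before registering the flag, so the flag has a valid value even when the user omits it.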
func toRFC3339(nsec int64) string {
return time.Unix(0, nsec).UTC().Format(time.RFC3339Nano)
}
func toISO8601(nsec int64) string {
return time.Unix(0, nsec).UTC().Format("2006-01-02T15:04:05.000Z")
}
func toIPv4(n uint32) string {
dst := make([]byte, 0, len("255.255.255.255"))
dst = marshalUint64(dst, uint64(n>>24))
dst = append(dst, '.')
dst = marshalUint64(dst, uint64((n>>16)&0xff))
dst = append(dst, '.')
dst = marshalUint64(dst, uint64((n>>8)&0xff))
dst = append(dst, '.')
dst = marshalUint64(dst, uint64(n&0xff))
return string(dst)
}
func toUUID(a, b uint64) string {
return fmt.Sprintf("%08x-%04x-%04x-%04x-%012x", a&(1<<32-1), (a>>32)&(1<<16-1), (a >> 48), b&(1<<16-1), b>>16)
}
// marshalUint64 appends string representation of n to dst and returns the result.
func marshalUint64(dst []byte, n uint64) []byte {
return strconv.AppendUint(dst, n, 10)
}
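The helpers above avoid `fmt`-based formatting on the hot path by appending decimal digits directly into a reused byte slice. A self-contained demonstration of `toIPv4` (same code as above, extracted so it can run on its own):

```go
package main

import (
	"fmt"
	"strconv"
)

// marshalUint64 appends the decimal representation of n to dst and returns the result.
func marshalUint64(dst []byte, n uint64) []byte {
	return strconv.AppendUint(dst, n, 10)
}

// toIPv4 formats n as a dotted-quad IPv4 address, most significant byte first.
func toIPv4(n uint32) string {
	dst := make([]byte, 0, len("255.255.255.255"))
	dst = marshalUint64(dst, uint64(n>>24))
	dst = append(dst, '.')
	dst = marshalUint64(dst, uint64((n>>16)&0xff))
	dst = append(dst, '.')
	dst = marshalUint64(dst, uint64((n>>8)&0xff))
	dst = append(dst, '.')
	dst = marshalUint64(dst, uint64(n&0xff))
	return string(dst)
}

func main() {
	// 0xC0A80101 is 192.168.1.1 byte by byte.
	fmt.Println(toIPv4(0xC0A80101))
}
```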

app/vlselect/README.md Normal file

@@ -0,0 +1 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).


@@ -1,316 +0,0 @@
package internalselect
import (
"context"
"fmt"
"net/http"
"strconv"
"sync"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage/netselect"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/atomicutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding/zstd"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
)
// RequestHandler processes requests to /internal/select/*
func RequestHandler(ctx context.Context, w http.ResponseWriter, r *http.Request) {
startTime := time.Now()
path := r.URL.Path
rh := requestHandlers[path]
if rh == nil {
httpserver.Errorf(w, r, "unsupported endpoint requested: %s", path)
return
}
metrics.GetOrCreateCounter(fmt.Sprintf(`vl_http_requests_total{path=%q}`, path)).Inc()
if err := rh(ctx, w, r); err != nil && !netutil.IsTrivialNetworkError(err) {
metrics.GetOrCreateCounter(fmt.Sprintf(`vl_http_request_errors_total{path=%q}`, path)).Inc()
httpserver.Errorf(w, r, "%s", err)
// The return is skipped intentionally in order to track the duration of failed queries.
}
metrics.GetOrCreateSummary(fmt.Sprintf(`vl_http_request_duration_seconds{path=%q}`, path)).UpdateDuration(startTime)
}
var requestHandlers = map[string]func(ctx context.Context, w http.ResponseWriter, r *http.Request) error{
"/internal/select/query": processQueryRequest,
"/internal/select/field_names": processFieldNamesRequest,
"/internal/select/field_values": processFieldValuesRequest,
"/internal/select/stream_field_names": processStreamFieldNamesRequest,
"/internal/select/stream_field_values": processStreamFieldValuesRequest,
"/internal/select/streams": processStreamsRequest,
"/internal/select/stream_ids": processStreamIDsRequest,
}
func processQueryRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.QueryProtocolVersion)
if err != nil {
return err
}
w.Header().Set("Content-Type", "application/octet-stream")
var wLock sync.Mutex
var dataLenBuf []byte
sendBuf := func(bb *bytesutil.ByteBuffer) error {
if len(bb.B) == 0 {
return nil
}
data := bb.B
if !cp.DisableCompression {
bufLen := len(bb.B)
bb.B = zstd.CompressLevel(bb.B, bb.B, 1)
data = bb.B[bufLen:]
}
wLock.Lock()
dataLenBuf = encoding.MarshalUint64(dataLenBuf[:0], uint64(len(data)))
_, err := w.Write(dataLenBuf)
if err == nil {
_, err = w.Write(data)
}
wLock.Unlock()
// Reset the sent buf
bb.Reset()
return err
}
var bufs atomicutil.Slice[bytesutil.ByteBuffer]
var errGlobalLock sync.Mutex
var errGlobal error
writeBlock := func(workerID uint, db *logstorage.DataBlock) {
// Best-effort check without the lock; the definitive error is returned after RunQuery completes.
if errGlobal != nil {
return
}
bb := bufs.Get(workerID)
bb.B = db.Marshal(bb.B)
if len(bb.B) < 1024*1024 {
// Fast path - the bb is too small to be sent to the client yet.
return
}
// Slow path - the bb must be sent to the client.
if err := sendBuf(bb); err != nil {
errGlobalLock.Lock()
// Keep only the first error encountered.
if errGlobal == nil {
errGlobal = err
}
errGlobalLock.Unlock()
}
}
if err := vlstorage.RunQuery(ctx, cp.TenantIDs, cp.Query, writeBlock); err != nil {
return err
}
if errGlobal != nil {
return errGlobal
}
// Send the remaining data
for _, bb := range bufs.All() {
if err := sendBuf(bb); err != nil {
return err
}
}
return nil
}
func processFieldNamesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.FieldNamesProtocolVersion)
if err != nil {
return err
}
fieldNames, err := vlstorage.GetFieldNames(ctx, cp.TenantIDs, cp.Query)
if err != nil {
return fmt.Errorf("cannot obtain field names: %w", err)
}
return writeValuesWithHits(w, fieldNames, cp.DisableCompression)
}
func processFieldValuesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.FieldValuesProtocolVersion)
if err != nil {
return err
}
fieldName := r.FormValue("field")
limit, err := getInt64FromRequest(r, "limit")
if err != nil {
return err
}
fieldValues, err := vlstorage.GetFieldValues(ctx, cp.TenantIDs, cp.Query, fieldName, uint64(limit))
if err != nil {
return fmt.Errorf("cannot obtain field values: %w", err)
}
return writeValuesWithHits(w, fieldValues, cp.DisableCompression)
}
func processStreamFieldNamesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.StreamFieldNamesProtocolVersion)
if err != nil {
return err
}
fieldNames, err := vlstorage.GetStreamFieldNames(ctx, cp.TenantIDs, cp.Query)
if err != nil {
return fmt.Errorf("cannot obtain stream field names: %w", err)
}
return writeValuesWithHits(w, fieldNames, cp.DisableCompression)
}
func processStreamFieldValuesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.StreamFieldValuesProtocolVersion)
if err != nil {
return err
}
fieldName := r.FormValue("field")
limit, err := getInt64FromRequest(r, "limit")
if err != nil {
return err
}
fieldValues, err := vlstorage.GetStreamFieldValues(ctx, cp.TenantIDs, cp.Query, fieldName, uint64(limit))
if err != nil {
return fmt.Errorf("cannot obtain stream field values: %w", err)
}
return writeValuesWithHits(w, fieldValues, cp.DisableCompression)
}
func processStreamsRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.StreamsProtocolVersion)
if err != nil {
return err
}
limit, err := getInt64FromRequest(r, "limit")
if err != nil {
return err
}
streams, err := vlstorage.GetStreams(ctx, cp.TenantIDs, cp.Query, uint64(limit))
if err != nil {
return fmt.Errorf("cannot obtain streams: %w", err)
}
return writeValuesWithHits(w, streams, cp.DisableCompression)
}
func processStreamIDsRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.StreamIDsProtocolVersion)
if err != nil {
return err
}
limit, err := getInt64FromRequest(r, "limit")
if err != nil {
return err
}
streamIDs, err := vlstorage.GetStreamIDs(ctx, cp.TenantIDs, cp.Query, uint64(limit))
if err != nil {
return fmt.Errorf("cannot obtain stream ids: %w", err)
}
return writeValuesWithHits(w, streamIDs, cp.DisableCompression)
}
type commonParams struct {
TenantIDs []logstorage.TenantID
Query *logstorage.Query
DisableCompression bool
}
func getCommonParams(r *http.Request, expectedProtocolVersion string) (*commonParams, error) {
version := r.FormValue("version")
if version != expectedProtocolVersion {
return nil, fmt.Errorf("unexpected version=%q; want %q", version, expectedProtocolVersion)
}
tenantIDsStr := r.FormValue("tenant_ids")
tenantIDs, err := logstorage.UnmarshalTenantIDs([]byte(tenantIDsStr))
if err != nil {
return nil, fmt.Errorf("cannot unmarshal tenant_ids=%q: %w", tenantIDsStr, err)
}
timestamp, err := getInt64FromRequest(r, "timestamp")
if err != nil {
return nil, err
}
qStr := r.FormValue("query")
q, err := logstorage.ParseQueryAtTimestamp(qStr, timestamp)
if err != nil {
return nil, fmt.Errorf("cannot unmarshal query=%q: %w", qStr, err)
}
s := r.FormValue("disable_compression")
disableCompression, err := strconv.ParseBool(s)
if err != nil {
return nil, fmt.Errorf("cannot parse disable_compression=%q: %w", s, err)
}
cp := &commonParams{
TenantIDs: tenantIDs,
Query: q,
DisableCompression: disableCompression,
}
return cp, nil
}
func writeValuesWithHits(w http.ResponseWriter, vhs []logstorage.ValueWithHits, disableCompression bool) error {
var b []byte
for i := range vhs {
b = vhs[i].Marshal(b)
}
if !disableCompression {
b = zstd.CompressLevel(nil, b, 1)
}
w.Header().Set("Content-Type", "application/octet-stream")
if _, err := w.Write(b); err != nil {
return fmt.Errorf("cannot send response to the client: %w", err)
}
return nil
}
func getInt64FromRequest(r *http.Request, argName string) (int64, error) {
s := r.FormValue(argName)
n, err := strconv.ParseInt(s, 10, 64)
if err != nil {
return 0, fmt.Errorf("cannot parse %s=%q: %w", argName, s, err)
}
return n, nil
}


@@ -1,49 +0,0 @@
{% import (
"slices"
) %}
{% stripspace %}
{% func FacetsResponse(m map[string][]facetEntry) %}
{
{% code
sortedKeys := make([]string, 0, len(m))
for k := range m {
sortedKeys = append(sortedKeys, k)
}
slices.Sort(sortedKeys)
%}
"facets":[
{% if len(sortedKeys) > 0 %}
{%= facetsLine(m, sortedKeys[0]) %}
{% for _, k := range sortedKeys[1:] %}
,{%= facetsLine(m, k) %}
{% endfor %}
{% endif %}
]
}
{% endfunc %}
{% func facetsLine(m map[string][]facetEntry, k string) %}
{
"field_name":{%q= k %},
"values":[
{% code fes := m[k] %}
{% if len(fes) > 0 %}
{%= facetLine(fes[0]) %}
{% for _, fe := range fes[1:] %}
,{%= facetLine(fe) %}
{% endfor %}
{% endif %}
]
}
{% endfunc %}
{% func facetLine(fe facetEntry) %}
{
"field_value":{%q= fe.value %},
"hits":{%s= fe.hits %}
}
{% endfunc %}
{% endstripspace %}


@@ -1,178 +0,0 @@
// Code generated by qtc from "facets_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vlselect/logsql/facets_response.qtpl:1
package logsql
//line app/vlselect/logsql/facets_response.qtpl:1
import (
"slices"
)
//line app/vlselect/logsql/facets_response.qtpl:7
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vlselect/logsql/facets_response.qtpl:7
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vlselect/logsql/facets_response.qtpl:7
func StreamFacetsResponse(qw422016 *qt422016.Writer, m map[string][]facetEntry) {
//line app/vlselect/logsql/facets_response.qtpl:7
qw422016.N().S(`{`)
//line app/vlselect/logsql/facets_response.qtpl:10
sortedKeys := make([]string, 0, len(m))
for k := range m {
sortedKeys = append(sortedKeys, k)
}
slices.Sort(sortedKeys)
//line app/vlselect/logsql/facets_response.qtpl:15
qw422016.N().S(`"facets":[`)
//line app/vlselect/logsql/facets_response.qtpl:17
if len(sortedKeys) > 0 {
//line app/vlselect/logsql/facets_response.qtpl:18
streamfacetsLine(qw422016, m, sortedKeys[0])
//line app/vlselect/logsql/facets_response.qtpl:19
for _, k := range sortedKeys[1:] {
//line app/vlselect/logsql/facets_response.qtpl:19
qw422016.N().S(`,`)
//line app/vlselect/logsql/facets_response.qtpl:20
streamfacetsLine(qw422016, m, k)
//line app/vlselect/logsql/facets_response.qtpl:21
}
//line app/vlselect/logsql/facets_response.qtpl:22
}
//line app/vlselect/logsql/facets_response.qtpl:22
qw422016.N().S(`]}`)
//line app/vlselect/logsql/facets_response.qtpl:25
}
//line app/vlselect/logsql/facets_response.qtpl:25
func WriteFacetsResponse(qq422016 qtio422016.Writer, m map[string][]facetEntry) {
//line app/vlselect/logsql/facets_response.qtpl:25
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/facets_response.qtpl:25
StreamFacetsResponse(qw422016, m)
//line app/vlselect/logsql/facets_response.qtpl:25
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/facets_response.qtpl:25
}
//line app/vlselect/logsql/facets_response.qtpl:25
func FacetsResponse(m map[string][]facetEntry) string {
//line app/vlselect/logsql/facets_response.qtpl:25
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/facets_response.qtpl:25
WriteFacetsResponse(qb422016, m)
//line app/vlselect/logsql/facets_response.qtpl:25
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/facets_response.qtpl:25
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/facets_response.qtpl:25
return qs422016
//line app/vlselect/logsql/facets_response.qtpl:25
}
//line app/vlselect/logsql/facets_response.qtpl:27
func streamfacetsLine(qw422016 *qt422016.Writer, m map[string][]facetEntry, k string) {
//line app/vlselect/logsql/facets_response.qtpl:27
qw422016.N().S(`{"field_name":`)
//line app/vlselect/logsql/facets_response.qtpl:29
qw422016.N().Q(k)
//line app/vlselect/logsql/facets_response.qtpl:29
qw422016.N().S(`,"values":[`)
//line app/vlselect/logsql/facets_response.qtpl:31
fes := m[k]
//line app/vlselect/logsql/facets_response.qtpl:32
if len(fes) > 0 {
//line app/vlselect/logsql/facets_response.qtpl:33
streamfacetLine(qw422016, fes[0])
//line app/vlselect/logsql/facets_response.qtpl:34
for _, fe := range fes[1:] {
//line app/vlselect/logsql/facets_response.qtpl:34
qw422016.N().S(`,`)
//line app/vlselect/logsql/facets_response.qtpl:35
streamfacetLine(qw422016, fe)
//line app/vlselect/logsql/facets_response.qtpl:36
}
//line app/vlselect/logsql/facets_response.qtpl:37
}
//line app/vlselect/logsql/facets_response.qtpl:37
qw422016.N().S(`]}`)
//line app/vlselect/logsql/facets_response.qtpl:40
}
//line app/vlselect/logsql/facets_response.qtpl:40
func writefacetsLine(qq422016 qtio422016.Writer, m map[string][]facetEntry, k string) {
//line app/vlselect/logsql/facets_response.qtpl:40
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/facets_response.qtpl:40
streamfacetsLine(qw422016, m, k)
//line app/vlselect/logsql/facets_response.qtpl:40
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/facets_response.qtpl:40
}
//line app/vlselect/logsql/facets_response.qtpl:40
func facetsLine(m map[string][]facetEntry, k string) string {
//line app/vlselect/logsql/facets_response.qtpl:40
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/facets_response.qtpl:40
writefacetsLine(qb422016, m, k)
//line app/vlselect/logsql/facets_response.qtpl:40
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/facets_response.qtpl:40
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/facets_response.qtpl:40
return qs422016
//line app/vlselect/logsql/facets_response.qtpl:40
}
//line app/vlselect/logsql/facets_response.qtpl:42
func streamfacetLine(qw422016 *qt422016.Writer, fe facetEntry) {
//line app/vlselect/logsql/facets_response.qtpl:42
qw422016.N().S(`{"field_value":`)
//line app/vlselect/logsql/facets_response.qtpl:44
qw422016.N().Q(fe.value)
//line app/vlselect/logsql/facets_response.qtpl:44
qw422016.N().S(`,"hits":`)
//line app/vlselect/logsql/facets_response.qtpl:45
qw422016.N().S(fe.hits)
//line app/vlselect/logsql/facets_response.qtpl:45
qw422016.N().S(`}`)
//line app/vlselect/logsql/facets_response.qtpl:47
}
//line app/vlselect/logsql/facets_response.qtpl:47
func writefacetLine(qq422016 qtio422016.Writer, fe facetEntry) {
//line app/vlselect/logsql/facets_response.qtpl:47
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/facets_response.qtpl:47
streamfacetLine(qw422016, fe)
//line app/vlselect/logsql/facets_response.qtpl:47
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/facets_response.qtpl:47
}
//line app/vlselect/logsql/facets_response.qtpl:47
func facetLine(fe facetEntry) string {
//line app/vlselect/logsql/facets_response.qtpl:47
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/facets_response.qtpl:47
writefacetLine(qb422016, fe)
//line app/vlselect/logsql/facets_response.qtpl:47
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/facets_response.qtpl:47
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/facets_response.qtpl:47
return qs422016
//line app/vlselect/logsql/facets_response.qtpl:47
}


@@ -1,70 +0,0 @@
{% import (
"slices"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
) %}
{% stripspace %}
// FieldsForHits formats labels for /select/logsql/hits response
{% func FieldsForHits(columns []logstorage.BlockColumn, rowIdx int) %}
{
{% if len(columns) > 0 %}
{%q= columns[0].Name %}:{%q= columns[0].Values[rowIdx] %}
{% for _, c := range columns[1:] %}
,{%q= c.Name %}:{%q= c.Values[rowIdx] %}
{% endfor %}
{% endif %}
}
{% endfunc %}
{% func HitsSeries(m map[string]*hitsSeries) %}
{
{% code
sortedKeys := make([]string, 0, len(m))
for k := range m {
sortedKeys = append(sortedKeys, k)
}
slices.Sort(sortedKeys)
%}
"hits":[
{% if len(sortedKeys) > 0 %}
{%= hitsSeriesLine(m, sortedKeys[0]) %}
{% for _, k := range sortedKeys[1:] %}
,{%= hitsSeriesLine(m, k) %}
{% endfor %}
{% endif %}
]
}
{% endfunc %}
{% func hitsSeriesLine(m map[string]*hitsSeries, k string) %}
{
{% code
hs := m[k]
hs.sort()
timestamps := hs.timestamps
hits := hs.hits
%}
"fields":{%s= k %},
"timestamps":[
{% if len(timestamps) > 0 %}
{%q= timestamps[0] %}
{% for _, ts := range timestamps[1:] %}
,{%q= ts %}
{% endfor %}
{% endif %}
],
"values":[
{% if len(hits) > 0 %}
{%dul= hits[0] %}
{% for _, v := range hits[1:] %}
,{%dul= v %}
{% endfor %}
{% endif %}
],
"total":{%dul= hs.hitsTotal %}
}
{% endfunc %}
{% endstripspace %}


@@ -1,223 +0,0 @@
// Code generated by qtc from "hits_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vlselect/logsql/hits_response.qtpl:1
package logsql
//line app/vlselect/logsql/hits_response.qtpl:1
import (
"slices"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
// FieldsForHits formats labels for /select/logsql/hits response
//line app/vlselect/logsql/hits_response.qtpl:10
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vlselect/logsql/hits_response.qtpl:10
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vlselect/logsql/hits_response.qtpl:10
func StreamFieldsForHits(qw422016 *qt422016.Writer, columns []logstorage.BlockColumn, rowIdx int) {
//line app/vlselect/logsql/hits_response.qtpl:10
qw422016.N().S(`{`)
//line app/vlselect/logsql/hits_response.qtpl:12
if len(columns) > 0 {
//line app/vlselect/logsql/hits_response.qtpl:13
qw422016.N().Q(columns[0].Name)
//line app/vlselect/logsql/hits_response.qtpl:13
qw422016.N().S(`:`)
//line app/vlselect/logsql/hits_response.qtpl:13
qw422016.N().Q(columns[0].Values[rowIdx])
//line app/vlselect/logsql/hits_response.qtpl:14
for _, c := range columns[1:] {
//line app/vlselect/logsql/hits_response.qtpl:14
qw422016.N().S(`,`)
//line app/vlselect/logsql/hits_response.qtpl:15
qw422016.N().Q(c.Name)
//line app/vlselect/logsql/hits_response.qtpl:15
qw422016.N().S(`:`)
//line app/vlselect/logsql/hits_response.qtpl:15
qw422016.N().Q(c.Values[rowIdx])
//line app/vlselect/logsql/hits_response.qtpl:16
}
//line app/vlselect/logsql/hits_response.qtpl:17
}
//line app/vlselect/logsql/hits_response.qtpl:17
qw422016.N().S(`}`)
//line app/vlselect/logsql/hits_response.qtpl:19
}
//line app/vlselect/logsql/hits_response.qtpl:19
func WriteFieldsForHits(qq422016 qtio422016.Writer, columns []logstorage.BlockColumn, rowIdx int) {
//line app/vlselect/logsql/hits_response.qtpl:19
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/hits_response.qtpl:19
StreamFieldsForHits(qw422016, columns, rowIdx)
//line app/vlselect/logsql/hits_response.qtpl:19
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/hits_response.qtpl:19
}
//line app/vlselect/logsql/hits_response.qtpl:19
func FieldsForHits(columns []logstorage.BlockColumn, rowIdx int) string {
//line app/vlselect/logsql/hits_response.qtpl:19
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/hits_response.qtpl:19
WriteFieldsForHits(qb422016, columns, rowIdx)
//line app/vlselect/logsql/hits_response.qtpl:19
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/hits_response.qtpl:19
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/hits_response.qtpl:19
return qs422016
//line app/vlselect/logsql/hits_response.qtpl:19
}
//line app/vlselect/logsql/hits_response.qtpl:21
func StreamHitsSeries(qw422016 *qt422016.Writer, m map[string]*hitsSeries) {
//line app/vlselect/logsql/hits_response.qtpl:21
qw422016.N().S(`{`)
//line app/vlselect/logsql/hits_response.qtpl:24
sortedKeys := make([]string, 0, len(m))
for k := range m {
sortedKeys = append(sortedKeys, k)
}
slices.Sort(sortedKeys)
//line app/vlselect/logsql/hits_response.qtpl:29
qw422016.N().S(`"hits":[`)
//line app/vlselect/logsql/hits_response.qtpl:31
if len(sortedKeys) > 0 {
//line app/vlselect/logsql/hits_response.qtpl:32
streamhitsSeriesLine(qw422016, m, sortedKeys[0])
//line app/vlselect/logsql/hits_response.qtpl:33
for _, k := range sortedKeys[1:] {
//line app/vlselect/logsql/hits_response.qtpl:33
qw422016.N().S(`,`)
//line app/vlselect/logsql/hits_response.qtpl:34
streamhitsSeriesLine(qw422016, m, k)
//line app/vlselect/logsql/hits_response.qtpl:35
}
//line app/vlselect/logsql/hits_response.qtpl:36
}
//line app/vlselect/logsql/hits_response.qtpl:36
qw422016.N().S(`]}`)
//line app/vlselect/logsql/hits_response.qtpl:39
}
//line app/vlselect/logsql/hits_response.qtpl:39
func WriteHitsSeries(qq422016 qtio422016.Writer, m map[string]*hitsSeries) {
//line app/vlselect/logsql/hits_response.qtpl:39
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/hits_response.qtpl:39
StreamHitsSeries(qw422016, m)
//line app/vlselect/logsql/hits_response.qtpl:39
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/hits_response.qtpl:39
}
//line app/vlselect/logsql/hits_response.qtpl:39
func HitsSeries(m map[string]*hitsSeries) string {
//line app/vlselect/logsql/hits_response.qtpl:39
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/hits_response.qtpl:39
WriteHitsSeries(qb422016, m)
//line app/vlselect/logsql/hits_response.qtpl:39
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/hits_response.qtpl:39
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/hits_response.qtpl:39
return qs422016
//line app/vlselect/logsql/hits_response.qtpl:39
}
//line app/vlselect/logsql/hits_response.qtpl:41
func streamhitsSeriesLine(qw422016 *qt422016.Writer, m map[string]*hitsSeries, k string) {
//line app/vlselect/logsql/hits_response.qtpl:41
qw422016.N().S(`{`)
//line app/vlselect/logsql/hits_response.qtpl:44
hs := m[k]
hs.sort()
timestamps := hs.timestamps
hits := hs.hits
//line app/vlselect/logsql/hits_response.qtpl:48
qw422016.N().S(`"fields":`)
//line app/vlselect/logsql/hits_response.qtpl:49
qw422016.N().S(k)
//line app/vlselect/logsql/hits_response.qtpl:49
qw422016.N().S(`,"timestamps":[`)
//line app/vlselect/logsql/hits_response.qtpl:51
if len(timestamps) > 0 {
//line app/vlselect/logsql/hits_response.qtpl:52
qw422016.N().Q(timestamps[0])
//line app/vlselect/logsql/hits_response.qtpl:53
for _, ts := range timestamps[1:] {
//line app/vlselect/logsql/hits_response.qtpl:53
qw422016.N().S(`,`)
//line app/vlselect/logsql/hits_response.qtpl:54
qw422016.N().Q(ts)
//line app/vlselect/logsql/hits_response.qtpl:55
}
//line app/vlselect/logsql/hits_response.qtpl:56
}
//line app/vlselect/logsql/hits_response.qtpl:56
qw422016.N().S(`],"values":[`)
//line app/vlselect/logsql/hits_response.qtpl:59
if len(hits) > 0 {
//line app/vlselect/logsql/hits_response.qtpl:60
qw422016.N().DUL(hits[0])
//line app/vlselect/logsql/hits_response.qtpl:61
for _, v := range hits[1:] {
//line app/vlselect/logsql/hits_response.qtpl:61
qw422016.N().S(`,`)
//line app/vlselect/logsql/hits_response.qtpl:62
qw422016.N().DUL(v)
//line app/vlselect/logsql/hits_response.qtpl:63
}
//line app/vlselect/logsql/hits_response.qtpl:64
}
//line app/vlselect/logsql/hits_response.qtpl:64
qw422016.N().S(`],"total":`)
//line app/vlselect/logsql/hits_response.qtpl:66
qw422016.N().DUL(hs.hitsTotal)
//line app/vlselect/logsql/hits_response.qtpl:66
qw422016.N().S(`}`)
//line app/vlselect/logsql/hits_response.qtpl:68
}
//line app/vlselect/logsql/hits_response.qtpl:68
func writehitsSeriesLine(qq422016 qtio422016.Writer, m map[string]*hitsSeries, k string) {
//line app/vlselect/logsql/hits_response.qtpl:68
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/hits_response.qtpl:68
streamhitsSeriesLine(qw422016, m, k)
//line app/vlselect/logsql/hits_response.qtpl:68
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/hits_response.qtpl:68
}
//line app/vlselect/logsql/hits_response.qtpl:68
func hitsSeriesLine(m map[string]*hitsSeries, k string) string {
//line app/vlselect/logsql/hits_response.qtpl:68
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/hits_response.qtpl:68
writehitsSeriesLine(qb422016, m, k)
//line app/vlselect/logsql/hits_response.qtpl:68
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/hits_response.qtpl:68
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/hits_response.qtpl:68
return qs422016
//line app/vlselect/logsql/hits_response.qtpl:68
}

File diff suppressed because it is too large


@@ -1,32 +0,0 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
) %}
{% stripspace %}
// ValuesWithHitsJSON generates JSON from the given values.
{% func ValuesWithHitsJSON(values []logstorage.ValueWithHits) %}
{
"values":{%= valuesWithHitsJSONArray(values) %}
}
{% endfunc %}
{% func valuesWithHitsJSONArray(values []logstorage.ValueWithHits) %}
[
{% if len(values) > 0 %}
{%= valueWithHitsJSON(values[0]) %}
{% for _, v := range values[1:] %}
,{%= valueWithHitsJSON(v) %}
{% endfor %}
{% endif %}
]
{% endfunc %}
{% func valueWithHitsJSON(v logstorage.ValueWithHits) %}
{
"value":{%q= v.Value %},
"hits":{%dul= v.Hits %}
}
{% endfunc %}
{% endstripspace %}


@@ -1,152 +0,0 @@
// Code generated by qtc from "logsql.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vlselect/logsql/logsql.qtpl:1
package logsql
//line app/vlselect/logsql/logsql.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
// ValuesWithHitsJSON generates JSON from the given values.
//line app/vlselect/logsql/logsql.qtpl:8
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vlselect/logsql/logsql.qtpl:8
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vlselect/logsql/logsql.qtpl:8
func StreamValuesWithHitsJSON(qw422016 *qt422016.Writer, values []logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:8
qw422016.N().S(`{"values":`)
//line app/vlselect/logsql/logsql.qtpl:10
streamvaluesWithHitsJSONArray(qw422016, values)
//line app/vlselect/logsql/logsql.qtpl:10
qw422016.N().S(`}`)
//line app/vlselect/logsql/logsql.qtpl:12
}
//line app/vlselect/logsql/logsql.qtpl:12
func WriteValuesWithHitsJSON(qq422016 qtio422016.Writer, values []logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:12
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/logsql.qtpl:12
StreamValuesWithHitsJSON(qw422016, values)
//line app/vlselect/logsql/logsql.qtpl:12
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/logsql.qtpl:12
}
//line app/vlselect/logsql/logsql.qtpl:12
func ValuesWithHitsJSON(values []logstorage.ValueWithHits) string {
//line app/vlselect/logsql/logsql.qtpl:12
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/logsql.qtpl:12
WriteValuesWithHitsJSON(qb422016, values)
//line app/vlselect/logsql/logsql.qtpl:12
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/logsql.qtpl:12
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/logsql.qtpl:12
return qs422016
//line app/vlselect/logsql/logsql.qtpl:12
}
//line app/vlselect/logsql/logsql.qtpl:14
func streamvaluesWithHitsJSONArray(qw422016 *qt422016.Writer, values []logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:14
qw422016.N().S(`[`)
//line app/vlselect/logsql/logsql.qtpl:16
if len(values) > 0 {
//line app/vlselect/logsql/logsql.qtpl:17
streamvalueWithHitsJSON(qw422016, values[0])
//line app/vlselect/logsql/logsql.qtpl:18
for _, v := range values[1:] {
//line app/vlselect/logsql/logsql.qtpl:18
qw422016.N().S(`,`)
//line app/vlselect/logsql/logsql.qtpl:19
streamvalueWithHitsJSON(qw422016, v)
//line app/vlselect/logsql/logsql.qtpl:20
}
//line app/vlselect/logsql/logsql.qtpl:21
}
//line app/vlselect/logsql/logsql.qtpl:21
qw422016.N().S(`]`)
//line app/vlselect/logsql/logsql.qtpl:23
}
//line app/vlselect/logsql/logsql.qtpl:23
func writevaluesWithHitsJSONArray(qq422016 qtio422016.Writer, values []logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:23
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/logsql.qtpl:23
streamvaluesWithHitsJSONArray(qw422016, values)
//line app/vlselect/logsql/logsql.qtpl:23
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/logsql.qtpl:23
}
//line app/vlselect/logsql/logsql.qtpl:23
func valuesWithHitsJSONArray(values []logstorage.ValueWithHits) string {
//line app/vlselect/logsql/logsql.qtpl:23
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/logsql.qtpl:23
writevaluesWithHitsJSONArray(qb422016, values)
//line app/vlselect/logsql/logsql.qtpl:23
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/logsql.qtpl:23
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/logsql.qtpl:23
return qs422016
//line app/vlselect/logsql/logsql.qtpl:23
}
//line app/vlselect/logsql/logsql.qtpl:25
func streamvalueWithHitsJSON(qw422016 *qt422016.Writer, v logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:25
qw422016.N().S(`{"value":`)
//line app/vlselect/logsql/logsql.qtpl:27
qw422016.N().Q(v.Value)
//line app/vlselect/logsql/logsql.qtpl:27
qw422016.N().S(`,"hits":`)
//line app/vlselect/logsql/logsql.qtpl:28
qw422016.N().DUL(v.Hits)
//line app/vlselect/logsql/logsql.qtpl:28
qw422016.N().S(`}`)
//line app/vlselect/logsql/logsql.qtpl:30
}
//line app/vlselect/logsql/logsql.qtpl:30
func writevalueWithHitsJSON(qq422016 qtio422016.Writer, v logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:30
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/logsql.qtpl:30
streamvalueWithHitsJSON(qw422016, v)
//line app/vlselect/logsql/logsql.qtpl:30
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/logsql.qtpl:30
}
//line app/vlselect/logsql/logsql.qtpl:30
func valueWithHitsJSON(v logstorage.ValueWithHits) string {
//line app/vlselect/logsql/logsql.qtpl:30
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/logsql.qtpl:30
writevalueWithHitsJSON(qb422016, v)
//line app/vlselect/logsql/logsql.qtpl:30
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/logsql.qtpl:30
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/logsql.qtpl:30
return qs422016
//line app/vlselect/logsql/logsql.qtpl:30
}
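
The generated wrappers above all follow the same acquire/write/release sequence around a pooled byte buffer, so hot JSON-rendering paths avoid per-call allocations. A minimal stdlib-only sketch of that pattern (using `sync.Pool` as a stand-in for quicktemplate's buffer pool; the `valueWithHits` type and function names here are hypothetical local copies, not the library's API):

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
	"sync"
)

// bufPool mirrors quicktemplate's AcquireByteBuffer/ReleaseByteBuffer:
// buffers are reused across calls instead of being reallocated.
var bufPool = sync.Pool{New: func() any { return new(bytes.Buffer) }}

type valueWithHits struct {
	Value string
	Hits  uint64
}

// streamValuesJSON writes the same shape the generated template emits:
// {"values":[{"value":"...","hits":N},...]}
func streamValuesJSON(b *bytes.Buffer, values []valueWithHits) {
	b.WriteString(`{"values":[`)
	for i, v := range values {
		if i > 0 {
			b.WriteByte(',')
		}
		b.WriteString(`{"value":`)
		b.WriteString(strconv.Quote(v.Value))
		b.WriteString(`,"hits":`)
		b.WriteString(strconv.FormatUint(v.Hits, 10))
		b.WriteByte('}')
	}
	b.WriteString(`]}`)
}

// valuesJSON is the string-returning wrapper, following the same
// acquire -> write -> copy-to-string -> release sequence as the
// generated ValuesWithHitsJSON above.
func valuesJSON(values []valueWithHits) string {
	b := bufPool.Get().(*bytes.Buffer)
	streamValuesJSON(b, values)
	s := b.String()
	b.Reset()
	bufPool.Put(b)
	return s
}

func main() {
	fmt.Println(valuesJSON([]valueWithHits{{"error", 42}, {"warn", 7}}))
}
```

The string must be copied out of the buffer before releasing it back to the pool, since the next caller may overwrite the buffer's bytes.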


@@ -1,103 +0,0 @@
package logsql
import (
"testing"
)
func TestParseExtraFilters_Success(t *testing.T) {
f := func(s, resultExpected string) {
t.Helper()
f, err := parseExtraFilters(s)
if err != nil {
t.Fatalf("unexpected error in parseExtraFilters: %s", err)
}
result := f.String()
if result != resultExpected {
t.Fatalf("unexpected result\ngot\n%s\nwant\n%s", result, resultExpected)
}
}
f("", "")
// JSON string
f(`{"foo":"bar"}`, `foo:=bar`)
f(`{"foo":["bar","baz"]}`, `foo:in(bar,baz)`)
f(`{"z":"=b ","c":["d","e,"],"a":[],"_msg":"x"}`, `z:="=b " c:in(d,"e,") =x`)
// LogsQL filter
f(`foobar`, `foobar`)
f(`foo:bar`, `foo:bar`)
f(`foo:(bar or baz) error _time:5m {"foo"=bar,baz="z"}`, `{foo="bar",baz="z"} (foo:bar or foo:baz) error _time:5m`)
}
func TestParseExtraFilters_Failure(t *testing.T) {
f := func(s string) {
t.Helper()
_, err := parseExtraFilters(s)
if err == nil {
t.Fatalf("expecting non-nil error")
}
}
// Invalid JSON
f(`{"foo"}`)
f(`[1,2]`)
f(`{"foo":[1]}`)
// Invalid LogsQL filter
f(`foo:(bar`)
// excess pipe
f(`foo | count()`)
}
func TestParseExtraStreamFilters_Success(t *testing.T) {
f := func(s, resultExpected string) {
t.Helper()
f, err := parseExtraStreamFilters(s)
if err != nil {
t.Fatalf("unexpected error in parseExtraStreamFilters: %s", err)
}
result := f.String()
if result != resultExpected {
t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
}
}
f("", "")
// JSON string
f(`{"foo":"bar"}`, `{foo="bar"}`)
f(`{"foo":["bar","baz"]}`, `{foo=~"bar|baz"}`)
f(`{"z":"b","c":["d","e|\""],"a":[],"_msg":"x"}`, `{z="b",c=~"d|e\\|\"",_msg="x"}`)
// LogsQL filter
f(`foobar`, `foobar`)
f(`foo:bar`, `foo:bar`)
f(`foo:(bar or baz) error _time:5m {"foo"=bar,baz="z"}`, `{foo="bar",baz="z"} (foo:bar or foo:baz) error _time:5m`)
}
func TestParseExtraStreamFilters_Failure(t *testing.T) {
f := func(s string) {
t.Helper()
_, err := parseExtraStreamFilters(s)
if err == nil {
t.Fatalf("expecting non-nil error")
}
}
// Invalid JSON
f(`{"foo"}`)
f(`[1,2]`)
f(`{"foo":[1]}`)
// Invalid LogsQL filter
f(`foo:(bar`)
// excess pipe
f(`foo | count()`)
}


@@ -1,64 +0,0 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
) %}
{% stripspace %}
// JSONRow creates JSON row from the given fields.
{% func JSONRow(columns []logstorage.BlockColumn, rowIdx int) %}
{% code
i := 0
for i < len(columns) && columns[i].Values[rowIdx] == "" {
i++
}
columns = columns[i:]
%}
{% if len(columns) == 0 %}
{% return %}
{% endif %}
{
{% code c := &columns[0] %}
{%q= c.Name %}:{%q= c.Values[rowIdx] %}
{% code columns = columns[1:] %}
{% for colIdx := range columns %}
{% code
c := &columns[colIdx]
v := c.Values[rowIdx]
%}
{% if v == "" %}
{% continue %}
{% endif %}
,{%q= c.Name %}:{%q= c.Values[rowIdx] %}
{% endfor %}
}{% newline %}
{% endfunc %}
// JSONRows prints formatted rows
{% func JSONRows(rows [][]logstorage.Field) %}
{% if len(rows) == 0 %}
{% return %}
{% endif %}
{% for _, fields := range rows %}
{% code fields = logstorage.SkipLeadingFieldsWithoutValues(fields) %}
{% if len(fields) == 0 %}
{% continue %}
{% endif %}
{
{% if len(fields) > 0 %}
{% code
f := fields[0]
fields = fields[1:]
%}
{%q= f.Name %}:{%q= f.Value %}
{% for _, f := range fields %}
{% if f.Value == "" %}
{% continue %}
{% endif %}
,{%q= f.Name %}:{%q= f.Value %}
{% endfor %}
{% endif %}
}{% newline %}
{% endfor %}
{% endfunc %}
{% endstripspace %}


@@ -1,201 +0,0 @@
// Code generated by qtc from "query_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vlselect/logsql/query_response.qtpl:1
package logsql
//line app/vlselect/logsql/query_response.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
// JSONRow creates JSON row from the given fields.
//line app/vlselect/logsql/query_response.qtpl:8
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vlselect/logsql/query_response.qtpl:8
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vlselect/logsql/query_response.qtpl:8
func StreamJSONRow(qw422016 *qt422016.Writer, columns []logstorage.BlockColumn, rowIdx int) {
//line app/vlselect/logsql/query_response.qtpl:10
i := 0
for i < len(columns) && columns[i].Values[rowIdx] == "" {
i++
}
columns = columns[i:]
//line app/vlselect/logsql/query_response.qtpl:16
if len(columns) == 0 {
//line app/vlselect/logsql/query_response.qtpl:17
return
//line app/vlselect/logsql/query_response.qtpl:18
}
//line app/vlselect/logsql/query_response.qtpl:18
qw422016.N().S(`{`)
//line app/vlselect/logsql/query_response.qtpl:20
c := &columns[0]
//line app/vlselect/logsql/query_response.qtpl:21
qw422016.N().Q(c.Name)
//line app/vlselect/logsql/query_response.qtpl:21
qw422016.N().S(`:`)
//line app/vlselect/logsql/query_response.qtpl:21
qw422016.N().Q(c.Values[rowIdx])
//line app/vlselect/logsql/query_response.qtpl:22
columns = columns[1:]
//line app/vlselect/logsql/query_response.qtpl:23
for colIdx := range columns {
//line app/vlselect/logsql/query_response.qtpl:25
c := &columns[colIdx]
v := c.Values[rowIdx]
//line app/vlselect/logsql/query_response.qtpl:28
if v == "" {
//line app/vlselect/logsql/query_response.qtpl:29
continue
//line app/vlselect/logsql/query_response.qtpl:30
}
//line app/vlselect/logsql/query_response.qtpl:30
qw422016.N().S(`,`)
//line app/vlselect/logsql/query_response.qtpl:31
qw422016.N().Q(c.Name)
//line app/vlselect/logsql/query_response.qtpl:31
qw422016.N().S(`:`)
//line app/vlselect/logsql/query_response.qtpl:31
qw422016.N().Q(c.Values[rowIdx])
//line app/vlselect/logsql/query_response.qtpl:32
}
//line app/vlselect/logsql/query_response.qtpl:32
qw422016.N().S(`}`)
//line app/vlselect/logsql/query_response.qtpl:33
qw422016.N().S(`
`)
//line app/vlselect/logsql/query_response.qtpl:34
}
//line app/vlselect/logsql/query_response.qtpl:34
func WriteJSONRow(qq422016 qtio422016.Writer, columns []logstorage.BlockColumn, rowIdx int) {
//line app/vlselect/logsql/query_response.qtpl:34
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/query_response.qtpl:34
StreamJSONRow(qw422016, columns, rowIdx)
//line app/vlselect/logsql/query_response.qtpl:34
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/query_response.qtpl:34
}
//line app/vlselect/logsql/query_response.qtpl:34
func JSONRow(columns []logstorage.BlockColumn, rowIdx int) string {
//line app/vlselect/logsql/query_response.qtpl:34
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/query_response.qtpl:34
WriteJSONRow(qb422016, columns, rowIdx)
//line app/vlselect/logsql/query_response.qtpl:34
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/query_response.qtpl:34
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/query_response.qtpl:34
return qs422016
//line app/vlselect/logsql/query_response.qtpl:34
}
// JSONRows prints formatted rows
//line app/vlselect/logsql/query_response.qtpl:37
func StreamJSONRows(qw422016 *qt422016.Writer, rows [][]logstorage.Field) {
//line app/vlselect/logsql/query_response.qtpl:38
if len(rows) == 0 {
//line app/vlselect/logsql/query_response.qtpl:39
return
//line app/vlselect/logsql/query_response.qtpl:40
}
//line app/vlselect/logsql/query_response.qtpl:41
for _, fields := range rows {
//line app/vlselect/logsql/query_response.qtpl:42
fields = logstorage.SkipLeadingFieldsWithoutValues(fields)
//line app/vlselect/logsql/query_response.qtpl:43
if len(fields) == 0 {
//line app/vlselect/logsql/query_response.qtpl:44
continue
//line app/vlselect/logsql/query_response.qtpl:45
}
//line app/vlselect/logsql/query_response.qtpl:45
qw422016.N().S(`{`)
//line app/vlselect/logsql/query_response.qtpl:47
if len(fields) > 0 {
//line app/vlselect/logsql/query_response.qtpl:49
f := fields[0]
fields = fields[1:]
//line app/vlselect/logsql/query_response.qtpl:52
qw422016.N().Q(f.Name)
//line app/vlselect/logsql/query_response.qtpl:52
qw422016.N().S(`:`)
//line app/vlselect/logsql/query_response.qtpl:52
qw422016.N().Q(f.Value)
//line app/vlselect/logsql/query_response.qtpl:53
for _, f := range fields {
//line app/vlselect/logsql/query_response.qtpl:54
if f.Value == "" {
//line app/vlselect/logsql/query_response.qtpl:55
continue
//line app/vlselect/logsql/query_response.qtpl:56
}
//line app/vlselect/logsql/query_response.qtpl:56
qw422016.N().S(`,`)
//line app/vlselect/logsql/query_response.qtpl:57
qw422016.N().Q(f.Name)
//line app/vlselect/logsql/query_response.qtpl:57
qw422016.N().S(`:`)
//line app/vlselect/logsql/query_response.qtpl:57
qw422016.N().Q(f.Value)
//line app/vlselect/logsql/query_response.qtpl:58
}
//line app/vlselect/logsql/query_response.qtpl:59
}
//line app/vlselect/logsql/query_response.qtpl:59
qw422016.N().S(`}`)
//line app/vlselect/logsql/query_response.qtpl:60
qw422016.N().S(`
`)
//line app/vlselect/logsql/query_response.qtpl:61
}
//line app/vlselect/logsql/query_response.qtpl:62
}
//line app/vlselect/logsql/query_response.qtpl:62
func WriteJSONRows(qq422016 qtio422016.Writer, rows [][]logstorage.Field) {
//line app/vlselect/logsql/query_response.qtpl:62
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/query_response.qtpl:62
StreamJSONRows(qw422016, rows)
//line app/vlselect/logsql/query_response.qtpl:62
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/query_response.qtpl:62
}
//line app/vlselect/logsql/query_response.qtpl:62
func JSONRows(rows [][]logstorage.Field) string {
//line app/vlselect/logsql/query_response.qtpl:62
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/query_response.qtpl:62
WriteJSONRows(qb422016, rows)
//line app/vlselect/logsql/query_response.qtpl:62
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/query_response.qtpl:62
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/query_response.qtpl:62
return qs422016
//line app/vlselect/logsql/query_response.qtpl:62
}


@@ -1,52 +0,0 @@
{% stripspace %}
// StatsQueryRangeResponse generates response for /select/logsql/stats_query_range
{% func StatsQueryRangeResponse(rows []*statsSeries) %}
{
"status":"success",
"data":{
"resultType":"matrix",
"result":[
{% if len(rows) > 0 %}
{%= formatStatsSeries(rows[0]) %}
{% code rows = rows[1:] %}
{% for i := range rows %}
,{%= formatStatsSeries(rows[i]) %}
{% endfor %}
{% endif %}
]
}
}
{% endfunc %}
{% func formatStatsSeries(ss *statsSeries) %}
{
"metric":{
"__name__":{%q= ss.Name %}
{% if len(ss.Labels) > 0 %}
{% for _, label := range ss.Labels %}
,{%q= label.Name %}:{%q= label.Value %}
{% endfor %}
{% endif %}
},
"values":[
{% code points := ss.Points %}
{% if len(points) > 0 %}
{%= formatStatsPoint(&points[0]) %}
{% code points = points[1:] %}
{% for i := range points %}
,{%= formatStatsPoint(&points[i]) %}
{% endfor %}
{% endif %}
]
}
{% endfunc %}
{% func formatStatsPoint(p *statsPoint) %}
[
{%f= float64(p.Timestamp)/1e9 %},
{%q= p.Value %}
]
{% endfunc %}
{% endstripspace %}
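
`formatStatsPoint` above divides the nanosecond timestamp by `1e9` to produce the fractional-second timestamp that Prometheus-style `values` pairs expect. A stdlib-only sketch of that conversion (the `statsPoint` type and `formatPoint` name are hypothetical local stand-ins; exact float formatting may differ from quicktemplate's `N().F`):

```go
package main

import (
	"fmt"
	"strconv"
)

// statsPoint mirrors the two fields the template reads.
type statsPoint struct {
	Timestamp int64  // nanoseconds since epoch
	Value     string // stringified sample value
}

// formatPoint renders one [ts,"value"] pair the way the template does:
// the nanosecond timestamp becomes fractional seconds, and the value
// is emitted as a quoted JSON string.
func formatPoint(p statsPoint) string {
	return "[" + strconv.FormatFloat(float64(p.Timestamp)/1e9, 'g', -1, 64) +
		"," + strconv.Quote(p.Value) + "]"
}

func main() {
	fmt.Println(formatPoint(statsPoint{Timestamp: 1_000_000_000, Value: "42"}))
}
```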


@@ -1,188 +0,0 @@
// Code generated by qtc from "stats_query_range_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// StatsQueryRangeResponse generates response for /select/logsql/stats_query_range
//line app/vlselect/logsql/stats_query_range_response.qtpl:4
package logsql
//line app/vlselect/logsql/stats_query_range_response.qtpl:4
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vlselect/logsql/stats_query_range_response.qtpl:4
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vlselect/logsql/stats_query_range_response.qtpl:4
func StreamStatsQueryRangeResponse(qw422016 *qt422016.Writer, rows []*statsSeries) {
//line app/vlselect/logsql/stats_query_range_response.qtpl:4
qw422016.N().S(`{"status":"success","data":{"resultType":"matrix","result":[`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:10
if len(rows) > 0 {
//line app/vlselect/logsql/stats_query_range_response.qtpl:11
streamformatStatsSeries(qw422016, rows[0])
//line app/vlselect/logsql/stats_query_range_response.qtpl:12
rows = rows[1:]
//line app/vlselect/logsql/stats_query_range_response.qtpl:13
for i := range rows {
//line app/vlselect/logsql/stats_query_range_response.qtpl:13
qw422016.N().S(`,`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:14
streamformatStatsSeries(qw422016, rows[i])
//line app/vlselect/logsql/stats_query_range_response.qtpl:15
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:16
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:16
qw422016.N().S(`]}}`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
func WriteStatsQueryRangeResponse(qq422016 qtio422016.Writer, rows []*statsSeries) {
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
StreamStatsQueryRangeResponse(qw422016, rows)
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
func StatsQueryRangeResponse(rows []*statsSeries) string {
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
WriteStatsQueryRangeResponse(qb422016, rows)
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
return qs422016
//line app/vlselect/logsql/stats_query_range_response.qtpl:20
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:22
func streamformatStatsSeries(qw422016 *qt422016.Writer, ss *statsSeries) {
//line app/vlselect/logsql/stats_query_range_response.qtpl:22
qw422016.N().S(`{"metric":{"__name__":`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:25
qw422016.N().Q(ss.Name)
//line app/vlselect/logsql/stats_query_range_response.qtpl:26
if len(ss.Labels) > 0 {
//line app/vlselect/logsql/stats_query_range_response.qtpl:27
for _, label := range ss.Labels {
//line app/vlselect/logsql/stats_query_range_response.qtpl:27
qw422016.N().S(`,`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:28
qw422016.N().Q(label.Name)
//line app/vlselect/logsql/stats_query_range_response.qtpl:28
qw422016.N().S(`:`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:28
qw422016.N().Q(label.Value)
//line app/vlselect/logsql/stats_query_range_response.qtpl:29
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:30
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:30
qw422016.N().S(`},"values":[`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:33
points := ss.Points
//line app/vlselect/logsql/stats_query_range_response.qtpl:34
if len(points) > 0 {
//line app/vlselect/logsql/stats_query_range_response.qtpl:35
streamformatStatsPoint(qw422016, &points[0])
//line app/vlselect/logsql/stats_query_range_response.qtpl:36
points = points[1:]
//line app/vlselect/logsql/stats_query_range_response.qtpl:37
for i := range points {
//line app/vlselect/logsql/stats_query_range_response.qtpl:37
qw422016.N().S(`,`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:38
streamformatStatsPoint(qw422016, &points[i])
//line app/vlselect/logsql/stats_query_range_response.qtpl:39
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:40
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:40
qw422016.N().S(`]}`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
func writeformatStatsSeries(qq422016 qtio422016.Writer, ss *statsSeries) {
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
streamformatStatsSeries(qw422016, ss)
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
func formatStatsSeries(ss *statsSeries) string {
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
writeformatStatsSeries(qb422016, ss)
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
return qs422016
//line app/vlselect/logsql/stats_query_range_response.qtpl:43
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:45
func streamformatStatsPoint(qw422016 *qt422016.Writer, p *statsPoint) {
//line app/vlselect/logsql/stats_query_range_response.qtpl:45
qw422016.N().S(`[`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:47
qw422016.N().F(float64(p.Timestamp) / 1e9)
//line app/vlselect/logsql/stats_query_range_response.qtpl:47
qw422016.N().S(`,`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:48
qw422016.N().Q(p.Value)
//line app/vlselect/logsql/stats_query_range_response.qtpl:48
qw422016.N().S(`]`)
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
func writeformatStatsPoint(qq422016 qtio422016.Writer, p *statsPoint) {
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
streamformatStatsPoint(qw422016, p)
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
}
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
func formatStatsPoint(p *statsPoint) string {
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
writeformatStatsPoint(qb422016, p)
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
return qs422016
//line app/vlselect/logsql/stats_query_range_response.qtpl:50
}


@@ -1,36 +0,0 @@
{% stripspace %}
// StatsQueryResponse generates response for /select/logsql/stats_query
{% func StatsQueryResponse(rows []statsRow) %}
{
"status":"success",
"data":{
"resultType":"vector",
"result":[
{% if len(rows) > 0 %}
{%= formatStatsRow(&rows[0]) %}
{% code rows = rows[1:] %}
{% for i := range rows %}
,{%= formatStatsRow(&rows[i]) %}
{% endfor %}
{% endif %}
]
}
}
{% endfunc %}
{% func formatStatsRow(r *statsRow) %}
{
"metric":{
"__name__":{%q= r.Name %}
{% if len(r.Labels) > 0 %}
{% for _, label := range r.Labels %}
,{%q= label.Name %}:{%q= label.Value %}
{% endfor %}
{% endif %}
},
"value":[{%f= float64(r.Timestamp)/1e9 %},{%q= r.Value %}]
}
{% endfunc %}
{% endstripspace %}


@@ -1,133 +0,0 @@
// Code generated by qtc from "stats_query_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// StatsQueryResponse generates response for /select/logsql/stats_query
//line app/vlselect/logsql/stats_query_response.qtpl:4
package logsql
//line app/vlselect/logsql/stats_query_response.qtpl:4
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vlselect/logsql/stats_query_response.qtpl:4
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vlselect/logsql/stats_query_response.qtpl:4
func StreamStatsQueryResponse(qw422016 *qt422016.Writer, rows []statsRow) {
//line app/vlselect/logsql/stats_query_response.qtpl:4
qw422016.N().S(`{"status":"success","data":{"resultType":"vector","result":[`)
//line app/vlselect/logsql/stats_query_response.qtpl:10
if len(rows) > 0 {
//line app/vlselect/logsql/stats_query_response.qtpl:11
streamformatStatsRow(qw422016, &rows[0])
//line app/vlselect/logsql/stats_query_response.qtpl:12
rows = rows[1:]
//line app/vlselect/logsql/stats_query_response.qtpl:13
for i := range rows {
//line app/vlselect/logsql/stats_query_response.qtpl:13
qw422016.N().S(`,`)
//line app/vlselect/logsql/stats_query_response.qtpl:14
streamformatStatsRow(qw422016, &rows[i])
//line app/vlselect/logsql/stats_query_response.qtpl:15
}
//line app/vlselect/logsql/stats_query_response.qtpl:16
}
//line app/vlselect/logsql/stats_query_response.qtpl:16
qw422016.N().S(`]}}`)
//line app/vlselect/logsql/stats_query_response.qtpl:20
}
//line app/vlselect/logsql/stats_query_response.qtpl:20
func WriteStatsQueryResponse(qq422016 qtio422016.Writer, rows []statsRow) {
//line app/vlselect/logsql/stats_query_response.qtpl:20
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/stats_query_response.qtpl:20
StreamStatsQueryResponse(qw422016, rows)
//line app/vlselect/logsql/stats_query_response.qtpl:20
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/stats_query_response.qtpl:20
}
//line app/vlselect/logsql/stats_query_response.qtpl:20
func StatsQueryResponse(rows []statsRow) string {
//line app/vlselect/logsql/stats_query_response.qtpl:20
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/stats_query_response.qtpl:20
WriteStatsQueryResponse(qb422016, rows)
//line app/vlselect/logsql/stats_query_response.qtpl:20
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/stats_query_response.qtpl:20
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/stats_query_response.qtpl:20
return qs422016
//line app/vlselect/logsql/stats_query_response.qtpl:20
}
//line app/vlselect/logsql/stats_query_response.qtpl:22
func streamformatStatsRow(qw422016 *qt422016.Writer, r *statsRow) {
//line app/vlselect/logsql/stats_query_response.qtpl:22
qw422016.N().S(`{"metric":{"__name__":`)
//line app/vlselect/logsql/stats_query_response.qtpl:25
qw422016.N().Q(r.Name)
//line app/vlselect/logsql/stats_query_response.qtpl:26
if len(r.Labels) > 0 {
//line app/vlselect/logsql/stats_query_response.qtpl:27
for _, label := range r.Labels {
//line app/vlselect/logsql/stats_query_response.qtpl:27
qw422016.N().S(`,`)
//line app/vlselect/logsql/stats_query_response.qtpl:28
qw422016.N().Q(label.Name)
//line app/vlselect/logsql/stats_query_response.qtpl:28
qw422016.N().S(`:`)
//line app/vlselect/logsql/stats_query_response.qtpl:28
qw422016.N().Q(label.Value)
//line app/vlselect/logsql/stats_query_response.qtpl:29
}
//line app/vlselect/logsql/stats_query_response.qtpl:30
}
//line app/vlselect/logsql/stats_query_response.qtpl:30
qw422016.N().S(`},"value":[`)
//line app/vlselect/logsql/stats_query_response.qtpl:32
qw422016.N().F(float64(r.Timestamp) / 1e9)
//line app/vlselect/logsql/stats_query_response.qtpl:32
qw422016.N().S(`,`)
//line app/vlselect/logsql/stats_query_response.qtpl:32
qw422016.N().Q(r.Value)
//line app/vlselect/logsql/stats_query_response.qtpl:32
qw422016.N().S(`]}`)
//line app/vlselect/logsql/stats_query_response.qtpl:34
}
//line app/vlselect/logsql/stats_query_response.qtpl:34
func writeformatStatsRow(qq422016 qtio422016.Writer, r *statsRow) {
//line app/vlselect/logsql/stats_query_response.qtpl:34
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/stats_query_response.qtpl:34
streamformatStatsRow(qw422016, r)
//line app/vlselect/logsql/stats_query_response.qtpl:34
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/stats_query_response.qtpl:34
}
//line app/vlselect/logsql/stats_query_response.qtpl:34
func formatStatsRow(r *statsRow) string {
//line app/vlselect/logsql/stats_query_response.qtpl:34
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/stats_query_response.qtpl:34
writeformatStatsRow(qb422016, r)
//line app/vlselect/logsql/stats_query_response.qtpl:34
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/stats_query_response.qtpl:34
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/stats_query_response.qtpl:34
return qs422016
//line app/vlselect/logsql/stats_query_response.qtpl:34
}


@@ -1,324 +0,0 @@
package vlselect
import (
"context"
"embed"
"flag"
"fmt"
"net/http"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlselect/internalselect"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlselect/logsql"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/metrics"
)
var (
maxConcurrentRequests = flag.Int("search.maxConcurrentRequests", getDefaultMaxConcurrentRequests(), "The maximum number of concurrent search requests. "+
"It shouldn't be high, since a single request can saturate all the CPU cores, while many concurrently executed requests may require high amounts of memory. "+
"See also -search.maxQueueDuration")
maxQueueDuration = flag.Duration("search.maxQueueDuration", 10*time.Second, "The maximum time the search request waits for execution when -search.maxConcurrentRequests "+
"limit is reached; see also -search.maxQueryDuration")
maxQueryDuration = flag.Duration("search.maxQueryDuration", time.Second*30, "The maximum duration for query execution. It can be overridden to a smaller value on a per-query basis via 'timeout' query arg")
disableSelect = flag.Bool("select.disable", false, "Whether to disable /select/* HTTP endpoints")
disableInternal = flag.Bool("internalselect.disable", false, "Whether to disable /internal/select/* HTTP endpoints")
)
func getDefaultMaxConcurrentRequests() int {
n := cgroup.AvailableCPUs()
if n <= 4 {
n *= 2
}
if n > 16 {
// A single request can saturate all the CPU cores, so there is no sense
// in allowing a higher number of concurrent requests - they will just
// contend for unavailable CPU time.
n = 16
}
return n
}
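The default above doubles the CPU count on small hosts and caps the result at 16. A sketch of that clamping, with the CPU count passed in as a parameter for illustration (the real function reads it from cgroup limits):

```go
package main

import "fmt"

// defaultMaxConcurrent mirrors the clamping logic of
// getDefaultMaxConcurrentRequests: small hosts get 2x their CPU count,
// large hosts are capped at 16 concurrent requests.
func defaultMaxConcurrent(cpus int) int {
	n := cpus
	if n <= 4 {
		n *= 2
	}
	if n > 16 {
		n = 16
	}
	return n
}

func main() {
	fmt.Println(defaultMaxConcurrent(2))  // 4
	fmt.Println(defaultMaxConcurrent(8))  // 8
	fmt.Println(defaultMaxConcurrent(32)) // 16
}
```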
// Init initializes vlselect
func Init() {
concurrencyLimitCh = make(chan struct{}, *maxConcurrentRequests)
}
// Stop stops vlselect
func Stop() {
}
var concurrencyLimitCh chan struct{}
var (
concurrencyLimitReached = metrics.NewCounter(`vl_concurrent_select_limit_reached_total`)
concurrencyLimitTimeout = metrics.NewCounter(`vl_concurrent_select_limit_timeout_total`)
_ = metrics.NewGauge(`vl_concurrent_select_capacity`, func() float64 {
return float64(cap(concurrencyLimitCh))
})
_ = metrics.NewGauge(`vl_concurrent_select_current`, func() float64 {
return float64(len(concurrencyLimitCh))
})
)
//go:embed vmui
var vmuiFiles embed.FS
var vmuiFileServer = http.FileServer(http.FS(vmuiFiles))
// RequestHandler handles select requests for VictoriaLogs
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
path := strings.ReplaceAll(r.URL.Path, "//", "/")
if strings.HasPrefix(path, "/select/") {
if *disableSelect {
httpserver.Errorf(w, r, "requests to /select/* are disabled with -select.disable command-line flag")
return true
}
return selectHandler(w, r, path)
}
if strings.HasPrefix(path, "/internal/select/") {
if *disableInternal || *disableSelect {
httpserver.Errorf(w, r, "requests to /internal/select/* are disabled with -internalselect.disable or -select.disable command-line flag")
return true
}
internalselect.RequestHandler(r.Context(), w, r)
return true
}
return false
}
func selectHandler(w http.ResponseWriter, r *http.Request, path string) bool {
ctx := r.Context()
if path == "/select/vmui" {
// VMUI is accessed via an incomplete URL without a trailing `/`. Redirect to the complete URL.
// Use a relative redirect, since the hostname and path prefix may be incorrect if VictoriaMetrics
// is hidden behind vmauth or a similar proxy.
_ = r.ParseForm()
newURL := "vmui/?" + r.Form.Encode()
httpserver.Redirect(w, newURL)
return true
}
if strings.HasPrefix(path, "/select/vmui/") {
if strings.HasPrefix(path, "/select/vmui/static/") {
// Allow clients to cache static contents for a long period of time, since they shouldn't change.
// The path to static contents (such as js and css) must be changed whenever their contents change.
// See https://developer.chrome.com/docs/lighthouse/performance/uses-long-cache-ttl/
w.Header().Set("Cache-Control", "max-age=31536000")
}
r.URL.Path = strings.TrimPrefix(path, "/select")
vmuiFileServer.ServeHTTP(w, r)
return true
}
if path == "/select/logsql/tail" {
logsqlTailRequests.Inc()
// Process the live tailing request without a timeout, since it is OK to run live tailing requests for a very long time.
// Also do not apply the concurrency limit to tail requests, since these limits are intended for non-tail requests.
logsql.ProcessLiveTailRequest(ctx, w, r)
return true
}
// Limit the number of concurrent queries, which can consume large amounts of CPU time.
startTime := time.Now()
d := getMaxQueryDuration(r)
ctxWithTimeout, cancel := context.WithTimeout(ctx, d)
defer cancel()
if !incRequestConcurrency(ctxWithTimeout, w, r) {
return true
}
defer decRequestConcurrency()
ok := processSelectRequest(ctxWithTimeout, w, r, path)
if !ok {
return false
}
logRequestErrorIfNeeded(ctxWithTimeout, w, r, startTime)
return true
}
func logRequestErrorIfNeeded(ctx context.Context, w http.ResponseWriter, r *http.Request, startTime time.Time) {
err := ctx.Err()
switch err {
case nil:
// nothing to do
case context.Canceled:
// do not log canceled requests, since they are expected and legal.
case context.DeadlineExceeded:
err = &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("the request couldn't be executed in %.3f seconds; possible solutions: "+
"to increase -search.maxQueryDuration=%s; to pass bigger value to 'timeout' query arg", time.Since(startTime).Seconds(), maxQueryDuration),
StatusCode: http.StatusServiceUnavailable,
}
httpserver.Errorf(w, r, "%s", err)
default:
httpserver.Errorf(w, r, "unexpected error: %s", err)
}
}
func incRequestConcurrency(ctx context.Context, w http.ResponseWriter, r *http.Request) bool {
startTime := time.Now()
stopCh := ctx.Done()
select {
case concurrencyLimitCh <- struct{}{}:
return true
default:
// Wait for a while before giving up. This should absorb short bursts of requests.
concurrencyLimitReached.Inc()
select {
case concurrencyLimitCh <- struct{}{}:
return true
case <-stopCh:
switch ctx.Err() {
case context.Canceled:
remoteAddr := httpserver.GetQuotedRemoteAddr(r)
requestURI := httpserver.GetRequestURI(r)
logger.Infof("client has canceled the pending request after %.3f seconds: remoteAddr=%s, requestURI: %q",
time.Since(startTime).Seconds(), remoteAddr, requestURI)
case context.DeadlineExceeded:
concurrencyLimitTimeout.Inc()
err := &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("couldn't start executing the request in %.3f seconds, since -search.maxConcurrentRequests=%d concurrent requests "+
"are executed. Possible solutions: to reduce query load; to add more compute resources to the server; "+
"to increase -search.maxQueueDuration=%s; to increase -search.maxQueryDuration=%s; to increase -search.maxConcurrentRequests; "+
"to pass bigger value to 'timeout' query arg",
time.Since(startTime).Seconds(), *maxConcurrentRequests, maxQueueDuration, maxQueryDuration),
StatusCode: http.StatusServiceUnavailable,
}
httpserver.Errorf(w, r, "%s", err)
}
return false
}
}
}
func decRequestConcurrency() {
<-concurrencyLimitCh
}
func processSelectRequest(ctx context.Context, w http.ResponseWriter, r *http.Request, path string) bool {
httpserver.EnableCORS(w, r)
startTime := time.Now()
switch path {
case "/select/logsql/facets":
logsqlFacetsRequests.Inc()
logsql.ProcessFacetsRequest(ctx, w, r)
logsqlFacetsDuration.UpdateDuration(startTime)
return true
case "/select/logsql/field_names":
logsqlFieldNamesRequests.Inc()
logsql.ProcessFieldNamesRequest(ctx, w, r)
logsqlFieldNamesDuration.UpdateDuration(startTime)
return true
case "/select/logsql/field_values":
logsqlFieldValuesRequests.Inc()
logsql.ProcessFieldValuesRequest(ctx, w, r)
logsqlFieldValuesDuration.UpdateDuration(startTime)
return true
case "/select/logsql/hits":
logsqlHitsRequests.Inc()
logsql.ProcessHitsRequest(ctx, w, r)
logsqlHitsDuration.UpdateDuration(startTime)
return true
case "/select/logsql/query":
logsqlQueryRequests.Inc()
logsql.ProcessQueryRequest(ctx, w, r)
logsqlQueryDuration.UpdateDuration(startTime)
return true
case "/select/logsql/stats_query":
logsqlStatsQueryRequests.Inc()
logsql.ProcessStatsQueryRequest(ctx, w, r)
logsqlStatsQueryDuration.UpdateDuration(startTime)
return true
case "/select/logsql/stats_query_range":
logsqlStatsQueryRangeRequests.Inc()
logsql.ProcessStatsQueryRangeRequest(ctx, w, r)
logsqlStatsQueryRangeDuration.UpdateDuration(startTime)
return true
case "/select/logsql/stream_field_names":
logsqlStreamFieldNamesRequests.Inc()
logsql.ProcessStreamFieldNamesRequest(ctx, w, r)
logsqlStreamFieldNamesDuration.UpdateDuration(startTime)
return true
case "/select/logsql/stream_field_values":
logsqlStreamFieldValuesRequests.Inc()
logsql.ProcessStreamFieldValuesRequest(ctx, w, r)
logsqlStreamFieldValuesDuration.UpdateDuration(startTime)
return true
case "/select/logsql/stream_ids":
logsqlStreamIDsRequests.Inc()
logsql.ProcessStreamIDsRequest(ctx, w, r)
logsqlStreamIDsDuration.UpdateDuration(startTime)
return true
case "/select/logsql/streams":
logsqlStreamsRequests.Inc()
logsql.ProcessStreamsRequest(ctx, w, r)
logsqlStreamsDuration.UpdateDuration(startTime)
return true
default:
return false
}
}
// getMaxQueryDuration returns the maximum duration for a query from r.
func getMaxQueryDuration(r *http.Request) time.Duration {
dms, err := httputil.GetDuration(r, "timeout", 0)
if err != nil {
dms = 0
}
d := time.Duration(dms) * time.Millisecond
if d <= 0 || d > *maxQueryDuration {
d = *maxQueryDuration
}
return d
}
var (
logsqlFacetsRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/facets"}`)
logsqlFacetsDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/facets"}`)
logsqlFieldNamesRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/field_names"}`)
logsqlFieldNamesDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/field_names"}`)
logsqlFieldValuesRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/field_values"}`)
logsqlFieldValuesDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/field_values"}`)
logsqlHitsRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/hits"}`)
logsqlHitsDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/hits"}`)
logsqlQueryRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/query"}`)
logsqlQueryDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/query"}`)
logsqlStatsQueryRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/stats_query"}`)
logsqlStatsQueryDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/stats_query"}`)
logsqlStatsQueryRangeRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/stats_query_range"}`)
logsqlStatsQueryRangeDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/stats_query_range"}`)
logsqlStreamFieldNamesRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/stream_field_names"}`)
logsqlStreamFieldNamesDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/stream_field_names"}`)
logsqlStreamFieldValuesRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/stream_field_values"}`)
logsqlStreamFieldValuesDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/stream_field_values"}`)
logsqlStreamIDsRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/stream_ids"}`)
logsqlStreamIDsDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/stream_ids"}`)
logsqlStreamsRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/streams"}`)
logsqlStreamsDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/select/logsql/streams"}`)
// no need to track duration for tail requests, as they usually take a long time
logsqlTailRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/tail"}`)
)

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1 +0,0 @@
.uplot,.uplot *,.uplot *:before,.uplot *:after{box-sizing:border-box}.uplot{font-family:system-ui,-apple-system,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";line-height:1.5;width:min-content}.u-title{text-align:center;font-size:18px;font-weight:700}.u-wrap{position:relative;-webkit-user-select:none;user-select:none}.u-over,.u-under{position:absolute}.u-under{overflow:hidden}.uplot canvas{display:block;position:relative;width:100%;height:100%}.u-axis{position:absolute}.u-legend{font-size:14px;margin:auto;text-align:center}.u-inline{display:block}.u-inline *{display:inline-block}.u-inline tr{margin-right:16px}.u-legend th{font-weight:600}.u-legend th>*{vertical-align:middle;display:inline-block}.u-legend .u-marker{width:1em;height:1em;margin-right:4px;background-clip:padding-box!important}.u-inline.u-live th:after{content:":";vertical-align:middle}.u-inline:not(.u-live) .u-value{display:none}.u-series>*{padding:4px}.u-series th{cursor:pointer}.u-legend .u-off>*{opacity:.3}.u-select{background:#00000012;position:absolute;pointer-events:none}.u-cursor-x,.u-cursor-y{position:absolute;left:0;top:0;pointer-events:none;will-change:transform}.u-hz .u-cursor-x,.u-vt .u-cursor-y{height:100%;border-right:1px dashed #607D8B}.u-hz .u-cursor-y,.u-vt .u-cursor-x{width:100%;border-bottom:1px dashed #607D8B}.u-cursor-pt{position:absolute;top:0;left:0;border-radius:50%;border:0 solid;pointer-events:none;will-change:transform;background-clip:padding-box!important}.u-axis.u-off,.u-select.u-off,.u-cursor-x.u-off,.u-cursor-y.u-off,.u-cursor-pt.u-off{display:none}

File diff suppressed because one or more lines are too long


@@ -1,5 +0,0 @@
{
"license": {
"type": "opensource"
}
}


@@ -1 +0,0 @@
<svg width="48" height="48" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M24.5475 0C10.3246.0265251 1.11379 3.06365 4.40623 6.10077c0 0 12.32997 11.23333 16.58217 14.84083.8131.6896 2.1728 1.1936 3.5191 1.2201h.1199c1.3463-.0265 2.706-.5305 3.5191-1.2201 4.2522-3.5942 16.5422-14.84083 16.5422-14.84083C48.0478 3.06365 38.8636.0265251 24.6674 0" fill="#020202"/><path d="M28.1579 27.0159c-.8131.6896-2.1728 1.1936-3.5191 1.2201h-.12c-1.3463-.0265-2.7059-.5305-3.519-1.2201-2.9725-2.5067-13.35639-11.87-17.26201-15.3979v5.4112c0 .5968.22661 1.3793.6265 1.7506C7.00358 21.1936 17.2675 30.5437 20.9731 33.6737c.8132.6896 2.1728 1.1936 3.5191 1.2201h.12c1.3463-.0265 2.7059-.5305 3.519-1.2201 3.679-3.13 13.9429-12.4536 16.6089-14.8939.4132-.3713.6265-1.1538.6265-1.7506V11.618c-3.9323 3.5411-14.3162 12.931-17.2354 15.3979h.0267Z" fill="#020202"/><path d="M28.1579 39.748c-.8131.6897-2.1728 1.1937-3.5191 1.2202h-.12c-1.3463-.0265-2.7059-.5305-3.519-1.2202-2.9725-2.4933-13.35639-11.8567-17.26201-15.3978v5.4111c0 .5969.22661 1.3793.6265 1.7507C7.00358 33.9258 17.2675 43.2759 20.9731 46.4058c.8132.6897 2.1728 1.1937 3.5191 1.2202h.12c1.3463-.0265 2.7059-.5305 3.519-1.2202 3.679-3.1299 13.9429-12.4535 16.6089-14.8938.4132-.3714.6265-1.1538.6265-1.7507v-5.4111c-3.9323 3.5411-14.3162 12.931-17.2354 15.3978h.0267Z" fill="#020202"/></svg>



@@ -1,57 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<link rel="icon" href="./favicon.svg" />
<link rel="apple-touch-icon" href="./favicon.svg" />
<link rel="mask-icon" href="./favicon.svg" color="#000000">
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=5"/>
<meta name="theme-color" content="#000000"/>
<meta name="description" content="Explore your log data with VictoriaLogs UI"/>
<!--
manifest.json provides metadata used when your web app is installed on a
user's mobile device or desktop. See https://developers.google.com/web/fundamentals/web-app-manifest/
-->
<link rel="manifest" href="./manifest.json"/>
<!--
Notice the use of in the tags above.
It will be replaced with the URL of the `public` folder during the build.
Only files inside the `public` folder can be referenced from the HTML.
Unlike "/favicon.ico" or "favicon.ico", "/favicon.ico" will
work correctly both with client-side routing and a non-root public URL.
Learn how to configure a non-root public URL by running `npm run build`.
-->
<title>UI for VictoriaLogs</title>
<meta name="twitter:card" content="summary">
<meta name="twitter:title" content="UI for VictoriaLogs">
<meta name="twitter:site" content="@https://victoriametrics.com/products/victorialogs/">
<meta name="twitter:description" content="Explore your log data with VictoriaLogs UI">
<meta name="twitter:image" content="./preview.jpg">
<meta property="og:type" content="website">
<meta property="og:title" content="UI for VictoriaLogs">
<meta property="og:url" content="https://victoriametrics.com/products/victorialogs/">
<meta property="og:description" content="Explore your log data with VictoriaLogs UI">
<script type="module" crossorigin src="./assets/index-721xTF8u.js"></script>
<link rel="modulepreload" crossorigin href="./assets/vendor-V4vnRsM-.js">
<link rel="stylesheet" crossorigin href="./assets/vendor-D1GxaB_c.css">
<link rel="stylesheet" crossorigin href="./assets/index-C36SC0pJ.css">
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
<!--
This HTML file is a template.
If you open it directly in the browser, you will see an empty page.
You can add webfonts, meta tags, or analytics to this file.
The build step will place the bundled scripts into the <body> tag.
To begin the development, run `npm start` or `yarn start`.
To create a production bundle, use `npm run build` or `yarn build`.
-->
</body>
</html>

Some files were not shown because too many files have changed in this diff.