Compare commits

...

212 Commits

Author SHA1 Message Date
Jiekun
520123508e feature: [memory limit] clean unused codes 2025-07-04 15:14:34 +08:00
Jiekun
1590626d0b feature: [memory limit] change memory check interval to flag 2025-07-03 15:20:22 +08:00
Jiekun
d2487fdb19 feature: [memory limit] change memory detection 2025-07-03 14:54:08 +08:00
Jiekun
0c1f624985 feature: [memory limit] remove logging 2025-07-03 00:07:00 +08:00
Jiekun
a3fb0fece1 feature: [memory limit] replace memory usage 2025-07-03 00:01:52 +08:00
Jiekun
0d842a7620 feature: [memory limit] add debug log 2025-07-02 23:36:55 +08:00
Jiekun
48bd251817 feature: [memory limit] add flag to control the default value 2025-07-02 20:55:57 +08:00
Jiekun
11ce870625 feature: [memory limit] log circuit breaker correctly 2025-07-02 17:18:36 +08:00
Jiekun
d84f96505b feature: [memory limit] update memory check for virtual machine 2025-07-02 17:11:40 +08:00
Jiekun
124163d4b2 feature: [memory limit] unit test on gcp 2025-07-02 16:27:26 +08:00
Jose Gómez-Sellés
65087f08c4 Migrate docs to cloud repo (#9230)
This is ready, since https://github.com/VictoriaMetrics/cloud/pull/3229 is merged.

This PR deletes the cloud docs folder after moving it into the private
cloud repo. The main reason is to keep things tidy and handle reviews in
a better way, as the helm charts repo is doing.

Future updates should be safe, since the GitHub Actions workflow with the rsync command should no longer touch this folder when syncing everything.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-26 12:55:25 +02:00
Artur Minchukou
ca0479fff3 app/vmui/logs: update the color of the VictoriaLogs favicon to make the color different from VictoriaMetrics (#9270)
### Describe Your Changes

Updated the favicon color for VictoriaLogs so that the icon can be distinguished from VictoriaMetrics. Took the same color as in the victorialogs-datasource plugin.

![image](https://github.com/user-attachments/assets/521fc747-8ea7-43c7-a719-5434fe39ab06)


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-26 10:39:15 +02:00
Yury Molodov
644c7a97c8 victorialogs/docs: fix invalid stats example in "Comments" section (#9272)
### Describe Your Changes

The example in the VictoriaLogs docs (Comments section) is currently
missing an aggregate function in the `stats` pipe:

```logsql
error                       # find logs with `error` word
  | stats by (_stream) logs # then count the number of logs per `_stream` label
  | sort by (logs) desc     # then sort by the found logs in descending order
  | limit 5                 # and show top 5 streams with the biggest number of logs
```

However, `stats by (_stream) logs` is invalid syntax - `logs` is
interpreted as a function name, which causes a parsing error.

This fix replaces it with a valid version using `count()`:

```logsql
| stats by (_stream) count() as logs
```

Without `count()`, the query fails with:

```
 cannot parse 'stats' pipe: unknown stats func "logs"
```

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
2025-06-26 10:34:57 +02:00
Roman Khavronenko
098cba5b73 dashboards: fix adhoc filters for vmalert and vmagent (#9271)
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8657

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-26 10:33:55 +02:00
Hui Wang
5d0e8c0d1b vmalert: fix data race in replay ut (#9278)
see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9265
2025-06-26 15:15:35 +08:00
Aliaksandr Valialkin
442bfa6c35 lib/logstorage: add tests, which verify that NaN and Inf values cannot be parsed by tryParseFloat64
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8474
2025-06-25 23:00:56 +02:00
Aliaksandr Valialkin
ee031b21a7 docs/victorialogs/LogsQL.md: document that sum() and avg() returns NaN when all the field values are non-numeric
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8474
2025-06-25 22:51:42 +02:00
hagen1778
deec361a64 lib/prombpmarshal: make linter happy
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 22:06:45 +02:00
Roman Khavronenko
fd4dce81ce lib/prombp{marshal}: support metadata in remote write protocol (#9124)
This change adds support for parsing and sending Metadata field in
Prometheus Remote Write protocol. It implements the first step for
Metadata support.

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2974

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 22:00:34 +02:00
hagen1778
5431696d83 docs: fix various typos and grammar errors
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 21:45:55 +02:00
hagen1778
1b71184bfb docs: fix unresolved link in cluster docs
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 21:39:35 +02:00
Phuong Le
29c06b9543 VictoriaLogs: add a High Availability section to the cluster documentation. (#9247) 2025-06-24 18:53:24 +02:00
hagen1778
369f3f0da1 docs: rm integrations section from single-node readme
* move netdata and carbon-api integrations to a dedicated `Integrations` page
* drop https://github.com/aorfanos/vmalert-cli as it seems stale

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 16:58:56 +02:00
hagen1778
6f93f0e1a7 docs: minor typo fix
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 16:51:29 +02:00
Roman Khavronenko
96b773198f docs: update vmctl docs (#9257)
* split migration modes into separate sub-sections. This removes conflicting #-anchors and makes them easier to read & modify in the future
* remove duplicated wording
* simplify texts

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8964

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 16:48:13 +02:00
Andrii Chubatiuk
006381266b app/vmalert: add /api/v1/notifiers endpoint and datasource_type query argument filter for /api/v1/rules and /api/v1/alerts endpoints (#9046)
### Describe Your Changes

Added the /api/v1/notifiers endpoint and the `datasource_type` query argument for the `/api/v1/rules` and `/api/v1/alerts` API endpoints, to filter groups and rules by datasource type. Required for
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8989

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8537

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-06-24 16:41:38 +02:00
Max Kotliar
ae1dffe5d3 deployment/docker: Update grafana image version due to CVE issue (#9258)
### Describe Your Changes

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9207

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-24 16:31:53 +02:00
Nikolay
80e508eac7 lib/promscrape: remove duplicate targets from service-discovery
Previously, if an annotation was updated on an object, it could result in duplicate targets being registered on the dropped-targets service-discovery page.

Mostly this affects endpoint annotation updates. An Endpoint holds an annotation with the last update time. If any pod belonging to the given endpoint changed, it caused duplicate targets for all pods backed by the endpoint, making the service-discovery debug page hard to use.

This commit excludes `__metadata_kubernetes_*_annotation_` labels from key generation for the dropped-targets map. Instead, it updates the target with the new label values.

This may lead to some target collisions, but since the hash function could already produce collisions, it should not be a problem.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8626
2025-06-24 11:32:29 +02:00
f41gh7
86d8095417 docs: mention v1.120.0 release
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-06-23 16:36:30 +02:00
f41gh7
df847e62c0 docs: mention new LTS releases
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-06-23 16:32:35 +02:00
Aliaksandr Valialkin
1e3791cbf6 lib/storage: move testing/synctest usage into separate files ending with _synctest_test.go
The files with synctest usage are built only if the goexperiment.synctest build tag is set.
This allows running `go test ./lib/...` and `go vet ./lib/...` without the need to pass GOEXPERIMENT=synctest to them.

This is a follow-up for d33d7e20be
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8848
2025-06-21 12:27:22 +02:00
Aliaksandr Valialkin
b1f3d7f3eb app/vlinsert/journald: add a benchark for the isValidFieldName() function 2025-06-21 12:15:45 +02:00
Aliaksandr Valialkin
c2c6f97bd3 lib/fasttime: remove performance overhead when UnixTimestamp() is called from benchmark loops
Move the synctest-related implementation into a separate file protected with go:build goexperiment.synctest.

This is a follow-up for the commit 06c26315df
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8740
2025-06-21 12:09:55 +02:00
Aliaksandr Valialkin
274e38ebee docs/victorialogs/data-ingestion/Journald.md: mention that _TRANSPORT and _SYSTEMD_USER_UNIT fields are also good candidates for log streams
Thanks to @septatrix for the suggestion at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9143#issuecomment-2983752442
2025-06-20 23:26:16 +02:00
Aliaksandr Valialkin
e05738749c docs/victorialogs/CHANGELOG.md: typo fixes in links to stats functions 2025-06-20 22:25:57 +02:00
Aliaksandr Valialkin
3428d0d27d deployment: update VictoriaLogs Docker image tags from v1.23.3-victorialogs to v1.24.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.24.0-victorialogs
2025-06-20 21:29:54 +02:00
Aliaksandr Valialkin
f8170b2471 docs/victorialogs/CHANGELOG.md: cut v1.24.0-victorialogs 2025-06-20 21:20:35 +02:00
Aliaksandr Valialkin
57925c34e6 app/vlselect/vmui: run make vmui-logs-update after 9e60fc8fc8 2025-06-20 21:19:07 +02:00
Artur Minchukou
9e60fc8fc8 app/vmui/logs: move compact mode of VictoriaLogs table to separate tab (#8745)
### Describe Your Changes

Related issues: #7047 and #8438

- removed compact mode of VictoriaLogs table
- added sorting by field to JSON tab 

![image](https://github.com/user-attachments/assets/c02bbbed-cf2c-41e3-86fe-97bc205654a5)
- added sorting of field names to JSON tab


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-06-20 17:26:28 +02:00
f41gh7
07c8a4a9a7 make vmui-update 2025-06-20 17:04:17 +02:00
f41gh7
e098ef901f CHANGELOG.md: cut v1.120.0 release 2025-06-20 16:41:00 +02:00
f41gh7
7693c4bcce docker/Makefile: add EXTRA_TAG_SUFFIX variable to buildx publish
The new variable helps change the Docker image tag name during release preparation.

Instead of publishing an exact version like v1.1.0, it uses a suffixed version such as v1.1.0-rc1, which is re-tagged later when the release is promoted to stable.

Related to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9136

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-06-20 16:39:27 +02:00
Fred Navruzov
8bb56f8ef5 docs/vmanomaly: release v1.24.1 (#9241)
### Describe Your Changes

Patch release doc updates for vmanomaly (v1.24.1)

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-20 16:35:02 +02:00
Alexander Frolov
b91d249a29 lib/httpserver: option for disabling HTTP keep-alives (#9125)
### Describe Your Changes

Some network configurations may not work optimally with long-lived connections. Addresses the issue described in #2395 for other components.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: Aleksandr Frolov <fxrlv@nebius.com>
Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-06-20 12:43:21 +02:00
Hui Wang
7b274e0d6d vmalert: correct the rule evaluation timestamp if the system clock is… (#9228)
… changed during runtime

Strip the monotonic clock reading with `t.Round(d)` or `t.Truncate(d)` before applying `Sub()`, to force wall-clock usage. This corrects the rule evaluation timestamp if the system clock is changed during runtime.
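
A minimal Go sketch of the mechanism (illustrative, not the actual vmalert code): `Round(0)` strips the monotonic reading, so `Sub()` falls back to wall-clock arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()         // carries a monotonic clock reading
	wallStart := start.Round(0) // Round(0) strips the monotonic reading

	time.Sleep(100 * time.Millisecond)
	now := time.Now()

	// With monotonic readings on both values, Sub() ignores any system
	// clock changes that happened in between.
	fmt.Println(now.Sub(start))

	// After stripping, Sub() uses wall-clock arithmetic, so a system
	// clock change during runtime is reflected in the result.
	fmt.Println(now.Round(0).Sub(wallStart))
}
```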

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8790

---------

Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-06-20 12:17:31 +02:00
Benjamin Nichols-Farquhar
b3c92540e5 vmalert: respect group.concurrency in replay mode (#9214)
### Describe Your Changes

Revival and modification of original PR
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8762 after
discussion on
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7387.

`group.concurrency` is now respected if and only if `-replay.rulesDelay=0`, rather than always. This allows rules to be run concurrently without ambiguity about rule chaining. If `-replay.rulesDelay` is set greater than zero, concurrency is still ignored; this remains the default behavior, since the flag defaults to 1s.

Implementation considerations:

I chose to split some simple logic into a helper function in preparation for adding `replay.singleRuleEvaluationConcurrency` in a follow-up PR, as that's where we can share that logic.

cc @Haleygo 
### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Co-authored-by: Hui Wang <haley@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-06-20 12:14:18 +02:00
Artem Fetishev
0dd63a60d0 lib/storage: Change BenchmarkHeadPostingForMatchers to use global index time range explicitly (#9233)
During data retrieval, VictoriaMetrics switches to global index search if the time range is > 40 days. Prior to ba0d7dc2fc, this switch was handled in the indexDB code. That commit moved the logic to the storage level. As a result, indexDB searches the per-day index for whatever time range is passed to it.

This broke BenchmarkHeadPostingForMatchers: it now shows very slow indexDB performance compared to v1.111.0 (the last release before that commit), because indexDB tries to search 55 years of data by spawning a separate goroutine for each day. Even though the data was inserted only for the current day, running that many goroutines significantly slows down the search.

The fix is to use the global index time range explicitly.
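
A hedged sketch of the switch being described, assuming millisecond timestamps (illustrative names; the actual logic lives in lib/storage):

```go
// maxDaysForPerDaySearch mirrors the ~40-day threshold mentioned above.
const maxDaysForPerDaySearch = 40

// useGlobalIndex reports whether a search should go to the global index
// instead of spawning a per-day index search goroutine for each day.
func useGlobalIndex(minTimestampMs, maxTimestampMs int64) bool {
	days := (maxTimestampMs - minTimestampMs) / (24 * 3600 * 1000)
	return days > maxDaysForPerDaySearch
}
```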

Fixes #9203 

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-06-20 11:55:04 +02:00
Benjamin Nichols-Farquhar
32fc801519 vmstorage: add flag for storage metricName cache tuning (#9156)
Similar to other storage caches, `storage/metricName` can be very important to performance; however, it is not tunable independently like the other caches.

In high-cardinality setups where a large amount of that cardinality is actively queried, we can see a high `metricName` miss rate.

The only way to correct this is to increase available memory, either by provisioning more or by increasing `memory.allowedPercent`, which is often expensive or undesirable for stability reasons.

It's possible to work around this by increasing `memory.allowedPercent` and then adjusting `storage.cacheSizeIndexDBDataBlocks` and `storage.cacheSizeStorageTSID` down, as they are the largest caches, but this is less ideal than being able to directly control this cache size.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8843
2025-06-20 11:33:39 +02:00
Artem Fetishev
771f742842 lib/storage: extract adding metricIDs to the pendingHourEntries into a separate func (#9138)
This change has been requested to be done in a separate PR during the
partition index review. See:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8134#discussion_r2131692079

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-06-20 09:58:11 +02:00
Max Kotliar
a845fe815a lib/logstorage: clarify comment on writeBlockResultFunc usage constraints (#9235)
### Describe Your Changes

The `DataBlock` contains structs with string fields, and while the
original comment mentioned not holding references to `br`, it wasn't
immediately clear that this also applies to fields like strings within
the data.

This change clarifies that the `writeBlockResultFunc` must not retain
references to any part of `br`, including its fields. This makes it
explicit that even seemingly safe types like strings must be copied if
needed.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-19 16:30:56 +02:00
Phuong Le
2eadd4c9f9 vlogs: fix inconsistent type for HTTP request duration metric (#9225)
The 'select' HTTP request duration metric is currently implemented as a
summary, while the 'insert' HTTP request duration uses a histogram.

All time series under the same metric name must use the same metric
type. Using the summary type for this metric is preferable, as it
significantly reduces the number of unique time series generated. While
summaries have the limitation of not supporting accurate aggregation
across multiple instances or paths, this trade-off is more manageable
than dealing with a high-cardinality explosion caused by histograms.
2025-06-19 14:49:29 +02:00
Aliaksandr Valialkin
eb7b088c91 lib/logstorage: provide standard string representation for all the priority and severity levels in Syslog and Journald protocols inside the "level" field
It is better from a usability PoV to provide a string representation for all the priority and severity levels instead of merging some of them into common groups.
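
For illustration, a hedged Go sketch of such a mapping following the RFC 5424 severity keywords (the exact strings used by the parsers may differ):

```go
// severityKeywords maps syslog severity numbers (RFC 5424) to string
// representations; illustrative, not the actual lib/logstorage table.
var severityKeywords = [...]string{
	"emergency", "alert", "critical", "error",
	"warning", "notice", "informational", "debug",
}

func severityKeyword(severity int) string {
	if severity < 0 || severity >= len(severityKeywords) {
		return "unknown"
	}
	return severityKeywords[severity]
}
```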

This is requested at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9209
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8535
2025-06-19 14:39:20 +02:00
Aliaksandr Valialkin
19a50ae7cd app/vlinsert/journald: follow-up for 01d413873e
- Add more tests, which cover various edge cases for binary encoding of log field value in journald format
- Moved common code for reading the next line to fieldsBuf.value. This simplifies the code a bit.
- Added more comments, which try to explain the braindead logic for parsing binary-encoded log field values in the journald format

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9070
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9153
2025-06-19 13:50:46 +02:00
Andrii Chubatiuk
01d413873e app/vlinsert/journald: fixed binary value parsing (#9234)
### Describe Your Changes

- Removed an unneeded loop for binary field value size extraction, since the size should not be delimited by multiple `\n` symbols
- Changed the loop condition

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-19 13:47:56 +02:00
Aliaksandr Valialkin
d307a64cd2 app/vlinsert: do not trim prefixes from "/insert/*" URL paths
This improves maintainability and readability of the code, since it simplifies searching for a particular request handler by its full path.
2025-06-19 08:46:57 +02:00
Artur Minchukou
27f1c1ab13 app/vmui/logs: fix missing field values in auto-complete (#8799)
### Describe Your Changes

Related issue: #8749 
Added a query to the `/field_names` and `/field_values` requests to get more precise results:
 
- If `filterName` is `_msg` or `_stream_id`, the query cannot be
generated specifically,
    so a wildcard query (`"*"`) is returned.
 
- If `filterName` is `_stream`, the query is generated using regexp
(`{type=~"value.*"}`).
 
- If `filterName` is `_time`, a simplified query is created by trimming
the value up
    to the first occurrence of a delimiter such as `-` or `:`.
 
- For all other values of `filterName`, a prefix query is returned using
    the `query` value with a `*` appended (e.g., `"value*"`).

Related issue: #8806 
Enhanced autocomplete with parsed field suggestions from unpack pipe.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: Nikolay <nik@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-06-19 01:10:45 +02:00
Artur Minchukou
60396d0daa app/vmui/logs: ensure the live tailing tab automatically reconnects on connection loss (#9162)
### Describe Your Changes

Related issue: #9129 

Added restarting the live tailing tab on connection loss

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-06-19 01:08:11 +02:00
Yury Molodov
9b2c1b00cf vmui/logs: fix hits chart not updating on tenant change (#9171)
### Describe Your Changes

Adds `tenant` to the dependency array for log hits fetching to ensure
the chart updates when `AccountID` or `ProjectID` changes.
Related issue: #9157 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-06-19 01:06:19 +02:00
Aliaksandr Valialkin
892008b05d docs/victorialogs/CHANGELOG.md: document dc2da9a71b
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9200
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9201
2025-06-19 01:02:56 +02:00
Vadim Alekseev
dc2da9a71b app/logstorage: optimize pipes after appending limit pipe (#9201)
### Describe Your Changes

This PR adds a call to optimize pipes after the `limit` pipe has been
appended.

Related: #9200

While this approach is not ideal, since it forces us to re-optimize all pipes, it is simpler.

An alternative would be to reapply only the relevant optimizations
specifically for this case, something like:

```go
func (q *Query) AddPipeLimit(n uint64) {
	if len(q.pipes) > 0 {
		// If the last pipe is `sort`, fold the limit into it.
		ps, ok := q.pipes[len(q.pipes)-1].(*pipeSort)
		if ok {
			if ps.limit == 0 || n < ps.limit {
				ps.limit = n
			}
			return
		}
		// Likewise for a trailing `uniq` pipe.
		pu, ok := q.pipes[len(q.pipes)-1].(*pipeUniq)
		if ok {
			if pu.limit == 0 || n < pu.limit {
				pu.limit = n
			}
			return
		}
	}
	// Otherwise append a standalone `limit` pipe.
	q.pipes = append(q.pipes, &pipeLimit{
		limit: n,
	})
}
```

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-19 00:57:24 +02:00
Artur Minchukou
5908ee1009 app/vmui/logs: improve usability of the live tailing tab (#9194)
### Describe Your Changes

Related issue: #9130 

- add `ScrollToTopButton` component for better navigation UX and
integrate it into Live Tailing view;
 
<img width="1293" alt="image"
src="https://github.com/user-attachments/assets/98a96ac8-fe2b-43fa-a470-a51f68df2e01"
/>

- replace the log throttling logic with the new `LogFlowAnalyzer` utility for enhanced performance state tracking;
- enhance `GroupLogsItem` with a copy-to-clipboard feature, which allows copying a single log object;
 
<img width="1268" alt="image"
src="https://github.com/user-attachments/assets/c42ad5e1-0d02-456e-b231-d42064f48eb4"
/>

- update styles for better responsiveness and aesthetics;
- change the raw JSON view into an expandable view in the live tailing tab by default, and keep the value of this setting in local storage; this became possible thanks to [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9130#9083).
  
<img width="554" alt="image"
src="https://github.com/user-attachments/assets/6bcda0f1-dd2f-4608-8e0d-f9c0bc6efac3"
/>



Short demo: 


https://github.com/user-attachments/assets/351133a7-f3ef-41ad-b23d-cc12f030a357


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-19 00:54:50 +02:00
Nikolay
c930a81ea9 app/vlinsert: remove vlstorage dependency for vlinsert (#9221)
The vlinsert package can be used as an imported dependency. Mostly it's needed for vlagent, but it could be useful for other applications as well.

This commit introduces a new interface for the vlinsertutil package, which represents the actual storage for log row ingestion.

It includes CanWriteData, which could also be used to introduce a back-pressure mechanism for vlinsert clients.

Related PR: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9034

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-06-19 00:33:04 +02:00
Nikolay
c95990f47f lib/logstorage: properly iterate over ForEachRow (#9222)
Previously, ForEachRow always reset the last row fields after iteration. This made concurrent iteration with ForEachRow impossible, since ForEachRow performed a hidden mutation of LogRows.

This commit resolves the issue by removing the fields reference.

Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9076
2025-06-19 00:24:50 +02:00
Aliaksandr Valialkin
f0442e40a0 lib/logstorage: follow-up for 5d06c74e2b
Move the lex.isQuotedToken() check to the top of the lexer.isInvalidQuotedString() function
in order to simplify understanding the code.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9167
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9219
2025-06-19 00:19:17 +02:00
Andrii Chubatiuk
5d06c74e2b lib/logstorage: fix panic when not paired quotes are passed as a pipe value (#9219)
### Describe Your Changes

The nextToken method, which is called prior to getCompoundTokenExt, already unquotes the string, so the paired-quotes check inside getCompoundTokenExt is redundant.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9167

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-06-19 00:15:44 +02:00
Aliaksandr Valialkin
63dccea932 app/vlinsert/journald: parse journald logs in streaming manner
This allows parsing an unlimited number of logs in a single HTTP request, without the need to buffer the logs in memory.

This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9070

Thanks to @AndrewChubatiuk for the initial pull request - https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9153

This commit is based on https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9153. It contains the following changes compared to the original pull request:

- Remove the ugly function LineReader.NextLineWithLineFn(). Instead, uglify the Journald parser a bit
  with hacky calls to LineReader.NextLine() in order to parse binary-encoded field values.
  This keeps the maintainability of the LineReader, which is shared among multiple protocol parsers,
  under control, while containing the complexity of Journald parsing inside the app/vlinsert/journald package.

- Fix a typo bug inside isNameValid() - `(r < '0' && r > '9')` must be written as `(r < '0' || r > '9')`.
  Rewrite isNameValid() into easier-to-understand code and rename it to isValidJournaldFieldName() for better readability (see the sketch after this list).
  Add tests for this function.

- Remove mentioning of the -journald.maxRequestSize command-line flag from VictoriaLogs docs.

- Add the description of the fix to VictoriaLogs changelog.

- Properly increment errorsTotal metric on every journald parse error.

- Add missing protoparserutil.PutUncompressedReader(reader) call, so the reader could be re-used between client requests.

- Remove improperly working code, which tries continuing parsing the request stream after parse errors.
  It is impossible to recover reliably from journald parse errors related to reading the data from the request stream,
  since the journald protocol format is completely braindead. So it is better to immediately return the error
  to the client instead of trying to recover. The only errors, which could be recovered, are related to invalid field names / values.
  Such errors are logged with the WARN level and the corresponding fields are skipped.

- Fix incorrect storage of the re-used name and value strings in fb.fields. The contents of the name and value strings
  must be copied on every loop iteration that reads these strings from the request stream. Otherwise the contents of the
  Name and Value fields previously added to fb.fields will be overwritten on the next iteration.

- Ensure that LineReader.Line is set to nil after LineReader.NextLine() returns false. This should prevent subtle bugs
  where LineReader.Line is read after LineReader.NextLine() returns false.
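
Regarding the isNameValid() typo fix above, a hedged reconstruction of the corrected check (the journald convention allows uppercase letters, digits and underscores in field names; the actual function lives in app/vlinsert/journald):

```go
// isValidJournaldFieldName reports whether name looks like a valid
// journald field name. In a condition shaped like the one below, the
// buggy clause `(r < '0' && r > '9')` was always false, so the
// rejection could never trigger as intended.
func isValidJournaldFieldName(name string) bool {
	if len(name) == 0 {
		return false
	}
	for _, r := range name {
		if (r < 'A' || r > 'Z') && (r < '0' || r > '9') && r != '_' {
			return false
		}
	}
	return true
}
```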
2025-06-18 23:48:22 +02:00
Fred Navruzov
46acf8edc0 docs/vmanomaly: release 1.24.0 post-release updates (#9224)
### Describe Your Changes

Fixing typos and missing links after v1.24.0 doc update in #9191 

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-18 23:29:54 +02:00
Fred Navruzov
d478d1496a Docs: vmanomaly release v1.24.0 (#9191)
### Describe Your Changes

> ⚠️  still draft, don't merge even if already approved

Docs update for vmanomaly v1.24.0 release with stateful service option

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-18 22:41:59 +02:00
Aliaksandr Valialkin
272a77a9c3 lib/logstorage: optimize OR filters where one of these filters is *
Such filters can be optimized to `*`. This avoids executing the other OR filters.
For example, `foo or * or bar` is optimized to `*`, while the `foo` and `bar` filters aren't executed.

Such filters are frequently generated by Grafana, so this should improve query performance there.
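
A hedged sketch of the optimization (illustrative stand-in types; the actual lib/logstorage filters differ):

```go
// filter is an illustrative stand-in for the logstorage filter interface.
type filter interface{}

// filterMatchAll corresponds to the `*` filter, which matches every log entry.
type filterMatchAll struct{}

// optimizeOrFilters collapses an OR filter list to `*` when any branch
// matches everything, so `foo or * or bar` never executes `foo` and `bar`.
func optimizeOrFilters(orFilters []filter) []filter {
	for _, f := range orFilters {
		if _, ok := f.(*filterMatchAll); ok {
			return []filter{&filterMatchAll{}}
		}
	}
	return orFilters
}
```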
2025-06-18 16:53:35 +02:00
Aliaksandr Valialkin
e72a3fdb67 lib/logstorage: properly parse unquoted regexp filters ending with *
Use getCompoundToken() instead of getCompoundFuncArg() for obtaining regexp filter value,
since getCompoundFuncArg() skips trailing '*' chars.

This allows detecting invalid queries in the https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8582 .
2025-06-18 16:53:34 +02:00
Zakhar Bessarab
7413000e57 docs/changelog: add link to an issue after 971c759a
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-18 18:06:32 +04:00
Roman Khavronenko
ddd686c026 app/vmalert: rename samples to series (#9204)
`Samples` could be confusing for users, especially for alerting rules:
* we say in the docs that each returned "series" will create a new alert
* we say that there are "series fetched", so both columns should be either "series" or "samples"

Let's rename it to `series` for consistency.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-18 14:50:24 +02:00
Hui Wang
0e8007a02b vmalert: do not break vmalert process under replay mode when rule use… (#9206)
…s `query` template, but only logging a warning

The `query` template is not supported in replay mode, because we perform range queries on the rule's expression, but not on the `query` template. Previously, if users saw the error `query template isn't supported in replay mode`, they had to remove the `query` template from the rule before replaying.

Also, templating is only used for alerting rules. Replaying alerting rules doesn't send notifications (rule annotations are included here), and since users mainly focus on the generated `ALERTS` series, the `query` result is trivial. This pull request shouldn't break things, but it simplifies the usage of replay mode for this case.

related https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8746
2025-06-18 14:23:12 +02:00
Dmytro Kozlov
cbd76ac4dc docs/vmctl: add information about the prometheus snapshot folder structure (#9208)
### Describe Your Changes

Users migrating from systems like Thanos or Prometheus with `vmctl` and the `prometheus` migration protocol ask about problems where they specify a snapshot, but `vmctl` finds `0` series for migration. This issue happens because users specify a block folder instead of the snapshot folder. Added clarification about the snapshot structure and how it looks with multiple blocks inside.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-18 14:22:42 +02:00
Andrii Chubatiuk
96b8213b0d lib/streamaggr: fixed rate_avg and rate_sum when scrape interval is bigger than aggregation interval (#9170)
### Describe Your Changes

Fixes #9017.

Additionally, introduced the testing/synctest library to cover this case and cases from previous releases related to aggregation windows and panics in outputs with state.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-06-18 14:19:51 +02:00
Zakhar Bessarab
971c759acc packaging/make: fix archiving release binaries for cluster (#903)
* packaging/make: fix archiving release binaries for cluster

Cluster binaries for non-Windows platforms did not include FIPS binaries; fix that by properly including them.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-18 16:06:14 +04:00
Phuong Le
01a7ab8bf4 vlinsert/journald: fix timestamp parsing when using default time field (#9150)
See #9144 

VictoriaLogs was not correctly parsing timestamps from journald data
when using the default time field configuration. This caused
VictoriaLogs to look for a `_time` field instead of
`__REALTIME_TIMESTAMP` in journald data by default, resulting in
timestamps falling back to ingestion time rather than the actual log
timestamps.

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-06-18 12:59:46 +02:00
Aliaksandr Valialkin
d29bb97fec docs/victorialogs/CHANGELOG.md: move the BUGFIX entry to the correct place after the commit 9b21dc5a30
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9062
2025-06-18 12:37:37 +02:00
Aliaksandr Valialkin
48554d51b9 app/vlinsert/journald: properly read timestamp from __REALTIME_TIMESTAMP field by default
CommonParams.TimeFields is initialized to []{"_time"} by default. This prevented the proper usage of -journald.timeField as the default field for reading log timestamps.

This bug was introduced in the commit a1a731eb61

While at it, make sure that every parsed log entry gets its own timestamp if the timestamp couldn't be read from the log entry. This provides a reliable sort order for the logs.
2025-06-18 12:36:02 +02:00
Aliaksandr Valialkin
105a42ce08 app/vlinsert/journald: use (_MACHINE_ID, _HOSTNAME, _SYSTEMD_UNIT) as default log stream fields for logs ingested via journald data ingestion protocol
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9143
2025-06-18 12:03:21 +02:00
Will Sargent
8b58dc1892 deployment/docker/victorialogs: Fix container name to fluentbit-oltp (#9213)
### Describe Your Changes

The oltp container had the same name as the loki container. Fixed.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-18 11:57:20 +04:00
Aliaksandr Valialkin
d9064dc781 docs/victorialogs: document facility_keyword field, which is automatically added by Syslog parser
The facility_keyword field has been added in the commit ff9cb3f821
2025-06-17 10:56:47 +02:00
Aliaksandr Valialkin
ff9cb3f821 app/vlinsert: use string representation of log level at logs ingested into VictoriaLogs via syslog and journald protocols
It is better from a usability PoV to use a string representation for the 'level' log field instead of a numeric one.

Remove the -journald.priorityAsLevel and -syslog.severityAsLevel command-line flags,
since there are zero practical reasons when the `level` log field shouldn't be initialized automatically.

Move the CHANGELOG description for this feature into the correct place at docs/victorialogs/CHANGELOG.md,
and make it more human-readable.

Document the 'level' log field at https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/
and at https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/

This is a follow-up for 50969ca780

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8535
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8553
2025-06-17 10:49:23 +02:00
Andrii Chubatiuk
50969ca780 app/vlinsert: introduced flags, that enable syslog severity and journald priority fields casting to a level field (#8553)
### Describe Your Changes

fixes #8535 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-06-17 08:29:17 +02:00
Zakhar Bessarab
195afd1c2e app/vmbackupmanager: increase storage healthcheck duration (#902)
A storage node with a large number of partitions (e.g. 150+ partitions with 12TB of data) can take more than 30 seconds to start. This was causing vmbackupmanager restarts when running vmstorage colocated with vmbackupmanager in a single pod.

Use exponential backoff for retries and increase the overall timeout for the storage node healthcheck to 3 minutes, to avoid vmbackupmanager restarts during storage node startup.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-16 17:50:51 +04:00
Zakhar Bessarab
c5743a7099 lib/backup/azremote: do not use SAS token for copying objects (#9172)
Using a SAS token is not required when copying data within a single storage account; a plain object URL is sufficient for this case. A single storage account is normally used to store backups, so it is safe to remove SAS token usage.

This fixes support for server-side copy when using a managed identity as the authentication source, since a SAS token can only be generated when using "shared key" credentials.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9131

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-16 11:30:16 +04:00
Dmytro Kozlov
ac11f184fc apptest/vmctl: implement integration tests for the remote read protocol (#9164)
Implemented integration tests for the remote read protocol (for both stream and default modes).
Removed the old implementation from vmctl.

Related issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7700

Co-authored-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
2025-06-16 07:09:08 +02:00
Andrii Chubatiuk
973eb1cc4f apptest: validate relabelling after reload (#9175)
Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8946

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-06-13 14:35:50 +02:00
hagen1778
3382bbf285 dashboards: set Y-min and units for re-processing panel
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-13 13:53:02 +02:00
Fred Navruzov
5ec7cc5dd4 docs/vmanomaly: release 1.23.3 (#9180)
### Describe Your Changes

Docs update to vmanomaly v1.23.3

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-13 10:17:16 +02:00
Nick Yang
9b21dc5a30 app/vlinsert: support - timestamp for rsyslog
RFC 5424 allows `-` for timestamps when the syslog application cannot obtain the current time (e.g., embedded devices that have not yet synced via NTP); the log ingester should apply its own timestamp in that case.
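
A hedged sketch of that rule (the real parser handles more timestamp layouts):

```go
// parseSyslogTimestamp returns the ingester's own receive time when the
// sender used the RFC 5424 NILVALUE ("-") for the TIMESTAMP field.
func parseSyslogTimestamp(field string, received time.Time) (time.Time, error) {
	if field == "-" {
		return received, nil
	}
	// RFC 5424 TIMESTAMP is derived from RFC 3339.
	return time.Parse(time.RFC3339Nano, field)
}
```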

RFC5424:
https://datatracker.ietf.org/doc/html/rfc5424#section-6.2.3

Related PR:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9062
2025-06-11 14:20:06 +02:00
Roman Khavronenko
e828f03eaa app/vmselect/promql: add rate_prometheus support
Support the [rate_prometheus](https://docs.victoriametrics.com/victoriametrics/metricsql/#rate_prometheus) function, an equivalent to `increase_prometheus(series_selector[d]) / d`

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8901
2025-06-11 14:15:02 +02:00
f41gh7
4dc9ca26fc vendor: update metricsql 2025-06-11 14:13:21 +02:00
Roman Khavronenko
63e1bf5d97 app/vmselect: respect staleness markers when calculating rate and increase functions
The new behavior interrupts the rate/increase calculation if the last sample in the selected time window is a staleness marker, making the series disappear immediately instead of slowly fading away.

See more details in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8891#issuecomment-2883119388.
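
A hedged sketch of the change (simplified: no counter-reset handling; the Prometheus staleness marker is a NaN with a fixed bit pattern):

```go
// staleNaNBits is the bit pattern Prometheus uses for staleness markers.
const staleNaNBits = 0x7ff0000000000002

func isStaleNaN(v float64) bool {
	return math.Float64bits(v) == staleNaNBits
}

// rateOverWindow returns no value when the last sample in the selected
// window is a staleness marker, so the series disappears immediately
// instead of fading away as old samples leave the window.
func rateOverWindow(values []float64, timestampsMs []int64) (float64, bool) {
	n := len(values)
	if n < 2 || isStaleNaN(values[n-1]) {
		return 0, false
	}
	dv := values[n-1] - values[0]
	dtSeconds := float64(timestampsMs[n-1]-timestampsMs[0]) / 1000
	return dv / dtSeconds, true
}
```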
2025-06-11 13:57:31 +02:00
Nikolay
d8b36fb2e3 apptest: add base test cases for victoria-logs
This commit adds key concepts and ingestion protocol tests for the victoria-logs component.
2025-06-11 13:56:13 +02:00
Phuong Le
aa3a2b01aa spellcheck: run 2025-06-11 13:54:16 +02:00
Max Kotliar
fd543883fa .github/workflows: allow codecov to report without failing CI build
### Describe Your Changes

The `fail_ci_if_error` flag only affects the upload step and does not control Codecov's status checks (e.g., codecov/patch, codecov/project). My prior tests did not surface this behavior.

Switched to 'informational' mode per the Codecov docs to avoid blocking CI. See:

https://docs.codecov.com/docs/common-recipe-list#set-non-blocking-status-checks

Tested in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9146

Follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9139,
52022e482c
2025-06-11 13:47:51 +02:00
Dmytro Kozlov
bbf3ab099b apptest/vmctl: migrate vmctl test for the vm-native migration process (#9059)
Moved the test for migrating via the native protocol (vmsingle to vmsingle) to the apptest folder, where all integration tests are held.

Related issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7700

Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
2025-06-10 16:03:01 +02:00
Andrii Chubatiuk
fa68453e41 deployment/docker/victorialogs: fixed journald ignore filters in example (#9154)
### Describe Your Changes

Removed a trailing comma in ignoreFields, which led to removal of the _msg field, since the field name is converted to an empty string before filtering.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-10 13:54:23 +02:00
sforests
8f47e30c1d lib/storage:fix Less Method in tagFilter Struct (#9127)
### Describe Your Changes

This pull request addresses a bug in the Less method of the tagFilter
struct. The original implementation incorrectly assigned the value of
isCompositeB by calling tf.isComposite() instead of other.isComposite().
This caused both isCompositeA and isCompositeB to always have the same
value, leading to incorrect comparisons when determining the order of
tagFilter instances.
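
A hedged sketch of the fix (field layout and ordering rule are illustrative; the real struct lives in lib/storage):

```go
type tagFilter struct {
	key       []byte
	composite bool
}

func (tf *tagFilter) isComposite() bool { return tf.composite }

func (tf *tagFilter) Less(other *tagFilter) bool {
	isCompositeA := tf.isComposite()
	// The bug: this previously read tf.isComposite(), so both values were
	// always equal and the branch below could never fire.
	isCompositeB := other.isComposite()
	if isCompositeA != isCompositeB {
		return isCompositeA // illustrative tie-breaking rule
	}
	return string(tf.key) < string(other.key)
}
```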

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Co-authored-by: Zhu Jiekun <jiekun@victoriametrics.com>
2025-06-10 13:52:21 +02:00
Zakhar Bessarab
f66981cac1 docs/changelog: backport changes for 1.110.11
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-10 14:03:34 +04:00
hagen1778
780c67d139 dashboards: add panel Partitions scheduled for re-processing to VM cluster dashboard
It shows the amount of data scheduled for [downsampling](https://docs.victoriametrics.com/#downsampling)
 or [retention filters](https://docs.victoriametrics.com/#retention-filters).
 The new panel should help to correlate resource usage with background re-processing of partitions.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-10 09:30:17 +02:00
hagen1778
9244557b6e dashboards: use RSS anon memory in per-component sections
Anonymous RSS memory usage per component type is more useful to observe
for finding anomalies.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-10 09:12:35 +02:00
Aliaksandr Valialkin
ee940e81ec lib/logstorage: improve performance for isTokenChar() by using 256-byte lookup table
This increases the performance of isTokenChar() by up to 30%.
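
A hedged sketch of the lookup-table approach (the actual token alphabet in lib/logstorage may differ):

```go
// tokenCharTable is precomputed once, so isTokenChar() becomes a single
// table load instead of a chain of range comparisons.
var tokenCharTable [256]bool

func init() {
	for c := 0; c < 256; c++ {
		tokenCharTable[c] = (c >= 'a' && c <= 'z') ||
			(c >= 'A' && c <= 'Z') ||
			(c >= '0' && c <= '9') || c == '_'
	}
}

func isTokenChar(c byte) bool {
	return tokenCharTable[c]
}
```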

Thanks to @ahfuzhang for the initial idea at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9064/files#diff-27b31ccad49a8ceaf033f97deb3d876d62eab4119374cbb3ae65278e894f6c69
2025-06-09 20:59:37 +02:00
Aliaksandr Valialkin
695532fc8d lib/logstorage: call isTokenChar() for ascii chars passed to isTokenRune()
This improves isTokenRune() performance for ascii chars by up to 30%.

Thanks to @ahfuzhang for the initial idea at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9064/files#diff-27b31ccad49a8ceaf033f97deb3d876d62eab4119374cbb3ae65278e894f6c69
2025-06-09 20:59:37 +02:00
Aliaksandr Valialkin
ccb5b47914 lib/prompbmarshal: make size() private methods, since they arent used outside lib/prompbmarshal 2025-06-09 19:32:22 +02:00
Aliaksandr Valialkin
e0f3ecd073 lib/prompbmarshal: make marshalToSizedBuffer() private methods, since they arent used outside lib/prompbmarshal 2025-06-09 19:29:56 +02:00
Max Kotliar
646604d850 .github/workflows: allow codecov to report without failing CI build (#9139)
### Describe Your Changes

Currently, some PRs have a failed CI due to low code coverage reported by Codecov. However, the team typically ignores this and relies on other quality indicators such as thorough code reviews.

This change configures Codecov to continue posting coverage reports
without marking the build as failed.

It also helps reduce confusion for external contributors, who might
otherwise feel pressured to add unnecessary tests just to satisfy
Codecov requirements (for example
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9002#discussion_r2111651046).

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-09 17:51:23 +02:00
Zakhar Bessarab
0e6b3eabb5 docs/release-guide: fix link to release follow-up
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-09 16:00:24 +04:00
Zakhar Bessarab
7165820b6a docs: update version of VictoriaMetrics components to v1.119.0
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-09 15:56:30 +04:00
Zakhar Bessarab
d233170ada deployment/docker: update versions of VictoriaMetrics components to v1.119.0
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-09 15:53:18 +04:00
Zakhar Bessarab
c7f2d91d08 docs/victoriametrics/changelog: backport LTS changelog
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-09 15:13:42 +04:00
Artem Fetishev
7f5a8af464 apptest: Add basic backup/restore integration test (#9133)
This commit adds a basic backup/restore test for vmsingle and vmcluster. A
more sophisticated was originally added to the partition index PR
(#8134) and was aimed to test backup/restore when switching back and
forth between legacy and partition index. During the code review it was
decided that it would be good to have a separate test as well since
legacy code will be removed in future and so will the test.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-06-09 12:17:54 +02:00
Fred Navruzov
181a465c89 docs/vmanomaly: release v1.23.2 (#9135)
### Describe Your Changes

Update docs to vmanomaly release v1.23.2

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-09 11:28:56 +02:00
Fred Navruzov
7288adab21 docs/vmanomaly: release v1.23.1 (#9132)
### Describe Your Changes

Update the docs to release v1.23.1

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-08 23:07:02 +02:00
Aliaksandr Valialkin
993a9d92d6 deployment/docker/builder/Dockerfile: download musl archives for cross-compilation from the local repository instead of musl.cc
This speeds up building the Go builder image significantly (from hours to a few minutes),
since the build speed was limited by the download speed from https://musl.cc, and this speed
was extremely slow (e.g. 10kb/s and slower).

This also improves build security, since the local mirror of musl.cc is under our control.
2025-06-08 13:47:13 +02:00
Aliaksandr Valialkin
45c889a1cf docs/victorialogs/querying/README.md: mention that web UI supports live tailing
This is a follow-up for the commit 231bfcf4cf

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8882
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7046
2025-06-08 09:12:54 +02:00
Aliaksandr Valialkin
dca5d44f2b deployment: update base Docker image from alpine:3.21.3 to alpine:3.22.0
See https://alpinelinux.org/posts/Alpine-3.22.0-released.html
2025-06-08 08:08:16 +02:00
Aliaksandr Valialkin
9e4f0cc900 go.mod: update Go from 1.24.3 to 1.24.4
See https://github.com/golang/go/issues?q=milestone%3AGo1.24.4+label%3ACherryPickApproved

This is a follow-up for 54dc9cc322
2025-06-08 08:03:03 +02:00
Aliaksandr Valialkin
54dc9cc322 deployment: update Go builder from Go1.24.3 to Go1.24.4
See https://github.com/golang/go/issues?q=milestone%3AGo1.24.4+label%3ACherryPickApproved
2025-06-08 08:01:45 +02:00
Aliaksandr Valialkin
67e6752b82 docs/victoriametrics/goals.md: clarify the main goal 2025-06-08 07:57:54 +02:00
Zakhar Bessarab
bb54075c23 docs/victoriametrics/changelog: cut v1.119.0
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-06 16:48:02 +04:00
Zakhar Bessarab
08f5220bc3 app/{vmselect,vlselect}: run make vmui-update vmui-logs-update
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-06 16:38:04 +04:00
DmitrySafonov
f9015da6eb app/promql/rollup_result_cache: include extra_filters in rollupCache key for multi-tenant support
This PR addresses two issues:

When tenant labels (e.g. vm_account_id, vm_project_id) are passed via
extra_filters, they were not included in the rollupCache key. This could
cause cache entries to be reused across different tenants, resulting in
incorrect query results.
If a tenant is specified only via extra_filters, and that tenant does
not exist in TenantsCached, it gets silently filtered out by
GetTenantTokensFromFilters, causing the query to fall back to a global
(non-tenant) query — which is likely unexpected and potentially unsafe.
This fix ensures correct tenant scoping and avoids unintended data
exposure or cache pollution.
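
A hedged sketch of the key idea (names and key layout are illustrative, not the actual rollup_result_cache implementation):

```go
// rollupCacheKey includes extra_filters in the cache key, so cached
// results for one tenant can never be served to another tenant.
func rollupCacheKey(expr string, step int64, extraFilters []string) string {
	sorted := append([]string(nil), extraFilters...)
	sort.Strings(sorted) // canonical order: equal filter sets share a key
	return fmt.Sprintf("%s\x00%d\x00%s", expr, step, strings.Join(sorted, "\x00"))
}
```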

Related issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9001

---------

Co-authored-by: Zakhar Bessarab <me@zekker.dev>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2025-06-06 15:56:36 +04:00
Aliaksandr Valialkin
539498058e lib/atomicutil: add CacheLineSize const equal to the size of CPU cache line, and use this const for padding against false sharing across the code base
This should reduce the waste of memory on the padding from 128 bytes to 64 bytes on GOARCH=amd64,
while preserving bigger padding for platforms with bigger cache line sizes.

See https://stackoverflow.com/questions/68320687/why-are-most-cache-line-sizes-designed-to-be-64-byte-instead-of-32-128byte-now
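
A hedged sketch of how such a constant is used for padding (the real const lives in lib/atomicutil and varies per GOARCH):

```go
// CacheLineSize is 64 bytes on GOARCH=amd64.
const CacheLineSize = 64

// paddedCounter occupies a full cache line, so adjacent counters in a
// slice don't share a line and concurrent writers on different CPUs
// don't invalidate each other's caches (false sharing).
type paddedCounter struct {
	n uint64
	_ [CacheLineSize - 8]byte
}
```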

Thanks to @tIGO for the hint
2025-06-06 10:21:40 +02:00
Peter Gervai
b3d22403eb docs: update LogsQL "field pipe" typo (#9037)
### Describe Your Changes

Typo? It's called "fields" pipe, not "field".

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
2025-06-06 09:55:06 +02:00
hagen1778
0fce51e3b4 docs: prettify the changelog
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-06 09:52:02 +02:00
hagen1778
9e118fe1ee docs: rm 11831-victoria-metrics-cluster-ig1-version dashboard
Remove the community-provided dashboard, as it has remained without updates for a few years already. Recommending it may hurt the user experience.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-06 09:44:37 +02:00
Zakhar Bessarab
3553c60399 app/vmctl: enable dual-stack mode by default (#9119)
Dual-stack mode is disabled by default in order to avoid accidentally exposing components via IPv6 networks.

vmctl does not expose any endpoints and does not allow using the default Go flags, as it is using the `urfave/cli` lib.

This commit enables IPv6 support by default, since there are no security risks related to network configuration, and this makes vmctl easier to use with the default configuration.

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9116

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-06 09:40:26 +02:00
Dmitry Ponomaryov
9b54bd6e8d app/vlinsert: add logging of skipped bytes for log lines exceeding insert.maxLineSizeBytes (#9082)
### Describe Your Changes

This change adds logging of the number of skipped bytes when a log line
exceeds the configured `insert.maxLineSizeBytes`.

It helps diagnose and tune systems dealing with oversized log records by
showing how much to increase the parameter for the log to fit in
storage.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: Dmitry Ponomaryov <iamhalje@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-06-06 09:08:57 +02:00
Artur Minchukou
a90edc71c7 app/vmui/logs: optimize live tailing performance by limiting logs to 200 and notifying users (#9083)
### Describe Your Changes

Added a cap once the 200 logs per second limit is reached, along with a
notification asking the user to add a filter to the query.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-06 09:02:43 +02:00
Nils K
83deddc84c Fix typo of journald in CLI flag (#9112)
### Describe Your Changes

Fix swapped letters

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

Signed-off-by: Septatrix <24257556+septatrix@users.noreply.github.com>
2025-06-06 08:59:16 +02:00
Jose Gómez-Sellés
434cb7028c docs/cloud: add explore data page (#9113)
### Describe Your Changes

This PR adds the explore section to the docs. It emphasizes explaining
and linking assets for VMUI and
MetricsQL.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-06 08:53:59 +02:00
Zakhar Bessarab
107b6517b7 lib/netutil/netutil: fix strings index check
### Describe Your Changes

Properly check the presence of `/`; previously the check
ignored the case where "/" appears at the beginning of the string.

This is a follow-up for 00712b18

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-06 09:01:50 +04:00
Fred Navruzov
f68f5b3113 docs/vmanomaly - missing updates for v1.23.0 (#9114)
### Describe Your Changes

Some of the missing doc updates after 1.23.0 release

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-05 20:21:52 +02:00
Aliaksandr Valialkin
d49b4a7550 docs/victorialogs/cluster.md: added missing -storageNode option in the example on how to disable /insert/* requests at vlselect
This is a follow-up for 41558066db

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9061
2025-06-05 18:44:17 +02:00
Aliaksandr Valialkin
bd8b4eb78b app/vmauth: add tests for the case when url_prefix ends with / and the requested path starts with /
This is needed for verifying https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9096
2025-06-05 18:37:53 +02:00
Phuong Le
41558066db vlselect/vlinsert: allow disabling the vlinsert and vlselect endpoints (#9067)
## Problem

In vlcluster setups, components like vlselect can still accept and
forward /insert requests. The lack of strict endpoint control increases
the risk of human error and undermines deployment security boundaries.

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9061

## Fix

Add flags to disable the vlinsert and vlselect endpoints. The
`-insert.disable` flag also disables the internalinsert endpoint.
Similarly for vlselect.

---------

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-06-05 18:21:29 +02:00
Fred Navruzov
af064ca65a docs/vmanomaly - release v1.23.0 (#9111)
### Describe Your Changes

Docs update for vmanomaly v1.23.0 release

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-05 17:43:58 +02:00
Nikolay
eced71a96d lib/storage: properly load metric_usage_tracker file content
Previously, if the metric_usage_tracker file was corrupted, it prevented
VictoriaMetrics from starting and required manual action. Corruption may
happen for various reasons, such as an unclean shutdown of the process.

 This commit changes the panic into an error message, in the same way as other
caches do.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9074
2025-06-05 14:07:31 +02:00
Aliaksandr Valialkin
23fd269ccf lib/{storage,mergeset}: reduce the multi-CPU contention on global stats vars, which are updated during background merge
Background merge updates the global stats on the number of merged / deleted items. This may result in slowdowns
when multiple goroutines update these global stats at a frequent rate, since every goroutine must fetch the actual value
for the updated stats from slow memory on every update. It is much faster to count the needed stats locally in every goroutine
and then periodically update the global stats (once per ~second).

Thanks to @tIGO for the initial implementation of this idea at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8683/files#diff-95e28ae911944708f94f3bb31fa9ba8bc185dedc23ae6fb02a272c34b8f83244

This should help improving scalability of background merges on multi-CPU systems.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8682
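A sketch of the local-counting idea under assumed names (not the actual merge code): each worker accumulates into a goroutine-local counter and only rarely touches the contended global.

```go
package mergeworker

import (
	"sync/atomic"
	"time"
)

// mergeLoop counts merged items locally and flushes to the shared global
// counter about once per second, instead of hitting the contended global
// atomic on every single item.
func mergeLoop(globalItemsMerged *atomic.Uint64, items <-chan struct{}) {
	var local uint64
	lastFlush := time.Now()
	for range items {
		local++ // cheap: goroutine-local, no cross-CPU cache traffic
		if time.Since(lastFlush) > time.Second {
			globalItemsMerged.Add(local) // rare: contended global update
			local = 0
			lastFlush = time.Now()
		}
	}
	globalItemsMerged.Add(local) // flush the remainder on exit
}
```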
2025-06-05 12:24:03 +02:00
Aliaksandr Valialkin
1f5d02e059 lib: make sure that frequently updated global counters are padded in order to protect from false sharing issues on multi-CPU systems
Go linker packs global variables close to each other in the memory. This may lead to false sharing (https://en.wikipedia.org/wiki/False_sharing)
among these variables if frequently updated vars are put close to mostly read-only vars like described
at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8682 .

This commit adds padding to frequently updated global vars. This guarantees that these variables are put into distinct CPU cache lines
comparing to the rest of global variables. See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8683#issuecomment-2943254119

Thanks to @tIGO for the initial attempt to fix the issue at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8683
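A hypothetical illustration of the problem and the fix (variable names invented): without padding, the linker may place a hot counter and a read-mostly variable on the same cache line, so every write to the counter also evicts the read-mostly value from other CPUs' caches.

```go
package server

import (
	"sync/atomic"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/atomicutil"
)

var (
	requestsTotal atomic.Uint64                      // frequently updated
	_             [atomicutil.CacheLineSize - 8]byte // push the next var onto its own cache line
	maxRequestSize uint64                            // read-mostly; no longer shares a line with the hot counter
)
```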
2025-06-05 11:40:20 +02:00
Zakhar Bessarab
690aaf7d2d app/vmselect/netstorage/tenant_cache: fix inconsistent fetching of tenants list
Previously, whenever the cache returned items, the lookup of
tenants at vmstorage nodes was skipped. This led to inconsistent results
for cases when the cache contained items covering only part of the requested
time range.

Fix this by forcing a cache item to cover the full requested time range.
This forces cache hits to always be "full hits".

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9042

The target branch for this PR is another PR related to the same issue -
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9048. This is in
order to avoid additional rebasing/merging, as this PR will conflict with the
cluster branch after the initial PR merge. GH will change the target for this PR
to the cluster branch once #9048 is merged.

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-05 11:09:32 +04:00
f41gh7
1e0f7f0d28 app/vmgateway: add support of mTLS for read and write backends
This commit introduces new flags for mTLS configuration:

```
  -read.tlsCAFile
  -read.tlsCertFile
  -read.tlsInsecureSkipVerify
  -read.tlsKeyFile
  -read.tlsServerName

  -write.tlsCAFile
  -write.tlsCertFile
  -write.tlsInsecureSkipVerify
  -write.tlsKeyFile
  -write.tlsServerName
```

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8841
2025-06-04 19:36:39 +02:00
Andrii Chubatiuk
c7a16e1df6 app/vmgateway: added more select routes
This commit adds additional vmselect routes,
such as `/static`, `/api/v1/status/metric_names_stats` and others.

 In addition, it properly redirects requests to the `/vmui` and `/vmalert` endpoints. Such endpoints require preserving the trailing `/`. Previously it was omitted and redirected requests failed.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9003
2025-06-04 19:36:28 +02:00
Dmytro Kozlov
2cb909022f apptest/vmctl: migrate vmctl test for the prometheus migration process (#9047)
Moved Prometheus migration process test to the apptest folder, where all
integration tests are held

Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7700

Other tests will be migrated one by one.
2025-06-04 16:40:26 +02:00
Zakhar Bessarab
fe70b963e4 app/vmselect/netstorage: allow disabling cache for list of tenants
Properly respect passing `nocache=1` or using `search.disableCache` when
executing a query. Also allow disabling tenant cache separately in order
to make debugging easier.

Related: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9042

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-04 17:05:39 +04:00
Zakhar Bessarab
9bb726751c deployment/docker: do not update stable tag
### Describe Your Changes

Stop updating `:stable` tags as those are exactly the same as `:latest`
but not used by default by docker/podman and other commands.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-04 16:57:40 +04:00
Hui Wang
3c85ffb1e6 docs: minor fixes (#9090) 2025-06-04 10:04:40 +02:00
hagen1778
65cb6468ac docs: typo fix
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-04 10:03:27 +02:00
Robin Hayer
8e645ea708 lib/workingsetcache: log error when restoring cache from file (#8952)
### What this PR does

Log the error returned by `fastcache.LoadFromFile` before falling back to
creating a new cache instance. This improves observability and helps
detect problems like file corruption or permission issues early.

This replaces `fastcache.LoadFromFileOrNew` with a custom function
`loadFromFileOrNewWithLog` that explicitly logs errors encountered
during cache restoration.
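A sketch of what such a wrapper could look like (the actual function in the PR may differ):

```go
package workingsetcache

import (
	"github.com/VictoriaMetrics/fastcache"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)

// loadFromFileOrNewWithLog behaves like fastcache.LoadFromFileOrNew,
// but logs the load error before falling back to a fresh cache, so
// corruption or permission problems become visible in the logs.
func loadFromFileOrNewWithLog(filePath string, maxBytes int) *fastcache.Cache {
	c, err := fastcache.LoadFromFile(filePath)
	if err != nil {
		logger.Errorf("cannot load cache from %q: %s; creating a new cache", filePath, err)
		return fastcache.New(maxBytes)
	}
	return c
}
```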

---

### Related Issue

Closes #8934

---

### Test Plan

- manually tested by simulating a missing file scenario  
- ensured expected log output on cache load failure  
- verified normal cache creation fallback path  

---

### Changelog

log error when cache fails to restore from file during workingsetcache
initialization (#8934)

---

### Checklist

- [x] Signed commits  
- [x] Follows coding and commit message conventions  
- [x] Tested manually  
- [x] Scope limited to relevant change  
- [x] Changelog entry added

Co-authored-by: Robin Hayer <rshayer95@gmail.com>
Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
2025-06-04 09:56:24 +02:00
Vadim Alekseev
b95bdb5781 app/vlselect: set missing Authorization header (#9089)
### Describe Your Changes

Set missing `Authorization` header when querying a storage node and
Basic Auth is enabled.

See: #9080 

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
2025-06-04 08:57:06 +02:00
Aliaksandr Valialkin
5ecc5770c2 deployment/docker/Makefile: properly publish multi-architecture Docker images at latest and stable tags
The previous approach was assigning only the current architecture image to the `latest` and `stable` tags.

This is a follow-up for 02c03793b3

Thanks to @zekker6 for the initial attempt to address this issue at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9088 .
This attempt was using a third-party component, skopeo, which must be installed manually.
This complicates the usage of the `make publish-latest` command.

The new approach, which is implemented in this commit, is to use the standard `docker buildx imagetools create` command
for creating `latest` and `stable` tags, which contain images for all the architectures from the source tag.
See https://docs.docker.com/reference/cli/docker/buildx/imagetools/create/

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7336
2025-06-04 08:46:17 +02:00
Aliaksandr Valialkin
02c03793b3 Makefile: add TAG=v1.x.y make publish-latest command for publishing latest and stable Docker image tags from the given TAG
Add the step for running this command after publishing Docker images during the release process.
See docs/victoriametrics/Release-Guide.md

This commit resolves the issue with the missing `latest` and `stable` tags after the https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7336
This also resolves issues with accidental publishing of incorrect Docker images under the `latest` and `stable` tags.
It is very easy to fix incorrectly published `latest` and `stable` tags by re-running the `TAG=v1.x.y make publish-latest` command,
which updates the `latest` and `stable` tags, so they point to the given TAG=v1.x.y.
2025-06-03 18:02:07 +02:00
hagen1778
c74c4b24d7 docs: fix broken image in victoriametrics-cloud docs
bug was introduced in 07be0c6129

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-03 16:07:37 +02:00
hagen1778
07be0c6129 docs: fix broken links in victoriametrics-cloud docs
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-03 11:56:27 +02:00
hagen1778
826c408e0e docs: fix broken links in victorialogs docs
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-03 11:52:15 +02:00
hagen1778
913b64d9b5 docs: fix broken links in victoriametrics docs
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-03 11:49:51 +02:00
hagen1778
6b76dead5a docs: fix broken links in vmanomaly docs
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-03 11:44:10 +02:00
hagen1778
41991edb34 docs: follow the same approach for assets linking as in other docs
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-03 11:37:58 +02:00
hagen1778
eb7c21bde5 docs: fix typos and reference errors in k8s monitoring guide
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9069

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-06-03 11:32:35 +02:00
Roman Khavronenko
3cc8013dd9 docs: add guideline for merging PRs (#9066)
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2025-06-03 11:12:22 +02:00
Hui Wang
1209f33c6d doc: clarify ingested metric usage more (#9075)
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
2025-06-03 11:05:57 +02:00
maegpankey
3c87e361ba docs: fix various typos and grammar in FAQ #9072 (#9073)
### Describe Your Changes

Fixed grammatical and phrasing issues in first half of FAQ docs.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).

---------

Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
2025-06-03 11:05:19 +02:00
Aliaksandr Valialkin
f5c9c5bf01 lib/logstorage: allow using prefix filters on log fields in some LogsQL pipes
This should simplify working with a big number of log fields in LogsQL queries.
Examples:

- `... | keep foo*` leaves only fields starting with `foo` prefix
- `... | rm foo*` removes all the fields starting with `foo` prefix
- `... | mv foo* bar*` replaces `foo` prefix with `bar` prefix in log fields
- `... | sum(foo*)` sums all the log fields starting with `foo` prefix
2025-06-02 22:41:57 +02:00
Aliaksandr Valialkin
7712a34ba6 docs/victorialogs/CHANGELOG.md: typo fix - use the proper link to v1.18.0-victorialogs release 2025-06-02 21:47:04 +02:00
Aliaksandr Valialkin
d890bf52fe docs/victorialogs/CHANGELOG.md: add missing closing brace 2025-06-02 21:45:08 +02:00
Aliaksandr Valialkin
f52478dac7 deployment: update VictoriaLogs Docker image tag from v1.23.2-victorialogs to v1.23.3-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.23.3-victorialogs
2025-06-02 21:43:53 +02:00
Aliaksandr Valialkin
bcc2c85e53 docs/victorialogs/CHANGELOG.md: cut the release v1.23.3-victorialogs 2025-06-02 21:38:15 +02:00
Aliaksandr Valialkin
001f9218b1 app/vlselect: properly sort results for /select/logsql/query with limit query arg and for /select/logsql/tail
The DataBlock.GetTimestamps() was returning a slice of strings, which belong to the DataBlock.
These strings are changed whenever the DataBlock is re-used for the next block.
So these strings couldn't be assigned to logRow.timestamp and to tailProcessor.lastTimestamps,
which outlive the DataBlock. The commit aa8c18fc9f5d44091d7ca92be6935eeaf3b85d7f broke this assumption,
which triggered the following bugs:

1. The bug, which could return incorrectly sorted results from /select/logsql/query when the 'limit' query arg is passed to it.
   The endpoint must return the last 'limit' log entries on the selected time range in this case, and these log entries
   must be sorted by _time.

2. The bug, which could return incorrect results from /select/logsql/tail (e.g. it could incorrectly skip some matching logs,
   it could return the same logs multiple times and it could return out-of-order logs without proper sorting by _time).

The solution is to return parsed timestamps from the DataBlock.GetTimestamps() function, so they could be safely
used by the caller without worries that they could be changed while in use.
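A tiny standalone illustration of the aliasing hazard versus the copied-out fix (simplified; not the actual DataBlock code):

```go
package main

import "fmt"

func main() {
	// A reusable buffer, standing in for DataBlock's internals (simplified):
	buf := []byte("111")
	view := buf[:]        // aliasing view - like the old timestamp strings
	parsed := string(buf) // copied value - like the parsed timestamps in the fix
	copy(buf, "222")      // the block gets reused for the next data block
	// Prints "222 111": the aliasing view mutated, the copy survived.
	fmt.Println(string(view), parsed)
}
```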
2025-06-02 21:34:02 +02:00
Aliaksandr Valialkin
f7fc897f85 docs/victorialogs: add a link to the post from the user who migrated from 27-node Elasticsearch to a single-node VictoriaLogs
The link is https://aus.social/@phs/114583927679254536
2025-06-02 19:18:15 +02:00
Aliaksandr Valialkin
e58b512305 docs/victorialogs/data-ingestion/DataDogAgent.md: add commonly used alias in the Internet for this page - https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog/
The https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog/ shows in Google Analytics report for 404 pages.
2025-06-02 18:19:06 +02:00
Aliaksandr Valialkin
d33efbbd95 app/vlselect: drop all the pipes from LogsQL query passed to HTTP querying APIs used in auto-suggestion
Auto-suggestion expects field names and values from the real logs stored in the database.
It doesn't expect field names and values created by pipes.

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9068#issuecomment-2931275012
2025-06-02 17:53:23 +02:00
Zhu Jiekun
23cb0475e9 vmselect: remove tenant info when exporting data in native format
Fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9016.

Data will carry `vm_account_id` and `vm_project_id` labels when
exporting with native export API in cluster.

These labels could be treated as normal labels and be imported into a
VictoriaMetrics cluster, making them inconsistent with the source metrics
data.

e.g.:
1. source data: `{__name__="metrics_test"}`.
2. exported data: `{__name__="metrics_test", vm_account_id="0",
vm_project_id="0"}`.
3. re-imported data: `{__name__="metrics_test", vm_account_id="0",
vm_project_id="0", vm_account_id="0", vm_project_id="0"}`.
4. query result for MetricsQL `metrics_test{}`:
`{__name__="metrics_test", vm_account_id="0", vm_project_id="0"}`.
5. expect query result: `{__name__="metrics_test"}`

In a VictoriaMetrics cluster, the `vm_account_id` and `vm_project_id` labels
are only useful when doing multi-tenant export/import. So they should be
removed if the export URL is not a multi-tenant one.

This pull request:
- properly remove tenant info when exporting data in native format.

Note:
- Commit 67514c37ef23c22b91638e80e30504be23fa8dc1 is for apptest and
need to be cherry pick to master branch cc @rtm0 .

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-02 19:17:44 +04:00
Nick Yang
3d3fcf8fcb docs/contribution: fix makefile target typo
### Describe Your Changes

`tests-full` (plural) target doesn't exist, but `test-full` (singular) does

discovered while working through unrelated PR

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
2025-06-02 19:00:18 +04:00
Zakhar Bessarab
d99e3e52f3 lib/backup: add support of object metadata configuration
Add an option to configure metadata of objects when uploading backups.
For AWS S3 also support using object tagging.

Using object metadata is useful for getting extended reports
about bucket content and billing details. It is also useful when
querying bucket content based on metadata.

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8010

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-06-02 18:52:05 +04:00
Aliaksandr Valialkin
bbcfc0ce59 vendor: run make vendor-update 2025-06-02 16:10:26 +02:00
Aliaksandr Valialkin
d9ac6867cb vendor: update github.com/valyala/gozstd from v1.21.2 to v1.22.0
This updates upstream zstd from v1.5.6 to v1.5.7 . See https://github.com/facebook/zstd/releases/tag/v1.5.7
2025-06-02 15:52:54 +02:00
Zakhar Bessarab
00712b184b app/netstorage: improve validation for address provided at storageNode
Previously, the address was always parsed as "host:port", with the port added
if it was missing. This led to hard-to-understand errors when the address
was provided in "http://host:port" format.

Improve validation in order to provide a more precise error message
for invalid address formats.

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9029

Previous error message: `cannot dial storageNode
"http://localhost:8488": dial tcp4: address http://localhost:8488: too
many colons in address`, and vminsert continued running.
Current error message: `cannot normalize
-storageNode="http://localhost:8480": invalid address
"http://localhost:8480"; expected format: host:port`, and vminsert exits
with an error status code.
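A minimal sketch of the stricter validation, with an assumed helper name and an assumed default port (the real code in app/netstorage may differ):

```go
package netstorage

import (
	"fmt"
	"net"
	"strings"
)

// normalizeStorageNodeAddr rejects URL-like values instead of blindly
// treating them as host:port, producing a precise error message.
func normalizeStorageNodeAddr(addr string) (string, error) {
	if strings.Contains(addr, "://") {
		return "", fmt.Errorf("invalid address %q; expected format: host:port", addr)
	}
	if !strings.Contains(addr, ":") {
		addr += ":8400" // assumed default port, for illustration only
	}
	if _, _, err := net.SplitHostPort(addr); err != nil {
		return "", fmt.Errorf("invalid address %q; expected format: host:port", addr)
	}
	return addr, nil
}
```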

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-02 13:29:15 +04:00
Zakhar Bessarab
30ca617960 lib/promrelabel: follow up for aef59d9
Sync quick-template to add missing comma for the resulting JSON.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-06-02 11:22:01 +04:00
f41gh7
aba5205896 lib/storage: properly apply retentionFilter changes
Previously, if the value of retentionFilter was changed within the same
retention, storage didn't start background merge for historical data.

 This commit changes this behaviour by writing the applied
filters into metadata.json. For backward compatibility it reads the content
of the appliedRetention.txt file. This should prevent triggering
background merges on storage update. If needed, manually remove the appliedRetention.txt file from
the storage/data/PART folder and remove storage.

 Also, it properly applies retentionFilter for data back-filling.
Previously, it was ignored and data outside of retention could be
ingested.

 In addition, it changes the scheduling of historical merges.
Instead of 2 separate background processes, storage launches a single
thread. This reduces CPU and disk IO usage.

Related issues:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8885
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4592
2025-05-30 19:38:05 +02:00
Zakhar Bessarab
aef59d9281 lib/promrelabel: prevent panic caused by invalid label name or value in debug interface
### Describe Your Changes

Previously, an invalid label name or value could cause a panic of vmselect
or vmsingle, as the code was using MustNewLabelsFromString, which was added
for usage in tests only.

Fix this by properly handling the error and propagating it to the user
interface if there is any.

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8661

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-05-30 19:22:48 +02:00
Zakhar Bessarab
b1582b3012 app/vmbackupmanager: verify backup availability when creating a restore mark
Previously, a restore mark could be created for a backup which does not exist or is incomplete. This would lead to a crash when attempting to perform the restore later on.

This commit adds verification of backup availability and completeness to prevent such issues from happening. It also adds a verification bypass mechanism for cases when the user wants to create a restore mark for a backup which is not currently available.

related issues:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5361
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8771
2025-05-30 15:20:44 +02:00
f41gh7
dd769d87c0 app/vmbackupmanager: add support for user-defined timezone for backup scheduling
This is useful for creating backups at midnight in a timezone specific to the user, making sure backups are taken at off-peak hours of operations.

See these issues for details:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6707
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3950

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-05-30 15:20:32 +02:00
Zhu Jiekun
febe9a2882 vmselect: dynamic concurrent dial limit
- dynamically adjusts the concurrent dial limit between 8 and 64 based
on the `-search.maxConcurrentRequests`.
- goroutines now have the chance to access available connections while
awaiting the dial limit token.

Related PR:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8922
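A sketch of the clamping rule described above (assumed function name):

```go
package netstorage

// concurrentDialLimit follows -search.maxConcurrentRequests,
// bounded to the [8, 64] range.
func concurrentDialLimit(maxConcurrentRequests int) int {
	switch {
	case maxConcurrentRequests < 8:
		return 8
	case maxConcurrentRequests > 64:
		return 64
	default:
		return maxConcurrentRequests
	}
}
```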
2025-05-30 15:14:07 +02:00
Aliaksandr Valialkin
337ccd7c62 deployment: update VictoriaLogs Docker image tag from v1.23.1-victorialogs to v1.23.2-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.23.2-victorialogs
2025-05-30 00:25:23 +02:00
Aliaksandr Valialkin
c9789b3c18 docs/victorialogs/CHANGELOG.md: cut v1.23.2-victorialogs release 2025-05-30 00:21:40 +02:00
Aliaksandr Valialkin
c9db487613 app/vlselect/vmui: run make vmui-logs-update after the commit 51fdd885ea 2025-05-30 00:20:23 +02:00
Aliaksandr Valialkin
77fffb4dc7 deployment: update VictoriaLogs Docker image tag from v1.23.0-victorialogs to v1.23.1-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.23.1-victorialogs
2025-05-30 00:05:26 +02:00
Aliaksandr Valialkin
8701ec0968 docs/victorialogs/CHANGELOG.md: cut v1.23.1-victorialogs release 2025-05-29 23:58:00 +02:00
Aliaksandr Valialkin
94f3302aca lib/logstorage: properly handle stats pipe in multi-level cluster setup when a vlselect queries another vlselect, which, in turn, queries vlstorage or another vlselect
The intermediate `vlselect` should properly proxy the `stats` state from the lower-level nodes to the upper-level `vlselect`.
Previously it was finalizing the state instead of proxying it to the upper-level `vlselect`, so the upper-level `vlselect`
couldn't read it.

Fix this by introducing `proxy` mode for `stats` pipe. This mode accepts state from lower-level node, aggregates the state
and then proxies it to the upper node.

Thanks to @AndrewChubatiuk for the initial attempt to fix this issue at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9023 .
Thanks to @func25 for the idea with introduction of a new `proxy` mode for `stats` pipe at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9023/files#r2107735835 ,
which has been implemented in this commit. This approach results in less code changes comparing to the approach
taken at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9023

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8815
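An illustrative sketch of the difference between finalizing and proxying, with invented types (the real `stats` pipe state machinery is more complex):

```go
package stats

// State is an invented interface for illustration only.
type State interface {
	Merge(other State) // combine partial state from a lower-level node
	Finalize() float64 // collapse into the final value (top-level node only)
	Export() []byte    // serialized state to forward upstream (proxy mode)
}

// ProxyStats shows the proxy-mode behaviour: an intermediate vlselect merges
// lower-level states and re-exports the merged state instead of finalizing it.
func ProxyStats(states []State) []byte {
	merged := states[0] // assumes at least one lower-level node responded
	for _, s := range states[1:] {
		merged.Merge(s)
	}
	return merged.Export()
}
```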
2025-05-29 20:59:05 +02:00
f41gh7
16909a2b6b lib/storage/downsampling: revert pre-filtering optimisation for downsampling rules
Skipping downsampling rules with filters based on the timestamp and offset leads to unexpected behaviour in case both rules with and without filters are present.

For example, with the following configuration: `-downsampling.period='{__name__="foo"}:60d:2m,7d:4m'`
The user would expect `foo` metrics to be downsampled to 2m intervals only after 60d. But actually the pre-filter would skip the scoped rule and use the global rule after 7d with a 4m interval.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8969
2025-05-29 19:13:14 +02:00
Yury Molodov
51fdd885ea vmui/logs: fix query trigger when chart is hidden (#9006)
### Describe Your Changes

Fix an issue where queries were not triggered when relative time was
selected and the chart was hidden.
Related issue: #8983 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
2025-05-29 15:43:50 +02:00
Yury Molodov
a213f5a423 vmui/logs: add sort pipe handling (#9004)
### Describe Your Changes

* UI now respects the `sort by` pipe in queries — if it's present, the
order returned by the server is preserved. Related issue: #8660.
* If no `sort by` pipe is used, logs are reversed on the client to show
the newest entries first (since VictoriaLogs returns them in ascending
time order — [see this in the
code](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vlselect/logsql/logsql.go#L1047)).
* Removed redundant client-side time-based sorting logic.

Additionally:

* Log record fields are now sorted alphabetically in UI selectors such
as **Group by field**, **Display fields**, and **Customize columns**.
Related issue: #8438.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
2025-05-29 15:41:02 +02:00
Roman Khavronenko
3a812a8b28 apptest: run tests in parallel to improve testing speed (#9050)
This change reduces integration tests time from 90s to 30s:
1. before
https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/15321154610/job/43105211053?pr=9048#step:5:2
2. after
https://github.com/VictoriaMetrics/VictoriaMetrics/actions/runs/15324035886/job/43114340500#step:5:2

Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 70f8c60c96)
2025-05-29 15:38:35 +02:00
hagen1778
4375699013 docs: move change 53a6bbfdf8
Move change 53a6bbfdf8 to the actual
release. Before, it was mistakenly merged into the previous release.

Re-classify the change from BUGFIX to FEATURE for the following reasons:
* the risk of facing this issue is low, as it reveals itself only for short staleness intervals
* it slightly changes the increase_pure logic in a good way. But it is still a change, not a bugfix.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-05-29 11:40:26 +02:00
Roman Khavronenko
53a6bbfdf8 app/vmselect/promql: detect staleness between real timestamps for increase, increase_pure or delta (#9000)
This change has effect only if one of the flags below are set:
`-search.maxLookback`, `-search.setLookbackToStep` or
`-search.maxStalenessInterval`

These flags instruct query engine to ignore data points outside of the
look-behind window if these data points are beyond the staleness
interval.

This logic is used for `removeCounterResets` function, and in functions
`increase`, `increase_pure` or `delta`. The bug described in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8935 hit the
corner case when `removeCounterResets` detected the stale series and
`increase` did not.

The reason why staleness detection failed for `increase` is that
`removeCounterResets` calculates the interval between real data points, while
`realPrevValue` (that is used by those functions) calculates the
difference between the look-behind window start and the previous data point.
This, at smaller gaps or smaller staleness intervals, could affect
staleness detection and make it differ from `removeCounterResets`.

This change makes `realPrevValue` account for staleness between the first
data point in the captured look-behind window and the previous data point.

-------

While there, also updated the `increase_pure` logic. It was changed in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1381 without a
good explanation. It turns out that `increase_pure` always compared the last
value on the interval with the value before the interval, while other
increase or delta functions compared it with the first data point on the
interval, and only if it is missing - with the realPrevValue.

This change makes the `increase_pure` logic consistent with other similar
functions. The reason why it is not a separate PR is that tests
started to fail once the `realPrevValue` calculation logic changed, and
there was no good solution for isolating this change.
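A hypothetical distillation of the corrected staleness check (names assumed; not the actual app/vmselect/promql code): the data point preceding the look-behind window may be used as realPrevValue only when the gap between it and the first captured point stays within the staleness interval, which is the same rule `removeCounterResets` applies between real points.

```go
package promql

// canUseRealPrevValue reports whether the point before the look-behind
// window is close enough to the first captured point to be treated as
// part of the same (non-stale) series.
func canUseRealPrevValue(prevTimestamp, firstTimestamp, stalenessInterval int64) bool {
	return firstTimestamp-prevTimestamp <= stalenessInterval
}
```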

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
2025-05-29 11:23:46 +02:00
Hui Wang
897f1b97e3 docs: fix cmd-line flag default values in description (#9008)
follow up
b9f080321c
2025-05-29 11:19:13 +02:00
Hui Wang
309f1898b3 alerts: fix the alerting rule ScrapePoolHasNoTargets (#9045)
as it may cause false positives in [sharding
mode](https://docs.victoriametrics.com/victoriametrics/vmagent/#scraping-big-number-of-targets)

related https://github.com/VictoriaMetrics/helm-charts/issues/2200
2025-05-29 11:17:52 +02:00
Alexander Marshalov
8998526384 docs/vmcloud: add info about api go client to the docs (#9040)
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
2025-05-29 00:32:23 +02:00
Aliaksandr Valialkin
e55e2a4274 docs/victoriametrics/Cluster-VictoriaMetrics.md: update description for the -downsampling.period command-line flag after the commit 7dbfe1e5474b69cd1ab7cc8b5e936b6dc62d5f71 2025-05-29 00:13:45 +02:00
Aliaksandr Valialkin
29ec5d2898 all: consistently end docs.victoriametrics.com urls with /
URLs to docs.victoriametrics.com which do not end with `/` are working, but they lead to an unnecessary redirect to the /index.html URL,
which breaks backwards navigation. For example, https://docs.victoriametrics.com/victoriametrics/integrations/prometheus
redirects to https://docs.victoriametrics.com/victoriametrics/integrations/prometheus/index.html .

So it is better to consistently end all the URLs to docs.victoriametrics.com with `/` in order to prevent
the unnecessary redirect and preserve backwards navigation. E.g. https://docs.victoriametrics.com/victoriametrics/integrations/prometheus
is replaced with https://docs.victoriametrics.com/victoriametrics/integrations/prometheus/ , etc.

This is a follow-up for commits starting from 6ec422160b
2025-05-29 00:05:30 +02:00
Aliaksandr Valialkin
adef9693af docs/victoriametrics-cloud: consistently use absolute links to VictoriaMetrics Cloud docs after the commit cddf36af43 2025-05-29 00:05:30 +02:00
Aliaksandr Valialkin
8f01ac42a8 docs/victoriametrics/integrations/prometheus.md: add an alias /data-ingestion/prometheus/ , since it is already used all over the Internet after the commit a46d554f74
The link to https://docs.victoriametrics.com/victoriametrics/data-ingestion/prometheus/ became broken after the commit 7d199d1d83,
which renamed the link to https://docs.victoriametrics.com/victoriametrics/integrations/prometheus .
2025-05-29 00:05:29 +02:00
Vadim Alekseev
8223a5235f deployment/logs-benchmark: fix URLs to benchmark data (#9030)
### Describe Your Changes

When downloading archives for benchmarks, an error appears saying that
the archive was placed in a new path.

The error could have been prevented by providing the `-L (--location)`
flag that would tell curl to follow the redirect, so in addition to
updating the paths, this flag was added.

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
2025-05-28 16:39:06 +02:00
Cheyi Lin
fe5f2bd5d7 dashboards: fix newline escape in panel descriptions (#9036)
### Describe Your Changes

Fix extra newline escape characters in panel descriptions.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
2025-05-28 16:37:46 +02:00
Aliaksandr Valialkin
00075ac4ee deployment: update VictoriaLogs Docker image tag from v1.22.2-victorialogs to v1.23.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.23.0-victorialogs
2025-05-28 14:26:55 +02:00
Aliaksandr Valialkin
3f39946f99 docs/victorialogs/CHANGELOG.md: cut v1.23.0-victorialogs release 2025-05-28 14:20:21 +02:00
Aliaksandr Valialkin
1ddfd55e51 docs/victorialogs/logsql-examples.md: add an example how to get duration since the last seen log, which matches the given filter
This is a follow-up for 5bb012b67b

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9013
2025-05-28 14:04:14 +02:00
Phuong Le
5bb012b67b logsql: math now() (#9014)
Resolves https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9013
2025-05-28 13:43:23 +02:00
Phuong Le
78fb987bef vlstorage: automatically recover missing parts.json files on startup (#9007)
Fixes
[#8873](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8873).

Automatically recover missing `parts.json` files on startup.
VictoriaLogs now scans existing part directories and recreates missing
`parts.json` files instead of crashing. This aligns with
VictoriaMetrics' approach.

---------

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-05-28 13:19:05 +02:00
Aliaksandr Valialkin
a0084dc223 docs/victorialogs/LogsQL.md: remove superfluous "returns" word 2025-05-27 15:56:41 +02:00
982 changed files with 34832 additions and 103104 deletions

View File

@@ -6,4 +6,4 @@ Please provide a brief description of the changes you made. Be as specific as po
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).

View File

@@ -195,6 +195,25 @@ vmutils-crossbuild: \
vmutils-openbsd-amd64 \
vmutils-windows-amd64
publish-latest:
PKG_TAG=$(TAG) APP_NAME=victoria-metrics $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmagent $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmalert $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmalert-tool $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmauth $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmbackup $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmrestore $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmctl $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG)-cluster APP_NAME=vminsert $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG)-cluster APP_NAME=vmselect $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG)-cluster APP_NAME=vmstorage $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmgateway $(MAKE) publish-via-docker-latest
PKG_TAG=$(TAG)-enterprise APP_NAME=vmbackupmanager $(MAKE) publish-via-docker-latest
publish-victoria-logs-latest:
PKG_TAG=$(TAG) APP_NAME=victoria-logs $(MAKE) publish-via-docker-latest
PKG_TAG=$(TAG) APP_NAME=vlogscli $(MAKE) publish-via-docker-latest
publish-release:
rm -rf bin/*
git checkout $(TAG) && $(MAKE) release && $(MAKE) publish && \
@@ -526,7 +545,7 @@ test-full:
test-full-386:
GOEXPERIMENT=synctest GOARCH=386 go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
integration-test: victoria-metrics vmagent vmalert vmauth
integration-test: victoria-metrics vmagent vmalert vmauth vmctl vmbackup vmrestore victoria-logs
go test ./apptest/... -skip="^TestCluster.*"
benchmark:

View File

@@ -40,16 +40,16 @@ VictoriaMetrics is optimized for timeseries data, even when old time series are
* **Easy to setup**: No dependencies, single [small binary](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d), configuration through command-line flags, but the default is also fine-tuned; backup and restore with [instant snapshots](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).
* **Global query view**: Multiple Prometheus instances or any other data sources may ingest data into VictoriaMetrics and queried via a single query.
* **Various Protocols**: Support metric scraping, ingestion and backfilling in various protocol.
* [Prometheus exporters](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-scrape-prometheus-exporters-such-as-node-exporter), [Prometheus remote write API](https://docs.victoriametrics.com/victoriametrics/integrations/prometheus), [Prometheus exposition format](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-import-data-in-prometheus-exposition-format).
* [InfluxDB line protocol](https://docs.victoriametrics.com/victoriametrics/integrations/influxdb) over HTTP, TCP and UDP.
* [Prometheus exporters](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-scrape-prometheus-exporters-such-as-node-exporter), [Prometheus remote write API](https://docs.victoriametrics.com/victoriametrics/integrations/prometheus/), [Prometheus exposition format](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-import-data-in-prometheus-exposition-format).
* [InfluxDB line protocol](https://docs.victoriametrics.com/victoriametrics/integrations/influxdb/) over HTTP, TCP and UDP.
* [Graphite plaintext protocol](https://docs.victoriametrics.com/victoriametrics/integrations/graphite/#ingesting) with [tags](https://graphite.readthedocs.io/en/latest/tags.html#carbon).
* [OpenTSDB put message](https://docs.victoriametrics.com/victoriametrics/integrations/opentsdb#sending-data-via-telnet).
* [HTTP OpenTSDB /api/put requests](https://docs.victoriametrics.com/victoriametrics/integrations/opentsdb#sending-data-via-http).
* [OpenTSDB put message](https://docs.victoriametrics.com/victoriametrics/integrations/opentsdb/#sending-data-via-telnet).
* [HTTP OpenTSDB /api/put requests](https://docs.victoriametrics.com/victoriametrics/integrations/opentsdb/#sending-data-via-http).
* [JSON line format](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-import-data-in-json-line-format).
* [Arbitrary CSV data](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-import-csv-data).
* [Native binary format](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-import-data-in-native-format).
* [DataDog agent or DogStatsD](https://docs.victoriametrics.com/victoriametrics/integrations/datadog).
* [NewRelic infrastructure agent](https://docs.victoriametrics.com/victoriametrics/integrations/newrelic#sending-data-from-agent).
* [DataDog agent or DogStatsD](https://docs.victoriametrics.com/victoriametrics/integrations/datadog/).
* [NewRelic infrastructure agent](https://docs.victoriametrics.com/victoriametrics/integrations/newrelic/#sending-data-from-agent).
* [OpenTelemetry metrics format](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#sending-data-via-opentelemetry).
* **NFS-based storages**: Supports storing data on NFS-based storages such as Amazon EFS, Google Filestore.
* And many other features such as metrics relabeling, cardinality limiter, etc.

View File

@@ -8,6 +8,7 @@ import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlselect"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
@@ -44,6 +45,8 @@ func main() {
vlstorage.Init()
vlselect.Init()
insertutil.SetLogRowsStorage(&vlstorage.Storage{})
vlinsert.Init()
go httpserver.Serve(listenAddrs, requestHandler, httpserver.ServeOptions{

View File

@@ -11,7 +11,6 @@ import (
"github.com/valyala/fastjson"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
@@ -33,10 +32,10 @@ var parserPool fastjson.ParserPool
// RequestHandler processes Datadog insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
case "/api/v1/validate":
case "/insert/datadog/api/v1/validate":
fmt.Fprintf(w, `{}`)
return true
case "/api/v2/logs":
case "/insert/datadog/api/v2/logs":
return datadogLogsIngestion(w, r)
default:
return false
@@ -74,7 +73,7 @@ func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
cp.IgnoreFields = *datadogIgnoreFields
}
if err := vlstorage.CanWriteData(); err != nil {
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
@@ -102,7 +101,7 @@ func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
var (
v2LogsRequestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/datadog/api/v2/logs"}`)
v2LogsRequestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/datadog/api/v2/logs"}`)
v2LogsRequestDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/datadog/api/v2/logs"}`)
)
// datadog message field has two formats:

View File

@@ -11,7 +11,6 @@ import (
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bufferedwriter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
@@ -31,36 +30,38 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
// This header is needed for Logstash
w.Header().Set("X-Elastic-Product", "Elasticsearch")
if strings.HasPrefix(path, "/_ilm/policy") {
if strings.HasPrefix(path, "/insert/elasticsearch/_ilm/policy") {
// Return fake response for Elasticsearch ilm request.
fmt.Fprintf(w, `{}`)
return true
}
if strings.HasPrefix(path, "/_index_template") {
if strings.HasPrefix(path, "/insert/elasticsearch/_index_template") {
// Return fake response for Elasticsearch index template request.
fmt.Fprintf(w, `{}`)
return true
}
if strings.HasPrefix(path, "/_ingest") {
if strings.HasPrefix(path, "/insert/elasticsearch/_ingest") {
// Return fake response for Elasticsearch ingest pipeline request.
// See: https://www.elastic.co/guide/en/elasticsearch/reference/8.8/put-pipeline-api.html
fmt.Fprintf(w, `{}`)
return true
}
if strings.HasPrefix(path, "/_nodes") {
if strings.HasPrefix(path, "/insert/elasticsearch/_nodes") {
// Return fake response for Elasticsearch nodes discovery request.
// See: https://www.elastic.co/guide/en/elasticsearch/reference/8.8/cluster.html
fmt.Fprintf(w, `{}`)
return true
}
if strings.HasPrefix(path, "/logstash") || strings.HasPrefix(path, "/_logstash") {
if strings.HasPrefix(path, "/insert/elasticsearch/logstash") || strings.HasPrefix(path, "/insert/elasticsearch/_logstash") {
// Return fake response for Logstash APIs requests.
// See: https://www.elastic.co/guide/en/elasticsearch/reference/8.8/logstash-apis.html
fmt.Fprintf(w, `{}`)
return true
}
switch path {
case "/", "":
// some clients may omit trailing slash
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8353
case "/insert/elasticsearch/", "/insert/elasticsearch":
switch r.Method {
case http.MethodGet:
// Return fake response for Elasticsearch ping request.
@@ -75,7 +76,7 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
}
return true
case "/_license":
case "/insert/elasticsearch/_license":
// Return fake response for Elasticsearch license request.
fmt.Fprintf(w, `{
"license": {
@@ -86,7 +87,7 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
}
}`)
return true
case "/_bulk":
case "/insert/elasticsearch/_bulk":
startTime := time.Now()
bulkRequestsTotal.Inc()
@@ -95,7 +96,7 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
httpserver.Errorf(w, r, "%s", err)
return true
}
if err := vlstorage.CanWriteData(); err != nil {
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
@@ -128,7 +129,7 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
var (
bulkRequestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/elasticsearch/_bulk"}`)
bulkRequestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/elasticsearch/_bulk"}`)
bulkRequestDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/elasticsearch/_bulk"}`)
)
func readBulkRequest(streamName string, r io.Reader, encoding string, timeFields, msgFields []string, lmp insertutil.LogMessageProcessor) (int, error) {

View File

@@ -11,7 +11,6 @@ import (
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
@@ -36,6 +35,7 @@ type CommonParams struct {
DecolorizeFields []string
ExtraFields []logstorage.Field
IsTimeFieldSet bool
Debug bool
DebugRequestURI string
DebugRemoteAddr string
@@ -49,8 +49,10 @@ func GetCommonParams(r *http.Request) (*CommonParams, error) {
return nil, err
}
var isTimeFieldSet bool
timeFields := []string{"_time"}
if tfs := httputil.GetArray(r, "_time_field", "VL-Time-Field"); len(tfs) > 0 {
isTimeFieldSet = true
timeFields = tfs
}
@@ -86,9 +88,11 @@ func GetCommonParams(r *http.Request) (*CommonParams, error) {
IgnoreFields: ignoreFields,
DecolorizeFields: decolorizeFields,
ExtraFields: extraFields,
Debug: debug,
DebugRequestURI: debugRequestURI,
DebugRemoteAddr: debugRemoteAddr,
IsTimeFieldSet: isTimeFieldSet,
Debug: debug,
DebugRequestURI: debugRequestURI,
DebugRemoteAddr: debugRemoteAddr,
}
return cp, nil
@@ -141,6 +145,29 @@ func GetCommonParamsForSyslog(tenantID logstorage.TenantID, streamFields, ignore
return cp
}
// LogRowsStorage is an interface for ingesting logs into the storage.
type LogRowsStorage interface {
// MustAddRows must add lr to the underlying storage.
MustAddRows(lr *logstorage.LogRows)
// CanWriteData must returns non-nil error if logs cannot be added to the underlying storage.
CanWriteData() error
}
var logRowsStorage LogRowsStorage
// SetLogRowsStorage sets the storage for writing data to via LogMessageProcessor.
//
// This function must be called before using LogMessageProcessor and CanWriteData from this package.
func SetLogRowsStorage(storage LogRowsStorage) {
logRowsStorage = storage
}
// CanWriteData returns non-nil error if data cannot be written to the underlying storage.
func CanWriteData() error {
return logRowsStorage.CanWriteData()
}
// LogMessageProcessor is an interface for log message processors.
type LogMessageProcessor interface {
// AddRow must add row to the LogMessageProcessor with the given timestamp and fields.
@@ -264,7 +291,7 @@ func (lmp *logMessageProcessor) AddInsertRow(r *logstorage.InsertRow) {
// flushLocked must be called under locked lmp.mu.
func (lmp *logMessageProcessor) flushLocked() {
lmp.lastFlushTime = time.Now()
vlstorage.MustAddRows(lmp.lr)
logRowsStorage.MustAddRows(lmp.lr)
lmp.lr.ResetKeepSettings()
}

View File

@@ -56,6 +56,7 @@ func NewLineReader(name string, r io.Reader) *LineReader {
// Check for Err in this case.
func (lr *LineReader) NextLine() bool {
for {
lr.Line = nil
if lr.bufOffset >= len(lr.buf) {
if lr.err != nil || lr.eofReached {
return false
@@ -101,9 +102,11 @@ func (lr *LineReader) readMoreData() bool {
bufLen := len(lr.buf)
if bufLen >= MaxLineSizeBytes.IntN() {
logger.Warnf("%s: the line length exceeds -insert.maxLineSizeBytes=%d; skipping it; line contents=%q", lr.name, MaxLineSizeBytes.IntN(), lr.buf)
ok, skippedBytes := lr.skipUntilNextLine()
logger.Warnf("%s: the line length exceeds -insert.maxLineSizeBytes=%d; skipping it; total skipped bytes=%d",
lr.name, MaxLineSizeBytes.IntN(), skippedBytes)
tooLongLinesSkipped.Inc()
return lr.skipUntilNextLine()
return ok
}
lr.buf = slicesutil.SetLength(lr.buf, MaxLineSizeBytes.IntN())
@@ -121,26 +124,35 @@ func (lr *LineReader) readMoreData() bool {
var tooLongLinesSkipped = metrics.NewCounter("vl_too_long_lines_skipped_total")
func (lr *LineReader) skipUntilNextLine() bool {
func (lr *LineReader) skipUntilNextLine() (bool, int) {
// Initialize skipped bytes count with MaxLineSizeBytes because
// we've already read that many bytes without encountering a newline,
// indicating the line size exceeds the maximum allowed limit.
skipSizeBytes := MaxLineSizeBytes.IntN()
for {
lr.buf = slicesutil.SetLength(lr.buf, MaxLineSizeBytes.IntN())
n, err := lr.r.Read(lr.buf)
skipSizeBytes += n
lr.buf = lr.buf[:n]
if err != nil {
if errors.Is(err, io.EOF) {
lr.eofReached = true
lr.buf = lr.buf[:0]
return true
return true, skipSizeBytes
}
lr.err = fmt.Errorf("cannot skip the current line: %s", err)
return false
return false, skipSizeBytes
}
if n := bytes.IndexByte(lr.buf, '\n'); n >= 0 {
// Include skipped bytes before \n, including the newline itself.
skipSizeBytes += n + 1 - len(lr.buf)
// Include \n in the buf, so the too-long line is replaced with an empty line.
// This is needed for maintaining synchronization consistency between lines
// in protocols such as Elasticsearch bulk import.
lr.buf = append(lr.buf[:0], lr.buf[n:]...)
return true
return true, skipSizeBytes
}
}
}

View File

@@ -24,6 +24,9 @@ func TestLineReader_Success(t *testing.T) {
if lr.NextLine() {
t.Fatalf("expecting error on the second call to NextLine()")
}
if len(lr.Line) > 0 {
t.Fatalf("unexpected non-empty line after failed NextLine(): %q", lr.Line)
}
if !reflect.DeepEqual(lines, linesExpected) {
t.Fatalf("unexpected lines\ngot\n%q\nwant\n%q", lines, linesExpected)
}

View File

@@ -38,7 +38,10 @@ func ExtractTimestampFromFields(timeFields []string, fields []logstorage.Field)
}
func parseTimestamp(s string) (int64, error) {
if s == "" || s == "0" {
// "-" is a nil timestamp value, if the syslog
// application is incapable of obtaining system time
// https://datatracker.ietf.org/doc/html/rfc5424#section-6.2.3
if s == "" || s == "0" || s == "-" {
return time.Now().UnixNano(), nil
}
if len(s) <= len("YYYY") || s[len("YYYY")] != '-' {

View File

@@ -133,6 +133,33 @@ func TestExtractTimestampFromFields_Success(t *testing.T) {
}, 1718773640000000000)
}
func TestExtractTimestampFromFields_Now(t *testing.T) {
f := func(timeField string, fields []logstorage.Field) {
t.Helper()
nsecs, err := ExtractTimestampFromFields([]string{timeField}, fields)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if nsecs < 1 {
t.Fatalf("expected generated timestamp, got error: %s", err)
}
}
// RFC5424 allows `-` for nil timestamp (log ingestion time)
f("time", []logstorage.Field{
{Name: "time", Value: "-"},
})
f("time", []logstorage.Field{
{Name: "time", Value: ""},
})
f("time", []logstorage.Field{
{Name: "time", Value: "0"},
})
}
func TestExtractTimestampFromFields_Error(t *testing.T) {
f := func(s string) {
t.Helper()

View File

@@ -1,7 +1,6 @@
package internalinsert
import (
"flag"
"fmt"
"net/http"
"time"
@@ -9,7 +8,6 @@ import (
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage/netinsert"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
@@ -18,17 +16,11 @@ import (
)
var (
disableInsert = flag.Bool("internalinsert.disable", false, "Whether to disable /internal/insert HTTP endpoint")
maxRequestSize = flagutil.NewBytes("internalinsert.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single request, which can be accepted at /internal/insert HTTP endpoint")
)
// RequestHandler processes /internal/insert requests.
func RequestHandler(w http.ResponseWriter, r *http.Request) {
if *disableInsert {
httpserver.Errorf(w, r, "requests to /internal/insert are disabled with -internalinsert.disable command-line flag")
return
}
startTime := time.Now()
if r.Method != "POST" {
w.WriteHeader(http.StatusMethodNotAllowed)
@@ -47,7 +39,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) {
httpserver.Errorf(w, r, "%s", err)
return
}
if err := vlstorage.CanWriteData(); err != nil {
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
@@ -92,5 +84,5 @@ var (
requestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/internal/insert"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/internal/insert"}`)
requestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/internal/insert"}`)
requestDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/internal/insert"}`)
)
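
The vl_http_request_duration_seconds change above swaps a VictoriaMetrics histogram for a summary; both types expose UpdateDuration, so call sites stay unchanged. A minimal side-by-side sketch (metric names are illustrative, not from the diff):

```go
package main

import (
	"time"

	"github.com/VictoriaMetrics/metrics"
)

var (
	// Histogram: exports per-bucket counters (vmrange buckets).
	durHist = metrics.NewHistogram(`demo_duration_seconds{kind="histogram"}`)
	// Summary: exports client-side computed quantiles.
	durSumm = metrics.NewSummary(`demo_duration_seconds{kind="summary"}`)
)

func main() {
	start := time.Now()
	// Both types share the UpdateDuration helper, which is why the
	// handlers in this diff only needed their var declarations changed.
	durHist.UpdateDuration(start)
	durSumm.UpdateDuration(start)
}
```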

View File

@@ -3,29 +3,30 @@ package journald
import (
"bytes"
"encoding/binary"
"errors"
"flag"
"fmt"
"io"
"net/http"
"regexp"
"slices"
"strconv"
"strings"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
// See https://github.com/systemd/systemd/blob/main/src/libsystemd/sd-journal/journal-file.c#L1703
const journaldEntryMaxNameLen = 64
var allowedJournaldEntryNameChars = regexp.MustCompile(`^[A-Z_][A-Z0-9_]*`)
const maxFieldNameLen = 64
var (
journaldStreamFields = flagutil.NewArrayString("journald.streamFields", "Comma-separated list of fields to use as log stream fields for logs ingested over journald protocol. "+
@@ -36,9 +37,7 @@ var (
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#time-field")
journaldTenantID = flag.String("journald.tenantID", "0:0", "TenantID for logs ingested via the Journald endpoint. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#multitenancy")
journaldIncludeEntryMetadata = flag.Bool("journald.includeEntryMetadata", false, "Include journal entry fields, which with double underscores.")
maxRequestSize = flagutil.NewBytes("journald.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single journald request")
journaldIncludeEntryMetadata = flag.Bool("journald.includeEntryMetadata", false, "Include Journald fields with double underscore prefixes")
)
func getCommonParams(r *http.Request) (*insertutil.CommonParams, error) {
@@ -53,11 +52,12 @@ func getCommonParams(r *http.Request) (*insertutil.CommonParams, error) {
}
cp.TenantID = tenantID
}
if len(cp.TimeFields) == 0 {
if !cp.IsTimeFieldSet {
cp.TimeFields = []string{*journaldTimeField}
}
if len(cp.StreamFields) == 0 {
cp.StreamFields = *journaldStreamFields
cp.StreamFields = getStreamFields()
}
if len(cp.IgnoreFields) == 0 {
cp.IgnoreFields = *journaldIgnoreFields
@@ -66,10 +66,23 @@ func getCommonParams(r *http.Request) (*insertutil.CommonParams, error) {
return cp, nil
}
func getStreamFields() []string {
if len(*journaldStreamFields) > 0 {
return *journaldStreamFields
}
return defaultStreamFields
}
var defaultStreamFields = []string{
"_MACHINE_ID",
"_HOSTNAME",
"_SYSTEMD_UNIT",
}
// RequestHandler processes Journald Export insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
case "/upload":
case "/insert/journald/upload":
if r.Header.Get("Content-Type") != "application/vnd.fdo.journal" {
httpserver.Errorf(w, r, "only application/vnd.fdo.journal encoding is supported for Journald")
return true
@@ -84,7 +97,7 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
// handleJournald parses Journal binary entries
func handleJournald(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsJournaldTotal.Inc()
requestsTotal.Inc()
cp, err := getCommonParams(r)
if err != nil {
@@ -93,19 +106,25 @@ func handleJournald(r *http.Request, w http.ResponseWriter) {
return
}
if err := vlstorage.CanWriteData(); err != nil {
if err := insertutil.CanWriteData(); err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "%s", err)
return
}
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("journald", false)
err := parseJournaldRequest(data, lmp, cp)
lmp.MustClose()
return err
})
reader, err := protoparserutil.GetUncompressedReader(r.Body, encoding)
if err != nil {
errorsTotal.Inc()
logger.Errorf("cannot decode journald request: %s", err)
return
}
lmp := cp.NewLogMessageProcessor("journald", true)
streamName := fmt.Sprintf("remoteAddr=%s, requestURI=%q", httpserver.GetQuotedRemoteAddr(r), r.RequestURI)
err = processStreamInternal(streamName, reader, lmp, cp)
protoparserutil.PutUncompressedReader(reader)
lmp.MustClose()
if err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "cannot read journald protocol data: %s", err)
@@ -117,102 +136,185 @@ func handleJournald(r *http.Request, w http.ResponseWriter) {
// See https://github.com/systemd/systemd/pull/34822
w.Header().Set("Accept-Encoding", "zstd")
// update requestJournaldDuration only for successfully parsed requests
// There is no need in updating requestJournaldDuration for request errors,
// update requestDuration only for successfully parsed requests
// There is no need to update requestDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
requestJournaldDuration.UpdateDuration(startTime)
requestDuration.UpdateDuration(startTime)
}
var (
requestsJournaldTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/journald/upload"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/journald/upload"}`)
requestJournaldDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/journald/upload"}`)
requestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/journald/upload"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/journald/upload"}`)
requestDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/journald/upload"}`)
)
func processStreamInternal(streamName string, r io.Reader, lmp insertutil.LogMessageProcessor, cp *insertutil.CommonParams) error {
wcr := writeconcurrencylimiter.GetReader(r)
defer writeconcurrencylimiter.PutReader(wcr)
lr := insertutil.NewLineReader("journald", wcr)
for {
err := readJournaldLogEntry(streamName, lr, lmp, cp)
wcr.DecConcurrency()
if err != nil {
if errors.Is(err, io.EOF) {
return nil
}
return fmt.Errorf("%s: %w", streamName, err)
}
}
}
type fieldsBuf struct {
fields []logstorage.Field
buf []byte
name []byte
value []byte
}
func (fb *fieldsBuf) reset() {
fb.fields = fb.fields[:0]
fb.buf = fb.buf[:0]
fb.name = fb.name[:0]
fb.value = fb.value[:0]
}
func (fb *fieldsBuf) addField(name, value string) {
bufLen := len(fb.buf)
fb.buf = append(fb.buf, name...)
nameCopy := bytesutil.ToUnsafeString(fb.buf[bufLen:])
bufLen = len(fb.buf)
fb.buf = append(fb.buf, value...)
valueCopy := bytesutil.ToUnsafeString(fb.buf[bufLen:])
fb.fields = append(fb.fields, logstorage.Field{
Name: nameCopy,
Value: valueCopy,
})
}
func (fb *fieldsBuf) appendNextLineToValue(lr *insertutil.LineReader) error {
if !lr.NextLine() {
if err := lr.Err(); err != nil {
return err
}
return fmt.Errorf("unexpected end of stream")
}
fb.value = append(fb.value, lr.Line...)
fb.value = append(fb.value, '\n')
return nil
}
func getFieldsBuf() *fieldsBuf {
fb := fieldsBufPool.Get()
if fb == nil {
return &fieldsBuf{}
}
return fb.(*fieldsBuf)
}
func putFieldsBuf(fb *fieldsBuf) {
fb.reset()
fieldsBufPool.Put(fb)
}
var fieldsBufPool sync.Pool
// readJournaldLogEntry reads a single log entry in Journald format.
//
// See https://systemd.io/JOURNAL_EXPORT_FORMATS/#journal-export-format
func parseJournaldRequest(data []byte, lmp insertutil.LogMessageProcessor, cp *insertutil.CommonParams) error {
var fields []logstorage.Field
func readJournaldLogEntry(streamName string, lr *insertutil.LineReader, lmp insertutil.LogMessageProcessor, cp *insertutil.CommonParams) error {
var ts int64
var size uint64
var name, value string
var line []byte
currentTimestamp := time.Now().UnixNano()
fb := getFieldsBuf()
defer putFieldsBuf(fb)
for len(data) > 0 {
idx := bytes.IndexByte(data, '\n')
switch {
case idx > 0:
// process fields
line = data[:idx]
data = data[idx+1:]
case idx == 0:
// next message or end of file
// double new line is a separator for the next message
if len(fields) > 0 {
if !lr.NextLine() {
if err := lr.Err(); err != nil {
return fmt.Errorf("cannot read the first field: %w", err)
}
return io.EOF
}
for {
line := lr.Line
if len(line) == 0 {
// The end of a single log entry. Write it to the storage
if len(fb.fields) > 0 {
if ts == 0 {
ts = currentTimestamp
ts = time.Now().UnixNano()
}
lmp.AddRow(ts, fields, nil)
fields = fields[:0]
lmp.AddRow(ts, fb.fields, nil)
}
// skip newline separator
data = data[1:]
continue
case idx < 0:
return fmt.Errorf("missing new line separator, unread data left=%d", len(data))
return nil
}
idx = bytes.IndexByte(line, '=')
// could be either a key=value\n pair
// or just key\n
// with binary data in the buffer
if idx > 0 {
name = bytesutil.ToUnsafeString(line[:idx])
value = bytesutil.ToUnsafeString(line[idx+1:])
// line could be either "key=value" or "key"
// according to https://systemd.io/JOURNAL_EXPORT_FORMATS/#journal-export-format
if n := bytes.IndexByte(line, '='); n >= 0 {
// line = "key=value"
fb.name = append(fb.name[:0], line[:n]...)
name = bytesutil.ToUnsafeString(fb.name)
fb.value = append(fb.value[:0], line[n+1:]...)
value = bytesutil.ToUnsafeString(fb.value)
} else {
name = bytesutil.ToUnsafeString(line)
if len(data) == 0 {
return fmt.Errorf("unexpected zero data for binary field value of key=%s", name)
// line = "key"
// Parse the binary-encoded value from the next line according to "key\n<little_endian_size_64>value\n" format
fb.name = append(fb.name[:0], line...)
name = bytesutil.ToUnsafeString(fb.name)
fb.value = fb.value[:0]
for len(fb.value) < 8 {
if err := fb.appendNextLineToValue(lr); err != nil {
return fmt.Errorf("cannot read value size: %w", err)
}
}
// size of binary data encoded as le u64 at the beginning
idx, err := binary.Decode(data, binary.LittleEndian, &size)
if err != nil {
return fmt.Errorf("failed to extract binary field %q value size: %w", name, err)
size := binary.LittleEndian.Uint64(fb.value[:8])
// Read the value until its length exceeds the given size - the last char in the read value will always be '\n'
// because it is appended by appendNextLineToValue().
for uint64(len(fb.value[8:])) <= size {
if err := fb.appendNextLineToValue(lr); err != nil {
return fmt.Errorf("cannot read %q value with size %d bytes; read only %d bytes: %w", fb.name, size, len(fb.value[8:]), err)
}
}
// skip binary data size
data = data[idx:]
if size == 0 {
return fmt.Errorf("unexpected zero binary data size decoded %d", size)
value = bytesutil.ToUnsafeString(fb.value[8 : len(fb.value)-1])
if uint64(len(value)) != size {
return fmt.Errorf("unexpected %q value size; got %d bytes; want %d bytes; value: %q", fb.name, len(value), size, value)
}
if int(size) > len(data) {
return fmt.Errorf("binary data size=%d cannot exceed size of the data at buffer=%d", size, len(data))
}
value = bytesutil.ToUnsafeString(data[:size])
data = data[int(size):]
// binary data must have a newline separator before the next field or entry
if len(data) == 0 {
return fmt.Errorf("unexpected empty buffer after binary field=%s read", name)
}
lastB := data[0]
if lastB != '\n' {
return fmt.Errorf("expected new line separator after binary field=%s, got=%s", name, string(lastB))
}
data = data[1:]
}
if len(name) > journaldEntryMaxNameLen {
return fmt.Errorf("journald entry name should not exceed %d symbols, got: %q", journaldEntryMaxNameLen, name)
if !lr.NextLine() {
if err := lr.Err(); err != nil {
return fmt.Errorf("cannot read the next log field: %w", err)
}
// add the last log field below before the return
}
if !allowedJournaldEntryNameChars.MatchString(name) {
return fmt.Errorf("journald entry name should consist of `A-Z0-9_` characters and must start from non-digit symbol")
if len(name) > maxFieldNameLen {
logger.Errorf("%s: field name size should not exceed %d bytes; got %d bytes: %q; skipping this field", streamName, maxFieldNameLen, len(name), name)
continue
}
if !isValidFieldName(name) {
logger.Errorf("%s: invalid field name %q; it must consist of `A-Z0-9_` chars and must start from non-digit char; skipping this field", streamName, name)
continue
}
if slices.Contains(cp.TimeFields, name) {
n, err := strconv.ParseInt(value, 10, 64)
t, err := strconv.ParseInt(value, 10, 64)
if err != nil {
return fmt.Errorf("failed to parse Journald timestamp, %w", err)
logger.Errorf("%s: cannot parse timestamp from the field %q: %w; using the current timestamp", streamName, name, err)
ts = 0
} else {
// Convert journald microsecond timestamp to nanoseconds
ts = t * 1e3
}
ts = n * 1e3
continue
}
@@ -220,18 +322,56 @@ func parseJournaldRequest(data []byte, lmp insertutil.LogMessageProcessor, cp *i
name = "_msg"
}
if *journaldIncludeEntryMetadata || !strings.HasPrefix(name, "__") {
fields = append(fields, logstorage.Field{
Name: name,
Value: value,
})
if name == "PRIORITY" {
priority := journaldPriorityToLevel(value)
fb.addField("level", priority)
}
if !strings.HasPrefix(name, "__") || *journaldIncludeEntryMetadata {
fb.addField(name, value)
}
}
if len(fields) > 0 {
if ts == 0 {
ts = currentTimestamp
}
lmp.AddRow(ts, fields, nil)
}
return nil
}
func journaldPriorityToLevel(priority string) string {
// See https://wiki.archlinux.org/title/Systemd/Journal#Priority_level
// and https://grafana.com/docs/grafana/latest/explore/logs-integration/#log-level
switch priority {
case "0":
return "emerg"
case "1":
return "alert"
case "2":
return "critical"
case "3":
return "error"
case "4":
return "warning"
case "5":
return "notice"
case "6":
return "info"
case "7":
return "debug"
default:
return priority
}
}
func isValidFieldName(s string) bool {
if len(s) == 0 {
return false
}
c := s[0]
if !(c >= 'A' && c <= 'Z' || c == '_') {
return false
}
for i := 1; i < len(s); i++ {
c := s[i]
if !(c >= 'A' && c <= 'Z' || c >= '0' && c <= '9' || c == '_') {
return false
}
}
return true
}
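
The loop above reads binary values in the journal export framing `NAME\n<little-endian uint64 size><size bytes of data>\n`. A standalone sketch of that framing decoded from an in-memory buffer (simplified on purpose: no streaming and no buffer pooling, unlike the code in this diff):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// decodeBinaryField decodes one "NAME\n<le_u64 size><data>\n" journal
// export frame and returns the field name, its value and the rest of
// the buffer.
func decodeBinaryField(data []byte) (name, value string, rest []byte, err error) {
	nl := bytes.IndexByte(data, '\n')
	if nl < 0 {
		return "", "", nil, fmt.Errorf("missing field name terminator")
	}
	name, data = string(data[:nl]), data[nl+1:]
	if len(data) < 8 {
		return "", "", nil, fmt.Errorf("truncated size prefix")
	}
	size := binary.LittleEndian.Uint64(data[:8])
	data = data[8:]
	if uint64(len(data)) < size+1 || data[size] != '\n' {
		return "", "", nil, fmt.Errorf("truncated value or missing trailing newline")
	}
	return name, string(data[:size]), data[size+1:], nil
}

func main() {
	frame := []byte("MESSAGE\n\x03\x00\x00\x00\x00\x00\x00\x00foo\n")
	name, value, rest, err := decodeBinaryField(frame)
	fmt.Println(name, value, len(rest), err) // MESSAGE foo 0 <nil>
}
```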

View File

@@ -1,20 +1,81 @@
package journald
import (
"bytes"
"net/http"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestPushJournaldOk(t *testing.T) {
func TestIsValidFieldName(t *testing.T) {
f := func(name string, resultExpected bool) {
t.Helper()
result := isValidFieldName(name)
if result != resultExpected {
t.Fatalf("unexpected result for isValidJournaldFieldName(%q); got %v; want %v", name, result, resultExpected)
}
}
f("", false)
f("a", false)
f("1", false)
f("_", true)
f("X", true)
f("Xa", false)
f("X_343", true)
f("X_0123456789_AZ", true)
f("SDDFD sdf", false)
}
func TestGetCommonParams_TimeField(t *testing.T) {
f := func(timeFieldHeader, expectedTimeField string) {
t.Helper()
req, err := http.NewRequest("POST", "/insert/journald/upload", nil)
if err != nil {
t.Fatalf("unexpected error creating request: %s", err)
}
if timeFieldHeader != "" {
req.Header.Set("VL-Time-Field", timeFieldHeader)
}
cp, err := getCommonParams(req)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if len(cp.TimeFields) != 1 || cp.TimeFields[0] != expectedTimeField {
t.Fatalf("unexpected TimeFields; got %v; want [%s]", cp.TimeFields, expectedTimeField)
}
}
// Test default behavior - when no custom time field is specified, journald uses __REALTIME_TIMESTAMP
f("", "__REALTIME_TIMESTAMP")
// Test custom time field - when a custom time field is specified via HTTP header, it's respected
f("custom_time", "custom_time")
}
func TestPushJournald_Success(t *testing.T) {
f := func(src string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &insertutil.TestLogMessageProcessor{}
cp := &insertutil.CommonParams{
TimeFields: []string{"__REALTIME_TIMESTAMP"},
MsgFields: []string{"MESSAGE"},
r, err := http.NewRequest("GET", "https://foo.bar/baz", nil)
if err != nil {
t.Fatalf("cannot create request: %s", err)
}
if err := parseJournaldRequest([]byte(src), tlp, cp); err != nil {
cp, err := getCommonParams(r)
if err != nil {
t.Fatalf("cannot create commonParams: %s", err)
}
buf := bytes.NewBufferString(src)
if err := processStreamInternal("test", buf, tlp, cp); err != nil {
t.Fatalf("unexpected error: %s", err)
}
@@ -22,16 +83,17 @@ func TestPushJournaldOk(t *testing.T) {
t.Fatal(err)
}
}
// Single event
f("__REALTIME_TIMESTAMP=91723819283\nMESSAGE=Test message\n",
f("__REALTIME_TIMESTAMP=91723819283\nMESSAGE=Test message\n\n",
[]int64{91723819283000},
"{\"_msg\":\"Test message\"}",
)
// Multiple events
f("__REALTIME_TIMESTAMP=91723819283\nMESSAGE=Test message\n\n__REALTIME_TIMESTAMP=91723819284\nMESSAGE=Test message2\n",
f("__REALTIME_TIMESTAMP=91723819283\nPRIORITY=3\nMESSAGE=Test message\n\n__REALTIME_TIMESTAMP=91723819284\nMESSAGE=Test message2\n",
[]int64{91723819283000, 91723819284000},
"{\"_msg\":\"Test message\"}\n{\"_msg\":\"Test message2\"}",
"{\"level\":\"error\",\"PRIORITY\":\"3\",\"_msg\":\"Test message\"}\n{\"_msg\":\"Test message2\"}",
)
// Parse binary data
@@ -39,30 +101,66 @@ func TestPushJournaldOk(t *testing.T) {
[]int64{1729698775704404000},
"{\"E\":\"JobStateChanged\",\"_BOOT_ID\":\"f778b6e2f7584a77b991a2366612a7b5\",\"_UID\":\"0\",\"_GID\":\"0\",\"_MACHINE_ID\":\"a4a970370c30a925df02a13c67167847\",\"_HOSTNAME\":\"ecd5e4555787\",\"_RUNTIME_SCOPE\":\"system\",\"_TRANSPORT\":\"journal\",\"_CAP_EFFECTIVE\":\"1ffffffffff\",\"_SYSTEMD_CGROUP\":\"/init.scope\",\"_SYSTEMD_UNIT\":\"init.scope\",\"_SYSTEMD_SLICE\":\"-.slice\",\"CODE_FILE\":\"\\u003cstdin>\",\"CODE_LINE\":\"1\",\"CODE_FUNC\":\"\\u003cmodule>\",\"SYSLOG_IDENTIFIER\":\"python3\",\"_COMM\":\"python3\",\"_EXE\":\"/usr/bin/python3.12\",\"_CMDLINE\":\"python3\",\"_msg\":\"foo\\nbar\\n\\n\\nasda\\nasda\",\"_PID\":\"2763\",\"_SOURCE_REALTIME_TIMESTAMP\":\"1729698775704375\"}",
)
// Parse binary data with trailing newline
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x14\x00\x00\x00\x00\x00\x00\x00foo\nbar\n\n\nasda\nasda\n\n_PID=2763\n\n",
[]int64{1729698775704404000},
`{"_CMDLINE":"python3","_msg":"foo\nbar\n\n\nasda\nasda\n","_PID":"2763"}`,
)
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x00\x00\x00\x00\x00\x00\x00\x00\n_PID=2763\n\n",
[]int64{1729698775704404000},
`{"_CMDLINE":"python3","_PID":"2763"}`,
)
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x0A\x00\x00\x00\x00\x00\x00\x00123456789\n\n_PID=2763\n\n",
[]int64{1729698775704404000},
`{"_CMDLINE":"python3","_msg":"123456789\n","_PID":"2763"}`,
)
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x0A\x00\x00\x00\x00\x00\x00\x001234567890\n_PID=2763\n\n",
[]int64{1729698775704404000},
`{"_CMDLINE":"python3","_msg":"1234567890","_PID":"2763"}`,
)
// Empty field name must be ignored
f("__REALTIME_TIMESTAMP=91723819283\na=b\n=Test message", nil, "")
f("__REALTIME_TIMESTAMP=91723819284\nMESSAGE=Test message2\n\n__REALTIME_TIMESTAMP=91723819283\n=Test message\n", []int64{91723819284000}, `{"_msg":"Test message2"}`)
// field name starting with number must be ignored
f("__REALTIME_TIMESTAMP=91723819283\n1incorrect=Test message\n\n__REALTIME_TIMESTAMP=91723819284\nMESSAGE=Test message2\n\n", []int64{91723819284000}, `{"_msg":"Test message2"}`)
// field name exceeding the 64-byte limit must be ignored
f("__REALTIME_TIMESTAMP=91723819283\ntoolooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooongcorrecooooooooooooong=Test message\n", nil, "")
// field name with invalid chars must be ignored
f("__REALTIME_TIMESTAMP=91723819283\nbadC!@$!@$as=Test message\n", nil, "")
}
func TestPushJournald_Failure(t *testing.T) {
f := func(data string) {
t.Helper()
tlp := &insertutil.TestLogMessageProcessor{}
cp := &insertutil.CommonParams{
TimeFields: []string{"__REALTIME_TIMESTAMP"},
MsgFields: []string{"MESSAGE"},
r, err := http.NewRequest("GET", "https://foo.bar/baz", nil)
if err != nil {
t.Fatalf("cannot create request: %s", err)
}
if err := parseJournaldRequest([]byte(data), tlp, cp); err == nil {
t.Fatalf("expected non nil error")
cp, err := getCommonParams(r)
if err != nil {
t.Fatalf("cannot create commonParams: %s", err)
}
buf := bytes.NewBufferString(data)
if err := processStreamInternal("test", buf, tlp, cp); err == nil {
t.Fatalf("expecting non-nil error")
}
}
// missing new line terminator for binary encoded message
f("__CURSOR=s=e0afe8412a6a49d2bfcf66aa7927b588;i=1f06;b=f778b6e2f7584a77b991a2366612a7b5;m=300bdfd420;t=62526e1182354;x=930dc44b370963b7\n__REALTIME_TIMESTAMP=1729698775704404\nMESSAGE\n\x13\x00\x00\x00\x00\x00\x00\x00foo\nbar\n\n\nasdaasda2")
// missing new line terminator
f("__REALTIME_TIMESTAMP=91723819283\n=Test message")
// empty field name
f("__REALTIME_TIMESTAMP=91723819283\n=Test message\n")
// field name starting with number
f("__REALTIME_TIMESTAMP=91723819283\n1incorrect=Test message\n")
// field name exceeds the 64-byte limit
f("__REALTIME_TIMESTAMP=91723819283\ntoolooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooongcorrecooooooooooooong=Test message\n")
// Only allow A-Z0-9 and '_'
f("__REALTIME_TIMESTAMP=91723819283\nbadC!@$!@$as=Test message\n")
// too short binary encoded message
f("__CURSOR=s=e0afe8412a6a49d2bfcf66aa7927b588;i=1f06;b=f778b6e2f7584a77b991a2366612a7b5;m=300bdfd420;t=62526e1182354;x=930dc44b370963b7\n__REALTIME_TIMESTAMP=1729698775704404\nMESSAGE\n\x13\x00\x00\x00\x00\x00\x00\x00foo\nbar\n\n\nasdaasd")
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x00\x00\x00\x00\x00\x00\x00\x00_PID=2763\n\n")
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x0A\x00\x00\x00\x00\x00\x00\x001234567890_PID=2763\n\n")
f("__REALTIME_TIMESTAMP=1729698775704404\n_CMDLINE=python3\nMESSAGE\n\x0A\x00\x00\x00\x00\x00\x00\x00123456789\n_PID=2763\n\n")
// too long binary encoded message
f("__CURSOR=s=e0afe8412a6a49d2bfcf66aa7927b588;i=1f06;b=f778b6e2f7584a77b991a2366612a7b5;m=300bdfd420;t=62526e1182354;x=930dc44b370963b7\n__REALTIME_TIMESTAMP=1729698775704404\nMESSAGE\n\x13\x00\x00\x00\x00\x00\x00\x00foo\nbar\n\n\nasdaasdakljlsfd")
}

View File

@@ -0,0 +1,82 @@
package journald
import (
"bytes"
"encoding/binary"
"fmt"
"strings"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func BenchmarkIsValidFieldName(b *testing.B) {
b.ReportAllocs()
b.SetBytes(int64(len(benchmarkFields)))
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
for _, field := range benchmarkFields {
if !isValidFieldName(field) {
panic(fmt.Errorf("cannot validate field %q", field))
}
}
}
})
}
var benchmarkFields = strings.Split(
"E,_BOOT_ID,_UID,_GID,_MACHINE_ID,_HOSTNAME,_RUNTIME_SCOPE,_TRANSPORT,_CAP_EFFECTIVE,_SYSTEMD_CGROUP,_SYSTEMD_UNIT,"+
"_SYSTEMD_SLICE,CODE_FILE,CODE_LINE,CODE_FUNC,SYSLOG_IDENTIFIER,_COMM,_EXE,_CMDLINE,MESSAGE,_PID,_SOURCE_REALTIME_TIMESTAMP,_REALTIME_TIMESTAMP",
",")
func BenchmarkPushJournaldPerformance(b *testing.B) {
cp := &insertutil.CommonParams{
TimeFields: []string{"__REALTIME_TIMESTAMP"},
MsgFields: []string{"MESSAGE"},
}
const dataChunkSize = 1024 * 1024
data := generateJournaldData(dataChunkSize)
b.ReportAllocs()
b.SetBytes(int64(len(data)))
b.RunParallel(func(pb *testing.PB) {
r := &bytes.Reader{}
blp := &insertutil.BenchmarkLogMessageProcessor{}
for pb.Next() {
r.Reset(data)
if err := processStreamInternal("performance_test", r, blp, cp); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}
})
}
func generateJournaldData(size int) []byte {
var buf []byte
timestamp := time.Now().UnixMicro()
binaryMsg := []byte("binary message data for performance test")
var sizeBuf [8]byte
for len(buf) < size {
timestamp++
var entry string
// Generate a mix of simple and binary messages
if timestamp%10 == 0 {
// Generate binary message
binary.LittleEndian.PutUint64(sizeBuf[:], uint64(len(binaryMsg)))
entry = fmt.Sprintf("__REALTIME_TIMESTAMP=%d\nMESSAGE\n%s%s\n\n",
timestamp,
sizeBuf[:],
binaryMsg,
)
} else {
// Generate simple message
entry = fmt.Sprintf("__REALTIME_TIMESTAMP=%d\nMESSAGE=Performance test message %d\n\n", timestamp, timestamp)
}
buf = append(buf, entry...)
}
return buf
}

View File

@@ -7,7 +7,6 @@ import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
@@ -33,7 +32,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) {
httpserver.Errorf(w, r, "%s", err)
return
}
if err := vlstorage.CanWriteData(); err != nil {
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
@@ -120,5 +119,5 @@ var (
requestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/jsonline"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/jsonline"}`)
requestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/jsonline"}`)
requestDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/jsonline"}`)
)

View File

@@ -16,10 +16,10 @@ var disableMessageParsing = flag.Bool("loki.disableMessageParsing", false, "Whet
// RequestHandler processes Loki insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
case "/api/v1/push":
case "/insert/loki/api/v1/push":
handleInsert(r, w)
return true
case "/ready":
case "/insert/loki/ready":
// See https://grafana.com/docs/loki/latest/api/#identify-ready-loki-instance
w.WriteHeader(http.StatusOK)
w.Write([]byte("ready"))

View File

@@ -9,7 +9,6 @@ import (
"github.com/valyala/fastjson"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
@@ -30,7 +29,7 @@ func handleJSON(r *http.Request, w http.ResponseWriter) {
httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
return
}
if err := vlstorage.CanWriteData(); err != nil {
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
@@ -59,7 +58,7 @@ func handleJSON(r *http.Request, w http.ResponseWriter) {
var (
requestsJSONTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="json"}`)
requestJSONDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="json"}`)
requestJSONDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="json"}`)
)
func parseJSONRequest(data []byte, lmp insertutil.LogMessageProcessor, msgFields []string, useDefaultStreamFields, parseMessage bool) error {

View File

@@ -9,7 +9,6 @@ import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
@@ -29,7 +28,7 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
return
}
if err := vlstorage.CanWriteData(); err != nil {
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
@@ -63,7 +62,7 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
var (
requestsProtobufTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="protobuf"}`)
requestProtobufDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="protobuf"}`)
requestProtobufDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="protobuf"}`)
)
func parseProtobufRequest(data []byte, lmp insertutil.LogMessageProcessor, msgFields []string, useDefaultStreamFields, parseMessage bool) error {

View File

@@ -1,6 +1,7 @@
package vlinsert
import (
"flag"
"fmt"
"net/http"
"strings"
@@ -13,6 +14,12 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/loki"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/opentelemetry"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/syslog"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
)
var (
disableInsert = flag.Bool("insert.disable", false, "Whether to disable /insert/* HTTP endpoints")
disableInternal = flag.Bool("internalinsert.disable", false, "Whether to disable /internal/insert HTTP endpoint")
)
// Init initializes vlinsert
@@ -27,49 +34,55 @@ func Stop() {
// RequestHandler handles insert requests for VictoriaLogs
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
path := r.URL.Path
path := strings.ReplaceAll(r.URL.Path, "//", "/")
if strings.HasPrefix(path, "/insert/") {
if *disableInsert {
httpserver.Errorf(w, r, "requests to /insert/* are disabled with -insert.disable command-line flag")
return true
}
return insertHandler(w, r, path)
}
if path == "/internal/insert" {
if *disableInternal || *disableInsert {
httpserver.Errorf(w, r, "requests to /internal/insert are disabled with -internalinsert.disable or -insert.disable command-line flag")
return true
}
internalinsert.RequestHandler(w, r)
return true
}
if !strings.HasPrefix(path, "/insert/") {
// Skip requests, which do not start with /insert/, since these aren't our requests.
return false
}
path = strings.TrimPrefix(path, "/insert")
path = strings.ReplaceAll(path, "//", "/")
return false
}
func insertHandler(w http.ResponseWriter, r *http.Request, path string) bool {
switch path {
case "/jsonline":
case "/insert/jsonline":
jsonline.RequestHandler(w, r)
return true
case "/ready":
case "/insert/ready":
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(200)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
}
switch {
case strings.HasPrefix(path, "/elasticsearch"):
// some clients may omit trailing slash
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8353
path = strings.TrimPrefix(path, "/elasticsearch")
// some clients may omit the trailing slash in the elasticsearch protocol.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8353
case strings.HasPrefix(path, "/insert/elasticsearch"):
return elasticsearch.RequestHandler(path, w, r)
case strings.HasPrefix(path, "/loki/"):
path = strings.TrimPrefix(path, "/loki")
case strings.HasPrefix(path, "/insert/loki/"):
return loki.RequestHandler(path, w, r)
case strings.HasPrefix(path, "/opentelemetry/"):
path = strings.TrimPrefix(path, "/opentelemetry")
case strings.HasPrefix(path, "/insert/opentelemetry/"):
return opentelemetry.RequestHandler(path, w, r)
case strings.HasPrefix(path, "/journald/"):
path = strings.TrimPrefix(path, "/journald")
case strings.HasPrefix(path, "/insert/journald/"):
return journald.RequestHandler(path, w, r)
case strings.HasPrefix(path, "/datadog/"):
path = strings.TrimPrefix(path, "/datadog")
case strings.HasPrefix(path, "/insert/datadog/"):
return datadog.RequestHandler(path, w, r)
default:
return false
}
return false
}
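
The router above now matches full paths and collapses duplicate slashes up front instead of trimming prefixes hunk by hunk. A tiny sketch of the normalization step in isolation (not from the diff; it mirrors the `strings.ReplaceAll` call in RequestHandler):

```go
package main

import (
	"fmt"
	"strings"
)

// normalize mirrors the handler: collapse "//" so clients that build
// URLs sloppily (e.g. "/insert//jsonline") still hit the right route.
func normalize(p string) string {
	return strings.ReplaceAll(p, "//", "/")
}

func main() {
	for _, p := range []string{"/insert//jsonline", "/insert/loki//api/v1/push"} {
		fmt.Printf("%s -> %s\n", p, normalize(p))
	}
}
```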

View File

@@ -6,7 +6,6 @@ import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
@@ -22,7 +21,7 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
// use the same path as opentelemetry collector
// https://opentelemetry.io/docs/specs/otlp/#otlphttp-request
case "/v1/logs":
case "/insert/opentelemetry/v1/logs":
if r.Header.Get("Content-Type") == "application/json" {
httpserver.Errorf(w, r, "json encoding isn't supported for opentelemetry format. Use protobuf encoding")
return true
@@ -43,7 +42,7 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
return
}
if err := vlstorage.CanWriteData(); err != nil {
if err := insertutil.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
@@ -71,7 +70,7 @@ var (
requestsProtobufTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
requestProtobufDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
requestProtobufDuration = metrics.NewSummary(`vl_http_request_duration_seconds{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
)
func pushProtobufRequest(data []byte, lmp insertutil.LogMessageProcessor, msgFields []string, useDefaultStreamFields bool) error {

View File

@@ -17,7 +17,6 @@ import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
@@ -385,7 +384,7 @@ func serveTCP(ln net.Listener, tenantID logstorage.TenantID, encoding string, us
// processStream parses a stream of syslog messages from r and ingests them into vlstorage.
func processStream(protocol string, r io.Reader, encoding string, useLocalTimestamp bool, cp *insertutil.CommonParams) error {
if err := vlstorage.CanWriteData(); err != nil {
if err := insertutil.CanWriteData(); err != nil {
return err
}

View File

@@ -101,8 +101,8 @@ func TestProcessStreamInternal_Success(t *testing.T) {
currentYear := 2023
timestampsExpected := []int64{1685794113000000000, 1685880513000000000, 1685814132345000000}
resultExpected := `{"format":"rfc3164","hostname":"abcd","app_name":"systemd","_msg":"Starting Update the local ESM caches..."}
{"priority":"165","facility":"20","severity":"5","format":"rfc3164","hostname":"abcd","app_name":"systemd","proc_id":"345","_msg":"abc defg"}
{"priority":"123","facility":"15","severity":"3","format":"rfc5424","hostname":"mymachine.example.com","app_name":"appname","proc_id":"12345","msg_id":"ID47","exampleSDID@32473.iut":"3","exampleSDID@32473.eventSource":"Application 123 = ] 56","exampleSDID@32473.eventID":"11211","_msg":"This is a test message with structured data."}`
{"priority":"165","facility_keyword":"local4","level":"notice","facility":"20","severity":"5","format":"rfc3164","hostname":"abcd","app_name":"systemd","proc_id":"345","_msg":"abc defg"}
{"priority":"123","facility_keyword":"solaris-cron","level":"error","facility":"15","severity":"3","format":"rfc5424","hostname":"mymachine.example.com","app_name":"appname","proc_id":"12345","msg_id":"ID47","exampleSDID@32473.iut":"3","exampleSDID@32473.eventSource":"Application 123 = ] 56","exampleSDID@32473.eventID":"11211","_msg":"This is a test message with structured data."}`
f(data, currentYear, timestampsExpected, resultExpected)
}

View File

@@ -2,7 +2,6 @@ package internalselect
import (
"context"
"flag"
"fmt"
"net/http"
"strconv"
@@ -22,15 +21,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
)
var disableSelect = flag.Bool("internalselect.disable", false, "Whether to disable /internal/select/* HTTP endpoints")
// RequestHandler processes requests to /internal/select/*
func RequestHandler(ctx context.Context, w http.ResponseWriter, r *http.Request) {
if *disableSelect {
httpserver.Errorf(w, r, "requests to /internal/select/* are disabled with -internalselect.disable command-line flag")
return
}
startTime := time.Now()
path := r.URL.Path

View File

@@ -55,7 +55,10 @@ func ProcessFacetsRequest(ctx context.Context, w http.ResponseWriter, r *http.Re
}
keepConstFields := httputil.GetBool(r, "keep_const_fields")
// Pipes must be dropped, since facets are expected to be obtained
// from the real logs stored in the database.
q.DropAllPipes()
q.AddFacetsPipe(limit, maxValuesPerField, maxValueLen, keepConstFields)
var mLock sync.Mutex
@@ -156,8 +159,10 @@ func ProcessHitsRequest(ctx context.Context, w http.ResponseWriter, r *http.Requ
fieldsLimit = 0
}
// Prepare the query for hits count.
// Pipes must be dropped, since hits are expected to be obtained
// from the real logs stored in the database.
q.DropAllPipes()
q.AddCountByTimePipe(int64(step), int64(offset), fields)
var mLock sync.Mutex
@@ -290,6 +295,10 @@ func ProcessFieldNamesRequest(ctx context.Context, w http.ResponseWriter, r *htt
return
}
// Pipes must be dropped, since field names are expected to be obtained
// from the real logs stored in the database.
q.DropAllPipes()
// Obtain field names for the given query
fieldNames, err := vlstorage.GetFieldNames(ctx, tenantIDs, q)
if err != nil {
@@ -329,6 +338,10 @@ func ProcessFieldValuesRequest(ctx context.Context, w http.ResponseWriter, r *ht
limit = 0
}
// Pipes must be dropped, since field values are expected to be obtained
// from the real logs stored in the database.
q.DropAllPipes()
// Obtain unique values for the given field
values, err := vlstorage.GetFieldValues(ctx, tenantIDs, q, fieldName, uint64(limit))
if err != nil {
@@ -351,6 +364,10 @@ func ProcessStreamFieldNamesRequest(ctx context.Context, w http.ResponseWriter,
return
}
// Pipes must be dropped, since stream field names are expected to be obtained
// from the real logs stored in the database.
q.DropAllPipes()
// Obtain stream field names for the given query
names, err := vlstorage.GetStreamFieldNames(ctx, tenantIDs, q)
if err != nil {
@@ -389,6 +406,10 @@ func ProcessStreamFieldValuesRequest(ctx context.Context, w http.ResponseWriter,
limit = 0
}
// Pipes must be dropped, since stream field values are expected to be obtained
// from the real logs stored in the database.
q.DropAllPipes()
// Obtain stream field values for the given query and the given fieldName
values, err := vlstorage.GetStreamFieldValues(ctx, tenantIDs, q, fieldName, uint64(limit))
if err != nil {
@@ -420,6 +441,10 @@ func ProcessStreamIDsRequest(ctx context.Context, w http.ResponseWriter, r *http
limit = 0
}
// Pipes must be dropped, since stream ids are expected to be obtained
// from the real logs stored in the database.
q.DropAllPipes()
// Obtain streamIDs for the given query
streamIDs, err := vlstorage.GetStreamIDs(ctx, tenantIDs, q, uint64(limit))
if err != nil {
@@ -451,6 +476,10 @@ func ProcessStreamsRequest(ctx context.Context, w http.ResponseWriter, r *http.R
limit = 0
}
// Pipes must be dropped, since streams are expected to be obtained
// from the real logs stored in the database.
q.DropAllPipes()
// Obtain streams for the given query
streams, err := vlstorage.GetStreams(ctx, tenantIDs, q, uint64(limit))
if err != nil {
@@ -551,7 +580,7 @@ var liveTailRequests = metrics.NewCounter(`vl_live_tailing_requests`)
const tailOffsetNsecs = 5e9
type logRow struct {
timestamp string
timestamp int64
fields []logstorage.Field
}
@@ -567,7 +596,7 @@ type tailProcessor struct {
mu sync.Mutex
perStreamRows map[string][]logRow
lastTimestamps map[string]string
lastTimestamps map[string]int64
err error
}
@@ -577,7 +606,7 @@ func newTailProcessor(cancel func()) *tailProcessor {
cancel: cancel,
perStreamRows: make(map[string][]logRow),
lastTimestamps: make(map[string]string),
lastTimestamps: make(map[string]int64),
}
}
@@ -594,7 +623,7 @@ func (tp *tailProcessor) writeBlock(_ uint, db *logstorage.DataBlock) {
}
// Make sure columns contain _time field, since it is needed for proper tail work.
timestamps, ok := db.GetTimestamps()
timestamps, ok := db.GetTimestamps(nil)
if !ok {
tp.err = fmt.Errorf("missing _time field")
tp.cancel()
@@ -1043,9 +1072,7 @@ func getLastNQueryResults(ctx context.Context, tenantIDs []logstorage.TenantID,
}
func getLastNRows(rows []logRow, limit int) []logRow {
sort.Slice(rows, func(i, j int) bool {
return rows[i].timestamp < rows[j].timestamp
})
sortLogRows(rows)
if len(rows) > limit {
rows = rows[len(rows)-limit:]
}
@@ -1070,7 +1097,7 @@ func getQueryResultsWithLimit(ctx context.Context, tenantIDs []logstorage.Tenant
clonedColumnNames[i] = strings.Clone(c.Name)
}
timestamps, ok := db.GetTimestamps()
timestamps, ok := db.GetTimestamps(nil)
if !ok {
missingTimeColumn.Store(true)
cancel()
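
sortLogRows itself is not shown in these hunks; a plausible minimal form, assuming it simply orders rows ascending by the new int64 timestamp (the names come from the diff, the body is hypothetical):

```go
package main

import (
	"fmt"
	"sort"
)

// logRow mirrors the struct from this diff (timestamp is now int64).
type logRow struct {
	timestamp int64
}

// sortLogRows is a hypothetical body, consistent with the sort.Slice
// call it replaces in getLastNRows: ascending order by timestamp.
func sortLogRows(rows []logRow) {
	sort.Slice(rows, func(i, j int) bool {
		return rows[i].timestamp < rows[j].timestamp
	})
}

func main() {
	rows := []logRow{{3}, {1}, {2}}
	sortLogRows(rows)
	fmt.Println(rows) // [{1} {2} {3}]
}
```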

View File

@@ -25,6 +25,9 @@ var (
maxQueueDuration = flag.Duration("search.maxQueueDuration", 10*time.Second, "The maximum time the search request waits for execution when -search.maxConcurrentRequests "+
"limit is reached; see also -search.maxQueryDuration")
maxQueryDuration = flag.Duration("search.maxQueryDuration", time.Second*30, "The maximum duration for query execution. It can be overridden to a smaller value on a per-query basis via 'timeout' query arg")
disableSelect = flag.Bool("select.disable", false, "Whether to disable /select/* HTTP endpoints")
disableInternal = flag.Bool("internalselect.disable", false, "Whether to disable /internal/select/* HTTP endpoints")
)
func getDefaultMaxConcurrentRequests() int {
@@ -71,13 +74,31 @@ var vmuiFileServer = http.FileServer(http.FS(vmuiFiles))
// RequestHandler handles select requests for VictoriaLogs
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
path := r.URL.Path
path := strings.ReplaceAll(r.URL.Path, "//", "/")
if !strings.HasPrefix(path, "/select/") && !strings.HasPrefix(path, "/internal/select/") {
// Skip requests, which do not start with /select/, since these aren't our requests.
return false
if strings.HasPrefix(path, "/select/") {
if *disableSelect {
httpserver.Errorf(w, r, "requests to /select/* are disabled with -select.disable command-line flag")
return true
}
return selectHandler(w, r, path)
}
path = strings.ReplaceAll(path, "//", "/")
if strings.HasPrefix(path, "/internal/select/") {
if *disableInternal || *disableSelect {
httpserver.Errorf(w, r, "requests to /internal/select/* are disabled with -internalselect.disable or -select.disable command-line flag")
return true
}
internalselect.RequestHandler(r.Context(), w, r)
return true
}
return false
}
func selectHandler(w http.ResponseWriter, r *http.Request, path string) bool {
ctx := r.Context()
if path == "/select/vmui" {
// VMUI access via an incomplete url without the trailing `/`. Redirect to the complete url.
@@ -100,7 +121,6 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
}
ctx := r.Context()
if path == "/select/logsql/tail" {
logsqlTailRequests.Inc()
// Process live tailing request without timeout, since it is OK to run live tailing requests for very long time.
@@ -120,13 +140,6 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
}
defer decRequestConcurrency()
if strings.HasPrefix(path, "/internal/select/") {
// Process internal request from vlselect without timeout (e.g. use ctx instead of ctxWithTimeout),
// since the timeout must be controlled by the vlselect.
internalselect.RequestHandler(ctx, w, r)
return true
}
ok := processSelectRequest(ctxWithTimeout, w, r, path)
if !ok {
return false

View File

@@ -66,8 +66,8 @@ or at your own [VictoriaMetrics instance](https://docs.victoriametrics.com/victo
The list of MetricsQL features on top of PromQL:
* Graphite-compatible filters can be passed via `{__graphite__="foo.*.bar"}` syntax.
See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite#selecting-graphite-metrics).
VictoriaMetrics can be used as Graphite datasource in Grafana. See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite#graphite-api-usage) for details.
See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite/#selecting-graphite-metrics).
VictoriaMetrics can be used as Graphite datasource in Grafana. See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite/#graphite-api-usage) for details.
See also [label_graphite_group](#label_graphite_group) function, which can be used for extracting the given groups from Graphite metric name.
* Lookbehind window in square brackets for [rollup functions](#rollup-functions) may be omitted. VictoriaMetrics automatically selects the lookbehind window
depending on the `step` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#range-query)
@@ -742,7 +742,23 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k
This function is supported by PromQL.
See also [irate](#irate) and [rollup_rate](#rollup_rate).
See also [irate](#irate), [rollup_rate](#rollup_rate) and [rate_prometheus](#rate_prometheus).
#### rate_prometheus
`rate_prometheus(series_selector[d])` {{% available_from "#" %}} is a [rollup function](#rollup-functions), which calculates the average per-second
increase rate over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#filtering).
The resulting calculation is equivalent to `increase_prometheus(series_selector[d]) / d`.
It doesn't take into account the last sample before the given lookbehind window `d` when calculating the result in the same way as Prometheus does.
See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details.
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
This function is usually applied to [counters](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#counter).
See also [increase_prometheus](#increase_prometheus) and [rate](#rate).
#### rate_over_sum

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -35,10 +35,10 @@
<meta property="og:title" content="UI for VictoriaLogs">
<meta property="og:url" content="https://victoriametrics.com/products/victorialogs/">
<meta property="og:description" content="Explore your log data with VictoriaLogs UI">
<script type="module" crossorigin src="./assets/index-DLp5TlUn.js"></script>
<link rel="modulepreload" crossorigin href="./assets/vendor-D8IJGiEn.js">
<script type="module" crossorigin src="./assets/index-721xTF8u.js"></script>
<link rel="modulepreload" crossorigin href="./assets/vendor-V4vnRsM-.js">
<link rel="stylesheet" crossorigin href="./assets/vendor-D1GxaB_c.css">
<link rel="stylesheet" crossorigin href="./assets/index-C85_NB5q.css">
<link rel="stylesheet" crossorigin href="./assets/index-C36SC0pJ.css">
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>

View File

@@ -253,8 +253,11 @@ func processForceFlush(w http.ResponseWriter, r *http.Request) bool {
return true
}
// Storage implements insertutil.LogRowsStorage interface
type Storage struct{}
// CanWriteData returns non-nil error if it cannot write data to vlstorage
func CanWriteData() error {
func (*Storage) CanWriteData() error {
if localStorage == nil {
// The data can always be written in non-local mode.
return nil
@@ -273,7 +276,7 @@ func CanWriteData() error {
// MustAddRows adds lr to vlstorage
//
// It is advised to call CanWriteData() before calling MustAddRows()
func MustAddRows(lr *logstorage.LogRows) {
func (*Storage) MustAddRows(lr *logstorage.LogRows) {
if localStorage != nil {
// Store lr in the local storage.
localStorage.MustAddRows(lr)

View File

@@ -248,6 +248,9 @@ func (sn *storageNode) executeRequestAt(ctx context.Context, path string, args u
if err != nil {
logger.Panicf("BUG: unexpected error when creating a request: %s", err)
}
if err := sn.ac.SetHeaders(req, true); err != nil {
return nil, fmt.Errorf("cannot set auth headers for %q: %w", reqURL, err)
}
// send the request to the storage node
resp, err := sn.c.Do(req)

View File

@@ -89,15 +89,18 @@ func (t *Type) ValidateExpr(expr string) error {
return nil
}
// SupportedType returns true if the given datasource type is supported
func SupportedType(dsType string) bool {
return dsType == "graphite" || dsType == "prometheus" || dsType == "vlogs"
}
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (t *Type) UnmarshalYAML(unmarshal func(any) error) error {
var s string
if err := unmarshal(&s); err != nil {
return err
}
switch s {
case "graphite", "prometheus", "vlogs":
default:
if !SupportedType(s) {
return fmt.Errorf("unknown datasource type=%q, want prometheus, graphite or vlogs", s)
}
t.Name = s

View File

@@ -148,9 +148,13 @@ func main() {
if err != nil {
logger.Fatalf("failed to init datasource: %s", err)
}
if err := replay(groupsCfg, q, rw); err != nil {
totalRows, droppedRows, err := replay(groupsCfg, q, rw)
if err != nil {
logger.Fatalf("replay failed: %s", err)
}
if droppedRows > 0 {
logger.Fatalf("failed to push all generated samples to remote write url, dropped %d samples out of %d", droppedRows, totalRows)
}
logger.Infof("replay succeed!")
return
}

View File

@@ -216,7 +216,7 @@ var (
)
// GetDroppedRows returns value of droppedRows metric
func GetDroppedRows() int64 { return int64(droppedRows.Get()) }
func GetDroppedRows() int { return int(droppedRows.Get()) }
// flush is a blocking function that marshals WriteRequest and sends
// it to remote-write endpoint. Flush performs limited amount of retries

View File

@@ -20,9 +20,8 @@ var (
"The time filter in RFC3339 format to finish the replay by. E.g. '2020-01-01T20:07:00Z'. "+
"By default, is set to the current time.")
replayRulesDelay = flag.Duration("replay.rulesDelay", time.Second,
"Delay between rules evaluation within the group. Could be important if there are chained rules inside the group "+
"and processing need to wait for previous rule results to be persisted by remote storage before evaluating the next rule."+
"Keep it equal or bigger than -remoteWrite.flushInterval.")
"Delay before evaluating the next rule within the group. Is important for chained rules. "+
"Keep it equal or bigger than -remoteWrite.flushInterval. When set to >0, replay ignores group's concurrency setting.")
replayMaxDatapoints = flag.Int("replay.maxDatapointsPerQuery", 1e3,
"Max number of data points expected in one request. It affects the max time range for every '/query_range' request during the replay. The higher the value, the less requests will be made during replay.")
replayRuleRetryAttempts = flag.Int("replay.ruleRetryAttempts", 5,
@@ -31,13 +30,13 @@ var (
"Progress bar rendering might be verbose or break the logs parsing, so it is recommended to be disabled when not used in interactive mode.")
)
func replay(groupsCfg []config.Group, qb datasource.QuerierBuilder, rw remotewrite.RWClient) error {
func replay(groupsCfg []config.Group, qb datasource.QuerierBuilder, rw remotewrite.RWClient) (totalRows, droppedRows int, err error) {
if *replayMaxDatapoints < 1 {
return fmt.Errorf("replay.maxDatapointsPerQuery can't be lower than 1")
return 0, 0, fmt.Errorf("replay.maxDatapointsPerQuery can't be lower than 1")
}
tFrom, err := time.Parse(time.RFC3339, *replayFrom)
if err != nil {
return fmt.Errorf("failed to parse replay.timeFrom=%q: %w", *replayFrom, err)
return 0, 0, fmt.Errorf("failed to parse replay.timeFrom=%q: %w", *replayFrom, err)
}
// use tFrom location for default value, otherwise filters could have different locations
@@ -45,12 +44,12 @@ func replay(groupsCfg []config.Group, qb datasource.QuerierBuilder, rw remotewri
if *replayTo != "" {
tTo, err = time.Parse(time.RFC3339, *replayTo)
if err != nil {
return fmt.Errorf("failed to parse replay.timeTo=%q: %w", *replayTo, err)
return 0, 0, fmt.Errorf("failed to parse replay.timeTo=%q: %w", *replayTo, err)
}
}
if !tTo.After(tFrom) {
return fmt.Errorf("replay.timeTo=%v must be bigger than replay.timeFrom=%v", tTo, tFrom)
return 0, 0, fmt.Errorf("replay.timeTo=%v must be bigger than replay.timeFrom=%v", tTo, tFrom)
}
labels := make(map[string]string)
for _, s := range *externalLabels {
@@ -59,7 +58,7 @@ func replay(groupsCfg []config.Group, qb datasource.QuerierBuilder, rw remotewri
}
n := strings.IndexByte(s, '=')
if n < 0 {
return fmt.Errorf("missing '=' in `-label`. It must contain label in the form `name=value`; got %q", s)
return 0, 0, fmt.Errorf("missing '=' in `-label`. It must contain label in the form `name=value`; got %q", s)
}
labels[s[:n]] = s[n+1:]
}
@@ -70,18 +69,14 @@ func replay(groupsCfg []config.Group, qb datasource.QuerierBuilder, rw remotewri
"\nmax data points per request: %d\n",
tFrom, tTo, *replayMaxDatapoints)
var total int
for _, cfg := range groupsCfg {
ng := rule.NewGroup(cfg, qb, *evaluationInterval, labels)
total += ng.Replay(tFrom, tTo, rw, *replayMaxDatapoints, *replayRuleRetryAttempts, *replayRulesDelay, *disableProgressBar)
totalRows += ng.Replay(tFrom, tTo, rw, *replayMaxDatapoints, *replayRuleRetryAttempts, *replayRulesDelay, *disableProgressBar)
}
logger.Infof("replay evaluation finished, generated %d samples", total)
logger.Infof("replay evaluation finished, generated %d samples", totalRows)
if err := rw.Close(); err != nil {
return err
return 0, 0, err
}
droppedRows := remotewrite.GetDroppedRows()
if droppedRows > 0 {
return fmt.Errorf("failed to push all generated samples to remote write url, dropped %d samples out of %d", droppedRows, total)
}
return nil
droppedRows = remotewrite.GetDroppedRows()
return totalRows, droppedRows, nil
}

View File

@@ -8,38 +8,45 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
)
type fakeReplayQuerier struct {
datasource.FakeQuerier
registry map[string]map[string]struct{}
registry map[string]map[string][]datasource.Metric
}
func (fr *fakeReplayQuerier) BuildWithParams(_ datasource.QuerierParams) datasource.Querier {
return fr
}
type fakeRWClient struct{}
func (fc *fakeRWClient) Push(_ prompbmarshal.TimeSeries) error {
return nil
}
func (fc *fakeRWClient) Close() error {
return nil
}
func (fr *fakeReplayQuerier) QueryRange(_ context.Context, q string, from, to time.Time) (res datasource.Result, err error) {
key := fmt.Sprintf("%s+%s", from.Format("15:04:05"), to.Format("15:04:05"))
dps, ok := fr.registry[q]
if !ok {
return res, fmt.Errorf("unexpected query received: %q", q)
}
_, ok = dps[key]
metrics, ok := dps[key]
if !ok {
return res, fmt.Errorf("unexpected time range received: %q", key)
}
delete(dps, key)
if len(fr.registry[q]) < 1 {
delete(fr.registry, q)
}
res.Data = metrics
return res, nil
}
func TestReplay(t *testing.T) {
f := func(from, to string, maxDP int, cfg []config.Group, qb *fakeReplayQuerier) {
f := func(from, to string, maxDP int, ruleDelay time.Duration, cfg []config.Group, qb *fakeReplayQuerier, expectTotalRows int) {
t.Helper()
fromOrig, toOrig, maxDatapointsOrig := *replayFrom, *replayTo, *replayMaxDatapoints
@@ -51,90 +58,172 @@ func TestReplay(t *testing.T) {
}()
*replayRuleRetryAttempts = 1
*replayRulesDelay = time.Millisecond
rwb := &remotewrite.DebugClient{}
*replayRulesDelay = ruleDelay
rwb := &fakeRWClient{}
*replayFrom = from
*replayTo = to
*replayMaxDatapoints = maxDP
if err := replay(cfg, qb, rwb); err != nil {
totalRows, _, err := replay(cfg, qb, rwb)
if err != nil {
t.Fatalf("replay failed: %s", err)
}
if len(qb.registry) > 0 {
t.Fatalf("not all requests were sent: %#v", qb.registry)
if totalRows != expectTotalRows {
t.Fatalf("unexpected total rows count: got %d, want %d", totalRows, expectTotalRows)
}
}
// one rule + one response
f("2021-01-01T12:00:00.000Z", "2021-01-01T12:02:00.000Z", 10, []config.Group{
f("2021-01-01T12:00:00.000Z", "2021-01-01T12:02:00.000Z", 10, time.Millisecond, []config.Group{
{Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}},
}, &fakeReplayQuerier{
registry: map[string]map[string]struct{}{
"sum(up)": {"12:00:00+12:02:00": {}},
registry: map[string]map[string][]datasource.Metric{
"sum(up)": {"12:00:00+12:02:00": {
{
Timestamps: []int64{1},
Values: []float64{1},
},
}},
},
})
}, 1)
// one rule + multiple responses
f("2021-01-01T12:00:00.000Z", "2021-01-01T12:02:30.000Z", 1, []config.Group{
f("2021-01-01T12:00:00.000Z", "2021-01-01T12:02:30.000Z", 1, time.Millisecond, []config.Group{
{Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}},
}, &fakeReplayQuerier{
registry: map[string]map[string]struct{}{
registry: map[string]map[string][]datasource.Metric{
"sum(up)": {
"12:00:00+12:01:00": {},
"12:00:00+12:01:00": {
{
Timestamps: []int64{1},
Values: []float64{1},
},
},
"12:01:00+12:02:00": {},
"12:02:00+12:02:30": {},
"12:02:00+12:02:30": {
{
Timestamps: []int64{1},
Values: []float64{1},
},
},
},
},
})
}, 2)
// datapoints per step
f("2021-01-01T12:00:00.000Z", "2021-01-01T15:02:30.000Z", 60, []config.Group{
f("2021-01-01T12:00:00.000Z", "2021-01-01T15:02:30.000Z", 60, time.Millisecond, []config.Group{
{Interval: promutil.NewDuration(time.Minute), Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}},
}, &fakeReplayQuerier{
registry: map[string]map[string]struct{}{
registry: map[string]map[string][]datasource.Metric{
"sum(up)": {
"12:00:00+13:00:00": {},
"13:00:00+14:00:00": {},
"12:00:00+13:00:00": {
{
Timestamps: []int64{1, 2},
Values: []float64{1, 2},
},
},
"13:00:00+14:00:00": {
{
Timestamps: []int64{1},
Values: []float64{1},
},
},
"14:00:00+15:00:00": {},
"15:00:00+15:02:30": {},
},
},
})
}, 3)
// multiple recording rules + multiple responses
f("2021-01-01T12:00:00.000Z", "2021-01-01T12:02:30.000Z", 1, []config.Group{
f("2021-01-01T12:00:00.000Z", "2021-01-01T12:02:30.000Z", 1, time.Millisecond, []config.Group{
{Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}},
{Rules: []config.Rule{{Record: "bar", Expr: "max(up)"}}},
}, &fakeReplayQuerier{
registry: map[string]map[string]struct{}{
registry: map[string]map[string][]datasource.Metric{
"sum(up)": {
"12:00:00+12:01:00": {},
"12:00:00+12:01:00": {
{
Timestamps: []int64{1, 2},
Values: []float64{1, 2},
},
},
"12:01:00+12:02:00": {},
"12:02:00+12:02:30": {},
},
"max(up)": {
"12:00:00+12:01:00": {},
"12:01:00+12:02:00": {},
"12:01:00+12:02:00": {
{
Timestamps: []int64{1, 2},
Values: []float64{1, 2},
},
},
"12:02:00+12:02:30": {},
},
},
})
}, 4)
// multiple alerting rules + multiple responses
f("2021-01-01T12:00:00.000Z", "2021-01-01T12:02:30.000Z", 1, []config.Group{
// an alerting rule generates two series, `ALERTS` and `ALERTS_FOR_STATE`, when triggered
f("2021-01-01T12:00:00.000Z", "2021-01-01T12:02:30.000Z", 1, time.Millisecond, []config.Group{
{Rules: []config.Rule{{Alert: "foo", Expr: "sum(up) > 1"}}},
{Rules: []config.Rule{{Alert: "bar", Expr: "max(up) < 1"}}},
}, &fakeReplayQuerier{
registry: map[string]map[string]struct{}{
registry: map[string]map[string][]datasource.Metric{
"sum(up) > 1": {
"12:00:00+12:01:00": {},
"12:00:00+12:01:00": {
{
Timestamps: []int64{1, 2},
Values: []float64{1, 2},
},
},
"12:01:00+12:02:00": {},
"12:02:00+12:02:30": {},
},
"max(up) < 1": {
"12:00:00+12:01:00": {},
"12:00:00+12:01:00": {
{
Timestamps: []int64{1},
Values: []float64{1},
},
},
"12:01:00+12:02:00": {},
"12:02:00+12:02:30": {},
},
},
})
}, 6)
// multiple recording rules in one group + multiple responses + concurrency
f("2021-01-01T12:00:00.000Z", "2021-01-01T12:02:30.000Z", 1, 0, []config.Group{
{Rules: []config.Rule{{Record: "foo", Expr: "sum(up) > 1"}, {Record: "bar", Expr: "max(up) < 1"}}, Concurrency: 2}}, &fakeReplayQuerier{
registry: map[string]map[string][]datasource.Metric{
"sum(up) > 1": {
"12:00:00+12:01:00": {
{
Timestamps: []int64{1},
Values: []float64{1},
},
},
"12:01:00+12:02:00": {
{
Timestamps: []int64{1},
Values: []float64{1},
},
},
"12:02:00+12:02:30": {
{
Timestamps: []int64{1},
Values: []float64{1},
},
},
},
"max(up) < 1": {
"12:00:00+12:01:00": {},
"12:01:00+12:02:00": {{
Timestamps: []int64{1},
Values: []float64{1},
}},
"12:02:00+12:02:30": {},
},
},
}, 4)
}
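
The expected totals passed to f follow simple arithmetic; a sanity check under the assumption stated in the comment above (recording rules emit one row per returned sample, triggered alerting rules emit two):

```go
package main

import "fmt"

func main() {
	// recording-rules case: one row per returned sample
	recordingSamples := 2 + 1 + 1 // samples returned for sum(up) and max(up)
	fmt.Println(recordingSamples) // 4, matches the multiple-recording-rules case

	// alerting-rules case: each sample yields an ALERTS point
	// and an ALERTS_FOR_STATE point
	alertingSamples := 2 + 1         // samples for sum(up) > 1 and max(up) < 1
	fmt.Println(alertingSamples * 2) // 6, matches the multiple-alerting-rules case
}
```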

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"hash/fnv"
"math"
"sort"
"strings"
"sync"
@@ -335,7 +336,9 @@ func (ar *AlertingRule) execRange(ctx context.Context, start, end time.Time) ([]
var result []prompbmarshal.TimeSeries
holdAlertState := make(map[uint64]*notifier.Alert)
qFn := func(_ string) ([]datasource.Metric, error) {
return nil, fmt.Errorf("`query` template isn't supported in replay mode")
logger.Warnf("`query` template isn't supported in replay mode, mocked data is used")
// mock query results to allow the commonly used template {{ query <$expr> | first | value }}
return []datasource.Metric{{Timestamps: []int64{0}, Values: []float64{math.NaN()}}}, nil
}
for _, s := range res.Data {
ls, as, err := ar.expandTemplates(s, qFn, time.Time{})
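
A self-contained sketch of what the mocked qFn above yields during replay; the Metric type here is a local stand-in for datasource.Metric:

```go
package main

import (
	"fmt"
	"math"
)

// Metric mirrors the minimal shape of datasource.Metric used here.
type Metric struct {
	Timestamps []int64
	Values     []float64
}

func main() {
	// replay-mode mock: any `query` template call yields one NaN sample,
	// so templates expand without failing the whole replay.
	qFn := func(_ string) ([]Metric, error) {
		return []Metric{{Timestamps: []int64{0}, Values: []float64{math.NaN()}}}, nil
	}
	ms, _ := qFn("up")
	fmt.Println(ms[0].Values[0]) // NaN
}
```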
@@ -413,7 +416,7 @@ func (ar *AlertingRule) exec(ctx context.Context, ts time.Time, limit int) ([]pr
return nil, fmt.Errorf("failed to execute query %q: %w", ar.Expr, err)
}
ar.logDebugf(ts, nil, "query returned %d samples (elapsed: %s, isPartial: %t)", curState.Samples, curState.Duration, isPartialResponse(res))
ar.logDebugf(ts, nil, "query returned %d series (elapsed: %s, isPartial: %t)", curState.Samples, curState.Duration, isPartialResponse(res))
qFn := func(query string) ([]datasource.Metric, error) {
res, _, err := ar.q.Query(ctx, query, ts)
return res.Data, err

View File

@@ -445,11 +445,17 @@ func (g *Group) Start(ctx context.Context, nts func() []notifier.Notifier, rw re
g.infof("re-started")
case <-t.C:
missed := (time.Since(evalTS) / g.Interval) - 1
// calculate the real wall clock offset by stripping the monotonic clock first,
// so that evalTS can be corrected when the wall clock is adjusted.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8790#issuecomment-2986541829
offset := time.Now().Round(0).Sub(evalTS.Round(0))
missed := (offset / g.Interval) - 1
if missed < 0 {
// missed can become < 0 due to irregular delays during evaluation
// which can result in time.Since(evalTS) < g.Interval
// which can result in time.Since(evalTS) < g.Interval;
// or because the system wall clock was moved backward
missed = 0
evalTS = time.Now()
}
if missed > 0 {
g.metrics.iterationMissed.Inc()
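
The Round(0) trick above relies on documented time package behavior; a minimal, self-contained sketch (values illustrative):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const interval = time.Minute
	evalTS := time.Now()

	// time.Now() carries a monotonic clock reading, and Sub() between two
	// such values ignores wall-clock adjustments. Round(0) strips the
	// monotonic reading, so the subtraction below compares wall clocks
	// and will reflect a manual clock change.
	offset := time.Now().Round(0).Sub(evalTS.Round(0))
	missed := (offset / interval) - 1
	if missed < 0 {
		missed = 0 // irregular delays or a backward clock step
	}
	fmt.Println(missed)
}
```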
@@ -514,36 +520,84 @@ func (g *Group) Replay(start, end time.Time, rw remotewrite.RWClient, maxDataPoi
iterations := int(end.Sub(start)/step) + 1
fmt.Printf("\nGroup %q"+
"\ninterval: \t%v"+
"\nrequests to make: \t%d"+
"\nconcurrency: \t %d"+
"\nrequests to make per rule: \t%d"+
"\nmax range per request: \t%v\n",
g.Name, g.Interval, iterations, step)
g.Name, g.Interval, g.Concurrency, iterations, step)
if g.Limit > 0 {
fmt.Printf("\nPlease note, `limit: %d` param has no effect during replay.\n",
fmt.Printf("\nWarning: `limit: %d` param has no effect during replay.\n",
g.Limit)
}
for _, rule := range g.Rules {
fmt.Printf("> Rule %q (ID: %d)\n", rule, rule.ID())
var bar *pb.ProgressBar
if !disableProgressBar {
bar = pb.StartNew(iterations)
}
ri.reset()
for ri.next() {
n, err := replayRule(rule, ri.s, ri.e, rw, replayRuleRetryAttempts)
if err != nil {
logger.Fatalf("rule %q: %s", rule, err)
concurrency := g.Concurrency
if g.Concurrency > 1 && replayDelay > 0 {
fmt.Printf("\nWarning: group concurrency %d will be ignored since `-replay.rulesDelay` is %.3f seconds."+
" Set -replay.rulesDelay=0 to enable concurrency for replay.\n", g.Concurrency, replayDelay.Seconds())
concurrency = 1
}
if concurrency == 1 {
for _, rule := range g.Rules {
var bar *pb.ProgressBar
if !disableProgressBar {
bar = pb.StartNew(iterations)
}
total += n
// pass ri as a copy so it can be modified within replayRuleRange
total += replayRuleRange(rule, ri, bar, rw, replayRuleRetryAttempts)
if bar != nil {
bar.Increment()
bar.Finish()
}
// sleep to let remote storage flush data on disk
// so chained rules can be calculated correctly
time.Sleep(replayDelay)
}
return total
}
sem := make(chan struct{}, g.Concurrency)
res := make(chan int, len(g.Rules)*iterations)
wg := sync.WaitGroup{}
var bar *pb.ProgressBar
if !disableProgressBar {
bar = pb.StartNew(iterations * len(g.Rules))
}
for _, r := range g.Rules {
sem <- struct{}{}
wg.Add(1)
go func(r Rule, ri rangeIterator) {
// pass ri as a copy so it can be modified within replayRuleRange
res <- replayRuleRange(r, ri, bar, rw, replayRuleRetryAttempts)
<-sem
wg.Done()
}(r, ri)
}
wg.Wait()
close(res)
close(sem)
if bar != nil {
bar.Finish()
}
total = 0
for n := range res {
total += n
}
return total
}
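
The goroutine fan-out above is a standard bounded-concurrency pattern; a minimal, self-contained sketch (task payloads illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	tasks := []int{1, 2, 3, 4, 5}
	concurrency := 2

	// a buffered channel acts as a semaphore limiting in-flight goroutines,
	// and a result channel collects per-task counts
	sem := make(chan struct{}, concurrency)
	res := make(chan int, len(tasks))
	var wg sync.WaitGroup
	for _, t := range tasks {
		sem <- struct{}{} // blocks while `concurrency` tasks are running
		wg.Add(1)
		go func(t int) {
			defer wg.Done()
			res <- t // stand-in for replayRuleRange(...)
			<-sem
		}(t)
	}
	wg.Wait()
	close(res)

	total := 0
	for n := range res {
		total += n
	}
	fmt.Println(total) // 15
}
```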
func replayRuleRange(r Rule, ri rangeIterator, bar *pb.ProgressBar, rw remotewrite.RWClient, replayRuleRetryAttempts int) int {
fmt.Printf("> Rule %q (ID: %d)\n", r, r.ID())
total := 0
for ri.next() {
n, err := replayRule(r, ri.s, ri.e, rw, replayRuleRetryAttempts)
if err != nil {
logger.Fatalf("rule %q: %s", r, err)
}
if bar != nil {
bar.Finish()
bar.Increment()
}
// sleep to let remote storage flush data on disk
// so chained rules can be calculated correctly
time.Sleep(replayDelay)
total += n
}
return total
}
@@ -570,11 +624,10 @@ type rangeIterator struct {
s, e time.Time
}
func (ri *rangeIterator) reset() {
ri.iter = 0
ri.s, ri.e = time.Time{}, time.Time{}
}
// next iterates with given step between start and end
// by modifying iter, s and e.
// Returns true until it reaches end.
// next modifies ri and isn't thread-safe.
func (ri *rangeIterator) next() bool {
ri.s = ri.start.Add(ri.step * time.Duration(ri.iter))
if !ri.end.After(ri.s) {
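
The hunk above cuts off mid-function; below is a self-contained reconstruction of the iterator's splitting behavior, assumed from the time-range keys in the replay tests earlier in this PR (e.g. "12:00:00+12:01:00"):

```go
package main

import (
	"fmt"
	"time"
)

// rangeIterator splits [start, end) into consecutive windows of at most `step`
// (local stand-in for vmalert's rangeIterator; the tail of next() is assumed).
type rangeIterator struct {
	step       time.Duration
	start, end time.Time
	iter       int
	s, e       time.Time
}

func (ri *rangeIterator) next() bool {
	ri.s = ri.start.Add(ri.step * time.Duration(ri.iter))
	if !ri.end.After(ri.s) {
		return false
	}
	ri.e = ri.s.Add(ri.step)
	if ri.e.After(ri.end) {
		ri.e = ri.end // clamp the last window to the replay end
	}
	ri.iter++
	return true
}

func main() {
	start := time.Date(2021, 1, 1, 12, 0, 0, 0, time.UTC)
	ri := rangeIterator{step: time.Minute, start: start, end: start.Add(150 * time.Second)}
	for ri.next() {
		fmt.Println(ri.s.Format("15:04:05"), "->", ri.e.Format("15:04:05"))
	}
	// 12:00:00 -> 12:01:00
	// 12:01:00 -> 12:02:00
	// 12:02:00 -> 12:02:30
}
```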

View File

@@ -9,6 +9,7 @@ import (
"strconv"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/rule"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/tpl"
@@ -27,6 +28,7 @@ var (
// such as Grafana, and proxied via vmselect.
{"api/v1/rules", "list all loaded groups and rules"},
{"api/v1/alerts", "list all active alerts"},
{"api/v1/notifiers", "list all notifiers"},
{fmt.Sprintf("api/v1/alert?%s=<int>&%s=<int>", paramGroupID, paramAlertID), "get alert status by group and alert ID"},
}
systemLinks = [][2]string{
@@ -42,6 +44,10 @@ var (
{Name: "Notifiers", URL: "notifiers"},
{Name: "Docs", URL: "https://docs.victoriametrics.com/victoriametrics/vmalert/"},
}
ruleTypeMap = map[string]string{
"alert": ruleTypeAlerting,
"record": ruleTypeRecording,
}
)
type requestHandler struct {
@@ -89,10 +95,13 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
WriteRuleDetails(w, r, rule)
return true
case "/vmalert/groups":
filter := r.URL.Query().Get("filter")
rf := extractRulesFilter(r, filter)
rf, err := newRulesFilter(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
data := rh.groups(rf)
WriteListGroups(w, r, data, filter)
WriteListGroups(w, r, data, rf.filter)
return true
case "/vmalert/notifiers":
WriteListTargets(w, r, notifier.GetTargets())
@@ -102,23 +111,35 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
// served without `vmalert` prefix:
case "/rules":
// Grafana makes an extra request to `/rules`
// handler in addition to `/api/v1/rules` calls in alerts UI,
var data []apiGroup
filter := r.URL.Query().Get("filter")
rf := extractRulesFilter(r, filter)
// handler in addition to `/api/v1/rules` calls in alerts UI
var data []*apiGroup
rf, err := newRulesFilter(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
data = rh.groups(rf)
WriteListGroups(w, r, data, filter)
WriteListGroups(w, r, data, rf.filter)
return true
case "/vmalert/api/v1/notifiers", "/api/v1/notifiers":
data, err := rh.listNotifiers()
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
w.Header().Set("Content-Type", "application/json")
w.Write(data)
return true
case "/vmalert/api/v1/rules", "/api/v1/rules":
// path used by Grafana for ng alerting
var data []byte
var err error
filter := r.URL.Query().Get("filter")
rf := extractRulesFilter(r, filter)
rf, err := newRulesFilter(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
data, err = rh.listGroups(rf)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
@@ -129,7 +150,12 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
case "/vmalert/api/v1/alerts", "/api/v1/alerts":
// path used by Grafana for ng alerting
data, err := rh.listAlerts()
rf, err := newRulesFilter(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
data, err := rh.listAlerts(rf)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
@@ -218,7 +244,7 @@ func (rh *requestHandler) getAlert(r *http.Request) (*apiAlert, error) {
type listGroupsResponse struct {
Status string `json:"status"`
Data struct {
Groups []apiGroup `json:"groups"`
Groups []*apiGroup `json:"groups"`
} `json:"data"`
}
@@ -229,82 +255,102 @@ type rulesFilter struct {
ruleNames []string
ruleType string
excludeAlerts bool
onlyUnhealthy bool
onlyNoMatch bool
filter string
dsType config.Type
}
func extractRulesFilter(r *http.Request, filter string) rulesFilter {
rf := rulesFilter{}
func newRulesFilter(r *http.Request) (*rulesFilter, error) {
rf := &rulesFilter{}
query := r.URL.Query()
var ruleType string
ruleTypeParam := r.URL.Query().Get("type")
// for some reason, `type` in filter doesn't match `type` in response,
// so we use this matching here
if ruleTypeParam == "alert" {
ruleType = ruleTypeAlerting
} else if ruleTypeParam == "record" {
ruleType = ruleTypeRecording
ruleTypeParam := query.Get("type")
if len(ruleTypeParam) > 0 {
if ruleType, ok := ruleTypeMap[ruleTypeParam]; ok {
rf.ruleType = ruleType
} else {
return nil, errResponse(fmt.Errorf(`invalid parameter "type": not supported value %q`, ruleTypeParam), http.StatusBadRequest)
}
}
dsType := query.Get("datasource_type")
if len(dsType) > 0 {
if config.SupportedType(dsType) {
rf.dsType = config.NewRawType(dsType)
} else {
return nil, errResponse(fmt.Errorf(`invalid parameter "datasource_type": not supported value %q`, dsType), http.StatusBadRequest)
}
}
filter := strings.ToLower(query.Get("filter"))
if len(filter) > 0 {
if filter == "nomatch" || filter == "unhealthy" {
rf.filter = filter
} else {
return nil, errResponse(fmt.Errorf(`invalid parameter "filter": not supported value %q`, filter), http.StatusBadRequest)
}
}
rf.ruleType = ruleType
rf.excludeAlerts = httputil.GetBool(r, "exclude_alerts")
rf.ruleNames = append([]string{}, r.Form["rule_name[]"]...)
rf.groupNames = append([]string{}, r.Form["rule_group[]"]...)
rf.files = append([]string{}, r.Form["file[]"]...)
switch filter {
case "unhealthy":
rf.onlyUnhealthy = true
case "noMatch":
rf.onlyNoMatch = true
}
return rf
return rf, nil
}
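
The validation pattern in newRulesFilter generalizes to any enum-style query parameter; a minimal sketch with stand-in names (validateType is hypothetical, not part of this PR):

```go
package main

import (
	"fmt"
	"net/url"
)

var ruleTypeMap = map[string]string{
	"alert":  "alerting",
	"record": "recording",
}

// validateType mirrors the pattern above: an empty value means "no filter",
// a known value is mapped, and anything else is rejected (HTTP 400 upstream).
func validateType(q url.Values) (string, error) {
	v := q.Get("type")
	if v == "" {
		return "", nil
	}
	if t, ok := ruleTypeMap[v]; ok {
		return t, nil
	}
	return "", fmt.Errorf(`invalid parameter "type": not supported value %q`, v)
}

func main() {
	for _, raw := range []string{"", "type=alert", "type=records"} {
		q, _ := url.ParseQuery(raw)
		t, err := validateType(q)
		fmt.Printf("%q -> %q, err=%v\n", raw, t, err)
	}
}
```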
func (rh *requestHandler) groups(rf rulesFilter) []apiGroup {
func (rf *rulesFilter) matchesGroup(group *rule.Group) bool {
if len(rf.groupNames) > 0 && !slices.Contains(rf.groupNames, group.Name) {
return false
}
if len(rf.files) > 0 && !slices.Contains(rf.files, group.File) {
return false
}
if len(rf.dsType.Name) > 0 && rf.dsType.String() != group.Type.String() {
return false
}
return true
}
func (rh *requestHandler) groups(rf *rulesFilter) []*apiGroup {
rh.m.groupsMu.RLock()
defer rh.m.groupsMu.RUnlock()
groups := make([]apiGroup, 0)
groups := make([]*apiGroup, 0)
for _, group := range rh.m.groups {
if len(rf.groupNames) > 0 && !slices.Contains(rf.groupNames, group.Name) {
if !rf.matchesGroup(group) {
continue
}
if len(rf.files) > 0 && !slices.Contains(rf.files, group.File) {
continue
}
g := groupToAPI(group)
// the returned list should always be non-nil
// https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4221
filteredRules := make([]apiRule, 0)
for _, r := range g.Rules {
if rf.ruleType != "" && rf.ruleType != r.Type {
for _, rule := range g.Rules {
if rf.ruleType != "" && rf.ruleType != rule.Type {
continue
}
if len(rf.ruleNames) > 0 && !slices.Contains(rf.ruleNames, r.Name) {
if len(rf.ruleNames) > 0 && !slices.Contains(rf.ruleNames, rule.Name) {
continue
}
if (rule.LastError == "" && rf.filter == "unhealthy") || (!isNoMatch(rule) && rf.filter == "nomatch") {
continue
}
if rf.excludeAlerts {
r.Alerts = nil
rule.Alerts = nil
}
if (r.LastError == "" && rf.onlyUnhealthy) || (!isNoMatch(r) && rf.onlyNoMatch) {
continue
}
if r.LastError != "" {
if rule.LastError != "" {
g.Unhealthy++
} else {
g.Healthy++
}
if isNoMatch(r) {
if isNoMatch(rule) {
g.NoMatch++
}
filteredRules = append(filteredRules, r)
filteredRules = append(filteredRules, rule)
}
g.Rules = filteredRules
groups = append(groups, g)
}
// sort list of groups for deterministic output
slices.SortFunc(groups, func(a, b apiGroup) int {
slices.SortFunc(groups, func(a, b *apiGroup) int {
if a.Name != b.Name {
return strings.Compare(a.Name, b.Name)
}
@@ -313,7 +359,7 @@ func (rh *requestHandler) groups(rf rulesFilter) []apiGroup {
return groups
}
func (rh *requestHandler) listGroups(rf rulesFilter) ([]byte, error) {
func (rh *requestHandler) listGroups(rf *rulesFilter) ([]byte, error) {
lr := listGroupsResponse{Status: "success"}
lr.Data.Groups = rh.groups(rf)
b, err := json.Marshal(lr)
@@ -360,14 +406,17 @@ func (rh *requestHandler) groupAlerts() []groupAlerts {
return gAlerts
}
func (rh *requestHandler) listAlerts() ([]byte, error) {
func (rh *requestHandler) listAlerts(rf *rulesFilter) ([]byte, error) {
rh.m.groupsMu.RLock()
defer rh.m.groupsMu.RUnlock()
lr := listAlertsResponse{Status: "success"}
lr.Data.Alerts = make([]*apiAlert, 0)
for _, g := range rh.m.groups {
for _, r := range g.Rules {
for _, group := range rh.m.groups {
if !rf.matchesGroup(group) {
continue
}
for _, r := range group.Rules {
a, ok := r.(*rule.AlertingRule)
if !ok {
continue
@@ -391,6 +440,42 @@ func (rh *requestHandler) listAlerts() ([]byte, error) {
return b, nil
}
type listNotifiersResponse struct {
Status string `json:"status"`
Data struct {
Notifiers []*apiNotifier `json:"notifiers"`
} `json:"data"`
}
func (rh *requestHandler) listNotifiers() ([]byte, error) {
targets := notifier.GetTargets()
lr := listNotifiersResponse{Status: "success"}
lr.Data.Notifiers = make([]*apiNotifier, 0)
for protoName, protoTargets := range targets {
n := &apiNotifier{
Kind: string(protoName),
Targets: make([]*apiTarget, 0, len(protoTargets)),
}
for _, target := range protoTargets {
n.Targets = append(n.Targets, &apiTarget{
Address: target.Notifier.Addr(),
Labels: target.Labels.ToMap(),
})
}
lr.Data.Notifiers = append(lr.Data.Notifiers, n)
}
b, err := json.Marshal(lr)
if err != nil {
return nil, &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf(`error encoding list of notifiers: %w`, err),
StatusCode: http.StatusInternalServerError,
}
}
return b, nil
}
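
A self-contained sketch of the response shape implied by the struct tags above; the types are local stand-ins and all values are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type apiTarget struct {
	Address string            `json:"address"`
	Labels  map[string]string `json:"labels"`
}

type apiNotifier struct {
	Kind    string       `json:"kind"`
	Targets []*apiTarget `json:"targets"`
}

func main() {
	resp := struct {
		Status string `json:"status"`
		Data   struct {
			Notifiers []*apiNotifier `json:"notifiers"`
		} `json:"data"`
	}{Status: "success"}
	resp.Data.Notifiers = []*apiNotifier{{
		Kind: "alertmanager", // illustrative value
		Targets: []*apiTarget{{
			Address: "http://127.0.0.1:9093", // illustrative value
			Labels:  map[string]string{"env": "prod"},
		}},
	}}
	b, _ := json.Marshal(resp)
	fmt.Println(string(b))
	// {"status":"success","data":{"notifiers":[{"kind":"alertmanager",...}]}}
}
```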
func errResponse(err error, sc int) *httpserver.ErrorWithStatusCode {
return &httpserver.ErrorWithStatusCode{
Err: err,

View File

@@ -93,18 +93,18 @@
{%= tpl.Footer(r) %}
{% endfunc %}
{% func ListGroups(r *http.Request, groups []apiGroup, filter string) %}
{% func ListGroups(r *http.Request, groups []*apiGroup, filter string) %}
{%code
prefix := vmalertutil.Prefix(r.URL.Path)
filters := map[string]string{
"": "All",
"unhealthy": "Unhealthy",
"noMatch": "No Match",
"nomatch": "No Match",
}
icons := map[string]string{
"": "all",
"unhealthy": "unhealthy",
"noMatch": "nomatch",
"nomatch": "nomatch",
}
currentText := filters[filter]
currentIcon := icons[filter]
@@ -162,7 +162,7 @@
<thead>
<tr>
<th scope="col" style="width: 60%">Rule</th>
<th scope="col" style="width: 20%" class="text-center" title="How many samples were produced by the rule">Samples</th>
<th scope="col" style="width: 20%" class="text-center" title="How many series were produced by the rule">Series</th>
<th scope="col" style="width: 20%" class="text-center" title="How many seconds ago rule was executed">Updated</th>
</tr>
</thead>
@@ -594,7 +594,7 @@
<thead>
<tr>
<th scope="col" title="The time when event was created">Updated at</th>
<th scope="col" style="width: 10%" class="text-center" title="How many samples were returned">Samples</th>
<th scope="col" style="width: 10%" class="text-center" title="How many series expression returns. Each series will represent an alert.">Series returned</th>
{% if seriesFetchedEnabled %}<th scope="col" style="width: 10%" class="text-center" title="How many series were scanned by datasource during the evaluation">Series fetched</th>{% endif %}
<th scope="col" style="width: 10%" class="text-center" title="How many seconds request took">Duration</th>
<th scope="col" class="text-center" title="Time used for rule execution">Executed at</th>

View File

@@ -316,7 +316,7 @@ func Welcome(r *http.Request) string {
}
//line app/vmalert/web.qtpl:96
func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []apiGroup, filter string) {
func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []*apiGroup, filter string) {
//line app/vmalert/web.qtpl:96
qw422016.N().S(`
`)
@@ -325,12 +325,12 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []apiGr
filters := map[string]string{
"": "All",
"unhealthy": "Unhealthy",
"noMatch": "No Match",
"nomatch": "No Match",
}
icons := map[string]string{
"": "all",
"unhealthy": "unhealthy",
"noMatch": "nomatch",
"nomatch": "nomatch",
}
currentText := filters[filter]
currentIcon := icons[filter]
@@ -524,7 +524,7 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []apiGr
<thead>
<tr>
<th scope="col" style="width: 60%">Rule</th>
<th scope="col" style="width: 20%" class="text-center" title="How many samples were produced by the rule">Samples</th>
<th scope="col" style="width: 20%" class="text-center" title="How many series were produced by the rule">Series</th>
<th scope="col" style="width: 20%" class="text-center" title="How many seconds ago rule was executed">Updated</th>
</tr>
</thead>
@@ -722,7 +722,7 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, groups []apiGr
}
//line app/vmalert/web.qtpl:222
func WriteListGroups(qq422016 qtio422016.Writer, r *http.Request, groups []apiGroup, filter string) {
func WriteListGroups(qq422016 qtio422016.Writer, r *http.Request, groups []*apiGroup, filter string) {
//line app/vmalert/web.qtpl:222
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmalert/web.qtpl:222
@@ -733,7 +733,7 @@ func WriteListGroups(qq422016 qtio422016.Writer, r *http.Request, groups []apiGr
}
//line app/vmalert/web.qtpl:222
func ListGroups(r *http.Request, groups []apiGroup, filter string) string {
func ListGroups(r *http.Request, groups []*apiGroup, filter string) string {
//line app/vmalert/web.qtpl:222
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmalert/web.qtpl:222
@@ -1697,7 +1697,7 @@ func StreamRuleDetails(qw422016 *qt422016.Writer, r *http.Request, rule apiRule)
<thead>
<tr>
<th scope="col" title="The time when event was created">Updated at</th>
<th scope="col" style="width: 10%" class="text-center" title="How many samples were returned">Samples</th>
<th scope="col" style="width: 10%" class="text-center" title="How many series expression returns. Each series will represent an alert.">Series returned</th>
`)
//line app/vmalert/web.qtpl:598
if seriesFetchedEnabled {

View File

@@ -19,25 +19,34 @@ import (
func TestHandler(t *testing.T) {
fq := &datasource.FakeQuerier{}
fq.Add(datasource.Metric{
Values: []float64{1}, Timestamps: []int64{0},
Values: []float64{1},
Timestamps: []int64{0},
})
g := rule.NewGroup(config.Group{
Name: "group",
File: "rules.yaml",
Concurrency: 1,
Rules: []config.Rule{
{ID: 0, Alert: "alert"},
{ID: 1, Record: "record"},
},
}, fq, 1*time.Minute, nil)
ar := g.Rules[0].(*rule.AlertingRule)
rr := g.Rules[1].(*rule.RecordingRule)
g.ExecOnce(context.Background(), func() []notifier.Notifier { return nil }, nil, time.Time{})
m := &manager{groups: map[uint64]*rule.Group{
g.CreateID(): g,
}}
m := &manager{groups: map[uint64]*rule.Group{}}
var ar *rule.AlertingRule
var rr *rule.RecordingRule
for _, dsType := range []string{"prometheus", "", "graphite"} {
g := rule.NewGroup(config.Group{
Name: "group",
File: "rules.yaml",
Type: config.NewRawType(dsType),
Concurrency: 1,
Rules: []config.Rule{
{
ID: 0,
Alert: "alert",
},
{
ID: 1,
Record: "record",
},
},
}, fq, 1*time.Minute, nil)
ar = g.Rules[0].(*rule.AlertingRule)
rr = g.Rules[1].(*rule.RecordingRule)
g.ExecOnce(context.Background(), func() []notifier.Notifier { return nil }, nil, time.Time{})
m.groups[g.CreateID()] = g
}
rh := &requestHandler{m: m}
getResp := func(t *testing.T, url string, to any, code int) {
@@ -54,7 +63,7 @@ func TestHandler(t *testing.T) {
t.Fatalf("err closing body %s", err)
}
}()
if to != nil {
if to != nil && code < 300 {
if err = json.NewDecoder(resp.Body).Decode(to); err != nil {
t.Fatalf("unexpected err %s", err)
}
@@ -95,14 +104,23 @@ func TestHandler(t *testing.T) {
t.Run("/api/v1/alerts", func(t *testing.T) {
lr := listAlertsResponse{}
getResp(t, ts.URL+"/api/v1/alerts", &lr, 200)
if length := len(lr.Data.Alerts); length != 1 {
t.Fatalf("expected 1 alert got %d", length)
if length := len(lr.Data.Alerts); length != 3 {
t.Fatalf("expected 3 alert got %d", length)
}
lr = listAlertsResponse{}
getResp(t, ts.URL+"/vmalert/api/v1/alerts", &lr, 200)
if length := len(lr.Data.Alerts); length != 1 {
t.Fatalf("expected 1 alert got %d", length)
if length := len(lr.Data.Alerts); length != 3 {
t.Fatalf("expected 3 alert got %d", length)
}
lr = listAlertsResponse{}
getResp(t, ts.URL+"/api/v1/alerts?datasource_type=test", &lr, 400)
lr = listAlertsResponse{}
getResp(t, ts.URL+"/api/v1/alerts?datasource_type=prometheus", &lr, 200)
if length := len(lr.Data.Alerts); length != 2 {
t.Fatalf("expected 2 alert got %d", length)
}
})
t.Run("/api/v1/alert?alertID&groupID", func(t *testing.T) {
@@ -138,14 +156,14 @@ func TestHandler(t *testing.T) {
t.Run("/api/v1/rules", func(t *testing.T) {
lr := listGroupsResponse{}
getResp(t, ts.URL+"/api/v1/rules", &lr, 200)
if length := len(lr.Data.Groups); length != 1 {
t.Fatalf("expected 1 group got %d", length)
if length := len(lr.Data.Groups); length != 3 {
t.Fatalf("expected 3 group got %d", length)
}
lr = listGroupsResponse{}
getResp(t, ts.URL+"/vmalert/api/v1/rules", &lr, 200)
if length := len(lr.Data.Groups); length != 1 {
t.Fatalf("expected 1 group got %d", length)
if length := len(lr.Data.Groups); length != 3 {
t.Fatalf("expected 3 group got %d", length)
}
})
t.Run("/api/v1/rule?ruleID&groupID", func(t *testing.T) {
@@ -172,10 +190,10 @@ func TestHandler(t *testing.T) {
})
t.Run("/api/v1/rules&filters", func(t *testing.T) {
check := func(url string, expGroups, expRules int) {
check := func(url string, statusCode, expGroups, expRules int) {
t.Helper()
lr := listGroupsResponse{}
getResp(t, ts.URL+url, &lr, 200)
getResp(t, ts.URL+url, &lr, statusCode)
if length := len(lr.Data.Groups); length != expGroups {
t.Fatalf("expected %d groups got %d", expGroups, length)
}
@@ -191,25 +209,31 @@ func TestHandler(t *testing.T) {
}
}
check("/api/v1/rules?type=alert", 1, 1)
check("/api/v1/rules?type=record", 1, 1)
check("/api/v1/rules?type=alert", 200, 3, 3)
check("/api/v1/rules?type=record", 200, 3, 3)
check("/api/v1/rules?type=records", 400, 0, 0)
check("/vmalert/api/v1/rules?type=alert", 1, 1)
check("/vmalert/api/v1/rules?type=record", 1, 1)
check("/vmalert/api/v1/rules?type=alert", 200, 3, 3)
check("/vmalert/api/v1/rules?type=record", 200, 3, 3)
check("/vmalert/api/v1/rules?type=recording", 400, 0, 0)
check("/vmalert/api/v1/rules?datasource_type=prometheus", 200, 2, 4)
check("/vmalert/api/v1/rules?datasource_type=graphite", 200, 1, 2)
check("/vmalert/api/v1/rules?datasource_type=graphiti", 400, 0, 0)
// unknown params are still ignored, but unsupported values for known params now return 400
check("/api/v1/rules?type=badParam", 1, 2)
check("/api/v1/rules?foo=bar", 1, 2)
check("/api/v1/rules?type=badParam", 400, 0, 0)
check("/api/v1/rules?foo=bar", 200, 3, 6)
check("/api/v1/rules?rule_group[]=foo&rule_group[]=bar", 0, 0)
check("/api/v1/rules?rule_group[]=foo&rule_group[]=group&rule_group[]=bar", 1, 2)
check("/api/v1/rules?rule_group[]=foo&rule_group[]=bar", 200, 0, 0)
check("/api/v1/rules?rule_group[]=foo&rule_group[]=group&rule_group[]=bar", 200, 3, 6)
check("/api/v1/rules?rule_group[]=group&file[]=foo", 0, 0)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml", 1, 2)
check("/api/v1/rules?rule_group[]=group&file[]=foo", 200, 0, 0)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml", 200, 3, 6)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=foo", 1, 0)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=alert", 1, 1)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=alert&rule_name[]=record", 1, 2)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=foo", 200, 3, 0)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=alert", 200, 3, 3)
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=alert&rule_name[]=record", 200, 3, 6)
})
t.Run("/api/v1/rules&exclude_alerts=true", func(t *testing.T) {
// check if response returns active alerts by default
@@ -259,7 +283,7 @@ func TestEmptyResponse(t *testing.T) {
t.Fatalf("err closing body %s", err)
}
}()
if to != nil {
if to != nil && code < 300 {
if err = json.NewDecoder(resp.Body).Decode(to); err != nil {
t.Fatalf("unexpected err %s", err)
}

View File

@@ -20,6 +20,16 @@ const (
paramRuleID = "rule_id"
)
type apiNotifier struct {
Kind string `json:"kind"`
Targets []*apiTarget `json:"targets"`
}
type apiTarget struct {
Address string `json:"address"`
Labels map[string]string `json:"labels"`
}
// apiAlert represents a notifier.AlertingRule state
// for WEB view
// https://github.com/prometheus/compliance/blob/main/alert_generator/specification.md#get-apiv1rules
@@ -108,7 +118,7 @@ type apiGroup struct {
// groupAlerts represents a group of alerts for WEB view
type groupAlerts struct {
Group apiGroup
Group *apiGroup
Alerts []*apiAlert
}
@@ -327,7 +337,7 @@ func newAlertAPI(ar *rule.AlertingRule, a *notifier.Alert) *apiAlert {
return aa
}
func groupToAPI(g *rule.Group) apiGroup {
func groupToAPI(g *rule.Group) *apiGroup {
g = g.DeepCopy()
ag := apiGroup{
// encode as string to avoid rounding
@@ -353,7 +363,7 @@ func groupToAPI(g *rule.Group) apiGroup {
for _, r := range g.Rules {
ag.Rules = append(ag.Rules, ruleToAPI(r))
}
return ag
return &ag
}
func urlValuesToStrings(values url.Values) []string {

View File

@@ -120,6 +120,9 @@ func normalizeURL(uOrig *url.URL) *url.URL {
u := *uOrig
// Prevent from attacks with using `..` in r.URL.Path
u.Path = path.Clean(u.Path)
if u.Path == "." {
u.Path = "/"
}
if !strings.HasSuffix(u.Path, "/") && strings.HasSuffix(uOrig.Path, "/") {
// The path.Clean() removes trailing slash.
// Return it back if needed.
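
The new check guards against path.Clean's documented edge case; a quick illustration:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// Standard-library behavior that motivates the "." check above:
	fmt.Println(path.Clean(""))     // "."  - an empty path collapses to "."
	fmt.Println(path.Clean("/.."))  // "/"  - ".." cannot escape the root
	fmt.Println(path.Clean("//x/")) // "/x" - duplicate and trailing slashes removed
}
```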

View File

@@ -128,7 +128,40 @@ func TestCreateTargetURLSuccess(t *testing.T) {
// Simple routing with `url_prefix`
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar"),
}, "", "http://foo.bar/.", "", "", nil, "least_loaded", 0)
}, "", "http://foo.bar", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar"),
}, "/", "http://foo.bar", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar"),
}, "http://aaa///", "http://foo.bar", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/"),
}, "/", "http://foo.bar/", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/"),
}, "/x", "http://foo.bar/x", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/"),
}, "/x/", "http://foo.bar/x/", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/"),
}, "http://abc///x/", "http://foo.bar/x/", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/"),
}, "http://foo//x", "http://foo.bar/x", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/baz"),
}, "", "http://foo.bar/baz", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/baz"),
}, "/", "http://foo.bar/baz", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/x/"),
}, "/abc", "http://foo.bar/x/abc", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/x/"),
}, "/abc/", "http://foo.bar/x/abc/", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar"),
HeadersConf: HeadersConf{
@@ -149,6 +182,12 @@ func TestCreateTargetURLSuccess(t *testing.T) {
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar"),
}, "a/b?c=d", "http://foo.bar/a/b?c=d", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar"),
}, "/a/b?c=d", "http://foo.bar/a/b?c=d", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("http://foo.bar/"),
}, "/a/b?c=d", "http://foo.bar/a/b?c=d", "", "", nil, "least_loaded", 0)
f(&UserInfo{
URLPrefix: mustParseURL("https://sss:3894/x/y"),
}, "/z", "https://sss:3894/x/y/z", "", "", nil, "least_loaded", 0)

View File

@@ -70,7 +70,7 @@ var (
Usage: "VictoriaMetrics address to perform import requests. \n" +
"Should be the same as --httpListenAddr value for single-node version or vminsert component. \n" +
"When importing into the clustered version do not forget to set additionally --vm-account-id flag. \n" +
"Please note, that `vmctl` performs initial readiness check for the given address by checking `/health` endpoint.",
"Please note, that vmctl performs initial readiness check for the given address by checking /health endpoint.",
},
&cli.StringFlag{
Name: vmUser,
@@ -514,27 +514,27 @@ var (
},
&cli.StringFlag{
Name: vmNativeSrcBearerToken,
Usage: "Optional bearer auth token to use for the corresponding `--vm-native-src-addr`",
Usage: "Optional bearer auth token to use for the corresponding --vm-native-src-addr",
},
&cli.StringFlag{
Name: vmNativeSrcCertFile,
Usage: "Optional path to client-side TLS certificate file to use when connecting to `--vm-native-src-addr`",
Usage: "Optional path to client-side TLS certificate file to use when connecting to --vm-native-src-addr",
},
&cli.StringFlag{
Name: vmNativeSrcKeyFile,
Usage: "Optional path to client-side TLS key to use when connecting to `--vm-native-src-addr`",
Usage: "Optional path to client-side TLS key to use when connecting to --vm-native-src-addr",
},
&cli.StringFlag{
Name: vmNativeSrcCAFile,
Usage: "Optional path to TLS CA file to use for verifying connections to `--vm-native-src-addr`. By default, system CA is used",
Usage: "Optional path to TLS CA file to use for verifying connections to --vm-native-src-addr. By default, system CA is used",
},
&cli.StringFlag{
Name: vmNativeSrcServerName,
Usage: "Optional TLS server name to use for connections to `--vm-native-src-addr`. By default, the server name from `--vm-native-src-addr` is used",
Usage: "Optional TLS server name to use for connections to --vm-native-src-addr. By default, the server name from --vm-native-src-addr is used",
},
&cli.BoolFlag{
Name: vmNativeSrcInsecureSkipVerify,
Usage: "Whether to skip TLS certificate verification when connecting to `--vm-native-src-addr`",
Usage: "Whether to skip TLS certificate verification when connecting to --vm-native-src-addr",
Value: false,
},
@@ -563,27 +563,27 @@ var (
},
&cli.StringFlag{
Name: vmNativeDstBearerToken,
Usage: "Optional bearer auth token to use for the corresponding `--vm-native-dst-addr`",
Usage: "Optional bearer auth token to use for the corresponding --vm-native-dst-addr",
},
&cli.StringFlag{
Name: vmNativeDstCertFile,
Usage: "Optional path to client-side TLS certificate file to use when connecting to `--vm-native-dst-addr`",
Usage: "Optional path to client-side TLS certificate file to use when connecting to --vm-native-dst-addr",
},
&cli.StringFlag{
Name: vmNativeDstKeyFile,
Usage: "Optional path to client-side TLS key to use when connecting to `--vm-native-dst-addr`",
Usage: "Optional path to client-side TLS key to use when connecting to --vm-native-dst-addr",
},
&cli.StringFlag{
Name: vmNativeDstCAFile,
Usage: "Optional path to TLS CA file to use for verifying connections to `--vm-native-dst-addr`. By default, system CA is used",
Usage: "Optional path to TLS CA file to use for verifying connections to --vm-native-dst-addr. By default, system CA is used",
},
&cli.StringFlag{
Name: vmNativeDstServerName,
Usage: "Optional TLS server name to use for connections to `--vm-native-dst-addr`. By default, the server name from `--vm-native-dst-addr` is used",
Usage: "Optional TLS server name to use for connections to --vm-native-dst-addr. By default, the server name from --vm-native-dst-addr is used",
},
&cli.BoolFlag{
Name: vmNativeDstInsecureSkipVerify,
Usage: "Whether to skip TLS certificate verification when connecting to `--vm-native-dst-addr`",
Usage: "Whether to skip TLS certificate verification when connecting to --vm-native-dst-addr",
Value: false,
},
@@ -597,7 +597,7 @@ var (
Name: vmRateLimit,
Usage: "Optional data transfer rate limit in bytes per second.\n" +
"By default, the rate limit is disabled. It can be useful for limiting load on source or destination databases. \n" +
"Rate limit is applied per worker, see `--vm-concurrency`.",
"Rate limit is applied per worker, see --vm-concurrency.",
},
&cli.BoolFlag{
Name: vmInterCluster,
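
A small sketch of the per-worker rate-limit arithmetic implied by the description above; flag values are illustrative and the aggregate-scaling interpretation is an assumption:

```go
package main

import "fmt"

func main() {
	// The rate limit is applied per worker, so the aggregate transfer
	// rate scales with --vm-concurrency (assumed interpretation).
	perWorkerBytesPerSec := 1 << 20 // --vm-rate-limit=1048576 (1 MiB/s), illustrative
	concurrency := 4                // --vm-concurrency=4, illustrative
	fmt.Println(perWorkerBytesPerSec * concurrency) // 4194304 bytes/s aggregate
}
```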

View File

@@ -19,6 +19,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/barpool"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/native"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/remoteread"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/opentsdb"
@@ -44,6 +45,7 @@ func main() {
if c.Bool(globalDisableProgressBar) {
barpool.Disable(true)
}
netutil.EnableIPv6()
return nil
}
app := &cli.App{

View File

@@ -1,215 +0,0 @@
package main
import (
"context"
"fmt"
"log"
"os"
"testing"
"time"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/storage"
"github.com/prometheus/prometheus/tsdb"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/backoff"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/barpool"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/prometheus"
remote_read_integration "github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/testdata/servers_integration_test"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
)
const (
testSnapshot = "./testdata/snapshots/20250118T124506Z-59d1b952d7eaf547"
blockData = "./testdata/snapshots/20250118T124506Z-59d1b952d7eaf547/01JHWQ445Y2P1TDYB05AEKD6MC"
)
// This test simulates the close process when the user aborts it
func TestPrometheusProcessorRun(t *testing.T) {
f := func(startStr, endStr string, numOfSeries int, resultExpected []vm.TimeSeries) {
t.Helper()
dst := remote_read_integration.NewRemoteWriteServer(t)
defer func() {
dst.Close()
}()
dst.Series(resultExpected)
dst.ExpectedSeries(resultExpected)
if err := fillStorage(resultExpected); err != nil {
t.Fatalf("cannot fill storage: %s", err)
}
isSilent = true
defer func() { isSilent = false }()
bf, err := backoff.New(1, 1.8, time.Second*2)
if err != nil {
t.Fatalf("cannot create backoff: %s", err)
}
importerCfg := vm.Config{
Addr: dst.URL(),
Transport: nil,
Concurrency: 1,
Backoff: bf,
}
ctx := context.Background()
importer, err := vm.NewImporter(ctx, importerCfg)
if err != nil {
t.Fatalf("cannot create importer: %s", err)
}
defer importer.Close()
matchName := "__name__"
matchValue := ".*"
filter := prometheus.Filter{
TimeMin: startStr,
TimeMax: endStr,
Label: matchName,
LabelValue: matchValue,
}
runner, err := prometheus.NewClient(prometheus.Config{
Snapshot: testSnapshot,
Filter: filter,
})
if err != nil {
t.Fatalf("cannot create prometheus client: %s", err)
}
p := &prometheusProcessor{
cl: runner,
im: importer,
cc: 1,
}
if err := p.run(); err != nil {
t.Fatalf("run() error: %s", err)
}
collectedTs := dst.GetCollectedTimeSeries()
t.Logf("collected timeseries: %d; expected timeseries: %d", len(collectedTs), len(resultExpected))
if len(collectedTs) != len(resultExpected) {
t.Fatalf("unexpected number of collected time series; got %d; want %d", len(collectedTs), numOfSeries)
}
deleted, err := deleteSeries(matchName, matchValue)
if err != nil {
t.Fatalf("cannot delete series: %s", err)
}
if deleted != numOfSeries {
t.Fatalf("unexpected number of deleted series; got %d; want %d", deleted, numOfSeries)
}
}
processFlags()
vmstorage.Init(promql.ResetRollupResultCacheIfNeeded)
defer func() {
vmstorage.Stop()
if err := os.RemoveAll(storagePath); err != nil {
log.Fatalf("cannot remove %q: %s", storagePath, err)
}
}()
barpool.Disable(true)
defer func() {
barpool.Disable(false)
}()
b, err := tsdb.OpenBlock(nil, blockData, nil, nil)
if err != nil {
t.Fatalf("cannot open block: %s", err)
}
// timestamp is equal to minTime and maxTime from meta.json
ss, err := readBlock(b, 1737204082361, 1737204302539)
if err != nil {
t.Fatalf("cannot read block: %s", err)
}
resultExpected, err := prepareExpectedData(ss)
if err != nil {
t.Fatalf("cannot prepare expected data: %s", err)
}
f("2025-01-18T12:40:00Z", "2025-01-18T12:46:00Z", 2792, resultExpected)
}
func readBlock(b tsdb.BlockReader, timeMin int64, timeMax int64) (storage.SeriesSet, error) {
minTime, maxTime := b.Meta().MinTime, b.Meta().MaxTime
if timeMin != 0 {
minTime = timeMin
}
if timeMax != 0 {
maxTime = timeMax
}
q, err := tsdb.NewBlockQuerier(b, minTime, maxTime)
if err != nil {
return nil, err
}
matchName := "__name__"
matchValue := ".*"
ctx := context.Background()
ss := q.Select(ctx, false, nil, labels.MustNewMatcher(labels.MatchRegexp, matchName, matchValue))
return ss, nil
}
func prepareExpectedData(ss storage.SeriesSet) ([]vm.TimeSeries, error) {
var expectedSeriesSet []vm.TimeSeries
var it chunkenc.Iterator
for ss.Next() {
var name string
var labelPairs []vm.LabelPair
series := ss.At()
for _, label := range series.Labels() {
if label.Name == "__name__" {
name = label.Value
continue
}
labelPairs = append(labelPairs, vm.LabelPair{
Name: label.Name,
Value: label.Value,
})
}
if name == "" {
return nil, fmt.Errorf("failed to find `__name__` label in labelset for block")
}
var timestamps []int64
var values []float64
it = series.Iterator(it)
for {
typ := it.Next()
if typ == chunkenc.ValNone {
break
}
if typ != chunkenc.ValFloat {
// Skip unsupported values
continue
}
t, v := it.At()
timestamps = append(timestamps, t)
values = append(values, v)
}
if err := it.Err(); err != nil {
return nil, err
}
ts := vm.TimeSeries{
Name: name,
LabelPairs: labelPairs,
Timestamps: timestamps,
Values: values,
}
expectedSeriesSet = append(expectedSeriesSet, ts)
}
return expectedSeriesSet, nil
}

View File

@@ -1,351 +0,0 @@
package main
import (
"context"
"testing"
"time"
"github.com/prometheus/prometheus/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/backoff"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/barpool"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/remoteread"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/stepper"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/testdata/servers_integration_test"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
)
func TestRemoteRead(t *testing.T) {
barpool.Disable(true)
defer func() {
barpool.Disable(false)
}()
defer func() { isSilent = false }()
var testCases = []struct {
name string
remoteReadConfig remoteread.Config
vmCfg vm.Config
start string
end string
numOfSamples int64
numOfSeries int64
rrp remoteReadProcessor
chunk string
remoteReadSeries func(start, end, numOfSeries, numOfSamples int64) []*prompb.TimeSeries
expectedSeries []vm.TimeSeries
}{
{
name: "step minute on minute time range",
remoteReadConfig: remoteread.Config{Addr: "", LabelName: "__name__", LabelValue: ".*"},
vmCfg: vm.Config{Addr: "", Concurrency: 1},
start: "2022-11-26T11:23:05+02:00",
end: "2022-11-26T11:24:05+02:00",
numOfSamples: 2,
numOfSeries: 3,
chunk: stepper.StepMinute,
remoteReadSeries: remote_read_integration.GenerateRemoteReadSeries,
expectedSeries: []vm.TimeSeries{
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "0"}},
Timestamps: []int64{1669454585000, 1669454615000},
Values: []float64{0, 0},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "1"}},
Timestamps: []int64{1669454585000, 1669454615000},
Values: []float64{100, 100},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "2"}},
Timestamps: []int64{1669454585000, 1669454615000},
Values: []float64{200, 200},
},
},
},
{
name: "step month on month time range",
remoteReadConfig: remoteread.Config{Addr: "", LabelName: "__name__", LabelValue: ".*"},
vmCfg: vm.Config{
Addr: "",
Concurrency: 1,
Transport: httputil.NewTransport(false, "vmctl_test_read"),
},
start: "2022-09-26T11:23:05+02:00",
end: "2022-11-26T11:24:05+02:00",
numOfSamples: 2,
numOfSeries: 3,
chunk: stepper.StepMonth,
remoteReadSeries: remote_read_integration.GenerateRemoteReadSeries,
expectedSeries: []vm.TimeSeries{
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "0"}},
Timestamps: []int64{1664184185000},
Values: []float64{0},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "1"}},
Timestamps: []int64{1664184185000},
Values: []float64{100},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "2"}},
Timestamps: []int64{1664184185000},
Values: []float64{200},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "0"}},
Timestamps: []int64{1666819415000},
Values: []float64{0},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "1"}},
Timestamps: []int64{1666819415000},
Values: []float64{100},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "2"}},
Timestamps: []int64{1666819415000},
Values: []float64{200}},
},
},
}
for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) {
ctx := context.Background()
remoteReadServer := remote_read_integration.NewRemoteReadServer(t)
defer remoteReadServer.Close()
remoteWriteServer := remote_read_integration.NewRemoteWriteServer(t)
defer remoteWriteServer.Close()
tt.remoteReadConfig.Addr = remoteReadServer.URL()
rr, err := remoteread.NewClient(tt.remoteReadConfig)
if err != nil {
t.Fatalf("error create remote read client: %s", err)
}
start, err := time.Parse(time.RFC3339, tt.start)
if err != nil {
t.Fatalf("Error parse start time: %s", err)
}
end, err := time.Parse(time.RFC3339, tt.end)
if err != nil {
t.Fatalf("Error parse end time: %s", err)
}
rrs := tt.remoteReadSeries(start.Unix(), end.Unix(), tt.numOfSeries, tt.numOfSamples)
remoteReadServer.SetRemoteReadSeries(rrs)
remoteWriteServer.ExpectedSeries(tt.expectedSeries)
tt.vmCfg.Addr = remoteWriteServer.URL()
b, err := backoff.New(10, 1.8, time.Second*2)
if err != nil {
t.Fatalf("failed to create backoff: %s", err)
}
tt.vmCfg.Backoff = b
importer, err := vm.NewImporter(ctx, tt.vmCfg)
if err != nil {
t.Fatalf("failed to create VM importer: %s", err)
}
defer importer.Close()
rmp := remoteReadProcessor{
src: rr,
dst: importer,
filter: remoteReadFilter{
timeStart: &start,
timeEnd: &end,
chunk: tt.chunk,
},
cc: 1,
isVerbose: false,
}
err = rmp.run(ctx)
if err != nil {
t.Fatalf("failed to run remote read processor: %s", err)
}
})
}
}
func TestSteamRemoteRead(t *testing.T) {
barpool.Disable(true)
defer func() {
barpool.Disable(false)
}()
defer func() { isSilent = false }()
var testCases = []struct {
name string
remoteReadConfig remoteread.Config
vmCfg vm.Config
start string
end string
numOfSamples int64
numOfSeries int64
rrp remoteReadProcessor
chunk string
remoteReadSeries func(start, end, numOfSeries, numOfSamples int64) []*prompb.TimeSeries
expectedSeries []vm.TimeSeries
}{
{
name: "step minute on minute time range",
remoteReadConfig: remoteread.Config{Addr: "", LabelName: "__name__", LabelValue: ".*", UseStream: true},
vmCfg: vm.Config{Addr: "", Concurrency: 1},
start: "2022-11-26T11:23:05+02:00",
end: "2022-11-26T11:24:05+02:00",
numOfSamples: 2,
numOfSeries: 3,
chunk: stepper.StepMinute,
remoteReadSeries: remote_read_integration.GenerateRemoteReadSeries,
expectedSeries: []vm.TimeSeries{
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "0"}},
Timestamps: []int64{1669454585000, 1669454615000},
Values: []float64{0, 0},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "1"}},
Timestamps: []int64{1669454585000, 1669454615000},
Values: []float64{100, 100},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "2"}},
Timestamps: []int64{1669454585000, 1669454615000},
Values: []float64{200, 200},
},
},
},
{
name: "step month on month time range",
remoteReadConfig: remoteread.Config{Addr: "", LabelName: "__name__", LabelValue: ".*", UseStream: true},
vmCfg: vm.Config{Addr: "", Concurrency: 1},
start: "2022-09-26T11:23:05+02:00",
end: "2022-11-26T11:24:05+02:00",
numOfSamples: 2,
numOfSeries: 3,
chunk: stepper.StepMonth,
remoteReadSeries: remote_read_integration.GenerateRemoteReadSeries,
expectedSeries: []vm.TimeSeries{
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "0"}},
Timestamps: []int64{1664184185000},
Values: []float64{0},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "1"}},
Timestamps: []int64{1664184185000},
Values: []float64{100},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "2"}},
Timestamps: []int64{1664184185000},
Values: []float64{200},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "0"}},
Timestamps: []int64{1666819415000},
Values: []float64{0},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "1"}},
Timestamps: []int64{1666819415000},
Values: []float64{100},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "2"}},
Timestamps: []int64{1666819415000},
Values: []float64{200}},
},
},
}
for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) {
ctx := context.Background()
remoteReadServer := remote_read_integration.NewRemoteReadStreamServer(t)
defer remoteReadServer.Close()
remoteWriteServer := remote_read_integration.NewRemoteWriteServer(t)
defer remoteWriteServer.Close()
tt.remoteReadConfig.Addr = remoteReadServer.URL()
rr, err := remoteread.NewClient(tt.remoteReadConfig)
if err != nil {
t.Fatalf("error create remote read client: %s", err)
}
start, err := time.Parse(time.RFC3339, tt.start)
if err != nil {
t.Fatalf("Error parse start time: %s", err)
}
end, err := time.Parse(time.RFC3339, tt.end)
if err != nil {
t.Fatalf("Error parse end time: %s", err)
}
rrs := tt.remoteReadSeries(start.Unix(), end.Unix(), tt.numOfSeries, tt.numOfSamples)
remoteReadServer.InitMockStorage(rrs)
remoteWriteServer.ExpectedSeries(tt.expectedSeries)
tt.vmCfg.Addr = remoteWriteServer.URL()
b, err := backoff.New(10, 1.8, time.Second*2)
if err != nil {
t.Fatalf("failed to create backoff: %s", err)
}
tt.vmCfg.Backoff = b
importer, err := vm.NewImporter(ctx, tt.vmCfg)
if err != nil {
t.Fatalf("failed to create VM importer: %s", err)
}
defer importer.Close()
rmp := remoteReadProcessor{
src: rr,
dst: importer,
filter: remoteReadFilter{
timeStart: &start,
timeEnd: &end,
chunk: tt.chunk,
},
cc: 1,
isVerbose: false,
}
err = rmp.run(ctx)
if err != nil {
t.Fatalf("failed to run remote read processor: %s", err)
}
})
}
}

View File

@@ -1,306 +0,0 @@
package remote_read_integration
import (
"bufio"
"bytes"
"encoding/json"
"fmt"
"log"
"net/http"
"net/http/httptest"
"reflect"
"sort"
"strconv"
"sync"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/native/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"
)
// LabelValues represents series from api/v1/series response
type LabelValues map[string]string
// Response represents response from api/v1/series
type Response struct {
Status string `json:"status"`
Series []LabelValues `json:"data"`
}
type MetricNamesResponse struct {
Status string `json:"status"`
Data []string `json:"data"`
}
// RemoteWriteServer represents fake remote write server with database
type RemoteWriteServer struct {
server *httptest.Server
series []vm.TimeSeries
expectedSeries []vm.TimeSeries
tss []vm.TimeSeries
}
// NewRemoteWriteServer prepares test remote write server
func NewRemoteWriteServer(t *testing.T) *RemoteWriteServer {
rws := &RemoteWriteServer{series: make([]vm.TimeSeries, 0)}
mux := http.NewServeMux()
mux.Handle("/api/v1/import", rws.getWriteHandler(t))
mux.Handle("/health", rws.handlePing())
mux.Handle("/api/v1/series", rws.seriesHandler())
mux.Handle("/api/v1/label/__name__/values", rws.valuesHandler())
mux.Handle("/api/v1/export/native", rws.exportNativeHandler())
mux.Handle("/api/v1/import/native", rws.importNativeHandler(t))
rws.server = httptest.NewServer(mux)
return rws
}
// Close closes the server
func (rws *RemoteWriteServer) Close() {
rws.server.Close()
}
// Series saves generated series for fake database
func (rws *RemoteWriteServer) Series(series []vm.TimeSeries) {
rws.series = append(rws.series, series...)
}
// ExpectedSeries saves expected results to check in the handler
func (rws *RemoteWriteServer) ExpectedSeries(series []vm.TimeSeries) {
rws.expectedSeries = append(rws.expectedSeries, series...)
}
func (rws *RemoteWriteServer) GetCollectedTimeSeries() []vm.TimeSeries {
return rws.tss
}
// URL returns server url
func (rws *RemoteWriteServer) URL() string {
return rws.server.URL
}
func (rws *RemoteWriteServer) getWriteHandler(t *testing.T) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
scanner := bufio.NewScanner(r.Body)
var rows parser.Rows
for scanner.Scan() {
rows.Unmarshal(scanner.Text())
for _, row := range rows.Rows {
var labelPairs []vm.LabelPair
var ts vm.TimeSeries
nameValue := ""
for _, tag := range row.Tags {
if string(tag.Key) == "__name__" {
nameValue = string(tag.Value)
continue
}
labelPairs = append(labelPairs, vm.LabelPair{Name: string(tag.Key), Value: string(tag.Value)})
}
ts.Values = append(ts.Values, row.Values...)
ts.Timestamps = append(ts.Timestamps, row.Timestamps...)
ts.Name = nameValue
ts.LabelPairs = labelPairs
rws.tss = append(rws.tss, ts)
}
rows.Reset()
}
w.WriteHeader(http.StatusNoContent)
return
})
}
func (rws *RemoteWriteServer) handlePing() http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("OK"))
})
}
func (rws *RemoteWriteServer) seriesHandler() http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
var labelValues []LabelValues
for _, ser := range rws.series {
metricNames := make(LabelValues)
if ser.Name != "" {
metricNames["__name__"] = ser.Name
}
for _, p := range ser.LabelPairs {
metricNames[p.Name] = p.Value
}
labelValues = append(labelValues, metricNames)
}
resp := Response{
Status: "success",
Series: labelValues,
}
err := json.NewEncoder(w).Encode(resp)
if err != nil {
log.Printf("error send series: %s", err)
w.WriteHeader(http.StatusInternalServerError)
return
}
})
}
func (rws *RemoteWriteServer) valuesHandler() http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
labelNames := make(map[string]struct{})
for _, ser := range rws.series {
if ser.Name != "" {
labelNames[ser.Name] = struct{}{}
}
}
metricNames := make([]string, 0, len(labelNames))
for k := range labelNames {
metricNames = append(metricNames, k)
}
resp := MetricNamesResponse{
Status: "success",
Data: metricNames,
}
buf := bytes.NewBuffer(nil)
err := json.NewEncoder(buf).Encode(resp)
if err != nil {
log.Printf("error send series: %s", err)
w.WriteHeader(http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusOK)
_, err = w.Write(buf.Bytes())
if err != nil {
log.Printf("error send series: %s", err)
w.WriteHeader(http.StatusInternalServerError)
return
}
return
})
}
func (rws *RemoteWriteServer) exportNativeHandler() http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
now := time.Now()
err := prometheus.ExportNativeHandler(now, w, r)
if err != nil {
log.Printf("error export series via native protocol: %s", err)
w.WriteHeader(http.StatusInternalServerError)
return
}
return
})
}
func (rws *RemoteWriteServer) importNativeHandler(t *testing.T) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
protoparserutil.StartUnmarshalWorkers()
defer protoparserutil.StopUnmarshalWorkers()
var gotTimeSeries []vm.TimeSeries
var mx sync.RWMutex
err := stream.Parse(r.Body, "", func(block *stream.Block) error {
mn := &block.MetricName
var timeseries vm.TimeSeries
timeseries.Name = string(mn.MetricGroup)
timeseries.Timestamps = append(timeseries.Timestamps, block.Timestamps...)
timeseries.Values = append(timeseries.Values, block.Values...)
for i := range mn.Tags {
tag := &mn.Tags[i]
timeseries.LabelPairs = append(timeseries.LabelPairs, vm.LabelPair{
Name: string(tag.Key),
Value: string(tag.Value),
})
}
mx.Lock()
gotTimeSeries = append(gotTimeSeries, timeseries)
mx.Unlock()
return nil
})
if err != nil {
log.Printf("error parse stream blocks: %s", err)
w.WriteHeader(http.StatusInternalServerError)
return
}
// got timeseries should be sorted
// because they are processed independently
sort.SliceStable(gotTimeSeries, func(i, j int) bool {
iv, jv := gotTimeSeries[i], gotTimeSeries[j]
switch {
case iv.Values[0] != jv.Values[0]:
return iv.Values[0] < jv.Values[0]
case iv.Timestamps[0] != jv.Timestamps[0]:
return iv.Timestamps[0] < jv.Timestamps[0]
default:
return iv.Name < jv.Name
}
})
if !reflect.DeepEqual(gotTimeSeries, rws.expectedSeries) {
w.WriteHeader(http.StatusInternalServerError)
t.Fatalf("datasets not equal, expected: %#v;\n got: %#v", rws.expectedSeries, gotTimeSeries)
}
w.WriteHeader(http.StatusNoContent)
return
})
}
// GenerateVNSeries generates test timeseries
func GenerateVNSeries(start, end, numOfSeries, numOfSamples int64) []vm.TimeSeries {
var ts []vm.TimeSeries
j := 0
for i := 0; i < int(numOfSeries); i++ {
if i%3 == 0 {
j++
}
timeSeries := vm.TimeSeries{
Name: fmt.Sprintf("vm_metric_%d", j),
LabelPairs: []vm.LabelPair{
{Name: "job", Value: strconv.Itoa(i)},
},
}
ts = append(ts, timeSeries)
}
for i := range ts {
t, v := generateTimeStampsAndValues(i, start, end, numOfSamples)
ts[i].Timestamps = t
ts[i].Values = v
}
return ts
}
func generateTimeStampsAndValues(idx int, startTime, endTime, numOfSamples int64) ([]int64, []float64) {
delta := (endTime - startTime) / numOfSamples
var timestamps []int64
var values []float64
t := startTime
for t < endTime { // '<' instead of '!=' avoids an endless loop when delta doesn't divide the range evenly
v := 100 * int64(idx)
timestamps = append(timestamps, t*1000)
values = append(values, float64(v))
t = t + delta
}
return timestamps, values
}
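Note on the generator above: `j` increments once per three series, so with `numOfSeries=3` every generated series is named `vm_metric_1` and differs only in its `job` label - which is exactly what the `resultExpected` datasets in the native-processor test further down this diff assume.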

View File

@@ -1,17 +0,0 @@
{
"ulid": "01JHWQ445Y2P1TDYB05AEKD6MC",
"minTime": 1737204082361,
"maxTime": 1737204302539,
"stats": {
"numSamples": 60275,
"numSeries": 2792,
"numChunks": 2792
},
"compaction": {
"level": 1,
"sources": [
"01JHWQ445Y2P1TDYB05AEKD6MC"
]
},
"version": 1
}

View File

@@ -1,268 +1,9 @@
package main
import (
"context"
"flag"
"fmt"
"log"
"net/http"
"os"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/backoff"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/barpool"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/native"
remote_read_integration "github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/testdata/servers_integration_test"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
const (
storagePath = "TestStorage"
retentionPeriod = "100y"
deleteSeriesLimit = 3e3
)
func TestVMNativeProcessorRun(t *testing.T) {
f := func(startStr, endStr string, numOfSeries, numOfSamples int, resultExpected []vm.TimeSeries) {
t.Helper()
src := remote_read_integration.NewRemoteWriteServer(t)
dst := remote_read_integration.NewRemoteWriteServer(t)
defer func() {
src.Close()
dst.Close()
}()
start, err := time.Parse(time.RFC3339, startStr)
if err != nil {
t.Fatalf("cannot parse start time: %s", err)
}
end, err := time.Parse(time.RFC3339, endStr)
if err != nil {
t.Fatalf("cannot parse end time: %s", err)
}
matchName := "__name__"
matchValue := ".*"
filter := native.Filter{
Match: fmt.Sprintf("{%s=~%q}", matchName, matchValue),
TimeStart: startStr,
TimeEnd: endStr,
}
rws := remote_read_integration.GenerateVNSeries(start.Unix(), end.Unix(), int64(numOfSeries), int64(numOfSamples))
src.Series(rws)
dst.ExpectedSeries(resultExpected)
if err := fillStorage(rws); err != nil {
t.Fatalf("cannot add series to storage: %s", err)
}
tr := httputil.NewTransport(false, "test_client")
tr.DisableKeepAlives = false
srcClient := &native.Client{
AuthCfg: nil,
Addr: src.URL(),
ExtraLabels: []string{},
HTTPClient: &http.Client{
Transport: tr,
},
}
dstClient := &native.Client{
AuthCfg: nil,
Addr: dst.URL(),
ExtraLabels: []string{},
HTTPClient: &http.Client{
Transport: tr,
},
}
isSilent = true
defer func() { isSilent = false }()
bf, err := backoff.New(10, 1.8, time.Second*2)
if err != nil {
t.Fatalf("cannot create backoff: %s", err)
}
p := &vmNativeProcessor{
filter: filter,
dst: dstClient,
src: srcClient,
backoff: bf,
cc: 1,
isNative: true,
}
ctx := context.Background()
if err := p.run(ctx); err != nil {
t.Fatalf("run() error: %s", err)
}
deleted, err := deleteSeries(matchName, matchValue)
if err != nil {
t.Fatalf("cannot delete series: %s", err)
}
if deleted != numOfSeries {
t.Fatalf("unexpected number of deleted series; got %d; want %d", deleted, numOfSeries)
}
}
processFlags()
vmstorage.Init(promql.ResetRollupResultCacheIfNeeded)
defer func() {
vmstorage.Stop()
if err := os.RemoveAll(storagePath); err != nil {
log.Fatalf("cannot remove %q: %s", storagePath, err)
}
}()
barpool.Disable(true)
defer func() {
barpool.Disable(false)
}()
// step minute on minute time range
start := "2022-11-25T11:23:05+02:00"
end := "2022-11-27T11:24:05+02:00"
numOfSeries := 3
numOfSamples := 2
resultExpected := []vm.TimeSeries{
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "0"}},
Timestamps: []int64{1669368185000, 1669454615000},
Values: []float64{0, 0},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "1"}},
Timestamps: []int64{1669368185000, 1669454615000},
Values: []float64{100, 100},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "2"}},
Timestamps: []int64{1669368185000, 1669454615000},
Values: []float64{200, 200},
},
}
f(start, end, numOfSeries, numOfSamples, resultExpected)
// step month on month time range
start = "2022-09-26T11:23:05+02:00"
end = "2022-11-26T11:24:05+02:00"
numOfSeries = 3
numOfSamples = 2
resultExpected = []vm.TimeSeries{
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "0"}},
Timestamps: []int64{1664184185000},
Values: []float64{0},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "0"}},
Timestamps: []int64{1666819415000},
Values: []float64{0},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "1"}},
Timestamps: []int64{1664184185000},
Values: []float64{100},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "1"}},
Timestamps: []int64{1666819415000},
Values: []float64{100},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "2"}},
Timestamps: []int64{1664184185000},
Values: []float64{200},
},
{
Name: "vm_metric_1",
LabelPairs: []vm.LabelPair{{Name: "job", Value: "2"}},
Timestamps: []int64{1666819415000},
Values: []float64{200},
},
}
f(start, end, numOfSeries, numOfSamples, resultExpected)
}
func processFlags() {
flag.Parse()
for _, fv := range []struct {
flag string
value string
}{
{flag: "storageDataPath", value: storagePath},
{flag: "retentionPeriod", value: retentionPeriod},
} {
// panics if flag doesn't exist
if err := flag.Lookup(fv.flag).Value.Set(fv.value); err != nil {
log.Fatalf("unable to set %q with value %q, err: %v", fv.flag, fv.value, err)
}
}
}
func fillStorage(series []vm.TimeSeries) error {
var mrs []storage.MetricRow
for _, series := range series {
var labels []prompbmarshal.Label
for _, lp := range series.LabelPairs {
labels = append(labels, prompbmarshal.Label{
Name: lp.Name,
Value: lp.Value,
})
}
if series.Name != "" {
labels = append(labels, prompbmarshal.Label{
Name: "__name__",
Value: series.Name,
})
}
mr := storage.MetricRow{}
mr.MetricNameRaw = storage.MarshalMetricNameRaw(mr.MetricNameRaw[:0], labels)
timestamps := series.Timestamps
values := series.Values
for i, value := range values {
mr.Timestamp = timestamps[i]
mr.Value = value
mrs = append(mrs, mr)
}
}
if err := vmstorage.AddRows(mrs); err != nil {
return fmt.Errorf("unexpected error in AddRows: %s", err)
}
vmstorage.Storage.DebugFlush()
return nil
}
func deleteSeries(name, value string) (int, error) {
tfs := storage.NewTagFilters()
if err := tfs.Add([]byte(name), []byte(value), false, true); err != nil {
return 0, fmt.Errorf("unexpected error in TagFilters.Add: %w", err)
}
return vmstorage.DeleteSeries(nil, []*storage.TagFilters{tfs}, deleteSeriesLimit)
}
func TestBuildMatchWithFilter_Failure(t *testing.T) {
f := func(filter, metricName string) {
t.Helper()

View File

@@ -8,8 +8,6 @@ import (
"strings"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadogsketches"
@@ -36,6 +34,7 @@ import (
influxserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/influx"
opentsdbserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/opentsdb"
opentsdbhttpserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/memory"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
@@ -43,6 +42,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
"github.com/VictoriaMetrics/metrics"
)
var (
@@ -70,6 +70,7 @@ var (
maxLabelsPerTimeseries = flag.Int("maxLabelsPerTimeseries", 40, "The maximum number of labels per time series to be accepted. Series with superfluous labels are ignored. In this case the vm_rows_ignored_total{reason=\"too_many_labels\"} metric at /metrics page is incremented")
maxLabelNameLen = flag.Int("maxLabelNameLen", 256, "The maximum length of label name in the accepted time series. Series with longer label name are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_name\"} metric at /metrics page is incremented")
maxLabelValueLen = flag.Int("maxLabelValueLen", 4*1024, "The maximum length of label values in the accepted time series. Series with longer label value are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_value\"} metric at /metrics page is incremented")
maxMemoryUsage = flag.Int("insert.circuitBreakMemoryUsage", 90, "Reject insert requests when memory usage exceeds a certain percentage. 0 means no circuit breaking. An integer value from 1-100 represents 1%-100%.")
)
var (
@@ -131,6 +132,13 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
startTime := time.Now()
defer requestDuration.UpdateDuration(startTime)
if *maxMemoryUsage >= 1 && *maxMemoryUsage <= 100 {
if memory.CurrentPercentage() > *maxMemoryUsage {
httpserver.Errorf(w, r, "server overloaded, request rejected by circuit breaker")
return true
}
}
path := strings.Replace(r.URL.Path, "//", "/", -1)
if strings.HasPrefix(path, "/static") {
staticServer.ServeHTTP(w, r)
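
A condensed, standard-library-only sketch of the guard added above (memory.CurrentPercentage is assumed to report process memory usage as an integer percentage; the real handler uses httpserver.Errorf rather than http.Error):

```go
package insert

import (
	"flag"
	"net/http"
)

var maxMemoryUsage = flag.Int("insert.circuitBreakMemoryUsage", 90,
	"Reject insert requests when memory usage exceeds this percentage; 0 disables the breaker")

// withCircuitBreaker rejects requests while memory usage is above the
// configured threshold and passes them through otherwise.
func withCircuitBreaker(next http.Handler, currentPercentage func() int) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if *maxMemoryUsage >= 1 && *maxMemoryUsage <= 100 && currentPercentage() > *maxMemoryUsage {
			http.Error(w, "server overloaded, request rejected by circuit breaker", http.StatusServiceUnavailable)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```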

View File

@@ -15,7 +15,7 @@ import (
)
var maxGraphiteSeries = flag.Int("search.maxGraphiteSeries", 300e3, "The maximum number of time series, which can be scanned during queries to Graphite Render API. "+
"See https://docs.victoriametrics.com/victoriametrics/integrations/graphite#render-api")
"See https://docs.victoriametrics.com/victoriametrics/integrations/graphite/#render-api")
type evalConfig struct {
startTime int64

View File

@@ -22,9 +22,9 @@ import (
var (
maxGraphiteTagKeysPerSearch = flag.Int("search.maxGraphiteTagKeys", 100e3, "The maximum number of tag keys returned from Graphite API, which returns tags. "+
"See https://docs.victoriametrics.com/victoriametrics/integrations/graphite#tags-api")
"See https://docs.victoriametrics.com/victoriametrics/integrations/graphite/#tags-api")
maxGraphiteTagValuesPerSearch = flag.Int("search.maxGraphiteTagValues", 100e3, "The maximum number of tag values returned from Graphite API, which returns tag values. "+
"See https://docs.victoriametrics.com/victoriametrics/integrations/graphite#tags-api")
"See https://docs.victoriametrics.com/victoriametrics/integrations/graphite/#tags-api")
)
// TagsDelSeriesHandler implements /tags/delSeries handler.

View File

@@ -818,6 +818,7 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
LookbackDelta: lookbackDelta,
RoundDigits: getRoundDigits(r),
EnforcedTagFilterss: etfs,
CacheTagFilters: etfs,
GetRequestURI: func() string {
return httpserver.GetRequestURI(r)
},
@@ -927,6 +928,7 @@ func queryRangeHandler(qt *querytracer.Tracer, startTime time.Time, w http.Respo
LookbackDelta: lookbackDelta,
RoundDigits: getRoundDigits(r),
EnforcedTagFilterss: etfs,
CacheTagFilters: etfs,
GetRequestURI: func() string {
return httpserver.GetRequestURI(r)
},

View File

@@ -5,8 +5,10 @@ import (
"strings"
"unsafe"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/metricsql"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/atomicutil"
)
// callbacks for optimized incremental calculations for aggregate functions
@@ -66,9 +68,8 @@ var incrementalAggrFuncCallbacksMap = map[string]*incrementalAggrFuncCallbacks{
type incrementalAggrContextMap struct {
m map[string]*incrementalAggrContext
// The padding prevents false sharing on widespread platforms with
// 128 mod (cache line size) = 0 .
_ [128 - unsafe.Sizeof(map[string]*incrementalAggrContext{})%128]byte
// The padding prevents false sharing
_ [atomicutil.CacheLineSize - unsafe.Sizeof(map[string]*incrementalAggrContext{})%atomicutil.CacheLineSize]byte
}
type incrementalAggrFuncContext struct {
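
For reference, a minimal sketch of this padding idiom, assuming a 64-byte cache line (atomicutil.CacheLineSize abstracts the per-platform value):

```go
package pad

import "unsafe"

// shard is padded up to a whole cache line so that adjacent shards in a
// slice never share one; concurrent writers then avoid false sharing.
type shard struct {
	counter uint64
	_       [64 - unsafe.Sizeof(uint64(0))%64]byte
}
```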

View File

@@ -17,6 +17,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/atomicutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
@@ -139,6 +140,13 @@ type EvalConfig struct {
// EnforcedTagFilterss may contain additional label filters to use in the query.
EnforcedTagFilterss [][]storage.TagFilter
// CacheTagFilters stores the original tag-filter sets and extra_label from the request.
// The slice is never modified after creation and is used only to build
// the query-cache key.
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9001
CacheTagFilters [][]storage.TagFilter
// The callback, which returns the request URI during logging.
// The request URI isn't stored here because its construction may take non-trivial amounts of CPU.
GetRequestURI func() string
@@ -165,6 +173,7 @@ func copyEvalConfig(src *EvalConfig) *EvalConfig {
ec.LookbackDelta = src.LookbackDelta
ec.RoundDigits = src.RoundDigits
ec.EnforcedTagFilterss = src.EnforcedTagFilterss
ec.CacheTagFilters = src.CacheTagFilters
ec.GetRequestURI = src.GetRequestURI
ec.QueryStats = src.QueryStats
@@ -1885,9 +1894,8 @@ func doRollupForTimeseries(funcName string, keepMetricNames bool, rc *rollupConf
type timeseriesWithPadding struct {
tss []*timeseries
// The padding prevents false sharing on widespread platforms with
// 128 mod (cache line size) = 0 .
_ [128 - unsafe.Sizeof([]*timeseries{})%128]byte
// The padding prevents false sharing
_ [atomicutil.CacheLineSize - unsafe.Sizeof([]*timeseries{})%atomicutil.CacheLineSize]byte
}
type timeseriesByWorkerID struct {
@@ -1966,11 +1974,14 @@ func sumNoOverflow(a, b int64) int64 {
}
func dropStaleNaNs(funcName string, values []float64, timestamps []int64) ([]float64, []int64) {
if *noStaleMarkers || funcName == "default_rollup" || funcName == "stale_samples_over_time" {
if *noStaleMarkers || funcName == "stale_samples_over_time" ||
funcName == "default_rollup" || funcName == "increase" || funcName == "rate" {
// Do not drop Prometheus staleness marks (aka stale NaNs) for default_rollup() function,
// since it uses them for Prometheus-style staleness detection.
// Do not drop staleness marks for stale_samples_over_time() function, since it needs
// to calculate the number of staleness markers.
// Do not drop staleness marks for the increase() and rate() functions, so they can stop
// returning results for stale series. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8891
return values, timestamps
}
// Remove Prometheus staleness marks, so non-default rollup functions don't hit NaN values.
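
The removal loop itself is truncated in this hunk; one plausible shape of it (a sketch of the stated behavior, not the actual implementation; decimal.IsStaleNaN is the helper already used in this file) is:

```go
// Filter out staleness markers in place, keeping values and timestamps aligned.
func dropStaleNaNsSketch(values []float64, timestamps []int64) ([]float64, []int64) {
	k := 0
	for i, v := range values {
		if decimal.IsStaleNaN(v) {
			continue // skip staleness markers
		}
		values[k] = v
		timestamps[k] = timestamps[i]
		k++
	}
	return values[:k], timestamps[:k]
}
```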

View File

@@ -71,7 +71,8 @@ var rollupFuncs = map[string]newRollupFunc{
"quantile_over_time": newRollupQuantile,
"quantiles_over_time": newRollupQuantiles,
"range_over_time": newRollupFuncOneArg(rollupRange),
"rate": newRollupFuncOneArg(rollupDerivFast), // + rollupFuncsRemoveCounterResets
"rate": newRollupFuncOneArg(rollupDerivFast), // + rollupFuncsRemoveCounterResets
"rate_prometheus": newRollupFuncOneArg(rollupDerivFastPrometheus), // + rollupFuncsRemoveCounterResets
"rate_over_sum": newRollupFuncOneArg(rollupRateOverSum),
"resets": newRollupFuncOneArg(rollupResets),
"rollup": newRollupFuncOneOrTwoArgs(rollupFake),
@@ -195,7 +196,7 @@ var rollupAggrFuncs = map[string]rollupFunc{
"zscore_over_time": rollupZScoreOverTime,
}
// VictoriaMetrics can extends lookbehind window for these functions
// VictoriaMetrics can extend lookbehind window for these functions
// in order to make sure it contains enough points for returning non-empty results.
//
// This is needed for returning the expected non-empty graphs when zooming in the graph in Grafana,
@@ -225,6 +226,7 @@ var rollupFuncsRemoveCounterResets = map[string]bool{
"increase_pure": true,
"irate": true,
"rate": true,
"rate_prometheus": true,
"rollup_increase": true,
"rollup_rate": true,
}
@@ -252,6 +254,7 @@ var rollupFuncsSamplesScannedPerCall = map[string]int{
"lifetime": 2,
"present_over_time": 1,
"rate": 2,
"rate_prometheus": 2,
"scrape_interval": 2,
"tfirst_over_time": 1,
"timestamp": 1,
@@ -529,7 +532,7 @@ type rollupFuncArg struct {
timestamps []int64
// Real value preceding values.
// Is populated if preceding value is within the -search.maxStalenessInterval (rc.LookbackDelta).
// Is populated if preceding value is within the rc.LookbackDelta.
realPrevValue float64
// Real value which goes after values.
@@ -776,13 +779,18 @@ func (rc *rollupConfig) doInternal(dstValues []float64, tsm *timeseriesMap, valu
rfa.realPrevValue = nan
if i > 0 {
prevValue, prevTimestamp := values[i-1], timestamps[i-1]
// set realPrevValue if rc.LookbackDelta == 0
// or if distance between datapoint in prev interval and beginning of this interval
// set realPrevValue if rc.LookbackDelta == 0 or
// if distance between datapoint in prev interval and first datapoint in this interval
// doesn't exceed LookbackDelta.
// https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1381
// https://github.com/VictoriaMetrics/VictoriaMetrics/issues/894
// https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8045
if rc.LookbackDelta == 0 || (tStart-prevTimestamp) < rc.LookbackDelta {
// https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8935
currTimestamp := tStart
if len(rfa.timestamps) > 0 {
currTimestamp = rfa.timestamps[0]
}
if rc.LookbackDelta == 0 || (currTimestamp-prevTimestamp) < rc.LookbackDelta {
rfa.realPrevValue = prevValue
}
}
@@ -908,15 +916,18 @@ func getMaxPrevInterval(scrapeInterval int64) int64 {
return scrapeInterval + scrapeInterval/8
}
// removeCounterResets removes resets for rollup functions over counters - see rollupFuncsRemoveCounterResets
// It doesn't remove resets between samples with staleNaNs, or between samples spaced further apart than maxStalenessInterval.
func removeCounterResets(values []float64, timestamps []int64, maxStalenessInterval int64) {
// There is no need in handling NaNs here, since they are impossible
// on values from vmstorage.
if len(values) == 0 {
return
}
var correction float64
prevValue := values[0]
for i, v := range values {
if decimal.IsStaleNaN(v) {
continue
}
d := v - prevValue
if d < 0 {
if (-d * 8) < prevValue {
@@ -1826,14 +1837,18 @@ func rollupIncreasePure(rfa *rollupFuncArg) float64 {
// There is no need in handling NaNs here, since they must be cleaned up
// before calling rollup funcs.
values := rfa.values
// restore to the real value because of potential staleness reset
prevValue := rfa.realPrevValue
prevValue := rfa.prevValue
if math.IsNaN(prevValue) {
if len(values) == 0 {
return nan
}
// Assume the counter starts from 0.
prevValue = 0
if !math.IsNaN(rfa.realPrevValue) {
// Assume that the value didn't change during the current gap
// if realPrevValue exists.
prevValue = rfa.realPrevValue
}
}
if len(values) == 0 {
// Assume the counter didn't change since prevValue.
@@ -1844,8 +1859,13 @@ func rollupIncreasePure(rfa *rollupFuncArg) float64 {
func rollupDelta(rfa *rollupFuncArg) float64 {
// There is no need in handling NaNs here, since they must be cleaned up
// before calling rollup funcs.
// before calling rollup funcs. Only StaleNaNs could remain in values - see dropStaleNaNs().
values := rfa.values
if len(values) > 0 && decimal.IsStaleNaN(values[len(values)-1]) {
// If the last sample in the interval is a staleness marker, the selected series is expected
// to stop rendering immediately. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8891
return nan
}
prevValue := rfa.prevValue
if math.IsNaN(prevValue) {
if len(values) == 0 {
@@ -1929,10 +1949,23 @@ func rollupDerivSlow(rfa *rollupFuncArg) float64 {
return k
}
func rollupDerivFastPrometheus(rfa *rollupFuncArg) float64 {
delta := rollupDeltaPrometheus(rfa)
if math.IsNaN(delta) || rfa.window == 0 {
return nan
}
return delta / (float64(rfa.window) / 1e3)
}
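Worked example matching the tests added below: for values {0, 20} in a window of 10e3 ms, the Prometheus-style delta is 20, so the per-second rate is 20 / (10e3 / 1e3) = 2.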
func rollupDerivFast(rfa *rollupFuncArg) float64 {
// There is no need in handling NaNs here, since they must be cleaned up
// before calling rollup funcs.
// before calling rollup funcs. Only StaleNaNs could remain in values - see dropStaleNaNs().
values := rfa.values
if len(values) > 0 && decimal.IsStaleNaN(values[len(values)-1]) {
// If the last sample in the interval is a staleness marker, the selected series is expected
// to stop rendering immediately. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8891
return nan
}
timestamps := rfa.timestamps
prevValue := rfa.prevValue
prevTimestamp := rfa.prevTimestamp

View File

@@ -291,7 +291,7 @@ func (rrc *rollupResultCache) GetSeries(qt *querytracer.Tracer, ec *EvalConfig,
bb := bbPool.Get()
defer bbPool.Put(bb)
bb.B = marshalRollupResultCacheKeyForSeries(bb.B[:0], expr, window, ec.Step, ec.EnforcedTagFilterss)
bb.B = marshalRollupResultCacheKeyForSeries(bb.B[:0], expr, window, ec.Step, ec.CacheTagFilters)
metainfoBuf := rrc.c.Get(nil, bb.B)
if len(metainfoBuf) == 0 {
qt.Printf("nothing found")
@@ -313,7 +313,7 @@ func (rrc *rollupResultCache) GetSeries(qt *querytracer.Tracer, ec *EvalConfig,
if !ok {
mi.RemoveKey(key)
metainfoBuf = mi.Marshal(metainfoBuf[:0])
bb.B = marshalRollupResultCacheKeyForSeries(bb.B[:0], expr, window, ec.Step, ec.EnforcedTagFilterss)
bb.B = marshalRollupResultCacheKeyForSeries(bb.B[:0], expr, window, ec.Step, ec.CacheTagFilters)
rrc.c.Set(bb.B, metainfoBuf)
return nil, ec.Start
}
@@ -419,7 +419,7 @@ func (rrc *rollupResultCache) PutSeries(qt *querytracer.Tracer, ec *EvalConfig,
metainfoBuf := bbPool.Get()
defer bbPool.Put(metainfoBuf)
metainfoKey.B = marshalRollupResultCacheKeyForSeries(metainfoKey.B[:0], expr, window, ec.Step, ec.EnforcedTagFilterss)
metainfoKey.B = marshalRollupResultCacheKeyForSeries(metainfoKey.B[:0], expr, window, ec.Step, ec.CacheTagFilters)
metainfoBuf.B = rrc.c.Get(metainfoBuf.B[:0], metainfoKey.B)
var mi rollupResultCacheMetainfo
if len(metainfoBuf.B) > 0 {

View File

@@ -156,6 +156,14 @@ func TestRemoveCounterResets(t *testing.T) {
removeCounterResets(values, timestamps, 10)
testRowsEqual(t, values, timestamps, valuesExpected, timestamps)
// verify that staleNaNs are respected
// It is important to have a counter reset in the values below to trigger the correction logic.
values = []float64{2, 4, 2, decimal.StaleNaN}
timestamps = []int64{10, 20, 30, 40}
valuesExpected = []float64{2, 4, 6, decimal.StaleNaN}
removeCounterResets(values, timestamps, 10)
testRowsEqual(t, values, timestamps, valuesExpected, timestamps)
// verify results always increase monotonically with possible float operations precision error
values = []float64{34.094223, 2.7518, 2.140669, 0.044878, 1.887095, 2.546569, 2.490149, 0.045, 0.035684, 0.062454, 0.058296}
timestampsExpected = []int64{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
@@ -648,6 +656,7 @@ func TestRollupNewRollupFuncSuccess(t *testing.T) {
f("irate", 0)
f("outlier_iqr_over_time", nan)
f("rate", 2200)
f("rate_prometheus", 2200)
f("resets", 5)
f("range_over_time", 111)
f("avg_over_time", 47.083333333333336)
@@ -1525,16 +1534,31 @@ func testRowsEqual(t *testing.T, values []float64, timestamps []int64, valuesExp
i, ts, tsExpected, timestamps, timestampsExpected)
}
vExpected := valuesExpected[i]
if decimal.IsStaleNaN(v) {
if !decimal.IsStaleNaN(vExpected) {
t.Fatalf("unexpected stale NaN value at values[%d]; want %f\nvalues=\n%v\nvaluesExpected=\n%v",
i, vExpected, values, valuesExpected)
}
continue
}
// A stale NaN is also a regular NaN (math.IsNaN returns true for it), but decimal.IsStaleNaN(math.NaN()) == false,
// so check decimal.IsStaleNaN first.
if decimal.IsStaleNaN(vExpected) {
if !decimal.IsStaleNaN(v) {
t.Fatalf("unexpected value at values[%d]; got %f; want stale NaN\nvalues=\n%v\nvaluesExpected=\n%v",
i, v, values, valuesExpected)
}
}
if math.IsNaN(v) {
if !math.IsNaN(vExpected) {
t.Fatalf("unexpected nan value at values[%d]; want %f\nvalues=\n%v\nvaluesExpected=\n%v",
t.Fatalf("unexpected NaN value at values[%d]; want %f\nvalues=\n%v\nvaluesExpected=\n%v",
i, vExpected, values, valuesExpected)
}
continue
}
if math.IsNaN(vExpected) {
if !math.IsNaN(v) {
t.Fatalf("unexpected value at values[%d]; got %f; want nan\nvalues=\n%v\nvaluesExpected=\n%v",
t.Fatalf("unexpected value at values[%d]; got %f; want NaN\nvalues=\n%v\nvaluesExpected=\n%v",
i, v, values, valuesExpected)
}
continue
@@ -1608,6 +1632,33 @@ func TestRollupDelta(t *testing.T) {
f(100, nan, nan, nil, 0)
}
func TestRollupDerivFastPrometheus(t *testing.T) {
f := func(values []float64, window int64, resultExpected float64) {
t.Helper()
rfa := &rollupFuncArg{
values: values,
window: window,
}
result := rollupDerivFastPrometheus(rfa)
if math.IsNaN(result) {
if !math.IsNaN(resultExpected) {
t.Fatalf("unexpected result; got %v; want %v", result, resultExpected)
}
return
}
if result != resultExpected {
t.Fatalf("unexpected result; got %v; want %v", result, resultExpected)
}
}
f(nil, 0, nan)
f(nil, 10, nan)
f([]float64{0, 10}, 0, nan)
f([]float64{10}, 10, nan)
f([]float64{0, 20}, 10e3, 2)
f([]float64{0, 10, 20}, 10e3, 2)
}
func TestRollupDeltaWithStaleness(t *testing.T) {
// there is a gap between samples in the dataset below
timestamps := []int64{0, 15000, 30000, 70000}
@@ -1719,6 +1770,55 @@ func TestRollupDeltaWithStaleness(t *testing.T) {
timestampsExpected := []int64{0, 10e3, 20e3, 30e3, 40e3}
testRowsEqual(t, gotValues, rc.Timestamps, valuesExpected, timestampsExpected)
})
t.Run("issue-8935", func(t *testing.T) {
// https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8935
// The dataset below has a gap that exceeds LookbackDelta.
// The step is picked so that on the [60e3, 90e3] window
// prevValue is NaN, yet 60e3-55e3=5e3 still reaches back to the sample
// at timestamp=10e3, whose value gets stored as realPrevValue.
// That produced a bogus delta of 1-50=-49.
// The fix deducts LookbackDelta not from the window start
// but from the first captured data point in the window, so the cutoff becomes 70e3-55e3=15e3
// and realPrevValue becomes NaN due to staleness detection.
timestamps = []int64{0, 10000, 70000, 80000}
values = []float64{50, 50, 1, 1}
rc := rollupConfig{
Func: rollupDelta,
Start: 0,
End: 90e3,
Step: 30e3,
LookbackDelta: 55e3,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = rc.getTimestamps()
gotValues, _ := rc.Do(nil, values, timestamps)
valuesExpected := []float64{0, 0, 0, 1}
timestampsExpected := []int64{0, 30e3, 60e3, 90e3}
testRowsEqual(t, gotValues, rc.Timestamps, valuesExpected, timestampsExpected)
})
// the last sample is stale NaN
timestamps = []int64{0, 10000, 20000, 30000, 40000}
values = []float64{0, 0, 0, 10, decimal.StaleNaN}
t.Run("last point is stale nan", func(t *testing.T) {
rc := rollupConfig{
Func: rollupDelta,
Start: 40001,
End: 40001,
Step: 50000,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = rc.getTimestamps()
gotValues, samplesScanned := rc.Do(nil, values, timestamps)
if samplesScanned != 10 {
t.Fatalf("expecting 10 samplesScanned from rollupConfig.Do; got %d", samplesScanned)
}
valuesExpected := []float64{nan}
timestampsExpected := []int64{40001}
testRowsEqual(t, gotValues, rc.Timestamps, valuesExpected, timestampsExpected)
})
}
func TestRollupIncreasePureWithStaleness(t *testing.T) {
@@ -1833,3 +1933,48 @@ func TestRollupIncreasePureWithStaleness(t *testing.T) {
testRowsEqual(t, gotValues, rc.Timestamps, valuesExpected, timestampsExpected)
})
}
func TestRollupDerivFastWithStaleness(t *testing.T) {
timestamps := []int64{0, 10000, 20000, 30000, 40000}
values := []float64{0, 0, 0, 0, 10}
t.Run("no stale marker", func(t *testing.T) {
rc := rollupConfig{
Func: rollupDerivFast,
Start: 40001,
End: 40001,
Step: 50000,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = rc.getTimestamps()
gotValues, samplesScanned := rc.Do(nil, values, timestamps)
if samplesScanned != 10 {
t.Fatalf("expecting 10 samplesScanned from rollupConfig.Do; got %d", samplesScanned)
}
valuesExpected := []float64{0.25}
timestampsExpected := []int64{40001}
testRowsEqual(t, gotValues, rc.Timestamps, valuesExpected, timestampsExpected)
})
// the last sample is stale NaN
timestamps = []int64{0, 10000, 20000, 30000, 40000}
values = []float64{0, 0, 0, 10, decimal.StaleNaN}
t.Run("last point is stale nan", func(t *testing.T) {
rc := rollupConfig{
Func: rollupDerivFast,
Start: 40001,
End: 40001,
Step: 50000,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = rc.getTimestamps()
gotValues, samplesScanned := rc.Do(nil, values, timestamps)
if samplesScanned != 10 {
t.Fatalf("expecting 10 samplesScanned from rollupConfig.Do; got %d", samplesScanned)
}
valuesExpected := []float64{nan}
timestampsExpected := []int64{40001}
testRowsEqual(t, gotValues, rc.Timestamps, valuesExpected, timestampsExpected)
})
}

View File

@@ -66,8 +66,8 @@ or at your own [VictoriaMetrics instance](https://docs.victoriametrics.com/victo
The list of MetricsQL features on top of PromQL:
* Graphite-compatible filters can be passed via `{__graphite__="foo.*.bar"}` syntax.
See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite#selecting-graphite-metrics).
VictoriaMetrics can be used as Graphite datasource in Grafana. See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite#graphite-api-usage) for details.
See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite/#selecting-graphite-metrics).
VictoriaMetrics can be used as Graphite datasource in Grafana. See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite/#graphite-api-usage) for details.
See also [label_graphite_group](#label_graphite_group) function, which can be used for extracting the given groups from Graphite metric name.
* Lookbehind window in square brackets for [rollup functions](#rollup-functions) may be omitted. VictoriaMetrics automatically selects the lookbehind window
depending on the `step` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#range-query)
@@ -742,7 +742,23 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k
This function is supported by PromQL.
See also [irate](#irate) and [rollup_rate](#rollup_rate).
See also [irate](#irate), [rollup_rate](#rollup_rate) and [rate_prometheus](#rate_prometheus).
#### rate_prometheus
`rate_prometheus(series_selector[d])` {{% available_from "#" %}} is a [rollup function](#rollup-functions), which calculates the average per-second
increase rate over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#filtering).
The resulting calculation is equivalent to `increase_prometheus(series_selector[d]) / d`.
Unlike [rate](#rate), it doesn't take into account the last sample before the given lookbehind window `d` when calculating the result - this matches Prometheus behavior.
See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details.
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
This function is usually applied to [counters](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#counter).
See also [increase_prometheus](#increase_prometheus) and [rate](#rate).
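A hypothetical side-by-side illustration (the metric name is assumed):

```metricsql
rate(http_requests_total[5m])            # may use the last sample before the 5m window
rate_prometheus(http_requests_total[5m]) # uses only samples inside the 5m window, as Prometheus does
```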
#### rate_over_sum

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -36,10 +36,10 @@
<meta property="og:title" content="UI for VictoriaMetrics">
<meta property="og:url" content="https://victoriametrics.com/">
<meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data">
<script type="module" crossorigin src="./assets/index-xmjGcv4-.js"></script>
<script type="module" crossorigin src="./assets/index-BiQY-19a.js"></script>
<link rel="modulepreload" crossorigin href="./assets/vendor-D8IJGiEn.js">
<link rel="stylesheet" crossorigin href="./assets/vendor-D1GxaB_c.css">
<link rel="stylesheet" crossorigin href="./assets/index-C85_NB5q.css">
<link rel="stylesheet" crossorigin href="./assets/index-ojCMu5lE.css">
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>

View File

@@ -63,6 +63,8 @@ var (
cacheSizeStorageTSID = flagutil.NewBytes("storage.cacheSizeStorageTSID", 0, "Overrides max size for storage/tsid cache. "+
"See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cache-tuning")
cacheSizeStorageMetricName = flagutil.NewBytes("storage.cacheSizeStorageMetricName", 0, "Overrides max size for storage/metricName cache. "+
"See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cache-tuning")
cacheSizeIndexDBIndexBlocks = flagutil.NewBytes("storage.cacheSizeIndexDBIndexBlocks", 0, "Overrides max size for indexdb/indexBlocks cache. "+
"See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cache-tuning")
cacheSizeIndexDBDataBlocks = flagutil.NewBytes("storage.cacheSizeIndexDBDataBlocks", 0, "Overrides max size for indexdb/dataBlocks cache. "+
@@ -111,6 +113,7 @@ func Init(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
storage.SetTSIDCacheSize(cacheSizeStorageTSID.IntN())
storage.SetTagFiltersCacheSize(cacheSizeIndexDBTagFilters.IntN())
storage.SetMetricNamesStatsCacheSize(cacheSizeMetricNamesStats.IntN())
storage.SetMetricNameCacheSize(cacheSizeStorageMetricName.IntN())
mergeset.SetIndexBlocksCacheSize(cacheSizeIndexDBIndexBlocks.IntN())
mergeset.SetDataBlocksCacheSize(cacheSizeIndexDBDataBlocks.IntN())
mergeset.SetDataBlocksSparseCacheSize(cacheSizeIndexDBDataBlocksSparse.IntN())

View File

@@ -1,4 +1,4 @@
FROM golang:1.24.3 AS build-web-stage
FROM golang:1.24.4 AS build-web-stage
COPY build /build
WORKDIR /build
@@ -6,7 +6,7 @@ COPY web/ /build/
RUN GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o web-amd64 github.com/VictoriMetrics/vmui/ && \
GOOS=windows GOARCH=amd64 CGO_ENABLED=0 go build -o web-windows github.com/VictoriMetrics/vmui/
FROM alpine:3.21.3
FROM alpine:3.22.0
USER root
COPY --from=build-web-stage /build/web-amd64 /app/web

View File

@@ -2,9 +2,9 @@
<html lang="en">
<head>
<meta charset="utf-8"/>
<link rel="icon" href="/favicon.svg" />
<link rel="apple-touch-icon" href="/favicon.svg" />
<link rel="mask-icon" href="/favicon.svg" color="#000000">
<link rel="icon" href="/favicon.victorialogs.svg" />
<link rel="apple-touch-icon" href="/favicon.victorialogs.svg" />
<link rel="mask-icon" href="/favicon.victorialogs.svg" color="#000000">
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=5"/>
<meta name="theme-color" content="#000000"/>

View File

@@ -10,6 +10,8 @@
"dependencies": {
"@types/lodash.debounce": "^4.0.9",
"@types/lodash.get": "^4.4.9",
"@types/lodash.orderBy": "^4.6.9",
"@types/lodash.throttle": "^4.1.9",
"@types/qs": "^6.9.18",
"@types/react": "^19.1.2",
"@types/react-input-mask": "^3.0.6",
@@ -18,6 +20,8 @@
"dayjs": "^1.11.13",
"lodash.debounce": "^4.0.8",
"lodash.get": "^4.4.2",
"lodash.orderBy": "^4.6.0",
"lodash.throttle": "^4.1.1",
"marked": "^15.0.8",
"marked-emoji": "^2.0.0",
"preact": "^10.26.5",
@@ -2191,6 +2195,24 @@
"@types/lodash": "*"
}
},
"node_modules/@types/lodash.orderBy": {
"version": "4.6.9",
"resolved": "https://registry.npmjs.org/@types/lodash.orderby/-/lodash.orderby-4.6.9.tgz",
"integrity": "sha512-T9o2wkIJOmxXwVTPTmwJ59W6eTi2FseiLR369fxszG649Po/xe9vqFNhf/MtnvT5jrbDiyWKxPFPZbpSVK0SVQ==",
"license": "MIT",
"dependencies": {
"@types/lodash": "*"
}
},
"node_modules/@types/lodash.throttle": {
"version": "4.1.9",
"resolved": "https://registry.npmjs.org/@types/lodash.throttle/-/lodash.throttle-4.1.9.tgz",
"integrity": "sha512-PCPVfpfueguWZQB7pJQK890F2scYKoDUL3iM522AptHWn7d5NQmeS/LTEHIcLr5PaTzl3dK2Z0xSUHHTHwaL5g==",
"license": "MIT",
"dependencies": {
"@types/lodash": "*"
}
},
"node_modules/@types/node": {
"version": "22.14.1",
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.14.1.tgz",
@@ -5742,6 +5764,18 @@
"dev": true,
"license": "MIT"
},
"node_modules/lodash.orderBy": {
"version": "4.6.0",
"resolved": "https://registry.npmjs.org/lodash.orderby/-/lodash.orderby-4.6.0.tgz",
"integrity": "sha512-T0rZxKmghOOf5YPnn8EY5iLYeWCpZq8G41FfqoVHH5QDTAFaghJRmAdLiadEDq+ztgM2q5PjA+Z1fOwGrLgmtg==",
"license": "MIT"
},
"node_modules/lodash.throttle": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/lodash.throttle/-/lodash.throttle-4.1.1.tgz",
"integrity": "sha512-wIkUCfVKpVsWo3JSZlc+8MB5it+2AN5W8J7YVMST30UrvcQNZ1Okbj+rbVniijTWE6FGYy4XJq/rHkas8qJMLQ==",
"license": "MIT"
},
"node_modules/loose-envify": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/loose-envify/-/loose-envify-1.4.0.tgz",

View File

@@ -7,6 +7,8 @@
"dependencies": {
"@types/lodash.debounce": "^4.0.9",
"@types/lodash.get": "^4.4.9",
"@types/lodash.orderBy": "^4.6.9",
"@types/lodash.throttle": "^4.1.9",
"@types/qs": "^6.9.18",
"@types/react": "^19.1.2",
"@types/react-input-mask": "^3.0.6",
@@ -15,6 +17,8 @@
"dayjs": "^1.11.13",
"lodash.debounce": "^4.0.8",
"lodash.get": "^4.4.2",
"lodash.orderBy": "^4.6.0",
"lodash.throttle": "^4.1.1",
"marked": "^15.0.8",
"marked-emoji": "^2.0.0",
"preact": "^10.26.5",

View File

@@ -0,0 +1,5 @@
<svg width="48" height="48" fill="#e94600" xmlns="http://www.w3.org/2000/svg">
<path d="M24.5475 0C10.3246.0265251 1.11379 3.06365 4.40623 6.10077c0 0 12.32997 11.23333 16.58217 14.84083.8131.6896 2.1728 1.1936 3.5191 1.2201h.1199c1.3463-.0265 2.706-.5305 3.5191-1.2201 4.2522-3.5942 16.5422-14.84083 16.5422-14.84083C48.0478 3.06365 38.8636.0265251 24.6674 0"/>
<path d="M28.1579 27.0159c-.8131.6896-2.1728 1.1936-3.5191 1.2201h-.12c-1.3463-.0265-2.7059-.5305-3.519-1.2201-2.9725-2.5067-13.35639-11.87-17.26201-15.3979v5.4112c0 .5968.22661 1.3793.6265 1.7506C7.00358 21.1936 17.2675 30.5437 20.9731 33.6737c.8132.6896 2.1728 1.1936 3.5191 1.2201h.12c1.3463-.0265 2.7059-.5305 3.519-1.2201 3.679-3.13 13.9429-12.4536 16.6089-14.8939.4132-.3713.6265-1.1538.6265-1.7506V11.618c-3.9323 3.5411-14.3162 12.931-17.2354 15.3979h.0267Z"/>
<path d="M28.1579 39.748c-.8131.6897-2.1728 1.1937-3.5191 1.2202h-.12c-1.3463-.0265-2.7059-.5305-3.519-1.2202-2.9725-2.4933-13.35639-11.8567-17.26201-15.3978v5.4111c0 .5969.22661 1.3793.6265 1.7507C7.00358 33.9258 17.2675 43.2759 20.9731 46.4058c.8132.6897 2.1728 1.1937 3.5191 1.2202h.12c1.3463-.0265 2.7059-.5305 3.519-1.2202 3.679-3.1299 13.9429-12.4535 16.6089-14.8938.4132-.3714.6265-1.1538.6265-1.7507v-5.4111c-3.9323 3.5411-14.3162 12.931-17.2354 15.3978h.0267Z"/>
</svg>


View File

@@ -66,8 +66,8 @@ or at your own [VictoriaMetrics instance](https://docs.victoriametrics.com/victo
The list of MetricsQL features on top of PromQL:
* Graphite-compatible filters can be passed via `{__graphite__="foo.*.bar"}` syntax.
See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite#selecting-graphite-metrics).
VictoriaMetrics can be used as Graphite datasource in Grafana. See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite#graphite-api-usage) for details.
See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite/#selecting-graphite-metrics).
VictoriaMetrics can be used as Graphite datasource in Grafana. See [these docs](https://docs.victoriametrics.com/victoriametrics/integrations/graphite/#graphite-api-usage) for details.
See also [label_graphite_group](#label_graphite_group) function, which can be used for extracting the given groups from Graphite metric name.
* Lookbehind window in square brackets for [rollup functions](#rollup-functions) may be omitted. VictoriaMetrics automatically selects the lookbehind window
depending on the `step` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#range-query)
@@ -742,7 +742,23 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k
This function is supported by PromQL.
See also [irate](#irate) and [rollup_rate](#rollup_rate).
See also [irate](#irate), [rollup_rate](#rollup_rate) and [rate_prometheus](#rate_prometheus).
#### rate_prometheus
`rate_prometheus(series_selector[d])` {{% available_from "#" %}} is a [rollup function](#rollup-functions), which calculates the average per-second
increase rate over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#filtering).
The resulting calculation is equivalent to `increase_prometheus(series_selector[d]) / d`.
Unlike [rate](#rate), it doesn't take into account the last sample before the given lookbehind window `d` when calculating the result - this matches Prometheus behavior.
See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details.
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
This function is usually applied to [counters](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#counter).
See also [increase_prometheus](#increase_prometheus) and [rate](#rate).
#### rate_over_sum

View File

@@ -33,8 +33,12 @@ const LogsQueryEditorAutocomplete: FC<QueryEditorAutocompleteProps> = ({
const part = logicalParts.find(p => caretPosition[0] >= p.position[0] && caretPosition[0] <= p.position[1]);
if (!part) return;
const cursorStartPosition = caretPosition[0] - part.position[0];
const prevPart = logicalParts.find(p => p.id === part.id - 1);
const queryBeforeIncompleteFilter = prevPart ? value.substring(0, prevPart.position[1] + 1) : undefined;
return {
...part,
queryBeforeIncompleteFilter,
query: value,
...getContextData(part, cursorStartPosition)
};
}, [logicalParts, caretPosition]);
@@ -50,6 +54,8 @@ const LogsQueryEditorAutocomplete: FC<QueryEditorAutocompleteProps> = ({
return fieldValues;
case ContextType.PipeName:
return pipeList;
case ContextType.FilterOrPipeName:
return [...fieldNames, ...pipeList];
default:
return [];
}
@@ -58,7 +64,7 @@ const LogsQueryEditorAutocomplete: FC<QueryEditorAutocompleteProps> = ({
const getUpdatedValue = (insertValue: string, logicalParts: LogicalPart[], id?: number) => {
return logicalParts.reduce((acc, part) => {
const value = part.id === id ? insertValue : part.value;
const separator = part.type === LogicalPartType.Pipe ? " | " : " ";
const separator = part.separator === "|" ? " | " : " ";
return `${acc}${separator}${value}`;
}, "").trim();
};
@@ -70,7 +76,7 @@ const LogsQueryEditorAutocomplete: FC<QueryEditorAutocompleteProps> = ({
modifiedInsert += ":";
} else if (contextType === ContextType.FilterValue) {
const insertWithQuotes = value.startsWith("_stream:") ? modifiedInsert : `${JSON.stringify(modifiedInsert)}`;
modifiedInsert = `${contextData?.filterName || ""}:${insertWithQuotes}`;
modifiedInsert = `${contextData?.filterName || ""}${contextData?.operator || ":"}${insertWithQuotes}`;
}
return modifiedInsert;
@@ -86,7 +92,13 @@ const LogsQueryEditorAutocomplete: FC<QueryEditorAutocompleteProps> = ({
const insertValue = getModifyInsert(insert, contextType, value, item.type);
const newValue = getUpdatedValue(insertValue, logicalParts, id);
const updatedPosition = (position[0] || 1) + insertValue.length + (item.type === ContextType.PipeName ? 1 : 0);
const logicalPart = logicalParts.find(p => p.id === id);
const getPositionCorrection = () => {
if (logicalPart?.type === LogicalPartType.FilterOrPipe) return 1;
if (item.type === ContextType.PipeName) return 1;
return 0;
};
const updatedPosition = (position[0] || 1) + insertValue.length + getPositionCorrection();
onSelect(newValue, updatedPosition);
}, [contextData, logicalParts]);

View File

@@ -9,6 +9,7 @@ export const splitLogicalParts = (expr: string) => {
const input = expr; //.replace(/\s*:\s*/g, ":");
const parts: LogicalPart[] = [];
let currentPart = "";
let separator: undefined | " " | "|" = undefined;
let isPipePart = false;
const quotes = ["'", "\"", "`"];
@@ -43,8 +44,9 @@ export const splitLogicalParts = (expr: string) => {
isPipePart = true;
const countStartSpaces = currentPart.match(/^ */)?.[0].length || 0;
const countEndSpaces = currentPart.match(/ *$/)?.[0].length || 0;
pushPart(currentPart, true, [startIndex + countStartSpaces, i - countEndSpaces - 1], parts);
pushPart(currentPart, true, [startIndex + countStartSpaces, i - countEndSpaces - 1], parts, separator);
currentPart = "";
separator = "|";
startIndex = i + 1;
continue;
}
@@ -54,7 +56,8 @@ export const splitLogicalParts = (expr: string) => {
const nextStr = input.slice(i).replace(/^\s*/, "");
const prevStr = input.slice(0, i).replace(/\s*$/, "");
if (!nextStr.startsWith(":") && !prevStr.endsWith(":")) {
pushPart(currentPart, false, [startIndex, i - 1], parts);
pushPart(currentPart, false, [startIndex, i - 1], parts, separator);
separator = " ";
currentPart = "";
startIndex = i + 1;
continue;
@@ -65,26 +68,35 @@ export const splitLogicalParts = (expr: string) => {
}
// push the last part
pushPart(currentPart, isPipePart, [startIndex, input.length], parts);
pushPart(currentPart, isPipePart, [startIndex, input.length], parts, separator);
return parts;
};
const pushPart = (currentPart: string, isPipePart: boolean, position: LogicalPartPosition, parts: LogicalPart[]) => {
const pushPart = (currentPart: string, isPipePart: boolean, position: LogicalPartPosition, parts: LogicalPart[], separator: LogicalPart["separator"]) => {
const trimmedPart = currentPart.trim();
if (!trimmedPart) return;
const isOperator = BUILDER_OPERATORS.includes(trimmedPart.toUpperCase());
const pipesTypes = [LogicalPartType.Pipe, LogicalPartType.FilterOrPipe];
const isPreviousPartPipe = parts.length > 0 && pipesTypes.includes(parts[parts.length - 1].type);
const getType = () => {
if (isPreviousPartPipe) return LogicalPartType.FilterOrPipe;
if (isPipePart) return LogicalPartType.Pipe;
if (isOperator) return LogicalPartType.Operator;
return LogicalPartType.Filter;
};
parts.push({
id: parts.length,
value: trimmedPart,
position,
type: isPipePart
? LogicalPartType.Pipe
: isOperator ? LogicalPartType.Operator : LogicalPartType.Filter,
type: getType(),
separator,
});
};
export const getContextData = (part: LogicalPart, cursorPos: number) => {
export const getContextData = (part: LogicalPart, cursorPos: number): ContextData => {
const valueBeforeCursor = part.value.substring(0, cursorPos);
const valueAfterCursor = part.value.substring(cursorPos);
@@ -95,23 +107,91 @@ export const getContextData = (part: LogicalPart, cursorPos: number) => {
contextType: ContextType.Unknown,
};
if (part.type === LogicalPartType.Filter) {
const noColon = !valueBeforeCursor.includes(":") && !valueAfterCursor.includes(":");
if (noColon) {
metaData.contextType = ContextType.FilterUnknown;
} else if (valueBeforeCursor.includes(":")) {
const [filterName, filterValue] = valueBeforeCursor.split(":");
metaData.contextType = ContextType.FilterValue;
metaData.filterName = filterName;
metaData.valueContext = filterValue;
} else {
metaData.contextType = ContextType.FilterName;
}
} else if (part.type === LogicalPartType.Pipe) {
const valueStartWithPipe = PIPE_NAMES.some(p => part.value.startsWith(p));
metaData.contextType = valueStartWithPipe ? ContextType.PipeValue : ContextType.PipeName;
}
// Determine context type based on logical part type
determineContextType(part, valueBeforeCursor, valueAfterCursor, metaData);
// Clean up quotes in valueContext
metaData.valueContext = metaData.valueContext.replace(/^["']|["']$/g, "");
return metaData;
};
/** Helper function to determine if a string starts with any of the pipe names */
const startsWithPipe = (value: string): boolean => {
return PIPE_NAMES.some(p => value.startsWith(p));
};
/** Helper function to check for colon presence */
const hasNoColon = (before: string, after: string): boolean => {
return !before.includes(":") && !after.includes(":");
};
/** Helper function to extract filter name and update metadata for filter values */
const handleFilterValue = (valueBeforeCursor: string, metaData: ContextData): void => {
const [filterName, ...filterValue] = valueBeforeCursor.split(":");
metaData.contextType = ContextType.FilterValue;
metaData.filterName = filterName;
const enhanceOperators = ["=", "-", "!", "~", "<", ">", "<=", ">="] as const;
const enhanceOperator = enhanceOperators.find(op => op === filterValue[0]);
if (enhanceOperator) {
metaData.valueContext = filterValue.slice(1).join(":");
metaData.operator = `:${enhanceOperator}`;
} else {
metaData.valueContext = filterValue.join(":");
metaData.operator = ":";
}
};
/** Function to determine context type based on part type and value */
const determineContextType = (
part: LogicalPart,
valueBeforeCursor: string,
valueAfterCursor: string,
metaData: ContextData
): void => {
switch (part.type) {
case LogicalPartType.Filter:
handleFilterType(valueBeforeCursor, valueAfterCursor, metaData);
break;
case LogicalPartType.Pipe:
metaData.contextType = startsWithPipe(part.value)
? ContextType.PipeValue
: ContextType.PipeName;
break;
case LogicalPartType.FilterOrPipe:
handleFilterOrPipeType(part.value, valueBeforeCursor, metaData);
break;
}
};
/** Handle filter type context determination */
const handleFilterType = (
valueBeforeCursor: string,
valueAfterCursor: string,
metaData: ContextData
): void => {
if (hasNoColon(valueBeforeCursor, valueAfterCursor)) {
metaData.contextType = ContextType.FilterUnknown;
} else if (valueBeforeCursor.includes(":")) {
handleFilterValue(valueBeforeCursor, metaData);
} else {
metaData.contextType = ContextType.FilterName;
}
};
/** Handle FilterOrPipeType context determination */
const handleFilterOrPipeType = (
value: string,
valueBeforeCursor: string,
metaData: ContextData
): void => {
if (startsWithPipe(value)) {
metaData.contextType = ContextType.PipeValue;
} else if (valueBeforeCursor.includes(":")) {
handleFilterValue(valueBeforeCursor, metaData);
} else {
metaData.contextType = ContextType.FilterOrPipeName;
}
};

View File

@@ -2,15 +2,19 @@ export enum LogicalPartType {
Filter = "Filter",
Pipe = "Pipe",
Operator = "Operator",
FilterOrPipe = "FilterOrPipe",
}
export type LogicalPartPosition = [start: number, end: number];
export type LogicalPartSeparator = " " | "|";
export interface LogicalPart {
id: number;
value: string;
type: LogicalPartType;
position: LogicalPartPosition;
separator?: LogicalPartSeparator;
}
export interface ContextData {
@@ -19,6 +23,10 @@ export interface ContextData {
contextType: ContextType;
valueContext: string;
filterName?: string;
query?: string;
queryBeforeIncompleteFilter?: string;
separator?: LogicalPartSeparator;
operator?: ":" | ":!" | ":-" | ":=" | ":~" | ":<" | ":>" | ":<=" | ":>=";
}
export enum ContextType {
@@ -28,4 +36,5 @@ export enum ContextType {
PipeName = "Pipes",
PipeValue = "PipeValue",
Unknown = "Unknown",
FilterOrPipeName = "FilterOrPipeName",
}

View File

@@ -10,11 +10,11 @@ import { AUTOCOMPLETE_LIMITS } from "../../../../constants/queryAutocomplete";
import { LogsFiledValues } from "../../../../api/types";
import { useLogsDispatch, useLogsState } from "../../../../state/logsPanel/LogsStateContext";
import { useTenant } from "../../../../hooks/useTenant";
import { generateQuery } from "./utils";
type FetchDataArgs = {
urlSuffix: string;
setter: Dispatch<SetStateAction<AutocompleteOptions[]>>
type: ContextType;
setter: (value: LogsFiledValues[]) => void;
params?: URLSearchParams;
}
@@ -24,7 +24,8 @@ const icons = {
[ContextType.FilterValue]: <ValueIcon/>,
[ContextType.PipeName]: <FunctionIcon/>,
[ContextType.PipeValue]: <LabelIcon/>,
[ContextType.Unknown]: <ValueIcon/>
[ContextType.Unknown]: <ValueIcon/>,
[ContextType.FilterOrPipeName]: <FunctionIcon/>
};
export const useFetchLogsQLOptions = (contextData?: ContextData) => {
@@ -61,7 +62,7 @@ export const useFetchLogsQLOptions = (contextData?: ContextData) => {
}));
};
const fetchData = async ({ urlSuffix, setter, type, params }: FetchDataArgs) => {
const fetchData = async ({ urlSuffix, setter, params }: FetchDataArgs) => {
abortControllerRef.current.abort();
abortControllerRef.current = new AbortController();
const { signal } = abortControllerRef.current;
@@ -73,7 +74,7 @@ export const useFetchLogsQLOptions = (contextData?: ContextData) => {
try {
const cachedData = autocompleteCache.get(key);
if (cachedData) {
setter(processData(cachedData, type));
setter(cachedData);
setLoading(false);
return;
}
@@ -86,7 +87,7 @@ export const useFetchLogsQLOptions = (contextData?: ContextData) => {
if (response.ok) {
const data = await response.json();
const value = (data?.values || []) as LogsFiledValues[];
setter(value ? processData(value, type) : []);
setter(value || []);
dispatch({ type: "SET_AUTOCOMPLETE_CACHE", payload: { key, value } });
}
setLoading(false);
@@ -101,7 +102,7 @@ export const useFetchLogsQLOptions = (contextData?: ContextData) => {
// fetch field names
useEffect(() => {
const validContexts = [ContextType.FilterName, ContextType.FilterUnknown];
const validContexts = [ContextType.FilterName, ContextType.FilterUnknown, ContextType.FilterOrPipeName];
const isInvalidContext = !validContexts.includes(contextData?.contextType || ContextType.Unknown);
if (!serverUrl || isInvalidContext) {
return;
@@ -109,11 +110,14 @@ export const useFetchLogsQLOptions = (contextData?: ContextData) => {
setFieldNames([]);
const setter = (filterNames: LogsFiledValues[]) => {
setFieldNames(processData(filterNames, ContextType.FilterName));
};
fetchData({
urlSuffix: "field_names",
setter: setFieldNames,
type: ContextType.FilterName,
params: getQueryParams({ query: "*" })
setter: setter,
params: getQueryParams({ query: contextData?.queryBeforeIncompleteFilter || "*" })
});
return () => abortControllerRef.current?.abort();
@@ -128,11 +132,14 @@ export const useFetchLogsQLOptions = (contextData?: ContextData) => {
setFieldValues([]);
const setter = (filterValues: LogsFiledValues[]) => {
setFieldValues(processData(filterValues, ContextType.FilterValue));
};
fetchData({
urlSuffix: "field_values",
setter: setFieldValues,
type: ContextType.FilterValue,
params: getQueryParams({ query: "*", field: contextData.filterName })
setter: setter,
params: getQueryParams({ query: generateQuery(contextData), field: contextData.filterName })
});
return () => abortControllerRef.current?.abort();

View File

@@ -0,0 +1,131 @@
import { expect } from "vitest";
import { generateQuery } from "./utils";
import { ContextType } from "./types";
describe("utils", () => {
describe("_time", () => {
it("should return the trimmed value by `-`", () => {
expect(generateQuery({
queryBeforeIncompleteFilter: "_stream:{type=\"WatchEvent\"}",
contextType: ContextType.FilterValue,
filterName: "_time",
query: "_stream:{type=\"WatchEvent\"} _time:2025-04-1",
valueAfterCursor: "",
valueBeforeCursor: "_time=2025-04-1",
valueContext: "2025-04-1"
})).toStrictEqual("_stream:{type=\"WatchEvent\"} _time:2025-04");
});
it("should return the trimmed value by `:` if char `-` also exist in the query", () => {
expect(generateQuery({
queryBeforeIncompleteFilter: "_stream:{type=\"WatchEvent\"}",
contextType: ContextType.FilterValue,
filterName: "_time",
query: "_stream:{type=\"WatchEvent\"} _time:2025-04-10T23:45:5",
valueAfterCursor: "",
valueBeforeCursor: "_time=2025-04-10T23:45:5",
valueContext: "2025-04-10T23:45:5"
})).toStrictEqual("_stream:{type=\"WatchEvent\"} _time:2025-04-10T23:45");
});
it("should return default `*` instead of -time filter", () => {
expect(generateQuery({
queryBeforeIncompleteFilter: "_stream:{type=\"WatchEvent\"}",
contextType: ContextType.FilterValue,
filterName: "_time",
query: "_stream:{type=\"WatchEvent\"} _time:202",
valueAfterCursor: "",
valueBeforeCursor: "_time=202",
valueContext: "202"
})).toStrictEqual("_stream:{type=\"WatchEvent\"} *");
});
});
describe("_stream", () => {
it("should add regexp to filter value", () => {
expect(generateQuery({
queryBeforeIncompleteFilter: "",
contextType: ContextType.FilterValue,
filterName: "_stream",
query: "_stream:{type=\"WatchEve",
valueAfterCursor: "",
valueBeforeCursor: "_stream:{type=\"WatchEve",
valueContext: "{type=\"WatchEve"
})).toStrictEqual("_stream:{type=~\"WatchEve.*\"}");
});
it("should add regexp to filter value if cursor in the middle of value", () => {
expect(generateQuery({
queryBeforeIncompleteFilter: "",
contextType: ContextType.FilterValue,
filterName: "_stream",
query: "_stream:{type=\"WatchEve\"}",
valueAfterCursor: "",
valueBeforeCursor: "_stream:{type=\"WatchEve",
valueContext: "{type=\"WatchEve"
})).toStrictEqual("_stream:{type=~\"WatchEve.*\"}");
});
it("should return * if do not have value after =", () => {
expect(generateQuery({
queryBeforeIncompleteFilter: "",
contextType: ContextType.FilterValue,
filterName: "_stream",
query: "_stream:{type=",
valueAfterCursor: "",
valueBeforeCursor: "_stream:{type=",
valueContext: "{type="
})).toStrictEqual("*");
});
});
it("_msg", () => {
expect(generateQuery({
queryBeforeIncompleteFilter: "_stream:{type=\"WatchEvent\"}",
contextType: ContextType.FilterValue,
filterName: "_msg",
query: "_stream:{type=\"WatchEvent\"} _msg:453",
valueAfterCursor: "",
valueBeforeCursor: "_msg:453",
valueContext: "453"
})).toStrictEqual("_stream:{type=\"WatchEvent\"} *");
});
it("_stream_id", () => {
expect(generateQuery({
queryBeforeIncompleteFilter: "_stream:{type=\"WatchEvent\"}",
contextType: ContextType.FilterValue,
filterName: "_stream_id",
query: "_stream:{type=\"WatchEvent\"} _stream_id:453",
valueAfterCursor: "",
valueBeforeCursor: "_stream_id:453",
valueContext: "453"
})).toStrictEqual("_stream:{type=\"WatchEvent\"} *");
});
describe("other fields", () => {
it("should add prefix filter to other type of field names", () => {
expect(generateQuery({
queryBeforeIncompleteFilter: "",
contextType: ContextType.FilterValue,
filterName: "repo.name",
query: "repo.name:Victori",
valueAfterCursor: "",
valueBeforeCursor: "repo.name:Victori",
valueContext: "Victori"
})).toStrictEqual("repo.name:Victori*");
});
it("should add prefix filter to other type of field names with escaped via double quote", () => {
expect(generateQuery({
queryBeforeIncompleteFilter: "",
contextType: ContextType.FilterValue,
filterName: "repo.name",
query: "repo.name:\"Victori",
valueAfterCursor: "",
valueBeforeCursor: "repo.name:\"Victori",
valueContext: "Victori"
})).toStrictEqual("repo.name:Victori*");
});
});
});

View File

@@ -0,0 +1,61 @@
import { ContextData } from "./types";
const getStreamFieldQuery = (valueContext: string) => {
// valueContext arrives as e.g. `{type="WatchEve`: fieldName keeps the leading `{`
// and fieldValue the opening `"`, so the template below closes the quote and brace.
if (valueContext.includes("=")) {
const [fieldName, fieldValue] = valueContext.split("=");
if (fieldValue) {
return `_stream:${fieldName}=~${fieldValue}.*"}`;
}
}
return "*";
};
// Returns everything before the last occurrence of `delimiter`, or "" if it is absent.
const getLastPartUntilDelimiter = (value: string, delimiter: string) => {
const lastIndexOfDelimiter = value.lastIndexOf(delimiter);
return lastIndexOfDelimiter !== -1 ? value.slice(0, lastIndexOfDelimiter) : "";
};
const getDateQuery = (contextData: ContextData) => {
let fieldValue = "";
if (contextData.valueContext.includes(":")) {
fieldValue = getLastPartUntilDelimiter(contextData.valueContext, ":");
} else if (contextData.valueContext.includes("-")) {
fieldValue = getLastPartUntilDelimiter(contextData.valueContext, "-");
}
return fieldValue ? `${contextData.filterName}:${fieldValue}` : "*";
};
/**
* Generates a query string based on the provided context data.
*
* The function processes the input based on the `filterName` property:
*
* - If `filterName` is `_msg` or `_stream_id`, the query cannot be generated specifically,
* so a wildcard query (`"*"`) is returned.
*
* - If `filterName` is `_stream`, the query is generated using regexp (`{type=~"value.*"}`).
*
* - If `filterName` is `_time`, a simplified query is created by trimming the value
* back to the last occurrence of a delimiter (`:` if present, otherwise `-`).
*
* - For all other values of `filterName`, a prefix query is returned by appending
* `*` to the `valueContext` (e.g., `repo.name:Victori*`).
*
* @param {ContextData} contextData - The context object containing query parameters and metadata.
* @returns {string} The generated query string.
*/
export const generateQuery = (contextData: ContextData): string => {
let fieldQuery = "";
if (!contextData.filterName || !contextData.query || ["_msg", "_stream_id"].includes(contextData.filterName)) {
fieldQuery = "*";
} else if ("_stream" === contextData.filterName) {
fieldQuery = getStreamFieldQuery(contextData.valueContext);
} else if ("_time" === contextData.filterName) {
fieldQuery = getDateQuery(contextData);
} else {
fieldQuery = `${contextData.filterName}:${contextData.valueContext}*`;
}
return contextData.queryBeforeIncompleteFilter ? `${contextData.queryBeforeIncompleteFilter}${contextData.separator ?? " "}${fieldQuery}` : fieldQuery;
};
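A usage sketch mirroring the first `_time` unit-test case below; the trimming behavior follows the JSDoc above:

```ts
import { generateQuery } from "./utils";
import { ContextType } from "./types";

// The incomplete `_time:2025-04-1` is trimmed back to its last `-` delimiter.
const generated = generateQuery({
  queryBeforeIncompleteFilter: "_stream:{type=\"WatchEvent\"}",
  contextType: ContextType.FilterValue,
  filterName: "_time",
  query: "_stream:{type=\"WatchEvent\"} _time:2025-04-1",
  valueAfterCursor: "",
  valueBeforeCursor: "_time:2025-04-1",
  valueContext: "2025-04-1",
});

console.log(generated); // "_stream:{type=\"WatchEvent\"} _time:2025-04"
```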

View File

@@ -0,0 +1,43 @@
import { hasSortPipe } from "./sort";
describe("hasSortPipe()", () => {
/** Queries that MUST be recognised as containing a sort/order pipe. */
const positive: string[] = [
// ───── basic usage ─────
"sort by (_time)",
"| sort by (_time)",
"|sort(_time) desc",
"| order by (foo desc)",
"_time:5m | sort by (_stream, _time)",
// ───── documented options ─────
"_time:1h | sort by (request_duration desc) limit 10",
"_time:1h | sort by (request_duration desc) partition by (host) limit 3",
"_time:5m | sort by (_time) rank as position",
// ───── whitespace / tabs ─────
"|\t sort\tby (host)",
// ───── no space after the pipe ─────
"foo|sort by (_time)",
];
/** Queries that MUST **not** be recognised (false positives). */
const negative: string[] = [
"", // empty
"error | sample 100", // no sort
"|sorted(field)", // 'sorted' ≠ 'sort'
"|sorter(field)", // 'sorter' ≠ 'sort'
"my_sort(field)", // function name
"| sorta by (field)", // 'sorta'
"foo | orderliness by (bar)", // 'orderliness' ≠ 'order'
];
it.each(positive)("detects pipe in ➜ %s", query => {
expect(hasSortPipe(query)).toBe(true);
});
it.each(negative)("does NOT detect pipe in ➜ %s", query => {
expect(hasSortPipe(query)).toBe(false);
});
});

View File

@@ -0,0 +1,5 @@
const hasSortPipeRe = /(?:^|\|)\s*(?:sort|order)\b/i;
export function hasSortPipe(query: string): boolean {
return hasSortPipeRe.test(query);
}
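For quick reference, how the regex behaves on a few of the cases from the test file above:

```ts
import { hasSortPipe } from "./sort";

hasSortPipe("_time:5m | sort by (_stream, _time)"); // true: `sort` right after a pipe
hasSortPipe("| order by (foo desc)");               // true: `order` works as an alias
hasSortPipe("my_sort(field)");                      // false: no `^` or `|` before `sort`
hasSortPipe("|sorted(field)");                      // false: `\b` rejects `sorted`
```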

View File

@@ -1,4 +1,4 @@
import React, { FC, useMemo, useState } from "preact/compat";
import { FC, useMemo, useState } from "preact/compat";
import useBoolean from "../../../hooks/useBoolean";
import { RestartIcon, SettingsIcon } from "../../Main/Icons";
import Button from "../../Main/Button/Button";
@@ -19,8 +19,8 @@ import {
LOGS_URL_PARAMS,
WITHOUT_GROUPING
} from "../../../constants/logs";
import { getFromStorage, saveToStorage } from "../../../utils/storage";
import LogParsingSwitches from "../../Configurators/LogsSettings/LogParsingSwitches";
import { useLocalStorageBoolean } from "../../../hooks/useLocalStorageBoolean";
const {
GROUP_BY,
@@ -48,7 +48,7 @@ const GroupLogsConfigurators: FC<Props> = ({ logs }) => {
const [dateFormat, setDateFormat] = useState(searchParams.get(DATE_FORMAT) || LOGS_DATE_FORMAT);
const [errorFormat, setErrorFormat] = useState("");
const [disabledHovers, setDisabledHovers] = useState(!!getFromStorage("LOGS_DISABLED_HOVERS"));
const [disabledHovers, handleSetDisabledHovers] = useLocalStorageBoolean("LOGS_DISABLED_HOVERS");
const isGroupChanged = groupBy !== LOGS_GROUP_BY;
const isDisplayFieldsChanged = displayFields.length !== 1 || displayFields[0] !== LOGS_DISPLAY_FIELDS;
@@ -62,7 +62,8 @@ const GroupLogsConfigurators: FC<Props> = ({ logs }) => {
].some(Boolean);
const logsKeys = useMemo(() => {
return Array.from(new Set(logs.map(l => Object.keys(l)).flat()));
const uniqueKeys = new Set(logs.map(l => Object.keys(l)).flat());
return Array.from(uniqueKeys).sort((a, b) => a.localeCompare(b));
}, [logs]);
const {
@@ -116,11 +117,6 @@ const GroupLogsConfigurators: FC<Props> = ({ logs }) => {
handleClose();
};
const handleSetDisabledHovers = (value: boolean) => {
setDisabledHovers(value);
saveToStorage("LOGS_DISABLED_HOVERS", value);
};
const tooltipContent = () => {
if (!hasChanges) return title;
return (

View File

@@ -1,4 +1,4 @@
import React, { FC } from "preact/compat";
import { FC } from "preact/compat";
import classNames from "classnames";
import { MouseEvent as ReactMouseEvent, ReactNode } from "react";
import "./style.scss";
@@ -14,6 +14,7 @@ interface ButtonProps {
disabled?: boolean
children?: ReactNode
className?: string
"data-id"?: string
onClick?: (e: ReactMouseEvent<HTMLButtonElement>) => void
onMouseDown?: (e: ReactMouseEvent<HTMLButtonElement>) => void
}
@@ -31,6 +32,7 @@ const Button: FC<ButtonProps> = ({
disabled,
onClick,
onMouseDown,
"data-id": dataId
}) => {
const classesButton = classNames({
@@ -50,6 +52,7 @@ const Button: FC<ButtonProps> = ({
aria-label={ariaLabel}
onClick={onClick}
onMouseDown={onMouseDown}
data-id={dataId}
>
{startIcon}{children}{endIcon}
</button>

View File

@@ -1,4 +1,3 @@
import React from "react";
import { getCssVariable } from "../../../utils/theme";
export const LogoIcon = () => (
@@ -643,3 +642,47 @@ export const PauseIcon = () => (
<path d="M6 19h4V5H6v14zm8-14v14h4V5h-4z" />
</svg>
);
export const ScrollToTopIcon = () => (
<svg
viewBox="0 0 24 24"
fill="currentColor"
>
<path
d="M8 12l4-4 4 4m-4-4v12"
strokeWidth="2"
stroke="currentColor"
fill="none"
/>
</svg>
);
export const SortIcon = () => (
<svg
viewBox="0 0 24 24"
fill="currentColor"
>
<path d="M4 3 L4 15 L1.5 15 L5.5 21 L9.5 15 L7 15 L7 3 Z"/>
<path d="M13 21 L13 9 L10.5 9 L14.5 3 L18.5 9 L16 9 L16 21 Z"/>
</svg>
);
export const SortArrowDownIcon = () => (
<svg
viewBox="0 0 24 24"
fill="currentColor"
>
<path d="M10.5 3 L10.5 15 L8 15 L12 21 L16 15 L13.5 15 L13.5 3 Z"/>
</svg>
);
export const SortArrowUpIcon = () => (
<svg
viewBox="0 0 24 24"
fill="currentColor"
>
<path d="M10.5 21 L10.5 9 L8 9 L12 3 L16 9 L13.5 9 L13.5 21 Z"/>
</svg>
);

View File

@@ -0,0 +1,59 @@
import { FC, useEffect, useState } from "preact/compat";
import Button from "../Main/Button/Button";
import Tooltip from "../Main/Tooltip/Tooltip";
import { ScrollToTopIcon } from "../Main/Icons";
import classNames from "classnames";
import "./style.scss";
import { useCallback } from "react";
interface ScrollToTopButtonProps {
className?: string;
}
const ScrollToTopButton: FC<ScrollToTopButtonProps> = ({ className }) => {
const [isVisible, setIsVisible] = useState(false);
const checkScrollPosition = () => {
const scrollPosition = window.pageYOffset || document.documentElement.scrollTop;
const visibleHeightThreshold = window.innerHeight;
setIsVisible(scrollPosition > visibleHeightThreshold);
};
const scrollToTop = useCallback(() => {
window.scrollTo({
top: 0,
behavior: "smooth"
});
}, []);
useEffect(() => {
window.addEventListener("scroll", checkScrollPosition);
checkScrollPosition();
return () => {
window.removeEventListener("scroll", checkScrollPosition);
};
}, []);
return (
<div
className={classNames({
"vm-scroll-to-top-button": true,
"vm-scroll-to-top-button_visible": isVisible
}, className)}
>
<Tooltip title="Scroll to top">
<Button
variant="contained"
color="primary"
onClick={scrollToTop}
ariaLabel="Scroll to top"
startIcon={<ScrollToTopIcon />}
/>
</Tooltip>
</div>
);
};
export default ScrollToTopButton;

View File

@@ -0,0 +1,26 @@
@use "src/styles/variables" as *;
.vm-scroll-to-top-button {
position: fixed;
bottom: 20px;
right: 20px;
z-index: 4;
opacity: 0;
visibility: hidden;
transition: opacity 0.3s, visibility 0.3s;
&_visible {
opacity: 1;
visibility: visible;
}
.vm-button {
border-radius: 50%;
width: 40px;
height: 40px;
display: flex;
align-items: center;
justify-content: center;
box-shadow: 0 2px 5px rgba(0, 0, 0, 0.2);
}
}

View File

@@ -1,4 +1,4 @@
import React, { FC, useEffect, useRef, useMemo } from "preact/compat";
import { FC, useEffect, useRef, useMemo } from "preact/compat";
import Button from "../../Main/Button/Button";
import { SearchIcon, SettingsIcon } from "../../Main/Icons";
import "./style.scss";
@@ -18,8 +18,8 @@ const title = "Table settings";
interface TableSettingsProps {
columns: string[];
selectedColumns?: string[];
tableCompact: boolean;
toggleTableCompact: () => void;
tableCompact?: boolean;
toggleTableCompact?: () => void;
onChangeColumns: (arr: string[]) => void
}
@@ -49,8 +49,8 @@ const TableSettings: FC<TableSettingsProps> = ({
const filteredColumns = useMemo(() => {
const allColumns = customColumns.concat(columns);
if (!searchColumn) return allColumns;
return allColumns.filter(col => col.includes(searchColumn));
const result = searchColumn ? allColumns.filter(col => col.includes(searchColumn)) : allColumns;
return result.sort((a, b) => a.localeCompare(b));
}, [columns, customColumns, searchColumn]);
const isAllChecked = useMemo(() => {
@@ -195,18 +195,20 @@ const TableSettings: FC<TableSettingsProps> = ({
</div>
</div>
</div>
<div className="vm-table-settings-modal-section">
<div className="vm-table-settings-modal-section__title">
{toggleTableCompact && tableCompact !== undefined && (
<div className="vm-table-settings-modal-section">
<div className="vm-table-settings-modal-section__title">
Table view
</div>
<div className="vm-table-settings-modal-columns-list__item">
<Switch
label={"Compact view"}
value={tableCompact}
onChange={toggleTableCompact}
/>
</div>
</div>
<div className="vm-table-settings-modal-columns-list__item">
<Switch
label={"Compact view"}
value={tableCompact}
onChange={toggleTableCompact}
/>
</div>
</div>
)}
</Modal>)}
</div>
);

View File

@@ -0,0 +1,70 @@
import { act, renderHook } from "@testing-library/preact";
import { useLocalStorageBoolean } from "./useLocalStorageBoolean";
import * as storageUtils from "../utils/storage";
import { Mock } from "vitest";
import { StorageKeys } from "../utils/storage";
vi.mock("../utils/storage");
const testStorageKey = "TEST_STORAGE_KEY" as StorageKeys;
describe("useLocalStorageBoolean", () => {
const { getFromStorage, saveToStorage } = storageUtils;
beforeEach(() => {
vi.clearAllMocks();
});
it("initializes with the value from localStorage", () => {
const mockGetFromStorage = getFromStorage as Mock;
mockGetFromStorage.mockReturnValueOnce(true);
const { result } = renderHook(() => useLocalStorageBoolean(testStorageKey));
expect(result.current[0]).toBe(true);
expect(getFromStorage).toHaveBeenCalledWith(testStorageKey);
});
it("updates localStorage and state when setter is called", () => {
const mockGetFromStorage = getFromStorage as Mock;
mockGetFromStorage.mockReturnValueOnce(false);
const { result } = renderHook(() => useLocalStorageBoolean(testStorageKey));
act(() => {
result.current[1](true);
});
expect(saveToStorage).toHaveBeenCalledWith(testStorageKey, true);
expect(result.current[0]).toBe(false);
});
it("reacts to changes in localStorage by storage events", () => {
const mockGetFromStorage = getFromStorage as Mock;
mockGetFromStorage.mockReturnValueOnce(false);
const { result } = renderHook(() => useLocalStorageBoolean(testStorageKey));
// Simulate a storage event
act(() => {
mockGetFromStorage.mockReturnValueOnce(true);
window.dispatchEvent(new StorageEvent("storage", { key: testStorageKey, newValue: "true" }));
});
expect(result.current[0]).toBe(true);
});
it("does not update state if the localStorage value remains the same", () => {
const mockGetFromStorage = getFromStorage as Mock;
mockGetFromStorage.mockReturnValueOnce(false);
const { result } = renderHook(() => useLocalStorageBoolean(testStorageKey));
act(() => {
mockGetFromStorage.mockReturnValueOnce(false);
window.dispatchEvent(new StorageEvent("storage", { key: testStorageKey, newValue: "false" }));
});
expect(result.current[0]).toBe(false);
});
});

View File

@@ -0,0 +1,31 @@
import { useMemo, useState } from "preact/compat";
import { getFromStorage, saveToStorage, StorageKeys } from "../utils/storage";
import useEventListener from "./useEventListener";
import { useCallback } from "react";
/**
* A custom hook that synchronizes a boolean state with a value stored in localStorage.
*
* @param {StorageKeys} key - The key used to access the corresponding value in localStorage.
* @returns {[boolean, function]} A tuple containing the current boolean value from localStorage and a setter function to update the value in localStorage.
*
* The hook listens to the "storage" event to automatically update the state when the localStorage value changes.
*/
export const useLocalStorageBoolean = (key: StorageKeys): [boolean, (value: boolean) => void] => {
const [value, setValue] = useState(!!getFromStorage(key));
const handleUpdateStorage = useCallback(() => {
const newValue = !!getFromStorage(key);
if (newValue !== value) {
setValue(newValue);
}
}, [key, value]);
const setNewValue = useCallback((newValue: boolean) => {
saveToStorage(key, newValue);
}, [key]);
useEventListener("storage", handleUpdateStorage);
return useMemo(() => [value, setNewValue], [value, setNewValue]);
};
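The hook pairs with the GroupLogsConfigurators change earlier in this diff. A minimal usage sketch with a hypothetical component; it assumes `saveToStorage` emits a "storage" event so same-tab instances re-read the key (the unit test above mocks it, which is why state there only changes via an explicit `StorageEvent`):

```tsx
import { FC } from "preact/compat";
import { useLocalStorageBoolean } from "../hooks/useLocalStorageBoolean";

// Hypothetical toggle bound to the same key used by GroupLogsConfigurators.
const HoverToggle: FC = () => {
  const [disabledHovers, setDisabledHovers] = useLocalStorageBoolean("LOGS_DISABLED_HOVERS");
  return (
    <label>
      <input
        type="checkbox"
        checked={disabledHovers}
        onChange={(e) => setDisabledHovers((e.target as HTMLInputElement).checked)}
      />
      Disable hovers in the logs group view
    </label>
  );
};

export default HoverToggle;
```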

View File

@@ -1,4 +1,4 @@
import React, { FC, useEffect, useMemo, useState } from "preact/compat";
import { FC, useEffect, useMemo, useState } from "preact/compat";
import ExploreLogsBody from "./ExploreLogsBody/ExploreLogsBody";
import useStateSearchParams from "../../hooks/useStateSearchParams";
import useSearchParamsFromObject from "../../hooks/useSearchParamsFromObject";
@@ -18,6 +18,7 @@ import { useSearchParams } from "react-router-dom";
import { useQueryDispatch, useQueryState } from "../../state/query/QueryStateContext";
import { getUpdatedHistory } from "../../components/QueryHistory/utils";
import { useDebounceCallback } from "../../hooks/useDebounceCallback";
import usePrevious from "../../hooks/usePrevious";
const storageLimit = Number(getFromStorage("LOGS_LIMIT"));
const defaultLimit = isNaN(storageLimit) ? LOGS_ENTRIES_LIMIT : storageLimit;
@@ -30,6 +31,7 @@ const ExploreLogs: FC = () => {
const { setSearchParamsFromKeys } = useSearchParamsFromObject();
const [searchParams] = useSearchParams();
const hideChart = useMemo(() => searchParams.get("hide_chart"), [searchParams]);
const prevHideChart = usePrevious(hideChart);
const [limit, setLimit] = useStateSearchParams(defaultLimit, "limit");
const [query, setQuery] = useStateSearchParams("*", "query");
@@ -118,11 +120,10 @@ const ExploreLogs: FC = () => {
}, [query, isUpdatingQuery]);
useEffect(() => {
if (!hideChart) debouncedFetchLogs(period, true);
return () => {
debouncedFetchLogs.cancel?.();
};
}, [hideChart, period]);
if (!hideChart && prevHideChart) {
fetchLogHits(period);
}
}, [hideChart, prevHideChart, period]);
return (
<div className="vm-explore-logs">

View File

@@ -5,7 +5,7 @@
&-header {
background-color: $color-background-block;
z-index: 1;
z-index: 3;
margin: -$padding-medium 0-$padding-medium 0;
position: sticky;
top: 0;

View File

@@ -1,21 +1,67 @@
import React, { FC } from "preact/compat";
import React, { FC, useMemo, useCallback, createPortal } from "preact/compat";
import DownloadLogsButton from "../../../DownloadLogsButton/DownloadLogsButton";
import { createPortal } from "preact/compat";
import JsonViewComponent from "../../../../../components/Views/JsonView/JsonView";
import { ViewProps } from "../../types";
import EmptyLogs from "../components/EmptyLogs/EmptyLogs";
import { useCallback } from "react";
import JsonViewSettings from "./JsonViewSettings/JsonViewSettings";
import { useSearchParams } from "react-router-dom";
import orderBy from "lodash.orderBy";
import "./style.scss";
import { Logs } from "../../../../../api/types";
import { SortDirection } from "./types";
const MemoizedJsonView = React.memo(JsonViewComponent);
const jsonQuerySortParam = "json_sort";
const fieldSortQueryParamName = "json_field_sort";
const JsonView: FC<ViewProps> = ({ data, settingsRef }) => {
const getLogs = useCallback(() => data, [data]);
const [searchParams] = useSearchParams();
const sortParam = searchParams.get(jsonQuerySortParam);
const fieldSortParam = searchParams.get(fieldSortQueryParamName) as SortDirection;
const [sortField, sortDirection] = useMemo(() => {
const [sortField, sortDirection] = sortParam?.split(":").map(decodeURIComponent) || [];
return [sortField, sortDirection as "asc" | "desc" | undefined];
}, [sortParam]);
const fields = useMemo(() => {
const keys = new Set(data.flatMap(Object.keys));
return Array.from(keys);
}, [data]);
const orderedFieldsData = useMemo(() => {
if (!fieldSortParam) return data;
const orderedFields = fields.toSorted((a, b) => fieldSortParam === "asc" ? a.localeCompare(b) : b.localeCompare(a));
return data.map((item) => {
return orderedFields.reduce((acc, field) => {
if (item[field] !== undefined) acc[field] = item[field]; // keep fields with falsy but defined values, e.g. ""
return acc;
}, {} as Logs);
});
}, [fields, fieldSortParam, data]);
const sortedData = useMemo(() => {
if (!sortField || !sortDirection) return orderedFieldsData;
return orderBy(orderedFieldsData, [sortField], [sortDirection]);
}, [orderedFieldsData, sortField, sortDirection]);
const renderSettings = () => {
if (!settingsRef.current) return null;
return createPortal(
data.length > 0 && <DownloadLogsButton getLogs={getLogs} />,
data.length > 0 && (
<div className="vm-json-view__settings-container">
<DownloadLogsButton getLogs={getLogs} />
<JsonViewSettings
fields={fields}
sortQueryParamName={jsonQuerySortParam}
fieldSortQueryParamName={fieldSortQueryParamName}
/>
</div>
),
settingsRef.current
);
};
@@ -25,9 +71,11 @@ const JsonView: FC<ViewProps> = ({ data, settingsRef }) => {
return (
<>
{renderSettings()}
<MemoizedJsonView data={data} />
<MemoizedJsonView
data={sortedData}
/>
</>
);
};
export default JsonView;
export default JsonView;
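Not part of the diff: a sketch of the URL contract introduced above. `json_sort` encodes `<field>:<direction>` (which field to sort rows by), while `json_field_sort` is `asc` or `desc` (how to order the keys inside each row):

```ts
// Example URL fragment (hypothetical values): ?json_sort=_time%3Adesc&json_field_sort=asc
const params = new URLSearchParams("json_sort=_time%3Adesc&json_field_sort=asc");

const [sortField, sortDirection] = (params.get("json_sort") ?? "")
  .split(":")
  .map(decodeURIComponent); // ["_time", "desc"]

const fieldSort = params.get("json_field_sort"); // "asc"
console.log(sortField, sortDirection, fieldSort);
```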

View File

@@ -0,0 +1,185 @@
import { FC, useMemo, useRef } from "preact/compat";
import Button from "../../../../../../components/Main/Button/Button";
import { SettingsIcon, SortArrowDownIcon, SortArrowUpIcon, SortIcon } from "../../../../../../components/Main/Icons";
import Tooltip from "../../../../../../components/Main/Tooltip/Tooltip";
import Select from "../../../../../../components/Main/Select/Select";
import useBoolean from "../../../../../../hooks/useBoolean";
import { useState, useEffect, useCallback } from "react";
import Modal from "../../../../../../components/Main/Modal/Modal";
import { useSearchParams } from "react-router-dom";
import "./style.scss";
import { SortDirection } from "../types";
const title = "JSON settings";
const directionList = ["asc", "desc"];
interface JsonSettingsProps {
fields: string[];
sortQueryParamName: string;
fieldSortQueryParamName: string;
}
const JsonViewSettings: FC<JsonSettingsProps> = ({
fields,
sortQueryParamName,
fieldSortQueryParamName
}) => {
const [searchParams, setSearchParams] = useSearchParams();
const buttonRef = useRef<HTMLDivElement>(null);
const [fieldSortDirection, setFieldSortDirection] = useState<SortDirection>(null);
const {
value: openSettings,
toggle: toggleOpenSettings,
setFalse: handleClose,
} = useBoolean(false);
const [sortField, setSortField] = useState<string | null>(null);
const [sortDirection, setSortDirection] = useState<SortDirection>(null);
useEffect(() => {
const sortParam = searchParams.get(sortQueryParamName);
const isSortDirection = (value: string): value is Exclude<SortDirection, null> => directionList.includes(value);
if (sortParam) {
const [field, direction] = sortParam.split(":").map(decodeURIComponent);
if (field && isSortDirection(direction)) {
setSortField(field);
setSortDirection(direction);
}
}
const fieldSortParam = searchParams.get(fieldSortQueryParamName);
if (fieldSortParam === "asc" || fieldSortParam === "desc") {
setFieldSortDirection(fieldSortParam);
}
}, [searchParams, sortQueryParamName, fieldSortQueryParamName, setSortField, setSortDirection, setFieldSortDirection]);
const updateSortParams = useCallback((field: string | null, direction: SortDirection) => {
const updatedParams = new URLSearchParams(searchParams.toString());
if (!field || !direction) {
updatedParams.delete(sortQueryParamName);
} else {
updatedParams.set(sortQueryParamName, `${field}:${direction || ""}`);
}
setSearchParams(updatedParams);
}, [searchParams, sortQueryParamName]);
const handleSort = (field: string) => {
const newDirection: SortDirection = sortDirection || "asc";
setSortField(field);
setSortDirection(newDirection);
updateSortParams(field, newDirection);
};
const resetSort = () => {
setSortField(null);
setSortDirection(null);
updateSortParams(null, null);
};
const changeFieldSortDirection = useCallback(() => {
let newFieldSortDirection: SortDirection = null;
if (fieldSortDirection === null) {
newFieldSortDirection = "asc";
} else if (fieldSortDirection === "asc") {
newFieldSortDirection = "desc";
}
setFieldSortDirection(newFieldSortDirection);
const updatedParams = new URLSearchParams(searchParams.toString());
if (!newFieldSortDirection) {
updatedParams.delete(fieldSortQueryParamName);
} else {
updatedParams.set(fieldSortQueryParamName, encodeURIComponent(newFieldSortDirection));
}
setSearchParams(updatedParams);
}, [fieldSortDirection, searchParams, fieldSortQueryParamName]);
const handleChangeSortDirection = (direction: string) => {
const field = sortField || fields[0];
setSortField(field);
setSortDirection(direction as SortDirection);
updateSortParams(field, direction as SortDirection);
};
const fieldSortMeta = useMemo(() => ({
default: {
title: "Set field sort order. Click to sort in ascending order",
icon: <SortIcon />
},
asc: {
title: "Fields sorted ascending. Click to sort in descending order",
icon: <SortArrowDownIcon />
},
desc: {
title: "Fields sorted descending. Click to reset sort",
icon: <SortArrowUpIcon />
},
}), []);
const fieldSortButton = useMemo(() => {
const { title, icon } = fieldSortMeta[fieldSortDirection ?? "default"];
return <Tooltip title={title}>
<Button
variant="text"
startIcon={icon}
onClick={changeFieldSortDirection}
ariaLabel={title}
/>
</Tooltip>;
}, [fieldSortDirection, changeFieldSortDirection, fieldSortMeta]);
return (
<div className="vm-json-settings">
{fieldSortButton}
<Tooltip title={title}>
<div ref={buttonRef}>
<Button
variant="text"
startIcon={<SettingsIcon/>}
onClick={toggleOpenSettings}
ariaLabel={title}
/>
</div>
</Tooltip>
{openSettings && (
<Modal
title={title}
className="vm-json-settings-modal"
onClose={handleClose}
>
<div className="vm-json-settings-modal-section">
<div className="vm-json-settings-modal-section__sort-settings-container">
<Select
value={sortField || ""}
onChange={handleSort}
list={fields}
label="Select field"
/>
<Select
value={sortDirection || ""}
onChange={handleChangeSortDirection}
list={directionList}
label="Sort direction"
/>
{(sortField || sortDirection) && (
<Button
variant="outlined"
color="error"
onClick={resetSort}
>
Reset sort
</Button>
)}
</div>
</div>
</Modal>)}
</div>
);
};
export default JsonViewSettings;

View File

@@ -0,0 +1,34 @@
@use "src/styles/variables" as *;
.vm-json-settings {
display: flex;
flex-direction: row;
&-modal {
.vm-modal-content-body {
min-width: clamp(300px, 600px, 90vw);
padding: 0;
}
&-section {
padding-block: $padding-global;
border-top: $border-divider;
&:first-child {
padding-top: 0;
border-top: none;
}
&__sort-settings-container {
display: grid;
padding: $padding-medium;
grid-template-columns: 1fr 1fr 80px;
gap: $padding-medium;
@media (max-width: 500px) {
grid-template-columns: 1fr;
}
}
}
}
}

Some files were not shown because too many files have changed in this diff.