Compare commits

...

113 Commits

Author SHA1 Message Date
Zakhar Bessarab
e950846534 docs/CHANGELOG.md: cut v1.114.0
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-21 16:42:46 +04:00
Zakhar Bessarab
7195862a51 app/{vmselect,vlselect}: run make vmui-update vmui-logs-update
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-21 16:37:34 +04:00
Zhu Jiekun
ca9bb48f3f doc: [guide] update k8s vmsingle vmcluster helm and operator guide
This commit updates some wording for the k8s guide here:
- [Kubernetes monitoring via VictoriaMetrics
Single](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-single)
- [Kubernetes monitoring with VictoriaMetrics
Cluster](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-cluster)
- [Getting started with VM Operator](https://docs.victoriametrics.com/guides/getting-started-with-vm-operator)
2025-03-21 11:07:55 +01:00
Fred Navruzov
8239c119ca docs/vmanomaly: release v1.21.0 + HC/HA docs (#8555)
### Describe Your Changes

This PR updates the docs for release v1.21.0; in particular, it adjusts the docs and their
structure to cover the High Availability (HA) and horizontal scalability (HS)
capabilities.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-20 18:21:50 +02:00
Max Kotliar
c05ffa906d lib/promscrape: improve streamParse performance
Previously, the performance of stream.Parse could be limited by a mutex.Lock on the callback function, which used a shared writeContext. With complicated relabeling rules and any slowness in the pushData function, this could significantly decrease the processing performance of parsed rows.

This commit removes the locks and makes parsed-row processing lock-free, in the same manner as `stream.Parse` processing is implemented for push-based ingestion.

Implementation details:
- Removing the global lock around the stream.Parse callback
- Using atomic operations for counters
- Creating write contexts per callback instead of sharing one
- Improving series limit checking with sync.Once
- Optimizing labels hash calculation with buffer pooling
- Adding comprehensive tests for concurrency correctness
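A minimal sketch of the lock-free pattern, with illustrative names rather than the actual promscrape types: each callback takes its own write context from a pool instead of locking a shared one, and shared counters are updated atomically:

```
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type writeContext struct {
	rows []string // per-callback scratch buffer
}

var wcPool = sync.Pool{
	New: func() any { return &writeContext{} },
}

var rowsProcessed atomic.Uint64

func processRows(rows []string) {
	wc := wcPool.Get().(*writeContext)
	wc.rows = append(wc.rows[:0], rows...) // no shared state, no mutex
	// relabeling and pushing would happen here
	rowsProcessed.Add(uint64(len(wc.rows)))
	wcPool.Put(wc)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			processRows([]string{"metric_a 1", "metric_b 2"})
		}()
	}
	wg.Wait()
	fmt.Println("rows processed:", rowsProcessed.Load())
}
```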

 Benchmark performance:
```
# before
BenchmarkScrapeWorkScrapeInternalStreamBigData-10             13          81973945 ns/op          37.68 MB/s    18947868 B/op        197 allocs/op

# after
goos: darwin
goarch: arm64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape
cpu: Apple M1 Pro
BenchmarkScrapeWorkScrapeInternalStreamBigData-10             74          15761331 ns/op         195.98 MB/s    15487399 B/op        148 allocs/op
PASS
ok      github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape       1.806s
```

Related issue:
 https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8159
---------
Signed-off-by: Maksim Kotlyar <kotlyar.maksim@gmail.com>
Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
2025-03-20 16:49:24 +01:00
Zakhar Bessarab
30625e83b6 app/vmbackupmanager: properly set vm_backup_last_run_failed metric
Previously, `getBackupsList` appended the `latest` backup in all cases without checking whether it actually exists.
This led to the `vm_backup_last_run_failed` metric being set to `1`, since the folder did not contain a successful completion marker.

This commit adds a check to handle a case when remote storage does not contain any backups yet.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8490

---
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-20 15:50:13 +01:00
Roman Khavronenko
4c9c2501f3 docs: fix a few typos 2025-03-20 15:37:16 +01:00
Zakhar Bessarab
3447cb0bd3 lib/backup/s3remote: add retries for "IncompleteBody" errors
These errors can be caused by intermittent network issues, especially
when proxies are used to access S3 storage. Previously, such
errors would abort the backup/restore process and require manual intervention
to ensure backup consistency.

This commit adds automatic retries for such errors to improve backup
reliability and resilience to network issues.
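A hedged sketch of the retry idea; `uploadWithRetries` and the string-based error check are illustrative, not the actual lib/backup/s3remote API (which likely inspects the AWS error code):

```
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

// uploadWithRetries retries an upload while the error looks like the
// transient "IncompleteBody" class; any other error aborts immediately.
func uploadWithRetries(upload func() error, maxAttempts int) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = upload(); err == nil {
			return nil
		}
		if !strings.Contains(err.Error(), "IncompleteBody") {
			return err // not a transient network issue; fail fast
		}
		time.Sleep(time.Duration(attempt) * 100 * time.Millisecond) // simple linear backoff
	}
	return fmt.Errorf("upload failed after %d attempts: %w", maxAttempts, err)
}

func main() {
	calls := 0
	err := uploadWithRetries(func() error {
		calls++
		if calls < 3 {
			return errors.New("api error IncompleteBody: ...")
		}
		return nil
	}, 5)
	fmt.Println(calls, err) // 3 <nil>
}
```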
2025-03-20 12:43:18 +01:00
f41gh7
985f7faad8 app/vmgateway: remove vmalert dependency
1. remove "vmalert" word from vmgateway doc and exposed metrics;
2. remove unrelated flags like -datasource.roundDigits, remoteRead.disablePathAppend, -datasource.disableStepParam
2025-03-20 12:39:24 +01:00
Zakhar Bessarab
3155542168 app/vmbackupmanager: properly close run channel when stopping
vmbackupmanager uses the `runC` channel for inter-goroutine communication between the `scheduler` and `execute` goroutines.

Previously, `runC` wasn't closed during graceful shutdown, so the vmbackupmanager process couldn't stop gracefully and could only be killed within the configured timeout.

This commit makes the `scheduler` properly close `runC` when vmbackupmanager is stopping, in order to avoid the shutdown delay.
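A minimal runnable sketch of the shutdown pattern, with illustrative names: the scheduler owns `runC` and closes it on stop, which lets the execute goroutine's range loop terminate:

```
package main

import (
	"sync"
	"time"
)

func main() {
	runC := make(chan string)
	stopC := make(chan struct{})

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // the execute goroutine
		defer wg.Done()
		for task := range runC { // exits once runC is closed
			_ = task // a real implementation would run the backup here
		}
	}()

	go func() { // the scheduler goroutine owns runC and closes it on stop
		ticker := time.NewTicker(10 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-stopC:
				close(runC) // graceful shutdown: unblocks the executor's range loop
				return
			case <-ticker.C:
				runC <- "backup"
			}
		}
	}()

	time.Sleep(50 * time.Millisecond)
	close(stopC)
	wg.Wait() // returns promptly instead of hanging until the process is killed
}
```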

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8554
2025-03-20 12:39:01 +01:00
f41gh7
756c93c66a app/vmbackupmanager: prevent backups being scheduled one after another
Previously, a backup was first scheduled at 00:00:00, and `getSleepDuration` was immediately executed to get the sleep duration for the next backup. Since it returned `1 * time.Second`, the next backup was attempted and failed to be scheduled.

The logic is updated to wait for the full backup interval in this case, so that there is no attempt to schedule an unneeded backup.

It also adds the following changes:
* fix the error log entry reference to the type of policy
* add a message about retention completion, similar to the existing message for backups, to make logging more consistent

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8499

This is a follow-up for fb6d2e92e3a1cf412d1f7dee64a4852941a8aa1b
2025-03-20 12:38:50 +01:00
Andrii Chubatiuk
511517f491 lib/streamaggr: fix threshold update, when deduplication and windows are enabled (#8525)
### Describe Your Changes

During the initial flush with deduplication and windows enabled, the lower
timestamps threshold is set to the upper bound of the next deduplication
interval, which leads to all samples being ignored on subsequent intervals.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-20 09:54:35 +01:00
Yury Molodov
db66ab1852 vmui: show hidden common labels in group name (#8517)
### Describe Your Changes

Changes:
1. When `hide common labels` is enabled, they will now be displayed in
the group name.
2. Legend settings toggles have been moved below the graph for better
accessibility.


![image](https://github.com/user-attachments/assets/fc8c7f4c-c155-4056-8862-301ad375d7ae)

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-20 09:41:15 +01:00
Yury Molodov
31f662a0f7 vmui/logs: implement nanosecond log sorting (#8518)
### Describe Your Changes

This PR adds nanosecond precision to log sorting, ensuring accurate
ordering of entries with sub-millisecond differences.

Related issue:  #8346

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-20 09:30:57 +01:00
hagen1778
7f69553230 docs: fix typos in release versions
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-20 09:23:34 +01:00
hagen1778
90d547dec0 docs: mark LTS releases explicitly in changelog
The mark is important for readers to understand the type of release.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-20 09:12:31 +01:00
Hui Wang
4587d092fd vmgateway: fix vmgateway_ratelimit_refresh_duration_seconds
* vmgateway: fix `vmgateway_ratelimit_refresh_duration_seconds`

* revert reload ep change

---------

Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-20 09:41:42 +04:00
Hui Wang
bd11e00a59 app/vmalert: properly register group and rules metrics
Commit 9ca74d1fff introduced an issue with metrics registration. Because the metrics.Summary type is always registered in the global state of the metrics package, vmalert had increased memory and CPU usage after multiple configuration reloads.

This commit addresses this issue and properly registers the metrics.Summary metric. Now metrics for groups and rules must be explicitly registered before group.Start via the group.Init method. This simplifies metrics usage and ensures that all needed metrics are registered and the group is ready to start.
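A hedged sketch of the registration ordering using github.com/VictoriaMetrics/metrics (a recent version with two-argument UnregisterSet); the Group type, method bodies and metric name are illustrative stand-ins for the actual vmalert code:

```
package main

import "github.com/VictoriaMetrics/metrics"

type Group struct {
	set     *metrics.Set
	iterDur *metrics.Summary
}

// Init must be called before Start: it creates and registers all group metrics
// in a dedicated Set instead of the package-global state.
func (g *Group) Init() {
	g.set = metrics.NewSet()
	g.iterDur = g.set.NewSummary(`vmalert_iteration_duration_seconds{group="example"}`)
	metrics.RegisterSet(g.set)
}

func (g *Group) Start() { /* the evaluation loop would go here */ }

// Stop unregisters the group's metrics so config reloads don't accumulate them.
func (g *Group) Stop() {
	metrics.UnregisterSet(g.set, true)
}

func main() {
	g := &Group{}
	g.Init() // explicit registration happens here, before Start
	g.Start()
	g.Stop()
}
```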

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8532
2025-03-19 13:57:34 +01:00
Aliaksandr Valialkin
348991d1b3 app/vlselect: move the code responsible for limiting the number of concurrently executed requests, into separate functions
This improves code readability a bit.
2025-03-19 13:28:03 +01:00
Aliaksandr Valialkin
c35dfd4585 lib/chunkedbuffer: add Buffer.Len() method, which returns the byte length of the data stored in the buffer 2025-03-19 13:24:39 +01:00
Aliaksandr Valialkin
baf701889a lib/logstorage: typo fix in the comment to Storage.GetStreamFieldValues() function 2025-03-19 13:22:16 +01:00
Hui Wang
b97cacad45 app/vmalert: fix possible data race on group checksum
1. fix a possible data race on the group checksum when reload is called
concurrently. Previously it didn't affect much, but it might update the group
one extra time.
2. remove the unnecessary g.mu.RLock() and compute group.id at newGroup creation. Changes to group.ID()
indicate that the type and interval have changed, meaning the group is new.

Related PR:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8540
2025-03-19 12:58:51 +01:00
Aliaksandr Valialkin
acdd6faecf docs/VictoriaLogs/querying/README.md: fix the docs for /select/logsql/stream_field_values when the limit arg is set
It returns up to `limit` values for the given log stream field with the biggest number of hits.
2025-03-19 12:41:27 +01:00
Aliaksandr Valialkin
ea4534d154 lib/logstorage: support for {field in (*)} and {field not_in (*)} syntax in LogsQL
This is needed for https://github.com/VictoriaMetrics/victorialogs-datasource/issues/238
to be consistent with the `in(*)` feature, which was added in commit 84d5771b41
2025-03-19 12:09:55 +01:00
Hui Wang
dec237b7d6 app/vmalert: fix memory leak with -notifier.blackhole
The previous commit 9ca74d1fff added a regression for the notifier metrics exposed by vmalert: vmalert returned new notifier instances for the blackhole notifier type and registered new metrics each time the get-notifiers function was called. This registered duplicate metrics and led to an OOM crash.

This commit properly initializes blackhole notifier instances and adds metrics for them only once, during application start.
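A minimal sketch of the init-once pattern, with illustrative names: the blackhole notifier (and hence its metrics) is built a single time at startup, and the get-notifiers function returns the same instances:

```
package main

import "fmt"

type notifier struct{ addr string }

var blackhole []*notifier

// initBlackhole is called once at application start; this is where the
// notifier metrics would be registered exactly once.
func initBlackhole() {
	blackhole = []*notifier{{addr: "blackhole"}}
}

// getNotifiers returns the pre-built instances instead of constructing
// (and re-registering metrics for) new ones on every call.
func getNotifiers() []*notifier { return blackhole }

func main() {
	initBlackhole()
	fmt.Println(getNotifiers()[0].addr)
}
```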

 Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8532
2025-03-19 10:43:57 +01:00
Nikolay
27d7d0b25c lib/promscrape: properly send staleness markers
Previously, vmagent could incorrectly store a partial scrape response
in case of a scrape error. This could happen if the `sw.ReadData` call fetched
some chunked response and stored it in the buffer, and a context deadline
exceeded error happened later.
As a result, at the next scrape iteration this partial response could
be forwarded to the `sw.sendStaleSeries(lastScrape...)` function call
and lead to a `Prometheus line` parsing error.

This commit properly sets the response body to the empty value in case of
a scrape error. It prevents storing a partial scrape response body, so
partial staleness markers are no longer sent to the remote storage.
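A hedged sketch of the fix; scrapeWork, readData and the field names are illustrative, not the actual lib/promscrape types:

```
package main

import (
	"context"
	"fmt"
)

type scrapeWork struct{ lastScrape []byte }

// readData simulates a chunked response interrupted by a deadline error.
func (sw *scrapeWork) readData(ctx context.Context) ([]byte, error) {
	return []byte("partial_chunk{"), context.DeadlineExceeded
}

func (sw *scrapeWork) scrape(ctx context.Context) {
	body, err := sw.readData(ctx)
	if err != nil {
		body = body[:0] // drop the partial response on scrape error
	}
	// lastScrape now never holds a truncated payload, so staleness markers
	// built from it at the next iteration parse cleanly.
	sw.lastScrape = append(sw.lastScrape[:0], body...)
	fmt.Printf("stored %d bytes, err=%v\n", len(body), err)
}

func main() {
	var sw scrapeWork
	sw.scrape(context.Background())
}
```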

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8528
2025-03-19 10:37:54 +01:00
Aliaksandr Valialkin
f645479b5e lib/protoparser: rename lib/protoparser/common to lib/protoparser/protoparserutil
This improves the readability of the code that uses this package.
2025-03-18 16:24:51 +01:00
Aliaksandr Valialkin
1aa14f3b6d app/vlinsert: do not start background flusher in the LogMessageProcessor used for synchronous processing of a single data block
This should reduce CPU usage a bit for data ingestion protocols
that process a single message per request without streaming.
2025-03-18 11:17:08 +01:00
Aliaksandr Valialkin
9f5ff82532 lib/protoparser/common: limit the maximum memory, which could be occupied by snappy-compressed message at ReadUncompressedData 2025-03-18 11:17:08 +01:00
Roman Khavronenko
3511e2e6af dashboards: add Memory allocations rate to ResourceUsage tab (#8508)
This panel should have helped us identify
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8501 during
release checks. While we already track GC pressure, its value is
relative, and the change wasn't noticeable for the workloads that we
observed.

The absolute values of the allocations rate could have helped to spot the
anomaly.


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-17 16:42:53 +01:00
Alexander Frolov
127d4f37b8 lib/promrelabel: comment typo (#8520)
### Describe Your Changes

`prasedRelabelConfig` -> `parsedRelabelConfig`

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-17 16:40:03 +01:00
Yury Molodov
44a54e4590 vmui/logs: fix endless group expansion loop bug (#8519)
### Describe Your Changes

Fix endless group expansion loop.
Related issue: #8347

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-17 16:39:41 +01:00
hagen1778
d4560ee015 docs: add update note for renamed metric vm_mmapped_files
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-17 16:34:36 +01:00
Guillem Jover
76d205feae spelling and grammar fixes via codespell (#8497)
### Describe Your Changes

Fix many spelling errors and some grammar issues, including misspellings in
filenames.

The change also fixes a typo in a metric name: `vm_mmaped_files` becomes `vm_mmapped_files`.
While this is a breaking change, this metric isn't used in alerts or dashboards,
so it should have low impact on users.

The change also deprecates `cspell` in favor of codespell, as it is much heavier and less usable.
---------

Co-authored-by: Andrii Chubatiuk <achubatiuk@victoriametrics.com>
Co-authored-by: Andrii Chubatiuk <andrew.chubatiuk@gmail.com>
2025-03-17 16:32:10 +01:00
Jose Gómez-Sellés
d852e5e0b4 docs/cloud: add FAQ for VM Cloud (#8523)
### Describe Your Changes

This PR adds the dedicated FAQ for VictoriaMetrics Cloud

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-17 15:54:52 +01:00
Aliaksandr Valialkin
336f954056 lib/logstorage: switch the type of LogRows.streamTagCanonicals from [][]byte to []string
This reduces the size of LogRows.streamTagCanonicals by 1/3 because of the eliminated `cap` field
in the slice header (reflect.SliceHeader) compared to the string header (reflect.StringHeader).
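The arithmetic behind the 1/3 reduction can be checked directly: on 64-bit platforms a slice header is 24 bytes (pointer + len + cap) while a string header is 16 bytes (pointer + len):

```
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	fmt.Println(unsafe.Sizeof([]byte(nil))) // 24: ptr + len + cap
	fmt.Println(unsafe.Sizeof(""))          // 16: ptr + len
}
```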
2025-03-17 15:02:51 +01:00
Aliaksandr Valialkin
6837e0c7d3 lib/prompb: use clear() function instead of loops for clearing WriteRequest fields inside WriteRequest.Reset
This makes the code shorter without losing clarity.
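An example of the pattern (Go 1.21+), with illustrative field types rather than the actual lib/prompb definitions: clear() zeroes the slice elements in place so referenced memory can be collected:

```
package main

import "fmt"

type TimeSeries struct{ Labels []string }

type WriteRequest struct{ Timeseries []TimeSeries }

func (wr *WriteRequest) Reset() {
	clear(wr.Timeseries) // zero each element, replacing an explicit loop
	wr.Timeseries = wr.Timeseries[:0]
}

func main() {
	wr := &WriteRequest{Timeseries: []TimeSeries{{Labels: []string{"a"}}}}
	wr.Reset()
	fmt.Println(len(wr.Timeseries), cap(wr.Timeseries)) // 0 1
}
```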
2025-03-17 14:31:05 +01:00
Fred Navruzov
ee3ed8ab86 docs/vmanomaly: update to patch release v1.20.1 (#8521)
### Describe Your Changes

Doc updates for the patch release v1.20.1, which fixes a bug in
`PeriodicScheduler` that may affect some customer deployments.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-16 21:34:19 +02:00
Aliaksandr Valialkin
c005cb89fc deployment: update VictoriaLogs Docker image tag from v1.16.0-victorialogs to v1.17.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.17.0-victorialogs
2025-03-16 01:21:59 +01:00
Aliaksandr Valialkin
771233ebcd docs/VictoriaLogs/CHANGELOG.md: cut v1.17.0-victorialogs 2025-03-16 01:15:54 +01:00
Aliaksandr Valialkin
39082103a6 app/vlinsert/opentelemetry: follow-up for a884949aba
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8502
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8511
2025-03-16 01:09:07 +01:00
Devops
a884949aba fix: Fixed an issue where and were incorrectly displayed (#8511)
### Describe Your Changes

Fixed an issue where and were incorrectly displayed when sent from
OpenTelemetry Collector to VictoriaLogs

Fixes #8502
2025-03-16 00:33:52 +01:00
Aliaksandr Valialkin
d2f4698e3f docs/VictoriaLogs/querying/README.md: mention that /select/logsql/query endpoint may return an arbitrary number of logs matching the given query filter, and this is OK
This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8507
and https://github.com/VictoriaMetrics/victorialogs-datasource/issues/261
2025-03-16 00:01:52 +01:00
Aliaksandr Valialkin
23c4e4cdb2 app/vlinsert: send 204 No Content response code at /insert/loki/api/v1/push endpoint
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8505
2025-03-15 23:34:49 +01:00
Aliaksandr Valialkin
13ff9a8ebd lib/{mergeset,storage,logstorage}: use chunked buffer instead of bytesutil.ByteBuffer as a storage for in-memory parts
This commit adds lib/chunkedbuffer.Buffer - an in-memory chunked buffer
optimized for random access via MustReadAt() function.
It is better than bytesutil.ByteBuffer for storing large volumes of data,
since it stores the data in chunks of a fixed size (4KiB at the moment)
instead of using a contiguous memory region. This has the following benefits over bytesutil.ByteBuffer:

- reduced memory fragmentation
- reduced memory re-allocations when new data is written to the buffer
- reduced memory usage, since the allocated chunks can be re-used
  by other Buffer instances after Buffer.Reset() call

Performance tests show up to 2x memory reduction for VictoriaLogs
when ingesting logs with a big number of fields (aka wide events) at a high rate.
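A condensed sketch of the chunked-buffer idea described above; it mirrors the description, not the actual lib/chunkedbuffer implementation:

```
package main

import "fmt"

const chunkSize = 4096 // fixed-size chunks instead of one contiguous region

type Buffer struct {
	chunks [][]byte
	n      int // total bytes written
}

func (b *Buffer) MustWrite(p []byte) {
	for len(p) > 0 {
		if b.n%chunkSize == 0 {
			b.chunks = append(b.chunks, make([]byte, chunkSize))
		}
		c := b.chunks[len(b.chunks)-1]
		m := copy(c[b.n%chunkSize:], p) // never re-allocates existing data
		b.n += m
		p = p[m:]
	}
}

// MustReadAt provides the random access mentioned above.
func (b *Buffer) MustReadAt(p []byte, off int64) {
	for len(p) > 0 {
		c := b.chunks[off/chunkSize]
		m := copy(p, c[off%chunkSize:])
		p = p[m:]
		off += int64(m)
	}
}

func main() {
	var b Buffer
	b.MustWrite(make([]byte, 5000)) // spans two chunks
	b.MustWrite([]byte("hello"))
	dst := make([]byte, 5)
	b.MustReadAt(dst, 5000)
	fmt.Println(string(dst)) // hello
}
```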
2025-03-15 20:58:33 +01:00
Aliaksandr Valialkin
73aae546e0 lib/logstorage: pre-allocate buffers for fields and rows inside block.appendRowsTo()
This reduces the number of memory re-allocations inside the loop, which copies the rows.
2025-03-15 17:18:45 +01:00
Aliaksandr Valialkin
174a6db19f lib/logstorage: pre-allocated buffers for fields and rows inside rows.appendRows()
This should reduce the number of memory re-allocations inside the loop, which copies the rows.
2025-03-15 16:39:19 +01:00
Aliaksandr Valialkin
0e413a7efb lib/logstorage: pre-allocate the buffer needed for marshaling a block of strings inside marshalStringsBlock
This reduces the number of memory re-allocations when appending the strings to the buffer in the loop.
2025-03-15 15:56:33 +01:00
Aliaksandr Valialkin
9769ad3a24 lib/logstorage: optimize copying dict values inside valuesDict.copyFrom a bit
Pre-allocate the needed slice of strings and then assign items to it by index
instead of appending them. This reduces the number of memory allocations
and improves performance a bit.
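The technique in isolation: allocate once at the final size and assign by index, instead of growing with append in a loop:

```
package main

import "fmt"

func copyValues(src []string) []string {
	dst := make([]string, len(src)) // single allocation at the final size
	for i, v := range src {
		dst[i] = v // no append bookkeeping, no re-allocations
	}
	return dst
}

func main() {
	fmt.Println(copyValues([]string{"a", "b", "c"}))
}
```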
2025-03-15 15:32:21 +01:00
Aliaksandr Valialkin
8e773564b1 lib/logstorage: intern column names instead of cloning them during data ingestion
This reduces the number of memory allocations when ingesting logs with a big number of fields (aka wide events).
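A minimal sketch of string interning as described: identical column names share a single string instance instead of each row keeping its own clone. The real implementation likely differs:

```
package main

import (
	"fmt"
	"strings"
)

type interner struct{ m map[string]string }

func (in *interner) intern(s string) string {
	if v, ok := in.m[s]; ok {
		return v // reuse the previously stored copy
	}
	c := strings.Clone(s) // clone once, detaching from the input buffer
	in.m[s] = c
	return c
}

func main() {
	in := &interner{m: map[string]string{}}
	a := in.intern("trace_id")
	b := in.intern("trace_id")
	fmt.Println(a == b) // true, with a single backing allocation
}
```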
2025-03-15 15:29:54 +01:00
Aliaksandr Valialkin
fbcfb6d72e lib/protoparser/common: properly decode snappy-encoded requests
Snappy-encoded requests are encoded in block mode instead of stream mode.
Stream mode is incompatible with block mode. See https://pkg.go.dev/github.com/golang/snappy
That's why Snappy-encoded requests must be read in block mode.
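The incompatibility is easy to demonstrate with github.com/golang/snappy: block-mode data produced by snappy.Encode decodes with snappy.Decode, while the stream reader rejects it:

```
package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/golang/snappy"
)

func main() {
	block := snappy.Encode(nil, []byte("hello snappy")) // block mode

	out, err := snappy.Decode(nil, block) // correct: block-mode decode
	fmt.Println(string(out), err)

	_, err = io.ReadAll(snappy.NewReader(bytes.NewReader(block))) // wrong mode
	fmt.Println(err) // the stream decoder rejects block-mode data
}
```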

Also add a protection against passing invalid readers to PutUncompressedReader().

This is a follow-up for 0451a1c9e0

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8416
2025-03-15 14:44:10 +01:00
Roman Khavronenko
3d9f2e3937 lib/bytesutil: don't drop ByteBuffer.B when its capacity is bigger th… (#8510)
…an 64KB at Reset

This commit reverts
b58e2ab214
as it has negative impacts when ByteBuffer is used for workloads that
always exceed 64KiB size. This significantly slows down affected
components because:
* buffers aren't being reused;
* growing new buffers to >64KiB is very slow.

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8501


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-15 01:38:36 +01:00
Aliaksandr Valialkin
2c7dd2b991 lib/logstorage: support for {label in (v1,...,vN)} and {label not_in (v1, ..., vN)} syntax 2025-03-15 01:35:13 +01:00
Aliaksandr Valialkin
0451a1c9e0 app/vlinsert: follow-up for 37ed1842ab
- Properly decode protobuf-encoded Loki request if it has no Content-Encoding header.
  Protobuf Loki message is snappy-encoded by default, so snappy decoding must be used
  when Content-Encoding header is missing.

- Return back the previous signatures of parseJSONRequest and parseProtobufRequest functions.
  This eliminates the churn in tests for these functions. This also fixes broken
  benchmarks BenchmarkParseJSONRequest and BenchmarkParseProtobufRequest, which consume
  the whole request body on the first iteration and do nothing on subsequent iterations.

- Put the CHANGELOG entries into correct places, since they were incorrectly put into already released
  versions of VictoriaMetrics and VictoriaLogs.

- Add support for reading zstd-compressed data ingestion requests into the remaining protocols
  at VictoriaLogs and VictoriaMetrics.

- Remove the `encoding` arg from PutUncompressedReader() - it has enough information about
  the passed reader arg in order to properly deal with it.

- Add ReadUncompressedData to lib/protoparser/common for reading uncompressed data from the reader until EOF.
  This allows removing repeated code across request-based protocol parsers without streaming mode.

- Consistently limit data ingestion request sizes, which can be read by ReadUncompressedData function.
  Previously this wasn't the case for all the supported protocols.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8416
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8380
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8300
2025-03-15 00:03:03 +01:00
Aliaksandr Valialkin
c60b4175bb app/vlinsert: add an ability to ignore log fields starting with the given prefixes
The `ignore_fields` HTTP query arg can contain prefixes ending with '*'.
For example, `ignore_fields=foo.*,bar` skips all the fields starting with `foo.`
during data ingestion.
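A sketch of the prefix semantics, assuming a pattern ending in '*' matches any field name starting with the pattern minus the star, while other patterns match exactly:

```
package main

import (
	"fmt"
	"strings"
)

func ignoreField(name string, patterns []string) bool {
	for _, p := range patterns {
		if strings.HasSuffix(p, "*") {
			if strings.HasPrefix(name, p[:len(p)-1]) {
				return true // prefix match, e.g. "foo.*" vs "foo.level"
			}
		} else if name == p {
			return true // exact match
		}
	}
	return false
}

func main() {
	patterns := []string{"foo.*", "bar"}
	fmt.Println(ignoreField("foo.level", patterns)) // true
	fmt.Println(ignoreField("bar", patterns))       // true
	fmt.Println(ignoreField("baz", patterns))       // false
}
```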
2025-03-15 00:03:02 +01:00
Aliaksandr Valialkin
f874a3aa7b lib/logstorage: show a link to query options docs in the error message emitted during failure to parse query options
This should help users figure out and fix the error.
2025-03-15 00:03:02 +01:00
hagen1778
972f14d540 docs: add vmsingle to affected components
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 19:12:05 +01:00
hagen1778
f62b690599 changelog: mention #8501 in update notes
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8501
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 19:04:00 +01:00
Roman Khavronenko
dc1f7ef0d0 app/vmselect/promql: optimize binary operator or for common cases (#8489)
The optimization touches 2 things:
1. Reduces the number of allocations when comparing canonical metric names
between the left and right parts of expressions.
2. Adds a fast path for cases when the right part of the expression returns a
scalar: `series_selector or on() vector(1)`, which is a typical
expression.

```
benchcmp old.txt new.txt
benchcmp is deprecated in favor of benchstat: https://pkg.go.dev/golang.org/x/perf/cmd/benchstat
benchmark                                             old ns/op     new ns/op     delta
BenchmarkBinaryOpOr/tss:1_or_tss:1-14                 291           272           -6.56%
BenchmarkBinaryOpOr/tss:1_or_tss:1000-14              44590         28592         -35.88%
BenchmarkBinaryOpOr/tss:1000_or_tss:1-14              103124        39563         -61.64%
BenchmarkBinaryOpOr/tss:1000_or_tss:1000-14           20386150      1859335       -90.88%
BenchmarkBinaryOpOr/tss:1000_or_on()_vector(0)-14     91382         36805         -59.72%
```

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8382

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 12:07:22 +01:00
hagen1778
8b0129f29b docs: re-organize order of items in vmagent docs
* tie relevant functionality together
* change the hierarchy of related options to visually group them

No breaking changes to links.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 12:02:23 +01:00
Zhu Jiekun
9548b7e442 docs: revert doc change for on-disk persistence and move new content to another section (#8506)
### Describe Your Changes

revert doc change in
815bad3687
and move new content to another section.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 11:50:17 +01:00
Roman Khavronenko
85f1bd172b docs: re-organize docs (#8493)
* move related sections closer to each other
* group related sections within the same section

The intention of the change is to tie related documentation together.


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 11:47:00 +01:00
Aliaksandr Valialkin
b6e4abb31e app/vlogsgenerator: increase write buffer size in order to reduce the number of send() syscalls
This increases the data ingestion rate that can be achieved by the vlogsgenerator.
2025-03-14 03:14:01 +01:00
Aliaksandr Valialkin
8c079602c1 lib/logstorage: optimize handling long constant fields
Long constant fields cannot be stored in columnsHeader as a const column,
because their size exceeds maxConstColumnValueSize, so they are stored as regular values.
This commit optimizes storing such fields by storing only a single value
across the field values in a block instead of storing multiple values.
This should improve data ingestion performance a bit. This also should improve query
performance when the query accesses such fields because of better cache locality.

Also improve persisting of constant string lengths by storing them only once.
2025-03-14 03:14:01 +01:00
Aliaksandr Valialkin
974d504043 lib/logstorage: add a test for marshalUint64Block / unmarshalUint64Block 2025-03-14 03:14:00 +01:00
Aliaksandr Valialkin
c62ccf11ae lib/logstorage: newTestLogRows: create a const column, which cannot be stored in the column header because its length exceeds maxConstColumnValueSize 2025-03-14 03:14:00 +01:00
f41gh7
5301af33c0 app/vmselect: properly cancel multitenant query request
Previously, vmselect didn't stop multitenant query execution if it
receives error from vmstorage. Such as limit error or any other. It
continued to execute queries until it did it for all tenants. It leads
to the potential waste of resources.
 In addition, callback error was incorrectly reference and can be updated by
subsequent callback call.

This commit returns error earlier, cancels sub-sequent requests for
tenants and properly return storageNode request error.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8461
2025-03-14 00:52:01 +01:00
Andrii Chubatiuk
37ed1842ab lib/protoparser: support zstd in all logs http ingestion, datadog and otel metrics protocols (#8416)
This commit introduces common readers for multiple compression encodings.

Currently, the supported encodings are:
* zstd
* gzip
* deflate
* snappy

It adds the new common reader to all VictoriaLogs ingestion protocols
and updates OpenTelemetry metrics parsing for VictoriaMetrics components.

Also, it ports zstd stream parsing from the cluster branch.

Related issues:
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8380
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8300

---------
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: f41gh7 <nik@victoriametrics.com>
2025-03-14 00:39:52 +01:00
Zhu Jiekun
815bad3687 app/vmagent: prevent dropping persistent queue if -remoteWrite.showURL changed
Previously, if the value of the `-remoteWrite.showURL` command-line flag changed, vmagent dropped the content of its persistent queues. This is not expected behavior and may lead to data loss in the queue.
Furthermore, if `-remoteWrite.showURL` is set to `true`, any change to the URL query arguments leads to a persistent queue drop. The most common cases are the Kafka and GCP Pub/Sub integrations, which use URL query arguments for client configuration.
It also complicates copying the content of a persistent queue between vmagents, since that requires properly changing the name inside metainfo.json.

This commit removes the persistent queue name equality check from `lib/persistentqueue`. This check was added as additional protection against on-disk data corruption.
It's safe to skip this check for vmagent, because vmagent encodes remoteWrite.url as part of the path to the queue, which guarantees that there will be no collision.

related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8477.


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: f41gh7 <nik@victoriametrics.com>
2025-03-13 23:50:01 +01:00
Andrii Chubatiuk
6e34ea62c7 lib/awsapi: add EKS Pod Identity auth method
AWS introduced a new secure way for Kubernetes Pod authorization with the AWS API.
The feature is called Pod Identity.
It adds the following env variables to the Pod:
* AWS_CONTAINER_CREDENTIALS_FULL_URI - endpoint URI served by the EKS Pod Identity Agent running on the worker node.
* AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE - projected JWT token that is exchanged for IAM credentials.

See related blog post https://aws.amazon.com/blogs/containers/amazon-eks-pod-identity-a-new-way-for-applications-on-eks-to-obtain-iam-credentials/
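A hedged sketch of how a client could consume these variables: read the projected token and exchange it at the agent's endpoint for temporary credentials. The header usage and JSON layout follow the container-credentials convention and may differ from what lib/awsapi actually does:

```
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetchPodIdentityCreds() (string, error) {
	uri := os.Getenv("AWS_CONTAINER_CREDENTIALS_FULL_URI")
	tokenFile := os.Getenv("AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE")
	token, err := os.ReadFile(tokenFile)
	if err != nil {
		return "", err
	}
	req, err := http.NewRequest(http.MethodGet, uri, nil)
	if err != nil {
		return "", err
	}
	// The agent authenticates the caller by the projected JWT.
	req.Header.Set("Authorization", strings.TrimSpace(string(token)))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err // JSON with temporary IAM credentials
}

func main() {
	creds, err := fetchPodIdentityCreds()
	fmt.Println(creds, err)
}
```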

related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5780
2025-03-13 23:39:19 +01:00
Zakhar Bessarab
7dfdee5709 lib/httputils: always set up TLS config
Previously, a TLS config was only created for URLs with the `https` scheme.
This could lead to unexpected errors when the original URL redirected
to an `https` one, as the TLS config was not applied.
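A sketch of the idea, not the actual lib/httputils code: build the TLS config unconditionally so a redirect from http:// to https:// still gets it:

```
package main

import (
	"crypto/tls"
	"net/http"
	"time"
)

func newClient(insecureSkipVerify bool) *http.Client {
	tr := &http.Transport{
		// Previously this was only set when the URL scheme was https;
		// setting it always makes redirects to https work as expected.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: insecureSkipVerify},
	}
	return &http.Client{Transport: tr, Timeout: 30 * time.Second}
}

func main() {
	_ = newClient(false)
}
```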

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8494
2025-03-13 23:27:49 +01:00
Artem Fetishev
d48b70a5d3 apptest: Add the support of forced merge to vmsingle and vmstorage
This support is already present in enterprise.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-13 18:05:54 +01:00
Artem Fetishev
aad79e574a lib/storage: Rewrite deduplication integration test with forced merge and retries
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-13 17:59:10 +01:00
Artem Fetishev
b2d2315c39 lib/storage: Deduplication integration test (#8480)
Add an integration test to confirm that deduplication works for the
current month. See #6965.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-13 17:07:02 +01:00
Artem Fetishev
c218fa7b29 lib/storage: increment indexdb refcount during data ingestion and retrieval (#8437)
Almost all storage API operations, both ingestion and retrieval, involve
writing and/or reading the indexdb. However, during these operations,
the indexdb refcount is not incremented. This may lead to panics if
indexdb is rotated more than once during these operations.

This commit increments the refcount before using indexdb and decrements it
after use.

Note that rotating indexdb more than once during some operation is an
impossible case under normal circumstances as the min retention period
is 1 day (i.e. the indexdb will be rotated once per day). However, we
want the storage to behave correctly in all cases.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-13 11:15:43 +01:00
Aliaksandr Valialkin
dfc22950bd deployment: update VictoriaLogs Docker image from v1.15.0-victorialogs to v1.16.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.16.0-victorialogs
2025-03-12 23:48:26 +01:00
Aliaksandr Valialkin
4b52f7973d docs/VictoriaLogs/CHANGELOG.md: cut v1.16.0-victorialogs 2025-03-12 23:41:16 +01:00
Aliaksandr Valialkin
b1fab92d1f vendor: run make vendor-update 2025-03-12 22:40:42 +01:00
Aliaksandr Valialkin
19165c436f app/vlinsert/loki: automatically parse JSON-encoded log fields from the plaintext log message
Loki doesn't handle high-cardinality log fields well (e.g. fields with a big number of unique values).
That's why Promtail, Grafana Agent and Grafana Alloy encode such fields into JSON and push them
as a plaintext log message to the remote storage. This isn't an efficient way to store high-cardinality
log fields in VictoriaLogs, since it is optimized for storing and querying such fields when they are stored
distinctly as regular log fields according to the VictoriaLogs data model ( https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model ).

This commit enables automatic parsing of JSON-encoded log fields in plaintext log messages received over the Loki protocol,
storing them as separate log fields. This should improve the data compression ratio and reduce disk space usage.
It should also improve query performance when the parsed log fields are used in queries for filtering and aggregation.
The old behaviour can be restored by passing the -loki.disableMessageParsing command-line flag to VictoriaLogs.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8486
2025-03-12 21:23:28 +01:00
Artem Fetishev
8d177f06da lib/storage: a followup for ee66d601b4: enable cluster integration tests
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-12 18:13:24 +01:00
Artem Fetishev
ee66d601b4 lib/storage: fix active timeseries collection when per-day index is disabled (#8485)
Fix the metric that shows the number of active time series when the per-day index is disabled. Previously, once the per-day index was disabled, the active time series metric would stop being populated and the `Active time series` chart would show 0.

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8411.
See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8411.
2025-03-12 17:12:09 +01:00
Aliaksandr Valialkin
5e231fe07b Makefile: update golangci-lint from v1.64.5 to v1.64.7
See https://github.com/golangci/golangci-lint/releases/tag/v1.64.7
2025-03-12 16:35:11 +01:00
Aliaksandr Valialkin
cb240eda70 app/vlinsert: follow-up for 67f8fa66ed
- Properly handle negative timestamps (e.g. timestamps before 1970-01-01)

- Optimize parsing of floating-point timestamps by eliminating the memory allocation
  needed for returning an error from strconv.ParseInt. Instead, check whether the string contains a dot,
  and then parse it as a floating-point number (see the sketch after this list).

- Add tests for ParseUnixTimestamp function.

- Make the code easier to understand and maintain by removing unneeded generic function toNano().
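A sketch of the parsing approach from the second item above, assuming the input is in seconds for brevity (the real parser presumably also handles other precisions):

```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseUnixTimestamp returns nanoseconds and supports negative (pre-1970) values.
func parseUnixTimestamp(s string) (int64, error) {
	if strings.IndexByte(s, '.') >= 0 {
		// Parse as a float only when a dot is present - this avoids the
		// allocation of the error strconv.ParseInt would return for such inputs.
		f, err := strconv.ParseFloat(s, 64)
		if err != nil {
			return 0, err
		}
		return int64(f * 1e9), nil
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, err
	}
	return n * 1e9, nil
}

func main() {
	fmt.Println(parseUnixTimestamp("1686026123.62"))
	fmt.Println(parseUnixTimestamp("-86400")) // one day before 1970-01-01
}
```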

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8470
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8472
2025-03-12 16:28:40 +01:00
Aliaksandr Valialkin
02bef62a66 docs/VictoriaLogs/CHANGELOG.md: move the description of the fix for the proper OpenTelemetry attributes conversion into JSON into the correct place
The bugfix isn't released yet, so move it from v1.15.0-victorialogs release to the tip.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8384
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8387

This is a follow-up for 26fba57cfa
2025-03-12 15:57:10 +01:00
Aliaksandr Valialkin
4d44c3e154 lib/logstorage: properly parse floating-point numbers with leading zeroes in fractional part
Parsing of floating-point numbers with leading zeroes in the fractional part, such as 1.023 and 1.00234, was broken
in commit ae5e28524e.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8464
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8361
2025-03-12 15:25:58 +01:00
Emre Yazıcı
cfd2c6e5e7 app/vmalert: add vmalert_alerts_send_duration_seconds metric (#8468)
### Describe Your Changes

Add the `vmalert_alerts_send_duration_seconds` metric for
alertmanager notifiers.

It measures the duration of alertmanager calls to send alerts, per notifier.
This is needed to see the latency of each notifier called from vmalert.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: emreya <e.yazici1990@gmail.com>
Co-authored-by: Hui Wang <haley@victoriametrics.com>
Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
2025-03-12 14:30:54 +01:00
alicja-karasiewicz
d47d329ce7 feat: make topN limit configurable from CLI
Implement the changes mentioned in #6898.

Allow the administrator to specify the limit on the number of TSDB series returned by
`/api/v1/status/tsdb` by making the topN limit configurable from the CLI.

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: alicja-karasiewicz <alicja.karasiewicz@allegro.com>
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-12 11:33:30 +04:00
Nikolay
11436d5f00 docs: update metric names stats description (#8483)
* add the version since which the feature is available
* add cluster endpoint paths


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-03-12 11:01:24 +04:00
Evgeny
486b9e1c64 lib/promscrape: use original job name as scrapePool value in targets api (#8457)
### Fix scrapePool name

If I do some magic in the scrape file and manipulate the job name, then
Prometheus will show scrapePool as the original job name in the targets
API, but vmagent will set it to the final value, which is wrong.
Example:
```
job: consul-targets
...

- source_labels: [ __meta_consul_service ]
      regex: (\w+)[_-]exporter
      target_label: job
      replacement: $1
```

curl to prom API will show
`"scrapePool": "consul-targets",`
vmagent:
`""scrapePool": "node",`

before changes:
```
curl -s 'http://localhost:8429/api/v1/targets' | jq -r '.data.activeTargets[].scrapePool'| sort|uniq
blackbox
pgbackrest
postgres
```
after changes
```
curl -s 'http://localhost:8429/api/v1/targets' | jq -r '.data.activeTargets[].scrapePool'| sort|uniq
blackbox
consul-targets
```

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-11 13:11:35 +01:00
Roman Khavronenko
18d6c715ac vendor: bump go-control-plane/envoy to v1.32.4
Solves the following error:
verifying github.com/envoyproxy/go-control-plane/envoy@v1.32.3/go.mod:
checksum mismatch

See https://github.com/envoyproxy/go-control-plane/issues/1083
2025-03-11 11:11:43 +01:00
hagen1778
6a1c70115a docs: order releases in changelog by their version
Ordering changes by release versions enhances the searchability
of the documentation. For example, tracking which release got
the bugfix becomes easier if releases are already sorted.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-11 10:11:58 +01:00
Alexander Marshalov
9007a4803c vmcloud docs: information about new APIs in Cloud Public API: cloud providers, regions, tiers, deployments and access tokens. (#8442)
2025-03-10 20:33:42 +01:00
Jose Gómez-Sellés
0873d1d8ab docs/cloud: add account management section (#8467)
### Describe Your Changes

This PR updates the documentation by removing old assets and adding the
user management chapter, divided into different sections.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-10 20:28:55 +01:00
Aliaksandr Valialkin
edecc433ff deployment: update Go builder from Go1.24.0 to Go1.24.1
See https://github.com/golang/go/issues?q=milestone%3AGo1.24.1+label%3ACherryPickApproved
2025-03-10 18:59:59 +01:00
Artem Fetishev
77b08cdbb0 docs: update vm apps versions to the v1.113.0 release
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-10 15:17:14 +01:00
Artem Fetishev
44b0466281 docs/changelog: mention LTS releases
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-10 14:45:55 +01:00
Naveen
179c530095 docs: update vlogs README.md (#8460)
### Describe Your Changes

Fixed the typo in the documentation. Updated `Ir provides` to `It
provides`

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-10 13:52:43 +01:00
Andrii Chubatiuk
c174a046e2 lib/streamaggr: fixed streamaggr panic (#8471)
### Describe Your Changes

fixes #8469

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-10 13:50:55 +01:00
Andrii Chubatiuk
67f8fa66ed app/vlinsert: support floats for elasticsearch timestamps (#8472)
### Describe Your Changes

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8470

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-10 13:48:48 +01:00
Artem Fetishev
a7d0a75f4c docs/CHANGELOG.md: cut v1.113.0
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-07 14:57:47 +01:00
Artem Fetishev
ecc46a4f42 make docs-update-version 2025-03-07 14:51:56 +01:00
Artem Fetishev
c4fd62188a app/{vmselect,vlselect}: run make vmui-update vmui-logs-update
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-07 14:15:38 +01:00
hagen1778
b131c3bc22 docs: add available release mark to vmalert chaining groups
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-07 14:04:46 +01:00
f41gh7
2b8b9b8536 lib/storage: reject downsampling rules with zero interval configuration
Using a zero interval in downsampling rules is not useful and caused a panic when performing validation of intervals.

Reject such rules during parsing in order to highlight the incorrect usage and prevent panics.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8454
---------
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-07 13:27:41 +01:00
Artem Fetishev
3f2289653c lib/metricnamestats: follow-up after b85b28d30a: Fix flaky integration tests
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-07 12:13:11 +01:00
hagen1778
021c2552dd docs: change #tip changes order to reflect importance
Put more important features first in the list.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-07 10:49:27 +01:00
hagen1778
f7e1c430bb docs: restore accidentally dropped changelog line
Line about `$__interval` was accidentally dropped in
b85b28d30a (diff-6564e3f60c3a7942189fe87a0c8f02e0f9841a71f914d64cd5487eb8b23ad66a)

The order was changed intentionally, so this commit could be cherry-picked
to cluster branch.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-07 09:59:15 +01:00
Hui Wang
e8e2ef54a0 vmalert: allow chaining groups with eval_offset (#8402)
address https://github.com/VictoriaMetrics/VictoriaMetrics/issues/860,
see
https://github.com/VictoriaMetrics/VictoriaMetrics/blob/change-evaloffset-behavior/docs/vmalert.md#chaining-groups

Also related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8154
2025-03-07 09:45:16 +01:00
Zakhar Bessarab
fd7b016c5b docs/victoria-logs/data-ingestion/promtail: fix typo (#8451)

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-07 11:03:04 +04:00
f41gh7
ec68ea2222 lib/metricnamestats: follow-up after b85b28d30a
* properly save state for cross-device mount points
* properly check empty state for tracker

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-03-06 23:18:49 +01:00
Nikolay
b85b28d30a lib/storage: add tracker for time series metric names statistics
This feature allows tracking query requests by metric name. The tracker
state is stored in memory, capped at 1/100 of the memory allocated to the
storage. If the cap is exceeded, the tracker rejects adding new items and
instead only registers query requests for already observed metric names.

This feature is disabled by default; the new
`-storage.trackMetricNamesStats` flag enables it.

New APIs added to the select component:

* /api/v1/status/metric_names_stats - returns a JSON object
  with usage statistics.
* /admin/api/v1/status/metric_names_stats/reset - resets the internal
  state of the tracker and resets the tsid cache.

New metrics were added for this feature:

* vm_cache_size_bytes{type="storage/metricNamesUsageTracker"}
* vm_cache_size{type="storage/metricNamesUsageTracker"}
* vm_cache_size_max_bytes{type="storage/metricNamesUsageTracker"}

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4458

---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-06 22:06:50 +01:00
Zakhar Bessarab
7dfaef9088 app/vmselect/promql: fix panic with using @ with series which is not present at the start of the query (#8445)
### Describe Your Changes

Previously, "selector @ another_selector" assumed that
"another_selector" metric is supposed to exist since "start" used in the
query.

If the query was evaluated in the following case (timestamps):
- start - 2, end - 10
- "another_selector" 5,6,7,8,9,10
- "selector" The resulting "at" timestamp would be taken from NaN (as
`int64(NaN * 1000)`), causing a panic or invalid behavior later.

Note that type cast of `NaN` to int64 is also platform-dependent, so
value of `int64(math.NaN() * 1000)` can produce `0` or max int64 on
different platforms and versions of Go.

This commit changes that behavior and checks for the first non-NaN value instead. This
makes the feature easier to use, since series are not always aligned, and
returning an error in this case would disallow using it for some time
ranges.
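A sketch of the core of the fix, not the actual promql code: pick the first non-NaN value instead of blindly using the value at the query start:

```
package main

import (
	"fmt"
	"math"
)

// firstNonNaN returns the first valid value from the inner selector's series;
// converting a NaN to int64 is platform-dependent, so it must never be used.
func firstNonNaN(values []float64) (float64, bool) {
	for _, v := range values {
		if !math.IsNaN(v) {
			return v, true
		}
	}
	return 0, false // no valid timestamp source in the range
}

func main() {
	vs := []float64{math.NaN(), math.NaN(), 5, 6}
	v, ok := firstNonNaN(vs)
	fmt.Println(v, ok) // 5 true
}
```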

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8444

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-06 16:42:19 +01:00
Dmytro Kozlov
75601c2d9a vendor: bump metricsql to v0.84.1 (#8450)
### Describe Your Changes

Updated MetricsQL dependency to v0.84.1

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8435

### Checklist

The following checks are **mandatory**:

- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Zhu Jiekun <jiekun@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-06 15:13:18 +01:00
1055 changed files with 48887 additions and 8752 deletions

View File

@@ -18,7 +18,7 @@ TAR_OWNERSHIP ?= --owner=1000 --group=1000
 .PHONY: $(MAKECMDGOALS)
 include app/*/Makefile
-include cspell/Makefile
+include codespell/Makefile
 include docs/Makefile
 include deployment/*/Makefile
 include dashboards/Makefile
@@ -567,7 +567,7 @@ golangci-lint: install-golangci-lint
 	golangci-lint run
 install-golangci-lint:
-	which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.64.5
+	which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.64.7
 remove-golangci-lint:
 	rm -rf `which golangci-lint`

View File

@@ -3,7 +3,6 @@ package datadog
import (
"bytes"
"fmt"
"io"
"net/http"
"strconv"
"time"
@@ -16,15 +15,17 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var (
datadogStreamFields = flagutil.NewArrayString("datadog.streamFields", "Datadog tags to be used as stream fields.")
datadogIgnoreFields = flagutil.NewArrayString("datadog.ignoreFields", "Datadog tags to ignore.")
datadogStreamFields = flagutil.NewArrayString("datadog.streamFields", "Comma-separated list of fields to use as log stream fields for logs ingested via DataDog protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#stream-fields")
datadogIgnoreFields = flagutil.NewArrayString("datadog.ignoreFields", "Comma-separated list of fields to ignore for logs ingested via DataDog protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#dropping-fields")
maxRequestSize = flagutil.NewBytes("datadog.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single DataDog request")
)
var parserPool fastjson.ParserPool
@@ -46,7 +47,6 @@ func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
w.Header().Add("Content-Type", "application/json")
startTime := time.Now()
v2LogsRequestsTotal.Inc()
reader := r.Body
var ts int64
if tsValue := r.Header.Get("dd-message-timestamp"); tsValue != "" && tsValue != "0" {
@@ -61,24 +61,6 @@ func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
ts = startTime.UnixNano()
}
if r.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(reader)
if err != nil {
httpserver.Errorf(w, r, "cannot read gzipped logs request: %s", err)
return true
}
defer common.PutGzipReader(zr)
reader = zr
}
wcr := writeconcurrencylimiter.GetReader(reader)
data, err := io.ReadAll(wcr)
writeconcurrencylimiter.PutReader(wcr)
if err != nil {
httpserver.Errorf(w, r, "cannot read request body: %s", err)
return true
}
cp, err := insertutils.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
@@ -97,11 +79,15 @@ func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
return true
}
lmp := cp.NewLogMessageProcessor("datadog")
err = readLogsRequest(ts, data, lmp)
lmp.MustClose()
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("datadog", false)
err := readLogsRequest(ts, data, lmp)
lmp.MustClose()
return err
})
if err != nil {
logger.Warnf("cannot decode log message in /api/v2/logs request: %s, stream fields: %s", err, cp.StreamFields)
httpserver.Errorf(w, r, "cannot read DataDog protocol data: %s", err)
return true
}
@@ -121,7 +107,7 @@ var (
// datadog message field has two formats:
// - regular log message with string text
// - nested json format for serverless plugins
// which has folowing format:
// which has the following format:
// {"message": {"message": "text","lamdba": {"arn": "string","requestID": "string"}, "timestamp": int64} }
//
// See https://github.com/DataDog/datadog-lambda-extension/blob/28b90c7e4e985b72d60b5f5a5147c69c7ac693c4/bottlecap/src/logs/lambda/mod.rs#L24

View File

@@ -17,7 +17,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
)
@@ -99,10 +99,10 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
httpserver.Errorf(w, r, "%s", err)
return true
}
lmp := cp.NewLogMessageProcessor("elasticsearch_bulk")
isGzip := r.Header.Get("Content-Encoding") == "gzip"
lmp := cp.NewLogMessageProcessor("elasticsearch_bulk", true)
encoding := r.Header.Get("Content-Encoding")
streamName := fmt.Sprintf("remoteAddr=%s, requestURI=%q", httpserver.GetQuotedRemoteAddr(r), r.RequestURI)
n, err := readBulkRequest(streamName, r.Body, isGzip, cp.TimeField, cp.MsgFields, lmp)
n, err := readBulkRequest(streamName, r.Body, encoding, cp.TimeField, cp.MsgFields, lmp)
lmp.MustClose()
if err != nil {
logger.Warnf("cannot decode log message #%d in /_bulk request: %s, stream fields: %s", n, err, cp.StreamFields)
@@ -131,19 +131,16 @@ var (
bulkRequestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/elasticsearch/_bulk"}`)
)
func readBulkRequest(streamName string, r io.Reader, isGzip bool, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (int, error) {
func readBulkRequest(streamName string, r io.Reader, encoding string, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (int, error) {
// See https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html
if isGzip {
zr, err := common.GetGzipReader(r)
if err != nil {
return 0, fmt.Errorf("cannot read gzipped _bulk request: %w", err)
}
defer common.PutGzipReader(zr)
r = zr
reader, err := protoparserutil.GetUncompressedReader(r, encoding)
if err != nil {
return 0, fmt.Errorf("cannot decode Elasticsearch protocol data: %w", err)
}
defer protoparserutil.PutUncompressedReader(reader)
wcr := writeconcurrencylimiter.GetReader(r)
wcr := writeconcurrencylimiter.GetReader(reader)
defer writeconcurrencylimiter.PutReader(wcr)
lr := insertutils.NewLineReader(streamName, wcr)


@@ -2,8 +2,12 @@ package elasticsearch
import (
"bytes"
"compress/gzip"
"fmt"
"github.com/golang/snappy"
"github.com/klauspost/compress/gzip"
"github.com/klauspost/compress/zlib"
"github.com/klauspost/compress/zstd"
"io"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
@@ -15,7 +19,7 @@ func TestReadBulkRequest_Failure(t *testing.T) {
tlp := &insertutils.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
rows, err := readBulkRequest("test", r, false, "_time", []string{"_msg"}, tlp)
rows, err := readBulkRequest("test", r, "", "_time", []string{"_msg"}, tlp)
if err == nil {
t.Fatalf("expecting non-empty error")
}
@@ -33,7 +37,7 @@ foobar`)
}
func TestReadBulkRequest_Success(t *testing.T) {
f := func(data, timeField, msgField string, timestampsExpected []int64, resultExpected string) {
f := func(data, encoding, timeField, msgField string, timestampsExpected []int64, resultExpected string) {
t.Helper()
msgFields := []string{"non_existing_foo", msgField, "non_exiting_bar"}
@@ -41,7 +45,7 @@ func TestReadBulkRequest_Success(t *testing.T) {
// Read the request without compression
r := bytes.NewBufferString(data)
rows, err := readBulkRequest("test", r, false, timeField, msgFields, tlp)
rows, err := readBulkRequest("test", r, "", timeField, msgFields, tlp)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
@@ -54,9 +58,11 @@ func TestReadBulkRequest_Success(t *testing.T) {
// Read the request with compression
tlp = &insertutils.TestLogMessageProcessor{}
compressedData := compressData(data)
r = bytes.NewBufferString(compressedData)
rows, err = readBulkRequest("test", r, true, timeField, msgFields, tlp)
if encoding != "" {
data = compressData(data, encoding)
}
r = bytes.NewBufferString(data)
rows, err = readBulkRequest("test", r, encoding, timeField, msgFields, tlp)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
@@ -69,9 +75,9 @@ func TestReadBulkRequest_Success(t *testing.T) {
}
// Verify an empty data
f("", "_time", "_msg", nil, "")
f("\n", "_time", "_msg", nil, "")
f("\n\n", "_time", "_msg", nil, "")
f("", "gzip", "_time", "_msg", nil, "")
f("\n", "gzip", "_time", "_msg", nil, "")
f("\n\n", "gzip", "_time", "_msg", nil, "")
// Verify non-empty data
data := `{"create":{"_index":"filebeat-8.8.0"}}
@@ -82,20 +88,35 @@ func TestReadBulkRequest_Success(t *testing.T) {
{"message":"xyz","@timestamp":"1686026893735","x":"y"}
{"create":{"_index":"filebeat-8.8.0"}}
{"message":"qwe rty","@timestamp":"1686026893"}
{"create":{"_index":"filebeat-8.8.0"}}
{"message":"qwe rty float","@timestamp":"1686026123.62"}
`
timeField := "@timestamp"
msgField := "message"
timestampsExpected := []int64{1686026891735000000, 1686023292735000000, 1686026893735000000, 1686026893000000000}
timestampsExpected := []int64{1686026891735000000, 1686023292735000000, 1686026893735000000, 1686026893000000000, 1686026123620000000}
resultExpected := `{"log.offset":"71770","log.file.path":"/var/log/auth.log","_msg":"foobar"}
{"_msg":"baz"}
{"_msg":"xyz","x":"y"}
{"_msg":"qwe rty"}`
f(data, timeField, msgField, timestampsExpected, resultExpected)
{"_msg":"qwe rty"}
{"_msg":"qwe rty float"}`
f(data, "zstd", timeField, msgField, timestampsExpected, resultExpected)
}
func compressData(s string) string {
func compressData(s string, encoding string) string {
var bb bytes.Buffer
zw := gzip.NewWriter(&bb)
var zw io.WriteCloser
switch encoding {
case "gzip":
zw = gzip.NewWriter(&bb)
case "zstd":
zw, _ = zstd.NewWriter(&bb)
case "snappy":
zw = snappy.NewBufferedWriter(&bb)
case "deflate":
zw = zlib.NewWriter(&bb)
default:
panic(fmt.Errorf("%q encoding is not supported", encoding))
}
if _, err := zw.Write([]byte(s)); err != nil {
panic(fmt.Errorf("unexpected error when compressing data: %w", err))
}


@@ -10,15 +10,24 @@ import (
)
func BenchmarkReadBulkRequest(b *testing.B) {
b.Run("gzip:off", func(b *testing.B) {
benchmarkReadBulkRequest(b, false)
b.Run("encoding:none", func(b *testing.B) {
benchmarkReadBulkRequest(b, "")
})
b.Run("gzip:on", func(b *testing.B) {
benchmarkReadBulkRequest(b, true)
b.Run("encoding:gzip", func(b *testing.B) {
benchmarkReadBulkRequest(b, "gzip")
})
b.Run("encoding:zstd", func(b *testing.B) {
benchmarkReadBulkRequest(b, "zstd")
})
b.Run("encoding:deflate", func(b *testing.B) {
benchmarkReadBulkRequest(b, "deflate")
})
b.Run("encoding:snappy", func(b *testing.B) {
benchmarkReadBulkRequest(b, "snappy")
})
}
func benchmarkReadBulkRequest(b *testing.B, isGzip bool) {
func benchmarkReadBulkRequest(b *testing.B, encoding string) {
data := `{"create":{"_index":"filebeat-8.8.0"}}
{"@timestamp":"2023-06-06T04:48:11.735Z","log":{"offset":71770,"file":{"path":"/var/log/auth.log"}},"message":"foobar"}
{"create":{"_index":"filebeat-8.8.0"}}
@@ -26,8 +35,8 @@ func benchmarkReadBulkRequest(b *testing.B, isGzip bool) {
{"create":{"_index":"filebeat-8.8.0"}}
{"message":"xyz","@timestamp":"2023-06-06T04:48:13.735Z","x":"y"}
`
if isGzip {
data = compressData(data)
if encoding != "" {
data = compressData(data, encoding)
}
dataBytes := bytesutil.ToUnsafeBytes(data)
@@ -41,7 +50,7 @@ func benchmarkReadBulkRequest(b *testing.B, isGzip bool) {
r := &bytes.Reader{}
for pb.Next() {
r.Reset(dataBytes)
_, err := readBulkRequest("test", r, isGzip, timeField, msgFields, blp)
_, err := readBulkRequest("test", r, encoding, timeField, msgFields, blp)
if err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}


@@ -141,7 +141,7 @@ type LogMessageProcessor interface {
//
// If streamFields is non-nil, then the given streamFields must be used as log stream fields instead of pre-configured fields.
//
// The LogMessageProcessor implementation cannot hold references to fields, since the caller can re-use them.
// The LogMessageProcessor implementation cannot hold references to fields, since the caller can reuse them.
AddRow(timestamp int64, fields, streamFields []logstorage.Field)
// MustClose() must flush all the remaining fields and free up resources occupied by LogMessageProcessor.
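To make the no-references rule concrete, here is a hypothetical processor that buffers rows: it must clone every field before returning, because the caller reuses the backing slices (and the buffers their strings point into) right after AddRow returns. A sketch, assuming logstorage.Field is the {Name, Value string} pair used throughout this diff:

package example

import (
	"strings"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

type bufferedRow struct {
	timestamp int64
	fields    []logstorage.Field
}

// bufferingProcessor is a hypothetical LogMessageProcessor that keeps rows around.
type bufferingProcessor struct {
	rows []bufferedRow
}

func (p *bufferingProcessor) AddRow(timestamp int64, fields, _ []logstorage.Field) {
	// Deep-copy: no references to the caller's fields may be retained.
	cp := make([]logstorage.Field, len(fields))
	for i, f := range fields {
		cp[i] = logstorage.Field{Name: strings.Clone(f.Name), Value: strings.Clone(f.Value)}
	}
	p.rows = append(p.rows, bufferedRow{timestamp: timestamp, fields: cp})
}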
@@ -191,9 +191,6 @@ func (lmp *logMessageProcessor) initPeriodicFlush() {
//
// If streamFields is non-nil, then it is used as log stream fields instead of the pre-configured stream fields.
func (lmp *logMessageProcessor) AddRow(timestamp int64, fields, streamFields []logstorage.Field) {
lmp.mu.Lock()
defer lmp.mu.Unlock()
lmp.rowsIngestedTotal.Inc()
n := logstorage.EstimatedJSONRowLen(fields)
lmp.bytesIngestedTotal.Add(n)
@@ -205,7 +202,11 @@ func (lmp *logMessageProcessor) AddRow(timestamp int64, fields, streamFields []l
return
}
lmp.mu.Lock()
defer lmp.mu.Unlock()
lmp.lr.MustAdd(lmp.cp.TenantID, timestamp, fields, streamFields)
if lmp.cp.Debug {
s := lmp.lr.GetRowString(0)
lmp.lr.ResetKeepSettings()
@@ -238,7 +239,7 @@ func (lmp *logMessageProcessor) MustClose() {
// NewLogMessageProcessor returns new LogMessageProcessor for the given cp.
//
// MustClose() must be called on the returned LogMessageProcessor when it is no longer needed.
func (cp *CommonParams) NewLogMessageProcessor(protocolName string) LogMessageProcessor {
func (cp *CommonParams) NewLogMessageProcessor(protocolName string, isStreamMode bool) LogMessageProcessor {
lr := logstorage.GetLogRows(cp.StreamFields, cp.IgnoreFields, cp.ExtraFields, *defaultMsgValue)
rowsIngestedTotal := metrics.GetOrCreateCounter(fmt.Sprintf("vl_rows_ingested_total{type=%q}", protocolName))
bytesIngestedTotal := metrics.GetOrCreateCounter(fmt.Sprintf("vl_bytes_ingested_total{type=%q}", protocolName))
@@ -251,7 +252,10 @@ func (cp *CommonParams) NewLogMessageProcessor(protocolName string) LogMessagePr
stopCh: make(chan struct{}),
}
lmp.initPeriodicFlush()
if isStreamMode {
lmp.initPeriodicFlush()
}
return lmp
}
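The new isStreamMode argument selects between two call patterns, both visible elsewhere in this diff; the snippets below are lifted from the jsonline and DataDog handlers:

// Stream mode (e.g. jsonline, syslog): data keeps arriving on a long-lived
// connection, so the periodic background flush is enabled.
lmp := cp.NewLogMessageProcessor("jsonline", true)
processStreamInternal(streamName, reader, cp.TimeField, cp.MsgFields, lmp)
lmp.MustClose()

// Request mode (e.g. datadog, journald, loki): the payload is handled in one
// shot, so MustClose alone flushes everything and no background flusher runs.
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
	lmp := cp.NewLogMessageProcessor("datadog", false)
	err := readLogsRequest(ts, data, lmp)
	lmp.MustClose()
	return err
})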


@@ -2,7 +2,9 @@ package insertutils
import (
"fmt"
"math"
"strconv"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
@@ -49,6 +51,35 @@ func parseTimestamp(s string) (int64, error) {
// ParseUnixTimestamp parses s as unix timestamp in seconds, milliseconds, microseconds or nanoseconds and returns the parsed timestamp in nanoseconds.
func ParseUnixTimestamp(s string) (int64, error) {
if strings.IndexByte(s, '.') >= 0 {
// Parse timestamp as floating-point value
f, err := strconv.ParseFloat(s, 64)
if err != nil {
return 0, fmt.Errorf("cannot parse unix timestamp from %q: %w", s, err)
}
if f < (1<<31) && f >= (-1<<31) {
// The timestamp is in seconds.
return int64(f * 1e9), nil
}
if f < 1e3*(1<<31) && f >= 1e3*(-1<<31) {
// The timestamp is in milliseconds.
return int64(f * 1e6), nil
}
if f < 1e6*(1<<31) && f >= 1e6*(-1<<31) {
// The timestamp is in microseconds.
return int64(f * 1e3), nil
}
// The timestamp is in nanoseconds
if f > math.MaxInt64 {
return 0, fmt.Errorf("too big timestamp in nanoseconds: %v; mustn't exceed %v", f, int64(math.MaxInt64))
}
if f < math.MinInt64 {
return 0, fmt.Errorf("too small timestamp in nanoseconds: %v; must be bigger or equal to %v", f, int64(math.MinInt64))
}
return int64(f), nil
}
// Parse timestamp as integer
n, err := strconv.ParseInt(s, 10, 64)
if err != nil {
return 0, fmt.Errorf("cannot parse unix timestamp from %q: %w", s, err)


@@ -6,6 +6,63 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
func TestParseUnixTimestamp_Success(t *testing.T) {
f := func(s string, timestampExpected int64) {
t.Helper()
timestamp, err := ParseUnixTimestamp(s)
if err != nil {
t.Fatalf("unexpected error in ParseUnixTimestamp(%q): %s", s, err)
}
if timestamp != timestampExpected {
t.Fatalf("unexpected timestamp returned from ParseUnixTimestamp(%q); got %d; want %d", s, timestamp, timestampExpected)
}
}
f("0", 0)
// nanoseconds
f("-1234567890123456789", -1234567890123456789)
f("1234567890123456789", 1234567890123456789)
// microseconds
f("-1234567890123456", -1234567890123456000)
f("1234567890123456", 1234567890123456000)
f("1234567890123456.789", 1234567890123456768)
// milliseconds
f("-1234567890123", -1234567890123000000)
f("1234567890123", 1234567890123000000)
f("1234567890123.456", 1234567890123456000)
// seconds
f("-1234567890", -1234567890000000000)
f("1234567890", 1234567890000000000)
f("-1234567890.123456", -1234567890123456000)
}
func TestParseUnixTimestamp_Failure(t *testing.T) {
f := func(s string) {
t.Helper()
_, err := ParseUnixTimestamp(s)
if err == nil {
t.Fatalf("expecting non-nil error in ParseUnixTimestamp(%q)", s)
}
}
// non-numeric timestamp
f("")
f("foobar")
f("foo.bar")
// too big timestamp
f("12345678901234567890")
f("-12345678901234567890")
f("12345678901234567890.235424")
f("-12345678901234567890.235424")
}
func TestExtractTimestampFromFields_Success(t *testing.T) {
f := func(timeField string, fields []logstorage.Field, nsecsExpected int64) {
t.Helper()


@@ -5,7 +5,6 @@ import (
"encoding/binary"
"flag"
"fmt"
"io"
"net/http"
"regexp"
"slices"
@@ -16,32 +15,30 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding/zstd"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
const (
journaldEntryMaxNameLen = 64
)
// See https://github.com/systemd/systemd/blob/main/src/libsystemd/sd-journal/journal-file.c#L1703
const journaldEntryMaxNameLen = 64
var allowedJournaldEntryNameChars = regexp.MustCompile(`^[A-Z_][A-Z0-9_]*`)
var (
bodyBufferPool bytesutil.ByteBufferPool
allowedJournaldEntryNameChars = regexp.MustCompile(`^[A-Z_][A-Z0-9_]*`)
)
var (
journaldStreamFields = flagutil.NewArrayString("journald.streamFields", "Journal fields to be used as stream fields. "+
"See the list of allowed fields at https://www.freedesktop.org/software/systemd/man/latest/systemd.journal-fields.html.")
journaldIgnoreFields = flagutil.NewArrayString("journald.ignoreFields", "Journal fields to ignore. "+
"See the list of allowed fields at https://www.freedesktop.org/software/systemd/man/latest/systemd.journal-fields.html.")
journaldTimeField = flag.String("journald.timeField", "__REALTIME_TIMESTAMP", "Journal field to be used as time field. "+
"See the list of allowed fields at https://www.freedesktop.org/software/systemd/man/latest/systemd.journal-fields.html.")
journaldTenantID = flag.String("journald.tenantID", "0:0", "TenantID for logs ingested via the Journald endpoint.")
journaldStreamFields = flagutil.NewArrayString("journald.streamFields", "Comma-separated list of fields to use as log stream fields for logs ingested over journald protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#stream-fields")
journaldIgnoreFields = flagutil.NewArrayString("journald.ignoreFields", "Comma-separated list of fields to ignore for logs ingested over journald protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#dropping-fields")
journaldTimeField = flag.String("journald.timeField", "__REALTIME_TIMESTAMP", "Field to use as a log timestamp for logs ingested via journald protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#time-field")
journaldTenantID = flag.String("journald.tenantID", "0:0", "TenantID for logs ingested via the Journald endpoint. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#multitenancy")
journaldIncludeEntryMetadata = flag.Bool("journald.includeEntryMetadata", false, "Include journal entry fields, which start with double underscores.")
maxRequestSize = flagutil.NewBytes("journald.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single journald request")
)
func getCommonParams(r *http.Request) (*insertutils.CommonParams, error) {
@@ -89,43 +86,29 @@ func handleJournald(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsJournaldTotal.Inc()
if err := vlstorage.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
reader := r.Body
var err error
wcr := writeconcurrencylimiter.GetReader(reader)
data, err := io.ReadAll(wcr)
if err != nil {
httpserver.Errorf(w, r, "cannot read request body: %s", err)
return
}
writeconcurrencylimiter.PutReader(wcr)
bb := bodyBufferPool.Get()
defer bodyBufferPool.Put(bb)
if r.Header.Get("Content-Encoding") == "zstd" {
bb.B, err = zstd.Decompress(bb.B[:0], data)
if err != nil {
httpserver.Errorf(w, r, "cannot decompress zstd-encoded request with length %d: %s", len(data), err)
return
}
data = bb.B
}
cp, err := getCommonParams(r)
if err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
return
}
lmp := cp.NewLogMessageProcessor("journald")
err = parseJournaldRequest(data, lmp, cp)
lmp.MustClose()
if err := vlstorage.CanWriteData(); err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "%s", err)
return
}
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("journald", false)
err := parseJournaldRequest(data, lmp, cp)
lmp.MustClose()
return err
})
if err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "cannot parse Journald protobuf request: %s", err)
httpserver.Errorf(w, r, "cannot read journald protocol data: %s", err)
return
}
@@ -198,7 +181,7 @@ func parseJournaldRequest(data []byte, lmp insertutils.LogMessageProcessor, cp *
if err != nil {
return fmt.Errorf("failed to extract binary field %q value size: %w", name, err)
}
// skip binary data sise
// skip binary data size
data = data[idx:]
if size == 0 {
return fmt.Errorf("unexpected zero binary data size decoded %d", size)
@@ -218,7 +201,6 @@ func parseJournaldRequest(data []byte, lmp insertutils.LogMessageProcessor, cp *
}
data = data[1:]
}
// https://github.com/systemd/systemd/blob/main/src/libsystemd/sd-journal/journal-file.c#L1703
if len(name) > journaldEntryMaxNameLen {
return fmt.Errorf("journald entry name should not exceed %d symbols, got: %q", journaldEntryMaxNameLen, name)
}


@@ -11,7 +11,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
@@ -38,18 +38,15 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) {
return
}
reader := r.Body
if r.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(reader)
if err != nil {
logger.Errorf("cannot read gzipped jsonline request: %s", err)
return
}
defer common.PutGzipReader(zr)
reader = zr
encoding := r.Header.Get("Content-Encoding")
reader, err := protoparserutil.GetUncompressedReader(r.Body, encoding)
if err != nil {
logger.Errorf("cannot decode jsonline request: %s", err)
return
}
defer protoparserutil.PutUncompressedReader(reader)
lmp := cp.NewLogMessageProcessor("jsonline")
lmp := cp.NewLogMessageProcessor("jsonline", true)
streamName := fmt.Sprintf("remoteAddr=%s, requestURI=%q", httpserver.GetQuotedRemoteAddr(r), r.RequestURI)
processStreamInternal(streamName, reader, cp.TimeField, cp.MsgFields, lmp)
lmp.MustClose()


@@ -1,12 +1,18 @@
package loki
import (
"flag"
"fmt"
"net/http"
"strconv"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
var disableMessageParsing = flag.Bool("loki.disableMessageParsing", false, "Whether to disable automatic parsing of JSON-encoded log fields inside Loki log message into distinct log fields")
// RequestHandler processes Loki insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
@@ -35,7 +41,16 @@ func handleInsert(r *http.Request, w http.ResponseWriter) {
}
}
func getCommonParams(r *http.Request) (*insertutils.CommonParams, error) {
type commonParams struct {
cp *insertutils.CommonParams
// Whether to parse JSON inside plaintext log message.
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8486
parseMessage bool
}
func getCommonParams(r *http.Request) (*commonParams, error) {
cp, err := insertutils.GetCommonParams(r)
if err != nil {
return nil, err
@@ -55,5 +70,17 @@ func getCommonParams(r *http.Request) (*insertutils.CommonParams, error) {
}
return cp, nil
parseMessage := !*disableMessageParsing
if rv := httputils.GetRequestValue(r, "disable_message_parsing", "VL-Loki-Disable-Message-Parsing"); rv != "" {
bv, err := strconv.ParseBool(rv)
if err != nil {
return nil, fmt.Errorf("cannot parse dusable_message_parsing=%q: %s", rv, err)
}
parseMessage = !bv
}
return &commonParams{
cp: cp,
parseMessage: parseMessage,
}, nil
}
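Message parsing can thus be toggled globally with -loki.disableMessageParsing, or per request through the disable_message_parsing query arg or the VL-Loki-Disable-Message-Parsing header read above. A hypothetical client-side sketch; the host is an assumption, while the push path matches the metrics labels further below:

req, err := http.NewRequest(http.MethodPost, "http://localhost:9428/insert/loki/api/v1/push", body)
if err != nil {
	return err
}
// Any value accepted by strconv.ParseBool works, e.g. "1" or "true".
req.Header.Set("VL-Loki-Disable-Message-Parsing", "true")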


@@ -2,10 +2,7 @@ package loki
import (
"fmt"
"io"
"math"
"net/http"
"strconv"
"time"
"github.com/VictoriaMetrics/metrics"
@@ -14,35 +11,19 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var maxRequestSize = flagutil.NewBytes("loki.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single Loki request")
var parserPool fastjson.ParserPool
func handleJSON(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsJSONTotal.Inc()
reader := r.Body
if r.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(reader)
if err != nil {
httpserver.Errorf(w, r, "cannot initialize gzip reader: %s", err)
return
}
defer common.PutGzipReader(zr)
reader = zr
}
wcr := writeconcurrencylimiter.GetReader(reader)
data, err := io.ReadAll(wcr)
writeconcurrencylimiter.PutReader(wcr)
if err != nil {
httpserver.Errorf(w, r, "cannot read request body: %s", err)
return
}
cp, err := getCommonParams(r)
if err != nil {
@@ -53,12 +34,17 @@ func handleJSON(r *http.Request, w http.ResponseWriter) {
httpserver.Errorf(w, r, "%s", err)
return
}
lmp := cp.NewLogMessageProcessor("loki_json")
useDefaultStreamFields := len(cp.StreamFields) == 0
err = parseJSONRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.cp.NewLogMessageProcessor("loki_json", false)
useDefaultStreamFields := len(cp.cp.StreamFields) == 0
err := parseJSONRequest(data, lmp, cp.cp.MsgFields, useDefaultStreamFields, cp.parseMessage)
lmp.MustClose()
return err
})
if err != nil {
httpserver.Errorf(w, r, "cannot parse Loki json request: %s; data=%s", err, data)
httpserver.Errorf(w, r, "cannot read Loki json data: %s", err)
return
}
@@ -66,6 +52,9 @@ func handleJSON(r *http.Request, w http.ResponseWriter) {
// There is no need in updating requestJSONDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
requestJSONDuration.UpdateDuration(startTime)
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8505
w.WriteHeader(http.StatusNoContent)
}
var (
@@ -73,9 +62,10 @@ var (
requestJSONDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="json"}`)
)
func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefaultStreamFields bool) error {
func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, msgFields []string, useDefaultStreamFields, parseMessage bool) error {
p := parserPool.Get()
defer parserPool.Put(p)
v, err := p.ParseBytes(data)
if err != nil {
return fmt.Errorf("cannot parse JSON request body: %w", err)
@@ -90,11 +80,20 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
return fmt.Errorf("`streams` item in the parsed JSON must contain an array; got %q", streamsV)
}
fields := getFields()
defer putFields(fields)
var msgParser *logstorage.JSONParser
if parseMessage {
msgParser = logstorage.GetJSONParser()
defer logstorage.PutJSONParser(msgParser)
}
currentTimestamp := time.Now().UnixNano()
var commonFields []logstorage.Field
for _, stream := range streams {
// populate common labels from `stream` dict
commonFields = commonFields[:0]
fields.fields = fields.fields[:0]
labelsV := stream.Get("stream")
var labels *fastjson.Object
if labelsV != nil {
@@ -110,7 +109,7 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
err = fmt.Errorf("unexpected label value type for %q:%q; want string", k, v)
return
}
commonFields = append(commonFields, logstorage.Field{
fields.fields = append(fields.fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(k),
Value: bytesutil.ToUnsafeString(vStr),
})
@@ -129,8 +128,10 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
return fmt.Errorf("`values` item in the parsed JSON must contain an array; got %q", linesV)
}
fields := commonFields
commonFieldsLen := len(fields.fields)
for _, line := range lines {
fields.fields = fields.fields[:commonFieldsLen]
lineA, err := line.Array()
if err != nil {
return fmt.Errorf("unexpected contents of `values` item; want array; got %q", line)
@@ -152,17 +153,6 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
ts = currentTimestamp
}
// parse log message
msg, err := lineA[1].StringBytes()
if err != nil {
return fmt.Errorf("unexpected log message type for %q; want string", lineA[1])
}
fields = append(fields[:len(commonFields)], logstorage.Field{
Name: "_msg",
Value: bytesutil.ToUnsafeString(msg),
})
// parse structured metadata - see https://grafana.com/docs/loki/latest/reference/loki-http-api/#ingest-logs
if len(lineA) > 2 {
structuredMetadata, err := lineA[2].Object()
@@ -177,7 +167,7 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
return
}
fields = append(fields, logstorage.Field{
fields.fields = append(fields.fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(k),
Value: bytesutil.ToUnsafeString(vStr),
})
@@ -186,39 +176,51 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
return fmt.Errorf("error when parsing `structuredMetadata` object: %w", err)
}
}
// parse log message
msg, err := lineA[1].StringBytes()
if err != nil {
return fmt.Errorf("unexpected log message type for %q; want string", lineA[1])
}
allowMsgRenaming := false
fields.fields, allowMsgRenaming = addMsgField(fields.fields, msgParser, bytesutil.ToUnsafeString(msg))
var streamFields []logstorage.Field
if useDefaultStreamFields {
streamFields = commonFields
streamFields = fields.fields[:commonFieldsLen]
}
lmp.AddRow(ts, fields, streamFields)
if allowMsgRenaming {
logstorage.RenameField(fields.fields[commonFieldsLen:], msgFields, "_msg")
}
lmp.AddRow(ts, fields.fields, streamFields)
}
}
return nil
}
func addMsgField(dst []logstorage.Field, msgParser *logstorage.JSONParser, msg string) ([]logstorage.Field, bool) {
if msgParser == nil || len(msg) < 2 || msg[0] != '{' || msg[len(msg)-1] != '}' {
return append(dst, logstorage.Field{
Name: "_msg",
Value: msg,
}), false
}
if msgParser != nil && len(msg) >= 2 && msg[0] == '{' && msg[len(msg)-1] == '}' {
if err := msgParser.ParseLogMessage(bytesutil.ToUnsafeBytes(msg)); err == nil {
return append(dst, msgParser.Fields...), true
}
}
return append(dst, logstorage.Field{
Name: "_msg",
Value: msg,
}), false
}
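Concretely, per the ParseMessage tests later in this diff, addMsgField behaves as follows (dst and p as in the signature above):

// addMsgField(dst, p, `{"user_id":"123"}`) // valid JSON object -> field user_id=123; renaming via msgFields allowed
// addMsgField(dst, p, "abc")               // plain text        -> field _msg=abc; no renaming
// addMsgField(dst, p, "{def}")             // malformed JSON    -> field _msg={def}; no renaming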
func parseLokiTimestamp(s string) (int64, error) {
if s == "" {
// Special case - an empty timestamp must be substituted with the current time by the caller.
return 0, nil
}
n, err := strconv.ParseInt(s, 10, 64)
if err != nil {
// Fall back to parsing floating-point value
f, err := strconv.ParseFloat(s, 64)
if err != nil {
return 0, err
}
if f > math.MaxInt64 {
return 0, fmt.Errorf("too big timestamp in nanoseconds: %v; mustn't exceed %v", f, int64(math.MaxInt64))
}
if f < math.MinInt64 {
return 0, fmt.Errorf("too small timestamp in nanoseconds: %v; must be bigger or equal to %v", f, int64(math.MinInt64))
}
n = int64(f)
}
if n < 0 {
return 0, fmt.Errorf("too small timestamp in nanoseconds: %d; must be bigger than 0", n)
}
return n, nil
return insertutils.ParseUnixTimestamp(s)
}


@@ -11,7 +11,7 @@ func TestParseJSONRequest_Failure(t *testing.T) {
t.Helper()
tlp := &insertutils.TestLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, false); err == nil {
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err == nil {
t.Fatalf("expecting non-nil error")
}
if err := tlp.Verify(nil, ""); err != nil {
@@ -65,7 +65,7 @@ func TestParseJSONRequest_Success(t *testing.T) {
tlp := &insertutils.TestLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, false); err != nil {
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
@@ -89,9 +89,9 @@ func TestParseJSONRequest_Success(t *testing.T) {
"label2": "value2"
},"values":[
["1577836800000000001", "foo bar"],
["1477836900005000002", "abc"],
["1686026123.62", "abc"],
["147.78369e9", "foobar"]
]}]}`, []int64{1577836800000000001, 1477836900005000002, 147783690000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
]}]}`, []int64{1577836800000000001, 1686026123620000000, 147783690000000000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
{"label1":"value1","label2":"value2","_msg":"abc"}
{"label1":"value1","label2":"value2","_msg":"foobar"}`)
@@ -122,6 +122,48 @@ func TestParseJSONRequest_Success(t *testing.T) {
{"x":"y","_msg":"yx"}`)
// values with metadata
f(`{"streams":[{"values":[["1577836800000000001", "foo bar", {"metadata_1": "md_value"}]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar","metadata_1":"md_value"}`)
f(`{"streams":[{"values":[["1577836800000000001", "foo bar", {"metadata_1": "md_value"}]]}]}`, []int64{1577836800000000001}, `{"metadata_1":"md_value","_msg":"foo bar"}`)
f(`{"streams":[{"values":[["1577836800000000001", "foo bar", {}]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
}
func TestParseJSONRequest_ParseMessage(t *testing.T) {
f := func(s string, msgFields []string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &insertutils.TestLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, msgFields, false, true); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
f(`{
"streams": [
{
"stream": {
"foo": "bar",
"a": "b"
},
"values": [
["1577836800000000001", "{\"user_id\":\"123\"}"],
["1577836900005000002", "abc", {"trace_id":"pqw"}],
["1577836900005000003", "{def}"]
]
},
{
"stream": {
"x": "y"
},
"values": [
["1877836900005000004", "{\"trace_id\":\"111\",\"parent_id\":\"abc\"}"]
]
}
]
}`, []string{"a", "trace_id"}, []int64{1577836800000000001, 1577836900005000002, 1577836900005000003, 1877836900005000004}, `{"foo":"bar","a":"b","user_id":"123"}
{"foo":"bar","a":"b","trace_id":"pqw","_msg":"abc"}
{"foo":"bar","a":"b","_msg":"{def}"}
{"x":"y","_msg":"111","parent_id":"abc"}`)
}


@@ -28,7 +28,7 @@ func benchmarkParseJSONRequest(b *testing.B, streams, rows, labels int) {
b.RunParallel(func(pb *testing.PB) {
data := getJSONBody(streams, rows, labels)
for pb.Next() {
if err := parseJSONRequest(data, blp, false); err != nil {
if err := parseJSONRequest(data, blp, nil, false, true); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}


@@ -2,7 +2,6 @@ package loki
import (
"fmt"
"io"
"net/http"
"strconv"
"strings"
@@ -11,29 +10,19 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
)
var (
bytesBufPool bytesutil.ByteBufferPool
pushReqsPool sync.Pool
)
func handleProtobuf(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsProtobufTotal.Inc()
wcr := writeconcurrencylimiter.GetReader(r.Body)
data, err := io.ReadAll(wcr)
writeconcurrencylimiter.PutReader(wcr)
if err != nil {
httpserver.Errorf(w, r, "cannot read request body: %s", err)
return
}
cp, err := getCommonParams(r)
if err != nil {
@@ -44,12 +33,22 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
httpserver.Errorf(w, r, "%s", err)
return
}
lmp := cp.NewLogMessageProcessor("loki_protobuf")
useDefaultStreamFields := len(cp.StreamFields) == 0
err = parseProtobufRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
encoding := r.Header.Get("Content-Encoding")
if encoding == "" {
// Loki protocol uses snappy compression by default.
// See https://grafana.com/docs/loki/latest/reference/loki-http-api/#ingest-logs
encoding = "snappy"
}
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.cp.NewLogMessageProcessor("loki_protobuf", false)
useDefaultStreamFields := len(cp.cp.StreamFields) == 0
err := parseProtobufRequest(data, lmp, cp.cp.MsgFields, useDefaultStreamFields, cp.parseMessage)
lmp.MustClose()
return err
})
if err != nil {
httpserver.Errorf(w, r, "cannot parse Loki protobuf request: %s", err)
httpserver.Errorf(w, r, "cannot read Loki protobuf data: %s", err)
return
}
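Because snappy is the wire default here, a client that omits Content-Encoding must send a snappy block-format body. A one-line sketch, reusing the snappy.Encode call from the tests further below; pushReq is a hypothetical pb push request:

body := snappy.Encode(nil, pushReq.MarshalProtobuf(nil))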
@@ -57,6 +56,9 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
// There is no need in updating requestProtobufDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
requestProtobufDuration.UpdateDuration(startTime)
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8505
w.WriteHeader(http.StatusNoContent)
}
var (
@@ -64,20 +66,11 @@ var (
requestProtobufDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="protobuf"}`)
)
func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefaultStreamFields bool) error {
bb := bytesBufPool.Get()
defer bytesBufPool.Put(bb)
buf, err := snappy.Decode(bb.B[:cap(bb.B)], data)
if err != nil {
return fmt.Errorf("cannot decode snappy-encoded request body: %w", err)
}
bb.B = buf
func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, msgFields []string, useDefaultStreamFields, parseMessage bool) error {
req := getPushRequest()
defer putPushRequest(req)
err = req.UnmarshalProtobuf(bb.B)
err := req.UnmarshalProtobuf(data)
if err != nil {
return fmt.Errorf("cannot parse request body: %w", err)
}
@@ -85,8 +78,15 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useD
fields := getFields()
defer putFields(fields)
var msgParser *logstorage.JSONParser
if parseMessage {
msgParser = logstorage.GetJSONParser()
defer logstorage.PutJSONParser(msgParser)
}
streams := req.Streams
currentTimestamp := time.Now().UnixNano()
for i := range streams {
stream := &streams[i]
// stream.Labels contains labels for the stream.
@@ -109,10 +109,8 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useD
})
}
fields.fields = append(fields.fields, logstorage.Field{
Name: "_msg",
Value: e.Line,
})
allowMsgRenaming := false
fields.fields, allowMsgRenaming = addMsgField(fields.fields, msgParser, e.Line)
ts := e.Timestamp.UnixNano()
if ts == 0 {
@@ -123,6 +121,9 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useD
if useDefaultStreamFields {
streamFields = fields.fields[:commonFieldsLen]
}
if allowMsgRenaming {
logstorage.RenameField(fields.fields[commonFieldsLen:], msgFields, "_msg")
}
lmp.AddRow(ts, fields.fields, streamFields)
}
}


@@ -8,7 +8,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/golang/snappy"
)
type testLogMessageProcessor struct {
@@ -53,7 +52,7 @@ func TestParseProtobufRequest_Success(t *testing.T) {
t.Helper()
tlp := &testLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, false); err != nil {
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if len(tlp.pr.Streams) != len(timestampsExpected) {
@@ -61,10 +60,9 @@ func TestParseProtobufRequest_Success(t *testing.T) {
}
data := tlp.pr.MarshalProtobuf(nil)
encodedData := snappy.Encode(nil, data)
tlp2 := &insertutils.TestLogMessageProcessor{}
if err := parseProtobufRequest(encodedData, tlp2, false); err != nil {
if err := parseProtobufRequest(data, tlp2, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp2.Verify(timestampsExpected, resultExpected); err != nil {
@@ -90,7 +88,7 @@ func TestParseProtobufRequest_Success(t *testing.T) {
["1577836800000000001", "foo bar"],
["1477836900005000002", "abc"],
["147.78369e9", "foobar"]
]}]}`, []int64{1577836800000000001, 1477836900005000002, 147783690000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
]}]}`, []int64{1577836800000000001, 1477836900005000002, 147783690000000000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
{"label1":"value1","label2":"value2","_msg":"abc"}
{"label1":"value1","label2":"value2","_msg":"foobar"}`)
@@ -121,6 +119,57 @@ func TestParseProtobufRequest_Success(t *testing.T) {
{"x":"y","_msg":"yx"}`)
}
func TestParseProtobufRequest_ParseMessage(t *testing.T) {
f := func(s string, msgFields []string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &testLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if len(tlp.pr.Streams) != len(timestampsExpected) {
t.Fatalf("unexpected number of streams; got %d; want %d", len(tlp.pr.Streams), len(timestampsExpected))
}
data := tlp.pr.MarshalProtobuf(nil)
tlp2 := &insertutils.TestLogMessageProcessor{}
if err := parseProtobufRequest(data, tlp2, msgFields, false, true); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp2.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
f(`{
"streams": [
{
"stream": {
"foo": "bar",
"a": "b"
},
"values": [
["1577836800000000001", "{\"user_id\":\"123\"}"],
["1577836900005000002", "abc", {"trace_id":"pqw"}],
["1577836900005000003", "{def}"]
]
},
{
"stream": {
"x": "y"
},
"values": [
["1877836900005000004", "{\"trace_id\":\"432\",\"parent_id\":\"qwerty\"}"]
]
}
]
}`, []string{"a", "trace_id"}, []int64{1577836800000000001, 1577836900005000002, 1577836900005000003, 1877836900005000004}, `{"foo":"bar","a":"b","user_id":"123"}
{"foo":"bar","a":"b","trace_id":"pqw","_msg":"abc"}
{"foo":"bar","a":"b","_msg":"{def}"}
{"x":"y","_msg":"432","parent_id":"qwerty"}`)
}
func TestParsePromLabels_Success(t *testing.T) {
f := func(s string) {
t.Helper()


@@ -6,8 +6,6 @@ import (
"testing"
"time"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
)
@@ -31,7 +29,7 @@ func benchmarkParseProtobufRequest(b *testing.B, streams, rows, labels int) {
b.RunParallel(func(pb *testing.PB) {
body := getProtobufBody(streams, rows, labels)
for pb.Next() {
if err := parseProtobufRequest(body, blp, false); err != nil {
if err := parseProtobufRequest(body, blp, nil, false, true); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}
@@ -78,8 +76,5 @@ func getProtobufBody(streamsCount, rowsCount, labelsCount int) []byte {
Streams: streams,
}
body := pr.MarshalProtobuf(nil)
encodedBody := snappy.Encode(nil, body)
return encodedBody
return pr.MarshalProtobuf(nil)
}


@@ -2,21 +2,22 @@ package opentelemetry
import (
"fmt"
"io"
"net/http"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/pb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
var maxRequestSize = flagutil.NewBytes("opentelemetry.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single OpenTelemetry request")
// RequestHandler processes Opentelemetry insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
@@ -37,24 +38,6 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
func handleProtobuf(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsProtobufTotal.Inc()
reader := r.Body
if r.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(reader)
if err != nil {
httpserver.Errorf(w, r, "cannot initialize gzip reader: %s", err)
return
}
defer common.PutGzipReader(zr)
reader = zr
}
wcr := writeconcurrencylimiter.GetReader(reader)
data, err := io.ReadAll(wcr)
writeconcurrencylimiter.PutReader(wcr)
if err != nil {
httpserver.Errorf(w, r, "cannot read request body: %s", err)
return
}
cp, err := insertutils.GetCommonParams(r)
if err != nil {
@@ -66,12 +49,16 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
return
}
lmp := cp.NewLogMessageProcessor("opentelelemtry_protobuf")
useDefaultStreamFields := len(cp.StreamFields) == 0
err = pushProtobufRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("opentelelemtry_protobuf", false)
useDefaultStreamFields := len(cp.StreamFields) == 0
err := pushProtobufRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
return err
})
if err != nil {
httpserver.Errorf(w, r, "cannot parse OpenTelemetry protobuf request: %s", err)
httpserver.Errorf(w, r, "cannot read OpenTelemetry protocol data: %s", err)
return
}


@@ -24,6 +24,7 @@ func TestPushProtoOk(t *testing.T) {
t.Fatal(err)
}
}
// single line without resource attributes
f([]pb.ResourceLogs{
{
@@ -39,6 +40,7 @@ func TestPushProtoOk(t *testing.T) {
[]int64{1234},
`{"_msg":"log-line-message","severity":"Trace"}`,
)
// multi-line with resource attributes
f([]pb.ResourceLogs{
{
@@ -106,19 +108,21 @@ func TestPushProtoOk(t *testing.T) {
{
LogRecords: []pb.LogRecord{
{TimeUnixNano: 2347, SeverityNumber: 12, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-0")}},
{ObservedTimeUnixNano: 2348, SeverityNumber: 12, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-1")}},
{TraceID: "1234", SpanID: "45", ObservedTimeUnixNano: 2348, SeverityNumber: 12, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-1")}},
{TraceID: "4bf92f3577b34da6a3ce929d0e0e4736", SpanID: "00f067aa0ba902b7", ObservedTimeUnixNano: 3333, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-2")}},
},
},
},
},
},
[]int64{1234, 1235, 2345, 2346, 2347, 2348},
[]int64{1234, 1235, 2345, 2346, 2347, 2348, 3333},
`{"logger":"context","instance_id":"10","node_taints":"{\"role\":\"dev\",\"cluster_load_percent\":0.55}","_msg":"log-line-message","severity":"Trace"}
{"logger":"context","instance_id":"10","node_taints":"{\"role\":\"dev\",\"cluster_load_percent\":0.55}","_msg":"log-line-message-msg-2","severity":"Debug"}
{"_msg":"log-line-resource-scope-1-0-0","severity":"Info2"}
{"_msg":"log-line-resource-scope-1-0-1","severity":"Info2"}
{"_msg":"log-line-resource-scope-1-1-0","severity":"Info4"}
{"_msg":"log-line-resource-scope-1-1-1","severity":"Info4"}`,
{"_msg":"log-line-resource-scope-1-1-1","trace_id":"1234","span_id":"45","severity":"Info4"}
{"_msg":"log-line-resource-scope-1-1-2","trace_id":"4bf92f3577b34da6a3ce929d0e0e4736","span_id":"00f067aa0ba902b7","severity":"Unspecified"}`,
)
}


@@ -16,8 +16,6 @@ import (
"sync/atomic"
"time"
"github.com/klauspost/compress/gzip"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
@@ -27,7 +25,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
@@ -274,14 +272,14 @@ func runTCPListener(addr string, argIdx int) {
func checkCompressMethod(compressMethod, addr, protocol string) {
switch compressMethod {
case "", "none", "gzip", "deflate":
case "", "none", "zstd", "gzip", "deflate":
return
default:
logger.Fatalf("unsupported -syslog.compressMethod.%s=%q for -syslog.listenAddr.%s=%q; supported values: 'none', 'gzip', 'deflate'", protocol, compressMethod, protocol, addr)
logger.Fatalf("unsupported -syslog.compressMethod.%s=%q for -syslog.listenAddr.%s=%q; supported values: 'none', 'zstd', 'gzip', 'deflate'", protocol, compressMethod, protocol, addr)
}
}
func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, compressMethod string, useLocalTimestamp bool, streamFields, ignoreFields []string, extraFields []logstorage.Field) {
func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, encoding string, useLocalTimestamp bool, streamFields, ignoreFields []string, extraFields []logstorage.Field) {
gomaxprocs := cgroup.AvailableCPUs()
var wg sync.WaitGroup
localAddr := ln.LocalAddr()
@@ -314,7 +312,7 @@ func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, compressMethod st
}
bb.B = bb.B[:n]
udpRequestsTotal.Inc()
if err := processStream("udp", bb.NewReader(), compressMethod, useLocalTimestamp, cp); err != nil {
if err := processStream("udp", bb.NewReader(), encoding, useLocalTimestamp, cp); err != nil {
logger.Errorf("syslog: cannot process UDP data from %s at %s: %s", remoteAddr, localAddr, err)
}
}
@@ -323,7 +321,7 @@ func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, compressMethod st
wg.Wait()
}
func serveTCP(ln net.Listener, tenantID logstorage.TenantID, compressMethod string, useLocalTimestamp bool, streamFields, ignoreFields []string, extraFields []logstorage.Field) {
func serveTCP(ln net.Listener, tenantID logstorage.TenantID, encoding string, useLocalTimestamp bool, streamFields, ignoreFields []string, extraFields []logstorage.Field) {
var cm ingestserver.ConnsMap
cm.Init("syslog")
@@ -354,7 +352,7 @@ func serveTCP(ln net.Listener, tenantID logstorage.TenantID, compressMethod stri
wg.Add(1)
go func() {
cp := insertutils.GetCommonParamsForSyslog(tenantID, streamFields, ignoreFields, extraFields)
if err := processStream("tcp", c, compressMethod, useLocalTimestamp, cp); err != nil {
if err := processStream("tcp", c, encoding, useLocalTimestamp, cp); err != nil {
logger.Errorf("syslog: cannot process TCP data at %q: %s", addr, err)
}
@@ -369,49 +367,26 @@ func serveTCP(ln net.Listener, tenantID logstorage.TenantID, compressMethod stri
}
// processStream parses a stream of syslog messages from r and ingests them into vlstorage.
func processStream(protocol string, r io.Reader, compressMethod string, useLocalTimestamp bool, cp *insertutils.CommonParams) error {
func processStream(protocol string, r io.Reader, encoding string, useLocalTimestamp bool, cp *insertutils.CommonParams) error {
if err := vlstorage.CanWriteData(); err != nil {
return err
}
lmp := cp.NewLogMessageProcessor("syslog_" + protocol)
err := processStreamInternal(r, compressMethod, useLocalTimestamp, lmp)
lmp := cp.NewLogMessageProcessor("syslog_"+protocol, true)
err := processStreamInternal(r, encoding, useLocalTimestamp, lmp)
lmp.MustClose()
return err
}
func processStreamInternal(r io.Reader, compressMethod string, useLocalTimestamp bool, lmp insertutils.LogMessageProcessor) error {
switch compressMethod {
case "", "none":
case "gzip":
zr, err := common.GetGzipReader(r)
if err != nil {
return fmt.Errorf("cannot read gzipped data: %w", err)
}
r = zr
case "deflate":
zr, err := common.GetZlibReader(r)
if err != nil {
return fmt.Errorf("cannot read deflated data: %w", err)
}
r = zr
default:
logger.Panicf("BUG: unsupported compressMethod=%q; supported values: none, gzip, deflate", compressMethod)
func processStreamInternal(r io.Reader, encoding string, useLocalTimestamp bool, lmp insertutils.LogMessageProcessor) error {
reader, err := protoparserutil.GetUncompressedReader(r, encoding)
if err != nil {
return fmt.Errorf("cannot decode syslog data: %w", err)
}
defer protoparserutil.PutUncompressedReader(reader)
err := processUncompressedStream(r, useLocalTimestamp, lmp)
switch compressMethod {
case "gzip":
zr := r.(*gzip.Reader)
common.PutGzipReader(zr)
case "deflate":
zr := r.(io.ReadCloser)
common.PutZlibReader(zr)
}
return err
return processUncompressedStream(reader, useLocalTimestamp, lmp)
}
func processUncompressedStream(r io.Reader, useLocalTimestamp bool, lmp insertutils.LogMessageProcessor) error {


@@ -154,7 +154,7 @@ func readNextJSONObject(d *json.Decoder) ([]logstorage.Field, error) {
}
value, ok := t.(string)
if !ok {
return nil, fmt.Errorf("unexpected token read for oject value: %v; want string", t)
return nil, fmt.Errorf("unexpected token read for object value: %v; want string", t)
}
fields = append(fields, logstorage.Field{


@@ -31,7 +31,7 @@ var (
totalStreams = flag.Int("totalStreams", 0, "The number of total log streams; if -totalStreams > -activeStreams, then some active streams are substituted with new streams "+
"during data generation")
logsPerStream = flag.Int64("logsPerStream", 1_000, "The number of log entries to generate per each log stream. Log entries are evenly distributed between -start and -end")
constFieldsPerLog = flag.Int("constFieldsPerLog", 3, "The number of fields with constaint values to generate per each log entry; "+
constFieldsPerLog = flag.Int("constFieldsPerLog", 3, "The number of fields with constant values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
varFieldsPerLog = flag.Int("varFieldsPerLog", 1, "The number of fields with variable values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
@@ -177,7 +177,10 @@ func generateAndPushLogs(cfg *workerConfig, workerID int) {
sw := &statWriter{
w: pw,
}
bw := bufio.NewWriter(sw)
// The 1MB write buffer increases data ingestion performance by reducing the number of send() syscalls
bw := bufio.NewWriterSize(sw, 1024*1024)
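// Note: bufio.Writer flushes only when its buffer fills; the writing side must
// call bw.Flush() once generation finishes, or the buffered tail would be lost.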
doneCh := make(chan struct{})
go func() {
generateLogs(bw, workerID, cfg.activeStreams, cfg.totalStreams)


@@ -71,6 +71,7 @@ var vmuiFileServer = http.FileServer(http.FS(vmuiFiles))
// RequestHandler handles select requests for VictoriaLogs
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
path := r.URL.Path
if !strings.HasPrefix(path, "/select/") {
// Skip requests, which do not start with /select/, since these aren't our requests.
return false
@@ -105,38 +106,10 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
ctxWithTimeout, cancel := context.WithTimeout(ctx, d)
defer cancel()
stopCh := ctxWithTimeout.Done()
select {
case concurrencyLimitCh <- struct{}{}:
defer func() { <-concurrencyLimitCh }()
default:
// Sleep for a while until giving up. This should resolve short bursts in requests.
concurrencyLimitReached.Inc()
select {
case concurrencyLimitCh <- struct{}{}:
defer func() { <-concurrencyLimitCh }()
case <-stopCh:
switch ctxWithTimeout.Err() {
case context.Canceled:
remoteAddr := httpserver.GetQuotedRemoteAddr(r)
requestURI := httpserver.GetRequestURI(r)
logger.Infof("client has canceled the pending request after %.3f seconds: remoteAddr=%s, requestURI: %q",
time.Since(startTime).Seconds(), remoteAddr, requestURI)
case context.DeadlineExceeded:
concurrencyLimitTimeout.Inc()
err := &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("couldn't start executing the request in %.3f seconds, since -search.maxConcurrentRequests=%d concurrent requests "+
"are executed. Possible solutions: to reduce query load; to add more compute resources to the server; "+
"to increase -search.maxQueueDuration=%s; to increase -search.maxQueryDuration=%s; to increase -search.maxConcurrentRequests; "+
"to pass bigger value to 'timeout' query arg",
d.Seconds(), *maxConcurrentRequests, maxQueueDuration, maxQueryDuration),
StatusCode: http.StatusServiceUnavailable,
}
httpserver.Errorf(w, r, "%s", err)
}
return true
}
if !incRequestConcurrency(ctxWithTimeout, w, r) {
return true
}
defer decRequestConcurrency()
if path == "/select/logsql/tail" {
logsqlTailRequests.Inc()
@@ -163,7 +136,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
case context.DeadlineExceeded:
err = &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("the request couldn't be executed in %.3f seconds; possible solutions: "+
"to increase -search.maxQueryDuration=%s; to pass bigger value to 'timeout' query arg", d.Seconds(), maxQueryDuration),
"to increase -search.maxQueryDuration=%s; to pass bigger value to 'timeout' query arg", time.Since(startTime).Seconds(), maxQueryDuration),
StatusCode: http.StatusServiceUnavailable,
}
httpserver.Errorf(w, r, "%s", err)
@@ -174,6 +147,46 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
}
func incRequestConcurrency(ctx context.Context, w http.ResponseWriter, r *http.Request) bool {
startTime := time.Now()
stopCh := ctx.Done()
select {
case concurrencyLimitCh <- struct{}{}:
return true
default:
// Sleep for a while until giving up. This should resolve short bursts in requests.
concurrencyLimitReached.Inc()
select {
case concurrencyLimitCh <- struct{}{}:
return true
case <-stopCh:
switch ctx.Err() {
case context.Canceled:
remoteAddr := httpserver.GetQuotedRemoteAddr(r)
requestURI := httpserver.GetRequestURI(r)
logger.Infof("client has canceled the pending request after %.3f seconds: remoteAddr=%s, requestURI: %q",
time.Since(startTime).Seconds(), remoteAddr, requestURI)
case context.DeadlineExceeded:
concurrencyLimitTimeout.Inc()
err := &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("couldn't start executing the request in %.3f seconds, since -search.maxConcurrentRequests=%d concurrent requests "+
"are executed. Possible solutions: to reduce query load; to add more compute resources to the server; "+
"to increase -search.maxQueueDuration=%s; to increase -search.maxQueryDuration=%s; to increase -search.maxConcurrentRequests; "+
"to pass bigger value to 'timeout' query arg",
time.Since(startTime).Seconds(), *maxConcurrentRequests, maxQueueDuration, maxQueryDuration),
StatusCode: http.StatusServiceUnavailable,
}
httpserver.Errorf(w, r, "%s", err)
}
return false
}
}
}
func decRequestConcurrency() {
<-concurrencyLimitCh
}
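The limiter extracted above is a buffered channel used as a counting semaphore with a bounded wait: a fast-path send, then a blocking send raced against ctx.Done(). A minimal standalone sketch of the same pattern; the capacity is made up, whereas the real one comes from -search.maxConcurrentRequests:

package limiter

import "context"

var semaphoreCh = make(chan struct{}, 16) // capacity picked for illustration

// acquire reserves a slot; it queues until ctx is canceled or its deadline fires.
func acquire(ctx context.Context) bool {
	select {
	case semaphoreCh <- struct{}{}: // fast path: a free slot
		return true
	default: // all slots busy; fall through and wait
	}
	select {
	case semaphoreCh <- struct{}{}: // wait in the queue for a free slot
		return true
	case <-ctx.Done(): // gave up: canceled or deadline exceeded
		return false
	}
}

// release frees a slot reserved by a successful acquire.
func release() { <-semaphoreCh }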
func processSelectRequest(ctx context.Context, w http.ResponseWriter, r *http.Request, path string) bool {
httpserver.EnableCORS(w, r)
startTime := time.Now()


@@ -2349,4 +2349,4 @@ VictoriaMetrics performs the following implicit conversions for incoming queries
is passed to [rollup function](#rollup-functions), then a [subquery](#subqueries) with `1i` lookbehind window and `1i` step is automatically formed.
For example, `rate(sum(up))` is automatically converted to `rate((sum(default_rollup(up)))[1i:1i])`.
This behavior can be disabled or logged via `-search.disableImplicitConversion` and `-search.logImplicitConversion` command-line flags
starting from [`v1.101.0` release](https://docs.victoriametrics.com/changelog/).
starting from [`v1.102.0-rc2` release](https://docs.victoriametrics.com/changelog/changelog_2024/#v11020-rc2).

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -35,10 +35,10 @@
<meta property="og:title" content="UI for VictoriaLogs">
<meta property="og:url" content="https://victoriametrics.com/products/victorialogs/">
<meta property="og:description" content="Explore your log data with VictoriaLogs UI">
<script type="module" crossorigin src="./assets/index-DuTUAk-m.js"></script>
<script type="module" crossorigin src="./assets/index-BgdvCSTM.js"></script>
<link rel="modulepreload" crossorigin href="./assets/vendor-DojlIpLz.js">
<link rel="stylesheet" crossorigin href="./assets/vendor-D1GxaB_c.css">
<link rel="stylesheet" crossorigin href="./assets/index-CEiptoJw.css">
<link rel="stylesheet" crossorigin href="./assets/index-u4IOGr0E.css">
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>


@@ -7,9 +7,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@@ -22,16 +22,16 @@ var (
// InsertHandler processes csv data from req.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
return stream.Parse(req, func(rows []parser.Row) error {
return stream.Parse(req, func(rows []csvimport.Row) error {
return insertRows(at, rows, extraLabels)
})
}
func insertRows(at *auth.Token, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, rows []csvimport.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)

View File

@@ -7,10 +7,10 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@@ -23,7 +23,7 @@ var (
// InsertHandlerForHTTP processes remote write for DataDog POST /api/beta/sketches request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}

View File

@@ -7,10 +7,10 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@@ -23,7 +23,7 @@ var (
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}

View File

@@ -7,10 +7,10 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@@ -25,7 +25,7 @@ var (
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}

View File

@@ -21,7 +21,7 @@ var (
//
// See https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol
func InsertHandler(r io.Reader) error {
return stream.Parse(r, false, func(rows []parser.Row) error {
return stream.Parse(r, "", func(rows []parser.Row) error {
return insertRows(nil, rows)
})
}

View File

@@ -12,9 +12,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@@ -35,8 +35,8 @@ var (
// InsertHandlerForReader processes remote write for influx line protocol.
//
// See https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener/
func InsertHandlerForReader(at *auth.Token, r io.Reader, isGzipped bool) error {
return stream.Parse(r, true, isGzipped, "", "", func(db string, rows []parser.Row) error {
func InsertHandlerForReader(at *auth.Token, r io.Reader, encoding string) error {
return stream.Parse(r, encoding, true, "", "", func(db string, rows []influx.Row) error {
return insertRows(at, db, rows, nil)
})
}
@@ -45,22 +45,22 @@ func InsertHandlerForReader(at *auth.Token, r io.Reader, isGzipped bool) error {
//
// See https://github.com/influxdata/influxdb/blob/4cbdc197b8117fee648d62e2e5be75c6575352f0/tsdb/README.md
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
isStreamMode := req.Header.Get("Stream-Mode") == "1"
q := req.URL.Query()
precision := q.Get("precision")
// Read db tag from https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint
db := q.Get("db")
return stream.Parse(req.Body, isStreamMode, isGzipped, precision, db, func(db string, rows []parser.Row) error {
encoding := req.Header.Get("Content-Encoding")
isStreamMode := req.Header.Get("Stream-Mode") == "1"
return stream.Parse(req.Body, encoding, isStreamMode, precision, db, func(db string, rows []influx.Row) error {
return insertRows(at, db, rows, extraLabels)
})
}
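
The recurring change across these ingestion handlers replaces the gzip-only `isGzipped` boolean with the raw Content-Encoding header value, leaving the decoding decision to the stream parser. A minimal sketch of what such a dispatch can look like (gzip only; the full set of encodings supported by protoparserutil isn't visible in this diff):

```go
package ingest

import (
	"compress/gzip"
	"fmt"
	"io"
)

// wrapReader decodes a request body according to the raw Content-Encoding
// value that these handlers now pass to stream.Parse.
func wrapReader(r io.Reader, encoding string) (io.Reader, error) {
	switch encoding {
	case "", "identity":
		return r, nil // body isn't compressed
	case "gzip":
		return gzip.NewReader(r)
	default:
		return nil, fmt.Errorf("unsupported Content-Encoding: %q", encoding)
	}
}
```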
func insertRows(at *auth.Token, db string, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, db string, rows []influx.Row, extraLabels []prompbmarshal.Label) error {
ctx := getPushCtx()
defer putPushCtx(ctx)

View File

@@ -42,8 +42,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/pushmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
@@ -145,10 +145,10 @@ func main() {
startTime := time.Now()
remotewrite.StartIngestionRateLimiter()
remotewrite.Init()
common.StartUnmarshalWorkers()
protoparserutil.StartUnmarshalWorkers()
if len(*influxListenAddr) > 0 {
influxServer = influxserver.MustStart(*influxListenAddr, *influxUseProxyProtocol, func(r io.Reader) error {
return influx.InsertHandlerForReader(nil, r, false)
return influx.InsertHandlerForReader(nil, r, "")
})
}
if len(*graphiteListenAddr) > 0 {
@@ -195,7 +195,7 @@ func main() {
if len(*opentsdbHTTPListenAddr) > 0 {
opentsdbhttpServer.MustStop()
}
common.StopUnmarshalWorkers()
protoparserutil.StopUnmarshalWorkers()
remotewrite.Stop()
logger.Infof("successfully stopped vmagent in %.3f seconds", time.Since(startTime).Seconds())
@@ -282,7 +282,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
}
switch path {
case "/prometheus/api/v1/write", "/api/v1/write", "/api/v1/push", "/prometheus/api/v1/push":
if common.HandleVMProtoServerHandshake(w, r) {
if protoparserutil.HandleVMProtoServerHandshake(w, r) {
return true
}
prometheusWriteRequests.Inc()

View File

@@ -9,8 +9,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/native/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@@ -25,12 +25,12 @@ var (
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzip := req.Header.Get("Content-Encoding") == "gzip"
return stream.Parse(req.Body, isGzip, func(block *stream.Block) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(block *stream.Block) error {
return insertRows(at, block, extraLabels)
})
}

View File

@@ -10,9 +10,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/newrelic"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/newrelic/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
)
@@ -24,13 +24,12 @@ var (
// InsertHandlerForHTTP processes remote write for NewRelic POST /infra/v2/metrics/events/bulk request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
ce := req.Header.Get("Content-Encoding")
isGzip := ce == "gzip"
return stream.Parse(req.Body, isGzip, func(rows []newrelic.Row) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(rows []newrelic.Row) error {
return insertRows(at, rows, extraLabels)
})
}

View File

@@ -8,9 +8,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@@ -23,11 +23,11 @@ var (
// InsertHandler processes opentelemetry metrics.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
encoding := req.Header.Get("Content-Encoding")
var processBody func([]byte) ([]byte, error)
if req.Header.Get("Content-Type") == "application/json" {
if req.Header.Get("X-Amz-Firehose-Protocol-Version") != "" {
@@ -36,7 +36,7 @@ func InsertHandler(at *auth.Token, req *http.Request) error {
return fmt.Errorf("json encoding isn't supported for opentelemetry format. Use protobuf encoding")
}
}
return stream.ParseStream(req.Body, isGzipped, processBody, func(tss []prompbmarshal.TimeSeries) error {
return stream.ParseStream(req.Body, encoding, processBody, func(tss []prompbmarshal.TimeSeries) error {
return insertRows(at, tss, extraLabels)
})
}
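
The dispatch above accepts protobuf bodies as-is and JSON only when it arrives in an AWS Firehose envelope. A sketch of the same decision extracted into a helper, where firehoseUnwrap is a hypothetical stand-in for the unwrapping helper in the firehose package:

```go
package otel

import (
	"fmt"
	"net/http"
)

// pickBodyProcessor mirrors the dispatch in InsertHandler above: protobuf
// bodies pass through unchanged (nil processor), while JSON is accepted
// only when wrapped in an AWS Firehose envelope.
func pickBodyProcessor(req *http.Request, firehoseUnwrap func([]byte) ([]byte, error)) (func([]byte) ([]byte, error), error) {
	if req.Header.Get("Content-Type") != "application/json" {
		return nil, nil
	}
	if req.Header.Get("X-Amz-Firehose-Protocol-Version") != "" {
		return firehoseUnwrap, nil
	}
	return nil, fmt.Errorf("json encoding isn't supported for opentelemetry format. Use protobuf encoding")
}
```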

View File

@@ -7,9 +7,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -21,16 +21,16 @@ var (
// InsertHandler processes HTTP OpenTSDB put requests.
// See http://opentsdb.net/docs/build/html/api_http/put.html
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
return stream.Parse(req, func(rows []parser.Row) error {
return stream.Parse(req, func(rows []opentsdbhttp.Row) error {
return insertRows(at, rows, extraLabels)
})
}
func insertRows(at *auth.Token, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, rows []opentsdbhttp.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)

View File

@@ -8,9 +8,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@@ -23,23 +23,23 @@ var (
// InsertHandler processes `/api/v1/import/prometheus` request.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
defaultTimestamp, err := parserCommon.GetTimestamp(req)
defaultTimestamp, err := protoparserutil.GetTimestamp(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
return stream.Parse(req.Body, defaultTimestamp, isGzipped, true, func(rows []parser.Row) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, defaultTimestamp, encoding, true, func(rows []prometheus.Row) error {
return insertRows(at, rows, extraLabels)
}, func(s string) {
httpserver.LogError(req, s)
})
}
func insertRows(at *auth.Token, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, rows []prometheus.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)

View File

@@ -12,7 +12,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var (
@@ -26,7 +26,7 @@ func TestInsertHandler(t *testing.T) {
req := httptest.NewRequest(http.MethodPost, "/insert/0/api/v1/import/prometheus", bytes.NewBufferString(`{"foo":"bar"}
go_memstats_alloc_bytes_total 1`))
if err := InsertHandler(nil, req); err != nil {
t.Fatalf("unxepected error %s", err)
t.Fatalf("unexpected error %s", err)
}
expectedMsg := "cannot unmarshal Prometheus line"
if !strings.Contains(testOutput.String(), expectedMsg) {
@@ -44,14 +44,14 @@ func setUp() {
log.Fatalf("unable to set %q with value %q, err: %v", remoteWriteFlag, srv.URL, err)
}
logger.Init()
common.StartUnmarshalWorkers()
protoparserutil.StartUnmarshalWorkers()
remotewrite.Init()
testOutput = &bytes.Buffer{}
logger.SetOutputForTests(testOutput)
}
func tearDown() {
common.StopUnmarshalWorkers()
protoparserutil.StopUnmarshalWorkers()
srv.Close()
logger.ResetOutputForTest()
tmpDataDir := flag.Lookup("remoteWrite.tmpDataPath").Value.String()

View File

@@ -8,8 +8,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/promremotewrite/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@@ -22,7 +22,7 @@ var (
// InsertHandler processes remote write for prometheus.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}

View File

@@ -18,7 +18,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
@@ -170,7 +170,7 @@ func newHTTPClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persiste
doRequest := func(url string) (*http.Response, error) {
return c.doRequest(url, nil)
}
useVMProto = common.HandleVMProtoClientHandshake(c.remoteWriteURL, doRequest)
useVMProto = protoparserutil.HandleVMProtoClientHandshake(c.remoteWriteURL, doRequest)
if !useVMProto {
logger.Infof("the remote storage at %q doesn't support VictoriaMetrics remote write protocol. Switching to Prometheus remote write protocol. "+
"See https://docs.victoriametrics.com/vmagent/#victoriametrics-remote-write-protocol", sanitizedURL)

View File

@@ -118,7 +118,7 @@ type writeRequest struct {
}
func (wr *writeRequest) reset() {
// Do not reset lastFlushTime, fq, isVMRemoteWrite, significantFigures and roundDigits, since they are re-used.
// Do not reset lastFlushTime, fq, isVMRemoteWrite, significantFigures and roundDigits, since they are reused.
wr.wr.Timeseries = nil

View File

@@ -56,7 +56,7 @@ var (
"See https://docs.victoriametrics.com/stream-aggregation/#ignoring-old-samples")
streamAggrIgnoreFirstIntervals = flagutil.NewArrayInt("remoteWrite.streamAggr.ignoreFirstIntervals", 0, "Number of aggregation intervals to skip after the start "+
"for the corresponding -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. Increase this value if "+
"you observe incorrect aggregation results after vmagent restarts. It could be caused by receiving bufferred delayed data from clients pushing data into the vmagent. "+
"you observe incorrect aggregation results after vmagent restarts. It could be caused by receiving buffered delayed data from clients pushing data into the vmagent. "+
"See https://docs.victoriametrics.com/stream-aggregation/#ignore-aggregation-intervals-on-start")
streamAggrDropInputLabels = flagutil.NewArrayString("remoteWrite.streamAggr.dropInputLabels", "An optional list of labels to drop from samples "+
"before stream de-duplication and aggregation with -remoteWrite.streamAggr.config and -remoteWrite.streamAggr.dedupInterval at the corresponding -remoteWrite.url. "+
@@ -131,7 +131,7 @@ func reloadStreamAggrConfigGlobal() {
func initStreamAggrConfigGlobal() {
sas, err := newStreamAggrConfigGlobal()
if err != nil {
logger.Fatalf("cannot initialize gloabl stream aggregators: %s", err)
logger.Fatalf("cannot initialize global stream aggregators: %s", err)
}
if sas != nil {
filePath := sas.FilePath()

View File

@@ -9,8 +9,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
@@ -26,17 +26,17 @@ var (
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
return stream.Parse(req.Body, isGzipped, func(rows []parser.Row) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(rows []vmimport.Row) error {
return insertRows(at, rows, extraLabels)
})
}
func insertRows(at *auth.Token, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, rows []vmimport.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)

View File

@@ -88,6 +88,9 @@ func (g *Group) Validate(validateTplFn ValidateTplFn, validateExpressions bool)
if g.EvalOffset.Duration() > g.Interval.Duration() {
return fmt.Errorf("eval_offset should be smaller than interval; now eval_offset: %v, interval: %v", g.EvalOffset.Duration(), g.Interval.Duration())
}
if g.EvalOffset != nil && g.EvalDelay != nil {
return fmt.Errorf("eval_offset cannot be used with eval_delay")
}
if g.Limit < 0 {
return fmt.Errorf("invalid limit %d, shouldn't be less than 0", g.Limit)
}

View File

@@ -9,7 +9,7 @@ groups:
annotations:
summary: "{{ }}"
description: "{{$labels}}"
- alert: UnkownAnnotationsFunction
- alert: UnknownAnnotationsFunction
for: 5m
expr: vm_rows > 0
labels:

View File

@@ -1,7 +1,7 @@
groups:
- name: group
rules:
- alert: UnkownLabelFunction
- alert: UnknownLabelFunction
for: 5m
expr: vm_rows > 0
labels:

View File

@@ -169,7 +169,6 @@ groups:
checkCfg(nil)
groupsLen = lenLocked(m)
if groupsLen != 2 {
fmt.Println(m.groups)
t.Fatalf("expected to have exactly 2 groups loaded; got %d", groupsLen)
}

View File

@@ -83,7 +83,8 @@ func (m *manager) close() {
func (m *manager) startGroup(ctx context.Context, g *rule.Group, restore bool) error {
m.wg.Add(1)
id := g.ID()
id := g.GetID()
g.Init()
go func() {
defer m.wg.Done()
if restore {
@@ -112,7 +113,7 @@ func (m *manager) update(ctx context.Context, groupsCfg []config.Group, restore
}
}
ng := rule.NewGroup(cfg, m.querierBuilder, *evaluationInterval, m.labels)
groupsRegistry[ng.ID()] = ng
groupsRegistry[ng.GetID()] = ng
}
if rrPresent && m.rw == nil {
@@ -130,17 +131,17 @@ func (m *manager) update(ctx context.Context, groupsCfg []config.Group, restore
m.groupsMu.Lock()
for _, og := range m.groups {
ng, ok := groupsRegistry[og.ID()]
ng, ok := groupsRegistry[og.GetID()]
if !ok {
// old group is not present in new list,
// so must be stopped and deleted
og.Close()
delete(m.groups, og.ID())
delete(m.groups, og.GetID())
og = nil
continue
}
delete(groupsRegistry, ng.ID())
if og.Checksum != ng.Checksum {
delete(groupsRegistry, ng.GetID())
if og.GetCheckSum() != ng.GetCheckSum() {
toUpdate = append(toUpdate, updateItem{old: og, new: ng})
}
}

View File

@@ -144,7 +144,7 @@ func TestManagerUpdate_Success(t *testing.T) {
}
for _, wantG := range groupsExpected {
gotG, ok := m.groups[wantG.ID()]
gotG, ok := m.groups[wantG.CreateID()]
if !ok {
t.Fatalf("expected to have group %q", wantG.Name)
}
@@ -258,7 +258,7 @@ func compareGroups(t *testing.T, a, b *rule.Group) {
}
for i, r := range a.Rules {
got, want := r, b.Rules[i]
if a.ID() != b.ID() {
if a.CreateID() != b.CreateID() {
t.Fatalf("expected to have rule %q; got %q", want.ID(), got.ID())
}
if err := rule.CompareRules(t, want, got); err != nil {

View File

@@ -37,8 +37,9 @@ type AlertManager struct {
type notifierMetrics struct {
set *metrics.Set
alertsSent *metrics.Counter
alertsSendErrors *metrics.Counter
alertsSent *metrics.Counter
alertsSendErrors *metrics.Counter
alertsSendDuration *metrics.Histogram
}
func newNotifierMetrics(addr string) *notifierMetrics {
@@ -46,9 +47,10 @@ func newNotifierMetrics(addr string) *notifierMetrics {
metrics.RegisterSet(set)
return &notifierMetrics{
set: set,
alertsSent: set.GetOrCreateCounter(fmt.Sprintf("vmalert_alerts_sent_total{addr=%q}", addr)),
alertsSendErrors: set.GetOrCreateCounter(fmt.Sprintf("vmalert_alerts_send_errors_total{addr=%q}", addr)),
set: set,
alertsSent: set.NewCounter(fmt.Sprintf("vmalert_alerts_sent_total{addr=%q}", addr)),
alertsSendErrors: set.NewCounter(fmt.Sprintf("vmalert_alerts_send_errors_total{addr=%q}", addr)),
alertsSendDuration: set.NewHistogram(fmt.Sprintf("vmalert_alerts_send_duration_seconds{addr=%q}", addr)),
}
}
@@ -72,7 +74,9 @@ func (am AlertManager) Addr() string {
// Send an alert or resolve message
func (am *AlertManager) Send(ctx context.Context, alerts []Alert, headers map[string]string) error {
am.metrics.alertsSent.Add(len(alerts))
startTime := time.Now()
err := am.send(ctx, alerts, headers)
am.metrics.alertsSendDuration.UpdateDuration(startTime)
if err != nil {
am.metrics.alertsSendErrors.Add(len(alerts))
}
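
The new vmalert_alerts_send_duration_seconds histogram is updated via UpdateDuration, which records the time elapsed since startTime. A minimal sketch of the same measurement pattern with the github.com/VictoriaMetrics/metrics package:

```go
package notifier

import (
	"fmt"
	"time"

	"github.com/VictoriaMetrics/metrics"
)

// timedSend wraps a send function with the same duration tracking that
// Send uses above.
func timedSend(addr string, send func() error) error {
	h := metrics.GetOrCreateHistogram(fmt.Sprintf("vmalert_alerts_send_duration_seconds{addr=%q}", addr))
	startTime := time.Now()
	err := send()
	h.UpdateDuration(startTime) // records time.Since(startTime)
	return err
}
```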

View File

@@ -102,9 +102,9 @@ func Init(gen AlertURLGenerator, extLabels map[string]string, extURL string) (fu
if len(*addrs) > 0 || *configPath != "" {
return nil, fmt.Errorf("only one of -notifier.blackhole, -notifier.url and -notifier.config flags must be specified")
}
notifier := newBlackHoleNotifier()
staticNotifiersFn = func() []Notifier {
return []Notifier{newBlackHoleNotifier()}
return []Notifier{notifier}
}
return staticNotifiersFn, nil
}

View File

@@ -133,7 +133,7 @@ func NewAlertingRule(qb datasource.QuerierBuilder, group *Group, cfg config.Rule
KeepFiringFor: cfg.KeepFiringFor.Duration(),
Labels: cfg.Labels,
Annotations: cfg.Annotations,
GroupID: group.ID(),
GroupID: group.GetID(),
GroupName: group.Name,
File: group.File,
EvalInterval: group.Interval,
@@ -159,12 +159,11 @@ func NewAlertingRule(qb datasource.QuerierBuilder, group *Group, cfg config.Rule
ar.state = &ruleState{
entries: make([]StateEntry, entrySize),
}
ar.metrics = newAlertingRuleMetrics(group.metrics.set, ar)
return ar
}
func (ar *AlertingRule) registerMetrics(g *Group) {
ar.metrics = newAlertingRuleMetrics(g.metrics.set, ar)
func (ar *AlertingRule) registerMetrics(set *metrics.Set) {
ar.metrics = newAlertingRuleMetrics(set, ar)
}
// close unregisters rule metrics

View File

@@ -198,7 +198,7 @@ func TestAlertingRule_Exec(t *testing.T) {
fakeGroup := Group{
Name: "TestRule_Exec",
}
rule.GroupID = fakeGroup.ID()
rule.GroupID = fakeGroup.GetID()
for i, step := range steps {
fq.Reset()
fq.Add(step...)
@@ -556,7 +556,7 @@ func TestAlertingRuleExecRange(t *testing.T) {
fq := &datasource.FakeQuerier{}
rule.q = fq
rule.GroupID = fakeGroup.ID()
rule.GroupID = fakeGroup.GetID()
fq.Add(data...)
gotTS, err := rule.execRange(context.TODO(), time.Unix(1, 0), time.Unix(5, 0))
if err != nil {
@@ -625,7 +625,7 @@ func TestAlertingRuleExecRange(t *testing.T) {
{State: notifier.StatePending, ActiveAt: time.Unix(5, 0)},
}, map[uint64]*notifier.Alert{
hash(map[string]string{"alertname": "for-pending"}): {
GroupID: fakeGroup.ID(),
GroupID: fakeGroup.GetID(),
Name: "for-pending",
Labels: map[string]string{"alertname": "for-pending"},
Annotations: map[string]string{"activeAt": "5000"},
@@ -644,7 +644,7 @@ func TestAlertingRuleExecRange(t *testing.T) {
{State: notifier.StateFiring, ActiveAt: time.Unix(1, 0)},
}, map[uint64]*notifier.Alert{
hash(map[string]string{"alertname": "for-firing"}): {
GroupID: fakeGroup.ID(),
GroupID: fakeGroup.GetID(),
Name: "for-firing",
Labels: map[string]string{"alertname": "for-firing"},
Annotations: map[string]string{"activeAt": "1000"},
@@ -664,7 +664,7 @@ func TestAlertingRuleExecRange(t *testing.T) {
{State: notifier.StatePending, ActiveAt: time.Unix(5, 0)},
}, map[uint64]*notifier.Alert{
hash(map[string]string{"alertname": "for-hold-pending"}): {
GroupID: fakeGroup.ID(),
GroupID: fakeGroup.GetID(),
Name: "for-hold-pending",
Labels: map[string]string{"alertname": "for-hold-pending"},
Annotations: map[string]string{"activeAt": "5000"},
@@ -719,7 +719,7 @@ func TestAlertingRuleExecRange(t *testing.T) {
},
}, map[uint64]*notifier.Alert{
hash(map[string]string{"alertname": "multi-series"}): {
GroupID: fakeGroup.ID(),
GroupID: fakeGroup.GetID(),
Name: "multi-series",
Labels: map[string]string{"alertname": "multi-series"},
Annotations: map[string]string{},
@@ -730,7 +730,7 @@ func TestAlertingRuleExecRange(t *testing.T) {
For: 3 * time.Second,
},
hash(map[string]string{"alertname": "multi-series", "foo": "bar"}): {
GroupID: fakeGroup.ID(),
GroupID: fakeGroup.GetID(),
Name: "multi-series",
Labels: map[string]string{"alertname": "multi-series", "foo": "bar"},
Annotations: map[string]string{},
@@ -789,6 +789,7 @@ func TestGroup_Restore(t *testing.T) {
}
fg := NewGroup(config.Group{Name: "TestRestore", Rules: rules}, fqr, time.Second, nil)
fg.Init()
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
@@ -930,7 +931,7 @@ func TestGroup_Restore(t *testing.T) {
},
})
// one active alert with restore labels missmatch
// one active alert with restore labels mismatch
ts = time.Now().Truncate(time.Hour)
fqr.Set(`default_rollup(ALERTS_FOR_STATE{alertgroup="TestRestore",alertname="foo",env="dev"}[3600s])`,
stateMetric("foo", ts, "env", "dev", "team", "foo"))
@@ -975,7 +976,7 @@ func TestAlertingRule_Exec_Negative(t *testing.T) {
t.Fatalf("expected to get err; got nil")
}
if !strings.Contains(err.Error(), expErr) {
t.Fatalf("expected to get err %q; got %q insterad", expErr, err)
t.Fatalf("expected to get err %q; got %q instead", expErr, err)
}
}
@@ -1041,7 +1042,7 @@ func TestAlertingRule_Template(t *testing.T) {
Name: "TestRule_Exec",
}
fq := &datasource.FakeQuerier{}
rule.GroupID = fakeGroup.ID()
rule.GroupID = fakeGroup.GetID()
rule.q = fq
rule.state = &ruleState{
entries: make([]StateEntry, 10),

View File

@@ -27,19 +27,22 @@ import (
var (
ruleUpdateEntriesLimit = flag.Int("rule.updateEntriesLimit", 20, "Defines the max number of rule's state updates stored in-memory. "+
"Rule's updates are available on rule's Details page and are used for debugging purposes. The number of stored updates can be overridden per rule via update_entries_limit param.")
resendDelay = flag.Duration("rule.resendDelay", 0, "Minimum amount of time to wait before resending an alert to notifier")
resendDelay = flag.Duration("rule.resendDelay", 0, "Minimum amount of time to wait before resending an alert to notifier.")
maxResolveDuration = flag.Duration("rule.maxResolveDuration", 0, "Limits the maximum duration for automatic alert expiration, "+
"which by default is 4 times evaluationInterval of the parent group")
evalDelay = flag.Duration("rule.evalDelay", 30*time.Second, "Adjustment of the `time` parameter for rule evaluation requests to compensate intentional data delay from the datasource."+
"Normally, should be equal to `-search.latencyOffset` (cmd-line flag configured for VictoriaMetrics single-node or vmselect).")
evalDelay = flag.Duration("rule.evalDelay", 30*time.Second, "Adjustment of the `time` parameter for rule evaluation requests to compensate intentional data delay from the datasource. "+
"Normally, should be equal to `-search.latencyOffset` (cmd-line flag configured for VictoriaMetrics single-node or vmselect). "+
"This doesn't apply to groups with eval_offset specified.")
disableAlertGroupLabel = flag.Bool("disableAlertgroupLabel", false, "Whether to disable adding group's Name as label to generated alerts and time series.")
remoteReadLookBack = flag.Duration("remoteRead.lookback", time.Hour, "Lookback defines how far to look into the past for alerts timeseries."+
" For example, if lookback=1h then range from now() to now()-1h will be scanned.")
remoteReadLookBack = flag.Duration("remoteRead.lookback", time.Hour, "Lookback defines how far to look into the past for alerts timeseries. "+
"For example, if lookback=1h then range from now() to now()-1h will be scanned.")
)
// Group is an entity for grouping rules
type Group struct {
mu sync.RWMutex
mu sync.RWMutex
// id stores the unique id for the group, and shouldn't change after the group is created.
id uint64
Name string
File string
Rules []Rule
@@ -48,10 +51,11 @@ type Group struct {
EvalOffset *time.Duration
// EvalDelay will adjust timestamp for rule evaluation requests to compensate intentional query delay from datasource.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5155
EvalDelay *time.Duration
Limit int
Concurrency int
Checksum string
EvalDelay *time.Duration
Limit int
Concurrency int
// checksum stores the hash of yaml definition for this group.
checksum string
LastEvaluation time.Time
Labels map[string]string
@@ -67,7 +71,7 @@ type Group struct {
// evalCancel stores the cancel fn for interrupting
// rules evaluation. Used on groups update() and close().
evalCancel context.CancelFunc
// metrics contains metrics for the group and its rules; it is created during Init()
metrics *groupMetrics
// evalAlignment will make the timestamp of group query
// requests be aligned with interval
@@ -83,31 +87,6 @@ type groupMetrics struct {
iterationInterval *metrics.Gauge
}
func newGroupMetrics(g *Group) *groupMetrics {
m := &groupMetrics{}
m.set = metrics.NewSet()
labels := fmt.Sprintf(`group=%q, file=%q`, g.Name, g.File)
m.iterationTotal = m.set.GetOrCreateCounter(fmt.Sprintf(`vmalert_iteration_total{%s}`, labels))
m.iterationDuration = m.set.GetOrCreateSummary(fmt.Sprintf(`vmalert_iteration_duration_seconds{%s}`, labels))
m.iterationMissed = m.set.GetOrCreateCounter(fmt.Sprintf(`vmalert_iteration_missed_total{%s}`, labels))
m.iterationInterval = m.set.GetOrCreateGauge(fmt.Sprintf(`vmalert_iteration_interval_seconds{%s}`, labels), func() float64 {
g.mu.RLock()
i := g.Interval.Seconds()
g.mu.RUnlock()
return i
})
return m
}
func (m *groupMetrics) start() {
metrics.RegisterSet(m.set)
}
func (m *groupMetrics) close() {
metrics.UnregisterSet(m.set, true)
}
// merges group rule labels into result map
// set2 has priority over set1.
func mergeLabels(groupName, ruleName string, set1, set2 map[string]string) map[string]string {
@@ -134,7 +113,7 @@ func NewGroup(cfg config.Group, qb datasource.QuerierBuilder, defaultInterval ti
Interval: cfg.Interval.Duration(),
Limit: cfg.Limit,
Concurrency: cfg.Concurrency,
Checksum: cfg.Checksum,
checksum: cfg.Checksum,
Params: cfg.Params,
Headers: make(map[string]string),
NotifierHeaders: make(map[string]string),
@@ -157,13 +136,13 @@ func NewGroup(cfg config.Group, qb datasource.QuerierBuilder, defaultInterval ti
if cfg.EvalDelay != nil {
g.EvalDelay = &cfg.EvalDelay.D
}
g.id = g.CreateID()
for _, h := range cfg.Headers {
g.Headers[h.Key] = h.Value
}
for _, h := range cfg.NotifierHeaders {
g.NotifierHeaders[h.Key] = h.Value
}
g.metrics = newGroupMetrics(g)
rules := make([]Rule, len(cfg.Rules))
for i, r := range cfg.Rules {
var extraLabels map[string]string
@@ -193,12 +172,22 @@ func (g *Group) newRule(qb datasource.QuerierBuilder, r config.Rule) Rule {
return NewRecordingRule(qb, g, r)
}
// ID return unique group ID that consists of
// rules file and group Name
func (g *Group) ID() uint64 {
// GetCheckSum returns group checksum
func (g *Group) GetCheckSum() string {
g.mu.RLock()
defer g.mu.RUnlock()
return g.checksum
}
// GetID returns the unique group ID
func (g *Group) GetID() uint64 {
return g.id
}
// CreateID returns the unique ID based on group basic fields.
// Should only be called when creating a new group;
// use GetID() afterward.
func (g *Group) CreateID() uint64 {
hash := fnv.New64a()
hash.Write([]byte(g.File))
hash.Write([]byte("\xff"))
@@ -270,20 +259,17 @@ func (g *Group) updateWith(newGroup *Group) error {
}
// add the rest of rules from registry
for _, nr := range rulesRegistry {
nr.registerMetrics(g)
nr.registerMetrics(g.metrics.set)
newRules = append(newRules, nr)
}
// note that g.Interval is not updated here
// so the value can be compared later in
// group.Start function
g.Type = newGroup.Type
g.Concurrency = newGroup.Concurrency
g.Params = newGroup.Params
g.Headers = newGroup.Headers
g.NotifierHeaders = newGroup.NotifierHeaders
g.Labels = newGroup.Labels
g.Limit = newGroup.Limit
g.Checksum = newGroup.Checksum
g.checksum = newGroup.checksum
g.Rules = newRules
return nil
}
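
The ID refactoring above splits creation from access: CreateID hashes the group's basic fields once, the result is cached in the unexported id field, and GetID reads the cache. A trimmed sketch of the pattern, assuming the hash covers the rules file and group name as the old comment stated:

```go
package rule

import "hash/fnv"

// group is a trimmed stand-in for the Group type above: the fnv-1a hash
// is computed once when the group is built, cached in the unexported id
// field, and read via GetID afterwards, so the ID stays stable even if
// exported fields are mutated later.
type group struct {
	id   uint64
	File string
	Name string
}

func (g *group) createID() uint64 {
	h := fnv.New64a()
	h.Write([]byte(g.File))
	h.Write([]byte{0xff}) // separator between hashed fields
	h.Write([]byte(g.Name))
	return h.Sum64()
}

func newGroup(file, name string) *group {
	g := &group{File: file, Name: name}
	g.id = g.createID() // mirrors g.id = g.CreateID() in NewGroup
	return g
}

// GetID returns the cached ID.
func (g *group) GetID() uint64 { return g.id }
```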
@@ -309,23 +295,42 @@ func (g *Group) Close() {
g.InterruptEval()
<-g.finishedCh
g.metrics.close()
g.closeGroupMetrics()
}
func (g *Group) closeGroupMetrics() {
metrics.UnregisterSet(g.metrics.set, true)
}
// SkipRandSleepOnGroupStart will skip random sleep delay in group first evaluation
var SkipRandSleepOnGroupStart bool
// Init must be called before group Start()
func (g *Group) Init() {
ns := metrics.NewSet()
g.metrics = &groupMetrics{set: ns}
labels := fmt.Sprintf(`group=%q, file=%q`, g.Name, g.File)
g.metrics.iterationTotal = g.metrics.set.NewCounter(fmt.Sprintf(`vmalert_iteration_total{%s}`, labels))
g.metrics.iterationDuration = g.metrics.set.NewSummary(fmt.Sprintf(`vmalert_iteration_duration_seconds{%s}`, labels))
g.metrics.iterationMissed = g.metrics.set.NewCounter(fmt.Sprintf(`vmalert_iteration_missed_total{%s}`, labels))
g.metrics.iterationInterval = g.metrics.set.NewGauge(fmt.Sprintf(`vmalert_iteration_interval_seconds{%s}`, labels), func() float64 {
i := g.Interval.Seconds()
return i
})
for i := range g.Rules {
g.Rules[i].registerMetrics(g.metrics.set)
}
metrics.RegisterSet(g.metrics.set)
}
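
Init above moves metric registration out of the constructor: each group owns a metrics.Set, registers its gauges and its rules' metrics into it, and the whole set is unregistered in one call on Close. A minimal sketch of that lifecycle using only the calls visible in this diff:

```go
package rule

import (
	"fmt"

	"github.com/VictoriaMetrics/metrics"
)

// groupStats sketches the per-group metric lifecycle used by Init and
// Close above: everything a group registered can be dropped in a single
// call when the group goes away.
type groupStats struct {
	set            *metrics.Set
	iterationTotal *metrics.Counter
}

func newGroupStats(name, file string) *groupStats {
	s := &groupStats{set: metrics.NewSet()}
	labels := fmt.Sprintf(`group=%q, file=%q`, name, file)
	s.iterationTotal = s.set.NewCounter(fmt.Sprintf(`vmalert_iteration_total{%s}`, labels))
	metrics.RegisterSet(s.set) // expose the set via the global registry
	return s
}

func (s *groupStats) close() {
	// passing true also unregisters the set's metrics from the registry
	metrics.UnregisterSet(s.set, true)
}
```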
// Start starts group's evaluation
func (g *Group) Start(ctx context.Context, nts func() []notifier.Notifier, rw remotewrite.RWClient, rr datasource.QuerierBuilder) {
defer func() { close(g.finishedCh) }()
g.metrics.start()
evalTS := time.Now()
// sleep random duration to spread group rules evaluation
// over time in order to reduce load on datasource.
if !SkipRandSleepOnGroupStart {
sleepBeforeStart := delayBeforeStart(evalTS, g.ID(), g.Interval, g.EvalOffset)
sleepBeforeStart := delayBeforeStart(evalTS, g.GetID(), g.Interval, g.EvalOffset)
g.infof("will start in %v", sleepBeforeStart)
sleepTimer := time.NewTimer(sleepBeforeStart)
@@ -375,6 +380,7 @@ func (g *Group) Start(ctx context.Context, nts func() []notifier.Notifier, rw re
}
resolveDuration := getResolveDuration(g.Interval, *resendDelay, *maxResolveDuration)
// adjust request timestamp using evalDelay and evalAlignment if necessary
ts = g.adjustReqTimestamp(ts)
errs := e.execConcurrently(ctx, g.Rules, ts, g.Concurrency, resolveDuration, g.Limit)
for err := range errs {
@@ -468,10 +474,18 @@ func (g *Group) DeepCopy() *Group {
return &newG
}
// delayBeforeStart returns a duration on the interval between [ts..ts+interval].
// delayBeforeStart accounts for `offset`, so returned duration should be always
// bigger than the `offset`.
// if offset is specified, delayBeforeStart returns a duration that aligns the timestamp with the offset;
// otherwise, it returns a random duration between [0..interval] based on group key.
func delayBeforeStart(ts time.Time, key uint64, interval time.Duration, offset *time.Duration) time.Duration {
if offset != nil {
currentOffsetPoint := ts.Truncate(interval).Add(*offset)
if currentOffsetPoint.Before(ts) {
// wait until the next offset point
return currentOffsetPoint.Add(interval).Sub(ts)
}
return currentOffsetPoint.Sub(ts)
}
var randSleep time.Duration
randSleep = time.Duration(float64(interval) * (float64(key) / (1 << 64)))
sleepOffset := time.Duration(ts.UnixNano() % interval.Nanoseconds())
@@ -479,15 +493,6 @@ func delayBeforeStart(ts time.Time, key uint64, interval time.Duration, offset *
randSleep += interval
}
randSleep -= sleepOffset
// check if `ts` after randSleep is before `offset`,
// if it is, add extra eval_offset to randSleep.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3409.
if offset != nil {
tmpEvalTS := ts.Add(randSleep)
if tmpEvalTS.Before(tmpEvalTS.Truncate(interval).Add(*offset)) {
randSleep += *offset
}
}
return randSleep
}
@@ -593,26 +598,14 @@ func getResolveDuration(groupInterval, delta, maxDuration time.Duration) time.Du
}
func (g *Group) adjustReqTimestamp(timestamp time.Time) time.Time {
// if `eval_offset` is specified, timestamp is already aligned with offset, do nothing
if g.EvalOffset != nil {
// calculate the min timestamp on the evaluationInterval
intervalStart := timestamp.Truncate(g.Interval)
ts := intervalStart.Add(*g.EvalOffset)
if timestamp.Before(ts) {
// if passed timestamp is before the expected evaluation offset,
// then we should adjust it to the previous evaluation round.
// E.g. request with evaluationInterval=1h and evaluationOffset=30m
// was evaluated at 11:20. Then the timestamp should be adjusted
// to 10:30, to the previous evaluationInterval.
return ts.Add(-g.Interval)
}
// when `eval_offset` is using, ts shouldn't be effect by `eval_alignment` and `eval_delay`
// since it should be always aligned.
return ts
return timestamp
}
timestamp = timestamp.Add(-g.getEvalDelay())
// always apply the alignment as a last step
// apply the alignment as the last step
if g.evalAlignment == nil || *g.evalAlignment {
// align query time with interval to get similar result with grafana when plotting time series.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5049
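
The new offset branch in delayBeforeStart above replaces the old "add extra eval_offset to randSleep" workaround with direct alignment. A standalone sketch of the same computation:

```go
package rule

import "time"

// delayForOffset aligns evaluation with the offset point inside the
// current interval, or with the same point in the next interval when it
// has already passed.
func delayForOffset(ts time.Time, interval, offset time.Duration) time.Duration {
	point := ts.Truncate(interval).Add(offset)
	if point.Before(ts) {
		// the offset point of this interval is gone; wait for the next one
		point = point.Add(interval)
	}
	return point.Sub(ts)
}
```

With interval=5m and offset=3m this reproduces the updated TestGroupStartDelay expectations shown below: a start at 00:00:15 waits until 00:03:00, while a start at 00:03:30 waits until 00:08:00.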

View File

@@ -39,10 +39,11 @@ func TestUpdateWith(t *testing.T) {
f := func(currentRules, newRules []config.Rule) {
t.Helper()
ns := metrics.NewSet()
g := &Group{
Name: "test",
Name: "test",
metrics: &groupMetrics{set: ns},
}
g.metrics = newGroupMetrics(g)
qb := &datasource.FakeQuerier{}
for _, r := range currentRules {
r.ID = config.HashRule(r)
@@ -52,7 +53,6 @@ func TestUpdateWith(t *testing.T) {
ng := &Group{
Name: "test",
}
ng.metrics = newGroupMetrics(ng)
for _, r := range newRules {
r.ID = config.HashRule(r)
ng.Rules = append(ng.Rules, ng.newRule(qb, r))
@@ -198,7 +198,7 @@ func TestUpdateDuringRandSleep(t *testing.T) {
Interval: 100 * time.Hour,
updateCh: make(chan *Group),
}
g.metrics = newGroupMetrics(g)
g.Init()
go g.Start(context.Background(), nil, nil, nil)
rule1 := AlertingRule{
@@ -213,7 +213,6 @@ func TestUpdateDuringRandSleep(t *testing.T) {
&rule1,
},
}
g1.metrics = newGroupMetrics(g1)
g.updateCh <- g1
time.Sleep(10 * time.Millisecond)
g.mu.RLock()
@@ -237,7 +236,6 @@ func TestUpdateDuringRandSleep(t *testing.T) {
&rule2,
},
}
g2.metrics = newGroupMetrics(g2)
g.updateCh <- g2
time.Sleep(10 * time.Millisecond)
g.mu.RLock()
@@ -331,6 +329,7 @@ func TestGroupStart(t *testing.T) {
finished := make(chan struct{})
fs.Add(m1)
fs.Add(m2)
g.Init()
go func() {
g.Start(context.Background(), func() []notifier.Notifier { return []notifier.Notifier{fn} }, nil, fs)
close(finished)
@@ -487,6 +486,7 @@ func TestCloseWithEvalInterruption(t *testing.T) {
const evalInterval = time.Millisecond
g := NewGroup(groups[0], fq, evalInterval, nil)
g.Init()
go g.Start(context.Background(), nil, nil, nil)
@@ -533,34 +533,14 @@ func TestGroupStartDelay(t *testing.T) {
f("2023-01-01T00:00:29.000+00:00", "2023-01-01T00:00:30.000+00:00")
f("2023-01-01T00:00:31.000+00:00", "2023-01-01T00:05:30.000+00:00")
// test group with offset smaller than above fixed randSleep,
// this way randSleep will always be enough
offset := 20 * time.Second
// test group with offset
offset := 3 * time.Minute
g.EvalOffset = &offset
f("2023-01-01T00:00:00.000+00:00", "2023-01-01T00:00:30.000+00:00")
f("2023-01-01T00:00:29.000+00:00", "2023-01-01T00:00:30.000+00:00")
f("2023-01-01T00:00:31.000+00:00", "2023-01-01T00:05:30.000+00:00")
// test group with offset bigger than above fixed randSleep,
// this way offset will be added to delay
offset = 3 * time.Minute
g.EvalOffset = &offset
f("2023-01-01T00:00:00.000+00:00", "2023-01-01T00:03:30.000+00:00")
f("2023-01-01T00:00:29.000+00:00", "2023-01-01T00:03:30.000+00:00")
f("2023-01-01T00:01:00.000+00:00", "2023-01-01T00:08:30.000+00:00")
f("2023-01-01T00:03:30.000+00:00", "2023-01-01T00:08:30.000+00:00")
f("2023-01-01T00:07:30.000+00:00", "2023-01-01T00:13:30.000+00:00")
offset = 10 * time.Minute
g.EvalOffset = &offset
// interval of 1h and key generate a static delay of 6m
g.Interval = time.Hour
f("2023-01-01T00:00:00.000+00:00", "2023-01-01T00:16:00.000+00:00")
f("2023-01-01T00:05:00.000+00:00", "2023-01-01T00:16:00.000+00:00")
f("2023-01-01T00:30:00.000+00:00", "2023-01-01T01:16:00.000+00:00")
f("2023-01-01T00:00:15.000+00:00", "2023-01-01T00:03:00.000+00:00")
f("2023-01-01T00:01:00.000+00:00", "2023-01-01T00:03:00.000+00:00")
f("2023-01-01T00:03:30.000+00:00", "2023-01-01T00:08:00.000+00:00")
f("2023-01-01T00:08:00.000+00:00", "2023-01-01T00:08:00.000+00:00")
}
func TestGetPrometheusReqTimestamp(t *testing.T) {
@@ -590,17 +570,11 @@ func TestGetPrometheusReqTimestamp(t *testing.T) {
evalAlignment: &disableAlign,
}, "2023-08-28T11:11:00+00:00", "2023-08-28T11:10:30+00:00")
// with eval_offset, find previous offset point + default evalDelay
// with eval_offset
f(&Group{
EvalOffset: &offset,
Interval: time.Hour,
}, "2023-08-28T11:11:00+00:00", "2023-08-28T10:30:00+00:00")
// with eval_offset + default evalDelay
f(&Group{
EvalOffset: &offset,
Interval: time.Hour,
}, "2023-08-28T11:41:00+00:00", "2023-08-28T11:30:00+00:00")
}, "2023-08-28T11:30:00+00:00", "2023-08-28T11:30:00+00:00")
// 1h interval with eval_delay
f(&Group{

View File

@@ -88,7 +88,7 @@ func NewRecordingRule(qb datasource.QuerierBuilder, group *Group, cfg config.Rul
Name: cfg.Record,
Expr: cfg.Expr,
Labels: cfg.Labels,
GroupID: group.ID(),
GroupID: group.GetID(),
GroupName: group.Name,
File: group.File,
q: qb.BuildWithParams(datasource.QuerierParams{
@@ -110,13 +110,11 @@ func NewRecordingRule(qb datasource.QuerierBuilder, group *Group, cfg config.Rul
rr.state = &ruleState{
entries: make([]StateEntry, entrySize),
}
rr.metrics = newRecordingRuleMetrics(group.metrics.set, rr)
return rr
}
func (rr *RecordingRule) registerMetrics(g *Group) {
rr.metrics = newRecordingRuleMetrics(g.metrics.set, rr)
func (rr *RecordingRule) registerMetrics(set *metrics.Set) {
rr.metrics = newRecordingRuleMetrics(set, rr)
}
// close unregisters rule metrics

View File

@@ -194,7 +194,7 @@ func TestRecordingRule_ExecRange(t *testing.T) {
t.Fatalf("unexpected RecordingRule.execRange error: %s", err)
}
if err := compareTimeSeries(t, tssExpected, tss); err != nil {
t.Fatalf("timeseries missmatch: %s", err)
t.Fatalf("timeseries mismatch: %s", err)
}
}
@@ -384,7 +384,7 @@ func TestRecordingRuleExec_Negative(t *testing.T) {
t.Fatalf("expected to get err; got nil")
}
if !strings.Contains(err.Error(), expErr) {
t.Fatalf("expected to get err %q; got %q insterad", expErr, err)
t.Fatalf("expected to get err %q; got %q instead", expErr, err)
}
fq.Reset()

View File

@@ -7,6 +7,8 @@ import (
"sync"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
@@ -30,7 +32,7 @@ type Rule interface {
// unregister Rule metrics
unregisterMetrics()
// register Rule metrics with the given group
registerMetrics(g *Group)
registerMetrics(set *metrics.Set)
}
var errDuplicate = errors.New("result contains metrics with the same labelset during evaluation. See https://docs.victoriametrics.com/vmalert/#series-with-the-same-labelset for details")

View File

@@ -97,7 +97,7 @@ btn-primary
<a href="#group-{%s g.ID %}">{%s g.Name %}{% if g.Type != "prometheus" %} ({%s g.Type %}){% endif %} (every {%f.0 g.Interval %}s) #</a>
{% if rNotOk[g.ID] > 0 %}<span class="badge bg-danger" title="Number of rules with status Error">{%d rNotOk[g.ID] %}</span> {% endif %}
{% if rNoMatch[g.ID] > 0 %}<span class="badge bg-warning" title="Number of rules with status NoMatch">{%d rNoMatch[g.ID] %}</span> {% endif %}
<span class="badge bg-success" title="Number of rules withs status Ok">{%d rOk[g.ID] %}</span>
<span class="badge bg-success" title="Number of rules with status Ok">{%d rOk[g.ID] %}</span>
<p class="fs-6 fw-lighter">{%s g.File %}</p>
{% if len(g.Params) > 0 %}
<div class="fs-6 fw-lighter">Extra params

View File

@@ -355,7 +355,7 @@ func StreamListGroups(qw422016 *qt422016.Writer, r *http.Request, originGroups [
}
//line app/vmalert/web.qtpl:99
qw422016.N().S(`
<span class="badge bg-success" title="Number of rules withs status Ok">`)
<span class="badge bg-success" title="Number of rules with status Ok">`)
//line app/vmalert/web.qtpl:100
qw422016.N().D(rOk[g.ID])
//line app/vmalert/web.qtpl:100

View File

@@ -36,7 +36,7 @@ func TestHandler(t *testing.T) {
g.ExecOnce(context.Background(), func() []notifier.Notifier { return nil }, nil, time.Time{})
m := &manager{groups: map[uint64]*rule.Group{
g.ID(): g,
g.CreateID(): g,
}}
rh := &requestHandler{m: m}

View File

@@ -325,7 +325,7 @@ func groupToAPI(g *rule.Group) apiGroup {
g = g.DeepCopy()
ag := apiGroup{
// encode as string to avoid rounding
ID: fmt.Sprintf("%d", g.ID()),
ID: fmt.Sprintf("%d", g.GetID()),
Name: g.Name,
Type: g.Type.String(),

View File

@@ -41,7 +41,7 @@ func TestRecordingToApi(t *testing.T) {
Type: ruleTypeRecording,
DatasourceType: "prometheus",
ID: "1248",
GroupID: fmt.Sprintf("%d", g.ID()),
GroupID: fmt.Sprintf("%d", g.CreateID()),
GroupName: "group",
File: "rules.yaml",
MaxUpdates: 44,

View File

@@ -141,7 +141,7 @@ func (h *Header) UnmarshalYAML(f func(any) error) error {
n := strings.IndexByte(s, ':')
if n < 0 {
return fmt.Errorf("missing speparator char ':' between Name and Value in the header %q; expected format - 'Name: Value'", s)
return fmt.Errorf("missing separator char ':' between Name and Value in the header %q; expected format - 'Name: Value'", s)
}
h.Name = strings.TrimSpace(s[:n])
h.Value = strings.TrimSpace(s[n+1:])
@@ -173,7 +173,7 @@ type URLMap struct {
// DiscoverBackendIPs instructs discovering URLPrefix backend IPs via DNS.
DiscoverBackendIPs *bool `yaml:"discover_backend_ips,omitempty"`
// HeadersConf is the config for augumenting request and response headers.
// HeadersConf is the config for augmenting request and response headers.
HeadersConf HeadersConf `yaml:",inline"`
// RetryStatusCodes is the list of response status codes used for retrying requests.
@@ -438,7 +438,7 @@ func getFirstAvailableBackendURL(bus []*backendURL) *backendURL {
return bu
}
// Slow path - the first url is temporarily unavailabel. Fall back to the remaining urls.
// Slow path - the first url is temporarily unavailable. Fall back to the remaining urls.
for i := 1; i < len(bus); i++ {
if !bus[i].isBroken() {
bu = bus[i]

View File

@@ -26,8 +26,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/native/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
func main() {
@@ -293,7 +293,7 @@ func main() {
auth.WithBearer(c.String(vmNativeSrcBearerToken)),
auth.WithHeaders(c.String(vmNativeSrcHeaders)))
if err != nil {
return fmt.Errorf("error initilize auth config for source: %s", srcAddr)
return fmt.Errorf("error initialize auth config for source: %s", srcAddr)
}
// create TLS config
@@ -320,7 +320,7 @@ func main() {
auth.WithBearer(c.String(vmNativeDstBearerToken)),
auth.WithHeaders(c.String(vmNativeDstHeaders)))
if err != nil {
return fmt.Errorf("error initilize auth config for destination: %s", dstAddr)
return fmt.Errorf("error initialize auth config for destination: %s", dstAddr)
}
// create TLS config
@@ -382,9 +382,12 @@ func main() {
},
Before: beforeFn,
Action: func(c *cli.Context) error {
common.StartUnmarshalWorkers()
protoparserutil.StartUnmarshalWorkers()
blockPath := c.Args().First()
isBlockGzipped := c.Bool("gunzip")
encoding := ""
if c.Bool("gunzip") {
encoding = "gzip"
}
if len(blockPath) == 0 {
return cli.Exit("you must provide path for exported data block", 1)
}
@@ -395,7 +398,7 @@ func main() {
}
defer f.Close()
var blocksCount atomic.Uint64
if err := stream.Parse(f, isBlockGzipped, func(_ *stream.Block) error {
if err := stream.Parse(f, encoding, func(_ *stream.Block) error {
blocksCount.Add(1)
return nil
}); err != nil {

View File

@@ -442,7 +442,7 @@ func (si *mockSamplesIterator) At() (int64, float64) {
}
func (si *mockSamplesIterator) AtHistogram(*histogram.Histogram) (int64, *histogram.Histogram) {
panic("BUG: musn't be called")
panic("BUG: mustn't be called")
}
func (si *mockSamplesIterator) AtFloatHistogram(*histogram.FloatHistogram) (int64, *histogram.FloatHistogram) {

View File

@@ -17,8 +17,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmctl/vm"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/native/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"
)
@@ -206,13 +206,13 @@ func (rws *RemoteWriteServer) exportNativeHandler() http.Handler {
func (rws *RemoteWriteServer) importNativeHandler(t *testing.T) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
common.StartUnmarshalWorkers()
defer common.StopUnmarshalWorkers()
protoparserutil.StartUnmarshalWorkers()
defer protoparserutil.StopUnmarshalWorkers()
var gotTimeSeries []vm.TimeSeries
var mx sync.RWMutex
err := stream.Parse(r.Body, false, func(block *stream.Block) error {
err := stream.Parse(r.Body, "", func(block *stream.Block) error {
mn := &block.MetricName
var timeseries vm.TimeSeries
timeseries.Name = string(mn.MetricGroup)

View File

@@ -6,9 +6,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -19,16 +19,16 @@ var (
// InsertHandler processes /api/v1/import/csv requests.
func InsertHandler(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
return stream.Parse(req, func(rows []parser.Row) error {
return stream.Parse(req, func(rows []csvimport.Row) error {
return insertRows(rows, extraLabels)
})
}
func insertRows(rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(rows []csvimport.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)

View File

@@ -6,10 +6,10 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -20,7 +20,7 @@ var (
// InsertHandlerForHTTP processes remote write for DataDog POST /api/beta/sketches request.
func InsertHandlerForHTTP(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}


@@ -6,10 +6,10 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -20,7 +20,7 @@ var (
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request.
func InsertHandlerForHTTP(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}


@@ -6,10 +6,10 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -22,7 +22,7 @@ var (
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}


@@ -19,7 +19,7 @@ var (
//
// See https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol
func InsertHandler(r io.Reader) error {
return stream.Parse(r, false, insertRows)
return stream.Parse(r, "", insertRows)
}
func insertRows(rows []parser.Row) error {
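Plain-text Graphite pushed over the carbon socket protocol is expected uncompressed, so here the old boolean simply becomes a hardcoded empty encoding.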


@@ -10,9 +10,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
"github.com/VictoriaMetrics/metrics"
@@ -34,7 +34,7 @@ var (
//
// See https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener/
func InsertHandlerForReader(r io.Reader) error {
return stream.Parse(r, true, false, "", "", func(db string, rows []parser.Row) error {
return stream.Parse(r, "", true, "", "", func(db string, rows []influx.Row) error {
return insertRows(db, rows, nil)
})
}
@@ -43,22 +43,22 @@ func InsertHandlerForReader(r io.Reader) error {
//
// See https://github.com/influxdata/influxdb/blob/4cbdc197b8117fee648d62e2e5be75c6575352f0/tsdb/README.md
func InsertHandlerForHTTP(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
isStreamMode := req.Header.Get("Stream-Mode") == "1"
q := req.URL.Query()
precision := q.Get("precision")
// Read db tag from https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint
db := q.Get("db")
return stream.Parse(req.Body, isStreamMode, isGzipped, precision, db, func(db string, rows []parser.Row) error {
encoding := req.Header.Get("Content-Encoding")
isStreamMode := req.Header.Get("Stream-Mode") == "1"
return stream.Parse(req.Body, encoding, isStreamMode, precision, db, func(db string, rows []influx.Row) error {
return insertRows(db, rows, extraLabels)
})
}
func insertRows(db string, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(db string, rows []influx.Row, extraLabels []prompbmarshal.Label) error {
ctx := getPushCtx()
defer putPushCtx(ctx)
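Beyond the rename, note the behavioral widening in the InfluxDB handler: the old code recognized only gzip and treated any other Content-Encoding as plain text, while forwarding the raw header value lets the parser layer apply whichever decompressors it supports (zstd being the obvious addition, though the exact set is an assumption).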


@@ -10,7 +10,7 @@ import (
"github.com/VictoriaMetrics/metrics"
vminsertCommon "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadogsketches"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadogv1"
@@ -39,8 +39,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
)
@@ -87,8 +87,8 @@ var staticServer = http.FileServer(http.FS(staticFiles))
// Init initializes vminsert.
func Init() {
relabel.Init()
vminsertCommon.InitStreamAggr()
common.StartUnmarshalWorkers()
common.InitStreamAggr()
protoparserutil.StartUnmarshalWorkers()
if len(*graphiteListenAddr) > 0 {
graphiteServer = graphiteserver.MustStart(*graphiteListenAddr, *graphiteUseProxyProtocol, graphite.InsertHandler)
}
@@ -122,8 +122,8 @@ func Stop() {
if len(*opentsdbHTTPListenAddr) > 0 {
opentsdbhttpServer.MustStop()
}
common.StopUnmarshalWorkers()
vminsertCommon.MustStopStreamAggr()
protoparserutil.StopUnmarshalWorkers()
common.MustStopStreamAggr()
}
// RequestHandler is a handler for Prometheus remote storage write API
@@ -165,7 +165,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
}
switch path {
case "/prometheus/api/v1/write", "/api/v1/write", "/api/v1/push", "/prometheus/api/v1/push":
if common.HandleVMProtoServerHandshake(w, r) {
if protoparserutil.HandleVMProtoServerHandshake(w, r) {
return true
}
prometheusWriteRequests.Inc()
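As wired above, the lifecycle stays symmetric after the rename: unmarshal workers start before any ingestion listener comes up, and they are stopped only after every listener has shut down, so in-flight requests should never run without workers.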


@@ -8,8 +8,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/native/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
)
@@ -21,12 +21,12 @@ var (
// InsertHandler processes `/api/v1/import/native` request.
func InsertHandler(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzip := req.Header.Get("Content-Encoding") == "gzip"
return stream.Parse(req.Body, isGzip, func(block *stream.Block) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(block *stream.Block) error {
return insertRows(block, extraLabels)
})
}


@@ -8,9 +8,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/newrelic"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/newrelic/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var (
@@ -20,13 +20,12 @@ var (
// InsertHandlerForHTTP processes remote write for request to /newrelic/infra/v2/metrics/events/bulk request.
func InsertHandlerForHTTP(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
ce := req.Header.Get("Content-Encoding")
isGzip := ce == "gzip"
return stream.Parse(req.Body, isGzip, func(rows []newrelic.Row) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(rows []newrelic.Row) error {
return insertRows(rows, extraLabels)
})
}


@@ -7,9 +7,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -20,11 +20,11 @@ var (
// InsertHandler processes opentelemetry metrics.
func InsertHandler(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
encoding := req.Header.Get("Content-Encoding")
var processBody func([]byte) ([]byte, error)
if req.Header.Get("Content-Type") == "application/json" {
if req.Header.Get("X-Amz-Firehose-Protocol-Version") != "" {
@@ -33,7 +33,7 @@ func InsertHandler(req *http.Request) error {
return fmt.Errorf("json encoding isn't supported for opentelemetry format. Use protobuf encoding")
}
}
return stream.ParseStream(req.Body, isGzipped, processBody, func(tss []prompbmarshal.TimeSeries) error {
return stream.ParseStream(req.Body, encoding, processBody, func(tss []prompbmarshal.TimeSeries) error {
return insertRows(tss, extraLabels)
})
}


@@ -7,9 +7,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -24,11 +24,11 @@ func InsertHandler(req *http.Request) error {
path := req.URL.Path
switch path {
case "/opentsdb/api/put", "/api/put":
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
return stream.Parse(req, func(rows []parser.Row) error {
return stream.Parse(req, func(rows []opentsdbhttp.Row) error {
return insertRows(rows, extraLabels)
})
default:
@@ -36,7 +36,7 @@ func InsertHandler(req *http.Request) error {
}
}
func insertRows(rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(rows []opentsdbhttp.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)


@@ -7,9 +7,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -20,23 +20,23 @@ var (
// InsertHandler processes `/api/v1/import/prometheus` request.
func InsertHandler(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
defaultTimestamp, err := parserCommon.GetTimestamp(req)
defaultTimestamp, err := protoparserutil.GetTimestamp(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
return stream.Parse(req.Body, defaultTimestamp, isGzipped, true, func(rows []parser.Row) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, defaultTimestamp, encoding, true, func(rows []prometheus.Row) error {
return insertRows(rows, extraLabels)
}, func(s string) {
httpserver.LogError(req, s)
})
}
func insertRows(rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(rows []prometheus.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)


@@ -7,8 +7,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/promremotewrite/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
@@ -19,7 +19,7 @@ var (
// InsertHandler processes remote write for prometheus.
func InsertHandler(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}


@@ -8,8 +8,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
@@ -24,17 +24,17 @@ var (
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6
func InsertHandler(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
return stream.Parse(req.Body, isGzipped, func(rows []parser.Row) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(rows []vmimport.Row) error {
return insertRows(rows, extraLabels)
})
}
func insertRows(rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(rows []vmimport.Row, extraLabels []prompbmarshal.Label) error {
ctx := getPushCtx()
defer putPushCtx(ctx)


@@ -64,7 +64,7 @@ type series struct {
expr graphiteql.Expr
// consolidateFunc is applied to raw samples in order to generate data points algined to the given step.
// consolidateFunc is applied to raw samples in order to generate data points aligned to the given step.
// see series.consolidate() function for details.
consolidateFunc aggrFunc


@@ -36,7 +36,7 @@ func TestExecExprSuccess(t *testing.T) {
if err := compareSeries(ss, expectedSeries, expr); err != nil {
t.Fatalf("series mismatch for query %q: %s\ngot series\n%s\nexpected series\n%s", query, err, printSeriess(ss), printSeriess(expectedSeries))
}
// make sure ec isn't changed during query exection.
// make sure ec isn't changed during query execution.
if !reflect.DeepEqual(ec, &ecCopy) {
t.Fatalf("unexpected ec\ngot\n%v\nwant\n%v", &ecCopy, ec)
}


@@ -159,7 +159,7 @@
"asPercent": {
"name": "asPercent",
"function": "asPercent(seriesList, total=None, *nodes)",
"description": "Calculates a percentage of the total of a wildcard series. If `total` is specified,\neach series will be calculated as a percentage of that total. If `total` is not specified,\nthe sum of all points in the wildcard series will be used instead.\n\nA list of nodes can optionally be provided, if so they will be used to match series with their\ncorresponding totals following the same logic as :py:func:`groupByNodes <groupByNodes>`.\n\nWhen passing `nodes` the `total` parameter may be a series list or `None`. If it is `None` then\nfor each series in `seriesList` the percentage of the sum of series in that group will be returned.\n\nWhen not passing `nodes`, the `total` parameter may be a single series, reference the same number\nof series as `seriesList` or be a numeric value.\n\nExample:\n\n.. code-block:: none\n\n # Server01 connections failed and succeeded as a percentage of Server01 connections attempted\n &target=asPercent(Server01.connections.{failed,succeeded}, Server01.connections.attempted)\n\n # For each server, its connections failed as a percentage of its connections attempted\n &target=asPercent(Server*.connections.failed, Server*.connections.attempted)\n\n # For each server, its connections failed and succeeded as a percentage of its connections attemped\n &target=asPercent(Server*.connections.{failed,succeeded}, Server*.connections.attempted, 0)\n\n # apache01.threads.busy as a percentage of 1500\n &target=asPercent(apache01.threads.busy,1500)\n\n # Server01 cpu stats as a percentage of its total\n &target=asPercent(Server01.cpu.*.jiffies)\n\n # cpu stats for each server as a percentage of its total\n &target=asPercent(Server*.cpu.*.jiffies, None, 0)\n\nWhen using `nodes`, any series or totals that can't be matched will create output series with\nnames like ``asPercent(someSeries,MISSING)`` or ``asPercent(MISSING,someTotalSeries)`` and all\nvalues set to None. If desired these series can be filtered out by piping the result through\n``|exclude(\"MISSING\")`` as shown below:\n\n.. code-block:: none\n\n &target=asPercent(Server{1,2}.memory.used,Server{1,3}.memory.total,0)\n\n # will produce 3 output series:\n # asPercent(Server1.memory.used,Server1.memory.total) [values will be as expected]\n # asPercent(Server2.memory.used,MISSING) [all values will be None]\n # asPercent(MISSING,Server3.memory.total) [all values will be None]\n\n &target=asPercent(Server{1,2}.memory.used,Server{1,3}.memory.total,0)|exclude(\"MISSING\")\n\n # will produce 1 output series:\n # asPercent(Server1.memory.used,Server1.memory.total) [values will be as expected]\n\nEach node may be an integer referencing a node in the series name or a string identifying a tag.\n\n.. note::\n\n When `total` is a seriesList, specifying `nodes` to match series with the corresponding total\n series will increase reliability.",
"description": "Calculates a percentage of the total of a wildcard series. If `total` is specified,\neach series will be calculated as a percentage of that total. If `total` is not specified,\nthe sum of all points in the wildcard series will be used instead.\n\nA list of nodes can optionally be provided, if so they will be used to match series with their\ncorresponding totals following the same logic as :py:func:`groupByNodes <groupByNodes>`.\n\nWhen passing `nodes` the `total` parameter may be a series list or `None`. If it is `None` then\nfor each series in `seriesList` the percentage of the sum of series in that group will be returned.\n\nWhen not passing `nodes`, the `total` parameter may be a single series, reference the same number\nof series as `seriesList` or be a numeric value.\n\nExample:\n\n.. code-block:: none\n\n # Server01 connections failed and succeeded as a percentage of Server01 connections attempted\n &target=asPercent(Server01.connections.{failed,succeeded}, Server01.connections.attempted)\n\n # For each server, its connections failed as a percentage of its connections attempted\n &target=asPercent(Server*.connections.failed, Server*.connections.attempted)\n\n # For each server, its connections failed and succeeded as a percentage of its connections attempted\n &target=asPercent(Server*.connections.{failed,succeeded}, Server*.connections.attempted, 0)\n\n # apache01.threads.busy as a percentage of 1500\n &target=asPercent(apache01.threads.busy,1500)\n\n # Server01 cpu stats as a percentage of its total\n &target=asPercent(Server01.cpu.*.jiffies)\n\n # cpu stats for each server as a percentage of its total\n &target=asPercent(Server*.cpu.*.jiffies, None, 0)\n\nWhen using `nodes`, any series or totals that can't be matched will create output series with\nnames like ``asPercent(someSeries,MISSING)`` or ``asPercent(MISSING,someTotalSeries)`` and all\nvalues set to None. If desired these series can be filtered out by piping the result through\n``|exclude(\"MISSING\")`` as shown below:\n\n.. code-block:: none\n\n &target=asPercent(Server{1,2}.memory.used,Server{1,3}.memory.total,0)\n\n # will produce 3 output series:\n # asPercent(Server1.memory.used,Server1.memory.total) [values will be as expected]\n # asPercent(Server2.memory.used,MISSING) [all values will be None]\n # asPercent(MISSING,Server3.memory.total) [all values will be None]\n\n &target=asPercent(Server{1,2}.memory.used,Server{1,3}.memory.total,0)|exclude(\"MISSING\")\n\n # will produce 1 output series:\n # asPercent(Server1.memory.used,Server1.memory.total) [values will be as expected]\n\nEach node may be an integer referencing a node in the series name or a string identifying a tag.\n\n.. note::\n\n When `total` is a seriesList, specifying `nodes` to match series with the corresponding total\n series will increase reliability.",
"module": "graphite.render.functions",
"group": "Combine",
"params": [
@@ -635,7 +635,7 @@
"pct": {
"name": "pct",
"function": "pct(seriesList, total=None, *nodes)",
"description": "Calculates a percentage of the total of a wildcard series. If `total` is specified,\neach series will be calculated as a percentage of that total. If `total` is not specified,\nthe sum of all points in the wildcard series will be used instead.\n\nA list of nodes can optionally be provided, if so they will be used to match series with their\ncorresponding totals following the same logic as :py:func:`groupByNodes <groupByNodes>`.\n\nWhen passing `nodes` the `total` parameter may be a series list or `None`. If it is `None` then\nfor each series in `seriesList` the percentage of the sum of series in that group will be returned.\n\nWhen not passing `nodes`, the `total` parameter may be a single series, reference the same number\nof series as `seriesList` or be a numeric value.\n\nExample:\n\n.. code-block:: none\n\n # Server01 connections failed and succeeded as a percentage of Server01 connections attempted\n &target=asPercent(Server01.connections.{failed,succeeded}, Server01.connections.attempted)\n\n # For each server, its connections failed as a percentage of its connections attempted\n &target=asPercent(Server*.connections.failed, Server*.connections.attempted)\n\n # For each server, its connections failed and succeeded as a percentage of its connections attemped\n &target=asPercent(Server*.connections.{failed,succeeded}, Server*.connections.attempted, 0)\n\n # apache01.threads.busy as a percentage of 1500\n &target=asPercent(apache01.threads.busy,1500)\n\n # Server01 cpu stats as a percentage of its total\n &target=asPercent(Server01.cpu.*.jiffies)\n\n # cpu stats for each server as a percentage of its total\n &target=asPercent(Server*.cpu.*.jiffies, None, 0)\n\nWhen using `nodes`, any series or totals that can't be matched will create output series with\nnames like ``asPercent(someSeries,MISSING)`` or ``asPercent(MISSING,someTotalSeries)`` and all\nvalues set to None. If desired these series can be filtered out by piping the result through\n``|exclude(\"MISSING\")`` as shown below:\n\n.. code-block:: none\n\n &target=asPercent(Server{1,2}.memory.used,Server{1,3}.memory.total,0)\n\n # will produce 3 output series:\n # asPercent(Server1.memory.used,Server1.memory.total) [values will be as expected]\n # asPercent(Server2.memory.used,MISSING) [all values will be None]\n # asPercent(MISSING,Server3.memory.total) [all values will be None]\n\n &target=asPercent(Server{1,2}.memory.used,Server{1,3}.memory.total,0)|exclude(\"MISSING\")\n\n # will produce 1 output series:\n # asPercent(Server1.memory.used,Server1.memory.total) [values will be as expected]\n\nEach node may be an integer referencing a node in the series name or a string identifying a tag.\n\n.. note::\n\n When `total` is a seriesList, specifying `nodes` to match series with the corresponding total\n series will increase reliability.",
"description": "Calculates a percentage of the total of a wildcard series. If `total` is specified,\neach series will be calculated as a percentage of that total. If `total` is not specified,\nthe sum of all points in the wildcard series will be used instead.\n\nA list of nodes can optionally be provided, if so they will be used to match series with their\ncorresponding totals following the same logic as :py:func:`groupByNodes <groupByNodes>`.\n\nWhen passing `nodes` the `total` parameter may be a series list or `None`. If it is `None` then\nfor each series in `seriesList` the percentage of the sum of series in that group will be returned.\n\nWhen not passing `nodes`, the `total` parameter may be a single series, reference the same number\nof series as `seriesList` or be a numeric value.\n\nExample:\n\n.. code-block:: none\n\n # Server01 connections failed and succeeded as a percentage of Server01 connections attempted\n &target=asPercent(Server01.connections.{failed,succeeded}, Server01.connections.attempted)\n\n # For each server, its connections failed as a percentage of its connections attempted\n &target=asPercent(Server*.connections.failed, Server*.connections.attempted)\n\n # For each server, its connections failed and succeeded as a percentage of its connections attempted\n &target=asPercent(Server*.connections.{failed,succeeded}, Server*.connections.attempted, 0)\n\n # apache01.threads.busy as a percentage of 1500\n &target=asPercent(apache01.threads.busy,1500)\n\n # Server01 cpu stats as a percentage of its total\n &target=asPercent(Server01.cpu.*.jiffies)\n\n # cpu stats for each server as a percentage of its total\n &target=asPercent(Server*.cpu.*.jiffies, None, 0)\n\nWhen using `nodes`, any series or totals that can't be matched will create output series with\nnames like ``asPercent(someSeries,MISSING)`` or ``asPercent(MISSING,someTotalSeries)`` and all\nvalues set to None. If desired these series can be filtered out by piping the result through\n``|exclude(\"MISSING\")`` as shown below:\n\n.. code-block:: none\n\n &target=asPercent(Server{1,2}.memory.used,Server{1,3}.memory.total,0)\n\n # will produce 3 output series:\n # asPercent(Server1.memory.used,Server1.memory.total) [values will be as expected]\n # asPercent(Server2.memory.used,MISSING) [all values will be None]\n # asPercent(MISSING,Server3.memory.total) [all values will be None]\n\n &target=asPercent(Server{1,2}.memory.used,Server{1,3}.memory.total,0)|exclude(\"MISSING\")\n\n # will produce 1 output series:\n # asPercent(Server1.memory.used,Server1.memory.total) [values will be as expected]\n\nEach node may be an integer referencing a node in the series name or a string identifying a tag.\n\n.. note::\n\n When `total` is a seriesList, specifying `nodes` to match series with the corresponding total\n series will increase reliability.",
"module": "graphite.render.functions",
"group": "Combine",
"params": [
@@ -988,7 +988,7 @@
"integralByInterval": {
"name": "integralByInterval",
"function": "integralByInterval(seriesList, intervalUnit)",
"description": "This will do the same as integral() funcion, except resetting the total to 0\nat the given time in the parameter \"from\"\nUseful for finding totals per hour/day/week/..\n\nExample:\n\n.. code-block:: none\n\n &target=integralByInterval(company.sales.perMinute, \"1d\")&from=midnight-10days\n\nThis would start at zero on the left side of the graph, adding the sales each\nminute, and show the evolution of sales per day during the last 10 days.",
"description": "This will do the same as integral() function, except resetting the total to 0\nat the given time in the parameter \"from\"\nUseful for finding totals per hour/day/week/..\n\nExample:\n\n.. code-block:: none\n\n &target=integralByInterval(company.sales.perMinute, \"1d\")&from=midnight-10days\n\nThis would start at zero on the left side of the graph, adding the sales each\nminute, and show the evolution of sales per day during the last 10 days.",
"module": "graphite.render.functions",
"group": "Transform",
"params": [
@@ -1776,7 +1776,7 @@
"movingAverage": {
"name": "movingAverage",
"function": "movingAverage(seriesList, windowSize, xFilesFactor=None)",
"description": "Graphs the moving average of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList followed by a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), and an xFilesFactor value to specify\nhow many points in the window must be non-null for the output to be considered valid. Graphs the\naverage of the preceeding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingAverage(Server.instance01.threads.busy,10)\n &target=movingAverage(Server.instance*.threads.idle,'5min')",
"description": "Graphs the moving average of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList followed by a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), and an xFilesFactor value to specify\nhow many points in the window must be non-null for the output to be considered valid. Graphs the\naverage of the preceding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingAverage(Server.instance01.threads.busy,10)\n &target=movingAverage(Server.instance*.threads.idle,'5min')",
"module": "graphite.render.functions",
"group": "Calculate",
"params": [
@@ -1809,7 +1809,7 @@
"movingMax": {
"name": "movingMax",
"function": "movingMax(seriesList, windowSize, xFilesFactor=None)",
"description": "Graphs the moving maximum of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList followed by a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), and an xFilesFactor value to specify\nhow many points in the window must be non-null for the output to be considered valid. Graphs the\nmaximum of the preceeding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingMax(Server.instance01.requests,10)\n &target=movingMax(Server.instance*.errors,'5min')",
"description": "Graphs the moving maximum of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList followed by a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), and an xFilesFactor value to specify\nhow many points in the window must be non-null for the output to be considered valid. Graphs the\nmaximum of the preceding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingMax(Server.instance01.requests,10)\n &target=movingMax(Server.instance*.errors,'5min')",
"module": "graphite.render.functions",
"group": "Calculate",
"params": [
@@ -1842,7 +1842,7 @@
"movingMedian": {
"name": "movingMedian",
"function": "movingMedian(seriesList, windowSize, xFilesFactor=None)",
"description": "Graphs the moving median of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList followed by a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), and an xFilesFactor value to specify\nhow many points in the window must be non-null for the output to be considered valid. Graphs the\nmedian of the preceeding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingMedian(Server.instance01.threads.busy,10)\n &target=movingMedian(Server.instance*.threads.idle,'5min')",
"description": "Graphs the moving median of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList followed by a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), and an xFilesFactor value to specify\nhow many points in the window must be non-null for the output to be considered valid. Graphs the\nmedian of the preceding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingMedian(Server.instance01.threads.busy,10)\n &target=movingMedian(Server.instance*.threads.idle,'5min')",
"module": "graphite.render.functions",
"group": "Calculate",
"params": [
@@ -1875,7 +1875,7 @@
"movingMin": {
"name": "movingMin",
"function": "movingMin(seriesList, windowSize, xFilesFactor=None)",
"description": "Graphs the moving minimum of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList followed by a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), and an xFilesFactor value to specify\nhow many points in the window must be non-null for the output to be considered valid. Graphs the\nminimum of the preceeding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingMin(Server.instance01.requests,10)\n &target=movingMin(Server.instance*.errors,'5min')",
"description": "Graphs the moving minimum of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList followed by a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), and an xFilesFactor value to specify\nhow many points in the window must be non-null for the output to be considered valid. Graphs the\nminimum of the preceding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingMin(Server.instance01.requests,10)\n &target=movingMin(Server.instance*.errors,'5min')",
"module": "graphite.render.functions",
"group": "Calculate",
"params": [
@@ -1908,7 +1908,7 @@
"movingSum": {
"name": "movingSum",
"function": "movingSum(seriesList, windowSize, xFilesFactor=None)",
"description": "Graphs the moving sum of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList followed by a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), and an xFilesFactor value to specify\nhow many points in the window must be non-null for the output to be considered valid. Graphs the\nsum of the preceeding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingSum(Server.instance01.requests,10)\n &target=movingSum(Server.instance*.errors,'5min')",
"description": "Graphs the moving sum of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList followed by a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), and an xFilesFactor value to specify\nhow many points in the window must be non-null for the output to be considered valid. Graphs the\nsum of the preceding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingSum(Server.instance01.requests,10)\n &target=movingSum(Server.instance*.errors,'5min')",
"module": "graphite.render.functions",
"group": "Calculate",
"params": [
@@ -1941,7 +1941,7 @@
"movingWindow": {
"name": "movingWindow",
"function": "movingWindow(seriesList, windowSize, func='average', xFilesFactor=None)",
"description": "Graphs a moving window function of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList, a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), a function to apply to the points\nin the window to produce the output, and an xFilesFactor value to specify how many points in the\nwindow must be non-null for the output to be considered valid. Graphs the\noutput of the function for the preceeding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingWindow(Server.instance01.threads.busy,10)\n &target=movingWindow(Server.instance*.threads.idle,'5min','median',0.5)\n\n.. note::\n\n `xFilesFactor` follows the same semantics as in Whisper storage schemas. Setting it to 0 (the\n default) means that only a single value in a given interval needs to be non-null, setting it to\n 1 means that all values in the interval must be non-null. A setting of 0.5 means that at least\n half the values in the interval must be non-null.",
"description": "Graphs a moving window function of a metric (or metrics) over a fixed number of\npast points, or a time interval.\n\nTakes one metric or a wildcard seriesList, a number N of datapoints\nor a quoted string with a length of time like '1hour' or '5min' (See ``from /\nuntil`` in the :doc:`Render API <render_api>` for examples of time formats), a function to apply to the points\nin the window to produce the output, and an xFilesFactor value to specify how many points in the\nwindow must be non-null for the output to be considered valid. Graphs the\noutput of the function for the preceding datapoints for each point on the graph.\n\nExample:\n\n.. code-block:: none\n\n &target=movingWindow(Server.instance01.threads.busy,10)\n &target=movingWindow(Server.instance*.threads.idle,'5min','median',0.5)\n\n.. note::\n\n `xFilesFactor` follows the same semantics as in Whisper storage schemas. Setting it to 0 (the\n default) means that only a single value in a given interval needs to be non-null, setting it to\n 1 means that all values in the interval must be non-null. A setting of 0.5 means that at least\n half the values in the interval must be non-null.",
"module": "graphite.render.functions",
"group": "Calculate",
"params": [


@@ -15,6 +15,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/stats"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
@@ -29,7 +30,10 @@ import (
)
var (
deleteAuthKey = flagutil.NewPassword("deleteAuthKey", "authKey for metrics' deletion via /api/v1/admin/tsdb/delete_series and /tags/delSeries. It could be passed via authKey query arg. It overrides -httpAuth.*")
deleteAuthKey = flagutil.NewPassword("deleteAuthKey", "authKey for metrics' deletion via /api/v1/admin/tsdb/delete_series and /tags/delSeries. It could be passed via authKey query arg. It overrides -httpAuth.*")
metricNamesStatsResetAuthKey = flagutil.NewPassword("metricNamesStatsResetAuthKey", "authKey for resetting metric names usage cache via /api/v1/admin/status/metric_names_stats/reset. It overrides -httpAuth.*. "+
"See https://docs.victoriametrics.com/#track-ingested-metrics-usage")
maxConcurrentRequests = flag.Int("search.maxConcurrentRequests", getDefaultMaxConcurrentRequests(), "The maximum number of concurrent search requests. "+
"It shouldn't be high, since a single request can saturate all the CPU cores, while many concurrently executed requests may require high amounts of memory. "+
"See also -search.maxQueueDuration and -search.maxMemoryPerQuery")
@@ -178,7 +182,6 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
promql.ResetRollupResultCache()
return true
}
if strings.HasPrefix(path, "/api/v1/label/") {
s := path[len("/api/v1/label/"):]
if strings.HasSuffix(s, "/values") {
@@ -399,6 +402,26 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
}
w.WriteHeader(http.StatusNoContent)
return true
case "/api/v1/status/metric_names_stats":
metricNamesStatsRequests.Inc()
if err := stats.MetricNamesStatsHandler(qt, w, r); err != nil {
metricNamesStatsErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
return true
case "/api/v1/admin/status/metric_names_stats/reset":
metricNamesStatsResetRequests.Inc()
if !httpserver.CheckAuthFlag(w, r, metricNamesStatsResetAuthKey) {
return true
}
if err := stats.ResetMetricNamesStatsHandler(qt); err != nil {
metricNamesStatsResetErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
default:
return false
}
@@ -674,6 +697,12 @@ var (
metadataRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/metadata"}`)
buildInfoRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/buildinfo"}`)
queryExemplarsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/query_exemplars"}`)
metricNamesStatsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/status/metric_names_stats"}`)
metricNamesStatsErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/status/metric_names_stats"}`)
metricNamesStatsResetRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/admin/status/metric_names_stats/reset"}`)
metricNamesStatsResetErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/admin/status/metric_names_stats/reset"}`)
)
func proxyVMAlertRequests(w http.ResponseWriter, r *http.Request) {
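For completeness, a hedged sketch of exercising the two new endpoints from Go. The base URL is illustrative, and passing the reset key via an authKey query arg is an assumption drawn from how the neighboring *AuthKey flags describe themselves:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	base := "http://localhost:8428" // placeholder address of the query endpoint

	// Fetch metric names usage statistics (path taken from the diff above).
	resp, err := http.Get(base + "/api/v1/status/metric_names_stats")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("stats: %s\n", body)

	// Reset the usage cache. The handler answers 204 No Content on success;
	// the authKey query arg is an assumption, by analogy with -deleteAuthKey.
	resp, err = http.Get(base + "/api/v1/admin/status/metric_names_stats/reset?authKey=secret")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("reset status:", resp.StatusCode)
}
```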


@@ -1367,3 +1367,18 @@ func applyGraphiteRegexpFilter(filter string, ss []string) ([]string, error) {
//
// See https://github.com/golang/go/blob/704401ffa06c60e059c9e6e4048045b4ff42530a/src/runtime/malloc.go#L11
const maxFastAllocBlockSize = 32 * 1024
// GetMetricNamesStats returns statistics for time series metric names usage.
func GetMetricNamesStats(qt *querytracer.Tracer, limit, le int, matchPattern string) (storage.MetricNamesStatsResponse, error) {
qt = qt.NewChild("get metric names usage statistics with limit: %d, less or equal to: %d, match pattern=%q", limit, le, matchPattern)
defer qt.Done()
return vmstorage.GetMetricNamesStats(qt, limit, le, matchPattern)
}
// ResetMetricNamesStats resets state of metric names usage
func ResetMetricNamesStats(qt *querytracer.Tracer) error {
qt = qt.NewChild("reset metric names usage stats")
defer qt.Done()
vmstorage.ResetMetricNamesStats(qt)
return nil
}


@@ -237,7 +237,7 @@ WITH (
</pre>
<p>
4. Put node_cpu_seconds_total{commonFilters} into its own varialbe with the name cpuSeconds:
4. Put node_cpu_seconds_total{commonFilters} into its own variable with the name cpuSeconds:
</p>
<pre>
WITH (


@@ -350,7 +350,7 @@ WITH (
</pre>
<p>
4. Put node_cpu_seconds_total{commonFilters} into its own varialbe with the name cpuSeconds:
4. Put node_cpu_seconds_total{commonFilters} into its own variable with the name cpuSeconds:
</p>
<pre>
WITH (


@@ -53,12 +53,13 @@ var (
maxUniqueTimeseries = flag.Int("search.maxUniqueTimeseries", 0, "The maximum number of unique time series, which can be selected during /api/v1/query and /api/v1/query_range queries. This option allows limiting memory usage. "+
"When set to zero, the limit is automatically calculated based on -search.maxConcurrentRequests (inversely proportional) and memory available to the process (proportional).")
maxFederateSeries = flag.Int("search.maxFederateSeries", 1e6, "The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage")
maxExportSeries = flag.Int("search.maxExportSeries", 10e6, "The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage")
maxTSDBStatusSeries = flag.Int("search.maxTSDBStatusSeries", 10e6, "The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage")
maxSeriesLimit = flag.Int("search.maxSeries", 30e3, "The maximum number of time series, which can be returned from /api/v1/series. This option allows limiting memory usage")
maxDeleteSeries = flag.Int("search.maxDeleteSeries", 1e6, "The maximum number of time series, which can be deleted using /api/v1/admin/tsdb/delete_series. This option allows limiting memory usage")
maxLabelsAPISeries = flag.Int("search.maxLabelsAPISeries", 1e6, "The maximum number of time series, which could be scanned when searching for the matching time series "+
maxFederateSeries = flag.Int("search.maxFederateSeries", 1e6, "The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage")
maxExportSeries = flag.Int("search.maxExportSeries", 10e6, "The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage")
maxTSDBStatusSeries = flag.Int("search.maxTSDBStatusSeries", 10e6, "The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage")
maxSeriesLimit = flag.Int("search.maxSeries", 30e3, "The maximum number of time series, which can be returned from /api/v1/series. This option allows limiting memory usage")
maxDeleteSeries = flag.Int("search.maxDeleteSeries", 1e6, "The maximum number of time series, which can be deleted using /api/v1/admin/tsdb/delete_series. This option allows limiting memory usage")
maxTSDBStatusTopNSeries = flag.Int("search.maxTSDBStatusTopNSeries", 1000, "The maximum value of `topN` argument that can be passed to /api/v1/status/tsdb API. This option allows limiting memory usage. See https://docs.victoriametrics.com/readme/#tsdb-stats")
maxLabelsAPISeries = flag.Int("search.maxLabelsAPISeries", 1e6, "The maximum number of time series, which could be scanned when searching for the matching time series "+
"at /api/v1/labels and /api/v1/label/.../values. This option allows limiting memory usage and CPU usage. See also -search.maxLabelsAPIDuration, "+
"-search.maxTagKeys, -search.maxTagValues and -search.ignoreExtraFiltersAtLabelsAPI")
maxPointsPerTimeseries = flag.Int("search.maxPointsPerTimeseries", 30e3, "The maximum points per a single timeseries returned from /api/v1/query_range. "+
@@ -537,7 +538,7 @@ func LabelValuesHandler(qt *querytracer.Tracer, startTime time.Time, labelName s
defer bufferedwriter.Put(bw)
WriteLabelValuesResponse(bw, labelValues, qt)
if err := bw.Flush(); err != nil {
return fmt.Errorf("canot flush label values to remote client: %w", err)
return fmt.Errorf("cannot flush label values to remote client: %w", err)
}
return nil
}
@@ -584,8 +585,8 @@ func TSDBStatusHandler(qt *querytracer.Tracer, startTime time.Time, w http.Respo
if n <= 0 {
n = 1
}
if n > 1000 {
n = 1000
if n > *maxTSDBStatusTopNSeries {
n = *maxTSDBStatusTopNSeries
}
topN = n
}
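The previously hardcoded cap of 1000 on the topN argument of /api/v1/status/tsdb thus becomes tunable via the new -search.maxTSDBStatusTopNSeries flag (default 1000), e.g. -search.maxTSDBStatusTopNSeries=5000 for larger leaderboards (value illustrative).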
@@ -985,7 +986,7 @@ func removeEmptyValuesAndTimeseries(tss []netstorage.Result) []netstorage.Result
// Slow path: remove NaNs.
srcTimestamps := ts.Timestamps
dstValues := ts.Values[:0]
// Do not re-use ts.Timestamps for dstTimestamps, since ts.Timestamps
// Do not reuse ts.Timestamps for dstTimestamps, since ts.Timestamps
// may be shared among multiple time series.
dstTimestamps := make([]int64, 0, len(ts.Timestamps))
for j, v := range ts.Values {


@@ -478,7 +478,7 @@ func aggrFuncShare(afa *aggrFuncArg) ([]*timeseries, error) {
}
sum += v
}
// Divide every non-negative value at poisition i by sum in order to get its' share.
// Divide every non-negative value at position i by sum in order to get its share.
for _, ts := range tss {
v := ts.Values[i]
if math.IsNaN(v) || v < 0 {


@@ -161,7 +161,7 @@ func (iafc *incrementalAggrFuncContext) finalizeTimeseries() []*timeseries {
finalizeAggrFunc(iac)
tss = append(tss, iac.ts)
}
// reset iafc state, so it could be re-used
// reset iafc state, so it could be reused
iafc.resetState()
return tss
}


@@ -1,6 +1,7 @@
package promql
import (
"bytes"
"fmt"
"math"
"strings"
@@ -539,13 +540,41 @@ func fillLeftNaNsWithRightValues(tssLeft, tssRight []*timeseries) {
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7759
// https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7640
func fillLeftNaNsWithRightValuesOrMerge(tssLeft, tssRight []*timeseries) {
if isScalar(tssRight) {
// fast path: if tssRight is scalar then it can be merged
// with tssLeft only when tssLeft is also a scalar.
// If tssLeft is not a scalar, then no need in comparing MetricNames.
// Typical case is: metric_selector or on() vector(0)
canBeMerged := isScalar(tssLeft)
valuesRight := tssRight[0].Values
for _, tsLeft := range tssLeft {
valuesLeft := tsLeft.Values
for i, v := range valuesLeft {
leftIsNaN := math.IsNaN(v)
valueRight := valuesRight[i]
if leftIsNaN && canBeMerged {
// fill NaNs with valueRight if labels match
valuesLeft[i] = valueRight
}
if !leftIsNaN || canBeMerged {
// set NaN to valueRight if valueLeft is not NaN
// or if left and right can be merged
valuesRight[i] = nan
}
}
}
return
}
nameLeft, nameRight := bbPool.Get(), bbPool.Get()
for _, tsLeft := range tssLeft {
valuesLeft := tsLeft.Values
nameLeft := tsLeft.MetricName.String()
nameLeft.B = marshalMetricNameSorted(nameLeft.B[:0], &tsLeft.MetricName)
for i, v := range valuesLeft {
leftIsNaN := math.IsNaN(v)
for _, tsRight := range tssRight {
canBeMerged := nameLeft == tsRight.MetricName.String()
nameRight.B = marshalMetricNameSorted(nameRight.B[:0], &tsRight.MetricName)
canBeMerged := bytes.Equal(nameLeft.B, nameRight.B)
valueRight := tsRight.Values[i]
if leftIsNaN && canBeMerged {
// fill NaNs with valueRight if labels match
@@ -559,6 +588,8 @@ func fillLeftNaNsWithRightValuesOrMerge(tssLeft, tssRight []*timeseries) {
}
}
}
bbPool.Put(nameLeft)
bbPool.Put(nameRight)
}
func binaryOpIfnot(bfa *binaryOpFuncArg) ([]*timeseries, error) {
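Two distinct optimizations land in fillLeftNaNsWithRightValuesOrMerge: a fast path that skips per-series name comparison entirely when the right-hand side is a scalar (the common `metric or on() vector(0)` shape), and, on the general path, comparing sorted-marshaled metric names held in pooled byte buffers instead of allocating a fresh String() for every series pair in the inner loop. The benchmark added below exercises both shapes.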


@@ -0,0 +1,107 @@
package promql
import (
"fmt"
"testing"
"github.com/VictoriaMetrics/metricsql"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
func BenchmarkBinaryOpOr(b *testing.B) {
mustParseMetricsQL := func(s string) *metricsql.BinaryOpExpr {
e, err := metricsql.Parse(s)
if err != nil {
b.Fatalf("cannot parse %q: %s", s, err)
}
be, ok := e.(*metricsql.BinaryOpExpr)
if !ok {
b.Fatalf("expected BinaryOpExpr from %q: %s", s, err)
}
return be
}
ts := func(name string) *timeseries {
mg := []byte(name)
if len(name) == 0 {
mg = nil
}
return &timeseries{
MetricName: storage.MetricName{
MetricGroup: mg,
},
Timestamps: []int64{1},
Values: []float64{1},
}
}
b.Run("tss:1 or tss:1", func(b *testing.B) {
bfa := &binaryOpFuncArg{
be: mustParseMetricsQL("a or b"),
left: []*timeseries{ts("a")},
right: []*timeseries{ts("b")},
}
benchmarkBinaryOpOr(b, bfa)
})
b.Run("tss:1 or tss:1000", func(b *testing.B) {
right := make([]*timeseries, 1000)
for i := range right {
right[i] = ts(fmt.Sprintf(`b{foo="%d"}`, i))
}
bfa := &binaryOpFuncArg{
be: mustParseMetricsQL("a or b"),
left: []*timeseries{ts(`a{foo="0"}`)},
right: right,
}
benchmarkBinaryOpOr(b, bfa)
})
b.Run("tss:1000 or tss:1", func(b *testing.B) {
left := make([]*timeseries, 1000)
for i := range left {
left[i] = ts(fmt.Sprintf(`b{foo="%d"}`, i))
}
bfa := &binaryOpFuncArg{
be: mustParseMetricsQL("a or b"),
left: left,
right: []*timeseries{ts(`b{foo="0"}`)},
}
benchmarkBinaryOpOr(b, bfa)
})
b.Run("tss:1000 or tss:1000", func(b *testing.B) {
left, right := make([]*timeseries, 1000), make([]*timeseries, 1000)
for i := range left {
left[i] = ts(fmt.Sprintf(`a{foo="%d"}`, i))
right[i] = ts(fmt.Sprintf(`b{foo="%d"}`, i))
}
bfa := &binaryOpFuncArg{
be: mustParseMetricsQL("a or b"),
left: left,
right: right,
}
benchmarkBinaryOpOr(b, bfa)
})
b.Run("tss:1000 or on() vector(0)", func(b *testing.B) {
left := make([]*timeseries, 1000)
for i := range left {
left[i] = ts(fmt.Sprintf(`a{foo="%d"}`, i))
}
bfa := &binaryOpFuncArg{
be: mustParseMetricsQL("a or on() vector(0)"),
left: left,
right: []*timeseries{ts(``)},
}
benchmarkBinaryOpOr(b, bfa)
})
}
func benchmarkBinaryOpOr(b *testing.B, bfa *binaryOpFuncArg) {
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
_, err := binaryOpOr(bfa)
if err != nil {
b.Fatalf("unexpected error: %s", err)
}
}
})
}
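The benchmark covers asymmetric (1 vs 1000), symmetric (1000 vs 1000) and `or on() vector(0)` inputs; it can be run with `go test -run=^$ -bench=BenchmarkBinaryOpOr ./app/vmselect/promql/`.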


@@ -12,6 +12,9 @@ import (
"time"
"unsafe"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/metricsql"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
@@ -25,8 +28,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/metricsql"
)
var (
@@ -814,7 +815,19 @@ func evalRollupFunc(qt *querytracer.Tracer, ec *EvalConfig, funcName string, rf
Err: fmt.Errorf("`@` modifier must return a single series; it returns %d series instead", len(tssAt)),
}
}
atTimestamp := int64(tssAt[0].Values[0] * 1000)
atValue := math.NaN()
for _, v := range tssAt[0].Values {
if !math.IsNaN(v) {
atValue = v
break
}
}
if math.IsNaN(atValue) {
return nil, &httpserver.UserReadableError{
Err: fmt.Errorf("`@` modifier must return a non-NaN value"),
}
}
atTimestamp := int64(atValue * 1000)
ecNew := copyEvalConfig(ec)
ecNew.Start = atTimestamp
ecNew.End = atTimestamp
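Previously the code took tssAt[0].Values[0] verbatim, so a leading NaN in the `@` expression's result silently produced a bogus timestamp; it now scans for the first non-NaN value and returns a user-readable error when every value is NaN.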
@@ -1821,7 +1834,7 @@ func evalRollupWithIncrementalAggregate(qt *querytracer.Tracer, funcName string,
samplesScannedTotal.Add(samplesScanned)
iafc.updateTimeseries(ts, workerID)
// ts.Timestamps points to sharedTimestamps. Zero it, so it can be re-used.
// ts.Timestamps points to sharedTimestamps. Zero it, so it can be reused.
ts.Timestamps = nil
ts.denyReuse = false
}


@@ -10195,7 +10195,7 @@ max_over_time(cpuIdle[1h:])`)
f("avg(foo[5m])")
f("sort(foo[5m])")
// These are valid subqueries with MetricsQL extention, which allows omitting lookbehind window for rollup functions
// These are valid subqueries with MetricsQL extension, which allows omitting lookbehind window for rollup functions
f("rate(rate(http_total)[5m:1m])")
f("rate(sum(rate(http_total))[5m:1m])")
f("rate(sum(rate(http_total))[5m:1m])")


@@ -121,7 +121,7 @@ func InitRollupResultCache(cachePath string) {
var c *workingsetcache.Cache
if len(rollupResultCachePath) > 0 {
if *resetRollupResultCacheOnStartup {
logger.Infof("removing rollupResult cache at %q becasue -search.resetRollupResultCacheOnStartup command-line flag is set", rollupResultCachePath)
logger.Infof("removing rollupResult cache at %q because -search.resetRollupResultCacheOnStartup command-line flag is set", rollupResultCachePath)
fs.MustRemoveAll(rollupResultCachePath)
} else {
logger.Infof("loading rollupResult cache from %q...", rollupResultCachePath)
@@ -474,7 +474,7 @@ func (rrc *rollupResultCache) getSeriesFromCache(qt *querytracer.Tracer, key []b
}
qt.Printf("load compressed entry from cache with size %d bytes", len(compressedResultBuf.B))
// Decompress into newly allocated byte slice, since tss returned from unmarshalTimeseriesFast
// refers to the byte slice, so it cannot be re-used.
// refers to the byte slice, so it cannot be reused.
resultBuf, err := encoding.DecompressZSTD(nil, compressedResultBuf.B)
if err != nil {
logger.Panicf("BUG: cannot decompress resultBuf from rollupResultCache: %s; it looks like it was improperly saved", err)

Some files were not shown because too many files have changed in this diff.