Compare commits

...

179 Commits

Author SHA1 Message Date
Jiekun
005068ea5a debug: [vmagent] log req header and resp header for scrape status code != 200 2024-12-24 21:36:01 +08:00
Dima Shur
7941877233 docs: changed typo in label (enterpriSe instead of enterpriZe) (#7925)
### Describe Your Changes

Fixed typo in contributing.md (enterpriZe -> enterpriSe in the label
name)

2024-12-24 15:51:35 +04:00
Phuong Le
f303081304 vminsert: sort the storage nodes during initialization (#7899)
Fixes #7898
2024-12-23 19:41:17 +01:00
Ted Possible
a84628f701 app/vminsert: support for rate limiting number of samples/sec with -maxIngestionRate
This commit adds a feature to limit the sample ingestion rate globally for ingestion protocols.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7377
2024-12-23 17:37:30 +01:00
Github Actions
f823a225ac Automatic update operator docs from VictoriaMetrics/operator@471f183 (#7916)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: f41gh7 <18450869+f41gh7@users.noreply.github.com>
2024-12-23 16:48:49 +01:00
Andrii Chubatiuk
79f1a37ee6 vlinsert: take into account order of msgfields to have predictable _msg field selection in case of multiple matches (#7784)
### Describe Your Changes

Currently, if multiple msgFields are present in a log row, it's not
obvious which field is selected as the _msg field. With this PR, the order
of msgField values, defined either via headers or query arg params,
defines the priority of these values.

2024-12-23 10:10:02 +01:00
Andrii Chubatiuk
f9cd408ca9 datadog-serverless: fixed metrics and logs ingestion from Datadog serverless extensions for AWS and GCP (#7769)
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7761

### Describe Your Changes

- the Datadog /api/v2/logs API supports a message field in JSON format, which
is undocumented and is used by the serverless extension. This PR allows the
message field to be both string and object type. Also added support for the
undocumented timestamp field
- added `-datadog.streamFields` and `-datadog.ignoreFields` flags to
configure default stream fields for Datadog logs, where there's no
alternative option to pass extra headers and query args
- added ingestion of `max` and `min` values of data ingested via the
`datadogsketches` API, which is also actively used by serverless
extensions
- use the default `.` separator instead of `_` for sketches metric names
until metrics are sanitized
2024-12-23 09:57:48 +01:00
Aliaksandr Valialkin
c2811d8d11 docs/VictoriaLogs/LogsQL.md: fix a link to count_uniq_hash stats function docs
It must be consistent with the other stats functions

This is a follow-up for de0ae735aa
2024-12-22 14:39:27 +01:00
Aliaksandr Valialkin
8d981b15c9 deployment: update VictoriaLogs Docker image from v1.3.2-victorialogs to v1.4.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.4.0-victorialogs
2024-12-22 14:36:49 +01:00
Aliaksandr Valialkin
58f09fe3f8 docs/VictoriaLogs/CHANGELOG.md: cut v1.4.0-victorialogs release 2024-12-22 14:31:31 +01:00
Aliaksandr Valialkin
afd926a0b0 lib/logstorage: limit the maximum number of logs and/or log streams, which can be passed to stream_context pipe
This should prevent excess usage of CPU, RAM and other resources when too many logs
are passed to the 'stream_context' pipe.

It is expected that 'stream_context' pipe results are investigated by humans, who cannot inspect
the surrounding logs for millions of initial logs. That's why it is OK to limit the number of logs
and/or log streams which can be passed to the 'stream_context' pipe.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7766
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7903
2024-12-22 14:28:50 +01:00
Aliaksandr Valialkin
204c102342 app/vlselect/vmui: run make vmui-logs-update after the commit 1fbc2c0db1
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7288
2024-12-22 13:53:45 +01:00
Aliaksandr Valialkin
c5949af9e8 lib/logstorage: reduce memory allocations when splitting in(...) values into tokens and calculating hashes for these tokens
While at it, reduce memory allocations at Storage.getFieldValuesNoHits and make it more scalable on multi-CPU systems.

This improves performance of the in(<query>) filter when the <query> returns a big number of values.
2024-12-22 13:13:44 +01:00
Aliaksandr Valialkin
5dc0413bc0 lib/logstorage: allow specifying hits column name in the top pipe via top ... hits as <column_name> syntax 2024-12-22 11:23:19 +01:00
Aliaksandr Valialkin
f919783de9 lib/logstorage: uncomment accidentally commented tests at 60f9f44150 2024-12-22 02:20:57 +01:00
Aliaksandr Valialkin
60f9f44150 lib/logstorage: reduce memory allocations at stats and top pipes
Use a chunked allocator in order to reduce memory allocations. It allocates objects from slices of up to 64KB in size.
This improves performance for `stats` and `top` pipes by up to 2x when they are applied to a big number of `by (...)` groups.

Also parallelize execution of the `count_uniq`, `count_uniq_hash` and `uniq_values` stats functions,
so they are executed faster on hosts with many CPU cores when applied to fields with a big number
of unique values.
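For illustration, a minimal sketch of the chunked-allocator idea (hypothetical names, not the actual lib/logstorage code): small objects are carved out of pre-allocated 64KB slices, so the GC tracks a few large chunks instead of a huge number of tiny allocations.

```go
package main

import "fmt"

const chunkSize = 64 * 1024

// chunkedAllocator hands out small byte slices carved from 64KB chunks,
// so the GC sees a handful of large allocations instead of many tiny ones.
type chunkedAllocator struct {
	chunk  []byte // current chunk
	offset int    // next free byte within chunk
}

func (a *chunkedAllocator) alloc(n int) []byte {
	if n > chunkSize {
		return make([]byte, n) // oversized requests get their own allocation
	}
	if a.chunk == nil || a.offset+n > len(a.chunk) {
		a.chunk = make([]byte, chunkSize)
		a.offset = 0
	}
	b := a.chunk[a.offset : a.offset+n : a.offset+n] // full-slice expr guards against overlapping appends
	a.offset += n
	return b
}

func main() {
	var a chunkedAllocator
	fmt.Println(len(a.alloc(100)), len(a.alloc(200))) // 100 200
}
```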
2024-12-22 02:13:02 +01:00
Github Actions
0fcbe8fdae Automatic update Grafana datasource docs from VictoriaMetrics/victoriametrics-datasource@b18583c (#7910) 2024-12-21 09:42:48 -08:00
Github Actions
458b602938 Automatic update Grafana datasource docs from VictoriaMetrics/victoriametrics-datasource@cbff3fa (#7909) 2024-12-21 09:31:07 -08:00
Aliaksandr Valialkin
471f1d0a09 lib/logstorage: fixed a typo in blockResult.reset()
The commit 4599429f51 improperly set br.cs to nil,
while it should have set br.bs to nil instead. This resulted in excess memory allocations
at br.csInit() and br.csInitFast().
2024-12-21 13:39:25 +01:00
hagen1778
7f80c1633f docs: mention filebeat version requirement for vlogs integration
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-20 16:21:13 +01:00
Yury Molodov
186b00df6b vmui: add export button for raw query data (#7828)
### Describe Your Changes

1. Added the ability to export data from the `Raw Query` page and import
exported data to the `Query Analyzer` page (related issue #7628).
2. Added a `Title` input field; the `Title` is displayed when importing
data on the `Query Analyzer` page.
3. Implemented `Markdown` support for comments in exported data.  
4. Updated the styling of the `Query Analyzer` page.  
5. Fixed an issue where the `Upload JSON` button on the `Query Analyzer`
page was only clickable on the button text (now clickable on the entire
button area).
6. Added a tooltip with `Deduplication` information on the `Raw Query`
page (related to issue #7763).

Screenshots are attached to the PR: data export and `Markdown` preview, the `Query Analyzer` page displaying data from `Raw Query`, viewing stats and comments on the `Query Analyzer` page, viewing stats data from the `Query` page, and the tooltip on the `Raw Query` page.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2024-12-20 15:51:06 +01:00
Github Actions
4205ae3011 Automatic update helm docs from VictoriaMetrics/helm-charts@feb0675 (#7897)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-20 15:27:51 +01:00
Daria Karavaieva
491028774a docs/vmanomaly: popup deprecated_from and available_from for all docs (#7905)
### Describe Your Changes
Added deprecated_from and available_from popups in vmanomaly docs.

2024-12-20 15:24:16 +01:00
Mathias Palmersheim
565b79c9ca docs: update vmalert+victorialogs doc with multitenant recording (#7779)
### Describe Your Changes
 
- Adds Headers to FAQ questions in vmalert for Victorialogs
- Adds FAQ for multitenant recording rules described in #7656


---------

Co-authored-by: Haley Wang <haley@victoriametrics.com>
2024-12-20 15:02:00 +01:00
Aliaksandr Valialkin
5478cc61c2 lib/cgroup: add missing initialization of gogc variable inside SetGOGC
This is a follow-up for 79c08ecac4

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7902
2024-12-20 14:56:59 +01:00
Aliaksandr Valialkin
79c08ecac4 lib/cgroup: use the default GOGC=100 for the most of VictoriaMetrics components
Historically, some of the VictoriaMetrics components were optimized for a low rate of memory allocations.
These are vmagent, single-node VictoriaMetrics and vmstorage. These components benefit from a low
GOGC value, since this allows reducing their memory usage in steady state on typical workloads.

Other VictoriaMetrics components aren't optimized for a reduced rate of memory allocations.
This results in increased CPU usage spent on garbage collection (GC) in these components,
since it must be triggered at a higher rate. See https://tip.golang.org/doc/gc-guide#GOGC for details.

These components do not use too much memory, so it is OK to increase the GOGC for these components
from 30 to 100 - this won't affect most users.

Keep GOGC at 30 only for the vmagent, single-node VictoriaMetrics and vmstorage components.
See 077193d87c and 54b9e1d3cb.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7902
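For illustration, a minimal sketch of how a Go component can apply such a default while still honoring an explicit GOGC environment variable; runtime/debug.SetGCPercent is the standard knob, and the helper name is hypothetical:

```go
package main

import (
	"os"
	"runtime/debug"
	"strconv"
)

// setDefaultGOGC applies defaultPercent unless the user set GOGC explicitly.
// Per the reasoning above, defaultPercent would be 30 for vmagent,
// single-node VictoriaMetrics and vmstorage, and 100 for the rest.
func setDefaultGOGC(defaultPercent int) {
	if s := os.Getenv("GOGC"); s != "" {
		if n, err := strconv.Atoi(s); err == nil {
			debug.SetGCPercent(n)
			return
		}
	}
	debug.SetGCPercent(defaultPercent)
}

func main() {
	setDefaultGOGC(100)
}
```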
2024-12-20 14:48:28 +01:00
hagen1778
f47fd83e54 docs: add example with dots in label name to vlogs rules
This change adds an example of how to use labels with `.` dots
in rule annotations.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-20 14:07:48 +01:00
Aliaksandr Valialkin
9c39bac565 lib/logstorage: fix improper sorting of numeric fields when they are stored as const values at sort pipe
Numeric fields can be stored as const values in a block of logs. In this case the `sort` pipe
incorrectly compared such values as strings instead of numbers, which produced incorrect
sort results: for example, 123 was smaller than 2. Fix this by removing the incorrect case
for comparing const fields.

While at it, replace lessString() with strings.LessNatural() in sortBlockLess.
This improves sorting performance a bit, since the sortBlockLess function already tries
comparing numeric values, so it doesn't need to spend CPU time on such a comparison again inside the lessString() call.
The commit 42c9183281 incorrectly replaced strings.LessNatural() with lessString()
inside the sortBlockLess() function.
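A self-contained sketch of the distinction at play (illustrative only, not the VictoriaLogs code): comparing numeric values as strings orders "123" before "2", while a numeric-aware comparison does not.

```go
package main

import (
	"fmt"
	"strconv"
)

// lessValues compares two field values numerically when both parse as
// numbers, and falls back to plain string comparison otherwise.
func lessValues(a, b string) bool {
	fa, errA := strconv.ParseFloat(a, 64)
	fb, errB := strconv.ParseFloat(b, 64)
	if errA == nil && errB == nil {
		return fa < fb
	}
	return a < b
}

func main() {
	fmt.Println("123" < "2")            // true: string order gives the wrong answer
	fmt.Println(lessValues("123", "2")) // false: 123 > 2 numerically
}
```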
2024-12-20 13:26:20 +01:00
Roman Khavronenko
1042f07498 docs: update OTEL guide (#7887)
* simplify wording
* update styles
* remove extra info about Go application details. The details are likely
not needed and we didn't have details for the rolling-dice app anyway. So
keep it simple for consistency and brevity.
* update navigation for simplicity's sake
* fix typos

follow-up after
40b47601d1

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-19 15:13:03 +01:00
Nikolay
79a595c6d0 app/vmauth: properly log host at debugInfo function (#7886)
vmauth started to use request.Host after commit
f4776fec1b for `src_hosts` routing rules.

This commit adds http.Request.Host to the debugInfo output in order to
be consistent with routing logic.


---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2024-12-19 15:04:37 +01:00
Andrii Chubatiuk
40b47601d1 docs/guides/otel: added logs integration, updated old otel dependencies
### Describe Your Changes

- added VictoriaLogs to OpenTelemetry guide
- updated deprecated dependencies
- added the deltatocumulative processor to the example, and a deltatemporality
selector to one of the examples, to use for counters by default
- added exponential histograms to example

---
Signed-off-by: Andrii Chubatiuk <andrew.chubatiuk@gmail.com>
2024-12-19 12:32:41 +01:00
Zakhar Bessarab
6bfcbe66f7 docs/release-guide: add a note about versioning in helm charts and ansible

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2024-12-19 12:28:32 +01:00
f41gh7
94118c63f6 docs: update VM apps version to v1.108.1
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2024-12-19 12:25:37 +01:00
f41gh7
9605d73809 CHANGELOG.md: cut v1.108.1 release 2024-12-18 23:34:58 +01:00
f41gh7
3237c64ef3 make vmui-update 2024-12-18 23:08:22 +01:00
Yury Molodov
1fbc2c0db1 vmui: fix cursor reset in query input
Fix cursor reset in query input field. 

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7288.
2024-12-18 22:30:08 +01:00
Nikolay
71bb9fc0d0 app/vminsert: properly apply relabeling at ingestion
A regression was introduced at 564e6ea024
after implementing:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6928

The ctx.Labels array could be incorrectly updated, and changes made to it
by relabeling rules could be lost.
E.g. ctx.Labels was passed to the WriteDataPoint function as a slice copy, but
the relabeling results only changed the actual slice at ctx.Labels.

This commit replaces the implicit relabeling call with an explicit
`TryPrepareLabels` function.
It also reduces code diffs with the cluster version and adds integration tests.
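The underlying Go pitfall can be shown with a small standalone sketch (not the vminsert code): a callee receives a copy of the slice header, so reassigning the slice inside the callee is invisible to the caller.

```go
package main

import "fmt"

type label struct{ Name, Value string }

// dropLast shrinks only its local copy of the slice header;
// the caller's slice keeps its original length.
func dropLast(labels []label) {
	labels = labels[:len(labels)-1] // lost once the function returns
}

// dropLastFixed returns the updated slice, making the change explicit;
// the same idea as switching to an explicit TryPrepareLabels-style call.
func dropLastFixed(labels []label) []label {
	return labels[:len(labels)-1]
}

func main() {
	ls := []label{{"job", "a"}, {"instance", "b"}}
	dropLast(ls)
	fmt.Println(len(ls)) // 2: the change was lost
	ls = dropLastFixed(ls)
	fmt.Println(len(ls)) // 1
}
```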

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7865

---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2024-12-18 22:27:51 +01:00
Github Actions
0210f4ebd2 Automatic update operator docs from VictoriaMetrics/operator@9b337c1 (#7879)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: f41gh7 <18450869+f41gh7@users.noreply.github.com>
2024-12-18 16:27:37 +01:00
Andrii Chubatiuk
891ad8f202 app/vlinsert: loki healthcheck endpoint (#7864)
### Describe Your Changes

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7824


---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2024-12-18 14:59:44 +01:00
Github Actions
e501640f44 Automatic update Grafana datasource docs from VictoriaMetrics/victoriametrics-datasource@d830b2a (#7855) 2024-12-18 12:27:15 +01:00
Daria Karavaieva
21082405ec docs/vmanomaly: add version popup (#7860)
### Describe Your Changes

Added the `available_from` popup to the vmanomaly documentation.

2024-12-18 12:26:42 +01:00
Github Actions
094a5ab58f Automatic update Grafana datasource docs from VictoriaMetrics/victorialogs-datasource@b8fd925 (#7862) 2024-12-18 12:26:11 +01:00
hagen1778
bbc84fa119 docs: add requirements to commit message to contributing
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-18 12:22:20 +01:00
Github Actions
9d1a72aca8 Automatic update operator docs from VictoriaMetrics/operator@5c3657d (#7871)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: f41gh7 <18450869+f41gh7@users.noreply.github.com>
2024-12-18 12:13:58 +01:00
Dmytro Kozlov
05d3db248b deployment/docker: rename victorialogs-datasource to victoriametrics-logs-datasource (#7874)
### Describe Your Changes

Renamed victorialogs-datasource to victoriametrics-logs-datasource.

We prepared the victorialogs Grafana plugin for signing and updated the
plugin ID. This action requires updating configs in our ops repository.

Please check this
[release](https://github.com/VictoriaMetrics/victorialogs-datasource/releases/tag/v0.13.0)
and https://github.com/VictoriaMetrics/victorialogs-datasource/pull/161
with the changes.

2024-12-18 11:41:16 +01:00
Github Actions
59d739ff0b Automatic update helm docs from VictoriaMetrics/helm-charts@3b7bfbd (#7876)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-18 11:40:39 +01:00
hagen1778
b54d10be63 docs: port LTS changelog
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-17 20:20:19 +01:00
Aliaksandr Valialkin
524f0e8d8b lib/logstorage: eliminate memory allocations when finalizing per-group values calculated by stats pipe
This improves query performance a bit when `stats by (...)` returns millions of individual `by (...)` groups
2024-12-17 15:17:01 +01:00
Roman Khavronenko
72419834af docs: add missing resource usage limits (#7856)

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-17 15:02:54 +01:00
Aliaksandr Valialkin
e6b7d25ab4 app/vlselect: allow passing arbitrary LogsQL filters to extra_filters and extra_stream_filters query args
While at it, allow passing an array of string values per each JSON entry at extra_filters and extra_stream_filters.
For example, `extra_filters={"foo":["bar","baz"]}` is converted into the `foo:in("bar", "baz")` extra filter,
while `extra_stream_filters={"foo":["bar","baz"]}` is converted into the `{foo=~"bar|baz"}` extra filter.

This should simplify creating faceted search when multiple values for a single log field must be selected.
This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7365#issuecomment-2447964259

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5542
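A hypothetical sketch of the described conversion (illustrative, not the actual vlselect code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// toExtraFilter converts {"foo":["bar","baz"]} into the LogsQL filter
// foo:in("bar", "baz"), mirroring the behavior described above.
func toExtraFilter(arg string) (string, error) {
	var m map[string][]string
	if err := json.Unmarshal([]byte(arg), &m); err != nil {
		return "", err
	}
	var filters []string
	for field, values := range m {
		quoted := make([]string, len(values))
		for i, v := range values {
			quoted[i] = fmt.Sprintf("%q", v)
		}
		filters = append(filters, fmt.Sprintf("%s:in(%s)", field, strings.Join(quoted, ", ")))
	}
	return strings.Join(filters, " "), nil
}

func main() {
	f, err := toExtraFilter(`{"foo":["bar","baz"]}`)
	fmt.Println(f, err) // foo:in("bar", "baz") <nil>
}
```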
2024-12-17 13:02:13 +01:00
Daria Karavaieva
ac124cf5aa docs/vmanomaly: deprecate Overview page (#7812)
### Describe Your Changes

- Deprecate Overview page in Anomaly Detection docs.
- Add service description to `README.md`.
- Move Licensing information to the Quickstart page.

2024-12-17 12:45:44 +01:00
Aliaksandr Valialkin
3d7f8377f7 lib/logstorage: do not return log fields with the same constant value across all the selected logs from facets pipe
Such log fields do not give any useful information during logs' exploration.
They just clutter the output of the `facets` pipe. So it is better to drop such fields by default.

If these fields are needed, then `keep_const_fields` option can be added to `facets` pipe.
2024-12-17 12:23:00 +01:00
Mathias Palmersheim
4992e083f0 fixed #7804 Added NoSelfMonitoringMetrics rule (#7805)
### Describe Your Changes

fixes #7804 by adding an alert for a missing uptime metric in vmanomaly

2024-12-16 10:00:29 -06:00
hagen1778
71a9fb16f7 deployment/docker: fix typo after d86788e9a2
Thanks to @Haleygo for pointing it out here https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7843#issuecomment-2545949268

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-16 16:37:21 +01:00
Artem Fetishev
7e7d029de1 docs: fix typo in keyConcepts.md (#7844)
Fix a typo and simplify the statement

2024-12-16 16:34:33 +01:00
Github Actions
983f30c326 Automatic update helm docs from VictoriaMetrics/helm-charts@c486483 (#7840)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-16 16:17:06 +01:00
Artem Fetishev
efd8098b0b docs: update instant query description in key concepts (#7842)
### Describe Your Changes

Update docs to reflect the changes introduced in #7767 to fix #5796


Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2024-12-16 16:14:54 +01:00
Dima Shur
d86788e9a2 deployment/docker: set vmalert --remoteWrite.url to port 8429 (vmagent) (#7843)
### Describe Your Changes

Updated docker.compose.yml, set remoteWrite.url to port 8429 so it
corresponds to the documentation.

2024-12-16 16:13:45 +01:00
Aliaksandr Valialkin
a87ad250d0 docs/VictoriaLogs/data-ingestion/README.md: add missing of 2024-12-16 15:01:01 +01:00
hagen1778
bf84de3c6b docs: move change from c6f6302ca4 to #tip
The change was mistakenly put to the released version of VM

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-16 14:20:32 +01:00
Aliaksandr Valialkin
7ec8ea8301 docs/VictoriaLogs/data-ingestion/Vector.md: improve docs a bit
- Remove the Loki sink, since it brings more trouble when users try using it in Vector.
  For example, it encodes all the log fields as a JSON string and puts it into the "message" field.
  This results in storing the "message" field with a JSON string containing all the log fields
  in VictoriaLogs. This is not what is expected - every log field must be stored as a separate field
  according to https://docs.victoriametrics.com/victorialogs/keyconcepts/

- Remove the 'mode: bulk' option from the Elasticsearch sink configuration, since this option is set to this value by default,
  so there is no need to set it explicitly.

- Add 'compression: gzip' to all the config examples, since the compression reduces the used network bandwidth by 4-5 times,
  while it doesn't increase CPU usage too much on both the Vector and VictoriaLogs sides. So it is better to enable the compression in config examples.

- Mention the HTTP parameters accepted by VictoriaLogs data ingestion APIs in both the Elasticsearch and JSON line protocol examples.
2024-12-16 13:52:35 +01:00
Artem Fetishev
c6f6302ca4 Fix inconsistent treatment of millisecond-precision time for instant queries (#7767)
### Describe Your Changes

This PR fixes #5796. See points 6 and 7 in `Steps to reproduce`:

> Now let's set time to only 5ms past the timestamp of the first point,
since even 199ms worked for the second point. Surprise, the point isn't
returned 💥:
>
> ```curl -s $VMQURL -d 'query=series1' -d 'time=1707123456705' -d
'step=1ms' | grep 10 # nothing!```
>
> But, 4ms works: 🤨🤔
>
> ```curl -s $VMQURL -d 'query=series1' -d 'time=1707123456704' -d
'step=1ms' | grep 10 # found```

This happens because the actual step becomes 5ms due to jitter being
applied. The fix is to not apply jitter if the scrape interval was not
detected (the case when vmstorage returns only one result). In this case
the scrape interval is set to `5m+step`.
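A heavily hedged sketch of the fix's shape (hypothetical names and constants; the real logic lives in vmselect's rollup code):

```go
package main

import (
	"fmt"
	"time"
)

// lookbehindWindow picks the interval used to search for the latest sample.
// When the scrape interval was detected from the data, a small jitter
// allowance is added; when it wasn't (vmstorage returned a single sample),
// fall back to 5m+step without any jitter, per the description above.
func lookbehindWindow(detectedInterval, step time.Duration) time.Duration {
	if detectedInterval <= 0 {
		return 5*time.Minute + step
	}
	return detectedInterval + detectedInterval/16 // jitter allowance (illustrative)
}

func main() {
	fmt.Println(lookbehindWindow(0, time.Millisecond))              // 5m0.001s
	fmt.Println(lookbehindWindow(30*time.Second, time.Millisecond)) // 31.875s
}
```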

An integration test has been added to check the steps to reproduce and
then to confirm that the fix works. Note that the cluster tests are
currently disabled because the fix is not in the cluster branch yet.


---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2024-12-16 13:24:52 +01:00
Github Actions
87100e55cc Automatic update helm docs from VictoriaMetrics/helm-charts@ec141b8 (#7836)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-16 12:51:04 +01:00
Roman Khavronenko
c464d4484f lib/storage: update dedup tests
* update misleading comments about preferring NaNs on intervals. NaNs
are only preferred on timestamp conflicts
* add conflicting timestamps to the benchmark test. Previously, the
benchmark wasn't checking the timestamp conflict code branch. The
updated results after
c0fcfd6b97
are the following:
are the following:
```
benchstat old.txt new.txt

goos: darwin
goarch: arm64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/storage
cpu: Apple M4 Pro
                                                       │   old.txt    │               new.txt                │
                                                       │    sec/op    │    sec/op     vs base                │
DeduplicateSamples/minScrapeInterval=3s-14               889.7n ± ∞ ¹   904.3n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamples/minScrapeInterval=4s-14               735.9n ± ∞ ¹   748.7n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamples/minScrapeInterval=10s-14              637.7n ± ∞ ¹   659.3n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=3s-14    838.8n ± ∞ ¹   810.4n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=4s-14    765.2n ± ∞ ¹   735.1n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=10s-14   673.1n ± ∞ ¹   622.4n ± ∞ ¹       ~ (p=1.000 n=1) ²
geomean                                                  751.7n         741.0n        -1.42%
```


---
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-16 12:50:41 +01:00
f41gh7
91f858ee1e docs: bump last VM versions
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2024-12-16 12:19:51 +01:00
f41gh7
da0d57e4b6 CHANGELOG.md: cut v1.108.0 release 2024-12-16 12:12:02 +01:00
hagen1778
fa621b384e docs: mention deprecation of metric names in update notes
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-16 11:20:32 +01:00
f41gh7
02fedb8585 docs/changelog: add missing PR links
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2024-12-13 12:08:16 +01:00
f41gh7
04d19a2200 make vmui-update 2024-12-13 12:01:03 +01:00
f41gh7
e612877fe7 app/vmselect: respect -search.skipSlowReplicas when -globalReplicationFactor > 1
Previously, a cluster with the following vmselect configuration:

./bin/vmselect
  -storageNode=gr1/:8211,gr1/:8212
  -storageNode=gr2/:8213,gr2/:8214
  -search.skipSlowReplicas=true
  -globalReplicationFactor=2

Here we have two vmstorage groups and -globalReplicationFactor=2, which effectively means that every ingested sample is replicated across multiple vmstorage groups. Hence, gr1 and gr2 contain an identical data set. And when -search.skipSlowReplicas=true is set, it is expected that vmselect returns the result as soon as at least one storage group has returned the full result.
In the current state, -search.skipSlowReplicas is ignored on the storage group level. It is only respected within the group (with the -replicationFactor flag).

This commit fixes global replication for skipSlowReplicas.

To ensure that the fix works and does not break anything, replication
tests have been added. For checking the fix for
skipping slow replicas see `testGroupSkipSlowReplicas()`.

To emulate storage groups, the integration test creates a cluster with
multilevel vminsert. The L1 vminserts are group-level inserts; each writes
to its own group of vmstorages. The L2 vminsert is a global vminsert
that writes replicated data to the L1 vminserts.

To enable multilevel inserts changes in apptest framework and
`lib/ingestserver/clusternative/server.go` were necessary.

related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6924

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2024-12-13 11:59:03 +01:00
Zhu Jiekun
43181b67b1 discovery/dockerswarm: add missing service labels to tasks discovery role
Previously, service labels weren't attached when `role: tasks` is set,
because the `addServicesLabels` function is shared by `role: tasks` and
`role: services`, and it returns nothing when `vip.Addr` is invalid
or empty.

In Prometheus, even if `vip.Addr` is empty, it attaches common service
labels with [a standalone
function](f10c3454e9/discovery/moby/services.go (L129)),
which offers:
- `__meta_dockerswarm_service_id`: the id of the service.
- `__meta_dockerswarm_service_name`: the name of the service.
- `__meta_dockerswarm_service_mode`: the mode of the service.
- `__meta_dockerswarm_service_label_<labelname>`: each label of the
service, with any unsupported characters converted to an underscore.

This PR adds an `addServicesLabelsForTask` function to replace the usage of
`addServicesLabels` when `role: tasks` is set. This function offers the
common service labels listed above.

related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7800
2024-12-13 11:28:04 +01:00
Hui Wang
b0ed5b6174 app/vmalert: fixes reload of external templates
Previously, after a configuration reload, the `externalURL` templating function defined at external templates could be lost, since it was added only at the initial `Load` call and never copied during the template reload process.
External templates for vmalert can be defined via the `-rule.templates` flag.

This commit properly reloads external templates. It no longer copies mutated templates and instead fully reloads them each time there are any changes.
2024-12-13 10:29:19 +01:00
Github Actions
4aeda4b267 Automatic update helm docs from VictoriaMetrics/helm-charts@4b32065 (#7817)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-13 10:04:56 +01:00
Github Actions
20a2822c23 Automatic update Grafana datasource docs from VictoriaMetrics/victorialogs-datasource@21936b8 (#7814) 2024-12-13 10:04:42 +01:00
hagen1778
1891b74a0a docs: mention required version for multitenancy endpoint in vmalert
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-12 13:01:26 +01:00
Andrei Baidarov
0dc576d3da lib/storage: prefer stale markers over other values on dedup interval
Previously, during de-duplication, staleness markers could be removed due to incorrect logic in the
values equality check.
During the evaluation of a read query, vmselect deduplicates samples using the dedupInterval option. It picks the highest value across all points with the same timestamp next to the border of dedupInterval. The issue is that any comparison with NaN via < or > returns false. This means that the position of NaN in srcValues could affect the result.

This commit changes this logic with an additional step that explicitly checks for staleness markers in the following cases:
1. Deduplication on vmselect
2. Deduplication in vmstorage during merges
3. Deduplication in stream aggregation

The check is performed only for stale markers, because other NaNs are rejected on ingestion
by vmstorage or by stream aggregation.
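A minimal sketch of the idea; the stale-marker bit pattern below follows the Prometheus convention, and the helper names are hypothetical:

```go
package main

import (
	"fmt"
	"math"
)

// staleNaNBits is the Prometheus staleness-marker NaN bit pattern.
const staleNaNBits = 0x7ff0000000000002

var staleNaN = math.Float64frombits(staleNaNBits)

func isStaleNaN(f float64) bool { return math.Float64bits(f) == staleNaNBits }

// pickOnTimestampConflict shows why the explicit check is needed: every
// comparison with NaN via < or > is false, so "pick the highest value"
// silently depends on the order in which values are visited. Checking for
// the stale marker first makes the outcome deterministic.
func pickOnTimestampConflict(a, b float64) float64 {
	if isStaleNaN(a) || isStaleNaN(b) {
		return staleNaN // stale markers win on timestamp conflicts
	}
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(isStaleNaN(pickOnTimestampConflict(42, staleNaN))) // true
	fmt.Println(pickOnTimestampConflict(1, 2))                     // 2
}
```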

Checking for stale markers in general slows down dedup speed by 3%:
```
 benchstat old.txt new.txt

goos: darwin
goarch: arm64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/storage
cpu: Apple M4 Pro
                                                       │   old.txt    │               new.txt                │
                                                       │    sec/op    │    sec/op     vs base                │
DeduplicateSamples/minScrapeInterval=1s-14               462.8n ± ∞ ¹   425.2n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamples/minScrapeInterval=2s-14               905.6n ± ∞ ¹   903.3n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamples/minScrapeInterval=5s-14               710.0n ± ∞ ¹   698.9n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamples/minScrapeInterval=10s-14              632.7n ± ∞ ¹   638.5n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=1s-14    439.7n ± ∞ ¹   409.9n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=2s-14    908.9n ± ∞ ¹   882.2n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=5s-14    721.2n ± ∞ ¹   684.7n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=10s-14   659.1n ± ∞ ¹   630.6n ± ∞ ¹       ~ (p=1.000 n=1) ²
geomean                                                  659.5n         636.0n        -3.56%
```

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7674
---------
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2024-12-12 12:34:17 +01:00
dependabot[bot]
88861c66fe build(deps): bump golang.org/x/crypto from 0.29.0 to 0.31.0 (#7807)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from
0.29.0 to 0.31.0.
Commits:
- b4f1988 ssh: make the public key cache a 1-entry FIFO cache
- 7042ebc openpgp/clearsign: just use rand.Reader in tests
- 3e90321 go.mod: update golang.org/x dependencies
- 8c4e668 x509roots/fallback: update bundle
- Full diff: https://github.com/golang/crypto/compare/v0.29.0...v0.31.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-12 09:07:43 +01:00
hagen1778
1ee5ba8d55 docs: clarify meaning of deduplication for exported data
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7763

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-11 20:57:35 +01:00
Andrii Chubatiuk
e0ab3fccaf app/vlinsert/syslog: fixed structured data parsing (#7801)
### Describe Your Changes

RFC 5424 doesn't allow structured data to start with whitespace, but
whitespace can be present at the end of this section.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7776


---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2024-12-11 17:08:36 +01:00
Github Actions
2fe6640193 Automatic update helm docs from VictoriaMetrics/helm-charts@9524e91 (#7793)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-11 17:03:58 +01:00
Yury Molodov
d1ccf205c4 vmui: add more details for "clipboard not supported" error (#7778)
### Describe Your Changes

Added a message for Clipboard API errors with common issues and a link
to the docs. Added a check for secure context, showing a clear error and
a doc link if the context is not secure.

Related issue: #7677

(Screenshots are attached to the PR.)


---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2024-12-11 17:03:35 +01:00
Aliaksandr Valialkin
b42ed019f5 docs/VictoriaLogs/LogsQL.md: collapse_nums pipe docs: clarify that <N> is a placeholder 2024-12-11 16:34:40 +01:00
Aliaksandr Valialkin
5a41c7f5a5 docs/VictoriaLogs/LogsQL.md: mention that collapse_nums can miss collapsing some numbers or can collapse unexpected numbers
Suggest a solution with replace_regexp() pipe for custom collapsing.
2024-12-11 16:32:36 +01:00
Aliaksandr Valialkin
ec193ef691 docs/VictoriaLogs/querying/vlogscli.md: document \wrap_long_lines option
This is a follow-up for f55791f20b
2024-12-11 15:54:35 +01:00
hagen1778
e669c87af4 docs/changelog: re-order LTS releases for better navigation
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-11 15:27:38 +01:00
Roman Khavronenko
87c1b2de6f deployment/docker: update base Alpine docker image from 3.20.3 to 3.21.0 (#7798)
See https://alpinelinux.org/posts/Alpine-3.21.0-released.html


Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-11 11:30:37 +01:00
hagen1778
bcd8d9d6c6 docs: re-order changes by priority
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-11 11:06:38 +01:00
f41gh7
dbed0de650 lib/timeserieslimits: follow-up for 564e6ea024
Changed the enabled-limit condition to `or` instead of `and`, since labels must be checked if at least one of the limits is defined.

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2024-12-11 11:00:27 +01:00
hagen1778
34a730ac65 docs: update wording after 564e6ea024
Mention all related limits and the way to troubleshoot them.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-11 10:55:07 +01:00
hagen1778
e21bdcdbc7 docs: make wording more transparent for readers
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-11 10:46:52 +01:00
Hui Wang
9db8e071c4 vmalert-tool: support debug mode for alerting rule (#7788)
Users can enable [debug
mode](https://docs.victoriametrics.com/vmalert/#debug-mode) in
vmalert-tool to check alerting rule evaluation status and write
`alert_rule_test` cases.
2024-12-11 09:49:14 +01:00
hagen1778
1627bcc6cb dashboards: add missing filter by instance to Go scheduling latency panel
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-11 09:03:09 +01:00
Github Actions
5033d05d55 Automatic update Grafana datasource docs from VictoriaMetrics/victorialogs-datasource@f31bdac (#7790) 2024-12-10 21:23:47 +01:00
hagen1778
5279faf02f deployment: bump victorialogs datasource version
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-10 21:23:08 +01:00
Andrii Chubatiuk
564e6ea024 app/{vminsert,vmagent}: drop time series on exceeding labels limits.
Previously, time series with labels exceeding the configured limits were truncated and written to storage, potentially causing data inconsistency. This could lead to collisions between time series and make it difficult to identify the source due to truncated labels.

This commit changes the behavior:
* Such time series are now rejected outright.
* Rejected time series are logged to stdout, and corresponding counters are incremented.
* Removes the `vm_too_long_label_values_total`, `vm_too_long_label_names_total` and `vm_metrics_with_dropped_labels_total` metrics.
* Adds new values `[too_many_labels, too_long_label_name, too_long_label_value]` to the `reason` label of the `vm_rows_ignored_total` metric.
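A minimal sketch of the reject-instead-of-truncate check (hypothetical helper; the reason strings mirror the `vm_rows_ignored_total` values listed above):

```go
package main

import "fmt"

type label struct{ Name, Value string }

// checkLabels returns an empty reason when the series is acceptable, or one
// of the reason values used to label the vm_rows_ignored_total counter.
func checkLabels(labels []label, maxLabels, maxNameLen, maxValueLen int) string {
	if len(labels) > maxLabels {
		return "too_many_labels"
	}
	for _, l := range labels {
		if len(l.Name) > maxNameLen {
			return "too_long_label_name"
		}
		if len(l.Value) > maxValueLen {
			return "too_long_label_value"
		}
	}
	return "" // accept the series as-is; nothing is truncated anymore
}

func main() {
	ls := []label{{"job", "vmagent"}, {"path", "/very/long/value"}}
	fmt.Println(checkLabels(ls, 30, 256, 4)) // too_long_label_value
}
```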

related issues:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6928
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7661
2024-12-10 21:19:16 +01:00
Zhu Jiekun
6b48126603 discovery/docker: add match_first_network support for docker_sd_configs
This commit aligns the behaviour of docker service discovery with the Prometheus implementation.

It adds the following changes:
* introduce a new config param `match_first_network` with a default value of `true`. It uses the first network if the container has multiple networks
defined. This should help avoid duplicate-target collection errors with multi-network setups.

* add `networks` for containers whose network is linked to other containers via the `network_mode: container:id` setting. This resolves an issue with attached containers, aka `pods`, in Kubernetes.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7398
2024-12-10 20:15:33 +01:00
Yury Molodov
4a2192431d vmui: prevent accordion collapse on text selection in headers
Prevent accordion from collapsing when selecting text in headers.

Related issue: 
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7742
2024-12-10 20:05:32 +01:00
Yury Molodov
86bc7d5cd1 vmui: fix incorrect message in Table tab
Updated the message in the “Table” tab of the VictoriaMetrics UI. It now
correctly displays the step value based on the actual configuration.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7401
2024-12-10 20:00:23 +01:00
hagen1778
d05fadf988 docs: fix typo in facets example
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-10 15:52:33 +01:00
Hui Wang
e439e40e79 app/vmalert: fix possible template overwritten between rule annotations
Previous commit b09272ccac added a regression that could lead to overwrites of the
template global state.

The issue relates to how `vmalert` inherits templates. It has global templates, which can be changed via the `rule.templates` flag, and local templates defined per labels/annotations for rules and groups.

During labels/annotations templating, the state could be changed via the `define` syntax.

This commit restores the previous behavior with a `Clone` call for templates before templating labels/annotations.
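The effect of the restored `Clone` call can be illustrated with Go's text/template (a sketch, not the vmalert code):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

func render(t *template.Template) string {
	var sb strings.Builder
	if err := t.ExecuteTemplate(&sb, "msg", nil); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	// Shared (global) templates, loaded once.
	base := template.Must(template.New("root").Parse(`{{define "msg"}}global{{end}}`))

	// Per-annotation templating may redefine "msg" via define. Cloning first
	// keeps the redefinition local, so the shared state is never mutated.
	local := template.Must(base.Clone())
	template.Must(local.Parse(`{{define "msg"}}local{{end}}`))

	fmt.Println(render(local)) // local
	fmt.Println(render(base))  // global: untouched by the redefinition
}
```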

Affected releases:
- v1.106.1
- v1.102.7
- v1.97.12

 Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6894
2024-12-10 14:59:40 +01:00
Nikolay
d6f5ba2887 app/vmauth: allow to start with empty auth config file
This commit adds the ability to launch vmauth without a configuration file,
which is a possible use case for operator-based installations.

The operator provides a global resource `VMAuth` and allows creating
`VMUser` objects for it. Eventually the operator creates a configuration for
`VMAuth` based on user-defined selectors for `VMUser`.

Since there are no direct relations between those objects, and any object
can be created on demand by Kubernetes users, it's required to be able to
start `vmauth` with an empty auth config file.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6467
2024-12-10 14:51:11 +01:00
Github Actions
94e4c4e367 Automatic update helm docs from VictoriaMetrics/helm-charts@42d92a7 (#7783)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-10 14:48:29 +01:00
Github Actions
aadd8d5f3a Automatic update Grafana datasource docs from VictoriaMetrics/victorialogs-datasource@649d972 (#7786) 2024-12-10 14:47:03 +01:00
Dmytro Kozlov
44d8e6a19d deployment/docker: update victorialogs datasource versions to the latest releases (#7787)
### Describe Your Changes
Updated victorialogs-datasource to the latest
[release](https://github.com/VictoriaMetrics/victorialogs-datasource/tree/v0.11.0)
2024-12-10 13:08:55 +01:00
hagen1778
6b0ae0b79f docs/vmalert: update debug description
* mention that `debug` messages require -loggerLevel=INFO
* rm version requirement, as the mentioned version is pretty old
and it is likely everyone is using a newer version

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-10 10:22:39 +01:00
Nikolay
a51a18403c lib/storage: properly apply dedup.minScrapeInterval
Previously, if only `-dedup.minScrapeInterval` was set without
`downsampling.Period`, the getDownsamplingFilters function
returned an empty result for downsamplingPeriodFilters,
because it didn't take into account the globalDedup variable.

This commit adds a fast path for this case and returns a single
downsampling filter with the global interval value.

In addition, it adds the following changes:

* Removes global state modification at the ParseDownsamplingPeriods
  function, which could lead to data races at vmselect
* Simplifies the logic of the isDedupNeeded function: since
  downsamplingPeriodsWithout filters are a subset of
downsamplingPeriodByFilters, there is no need for the len check
* Improves tests by properly resetting the global state of downsampling

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7764
2024-12-09 15:20:22 +01:00
Aliaksandr Valialkin
de0ae735aa lib/logstorage: add count_uniq_hash function to stats pipe
This function calculates the number of unique value hashes. This number is a good approximation
for the number of unique values. The `count_uniq_hash` function uses less memory and works faster
than `count_uniq` when applied to fields with a big number of unique values.
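A toy sketch of the approximation (illustrative; the production code uses different hashing and per-CPU parallelism):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// countUniqHash counts unique 64-bit hashes of the values instead of the
// values themselves: a fixed 8 bytes per distinct value instead of the full
// string, at the cost of rare hash collisions slightly undercounting.
func countUniqHash(values []string) int {
	seen := make(map[uint64]struct{}, len(values))
	for _, v := range values {
		h := fnv.New64a()
		h.Write([]byte(v))
		seen[h.Sum64()] = struct{}{}
	}
	return len(seen)
}

func main() {
	fmt.Println(countUniqHash([]string{"a", "b", "a", "c"})) // 3
}
```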
2024-12-09 13:29:41 +01:00
Alexander Marshalov
acbe526307 vmbackupmanager: increase min sleep time between scheduling cycles from 0 to 1s to avoid spammed logs. (#807)
* vmbackupmanager: increase min sleep time between scheduling cycles from 0 to 1s to avoid spammed logs.

* Update docs/changelog/CHANGELOG.md

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-09 12:29:25 +01:00
Github Actions
9a6ddb48df Automatic update operator docs from VictoriaMetrics/operator@ee0406d (#7765)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: f41gh7 <18450869+f41gh7@users.noreply.github.com>
2024-12-09 12:21:30 +01:00
Github Actions
7d3e60f7f1 Automatic update helm docs from VictoriaMetrics/helm-charts@4b502fc (#7768)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-09 12:21:01 +01:00
Aliaksandr Valialkin
f54f73033b docs/VictoriaLogs/LogsQL.md: typo fix: remove double with with 2024-12-09 00:37:01 +01:00
Aliaksandr Valialkin
75a2e23b7e deployment: update VictoriaLogs Docker image from v1.3.1-victorialogs to v1.3.2-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.3.2-victorialogs
2024-12-09 00:34:53 +01:00
Aliaksandr Valialkin
6fe079dbfb docs/VictoriaLogs/CHANGELOG.md: cut v1.3.2-victorialogs release 2024-12-09 00:30:59 +01:00
Aliaksandr Valialkin
843fae3419 lib/logstorage: fix possible panic in stream_context pipe
The panic may occur when the surrounding logs for some original log entry are empty.
This is possible when these logs were included into surrounding logs for the previous original log entry.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7762
2024-12-09 00:24:20 +01:00
Aliaksandr Valialkin
db961f8609 lib/logstorage: add an ability to detect common patterns at collapse_nums pipe
The following patterns are detected:

- `<N>-<N>-<N>-<N>-<N>` is replaced with `<UUID>`.
- `<N>.<N>.<N>.<N>` is replaced with `<IP4>`.
- `<N>:<N>:<N>` is replaced with `<TIME>`. Optional fractional seconds after the time are treated as a part of `<TIME>`.
- `<N>-<N>-<N>` and `<N>/<N>/<N>` are replaced with `<DATE>`.
- `<N>-<N>-<N>T<N>:<N>:<N>` and `<N>-<N>-<N> <N>:<N>:<N>` are replaced with `<DATETIME>`. An optional timezone after the datetime is treated as a part of `<DATETIME>`.
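A rough sketch of such pattern detection over already-collapsed `<N>` placeholders (a toy regexp re-implementation, not the actual parser; the ordering matters so that composite patterns win):

```go
package main

import (
	"fmt"
	"regexp"
)

var numRe = regexp.MustCompile(`\b(?:0x[0-9a-fA-F]+|\d+)\b`)

// patterns are tried in order, so <UUID> and <DATETIME> win over <DATE>/<TIME>.
var patterns = []struct {
	re   *regexp.Regexp
	repl string
}{
	{regexp.MustCompile(`<N>-<N>-<N>-<N>-<N>`), "<UUID>"},
	{regexp.MustCompile(`<N>-<N>-<N>[T ]<N>:<N>:<N>`), "<DATETIME>"},
	{regexp.MustCompile(`<N>\.<N>\.<N>\.<N>`), "<IP4>"},
	{regexp.MustCompile(`<N>:<N>:<N>`), "<TIME>"},
	{regexp.MustCompile(`<N>[-/]<N>[-/]<N>`), "<DATE>"},
}

func collapse(s string) string {
	s = numRe.ReplaceAllString(s, "<N>")
	for _, p := range patterns {
		s = p.re.ReplaceAllString(s, p.repl)
	}
	return s
}

func main() {
	fmt.Println(collapse("GET from 10.20.30.40 at 12:34:56"))
	// GET from <IP4> at <TIME>
}
```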
2024-12-08 20:09:02 +01:00
Aliaksandr Valialkin
c45451bf69 lib/promutils: properly parse timestamps in microseconds and nanoseconds
This is needed for _time filter in VictoriaLogs, which supports timestamps with nanosecond precision
2024-12-08 20:07:43 +01:00
Aliaksandr Valialkin
30029f1e39 deployment: update VictoriaLogs Docker image from v1.3.0-victorialogs to v1.3.1-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.3.1-victorialogs
2024-12-08 02:20:37 +01:00
Aliaksandr Valialkin
48f395456e lib/logstorage: fix assignment to entry in nil map panic at facets pipe
The panic has been introduced in the commit b4f3861690
2024-12-08 02:16:46 +01:00
Aliaksandr Valialkin
08ce6ef825 deployment: update VictoriaLogs Docker image from v1.2.0-victorialogs to v1.3.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.3.0-victorialogs
2024-12-08 01:49:13 +01:00
Aliaksandr Valialkin
cd10bb585c docs/VictoriaLogs/CHANGELOG.md: cut v1.3.0-victorialogs release 2024-12-08 01:41:41 +01:00
Aliaksandr Valialkin
4ac94db2c7 app/vlogscli: show '<', '>' and '&' as is in JSON output instead of using the corresponding \uXXXX encoding
This improves reading JSON lines with these chars at vlogscli
2024-12-08 01:26:11 +01:00
Aliaksandr Valialkin
65d831a0ee lib/logstorage: add collapse_nums pipe, which replaces decimal and hexadecimal nums in the given log field with <N>
This is useful for detecting patterns across log messages, which differ by various numeric fields,
with the following query:

_time:1h | collapse_nums | top 10 by (_msg)
2024-12-08 01:03:30 +01:00
Aliaksandr Valialkin
48540ac409 app/vlselect: allow passing max_value_len query arg to /select/logsql/facets API
The max_value_len query arg allows controlling the maximum length of values
per every log field. If the length is exceeded, then the log field is dropped
from the results, since it contains an incomplete (misleading) set of the most frequently seen field values.
2024-12-07 14:30:07 +01:00
Aliaksandr Valialkin
3cef820cba lib/logstorage: facets pipe: return back ignoring empty values
It is impossible to count all the empty values per every seen field,
since they aren't counted for data blocks which do not contain the given field.
So it is better to ignore empty values in order to reduce the level of confusion
when users see incorrect hits for empty per-field values.
2024-12-07 14:24:39 +01:00
Aliaksandr Valialkin
b4f3861690 lib/logstorage: facets pipe: ignore fields, which contain at least a single value with too big length
It is very confusing to see an incomplete set of values for fields which contain a subset of short values
while the rest of the values are too long. It is better to ignore all the values in such fields.

It is also very confusing if the list of the most frequent values has no empty value.
So it is better to count hits for an empty value.
2024-12-07 12:45:42 +01:00
Aliaksandr Valialkin
4c8691450a lib/logstorage: stream_context pipe: reduce the amounts of surrounding logs to check
Do not check surrounding logs before the selected log if `after N` is set,
and do not check logs after the selected log if `before N` is set.

This is a follow-up for 08af80ebe0

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7637
2024-12-07 12:21:22 +01:00
Andrii Chubatiuk
fefa3e7936 deployment/rules: updated sum expressions in alerts to be able to inject cluster labels in helm charts scripts (#7670)
### Describe Your Changes

Many users run k8s-stack in multiple Kubernetes clusters, and to
configure proper routing in alertmanager it's required to support a
`cluster` label in alerting rules. This is now implemented in helm-chart
hack scripts, but it's tricky to define whether the cluster label should be
added when functions have no `by` expression. Updated existing
alerts to provide an ability to inject the cluster label later.

Also take into account `storage.minFreeDiskSpaceBytes` in
`DiskRunsOutOfSpace` alerts.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2024-12-06 15:48:09 +01:00
Aliaksandr Valialkin
08af80ebe0 lib/logstorage: add an ability to change the time window for searching for surrounding logs in the stream_context pipe
Thanks to @worker24h for the idea at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7637#issuecomment-2523313740
2024-12-06 15:47:52 +01:00
Andrii Chubatiuk
915867fe56 cspell: fixed typos, updated dictionary 2024-12-06 15:29:34 +01:00
Github Actions
786ce2c5b3 Automatic update helm docs from VictoriaMetrics/helm-charts@45df5e5 (#7751)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-06 15:28:55 +01:00
Aliaksandr Valialkin
bddb0e369f lib/logstorage: optimize stream_context pipe over log streams with tens of millions of logs
`stream_context` is implemented in a way that requires scanning all the logs for the selected log streams.
The scan is usually fast, since the majority of blocks are skipped because they do not contain
rows with the needed timestamps. But there was a pathological case with `stream_context before N`:

VictoriaLogs usually scans blocks in chronological order. That means the `before` context logs are constantly
updated with newer logs. This requires reading the actual data for the requested log fields from disk.
The workaround is to split the process of obtaining stream context logs into two phases:

1. Select only timestamps for the stream context logs, without selecting other log fields.
   This operation is usually much faster than reading the requested log fields.

2. Select stream context logs for the selected timestamps. This operation is usually fast,
   since the requested number of context logs is usually not so big.

Performance testing of the new algorithm shows up to 30x speed improvement for `stream_context before N`
and up to 5x speed improvement for `stream_context after N` when applied to a log stream with 50M logs.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7637
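
A schematic sketch of the two phases in Go-like pseudocode (selectContextTimestamps and
selectRowsByTimestamps are hypothetical names, not the actual internal API):

// phase 1: read only the timestamps column for the matching context rows (cheap)
tss := selectContextTimestamps(streamID, timeRange) // hypothetical helper
// phase 2: read the requested log fields only for the selected timestamps (small)
rows := selectRowsByTimestamps(streamID, tss) // hypothetical helper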
2024-12-06 15:00:46 +01:00
Github Actions
f322494ca2 Automatic update operator docs from VictoriaMetrics/operator@a5fd092 (#7753)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: f41gh7 <18450869+f41gh7@users.noreply.github.com>
Co-authored-by: Hui Wang <haley@victoriametrics.com>
2024-12-06 11:48:10 +08:00
Aliaksandr Valialkin
ceb081a018 deployment: update VictoriaLogs Docker image from v1.1.0-victorialogs to v1.2.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.2.0-victorialogs
2024-12-06 04:42:39 +01:00
Aliaksandr Valialkin
e2fa25ab29 docs/VictoriaLogs/CHANGELOG.md: cut v1.2.0-victorialogs release 2024-12-06 04:35:15 +01:00
Aliaksandr Valialkin
740548ccfc app/vlselect: add /select/logsql/facets endpoint
This endpoint returns the most frequent values per each field seen in the selected logs.
This endpoint is going to be used by VictoriaLogs web UI for faceted search.
2024-12-06 02:41:09 +01:00
Aliaksandr Valialkin
dbec34bafc lib/logstorage: add facets pipe for returning the most frequent values across all the log fields seen in the selected logs 2024-12-06 01:24:15 +01:00
Aliaksandr Valialkin
04796ba249 lib/fs: suggest increasing the limit on the number of open files in the error message when the file cannot be opened by ReaderAt
This should simplify troubleshooting of a too low limit on the number of open files
2024-12-06 01:07:30 +01:00
Aliaksandr Valialkin
5c7b044685 lib/fs: suggest possible solutions inside cannot allocate memory errors during failed mmap attempt
This should improve troubleshooting of such errors
2024-12-06 00:52:18 +01:00
Aliaksandr Valialkin
80c5066ef3 lib/logstorage: properly format math pipe expressions, which contain multiple binary operators with the same priority
Previously such expressions were improperly formatted, which could result
in incorrect calculations at vlogscli.

For example, 'x / (y / z)' was formatted as 'x / y / z',
while 'x - (y + z)' was formatted as 'x - y + z'.
2024-12-05 17:10:47 +01:00
Aliaksandr Valialkin
c3b8da81cd lib/logstorage: add rate and rate_sum stats functions
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7415
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7646
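
For example, an illustrative query returning the average per-second rate of error logs over the last
5 minutes (treat the exact syntax as a sketch based on the other LogsQL stats functions):

_time:5m error | stats rate() as errors_per_second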
2024-12-05 17:10:46 +01:00
f41gh7
8b1fd6a619 docs/changelog: mention vmselect panic bugfix
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2024-12-05 15:34:15 +01:00
Aliaksandr Valialkin
b57f8d3cb6 docs/VictoriaLogs/LogsQL.md: add references to first and last pipes from the top pipe description
`top` pipe can be confused with the `first` and `last` pipes, so add references to these pipes from the `top` pipe docs.
This should help users locate the needed pipes.
2024-12-05 13:33:40 +01:00
Hui Wang
f4776fec1b app/vmauth: fix requests routing by host when using `src_hosts`
Requests processed by the built-in HTTP server have the [origin
form](https://datatracker.ietf.org/doc/html/rfc7230#section-5.3) rather
than the absolute form.

So in [Request.URL](https://pkg.go.dev/net/http#Request), fields other than
Path and RawQuery will be empty.
> 	// For server requests, the URL is parsed from the URI
> 	// supplied on the Request-Line as stored in RequestURI.  For
> 	// most requests, fields other than Path and RawQuery will be
> 	// empty. (See RFC 7230, Section 5.3)

Using the `request.Host` field instead to match `src_hosts` fixes the issue and allows routing requests properly.

In addition, it allows users to route requests with a customized `Host` header.
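
A minimal illustrative config sketch that routes by the Host header (the host regex and backend URL are hypothetical):

unauthorized_user:
  url_map:
  - src_hosts:
    - 'foo\.example\.com'
    url_prefix: 'http://backend-foo:8428/'  # illustrative backend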
2024-12-05 11:44:59 +01:00
Github Actions
b76c77649d Automatic update operator docs from VictoriaMetrics/operator@a1ef5dd (#7716)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: f41gh7 <18450869+f41gh7@users.noreply.github.com>
2024-12-05 10:00:12 +01:00
Github Actions
cedacf5f5c Automatic update Grafana datasource docs from VictoriaMetrics/victorialogs-datasource@3bc9782 (#7743) 2024-12-05 09:59:25 +01:00
Github Actions
a2c3b33e42 Automatic update helm docs from VictoriaMetrics/helm-charts@5c35472 (#7747)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-05 09:59:04 +01:00
hagen1778
7d1477e984 deployment: bump victorialogs plugin version to v0.10.0
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-05 09:47:58 +01:00
Aliaksandr Valialkin
02effba767 vendor: update github.com/VictoriaMetrics/metricsql from v0.81.0 to v0.81.1
This fixes a possible `index out of range` panic when the -search.logImplicitConversion
or -search.disableImplicitConversion command-line flags are passed to vmselect
and it tries to execute an incorrect query with too few arguments passed
to a rollup function.
2024-12-05 02:39:06 +01:00
Aliaksandr Valialkin
601a25d4e8 deployment: update VictoriaLogs Docker image from v1.0.0-victorialogs to v1.1.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.1.0-victorialogs
2024-12-05 02:02:23 +01:00
Aliaksandr Valialkin
b9b117d149 docs/VictoriaLogs/CHANGELOG.md: cut v1.1.0-victorialogs release 2024-12-05 01:55:11 +01:00
Aliaksandr Valialkin
0b021fa5a7 app/vlselect/vmui: run make vmui-logs-update after the commit 10c42668a1 2024-12-05 01:54:03 +01:00
Aliaksandr Valialkin
b43fcc0cf8 lib/logstorage: add tests, which verify that offset and limit pipes cannot be used in /select/logsql/stats_query_range
`offset` and `limit` pipes cannot be applied individually per each step on the [start ... end] time range,
so they must be disallowed at /select/logsql/stats_query_range.
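For example, a query such as `_time:5m | stats count() hits | limit 10` must be rejected by this API
(the query shape is illustrative), since the `limit` cannot be meaningfully applied per step.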

This is a follow-up for 534371031e
2024-12-05 01:49:39 +01:00
Aliaksandr Valialkin
534371031e lib/logstorage: add first and last pipes
The `first N by (field)` pipe is a shorthand to `sort by (field) limit N`,
while the `last N by (field)` pipe is a shorthand to `sort by (field) desc limit N`.

While at it, add support for partitioning sort results by log groups and applying
individual limit per each group.

For example, the following query returns up to 3 logs per each host with the biggest value
for the `request_duration` field:

_time:5m | last 3 by (request_duration) partition by (host)

This query is equivalent to the following one:

_time:5m | sort by (request_duration) desc limit 3 partition by (host)

Automatically add the `partition by (_time)` into `sort`, `first` and `last` pipes
used in the query to `/select/logsql/stats_query_range` API.
This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7699
2024-12-05 01:42:03 +01:00
Aliaksandr Valialkin
0602d60047 deployment/docker: update Go builder from Go1.23.3 to Go1.23.4
See https://github.com/golang/go/issues?q=milestone%3AGo1.23.4+label%3ACherryPickApproved
2024-12-04 22:45:20 +01:00
Aliaksandr Valialkin
cdc0db8ad7 lib/logstorage: properly ignore log fields when they are passed via streamFields arg to LogRows.MustAdd()
Previously streamFields were unconditionally added to log stream fields, even if they were listed in the ignoreFields.
Also do not add extraStreamFields to log stream fields if streamFields is non-nil, since this may confuse users.

This is a follow-up for 17b813ba28

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7554
2024-12-04 21:45:06 +01:00
Aliaksandr Valialkin
77430b797d lib/logstorage: add support for uppercase/lowercase transformations for log fields in "| format ..." pipe
This is needed for consistent formatting of some log fields in the same case.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7620#issuecomment-2502170924
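
For example, an illustrative query that lowercases the `level` field in place, assuming the
`<lc:...>` and `<uc:...>` placeholders added by this commit:

_time:5m | format "<lc:level>" as level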
2024-12-04 14:38:37 +01:00
Artem Fetishev
f2ad481a1f vendor: update metricsql to v0.81.0
This is a follow-up for
https://github.com/VictoriaMetrics/metricsql/pull/37. 

It ports the fix to the VictoriaMetrics repo

Related issue:

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5796


---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2024-12-04 14:27:09 +01:00
Aliaksandr Valialkin
17b813ba28 app/vlinsert: use default set of log stream fields for Loki and OpenTelemetry protocols if _stream_fields query arg is empty
Loki protocol supports a list of log stream labels - see https://grafana.com/docs/loki/latest/get-started/labels/

OpenTelemetry protocol also supports a list of log stream labels, which are named resource attributes there.
See https://opentelemetry.io/docs/concepts/resources/#semantic-attributes-with-sdk-provided-default-value

Simplify log ingestion into VictoriaLogs for these protocols by allowing data ingestion without
the need to specify the _stream_fields query arg or the VL-Stream-Fields HTTP header. In this case the upstream log stream fields
are used during data ingestion. The set of log stream fields can be overridden via the _stream_fields query arg
and via the VL-Stream-Fields HTTP header if needed.

Thanks to @AndrewChubatiuk for the initial idea and implementation at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7554
2024-12-04 13:57:23 +01:00
Aliaksandr Valialkin
6a71921565 lib/logstorage: ignore logs with too many fields instead of trying to store them
The storage isn't designed to work efficiently with logs containing too many log fields.
It is better to emit a warning to the user and ignore such logs instead of trying to store them.
This allows the user to fix the issue ASAP and avoids excess resource usage
on the VictoriaLogs side, such as RAM, CPU, disk IO and disk space.

While at it, ignore too long logs with the size exceeding the maximum block size during data ingestion.
This should prevent possible issues when dealing with such long logs if they were stored in the storage.
Emit a warning in this case, so the user could identify and fix the issue ASAP.

This is a follow-up for 22e6385f56

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7568
2024-12-04 12:18:34 +01:00
Aliaksandr Valialkin
7e924d7ecf app/vlinsert: properly skip too long lines at Elasticsearch bulk import protocol
Previously a too long line in the Elasticsearch bulk import protocol resulted in closing
the client stream and ignoring the rest of the log messages in the stream.

Now only the too long message is properly ignored, while the rest of the log messages
are read successfully.

This is a follow-up for 61e7c77ce25967269192ed2e201f67d8c48b972e
2024-12-04 12:18:32 +01:00
Aliaksandr Valialkin
480a8be48f app/vlinsert: track vl_rows_ingested_total metric in a single place
Previously the vl_rows_ingested_total metric was tracked individually for each supported data ingestion protocol.
It is better from a maintainability PoV to track this metric consistently in a single place, the logMessageProcessor.AddRow() function,
in the same way as the vl_bytes_ingested_total metric is tracked.

This is a follow-up for 50bfa689c9
2024-12-04 12:18:30 +01:00
Aliaksandr Valialkin
c58d0549a8 app/vlinsert: continue parsing lines after too long lines in JSON line stream and Elasticsearch bulk import stream
Previously all the lines after the too long line in the stream were ignored. This wasn't expected by most users.
2024-12-04 12:18:28 +01:00
hagen1778
cc70b5bb34 docs: fix typo after 9feacf9761
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-04 09:10:54 +01:00
hagen1778
9feacf9761 docs: add version marker for retention/downsampling filters
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-04 09:01:36 +01:00
Fred Navruzov
53d438aab0 docs/vmanomaly: release v1.18.8 (#7734)
### Describe Your Changes

docs/vmanomaly: release v1.18.8

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2024-12-03 23:49:55 +02:00
hagen1778
d29260f4e8 docs: update telegraf ingestion examples
* explicitly mention it is using the HTTP protocol
* consistently use `victoriametrics_url` placeholder across the docs
* mention v2 influx format in docs
* consistently remove/add extra newlines for better formatting

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-03 20:54:38 +01:00
hagen1778
671ba82894 docs: fix newline typo in vmauth
An extra newline was removed after macro substitution

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-03 15:47:15 +01:00
Github Actions
88fe4ebb34 Automatic update Grafana datasource docs from VictoriaMetrics/victorialogs-datasource@558d333 (#7725) 2024-12-03 15:44:02 +01:00
Dmytro Kozlov
1b1aef57e0 deployment/docker: update victorialogs datasource versions to the latest releases (#7726)
### Describe Your Changes

Update victorialogs datasource to the latest release
[version](https://github.com/VictoriaMetrics/victorialogs-datasource/releases/tag/v0.9.0)

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2024-12-03 15:43:34 +01:00
Artem Fetishev
30b9167965 apptest: Cluster replication tests (#7693)
### Describe Your Changes

Add cluster replication tests. No group replication yet. Some necessary
enhancements to the apptest framework have been done as well. Also, other
existing tests were revisited to take advantage of the new QueryOpts added
by @f41gh7 in #7635.

The tests verify the following scenarios:
1. Data is written to vmstorages multiple times
2. Vmselect deduplicates replicated data
3. Vmselect does not return a partial result if it receives responses from
enough replicas
4. Vmselect does not wait for the rest of the replicas (skips slower ones)

Something similar will be added for storage groups. These tests should
be used to prove that the fix for #6924 works and at the same time does
not break other aspects of replication.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2024-12-03 12:25:53 +01:00
hagen1778
24a2a4a962 dashboards: mention reserved disk space in descriptions
Follow-up after 57ddb51089

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-03 12:16:36 +01:00
Fred Navruzov
cdf384eb4d docs/vmanomaly - release v1.18.7 (#7719)
### Describe Your Changes

- docs/vmanomaly - release v1.18.7
- modified table markdown for proper rendering on vmanomaly component
pages

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2024-12-02 20:32:17 +02:00
f41gh7
c89926bdf7 docs/changelog: fixes date typo 2023->2024
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2024-12-02 14:27:57 +01:00
hagen1778
33a60f907c docs: bump last LTS versions
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-02 13:07:47 +01:00
hagen1778
5e0db31914 docs: re-order and cleanup changelog items
* sort changes by importance
* clean up wording in change notes

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-02 13:02:23 +01:00
Github Actions
54f1a33a63 Automatic update operator docs from VictoriaMetrics/operator@2fc01c5 (#7714)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: f41gh7 <18450869+f41gh7@users.noreply.github.com>
2024-12-02 11:54:29 +01:00
Lauri Tirkkonen
e4525516e2 docs/vlogs: replace reference to fluentbit with vector
the helm chart being referenced switched from fluentbit to vector in
a1aea5d694ff725c350b325205e3372b52242639, so update the docs to match
2024-12-02 11:53:29 +01:00
Github Actions
d3101b075f Automatic update helm docs from VictoriaMetrics/helm-charts@612923a (#7713)
Automated changes by
[create-pull-request](https://github.com/peter-evans/create-pull-request)
GitHub action

Signed-off-by: Github Actions <133988544+victoriametrics-bot@users.noreply.github.com>
Co-authored-by: AndrewChubatiuk <3162380+AndrewChubatiuk@users.noreply.github.com>
2024-12-02 11:46:15 +01:00
f41gh7
b4d8d135e9 {docs,deployment}: update vm apps to the latest v1.107.0 release version 2024-12-02 11:09:05 +01:00
406 changed files with 15874 additions and 6424 deletions

View File

@@ -14,6 +14,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
@@ -39,9 +40,20 @@ var (
"The saved data survives unclean shutdowns such as OOM crash, hardware reset, SIGKILL, etc. "+
"Bigger intervals may help increase the lifetime of flash storage with limited write cycles (e.g. Raspberry PI). "+
"Smaller intervals increase disk IO load. Minimum supported value is 1s")
maxIngestionRate = flag.Int("maxIngestionRate", 0, "The maximum number of samples vmsingle can receive per second. Data ingestion is paused when the limit is exceeded. "+
"By default there are no limits on samples ingestion rate.")
)
func main() {
// VictoriaMetrics is optimized for reduced memory allocations,
// so it can run with the reduced GOGC in order to reduce the used memory,
// while keeping CPU usage spent in GC at low levels.
//
// Some workloads may need increased GOGC values. Then such values can be set via GOGC environment variable.
// It is recommended increasing GOGC if go_memstats_gc_cpu_fraction metric exposed at /metrics page
// exceeds 0.05 for extended periods of time.
cgroup.SetGOGC(30)
// Write flags and help message to stdout, since it is easier to grep or pipe.
flag.CommandLine.SetOutput(os.Stdout)
flag.Usage = usage
@@ -76,6 +88,7 @@ func main() {
storage.SetDataFlushInterval(*inmemoryDataFlushInterval)
vmstorage.Init(promql.ResetRollupResultCacheIfNeeded)
vmselect.Init()
vminsertcommon.StartIngestionRateLimiter(*maxIngestionRate)
vminsert.Init()
startSelfScraper()
@@ -97,6 +110,7 @@ func main() {
}
logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
vminsert.Stop()
vminsertcommon.StopIngestionRateLimiter()
vmstorage.Stop()
vmselect.Stop()

View File

@@ -13,6 +13,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
)
var (
@@ -68,6 +69,10 @@ func selfScraper(scrapeInterval time.Duration) {
t := &r.Tags[j]
labels = addLabel(labels, t.Key, t.Value)
}
if timeserieslimits.IsExceeding(labels) {
// Skip metric with exceeding labels.
continue
}
if len(mrs) < cap(mrs) {
mrs = mrs[:len(mrs)+1]
} else {

View File

@@ -14,6 +14,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
@@ -21,6 +22,11 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
)
var (
datadogStreamFields = flagutil.NewArrayString("datadog.streamFields", "Datadog tags to be used as stream fields.")
datadogIgnoreFields = flagutil.NewArrayString("datadog.ignoreFields", "Datadog tags to ignore.")
)
var parserPool fastjson.ParserPool
// RequestHandler processes Datadog insert requests
@@ -79,17 +85,21 @@ func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
return true
}
if len(cp.StreamFields) == 0 {
cp.StreamFields = *datadogStreamFields
}
if len(cp.IgnoreFields) == 0 {
cp.IgnoreFields = *datadogIgnoreFields
}
if err := vlstorage.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
}
lmp := cp.NewLogMessageProcessor("datadog")
n, err := readLogsRequest(ts, data, lmp.AddRow)
err = readLogsRequest(ts, data, lmp)
lmp.MustClose()
if n > 0 {
rowsIngestedTotal.Add(n)
}
if err != nil {
logger.Warnf("cannot decode log message in /api/v2/logs request: %s, stream fields: %s", err, cp.StreamFields)
return true
@@ -105,47 +115,118 @@ func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
var (
v2LogsRequestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/datadog/api/v2/logs"}`)
rowsIngestedTotal = metrics.NewCounter(`vl_rows_ingested_total{type="datadog"}`)
v2LogsRequestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/datadog/api/v2/logs"}`)
)
// datadog message field has two formats:
// - regular log message with string text
// - nested json format for serverless plugins
// which has the following format:
// {"message": {"message": "text","lamdba": {"arn": "string","requestID": "string"}, "timestamp": int64} }
//
// See https://github.com/DataDog/datadog-lambda-extension/blob/28b90c7e4e985b72d60b5f5a5147c69c7ac693c4/bottlecap/src/logs/lambda/mod.rs#L24
func appendMsgFields(fields []logstorage.Field, v *fastjson.Value) ([]logstorage.Field, error) {
switch v.Type() {
case fastjson.TypeString:
val := v.GetStringBytes()
fields = append(fields, logstorage.Field{
Name: "_msg",
Value: bytesutil.ToUnsafeString(val),
})
case fastjson.TypeObject:
var firstErr error
v.GetObject().Visit(func(k []byte, v *fastjson.Value) {
if firstErr != nil {
return
}
switch bytesutil.ToUnsafeString(k) {
case "message":
val := v.GetStringBytes()
fields = append(fields, logstorage.Field{
Name: "_msg",
Value: bytesutil.ToUnsafeString(val),
})
case "status":
val := v.GetStringBytes()
fields = append(fields, logstorage.Field{
Name: "status",
Value: bytesutil.ToUnsafeString(val),
})
case "lamdba":
obj, err := v.Object()
if err != nil {
firstErr = err
firstErr = fmt.Errorf("unexpected lambda value type for %q:%q; want object", k, v)
return
}
obj.Visit(func(k []byte, v *fastjson.Value) {
if firstErr != nil {
return
}
val, err := v.StringBytes()
if err != nil {
firstErr = fmt.Errorf("unexpected lambda label value type for %q:%q; want string", k, v)
return
}
fields = append(fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(k),
Value: bytesutil.ToUnsafeString(val),
})
})
}
})
default:
return fields, fmt.Errorf("unsupported message type %q", v.Type().String())
}
return fields, nil
}
// readLogsRequest parses data according to DataDog logs format
// https://docs.datadoghq.com/api/latest/logs/#send-logs
func readLogsRequest(ts int64, data []byte, processLogMessage func(int64, []logstorage.Field)) (int, error) {
func readLogsRequest(ts int64, data []byte, lmp insertutils.LogMessageProcessor) error {
p := parserPool.Get()
defer parserPool.Put(p)
v, err := p.ParseBytes(data)
if err != nil {
return 0, fmt.Errorf("cannot parse JSON request body: %w", err)
return fmt.Errorf("cannot parse JSON request body: %w", err)
}
records, err := v.Array()
if err != nil {
return 0, fmt.Errorf("cannot extract array from parsed JSON: %w", err)
return fmt.Errorf("cannot extract array from parsed JSON: %w", err)
}
var fields []logstorage.Field
for m, r := range records {
for _, r := range records {
o, err := r.Object()
if err != nil {
return m + 1, fmt.Errorf("could not extract log record: %w", err)
return fmt.Errorf("could not extract log record: %w", err)
}
o.Visit(func(k []byte, v *fastjson.Value) {
if err != nil {
return
}
val, e := v.StringBytes()
if e != nil {
err = fmt.Errorf("unexpected label value type for %q:%q; want string", k, v)
return
}
switch string(k) {
switch bytesutil.ToUnsafeString(k) {
case "message":
fields = append(fields, logstorage.Field{
Name: "_msg",
Value: bytesutil.ToUnsafeString(val),
})
fields, err = appendMsgFields(fields, v)
if err != nil {
return
}
case "timestamp":
val, e := v.Int64()
if e != nil {
err = fmt.Errorf("failed to parse timestamp for %q:%q", k, v)
}
if val > 0 {
ts = val * 1e6
}
case "ddtags":
// https://docs.datadoghq.com/getting_started/tagging/
val, e := v.StringBytes()
if e != nil {
err = fmt.Errorf("unexpected label value type for %q:%q; want string", k, v)
return
}
var pair []byte
idx := 0
for idx >= 0 {
@@ -172,14 +253,22 @@ func readLogsRequest(ts int64, data []byte, processLogMessage func(int64, []logs
}
}
default:
val, e := v.StringBytes()
if e != nil {
err = fmt.Errorf("unexpected label value type for %q:%q; want string", k, v)
return
}
fields = append(fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(k),
Value: bytesutil.ToUnsafeString(val),
})
}
})
processLogMessage(ts, fields)
if err != nil {
return err
}
lmp.AddRow(ts, fields, nil)
fields = fields[:0]
}
return len(records), nil
return nil
}

View File

@@ -1,12 +1,10 @@
package datadog
import (
"fmt"
"strings"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
)
func TestReadLogsRequestFailure(t *testing.T) {
@@ -15,16 +13,12 @@ func TestReadLogsRequestFailure(t *testing.T) {
ts := time.Now().UnixNano()
processLogMessage := func(timestamp int64, fields []logstorage.Field) {
t.Fatalf("unexpected call to processLogMessage with timestamp=%d, fields=%s", timestamp, fields)
}
rows, err := readLogsRequest(ts, []byte(data), processLogMessage)
if err == nil {
lmp := &insertutils.TestLogMessageProcessor{}
if err := readLogsRequest(ts, []byte(data), lmp); err == nil {
t.Fatalf("expecting non-empty error")
}
if rows != 0 {
t.Fatalf("unexpected non-zero rows=%d", rows)
if err := lmp.Verify(nil, ""); err != nil {
t.Fatalf("unexpected error: %s", err)
}
}
f("foobar")
@@ -39,30 +33,16 @@ func TestReadLogsRequestSuccess(t *testing.T) {
t.Helper()
ts := time.Now().UnixNano()
var result string
processLogMessage := func(_ int64, fields []logstorage.Field) {
a := make([]string, len(fields))
for i, f := range fields {
a[i] = fmt.Sprintf("%q:%q", f.Name, f.Value)
}
if len(result) > 0 {
result = result + "\n"
}
s := "{" + strings.Join(a, ",") + "}"
result += s
var timestampsExpected []int64
for i := 0; i < rowsExpected; i++ {
timestampsExpected = append(timestampsExpected, ts)
}
// Read the request without compression
rows, err := readLogsRequest(ts, []byte(data), processLogMessage)
if err != nil {
lmp := &insertutils.TestLogMessageProcessor{}
if err := readLogsRequest(ts, []byte(data), lmp); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if rows != rowsExpected {
t.Fatalf("unexpected rows read; got %d; want %d", rows, rowsExpected)
}
if result != resultExpected {
t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
if err := lmp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatalf("unexpected error: %s", err)
}
}
@@ -74,6 +54,12 @@ func TestReadLogsRequestSuccess(t *testing.T) {
"hostname":"127.0.0.1",
"message":"bar",
"service":"test"
}, {
"ddsource":"nginx",
"ddtags":"tag1:value1,tag2:value2",
"hostname":"127.0.0.1",
"message":{"message": "nested"},
"service":"test"
}, {
"ddsource":"nginx",
"ddtags":"tag1:value1,tag2:value2",
@@ -106,8 +92,9 @@ func TestReadLogsRequestSuccess(t *testing.T) {
"service":"test"
}
]`
rowsExpected := 6
rowsExpected := 7
resultExpected := `{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"bar","service":"test"}
{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"nested","service":"test"}
{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"foobar","service":"test"}
{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"baz","service":"test"}
{"ddsource":"nginx","tag1":"value1","tag2":"value2","hostname":"127.0.0.1","_msg":"xyz","service":"test"}

View File

@@ -1,8 +1,6 @@
package elasticsearch
import (
"bufio"
"errors"
"flag"
"fmt"
"io"
@@ -103,7 +101,8 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
}
lmp := cp.NewLogMessageProcessor("elasticsearch_bulk")
isGzip := r.Header.Get("Content-Encoding") == "gzip"
n, err := readBulkRequest(r.Body, isGzip, cp.TimeField, cp.MsgFields, lmp)
streamName := fmt.Sprintf("remoteAddr=%s, requestURI=%q", httpserver.GetQuotedRemoteAddr(r), r.RequestURI)
n, err := readBulkRequest(streamName, r.Body, isGzip, cp.TimeField, cp.MsgFields, lmp)
lmp.MustClose()
if err != nil {
logger.Warnf("cannot decode log message #%d in /_bulk request: %s, stream fields: %s", n, err, cp.StreamFields)
@@ -129,11 +128,10 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
var (
bulkRequestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/elasticsearch/_bulk"}`)
rowsIngestedTotal = metrics.NewCounter(`vl_rows_ingested_total{type="elasticsearch_bulk"}`)
bulkRequestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/elasticsearch/_bulk"}`)
)
func readBulkRequest(r io.Reader, isGzip bool, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (int, error) {
func readBulkRequest(streamName string, r io.Reader, isGzip bool, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (int, error) {
// See https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html
if isGzip {
@@ -148,48 +146,29 @@ func readBulkRequest(r io.Reader, isGzip bool, timeField string, msgFields []str
wcr := writeconcurrencylimiter.GetReader(r)
defer writeconcurrencylimiter.PutReader(wcr)
lb := lineBufferPool.Get()
defer lineBufferPool.Put(lb)
lb.B = bytesutil.ResizeNoCopyNoOverallocate(lb.B, insertutils.MaxLineSizeBytes.IntN())
sc := bufio.NewScanner(wcr)
sc.Buffer(lb.B, len(lb.B))
lr := insertutils.NewLineReader(streamName, wcr)
n := 0
nCheckpoint := 0
for {
ok, err := readBulkLine(sc, timeField, msgFields, lmp)
ok, err := readBulkLine(lr, timeField, msgFields, lmp)
wcr.DecConcurrency()
if err != nil || !ok {
rowsIngestedTotal.Add(n - nCheckpoint)
return n, err
}
n++
if batchSize := n - nCheckpoint; n >= 1000 {
rowsIngestedTotal.Add(batchSize)
nCheckpoint = n
}
}
}
var lineBufferPool bytesutil.ByteBufferPool
func readBulkLine(sc *bufio.Scanner, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (bool, error) {
func readBulkLine(lr *insertutils.LineReader, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (bool, error) {
var line []byte
// Read the command, must be "create" or "index"
for len(line) == 0 {
if !sc.Scan() {
if err := sc.Err(); err != nil {
if errors.Is(err, bufio.ErrTooLong) {
return false, fmt.Errorf(`cannot read "create" or "index" command, since its size exceeds -insert.maxLineSizeBytes=%d`,
insertutils.MaxLineSizeBytes.IntN())
}
return false, err
}
return false, nil
if !lr.NextLine() {
err := lr.Err()
return false, err
}
line = sc.Bytes()
line = lr.Line
}
lineStr := bytesutil.ToUnsafeString(line)
if !strings.Contains(lineStr, `"create"`) && !strings.Contains(lineStr, `"index"`) {
@@ -197,16 +176,18 @@ func readBulkLine(sc *bufio.Scanner, timeField string, msgFields []string, lmp i
}
// Decode log message
if !sc.Scan() {
if err := sc.Err(); err != nil {
if errors.Is(err, bufio.ErrTooLong) {
return false, fmt.Errorf("cannot read log message, since its size exceeds -insert.maxLineSizeBytes=%d", insertutils.MaxLineSizeBytes.IntN())
}
if !lr.NextLine() {
if err := lr.Err(); err != nil {
return false, err
}
return false, fmt.Errorf(`missing log message after the "create" or "index" command`)
}
line = sc.Bytes()
line = lr.Line
if len(line) == 0 {
// Special case - the line could be too long, so it was skipped.
// Continue parsing next lines.
return true, nil
}
p := logstorage.GetJSONParser()
if err := p.ParseLogMessage(line); err != nil {
return false, fmt.Errorf("cannot parse json-encoded log entry: %w", err)
@@ -220,7 +201,7 @@ func readBulkLine(sc *bufio.Scanner, timeField string, msgFields []string, lmp i
ts = time.Now().UnixNano()
}
logstorage.RenameField(p.Fields, msgFields, "_msg")
lmp.AddRow(ts, p.Fields)
lmp.AddRow(ts, p.Fields, nil)
logstorage.PutJSONParser(p)
return true, nil

View File

@@ -15,7 +15,7 @@ func TestReadBulkRequest_Failure(t *testing.T) {
tlp := &insertutils.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
rows, err := readBulkRequest(r, false, "_time", []string{"_msg"}, tlp)
rows, err := readBulkRequest("test", r, false, "_time", []string{"_msg"}, tlp)
if err == nil {
t.Fatalf("expecting non-empty error")
}
@@ -33,7 +33,7 @@ foobar`)
}
func TestReadBulkRequest_Success(t *testing.T) {
f := func(data, timeField, msgField string, rowsExpected int, timestampsExpected []int64, resultExpected string) {
f := func(data, timeField, msgField string, timestampsExpected []int64, resultExpected string) {
t.Helper()
msgFields := []string{"non_existing_foo", msgField, "non_exiting_bar"}
@@ -41,14 +41,14 @@ func TestReadBulkRequest_Success(t *testing.T) {
// Read the request without compression
r := bytes.NewBufferString(data)
rows, err := readBulkRequest(r, false, timeField, msgFields, tlp)
rows, err := readBulkRequest("test", r, false, timeField, msgFields, tlp)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if rows != rowsExpected {
t.Fatalf("unexpected rows read; got %d; want %d", rows, rowsExpected)
if rows != len(timestampsExpected) {
t.Fatalf("unexpected rows read; got %d; want %d", rows, len(timestampsExpected))
}
if err := tlp.Verify(rowsExpected, timestampsExpected, resultExpected); err != nil {
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
@@ -56,22 +56,22 @@ func TestReadBulkRequest_Success(t *testing.T) {
tlp = &insertutils.TestLogMessageProcessor{}
compressedData := compressData(data)
r = bytes.NewBufferString(compressedData)
rows, err = readBulkRequest(r, true, timeField, msgFields, tlp)
rows, err = readBulkRequest("test", r, true, timeField, msgFields, tlp)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if rows != rowsExpected {
t.Fatalf("unexpected rows read; got %d; want %d", rows, rowsExpected)
if rows != len(timestampsExpected) {
t.Fatalf("unexpected rows read; got %d; want %d", rows, len(timestampsExpected))
}
if err := tlp.Verify(rowsExpected, timestampsExpected, resultExpected); err != nil {
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatalf("verification failure after compression: %s", err)
}
}
// Verify an empty data
f("", "_time", "_msg", 0, nil, "")
f("\n", "_time", "_msg", 0, nil, "")
f("\n\n", "_time", "_msg", 0, nil, "")
f("", "_time", "_msg", nil, "")
f("\n", "_time", "_msg", nil, "")
f("\n\n", "_time", "_msg", nil, "")
// Verify non-empty data
data := `{"create":{"_index":"filebeat-8.8.0"}}
@@ -85,13 +85,12 @@ func TestReadBulkRequest_Success(t *testing.T) {
`
timeField := "@timestamp"
msgField := "message"
rowsExpected := 4
timestampsExpected := []int64{1686026891735000000, 1686023292735000000, 1686026893735000000, 1686026893000000000}
resultExpected := `{"log.offset":"71770","log.file.path":"/var/log/auth.log","_msg":"foobar"}
{"_msg":"baz"}
{"_msg":"xyz","x":"y"}
{"_msg":"qwe rty"}`
f(data, timeField, msgField, rowsExpected, timestampsExpected, resultExpected)
f(data, timeField, msgField, timestampsExpected, resultExpected)
}
func compressData(s string) string {

View File

@@ -41,7 +41,7 @@ func benchmarkReadBulkRequest(b *testing.B, isGzip bool) {
r := &bytes.Reader{}
for pb.Next() {
r.Reset(dataBytes)
_, err := readBulkRequest(r, isGzip, timeField, msgFields, blp)
_, err := readBulkRequest("test", r, isGzip, timeField, msgFields, blp)
if err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}

View File

@@ -137,10 +137,12 @@ func GetCommonParamsForSyslog(tenantID logstorage.TenantID, streamFields, ignore
// LogMessageProcessor is an interface for log message processors.
type LogMessageProcessor interface {
// AddRow must add row to the LogMessageProcessor with the given timestamp and the given fields.
// AddRow must add row to the LogMessageProcessor with the given timestamp and fields.
//
// If streamFields is non-nil, then the given streamFields must be used as log stream fields instead of pre-configured fields.
//
// The LogMessageProcessor implementation cannot hold references to fields, since the caller can re-use them.
AddRow(timestamp int64, fields []logstorage.Field)
AddRow(timestamp int64, fields, streamFields []logstorage.Field)
// MustClose() must flush all the remaining fields and free up resources occupied by LogMessageProcessor.
MustClose()
@@ -155,7 +157,8 @@ type logMessageProcessor struct {
cp *CommonParams
lr *logstorage.LogRows
processedBytesTotal *metrics.Counter
rowsIngestedTotal *metrics.Counter
bytesIngestedTotal *metrics.Counter
}
func (lmp *logMessageProcessor) initPeriodicFlush() {
@@ -185,12 +188,15 @@ func (lmp *logMessageProcessor) initPeriodicFlush() {
}
// AddRow adds new log message to lmp with the given timestamp and fields.
func (lmp *logMessageProcessor) AddRow(timestamp int64, fields []logstorage.Field) {
//
// If streamFields is non-nil, then it is used as log stream fields instead of the pre-configured stream fields.
func (lmp *logMessageProcessor) AddRow(timestamp int64, fields, streamFields []logstorage.Field) {
lmp.mu.Lock()
defer lmp.mu.Unlock()
n := getApproxJSONRowLen(fields)
lmp.processedBytesTotal.Add(n)
lmp.rowsIngestedTotal.Inc()
n := logstorage.EstimatedJSONRowLen(fields)
lmp.bytesIngestedTotal.Add(n)
if len(fields) > *MaxFieldsPerLine {
rf := logstorage.RowFormatter(fields)
@@ -199,7 +205,7 @@ func (lmp *logMessageProcessor) AddRow(timestamp int64, fields []logstorage.Fiel
return
}
lmp.lr.MustAdd(lmp.cp.TenantID, timestamp, fields)
lmp.lr.MustAdd(lmp.cp.TenantID, timestamp, fields, streamFields)
if lmp.cp.Debug {
s := lmp.lr.GetRowString(0)
lmp.lr.ResetKeepSettings()
@@ -212,16 +218,6 @@ func (lmp *logMessageProcessor) AddRow(timestamp int64, fields []logstorage.Fiel
}
}
// getApproxJSONRowLen returns an approximate length of the log entry with the given fields if represented as JSON.
func getApproxJSONRowLen(fields []logstorage.Field) int {
n := len("{}\n")
n += len(`"_time":""`) + len(time.RFC3339Nano)
for _, f := range fields {
n += len(`,"":""`) + len(f.Name) + len(f.Value)
}
return n
}
// flushLocked must be called under locked lmp.mu.
func (lmp *logMessageProcessor) flushLocked() {
lmp.lastFlushTime = time.Now()
@@ -244,12 +240,14 @@ func (lmp *logMessageProcessor) MustClose() {
// MustClose() must be called on the returned LogMessageProcessor when it is no longer needed.
func (cp *CommonParams) NewLogMessageProcessor(protocolName string) LogMessageProcessor {
lr := logstorage.GetLogRows(cp.StreamFields, cp.IgnoreFields, cp.ExtraFields, *defaultMsgValue)
processedBytesTotal := metrics.GetOrCreateCounter(fmt.Sprintf("vl_bytes_ingested_total{type=%q}", protocolName))
rowsIngestedTotal := metrics.GetOrCreateCounter(fmt.Sprintf("vl_rows_ingested_total{type=%q}", protocolName))
bytesIngestedTotal := metrics.GetOrCreateCounter(fmt.Sprintf("vl_bytes_ingested_total{type=%q}", protocolName))
lmp := &logMessageProcessor{
cp: cp,
lr: lr,
processedBytesTotal: processedBytesTotal,
rowsIngestedTotal: rowsIngestedTotal,
bytesIngestedTotal: bytesIngestedTotal,
stopCh: make(chan struct{}),
}

View File

@@ -0,0 +1,135 @@
package insertutils
import (
"bytes"
"errors"
"fmt"
"io"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
)
// LineReader reads newline-delimited lines from the underlying reader
type LineReader struct {
// Line contains the next line read after the call to NextLine
Line []byte
// name is the LineReader name
name string
// r is the underlying reader to read data from
r io.Reader
// buf is a buffer for reading the next line
buf []byte
// err is the last error when reading data from r
err error
// eofReached is set to true when all the data is read from r
eofReached bool
}
// NewLineReader returns LineReader for r.
func NewLineReader(name string, r io.Reader) *LineReader {
return &LineReader{
name: name,
r: r,
}
}
// NextLine reads the next line from the underlying reader.
//
// It returns true if the next line is successfully read into Line.
// If the line length exceeds MaxLineSizeBytes, then this line is skipped
// and an empty line is returned instead.
//
// If false is returned, then no more lines left to read from r.
// Check for Err in this case.
func (lr *LineReader) NextLine() bool {
for {
if len(lr.buf) == 0 {
if lr.err != nil || lr.eofReached {
return false
}
if !lr.readMoreData() {
return false
}
if len(lr.buf) == 0 && lr.eofReached {
return false
}
}
if n := bytes.IndexByte(lr.buf, '\n'); n >= 0 {
lr.Line = append(lr.Line[:0], lr.buf[:n]...)
lr.buf = append(lr.buf[:0], lr.buf[n+1:]...)
return true
}
if lr.eofReached {
lr.Line = append(lr.Line[:0], lr.buf...)
lr.buf = lr.buf[:0]
return true
}
if !lr.readMoreData() {
return false
}
}
}
// Err returns the last error after NextLine call.
func (lr *LineReader) Err() error {
if lr.err == nil {
return nil
}
return fmt.Errorf("%s: %s", lr.name, lr.err)
}
func (lr *LineReader) readMoreData() bool {
bufLen := len(lr.buf)
if bufLen >= MaxLineSizeBytes.IntN() {
logger.Warnf("%s: the line length exceeds -insert.maxLineSizeBytes=%d; skipping it; line contents=%q", lr.name, MaxLineSizeBytes.IntN(), lr.buf)
tooLongLinesSkipped.Inc()
return lr.skipUntilNextLine()
}
lr.buf = slicesutil.SetLength(lr.buf, MaxLineSizeBytes.IntN())
n, err := lr.r.Read(lr.buf[bufLen:])
lr.buf = lr.buf[:bufLen+n]
if err != nil {
if errors.Is(err, io.EOF) {
lr.eofReached = true
return true
}
lr.err = fmt.Errorf("cannot read the next line: %s", err)
}
return n > 0
}
var tooLongLinesSkipped = metrics.NewCounter("vl_too_long_lines_skipped_total")
func (lr *LineReader) skipUntilNextLine() bool {
for {
lr.buf = slicesutil.SetLength(lr.buf, MaxLineSizeBytes.IntN())
n, err := lr.r.Read(lr.buf)
lr.buf = lr.buf[:n]
if err != nil {
if errors.Is(err, io.EOF) {
lr.eofReached = true
lr.buf = lr.buf[:0]
return true
}
lr.err = fmt.Errorf("cannot skip the current line: %s", err)
return false
}
if n := bytes.IndexByte(lr.buf, '\n'); n >= 0 {
// Include \n in the buf, so too long line is replaced with an empty line.
// This is needed for maintaining synchronization consistency between lines
// in protocols such as Elasticsearch bulk import.
lr.buf = append(lr.buf[:0], lr.buf[n:]...)
return true
}
}
}
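A minimal usage sketch for the LineReader above (assuming fmt and strings are imported; the stream name and input are illustrative):
func exampleLineReader() {
	lr := NewLineReader("example-stream", strings.NewReader("foo\nbar\n"))
	for lr.NextLine() {
		// Line is valid only until the next NextLine call, so copy it if needed
		fmt.Printf("line: %q\n", lr.Line)
	}
	if err := lr.Err(); err != nil {
		fmt.Printf("read error: %s\n", err)
	}
}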

View File

@@ -0,0 +1,161 @@
package insertutils
import (
"bytes"
"fmt"
"io"
"reflect"
"testing"
)
func TestLineReader_Success(t *testing.T) {
f := func(data string, linesExpected []string) {
t.Helper()
r := bytes.NewBufferString(data)
lr := NewLineReader("foo", r)
var lines []string
for lr.NextLine() {
lines = append(lines, string(lr.Line))
}
if err := lr.Err(); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if lr.NextLine() {
t.Fatalf("expecting error on the second call to NextLine()")
}
if !reflect.DeepEqual(lines, linesExpected) {
t.Fatalf("unexpected lines\ngot\n%q\nwant\n%q", lines, linesExpected)
}
}
f("", nil)
f("\n", []string{""})
f("\n\n", []string{"", ""})
f("foo", []string{"foo"})
f("foo\n", []string{"foo"})
f("\nfoo", []string{"", "foo"})
f("foo\n\n", []string{"foo", ""})
f("foo\nbar", []string{"foo", "bar"})
f("foo\nbar\n", []string{"foo", "bar"})
f("\nfoo\n\nbar\n\n", []string{"", "foo", "", "bar", ""})
}
func TestLineReader_SkipUntilNextLine(t *testing.T) {
f := func(data string, linesExpected []string) {
t.Helper()
r := bytes.NewBufferString(data)
lr := NewLineReader("foo", r)
var lines []string
for lr.NextLine() {
lines = append(lines, string(lr.Line))
}
if err := lr.Err(); err != nil {
t.Fatalf("unexpected error for data=%q: %s", data, err)
}
if lr.NextLine() {
t.Fatalf("expecting error on the second call to NextLine()")
}
if !reflect.DeepEqual(lines, linesExpected) {
t.Fatalf("unexpected lines for data=%q\ngot\n%q\nwant\n%q", data, lines, linesExpected)
}
}
for _, overflow := range []int{0, 100, MaxLineSizeBytes.IntN(), MaxLineSizeBytes.IntN() + 1, 2 * MaxLineSizeBytes.IntN()} {
longLineLen := MaxLineSizeBytes.IntN() + overflow
longLine := string(make([]byte, longLineLen))
// Single long line
data := longLine
f(data, nil)
// Multiple long lines
data = longLine + "\n" + longLine
f(data, []string{""})
data = longLine + "\n" + longLine + "\n"
f(data, []string{"", ""})
// Long line in the middle
data = "foo\n" + longLine + "\nbar"
f(data, []string{"foo", "", "bar"})
// Multiple long lines in the middle
data = "foo\n" + longLine + "\n" + longLine + "\nbar"
f(data, []string{"foo", "", "", "bar"})
// Long line in the end
data = "foo\n" + longLine
f(data, []string{"foo"})
// Long line in the end
data = "foo\n" + longLine + "\n"
f(data, []string{"foo", ""})
}
}
func TestLineReader_Failure(t *testing.T) {
f := func(data string, linesExpected []string) {
t.Helper()
fr := &failureReader{
r: bytes.NewBufferString(data),
}
lr := NewLineReader("foo", fr)
var lines []string
for lr.NextLine() {
lines = append(lines, string(lr.Line))
}
if err := lr.Err(); err == nil {
t.Fatalf("expecting non-nil error")
}
if lr.NextLine() {
t.Fatalf("expecting error on the second call to NextLine()")
}
if err := lr.Err(); err == nil {
t.Fatalf("expecting non-nil error on the second call")
}
if !reflect.DeepEqual(lines, linesExpected) {
t.Fatalf("unexpected lines\ngot\n%q\nwant\n%q", lines, linesExpected)
}
}
f("", nil)
f("foo", nil)
f("foo\n", []string{"foo"})
f("\n", []string{""})
f("foo\nbar", []string{"foo"})
f("foo\nbar\n", []string{"foo", "bar"})
f("\nfoo\nbar\n\n", []string{"", "foo", "bar", ""})
// long line
longLineLen := MaxLineSizeBytes.IntN()
for _, overflow := range []int{0, 100, MaxLineSizeBytes.IntN(), MaxLineSizeBytes.IntN() + 1, 2 * MaxLineSizeBytes.IntN()} {
longLine := string(make([]byte, longLineLen+overflow))
data := longLine
f(data, nil)
data = "foo\n" + longLine
f(data, []string{"foo"})
data = longLine + "\nfoo"
f(data, []string{""})
data = longLine + "\nfoo\n"
f(data, []string{"", "foo"})
}
}
type failureReader struct {
r io.Reader
}
func (r *failureReader) Read(p []byte) (int, error) {
n, _ := r.r.Read(p)
if n > 0 {
return n, nil
}
return 0, fmt.Errorf("some error")
}

View File

@@ -15,7 +15,10 @@ type TestLogMessageProcessor struct {
}
// AddRow adds row with the given timestamp and fields to tlp
func (tlp *TestLogMessageProcessor) AddRow(timestamp int64, fields []logstorage.Field) {
func (tlp *TestLogMessageProcessor) AddRow(timestamp int64, fields, streamFields []logstorage.Field) {
if streamFields != nil {
panic(fmt.Errorf("BUG: streamFields must be nil; got %v", streamFields))
}
tlp.timestamps = append(tlp.timestamps, timestamp)
tlp.rows = append(tlp.rows, string(logstorage.MarshalFieldsToJSON(nil, fields)))
}
@@ -25,10 +28,10 @@ func (tlp *TestLogMessageProcessor) MustClose() {
}
// Verify verifies the number of rows, timestamps and results after AddRow calls.
func (tlp *TestLogMessageProcessor) Verify(rowsExpected int, timestampsExpected []int64, resultExpected string) error {
func (tlp *TestLogMessageProcessor) Verify(timestampsExpected []int64, resultExpected string) error {
result := strings.Join(tlp.rows, "\n")
if len(tlp.rows) != rowsExpected {
return fmt.Errorf("unexpected rows read; got %d; want %d;\nrows read:\n%s\nrows wanted\n%s", len(tlp.rows), rowsExpected, result, resultExpected)
if len(tlp.rows) != len(timestampsExpected) {
return fmt.Errorf("unexpected rows read; got %d; want %d;\nrows read:\n%s\nrows wanted\n%s", len(tlp.rows), len(timestampsExpected), result, resultExpected)
}
if !reflect.DeepEqual(tlp.timestamps, timestampsExpected) {
@@ -45,7 +48,7 @@ func (tlp *TestLogMessageProcessor) Verify(rowsExpected int, timestampsExpected
type BenchmarkLogMessageProcessor struct{}
// AddRow implements LogMessageProcessor interface.
func (blp *BenchmarkLogMessageProcessor) AddRow(_ int64, _ []logstorage.Field) {
func (blp *BenchmarkLogMessageProcessor) AddRow(_ int64, _, _ []logstorage.Field) {
}
// MustClose implements LogMessageProcessor interface.

View File

@@ -121,7 +121,7 @@ func handleJournald(r *http.Request, w http.ResponseWriter) {
}
lmp := cp.NewLogMessageProcessor("journald")
n, err := parseJournaldRequest(data, lmp, cp)
err = parseJournaldRequest(data, lmp, cp)
lmp.MustClose()
if err != nil {
errorsTotal.Inc()
@@ -129,8 +129,6 @@ func handleJournald(r *http.Request, w http.ResponseWriter) {
return
}
rowsIngestedJournaldTotal.Add(n)
// update requestJournaldDuration only for successfully parsed requests
// There is no need in updating requestJournaldDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
@@ -138,8 +136,6 @@ func handleJournald(r *http.Request, w http.ResponseWriter) {
}
var (
rowsIngestedJournaldTotal = metrics.NewCounter(`vl_rows_ingested_total{type="journald"}`)
requestsJournaldTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/journald/upload"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/journald/upload"}`)
@@ -147,7 +143,7 @@ var (
)
// See https://systemd.io/JOURNAL_EXPORT_FORMATS/#journal-export-format
func parseJournaldRequest(data []byte, lmp insertutils.LogMessageProcessor, cp *insertutils.CommonParams) (rowsIngested int, err error) {
func parseJournaldRequest(data []byte, lmp insertutils.LogMessageProcessor, cp *insertutils.CommonParams) error {
var fields []logstorage.Field
var ts int64
var size uint64
@@ -170,15 +166,14 @@ func parseJournaldRequest(data []byte, lmp insertutils.LogMessageProcessor, cp *
if ts == 0 {
ts = currentTimestamp
}
lmp.AddRow(ts, fields)
rowsIngested++
lmp.AddRow(ts, fields, nil)
fields = fields[:0]
}
// skip newline separator
data = data[1:]
continue
case idx < 0:
return rowsIngested, fmt.Errorf("missing new line separator, unread data left=%d", len(data))
return fmt.Errorf("missing new line separator, unread data left=%d", len(data))
}
idx = bytes.IndexByte(line, '=')
@@ -191,46 +186,46 @@ func parseJournaldRequest(data []byte, lmp insertutils.LogMessageProcessor, cp *
} else {
name = bytesutil.ToUnsafeString(line)
if len(data) == 0 {
return rowsIngested, fmt.Errorf("unexpected zero data for binary field value of key=%s", name)
return fmt.Errorf("unexpected zero data for binary field value of key=%s", name)
}
// size of binary data encoded as le i64 at the beginning
idx, err := binary.Decode(data, binary.LittleEndian, &size)
if err != nil {
return rowsIngested, fmt.Errorf("failed to extract binary field %q value size: %w", name, err)
return fmt.Errorf("failed to extract binary field %q value size: %w", name, err)
}
// skip binary data size
data = data[idx:]
if size == 0 {
return rowsIngested, fmt.Errorf("unexpected zero binary data size decoded %d", size)
return fmt.Errorf("unexpected zero binary data size decoded %d", size)
}
if int(size) > len(data) {
return rowsIngested, fmt.Errorf("binary data size=%d cannot exceed size of the data at buffer=%d", size, len(data))
return fmt.Errorf("binary data size=%d cannot exceed size of the data at buffer=%d", size, len(data))
}
value = bytesutil.ToUnsafeString(data[:size])
data = data[int(size):]
// binary data must have a new line separator before the next line or field
if len(data) == 0 {
return rowsIngested, fmt.Errorf("unexpected empty buffer after binary field=%s read", name)
return fmt.Errorf("unexpected empty buffer after binary field=%s read", name)
}
lastB := data[0]
if lastB != '\n' {
return rowsIngested, fmt.Errorf("expected new line separator after binary field=%s, got=%s", name, string(lastB))
return fmt.Errorf("expected new line separator after binary field=%s, got=%s", name, string(lastB))
}
data = data[1:]
}
// https://github.com/systemd/systemd/blob/main/src/libsystemd/sd-journal/journal-file.c#L1703
if len(name) > journaldEntryMaxNameLen {
return rowsIngested, fmt.Errorf("journald entry name should not exceed %d symbols, got: %q", journaldEntryMaxNameLen, name)
return fmt.Errorf("journald entry name should not exceed %d symbols, got: %q", journaldEntryMaxNameLen, name)
}
if !allowedJournaldEntryNameChars.MatchString(name) {
return rowsIngested, fmt.Errorf("journald entry name should consist of `A-Z0-9_` characters and must start from non-digit symbol")
return fmt.Errorf("journald entry name should consist of `A-Z0-9_` characters and must start from non-digit symbol")
}
if name == cp.TimeField {
ts, err = strconv.ParseInt(value, 10, 64)
n, err := strconv.ParseInt(value, 10, 64)
if err != nil {
return 0, fmt.Errorf("failed to parse Journald timestamp, %w", err)
return fmt.Errorf("failed to parse Journald timestamp, %w", err)
}
ts *= 1e3
ts = n * 1e3
continue
}
@@ -249,8 +244,7 @@ func parseJournaldRequest(data []byte, lmp insertutils.LogMessageProcessor, cp *
if ts == 0 {
ts = currentTimestamp
}
lmp.AddRow(ts, fields)
rowsIngested++
lmp.AddRow(ts, fields, nil)
}
return rowsIngested, nil
return nil
}

View File

@@ -14,12 +14,11 @@ func TestPushJournaldOk(t *testing.T) {
TimeField: "__REALTIME_TIMESTAMP",
MsgFields: []string{"MESSAGE"},
}
n, err := parseJournaldRequest([]byte(src), tlp, cp)
if err != nil {
if err := parseJournaldRequest([]byte(src), tlp, cp); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(n, timestampsExpected, resultExpected); err != nil {
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
@@ -50,8 +49,7 @@ func TestPushJournald_Failure(t *testing.T) {
TimeField: "__REALTIME_TIMESTAMP",
MsgFields: []string{"MESSAGE"},
}
_, err := parseJournaldRequest([]byte(data), tlp, cp)
if err == nil {
if err := parseJournaldRequest([]byte(data), tlp, cp); err == nil {
t.Fatalf("expected non nil error")
}
}

View File

@@ -1,8 +1,6 @@
package jsonline
import (
"bufio"
"errors"
"fmt"
"io"
"net/http"
@@ -10,7 +8,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
@@ -53,7 +50,8 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) {
}
lmp := cp.NewLogMessageProcessor("jsonline")
err = processStreamInternal(reader, cp.TimeField, cp.MsgFields, lmp)
streamName := fmt.Sprintf("remoteAddr=%s, requestURI=%q", httpserver.GetQuotedRemoteAddr(r), r.RequestURI)
err = processStreamInternal(streamName, reader, cp.TimeField, cp.MsgFields, lmp)
lmp.MustClose()
if err != nil {
@@ -66,20 +64,15 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) {
}
}
func processStreamInternal(r io.Reader, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) error {
func processStreamInternal(streamName string, r io.Reader, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) error {
wcr := writeconcurrencylimiter.GetReader(r)
defer writeconcurrencylimiter.PutReader(wcr)
lb := lineBufferPool.Get()
defer lineBufferPool.Put(lb)
lb.B = bytesutil.ResizeNoCopyNoOverallocate(lb.B, insertutils.MaxLineSizeBytes.IntN())
sc := bufio.NewScanner(wcr)
sc.Buffer(lb.B, len(lb.B))
lr := insertutils.NewLineReader(streamName, wcr)
n := 0
for {
ok, err := readLine(sc, timeField, msgFields, lmp)
ok, err := readLine(lr, timeField, msgFields, lmp)
wcr.DecConcurrency()
if err != nil {
errorsTotal.Inc()
@@ -89,23 +82,17 @@ func processStreamInternal(r io.Reader, timeField string, msgFields []string, lm
return nil
}
n++
rowsIngestedTotal.Inc()
}
}
func readLine(sc *bufio.Scanner, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (bool, error) {
func readLine(lr *insertutils.LineReader, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (bool, error) {
var line []byte
for len(line) == 0 {
if !sc.Scan() {
if err := sc.Err(); err != nil {
if errors.Is(err, bufio.ErrTooLong) {
return false, fmt.Errorf(`cannot read json line, since its size exceeds -insert.maxLineSizeBytes=%d`, insertutils.MaxLineSizeBytes.IntN())
}
return false, err
}
return false, nil
if !lr.NextLine() {
err := lr.Err()
return false, err
}
line = sc.Bytes()
line = lr.Line
}
p := logstorage.GetJSONParser()
@@ -117,17 +104,13 @@ func readLine(sc *bufio.Scanner, timeField string, msgFields []string, lmp inser
return false, fmt.Errorf("cannot get timestamp: %w", err)
}
logstorage.RenameField(p.Fields, msgFields, "_msg")
lmp.AddRow(ts, p.Fields)
lmp.AddRow(ts, p.Fields, nil)
logstorage.PutJSONParser(p)
return true, nil
}
var lineBufferPool bytesutil.ByteBufferPool
var (
rowsIngestedTotal = metrics.NewCounter(`vl_rows_ingested_total{type="jsonline"}`)
requestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/jsonline"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/jsonline"}`)
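
The bufio.Scanner plumbing above is replaced by insertutils.LineReader, which carries a stream name, presumably so read errors can identify the source connection. Based on the call sites in this diff (NewLineReader, NextLine, Err, Line), the read loop pattern is roughly the following sketch; handleLine is a hypothetical callback:

func readAllLines(streamName string, r io.Reader, handleLine func(line []byte) error) error {
	lr := insertutils.NewLineReader(streamName, r)
	for lr.NextLine() {
		// lr.Line is assumed to be valid only until the next NextLine call.
		if err := handleLine(lr.Line); err != nil {
			return err
		}
	}
	return lr.Err() // nil on clean EOF
}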

View File

@@ -8,17 +8,17 @@ import (
)
func TestProcessStreamInternal_Success(t *testing.T) {
f := func(data, timeField, msgField string, rowsExpected int, timestampsExpected []int64, resultExpected string) {
f := func(data, timeField, msgField string, timestampsExpected []int64, resultExpected string) {
t.Helper()
msgFields := []string{msgField}
tlp := &insertutils.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
if err := processStreamInternal(r, timeField, msgFields, tlp); err != nil {
if err := processStreamInternal("test", r, timeField, msgFields, tlp); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(rowsExpected, timestampsExpected, resultExpected); err != nil {
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
@@ -29,12 +29,11 @@ func TestProcessStreamInternal_Success(t *testing.T) {
`
timeField := "@timestamp"
msgField := "message"
rowsExpected := 3
timestampsExpected := []int64{1686026891735000000, 1686023292735000000, 1686026893735000000}
resultExpected := `{"log.offset":"71770","log.file.path":"/var/log/auth.log","_msg":"foobar"}
{"_msg":"baz"}
{"_msg":"xyz","x":"y"}`
f(data, timeField, msgField, rowsExpected, timestampsExpected, resultExpected)
f(data, timeField, msgField, timestampsExpected, resultExpected)
// Non-existing msgField
data = `{"@timestamp":"2023-06-06T04:48:11.735Z","log":{"offset":71770,"file":{"path":"/var/log/auth.log"}},"message":"foobar"}
@@ -42,11 +41,10 @@ func TestProcessStreamInternal_Success(t *testing.T) {
`
timeField = "@timestamp"
msgField = "foobar"
rowsExpected = 2
timestampsExpected = []int64{1686026891735000000, 1686023292735000000}
resultExpected = `{"log.offset":"71770","log.file.path":"/var/log/auth.log","message":"foobar"}
{"message":"baz"}`
f(data, timeField, msgField, rowsExpected, timestampsExpected, resultExpected)
f(data, timeField, msgField, timestampsExpected, resultExpected)
}
func TestProcessStreamInternal_Failure(t *testing.T) {
@@ -55,7 +53,7 @@ func TestProcessStreamInternal_Failure(t *testing.T) {
tlp := &insertutils.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
if err := processStreamInternal(r, "time", nil, tlp); err == nil {
if err := processStreamInternal("test", r, "time", nil, tlp); err == nil {
t.Fatalf("expecting non-nil error")
}
}

View File

@@ -54,15 +54,14 @@ func handleJSON(r *http.Request, w http.ResponseWriter) {
return
}
lmp := cp.NewLogMessageProcessor("loki_json")
n, err := parseJSONRequest(data, lmp)
useDefaultStreamFields := len(cp.StreamFields) == 0
err = parseJSONRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
if err != nil {
httpserver.Errorf(w, r, "cannot parse Loki json request: %s; data=%s", err, data)
return
}
rowsIngestedJSONTotal.Add(n)
// update requestJSONDuration only for successfully parsed requests
// There is no need to update requestJSONDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
@@ -70,31 +69,29 @@ func handleJSON(r *http.Request, w http.ResponseWriter) {
}
var (
requestsJSONTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="json"}`)
rowsIngestedJSONTotal = metrics.NewCounter(`vl_rows_ingested_total{type="loki_json"}`)
requestJSONDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="json"}`)
requestsJSONTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="json"}`)
requestJSONDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="json"}`)
)
func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, error) {
func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefaultStreamFields bool) error {
p := parserPool.Get()
defer parserPool.Put(p)
v, err := p.ParseBytes(data)
if err != nil {
return 0, fmt.Errorf("cannot parse JSON request body: %w", err)
return fmt.Errorf("cannot parse JSON request body: %w", err)
}
streamsV := v.Get("streams")
if streamsV == nil {
return 0, fmt.Errorf("missing `streams` item in the parsed JSON")
return fmt.Errorf("missing `streams` item in the parsed JSON")
}
streams, err := streamsV.Array()
if err != nil {
return 0, fmt.Errorf("`streams` item in the parsed JSON must contain an array; got %q", streamsV)
return fmt.Errorf("`streams` item in the parsed JSON must contain an array; got %q", streamsV)
}
currentTimestamp := time.Now().UnixNano()
var commonFields []logstorage.Field
rowsIngested := 0
for _, stream := range streams {
// populate common labels from `stream` dict
commonFields = commonFields[:0]
@@ -103,7 +100,7 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, er
if labelsV != nil {
o, err := labelsV.Object()
if err != nil {
return rowsIngested, fmt.Errorf("`stream` item in the parsed JSON must contain an object; got %q", labelsV)
return fmt.Errorf("`stream` item in the parsed JSON must contain an object; got %q", labelsV)
}
labels = o
}
@@ -119,37 +116,37 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, er
})
})
if err != nil {
return rowsIngested, fmt.Errorf("error when parsing `stream` object: %w", err)
return fmt.Errorf("error when parsing `stream` object: %w", err)
}
// populate messages from `values` array
linesV := stream.Get("values")
if linesV == nil {
return rowsIngested, fmt.Errorf("missing `values` item in the parsed `stream` object %q", stream)
return fmt.Errorf("missing `values` item in the parsed `stream` object %q", stream)
}
lines, err := linesV.Array()
if err != nil {
return rowsIngested, fmt.Errorf("`values` item in the parsed JSON must contain an array; got %q", linesV)
return fmt.Errorf("`values` item in the parsed JSON must contain an array; got %q", linesV)
}
fields := commonFields
for _, line := range lines {
lineA, err := line.Array()
if err != nil {
return rowsIngested, fmt.Errorf("unexpected contents of `values` item; want array; got %q", line)
return fmt.Errorf("unexpected contents of `values` item; want array; got %q", line)
}
if len(lineA) < 2 || len(lineA) > 3 {
return rowsIngested, fmt.Errorf("unexpected number of values in `values` item array %q; got %d want 2 or 3", line, len(lineA))
return fmt.Errorf("unexpected number of values in `values` item array %q; got %d want 2 or 3", line, len(lineA))
}
// parse timestamp
timestamp, err := lineA[0].StringBytes()
if err != nil {
return rowsIngested, fmt.Errorf("unexpected log timestamp type for %q; want string", lineA[0])
return fmt.Errorf("unexpected log timestamp type for %q; want string", lineA[0])
}
ts, err := parseLokiTimestamp(bytesutil.ToUnsafeString(timestamp))
if err != nil {
return rowsIngested, fmt.Errorf("cannot parse log timestamp %q: %w", timestamp, err)
return fmt.Errorf("cannot parse log timestamp %q: %w", timestamp, err)
}
if ts == 0 {
ts = currentTimestamp
@@ -158,7 +155,7 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, er
// parse log message
msg, err := lineA[1].StringBytes()
if err != nil {
return rowsIngested, fmt.Errorf("unexpected log message type for %q; want string", lineA[1])
return fmt.Errorf("unexpected log message type for %q; want string", lineA[1])
}
fields = append(fields[:len(commonFields)], logstorage.Field{
@@ -170,7 +167,7 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, er
if len(lineA) > 2 {
structuredMetadata, err := lineA[2].Object()
if err != nil {
return rowsIngested, fmt.Errorf("unexpected structured metadata type for %q; want JSON object", lineA[2])
return fmt.Errorf("unexpected structured metadata type for %q; want JSON object", lineA[2])
}
structuredMetadata.Visit(func(k []byte, v *fastjson.Value) {
@@ -186,15 +183,18 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, er
})
})
if err != nil {
return rowsIngested, fmt.Errorf("error when parsing `structuredMetadata` object: %w", err)
return fmt.Errorf("error when parsing `structuredMetadata` object: %w", err)
}
}
lmp.AddRow(ts, fields)
var streamFields []logstorage.Field
if useDefaultStreamFields {
streamFields = commonFields
}
lmp.AddRow(ts, fields, streamFields)
}
rowsIngested += len(lines)
}
return rowsIngested, nil
return nil
}
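
For reference, the body parsed above follows the Loki push API shape: a streams array whose items carry a stream label object plus a values array of [timestamp, message] or [timestamp, message, structured_metadata] tuples. A minimal valid payload (label names and values are illustrative):

const lokiPushExample = `{
  "streams": [
    {
      "stream": {"app": "nginx", "env": "prod"},
      "values": [
        ["1686026891735000000", "GET / HTTP/1.1 200"],
        ["1686026892735000000", "GET /health HTTP/1.1 200", {"trace_id": "abc123"}]
      ]
    }
  ]
}`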
func parseLokiTimestamp(s string) (int64, error) {

View File

@@ -11,12 +11,11 @@ func TestParseJSONRequest_Failure(t *testing.T) {
t.Helper()
tlp := &insertutils.TestLogMessageProcessor{}
n, err := parseJSONRequest([]byte(s), tlp)
if err == nil {
if err := parseJSONRequest([]byte(s), tlp, false); err == nil {
t.Fatalf("expecting non-nil error")
}
if n != 0 {
t.Fatalf("unexpected number of parsed lines: %d; want 0", n)
if err := tlp.Verify(nil, ""); err != nil {
t.Fatalf("unexpected error: %s", err)
}
}
f(``)
@@ -66,11 +65,10 @@ func TestParseJSONRequest_Success(t *testing.T) {
tlp := &insertutils.TestLogMessageProcessor{}
n, err := parseJSONRequest([]byte(s), tlp)
if err != nil {
if err := parseJSONRequest([]byte(s), tlp, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(n, timestampsExpected, resultExpected); err != nil {
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}

View File

@@ -28,8 +28,7 @@ func benchmarkParseJSONRequest(b *testing.B, streams, rows, labels int) {
b.RunParallel(func(pb *testing.PB) {
data := getJSONBody(streams, rows, labels)
for pb.Next() {
_, err := parseJSONRequest(data, blp)
if err != nil {
if err := parseJSONRequest(data, blp, false); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}

View File

@@ -45,15 +45,14 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
return
}
lmp := cp.NewLogMessageProcessor("loki_protobuf")
n, err := parseProtobufRequest(data, lmp)
useDefaultStreamFields := len(cp.StreamFields) == 0
err = parseProtobufRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
if err != nil {
httpserver.Errorf(w, r, "cannot parse Loki protobuf request: %s", err)
return
}
rowsIngestedProtobufTotal.Add(n)
// update requestProtobufDuration only for successfully parsed requests
// There is no need to update requestProtobufDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
@@ -61,18 +60,17 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
}
var (
requestsProtobufTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="protobuf"}`)
rowsIngestedProtobufTotal = metrics.NewCounter(`vl_rows_ingested_total{type="loki_protobuf"}`)
requestProtobufDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="protobuf"}`)
requestsProtobufTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="protobuf"}`)
requestProtobufDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="protobuf"}`)
)
func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, error) {
func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefaultStreamFields bool) error {
bb := bytesBufPool.Get()
defer bytesBufPool.Put(bb)
buf, err := snappy.Decode(bb.B[:cap(bb.B)], data)
if err != nil {
return 0, fmt.Errorf("cannot decode snappy-encoded request body: %w", err)
return fmt.Errorf("cannot decode snappy-encoded request body: %w", err)
}
bb.B = buf
@@ -81,13 +79,12 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor) (int
err = req.UnmarshalProtobuf(bb.B)
if err != nil {
return 0, fmt.Errorf("cannot parse request body: %w", err)
return fmt.Errorf("cannot parse request body: %w", err)
}
fields := getFields()
defer putFields(fields)
rowsIngested := 0
streams := req.Streams
currentTimestamp := time.Now().UnixNano()
for i := range streams {
@@ -96,7 +93,7 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor) (int
// Labels are same for all entries in the stream.
fields.fields, err = parsePromLabels(fields.fields[:0], stream.Labels)
if err != nil {
return rowsIngested, fmt.Errorf("cannot parse stream labels %q: %w", stream.Labels, err)
return fmt.Errorf("cannot parse stream labels %q: %w", stream.Labels, err)
}
commonFieldsLen := len(fields.fields)
@@ -122,11 +119,14 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor) (int
ts = currentTimestamp
}
lmp.AddRow(ts, fields.fields)
var streamFields []logstorage.Field
if useDefaultStreamFields {
streamFields = fields.fields[:commonFieldsLen]
}
lmp.AddRow(ts, fields.fields, streamFields)
}
rowsIngested += len(stream.Entries)
}
return rowsIngested, nil
return nil
}
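
Two changes recur across the JSON and protobuf handlers in this diff: the per-format vl_rows_ingested_total counters are gone (row accounting presumably moved into the LogMessageProcessor, hence the error-only return values), and when the request configures no stream fields (len(cp.StreamFields) == 0) the labels parsed from each Loki stream are forwarded to AddRow as default stream fields instead of nil.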
func getFields() *fields {

View File

@@ -15,7 +15,10 @@ type testLogMessageProcessor struct {
pr PushRequest
}
func (tlp *testLogMessageProcessor) AddRow(timestamp int64, fields []logstorage.Field) {
func (tlp *testLogMessageProcessor) AddRow(timestamp int64, fields, streamFields []logstorage.Field) {
if streamFields != nil {
panic(fmt.Errorf("unexpected non-nil streamFields: %v", streamFields))
}
msg := ""
for _, f := range fields {
if f.Name == "_msg" {
@@ -50,23 +53,21 @@ func TestParseProtobufRequest_Success(t *testing.T) {
t.Helper()
tlp := &testLogMessageProcessor{}
n, err := parseJSONRequest([]byte(s), tlp)
if err != nil {
if err := parseJSONRequest([]byte(s), tlp, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if n != len(tlp.pr.Streams) {
t.Fatalf("unexpected number of streams; got %d; want %d", len(tlp.pr.Streams), n)
if len(tlp.pr.Streams) != len(timestampsExpected) {
t.Fatalf("unexpected number of streams; got %d; want %d", len(tlp.pr.Streams), len(timestampsExpected))
}
data := tlp.pr.MarshalProtobuf(nil)
encodedData := snappy.Encode(nil, data)
tlp2 := &insertutils.TestLogMessageProcessor{}
n, err = parseProtobufRequest(encodedData, tlp2)
if err != nil {
if err := parseProtobufRequest(encodedData, tlp2, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp2.Verify(n, timestampsExpected, resultExpected); err != nil {
if err := tlp2.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}

View File

@@ -31,8 +31,7 @@ func benchmarkParseProtobufRequest(b *testing.B, streams, rows, labels int) {
b.RunParallel(func(pb *testing.PB) {
body := getProtobufBody(streams, rows, labels)
for pb.Next() {
_, err := parseProtobufRequest(body, blp)
if err != nil {
if err := parseProtobufRequest(body, blp, false); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}

View File

@@ -1,6 +1,7 @@
package vlinsert
import (
"fmt"
"net/http"
"strings"
@@ -34,9 +35,15 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
path = strings.TrimPrefix(path, "/insert")
path = strings.ReplaceAll(path, "//", "/")
if path == "/jsonline" {
switch path {
case "/jsonline":
jsonline.RequestHandler(w, r)
return true
case "/ready":
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(200)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
}
switch {
case strings.HasPrefix(path, "/elasticsearch/"):
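
The new /ready case above gives vlinsert a readiness endpoint. A minimal client-side probe sketch (the address assumes the default VictoriaLogs -httpListenAddr of :9428):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:9428/insert/ready")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // 200 {"status":"ok"}
}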

View File

@@ -67,15 +67,14 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
}
lmp := cp.NewLogMessageProcessor("opentelelemtry_protobuf")
n, err := pushProtobufRequest(data, lmp)
useDefaultStreamFields := len(cp.StreamFields) == 0
err = pushProtobufRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
if err != nil {
httpserver.Errorf(w, r, "cannot parse OpenTelemetry protobuf request: %s", err)
return
}
rowsIngestedProtobufTotal.Add(n)
// update requestProtobufDuration only for successfully parsed requests
// There is no need to update requestProtobufDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
@@ -83,22 +82,19 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
}
var (
rowsIngestedProtobufTotal = metrics.NewCounter(`vl_rows_ingested_total{type="opentelemetry_protobuf"}`)
requestsProtobufTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
requestProtobufDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
)
func pushProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, error) {
func pushProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefaultStreamFields bool) error {
var req pb.ExportLogsServiceRequest
if err := req.UnmarshalProtobuf(data); err != nil {
errorsTotal.Inc()
return 0, fmt.Errorf("cannot unmarshal request from %d bytes: %w", len(data), err)
return fmt.Errorf("cannot unmarshal request from %d bytes: %w", len(data), err)
}
var rowsIngested int
var commonFields []logstorage.Field
for _, rl := range req.ResourceLogs {
attributes := rl.Resource.Attributes
@@ -109,16 +105,14 @@ func pushProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor) (int,
}
commonFieldsLen := len(commonFields)
for _, sc := range rl.ScopeLogs {
var scopeIngested int
commonFields, scopeIngested = pushFieldsFromScopeLogs(&sc, commonFields[:commonFieldsLen], lmp)
rowsIngested += scopeIngested
commonFields = pushFieldsFromScopeLogs(&sc, commonFields[:commonFieldsLen], lmp, useDefaultStreamFields)
}
}
return rowsIngested, nil
return nil
}
func pushFieldsFromScopeLogs(sc *pb.ScopeLogs, commonFields []logstorage.Field, lmp insertutils.LogMessageProcessor) ([]logstorage.Field, int) {
func pushFieldsFromScopeLogs(sc *pb.ScopeLogs, commonFields []logstorage.Field, lmp insertutils.LogMessageProcessor, useDefaultStreamFields bool) []logstorage.Field {
fields := commonFields
for _, lr := range sc.LogRecords {
fields = fields[:len(commonFields)]
@@ -137,7 +131,11 @@ func pushFieldsFromScopeLogs(sc *pb.ScopeLogs, commonFields []logstorage.Field,
Value: lr.FormatSeverity(),
})
lmp.AddRow(lr.ExtractTimestampNano(), fields)
var streamFields []logstorage.Field
if useDefaultStreamFields {
streamFields = commonFields
}
lmp.AddRow(lr.ExtractTimestampNano(), fields, streamFields)
}
return fields, len(sc.LogRecords)
return fields
}

View File

@@ -16,12 +16,11 @@ func TestPushProtoOk(t *testing.T) {
pData := lr.MarshalProtobuf(nil)
tlp := &insertutils.TestLogMessageProcessor{}
n, err := pushProtobufRequest(pData, tlp)
if err != nil {
if err := pushProtobufRequest(pData, tlp, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(n, timestampsExpected, resultExpected); err != nil {
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}

View File

@@ -27,8 +27,7 @@ func benchmarkParseProtobufRequest(b *testing.B, streams, rows, labels int) {
b.RunParallel(func(pb *testing.PB) {
body := getProtobufBody(streams, rows, labels)
for pb.Next() {
_, err := pushProtobufRequest(body, blp)
if err != nil {
if err := pushProtobufRequest(body, blp, false); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}

View File

@@ -436,7 +436,6 @@ func processUncompressedStream(r io.Reader, useLocalTimestamp bool, lmp insertut
return fmt.Errorf("cannot read line #%d: %s", n, err)
}
n++
rowsIngestedTotal.Inc()
}
return slr.Error()
}
@@ -568,7 +567,7 @@ func processLine(line []byte, currentYear int, timezone *time.Location, useLocal
ts = nsecs
}
logstorage.RenameField(p.Fields, msgFields, "_msg")
lmp.AddRow(ts, p.Fields)
lmp.AddRow(ts, p.Fields, nil)
logstorage.PutSyslogParser(p)
return nil
@@ -577,8 +576,6 @@ func processLine(line []byte, currentYear int, timezone *time.Location, useLocal
var msgFields = []string{"message"}
var (
rowsIngestedTotal = metrics.NewCounter(`vl_rows_ingested_total{type="syslog"}`)
errorsTotal = metrics.NewCounter(`vl_errors_total{type="syslog"}`)
udpRequestsTotal = metrics.NewCounter(`vl_udp_reqests_total{type="syslog"}`)

View File

@@ -75,7 +75,7 @@ func TestSyslogLineReader_Failure(t *testing.T) {
}
func TestProcessStreamInternal_Success(t *testing.T) {
f := func(data string, currentYear, rowsExpected int, timestampsExpected []int64, resultExpected string) {
f := func(data string, currentYear int, timestampsExpected []int64, resultExpected string) {
t.Helper()
MustInit()
@@ -89,7 +89,7 @@ func TestProcessStreamInternal_Success(t *testing.T) {
if err := processStreamInternal(r, "", false, tlp); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(rowsExpected, timestampsExpected, resultExpected); err != nil {
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
@@ -99,12 +99,11 @@ func TestProcessStreamInternal_Success(t *testing.T) {
48 <165>Jun 4 12:08:33 abcd systemd[345]: abc defg<123>1 2023-06-03T17:42:12.345Z mymachine.example.com appname 12345 ID47 [exampleSDID@32473 iut="3" eventSource="Application 123 = ] 56" eventID="11211"] This is a test message with structured data.
`
currentYear := 2023
rowsExpected := 3
timestampsExpected := []int64{1685794113000000000, 1685880513000000000, 1685814132345000000}
resultExpected := `{"format":"rfc3164","hostname":"abcd","app_name":"systemd","_msg":"Starting Update the local ESM caches..."}
{"priority":"165","facility":"20","severity":"5","format":"rfc3164","hostname":"abcd","app_name":"systemd","proc_id":"345","_msg":"abc defg"}
{"priority":"123","facility":"15","severity":"3","format":"rfc5424","hostname":"mymachine.example.com","app_name":"appname","proc_id":"12345","msg_id":"ID47","exampleSDID@32473.iut":"3","exampleSDID@32473.eventSource":"Application 123 = ] 56","exampleSDID@32473.eventID":"11211","_msg":"This is a test message with structured data."}`
f(data, currentYear, rowsExpected, timestampsExpected, resultExpected)
f(data, currentYear, timestampsExpected, resultExpected)
}
func TestProcessStreamInternal_Failure(t *testing.T) {

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"io"
"sort"
"strings"
"sync"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
@@ -234,5 +235,11 @@ func getJSONString(s string) string {
if err != nil {
panic(fmt.Errorf("unexpected error when marshaling string to JSON: %w", err))
}
return string(data)
return jsonHTMLReplacer.Replace(string(data))
}
var jsonHTMLReplacer = strings.NewReplacer(
`\u003c`, "\u003c",
`\u003e`, "\u003e",
`\u0026`, "\u0026",
)
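
The replacer above undoes the HTML-safe escaping that encoding/json applies by default, so <, > and & appear literally in query responses. A standalone illustration:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	data, _ := json.Marshal("a<b> & c")
	fmt.Println(string(data)) // "a\u003cb\u003e \u0026 c"

	r := strings.NewReplacer(`\u003c`, "<", `\u003e`, ">", `\u0026`, "&")
	fmt.Println(r.Replace(string(data))) // "a<b> & c"
}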

View File

@@ -0,0 +1,49 @@
{% import (
"slices"
) %}
{% stripspace %}
{% func FacetsResponse(m map[string][]facetEntry) %}
{
{% code
sortedKeys := make([]string, 0, len(m))
for k := range m {
sortedKeys = append(sortedKeys, k)
}
slices.Sort(sortedKeys)
%}
"facets":[
{% if len(sortedKeys) > 0 %}
{%= facetsLine(m, sortedKeys[0]) %}
{% for _, k := range sortedKeys[1:] %}
,{%= facetsLine(m, k) %}
{% endfor %}
{% endif %}
]
}
{% endfunc %}
{% func facetsLine(m map[string][]facetEntry, k string) %}
{
"field_name":{%q= k %},
"values":[
{% code fes := m[k] %}
{% if len(fes) > 0 %}
{%= facetLine(fes[0]) %}
{% for _, fe := range fes[1:] %}
,{%= facetLine(fe) %}
{% endfor %}
{% endif %}
]
}
{% endfunc %}
{% func facetLine(fe facetEntry) %}
{
"field_value":{%q= fe.value %},
"hits":{%s= fe.hits %}
}
{% endfunc %}
{% endstripspace %}
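
Rendered through stripspace, the template above emits compact JSON. An in-package usage sketch (map contents are illustrative; the expected output is line-wrapped here for readability):

m := map[string][]facetEntry{
	"level": {
		{value: "error", hits: "42"},
		{value: "info", hits: "1337"},
	},
}
fmt.Println(FacetsResponse(m))
// {"facets":[{"field_name":"level","values":[
//   {"field_value":"error","hits":"42"},
//   {"field_value":"info","hits":"1337"}]}]}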

View File

@@ -0,0 +1,178 @@
// Code generated by qtc from "facets_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vlselect/logsql/facets_response.qtpl:1
package logsql
//line app/vlselect/logsql/facets_response.qtpl:1
import (
"slices"
)
//line app/vlselect/logsql/facets_response.qtpl:7
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vlselect/logsql/facets_response.qtpl:7
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vlselect/logsql/facets_response.qtpl:7
func StreamFacetsResponse(qw422016 *qt422016.Writer, m map[string][]facetEntry) {
//line app/vlselect/logsql/facets_response.qtpl:7
qw422016.N().S(`{`)
//line app/vlselect/logsql/facets_response.qtpl:10
sortedKeys := make([]string, 0, len(m))
for k := range m {
sortedKeys = append(sortedKeys, k)
}
slices.Sort(sortedKeys)
//line app/vlselect/logsql/facets_response.qtpl:15
qw422016.N().S(`"facets":[`)
//line app/vlselect/logsql/facets_response.qtpl:17
if len(sortedKeys) > 0 {
//line app/vlselect/logsql/facets_response.qtpl:18
streamfacetsLine(qw422016, m, sortedKeys[0])
//line app/vlselect/logsql/facets_response.qtpl:19
for _, k := range sortedKeys[1:] {
//line app/vlselect/logsql/facets_response.qtpl:19
qw422016.N().S(`,`)
//line app/vlselect/logsql/facets_response.qtpl:20
streamfacetsLine(qw422016, m, k)
//line app/vlselect/logsql/facets_response.qtpl:21
}
//line app/vlselect/logsql/facets_response.qtpl:22
}
//line app/vlselect/logsql/facets_response.qtpl:22
qw422016.N().S(`]}`)
//line app/vlselect/logsql/facets_response.qtpl:25
}
//line app/vlselect/logsql/facets_response.qtpl:25
func WriteFacetsResponse(qq422016 qtio422016.Writer, m map[string][]facetEntry) {
//line app/vlselect/logsql/facets_response.qtpl:25
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/facets_response.qtpl:25
StreamFacetsResponse(qw422016, m)
//line app/vlselect/logsql/facets_response.qtpl:25
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/facets_response.qtpl:25
}
//line app/vlselect/logsql/facets_response.qtpl:25
func FacetsResponse(m map[string][]facetEntry) string {
//line app/vlselect/logsql/facets_response.qtpl:25
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/facets_response.qtpl:25
WriteFacetsResponse(qb422016, m)
//line app/vlselect/logsql/facets_response.qtpl:25
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/facets_response.qtpl:25
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/facets_response.qtpl:25
return qs422016
//line app/vlselect/logsql/facets_response.qtpl:25
}
//line app/vlselect/logsql/facets_response.qtpl:27
func streamfacetsLine(qw422016 *qt422016.Writer, m map[string][]facetEntry, k string) {
//line app/vlselect/logsql/facets_response.qtpl:27
qw422016.N().S(`{"field_name":`)
//line app/vlselect/logsql/facets_response.qtpl:29
qw422016.N().Q(k)
//line app/vlselect/logsql/facets_response.qtpl:29
qw422016.N().S(`,"values":[`)
//line app/vlselect/logsql/facets_response.qtpl:31
fes := m[k]
//line app/vlselect/logsql/facets_response.qtpl:32
if len(fes) > 0 {
//line app/vlselect/logsql/facets_response.qtpl:33
streamfacetLine(qw422016, fes[0])
//line app/vlselect/logsql/facets_response.qtpl:34
for _, fe := range fes[1:] {
//line app/vlselect/logsql/facets_response.qtpl:34
qw422016.N().S(`,`)
//line app/vlselect/logsql/facets_response.qtpl:35
streamfacetLine(qw422016, fe)
//line app/vlselect/logsql/facets_response.qtpl:36
}
//line app/vlselect/logsql/facets_response.qtpl:37
}
//line app/vlselect/logsql/facets_response.qtpl:37
qw422016.N().S(`]}`)
//line app/vlselect/logsql/facets_response.qtpl:40
}
//line app/vlselect/logsql/facets_response.qtpl:40
func writefacetsLine(qq422016 qtio422016.Writer, m map[string][]facetEntry, k string) {
//line app/vlselect/logsql/facets_response.qtpl:40
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/facets_response.qtpl:40
streamfacetsLine(qw422016, m, k)
//line app/vlselect/logsql/facets_response.qtpl:40
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/facets_response.qtpl:40
}
//line app/vlselect/logsql/facets_response.qtpl:40
func facetsLine(m map[string][]facetEntry, k string) string {
//line app/vlselect/logsql/facets_response.qtpl:40
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/facets_response.qtpl:40
writefacetsLine(qb422016, m, k)
//line app/vlselect/logsql/facets_response.qtpl:40
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/facets_response.qtpl:40
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/facets_response.qtpl:40
return qs422016
//line app/vlselect/logsql/facets_response.qtpl:40
}
//line app/vlselect/logsql/facets_response.qtpl:42
func streamfacetLine(qw422016 *qt422016.Writer, fe facetEntry) {
//line app/vlselect/logsql/facets_response.qtpl:42
qw422016.N().S(`{"field_value":`)
//line app/vlselect/logsql/facets_response.qtpl:44
qw422016.N().Q(fe.value)
//line app/vlselect/logsql/facets_response.qtpl:44
qw422016.N().S(`,"hits":`)
//line app/vlselect/logsql/facets_response.qtpl:45
qw422016.N().S(fe.hits)
//line app/vlselect/logsql/facets_response.qtpl:45
qw422016.N().S(`}`)
//line app/vlselect/logsql/facets_response.qtpl:47
}
//line app/vlselect/logsql/facets_response.qtpl:47
func writefacetLine(qq422016 qtio422016.Writer, fe facetEntry) {
//line app/vlselect/logsql/facets_response.qtpl:47
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/facets_response.qtpl:47
streamfacetLine(qw422016, fe)
//line app/vlselect/logsql/facets_response.qtpl:47
qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/facets_response.qtpl:47
}
//line app/vlselect/logsql/facets_response.qtpl:47
func facetLine(fe facetEntry) string {
//line app/vlselect/logsql/facets_response.qtpl:47
qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/facets_response.qtpl:47
writefacetLine(qb422016, fe)
//line app/vlselect/logsql/facets_response.qtpl:47
qs422016 := string(qb422016.B)
//line app/vlselect/logsql/facets_response.qtpl:47
qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/facets_response.qtpl:47
return qs422016
//line app/vlselect/logsql/facets_response.qtpl:47
}

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"math"
"net/http"
"regexp"
"slices"
"sort"
"strconv"
@@ -13,6 +14,7 @@ import (
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
@@ -23,6 +25,82 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
)
// ProcessFacetsRequest handles /select/logsql/facets request.
//
// See https://docs.victoriametrics.com/victorialogs/querying/#querying-facets
func ProcessFacetsRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) {
q, tenantIDs, err := parseCommonArgs(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
limit, err := httputils.GetInt(r, "limit")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
maxValuesPerField, err := httputils.GetInt(r, "max_values_per_field")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
maxValueLen, err := httputils.GetInt(r, "max_value_len")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
keepConstFields := httputils.GetBool(r, "keep_const_fields")
q.DropAllPipes()
q.AddFacetsPipe(limit, maxValuesPerField, maxValueLen, keepConstFields)
var mLock sync.Mutex
m := make(map[string][]facetEntry)
writeBlock := func(_ uint, _ []int64, columns []logstorage.BlockColumn) {
if len(columns) == 0 || len(columns[0].Values) == 0 {
return
}
if len(columns) != 3 {
logger.Panicf("BUG: expecting 3 columns; got %d columns", len(columns))
}
fieldNames := columns[0].Values
fieldValues := columns[1].Values
hits := columns[2].Values
bb := blockResultPool.Get()
for i := range fieldNames {
fieldName := strings.Clone(fieldNames[i])
fieldValue := strings.Clone(fieldValues[i])
hitsStr := strings.Clone(hits[i])
mLock.Lock()
m[fieldName] = append(m[fieldName], facetEntry{
value: fieldValue,
hits: hitsStr,
})
mLock.Unlock()
}
blockResultPool.Put(bb)
}
// Execute the query
if err := vlstorage.RunQuery(ctx, tenantIDs, q, writeBlock); err != nil {
httpserver.Errorf(w, r, "cannot execute query [%s]: %s", q, err)
return
}
// Write response
w.Header().Set("Content-Type", "application/json")
WriteFacetsResponse(w, m)
}
type facetEntry struct {
value string
hits string
}
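
Putting the handler above together: /select/logsql/facets accepts the common query args plus limit, max_values_per_field, max_value_len and keep_const_fields, drops all pipes from the query, and appends a facets pipe before running it. A client sketch (host, port, and parameter values are assumptions):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	v := url.Values{}
	v.Set("query", "_time:5m error") // facets are computed over this query's results
	v.Set("limit", "10")
	v.Set("max_values_per_field", "5")
	resp, err := http.Get("http://localhost:9428/select/logsql/facets?" + v.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}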
// ProcessHitsRequest handles /select/logsql/hits request.
//
// See https://docs.victoriametrics.com/victorialogs/querying/#querying-hits-stats
@@ -1017,18 +1095,20 @@ func parseCommonArgs(r *http.Request) (*logstorage.Query, []logstorage.TenantID,
}
// Parse optional extra_filters
extraFilters, err := getExtraFilters(r, "extra_filters")
extraFiltersStr := r.FormValue("extra_filters")
extraFilters, err := parseExtraFilters(extraFiltersStr)
if err != nil {
return nil, nil, err
}
q.AddExtraFilters(extraFilters)
// Parse optional extra_stream_filters
extraStreamFilters, err := getExtraFilters(r, "extra_stream_filters")
extraStreamFiltersStr := r.FormValue("extra_stream_filters")
extraStreamFilters, err := parseExtraStreamFilters(extraStreamFiltersStr)
if err != nil {
return nil, nil, err
}
q.AddExtraStreamFilters(extraStreamFilters)
q.AddExtraFilters(extraStreamFilters)
return q, tenantIDs, nil
}
@@ -1046,15 +1126,114 @@ func getTimeNsec(r *http.Request, argName string) (int64, bool, error) {
return nsecs, true, nil
}
func getExtraFilters(r *http.Request, argName string) ([]logstorage.Field, error) {
s := r.FormValue(argName)
func parseExtraFilters(s string) (*logstorage.Filter, error) {
if s == "" {
return nil, nil
}
var p logstorage.JSONParser
if err := p.ParseLogMessage([]byte(s)); err != nil {
return nil, fmt.Errorf("cannot parse %s: %w", argName, err)
if !strings.HasPrefix(s, `{"`) {
return logstorage.ParseFilter(s)
}
return p.Fields, nil
// Extra filters in the form {"field":"value",...}.
kvs, err := parseExtraFiltersJSON(s)
if err != nil {
return nil, err
}
filters := make([]string, len(kvs))
for i, kv := range kvs {
if len(kv.values) == 1 {
filters[i] = fmt.Sprintf("%q:=%q", kv.key, kv.values[0])
} else {
orValues := make([]string, len(kv.values))
for j, v := range kv.values {
orValues[j] = fmt.Sprintf("%q", v)
}
filters[i] = fmt.Sprintf("%q:in(%s)", kv.key, strings.Join(orValues, ","))
}
}
s = strings.Join(filters, " ")
return logstorage.ParseFilter(s)
}
func parseExtraStreamFilters(s string) (*logstorage.Filter, error) {
if s == "" {
return nil, nil
}
if !strings.HasPrefix(s, `{"`) {
return logstorage.ParseFilter(s)
}
// Extra stream filters in the form {"field":"value",...}.
kvs, err := parseExtraFiltersJSON(s)
if err != nil {
return nil, err
}
filters := make([]string, len(kvs))
for i, kv := range kvs {
if len(kv.values) == 1 {
filters[i] = fmt.Sprintf("%q=%q", kv.key, kv.values[0])
} else {
orValues := make([]string, len(kv.values))
for j, v := range kv.values {
orValues[j] = regexp.QuoteMeta(v)
}
filters[i] = fmt.Sprintf("%q=~%q", kv.key, strings.Join(orValues, "|"))
}
}
s = "{" + strings.Join(filters, ",") + "}"
return logstorage.ParseFilter(s)
}
type extraFilter struct {
key string
values []string
}
func parseExtraFiltersJSON(s string) ([]extraFilter, error) {
v, err := fastjson.Parse(s)
if err != nil {
return nil, err
}
o := v.GetObject()
var errOuter error
var filters []extraFilter
o.Visit(func(k []byte, v *fastjson.Value) {
if errOuter != nil {
return
}
switch v.Type() {
case fastjson.TypeString:
filters = append(filters, extraFilter{
key: string(k),
values: []string{string(v.GetStringBytes())},
})
case fastjson.TypeArray:
a := v.GetArray()
if len(a) == 0 {
return
}
orValues := make([]string, len(a))
for i, av := range a {
ov, err := av.StringBytes()
if err != nil {
errOuter = fmt.Errorf("cannot obtain string item at the array for key %q; item: %s", k, av)
return
}
orValues[i] = string(ov)
}
filters = append(filters, extraFilter{
key: string(k),
values: orValues,
})
default:
errOuter = fmt.Errorf("unexpected type of value for key %q: %s; value: %s", k, v.Type(), v)
}
})
if errOuter != nil {
return nil, errOuter
}
return filters, nil
}

View File

@@ -0,0 +1,103 @@
package logsql
import (
"testing"
)
func TestParseExtraFilters_Success(t *testing.T) {
f := func(s, resultExpected string) {
t.Helper()
f, err := parseExtraFilters(s)
if err != nil {
t.Fatalf("unexpected error in parseExtraFilters: %s", err)
}
result := f.String()
if result != resultExpected {
t.Fatalf("unexpected result\ngot\n%s\nwant\n%s", result, resultExpected)
}
}
f("", "")
// JSON string
f(`{"foo":"bar"}`, `foo:=bar`)
f(`{"foo":["bar","baz"]}`, `foo:in(bar,baz)`)
f(`{"z":"=b ","c":["d","e,"],"a":[],"_msg":"x"}`, `z:="=b " c:in(d,"e,") =x`)
// LogsQL filter
f(`foobar`, `foobar`)
f(`foo:bar`, `foo:bar`)
f(`foo:(bar or baz) error _time:5m {"foo"=bar,baz="z"}`, `(foo:bar or foo:baz) error _time:5m {foo="bar",baz="z"}`)
}
func TestParseExtraFilters_Failure(t *testing.T) {
f := func(s string) {
t.Helper()
_, err := parseExtraFilters(s)
if err == nil {
t.Fatalf("expecting non-nil error")
}
}
// Invalid JSON
f(`{"foo"}`)
f(`[1,2]`)
f(`{"foo":[1]}`)
// Invalid LogsQL filter
f(`foo:(bar`)
// excess pipe
f(`foo | count()`)
}
func TestParseExtraStreamFilters_Success(t *testing.T) {
f := func(s, resultExpected string) {
t.Helper()
f, err := parseExtraStreamFilters(s)
if err != nil {
t.Fatalf("unexpected error in parseExtraStreamFilters: %s", err)
}
result := f.String()
if result != resultExpected {
t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
}
}
f("", "")
// JSON string
f(`{"foo":"bar"}`, `{foo="bar"}`)
f(`{"foo":["bar","baz"]}`, `{foo=~"bar|baz"}`)
f(`{"z":"b","c":["d","e|\""],"a":[],"_msg":"x"}`, `{z="b",c=~"d|e\\|\"",_msg="x"}`)
// LogsQL filter
f(`foobar`, `foobar`)
f(`foo:bar`, `foo:bar`)
f(`foo:(bar or baz) error _time:5m {"foo"=bar,baz="z"}`, `(foo:bar or foo:baz) error _time:5m {foo="bar",baz="z"}`)
}
func TestParseExtraStreamFilters_Failure(t *testing.T) {
f := func(s string) {
t.Helper()
_, err := parseExtraStreamFilters(s)
if err == nil {
t.Fatalf("expecting non-nil error")
}
}
// Invalid JSON
f(`{"foo"}`)
f(`[1,2]`)
f(`{"foo":[1]}`)
// Invalid LogsQL filter
f(`foo:(bar`)
// excess pipe
f(`foo | count()`)
}

View File

@@ -177,6 +177,10 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
func processSelectRequest(ctx context.Context, w http.ResponseWriter, r *http.Request, path string) bool {
httpserver.EnableCORS(w, r)
switch path {
case "/select/logsql/facets":
logsqlFacetsRequests.Inc()
logsql.ProcessFacetsRequest(ctx, w, r)
return true
case "/select/logsql/field_names":
logsqlFieldNamesRequests.Inc()
logsql.ProcessFieldNamesRequest(ctx, w, r)
@@ -236,6 +240,7 @@ func getMaxQueryDuration(r *http.Request) time.Duration {
}
var (
logsqlFacetsRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/facets"}`)
logsqlFieldNamesRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/field_names"}`)
logsqlFieldValuesRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/field_values"}`)
logsqlHitsRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/hits"}`)

View File

@@ -1,13 +1,13 @@
{
"files": {
"main.css": "./static/css/main.faf86aa5.css",
"main.js": "./static/js/main.b204330a.js",
"main.css": "./static/css/main.fa83344e.css",
"main.js": "./static/js/main.8ad2bc1f.js",
"static/js/685.f772060c.chunk.js": "./static/js/685.f772060c.chunk.js",
"static/media/MetricsQL.md": "./static/media/MetricsQL.a00044c91d9781cf8557.md",
"index.html": "./index.html"
},
"entrypoints": [
"static/css/main.faf86aa5.css",
"static/js/main.b204330a.js"
"static/css/main.fa83344e.css",
"static/js/main.8ad2bc1f.js"
]
}

View File

@@ -1 +1 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.svg"/><link rel="apple-touch-icon" href="./favicon.svg"/><link rel="mask-icon" href="./favicon.svg" color="#000000"><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5"/><meta name="theme-color" content="#000000"/><meta name="description" content="Explore your log data with VictoriaLogs UI"/><link rel="manifest" href="./manifest.json"/><title>UI for VictoriaLogs</title><meta name="twitter:card" content="summary"><meta name="twitter:title" content="UI for VictoriaLogs"><meta name="twitter:site" content="@https://victoriametrics.com/products/victorialogs/"><meta name="twitter:description" content="Explore your log data with VictoriaLogs UI"><meta name="twitter:image" content="./preview.jpg"><meta property="og:type" content="website"><meta property="og:title" content="UI for VictoriaLogs"><meta property="og:url" content="https://victoriametrics.com/products/victorialogs/"><meta property="og:description" content="Explore your log data with VictoriaLogs UI"><script defer="defer" src="./static/js/main.b204330a.js"></script><link href="./static/css/main.faf86aa5.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.svg"/><link rel="apple-touch-icon" href="./favicon.svg"/><link rel="mask-icon" href="./favicon.svg" color="#000000"><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5"/><meta name="theme-color" content="#000000"/><meta name="description" content="Explore your log data with VictoriaLogs UI"/><link rel="manifest" href="./manifest.json"/><title>UI for VictoriaLogs</title><meta name="twitter:card" content="summary"><meta name="twitter:title" content="UI for VictoriaLogs"><meta name="twitter:site" content="@https://victoriametrics.com/products/victorialogs/"><meta name="twitter:description" content="Explore your log data with VictoriaLogs UI"><meta name="twitter:image" content="./preview.jpg"><meta property="og:type" content="website"><meta property="og:title" content="UI for VictoriaLogs"><meta property="og:url" content="https://victoriametrics.com/products/victorialogs/"><meta property="og:description" content="Explore your log data with VictoriaLogs UI"><script defer="defer" src="./static/js/main.8ad2bc1f.js"></script><link href="./static/css/main.fa83344e.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -30,6 +30,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
@@ -45,6 +46,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/pushmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
)
var (
@@ -77,6 +79,9 @@ var (
dryRun = flag.Bool("dryRun", false, "Whether to check config files without running vmagent. The following files are checked: "+
"-promscrape.config, -remoteWrite.relabelConfig, -remoteWrite.urlRelabelConfig, -remoteWrite.streamAggr.config . "+
"Unknown config entries aren't allowed in -promscrape.config by default. This can be changed by passing -promscrape.config.strictParse=false command-line flag")
maxLabelsPerTimeseries = flag.Int("maxLabelsPerTimeseries", 0, "The maximum number of labels per time series to be accepted. Series with superfluous labels are ignored. In this case the vm_rows_ignored_total{reason=\"too_many_labels\"} metric at /metrics page is incremented")
maxLabelNameLen = flag.Int("maxLabelNameLen", 0, "The maximum length of label names in the accepted time series. Series with longer label name are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_name\"} metric at /metrics page is incremented")
maxLabelValueLen = flag.Int("maxLabelValueLen", 0, "The maximum length of label values in the accepted time series. Series with longer label value are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_value\"} metric at /metrics page is incremented")
)
var (
@@ -93,6 +98,15 @@ var (
)
func main() {
// vmagent is optimized for reduced memory allocations,
// so it can run with a reduced GOGC value in order to lower memory usage,
// while keeping CPU usage spent in GC at low levels.
//
// Some workloads may need increased GOGC values. Then such values can be set via GOGC environment variable.
// It is recommended to increase GOGC if the go_memstats_gc_cpu_fraction metric exposed at the /metrics page
// exceeds 0.05 for extended periods of time.
cgroup.SetGOGC(30)
// Write flags and help message to stdout, since it is easier to grep or pipe.
flag.CommandLine.SetOutput(os.Stdout)
flag.Usage = usage
@@ -100,6 +114,7 @@ func main() {
remotewrite.InitSecretFlags()
buildinfo.Init()
logger.Init()
timeserieslimits.Init(*maxLabelsPerTimeseries, *maxLabelNameLen, *maxLabelValueLen)
if promscrape.IsDryRun() {
if err := promscrape.CheckConfig(); err != nil {

View File

@@ -7,13 +7,10 @@ import (
"net/url"
"path/filepath"
"slices"
"strconv"
"sync"
"sync/atomic"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bloomfilter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
@@ -21,6 +18,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/memory"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
@@ -30,6 +28,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
"github.com/VictoriaMetrics/metrics"
"github.com/cespare/xxhash/v2"
)
@@ -472,6 +471,15 @@ func tryPush(at *auth.Token, wr *prompbmarshal.WriteRequest, forceDropSamplesOnF
rowsCountAfterRelabel := getRowsCount(tssBlock)
rowsDroppedByGlobalRelabel.Add(rowsCountBeforeRelabel - rowsCountAfterRelabel)
}
if timeserieslimits.Enabled() {
tmpBlock := tssBlock[:0]
for _, ts := range tssBlock {
if !timeserieslimits.IsExceeding(ts.Labels) {
tmpBlock = append(tmpBlock, ts)
}
}
tssBlock = tmpBlock
}
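
The filter above uses the allocation-free in-place idiom: tmpBlock := tssBlock[:0] reuses the slice's backing array, so dropping limit-exceeding series costs no extra memory. The same pattern in isolation (shouldDrop is hypothetical):

keep := items[:0]
for _, it := range items {
	if !shouldDrop(it) {
		keep = append(keep, it)
	}
}
items = keep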
sortLabelsIfNeeded(tssBlock)
tssBlock = limitSeriesCardinality(tssBlock)
if sas.IsEnabled() {
@@ -716,29 +724,14 @@ func logSkippedSeries(labels []prompbmarshal.Label, flagName string, flagValue i
select {
case <-logSkippedSeriesTicker.C:
// Do not use logger.WithThrottler() here, since this will increase CPU usage
// because every call to logSkippedSeries will result in a call to labelsToString.
logger.Warnf("skip series %s because %s=%d reached", labelsToString(labels), flagName, flagValue)
// because every call to logSkippedSeries will result in a call to prompbmarshal.LabelsToString.
logger.Warnf("skip series %s because %s=%d reached", prompbmarshal.LabelsToString(labels), flagName, flagValue)
default:
}
}
var logSkippedSeriesTicker = time.NewTicker(5 * time.Second)
func labelsToString(labels []prompbmarshal.Label) string {
var b []byte
b = append(b, '{')
for i, label := range labels {
b = append(b, label.Name...)
b = append(b, '=')
b = strconv.AppendQuote(b, label.Value)
if i+1 < len(labels) {
b = append(b, ',')
}
}
b = append(b, '}')
return string(b)
}
var (
globalRowsPushedBeforeRelabel = metrics.NewCounter("vmagent_remotewrite_global_rows_pushed_before_relabel_total")
rowsDroppedByGlobalRelabel = metrics.NewCounter("vmagent_remotewrite_global_relabel_metrics_dropped_total")

View File

@@ -51,9 +51,14 @@ Examples:
Usage: `Optional external URL to template in rule's labels or annotations.`,
Required: false,
},
&cli.StringFlag{
Name: "loggerLevel",
Usage: `Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "ERROR").`,
Required: false,
},
},
Action: func(c *cli.Context) error {
if failed := unittest.UnitTest(c.StringSlice("files"), c.Bool("disableAlertgroupLabel"), c.StringSlice("external.label"), c.String("external.url")); failed {
if failed := unittest.UnitTest(c.StringSlice("files"), c.Bool("disableAlertgroupLabel"), c.StringSlice("external.label"), c.String("external.url"), c.String("loggerLevel")); failed {
return fmt.Errorf("unittest failed")
}
return nil

View File

@@ -5,6 +5,7 @@ import (
"flag"
"fmt"
"net/http"
"net/url"
"os"
"path/filepath"
"reflect"
@@ -46,17 +47,24 @@ var (
testRemoteWritePath = "http://127.0.0.1" + httpListenAddr
testHealthHTTPPath = "http://127.0.0.1" + httpListenAddr + "/health"
testLogLevel = "ERROR"
disableAlertgroupLabel bool
)
const (
testStoragePath = "vmalert-unittest"
testLogLevel = "ERROR"
)
// UnitTest runs unittest for files
func UnitTest(files []string, disableGroupLabel bool, externalLabels []string, externalURL string) bool {
if err := templates.Load([]string{}, true); err != nil {
func UnitTest(files []string, disableGroupLabel bool, externalLabels []string, externalURL, logLevel string) bool {
if logLevel != "" {
testLogLevel = logLevel
}
eu, err := url.Parse(externalURL)
if err != nil {
logger.Fatalf("failed to parse external URL: %w", err)
}
if err := templates.Load([]string{}, *eu); err != nil {
logger.Fatalf("failed to load template: %v", err)
}
storagePath = filepath.Join(os.TempDir(), testStoragePath)

View File

@@ -1,24 +1,14 @@
package unittest
import (
"os"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/templates"
)
func TestMain(m *testing.M) {
if err := templates.Load([]string{}, true); err != nil {
os.Exit(1)
}
os.Exit(m.Run())
}
func TestUnitTest_Failure(t *testing.T) {
f := func(files []string) {
t.Helper()
failed := UnitTest(files, false, nil, "")
failed := UnitTest(files, false, nil, "", "")
if !failed {
t.Fatalf("expecting failed test")
}
@@ -33,7 +23,7 @@ func TestUnitTest_Success(t *testing.T) {
f := func(disableGroupLabel bool, files []string, externalLabels []string, externalURL string) {
t.Helper()
failed := UnitTest(files, disableGroupLabel, externalLabels, externalURL)
failed := UnitTest(files, disableGroupLabel, externalLabels, externalURL, "")
if failed {
t.Fatalf("unexpected failed test")
}

View File

@@ -16,7 +16,7 @@ import (
)
func TestMain(m *testing.M) {
if err := templates.Load([]string{"testdata/templates/*good.tmpl"}, true); err != nil {
if err := templates.Load([]string{"testdata/templates/*good.tmpl"}, url.URL{}); err != nil {
os.Exit(1)
}
os.Exit(m.Run())

View File

@@ -7,7 +7,7 @@ groups:
labels:
label: bar
annotations:
summary: "{{ $value }"
summary: "{{ }}"
description: "{{$labels}}"
- alert: UnkownAnnotationsFunction
for: 5m

View File

@@ -81,7 +81,10 @@ absolute path to all .tpl files in root.
dryRun = flag.Bool("dryRun", false, "Whether to check only config files without running vmalert. The rules file are validated. The -rule flag must be specified.")
)
var alertURLGeneratorFn notifier.AlertURLGenerator
var (
alertURLGeneratorFn notifier.AlertURLGenerator
extURL *url.URL
)
func main() {
// Write flags and help message to stdout, since it is easier to grep or pipe.
@@ -95,9 +98,15 @@ func main() {
buildinfo.Init()
logger.Init()
err := templates.Load(*ruleTemplatesPath, true)
var err error
extURL, err = getExternalURL(*externalURL)
if err != nil {
logger.Fatalf("failed to parse %q: %s", *ruleTemplatesPath, err)
logger.Fatalf("failed to init external.url %q: %s", *externalURL, err)
}
err = templates.Load(*ruleTemplatesPath, *extURL)
if err != nil {
logger.Fatalf("failed to load template %q: %s", *ruleTemplatesPath, err)
}
if *dryRun {
@@ -111,12 +120,7 @@ func main() {
return
}
eu, err := getExternalURL(*externalURL)
if err != nil {
logger.Fatalf("failed to init `-external.url`: %s", err)
}
alertURLGeneratorFn, err = getAlertURLGenerator(eu, *externalAlertSource, *validateTemplates)
alertURLGeneratorFn, err = getAlertURLGenerator(extURL, *externalAlertSource, *validateTemplates)
if err != nil {
logger.Fatalf("failed to init `external.alert.source`: %s", err)
}
@@ -304,7 +308,7 @@ func getAlertURLGenerator(externalURL *url.URL, externalAlertSource string, vali
}
templated, err := alert.ExecTemplate(qFn, alert.Labels, m)
if err != nil {
logger.Errorf("can not exec source template %s", err)
logger.Errorf("cannot template alert source: %s", err)
}
return fmt.Sprintf("%s/%s", externalURL, templated["tpl"])
}, nil
@@ -359,7 +363,7 @@ func configReload(ctx context.Context, m *manager, groupsCfg []config.Group, sig
logger.Errorf("failed to reload notifier config: %s", err)
continue
}
err := templates.Load(*ruleTemplatesPath, false)
err := templates.Load(*ruleTemplatesPath, *extURL)
if err != nil {
setConfigError(err)
logger.Errorf("failed to load new templates: %s", err)

View File

@@ -74,7 +74,10 @@ func TestGetAlertURLGenerator(t *testing.T) {
func TestConfigReload(t *testing.T) {
originalRulePath := *rulePath
originalExternalURL := extURL
extURL = &url.URL{}
defer func() {
extURL = originalExternalURL
*rulePath = originalRulePath
}()

View File

@@ -3,6 +3,7 @@ package main
import (
"context"
"math/rand"
"net/url"
"os"
"strings"
"sync"
@@ -18,7 +19,7 @@ import (
)
func TestMain(m *testing.M) {
if err := templates.Load([]string{"testdata/templates/*good.tmpl"}, true); err != nil {
if err := templates.Load([]string{"testdata/templates/*good.tmpl"}, url.URL{}); err != nil {
os.Exit(1)
}
os.Exit(m.Run())

View File

@@ -127,7 +127,7 @@ func ExecTemplate(q templates.QueryFn, annotations map[string]string, tplData Al
// ValidateTemplates validate annotations for possible template error, uses empty data for template population
func ValidateTemplates(annotations map[string]string) error {
tmpl, err := templates.Get()
tmpl, err := templates.GetWithFuncs(nil)
if err != nil {
return err
}
@@ -146,12 +146,21 @@ func templateAnnotations(annotations map[string]string, data AlertTplData, tmpl
tData := tplData{data, externalLabels, externalURL}
header := strings.Join(tplHeaders, "")
for key, text := range annotations {
// simple check to skip text without template
if !strings.Contains(text, "{{") || !strings.Contains(text, "}}") {
r[key] = text
continue
}
buf.Reset()
builder.Reset()
builder.Grow(len(header) + len(text))
builder.WriteString(header)
builder.WriteString(text)
if err := templateAnnotation(&buf, builder.String(), tData, tmpl, execute); err != nil {
// clone a fresh template for each parse to avoid name collisions between annotations
ctmpl, _ := tmpl.Clone()
ctmpl = ctmpl.Option("missingkey=zero")
if err := templateAnnotation(&buf, builder.String(), tData, ctmpl, execute); err != nil {
r[key] = text
eg.Add(fmt.Errorf("key %q, template %q: %w", key, text, err))
continue
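
The per-annotation Clone above keeps a define parsed for one annotation from leaking into or colliding with templates parsed for another, and missingkey=zero renders absent map keys as zero values rather than "<no value>". A small standalone demonstration of the isolation:

package main

import (
	"os"
	textTpl "text/template"
)

func main() {
	base := textTpl.Must(textTpl.New("base").Parse(`{{define "d"}}base{{end}}`))

	// each annotation parses into its own clone, so redefining "d" here
	// does not affect base or any other annotation's clone
	c, _ := base.Clone()
	c = c.Option("missingkey=zero")
	t := textTpl.Must(c.New("a1").Parse(`{{define "d"}}override{{end}}{{template "d" .}}`))
	_ = t.Execute(os.Stdout, map[string]string{}) // prints "override"

	// the base template still renders its original definition
	_ = base.ExecuteTemplate(os.Stdout, "d", nil) // prints "base"
}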

View File

@@ -75,7 +75,13 @@ func TestAlertExecTemplate(t *testing.T) {
Labels: map[string]string{
"instance": "localhost",
},
}, map[string]string{}, map[string]string{})
}, map[string]string{
"summary": "it's a test summary",
"description": "it's a test description",
}, map[string]string{
"summary": "it's a test summary",
"description": "it's a test description",
})
// label-template
f(&Alert{
@@ -93,6 +99,19 @@ func TestAlertExecTemplate(t *testing.T) {
"description": "It is 10000 connections for localhost for more than 5m0s",
})
// label template override
f(&Alert{
Value: 1e4,
}, map[string]string{
"summary": `{{- define "default.template" -}} {{ printf "summary" }} {{- end -}} {{ template "default.template" . }}`,
"description": `{{ template "default.template" . }}`,
"value": `{{$value }}`,
}, map[string]string{
"summary": "summary",
"description": "",
"value": "10000",
})
// expression-template
f(&Alert{
Expr: `vm_rows{"label"="bar"}<0`,

View File

@@ -7,7 +7,6 @@ import (
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/templates"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
@@ -93,13 +92,11 @@ var (
func Init(gen AlertURLGenerator, extLabels map[string]string, extURL string) (func() []Notifier, error) {
externalURL = extURL
externalLabels = extLabels
eu, err := url.Parse(externalURL)
_, err := url.Parse(externalURL)
if err != nil {
return nil, fmt.Errorf("failed to parse external URL: %w", err)
}
templates.UpdateWithFuncs(templates.FuncsWithExternalURL(eu))
if *blackHole {
if len(*addrs) > 0 || *configPath != "" {
return nil, fmt.Errorf("only one of -notifier.blackhole, -notifier.url and -notifier.config flags must be specified")

View File

@@ -1,13 +1,15 @@
package notifier
import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/templates"
"net/url"
"os"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/templates"
)
func TestMain(m *testing.M) {
if err := templates.Load([]string{"testdata/templates/*good.tmpl"}, true); err != nil {
if err := templates.Load([]string{"testdata/templates/*good.tmpl"}, url.URL{}); err != nil {
os.Exit(1)
}
os.Exit(m.Run())

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"math"
"net/url"
"os"
"sort"
"testing"
@@ -26,7 +27,7 @@ func init() {
}
func TestMain(m *testing.M) {
if err := templates.Load([]string{}, true); err != nil {
if err := templates.Load([]string{}, url.URL{}); err != nil {
fmt.Println("failed to load template for test")
os.Exit(1)
}

View File

@@ -54,10 +54,9 @@ func newTemplate() *textTpl.Template {
}
// Load func loads templates from multiple globs specified in pathPatterns and either
// sets them directly to current template if it's undefined or with overwrite=true
// or sets replacement templates and adds templates with new names to a current
func Load(pathPatterns []string, overwrite bool) error {
var err error
// sets them directly to the current template on the first init,
// or stages them as a replacement template and waits for Reload() to swap it in as current.
func Load(pathPatterns []string, externalURL url.URL) error {
tmpl := newTemplate()
for _, tp := range pathPatterns {
p, err := doublestar.FilepathGlob(tp)
@@ -79,36 +78,12 @@ func Load(pathPatterns []string, overwrite bool) error {
}
tplMu.Lock()
defer tplMu.Unlock()
if masterTmpl.current == nil || overwrite {
masterTmpl.replacement = nil
masterTmpl.current = newTemplate()
} else {
masterTmpl.replacement = newTemplate()
if err = copyTemplates(tmpl, masterTmpl.replacement, overwrite); err != nil {
return err
}
}
return copyTemplates(tmpl, masterTmpl.current, overwrite)
}
tmpl = tmpl.Funcs(funcsWithExternalURL(externalURL))
func copyTemplates(from *textTpl.Template, to *textTpl.Template, overwrite bool) error {
if from == nil {
return nil
}
if to == nil {
to = newTemplate()
}
tmpl, err := from.Clone()
if err != nil {
return err
}
for _, t := range tmpl.Templates() {
if to.Lookup(t.Name()) == nil || overwrite {
to, err = to.AddParseTree(t.Name(), t.Tree)
if err != nil {
return fmt.Errorf("failed to add template %q: %w", t.Name(), err)
}
}
if masterTmpl.current == nil {
masterTmpl.current = tmpl
} else {
masterTmpl.replacement = tmpl
}
return nil
}
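
The refactored Load above drops the incremental copyTemplates merging: it parses everything into a staging template and either installs it as current on the first init or stages it as the replacement for Reload() to swap in. A minimal standalone sketch of that load-then-swap pattern, reusing only the masterTmpl/tplMu shapes visible in this hunk:

package templates

import (
	"sync"
	textTpl "text/template"
)

type textTemplate struct {
	current     *textTpl.Template
	replacement *textTpl.Template
}

var (
	tplMu      sync.RWMutex
	masterTmpl textTemplate
)

// load parses text into a fresh template; the first successful load becomes
// current, later loads are staged as replacement until Reload() is called.
func load(text string) error {
	tmpl, err := textTpl.New("").Parse(text)
	if err != nil {
		return err
	}
	tplMu.Lock()
	defer tplMu.Unlock()
	if masterTmpl.current == nil {
		masterTmpl.current = tmpl
	} else {
		masterTmpl.replacement = tmpl
	}
	return nil
}

// Reload promotes the staged replacement, if any, to current.
func Reload() {
	tplMu.Lock()
	defer tplMu.Unlock()
	if masterTmpl.replacement != nil {
		masterTmpl.current = masterTmpl.replacement
		masterTmpl.replacement = nil
	}
}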
@@ -153,13 +128,6 @@ func datasourceMetricsToTemplateMetrics(ms []datasource.Metric) []metric {
// for templating functions.
type QueryFn func(query string) ([]datasource.Metric, error)
// UpdateWithFuncs updates existing or sets a new function map for a template
func UpdateWithFuncs(funcs textTpl.FuncMap) {
tplMu.Lock()
defer tplMu.Unlock()
masterTmpl.current = masterTmpl.current.Funcs(funcs)
}
// GetWithFuncs returns a copy of current template with additional FuncMap
// provided with funcs argument
func GetWithFuncs(funcs textTpl.FuncMap) (*textTpl.Template, error) {
@@ -174,13 +142,6 @@ func GetWithFuncs(funcs textTpl.FuncMap) (*textTpl.Template, error) {
return tmpl.Funcs(funcs), nil
}
// Get returns a copy of a template
func Get() (*textTpl.Template, error) {
tplMu.RLock()
defer tplMu.RUnlock()
return masterTmpl.current.Clone()
}
// FuncsWithQuery returns a function map that depends on metric data
func FuncsWithQuery(query QueryFn) textTpl.FuncMap {
return textTpl.FuncMap{
@@ -198,8 +159,8 @@ func FuncsWithQuery(query QueryFn) textTpl.FuncMap {
}
}
// FuncsWithExternalURL returns a function map that depends on externalURL value
func FuncsWithExternalURL(externalURL *url.URL) textTpl.FuncMap {
// funcsWithExternalURL returns a function map that depends on externalURL value
func funcsWithExternalURL(externalURL url.URL) textTpl.FuncMap {
return textTpl.FuncMap{
"externalURL": func() string {
return externalURL.String()

View File

@@ -2,6 +2,7 @@ package templates
import (
"math"
"net/url"
"strings"
"testing"
textTpl "text/template"
@@ -152,7 +153,7 @@ func TestTemplatesLoad_Failure(t *testing.T) {
f := func(pathPatterns []string, expectedErrStr string) {
t.Helper()
err := Load(pathPatterns, false)
err := Load(pathPatterns, url.URL{})
if err == nil {
t.Fatalf("expecting non-nil error")
}
@@ -171,128 +172,17 @@ func TestTemplatesLoad_Failure(t *testing.T) {
}
func TestTemplatesLoad_Success(t *testing.T) {
f := func(initialTmpl textTemplate, pathPatterns []string, overwrite bool, expectedTmpl textTemplate) {
f := func(pathPatterns []string, expectedTmpl textTemplate) {
t.Helper()
masterTmplOrig := masterTmpl
masterTmpl = initialTmpl
defer func() {
masterTmpl = masterTmplOrig
}()
if err := Load(pathPatterns, overwrite); err != nil {
if err := Load(pathPatterns, url.URL{}); err != nil {
t.Fatalf("cannot load templates: %s", err)
}
if !equalTemplates(masterTmpl.replacement, expectedTmpl.replacement) {
t.Fatalf("unexpected replacement template\ngot\n%+v\nwant\n%+v", masterTmpl.replacement, expectedTmpl.replacement)
}
if !equalTemplates(masterTmpl.current, expectedTmpl.current) {
t.Fatalf("unexpected current template\ngot\n%+v\nwant\n%+v", masterTmpl.current, expectedTmpl.current)
}
}
// non existing path undefined template override
initialTmpl := mkTemplate(nil, nil)
pathPatterns := []string{
"templates/non-existing/good-*.tpl",
"templates/absent/good-*.tpl",
}
overwrite := true
expectedTmpl := mkTemplate(``, nil)
f(initialTmpl, pathPatterns, overwrite, expectedTmpl)
// non existing path defined template override
initialTmpl = mkTemplate(`
{{- define "test.1" -}}
{{- printf "value" -}}
{{- end -}}
`, nil)
pathPatterns = []string{
"templates/non-existing/good-*.tpl",
"templates/absent/good-*.tpl",
}
overwrite = true
expectedTmpl = mkTemplate(``, nil)
f(initialTmpl, pathPatterns, overwrite, expectedTmpl)
// existing path undefined template override
initialTmpl = mkTemplate(nil, nil)
pathPatterns = []string{
"templates/other/nested/good0-*.tpl",
"templates/test/good0-*.tpl",
}
overwrite = false
expectedTmpl = mkTemplate(`
{{- define "good0-test.tpl" -}}{{- end -}}
{{- define "test.0" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.1" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.2" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.3" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
`, nil)
f(initialTmpl, pathPatterns, overwrite, expectedTmpl)
// existing path defined template override
initialTmpl = mkTemplate(`
{{- define "test.1" -}}
{{ printf "Hello %s!" "world" }}
{{- end -}}
`, nil)
pathPatterns = []string{
"templates/other/nested/good0-*.tpl",
"templates/test/good0-*.tpl",
}
overwrite = false
expectedTmpl = mkTemplate(`
{{- define "good0-test.tpl" -}}{{- end -}}
{{- define "test.0" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.1" -}}
{{ printf "Hello %s!" "world" }}
{{- end -}}
{{- define "test.2" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.3" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
`, `
{{- define "good0-test.tpl" -}}{{- end -}}
{{- define "test.0" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.1" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.2" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.3" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
`)
f(initialTmpl, pathPatterns, overwrite, expectedTmpl)
}
func TestTemplatesReload(t *testing.T) {
f := func(initialTmpl, expectedTmpl textTemplate) {
t.Helper()
masterTmplOrig := masterTmpl
masterTmpl = initialTmpl
defer func() {
masterTmpl = masterTmplOrig
}()
Reload()
if !equalTemplates(masterTmpl.replacement, expectedTmpl.replacement) {
@@ -303,46 +193,47 @@ func TestTemplatesReload(t *testing.T) {
}
}
// empty current and replacement templates
f(mkTemplate(nil, nil), mkTemplate(nil, nil))
// non existing path
pathPatterns := []string{
"templates/non-existing/good-*.tpl",
"templates/absent/good-*.tpl",
}
expectedTmpl := mkTemplate(``, nil)
f(pathPatterns, expectedTmpl)
// empty current template only
f(mkTemplate(`
{{- define "test.1" -}}
{{- printf "value" -}}
{{- end -}}
`, nil), mkTemplate(`
{{- define "test.1" -}}
{{- printf "value" -}}
{{- end -}}
`, nil))
// empty replacement template only
f(mkTemplate(nil, `
{{- define "test.1" -}}
{{- printf "value" -}}
{{- end -}}
`), mkTemplate(`
{{- define "test.1" -}}
{{- printf "value" -}}
{{- end -}}
`, nil))
// defined both templates
f(mkTemplate(`
// existing path
pathPatterns = []string{
"templates/test/good0-*.tpl",
}
expectedTmpl = mkTemplate(`
{{- define "good0-test.tpl" -}}{{- end -}}
{{- define "test.0" -}}
{{- printf "value" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.2" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.3" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
`, nil)
f(pathPatterns, expectedTmpl)
// existing path defined template override
pathPatterns = []string{
"templates/other/nested/good0-*.tpl",
}
expectedTmpl = mkTemplate(`
{{- define "good0-test.tpl" -}}{{- end -}}
{{- define "test.0" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
{{- define "test.1" -}}
{{- printf "before" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
`, `
{{- define "test.1" -}}
{{- printf "after" -}}
{{- define "test.3" -}}
{{ printf "Hello %s!" externalURL }}
{{- end -}}
`), mkTemplate(`
{{- define "test.1" -}}
{{- printf "after" -}}
{{- end -}}
`, nil))
`, nil)
f(pathPatterns, expectedTmpl)
}

View File

@@ -2,11 +2,12 @@ package main
import (
"fmt"
"reflect"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/rule"
"reflect"
"testing"
)
func TestRecordingToApi(t *testing.T) {

View File

@@ -783,10 +783,11 @@ func parseAuthConfig(data []byte) (*AuthConfig, error) {
func parseAuthConfigUsers(ac *AuthConfig) (map[string]*UserInfo, error) {
uis := ac.Users
if len(uis) == 0 && ac.UnauthorizedUser == nil {
return nil, fmt.Errorf("Missing `users` or `unauthorized_user` sections")
}
byAuthToken := make(map[string]*UserInfo, len(uis))
if len(uis) == 0 && ac.UnauthorizedUser == nil {
// fast path for empty configuration
return byAuthToken, nil
}
for i := range uis {
ui := &uis[i]
ats, err := getAuthTokens(ui.AuthToken, ui.BearerToken, ui.Username, ui.Password)

View File

@@ -24,16 +24,10 @@ func TestParseAuthConfigFailure(t *testing.T) {
}
}
// Empty config
f(``)
// Invalid entry
f(`foobar`)
f(`foobar: baz`)
// Empty users
f(`users: []`)
// Missing url_prefix
f(`
users:
@@ -302,6 +296,12 @@ func TestParseAuthConfigSuccess(t *testing.T) {
insecureSkipVerifyTrue := true
// Empty config
f(``, map[string]*UserInfo{})
// Empty users
f(`users: []`, map[string]*UserInfo{})
// Single user
f(`
users:

View File

@@ -199,7 +199,7 @@ func processUserRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
func processRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
u := normalizeURL(r.URL)
up, hc := ui.getURLPrefixAndHeaders(u, r.Header)
up, hc := ui.getURLPrefixAndHeaders(u, r.Host, r.Header)
isDefault := false
if up == nil {
if ui.DefaultURL == nil {
@@ -213,7 +213,7 @@ func processRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
missingRouteRequests.Inc()
var di string
if ui.DumpRequestOnErrors {
di = debugInfo(u, r.Header)
di = debugInfo(u, r)
}
httpserver.Errorf(w, r, "missing route for %q%s", u.String(), di)
return
@@ -668,13 +668,13 @@ func (rtb *readTrackingBody) Close() error {
return nil
}
func debugInfo(u *url.URL, h http.Header) string {
func debugInfo(u *url.URL, r *http.Request) string {
s := &strings.Builder{}
fmt.Fprintf(s, " (host: %q; ", u.Host)
fmt.Fprintf(s, " (host: %q; ", r.Host)
fmt.Fprintf(s, "path: %q; ", u.Path)
fmt.Fprintf(s, "args: %q; ", u.Query().Encode())
fmt.Fprint(s, "headers:")
_ = h.WriteSubset(s, nil)
_ = r.Header.WriteSubset(s, nil)
fmt.Fprint(s, ")")
return s.String()
}

View File

@@ -51,9 +51,9 @@ func dropPrefixParts(path string, parts int) string {
return path
}
func (ui *UserInfo) getURLPrefixAndHeaders(u *url.URL, h http.Header) (*URLPrefix, HeadersConf) {
func (ui *UserInfo) getURLPrefixAndHeaders(u *url.URL, host string, h http.Header) (*URLPrefix, HeadersConf) {
for _, e := range ui.URLMaps {
if !matchAnyRegex(e.SrcHosts, u.Host) {
if !matchAnyRegex(e.SrcHosts, host) {
continue
}
if !matchAnyRegex(e.SrcPaths, u.Path) {
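
The extra host argument above reflects how Go's net/http fills server-side requests: the Host header lands in http.Request.Host while r.URL.Host stays empty, so matching src_hosts against u.Host is unreliable. A minimal runnable demonstration of that behavior:

package main

import (
	"fmt"
	"net/http/httptest"
)

func main() {
	r := httptest.NewRequest("GET", "/api/v1/query", nil)
	r.Host = "app1.example.com"
	// URL.Host is empty for server-side requests; the host lives in r.Host.
	fmt.Printf("URL.Host=%q Host=%q\n", r.URL.Host, r.Host)
	// Output: URL.Host="" Host="app1.example.com"
}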

View File

@@ -95,7 +95,7 @@ func TestCreateTargetURLSuccess(t *testing.T) {
t.Fatalf("cannot parse %q: %s", requestURI, err)
}
u = normalizeURL(u)
up, hc := ui.getURLPrefixAndHeaders(u, nil)
up, hc := ui.getURLPrefixAndHeaders(u, u.Host, nil)
if up == nil {
t.Fatalf("cannot match available backend: %s", err)
}
@@ -306,7 +306,7 @@ func TestUserInfoGetBackendURL_SRV(t *testing.T) {
t.Fatalf("cannot parse %q: %s", requestURI, err)
}
u = normalizeURL(u)
up, _ := ui.getURLPrefixAndHeaders(u, nil)
up, _ := ui.getURLPrefixAndHeaders(u, u.Host, nil)
if up == nil {
t.Fatalf("cannot match available backend: %s", err)
}
@@ -384,7 +384,7 @@ func TestUserInfoGetBackendURL_SRVZeroBackends(t *testing.T) {
t.Fatalf("cannot parse %q: %s", requestURI, err)
}
u = normalizeURL(u)
up, _ := ui.getURLPrefixAndHeaders(u, nil)
up, _ := ui.getURLPrefixAndHeaders(u, u.Host, nil)
if up == nil {
t.Fatalf("cannot match available backend: %s", err)
}
@@ -432,7 +432,7 @@ func TestCreateTargetURLFailure(t *testing.T) {
t.Fatalf("cannot parse %q: %s", requestURI, err)
}
u = normalizeURL(u)
up, hc := ui.getURLPrefixAndHeaders(u, nil)
up, hc := ui.getURLPrefixAndHeaders(u, u.Host, nil)
if up != nil {
t.Fatalf("unexpected non-empty up=%#v", up)
}

View File

@@ -4,13 +4,46 @@ import (
"fmt"
"net/http"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
)
// StartIngestionRateLimiter starts the global ingestion rate limiter.
//
// The ingestion rate limiter must be started before the Init() call.
//
// StopIngestionRateLimiter must be called before the Stop() call in order to unblock
// all the callers waiting on the ingestion rate limiter; otherwise a deadlock may occur at Stop().
func StartIngestionRateLimiter(maxIngestionRate int) {
if maxIngestionRate <= 0 {
return
}
ingestionRateLimitReached := metrics.NewCounter(`vm_max_ingestion_rate_limit_reached_total`)
ingestionRateLimiterStopCh = make(chan struct{})
ingestionRateLimiter = ratelimiter.New(int64(maxIngestionRate), ingestionRateLimitReached, ingestionRateLimiterStopCh)
}
// StopIngestionRateLimiter stops ingestion rate limiter.
func StopIngestionRateLimiter() {
if ingestionRateLimiterStopCh == nil {
return
}
close(ingestionRateLimiterStopCh)
ingestionRateLimiterStopCh = nil
}
var (
ingestionRateLimiter *ratelimiter.RateLimiter
ingestionRateLimiterStopCh chan struct{}
)
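
The stop channel exists so StopIngestionRateLimiter can unblock callers parked inside Register; otherwise shutdown could deadlock, as the doc comment warns. A toy single-goroutine illustration of the pattern (the real lib/ratelimiter is concurrent and differs in detail):

package main

import (
	"fmt"
	"time"
)

// rateLimiter is a toy stand-in for lib/ratelimiter: Register blocks until
// the requested units fit into the current one-second window, or until
// stopCh is closed.
type rateLimiter struct {
	perSec int64
	used   int64
	ticker *time.Ticker
	stopCh chan struct{}
}

func newRateLimiter(perSec int64, stopCh chan struct{}) *rateLimiter {
	return &rateLimiter{perSec: perSec, ticker: time.NewTicker(time.Second), stopCh: stopCh}
}

func (rl *rateLimiter) Register(n int) {
	for rl.used+int64(n) > rl.perSec {
		select {
		case <-rl.ticker.C:
			rl.used = 0 // a new one-second window begins
		case <-rl.stopCh:
			return // shutdown: unblock the caller immediately
		}
	}
	rl.used += int64(n)
}

func main() {
	stopCh := make(chan struct{})
	rl := newRateLimiter(1000, stopCh)
	rl.Register(800)
	rl.Register(800) // blocks until the next window opens
	fmt.Println("both batches registered")
	close(stopCh) // StopIngestionRateLimiter does the equivalent of this
}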
// InsertCtx contains common bits for data points insertion.
@@ -59,7 +92,27 @@ func (ctx *InsertCtx) marshalMetricNameRaw(prefix []byte, labels []prompbmarshal
return metricNameRaw[:len(metricNameRaw):len(metricNameRaw)]
}
// TryPrepareLabels prepares context labels for ingestion.
//
// It returns false if the time series should be skipped.
func (ctx *InsertCtx) TryPrepareLabels(hasRelabeling bool) bool {
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
return false
}
if timeserieslimits.Enabled() && timeserieslimits.IsExceeding(ctx.Labels) {
return false
}
ctx.sortLabelsIfNeeded()
return true
}
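
For contrast, this single call replaces the per-handler boilerplate removed in the insertRows hunks below; the before/after shapes, assembled from those hunks:

// before: repeated in every protocol handler
if hasRelabeling {
	ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
	// Skip metric without labels.
	continue
}
ctx.SortLabelsIfNeeded()

// after: one call that also enforces the new time series limits
if !ctx.TryPrepareLabels(hasRelabeling) {
	continue
}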
// WriteDataPoint writes (timestamp, value) with the given prefix and labels into ctx buffer.
//
// The caller should invoke TryPrepareLabels before using this function if needed.
func (ctx *InsertCtx) WriteDataPoint(prefix []byte, labels []prompbmarshal.Label, timestamp int64, value float64) error {
metricNameRaw := ctx.marshalMetricNameRaw(prefix, labels)
return ctx.addRow(metricNameRaw, timestamp, value)
@@ -67,6 +120,8 @@ func (ctx *InsertCtx) WriteDataPoint(prefix []byte, labels []prompbmarshal.Label
// WriteDataPointExt writes (timestamp, value) with the given metricNameRaw and labels into ctx buffer.
//
// The caller must invoke TryPrepareLabels before using this function.
//
// If len(metricNameRaw) == 0, it builds and returns metricNameRaw for the given labels.
func (ctx *InsertCtx) WriteDataPointExt(metricNameRaw []byte, labels []prompbmarshal.Label, timestamp int64, value float64) ([]byte, error) {
if len(metricNameRaw) == 0 {
@@ -149,9 +204,12 @@ func (ctx *InsertCtx) FlushBufs() error {
}
matchIdxsPool.Put(matchIdxs)
}
ingestionRateLimiter.Register(len(ctx.mrs))
// There is no need to limit the number of concurrent calls to vmstorage.AddRows() here,
// since the number of concurrent FlushBufs() calls is already limited by the writeconcurrencylimiter
// used at every stream.Parse() call under lib/protoparser/*
err := vmstorage.AddRows(ctx.mrs)
ctx.Reset(0)
if err == nil {

View File

@@ -12,8 +12,8 @@ var sortLabels = flag.Bool("sortLabels", false, `Whether to sort labels for inco
`For example, m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}. `+
`Enabling sorting for labels can slow down ingestion performance a bit`)
// SortLabelsIfNeeded sorts labels if -sortLabels command-line flag is set
func (ctx *InsertCtx) SortLabelsIfNeeded() {
// sortLabelsIfNeeded sorts labels if -sortLabels command-line flag is set
func (ctx *InsertCtx) sortLabelsIfNeeded() {
if *sortLabels {
sort.Sort(&ctx.Labels)
}

View File

@@ -46,14 +46,9 @@ func insertRows(rows []parser.Row, extraLabels []prompbmarshal.Label) error {
label := &extraLabels[j]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
if err := ctx.WriteDataPoint(nil, ctx.Labels, r.Timestamp, r.Value); err != nil {
return err
}

View File

@@ -60,14 +60,9 @@ func insertRows(sketches []*datadogsketches.Sketch, extraLabels []prompbmarshal.
label := &extraLabels[j]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
var metricNameRaw []byte
var err error
for _, p := range m.Points {

View File

@@ -63,14 +63,9 @@ func insertRows(series []datadogv1.Series, extraLabels []prompbmarshal.Label) er
label := &extraLabels[j]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
var metricNameRaw []byte
var err error
for _, pt := range ss.Points {

View File

@@ -66,14 +66,9 @@ func insertRows(series []datadogv2.Series, extraLabels []prompbmarshal.Label) er
label := &extraLabels[j]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
var metricNameRaw []byte
var err error
for _, pt := range ss.Points {

View File

@@ -36,14 +36,9 @@ func insertRows(rows []parser.Row) error {
tag := &r.Tags[j]
ctx.AddLabel(tag.Key, tag.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
if err := ctx.WriteDataPoint(nil, ctx.Labels, r.Timestamp, r.Value); err != nil {
return err
}

View File

@@ -14,6 +14,7 @@ import (
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
"github.com/VictoriaMetrics/metrics"
)
@@ -69,6 +70,7 @@ func insertRows(db string, rows []parser.Row, extraLabels []prompbmarshal.Label)
ic.Reset(rowsLen)
rowsTotal := 0
hasRelabeling := relabel.HasRelabeling()
hasLimitsEnabled := timeserieslimits.Enabled()
for i := range rows {
r := &rows[i]
rowsTotal += len(r.Fields)
@@ -108,18 +110,17 @@ func insertRows(db string, rows []parser.Row, extraLabels []prompbmarshal.Label)
metricGroup := bytesutil.ToUnsafeString(ctx.metricGroupBuf)
ic.Labels = append(ic.Labels[:0], ctx.originLabels...)
ic.AddLabel("", metricGroup)
ic.ApplyRelabeling()
if len(ic.Labels) == 0 {
// Skip metric without labels.
if !ic.TryPrepareLabels(true) {
continue
}
ic.SortLabelsIfNeeded()
if err := ic.WriteDataPoint(nil, ic.Labels, r.Timestamp, f.Value); err != nil {
return err
}
}
} else {
ic.SortLabelsIfNeeded()
if !ic.TryPrepareLabels(false) {
continue
}
ctx.metricNameBuf = storage.MarshalMetricNameRaw(ctx.metricNameBuf[:0], ic.Labels)
labelsLen := len(ic.Labels)
for j := range r.Fields {
@@ -130,9 +131,12 @@ func insertRows(db string, rows []parser.Row, extraLabels []prompbmarshal.Label)
metricGroup := bytesutil.ToUnsafeString(ctx.metricGroupBuf)
ic.Labels = ic.Labels[:labelsLen]
ic.AddLabel("", metricGroup)
if len(ic.Labels) == 0 {
// Skip metric without labels.
continue
if hasLimitsEnabled {
// Special case for the optimization above: the base labels were already
// validated via TryPrepareLabels, so only the freshly appended __name__
// label needs to be checked against the limits.
if timeserieslimits.IsExceeding(ic.Labels[len(ic.Labels)-1:]) {
continue
}
}
if err := ic.WriteDataPoint(ctx.metricNameBuf, ic.Labels[len(ic.Labels)-1:], r.Timestamp, f.Value); err != nil {
return err

View File

@@ -41,8 +41,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
)
var (
@@ -67,8 +67,9 @@ var (
"at -opentsdbHTTPListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt")
configAuthKey = flagutil.NewPassword("configAuthKey", "Authorization key for accessing /config page. It must be passed via authKey query arg. It overrides -httpAuth.*")
reloadAuthKey = flagutil.NewPassword("reloadAuthKey", "Auth key for /-/reload http endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings.")
maxLabelsPerTimeseries = flag.Int("maxLabelsPerTimeseries", 30, "The maximum number of labels accepted per time series. Superfluous labels are dropped. In this case the vm_metrics_with_dropped_labels_total metric at /metrics page is incremented")
maxLabelValueLen = flag.Int("maxLabelValueLen", 4*1024, "The maximum length of label values in the accepted time series. Longer label values are truncated. In this case the vm_too_long_label_values_total metric at /metrics page is incremented")
maxLabelsPerTimeseries = flag.Int("maxLabelsPerTimeseries", 40, "The maximum number of labels per time series to be accepted. Series with superfluous labels are ignored. In this case the vm_rows_ignored_total{reason=\"too_many_labels\"} metric at /metrics page is incremented")
maxLabelNameLen = flag.Int("maxLabelNameLen", 256, "The maximum length of label names in the accepted time series. Series with a longer label name are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_name\"} metric at /metrics page is incremented")
maxLabelValueLen = flag.Int("maxLabelValueLen", 4*1024, "The maximum length of label values in the accepted time series. Series with a longer label value are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_value\"} metric at /metrics page is incremented")
)
var (
@@ -87,8 +88,6 @@ var staticServer = http.FileServer(http.FS(staticFiles))
func Init() {
relabel.Init()
vminsertCommon.InitStreamAggr()
storage.SetMaxLabelsPerTimeseries(*maxLabelsPerTimeseries)
storage.SetMaxLabelValueLen(*maxLabelValueLen)
common.StartUnmarshalWorkers()
if len(*graphiteListenAddr) > 0 {
graphiteServer = graphiteserver.MustStart(*graphiteListenAddr, *graphiteUseProxyProtocol, graphite.InsertHandler)
@@ -105,6 +104,7 @@ func Init() {
promscrape.Init(func(_ *auth.Token, wr *prompbmarshal.WriteRequest) {
prompush.Push(wr)
})
timeserieslimits.Init(*maxLabelsPerTimeseries, *maxLabelNameLen, *maxLabelValueLen)
}
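
Note the behavioral shift carried by these flags: previously superfluous labels were dropped while the series was kept (tracked via vm_metrics_with_dropped_labels_total, removed below); now a series exceeding any limit is ignored entirely and vm_rows_ignored_total{reason=...} is incremented. A sketch of the per-series gate, with an illustrative wrapper around the Enabled/IsExceeding calls shown in these hunks:

// seriesAccepted reports whether a series passes the configured limits.
// The wrapper function is illustrative; Enabled and IsExceeding are the
// calls visible in TryPrepareLabels above.
func seriesAccepted(labels []prompbmarshal.Label) bool {
	if timeserieslimits.Enabled() && timeserieslimits.IsExceeding(labels) {
		return false // series ignored; vm_rows_ignored_total{reason=...} grows
	}
	return true
}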
// Stop stops vminsert.
@@ -439,14 +439,4 @@ var (
promscrapeStatusConfigRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/status/config"}`)
promscrapeConfigReloadRequests = metrics.NewCounter(`vm_http_requests_total{path="/-/reload"}`)
_ = metrics.NewGauge(`vm_metrics_with_dropped_labels_total`, func() float64 {
return float64(storage.MetricsWithDroppedLabels.Load())
})
_ = metrics.NewGauge(`vm_too_long_label_names_total`, func() float64 {
return float64(storage.TooLongLabelNames.Load())
})
_ = metrics.NewGauge(`vm_too_long_label_values_total`, func() float64 {
return float64(storage.TooLongLabelValues.Load())
})
)

View File

@@ -55,14 +55,9 @@ func insertRows(block *stream.Block, extraLabels []prompbmarshal.Label) error {
label := &extraLabels[j]
ic.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ic.ApplyRelabeling()
}
if len(ic.Labels) == 0 {
// Skip metric without labels.
if !ic.TryPrepareLabels(hasRelabeling) {
return nil
}
ic.SortLabelsIfNeeded()
ctx.metricNameBuf = storage.MarshalMetricNameRaw(ctx.metricNameBuf[:0], ic.Labels)
values := block.Values
timestamps := block.Timestamps
@@ -71,7 +66,9 @@ func insertRows(block *stream.Block, extraLabels []prompbmarshal.Label) error {
}
for j, value := range values {
timestamp := timestamps[j]
if err := ic.WriteDataPoint(ctx.metricNameBuf, nil, timestamp, value); err != nil {
// TODO: @f41gh7 looks like it's better to use WriteDataPointExt
// since metricName never changes inside insertRows call
if err := ic.WriteDataPoint(ctx.metricNameBuf, ic.Labels, timestamp, value); err != nil {
return err
}
}

View File

@@ -58,14 +58,9 @@ func insertRows(rows []newrelic.Row, extraLabels []prompbmarshal.Label) error {
label := &extraLabels[k]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
if err := ctx.WriteDataPoint(nil, ctx.Labels, r.Timestamp, s.Value); err != nil {
return err
}

View File

@@ -59,14 +59,9 @@ func insertRows(tss []prompbmarshal.TimeSeries, extraLabels []prompbmarshal.Labe
for _, label := range extraLabels {
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
var metricNameRaw []byte
var err error
samples := ts.Samples

View File

@@ -36,14 +36,9 @@ func insertRows(rows []parser.Row) error {
tag := &r.Tags[j]
ctx.AddLabel(tag.Key, tag.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
if err := ctx.WriteDataPoint(nil, ctx.Labels, r.Timestamp, r.Value); err != nil {
return err
}

View File

@@ -54,14 +54,9 @@ func insertRows(rows []parser.Row, extraLabels []prompbmarshal.Label) error {
label := &extraLabels[j]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
if err := ctx.WriteDataPoint(nil, ctx.Labels, r.Timestamp, r.Value); err != nil {
return err
}

View File

@@ -54,14 +54,9 @@ func insertRows(rows []parser.Row, extraLabels []prompbmarshal.Label) error {
label := &extraLabels[j]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
if err := ctx.WriteDataPoint(nil, ctx.Labels, r.Timestamp, r.Value); err != nil {
return err
}

View File

@@ -57,12 +57,9 @@ func push(ctx *common.InsertCtx, tss []prompbmarshal.TimeSeries) {
label := &ts.Labels[j]
ctx.AddLabel(label.Name, label.Value)
}
ctx.ApplyRelabeling()
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(false) {
continue
}
ctx.SortLabelsIfNeeded()
var metricNameRaw []byte
var err error
for i := range ts.Samples {

View File

@@ -52,14 +52,10 @@ func insertRows(timeseries []prompb.TimeSeries, extraLabels []prompbmarshal.Labe
label := &extraLabels[j]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
if !ctx.TryPrepareLabels(hasRelabeling) {
continue
}
ctx.SortLabelsIfNeeded()
var metricNameRaw []byte
var err error
samples := ts.Samples

View File

@@ -58,14 +58,9 @@ func insertRows(rows []parser.Row, extraLabels []prompbmarshal.Label) error {
label := &extraLabels[j]
ic.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ic.ApplyRelabeling()
}
if len(ic.Labels) == 0 {
// Skip metric without labels.
if !ic.TryPrepareLabels(hasRelabeling) {
continue
}
ic.SortLabelsIfNeeded()
ctx.metricNameBuf = storage.MarshalMetricNameRaw(ctx.metricNameBuf[:0], ic.Labels)
values := r.Values
timestamps := r.Timestamps

View File

@@ -699,8 +699,13 @@ func (rc *rollupConfig) doInternal(dstValues []float64, tsm *timeseriesMap, valu
// Extend dstValues in order to remove mallocs below.
dstValues = decimal.ExtendFloat64sCapacity(dstValues, len(rc.Timestamps))
scrapeInterval := getScrapeInterval(timestamps, rc.Step)
maxPrevInterval := getMaxPrevInterval(scrapeInterval)
// Use step as the scrape interval for instant queries (when start == end).
maxPrevInterval := rc.Step
if rc.Start < rc.End {
scrapeInterval := getScrapeInterval(timestamps, rc.Step)
maxPrevInterval = getMaxPrevInterval(scrapeInterval)
}
if rc.LookbackDelta > 0 && maxPrevInterval > rc.LookbackDelta {
maxPrevInterval = rc.LookbackDelta
}
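
A standalone sketch of the window selection above, with hypothetical names; fromScrapes stands in for getMaxPrevInterval(getScrapeInterval(...)):

package main

import (
	"fmt"
	"time"
)

// maxPrevInterval mirrors the branch above: instant queries (start == end)
// use the query step as the lookback window instead of a scrape-interval
// estimate, and LookbackDelta still caps the result.
func maxPrevInterval(start, end, step, lookbackDelta, fromScrapes time.Duration) time.Duration {
	d := step // instant query: start == end
	if start < end {
		d = fromScrapes // range query: derive from observed sample spacing
	}
	if lookbackDelta > 0 && d > lookbackDelta {
		d = lookbackDelta
	}
	return d
}

func main() {
	// instant query with step=5m: the window is the step, not the scrape estimate
	fmt.Println(maxPrevInterval(0, 0, 5*time.Minute, 0, 70*time.Second))
}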

View File

@@ -1,13 +1,13 @@
{
"files": {
"main.css": "./static/css/main.b1929c64.css",
"main.js": "./static/js/main.a7d57628.js",
"main.css": "./static/css/main.876c56b7.css",
"main.js": "./static/js/main.caf36c39.js",
"static/js/685.f772060c.chunk.js": "./static/js/685.f772060c.chunk.js",
"static/media/MetricsQL.md": "./static/media/MetricsQL.a00044c91d9781cf8557.md",
"index.html": "./index.html"
},
"entrypoints": [
"static/css/main.b1929c64.css",
"static/js/main.a7d57628.js"
"static/css/main.876c56b7.css",
"static/js/main.caf36c39.js"
]
}

View File

@@ -1 +1 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.svg"/><link rel="apple-touch-icon" href="./favicon.svg"/><link rel="mask-icon" href="./favicon.svg" color="#000000"><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5"/><meta name="theme-color" content="#000000"/><meta name="description" content="Explore and troubleshoot your VictoriaMetrics data"/><link rel="manifest" href="./manifest.json"/><title>vmui</title><script src="./dashboards/index.js" type="module"></script><meta name="twitter:card" content="summary"><meta name="twitter:title" content="UI for VictoriaMetrics"><meta name="twitter:site" content="@https://victoriametrics.com/"><meta name="twitter:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta name="twitter:image" content="./preview.jpg"><meta property="og:type" content="website"><meta property="og:title" content="UI for VictoriaMetrics"><meta property="og:url" content="https://victoriametrics.com/"><meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data"><script defer="defer" src="./static/js/main.a7d57628.js"></script><link href="./static/css/main.b1929c64.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.svg"/><link rel="apple-touch-icon" href="./favicon.svg"/><link rel="mask-icon" href="./favicon.svg" color="#000000"><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5"/><meta name="theme-color" content="#000000"/><meta name="description" content="Explore and troubleshoot your VictoriaMetrics data"/><link rel="manifest" href="./manifest.json"/><title>vmui</title><script src="./dashboards/index.js" type="module"></script><meta name="twitter:card" content="summary"><meta name="twitter:title" content="UI for VictoriaMetrics"><meta name="twitter:site" content="@https://victoriametrics.com/"><meta name="twitter:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta name="twitter:image" content="./preview.jpg"><meta property="og:type" content="website"><meta property="og:title" content="UI for VictoriaMetrics"><meta property="og:url" content="https://victoriametrics.com/"><meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data"><script defer="defer" src="./static/js/main.caf36c39.js"></script><link href="./static/css/main.876c56b7.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,4 +1,4 @@
FROM golang:1.23.3 AS build-web-stage
FROM golang:1.23.4 AS build-web-stage
COPY build /build
WORKDIR /build
@@ -6,7 +6,7 @@ COPY web/ /build/
RUN GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o web-amd64 github.com/VictoriMetrics/vmui/ && \
GOOS=windows GOARCH=amd64 CGO_ENABLED=0 go build -o web-windows github.com/VictoriMetrics/vmui/
FROM alpine:3.20.3
FROM alpine:3.21.0
USER root
COPY --from=build-web-stage /build/web-amd64 /app/web

View File

@@ -48,3 +48,11 @@ export interface LogHits {
[key: string]: string;
};
}
export interface ReportMetaData {
id: number;
title: string;
endpoint: string;
comment: string;
params: Record<string, string>;
}

View File

@@ -43,7 +43,8 @@ const QueryEditor: FC<QueryEditorProps> = ({
const { isMobile } = useDeviceDetect();
const [openAutocomplete, setOpenAutocomplete] = useState(false);
const [caretPosition, setCaretPosition] = useState<[number, number]>([0, 0]);
const [caretPositionAutocomplete, setCaretPositionAutocomplete] = useState<[number, number]>([0, 0]);
const [caretPositionInput, setCaretPositionInput] = useState<[number, number]>([0, 0]);
const autocompleteAnchorEl = useRef<HTMLInputElement>(null);
const [showAutocomplete, setShowAutocomplete] = useState(autocomplete);
@@ -66,7 +67,7 @@ const QueryEditor: FC<QueryEditorProps> = ({
const handleSelect = (val: string, caretPosition: number) => {
onChange(val);
setCaretPosition([caretPosition, caretPosition]);
setCaretPositionInput([caretPosition, caretPosition]);
};
const handleKeyDown = (e: KeyboardEvent) => {
@@ -108,7 +109,7 @@ const QueryEditor: FC<QueryEditorProps> = ({
};
const handleChangeCaret = (val: [number, number]) => {
setCaretPosition(prev => prev[0] === val[0] && prev[1] === val[1] ? prev : val);
setCaretPositionAutocomplete(prev => prev[0] === val[0] && prev[1] === val[1] ? prev : val);
};
useEffect(() => {
@@ -118,7 +119,7 @@ const QueryEditor: FC<QueryEditorProps> = ({
useEffect(() => {
setShowAutocomplete(false);
debouncedSetShowAutocomplete(true);
}, [caretPosition]);
}, [caretPositionAutocomplete]);
return (
<div
@@ -137,13 +138,13 @@ const QueryEditor: FC<QueryEditorProps> = ({
onChangeCaret={handleChangeCaret}
disabled={disabled}
inputmode={"search"}
caretPosition={caretPosition}
caretPosition={caretPositionInput}
/>
{showAutocomplete && autocomplete && (
<QueryEditorAutocomplete
value={value}
anchorEl={autocompleteAnchorEl}
caretPosition={caretPosition}
caretPosition={caretPositionAutocomplete}
hasHelperText={Boolean(warning || error)}
includeFunctions={includeFunctions}
onSelect={handleSelect}

View File

@@ -38,6 +38,10 @@
align-items: flex-start;
gap: $padding-small;
ul {
list-style-position: inside;
}
button {
color: inherit;
min-height: 29px;

View File

@@ -19,6 +19,10 @@ const Accordion: FC<AccordionProps> = ({
const [isOpen, setIsOpen] = useState(defaultExpanded);
const toggleOpen = () => {
const selection = window.getSelection();
if (selection && selection.toString()) {
return; // if text is selected, skip toggling the accordion
}
setIsOpen(prev => !prev);
};

View File

@@ -46,6 +46,8 @@
display: flex;
align-items: center;
justify-content: center;
align-self: flex-start;
min-height: 24px;
}
&__content {

View File

@@ -570,3 +570,14 @@ export const SpinnerIcon = () => (
</path>
</svg>
);
export const CommentIcon = () => (
<svg
viewBox="0 0 24 24"
fill="currentColor"
>
<path
d="M21.99 4c0-1.1-.89-2-1.99-2H4c-1.1 0-2 .9-2 2v12c0 1.1.9 2 2 2h14l4 4zM18 14H6v-2h12zm0-3H6V9h12zm0-3H6V6h12z"
></path>
</svg>
);

View File

@@ -0,0 +1,62 @@
import React, { FC } from "preact/compat";
import useBoolean from "../../../hooks/useBoolean";
import classNames from "classnames";
import TextField from "../TextField/TextField";
import "./style.scss";
import { marked } from "marked";
interface Props {
value: string;
onChange: (value: string) => void;
}
const tabs = [
{ title: "Write", value: false },
{ title: "Preview", value: true },
];
const MarkdownEditor: FC<Props> = ({ value, onChange }) => {
const {
value: markdownPreview,
setTrue: setMarkdownPreviewTrue,
setFalse: setMarkdownPreviewFalse,
} = useBoolean(false);
return (
<div className="vm-markdown-editor">
<div className="vm-markdown-editor-header">
<div className="vm-markdown-editor-header-tabs">
{tabs.map(({ title, value }) => (
<div
key={title}
className={classNames({
"vm-markdown-editor-header-tabs__tab": true,
"vm-markdown-editor-header-tabs__tab_active": markdownPreview === value,
})}
onClick={value ? setMarkdownPreviewTrue : setMarkdownPreviewFalse}
>
{title}
</div>
))}
</div>
<span className="vm-markdown-editor-header__info">
Markdown is supported
</span>
</div>
{markdownPreview ? (
<div
className="vm-markdown-editor-preview vm-markdown"
dangerouslySetInnerHTML={{ __html: marked(value) as string }}
/>
) : (
<TextField
type="textarea"
value={value}
onChange={onChange}
/>
)}
</div>
);
};
export default MarkdownEditor;

View File

@@ -0,0 +1,75 @@
@use "src/styles/variables" as *;
.vm-markdown-editor {
margin-top: 6px;
padding: 0 6px;
border-radius: $border-radius-small;
border: $border-divider;
overflow: hidden;
&-header {
display: flex;
align-items: center;
background-color: $color-hover-black;
padding-right: $padding-global;
border-bottom: $border-divider;
margin: -1px -7px 6px;
&-tabs {
display: flex;
&__tab {
display: flex;
align-items: center;
justify-content: center;
margin-bottom: -1px;
padding: $padding-small $padding-large;
min-height: 40px;
color: $color-text-secondary;
transition: color 0.3s;
cursor: pointer;
&:hover {
color: $color-text;
}
&_active {
position: relative;
color: $color-text;
background-color: $color-background-body;
border-top-right-radius: $border-radius-small;
border-top-left-radius: $border-radius-small;
z-index: 1;
&:first-child {
border-right: $border-divider;
}
&:last-child {
border-right: $border-divider;
border-left: $border-divider;
}
}
}
}
&__info {
margin-left: auto;
margin-right: 0;
color: $color-text-secondary;
font-size: $font-size-small;
font-weight: 500;
}
}
&-preview {
padding: $padding-small;
margin-bottom: 6px;
}
&-preview,
textarea {
min-height: 200px;
resize: vertical;
}
}

View File

@@ -1,7 +1,6 @@
import React, {
FC,
useEffect,
useState,
useRef,
useMemo,
FormEvent,
@@ -65,7 +64,6 @@ const TextField: FC<TextFieldProps> = ({
const inputRef = useRef<HTMLInputElement>(null);
const textareaRef = useRef<HTMLTextAreaElement>(null);
const fieldRef = useMemo(() => type === "textarea" ? textareaRef : inputRef, [type]);
const [selectionPos, setSelectionPos] = useState<[start: number, end: number]>([0, 0]);
const inputClasses = classNames({
"vm-text-field__input": true,
@@ -77,8 +75,9 @@ const TextField: FC<TextFieldProps> = ({
});
const updateCaretPosition = (target: HTMLInputElement | HTMLTextAreaElement) => {
if (!onChangeCaret) return;
const { selectionStart, selectionEnd } = target;
setSelectionPos([selectionStart || 0, selectionEnd || 0]);
onChangeCaret && onChangeCaret([selectionStart || 0, selectionEnd || 0]);
};
const handleMouseUp = (e: MouseEvent<HTMLInputElement | HTMLTextAreaElement>) => {
@@ -127,14 +126,6 @@ const TextField: FC<TextFieldProps> = ({
fieldRef?.current?.focus && fieldRef.current.focus();
}, [fieldRef, autofocus]);
useEffect(() => {
onChangeCaret && onChangeCaret(selectionPos);
}, [selectionPos]);
useEffect(() => {
setSelectionRange(selectionPos);
}, [value]);
useEffect(() => {
caretPosition && setSelectionRange(caretPosition);
}, [caretPosition]);

View File

@@ -16,17 +16,20 @@ const UploadJsonButtons: FC<Props> = ({ onOpenModal, onChange }) => (
>
Paste JSON
</Button>
<Button>
Upload Files
<div className="vm-upload-json-buttons__upload">
<Button>
Upload Files
</Button>
<input
id="json"
name="json"
type="file"
accept="application/json"
multiple
title=" "
onChange={onChange}
/>
</Button>
</div>
</div>
);

Some files were not shown because too many files have changed in this diff.