docs/victorialogs: remove VictoriaLogs, since now they are located in the https://github.com/VictoriaMetrics/VictoriaLogs/ repository
Docs at https://github.com/VictoriaMetrics/VictoriaLogs/ are automatically synced to https://docs.victoriametrics.com/victorialogs/
@@ -1,355 +0,0 @@
---
weight: 6
title: FAQ
menu:
  docs:
    identifier: "victorialogs-faq"
    parent: "victorialogs"
    weight: 6
    title: FAQ
tags:
  - logs
aliases:
  - /victorialogs/FAQ.html
  - /victorialogs/faq.html
---

## Is VictoriaLogs ready for production use?

Yes. VictoriaLogs is ready for production use starting from [v1.0.0](https://docs.victoriametrics.com/victorialogs/changelog/).

## What is the difference between VictoriaLogs and Elasticsearch (OpenSearch)?

Both Elasticsearch and VictoriaLogs allow ingesting structured and unstructured logs
and performing fast full-text search over the ingested logs.

Elasticsearch and OpenSearch are designed as general-purpose databases for fast full-text search over large sets of documents.
They aren't optimized specifically for logs. This results in the following issues, which VictoriaLogs resolves:

- High RAM usage
- High disk space usage
- Non-trivial index setup
- Inability to select more than 10K matching log lines in a single query with default configs

VictoriaLogs is optimized specifically for logs, so it provides the following features useful for logs, which are missing in Elasticsearch:

- Easy to set up and operate. There is no need to tune the configuration for optimal performance or to create indexes for various log types.
  Just run VictoriaLogs on the most suitable hardware, ingest logs into it via [supported data ingestion protocols](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
  and get the best available performance out of the box.
- Up to 30x less RAM usage than Elasticsearch for the same workload.
  See [the post from a user who replaced a 27-node Elasticsearch cluster with a single-node VictoriaLogs](https://aus.social/@phs/114583927679254536).
  See also [this article](https://itnext.io/how-do-open-source-solutions-for-logs-work-elasticsearch-loki-and-victorialogs-9f7097ecbc2f) for technical details.
- Up to 15x less disk space usage than Elasticsearch for the same amount of stored logs.
- Ability to work efficiently with hundreds of terabytes of logs on a single node.
- Easy-to-use query language optimized for typical log analysis tasks - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
- Fast full-text search over all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) out of the box.
- Good integration with traditional command-line tools for log analysis. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#command-line).

## What is the difference between VictoriaLogs and Grafana Loki?

Both Grafana Loki and VictoriaLogs are designed for log management and processing.
Both systems support the [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) concept.

VictoriaLogs and Grafana Loki have the following differences:

- VictoriaLogs is much easier to set up and operate than Grafana Loki. There is no need for non-trivial tuning -
  it works great with the default configuration.

- VictoriaLogs performs typical full-text search queries up to 1000x faster than Grafana Loki.

- Grafana Loki doesn't support log fields with many unique values (aka high-cardinality labels) such as `user_id`, `trace_id` or `ip`.
  It consumes huge amounts of RAM and slows down significantly when logs with high-cardinality fields are ingested into it.
  See [these docs](https://grafana.com/docs/loki/latest/best-practices/) for details.

  VictoriaLogs supports high-cardinality [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
  out of the box without any additional configuration. It automatically indexes all the ingested log fields,
  so fast full-text search over any log field works without issues.

- Grafana Loki provides a very inconvenient query language - [LogQL](https://grafana.com/docs/loki/latest/logql/).
  This query language is hard to use for typical log analysis tasks.

  VictoriaLogs provides an easy-to-use query language for typical log analysis tasks - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).

  See [how to convert LogQL to LogsQL](https://docs.victoriametrics.com/victorialogs/logql-to-logsql/).

- VictoriaLogs usually needs less RAM and storage space than Grafana Loki for the same amounts of logs.

See [this article](https://itnext.io/why-victorialogs-is-a-better-alternative-to-grafana-loki-7e941567c4d5) for more details.

## What is the difference between VictoriaLogs and ClickHouse?

ClickHouse is an extremely fast and efficient analytical database. It can be used for log storage, analysis and processing.
VictoriaLogs is designed solely for logs. VictoriaLogs uses [similar design ideas as ClickHouse](#how-does-victorialogs-work) for achieving high performance.

- ClickHouse is good for logs if you know the set of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
  and the expected query types beforehand. Then you can create a table with a column per log field and use the most optimal settings for the table -
  sort order, partitioning and indexing - for achieving the maximum possible storage efficiency and query performance.

  If the expected log fields or the expected query types aren't known beforehand, or if they may change over time,
  then ClickHouse can still be used, but its efficiency may suffer significantly depending on how you design the database schema for log storage.

  VictoriaLogs works optimally with any log types out of the box - structured, unstructured and mixed.
  It works optimally with any sets of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model),
  which can change in any way across different log sources.

- ClickHouse provides an SQL dialect with additional analytical functionality. It allows performing arbitrarily complex analytical queries
  over the stored logs.

  VictoriaLogs provides an easy-to-use query language with full-text search specifically optimized
  for log analysis - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
  LogsQL is usually easier to use than SQL for typical log analysis tasks - see [these docs](https://docs.victoriametrics.com/victorialogs/sql-to-logsql/).

- VictoriaLogs accepts logs from popular log shippers out of the box - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/).

  ClickHouse needs an intermediate application for converting the ingested logs into `INSERT` SQL statements for the particular database schema.
  This may increase the complexity of the system and, subsequently, its maintenance costs.

- VictoriaLogs provides a [built-in Web UI](https://docs.victoriametrics.com/victorialogs/querying/#web-ui) for log exploration.

## How does VictoriaLogs work?

VictoriaLogs accepts logs as [JSON entries](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
Then it stores log fields into distinct data blocks, e.g. values for the same log field across multiple log entries
are stored in a single data block. This allows reading data blocks only for the needed fields during querying.

Data blocks are compressed before being saved to persistent storage. This saves disk space and improves query performance
when it is limited by disk read IO bandwidth.

Smaller data blocks are merged into bigger blocks in the background. Data blocks are limited in size. If the size of a data block exceeds the limit,
then it is split into multiple blocks of smaller sizes.

Every data block is processed in an atomic manner during querying. For example, if the data block contains at least a single value,
which needs to be processed, then the whole data block is unpacked and read at once. Data blocks are processed in parallel
on all the available CPU cores during querying. This allows scaling query performance with the number of available CPU cores.

This architecture is inspired by the [ClickHouse architecture](https://clickhouse.com/docs/en/development/architecture).

On top of this, VictoriaLogs employs additional optimizations for achieving high query performance:

- It uses [bloom filters](https://en.wikipedia.org/wiki/Bloom_filter) for skipping blocks without the given
  [word](https://docs.victoriametrics.com/victorialogs/logsql/#word-filter) or [phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter).
- It uses custom encoding and compression for fields with different data types.
  For example, it encodes IP addresses into 4 bytes. Custom field encoding reduces data size on disk and improves query performance.
- It physically groups logs for the same [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
  close to each other in the storage. This improves the compression ratio, which helps reduce disk space usage. This also improves query performance
  by skipping blocks for unneeded streams when a [stream filter](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter) is used.
- It maintains a sparse index for [log timestamps](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field),
  which allows improving query performance when a [time filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) is used.

## How to export logs from VictoriaLogs?

Just send a query with the needed [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
to [`/select/logsql/query`](https://docs.victoriametrics.com/victorialogs/querying/#querying-logs) - VictoriaLogs will return
the requested logs as a [stream of JSON lines](https://jsonlines.org/). It is recommended to specify a [time filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
for limiting the amount of exported logs.
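
For example, the following command exports logs containing the word `error` over the last hour as JSON lines (assuming VictoriaLogs listens on `localhost:9428`):

```sh
curl http://localhost:9428/select/logsql/query -d 'query=_time:1h error' > exported-logs.jsonl
```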

## I want to ingest logs without message field, is that possible?

VictoriaLogs [accepts](https://docs.victoriametrics.com/victorialogs/data-ingestion/) logs without the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field).
In this case the `_msg` field is set to the default value, which can be configured via the `-defaultMsgValue` command-line flag.
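
For example, the following command starts VictoriaLogs with a custom default value for the `_msg` field (the value here is purely illustrative):

```sh
/path/to/victoria-logs -defaultMsgValue="no message provided"
```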

## What if my logs have multiple message fields candidates?

If you [ingest](https://docs.victoriametrics.com/victorialogs/data-ingestion/) logs into VictoriaLogs
without the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field), then this field
is filled according to the `_msg_field` HTTP query arg and/or the `VL-Msg-Field` HTTP header.
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details.
If the `_msg_field` HTTP query arg and/or the `VL-Msg-Field` HTTP header contains a list of comma-separated field names,
then the first non-empty field from this list is used as the `_msg` field.

For example, if the following [log entry](https://docs.victoriametrics.com/victorialogs/keyconcepts/)
is ingested into VictoriaLogs with `_msg_field=message,body`:

```json
{
  "message": "foo bar in message",
  "body": "foo bar in body"
}
```

Then the `_msg` field is set to `foo bar in message`.

If the following log entry is ingested into VictoriaLogs with `_msg_field=message,body`:

```json
{
  "body": "foo bar in body"
}
```

Then the `_msg` field is set to `foo bar in body`.
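
For instance, the first log entry above could be ingested with a command like this (a sketch assuming the JSON lines endpoint of the [data ingestion API](https://docs.victoriametrics.com/victorialogs/data-ingestion/) at `localhost:9428`):

```sh
echo '{"message":"foo bar in message","body":"foo bar in body"}' \
  | curl -X POST 'http://localhost:9428/insert/jsonline?_msg_field=message,body' --data-binary @-
```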

## What length a log record is expected to have?

VictoriaLogs works optimally with log records of up to `10KB`. It works OK with
log records of up to `100KB`. It works less optimally with log records exceeding
`100KB`.

The max size of a log record VictoriaLogs can accept during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
is `2MB`, because log records are stored in blocks of up to `2MB` in size.
Blocks of this size fit the L2 cache of a typical CPU, which gives
optimal performance during data ingestion and querying.

Note that log records with sizes close to `2MB` aren't handled efficiently by
VictoriaLogs, because the whole per-block overhead falls on a single log record, and
this overhead is big.

The `2MB` limit is hardcoded and is unlikely to increase.

The limit can be set to a lower value during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
via the `-insert.maxLineSizeBytes` command-line flag.

## What is the maximum supported field name length

VictoriaLogs limits the [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) name length to 128 bytes.
Log entries with longer field names are ignored during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).

The maximum length of a field name is hardcoded and is unlikely to increase, since this may increase RAM and CPU usage.

## How many fields a single log entry may contain

A single log entry may contain up to 2000 fields. This fits the majority of use cases for structured logs and
for [wide events](https://jeremymorrell.dev/blog/a-practitioners-guide-to-wide-events/).

The maximum number of fields per log entry is hardcoded and is unlikely to increase, since this may increase RAM and CPU usage.

The limit can be set to a lower value during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
via the `-insert.maxFieldsPerLine` command-line flag.
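
For example, the following command starts VictoriaLogs with both limits lowered (the specific values are just an illustration):

```sh
/path/to/victoria-logs -insert.maxLineSizeBytes=262144 -insert.maxFieldsPerLine=500
```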

## How to determine which log fields occupy the most of disk space?

[Run](https://docs.victoriametrics.com/victorialogs/querying/) the following [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) query
based on the [`block_stats` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#block_stats-pipe):

```logsql
_time:1d
  | block_stats
  | stats by (field)
      sum(values_bytes) as values_bytes,
      sum(bloom_bytes) as bloom_bytes,
      sum(rows) as rows
  | math
      (values_bytes+bloom_bytes) as total_bytes,
      round(total_bytes / rows, 0.01) as bytes_per_row
  | first 10 (total_bytes desc)
```

This query returns the top 10 [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model),
which occupy the most disk space across the logs ingested during the last day. The occupied disk space
is returned in the `total_bytes` field.

If you use the [VictoriaLogs web UI](https://docs.victoriametrics.com/victorialogs/querying/#web-ui)
or the [Grafana plugin for VictoriaLogs](https://docs.victoriametrics.com/victorialogs/victorialogs-datasource/),
then make sure the selected time range covers the last day. Otherwise, the query above returns
results on the intersection of the last day and the selected time range.

See [why the log field occupies a lot of disk space](#why-the-log-field-occupies-a-lot-of-disk-space).

## Why the log field occupies a lot of disk space?

See [how to determine which log fields occupy the most of disk space](#how-to-determine-which-log-fields-occupy-the-most-of-disk-space).
A log field may occupy a lot of disk space if it contains values with many unique parts (aka "random" values).
Such values do not compress well, so they occupy a lot of disk space. If you want to reduce the amount of occupied disk space,
then either remove the given log field from the [ingested](https://docs.victoriametrics.com/victorialogs/data-ingestion/) logs
or remove the unique parts from the log field before ingesting it into VictoriaLogs.

## How to detect the most frequently seen logs?

Use the [`collapse_nums` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#collapse_nums-pipe).
For example, the following [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) query
returns the top 10 most frequently seen [log messages](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) over the last hour:

```logsql
_time:1h | collapse_nums prettify | top 10 (_msg)
```

Add the [`_stream` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) to the `top (...)` list in order to get the top 10 most frequently seen logs together with the `_stream` field:

```logsql
_time:1h | collapse_nums prettify | top 10 (_stream, _msg)
```

## How to get field names seen in the selected logs?

Use the [`field_names` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#field_names-pipe).
For example, the following [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) query
returns all the [field names](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) seen
across all the logs during the last hour:

```logsql
_time:1h | field_names | sort by (name)
```

The `hits` field in the returned results contains an estimated number of logs with the given log field.

## How to get unique field values seen in the selected logs?

Use the [`field_values` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#field_values-pipe).
For example, the following [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) query
returns all the values for the `level` field across all the logs seen during the last hour:

```logsql
_time:1h | field_values level
```

The `hits` field in the returned results contains an estimated number of logs with the given value for the `level` field.

## How to get the number of unique log streams on the given time range?

Use the [`count_uniq` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#count_uniq-pipe)
over the [`_stream`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) field.
For example, the following [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) query
returns the number of unique [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
across all the logs over the last day:

```logsql
_time:1d | count_uniq(_stream)
```

## Does LogsQL support subqueries?

Yes. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#subquery-filter).
For example, the following query returns the total number of unique values for the `user_id` [field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
across the top 3 [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) with the biggest number of logs during the last hour:

```logsql
_time:1h _stream_id:in(_time:1h | top 3 (_stream_id) | keep _stream_id) | count_uniq(user_id)
```

The query works in the following way:

- It selects the top 3 log streams with the biggest number of logs during the last hour with the following subquery:

  ```logsql
  _time:1h | top 3 (_stream_id) | keep _stream_id
  ```

  This subquery uses the [`top`](https://docs.victoriametrics.com/victorialogs/logsql/#top-pipe) and [`keep`](https://docs.victoriametrics.com/victorialogs/logsql/#fields-pipe) pipes.

- Then it selects all the logs across the selected log streams over the last hour with the help of the [`_stream_id:...` filter](https://docs.victoriametrics.com/victorialogs/logsql/#_stream_id-filter).

## How to estimate the needed compute resources for the given workload?

The needed storage space depends on the following factors:

- Data compressibility. VictoriaLogs compresses the ingested logs before storing them to disk. The compression ratio depends on the "randomness" of the ingested logs.
  Less "random" logs with many repeated field values and small differences between log messages compress the best (up to 100x and more).
  More "random" logs with many unique field values may have a very low compression rate.

- [Data retention](https://docs.victoriametrics.com/victorialogs/#retention). For example, a year-long retention needs 52x more storage space than a week-long retention.

The needed RAM, CPU, storage IO and network bandwidth depend on the type and the rate of queries over the ingested logs:

- "Lightweight" queries over the recently ingested logs with very narrow [log stream filters](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter)
  require very low compute resources, even if they are executed at 1000 rps.

- "Heavy" queries over a long time range, which do not contain [log stream filters](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter)
  or which use some heavy [pipe processing](https://docs.victoriametrics.com/victorialogs/logsql/#pipes) such as [analytics calculations](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe)
  or [sorting over billions of rows](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe), may require hundreds of CPU cores and terabytes of RAM
  for fast execution. It is OK to execute such queries on machines with a few CPU cores and a few GiB of RAM - these queries will just take more time to execute.

The best approach to estimate the needed compute resources for the given workload is to start a VictoriaLogs instance, ingest a share (1%-10%) of your production logs into it,
and execute typical queries on it, while [measuring](https://docs.victoriametrics.com/victorialogs/#monitoring) the consumed compute resources.
Then you can extrapolate the needed compute resources for the full production workload in your case.

@@ -1,157 +0,0 @@
---
weight: 1
title: Quick Start
menu:
  docs:
    parent: victorialogs
    identifier: vl-quick-start
    weight: 1
    title: Quick Start
tags:
  - logs
  - guide
aliases:
  - /victorialogs/QuickStart.html
  - /victorialogs/quick-start.html
  - /victorialogs/quick-start/
---
It is recommended to read the [README](https://docs.victoriametrics.com/victorialogs/)
and [Key Concepts](https://docs.victoriametrics.com/victorialogs/keyconcepts/)
before you start working with VictoriaLogs.

## How to install and run VictoriaLogs

The following options exist:

- [To run pre-built binaries](#pre-built-binaries)
- [To run Docker image](#docker-image)
- [To run in Kubernetes with Helm charts](#helm-charts)
- [To build VictoriaLogs from source code](#building-from-source-code)

### Pre-built binaries

Pre-built binaries for VictoriaLogs are available at the [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/) page.
Just download the archive for the needed operating system and architecture, unpack it and run `victoria-logs-prod` from it.

For example, the following commands download the VictoriaLogs archive for Linux/amd64, unpack and run it:

```sh
curl -L -O https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v1.24.0-victorialogs/victoria-logs-linux-amd64-v1.24.0-victorialogs.tar.gz
tar xzf victoria-logs-linux-amd64-v1.24.0-victorialogs.tar.gz
./victoria-logs-prod -storageDataPath=victoria-logs-data
```

VictoriaLogs is now ready for [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
and [querying](https://docs.victoriametrics.com/victorialogs/querying/) at the TCP port `9428`!
It has no external dependencies, so it may run in various environments without additional setup and configuration.
VictoriaLogs automatically adapts to the available CPU and RAM resources. It also automatically creates
the needed indexes during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).

See also:

- [How to configure VictoriaLogs](#how-to-configure-victorialogs)
- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/)

### Docker image

You can run VictoriaLogs in a Docker container. It is the easiest way to start using VictoriaLogs.
Here is the command to run VictoriaLogs in a Docker container:

```sh
docker run --rm -it -p 9428:9428 -v ./victoria-logs-data:/victoria-logs-data \
  docker.io/victoriametrics/victoria-logs:v1.24.0-victorialogs -storageDataPath=victoria-logs-data
```

See also:

- [How to configure VictoriaLogs](#how-to-configure-victorialogs)
- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/)

### Helm charts

You can run VictoriaLogs in a Kubernetes environment
with the [VictoriaLogs single](https://docs.victoriametrics.com/helm/victorialogs-single/)
or [cluster](https://docs.victoriametrics.com/helm/victorialogs-cluster) helm charts.

### Building from source code

Perform the following steps in order to build VictoriaLogs from source code:

- Check out the VictoriaLogs source code. It is located in the VictoriaMetrics repository:

  ```sh
  git clone https://github.com/VictoriaMetrics/VictoriaMetrics
  cd VictoriaMetrics
  ```

- Build VictoriaLogs. The build command requires [Go 1.22](https://golang.org/doc/install).

  ```sh
  make victoria-logs
  ```

- Run the built binary:

  ```sh
  bin/victoria-logs -storageDataPath=victoria-logs-data
  ```

VictoriaLogs is now ready for [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
and [querying](https://docs.victoriametrics.com/victorialogs/querying/) at the TCP port `9428`!
It has no external dependencies, so it may run in various environments without additional setup and configuration.
VictoriaLogs automatically adapts to the available CPU and RAM resources. It also automatically creates
the needed indexes during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).

See also:

- [How to configure VictoriaLogs](#how-to-configure-victorialogs)
- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/)

## How to configure VictoriaLogs

VictoriaLogs is configured via command-line flags. All the command-line flags have sane defaults,
so there is no need to tune them in the general case. VictoriaLogs runs smoothly in most environments
without additional configuration.

Pass `-help` to VictoriaLogs in order to see the list of supported command-line flags with their description and default values:

```sh
/path/to/victoria-logs -help
```

VictoriaLogs stores the ingested data in the `victoria-logs-data` directory by default. The directory can be changed
via the `-storageDataPath` command-line flag. See [these docs](https://docs.victoriametrics.com/victorialogs/#storage) for details.

By default, VictoriaLogs stores [log entries](https://docs.victoriametrics.com/victorialogs/keyconcepts/) with timestamps
in the time range `[now-7d, now]`, while dropping logs outside the given time range,
e.g. it uses a retention of 7 days. Read [these docs](https://docs.victoriametrics.com/victorialogs/#retention) on how to control the retention
for the [ingested](https://docs.victoriametrics.com/victorialogs/data-ingestion/) logs.

It is recommended to set up monitoring of VictoriaLogs according to [these docs](https://docs.victoriametrics.com/victorialogs/#monitoring).

See also:

- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/)

## Docker demos

Docker-compose demos for the single-node and cluster versions of VictoriaLogs that include logs collection,
monitoring, alerting and Grafana are available [here](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker#readme).

Docker-compose demos that integrate VictoriaLogs and various log collectors:

- [Filebeat demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat)
- [Fluentbit demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit)
- [Logstash demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash)
- [Vector demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector)
- [Promtail demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/promtail)

You can use the [VictoriaLogs single](https://docs.victoriametrics.com/helm/victorialogs-single/)
or [cluster](https://docs.victoriametrics.com/helm/victorialogs-cluster) helm charts as a demo for running Vector
in Kubernetes with VictoriaLogs.

@@ -1,728 +0,0 @@
VictoriaLogs is an [open source](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/app/victoria-logs) user-friendly database for logs
from [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/).

VictoriaLogs provides the following features:

- It is resource-efficient and fast. It uses up to 30x less RAM and up to 15x less disk space than other solutions such as Elasticsearch and Grafana Loki.
  See [these benchmarks](#benchmarks) and [this article](https://itnext.io/how-do-open-source-solutions-for-logs-work-elasticsearch-loki-and-victorialogs-9f7097ecbc2f) for details.
  See also [the post from a happy user who replaced a 27-node Elasticsearch cluster with a single-node VictoriaLogs](https://aus.social/@phs/114583927679254536).
- VictoriaLogs' capacity and performance scale linearly with the available resources (CPU, RAM, disk IO, disk space).
  It runs smoothly on a Raspberry Pi and on servers with hundreds of CPU cores and terabytes of RAM.
  It can scale horizontally to many nodes in [cluster mode](https://docs.victoriametrics.com/victorialogs/cluster/).
- It can accept logs from popular log collectors. See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
- It is much easier to set up and operate compared to Elasticsearch and Grafana Loki, since it is basically zero-config.
  See [these docs](https://docs.victoriametrics.com/victorialogs/quickstart/).
- It provides an easy yet powerful query language with full-text search capabilities across
  all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
  See [LogsQL docs](https://docs.victoriametrics.com/victorialogs/logsql/).
- It provides a [built-in web UI](https://docs.victoriametrics.com/victorialogs/querying/#web-ui) for log exploration.
- It provides a [Grafana plugin](https://docs.victoriametrics.com/victorialogs/victorialogs-datasource/) for building arbitrary dashboards in Grafana.
- It provides an [interactive command-line tool for querying VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/).
- It can be seamlessly combined with good old Unix tools for log analysis such as `grep`, `less`, `sort`, `jq`, etc.
  See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#command-line) for details.
- It supports [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) with high cardinality (e.g. a high number of unique values) such as `trace_id`, `user_id` and `ip`.
- It is optimized for logs with hundreds of fields (aka [`wide events`](https://jeremymorrell.dev/blog/a-practitioners-guide-to-wide-events/)).
- It supports multitenancy - see [these docs](#multitenancy).
- It supports out-of-order log ingestion aka backfilling.
- It supports live tailing for newly ingested logs. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#live-tailing).
- It supports selecting surrounding logs in front of and after the selected logs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#stream_context-pipe).
- It supports alerting - see [these docs](https://docs.victoriametrics.com/victorialogs/vmalert/).

If you have questions about VictoriaLogs, then read [this FAQ](https://docs.victoriametrics.com/victorialogs/faq/).
Also feel free to ask any questions in the [VictoriaMetrics community Slack chat](https://victoriametrics.slack.com/);
you can join it via [Slack Inviter](https://slack.victoriametrics.com/).

See the [quick start docs](https://docs.victoriametrics.com/victorialogs/quickstart/) to start working with VictoriaLogs.

If you want to play with the VictoriaLogs web UI and the [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) query language,
then go to the [VictoriaLogs demo playground](https://play-vmlogs.victoriametrics.com/).

## Tuning

* No tuning is needed for VictoriaLogs - it uses reasonable defaults for command-line flags, which are automatically adjusted for the available CPU and RAM resources.
* No tuning is needed for the operating system - VictoriaLogs is optimized for default OS settings.
  The only option is increasing the limit on [the number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a).
* The recommended filesystem is `ext4`, and the recommended persistent storage is a [persistent HDD-based disk on GCP](https://cloud.google.com/compute/docs/disks/#pdspecs),
  since it is protected from hardware failures via internal replication and it can be [resized on the fly](https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd).
  If you plan to store more than 1TB of data on an `ext4` partition or plan to extend it to more than 16TB,
  then the following options are recommended to pass to `mkfs.ext4`:

  ```sh
  mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
  ```

## Monitoring

VictoriaLogs exposes internal metrics in Prometheus exposition format at the `http://localhost:9428/metrics` page.
It is recommended to set up monitoring of these metrics via VictoriaMetrics
(see [these docs](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-scrape-prometheus-exporters-such-as-node-exporter)),
vmagent (see [these docs](https://docs.victoriametrics.com/victoriametrics/vmagent/#how-to-collect-metrics-in-prometheus-format)) or via Prometheus.
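
For example, a minimal Prometheus-compatible scrape config for a locally running VictoriaLogs instance might look like this (the job name and target address are illustrative):

```yaml
scrape_configs:
  - job_name: victorialogs
    static_configs:
      - targets: ["localhost:9428"]
```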

We recommend installing the Grafana dashboard for [VictoriaLogs single-node](https://grafana.com/grafana/dashboards/22084) or [cluster](https://grafana.com/grafana/dashboards/23274).

We recommend setting up [alerts](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/rules/alerts-vlogs.yml)
via [vmalert](https://docs.victoriametrics.com/victoriametrics/vmalert/) or via Prometheus.

VictoriaLogs emits its own logs to stdout. It is recommended to investigate these logs during troubleshooting.

## Upgrading

It is safe to upgrade VictoriaLogs to new versions unless the [release notes](https://docs.victoriametrics.com/victorialogs/changelog/) say otherwise.
It is safe to skip multiple versions during the upgrade unless the [release notes](https://docs.victoriametrics.com/victorialogs/changelog/) say otherwise.
It is recommended to perform regular upgrades to the latest version, since it may contain important bug fixes, performance optimizations or new features.

It is also safe to downgrade to older versions unless the [release notes](https://docs.victoriametrics.com/victorialogs/changelog/) say otherwise.

The following steps must be performed during the upgrade / downgrade procedure:

* Send the `SIGINT` signal to the VictoriaLogs process in order to gracefully stop it (see the example after this list).
  See [how to send signals to processes](https://stackoverflow.com/questions/33239959/send-signal-to-process-from-command-line).
* Wait until the process stops. This can take a few seconds.
* Start the upgraded VictoriaLogs.
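
For example, if the binary runs as `victoria-logs-prod`, a graceful stop might look like this (the process name is an assumption; adjust it to your setup):

```sh
kill -INT "$(pgrep -f victoria-logs-prod)"
```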

## Retention

By default, VictoriaLogs stores log entries with timestamps in the time range `[now-7d, now]`, while dropping logs outside the given time range,
e.g. it uses a retention of 7 days. The retention can be configured with the `-retentionPeriod` command-line flag.
This flag accepts values starting from `1d` (one day) up to `100y` (100 years). See [these docs](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations)
for the supported duration formats.

For example, the following command starts VictoriaLogs with a retention of 8 weeks:

```sh
/path/to/victoria-logs -retentionPeriod=8w
```

See also [retention by disk space usage](#retention-by-disk-space-usage).

VictoriaLogs stores the [ingested](https://docs.victoriametrics.com/victorialogs/data-ingestion/) logs in per-day partition directories.
It automatically drops partition directories outside the configured retention.

VictoriaLogs automatically drops logs at the [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/) stage
if they have timestamps outside the configured retention. A sample of dropped logs is logged with a `WARN` message in order to simplify troubleshooting.
The `vl_rows_dropped_total` [metric](#monitoring) is incremented each time an ingested log entry is dropped because of a timestamp outside the retention.
It is recommended to set up the following alerting rule at [vmalert](https://docs.victoriametrics.com/victoriametrics/vmalert/) in order to be notified
when logs with wrong timestamps are ingested into VictoriaLogs:

```metricsql
rate(vl_rows_dropped_total[5m]) > 0
```

By default, VictoriaLogs doesn't accept log entries with timestamps bigger than `now+2d`, e.g. 2 days in the future.
If you need to accept logs with bigger timestamps, then specify the desired "future retention" via the `-futureRetention` command-line flag.
This flag accepts values starting from `1d`. See [these docs](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations)
for the supported duration formats.

For example, the following command starts VictoriaLogs, which accepts logs with timestamps up to a year in the future:

```sh
/path/to/victoria-logs -futureRetention=1y
```

## Retention by disk space usage

VictoriaLogs can be configured to automatically drop older per-day partitions if the total size of data in the [`-storageDataPath` directory](#storage)
becomes bigger than the threshold given via the `-retention.maxDiskSpaceUsageBytes` command-line flag. For example, the following command starts VictoriaLogs,
which drops old per-day partitions if the total [storage](#storage) size becomes bigger than `100GiB`:

```sh
/path/to/victoria-logs -retention.maxDiskSpaceUsageBytes=100GiB
```

VictoriaLogs usually compresses logs by 10x or more. This means that VictoriaLogs can store more than a terabyte of uncompressed
logs when it runs with `-retention.maxDiskSpaceUsageBytes=100GiB`.

VictoriaLogs keeps at least the last two days of data in order to guarantee that the logs for the last day can be returned in queries.
This means that the total disk space usage may exceed `-retention.maxDiskSpaceUsageBytes` if the size of the last two days of data
exceeds `-retention.maxDiskSpaceUsageBytes`.

The [`-retentionPeriod`](#retention) is applied independently of `-retention.maxDiskSpaceUsageBytes`. This means that
VictoriaLogs automatically drops logs older than 7 days by default if only the `-retention.maxDiskSpaceUsageBytes` command-line flag is set.
Set the `-retentionPeriod` to some big value (e.g. `100y` - 100 years) if logs shouldn't be dropped because of a small `-retentionPeriod`.
For example:

```sh
/path/to/victoria-logs -retention.maxDiskSpaceUsageBytes=10TiB -retentionPeriod=100y
```

## Storage

By default, VictoriaLogs stores all its data in a single directory - `victoria-logs-data`. The path to the directory can be changed via the `-storageDataPath` command-line flag.
For example, the following command starts VictoriaLogs, which stores the data at `/var/lib/victoria-logs`:

```sh
/path/to/victoria-logs -storageDataPath=/var/lib/victoria-logs
```

VictoriaLogs automatically creates the `-storageDataPath` directory on the first run if it is missing.

The ingested logs are stored in per-day subdirectories (partitions) at the `<-storageDataPath>/partitions` directory. The per-day subdirectories have `YYYYMMDD` names.
For example, the directory with the name `20250418` contains logs with [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) values
at April 18, 2025 UTC. This allows flexible data management. For example, old per-day data is automatically and quickly deleted according to the provided [retention policy](#retention)
by removing the corresponding per-day subdirectory (partition).

VictoriaLogs switches to cluster mode if the `-storageNode` command-line flag is specified:

- It stops storing the ingested logs locally in cluster mode. It spreads them evenly among the `vlstorage` nodes specified via the `-storageNode` command-line flag.
- It stops querying the locally stored logs in cluster mode. It queries the `vlstorage` nodes specified via the `-storageNode` command-line flag.

See the [cluster mode docs](https://docs.victoriametrics.com/victorialogs/cluster/) for details.

## Forced merge

VictoriaLogs performs data compactions in the background in order to keep good performance characteristics when accepting new data.
These compactions (merges) are performed independently on per-day partitions.
This means that compactions are stopped for per-day partitions if no new data is ingested into these partitions.
Sometimes it is necessary to trigger compactions for old partitions. In this case a forced compaction may be initiated on the specified per-day partition
by sending a request to `/internal/force_merge?partition_prefix=YYYYMMDD`,
where `YYYYMMDD` is the per-day partition name. For example, `http://victoria-logs:9428/internal/force_merge?partition_prefix=20240921` would initiate a forced
merge for the September 21, 2024 partition. The call to `/internal/force_merge` returns immediately, while the corresponding forced merge continues running in the background.
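
For example (assuming the instance is reachable at `localhost:9428`):

```sh
curl http://localhost:9428/internal/force_merge?partition_prefix=20240921
```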

Forced merges may require additional CPU, disk IO and storage space resources. It is unnecessary to run forced merges under normal conditions,
since VictoriaLogs automatically performs optimal merges in the background when new data is ingested into it.

## Forced flush

VictoriaLogs puts the recently [ingested logs](https://docs.victoriametrics.com/victorialogs/data-ingestion/) into in-memory buffers,
which aren't available for [querying](https://docs.victoriametrics.com/victorialogs/querying/) for up to a second.
If you need to query logs immediately after their ingestion, then the `/internal/force_flush` HTTP endpoint must be requested
before querying. This endpoint converts in-memory buffers with the recently ingested logs into searchable data blocks.
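
For example (assuming the instance is reachable at `localhost:9428`):

```sh
curl http://localhost:9428/internal/force_flush
```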

It isn't recommended to request the `/internal/force_flush` HTTP endpoint on a regular basis, since this increases CPU usage
and slows down data ingestion. It is expected that `/internal/force_flush` is requested in automated tests, which need to query
the recently ingested data.

## High Availability

### High Availability (HA) Setup with VictoriaLogs Single-Node Instances

This schema outlines how to configure a High Availability (HA) setup using VictoriaLogs Single-Node instances. The setup consists of the following components:

- **Log Collector**: The log collector should support multiplexing incoming data to multiple outputs (destinations). Popular log collectors like [Fluent Bit](https://docs.fluentbit.io/manual/concepts/data-pipeline/router), [Logstash](https://www.elastic.co/guide/en/logstash/current/output-plugins.html), [Fluentd](https://docs.fluentd.org/output/copy), and [Vector](https://vector.dev/docs/reference/configuration/sinks/) already offer this capability. Refer to their documentation for configuration details.

- **VictoriaLogs Single-Node Instances**: Use two or more instances to achieve HA.

- **[vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/#load-balancing) or Load Balancer**: Used for reading data from one of the replicas to ensure balanced and redundant access. A minimal vmauth config sketch is shown below.



Here are working examples of HA configurations for VictoriaLogs using Docker Compose:

- [Fluent Bit + VictoriaLogs Single-Node + vmauth](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit/jsonline-ha)
- [Logstash + VictoriaLogs Single-Node + vmauth](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash/jsonline-ha)
- [Vector + VictoriaLogs Single-Node + vmauth](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector/jsonline-ha)

## Backup and restore

VictoriaLogs currently does not have a snapshot feature or a tool like vmbackup, as VictoriaMetrics does.
So backing up VictoriaLogs requires manually executing the `rsync` command.

The files in VictoriaLogs have the following properties:

- All the data files are immutable. Small metadata files can be modified.
- Old data files are periodically merged into new data files.

Therefore, for a complete data **backup**, you need to run the `rsync` command **twice**.

```sh
# example of rsync to remote host
rsync -avh --progress --delete <path-to-victorialogs-data> <username>@<host>:<path-to-victorialogs-backup>
```

The first `rsync` will sync the majority of the data, which can be time-consuming.
As VictoriaLogs continues to run, new data is ingested, potentially creating new data files and modifying metadata files.

```sh
# example output
sending incremental file list
victoria-logs-data/
victoria-logs-data/flock.lock
              0 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=78/80)

...

victoria-logs-data/partitions/20240809/indexdb/17E9ED7EF89BF422/metaindex.bin
             51 100%    5.53kB/s    0:00:00 (xfr#64, to-chk=0/80)

sent 12.19K bytes  received 1.30K bytes  3.86K bytes/sec
total size is 7.31K  speedup is 0.54
```

The second `rsync` **requires a brief shutdown of VictoriaLogs** to ensure all data and metadata files are consistent and no longer changing.
This `rsync` will cover any changes that have occurred since the last `rsync` and should not take a significant amount of time.

To **restore** from a backup, simply `rsync` the backup files from the remote location to the original directory during downtime.
VictoriaLogs will automatically load this data upon startup.

```sh
# example of rsync from remote backup to local
rsync -avh --progress --delete <username>@<host>:<path-to-victorialogs-backup> <path-to-victorialogs-data>
```

It is also possible to use a **disk snapshot** in order to perform a backup. This feature could be provided by your operating system,
cloud provider, or third-party tools. Note that the snapshot must be **consistent** to ensure a reliable backup.

## Multitenancy

VictoriaLogs supports multitenancy. A tenant is identified by an `(AccountID, ProjectID)` pair, where `AccountID` and `ProjectID` are arbitrary 32-bit unsigned integers.
The `AccountID` and `ProjectID` fields can be set during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
and [querying](https://docs.victoriametrics.com/victorialogs/querying/) via the `AccountID` and `ProjectID` request headers.

If the `AccountID` and/or `ProjectID` request headers aren't set, then the default `0` value is used.
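
For example, a query for the tenant `(AccountID=1, ProjectID=0)` can be executed like this (host and tenant values are illustrative):

```sh
curl -H 'AccountID: 1' -H 'ProjectID: 0' \
  http://localhost:9428/select/logsql/query -d 'query=_time:5m'
```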

VictoriaLogs has very low overhead for per-tenant management, so it is OK to have thousands of tenants in a single VictoriaLogs instance.

VictoriaLogs doesn't perform per-tenant authorization. Use [vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/) or similar tools for per-tenant authorization.

### Multitenancy access control

Enforce access control for tenants by using [vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/). Access control can be configured for each tenant by setting up the following rules:

```yaml
users:
  - username: "foo"
    password: "bar"
    url_map:
      - src_paths:
          - "/select/.*"
          - "/insert/.*"
        headers:
          - "AccountID: 1"
          - "ProjectID: 0"
        url_prefix:
          - "http://localhost:9428/"

  - username: "baz"
    password: "bar"
    url_map:
      - src_paths: ["/select/.*"]
        headers:
          - "AccountID: 2"
          - "ProjectID: 0"
        url_prefix:
          - "http://localhost:9428/"
```

This configuration allows `foo` to use the `/select/.*` and `/insert/.*` endpoints with `AccountID: 1` and `ProjectID: 0`, while `baz` can only use the `/select/.*` endpoint with `AccountID: 2` and `ProjectID: 0`.

## Security

It is expected that VictoriaLogs runs in a protected environment, which is unreachable from the Internet without proper authorization.
It is recommended to provide access to VictoriaLogs [data ingestion APIs](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
and [querying APIs](https://docs.victoriametrics.com/victorialogs/querying/#http-api) via [vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/)
or similar authorization proxies.

## Benchmarks

See the following benchmark results:

- [JSONBench: the comparison of VictoriaLogs with Elasticsearch, MongoDB, DuckDB and PostgreSQL](https://jsonbench.com/#eyJzeXN0ZW0iOnsiQ2xpY2tIb3VzZSAobHo0KSI6ZmFsc2UsIkNsaWNrSG91c2UgKHpzdGQpIjpmYWxzZSwiRHVja0RCIjp0cnVlLCJFbGFzdGljc2VhcmNoIChubyBzb3VyY2UsIGJlc3QgY29tcHJlc3Npb24pIjpmYWxzZSwiRWxhc3RpY3NlYXJjaCAobm8gc291cmNlLCBkZWZhdWx0KSI6ZmFsc2UsIkVsYXN0aWNzZWFyY2ggKGJlc3QgY29tcHJlc3Npb24pIjpmYWxzZSwiRWxhc3RpY3NlYXJjaCAoZGVmYXVsdCkiOnRydWUsIkVsYXN0aWNzZWFyY2giOmZhbHNlLCJNb25nb0RCIChzbmFwcHksIGNvdmVyZWQgaW5kZXgpIjpmYWxzZSwiTW9uZ29EQiAoenN0ZCwgY292ZXJlZCBpbmRleCkiOmZhbHNlLCJNb25nb0RCIChzbmFwcHkpIjpmYWxzZSwiTW9uZ29EQiAoenN0ZCkiOnRydWUsIlBvc3RncmVTUUwgKGx6NCkiOnRydWUsIlBvc3RncmVTUUwgKHBnbHopIjpmYWxzZSwiVmljdG9yaWFMb2dzIjp0cnVlfSwic2NhbGUiOjEwMDAwMDAwMDAsIm1ldHJpYyI6ImhvdCIsInF1ZXJpZXMiOlt0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWVdfQ==). The benchmark can be reproduced by running `main.sh` file inside `victorialogs` directory of the [JSONBench repository](https://github.com/ClickHouse/JSONBench).
- [ClickBench: the comparison of VictoriaLogs with Elasticsearch, MongoDB, TimescaleDB, PostgreSQL, MySQL and SQLite](https://benchmark.clickhouse.com/#eyJzeXN0ZW0iOnsiQWxsb3lEQiI6ZmFsc2UsIkFsbG95REIgKHR1bmVkKSI6ZmFsc2UsIkF0aGVuYSAocGFydGl0aW9uZWQpIjpmYWxzZSwiQXRoZW5hIChzaW5nbGUpIjpmYWxzZSwiQXVyb3JhIGZvciBNeVNRTCI6ZmFsc2UsIkF1cm9yYSBmb3IgUG9zdGdyZVNRTCI6ZmFsc2UsIkJ5Q29uaXR5IjpmYWxzZSwiQnl0ZUhvdXNlIjpmYWxzZSwiY2hEQiAoRGF0YUZyYW1lKSI6ZmFsc2UsImNoREIgKFBhcnF1ZXQsIHBhcnRpdGlvbmVkKSI6ZmFsc2UsImNoREIiOmZhbHNlLCJDaXR1cyI6ZmFsc2UsIkNsaWNrSG91c2UgQ2xvdWQgKGF3cykiOmZhbHNlLCJDbGlja0hvdXNlIENsb3VkIChhenVyZSkiOmZhbHNlLCJDbGlja0hvdXNlIENsb3VkIChnY3ApIjpmYWxzZSwiQ2xpY2tIb3VzZSAoZGF0YSBsYWtlLCBwYXJ0aXRpb25lZCkiOmZhbHNlLCJDbGlja0hvdXNlIChkYXRhIGxha2UsIHNpbmdsZSkiOmZhbHNlLCJDbGlja0hvdXNlIChQYXJxdWV0LCBwYXJ0aXRpb25lZCkiOmZhbHNlLCJDbGlja0hvdXNlIChQYXJxdWV0LCBzaW5nbGUpIjpmYWxzZSwiQ2xpY2tIb3VzZSAod2ViKSI6ZmFsc2UsIkNsaWNrSG91c2UiOmZhbHNlLCJDbGlja0hvdXNlICh0dW5lZCkiOmZhbHNlLCJDbGlja0hvdXNlICh0dW5lZCwgbWVtb3J5KSI6ZmFsc2UsIkNsb3VkYmVycnkiOmZhbHNlLCJDcmF0ZURCIjpmYWxzZSwiQ3J1bmNoeSBCcmlkZ2UgZm9yIEFuYWx5dGljcyAoUGFycXVldCkiOmZhbHNlLCJEYXRhYmVuZCI6ZmFsc2UsIkRhdGFGdXNpb24gKFBhcnF1ZXQsIHBhcnRpdGlvbmVkKSI6ZmFsc2UsIkRhdGFGdXNpb24gKFBhcnF1ZXQsIHNpbmdsZSkiOmZhbHNlLCJBcGFjaGUgRG9yaXMiOmZhbHNlLCJEcmlsbCI6ZmFsc2UsIkRydWlkIjpmYWxzZSwiRHVja0RCIChEYXRhRnJhbWUpIjpmYWxzZSwiRHVja0RCIChtZW1vcnkpIjpmYWxzZSwiRHVja0RCIChQYXJxdWV0LCBwYXJ0aXRpb25lZCkiOmZhbHNlLCJEdWNrREIiOmZhbHNlLCJFbGFzdGljc2VhcmNoIjp0cnVlLCJFbGFzdGljc2VhcmNoICh0dW5lZCkiOmZhbHNlLCJHbGFyZURCIjpmYWxzZSwiR3JlZW5wbHVtIjpmYWxzZSwiSGVhdnlBSSI6ZmFsc2UsIkh5ZHJhIjpmYWxzZSwiSW5mb2JyaWdodCI6ZmFsc2UsIktpbmV0aWNhIjpmYWxzZSwiTWFyaWFEQiBDb2x1bW5TdG9yZSI6ZmFsc2UsIk1hcmlhREIiOmZhbHNlLCJNb25ldERCIjpmYWxzZSwiTW9uZ29EQiI6dHJ1ZSwiTW90aGVyRHVjayI6ZmFsc2UsIk15U1FMIChNeUlTQU0pIjpmYWxzZSwiTXlTUUwiOnRydWUsIk9jdG9TUUwiOmZhbHNlLCJPeGxhIjpmYWxzZSwiUGFuZGFzIChEYXRhRnJhbWUpIjpmYWxzZSwiUGFyYWRlREIgKFBhcnF1ZXQsIHBhcnRpdGlvbmVkKSI6ZmFsc2UsIlBhcmFkZURCIChQYXJxdWV0LCBzaW5nbGUpIjpmYWxzZSwicGdfZHVja2RiIChNb3RoZXJEdWNrIGVuYWJsZWQpIjpmYWxzZSwicGdfZHVja2RiIjpmYWxzZSwiUGlub3QiOmZhbHNlLCJQb2xhcnMgKERhdGFGcmFtZSkiOmZhbHNlLCJQb2xhcnMgKFBhcnF1ZXQpIjpmYWxzZSwiUG9zdGdyZVNRTCAodHVuZWQpIjpmYWxzZSwiUG9zdGdyZVNRTCI6dHJ1ZSwiUXVlc3REQiI6ZmFsc2UsIlJlZHNoaWZ0IjpmYWxzZSwiU2VsZWN0REIiOmZhbHNlLCJTaW5nbGVTdG9yZSI6ZmFsc2UsIlNub3dmbGFrZSI6ZmFsc2UsIlNwYXJrIjpmYWxzZSwiU1FMaXRlIjp0cnVlLCJTdGFyUm9ja3MiOmZhbHNlLCJUYWJsZXNwYWNlIjpmYWxzZSwiVGVtYm8gT0xBUCAoY29sdW1uYXIpIjpmYWxzZSwiVGltZXNjYWxlIENsb3VkIjpmYWxzZSwiVGltZXNjYWxlREIgKG5vIGNvbHVtbnN0b3JlKSI6ZmFsc2UsIlRpbWVzY2FsZURCIjp0cnVlLCJUaW55YmlyZCAoRnJlZSBUcmlhbCkiOmZhbHNlLCJVbWJyYSI6ZmFsc2UsIlZpY3RvcmlhTG9ncyI6dHJ1ZX0sInR5cGUiOnsiQyI6dHJ1ZSwiY29sdW1uLW9yaWVudGVkIjp0cnVlLCJQb3N0Z3JlU1FMIGNvbXBhdGlibGUiOnRydWUsIm1hbmFnZWQiOnRydWUsImdjcCI6dHJ1ZSwic3RhdGVsZXNzIjp0cnVlLCJKYXZhIjp0cnVlLCJDKysiOnRydWUsIk15U1FMIGNvbXBhdGlibGUiOnRydWUsInJvdy1vcmllbnRlZCI6dHJ1ZSwiQ2xpY2tIb3VzZSBkZXJpdmF0aXZlIjp0cnVlLCJlbWJlZGRlZCI6dHJ1ZSwic2VydmVybGVzcyI6dHJ1ZSwiZGF0YWZyYW1lIjp0cnVlLCJhd3MiOnRydWUsImF6dXJlIjp0cnVlLCJhbmFseXRpY2FsIjp0cnVlLCJSdXN0Ijp0cnVlLCJzZWFyY2giOnRydWUsImRvY3VtZW50Ijp0cnVlLCJHbyI6dHJ1ZSwic29tZXdoYXQgUG9zdGdyZVNRTCBjb21wYXRpYmxlIjp0cnVlLCJEYXRhRnJhbWUiOnRydWUsInBhcnF1ZXQiOnRydWUsInRpbWUtc2VyaWVzIjp0cnVlfSwibWFjaGluZSI6eyIxNiB2Q1BVIDEyOEdCIjpmYWxzZSwiOCB2Q1BVIDY0R0IiOmZhbHNlLCJzZXJ2ZXJsZXNzIjpmYWxzZSwiMTZhY3UiOmZhbHNlLCJjNmEuNHhsYXJnZSwgNTAwZ2IgZ3AyIjp0cnVlLCJMIjpmYWxzZSwiTSI6ZmFsc2UsIlMiOmZhbHNlLCJYUyI6ZmFsc2UsImM2YS5tZXRhbCwgNTAwZ2IgZ3AyIjpmYWxzZSwiMTkyR0IiOmZhbHNlLCIyNEdCIjpmYWxzZSwiMzYwR0IiOmZhbHN
lLCI0OEdCIjpmYWxzZSwiNzIwR0IiOmZhbHNlLCI5NkdCIjpmYWxzZSwiZGV2IjpmYWxzZSwiNzA4R0IiOmZhbHNlLCJjNW4uNHhsYXJnZSwgNTAwZ2IgZ3AyIjpmYWxzZSwiQW5hbHl0aWNzLTI1NkdCICg2NCB2Q29yZXMsIDI1NiBHQikiOmZhbHNlLCJjNS40eGxhcmdlLCA1MDBnYiBncDIiOmZhbHNlLCJjNmEuNHhsYXJnZSwgMTUwMGdiIGdwMiI6dHJ1ZSwiY2xvdWQiOmZhbHNlLCJkYzIuOHhsYXJnZSI6ZmFsc2UsInJhMy4xNnhsYXJnZSI6ZmFsc2UsInJhMy40eGxhcmdlIjpmYWxzZSwicmEzLnhscGx1cyI6ZmFsc2UsIlMyIjpmYWxzZSwiUzI0IjpmYWxzZSwiMlhMIjpmYWxzZSwiM1hMIjpmYWxzZSwiNFhMIjpmYWxzZSwiWEwiOmZhbHNlLCJMMSAtIDE2Q1BVIDMyR0IiOmZhbHNlLCJjNmEuNHhsYXJnZSwgNTAwZ2IgZ3AzIjpmYWxzZSwiMTYgdkNQVSA2NEdCIjpmYWxzZSwiNCB2Q1BVIDE2R0IiOmZhbHNlLCI4IHZDUFUgMzJHQiI6ZmFsc2V9LCJjbHVzdGVyX3NpemUiOnsiMSI6dHJ1ZSwiMiI6ZmFsc2UsIjQiOmZhbHNlLCI4IjpmYWxzZSwiMTYiOmZhbHNlLCIzMiI6ZmFsc2UsIjY0IjpmYWxzZSwiMTI4IjpmYWxzZSwic2VydmVybGVzcyI6ZmFsc2UsInVuZGVmaW5lZCI6ZmFsc2V9LCJtZXRyaWMiOiJob3QiLCJxdWVyaWVzIjpbdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZV19). The benchmark can be reproduced by running `benchmark.sh` file inside `victorialogs` directory of the [ClickBench repository](https://github.com/ClickHouse/ClickBench/).
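
For example, the benchmark can be reproduced locally with a sketch like the following. It assumes a Linux host with `git` installed plus whatever dependencies `benchmark.sh` expects; consult the script itself for the authoritative steps:

```sh
# Clone ClickBench and run the VictoriaLogs benchmark (sketch)
git clone https://github.com/ClickHouse/ClickBench.git
cd ClickBench/victorialogs
./benchmark.sh
```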

Here is a [benchmark suite](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/logs-benchmark) for comparing data ingestion performance
and resource usage between VictoriaLogs and Elasticsearch or Loki.

It is recommended [setting up VictoriaLogs](https://docs.victoriametrics.com/victorialogs/quickstart/) in production alongside the existing
log management systems and comparing resource usage and query performance between VictoriaLogs and your current system, such as Elasticsearch or Grafana Loki.

Please share benchmark results and ideas on how to improve benchmarks / VictoriaLogs
via [VictoriaMetrics community channels](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#community-and-contributions).
## Profiling

VictoriaLogs provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):

* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):

  ```sh
  curl http://0.0.0.0:9428/debug/pprof/heap > mem.pprof
  ```

* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):

  ```sh
  curl http://0.0.0.0:9428/debug/pprof/profile > cpu.pprof
  ```

The command for collecting the CPU profile waits for 30 seconds before returning.

The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
It is safe to share the collected profiles from a security point of view, since they do not contain sensitive information.
|
||||
|
||||
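
The collected profiles can be inspected locally with a sketch like the following (it assumes the Go toolchain is installed):

```sh
# Open an interactive web UI for the memory profile
go tool pprof -http=:8080 mem.pprof

# Inspect the CPU profile in the interactive console (use `top`, `list`, `web`)
go tool pprof cpu.pprof
```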
## List of command-line flags

Pass `-help` to VictoriaLogs in order to see the list of supported command-line flags with their description:

```
|
||||
-blockcache.missesBeforeCaching int
|
||||
The number of cache misses before putting the block into cache. Higher values may reduce indexdb/dataBlocks cache size at the cost of higher CPU and disk read usage (default 2)
|
||||
-datadog.ignoreFields array
|
||||
Comma-separated list of fields to ignore for logs ingested via DataDog protocol. See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#dropping-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-datadog.maxRequestSize size
|
||||
The maximum size in bytes of a single DataDog request
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
|
||||
-datadog.streamFields array
|
||||
Comma-separated list of fields to use as log stream fields for logs ingested via DataDog protocol. See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#stream-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-defaultMsgValue string
|
||||
Default value for _msg field if the ingested log entry doesn't contain it; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field (default "missing _msg field; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field")
|
||||
-elasticsearch.version string
|
||||
Elasticsearch version to report to client (default "8.9.0")
|
||||
-enableTCP6
|
||||
Whether to enable IPv6 for listening and dialing. By default, only IPv4 TCP and UDP are used
|
||||
-envflag.enable
|
||||
Whether to enable reading flags from environment variables in addition to the command line. Command line flag values have priority over values from environment vars. Flags are read only from the command line if this flag isn't set. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#environment-variables for more details
|
||||
-envflag.prefix string
|
||||
Prefix for environment variables if -envflag.enable is set
|
||||
-filestream.disableFadvise
|
||||
Whether to disable fadvise() syscall when reading large data files. The fadvise() syscall prevents from eviction of recently accessed data from OS page cache during background merges and backups. In some rare cases it is better to disable the syscall if it uses too much CPU
|
||||
-flagsAuthKey value
|
||||
Auth key for /flags endpoint. It must be passed via authKey query arg. It overrides -httpAuth.*
|
||||
Flag value can be read from the given file when using -flagsAuthKey=file:///abs/path/to/file or -flagsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -flagsAuthKey=http://host/path or -flagsAuthKey=https://host/path
|
||||
-forceFlushAuthKey value
|
||||
authKey, which must be passed in query string to /internal/force_flush . It overrides -httpAuth.* . See https://docs.victoriametrics.com/victorialogs/#forced-flush
|
||||
Flag value can be read from the given file when using -forceFlushAuthKey=file:///abs/path/to/file or -forceFlushAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -forceFlushAuthKey=http://host/path or -forceFlushAuthKey=https://host/path
|
||||
-forceMergeAuthKey value
|
||||
authKey, which must be passed in query string to /internal/force_merge . It overrides -httpAuth.* . See https://docs.victoriametrics.com/victorialogs/#forced-merge
|
||||
Flag value can be read from the given file when using -forceMergeAuthKey=file:///abs/path/to/file or -forceMergeAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -forceMergeAuthKey=http://host/path or -forceMergeAuthKey=https://host/path
|
||||
-fs.disableMmap
|
||||
Whether to use pread() instead of mmap() for reading data files. By default, mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread()
|
||||
-futureRetention value
|
||||
Log entries with timestamps bigger than now+futureRetention are rejected during data ingestion; see https://docs.victoriametrics.com/victorialogs/#retention
|
||||
The following optional suffixes are supported: s (second), h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 2d)
|
||||
-http.connTimeout duration
|
||||
Incoming connections to -httpListenAddr are closed after the configured timeout. This may help evenly spreading load among a cluster of services behind TCP-level load balancer. Zero value disables closing of incoming connections (default 2m0s)
|
||||
-http.disableCORS
|
||||
Disable CORS for all origins (*)
|
||||
-http.disableKeepAlive
|
||||
Whether to disable HTTP keep-alive for incoming connections at -httpListenAddr
|
||||
-http.disableResponseCompression
|
||||
Disable compression of HTTP responses to save CPU resources. By default, compression is enabled to save network bandwidth
|
||||
-http.header.csp string
|
||||
Value for 'Content-Security-Policy' header, recommended: "default-src 'self'"
|
||||
-http.header.frameOptions string
|
||||
Value for 'X-Frame-Options' header
|
||||
-http.header.hsts string
|
||||
Value for 'Strict-Transport-Security' header, recommended: 'max-age=31536000; includeSubDomains'
|
||||
-http.idleConnTimeout duration
|
||||
Timeout for incoming idle http connections (default 1m0s)
|
||||
-http.maxGracefulShutdownDuration duration
|
||||
The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s)
|
||||
-http.pathPrefix string
|
||||
An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus
|
||||
-http.shutdownDelay duration
|
||||
Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers
|
||||
-httpAuth.password value
|
||||
Password for HTTP server's Basic Auth. The authentication is disabled if -httpAuth.username is empty
|
||||
Flag value can be read from the given file when using -httpAuth.password=file:///abs/path/to/file or -httpAuth.password=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -httpAuth.password=http://host/path or -httpAuth.password=https://host/path
|
||||
-httpAuth.username string
|
||||
Username for HTTP server's Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
|
||||
-httpListenAddr array
|
||||
TCP address to listen for incoming http requests. See also -httpListenAddr.useProxyProtocol
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-httpListenAddr.useProxyProtocol array
|
||||
Whether to use proxy protocol for connections accepted at the given -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-inmemoryDataFlushInterval duration
|
||||
The interval for guaranteed saving of in-memory data to disk. The saved data survives unclean shutdowns such as OOM crash, hardware reset, SIGKILL, etc. Bigger intervals may help increase the lifetime of flash storage with limited write cycles (e.g. Raspberry PI). Smaller intervals increase disk IO load. Minimum supported value is 1s (default 5s)
|
||||
-insert.concurrency int
|
||||
The average number of concurrent data ingestion requests, which can be sent to every -storageNode (default 2)
|
||||
-insert.disable
|
||||
Whether to disable /insert/* HTTP endpoints
|
||||
-insert.disableCompression
|
||||
Whether to disable compression when sending the ingested data to -storageNode nodes. Disabled compression reduces CPU usage at the cost of higher network usage
|
||||
-insert.maxFieldsPerLine int
|
||||
The maximum number of log fields per line, which can be read by /insert/* handlers; see https://docs.victoriametrics.com/victorialogs/faq/#how-many-fields-a-single-log-entry-may-contain (default 1000)
|
||||
-insert.maxLineSizeBytes size
|
||||
The maximum size of a single line, which can be read by /insert/* handlers; see https://docs.victoriametrics.com/victorialogs/faq/#what-length-a-log-record-is-expected-to-have
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 262144)
|
||||
-insert.maxQueueDuration duration
|
||||
The maximum duration to wait in the queue when -maxConcurrentInserts concurrent insert requests are executed (default 1m0s)
|
||||
-internStringCacheExpireDuration duration
|
||||
The expiry duration for caches for interned strings. See https://en.wikipedia.org/wiki/String_interning . See also -internStringMaxLen and -internStringDisableCache (default 6m0s)
|
||||
-internStringDisableCache
|
||||
Whether to disable caches for interned strings. This may reduce memory usage at the cost of higher CPU usage. See https://en.wikipedia.org/wiki/String_interning . See also -internStringCacheExpireDuration and -internStringMaxLen
|
||||
-internStringMaxLen int
|
||||
The maximum length for strings to intern. A lower limit may save memory at the cost of higher CPU usage. See https://en.wikipedia.org/wiki/String_interning . See also -internStringDisableCache and -internStringCacheExpireDuration (default 500)
|
||||
-internalinsert.disable
|
||||
Whether to disable /internal/insert HTTP endpoint
|
||||
-internalinsert.maxRequestSize size
|
||||
The maximum size in bytes of a single request, which can be accepted at /internal/insert HTTP endpoint
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
|
||||
-internalselect.disable
|
||||
Whether to disable /internal/select/* HTTP endpoints
|
||||
-journald.ignoreFields array
|
||||
Comma-separated list of fields to ignore for logs ingested over journald protocol. See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#dropping-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-journald.includeEntryMetadata
|
||||
Include journal entry fields which start with double underscores.
|
||||
-journald.streamFields array
|
||||
Comma-separated list of fields to use as log stream fields for logs ingested over journald protocol. See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#stream-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-journald.tenantID string
|
||||
TenantID for logs ingested via the Journald endpoint. See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#multitenancy (default "0:0")
|
||||
-journald.timeField string
|
||||
Field to use as a log timestamp for logs ingested via journald protocol. See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#time-field (default "__REALTIME_TIMESTAMP")
|
||||
-logIngestedRows
|
||||
Whether to log all the ingested log entries; this can be useful for debugging of data ingestion; see https://docs.victoriametrics.com/victorialogs/data-ingestion/ ; see also -logNewStreams
|
||||
-logNewStreams
|
||||
Whether to log creation of new streams; this can be useful for debugging of high cardinality issues with log streams; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields ; see also -logIngestedRows
|
||||
-loggerDisableTimestamps
|
||||
Whether to disable writing timestamps in logs
|
||||
-loggerErrorsPerSecondLimit int
|
||||
Per-second limit on the number of ERROR messages. If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit
|
||||
-loggerFormat string
|
||||
Format for logs. Possible values: default, json (default "default")
|
||||
-loggerJSONFields string
|
||||
Allows renaming fields in JSON formatted logs. Example: "ts:timestamp,msg:message" renames "ts" to "timestamp" and "msg" to "message". Supported fields: ts, level, caller, msg
|
||||
-loggerLevel string
|
||||
Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "INFO")
|
||||
-loggerMaxArgLen int
|
||||
The maximum length of a single logged argument. Longer arguments are replaced with 'arg_start..arg_end', where 'arg_start' and 'arg_end' is prefix and suffix of the arg with the length not exceeding -loggerMaxArgLen / 2 (default 5000)
|
||||
-loggerOutput string
|
||||
Output for the logs. Supported values: stderr, stdout (default "stderr")
|
||||
-loggerTimezone string
|
||||
Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default "UTC")
|
||||
-loggerWarnsPerSecondLimit int
|
||||
Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit
|
||||
-loki.disableMessageParsing
|
||||
Whether to disable automatic parsing of JSON-encoded log fields inside Loki log message into distinct log fields
|
||||
-loki.maxRequestSize size
|
||||
The maximum size in bytes of a single Loki request
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
|
||||
-maxConcurrentInserts int
|
||||
The maximum number of concurrent insert requests. Set higher value when clients send data over slow networks. Default value depends on the number of available CPU cores. It should work fine in most cases since it minimizes resource usage. See also -insert.maxQueueDuration (default 28)
|
||||
-memory.allowedBytes size
|
||||
Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from the OS page cache resulting in higher disk IO usage
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
|
||||
-memory.allowedPercent float
|
||||
Allowed percent of system memory VictoriaMetrics caches may occupy. See also -memory.allowedBytes. Too low a value may increase cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from the OS page cache which will result in higher disk IO usage (default 60)
|
||||
-metrics.exposeMetadata
|
||||
Whether to expose TYPE and HELP metadata at the /metrics page, which is exposed at -httpListenAddr . The metadata may be needed when the /metrics page is consumed by systems, which require this information. For example, Managed Prometheus in Google Cloud - https://cloud.google.com/stackdriver/docs/managed-prometheus/troubleshooting#missing-metric-type
|
||||
-metricsAuthKey value
|
||||
Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides -httpAuth.*
|
||||
Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path
|
||||
-mtls array
|
||||
Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-mtlsCAFile array
|
||||
Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-opentelemetry.maxRequestSize size
|
||||
The maximum size in bytes of a single OpenTelemetry request
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
|
||||
-pprofAuthKey value
|
||||
Auth key for /debug/pprof/* endpoints. It must be passed via authKey query arg. It overrides -httpAuth.*
|
||||
Flag value can be read from the given file when using -pprofAuthKey=file:///abs/path/to/file or -pprofAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -pprofAuthKey=http://host/path or -pprofAuthKey=https://host/path
|
||||
-pushmetrics.disableCompression
|
||||
Whether to disable request body compression when pushing metrics to every -pushmetrics.url
|
||||
-pushmetrics.extraLabel array
|
||||
Optional labels to add to metrics pushed to every -pushmetrics.url . For example, -pushmetrics.extraLabel='instance="foo"' adds instance="foo" label to all the metrics pushed to every -pushmetrics.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-pushmetrics.header array
|
||||
Optional HTTP request header to send to every -pushmetrics.url . For example, -pushmetrics.header='Authorization: Basic foobar' adds 'Authorization: Basic foobar' header to every request to every -pushmetrics.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-pushmetrics.interval duration
|
||||
Interval for pushing metrics to every -pushmetrics.url (default 10s)
|
||||
-pushmetrics.url array
|
||||
Optional URL to push metrics exposed at /metrics page. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#push-metrics . By default, metrics exposed at /metrics page aren't pushed to any remote storage
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-retention.maxDiskSpaceUsageBytes size
|
||||
The maximum disk space usage at -storageDataPath before older per-day partitions are automatically dropped; see https://docs.victoriametrics.com/victorialogs/#retention-by-disk-space-usage ; see also -retentionPeriod
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
|
||||
-retentionPeriod value
|
||||
Log entries with timestamps older than now-retentionPeriod are automatically deleted; log entries with timestamps outside the retention are also rejected during data ingestion; the minimum supported retention is 1d (one day); see https://docs.victoriametrics.com/victorialogs/#retention ; see also -retention.maxDiskSpaceUsageBytes
|
||||
The following optional suffixes are supported: s (second), h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 7d)
|
||||
-search.maxConcurrentRequests int
|
||||
The maximum number of concurrent search requests. It shouldn't be high, since a single request can saturate all the CPU cores, while many concurrently executed requests may require high amounts of memory. See also -search.maxQueueDuration (default 14)
|
||||
-search.maxQueryDuration duration
|
||||
The maximum duration for query execution. It can be overridden to a smaller value on a per-query basis via 'timeout' query arg (default 30s)
|
||||
-search.maxQueueDuration duration
|
||||
The maximum time the search request waits for execution when -search.maxConcurrentRequests limit is reached; see also -search.maxQueryDuration (default 10s)
|
||||
-select.disable
|
||||
Whether to disable /select/* HTTP endpoints
|
||||
-select.disableCompression
|
||||
Whether to disable compression for select query responses received from -storageNode nodes. Disabled compression reduces CPU usage at the cost of higher network usage
|
||||
-storage.minFreeDiskSpaceBytes size
|
||||
The minimum free disk space at -storageDataPath after which the storage stops accepting new data
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 10000000)
|
||||
-storageDataPath string
|
||||
Path to directory where to store VictoriaLogs data; see https://docs.victoriametrics.com/victorialogs/#storage (default "victoria-logs-data")
|
||||
-storageNode array
|
||||
Comma-separated list of TCP addresses for storage nodes to route the ingested logs to and to send select queries to. If the list is empty, then the ingested logs are stored and queried locally from -storageDataPath
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-storageNode.bearerToken array
|
||||
Optional bearer auth token to use for the corresponding -storageNode
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-storageNode.bearerTokenFile array
|
||||
Optional path to bearer token file to use for the corresponding -storageNode. The token is re-read from the file every second
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-storageNode.password array
|
||||
Optional basic auth password to use for the corresponding -storageNode
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-storageNode.passwordFile array
|
||||
Optional path to basic auth password to use for the corresponding -storageNode. The file is re-read every second
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-storageNode.tls array
|
||||
Whether to use TLS (HTTPS) protocol for communicating with the corresponding -storageNode. By default communication is performed via HTTP
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-storageNode.tlsCAFile array
|
||||
Optional path to TLS CA file to use for verifying connections to the corresponding -storageNode. By default, system CA is used
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-storageNode.tlsCertFile array
|
||||
Optional path to client-side TLS certificate file to use when connecting to the corresponding -storageNode
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-storageNode.tlsInsecureSkipVerify array
|
||||
Whether to skip tls verification when connecting to the corresponding -storageNode
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-storageNode.tlsKeyFile array
|
||||
Optional path to client-side TLS certificate key to use when connecting to the corresponding -storageNode
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-storageNode.tlsServerName array
|
||||
Optional TLS server name to use for connections to the corresponding -storageNode. By default, the server name from -storageNode is used
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-storageNode.username array
|
||||
Optional basic auth username to use for the corresponding -storageNode
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.compressMethod.tcp array
|
||||
Compression method for syslog messages received at the corresponding -syslog.listenAddr.tcp. Supported values: none, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.compressMethod.udp array
|
||||
Compression method for syslog messages received at the corresponding -syslog.listenAddr.udp. Supported values: none, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.decolorizeFields.tcp array
|
||||
Fields to remove ANSI color codes across logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#decolorizing-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.decolorizeFields.udp array
|
||||
Fields to remove ANSI color codes across logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#decolorizing-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.extraFields.tcp array
|
||||
Fields to add to logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#adding-extra-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.extraFields.udp array
|
||||
Fields to add to logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#adding-extra-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.ignoreFields.tcp array
|
||||
Fields to ignore at logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#dropping-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.ignoreFields.udp array
|
||||
Fields to ignore at logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#dropping-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.listenAddr.tcp array
|
||||
Comma-separated list of TCP addresses to listen to for Syslog messages. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.listenAddr.udp array
|
||||
Comma-separated list of UDP address to listen to for Syslog messages. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.streamFields.tcp array
|
||||
Fields to use as log stream labels for logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#stream-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.streamFields.udp array
|
||||
Fields to use as log stream labels for logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#stream-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.tenantID.tcp array
|
||||
TenantID for logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#multitenancy
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.tenantID.udp array
|
||||
TenantID for logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#multitenancy
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.timezone string
|
||||
Timezone to use when parsing timestamps in RFC3164 syslog messages. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 . See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/ (default "Local")
|
||||
-syslog.tls array
|
||||
Whether to enable TLS for receiving syslog messages at the corresponding -syslog.listenAddr.tcp. The corresponding -syslog.tlsCertFile and -syslog.tlsKeyFile must be set if -syslog.tls is set. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-syslog.tlsCertFile array
|
||||
Path to file with TLS certificate for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.tlsCipherSuites array
|
||||
Optional list of TLS cipher suites for -syslog.listenAddr.tcp if -syslog.tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants . See also https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.tlsKeyFile array
|
||||
Path to file with TLS key for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.tlsMinVersion string
|
||||
The minimum TLS version to use for -syslog.listenAddr.tcp if -syslog.tls is set. Supported values: TLS10, TLS11, TLS12, TLS13. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security (default "TLS13")
|
||||
-syslog.useLocalTimestamp.tcp array
|
||||
Whether to use local timestamp instead of the original timestamp for the ingested syslog messages at the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-syslog.useLocalTimestamp.udp array
|
||||
Whether to use local timestamp instead of the original timestamp for the ingested syslog messages at the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-tls array
|
||||
Whether to enable TLS for incoming HTTP requests at the given -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set. See also -mtls
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-tlsAutocertCacheDir string
|
||||
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
|
||||
-tlsAutocertEmail string
|
||||
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
|
||||
-tlsAutocertHosts array
|
||||
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsCertFile array
|
||||
Path to file with TLS certificate for the corresponding -httpListenAddr if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated. See also -tlsAutocertHosts
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsCipherSuites array
|
||||
Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsKeyFile array
|
||||
Path to file with TLS key for the corresponding -httpListenAddr if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated. See also -tlsAutocertHosts
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsMinVersion array
|
||||
Optional minimum TLS version to use for the corresponding -httpListenAddr if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
@@ -1,21 +0,0 @@
|
||||
---
|
||||
weight: 8
|
||||
title: Roadmap
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs"
|
||||
weight: 8
|
||||
title: Roadmap
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/Roadmap.html
|
||||
---
|
||||
|
||||
The following functionality is planned in the future versions of VictoriaLogs:
|
||||
|
||||
- [ ] Ability to make instant snapshots and backups in the way [similar to VictoriaMetrics](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-work-with-snapshots).
|
||||
- [ ] Ability to store data to object storage (such as S3, GCS, Minio).
|
||||
- [ ] Data migration tool from Grafana Loki to VictoriaLogs (similar to [vmctl](https://docs.victoriametrics.com/victoriametrics/vmctl/)).
|
||||
- [ ] Retention filters based on tenant and stream fields similar to [VictoriaMetrics](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#retention-filters) (Enterprise only)
|
||||
@@ -1,17 +0,0 @@
|
||||
---
|
||||
title: VictoriaLogs
|
||||
weight: 0
|
||||
menu:
|
||||
docs:
|
||||
weight: 15
|
||||
identifier: victorialogs
|
||||
pageRef: /victorialogs/
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /VictoriaLogs.html
|
||||
- /VictoriaLogs/
|
||||
- /victorialogs/
|
||||
- /victorialogs/index.html
|
||||
---
|
||||
{{% content "README.md" %}}
|
||||
@@ -1,333 +0,0 @@
|
||||
---
|
||||
weight: 20
|
||||
title: VictoriaLogs cluster
|
||||
menu:
|
||||
docs:
|
||||
parent: victorialogs
|
||||
identifier: vl-cluster
|
||||
weight: 20
|
||||
title: VictoriaLogs cluster
|
||||
tags:
|
||||
- logs
|
||||
- guide
|
||||
aliases:
|
||||
- /victorialogs/cluster/
|
||||
---
|
||||
|
||||
Cluster mode in VictoriaLogs provides horizontal scaling to many nodes when [single-node VictoriaLogs](https://docs.victoriametrics.com/victorialogs/)
|
||||
reaches vertical scalability limits of a single host. If you can run a single-node VictoriaLogs on a host with more CPU / RAM / storage space / storage IO,
then it is preferable to do so instead of switching to cluster mode, since a single-node VictoriaLogs instance has the following advantages over cluster mode:
|
||||
|
||||
- It is easier to configure, manage and troubleshoot, since it consists of a single self-contained component.
|
||||
- It provides better performance and capacity on the same hardware, since it doesn't need
|
||||
to transfer data over the network between cluster components.
|
||||
|
||||
The migration path from a single-node VictoriaLogs to cluster mode is very easy - just [upgrade](https://docs.victoriametrics.com/victorialogs/#upgrading)
|
||||
a single-node VictoriaLogs executable to the [latest available release](https://docs.victoriametrics.com/victorialogs/changelog/) and add it to the list of `vlstorage` nodes
|
||||
passed via `-storageNode` command-line flag to `vlinsert` and `vlselect` components of the cluster mode. See [cluster architecture](#architecture)
|
||||
for more details about VictoriaLogs cluster components.
|
||||
|
||||
See [quick start guide](#quick-start) on how to start working with VictoriaLogs cluster.
|
||||
|
||||
## Architecture
|
||||
|
||||
VictoriaLogs in cluster mode is composed of three main components: `vlinsert`, `vlselect`, and `vlstorage`.
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant LS as Log Sources
|
||||
participant VI as vlinsert
|
||||
participant VS1 as vlstorage-1
|
||||
participant VS2 as vlstorage-2
|
||||
participant VL as vlselect
|
||||
participant QC as Query Client
|
||||
|
||||
Note over LS,VS2: Log Ingestion Flow
|
||||
LS->>VI: Send logs via supported protocols
|
||||
VI->>VS1: POST /internal/insert (HTTP)
|
||||
VI->>VS2: POST /internal/insert (HTTP)
|
||||
Note right of VI: Distributes logs evenly<br/>across vlstorage nodes
|
||||
|
||||
Note over VS1,QC: Query Flow
|
||||
QC->>VL: Query via HTTP endpoints
|
||||
VL->>VS1: GET /internal/select/* (HTTP)
|
||||
VL->>VS2: GET /internal/select/* (HTTP)
|
||||
VS1-->>VL: Return local results
|
||||
VS2-->>VL: Return local results
|
||||
VL->>QC: Processed & aggregated results
|
||||
```
|
||||
|
||||
- `vlinsert` handles log ingestion via [all supported protocols](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
|
||||
It distributes incoming logs evenly across `vlstorage` nodes, as specified by the `-storageNode` command-line flag.
|
||||
|
||||
- `vlselect` receives queries through [all supported HTTP query endpoints](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
It fetches the required data from the configured `vlstorage` nodes, processes the queries, and returns the results.
|
||||
|
||||
- `vlstorage` performs two key roles:
|
||||
- It stores logs received from `vlinsert` at the directory defined by the `-storageDataPath` flag.
|
||||
See [storage configuration docs](https://docs.victoriametrics.com/victorialogs/#storage) for details.
|
||||
- It handles queries from `vlselect` by retrieving and transforming the requested data locally before returning results.
|
||||
|
||||
Each `vlstorage` node operates as a self-contained VictoriaLogs instance.
|
||||
Refer to the [single-node and cluster mode duality](#single-node-and-cluster-mode-duality) documentation for more information.
|
||||
This design allows you to reuse existing single-node VictoriaLogs instances by listing them in the `-storageNode` flag for `vlselect`, enabling unified querying across all nodes.
|
||||
|
||||
All VictoriaLogs components are horizontally scalable and can be deployed on hardware best suited to their respective workloads.
|
||||
`vlinsert` and `vlselect` can be run on the same node, which allows the minimal cluster to consist of just one `vlstorage` node and one node acting as both `vlinsert` and `vlselect`.
|
||||
However, for production environments, it is recommended to separate `vlinsert` and `vlselect` roles to avoid resource contention — for example, to prevent heavy queries from interfering with log ingestion.
|
||||
|
||||
Communication between `vlinsert` / `vlselect` and `vlstorage` is done via HTTP over the port specified by the `-httpListenAddr` flag:
|
||||
|
||||
- `vlinsert` sends data to the `/internal/insert` endpoint on `vlstorage`.
|
||||
- `vlselect` sends queries to endpoints under `/internal/select/` on `vlstorage`.
|
||||
|
||||
This HTTP-based communication model allows you to use reverse proxies for authorization, routing, and encryption between components.
|
||||
Use of [vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/) is recommended for managing access control.
|
||||
|
||||
For advanced setups, refer to the [multi-level cluster setup](#multi-level-cluster-setup) documentation.
|
||||
|
||||
## High availability
|
||||
|
||||
In the cluster setup, the following rules apply:
|
||||
|
||||
- The `vlselect` component requires **all relevant vlstorage nodes to be available** in order to return complete and correct query results.
|
||||
|
||||
- If even one of the vlstorage nodes is temporarily unavailable, `vlselect` cannot safely return a full response, since some of the required data may reside on the missing node. Rather than risk delivering partial or misleading query results, which can cause confusion, trigger false alerts, or produce incorrect metrics, VictoriaLogs chooses to return an error instead.
|
||||
|
||||
- The `vlinsert` component continues to function normally when some vlstorage nodes are unavailable. It automatically routes new logs to the remaining available nodes to ensure that data ingestion remains uninterrupted and newly received logs are not lost.
|
||||
|
||||
> [!NOTE] Insight
|
||||
> In most real-world cases, `vlstorage` nodes become unavailable during planned maintenance such as upgrades, config changes, or rolling restarts. These are typically infrequent (weekly or monthly) and brief (a few minutes).
|
||||
> A short period of query downtime during such events is acceptable and fits well within most SLAs. For example, 60 minutes of downtime per month still provides around 99.86% availability, which often outperforms complex HA setups that rely on opaque auto-recovery and may fail unpredictably.
|
||||
|
||||
VictoriaLogs itself does not handle replication at the storage level. Instead, it relies on an external log shipper, such as [vector](https://docs.victoriametrics.com/victorialogs/data-ingestion/vector/) or [vlagent](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9034), to send the same log stream to multiple independent VictoriaLogs instances:
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
subgraph "HA Solution"
|
||||
subgraph "Ingestion Layer"
|
||||
LS["Log Sources<br/>(Applications)"]
|
||||
VECTOR["Log Collector<br/>• Buffering<br/>• Replication<br/>• Delivery Guarantees"]
|
||||
end
|
||||
|
||||
subgraph "Storage Layer"
|
||||
subgraph "Zone A"
|
||||
VLA["VictoriaLogs Cluster A"]
|
||||
end
|
||||
|
||||
subgraph "Zone B"
|
||||
VLB["VictoriaLogs Cluster B"]
|
||||
end
|
||||
end
|
||||
|
||||
subgraph "Query Layer"
|
||||
LB["Load Balancer<br/>(vmauth)<br/>• Health Checks<br/>• Failover<br/>• Query Distribution"]
|
||||
QC["Query Clients<br/>(Grafana, API)"]
|
||||
end
|
||||
|
||||
LS --> VECTOR
|
||||
VECTOR -->|"Replicate logs to<br/>Zone A cluster"| VLA
|
||||
VECTOR -->|"Replicate logs to<br/>Zone B cluster"| VLB
|
||||
|
||||
VLA -->|"Serve queries from<br/>Zone A cluster"| LB
|
||||
VLB -->|"Serve queries from<br/>Zone B cluster"| LB
|
||||
LB --> QC
|
||||
|
||||
style VECTOR fill:#e8f5e8
|
||||
style VLA fill:#d5e8d4
|
||||
style VLB fill:#d5e8d4
|
||||
style LB fill:#e1f5fe
|
||||
style QC fill:#fff2cc
|
||||
style LS fill:#fff2cc
|
||||
end
|
||||
```
|
||||
|
||||
In this HA solution:
|
||||
|
||||
- A log shipper at the top receives logs and replicates them in parallel to two VictoriaLogs clusters.
|
||||
- If one cluster fails completely (i.e., **all** of its storage nodes become unavailable), the log shipper continues to send logs to the remaining healthy cluster and buffers any logs that cannot be delivered. When the failed cluster becomes available again, the log shipper resumes sending both buffered and new logs to it.
|
||||
- On the read path, a load balancer (e.g., vmauth) sits in front of the VictoriaLogs clusters and routes query requests to any healthy cluster.
|
||||
- If one cluster fails (i.e., **at least one** of its storage nodes is unavailable), the load balancer detects this and automatically redirects all query traffic to the remaining healthy cluster.
|
||||
|
||||
There's no hidden coordination logic or consensus algorithm. You can scale it horizontally and operate it safely, even in bare-metal Kubernetes clusters using local PVs, as long as the log shipper handles reliable replication and buffering.
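
For illustration, a minimal [vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/) config for the query layer could look like the sketch below. The zone addresses and the `vlselect` port `9471` are placeholders; see the vmauth docs for the authoritative configuration options:

```sh
# Sketch: balance read queries across both zones via vmauth
cat > auth.yml <<'EOF'
unauthorized_user:
  url_prefix:
    - http://victorialogs-zone-a:9471/
    - http://victorialogs-zone-b:9471/
EOF
./vmauth-prod -auth.config=auth.yml
```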
|
||||
|
||||
## Single-node and cluster mode duality
|
||||
|
||||
Every `vlstorage` node can be used as a single-node VictoriaLogs instance:
|
||||
|
||||
- It can accept logs via [all the supported data ingestion protocols](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
|
||||
- It can accept `select` queries via [all the supported HTTP querying endpoints](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
|
||||
A single-node VictoriaLogs instance can be used as `vlstorage` node in VictoriaLogs cluster:
|
||||
|
||||
- It accepts data ingestion requests from `vlinsert` via `/internal/insert` HTTP endpoint at the TCP port specified via `-httpListenAddr` command-line flag.
|
||||
- It accepts queries from `vlselect` via `/internal/select/*` HTTP endpoints at the TCP port specified via the `-httpListenAddr` command-line flag.
|
||||
|
||||
It is possible to disallow access to `/internal/insert` and `/internal/select/*` endpoints at a single-node VictoriaLogs instance
|
||||
by running it with `-internalinsert.disable` and `-internalselect.disable` command-line flags.
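
For example (a sketch; adjust the data path to your setup):

```sh
# Run a single-node instance that rejects the cluster-internal endpoints
./victoria-logs-prod -storageDataPath=victoria-logs-data -internalinsert.disable -internalselect.disable
```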
|
||||
|
||||
## Multi-level cluster setup
|
||||
|
||||
- `vlinsert` can send the ingested logs to other `vlinsert` nodes if they are specified via `-storageNode` command-line flag.
|
||||
This allows building multi-level data ingestion schemes when top-level `vlinsert` spreads the incoming logs evenly among multiple lower-level clusters of VictoriaLogs.
|
||||
|
||||
- `vlselect` can send queries to other `vlselect` nodes if they are specified via `-storageNode` command-line flag.
|
||||
This allows building multi-level cluster schemes when top-level `vlselect` queries multiple lower-level clusters of VictoriaLogs.
|
||||
|
||||
See [security docs](#security) on how to protect communications between multiple levels of `vlinsert` and `vlselect` nodes.
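
As an illustration, a two-level setup could be started with commands like the sketch below; the lower-level node addresses and the ports are placeholders:

```sh
# Top-level vlinsert spreads the ingested logs across two lower-level vlinsert nodes
./victoria-logs-prod -httpListenAddr=:9481 -storageNode=vlinsert-a:9481,vlinsert-b:9481

# Top-level vlselect queries two lower-level vlselect nodes
./victoria-logs-prod -httpListenAddr=:9471 -storageNode=vlselect-a:9471,vlselect-b:9471
```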
|
||||
|
||||
## Security
|
||||
|
||||
All the VictoriaLogs cluster components must run in protected internal network without direct access from the Internet.
|
||||
`vlstorage` must have no access from the Internet. HTTP authorization proxies such as [vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/)
|
||||
must be used in front of `vlinsert` and `vlselect` for authorizing access to these components from the Internet.
|
||||
|
||||
By default `vlinsert` and `vlselect` communicate with `vlstorage` via unencrypted HTTP. This is OK if all these components are located
|
||||
in the same protected internal network. This isn't OK if these components communicate over the Internet, since a third party can intercept / modify
|
||||
the transferred data. It is recommended switching to HTTPS in this case:
|
||||
|
||||
- Specify `-tls`, `-tlsCertFile` and `-tlsKeyFile` command-line flags at `vlstorage`, so it accepts incoming requests over HTTPS instead of HTTP at the corresponding `-httpListenAddr`:
|
||||
|
||||
```sh
|
||||
./victoria-logs-prod -httpListenAddr=... -storageDataPath=... -tls -tlsCertFile=/path/to/certfile -tlsKeyFile=/path/to/keyfile
|
||||
```
|
||||
|
||||
- Specify `-storageNode.tls` command-line flag at `vlinsert` and `vlselect`, which communicate with the `vlstorage` over untrusted networks such as Internet:
|
||||
|
||||
```sh
|
||||
./victoria-logs-prod -storageNode=... -storageNode.tls
|
||||
```
|
||||
|
||||
It is also recommended authorizing HTTPS requests to `vlstorage` via Basic Auth:
|
||||
|
||||
- Specify `-httpAuth.username` and `-httpAuth.password` command-line flags at `vlstorage`, so it verifies the Basic Auth username + password in HTTPS requests received via `-httpListenAddr`:
|
||||
|
||||
```sh
|
||||
./victoria-logs-prod -httpListenAddr=... -storageDataPath=... -tls -tlsCertFile=... -tlsKeyFile=... -httpAuth.username=... -httpAuth.password=...
|
||||
```
|
||||
|
||||
- Specify `-storageNode.username` and `-storageNode.password` command-line flags at `vlinsert` and `vlselect`, which communicate with the `vlstorage` over untrusted networks:
|
||||
|
||||
```sh
|
||||
./victoria-logs-prod -storageNode=... -storageNode.tls -storageNode.username=... -storageNode.password=...
|
||||
```
|
||||
|
||||
Another option is to use third-party HTTP proxies such as [vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/), `nginx`, etc. for authorizing and encrypting communications
|
||||
between VictoriaLogs cluster components over untrusted networks.
|
||||
|
||||
By default, all the VictoriaLogs cluster components (`vlinsert`, `vlselect`, `vlstorage`) support all the HTTP endpoints, including `/insert/*` and `/select/*`. It's recommended to disable select endpoints on `vlinsert` and insert endpoints on `vlselect`:
|
||||
|
||||
```sh
|
||||
# Disable select endpoints on vlinsert
|
||||
./victoria-logs-prod -storageNode=... -select.disable
|
||||
|
||||
# Disable insert endpoints on vlselect
|
||||
./victoria-logs-prod -storageNode=... -insert.disable
|
||||
```
|
||||
|
||||
This helps prevent sending select requests to `vlinsert` nodes or insert requests to `vlselect` nodes in case of misconfiguration in the authorization proxy in front of the `vlinsert` and `vlselect` nodes.
|
||||
|
||||
## Quick start
|
||||
|
||||
This guide covers the following topics for a Linux host:
|
||||
|
||||
- How to download VictoriaLogs executable.
|
||||
- How to start VictoriaLogs cluster, which consists of two `vlstorage` nodes, a single `vlinsert` node and a single `vlselect` node
|
||||
running on a localhost according to [cluster architecture](#architecture).
|
||||
- How to ingest logs into the cluster.
|
||||
- How to query the ingested logs.
|
||||
|
||||
Download and unpack the latest VictoriaLogs release:
|
||||
|
||||
```sh
|
||||
curl -L -O https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v1.24.0-victorialogs/victoria-logs-linux-amd64-v1.24.0-victorialogs.tar.gz
|
||||
tar xzf victoria-logs-linux-amd64-v1.24.0-victorialogs.tar.gz
|
||||
```
|
||||
|
||||
Start the first [`vlstorage` node](#architecture), which accepts incoming requests at the port `9491` and stores the ingested logs at `victoria-logs-data-1` directory:
|
||||
|
||||
```sh
|
||||
./victoria-logs-prod -httpListenAddr=:9491 -storageDataPath=victoria-logs-data-1 &
|
||||
```
|
||||
|
||||
This command and all the following commands start cluster components as background processes.
|
||||
Use `jobs`, `fg`, `bg` commands for manipulating the running background processes. Use the `kill` command and/or `Ctrl+C` for stopping the running processes when they are no longer needed.
|
||||
See [these docs](https://tldp.org/LDP/abs/html/x9644.html) for details.
|
||||
|
||||
Start the second `vlstorage` node, which accepts incoming requests at the port `9492` and stores the ingested logs at `victoria-logs-data-2` directory:
|
||||
|
||||
```sh
|
||||
./victoria-logs-prod -httpListenAddr=:9492 -storageDataPath=victoria-logs-data-2 &
|
||||
```
|
||||
|
||||
Start `vlinsert` node, which [accepts logs](https://docs.victoriametrics.com/victorialogs/data-ingestion/) at the port `9481` and spreads them evenly among the two `vlstorage` nodes started above:
|
||||
|
||||
```sh
|
||||
./victoria-logs-prod -httpListenAddr=:9481 -storageNode=localhost:9491,localhost:9492 &
|
||||
```
|
||||
|
||||
Start `vlselect` node, which [accepts incoming queries](https://docs.victoriametrics.com/victorialogs/querying/) at the port `9471` and requests the needed data from `vlstorage` nodes started above:
|
||||
|
||||
```sh
|
||||
./victoria-logs-prod -httpListenAddr=:9471 -storageNode=localhost:9491,localhost:9492 &
|
||||
```
|
||||
|
||||
Note that all the VictoriaLogs cluster components - `vlstorage`, `vlinsert` and `vlselect` - share the same executable - `victoria-logs-prod`.
|
||||
Their roles depend on whether the `-storageNode` command-line flag is set - if this flag is set, then the executable runs in `vlinsert` and `vlselect` modes.
|
||||
Otherwise, it runs in `vlstorage` mode, which is identical to a [single-node VictoriaLogs mode](https://docs.victoriametrics.com/victorialogs/).
|
||||
|
||||
Let's ingest some logs (aka [wide events](https://jeremymorrell.dev/blog/a-practitioners-guide-to-wide-events/))
|
||||
from [GitHub archive](https://www.gharchive.org/) into the VictoriaLogs cluster with the following command:
|
||||
|
||||
```sh
|
||||
curl -s https://data.gharchive.org/$(date -d '2 days ago' '+%Y-%m-%d')-10.json.gz \
|
||||
| curl -T - -X POST -H 'Content-Encoding: gzip' 'http://localhost:9481/insert/jsonline?_time_field=created_at&_stream_fields=type'
|
||||
```
|
||||
|
||||
Let's query the ingested logs via [`/select/logsql/query` HTTP endpoint](https://docs.victoriametrics.com/victorialogs/querying/#querying-logs).
|
||||
For example, the following command returns the number of stored logs in the cluster:
|
||||
|
||||
```sh
|
||||
curl http://localhost:9471/select/logsql/query -d 'query=* | count()'
|
||||
```
|
||||
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#command-line) for details on how to query logs from command line.
|
||||
|
||||
Logs can also be explored and queried via the [built-in Web UI](https://docs.victoriametrics.com/victorialogs/querying/#web-ui).
|
||||
Open `http://localhost:9471/select/vmui/` in the web browser, select `last 7 days` time range in the top right corner and explore the ingested logs.
|
||||
See [LogsQL docs](https://docs.victoriametrics.com/victorialogs/logsql/) to familiarize yourself with the query language.
|
||||
|
||||
Every `vlstorage` node can be queried individually because [it is equivalent to a single-node VictoriaLogs](#single-node-and-cluster-mode-duality).
|
||||
For example, the following command returns the number of stored logs at the first `vlstorage` node started above:
|
||||
|
||||
```sh
|
||||
curl http://localhost:9491/select/logsql/query -d 'query=* | count()'
|
||||
```
|
||||
|
||||
It is recommended reading [key concepts](https://docs.victoriametrics.com/victorialogs/keyconcepts/) before you start working with VictoriaLogs.
|
||||
|
||||
See also [security docs](#security).
|
||||
|
||||
## Performance tuning
|
||||
|
||||
Cluster components of VictoriaLogs automatically adjust their settings for the best performance and the lowest resource usage on the given hardware.
|
||||
So there is no need in any tuning of these components in general. The following options can be used for achieving higher performance / lower resource
|
||||
usage on systems with constrained resources:
|
||||
|
||||
- `vlinsert` limits the number of concurrent requests to every `vlstorage` node. The default concurrency works great in most cases.
|
||||
Sometimes it can be increased via `-insert.concurrency` command-line flag at `vlinsert` in order to achieve higher data ingestion rate
|
||||
at the cost of higher RAM usage at `vlinsert` and `vlstorage` nodes.
|
||||
|
||||
- `vlinsert` compresses the data sent to `vlstorage` nodes in order to reduce network bandwidth usage at the cost of slightly higher CPU usage
|
||||
at `vlinsert` and `vlstorage` nodes. The compression can be disabled by passing `-insert.disableCompression` command-line flag to `vlinsert`.
|
||||
This reduces CPU usage at `vlinsert` and `vlstorage` nodes at the cost of significantly higher network bandwidth usage.
|
||||
|
||||
- `vlselect` requests compressed data from `vlstorage` nodes in order to reduce network bandwidth usage at the cost of slightly higher CPU usage
|
||||
at `vlselect` and `vlstorage` nodes. The compression can be disabled by passing `-select.disableCompression` command-line flag to `vlselect`.
|
||||
This reduces CPU usage at `vlselect` and `vlstorage` nodes at the cost of significantly higher network bandwidth usage.
|
||||
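For example, the following commands sketch how these flags could be combined (the concurrency value and node addresses are illustrative assumptions):

```sh
# vlinsert with increased concurrency and disabled compression of the data sent to vlstorage
./victoria-logs-prod -httpListenAddr=:9481 -storageNode=localhost:9491,localhost:9492 -insert.concurrency=16 -insert.disableCompression

# vlselect with disabled compression of the data received from vlstorage
./victoria-logs-prod -httpListenAddr=:9471 -storageNode=localhost:9491,localhost:9492 -select.disableCompression
```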
|
||||
## Advanced usage
|
||||
|
||||
Cluster components of VictoriaLogs provide various settings, which can be configured via command-line flags if needed.
|
||||
Default values for all the command-line flags work great in most cases, so it isn't recommended to tune them without a real need. See [the list of supported command-line flags at VictoriaLogs](https://docs.victoriametrics.com/victorialogs/#list-of-command-line-flags).
|
||||
@@ -1,86 +0,0 @@
|
||||
---
|
||||
weight: 5
|
||||
title: DataDog Agent setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 5
|
||||
url: /victorialogs/data-ingestion/datadog-agent/
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/datadog/
|
||||
- /victorialogs/data-ingestion/DataDogAgent.html
|
||||
---
|
||||
|
||||
Datadog Agent doesn't support a custom path prefix, so it's required to use [VMAuth](https://docs.victoriametrics.com/victoriametrics/vmauth/) or any other
|
||||
reverse proxy to append `/insert/datadog` path prefix to all Datadog API logs requests.
|
||||
|
||||
In case of [VMAuth](https://docs.victoriametrics.com/victoriametrics/vmauth/) your config should look like:
|
||||
|
||||
```yaml
|
||||
unauthorized_user:
|
||||
url_map:
|
||||
- src_paths:
|
||||
- "/api/v2/logs"
|
||||
- "/api/v1/validate"
|
||||
url_prefix: "<victoria-logs-base-url>/insert/datadog/"
|
||||
- src_paths:
|
||||
- "/api/v1/series"
|
||||
- "/api/v2/series"
|
||||
- "/api/beta/sketches"
|
||||
- "/api/v1/validate"
|
||||
- "/api/v1/check_run"
|
||||
- "/intake"
|
||||
- "/api/v1/metadata"
|
||||
url_prefix: "<victoria-metrics-base-url>/datadog/"
|
||||
```
|
||||
|
||||
To start ingesting logs from the DataDog Agent, please specify a custom URL instead of the default one for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
|
||||
|
||||
```yaml
|
||||
logs_enabled: true
|
||||
logs_config:
|
||||
logs_dd_url: "<vmauth-base-url>"
|
||||
use_http: true
|
||||
```
|
||||
|
||||
While using [Serverless DataDog plugin](https://github.com/DataDog/serverless-plugin-datadog) please set VictoriaLogs endpoint using `LOGS_DD_URL` environment variable:
|
||||
|
||||
```yaml
|
||||
custom:
|
||||
datadog:
|
||||
apiKey: fakekey # Set any key, otherwise plugin fails
|
||||
provider:
|
||||
environment:
|
||||
DD_DD_URL: "<vmauth-base-url>/" # VMAuth endpoint for DataDog
|
||||
```
|
||||
|
||||
Substitute the `<vmauth-base-url>` address with the real address of VMAuth proxy.
|
||||
|
||||
## Dropping fields
|
||||
|
||||
VictoriaLogs can be configured for skipping the given [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
for logs ingested via DataDog protocol. This can be done via the following options:
|
||||
|
||||
- `-datadog.ignoreFields` command-line flag, which accepts comma-separated list of log fields to ignore.
|
||||
This list can contain log field prefixes ending with `*` such as `some-prefix*`. In this case all the fields starting from `some-prefix` are ignored.
|
||||
- `ignore_fields` HTTP request query arg or `VL-Ignore-Fields` HTTP request header. See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details.
|
||||
|
||||
## Stream fields
|
||||
|
||||
VictoriaLogs can be configured to use the particular fields from the ingested logs as [log stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
|
||||
for logs ingested via DataDog protocol. This can be done via the following options:
|
||||
|
||||
- `-datadog.streamFields` command-line flag, which accepts comma-separated list of fields to use as log stream fields.
|
||||
- `_stream_fields` HTTP request query arg or `VL-Stream-Fields` HTTP request header. See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details.
|
||||
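For example, the following command sketches the `-datadog.streamFields` and `-datadog.ignoreFields` options described above (the field names are illustrative assumptions):

```sh
# Use service and hostname as stream fields, drop ddtags and any field starting with tmp
./victoria-logs-prod -datadog.streamFields=service,hostname -datadog.ignoreFields=ddtags,tmp*
```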
|
||||
|
||||
See also:
|
||||
|
||||
- [HTTP query args and HTTP headers, which can be set during data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters)
|
||||
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting)
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/)
|
||||
- [Docker-compose demo for Datadog integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/datadog-agent)
|
||||
- [Docker-compose demo for Datadog Serverless integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/datadog-serverless)
|
||||
@@ -1,127 +0,0 @@
|
||||
---
|
||||
weight: 1
|
||||
title: Filebeat setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 1
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/Filebeat.html
|
||||
- /victorialogs/data-ingestion/filebeat.html
|
||||
---
|
||||
|
||||
_Tested with filebeat [8.15.1+](https://www.elastic.co/guide/en/beats/libbeat/8.17/release-notes-8.15.1.html)._
|
||||
|
||||
Specify [`output.elasticsearch`](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) section in the `filebeat.yml`
|
||||
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
|
||||
|
||||
```yaml
|
||||
output.elasticsearch:
|
||||
hosts: ["http://localhost:9428/insert/elasticsearch/"]
|
||||
parameters:
|
||||
_msg_field: "message"
|
||||
_time_field: "@timestamp"
|
||||
_stream_fields: "host.hostname,log.file.path"
|
||||
```
|
||||
|
||||
Substitute the `localhost:9428` address inside `hosts` section with the real TCP address of VictoriaLogs.
|
||||
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on the `parameters` section.
|
||||
|
||||
It is recommended verifying whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
This can be done by specifying `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters)
|
||||
and inspecting VictoriaLogs logs then:
|
||||
|
||||
```yaml
|
||||
output.elasticsearch:
|
||||
hosts: ["http://localhost:9428/insert/elasticsearch/"]
|
||||
parameters:
|
||||
_msg_field: "message"
|
||||
_time_field: "@timestamp"
|
||||
_stream_fields: "host.hostname,log.file.path"
|
||||
debug: "1"
|
||||
```
|
||||
|
||||
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
|
||||
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
|
||||
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
|
||||
|
||||
```yaml
|
||||
output.elasticsearch:
|
||||
hosts: ["http://localhost:9428/insert/elasticsearch/"]
|
||||
parameters:
|
||||
_msg_field: "message"
|
||||
_time_field: "@timestamp"
|
||||
_stream_fields: "host.name,log.file.path"
|
||||
ignore_fields: "log.offset,event.original"
|
||||
```
|
||||
|
||||
When Filebeat ingests logs into VictoriaLogs at a high rate, then it may be needed to tune `worker` and `bulk_max_size` options.
|
||||
For example, the following config is optimized for higher than usual ingestion rate:
|
||||
|
||||
```yaml
|
||||
output.elasticsearch:
|
||||
hosts: ["http://localhost:9428/insert/elasticsearch/"]
|
||||
parameters:
|
||||
_msg_field: "message"
|
||||
_time_field: "@timestamp"
|
||||
_stream_fields: "host.name,log.file.path"
|
||||
worker: 8
|
||||
bulk_max_size: 1000
|
||||
```
|
||||
|
||||
If Filebeat sends logs to VictoriaLogs in another datacenter, then it may be useful enabling data compression via the `compression_level` option.
|
||||
This usually allows saving network bandwidth and costs by up to 5 times:
|
||||
|
||||
```yaml
|
||||
output.elasticsearch:
|
||||
hosts: ["http://localhost:9428/insert/elasticsearch/"]
|
||||
parameters:
|
||||
_msg_field: "message"
|
||||
_time_field: "@timestamp"
|
||||
_stream_fields: "host.name,log.file.path"
|
||||
compression_level: 1
|
||||
```
|
||||
|
||||
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy).
|
||||
If you need to store logs in another tenant, then specify the needed tenant via `headers` in the `output.elasticsearch` section.
|
||||
For example, the following `filebeat.yml` config instructs Filebeat to store the data to `(AccountID=12, ProjectID=34)` tenant:
|
||||
|
||||
```yaml
|
||||
output.elasticsearch:
|
||||
hosts: ["http://localhost:9428/insert/elasticsearch/"]
|
||||
headers:
|
||||
AccountID: 12
|
||||
ProjectID: 34
|
||||
parameters:
|
||||
_msg_field: "message"
|
||||
_time_field: "@timestamp"
|
||||
_stream_fields: "host.name,log.file.path"
|
||||
```
|
||||
|
||||
Filebeat checks the Elasticsearch version on startup and refuses to start sending logs if the version is not compatible.
|
||||
In order to bypass this check please add `allow_older_versions: true` into `output.elasticsearch` section:
|
||||
|
||||
```yaml
|
||||
output.elasticsearch:
|
||||
hosts: [ "http://localhost:9428/insert/elasticsearch/" ]
|
||||
parameters:
|
||||
_msg_field: "message"
|
||||
_time_field: "@timestamp"
|
||||
_stream_fields: "host.name,log.file.path"
|
||||
allow_older_versions: true
|
||||
```
|
||||
|
||||
Alternatively, it is also possible to change the version which VictoriaLogs reports to Filebeat by using the `-elasticsearch.version`
|
||||
command-line flag.
|
||||
|
||||
See also:
|
||||
|
||||
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
- [Filebeat `output.elasticsearch` docs](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html).
|
||||
- [Docker-compose demo for Filebeat integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat).
|
||||
@@ -1,104 +0,0 @@
|
||||
---
|
||||
weight: 2
|
||||
title: Fluentbit setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 2
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/fluentbit.html
|
||||
- /victorialogs/data-ingestion/Fluentbit.html
|
||||
---
|
||||
|
||||
## HTTP
|
||||
|
||||
Specify [http output](https://docs.fluentbit.io/manual/pipeline/outputs/http) section in the `fluentbit.conf`
|
||||
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
|
||||
|
||||
```fluentbit
|
||||
[Output]
|
||||
Name http
|
||||
Match *
|
||||
host localhost
|
||||
port 9428
|
||||
uri /insert/jsonline?_stream_fields=stream&_msg_field=log&_time_field=date
|
||||
format json_lines
|
||||
json_date_format iso8601
|
||||
```
|
||||
|
||||
Substitute the host (`localhost`) and port (`9428`) with the real TCP address of VictoriaLogs.
|
||||
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on the query args specified in the `uri`.
|
||||
|
||||
It is recommended verifying whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
This can be done by specifying `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) in the `uri`
|
||||
and inspecting VictoriaLogs logs then:
|
||||
|
||||
```fluentbit
|
||||
[Output]
|
||||
Name http
|
||||
Match *
|
||||
host localhost
|
||||
port 9428
|
||||
uri /insert/jsonline?_stream_fields=stream&_msg_field=log&_time_field=date&debug=1
|
||||
format json_lines
|
||||
json_date_format iso8601
|
||||
```
|
||||
|
||||
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
|
||||
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
|
||||
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
|
||||
|
||||
```fluentbit
|
||||
[Output]
|
||||
Name http
|
||||
Match *
|
||||
host localhost
|
||||
port 9428
|
||||
uri /insert/jsonline?_stream_fields=stream&_msg_field=log&_time_field=date&ignore_fields=log.offset,event.original
|
||||
format json_lines
|
||||
json_date_format iso8601
|
||||
```
|
||||
|
||||
If Fluentbit sends logs to VictoriaLogs in another datacenter, then it may be useful enabling data compression via the `compress gzip` option.
|
||||
This usually allows saving network bandwidth and costs by up to 5 times:
|
||||
|
||||
```fluentbit
|
||||
[Output]
|
||||
Name http
|
||||
Match *
|
||||
host localhost
|
||||
port 9428
|
||||
uri /insert/jsonline?_stream_fields=stream&_msg_field=log&_time_field=date
|
||||
format json_lines
|
||||
json_date_format iso8601
|
||||
compress gzip
|
||||
```
|
||||
|
||||
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/keyconcepts/#multitenancy).
|
||||
If you need to store logs in another tenant, then specify the needed tenant via `header` options.
|
||||
For example, the following `fluentbit.conf` config instructs Fluentbit to store the data to `(AccountID=12, ProjectID=34)` tenant:
|
||||
|
||||
```fluentbit
|
||||
[Output]
|
||||
Name http
|
||||
Match *
|
||||
host localhost
|
||||
port 9428
|
||||
uri /insert/jsonline?_stream_fields=stream&_msg_field=log&_time_field=date
|
||||
format json_lines
|
||||
json_date_format iso8601
|
||||
header AccountID 12
|
||||
header ProjectID 34
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
- [Fluentbit HTTP output config docs](https://docs.fluentbit.io/manual/pipeline/outputs/http).
|
||||
- [Docker-compose demo for Fluentbit integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit).
|
||||
@@ -1,89 +0,0 @@
|
||||
---
|
||||
weight: 2
|
||||
title: Fluentd setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 2
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/fluentd.html
|
||||
- /victorialogs/data-ingestion/Fluentd.html
|
||||
---
|
||||
|
||||
## HTTP
|
||||
|
||||
Specify [http output](https://docs.fluentd.org/output/http) section in the `fluentd.conf`
|
||||
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
|
||||
|
||||
```fluentd
|
||||
<match **>
|
||||
@type http
|
||||
endpoint "http://localhost:9428/insert/jsonline"
|
||||
headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
|
||||
</match>
|
||||
```
|
||||
|
||||
Substitute the host (`localhost`) and port (`9428`) with the real TCP address of VictoriaLogs.
|
||||
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on the query args specified in the `endpoint`.
|
||||
|
||||
It is recommended verifying whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
This can be done by specifying `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) in the `endpoint`
|
||||
and inspecting VictoriaLogs logs then:
|
||||
|
||||
```fluentd
|
||||
<match **>
|
||||
@type http
|
||||
endpoint "http://localhost:9428/insert/jsonline&debug=1"
|
||||
headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
|
||||
</match>
|
||||
```
|
||||
|
||||
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
|
||||
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
|
||||
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
|
||||
|
||||
```fluentd
|
||||
<match **>
|
||||
@type http
|
||||
endpoint "http://localhost:9428/insert/jsonline&ignore_fields=log.offset,event.original"
|
||||
headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
|
||||
</match>
|
||||
```
|
||||
|
||||
If Fluentd sends logs to VictoriaLogs in another datacenter, then it may be useful enabling data compression via the `compress gzip` option.
|
||||
This usually allows saving network bandwidth and costs by up to 5 times:
|
||||
|
||||
```fluentd
|
||||
<match **>
|
||||
@type http
|
||||
endpoint "http://localhost:9428/insert/jsonline&ignore_fields=log.offset,event.original"
|
||||
headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
|
||||
compress gzip
|
||||
</match>
|
||||
```
|
||||
|
||||
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/keyconcepts/#multitenancy).
|
||||
If you need to store logs in another tenant, then specify the needed tenant via the `headers` option.
|
||||
For example, the following `fluentd.conf` config instructs Fluentd to store the data to `(AccountID=12, ProjectID=34)` tenant:
|
||||
|
||||
```fluentd
|
||||
<match **>
|
||||
@type http
|
||||
endpoint "http://localhost:9428/insert/jsonline"
|
||||
headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
|
||||
header AccountID 12
|
||||
header ProjectID 23
|
||||
</match>
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
- [Fluentd HTTP output config docs](https://docs.fluentd.org/output/http)
|
||||
- [Docker-compose demo for Fluentd integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentd).
|
||||
@@ -1,67 +0,0 @@
|
||||
---
|
||||
weight: 10
|
||||
title: Journald setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 10
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/Journald.html
|
||||
---
|
||||
On the client side, which should already have journald, additionally install [systemd-journal-upload](https://www.freedesktop.org/software/systemd/man/latest/systemd-journal-upload.service.html), then edit `/etc/systemd/journal-upload.conf` and set `URL` to the VictoriaLogs endpoint:
|
||||
|
||||
```
|
||||
[Upload]
|
||||
URL=http://localhost:9428/insert/journald
|
||||
```
|
||||
|
||||
Substitute the `localhost:9428` address in the `URL` option with the real TCP address of VictoriaLogs.
|
||||
|
||||
## Time field
|
||||
|
||||
VictoriaLogs uses the `__REALTIME_TIMESTAMP` field as [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field)
|
||||
for the logs ingested via journald protocol. Another field can be used instead of `__REALTIME_TIMESTAMP` by specifying it via the `-journald.timeField` command-line flag.
|
||||
See [the list of supported Journald fields](https://www.freedesktop.org/software/systemd/man/latest/systemd.journal-fields.html).
|
||||
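For example, the following command is a sketch of switching the timestamp source to the `_SOURCE_REALTIME_TIMESTAMP` journald field (the data path is an illustrative assumption):

```sh
# Use the timestamp recorded by the logging process instead of the journal receive time
./victoria-logs-prod -storageDataPath=victoria-logs-data -journald.timeField=_SOURCE_REALTIME_TIMESTAMP
```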
|
||||
## Level field
|
||||
|
||||
VictoriaLogs automatically sets the `level` log field according to the [`PRIORITY` field value](https://wiki.archlinux.org/title/Systemd/Journal).
|
||||
|
||||
## Stream fields
|
||||
|
||||
VictoriaLogs uses `(_MACHINE_ID, _HOSTNAME, _SYSTEMD_UNIT)` as [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
|
||||
for logs ingested via journald protocol. The list of log stream fields can be changed via `-journald.streamFields` command-line flag if needed,
|
||||
by providing comma-separated list of journald fields from [this list](https://www.freedesktop.org/software/systemd/man/latest/systemd.journal-fields.html).
|
||||
|
||||
Please make sure that the log stream fields passed to `-journald.streamFields` do not contain fields with a high or unbounded number of unique values,
|
||||
since this may lead to [high cardinality issues](https://docs.victoriametrics.com/victorialogs/keyconcepts/#high-cardinality).
|
||||
This can happen with `_SYSTEMD_UNIT` if you have templated units with non-static instances
|
||||
such as `systemd-coredump@.service` or if you have a `.socket` unit with `Accept=yes`.
|
||||
|
||||
The following Journald fields are also good candidates for stream fields:
|
||||
|
||||
- `_TRANSPORT` (to separate out kernel and audit logs which are not associated with a `_SYSTEMD_UNIT`)
|
||||
- `_SYSTEMD_USER_UNIT`
|
||||
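For example, the following command is a sketch of such a configuration (the exact field list and data path are illustrative assumptions):

```sh
# Override the journald stream fields with a custom comma-separated list
./victoria-logs-prod -storageDataPath=victoria-logs-data \
  -journald.streamFields=_HOSTNAME,_SYSTEMD_UNIT,_TRANSPORT
```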
|
||||
## Dropping fields
|
||||
|
||||
VictoriaLogs can be configured for skipping the given [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
for logs ingested via journald protocol, via `-journald.ignoreFields` command-line flag, which accepts comma-separated list of log fields to ignore.
|
||||
This list can contain log field prefixes ending with `*` such as `some-prefix*`. In this case all the fields starting from `some-prefix` are ignored.
|
||||
|
||||
See [the list of supported Journald fields](https://www.freedesktop.org/software/systemd/man/latest/systemd.journal-fields.html).
|
||||
|
||||
## Multitenancy
|
||||
|
||||
By default VictoriaLogs stores logs ingested via journald protocol into `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy).
|
||||
This can be changed by passing the needed tenant in the format `AccountID:ProjectID` at the `-journald.tenantID` command-line flag.
|
||||
For example, `-journald.tenantID=123:456` would store logs ingested via journald protocol into `(AccountID=123, ProjectID=456)` tenant.
|
||||
|
||||
See also:
|
||||
|
||||
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
- [Docker-compose demo for Journald integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/journald).
|
||||
@@ -1,134 +0,0 @@
|
||||
---
|
||||
weight: 3
|
||||
title: Logstash setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 3
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/logstash.html
|
||||
- /victorialogs/data-ingestion/Logstash.html
|
||||
---
|
||||
VictoriaLogs supports the following Logstash outputs:
|
||||
- [Elasticsearch](#elasticsearch)
|
||||
- [HTTP JSON](#http)
|
||||
|
||||
## Elasticsearch
|
||||
|
||||
Specify [`output.elasticsearch`](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) section in the `logstash.conf` file
|
||||
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
|
||||
|
||||
```logstash
|
||||
output {
|
||||
elasticsearch {
|
||||
hosts => ["http://localhost:9428/insert/elasticsearch/"]
|
||||
parameters => {
|
||||
"_msg_field" => "message"
|
||||
"_time_field" => "@timestamp"
|
||||
"_stream_fields" => "host.name,process.name"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Substitute `localhost:9428` address inside `hosts` with the real TCP address of VictoriaLogs.
|
||||
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on the `parameters` section.
|
||||
|
||||
It is recommended verifying whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
This can be done by specifying `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters)
|
||||
and inspecting VictoriaLogs logs then:
|
||||
|
||||
```logstash
|
||||
output {
|
||||
elasticsearch {
|
||||
hosts => ["http://localhost:9428/insert/elasticsearch/"]
|
||||
parameters => {
|
||||
"_msg_field" => "message"
|
||||
"_time_field" => "@timestamp"
|
||||
"_stream_fields" => "host.name,process.name"
|
||||
"debug" => "1"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
|
||||
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
|
||||
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
|
||||
|
||||
```logstash
|
||||
output {
|
||||
elasticsearch {
|
||||
hosts => ["http://localhost:9428/insert/elasticsearch/"]
|
||||
parameters => {
|
||||
"_msg_field" => "message"
|
||||
"_time_field" => "@timestamp"
|
||||
"_stream_fields" => "host.hostname,process.name"
|
||||
"ignore_fields" => "log.offset,event.original"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If Logstash sends logs to VictoriaLogs in another datacenter, then it may be useful enabling data compression via the `http_compression => true` option.
|
||||
This usually allows saving network bandwidth and costs by up to 5 times:
|
||||
|
||||
```logstash
|
||||
output {
|
||||
elasticsearch {
|
||||
hosts => ["http://localhost:9428/insert/elasticsearch/"]
|
||||
parameters => {
|
||||
"_msg_field" => "message"
|
||||
"_time_field" => "@timestamp"
|
||||
"_stream_fields" => "host.hostname,process.name"
|
||||
}
|
||||
http_compression => true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy).
|
||||
If you need to store logs in another tenant, then specify the needed tenant via `custom_headers` in the `output.elasticsearch` section.
|
||||
For example, the following `logstash.conf` config instructs Logstash to store the data to `(AccountID=12, ProjectID=34)` tenant:
|
||||
|
||||
```logstash
|
||||
output {
|
||||
elasticsearch {
|
||||
hosts => ["http://localhost:9428/insert/elasticsearch/"]
|
||||
custom_headers => {
|
||||
"AccountID" => "1"
|
||||
"ProjectID" => "2"
|
||||
}
|
||||
parameters => {
|
||||
"_msg_field" => "message"
|
||||
"_time_field" => "@timestamp"
|
||||
"_stream_fields" => "host.hostname,process.name"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## HTTP
|
||||
|
||||
Specify [`output.http`](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-http.html) section in the `logstash.conf` file
|
||||
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
|
||||
|
||||
```conf
|
||||
output {
  http {
    url => "http://victorialogs:9428/insert/jsonline?_stream_fields=host.ip,process.name&_msg_field=message&_time_field=@timestamp"
    format => "json"
    http_method => "post"
  }
}
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
- [Logstash `output.elasticsearch` docs](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html).
|
||||
- [Docker-compose demo for Logstash integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash).
|
||||
@@ -1,89 +0,0 @@
|
||||
---
|
||||
weight: 4
|
||||
title: Promtail setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 4
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/Promtail.html
|
||||
- /victorialogs/data-ingestion/promtail.html
|
||||
---
|
||||
[Promtail](https://grafana.com/docs/loki/latest/clients/promtail/), [Grafana Agent](https://grafana.com/docs/agent/latest/)
|
||||
and [Grafana Alloy](https://grafana.com/docs/alloy/latest/) are the default log collectors for Grafana Loki.
|
||||
They can be configured to send the collected logs to VictoriaLogs according to the following docs.
|
||||
|
||||
Specify [`clients`](https://grafana.com/docs/loki/latest/clients/promtail/configuration/#clients) section in the configuration file
|
||||
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
|
||||
|
||||
```yaml
|
||||
clients:
|
||||
- url: "http://localhost:9428/insert/loki/api/v1/push"
|
||||
```
|
||||
|
||||
Substitute `localhost:9428` address inside `clients` with the real TCP address of VictoriaLogs.
|
||||
|
||||
VictoriaLogs automatically parses the JSON string from the log message into [distinct log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
|
||||
This behavior can be disabled by passing `-loki.disableMessageParsing` command-line flag to VictoriaLogs or by adding `disable_message_parsing=1` query arg
|
||||
to the `/insert/loki/api/v1/push` url in the config of log shipper:
|
||||
|
||||
```yaml
|
||||
clients:
|
||||
- url: "http://localhost:9428/insert/loki/api/v1/push?disable_message_parsing=1"
|
||||
```
|
||||
|
||||
In this case the JSON with log fields is stored as a string in the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field),
|
||||
so later it could be parsed at query time with the [`unpack_json` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#unpack_json-pipe).
|
||||
JSON parsing at query time can be slow and can consume a lot of additional CPU time and disk read IO bandwidth. That's why it is
|
||||
recommended enabling JSON message parsing at data ingestion.
|
||||
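For example, assuming a single-node VictoriaLogs listening at `localhost:9428`, the stored JSON messages could be unpacked at query time like this (a hedged sketch):

```sh
# Parse the JSON stored in _msg at query time for the logs ingested during the last 5 minutes
curl http://localhost:9428/select/logsql/query -d 'query=_time:5m | unpack_json'
```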
|
||||
VictoriaLogs uses [log stream labels](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) defined at the client side,
|
||||
e.g. at Promtail, Grafana Agent or Grafana Alloy. Sometimes it may be needed overriding the set of these fields. This can be done via `_stream_fields`
|
||||
query arg. For example, the following config instructs using only the `instance` and `job` labels as log stream fields, while other labels
|
||||
will be stored as [usual log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model):
|
||||
|
||||
```yaml
|
||||
clients:
|
||||
- url: "http://localhost:9428/insert/loki/api/v1/push?_stream_fields=instance,job"
|
||||
```
|
||||
|
||||
It is recommended verifying whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
This can be done by specifying `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters)
|
||||
and inspecting VictoriaLogs logs then:
|
||||
|
||||
```yaml
|
||||
clients:
|
||||
- url: "http://localhost:9428/insert/loki/api/v1/push?debug=1"
|
||||
```
|
||||
|
||||
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
|
||||
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
|
||||
For example, the following config instructs VictoriaLogs to ignore `filename` and `stream` fields in the ingested logs:
|
||||
|
||||
```yaml
|
||||
clients:
|
||||
- url: 'http://localhost:9428/insert/loki/api/v1/push?ignore_fields=filename,stream'
|
||||
```
|
||||
|
||||
See also [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on other supported query args.
|
||||
There is no need in specifying `_time_field` query arg, since VictoriaLogs automatically extracts timestamp from the ingested Loki data.
|
||||
|
||||
By default the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy).
|
||||
If you need to store logs in another tenant, then specify the needed tenant via the `tenant_id` field
|
||||
in the [Loki client configuration](https://grafana.com/docs/loki/latest/clients/promtail/configuration/#clients).
|
||||
The `tenant_id` must have `AccountID:ProjectID` format, where `AccountID` and `ProjectID` are arbitrary uint32 numbers.
|
||||
For example, the following config instructs VictoriaLogs to store logs in the `(AccountID=12, ProjectID=34)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy):
|
||||
|
||||
```yaml
|
||||
clients:
|
||||
- url: "http://localhost:9428/insert/loki/api/v1/push"
|
||||
tenant_id: "12:34"
|
||||
```
|
||||
|
||||
The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
|
||||
See also [data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting) docs.
|
||||
@@ -1,372 +0,0 @@
|
||||
[VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) can accept logs from the following log collectors:
|
||||
|
||||
- Syslog, Rsyslog and Syslog-ng - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/).
|
||||
- Filebeat - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/filebeat/).
|
||||
- Fluentbit - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/fluentbit/).
|
||||
- Fluentd - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/fluentd/).
|
||||
- Logstash - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/logstash/).
|
||||
- Vector - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/vector/).
|
||||
- Promtail (aka Grafana Loki, Grafana Agent or Grafana Alloy) - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/promtail/).
|
||||
- Telegraf - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/telegraf/).
|
||||
- OpenTelemetry Collector - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/opentelemetry/).
|
||||
- Journald - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/).
|
||||
- DataDog - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/).
|
||||
|
||||
The ingested logs can be queried according to [these docs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
|
||||
See also:
|
||||
|
||||
- [Log collectors and data ingestion formats](#log-collectors-and-data-ingestion-formats).
|
||||
- [Data ingestion troubleshooting](#troubleshooting).
|
||||
|
||||
## HTTP APIs
|
||||
|
||||
VictoriaLogs supports the following data ingestion HTTP APIs:
|
||||
|
||||
- Elasticsearch bulk API. See [these docs](#elasticsearch-bulk-api).
|
||||
- JSON stream API aka [ndjson](https://jsonlines.org/). See [these docs](#json-stream-api).
|
||||
- Loki JSON API. See [these docs](#loki-json-api).
|
||||
- OpenTelemetry API. See [these docs](#opentelemetry-api).
|
||||
- Journald export format. See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/).
|
||||
|
||||
VictoriaLogs accepts optional [HTTP parameters](#http-parameters) at data ingestion HTTP APIs.
|
||||
|
||||
### Elasticsearch bulk API
|
||||
|
||||
VictoriaLogs accepts logs in [Elasticsearch bulk API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html)
|
||||
/ [OpenSearch Bulk API](http://opensearch.org/docs/1.2/opensearch/rest-api/document-apis/bulk/) format
|
||||
at `http://localhost:9428/insert/elasticsearch/_bulk` endpoint.
|
||||
|
||||
The following command pushes a single log line to VictoriaLogs:
|
||||
|
||||
```sh
|
||||
echo '{"create":{}}
|
||||
{"_msg":"cannot open file","_time":"0","host.name":"host123"}
|
||||
' | curl -X POST -H 'Content-Type: application/json' --data-binary @- http://localhost:9428/insert/elasticsearch/_bulk
|
||||
```
|
||||
|
||||
It is possible to push thousands of log lines in a single request to this API.
|
||||
|
||||
If the [timestamp field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) is set to `"0"`,
|
||||
then the current timestamp at VictoriaLogs side is used per each ingested log line.
|
||||
Otherwise the timestamp field must be in one of the following formats:
|
||||
|
||||
- [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) or [RFC3339](https://www.rfc-editor.org/rfc/rfc3339).
|
||||
For example, `2023-06-20T15:32:10Z` or `2023-06-20 15:32:10.123456789+02:00`.
|
||||
If timezone information is missing (for example, `2023-06-20 15:32:10`),
|
||||
then the time is parsed in the local timezone of the host where VictoriaLogs runs.
|
||||
|
||||
- Unix timestamp in seconds, milliseconds, microseconds or nanoseconds. For example, `1686026893` (seconds), `1686026893735` (milliseconds),
|
||||
`1686026893735321` (microseconds) or `1686026893735321098` (nanoseconds).
|
||||
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) for details on fields,
|
||||
which must be present in the ingested log messages.
|
||||
|
||||
The API accepts various HTTP parameters, which can change the data ingestion behavior - see [these docs](#http-parameters) for details.
|
||||
|
||||
The following command verifies that the data has been successfully ingested to VictoriaLogs by [querying](https://docs.victoriametrics.com/victorialogs/querying/) it:
|
||||
|
||||
```sh
|
||||
curl http://localhost:9428/select/logsql/query -d 'query=host.name:host123'
|
||||
```
|
||||
|
||||
The command should return the following response:
|
||||
|
||||
```sh
|
||||
{"_msg":"cannot open file","_stream":"{}","_time":"2023-06-21T04:24:24Z","host.name":"host123"}
|
||||
```
|
||||
|
||||
The response by default contains all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
|
||||
See [how to query specific fields](https://docs.victoriametrics.com/victorialogs/logsql/#querying-specific-fields).
|
||||
|
||||
The duration of requests to `/insert/elasticsearch/_bulk` can be monitored with `vl_http_request_duration_seconds{path="/insert/elasticsearch/_bulk"}` metric.
|
||||
|
||||
See also:
|
||||
|
||||
- [How to debug data ingestion](#troubleshooting).
|
||||
- [HTTP parameters, which can be passed to the API](#http-parameters).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
|
||||
### JSON stream API
|
||||
|
||||
VictoriaLogs accepts JSON line stream aka [ndjson](https://jsonlines.org/) at `http://localhost:9428/insert/jsonline` endpoint.
|
||||
|
||||
The following command pushes multiple log lines to VictoriaLogs:
|
||||
|
||||
```sh
|
||||
echo '{ "log": { "level": "info", "message": "hello world" }, "date": "0", "stream": "stream1" }
|
||||
{ "log": { "level": "error", "message": "oh no!" }, "date": "0", "stream": "stream1" }
|
||||
{ "log": { "level": "info", "message": "hello world" }, "date": "0", "stream": "stream2" }
|
||||
' | curl -X POST -H 'Content-Type: application/stream+json' --data-binary @- \
|
||||
'http://localhost:9428/insert/jsonline?_stream_fields=stream&_time_field=date&_msg_field=log.message'
|
||||
```
|
||||
|
||||
It is possible to push unlimited number of log lines in a single request to this API.
|
||||
|
||||
VictoriaLogs skips invalid JSON lines and continues parsing the remaining lines. It logs the warning
|
||||
with the reason why it skipped invalid JSON lines. It also increments the `vl_http_errors_total{path="/insert/jsonline"}` counter
|
||||
at the [`/metrics` page](https://docs.victoriametrics.com/victorialogs/#monitoring) per every invalid JSON line.
|
||||
|
||||
If the [timestamp field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) is set to `"0"`,
|
||||
then the current timestamp at VictoriaLogs side is used per each ingested log line.
|
||||
Otherwise the timestamp field must be in one of the following formats:
|
||||
|
||||
- [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) or [RFC3339](https://www.rfc-editor.org/rfc/rfc3339).
|
||||
For example, `2023-06-20T15:32:10Z` or `2023-06-20 15:32:10.123456789+02:00`.
|
||||
If timezone information is missing (for example, `2023-06-20 15:32:10`),
|
||||
then the time is parsed in the local timezone of the host where VictoriaLogs runs.
|
||||
|
||||
- Unix timestamp in seconds, milliseconds, microseconds or nanoseconds. For example, `1686026893` (seconds), `1686026893735` (milliseconds),
|
||||
`1686026893735321` (microseconds) or `1686026893735321098` (nanoseconds).
|
||||
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) for details on fields,
|
||||
which must be present in the ingested log messages.
|
||||
|
||||
The API accepts various HTTP parameters, which can change the data ingestion behavior - see [these docs](#http-parameters) for details.
|
||||
|
||||
The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](https://docs.victoriametrics.com/victorialogs/querying/) it:
|
||||
|
||||
```sh
|
||||
curl http://localhost:9428/select/logsql/query -d 'query=log.level:*'
|
||||
```
|
||||
|
||||
The command should return the following response:
|
||||
|
||||
```sh
|
||||
{"_msg":"hello world","_stream":"{stream=\"stream2\"}","_time":"2023-06-20T13:35:11.56789Z","log.level":"info"}
|
||||
{"_msg":"hello world","_stream":"{stream=\"stream1\"}","_time":"2023-06-20T15:31:23Z","log.level":"info"}
|
||||
{"_msg":"oh no!","_stream":"{stream=\"stream1\"}","_time":"2023-06-20T15:32:10.567Z","log.level":"error"}
|
||||
```
|
||||
|
||||
The response by default contains all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
|
||||
See [how to query specific fields](https://docs.victoriametrics.com/victorialogs/logsql/#querying-specific-fields).
|
||||
|
||||
The duration of requests to `/insert/jsonline` can be monitored with `vl_http_request_duration_seconds{path="/insert/jsonline"}` metric.
|
||||
|
||||
See also:
|
||||
|
||||
- [How to debug data ingestion](#troubleshooting).
|
||||
- [HTTP parameters, which can be passed to the API](#http-parameters).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
|
||||
### Loki JSON API
|
||||
|
||||
VictoriaLogs accepts logs in [Loki JSON API](https://grafana.com/docs/loki/latest/reference/loki-http-api/#ingest-logs) format at `http://localhost:9428/insert/loki/api/v1/push` endpoint.
|
||||
|
||||
The following command pushes a single log line to Loki JSON API at VictoriaLogs:
|
||||
|
||||
```sh
|
||||
curl -H "Content-Type: application/json" -XPOST "http://localhost:9428/insert/loki/api/v1/push" --data-raw \
|
||||
'{"streams": [{ "stream": { "instance": "host123", "job": "app42" }, "values": [ [ "0", "foo fizzbuzz bar" ] ] }]}'
|
||||
```
|
||||
|
||||
It is possible to push thousands of log streams and log lines in a single request to this API.
|
||||
|
||||
The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](https://docs.victoriametrics.com/victorialogs/querying/) it:
|
||||
|
||||
```sh
|
||||
curl http://localhost:9428/select/logsql/query -d 'query=fizzbuzz'
|
||||
```
|
||||
|
||||
The command should return the following response:
|
||||
|
||||
```sh
|
||||
{"_msg":"foo fizzbuzz bar","_stream":"{instance=\"host123\",job=\"app42\"}","_time":"2023-07-20T23:01:19.288676497Z"}
|
||||
```
|
||||
|
||||
The response by default contains all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
|
||||
See [how to query specific fields](https://docs.victoriametrics.com/victorialogs/logsql/#querying-specific-fields).
|
||||
|
||||
The `/insert/loki/api/v1/push` endpoint accepts various HTTP parameters, which can change the data ingestion behavior - see [these docs](#http-parameters) for details.
|
||||
There is no need in specifying `_msg_field` and `_time_field` query args, since VictoriaLogs automatically extracts log message and timestamp from the ingested Loki data.
|
||||
|
||||
The `_stream_fields` arg is optional. If it isn't set, then all the labels inside the `"stream":{...}` are treated
|
||||
as [log stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields). Use `_stream_fields` query arg for overriding the list of stream fields.
|
||||
For example, the following query instructs using only the `instance` label from the `"stream":{...}` as a stream field, while `ip` and `trace_id` fields will be stored
|
||||
as usual [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model):
|
||||
|
||||
```sh
|
||||
curl -H "Content-Type: application/json" -XPOST "http://localhost:9428/insert/loki/api/v1/push?_stream_fields=instance" --data-raw \
|
||||
'{"streams": [{ "stream": { "instance": "host123", "ip": "foo", "trace_id": "bar" }, "values": [ [ "0", "foo fizzbuzz bar" ] ] }]}'
|
||||
```
|
||||
|
||||
The duration of requests to `/insert/loki/api/v1/push` can be monitored with `vl_http_request_duration_seconds{path="/insert/loki/api/v1/push"}` metric.
|
||||
|
||||
See also:
|
||||
|
||||
- [How to debug data ingestion](#troubleshooting).
|
||||
- [HTTP parameters, which can be passed to the API](#http-parameters).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
|
||||
### OpenTelemetry API
|
||||
|
||||
VictoriaLogs accepts logs in [OpenTelemetry format](https://opentelemetry.io/docs/specs/otel/logs/data-model/) at the `/insert/opentelemetry/v1/logs` HTTP endpoint.
|
||||
See more details [in these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/opentelemetry/).
|
||||
|
||||
### HTTP parameters
|
||||
|
||||
VictoriaLogs accepts the following configuration parameters via [HTTP headers](https://en.wikipedia.org/wiki/List_of_HTTP_header_fields)
|
||||
or via [HTTP query string args](https://en.wikipedia.org/wiki/Query_string) at [data ingestion HTTP APIs](#http-apis).
|
||||
HTTP query string parameters have priority over HTTP Headers.
|
||||
|
||||
#### HTTP Query string parameters
|
||||
|
||||
All the [HTTP-based data ingestion protocols](#http-apis) support the following [HTTP query string](https://en.wikipedia.org/wiki/Query_string) args:
|
||||
|
||||
- `_msg_field` - the name of the [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
containing [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field).
|
||||
This is usually the `message` field for Filebeat and Logstash.
|
||||
|
||||
The `_msg_field` arg may contain comma-separated list of field names. In this case the first non-empty field from the list
|
||||
is treated as [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field).
|
||||
|
||||
If the `_msg_field` arg isn't set, then VictoriaLogs reads the log message from the `_msg` field. If the `_msg` field is empty,
|
||||
then it is set to `-defaultMsgValue` command-line flag value.
|
||||
|
||||
- `_time_field` - the name of the [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
containing [log timestamp](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field).
|
||||
This is usually the `@timestamp` field for Filebeat and Logstash.
|
||||
|
||||
The `_time_field` arg may contain comma-separated list of field names. In this case the first non-empty field from the list
|
||||
is used as log timestamp.
|
||||
|
||||
If the `_time_field` arg isn't set, then VictoriaLogs reads the timestamp from the `_time` field. If this field doesn't exist, then the current timestamp is used.
|
||||
|
||||
- `_stream_fields` - comma-separated list of [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) names,
|
||||
which uniquely identify every [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
|
||||
If the `_stream_fields` arg isn't set, then all the ingested logs are written to default log stream - `{}`.
|
||||
|
||||
- `ignore_fields` - an optional comma-separated list of [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) names,
|
||||
which must be ignored during data ingestion. The list may contain field name prefixes ending with `*` such as `some-prefix*`.
|
||||
In this case all the log fields starting with `some-prefix` are ignored.
|
||||
|
||||
- `decolorize_fields` - an optional comma-separated list of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
where ANSI color codes must be removed during data ingestion. The list may contain field name prefixes ending with `*` such as `some-prefix*`.
|
||||
In this case all ANSI color codes are removed from all the fields starting with `some-prefix`.
|
||||
|
||||
- `extra_fields` - an optional comma-separated list of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model),
|
||||
which must be added to all the ingested logs. The format of every `extra_fields` entry is `field_name=field_value`.
|
||||
If the log entry contains fields listed in `extra_fields`, then they are overwritten by the values specified in `extra_fields`.
|
||||
|
||||
- `debug` - if this arg is set to `1`, then the ingested logs aren't stored in VictoriaLogs. Instead,
|
||||
the ingested data is logged by VictoriaLogs, so it can be investigated later.
|
||||
|
||||
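For example, the following request (a sketch with illustrative field names) ingests a single log entry via the [JSON stream API](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api), reads the log message from the `message` field and the timestamp from the `ts` field, uses the `host` field as the only stream field and drops the `password` field:

```sh
# Hypothetical example: the message, ts, host and password field names are illustrative.
curl -X POST 'http://localhost:9428/insert/jsonline?_msg_field=message&_time_field=ts&_stream_fields=host&ignore_fields=password' \
  --data-raw '{"message":"user logged in","ts":"2023-06-20T15:32:10Z","host":"host123","password":"secret"}'
```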
See also [HTTP headers](#http-headers).
|
||||
|
||||
#### HTTP headers
|
||||
|
||||
All the [HTTP-based data ingestion protocols](#http-apis) support the following [HTTP Headers](https://en.wikipedia.org/wiki/List_of_HTTP_header_fields)
|
||||
additionally to [HTTP query args](#http-query-string-parameters):
|
||||
|
||||
- `AccountID` - accountID of the tenant to ingest data to. See [multitenancy docs](https://docs.victoriametrics.com/victorialogs/#multitenancy) for details.
|
||||
|
||||
- `ProjectID` - projectID of the tenant to ingest data to. See [multitenancy docs](https://docs.victoriametrics.com/victorialogs/#multitenancy) for details.
|
||||
|
||||
- `VL-Msg-Field` - the name of the [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
containing [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field).
|
||||
This is usually the `message` field for Filebeat and Logstash.
|
||||
|
||||
The `VL-Msg-Field` header may contain a comma-separated list of field names. In this case the first non-empty field from the list
|
||||
is treated as [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field).
|
||||
|
||||
If the `VL-Msg-Field` header isn't set, then VictoriaLogs reads log message from the `_msg` field. If the `_msg` field is empty,
|
||||
then it is set to the value of the `-defaultMsgValue` command-line flag.
|
||||
|
||||
- `VL-Time-Field` - the name of the [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
containing [log timestamp](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field).
|
||||
This is usually the `@timestamp` field for Filebeat and Logstash.
|
||||
|
||||
The `VL-Time-Field` header may contain a comma-separated list of field names. In this case the first non-empty field from the list
|
||||
is treated as log timestamp.
|
||||
|
||||
If the `VL-Time-Field` header isn't set, then VictoriaLogs reads the timestamp from the `_time` field. If this field doesn't exist, then the current timestamp is used.
|
||||
|
||||
- `VL-Stream-Fields` - comma-separated list of [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) names,
|
||||
which uniquely identify every [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
|
||||
If the `VL-Stream-Fields` header isn't set, then all the ingested logs are written to default log stream - `{}`.
|
||||
|
||||
- `VL-Ignore-Fields` - an optional comma-separated list of [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) names,
|
||||
which must be ignored during data ingestion. The list may contain field name prefixes ending with `*` such as `some-prefix*`.
|
||||
In this case all the log fields starting with `some-prefix` are ignored.
|
||||
|
||||
- `VL-Decolorize-Fields` - an optional comma-separated list of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
where ANSI color codes must be removed during data ingestion. The list may contain field name prefixes ending with `*` such as `some-prefix*`.
|
||||
In this case all ANSI color codes are removed from all the log fields starting with `some-prefix`.
|
||||
|
||||
- `VL-Extra-Fields` - an optional comma-separated list of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model),
|
||||
which must be added to all the ingested logs. The format of every `VL-Extra-Fields` entry is `field_name=field_value`.
|
||||
If the log entry contains fields listed in `VL-Extra-Fields`, then they are overwritten by the values specified in `VL-Extra-Fields`.
|
||||
|
||||
- `VL-Debug` - if this header is set to `1`, then the ingested logs aren't stored in VictoriaLogs. Instead,
|
||||
the ingested data is logged by VictoriaLogs, so it can be investigated later.
|
||||
|
||||
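For example, the following sketch passes the same kind of settings via HTTP headers and stores the ingested log entry in the `(AccountID=12, ProjectID=34)` tenant (the `message`, `ts` and `host` field names are illustrative):

```sh
# Hypothetical example: field names are illustrative.
curl -X POST 'http://localhost:9428/insert/jsonline' \
  -H 'AccountID: 12' \
  -H 'ProjectID: 34' \
  -H 'VL-Msg-Field: message' \
  -H 'VL-Time-Field: ts' \
  -H 'VL-Stream-Fields: host' \
  --data-raw '{"message":"user logged in","ts":"2023-06-20T15:32:10Z","host":"host123"}'
```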
See also [HTTP Query string parameters](#http-query-string-parameters).
|
||||
|
||||
## Decolorizing
|
||||
|
||||
If the ingested logs contain [ANSI color codes](https://en.wikipedia.org/wiki/ANSI_escape_code), then it is recommended dropping these color codes before
|
||||
storing the logs in VictoriaLogs. This simplifies further querying and analysis of such logs.
|
||||
|
||||
Decolorizing can be done either at the log collector / shipper side or at the VictoriaLogs side with the `decolorize_fields` HTTP query arg
|
||||
or the `VL-Decolorize-Fields` HTTP request header according to [these docs](#http-parameters).
|
||||
|
||||
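For example, the following sketch removes ANSI color codes from the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) of the ingested log entry via the `decolorize_fields` query arg:

```sh
# Hypothetical example: the log entry contains an escaped ANSI color code in the _msg field.
curl -X POST 'http://localhost:9428/insert/jsonline?decolorize_fields=_msg' \
  --data-raw '{"_msg":"\u001b[31merror\u001b[0m: connection refused","host":"host123"}'
```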
## Troubleshooting
|
||||
|
||||
The following command can be used for verifying whether the data is successfully ingested into VictoriaLogs:
|
||||
|
||||
```sh
|
||||
curl http://localhost:9428/select/logsql/query -d 'query=*' | head
|
||||
```
|
||||
|
||||
This command selects all the data ingested into VictoriaLogs via [HTTP query API](https://docs.victoriametrics.com/victorialogs/querying/#http-api)
|
||||
using [any value filter](https://docs.victoriametrics.com/victorialogs/logsql/#any-value-filter),
|
||||
while `head` cancels query execution after reading the first 10 log lines. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#command-line)
|
||||
for more details on how `head` integrates with VictoriaLogs.
|
||||
|
||||
The response by default contains all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
|
||||
See [how to query specific fields](https://docs.victoriametrics.com/victorialogs/logsql/#querying-specific-fields).
|
||||
|
||||
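For example, the following sketch (assuming the `fields` pipe described in the LogsQL docs) returns only the `_time` and `_msg` fields of the matching logs:

```sh
curl http://localhost:9428/select/logsql/query -d 'query=* | fields _time, _msg' | head
```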
VictoriaLogs provides the following command-line flags, which can help debugging data ingestion issues:
|
||||
|
||||
- `-logNewStreams` - if this flag is passed to VictoriaLogs, then it logs all the newly
|
||||
registered [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
This may help debugging [high cardinality issues](https://docs.victoriametrics.com/victorialogs/keyconcepts/#high-cardinality).
|
||||
- `-logIngestedRows` - if this flag is passed to VictoriaLogs, then it logs all the ingested
|
||||
[log entries](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
|
||||
See also `debug` [parameter](#http-parameters).
|
||||
|
||||
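For example, the following sketch ingests a single log entry in debug mode, so the entry isn't stored, but is logged by VictoriaLogs and can be inspected in its own logs (the `message` field name is illustrative):

```sh
# Hypothetical example: the ingested entry is only logged, not stored, because of debug=1.
curl -X POST 'http://localhost:9428/insert/jsonline?_msg_field=message&debug=1' \
  --data-raw '{"message":"test log entry","host":"host123"}'
```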
VictoriaLogs exposes various [metrics](https://docs.victoriametrics.com/victorialogs/#monitoring), which may help debugging data ingestion issues:
|
||||
|
||||
- `vl_rows_ingested_total` - the number of ingested [log entries](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
since the last VictoriaLogs restart. If this number increases over time, then logs are successfully ingested into VictoriaLogs.
|
||||
The ingested logs can be inspected in the following ways:
|
||||
- By passing `debug=1` parameter to every request to [data ingestion APIs](#http-apis). The ingested rows aren't stored in VictoriaLogs
|
||||
in this case. Instead, they are logged, so they can be investigated later.
|
||||
The `vl_rows_dropped_total` [metric](https://docs.victoriametrics.com/victorialogs/#monitoring) is incremented for each logged row.
|
||||
- By passing `-logIngestedRows` command-line flag to VictoriaLogs. In this case it logs all the ingested data, so it can be investigated later.
|
||||
- `vl_streams_created_total` - the number of created [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
|
||||
since the last VictoriaLogs restart. If this metric grows rapidly during extended periods of time, then this may lead
|
||||
to [high cardinality issues](https://docs.victoriametrics.com/victorialogs/keyconcepts/#high-cardinality).
|
||||
The newly created log streams can be inspected in logs by passing `-logNewStreams` command-line flag to VictoriaLogs.
|
||||
|
||||
## Log collectors and data ingestion formats
|
||||
|
||||
Here is the list of log collectors and their ingestion formats supported by VictoriaLogs:
|
||||
|
||||
| How to setup the collector | Format: Elasticsearch | Format: JSON Stream | Format: Loki | Format: syslog | Format: OpenTelemetry | Format: Journald | Format: DataDog |
|
||||
|----------------------------|-----------------------|---------------------|--------------|----------------|-----------------------|------------------|-----------------|
|
||||
| [Rsyslog](https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/) | [Yes](https://www.rsyslog.com/doc/configuration/modules/omelasticsearch.html) | No | No | [Yes](https://www.rsyslog.com/doc/configuration/modules/omfwd.html) | No | No | No |
|
||||
| [Syslog-ng](https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/) | Yes, [v1](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.16/administration-guide/28#TOPIC-956489), [v2](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.16/administration-guide/29#TOPIC-956494) | No | No | [Yes](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.16/administration-guide/44#TOPIC-956553) | No | No | No |
|
||||
| [Filebeat](https://docs.victoriametrics.com/victorialogs/data-ingestion/filebeat/) | [Yes](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) | No | No | No | No | No | No |
|
||||
| [Fluentbit](https://docs.victoriametrics.com/victorialogs/data-ingestion/fluentbit/) | No | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/http) | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/loki) | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/syslog) | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/opentelemetry) | No | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/datadog) |
|
||||
| [Logstash](https://docs.victoriametrics.com/victorialogs/data-ingestion/logstash/) | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) | No | No | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-syslog.html) | [Yes](https://github.com/paulgrav/logstash-output-opentelemetry) | No | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-datadog.html) |
|
||||
| [Vector](https://docs.victoriametrics.com/victorialogs/data-ingestion/vector/) | [Yes](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) | [Yes](https://vector.dev/docs/reference/configuration/sinks/http/) | [Yes](https://vector.dev/docs/reference/configuration/sinks/loki/) | No | No | No | [Yes](https://vector.dev/docs/reference/configuration/sinks/datadog_logs/) |
|
||||
| [Promtail](https://docs.victoriametrics.com/victorialogs/data-ingestion/promtail/) | No | No | [Yes](https://grafana.com/docs/loki/latest/clients/promtail/configuration/#clients) | No | No | No | No |
|
||||
| [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) | [Yes](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter) | No | [Yes](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/lokiexporter) | [Yes](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/syslogexporter) | [Yes](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter) | No | [Yes](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/datadogexporter) |
|
||||
| [Telegraf](https://docs.victoriametrics.com/victorialogs/data-ingestion/telegraf/) | [Yes](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/elasticsearch) | [Yes](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/http) | [Yes](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/loki) | [Yes](https://github.com/influxdata/telegraf/blob/master/plugins/outputs/syslog) | Yes | No | No |
|
||||
| [Fluentd](https://docs.victoriametrics.com/victorialogs/data-ingestion/fluentd/) | [Yes](https://github.com/uken/fluent-plugin-elasticsearch) | [Yes](https://docs.fluentd.org/output/http) | [Yes](https://grafana.com/docs/loki/latest/send-data/fluentd/) | [Yes](https://github.com/fluent-plugins-nursery/fluent-plugin-remote_syslog) | No | No | No |
|
||||
| [Journald](https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/) | No | No | No | No | No | Yes | No |
|
||||
| [DataDog Agent](https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent) | No | No | No | No | No | No | Yes |
|
||||
|
||||
@@ -1,94 +0,0 @@
|
||||
---
|
||||
weight: 5
|
||||
title: Telegraf setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 5
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/Telegraf.html
|
||||
---
|
||||
VictoriaLogs supports the following Telegraf outputs:
|
||||
- [Elasticsearch](#elasticsearch)
|
||||
- [HTTP JSON](#http)
|
||||
|
||||
## Elasticsearch
|
||||
|
||||
Specify [Elasticsearch output](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/elasticsearch) in the `telegraf.toml`
|
||||
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
|
||||
|
||||
```toml
|
||||
[[outputs.elasticsearch]]
|
||||
urls = ["http://localhost:9428/insert/elasticsearch"]
|
||||
timeout = "1m"
|
||||
flush_interval = "30s"
|
||||
enable_sniffer = false
|
||||
health_check_interval = "0s"
|
||||
index_name = "device_log-%Y.%m.%d"
|
||||
manage_template = false
|
||||
template_name = "telegraf"
|
||||
overwrite_template = false
|
||||
namepass = ["tail"]
|
||||
[outputs.elasticsearch.headers]
|
||||
"VL-Msg-Field" = "tail.value"
|
||||
"VL-Time-Field" = "@timestamp"
|
||||
"VL-Stream-Fields" = "tag.log_source,tag.metric_type"
|
||||
|
||||
[[inputs.tail]]
|
||||
files = ["/tmp/telegraf.log"]
|
||||
from_beginning = false
|
||||
interval = "10s"
|
||||
pipe = false
|
||||
watch_method = "inotify"
|
||||
data_format = "value"
|
||||
data_type = "string"
|
||||
character_encoding = "utf-8"
|
||||
[inputs.tail.tags]
|
||||
metric_type = "logs"
|
||||
log_source = "telegraf"
|
||||
```
|
||||
|
||||
|
||||
## HTTP
|
||||
|
||||
Specify [HTTP output](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/http) in the `telegraf.toml` with batch mode disabled
|
||||
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
|
||||
|
||||
```toml
|
||||
[[inputs.tail]]
|
||||
files = ["/tmp/telegraf.log"]
|
||||
from_beginning = false
|
||||
interval = "10s"
|
||||
pipe = false
|
||||
watch_method = "inotify"
|
||||
data_format = "value"
|
||||
data_type = "string"
|
||||
character_encoding = "utf-8"
|
||||
[inputs.tail.tags]
|
||||
metric_type = "logs"
|
||||
log_source = "telegraf"
|
||||
|
||||
[[outputs.http]]
|
||||
url = "http://localhost:9428/insert/jsonline?_msg_field=fields.message&_time_field=timestamp,_stream_fields=tags.log_source,tags.metric_type"
|
||||
data_format = "json"
|
||||
namepass = ["docker_log"]
|
||||
use_batch_format = false
|
||||
```
|
||||
|
||||
Substitute the `localhost:9428` address inside the `urls` and `url` options with the real TCP address of VictoriaLogs.
|
||||
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-headers) for details on headers specified
|
||||
in the `[[outputs.elasticsearch]]` section.
|
||||
|
||||
It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
|
||||
See also:
|
||||
|
||||
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
- [Elasticsearch output docs for Telegraf](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/elasticsearch).
|
||||
- [Docker-compose demo for Telegraf integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/telegraf).
|
||||
@@ -1,177 +0,0 @@
|
||||
---
|
||||
weight: 20
|
||||
title: Vector setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 20
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/Vector.html
|
||||
- /victorialogs/data-ingestion/vector.html
|
||||
---
|
||||
|
||||
VictoriaLogs can accept logs from [Vector](https://vector.dev/) via the following protocols:
|
||||
|
||||
- Elasticsearch - see [these docs](#elasticsearch)
|
||||
- HTTP JSON - see [these docs](#http)
|
||||
|
||||
## Elasticsearch
|
||||
|
||||
Specify [Elasticsearch sink type](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) in the `vector.yaml`
|
||||
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
|
||||
|
||||
```yaml
|
||||
sinks:
|
||||
vlogs:
|
||||
inputs:
|
||||
- your_input
|
||||
type: elasticsearch
|
||||
endpoints:
|
||||
- http://localhost:9428/insert/elasticsearch/
|
||||
api_version: v8
|
||||
compression: gzip
|
||||
healthcheck:
|
||||
enabled: false
|
||||
query:
|
||||
_msg_field: message
|
||||
_time_field: timestamp
|
||||
_stream_fields: host,container_name
|
||||
```
|
||||
|
||||
Replace `your_input` with the name of the `inputs` section, which collects logs. See [these docs](https://vector.dev/docs/reference/configuration/sources/) for details.
|
||||
|
||||
Substitute the `localhost:9428` address inside `endpoints` section with the real TCP address of VictoriaLogs.
|
||||
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on parameters specified
|
||||
in the `sinks.vlogs.query` section.
|
||||
|
||||
It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
This can be done by specifying `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters)
|
||||
in the `sinks.vlogs.query` section and inspecting VictoriaLogs logs then:
|
||||
|
||||
```yaml
|
||||
sinks:
|
||||
vlogs:
|
||||
inputs:
|
||||
- your_input
|
||||
type: elasticsearch
|
||||
endpoints:
|
||||
- http://localhost:9428/insert/elasticsearch/
|
||||
api_version: v8
|
||||
compression: gzip
|
||||
healthcheck:
|
||||
enabled: false
|
||||
query:
|
||||
_msg_field: message
|
||||
_time_field: timestamp
|
||||
_stream_fields: host,container_name
|
||||
debug: "1"
|
||||
```
|
||||
|
||||
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
|
||||
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
|
||||
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
|
||||
|
||||
```yaml
|
||||
sinks:
|
||||
vlogs:
|
||||
inputs:
|
||||
- your_input
|
||||
type: elasticsearch
|
||||
endpoints:
|
||||
- http://localhost:9428/insert/elasticsearch/
|
||||
api_version: v8
|
||||
compression: gzip
|
||||
healthcheck:
|
||||
enabled: false
|
||||
query:
|
||||
_msg_field: message
|
||||
_time_field: timestamp
|
||||
_stream_fields: host,container_name
|
||||
ignore_fields: log.offset,event.original
|
||||
```
|
||||
|
||||
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/keyconcepts/#multitenancy).
|
||||
If you need to store logs in another tenant, then specify the needed tenant via the `sinks.vlogs.request.headers` section.
|
||||
For example, the following `vector.yaml` config instructs Vector to store the data to `(AccountID=12, ProjectID=34)` tenant:
|
||||
|
||||
```yaml
|
||||
sinks:
|
||||
vlogs:
|
||||
inputs:
|
||||
- your_input
|
||||
type: elasticsearch
|
||||
endpoints:
|
||||
- http://localhost:9428/insert/elasticsearch/
|
||||
mode: bulk
|
||||
api_version: v8
|
||||
healthcheck:
|
||||
enabled: false
|
||||
query:
|
||||
_msg_field: message
|
||||
_time_field: timestamp
|
||||
_stream_fields: host,container_name
|
||||
request:
|
||||
headers:
|
||||
AccountID: "12"
|
||||
ProjectID: "34"
|
||||
```
|
||||
|
||||
## HTTP
|
||||
|
||||
Vector can be configured with [HTTP sink type](https://vector.dev/docs/reference/configuration/sinks/http/)
|
||||
for sending data to VictoriaLogs via [JSON stream API](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api) format:
|
||||
|
||||
```yaml
|
||||
sinks:
|
||||
vlogs:
|
||||
inputs:
|
||||
- your_input
|
||||
type: http
|
||||
uri: http://localhost:9428/insert/jsonline?_stream_fields=host,container_name&_msg_field=message&_time_field=timestamp
|
||||
compression: gzip
|
||||
encoding:
|
||||
codec: json
|
||||
framing:
|
||||
method: newline_delimited
|
||||
healthcheck:
|
||||
enabled: false
|
||||
```
|
||||
|
||||
Replace `your_input` with the name of the `inputs` section, which collects logs. See [these docs](https://vector.dev/docs/reference/configuration/sources/) for details.
|
||||
|
||||
Substitute the `localhost:9428` address inside the `uri` option with the real TCP address of VictoriaLogs.
|
||||
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on parameters specified
|
||||
in the query args of the uri (`_stream_fields`, `_msg_field` and `_time_field`).
|
||||
|
||||
It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
This can be done by specifying `debug` [query arg](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) in the `uri`:
|
||||
|
||||
```yaml
|
||||
sinks:
|
||||
vlogs:
|
||||
inputs:
|
||||
- your_input
|
||||
type: http
|
||||
uri: http://localhost:9428/insert/jsonline?_stream_fields=host,container_name&_msg_field=message&_time_field=timestamp&debug=1
|
||||
compression: gzip
|
||||
encoding:
|
||||
codec: json
|
||||
framing:
|
||||
method: newline_delimited
|
||||
healthcheck:
|
||||
enabled: false
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
- [Elasticsearch output docs for Vector](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/).
|
||||
- [Docker-compose demo for Vector integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector).
|
||||
@@ -1,15 +0,0 @@
|
||||
---
|
||||
title: Data ingestion
|
||||
weight: 3
|
||||
menu:
|
||||
docs:
|
||||
identifier: victorialogs-data-ingestion
|
||||
parent: "victorialogs"
|
||||
weight: 3
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/
|
||||
- /victorialogs/data-ingestion/index.html
|
||||
---
|
||||
{{% content "README.md" %}}
|
||||
@@ -1,112 +0,0 @@
|
||||
---
|
||||
weight: 4
|
||||
title: OpenTelemetry setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 4
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/OpenTelemetry.html
|
||||
---
|
||||
VictoriaLogs supports both the OpenTelemetry client [SDK](https://opentelemetry.io/docs/languages/) and the [collector](https://opentelemetry.io/docs/collector/).
|
||||
|
||||
## Client SDK
|
||||
|
||||
Set `EndpointURL` for the HTTP exporter builder to `/insert/opentelemetry/v1/logs`.
|
||||
|
||||
Consider the following example for Go SDK:
|
||||
|
||||
```go
|
||||
logExporter, err := otlploghttp.New(ctx,
|
||||
otlploghttp.WithEndpointURL("http://victorialogs:9428/insert/opentelemetry/v1/logs"),
|
||||
)
|
||||
```
|
||||
|
||||
VictoriaLogs treats all the resource labels as [log stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
The list of log stream fields can be overridden via `VL-Stream-Fields` HTTP header if needed. For example, the following config uses only `host` and `app`
|
||||
labels as log stream fields, while the remaining labels are stored as [regular log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model):
|
||||
|
||||
```go
|
||||
logExporter, err := otlploghttp.New(ctx,
|
||||
otlploghttp.WithEndpointURL("http://victorialogs:9428/insert/opentelemetry/v1/logs"),
|
||||
otlploghttp.WithHeaders(map[string]string{
|
||||
"VL-Stream-Fields": "host,app",
|
||||
}),
|
||||
)
|
||||
```
|
||||
|
||||
VictoriaLogs supports other HTTP headers - see the list [here](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-headers).
|
||||
|
||||
The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
|
||||
## Collector configuration
|
||||
|
||||
VictoriaLogs supports receiving logs from the [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) via the following exporters:
|
||||
|
||||
* [Elasticsearch](#elasticsearch)
|
||||
* [OpenTelemetry](#opentelemetry)
|
||||
|
||||
### Elasticsearch
|
||||
|
||||
```yaml
|
||||
exporters:
|
||||
elasticsearch:
|
||||
endpoints:
|
||||
- http://victorialogs:9428/insert/elasticsearch
|
||||
receivers:
|
||||
filelog:
|
||||
include: [/tmp/logs/*.log]
|
||||
resource:
|
||||
region: us-east-1
|
||||
service:
|
||||
pipelines:
|
||||
logs:
|
||||
receivers: [filelog]
|
||||
exporters: [elasticsearch]
|
||||
```
|
||||
|
||||
If the Elasticsearch exporter stores the log message in a field other than [`_msg`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field),
|
||||
then it can be moved to `_msg` field by using the `VL-Msg-Field` HTTP header. For example, if the log message is stored in the `Body` field,
|
||||
then it can be moved to `_msg` field via the following config:
|
||||
|
||||
```yaml
|
||||
exporters:
|
||||
elasticsearch:
|
||||
endpoints:
|
||||
- http://victorialogs:9428/insert/elasticsearch
|
||||
headers:
|
||||
VL-Msg-Field: Body
|
||||
```
|
||||
|
||||
VictoriaLogs supports other HTTP headers - see the list [here](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-headers).
|
||||
|
||||
### OpenTelemetry
|
||||
|
||||
Specify logs endpoint for [OTLP/HTTP exporter](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlphttpexporter/README.md) in configuration file
|
||||
for sending the collected logs to VictoriaLogs:
|
||||
|
||||
```yaml
|
||||
exporters:
|
||||
otlphttp:
|
||||
logs_endpoint: http://localhost:9428/insert/opentelemetry/v1/logs
|
||||
```
|
||||
|
||||
VictoriaLogs supports various HTTP headers, which can be used during data ingestion - see the list [here](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-headers).
|
||||
These headers can be passed to the OpenTelemetry exporter config via the `headers` option. For example, the following config instructs VictoriaLogs to ignore the `foo` and `bar` fields during data ingestion:
|
||||
|
||||
```yaml
|
||||
exporters:
|
||||
otlphttp:
|
||||
logs_endpoint: http://localhost:9428/insert/opentelemetry/v1/logs
|
||||
headers:
|
||||
VL-Ignore-Fields: foo,bar
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
* [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
|
||||
* [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
* [Docker-compose demo for OpenTelemetry collector integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/opentelemetry-collector).
|
||||
@@ -1,228 +0,0 @@
|
||||
---
|
||||
weight: 10
|
||||
title: Syslog setup
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-data-ingestion"
|
||||
weight: 10
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/data-ingestion/syslog.html
|
||||
---
|
||||
[VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) can accept logs in [Syslog formats](https://en.wikipedia.org/wiki/Syslog) at the specified TCP and UDP addresses
|
||||
via `-syslog.listenAddr.tcp` and `-syslog.listenAddr.udp` command-line flags. The following syslog formats are supported:
|
||||
|
||||
- [RFC3164](https://datatracker.ietf.org/doc/html/rfc3164) aka `<PRI>MMM DD hh:mm:ss HOSTNAME APP-NAME[PROCID]: MESSAGE`
|
||||
- [RFC5424](https://datatracker.ietf.org/doc/html/rfc5424) aka `<PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [STRUCTURED-DATA] MESSAGE`
|
||||
|
||||
For example, the following command starts VictoriaLogs, which accepts logs in Syslog format at TCP port 514 on all the network interfaces:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:514
|
||||
```
|
||||
|
||||
You may need to run VictoriaLogs under the `root` user or to set the [`CAP_NET_BIND_SERVICE`](https://superuser.com/questions/710253/allow-non-root-process-to-bind-to-port-80-and-443)
|
||||
capability if syslog messages must be accepted at a TCP port below 1024.
|
||||
|
||||
The following command starts VictoriaLogs, which accepts logs in Syslog format at TCP and UDP ports 514:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.listenAddr.udp=:514
|
||||
```
|
||||
|
||||
VictoriaLogs can accept logs from the following syslog collectors:
|
||||
|
||||
- [Rsyslog](https://www.rsyslog.com/). See [these docs](#rsyslog).
|
||||
- [Syslog-ng](https://www.syslog-ng.com/). See [these docs](#syslog-ng).
|
||||
|
||||
Multiple logs in Syslog format can be ingested via a single TCP connection or via a single UDP packet - just put every log on a separate line
|
||||
and delimit them with `\n` char.
|
||||
|
||||
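For example, the following sketch sends two RFC5424 lines over a single TCP connection with `nc`, assuming VictoriaLogs listens for syslog messages at TCP port 514 on `victoria-logs-server`:

```sh
printf '<165>1 2023-06-20T15:32:10Z host123 myapp 123 ID47 - first message\n<165>1 2023-06-20T15:32:11Z host123 myapp 123 ID47 - second message\n' \
  | nc victoria-logs-server 514
```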
VictoriaLogs automatically extracts the following [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
|
||||
from the received Syslog lines:
|
||||
|
||||
- [`_time`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) - log timestamp. See also [log timestamps](#log-timestamps)
|
||||
- [`_msg`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) - the `MESSAGE` field from the supported syslog formats above
|
||||
- `hostname`, `app_name` and `proc_id` - for unique identification of [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
It is possible to change the list of fields for log streams - see [these docs](#stream-fields).
|
||||
- `level` - string representation of the log level according to the `<PRI>` field value
|
||||
- `priority`, `facility` and `severity` - these fields are extracted from `<PRI>` field
|
||||
- `facility_keyword` - string representation of the `facility` field according to [these docs](https://en.wikipedia.org/wiki/Syslog#Facility)
|
||||
- `format` - this field is set to either `rfc3164` or `rfc5424` depending on the format of the parsed syslog line
|
||||
- `msg_id` - `MSGID` field from log line in `RFC5424` format.
|
||||
|
||||
The `[STRUCTURED-DATA]` is parsed into fields with the `SD-ID.param1`, `SD-ID.param2`, ..., `SD-ID.paramN` names and the corresponding values
|
||||
according to [the specification](https://datatracker.ietf.org/doc/html/rfc5424#section-6.3).
|
||||
|
||||
By default, the local timezone is used when parsing timestamps in `rfc3164` lines. This can be changed to any desired timezone via the `-syslog.timezone` command-line flag.
|
||||
See [the list of supported timezone identifiers](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For example, the following command starts VictoriaLogs,
|
||||
which parses syslog timestamps in `rfc3164` using `Europe/Berlin` timezone:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.timezone='Europe/Berlin'
|
||||
```
|
||||
|
||||
The ingested logs can be queried via [logs querying API](https://docs.victoriametrics.com/victorialogs/querying/#http-api). For example, the following command
|
||||
returns ingested logs for the last 5 minutes by using [time filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter):
|
||||
|
||||
```sh
|
||||
curl http://localhost:9428/select/logsql/query -d 'query=_time:5m'
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [Log timestamps](#log-timestamps)
|
||||
- [Security](#security)
|
||||
- [Compression](#compression)
|
||||
- [Multitenancy](#multitenancy)
|
||||
- [Stream fields](#stream-fields)
|
||||
- [Dropping fields](#dropping-fields)
|
||||
- [Decolorizing fields](#decolorizing-fields)
|
||||
- [Adding extra fields](#adding-extra-fields)
|
||||
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
|
||||
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
|
||||
## Log timestamps
|
||||
|
||||
By default VictoriaLogs uses the timestamp from the parsed Syslog message as [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field).
|
||||
Sometimes the ingested Syslog messages may contain incorrect timestamps (for example, timestamps with incorrect timezone). In this case VictoriaLogs can be configured
|
||||
for using the log ingestion timestamp as [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field). This can be done by specifying
|
||||
`-syslog.useLocalTimestamp.tcp` command-line flag for the corresponding `-syslog.listenAddr.tcp` address:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.useLocalTimestamp.tcp
|
||||
```
|
||||
|
||||
In this case the original timestamp from the Syslog message is stored in `timestamp` [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
|
||||
|
||||
The `-syslog.useLocalTimestamp.udp` command-line flag can be used for instructing VictoriaLogs to use local timestamps for the ingested logs
|
||||
via the corresponding `-syslog.listenAddr.udp` address:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.udp=:514 -syslog.useLocalTimestamp.udp
|
||||
```
|
||||
|
||||
## Security
|
||||
|
||||
By default VictoriaLogs accepts plaintext data at `-syslog.listenAddr.tcp` address. Run VictoriaLogs with `-syslog.tls` command-line flag
|
||||
in order to accept TLS-encrypted logs at `-syslog.listenAddr.tcp` address. The `-syslog.tlsCertFile` and `-syslog.tlsKeyFile` command-line flags
|
||||
must be set to paths to TLS certificate file and TLS key file if `-syslog.tls` is set. For example, the following command
|
||||
starts VictoriaLogs, which accepts TLS-encrypted syslog messages at TCP port 6514:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:6514 -syslog.tls -syslog.tlsCertFile=/path/to/tls/cert -syslog.tlsKeyFile=/path/to/tls/key
|
||||
```
|
||||
|
||||
## Compression
|
||||
|
||||
By default VictoriaLogs accepts uncompressed log messages in Syslog format at `-syslog.listenAddr.tcp` and `-syslog.listenAddr.udp` addresses.
|
||||
It is possible to configure VictoriaLogs to accept compressed log messages via the `-syslog.compressMethod.tcp` and `-syslog.compressMethod.udp` command-line flags.
|
||||
The following compression methods are supported:
|
||||
|
||||
- `none` - no compression
|
||||
- `zstd` - [zstd compression](https://en.wikipedia.org/wiki/Zstd)
|
||||
- `gzip` - [gzip compression](https://en.wikipedia.org/wiki/Gzip)
|
||||
- `deflate` - [deflate compression](https://en.wikipedia.org/wiki/Deflate)
|
||||
|
||||
For example, the following command starts VictoriaLogs, which accepts gzip-compressed syslog messages at TCP port 514:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.compressMethod.tcp=gzip
|
||||
```
|
||||
|
||||
## Multitenancy
|
||||
|
||||
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy).
|
||||
If you need to store logs in another tenant, then specify the needed tenant via the `-syslog.tenantID.tcp` or `-syslog.tenantID.udp` command-line flags
|
||||
depending on whether syslog messages are received over TCP or UDP.
|
||||
For example, the following command starts VictoriaLogs, which writes syslog messages received at TCP port 514, to `(AccountID=12, ProjectID=34)` tenant:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.tenantID.tcp=12:34
|
||||
```
|
||||
|
||||
## Stream fields
|
||||
|
||||
VictoriaLogs uses `(hostname, app_name, proc_id)` fields as labels for [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) by default.
|
||||
It is possible to set a different set of labels via the `-syslog.streamFields.tcp` and `-syslog.streamFields.udp` command-line flags
|
||||
for logs received via the corresponding `-syslog.listenAddr.tcp` and `-syslog.listenAddr.udp` addresses.
|
||||
For example, the following command starts VictoriaLogs, which uses `(hostname, app_name)` fields as log stream labels
|
||||
for logs received at TCP port 514:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.streamFields.tcp='["hostname","app_name"]'
|
||||
```
|
||||
|
||||
## Dropping fields
|
||||
|
||||
VictoriaLogs supports `-syslog.ignoreFields.tcp` and `-syslog.ignoreFields.udp` command-line flags for skipping
|
||||
the given [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) during ingestion
|
||||
of Syslog logs into `-syslog.listenAddr.tcp` and `-syslog.listenAddr.udp` addresses.
|
||||
For example, the following command starts VictoriaLogs, which drops `proc_id` and `msg_id` fields from logs received at TCP port 514:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.ignoreFields.tcp='["proc_id","msg_id"]'
|
||||
```
|
||||
|
||||
The list may contain field name prefixes ending with `*` such as `some-prefix*`. In this case all the log fields starting with this prefix
|
||||
are ignored during data ingestion.
|
||||
|
||||
## Decolorizing fields
|
||||
|
||||
VictoriaLogs supports `-syslog.decolorizeFields.tcp` and `-syslog.decolorizeFields.udp` command-line flags,
|
||||
which can be used for removing ANSI color codes from the provided list of fields during ingestion of Syslog logs
|
||||
into `-syslog.listenAddr.tcp` and `-syslog.listenAddr.udp` addresses.
|
||||
For example, the following command starts VictoriaLogs, which removes ANSI color codes from [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field)
|
||||
at logs received via TCP port 514:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.decolorizeFields.tcp='["_msg"]'
|
||||
```
|
||||
|
||||
## Adding extra fields
|
||||
|
||||
VictoriaLogs supports `-syslog.extraFields.tcp` and `-syslog.extraFields.udp` command-line flags for adding
|
||||
the given [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) during data ingestion
|
||||
of Syslog logs into `-syslog.listenAddr.tcp` and `-syslog.listenAddr.udp` addresses.
|
||||
For example, the following command starts VictoriaLogs, which adds `source=foo` and `abc=def` fields to logs received at TCP port 514:
|
||||
|
||||
```sh
|
||||
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.extraFields.tcp='{"source":"foo","abc":"def"}'
|
||||
```
|
||||
|
||||
## Multiple configs
|
||||
|
||||
VictoriaLogs can accept syslog messages via multiple TCP and UDP ports with individual configurations for [log timestamps](#log-timestamps), [compression](#compression), [security](#security)
|
||||
and [multitenancy](#multitenancy). Specify multiple command-line flags for this. For example, the following command starts VictoriaLogs,
|
||||
which accepts gzip-compressed syslog messages via TCP port 514 at localhost interface and stores them to [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy) `123:0`,
|
||||
plus it accepts TLS-encrypted syslog messages via TCP port 6514 and stores them to [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy) `567:0`:
|
||||
|
||||
```sh
|
||||
./victoria-logs \
|
||||
-syslog.listenAddr.tcp=localhost:514 -syslog.tenantID.tcp=123:0 -syslog.compressMethod.tcp=gzip -syslog.tls=false -syslog.tlsKeyFile='' -syslog.tlsCertFile='' \
|
||||
-syslog.listenAddr.tcp=:6514 -syslog.tenantID.tcp=567:0 -syslog.compressMethod.tcp=none -syslog.tls=true -syslog.tlsKeyFile=/path/to/tls/key -syslog.tlsCertFile=/path/to/tls/cert
|
||||
```
|
||||
|
||||
## Rsyslog
|
||||
|
||||
1. Run VictoriaLogs with `-syslog.listenAddr.tcp=:29514` command-line flag.
|
||||
1. Put the following line to [rsyslog](https://www.rsyslog.com/) config (this config is usually located at `/etc/rsyslog.conf`):
|
||||
```
|
||||
*.* @@victoria-logs-server:29514
|
||||
```
|
||||
Where `victoria-logs-server` is the hostname where VictoriaLogs runs. See [these docs](https://www.rsyslog.com/sending-messages-to-a-remote-syslog-server/)
|
||||
for more details.
|
||||
|
||||
## Syslog-ng
|
||||
|
||||
1. Run VictoriaLogs with `-syslog.listenAddr.tcp=:29514` command-line flag.
|
||||
1. Put the following line to [syslog-ng](https://www.syslog-ng.com/) config:
|
||||
```
|
||||
destination d_remote {
|
||||
tcp("victoria-logs-server" port(29514));
|
||||
};
|
||||
```
|
||||
Where `victoria-logs-server` is the hostname where VictoriaLogs runs.
|
||||
See [these docs](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.19/administration-guide/29#TOPIC-1094570) for details.
|
||||
@@ -1,265 +0,0 @@
|
||||
---
|
||||
weight: 2
|
||||
title: Key concepts
|
||||
menu:
|
||||
docs:
|
||||
identifier: vl-key-concepts
|
||||
parent: victorialogs
|
||||
weight: 2
|
||||
title: Key concepts
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/keyConcepts.html
|
||||
- /victorialogs/keyConcepts/
|
||||
---
|
||||
## Data model
|
||||
|
||||
[VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) works with both structured and unstructured logs.
|
||||
Every log entry must contain at least [log message field](#message-field). Arbitrary number of additional `key=value` fields can be added to the log entry.
|
||||
A single log entry can be expressed as a single-level [JSON](https://www.json.org/json-en.html) object with string keys and string values.
|
||||
For example:
|
||||
|
||||
```json
|
||||
{
|
||||
"job": "my-app",
|
||||
"instance": "host123:4567",
|
||||
"level": "error",
|
||||
"client_ip": "1.2.3.4",
|
||||
"trace_id": "1234-56789-abcdef",
|
||||
"_msg": "failed to serve the client request"
|
||||
}
|
||||
```
|
||||
|
||||
Empty values are treated the same as non-existing values. For example, the following log entries are equivalent,
|
||||
since they have only one identical non-empty field - [`_msg`](#message-field):
|
||||
|
||||
```json
|
||||
{
|
||||
"_msg": "foo bar",
|
||||
"some_field": "",
|
||||
"another_field": ""
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"_msg": "foo bar",
|
||||
"third_field": "",
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"_msg": "foo bar",
|
||||
}
|
||||
```
|
||||
|
||||
VictoriaLogs automatically transforms multi-level JSON (aka nested JSON) into single-level JSON
|
||||
during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/) according to the following rules:
|
||||
|
||||
- Nested dictionaries are flattened by concatenating dictionary keys with `.` char. For example, the following multi-level JSON
|
||||
is transformed into the following single-level JSON:
|
||||
|
||||
```json
|
||||
{
|
||||
"host": {
|
||||
"name": "foobar"
|
||||
"os": {
|
||||
"version": "1.2.3"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"host.name": "foobar",
|
||||
"host.os.version": "1.2.3"
|
||||
}
|
||||
```
|
||||
|
||||
- Arrays, numbers and boolean values are converted into strings. This simplifies [full-text search](https://docs.victoriametrics.com/victorialogs/logsql/) over such values.
|
||||
For example, the following JSON with an array, a number and a boolean value is converted into the following JSON with string values:
|
||||
|
||||
```json
|
||||
{
|
||||
"tags": ["foo", "bar"],
|
||||
"offset": 12345,
|
||||
"is_error": false
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"tags": "[\"foo\", \"bar\"]",
|
||||
"offset": "12345",
|
||||
"is_error": "false"
|
||||
}
|
||||
```
|
||||
|
||||
Both field name and field value may contain arbitrary chars. Such chars must be encoded
|
||||
during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
|
||||
according to [JSON string encoding](https://www.rfc-editor.org/rfc/rfc7159.html#section-7).
|
||||
Unicode chars must be encoded with [UTF-8](https://en.wikipedia.org/wiki/UTF-8) encoding:
|
||||
|
||||
```json
|
||||
{
|
||||
"field with whitespace": "value\nwith\nnewlines",
|
||||
"Поле": "价值"
|
||||
}
|
||||
```
|
||||
|
||||
VictoriaLogs automatically indexes all the fields in all the [ingested](https://docs.victoriametrics.com/victorialogs/data-ingestion/) logs.
|
||||
This enables [full-text search](https://docs.victoriametrics.com/victorialogs/logsql/) across all the fields.
|
||||
|
||||
VictoriaLogs supports the following special fields in addition to arbitrary [other fields](#other-fields):
|
||||
|
||||
* [`_msg` field](#message-field)
|
||||
* [`_time` field](#time-field)
|
||||
* [`_stream` and `_stream_id` fields](#stream-fields)
|
||||
|
||||
### Message field
|
||||
|
||||
Every ingested [log entry](#data-model) must contain at least a `_msg` field with the actual log message. For example, this is the minimal
|
||||
log entry for VictoriaLogs:
|
||||
|
||||
```json
|
||||
{
|
||||
"_msg": "some log message"
|
||||
}
|
||||
```
|
||||
|
||||
If the actual log message is stored in a field other than `_msg`, then that field can be specified via the `_msg_field` HTTP query arg or via the `VL-Msg-Field` HTTP header
|
||||
during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
|
||||
For example, if the log message is located in the `event.original` field, then specify the `_msg_field=event.original` query arg.
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details.
|
||||
|
||||
If the `_msg` field remains empty after an attempt to get it from `_msg_field`, then VictoriaLogs automatically sets it to the value specified
|
||||
via `-defaultMsgValue` command-line flag.
|
||||
|
||||
### Time field
|
||||
|
||||
The ingested [log entries](#data-model) may contain `_time` field with the timestamp of the ingested log entry.
|
||||
This field must be in one of the following formats:
|
||||
|
||||
- [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) or [RFC3339](https://www.rfc-editor.org/rfc/rfc3339).
|
||||
For example, `2023-06-20T15:32:10Z` or `2023-06-20 15:32:10.123456789+02:00`.
|
||||
If timezone information is missing (for example, `2023-06-20 15:32:10`),
|
||||
then the time is parsed in the local timezone of the host where VictoriaLogs runs.
|
||||
|
||||
- Unix timestamp in seconds, milliseconds, microseconds or nanoseconds. For example, `1686026893` (seconds), `1686026893735` (milliseconds),
|
||||
`1686026893735321` (microseconds) or `1686026893735321098` (nanoseconds).
|
||||
|
||||
For example, the following [log entry](#data-model) contains valid timestamp with millisecond precision in the `_time` field:
|
||||
|
||||
```json
|
||||
{
|
||||
"_msg": "some log message",
|
||||
"_time": "2023-04-12T06:38:11.095Z"
|
||||
}
|
||||
```
|
||||
|
||||
If the actual timestamp is stored in a field other than `_time`, then it is possible to specify the real timestamp
|
||||
field via `_time_field` HTTP query arg or via `VL-Time-Field` HTTP header during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
|
||||
For example, if the timestamp is located in the `event.created` field, then specify the `_time_field=event.created` query arg.
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details.
|
||||
|
||||
If `_time` field is missing, or if it equals `0`, or if it equals `-`, then the data ingestion time is used as log entry timestamp.
|
||||
|
||||
The `_time` field is used by [time filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) for quickly narrowing down
|
||||
the search to the selected time range.
|
||||
|
||||
### Stream fields
|
||||
|
||||
Some [structured logging](#data-model) fields may uniquely identify the application instance that generates logs.
|
||||
This may be either a single field such as `instance="host123:456"` or a set of fields such as
|
||||
`{datacenter="...", env="...", job="...", instance="..."}` or
|
||||
`{kubernetes.namespace="...", kubernetes.node.name="...", kubernetes.pod.name="...", kubernetes.container.name="..."}`.
|
||||
|
||||
Log entries received from a single application instance form a **log stream** in VictoriaLogs.
|
||||
VictoriaLogs optimizes storing and [querying](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter) of individual log streams.
|
||||
This provides the following benefits:
|
||||
|
||||
- Reduced disk space usage, since a log stream from a single application instance is usually compressed better
|
||||
than a mixed log stream from multiple distinct applications.
|
||||
|
||||
- Increased query performance, since VictoriaLogs needs to scan lower amounts of data
|
||||
when [searching by stream fields](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter).
|
||||
|
||||
Every ingested log entry is associated with a log stream. Every log stream consists of the following special fields:
|
||||
|
||||
- `_stream_id` - this is a unique identifier for the log stream. All the logs for the particular stream can be selected
|
||||
via [`_stream_id:...` filter](https://docs.victoriametrics.com/victorialogs/logsql/#_stream_id-filter).
|
||||
|
||||
- `_stream` - this field contains stream labels in the format similar to [labels in Prometheus metrics](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#labels):
|
||||
```
|
||||
{field1="value1", ..., fieldN="valueN"}
|
||||
```
|
||||
For example, if `host` and `app` fields are associated with the stream, then the `_stream` field will have `{host="host-123",app="my-app"}` value
|
||||
for the log entry with `host="host-123"` and `app="my-app"` fields. The `_stream` field can be searched
|
||||
with [stream filters](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter).
|
||||
|
||||
By default the value of the `_stream` field is `{}`, since VictoriaLogs cannot automatically determine
|
||||
which fields uniquely identify every log stream. This may lead to suboptimal resource usage and query performance.
|
||||
Therefore it is recommended to specify stream-level fields via the `_stream_fields` query arg
|
||||
during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
|
||||
For example, if logs from Kubernetes containers have the following fields:
|
||||
|
||||
```json
|
||||
{
|
||||
"kubernetes.namespace": "some-namespace",
|
||||
"kubernetes.node.name": "some-node",
|
||||
"kubernetes.pod.name": "some-pod",
|
||||
"kubernetes.container.name": "some-container",
|
||||
"_msg": "some log message"
|
||||
}
|
||||
```
|
||||
|
||||
then specify `_stream_fields=kubernetes.namespace,kubernetes.node.name,kubernetes.pod.name,kubernetes.container.name`
|
||||
query arg during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/) in order to properly store
|
||||
per-container logs into distinct streams.
|
||||
|
||||
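For example, a hypothetical ingestion request for such logs via the [JSON stream API](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api) could look like this:

```sh
# Sketch: the endpoint and query arg are from the data ingestion docs; the log entry matches the example above.
curl -X POST 'http://localhost:9428/insert/jsonline?_stream_fields=kubernetes.namespace,kubernetes.node.name,kubernetes.pod.name,kubernetes.container.name' \
  --data-raw '{"kubernetes.namespace":"some-namespace","kubernetes.node.name":"some-node","kubernetes.pod.name":"some-pod","kubernetes.container.name":"some-container","_msg":"some log message"}'
```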
#### How to determine which fields must be associated with log streams?
|
||||
|
||||
[Log streams](#stream-fields) must contain [fields](#data-model) that uniquely identify the application instance that generates logs.
|
||||
For example, `container`, `instance` and `host` are good candidates for stream fields.
|
||||
|
||||
Additional fields may be added to log streams if they **remain constant during application instance lifetime**.
|
||||
For example, `namespace`, `node`, `pod` and `job` are good candidates for additional stream fields. Adding such fields to log streams
|
||||
makes sense if you are going to use these fields during search and want to speed it up with [stream filters](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter).
|
||||
|
||||
There is **no need to add all the constant fields to log streams**, since this may increase resource usage during data ingestion and querying.
|
||||
|
||||
**Never add non-constant fields to streams if these fields may change with every log entry of the same stream**.
|
||||
For example, `ip`, `user_id` and `trace_id` **must never be associated with log streams**, since this may lead to [high cardinality issues](#high-cardinality).
|
||||
|
||||
#### High cardinality
|
||||
|
||||
Some fields in the [ingested logs](#data-model) may contain a big number of unique values across log entries.
|
||||
For example, fields with names such as `ip`, `user_id` or `trace_id` tend to contain a big number of unique values.
|
||||
VictoriaLogs works perfectly with such fields unless they are associated with [log streams](#stream-fields).
|
||||
|
||||
**Never** associate high-cardinality fields with [log streams](#stream-fields), since this may lead to the following issues:
|
||||
|
||||
- Performance degradation during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
|
||||
and [querying](https://docs.victoriametrics.com/victorialogs/querying/)
|
||||
- Increased memory usage
|
||||
- Increased CPU usage
|
||||
- Increased disk space usage
|
||||
- Increased disk read / write IO
|
||||
|
||||
VictoriaLogs exposes `vl_streams_created_total` [metric](https://docs.victoriametrics.com/victorialogs/#monitoring),
|
||||
which shows the number of created streams since the last VictoriaLogs restart. If this metric grows at a rapid rate
|
||||
during a long period of time, then there is a high chance of the high cardinality issues mentioned above.
|
||||
VictoriaLogs can log all the newly registered streams when `-logNewStreams` command-line flag is passed to it.
|
||||
This can help narrowing down and eliminating high-cardinality fields from [log streams](#stream-fields).
|
||||
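As a minimal sketch, the flag can be enabled at startup (the binary name below is illustrative; use the one from your deployment):

```sh
# Illustrative only: start VictoriaLogs with logging of newly registered
# streams, so high-cardinality stream fields can be spotted in its logs.
./victoria-logs-prod -logNewStreams
```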
|
||||
### Other fields
|
||||
|
||||
Every ingested log entry may contain arbitrary number of [fields](#data-model) additionally to [`_msg`](#message-field) and [`_time`](#time-field).
|
||||
For example, `level`, `ip`, `user_id`, `trace_id`, etc. Such fields can be used for simplifying and optimizing [search queries](https://docs.victoriametrics.com/victorialogs/logsql/).
|
||||
It is usually faster to search over a dedicated `trace_id` field instead of searching for the `trace_id` inside long [log message](#message-field).
|
||||
E.g. the `trace_id:="XXXX-YYYY-ZZZZ"` query usually works faster than the `_msg:"trace_id=XXXX-YYYY-ZZZZ"` query.
|
||||
|
||||
See [LogsQL docs](https://docs.victoriametrics.com/victorialogs/logsql/) for more details.
|
||||
@@ -1,291 +0,0 @@
|
||||
---
|
||||
weight: 130
|
||||
title: How to convert Loki queries to VictoriaLogs queries
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs"
|
||||
weight: 120
|
||||
tags:
|
||||
- logs
|
||||
- guide
|
||||
---
|
||||
|
||||
Loki provides [LogQL](https://grafana.com/docs/loki/latest/query/) query language, while VictoriaLogs provides [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/)
|
||||
query language. Both languages are optimized for querying logs. The docs below show how to convert typical LogQL queries to LogsQL queries.
|
||||
|
||||
## Data model
|
||||
|
||||
Both Loki and VictoriaLogs support log streams - these are timestamp-ordered streams of logs, where every stream may have its own set of labels. These labels can be used
|
||||
in [log stream selectors](#log-stream-selector) for quickly narrowing down the amounts of logs for further processing by the query.
|
||||
|
||||
The main difference is that VictoriaLogs is optimized for structured logs with big number of labels (aka [wide events](https://jeremymorrell.dev/blog/a-practitioners-guide-to-wide-events/)).
|
||||
Hundreds of labels per every log entry is OK for VictoriaLogs.
|
||||
|
||||
VictoriaLogs is also optimized for log labels with big number of unique values such as `trace_id`, `user_id`, `duration` and `ip` (aka high-cardinality labels).
|
||||
It is highly recommended storing all the labels as is, instead of packing them into JSON and storing that JSON in the log line (message).
|
||||
Storing labels separately results in much faster filtering on such labels (1000x faster and more).
|
||||
This also results in storage space savings because of better compression for per-label values.
|
||||
|
||||
It is recommended reading [VictoriaLogs key concepts](https://docs.victoriametrics.com/victorialogs/keyconcepts/) in order to understand VictoriaLogs data model.
|
||||
|
||||
## Log stream selector
|
||||
|
||||
The basic practical [LogQL query](https://grafana.com/docs/loki/latest/query/) consists of a [log stream selector](https://grafana.com/docs/loki/latest/query/log_queries/#log-stream-selector),
|
||||
which returns logs for the matching log streams. For example:
|
||||
|
||||
```logql
|
||||
{app="nginx",host="host-42"}
|
||||
```
|
||||
|
||||
VictoriaLogs supports the same `log streams` concept as Loki does. See [Loki docs about log streams](https://grafana.com/docs/loki/latest/get-started/overview/)
|
||||
and [VictoriaLogs docs about log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields). That's why log stream selector
|
||||
in VictoriaLogs looks identical to the log stream selector in Loki:
|
||||
|
||||
```logsql
|
||||
{app="nginx",host="host-42"}
|
||||
```
|
||||
|
||||
Log stream selector is required in Loki query, while it is optional in VictoriaLogs query.
|
||||
|
||||
Log stream filters in VictoriaLogs provide additional functionality compared to the log stream selectors in Loki.
|
||||
Read [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter) for more details.
|
||||
|
||||
See also [this article](https://itnext.io/why-victorialogs-is-a-better-alternative-to-grafana-loki-7e941567c4d5) for more details.
|
||||
|
||||
## Line filter
|
||||
|
||||
Loki allows filtering log lines (log messages) with the following filters:
|
||||
|
||||
* Substring filter - `{...} |= "some_text"`. It selects logs with lines containing `some_text` substring.
|
||||
VictoriaLogs provides similar filters - [word filter](https://docs.victoriametrics.com/victorialogs/logsql/#word-filter)
|
||||
and [phrase filter](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter),
|
||||
so the similar LogsQL query is `{...} "some_text"`, i.e. it is enough to replace `|=` with a whitespace in order
|
||||
to convert LogQL query to LogsQL query.
|
||||
A sequence of substring filters - `{...} |= "foo" |= "bar"` - is converted into the following VictoriaLogs query - `{...} "foo" "bar"`.
|
||||
|
||||
There is a subtle difference between substring filter in Loki and word / phrase filter in VictoriaLogs:
|
||||
substring filter matches substrings inside words, while word / phrase filters match full words only.
|
||||
For example, `{...} |= "error"` in Loki matches `foo error bar`, `foo errors bar` and `foo someerrors bar`,
|
||||
while `{...} "error"` in VictoriaLogs matches only `foo error bar`, since the other cases do not contain
`error` as a whole word. Such cases are very rare in practice. They can be covered with the following VictoriaLogs filters if needed:
|
||||
|
||||
* [Prefix filter](https://docs.victoriametrics.com/victorialogs/logsql/#prefix-filter), which matches word / phrase prefix.
|
||||
* [Regexp filter](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter), which matches the given regexp at any position of the log line.
|
||||
|
||||
* Negative substring filter - `{...} != "some_text"`. It selects logs with lines without the `some_text` substring.
|
||||
This query can be written as `{...} -"some_text"` in VictoriaLogs, e.g. just prepend the `"some_text"` with `-`.
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter) for details.
|
||||
|
||||
* Regexp filter - `{...} |~ "regexp"`. It selects logs with lines matching the given `regexp`.
|
||||
This query can be written as `{...} ~"regexp"` in VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter)
|
||||
for details.
|
||||
|
||||
* Negative regexp filter - `{...} !~ "regexp"`. It selects logs with lines not matching the given `regexp`.
|
||||
This query can be written as `{...} NOT ~"regexp"` in VictoriaLogs according to [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter).
|
||||
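Putting these rules together, here is a hedged sketch of converting a typical chain of Loki line filters (the `app` label and the filter values are made up for illustration):

```logql
{app="nginx"} |= "error" != "timeout" |~ "user_id=[0-9]+"
```

The equivalent LogsQL query:

```logsql
{app="nginx"} "error" -"timeout" ~"user_id=[0-9]+"
```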
|
||||
## Label filter
|
||||
|
||||
Loki allows applying filters to log labels with `{...} | label op value` syntax:
|
||||
|
||||
* `{...} | label = value` or `{...} | label == value`. This is equivalent to `{...} label:=value` in VictoriaLogs.
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#exact-filter).
|
||||
|
||||
* `label != value`. This is equivalent to `{...} -label:=value` in VictoriaLogs. E.g. just add `-` in front of `label:=value`
|
||||
according to [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter).
|
||||
|
||||
* `{...} | label > value`, `{...} label >= value`, `{...} label < value` and `{...} label <= value`.
|
||||
This is equivalent to `{...} label:>value`, `{...} label:>=value`, `{...} label:<value` and `{...} label:<=value`
|
||||
in VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#range-comparison-filter).
|
||||
|
||||
* `{...} | label =~ value`. This is equivalent to `{...} label:~value` in VictoriaLogs.
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter).
|
||||
|
||||
* `{...} | label !~ value`. This is equivalent to `{...} -label:~value` in VictoriaLogs. E.g. just add `-` in front of `label:~value`
|
||||
according to [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter).
|
||||
|
||||
Note that VictoriaLogs expects `:` after log labels (field names) in query filters.
|
||||
|
||||
Multiple label filters can be combined with `and`, `or` and `(...)` in both Loki and VictoriaLogs.
|
||||
Additionally, VictoriaLogs supports `not` in front of any filter or combination of filters.
|
||||
See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter).
|
||||
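As a hedged illustration of the mappings above, a Loki query with several label filters (label names and values are made up):

```logql
{app="nginx"} | status >= 500 | path =~ "/api/.*"
```

could be written in LogsQL as:

```logsql
{app="nginx"} status:>=500 path:~"/api/.*"
```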
|
||||
## IP filter
|
||||
|
||||
Loki provides the ability to filter logs by IP and IP ranges according to [these docs](https://grafana.com/docs/loki/latest/query/ip/).
|
||||
These filters can be substituted with [`ipv4_range` filter](https://docs.victoriametrics.com/victorialogs/logsql/#ipv4-range-filter) at VictoriaLogs.
|
||||
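For example, a hedged sketch of filtering by an IPv4 subnetwork stored in a `remote_addr` field (the field name is an assumption for illustration):

```logsql
{app="nginx"} remote_addr:ipv4_range(192.168.0.0/16)
```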
|
||||
## JSON parser
|
||||
|
||||
Loki has poor support for high-cardinality labels with big number of unique values, such as `trace_id`, `user_id`, `duration`, etc.
|
||||
That's why Loki recommends encoding such labels into JSON and storing this JSON as a log line (message). Later, these labels can be parsed
|
||||
at query time with the `{...} | unpack` or `{...} | json` syntax according to [these docs](https://grafana.com/docs/loki/latest/query/log_queries/#json),
|
||||
for further filtering or stats calculation.
|
||||
|
||||
VictoriaLogs supports high-cardinality labels and recommends storing them separately instead of storing them together as a packed JSON in the log message.
|
||||
This provides the following advantages over Loki:
|
||||
|
||||
* Better on-disk compression rate for the ingested logs, so they occupy less storage space.
|
||||
* Much better query performance over such labels, since VictoriaLogs needs to read only the data for the labels referred in the query,
|
||||
while it completely skips the data for the rest of the labels. The performance improvement may reach 1000 times and more on real production logs.
|
||||
|
||||
Given these recommendations, a typical Loki query, which includes parsing JSON lines, can be simplified and significantly sped up at VictoriaLogs.
|
||||
For example, the following Loki query selects logs with the given `trace_id` at log lines:
|
||||
|
||||
```logql
|
||||
{...} | unpack | trace_id == "abcdef"
|
||||
```
|
||||
|
||||
This query is equivalent to the following VictoriaLogs query if the `trace_id` field is stored separately according to the recommendations above:
|
||||
|
||||
```logsql
|
||||
{...} trace_id:=abcdef
|
||||
```
|
||||
|
||||
VictoriaLogs supports parsing JSON inside any label (log field) with the [`unpack_json` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#unpack_json-pipe).
|
||||
For example, if, despite the recommendations to store every log label separately, you decided to pack labels into a JSON and store it in the log line (message)
|
||||
in VictoriaLogs, then the following query can be used instead of the query above:
|
||||
|
||||
```logsql
|
||||
{...} | unpack_json fields (trace_id) | trace_id:=abcdef
|
||||
```
|
||||
|
||||
Note that this query will be much slower than the recommended query above (though it should be still faster than the corresponding Loki query :) ).
|
||||
|
||||
See [this article](https://itnext.io/why-victorialogs-is-a-better-alternative-to-grafana-loki-7e941567c4d5) for more details.
|
||||
|
||||
## Logfmt parser
|
||||
|
||||
Loki supports parsing logfmt-formatted log lines with the `{...} | logfmt` syntax according to [these docs](https://grafana.com/docs/loki/latest/query/log_queries/#pattern).
|
||||
Such a query can be replaced with `{...} | unpack_logfmt` at VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#unpack_logfmt-pipe).
|
||||
|
||||
It is recommended parsing logfmt-formatted structured logs before ingesting them into VictoriaLogs, so log labels are stored separately. VictoriaLogs is optimized for storing logs
|
||||
with a big number of labels (fields), and every such field may contain an arbitrarily big number of unique values (i.e. VictoriaLogs works great with high-cardinality labels).
|
||||
See [JSON parser](#json-parser) docs for more details.
|
||||
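A hedged sketch of query-time logfmt parsing, assuming log messages contain pairs such as `status=500 duration=1.2s` (the `app` stream field is illustrative):

```logsql
{app="billing"} | unpack_logfmt | status:=500
```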
|
||||
## Pattern parser
|
||||
|
||||
Loki supports parsing log lines according to the provided pattern with the `{...} | pattern "..."` syntax according to [these docs](https://grafana.com/docs/loki/latest/query/log_queries/#pattern).
|
||||
Such a query can be replaced with `{...} | extract "..."` at VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#extract-pipe).
|
||||
|
||||
## Regular expression parser
|
||||
|
||||
Loki supports parsing log lines according to the provided regexp with the `{...} | regexp "..."` syntax.
|
||||
Such a query can be replaced with `{...} | extract_regexp "..."` at VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#extract_regexp-pipe).
|
||||
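For example, a hedged sketch extracting an HTTP status code into a `status` field via a named capture group (the label and field names are illustrative):

```logsql
_time:5m {app="nginx"} | extract_regexp "status=(?P<status>[0-9]+)"
```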
|
||||
## Line formatting
|
||||
|
||||
Loki provides the ability to format log lines with the `{...} | line_format "..."` syntax according to [these docs](https://grafana.com/docs/loki/latest/query/log_queries/#line-format-expression).
|
||||
Such a query can be replaced with `{...} | format "..."` at VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#format-pipe).
|
||||
|
||||
Note that VictoriaLogs uses `<label>` format syntax identical to [pattern parser](#pattern-parser) syntax instead of `{{.label}}` format syntax from Loki.
|
||||
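For example, the Loki query `{...} | line_format "{{.method}} {{.path}}"` could be sketched in LogsQL as follows (writing the result back into the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) is an assumption for illustration):

```logsql
{...} | format "<method> <path>" as _msg
```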
|
||||
## Label formatting
|
||||
|
||||
Loki provides the ability to format log labels with the `{...} | label_format label_name="..."` syntax according to [these docs](https://grafana.com/docs/loki/latest/query/log_queries/#labels-format-expression).
|
||||
Such a query can be replaced with `{...} | format "..." as label_name` at VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#format-pipe).
|
||||
|
||||
Note that VictoriaLogs uses `<label>` format syntax identical to [pattern parser](#pattern-parser) syntax instead of `{{.label}}` format syntax from Loki.
|
||||
|
||||
## Dropping labels
|
||||
|
||||
Loki provides the ability to drop log labels with the `{...} | drop label1, ..., labelN` syntax
|
||||
according to [these docs](https://grafana.com/docs/loki/latest/query/log_queries/#drop-labels-expression).
|
||||
The similar syntax is also supported by VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#delete-pipe).
|
||||
|
||||
Loki supports conditional dropping of labels with the `{...} | drop label="value"` syntax.
|
||||
This can be replaced with [conditional format](https://docs.victoriametrics.com/victorialogs/logsql/#conditional-format) at VictoriaLogs:
|
||||
|
||||
```logsql
|
||||
{...} | format if (label:="value") "" as label
|
||||
```
|
||||
|
||||
## Keeping labels
|
||||
|
||||
Loki provides the ability to keep log labels with the `{...} | keep label1, ..., labelN` syntax
|
||||
according to [these docs](https://grafana.com/docs/loki/latest/query/log_queries/#keep-labels-expression).
|
||||
The similar syntax is also supported by VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#fields-pipe).
|
||||
|
||||
Loki supports conditional keeping of labels with the `{...} | keep label="value"` syntax.
|
||||
This can be replaced with [conditional format](https://docs.victoriametrics.com/victorialogs/logsql/#conditional-format) at VictoriaLogs:
|
||||
|
||||
```logsql
|
||||
{...} | format if (label:="value") "<label>" as label
|
||||
```
|
||||
|
||||
## Metric queries
|
||||
|
||||
Loki allows calculating various stats / metrics from the selected logs with the [metric queries](https://grafana.com/docs/loki/latest/query/metric_queries/).
|
||||
VictoriaLogs covers all this functionality with [`stats` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe), and extends it
|
||||
with additional [statistical functions](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe-functions) and features:
|
||||
|
||||
* Ability to calculate multiple stats over any labels in a single query. For example, the following query calculates the average request duration (from the `duration` label (field)),
|
||||
the maximum response size (from the `response_size` label (field)) and the number of request logs in a single query:
|
||||
|
||||
```logsql
* | stats
  avg(duration) as avg_duration,
  max(response_size) as max_response_size,
  count() as requests
```
|
||||
|
||||
* Ability to calculate conditional stats. For example, the following query calculates the number of successful requests (`status=200`) and the total number of requests:
|
||||
|
||||
```logsql
* | stats
  count() if (status:=200) as success_requests,
  count() as total_requests
```
|
||||
|
||||
Let's look at Loki queries, which calculate typical metrics in practice:
|
||||
|
||||
### Logs rate
|
||||
|
||||
The `rate({...}[d])` query at Loki is substituted with `_time:d {...} | stats by (_stream) rate()` in VictoriaLogs.
|
||||
See [`_time` filter docs](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) and [`rate` stats function docs](https://docs.victoriametrics.com/victorialogs/logsql/#rate-stats).
|
||||
|
||||
If the logs rate query is used for building hits rate graph in Grafana, then the `_time:d` filter isn't needed in the VictoriaLogs query,
|
||||
e.g. the `{...} | stats by (_stream) rate()` is enough. It obtains the needed step between dots on the graph from the `step` query arg, which is automatically calculated by Grafana
|
||||
depending on the selected time range, and sent to VictoriaLogs in the query to [`/select/logsql/stats_query_range`](https://docs.victoriametrics.com/victorialogs/querying/#querying-log-range-stats).
|
||||
|
||||
By default VictoriaLogs calculates the summary rate over all the matching logs with the `rate()` function. If the rate must be calculated individually per log stream or per some other labels,
|
||||
then these labels must be enumerated in the `by(...)` clause like shown above. This also means that the frequently used `sum(rate({...}[d]))` query at Loki
|
||||
can be substituted with the simple `{...} | rate()` query at VictoriaLogs.
|
||||
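For example, a hedged sketch of converting a typical Loki rate query (the `app` label is made up for illustration):

```logql
sum(rate({app="nginx"}[5m]))
```

becomes the following LogsQL query when executed as a one-off query over the last 5 minutes:

```logsql
_time:5m {app="nginx"} | rate()
```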
|
||||
### Count the number of logs over time
|
||||
|
||||
The `count_over_time({...}[d])` query at Loki is substituted with `_time:d {...} | stats by (_stream) count()` in VictoriaLogs.
|
||||
See [`_time` filter docs](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) and [`count` stats function docs](https://docs.victoriametrics.com/victorialogs/logsql/#count-stats).
|
||||
|
||||
If the query is used for building hits graph in Grafana, then the `_time:d` filter isn't needed in the VictoriaLogs query,
|
||||
e.g. the `{...} | stats by (_stream) count()` is enough. It obtains the needed step between dots on the graph from the `step` query arg, which is automatically calculated by Grafana
|
||||
depending on the selected time range, and sent to VictoriaLogs in the query to [`/select/logsql/stats_query_range`](https://docs.victoriametrics.com/victorialogs/querying/#querying-log-range-stats).
|
||||
|
||||
By default VictoriaLogs counts all the matching logs with the `count()` function. If logs must be counted individually per log stream or per some other labels,
|
||||
then these labels must be enumerated in the `by(...)` clause like shown above. This also means that the frequently used `sum(count_over_time({...}[d]))` query at Loki
|
||||
can be substituted with the simple `{...} | count()` query at VictoriaLogs.
|
||||
|
||||
### Unwrapped range aggregations
|
||||
|
||||
Loki allows calculating metrics from label values by using the `func_name({...} | unwrap label_name)` syntax. There is no need in unwrapping any labels in VictoriaLogs -
|
||||
just pass the needed label names into the needed [`stats` pipe function](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe-functions).
|
||||
|
||||
VictoriaLogs aggregates all the selected logs by default, while Loki groups stats by log stream. Use `... | stats by (_stream) ...`
|
||||
for obtaining results grouped by log stream in VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) for details.
|
||||
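For example, a hedged sketch of converting a Loki unwrapped aggregation such as `avg_over_time({app="nginx"} | unwrap duration [5m])` into LogsQL (the label and field names are illustrative):

```logsql
_time:5m {app="nginx"} | stats by (_stream) avg(duration) as avg_duration
```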
|
||||
### Topk and bottomk
|
||||
|
||||
Loki allows selecting top K metrics with the biggest values via `topk(K, func_name({...} | unwrap label_name))` syntax.
|
||||
This query can be translated to `... | first K (label_name desc)` at VictoriaLogs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#first-pipe).
|
||||
|
||||
The `bottomk(K, func_name({...} | unwrap label_name))` query at Loki can be translated to `... | first K (label_name)` at VictoriaLogs.
|
||||
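For example, a hedged sketch of a `topk`-style query that returns the 5 applications with the most logs over the last hour (the `app` field name is made up for illustration):

```logsql
_time:1h | stats by (app) count() hits | first 5 by (hits desc)
```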
|
||||
### Approximate calculations
|
||||
|
||||
Loki provides [`approx_topk(K, ...)`](https://grafana.com/docs/loki/latest/query/metric_queries/#probabilistic-aggregation) for probabilistically
|
||||
selecting up to K metrics with the biggest values. VictoriaLogs provides [`sample` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sample-pipe),
|
||||
which can be used for probabilistic calculations.
|
||||
|
||||
### Arithmetic operators
|
||||
|
||||
Loki allows performing math calculations over the calculated metrics with the `a op b` syntax, where `op` can be `+`, `-`, `/` and `*`.
|
||||
These calculations can be replaced with [`math` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#math-pipe) at VictoriaLogs.
|
||||
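For example, a hedged sketch of calculating an error percentage over the last 5 minutes in a single query (it assumes the logs contain a `status` field):

```logsql
_time:5m | stats count() if (status:=500) as errors, count() as total | math errors / total * 100 as error_percent
```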
@@ -1,501 +0,0 @@
|
||||
---
|
||||
weight: 100
|
||||
title: LogsQL examples
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs"
|
||||
weight: 100
|
||||
tags:
|
||||
- logs
|
||||
- guide
|
||||
---
|
||||
|
||||
## How to select recently ingested logs?
|
||||
|
||||
[Run](https://docs.victoriametrics.com/victorialogs/querying/) the following query:
|
||||
|
||||
```logsql
|
||||
_time:5m
|
||||
```
|
||||
|
||||
It returns logs over the last 5 minutes by using [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter).
|
||||
The logs are returned in arbitrary order because of performance reasons.
|
||||
Add [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe) to the query if you need sorting
|
||||
the returned logs by some field (usually [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field)):
|
||||
|
||||
```logsql
|
||||
_time:5m | sort by (_time)
|
||||
```
|
||||
|
||||
If the number of returned logs is too big, it may be limited with the [`first` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#first-pipe).
|
||||
For example, the following query returns 10 most recent logs, which were ingested during the last 5 minutes:
|
||||
|
||||
```logsql
|
||||
_time:5m | first 10 by (_time desc)
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [How to count the number of matching logs?](#how-to-count-the-number-of-matching-logs)
|
||||
- [How to return last N logs for the given query?](#how-to-return-last-n-logs-for-the-given-query)
|
||||
|
||||
## How to select logs with the given word in log message?
|
||||
|
||||
Just put the needed [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the query.
|
||||
For example, the following query returns all the logs with the `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word)
|
||||
in [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
|
||||
|
||||
```logsql
|
||||
error
|
||||
```
|
||||
|
||||
If the number of returned logs is too big, then add [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
|
||||
for limiting the time range for the selected logs. For example, the following query returns logs with `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word)
|
||||
over the last hour:
|
||||
|
||||
```logsql
|
||||
error _time:1h
|
||||
```
|
||||
|
||||
If the number of returned logs is still too big, then consider adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
|
||||
to the query. For example, the following query selects logs with `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word),
|
||||
which do not contain `kubernetes` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word), over the last hour:
|
||||
|
||||
```logsql
|
||||
error -kubernetes _time:1h
|
||||
```
|
||||
|
||||
The logs are returned in arbitrary order because of performance reasons. Add [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
|
||||
for sorting logs by the needed [fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model). For example, the following query
|
||||
sorts the selected logs by [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field):
|
||||
|
||||
```logsql
|
||||
error _time:1h | sort by (_time)
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [How to select logs with all the given words in log message?](#how-to-select-logs-with-all-the-given-words-in-log-message)
|
||||
- [How to select logs with some of the given words in log message?](#how-to-select-logs-with-some-of-the-given-words-in-log-message)
|
||||
- [How to skip logs with the given word in log message?](#how-to-skip-logs-with-the-given-word-in-log-message)
|
||||
- [Filtering by phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter)
|
||||
- [Filtering by prefix](https://docs.victoriametrics.com/victorialogs/logsql/#prefix-filter)
|
||||
- [Filtering by regular expression](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter)
|
||||
- [Filtering by substring](https://docs.victoriametrics.com/victorialogs/logsql/#substring-filter)
|
||||
|
||||
|
||||
## How to skip logs with the given word in log message?
|
||||
|
||||
Use [`NOT` logical filter](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter). For example, the following query returns all the logs
|
||||
without the `INFO` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
|
||||
|
||||
```logsql
|
||||
-INFO
|
||||
```
|
||||
|
||||
If the number of returned logs is too big, then add [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
|
||||
for limiting the time range for the selected logs. For example, the following query returns matching logs over the last hour:
|
||||
|
||||
```logsql
|
||||
-INFO _time:1h
|
||||
```
|
||||
|
||||
If the number of returned logs is still too big, then consider adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
|
||||
to the query. For example, the following query selects logs without `INFO` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word),
|
||||
which contain `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word), over the last hour:
|
||||
|
||||
```logsql
|
||||
-INFO error _time:1h
|
||||
```
|
||||
|
||||
The logs are returned in arbitrary order because of performance reasons. Add [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
|
||||
for sorting logs by the needed [fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model). For example, the following query
|
||||
sorts the selected logs by [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field):
|
||||
|
||||
```logsql
|
||||
-INFO _time:1h | sort by (_time)
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [How to select logs with all the given words in log message?](#how-to-select-logs-with-all-the-given-words-in-log-message)
|
||||
- [How to select logs with some of given words in log message?](#how-to-select-logs-with-some-of-the-given-words-in-log-message)
|
||||
- [Filtering by phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter)
|
||||
- [Filtering by prefix](https://docs.victoriametrics.com/victorialogs/logsql/#prefix-filter)
|
||||
- [Filtering by regular expression](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter)
|
||||
- [Filtering by substring](https://docs.victoriametrics.com/victorialogs/logsql/#substring-filter)
|
||||
|
||||
|
||||
## How to select logs with all the given words in log message?
|
||||
|
||||
Just enumerate the needed [words](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the query, by delimiting them with whitespace.
|
||||
For example, the following query selects logs containing both `error` and `kubernetes` [words](https://docs.victoriametrics.com/victorialogs/logsql/#word)
|
||||
in the [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
|
||||
|
||||
```logsql
|
||||
error kubernetes
|
||||
```
|
||||
|
||||
This query uses [`AND` logical filter](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter).
|
||||
|
||||
If the number of returned logs is too big, then add [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
|
||||
for limiting the time range for the selected logs. For example, the following query returns matching logs over the last hour:
|
||||
|
||||
```logsql
|
||||
error kubernetes _time:1h
|
||||
```
|
||||
|
||||
If the number of returned logs is still too big, then consider adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
|
||||
to the query. For example, the following query selects logs with `error` and `kubernetes` [words](https://docs.victoriametrics.com/victorialogs/logsql/#word)
|
||||
from [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) containing `container="my-app"` field, over the last hour:
|
||||
|
||||
```logsql
|
||||
error kubernetes {container="my-app"} _time:1h
|
||||
```
|
||||
|
||||
The logs are returned in arbitrary order because of performance reasons. Add [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
|
||||
for sorting logs by the needed [fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model). For example, the following query
|
||||
sorts the selected logs by [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field):
|
||||
|
||||
```logsql
|
||||
error kubernetes _time:1h | sort by (_time)
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [How to select logs with some of given words in log message?](#how-to-select-logs-with-some-of-the-given-words-in-log-message)
|
||||
- [How to skip logs with the given word in log message?](#how-to-skip-logs-with-the-given-word-in-log-message)
|
||||
- [Filtering by phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter)
|
||||
- [Filtering by prefix](https://docs.victoriametrics.com/victorialogs/logsql/#prefix-filter)
|
||||
- [Filtering by regular expression](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter)
|
||||
- [Filtering by substring](https://docs.victoriametrics.com/victorialogs/logsql/#substring-filter)
|
||||
|
||||
|
||||
## How to select logs with some of the given words in log message?
|
||||
|
||||
Put the needed [words](https://docs.victoriametrics.com/victorialogs/logsql/#word) into `(...)`, by delimiting them with ` or `.
|
||||
For example, the following query selects logs with `error`, `ERROR` or `Error` [words](https://docs.victoriametrics.com/victorialogs/logsql/#word)
|
||||
in the [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
|
||||
|
||||
```logsql
|
||||
(error or ERROR or Error)
|
||||
```
|
||||
|
||||
This query uses [`OR` logical filter](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter).
|
||||
|
||||
If the number of returned logs is too big, then add [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
|
||||
for limiting the time range for the selected logs. For example, the following query returns matching logs over the last hour:
|
||||
|
||||
```logsql
|
||||
(error or ERROR or Error) _time:1h
|
||||
```
|
||||
|
||||
If the number of returned logs is still too big, then consider adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
|
||||
to the query. For example, the following query selects logs with `error`, `ERROR` or `Error` [words](https://docs.victoriametrics.com/victorialogs/logsql/#word),
|
||||
which do not contain `kubernetes` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word), over the last hour:
|
||||
|
||||
```logsql
|
||||
(error or ERROR or Error) -kubernetes _time:1h
|
||||
```
|
||||
|
||||
The logs are returned in arbitrary order because of performance reasons. Add [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
|
||||
for sorting logs by the needed [fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model). For example, the following query
|
||||
sorts the selected logs by [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field):
|
||||
|
||||
```logsql
|
||||
(error or ERROR or Error) _time:1h | sort by (_time)
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [How to select logs with all the given words in log message?](#how-to-select-logs-with-all-the-given-words-in-log-message)
|
||||
- [How to skip logs with the given word in log message?](#how-to-skip-logs-with-the-given-word-in-log-message)
|
||||
- [Filtering by phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter)
|
||||
- [Filtering by prefix](https://docs.victoriametrics.com/victorialogs/logsql/#prefix-filter)
|
||||
- [Filtering by regular expression](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter)
|
||||
- [Filtering by substring](https://docs.victoriametrics.com/victorialogs/logsql/#substring-filter)
|
||||
|
||||
|
||||
## How to select logs from the given application instance?
|
||||
|
||||
Make sure the application is properly configured with [stream-level log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
Then just use [`_stream` filter](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter) for selecting logs for the given application instance.
|
||||
For example, if the application contains `job="app-42"` and `instance="host-123:5678"` [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields),
|
||||
then the following query selects all the logs from this application:
|
||||
|
||||
```logsql
|
||||
{job="app-42",instance="host-123:5678"}
|
||||
```
|
||||
|
||||
If the number of returned logs is too big, it is recommended adding [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
|
||||
to the query in order to reduce the number of matching logs. For example, the following query returns logs for the given application for the last day:
|
||||
|
||||
```logsql
|
||||
{job="app-42",instance="host-123:5678"} _time:1d
|
||||
```
|
||||
|
||||
If the number of returned logs is still too big, then consider adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
|
||||
to the query. For example, the following query selects logs from the given [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields),
|
||||
which contain `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field),
|
||||
over the last day:
|
||||
|
||||
```logsql
|
||||
{job="app-42",instance="host-123:5678"} error _time:1d
|
||||
```
|
||||
|
||||
The logs are returned in arbitrary order because of performance reasons. Use [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
|
||||
for sorting the returned logs by the needed fields. For example, the following query sorts the selected logs
|
||||
by [`_time`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field):
|
||||
|
||||
```logsql
|
||||
{job="app-42",instance="host-123:5678"} _time:1d | sort by (_time)
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [How to determine applications with the most logs?](#how-to-determine-applications-with-the-most-logs)
|
||||
- [How to skip logs with the given word in log message?](#how-to-skip-logs-with-the-given-word-in-log-message)
|
||||
|
||||
|
||||
## How to count the number of matching logs?
|
||||
|
||||
Use [`count()` stats function](https://docs.victoriametrics.com/victorialogs/logsql/#count-stats). For example, the following query returns
|
||||
the number of results returned by `your_query_here`:
|
||||
|
||||
```logsql
|
||||
your_query_here | count()
|
||||
```
|
||||
|
||||
## How to determine applications with the most logs?
|
||||
|
||||
[Run](https://docs.victoriametrics.com/victorialogs/querying/) the following query:
|
||||
|
||||
```logsql
|
||||
_time:5m | stats by (_stream) count() as logs | sort by (logs desc) | limit 10
|
||||
```
|
||||
|
||||
This query returns top 10 application instances (aka [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields))
|
||||
with the most logs over the last 5 minutes.
|
||||
|
||||
This query uses the following [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) features:
|
||||
|
||||
- [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) for selecting logs on the given time range (5 minutes in the query above).
|
||||
- [`stats` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe) for calculating the number of logs
  per each [`_stream`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields). [`count` stats function](https://docs.victoriametrics.com/victorialogs/logsql/#count-stats)
|
||||
is used for calculating the needed stats.
|
||||
- [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe) for sorting the stats by `logs` field in descending order.
|
||||
- [`limit` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#limit-pipe) for limiting the number of returned results to 10.
|
||||
|
||||
This query can be simplified into the following one, which uses [`top` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#top-pipe):
|
||||
|
||||
```logsql
|
||||
_time:5m | top 10 by (_stream)
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [How to filter out data after stats calculation?](#how-to-filter-out-data-after-stats-calculation)
|
||||
- [How to calculate the number of logs per the given interval?](#how-to-calculate-the-number-of-logs-per-the-given-interval)
|
||||
- [How to select logs from the given application instance?](#how-to-select-logs-from-the-given-application-instance)
|
||||
|
||||
|
||||
## How to parse JSON inside log message?
|
||||
|
||||
It is better from performance and resource usage PoV to avoid storing JSON inside [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field).
|
||||
It is recommended storing individual JSON fields as log fields instead according to [VictoriaLogs data model](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
|
||||
|
||||
If you have to store JSON inside log message or inside any other [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model),
|
||||
then the stored JSON can be parsed during query time via [`unpack_json` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#unpack_json-pipe).
|
||||
For example, the following query unpacks JSON from the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field)
|
||||
across all the logs for the last 5 minutes:
|
||||
|
||||
```logsql
|
||||
_time:5m | unpack_json
|
||||
```
|
||||
|
||||
If you need to parse JSON array, then take a look at [`unroll` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#unroll-pipe).
|
||||
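If only a few JSON fields are needed for further filtering, the unpacking can be limited to these fields. A hedged sketch, assuming the packed JSON contains a `trace_id` field:

```logsql
_time:5m | unpack_json fields (trace_id) | trace_id:=abcdef
```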
|
||||
|
||||
## How to extract some data from text log message?
|
||||
|
||||
Use [`extract`](https://docs.victoriametrics.com/victorialogs/logsql/#extract-pipe) or [`extract_regexp`](https://docs.victoriametrics.com/victorialogs/logsql/#extract_regexp-pipe) pipe.
|
||||
For example, the following query extracts `username` and `user_id` fields from text [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
|
||||
|
||||
```logsql
|
||||
_time:5m | extract "username=<username>, user_id=<user_id>,"
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [Replacing substrings in text fields](https://docs.victoriametrics.com/victorialogs/logsql/#replace-pipe)
|
||||
|
||||
|
||||
## How to filter out data after stats calculation?
|
||||
|
||||
Use [`filter` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#filter-pipe). For example, the following query
|
||||
returns only [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) with more than 1000 logs
|
||||
over the last 5 minutes:
|
||||
|
||||
```logsql
|
||||
_time:5m | stats by (_stream) count() rows | filter rows:>1000
|
||||
```
|
||||
|
||||
## How to calculate the number of logs per the given interval?
|
||||
|
||||
Use [`stats` by time bucket](https://docs.victoriametrics.com/victorialogs/logsql/#stats-by-time-buckets). For example, the following query
|
||||
returns per-hour number of logs with the `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) for the last day:
|
||||
|
||||
```logsql
|
||||
_time:1d error | stats by (_time:1h) count() rows | sort by (_time)
|
||||
```
|
||||
|
||||
This query uses [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe) in order to sort per-hour stats
|
||||
by [`_time`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field).
|
||||
|
||||
## How to calculate the number of logs per IPv4 subnetwork?
|
||||
|
||||
Use [`stats` by IPv4 bucket](https://docs.victoriametrics.com/victorialogs/logsql/#stats-by-ipv4-buckets). For example, the following
|
||||
query returns top 10 `/24` subnetworks with the biggest number of logs for the last 5 minutes:
|
||||
|
||||
```logsql
|
||||
_time:5m | stats by (ip:/24) count() rows | first 10 by (rows desc)
|
||||
```
|
||||
|
||||
This query uses [`first` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#first-pipe) in order to get up to 10 per-subnetwork stats
|
||||
with the biggest number of rows.
|
||||
|
||||
The query assumes the original logs have `ip` [field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) with the IPv4 address.
|
||||
If the IPv4 address is located inside [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) or any other text field,
|
||||
then it can be extracted with the [`extract`](https://docs.victoriametrics.com/victorialogs/logsql/#extract-pipe)
|
||||
or [`extract_regexp`](https://docs.victoriametrics.com/victorialogs/logsql/#extract_regexp-pipe) pipes. For example, the following query
|
||||
extracts IPv4 address from [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) and then returns top 10
|
||||
`/16` subnetworks with the biggest number of logs for the last 5 minutes:
|
||||
|
||||
```logsql
|
||||
_time:5m | extract_regexp "(?P<ip>([0-9]+[.]){3}[0-9]+)" | stats by (ip:/16) count() rows | first 10 by (rows desc)
|
||||
```
|
||||
|
||||
## How to calculate the number of logs per every value of the given field?
|
||||
|
||||
Use [`stats` by field](https://docs.victoriametrics.com/victorialogs/logsql/#stats-by-fields). For example, the following query
|
||||
calculates the number of logs per `level` [field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) for logs over the last 5 minutes:
|
||||
|
||||
```logsql
|
||||
_time:5m | stats by (level) count() rows
|
||||
```
|
||||
|
||||
An alternative is to use [`field_values` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#field_values-pipe):
|
||||
|
||||
```logsql
|
||||
_time:5m | field_values level
|
||||
```
|
||||
|
||||
## How to get unique values for the given field?
|
||||
|
||||
Use [`uniq` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#uniq-pipe). For example, the following query returns unique values for the `ip` field
|
||||
over logs for the last 5 minutes:
|
||||
|
||||
```logsql
|
||||
_time:5m | uniq by (ip)
|
||||
```
|
||||
|
||||
## How to get unique sets of values for the given fields?
|
||||
|
||||
Use [`uniq` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#uniq-pipe). For example, the following query returns unique sets for (`host`, `path`) fields
|
||||
over logs for the last 5 minutes:
|
||||
|
||||
```logsql
|
||||
_time:5m | uniq by (host, path)
|
||||
```
|
||||
|
||||
## How to return last N logs for the given query?
|
||||
|
||||
Use [`first` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#first-pipe). For example, the following query returns the last 10 logs with the `error`
|
||||
[word](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field)
|
||||
over the logs for the last 5 minutes:
|
||||
|
||||
```logsql
|
||||
_time:5m error | first 10 by (_time desc)
|
||||
```
|
||||
|
||||
It sorts the matching logs by [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) and then selects
|
||||
the last 10 logs with the highest values for the `_time` field.
|
||||
|
||||
If the query is sent to [`/select/logsql/query` HTTP API](https://docs.victoriametrics.com/victorialogs/querying/#querying-logs), then `limit=N` query arg
|
||||
can be passed to it in order to return up to `N` latest log entries. For example, the following command returns up to 10 latest log entries with the `error` word:
|
||||
|
||||
```sh
|
||||
curl http://localhost:9428/select/logsql/query -d 'query=error' -d 'limit=10'
|
||||
```
|
||||
|
||||
See also:
|
||||
|
||||
- [How to select recently ingested logs?](#how-to-select-recently-ingested-logs)
|
||||
- [How to return last N logs for the given query?](#how-to-return-last-n-logs-for-the-given-query)
|
||||
|
||||
|
||||
## How to calculate the share of error logs to the total number of logs?
|
||||
|
||||
Use the following query:
|
||||
|
||||
```logsql
|
||||
_time:5m | stats count() logs, count() if (error) errors | math errors / logs
|
||||
```
|
||||
|
||||
This query uses the following [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) features:
|
||||
|
||||
- [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) for selecting logs on the given time range (last 5 minutes in the query above).
|
||||
- [`stats` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe) with [additional filtering](https://docs.victoriametrics.com/victorialogs/logsql/#stats-with-additional-filters)
|
||||
for calculating the total number of logs and the number of logs with the `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) on the selected time range.
|
||||
- [`math` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#math-pipe) for calculating the share of logs with `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word)
|
||||
comparing to the total number of logs.
|
||||
|
||||
|
||||
## How to select logs for working hours and weekdays?
|
||||
|
||||
Use [`day_range`](https://docs.victoriametrics.com/victorialogs/logsql/#day-range-filter) and [`week_range`](https://docs.victoriametrics.com/victorialogs/logsql/#week-range-filter) filters.
|
||||
For example, the following query selects logs from Monday to Friday in working hours `[08:00 - 18:00]` over the last 4 weeks:
|
||||
|
||||
```logsql
|
||||
_time:4w _time:week_range[Mon, Fri] _time:day_range[08:00, 18:00)
|
||||
```
|
||||
|
||||
It uses implicit [`AND` logical filter](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter) for joining multiple filters
|
||||
on [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field).
|
||||
|
||||
## How to find logs with the given phrase containing whitespace?
|
||||
|
||||
Use [`phrase filter`](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter). For example, the following [LogsQL query](https://docs.victoriametrics.com/victorialogs/logsql/)
|
||||
returns logs with the `cannot open file` phrase over the last 5 minutes:
|
||||
|
||||
|
||||
```logsql
|
||||
_time:5m "cannot open file"
|
||||
```
|
||||
|
||||
## How to select all the logs for a particular stacktrace or panic?
|
||||
|
||||
Use [`stream_context` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#stream_context-pipe) for selecting surrounding logs for the given log.
|
||||
For example, the following query selects up to 10 logs in front of every log message containing the `stacktrace` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word),
|
||||
plus up to 100 logs after the given log message:
|
||||
|
||||
```logsql
|
||||
_time:5m stacktrace | stream_context before 10 after 100
|
||||
```
|
||||
|
||||
|
||||
## How to get the duration since the last seen log entry matching the given filter?
|
||||
|
||||
Use the following query:
|
||||
|
||||
```logsql
_time:1d ERROR
  | stats max(_time) as max_time
  | math round((now() - max_time) / 1s) as duration_seconds
```
|
||||
|
||||
It uses [`max()` stats function](https://docs.victoriametrics.com/victorialogs/logsql/#max-stats) for obtaining the maximum value
|
||||
for the [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) across all the logs for the last day,
|
||||
which contain the `ERROR` word in the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field).
|
||||
Then it uses `now()` function at [`math` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#math-pipe) for calculating
|
||||
the duration since the last seen log entry with the `ERROR` word.
|
||||
File diff suppressed because it is too large
Load Diff
@@ -1,16 +0,0 @@
|
||||
---
|
||||
title: Querying
|
||||
weight: 4
|
||||
menu:
|
||||
docs:
|
||||
identifier: victorialogs-querying
|
||||
parent: "victorialogs"
|
||||
weight: 4
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /VictoriaLogs/querying/
|
||||
- /victorialogs/querying/
|
||||
- /victorialogs/querying/index.html
|
||||
---
|
||||
{{% content "README.md" %}}
|
||||
@@ -1,175 +0,0 @@
|
||||
---
|
||||
weight:
|
||||
title: vlogscli
|
||||
disableToc: true
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs-querying"
|
||||
weight: 1
|
||||
tags:
|
||||
- logs
|
||||
---
|
||||
|
||||
`vlogscli` is an interactive command-line tool for querying [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/).
|
||||
It has the following features:
|
||||
|
||||
- It supports scrolling and searching over query results in the same way as `less` command does - see [these docs](#scrolling-query-results).
|
||||
- It supports canceling long-running queries at any time via `Ctrl+C`.
|
||||
- It supports query history - see [these docs](#query-history).
|
||||
- It supports different formats for query results (JSON, logfmt, compact, etc.) - see [these docs](#output-modes).
|
||||
- It supports live tailing - see [these docs](#live-tailing).
|
||||
|
||||
This tool can be obtained from the linked release pages at the [changelog](https://docs.victoriametrics.com/victorialogs/changelog/)
|
||||
or from docker images at [Docker Hub](https://hub.docker.com/r/victoriametrics/vlogscli/tags) and [Quay](https://quay.io/repository/victoriametrics/vlogscli?tab=tags).
|
||||
|
||||
### Running `vlogscli` from release binary
|
||||
|
||||
```sh
curl -L -O https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v1.24.0-victorialogs/vlogscli-linux-amd64-v1.24.0-victorialogs.tar.gz
tar xzf vlogscli-linux-amd64-v1.24.0-victorialogs.tar.gz
./vlogscli-prod
```
|
||||
|
||||
## Configuration
|
||||
|
||||
By default `vlogscli` sends queries to [`http://localhost:9428/select/logsql/query`](https://docs.victoriametrics.com/victorialogs/querying/#querying-logs).
|
||||
The URL to query can be changed via the `-datasource.url` command-line flag. For example, the following command instructs
`vlogscli` to send queries to `https://victoria-logs.some-domain.com/select/logsql/query`:
|
||||
|
||||
```sh
|
||||
./vlogscli -datasource.url='https://victoria-logs.some-domain.com/select/logsql/query'
|
||||
```
|
||||
|
||||
If some HTTP request headers must be passed to the querying API, then set `-header` command-line flag.
|
||||
For example, the following command starts `vlogscli`,
|
||||
which queries `(AccountID=123, ProjectID=456)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy):
|
||||
|
||||
```sh
|
||||
./vlogscli -header='AccountID: 123' -header='ProjectID: 456'
|
||||
```
|
||||
|
||||
## Multitenancy
|
||||
|
||||
`AccountID` and `ProjectID` [values](https://docs.victoriametrics.com/victorialogs/#multitenancy)
|
||||
can be set via `-accountID` and `-projectID` command-line flags:
|
||||
|
||||
```sh
|
||||
./vlogscli -accountID=123 -projectID=456
|
||||
```
|
||||
|
||||
|
||||
## Querying
|
||||
|
||||
After the start `vlogscli` provides a prompt for writing [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) queries.
|
||||
The query can be multi-line. It is sent to VictoriaLogs as soon as it contains `;` at the end or if a blank line follows the query.
|
||||
For example:
|
||||
|
||||
```sh
;> _time:1y | count();
executing [_time:1y | stats count(*) as "count(*)"]...; duration: 0.688s
{
  "count(*)": "1923019991"
}
```
|
||||
|
||||
`vlogscli` shows the actually executed query on the next line after the query input prompt.
|
||||
This helps debugging issues related to incorrectly written queries.
|
||||
|
||||
The next line after the query input prompt also shows the query duration. This helps debugging
|
||||
and optimizing slow queries.
|
||||
|
||||
Query execution can be interrupted at any time by pressing `Ctrl+C`.
|
||||
|
||||
Type `q` and then press `Enter` to exit from `vlogscli` (if you want to search for the `q` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word),
|
||||
then just wrap it into quotes: `"q"` or `'q'`).
|
||||
|
||||
See also:
|
||||
|
||||
- [output modes](#output-modes)
|
||||
- [query history](#query-history)
|
||||
- [scrolling query results](#scrolling-query-results)
|
||||
- [live tailing](#live-tailing)
|
||||
|
||||
|
||||
## Scrolling query results
|
||||
|
||||
If the query response exceeds vertical screen space, `vlogscli` pipes query response to `less` utility,
|
||||
so you can scroll the response as needed. This allows executing queries, which potentially
|
||||
may return billions of rows, without any problems at both VictoriaLogs and `vlogscli` sides,
|
||||
thanks to the way how `less` interacts with [`/select/logsql/query`](https://docs.victoriametrics.com/victorialogs/querying/#querying-logs):
|
||||
|
||||
- `less` reads the response when needed, e.g. when you scroll it down.
|
||||
`less` pauses reading the response when you stop scrolling. VictoriaLogs pauses processing the query
|
||||
when `less` stops reading the response, and automatically resumes processing the response
|
||||
when `less` continues reading it.
|
||||
- `less` closes the response stream after exit from scroll mode (e.g. by typing `q`).
|
||||
VictoriaLogs stops query processing and frees up all the associated resources
|
||||
after the response stream is closed.
|
||||
|
||||
See also [`less` docs](https://man7.org/linux/man-pages/man1/less.1.html) and
|
||||
[command-line integration docs for VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/#command-line).
|
||||
|
||||
|
||||
## Live tailing
|
||||
|
||||
`vlogscli` enters live tailing mode when the query is prepended with the `\tail ` command. For example,
the following query shows all the newly ingested logs containing the `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word)
in real time:
|
||||
|
||||
```
|
||||
;> \tail error;
|
||||
```
|
||||
|
||||
By default `vlogscli` derives [the URL for live tailing](https://docs.victoriametrics.com/victorialogs/querying/#live-tailing) from the `-datasource.url` command-line flag
|
||||
by replacing `/query` with `/tail` at the end of `-datasource.url`. The URL for live tailing can be specified explicitly via the `-tail.url` command-line flag.
|
||||
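For example, the following command (the URLs are illustrative) reads query results from one VictoriaLogs instance and tails logs from another one:

```sh
./vlogscli -datasource.url=http://victoria-logs-1:9428/select/logsql/query \
  -tail.url=http://victoria-logs-2:9428/select/logsql/tail
```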
|
||||
Live tailing can show query results in different formats - see [these docs](#output-modes).
|
||||
|
||||
|
||||
## Query history
|
||||
|
||||
`vlogscli` supports query history - press `up` and `down` keys for navigating the history.
|
||||
By default the history is stored in the `vlogscli-history` file in the directory where `vlogscli` runs,
|
||||
so the history is available between `vlogscli` runs.
|
||||
The path to the file can be changed via the `-historyFile` command-line flag.
|
||||
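For example, the following command stores the query history at a fixed, illustrative location, so it doesn't depend on the directory `vlogscli` is started from:

```sh
./vlogscli -historyFile=/home/user/.vlogscli-history
```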
|
||||
Quick tip: type some text and then press `Ctrl+R` to search the history for queries containing that text.
Press `Ctrl+R` multiple times to cycle through other matching queries in the history.
Press `Enter` when the needed query is found in order to execute it.
Press `Ctrl+C` to exit the `search history` mode.
|
||||
See also [other available shortcuts](https://github.com/chzyer/readline/blob/f533ef1caae91a1fcc90875ff9a5a030f0237c6a/doc/shortcut.md).
|
||||
|
||||
|
||||
## Output modes
|
||||
|
||||
By default `vlogscli` displays query results as a prettified JSON object with every field on a separate line.
|
||||
Fields in every JSON object are sorted in alphabetical order. This simplifies locating the needed fields.
|
||||
|
||||
`vlogscli` supports the following output modes:
|
||||
|
||||
* A single JSON line per result. Type `\s` and press `Enter` for this mode.
* Multiline JSON per result. Type `\m` and press `Enter` for this mode.
* Compact output. Type `\c` and press `Enter` for this mode.
  This mode shows field values as is if the response contains a single field
  (for example, if the [`fields _msg` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#fields-pipe) is used)
  plus an optional [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) - see the example after this list.
  See also [docs about ANSI colors](#ansi-colors).
* [Logfmt output](https://brandur.org/logfmt). Type `\logfmt` and press `Enter` for this mode.
|
||||
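For example, the compact mode is convenient when only the log message is needed. The query below is illustrative - it switches to the compact mode and then prints only the raw `_msg` values of the matching logs, one per line:

```sh
;> \c
;> _time:5m error | fields _msg;
```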
|
||||
|
||||
## Wrapping long lines
|
||||
|
||||
When displaying a response that doesn't fit the screen height, `vlogscli` doesn't wrap long lines that exceed the screen width.
This helps inspect responses with many lines. If you need to investigate the contents of long lines,
press the `->` and `<-` arrow keys on the keyboard to scroll horizontally.

Type `\wrap_long_lines` in the prompt and press `Enter` in order to toggle automatic wrapping of long lines.
|
||||
|
||||
## ANSI colors
|
||||
|
||||
By default `vlogscli` doesn't display colored text in the compact [output mode](#output-modes) if the returned logs contain [ANSI color codes](https://en.wikipedia.org/wiki/ANSI_escape_code).
It shows the raw ANSI color codes instead. Type `\enable_colors` to enable colored text. Type `\disable_color` to disable colored text.
|
||||
|
||||
ANSI colors make the logs harder to analyze, so it is recommended to strip ANSI colors at the data ingestion stage
according to [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#decolorizing).
|
||||
|
||||
@@ -1,119 +0,0 @@
|
||||
---
|
||||
weight: 120
|
||||
title: SQL to LogsQL tutorial
|
||||
menu:
|
||||
docs:
|
||||
parent: "victorialogs"
|
||||
weight: 120
|
||||
tags:
|
||||
- logs
|
||||
- guide
|
||||
---
|
||||
|
||||
This is a tutorial for migrating from SQL to [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
It is assumed you are familiar with SQL and know [how to execute queries at VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
|
||||
|
||||
|
||||
## data model
|
||||
|
||||
SQL is usually used for querying relational tables. Every such table contains a pre-defined set of columns with pre-defined types.
|
||||
LogsQL is used for querying logs. Logs are stored in [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
|
||||
So log streams are an analogue of tables in relational databases. Log streams and relational tables have the following major differences:
|
||||
|
||||
- Log streams are created automatically when the first log entry (row) is ingested into them.
|
||||
- There is no pre-defined schema in log streams - logs with an arbitrary set of fields can be ingested into every log stream.
  Both field names and values in every log entry are strings and may contain arbitrary data.
|
||||
- Every log entry (row) can be represented as a flat JSON object: `{"f1":"v1",...,"fN":"vN"}`. See [these docs](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
|
||||
- By default VictoriaLogs selects log entries across all the log streams. The needed set of log streams can be specified
|
||||
via [stream filters](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter).
|
||||
- By default VictoriaLogs returns all the fields across the selected logs. The set of returned fields
  can be limited with the [`fields` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#fields-pipe) - see the example after this list.
|
||||
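For example, the following query (the `app` stream field is an assumption made for the sake of the example) selects logs from a single log stream for the last hour and returns only the `_time` and `_msg` fields:

```logsql
_time:1h {app="nginx"} | fields _time, _msg
```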
|
||||
## query structure
|
||||
|
||||
SQL query structure is quite convoluted:
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
<fields, aggregations, calculations, transformations>
|
||||
FROM <table>
|
||||
<optional JOINs>
|
||||
<optional filters with optional subqueries>
|
||||
<optional GROUP BY>
|
||||
<optional HAVING>
|
||||
<optional ORDER BY>
|
||||
<optional LIMIT / OFFSET>
|
||||
<optional UNION>
|
||||
```
|
||||
|
||||
[LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) query structure is much simpler:
|
||||
|
||||
```logsql
|
||||
<filters>
|
||||
| <optional_pipe1>
|
||||
| ...
|
||||
| <optional_pipeN>
|
||||
```
|
||||
|
||||
The `<filters>` part selects the needed logs (rows) according to the provided [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters).
|
||||
Then the provided [pipes](https://docs.victoriametrics.com/victorialogs/logsql/#pipes) are executed sequentially.
|
||||
Every such pipe receives all the rows from the previous stage, performs some calculations and/or transformations,
|
||||
and then pushes the resulting rows to the next stage. This simplifies reading and understanding the query - just read it from the beginning
to the end in order to understand what it does at every stage.
|
||||
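For example, the following query (the `app` field name is illustrative) selects logs with the `error` word over the last hour, counts them per `app` and returns the 10 apps with the most errors:

```logsql
_time:1h error | stats by (app) count() as errors | sort by (errors desc) limit 10
```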
|
||||
LogsQL pipes cover all the functionality from SQL: aggregations, calculations, transformations, subqueries, joins, post-filters, sorting, etc.
|
||||
See the [conversion rules](#conversion-rules) on how to convert SQL to LogsQL.
|
||||
|
||||
## conversion rules
|
||||
|
||||
The following rules must be used for converting an SQL query into a LogsQL query:
|
||||
|
||||
* If the SQL query contains `WHERE`, then convert it into [LogsQL filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters).
|
||||
Otherwise just start LogsQL query with [`*`](https://docs.victoriametrics.com/victorialogs/logsql/#any-value-filter).
|
||||
For example, `SELECT * FROM table WHERE field1=value1 AND field2<>value2` is converted into `field1:=value1 field2:!=value2`,
|
||||
while `SELECT * FROM table` is converted into `*`.
|
||||
* `IN` subqueries inside `WHERE` must be converted into [`in` filters](https://docs.victoriametrics.com/victorialogs/logsql/#multi-exact-filter).
|
||||
For example, `SELECT * FROM table WHERE id IN (SELECT id2 FROM table)` is converted into `id:in(* | fields id2)`.
|
||||
* If the `SELECT` part isn't equal to `*` and there are no `GROUP BY` / aggregate functions in the SQL query, then enumerate
|
||||
  the selected columns in the [`fields` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#fields-pipe).
|
||||
For example, `SELECT field1, field2 FROM table` is converted into `* | fields field1, field2`.
|
||||
* If the SQL query contains `JOIN`, then convert it into [`join` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#join-pipe).
|
||||
* If the SQL query contains `GROUP BY` / aggregate functions, then convert them to [`stats` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe).
|
||||
For example, `SELECT count(*) FROM table` is converted into `* | count()`, while `SELECT user_id, count(*) FROM table GROUP BY user_id`
|
||||
is converted to `* | stats by (user_id) count()`. Note how the LogsQL query mentions the `GROUP BY` fields only once,
|
||||
  while SQL forces mentioning these fields twice - in the `SELECT` and in the `GROUP BY`. How many times have you hit a discrepancy
  between the `SELECT` and `GROUP BY` fields?
|
||||
* If the SQL query contains additional calculations and/or transformations in the `SELECT`, which aren't already covered by the `GROUP BY`,
|
||||
then convert them into the corresponding [LogsQL pipes](https://docs.victoriametrics.com/victorialogs/logsql/#pipes).
|
||||
The most frequently used pipes are [`math`](https://docs.victoriametrics.com/victorialogs/logsql/#math-pipe)
|
||||
and [`format`](https://docs.victoriametrics.com/victorialogs/logsql/#format-pipe).
|
||||
For example, `SELECT field1 + 10 AS x, CONCAT("foo", field2) AS y FROM table` is converted into `* | math field1 + 10 as x | format "foo<field2>" as y | fields x, y`.
|
||||
* If the SQL query contains `HAVING`, then convert it into [`filter` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#filter-pipe).
|
||||
For example, `SELECT user_id, count(*) AS c FROM table GROUP BY user_id HAVING c > 100` is converted into `* | stats by (user_id) count() c | filter c:>100`.
|
||||
* If the SQL query contains `ORDER BY`, `LIMIT` and `OFFSET`, then convert them into [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe).
|
||||
For example, `SELECT * FROM table ORDER BY field1, field2 LIMIT 10 OFFSET 20` is converted into `* | sort by (field1, field2) limit 10 offset 20`.
|
||||
* If the SQL query contains `UNION`, then convert it into [`union` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#union-pipe).
|
||||
For example `SELECT * FROM table WHERE filters1 UNION ALL SELECT * FROM table WHERE filters2` is converted into `filters1 | union (filters2)`.
|
||||
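The rules above can be combined. For example, the following SQL query (the table and column names are illustrative):

```sql
SELECT app, count(*) AS errors
FROM logs
WHERE level = 'error'
GROUP BY app
HAVING errors > 100
ORDER BY errors DESC
LIMIT 10
```

could be converted into the following LogsQL query:

```logsql
level:=error | stats by (app) count() errors | filter errors:>100 | sort by (errors desc) limit 10
```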
|
||||
SQL queries are frequently used for obtaining the top N column values, which occur most frequently in the selected rows.
For example, the query below returns the top 5 `user_id` values, which are present in the biggest number of rows:
|
||||
|
||||
```sql
|
||||
SELECT user_id, count(*) hits FROM table GROUP BY user_id ORDER BY hits DESC LIMIT 5
|
||||
```
|
||||
|
||||
LogsQL provides a shortcut syntax with [`top` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#top-pipe) for this case:
|
||||
|
||||
```logsql
|
||||
* | top 5 (user_id)
|
||||
```
|
||||
|
||||
It is equivalent to the longer LogsQL query:
|
||||
|
||||
```logsql
|
||||
* | by (user_id) count() hits | sort by (hits desc) limit 5
|
||||
```
|
||||
|
||||
[LogsQL pipes](https://docs.victoriametrics.com/victorialogs/logsql/#pipes) support much wider functionality compared to SQL,
so spend some of your spare time reading the [pipe docs](https://docs.victoriametrics.com/victorialogs/logsql/) and playing with them
at the [VictoriaLogs demo playground](https://play-vmlogs.victoriametrics.com/).
|
||||
@@ -1,15 +0,0 @@
|
||||
---
|
||||
weight: 9
|
||||
title: Grafana datasource
|
||||
editLink: https://github.com/VictoriaMetrics/victorialogs-datasource/blob/main/README.md
|
||||
menu:
|
||||
docs:
|
||||
identifier: victorialogs-grafana-datasource
|
||||
parent: victorialogs
|
||||
weight: 9
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /victorialogs/victorialogs-datasource.html
|
||||
---
|
||||
{{% content "grafana-datasource/README.md" %}}
|
||||
@@ -1,508 +0,0 @@
|
||||
---
|
||||
weight: 3
|
||||
menu:
|
||||
docs:
|
||||
parent: victorialogs
|
||||
weight: 3
|
||||
title: vlagent
|
||||
tags:
|
||||
- logs
|
||||
aliases:
|
||||
- /vlagent.html
|
||||
- /vlagent/index.html
|
||||
- /vlagent/
|
||||
---
|
||||
|
||||
`vlagent` is a tiny agent which helps you collect logs from various sources
|
||||
and store them in [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/).
|
||||
See [Quick Start](#quick-start) for details.
|
||||
|
||||
|
||||
## Motivation
|
||||
|
||||
While VictoriaLogs provides an efficient solution for storing and observing logs, it lacks replication out of the box.
Previously, the solution was to configure clients to replicate log streams into multiple VictoriaLogs installations.
`vlagent` is the missing piece for log stream replication.
|
||||
|
||||
## Features
|
||||
|
||||
- Accepts logs from popular log collectors. See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
- Replicates the collected logs simultaneously to multiple VictoriaLogs instances - see [these docs](#replication-and-high-availability).
- Works smoothly in environments with unstable connections to remote storage. If the remote storage is unavailable, the collected logs
  are buffered at `-remoteWrite.tmpDataPath`. The buffered logs are sent to remote storage as soon as the connection
  to the remote storage is repaired. The maximum disk usage for the buffer can be limited with `-remoteWrite.maxDiskUsagePerURL`.
|
||||
|
||||
## Quick Start
|
||||
|
||||
Please download the `vlagent` archive from the [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
(`vlagent` is also available as Docker images at [Docker Hub](https://hub.docker.com/r/victoriametrics/vlagent/tags) and [Quay](https://quay.io/repository/victoriametrics/vlagent?tab=tags)),
unpack it and pass the following flags to the `vlagent` binary in order to start sending the data to the VictoriaLogs remote storage:
|
||||
|
||||
* `-remoteWrite.url` with the VictoriaLogs native protocol-compatible remote storage endpoint where the data should be sent.
  The `-remoteWrite.url` may refer to a [DNS SRV](https://en.wikipedia.org/wiki/SRV_record) address. See [these docs](#srv-urls) for details.
|
||||
|
||||
Example command for writing the data received via [supported push-based protocols](#how-to-push-data-to-vlagent)
|
||||
to [single-node VictoriaLogs](https://docs.victoriametrics.com/victorialogs) located at `victoria-logs-host:9428`:
|
||||
|
||||
```sh
|
||||
/path/to/vlagent -remoteWrite.url=https://victoria-logs-host:9428/internal/insert
|
||||
```
|
||||
|
||||
Pass `-help` to `vlagent` in order to see [the full list of supported command-line flags with their descriptions](#advanced-usage).
|
||||
|
||||
### Replication and high availability
|
||||
|
||||
`vlagent` replicates the collected logs among multiple remote storage instances configured via `-remoteWrite.url` args.
|
||||
If a single remote storage instance is temporarily out of service, then the collected data remains available in another remote storage instance.
|
||||
`vlagent` buffers the collected data in files at `-remoteWrite.tmpDataPath` until the remote storage becomes available again,
|
||||
and then it sends the buffered data to the remote storage in order to prevent data gaps.
|
||||
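For example, the following command (the host names are illustrative) replicates all the collected logs to two VictoriaLogs instances:

```sh
/path/to/vlagent \
  -remoteWrite.url=http://victoria-logs-1:9428/internal/insert \
  -remoteWrite.url=http://victoria-logs-2:9428/internal/insert
```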
|
||||
## Monitoring
|
||||
|
||||
`vlagent` exports various metrics in Prometheus exposition format at the `http://vlagent-host:9429/metrics` page.
|
||||
We recommend setting up regular scraping of this page either via `vmagent` or via a Prometheus-compatible scraper,
so that the exported metrics can be analyzed later.
|
||||
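For example, a minimal `vmagent` or Prometheus scrape config for this page could look as follows (the target address is an assumption - adjust it to your setup):

```yaml
scrape_configs:
  - job_name: vlagent
    static_configs:
      - targets: ["vlagent-host:9429"]
```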
|
||||
Use official [Grafana dashboard](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/dashboards/vlagent.json) for `vlagent` state overview.
|
||||
Graphs on this dashboard contain useful hints - hover over the `i` icon in the top left corner of each graph in order to read them.
|
||||
If you have suggestions for improvements or have found a bug - please open an issue on GitHub or add a review to the dashboard.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
* It is recommended to [set up the official Grafana dashboard](#monitoring) in order to monitor the state of `vlagent`.
|
||||
|
||||
* It is recommended to increase `-remoteWrite.queues` if the `vlagent_remotewrite_pending_data_bytes` [metric](#monitoring)
  grows constantly. It is also recommended to increase the `-remoteWrite.maxBlockSize` command-line flag in this case.
|
||||
This can improve data ingestion performance to the configured remote storage systems at the cost of higher memory usage.
|
||||
|
||||
* If you see gaps in the data pushed by `vlagent` to remote storage when `-remoteWrite.maxDiskUsagePerURL` is set,
|
||||
try increasing `-remoteWrite.queues`. Such gaps may appear because `vlagent` cannot keep up with sending the collected data to remote storage.
|
||||
Therefore, it starts dropping the buffered data if the on-disk buffer size exceeds `-remoteWrite.maxDiskUsagePerURL`.
|
||||
|
||||
* `vlagent` drops data blocks if remote storage replies with `400 Bad Request` and `404 Not Found` HTTP responses.
|
||||
The number of dropped blocks can be monitored via `vlagent_remotewrite_packets_dropped_total` metric exported at [/metrics page](#monitoring).
|
||||
|
||||
* `vlagent` buffers the collected data in the `-remoteWrite.tmpDataPath` directory until it is sent to `-remoteWrite.url`.
  The directory can grow large when remote storage is unavailable for extended periods of time and if the maximum directory size isn't limited
  with the `-remoteWrite.maxDiskUsagePerURL` command-line flag.
  If you don't want to send all the buffered data from the directory to remote storage, then simply stop `vlagent` and delete the directory.
|
||||
|
||||
* By default `vlagent` masks `-remoteWrite.url` with `secret-url` values in logs and on the `/metrics` page, because
  the URL may contain sensitive information such as auth tokens or passwords.
  Pass the `-remoteWrite.showURL` command-line flag when starting `vlagent` in order to see the actual URLs.
|
||||
|
||||
See also:
|
||||
|
||||
- [General Troubleshooting](https://docs.victoriametrics.com/victoriametrics/troubleshooting/)
|
||||
|
||||
|
||||
## Profiling
|
||||
|
||||
`vlagent` provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
|
||||
|
||||
* Memory profile can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
|
||||
```sh
|
||||
curl http://0.0.0.0:9429/debug/pprof/heap > mem.pprof
|
||||
```
|
||||
|
||||
|
||||
* CPU profile can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
|
||||
```sh
|
||||
curl http://0.0.0.0:9429/debug/pprof/profile > cpu.pprof
|
||||
```
|
||||
|
||||
|
||||
The command for collecting CPU profile waits for 30 seconds before returning.
|
||||
|
||||
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
|
||||
|
||||
It is safe to share the collected profiles from a security point of view, since they do not contain sensitive information.
|
||||
|
||||
## Advanced usage
|
||||
|
||||
`vlagent` can be fine-tuned with various command-line flags. Run `./vlagent -help` in order to see the full list of these flags with their descriptions and default values:
|
||||
|
||||
```bash
|
||||
vlagent collects logs via popular data ingestion protocols and routes it to VictoriaLogs.
|
||||
|
||||
See the docs at https://docs.victoriametrics.com/victorialogs/vlagent/ .
|
||||
|
||||
-blockcache.missesBeforeCaching int
|
||||
The number of cache misses before putting the block into cache. Higher values may reduce indexdb/dataBlocks cache size at the cost of higher CPU and disk read usage (default 2)
|
||||
-datadog.ignoreFields array
|
||||
Comma-separated list of fields to ignore for logs ingested via DataDog protocol. See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#dropping-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-datadog.maxRequestSize size
|
||||
The maximum size in bytes of a single DataDog request
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
|
||||
-datadog.streamFields array
|
||||
Comma-separated list of fields to use as log stream fields for logs ingested via DataDog protocol. See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#stream-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-defaultMsgValue string
|
||||
Default value for _msg field if the ingested log entry doesn't contain it; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field (default "missing _msg field; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field")
|
||||
-elasticsearch.version string
|
||||
Elasticsearch version to report to client (default "8.9.0")
|
||||
-enableTCP6
|
||||
Whether to enable IPv6 for listening and dialing. By default, only IPv4 TCP and UDP are used
|
||||
-envflag.enable
|
||||
Whether to enable reading flags from environment variables in addition to the command line. Command line flag values have priority over values from environment vars. Flags are read only from the command line if this flag isn't set. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#environment-variables for more details
|
||||
-envflag.prefix string
|
||||
Prefix for environment variables if -envflag.enable is set
|
||||
-filestream.disableFadvise
|
||||
Whether to disable fadvise() syscall when reading large data files. The fadvise() syscall prevents from eviction of recently accessed data from OS page cache during background merges and backups. In some rare cases it is better to disable the syscall if it uses too much CPU
|
||||
-flagsAuthKey value
|
||||
Auth key for /flags endpoint. It must be passed via authKey query arg. It overrides -httpAuth.*
|
||||
Flag value can be read from the given file when using -flagsAuthKey=file:///abs/path/to/file or -flagsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -flagsAuthKey=http://host/path or -flagsAuthKey=https://host/path
|
||||
-fs.disableMmap
|
||||
Whether to use pread() instead of mmap() for reading data files. By default, mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread()
|
||||
-http.connTimeout duration
|
||||
Incoming connections to -httpListenAddr are closed after the configured timeout. This may help evenly spreading load among a cluster of services behind TCP-level load balancer. Zero value disables closing of incoming connections (default 2m0s)
|
||||
-http.disableCORS
|
||||
Disable CORS for all origins (*)
|
||||
-http.disableKeepAlive
|
||||
Whether to disable HTTP keep-alive for incoming connections at -httpListenAddr
|
||||
-http.disableResponseCompression
|
||||
Disable compression of HTTP responses to save CPU resources. By default, compression is enabled to save network bandwidth
|
||||
-http.header.csp string
|
||||
Value for 'Content-Security-Policy' header, recommended: "default-src 'self'"
|
||||
-http.header.frameOptions string
|
||||
Value for 'X-Frame-Options' header
|
||||
-http.header.hsts string
|
||||
Value for 'Strict-Transport-Security' header, recommended: 'max-age=31536000; includeSubDomains'
|
||||
-http.idleConnTimeout duration
|
||||
Timeout for incoming idle http connections (default 1m0s)
|
||||
-http.maxGracefulShutdownDuration duration
|
||||
The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s)
|
||||
-http.pathPrefix string
|
||||
An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus
|
||||
-http.shutdownDelay duration
|
||||
Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers
|
||||
-httpAuth.password value
|
||||
Password for HTTP server's Basic Auth. The authentication is disabled if -httpAuth.username is empty
|
||||
Flag value can be read from the given file when using -httpAuth.password=file:///abs/path/to/file or -httpAuth.password=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -httpAuth.password=http://host/path or -httpAuth.password=https://host/path
|
||||
-httpAuth.username string
|
||||
Username for HTTP server's Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
|
||||
-httpListenAddr array
|
||||
TCP address to listen for incoming http requests. Set this flag to empty value in order to disable listening on any port. This mode may be useful for running multiple vlagent instances on the same server. Note that /targets and /metrics pages aren't available if -httpListenAddr=''. See also -tls and -httpListenAddr.useProxyProtocol
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-httpListenAddr.useProxyProtocol array
|
||||
Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-insert.disable
|
||||
Whether to disable /insert/* HTTP endpoints
|
||||
-insert.maxFieldsPerLine int
|
||||
The maximum number of log fields per line, which can be read by /insert/* handlers; see https://docs.victoriametrics.com/victorialogs/faq/#how-many-fields-a-single-log-entry-may-contain (default 1000)
|
||||
-insert.maxLineSizeBytes size
|
||||
The maximum size of a single line, which can be read by /insert/* handlers; see https://docs.victoriametrics.com/victorialogs/faq/#what-length-a-log-record-is-expected-to-have
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 262144)
|
||||
-insert.maxQueueDuration duration
|
||||
The maximum duration to wait in the queue when -maxConcurrentInserts concurrent insert requests are executed (default 1m0s)
|
||||
-internStringCacheExpireDuration duration
|
||||
The expiry duration for caches for interned strings. See https://en.wikipedia.org/wiki/String_interning . See also -internStringMaxLen and -internStringDisableCache (default 6m0s)
|
||||
-internStringDisableCache
|
||||
Whether to disable caches for interned strings. This may reduce memory usage at the cost of higher CPU usage. See https://en.wikipedia.org/wiki/String_interning . See also -internStringCacheExpireDuration and -internStringMaxLen
|
||||
-internStringMaxLen int
|
||||
The maximum length for strings to intern. A lower limit may save memory at the cost of higher CPU usage. See https://en.wikipedia.org/wiki/String_interning . See also -internStringDisableCache and -internStringCacheExpireDuration (default 500)
|
||||
-internalinsert.disable
|
||||
Whether to disable /internal/insert HTTP endpoint
|
||||
-internalinsert.maxRequestSize size
|
||||
The maximum size in bytes of a single request, which can be accepted at /internal/insert HTTP endpoint
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
|
||||
-journald.ignoreFields array
|
||||
Comma-separated list of fields to ignore for logs ingested over journald protocol. See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#dropping-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-journald.includeEntryMetadata
|
||||
Include Journald fields with double underscore prefixes
|
||||
-journald.streamFields array
|
||||
Comma-separated list of fields to use as log stream fields for logs ingested over journald protocol. See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#stream-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-journald.tenantID string
|
||||
TenantID for logs ingested via the Journald endpoint. See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#multitenancy (default "0:0")
|
||||
-journald.timeField string
|
||||
Field to use as a log timestamp for logs ingested via journald protocol. See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#time-field (default "__REALTIME_TIMESTAMP")
|
||||
-loggerDisableTimestamps
|
||||
Whether to disable writing timestamps in logs
|
||||
-loggerErrorsPerSecondLimit int
|
||||
Per-second limit on the number of ERROR messages. If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit
|
||||
-loggerFormat string
|
||||
Format for logs. Possible values: default, json (default "default")
|
||||
-loggerJSONFields string
|
||||
Allows renaming fields in JSON formatted logs. Example: "ts:timestamp,msg:message" renames "ts" to "timestamp" and "msg" to "message". Supported fields: ts, level, caller, msg
|
||||
-loggerLevel string
|
||||
Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "INFO")
|
||||
-loggerMaxArgLen int
|
||||
The maximum length of a single logged argument. Longer arguments are replaced with 'arg_start..arg_end', where 'arg_start' and 'arg_end' is prefix and suffix of the arg with the length not exceeding -loggerMaxArgLen / 2 (default 5000)
|
||||
-loggerOutput string
|
||||
Output for the logs. Supported values: stderr, stdout (default "stderr")
|
||||
-loggerTimezone string
|
||||
Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default "UTC")
|
||||
-loggerWarnsPerSecondLimit int
|
||||
Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit
|
||||
-loki.disableMessageParsing
|
||||
Whether to disable automatic parsing of JSON-encoded log fields inside Loki log message into distinct log fields
|
||||
-loki.maxRequestSize size
|
||||
The maximum size in bytes of a single Loki request
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
|
||||
-maxConcurrentInserts int
|
||||
The maximum number of concurrent insert requests. Set higher value when clients send data over slow networks. Default value depends on the number of available CPU cores. It should work fine in most cases since it minimizes resource usage. See also -insert.maxQueueDuration (default 20)
|
||||
-memory.allowedBytes size
|
||||
Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from the OS page cache resulting in higher disk IO usage
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
|
||||
-memory.allowedPercent float
|
||||
Allowed percent of system memory VictoriaMetrics caches may occupy. See also -memory.allowedBytes. Too low a value may increase cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from the OS page cache which will result in higher disk IO usage (default 60)
|
||||
-metrics.exposeMetadata
|
||||
Whether to expose TYPE and HELP metadata at the /metrics page, which is exposed at -httpListenAddr . The metadata may be needed when the /metrics page is consumed by systems, which require this information. For example, Managed Prometheus in Google Cloud - https://cloud.google.com/stackdriver/docs/managed-prometheus/troubleshooting#missing-metric-type
|
||||
-metricsAuthKey value
|
||||
Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides -httpAuth.*
|
||||
Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path
|
||||
-opentelemetry.maxRequestSize size
|
||||
The maximum size in bytes of a single OpenTelemetry request
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
|
||||
-pprofAuthKey value
|
||||
Auth key for /debug/pprof/* endpoints. It must be passed via authKey query arg. It overrides -httpAuth.*
|
||||
Flag value can be read from the given file when using -pprofAuthKey=file:///abs/path/to/file or -pprofAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -pprofAuthKey=http://host/path or -pprofAuthKey=https://host/path
|
||||
-pushmetrics.disableCompression
|
||||
Whether to disable request body compression when pushing metrics to every -pushmetrics.url
|
||||
-pushmetrics.extraLabel array
|
||||
Optional labels to add to metrics pushed to every -pushmetrics.url . For example, -pushmetrics.extraLabel='instance="foo"' adds instance="foo" label to all the metrics pushed to every -pushmetrics.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-pushmetrics.header array
|
||||
Optional HTTP request header to send to every -pushmetrics.url . For example, -pushmetrics.header='Authorization: Basic foobar' adds 'Authorization: Basic foobar' header to every request to every -pushmetrics.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-pushmetrics.interval duration
|
||||
Interval for pushing metrics to every -pushmetrics.url (default 10s)
|
||||
-pushmetrics.url array
|
||||
Optional URL to push metrics exposed at /metrics page. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#push-metrics . By default, metrics exposed at /metrics page aren't pushed to any remote storage
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.basicAuth.password array
|
||||
Optional basic auth password to use for the corresponding -remoteWrite.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.basicAuth.passwordFile array
|
||||
Optional path to basic auth password to use for the corresponding -remoteWrite.url. The file is re-read every second
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.basicAuth.username array
|
||||
Optional basic auth username to use for the corresponding -remoteWrite.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.bearerToken array
|
||||
Optional bearer auth token to use for the corresponding -remoteWrite.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.bearerTokenFile array
|
||||
Optional path to bearer token file to use for the corresponding -remoteWrite.url. The token is re-read from the file every second
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.flushInterval duration
|
||||
Interval for flushing the data to remote storage. This option takes effect only when less than 2MB of data per second are pushed to -remoteWrite.url (default 1s)
|
||||
-remoteWrite.headers array
|
||||
Optional HTTP headers to send with each request to the corresponding -remoteWrite.url. For example, -remoteWrite.headers='My-Auth:foobar' would send 'My-Auth: foobar' HTTP header with every request to the corresponding -remoteWrite.url. Multiple headers must be delimited by '^^': -remoteWrite.headers='header1:value1^^header2:value2'
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.maxBlockSize size
|
||||
The maximum block size to send to remote storage. Bigger blocks may improve performance at the cost of the increased memory usage.
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 8388608)
|
||||
-remoteWrite.maxDiskUsagePerURL array
|
||||
The maximum file-based buffer size in bytes at -remoteWrite.tmpDataPath for each -remoteWrite.url. When buffer size reaches the configured maximum, then old data is dropped when adding new data to the buffer. Buffered data is stored in ~500MB chunks. It is recommended to set the value for this flag to a multiple of the block size 500MB. Disk usage is unlimited if the value is set to 0
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB. (default 0)
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to default value.
|
||||
-remoteWrite.oauth2.clientID array
|
||||
Optional OAuth2 clientID to use for the corresponding -remoteWrite.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.oauth2.clientSecret array
|
||||
Optional OAuth2 clientSecret to use for the corresponding -remoteWrite.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.oauth2.clientSecretFile array
|
||||
Optional OAuth2 clientSecretFile to use for the corresponding -remoteWrite.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.oauth2.endpointParams array
|
||||
Optional OAuth2 endpoint parameters to use for the corresponding -remoteWrite.url . The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.oauth2.scopes array
|
||||
Optional OAuth2 scopes to use for the corresponding -remoteWrite.url. Scopes must be delimited by ';'
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.oauth2.tokenUrl array
|
||||
Optional OAuth2 tokenURL to use for the corresponding -remoteWrite.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.proxyURL array
|
||||
Optional proxy URL for writing data to the corresponding -remoteWrite.url. Supported proxies: http, https, socks5. Example: -remoteWrite.proxyURL=socks5://proxy:1234
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.queues int
|
||||
The number of concurrent queues to each -remoteWrite.url. Set more queues if default number of queues isn't enough for sending high volume of collected data to remote storage. Default value depends on the number of available CPU cores. It should work fine in most cases since it minimizes resource usage (default 20)
|
||||
-remoteWrite.rateLimit array
|
||||
Optional rate limit in bytes per second for data sent to the corresponding -remoteWrite.url. By default, the rate limit is disabled. It can be useful for limiting load on remote storage when big amounts of buffered data (default 0)
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to default value.
|
||||
-remoteWrite.retryMaxTime array
|
||||
The max time spent on retry attempts to send a block of data to the corresponding -remoteWrite.url. Change this value if it is expected for -remoteWrite.url to be unreachable for more than -remoteWrite.retryMaxTime. See also -remoteWrite.retryMinInterval (default 1m0s)
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to default value.
|
||||
-remoteWrite.retryMinInterval array
|
||||
The minimum delay between retry attempts to send a block of data to the corresponding -remoteWrite.url. Every next retry attempt will double the delay to prevent hammering of remote database. See also -remoteWrite.retryMaxTime (default 1s)
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to default value.
|
||||
-remoteWrite.sendTimeout array
|
||||
Timeout for sending a single block of data to the corresponding -remoteWrite.url (default 1m0s)
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to default value.
|
||||
-remoteWrite.showURL
|
||||
Whether to show -remoteWrite.url in the exported metrics. It is hidden by default, since it can contain sensitive info such as auth key
|
||||
-remoteWrite.tlsCAFile array
|
||||
Optional path to TLS CA file to use for verifying connections to the corresponding -remoteWrite.url. By default, system CA is used
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.tlsCertFile array
|
||||
Optional path to client-side TLS certificate file to use when connecting to the corresponding -remoteWrite.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.tlsHandshakeTimeout array
|
||||
The timeout for establishing tls connections to the corresponding -remoteWrite.url (default 20s)
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to default value.
|
||||
-remoteWrite.tlsInsecureSkipVerify array
|
||||
Whether to skip tls verification when connecting to the corresponding -remoteWrite.url
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-remoteWrite.tlsKeyFile array
|
||||
Optional path to client-side TLS certificate key to use when connecting to the corresponding -remoteWrite.url
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.tlsServerName array
|
||||
Optional TLS server name to use for connections to the corresponding -remoteWrite.url. By default, the server name from -remoteWrite.url is used
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.tmpDataPath string
|
||||
Path to directory for storing pending data, which isn't sent to the configured -remoteWrite.url . See also -remoteWrite.maxDiskUsagePerURL (default "vlagent-remotewrite-data")
|
||||
-remoteWrite.url array
|
||||
Remote storage URL to write data to. It must support VictoriaLogs native protocol. Example url: http://<victorialogs-host>:9428/internal/insert. Pass multiple -remoteWrite.url options in order to replicate the collected data to multiple remote storage systems.
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.compressMethod.tcp array
|
||||
Compression method for syslog messages received at the corresponding -syslog.listenAddr.tcp. Supported values: none, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.compressMethod.udp array
|
||||
Compression method for syslog messages received at the corresponding -syslog.listenAddr.udp. Supported values: none, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.decolorizeFields.tcp array
|
||||
Fields to remove ANSI color codes across logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#decolorizing-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.decolorizeFields.udp array
|
||||
Fields to remove ANSI color codes across logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#decolorizing-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.extraFields.tcp array
|
||||
Fields to add to logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#adding-extra-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.extraFields.udp array
|
||||
Fields to add to logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#adding-extra-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.ignoreFields.tcp array
|
||||
Fields to ignore at logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#dropping-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.ignoreFields.udp array
|
||||
Fields to ignore at logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#dropping-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.listenAddr.tcp array
|
||||
Comma-separated list of TCP addresses to listen to for Syslog messages. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.listenAddr.udp array
|
||||
Comma-separated list of UDP address to listen to for Syslog messages. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.streamFields.tcp array
|
||||
Fields to use as log stream labels for logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#stream-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.streamFields.udp array
|
||||
Fields to use as log stream labels for logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#stream-fields
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.tenantID.tcp array
|
||||
TenantID for logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#multitenancy
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.tenantID.udp array
|
||||
TenantID for logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#multitenancy
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-syslog.timezone string
|
||||
Timezone to use when parsing timestamps in RFC3164 syslog messages. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 . See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/ (default "Local")
|
||||
-syslog.tls array
|
||||
Whether to enable TLS for receiving syslog messages at the corresponding -syslog.listenAddr.tcp. The corresponding -syslog.tlsCertFile and -syslog.tlsKeyFile must be set if -syslog.tls is set. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-syslog.tlsCertFile array
|
||||
  Path to file with TLS certificate for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
    Supports an array of values separated by comma or specified via multiple flags.
    Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.tlsCipherSuites array
  Optional list of TLS cipher suites for -syslog.listenAddr.tcp if -syslog.tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants . See also https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
    Supports an array of values separated by comma or specified via multiple flags.
    Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.tlsKeyFile array
  Path to file with TLS key for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
    Supports an array of values separated by comma or specified via multiple flags.
    Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.tlsMinVersion string
  The minimum TLS version to use for -syslog.listenAddr.tcp if -syslog.tls is set. Supported values: TLS10, TLS11, TLS12, TLS13. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security (default "TLS13")
-syslog.useLocalTimestamp.tcp array
  Whether to use local timestamp instead of the original timestamp for the ingested syslog messages at the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps
    Supports array of values separated by comma or specified via multiple flags.
    Empty values are set to false.
-syslog.useLocalTimestamp.udp array
  Whether to use local timestamp instead of the original timestamp for the ingested syslog messages at the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps
    Supports array of values separated by comma or specified via multiple flags.
    Empty values are set to false.
-tls array
  Whether to enable TLS for incoming HTTP requests at the given -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set. See also -mtls
    Supports array of values separated by comma or specified via multiple flags.
    Empty values are set to false.
-tlsCertFile array
  Path to file with TLS certificate for the corresponding -httpListenAddr if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated. See also -tlsAutocertHosts
    Supports an array of values separated by comma or specified via multiple flags.
    Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-tlsCipherSuites array
  Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
    Supports an array of values separated by comma or specified via multiple flags.
    Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-tlsKeyFile array
  Path to file with TLS key for the corresponding -httpListenAddr if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated. See also -tlsAutocertHosts
    Supports an array of values separated by comma or specified via multiple flags.
    Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-tlsMinVersion array
  Optional minimum TLS version to use for the corresponding -httpListenAddr if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
    Supports an array of values separated by comma or specified via multiple flags.
    Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-version
  Show VictoriaMetrics version
```
|
||||
@@ -1,314 +0,0 @@
---
weight: 10
title: vmalert
menu:
  docs:
    parent: "victorialogs"
    weight: 10
    identifier: "victorialogs-vmalert"
tags:
  - logs
  - metrics
aliases:
  - /victorialogs/vmalert.html
---
|
||||
|
||||
[vmalert](https://docs.victoriametrics.com/victoriametrics/vmalert/){{% available_from "v1.106.0" %}} integrates with VictoriaLogs {{% available_from "v0.36.0" "logs" %}} via stats APIs [`/select/logsql/stats_query`](https://docs.victoriametrics.com/victorialogs/querying/#querying-log-stats)
|
||||
and [`/select/logsql/stats_query_range`](https://docs.victoriametrics.com/victorialogs/querying/#querying-log-range-stats).
|
||||
These endpoints return the log stats in a format compatible with [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries).
|
||||
It allows using VictoriaLogs as the datasource in vmalert, creating alerting and recording rules via [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
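If needed, these stats endpoints can also be queried directly, for example with curl. The sketch below is illustrative only and assumes a VictoriaLogs instance listening on localhost:9428; the field names and filters are made up for the example, and the query must end with a `stats` pipe:

```sh
# Instant stats (response format is compatible with Prometheus /api/v1/query)
curl -s http://localhost:9428/select/logsql/stats_query \
  -d 'query=_time:5m error | stats count() as errors'

# Range stats (response format is compatible with Prometheus /api/v1/query_range)
curl -s http://localhost:9428/select/logsql/stats_query_range \
  -d 'query=error | stats count() as errors' \
  -d 'start=2024-01-01T00:00:00Z' \
  -d 'end=2024-01-01T01:00:00Z' \
  -d 'step=5m'
```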
|
||||
|
||||
_Note: this page covers only the integration of vmalert with VictoriaLogs. See the full vmalert documentation [here](https://docs.victoriametrics.com/victoriametrics/vmalert/)._
|
||||
|
||||
## Quick Start
|
||||
|
||||
Run vmalert with the following settings:
|
||||
```sh
|
||||
./bin/vmalert -rule=alert.rules \ # Path to the files or http url with alerting and/or recording rules in YAML format
|
||||
-datasource.url=http://victorialogs:9428 \ # VictoriaLogs address
|
||||
-notifier.url=http://alertmanager:9093 \ # AlertManager URL (required if alerting rules are used)
|
||||
-remoteWrite.url=http://victoriametrics:8428 \ # Remote write compatible storage to persist recording rules and alerts state info
|
||||
-remoteRead.url=http://victoriametrics:8428 \ # Prometheus HTTP API compatible datasource to restore alerts state from
|
||||
```
|
||||
|
||||
> Note: By default, vmalert assumes all configured rules have the `prometheus` type and validates them accordingly.
> For rules written in [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/), specify `type: vlogs` at the [Group level](#groups),
> or set the `-rule.defaultRuleType=vlogs` cmd-line flag to change the default rule type.
|
||||
|
||||
Each `-rule` file may contain an arbitrary number of [groups](https://docs.victoriametrics.com/victoriametrics/vmalert/#groups).
See examples in the [Groups](#groups) section. See the full list of configuration flags and their descriptions in the [configuration](#configuration) section.
|
||||
|
||||
With the configuration example above, vmalert performs the following interactions:
|
||||

|
||||
|
||||
1. Rules listed in the `-rule` files are executed against the VictoriaLogs service configured via `-datasource.url`;
2. Triggered alerting notifications are sent to the [Alertmanager](https://github.com/prometheus/alertmanager) service configured via `-notifier.url`;
3. Results of recording rule expressions and alerts state are persisted to the Prometheus-compatible remote-write endpoint (e.g. VictoriaMetrics) configured via `-remoteWrite.url`;
4. On vmalert restarts, alerts state [can be restored](https://docs.victoriametrics.com/victoriametrics/vmalert/#alerts-state-on-restarts) by querying the Prometheus-compatible HTTP API endpoint (e.g. VictoriaMetrics) configured via `-remoteRead.url`.
|
||||
|
||||
## Configuration
|
||||
|
||||
### Flags
|
||||
|
||||
For a complete list of command-line flags, visit https://docs.victoriametrics.com/victoriametrics/vmalert/#flags or run `./vmalert --help`.
|
||||
The following are key flags related to integration with VictoriaLogs:
|
||||
|
||||
```shellhelp
|
||||
-datasource.url string
|
||||
Datasource address supporting log stats APIs, which can be a single VictoriaLogs node or a proxy in front of VictoriaLogs. Supports address in the form of IP address with a port (e.g., http://127.0.0.1:8428) or DNS SRV record.
|
||||
-notifier.url array
|
||||
Prometheus Alertmanager URL, e.g. http://127.0.0.1:9093. List all Alertmanager URLs if it runs in the cluster mode to ensure high availability.
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-remoteWrite.url string
|
||||
Optional URL to VictoriaMetrics or vminsert where to persist alerts state and recording rules results in form of timeseries. Supports address in the form of IP address with a port (e.g., http://127.0.0.1:8428) or DNS SRV record. For example, if -remoteWrite.url=http://127.0.0.1:8428 is specified, then the alerts state will be written to http://127.0.0.1:8428/api/v1/write . See also -remoteWrite.disablePathAppend, '-remoteWrite.showURL'.
|
||||
-remoteRead.url string
|
||||
Optional URL to datasource compatible with MetricsQL. It can be a single-node VictoriaMetrics or vmselect. Remote read is used to restore alerts state. This configuration makes sense only if vmalert was configured with `remoteWrite.url` before and has successfully persisted its state. Supports address in the form of IP address with a port (e.g., http://127.0.0.1:8428) or DNS SRV record. See also '-remoteRead.disablePathAppend', '-remoteRead.showURL'.
|
||||
-rule array
|
||||
Path to the files or http url with alerting and/or recording rules in YAML format.
|
||||
Supports hierarchical patterns and regexps.
|
||||
Examples:
|
||||
-rule="/path/to/file". Path to a single file with alerting rules.
|
||||
-rule="http://<some-server-addr>/path/to/rules". HTTP URL to a page with alerting rules.
|
||||
-rule="dir/*.yaml" -rule="/*.yaml" -rule="gcs://vmalert-rules/tenant_%{TENANT_ID}/prod".
|
||||
-rule="dir/**/*.yaml". Includes all the .yaml files in "dir" subfolders recursively.
|
||||
Rule files support YAML multi-document. Files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars.
|
||||
Enterprise version of vmalert supports S3 and GCS paths to rules.
|
||||
For example: gs://bucket/path/to/rules, s3://bucket/path/to/rules
|
||||
S3 and GCS paths support only matching by prefix, e.g. s3://bucket/dir/rule_ matches
|
||||
all files with prefix rule_ in folder dir.
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-rule.defaultRuleType
|
||||
Default type for rule expressions, can be overridden by type parameter inside the rule group. Supported values: "graphite", "prometheus" and "vlogs".
|
||||
Default is "prometheus", change it to "vlogs" if all of the rules are written with LogsQL.
|
||||
-rule.evalDelay time
|
||||
Adjustment of the time parameter for rule evaluation requests to compensate for intentional data delay from the datasource. Normally, it should be equal to `-search.latencyOffset` (cmd-line flag configured for VictoriaMetrics single-node or vmselect).
|
||||
Since there is no intentional search delay in VictoriaLogs, `-rule.evalDelay` can be reduced to a few seconds to accommodate network and ingestion time.
|
||||
```
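For example, a vmalert start command that applies the flags above to a LogsQL-only setup could look like the following sketch (addresses, paths and the `5s` delay are illustrative, not prescriptive):

```sh
# All rules are LogsQL, so the default rule type is switched to vlogs,
# and the evaluation delay is reduced to cover network and ingestion lag only.
./bin/vmalert \
  -rule="rules/*.yaml" \
  -rule.defaultRuleType=vlogs \
  -rule.evalDelay=5s \
  -datasource.url=http://victorialogs:9428 \
  -notifier.url=http://alertmanager:9093 \
  -remoteWrite.url=http://victoriametrics:8428 \
  -remoteRead.url=http://victoriametrics:8428
```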
|
||||
|
||||
See full list of configuration options [here](https://docs.victoriametrics.com/victoriametrics/vmalert/#configuration).
|
||||
|
||||
### Groups
|
||||
|
||||
Check the complete group attributes [here](https://docs.victoriametrics.com/victoriametrics/vmalert/#groups).
|
||||
|
||||
#### Alerting rules
|
||||
|
||||
Examples:
|
||||
```yaml
|
||||
groups:
|
||||
- name: ServiceLog
|
||||
type: vlogs
|
||||
interval: 5m
|
||||
rules:
|
||||
- alert: HasErrorLog
|
||||
expr: 'env: "prod" AND status:~"error|warn" | stats by (service, kubernetes.pod) count() as errorLog | filter errorLog:>0'
|
||||
annotations:
|
||||
description: 'Service {{$labels.service}} (pod {{ index $labels "kubernetes.pod" }}) generated {{$labels.errorLog}} error logs in the last 5 minutes'
|
||||
|
||||
- name: ServiceRequest
|
||||
type: vlogs
|
||||
interval: 5m
|
||||
rules:
|
||||
- alert: TooManyFailedRequest
|
||||
expr: '* | extract "ip=<ip> " | extract "status_code=<code>;" | stats by (ip) count() if (code:~4.*) as failed, count() as total| math failed / total as failed_percentage| filter failed_percentage :> 0.01 | fields ip,failed_percentage'
|
||||
annotations:
|
||||
description: "Connection from address {{$labels.ip}} has {{$value}}% failed requests in last 5 minutes"
|
||||
```
|
||||
|
||||
#### Recording rules
|
||||
|
||||
Examples:
|
||||
```yaml
|
||||
groups:
|
||||
- name: RequestCount
|
||||
type: vlogs
|
||||
interval: 5m
|
||||
rules:
|
||||
- record: nginxRequestCount
|
||||
expr: 'env: "test" AND service: "nginx" | stats count(*) as requests'
|
||||
- record: prodRequestCount
|
||||
expr: 'env: "prod" | stats by (service) count(*) as requests'
|
||||
```
|
||||
|
||||
## Time filter
|
||||
|
||||
It's recommended to omit the [time filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) in the rule expression.
By default, vmalert automatically appends the time filter `_time: <group_interval>` to the expression.
For instance, the rule below is evaluated every 5 minutes and returns the result over the logs from the last 5 minutes:
|
||||
```yaml
|
||||
groups:
|
||||
- name: Requests
|
||||
type: vlogs
|
||||
interval: 5m
|
||||
rules:
|
||||
- alert: TooManyFailedRequest
|
||||
expr: '* | extract "ip=<ip> " | extract "status_code=<code>;" | stats by (ip) count() if (code:~4.*) as failed, count() as total| math failed / total as failed_percentage| filter failed_percentage :> 0.01 | fields ip,failed_percentage'
|
||||
annotations:
|
||||
description: "Connection from address {{$labels.ip}} has {{$value}}% failed requests in last 5 minutes"
|
||||
```
|
||||
|
||||
Users can specify a customized time filter if needed. For example, the rule below is evaluated every 5 minutes,
but calculates the result over the logs from the last 10 minutes.
|
||||
```yaml
|
||||
groups:
|
||||
- name: Requests
|
||||
type: vlogs
|
||||
interval: 5m
|
||||
rules:
|
||||
- alert: TooManyFailedRequest
|
||||
expr: '_time: 10m | extract "ip=<ip> " | extract "status_code=<code>;" | stats by (ip) count() if (code:~4.*) as failed, count() as total| math failed / total as failed_percentage| filter failed_percentage :> 0.01 | fields ip,failed_percentage'
|
||||
annotations:
|
||||
description: "Connection from address {{$labels.ip}} has {{$value}}% failed requests in last 10 minutes"
|
||||
```
|
||||
|
||||
_Please note that vmalert currently doesn't support [backfilling](#rules-backfilling) for rules with a customized time filter. This might be added in the future._
|
||||
|
||||
## Rules backfilling
|
||||
|
||||
vmalert supports alerting and recording rules backfilling (aka replay) against VictoriaLogs as the datasource.
|
||||
```sh
|
||||
./bin/vmalert -rule=path/to/your.rules \ # path to files with rules you usually use with vmalert
|
||||
-datasource.url=http://localhost:9428 \ # VictoriaLogs address
|
||||
-rule.defaultRuleType=vlogs \ # Set default rule type to VictoriaLogs
|
||||
-remoteWrite.url=http://localhost:8428 \ # Remote write compatible storage to persist rules and alerts state info
|
||||
-replay.timeFrom=2021-05-11T07:21:43Z \ # to start replay from
|
||||
-replay.timeTo=2021-05-29T18:40:43Z # to finish replay by, optional. By default, set to the current time
|
||||
```
|
||||
|
||||
See more details about backfilling [here](https://docs.victoriametrics.com/victoriametrics/vmalert/#rules-backfilling).
|
||||
|
||||
## Performance tip
|
||||
|
||||
LogsQL allows users to obtain multiple stats from a single expression. For instance, the following query calculates
|
||||
50th, 90th and 99th percentiles for the `request_duration_seconds` field over logs for the last 5 minutes:
|
||||
|
||||
```logsql
|
||||
_time:5m | stats
|
||||
quantile(0.5, request_duration_seconds) p50,
|
||||
quantile(0.9, request_duration_seconds) p90,
|
||||
quantile(0.99, request_duration_seconds) p99
|
||||
```
|
||||
|
||||
This expression can also be used in recording rules as follows:
|
||||
```yaml
|
||||
groups:
|
||||
- name: requestDuration
|
||||
type: vlogs
|
||||
interval: 5m
|
||||
rules:
|
||||
- record: requestDurationQuantile
|
||||
expr: '* | stats by (service) quantile(0.5, request_duration_seconds) p50, quantile(0.9, request_duration_seconds) p90, quantile(0.99, request_duration_seconds) p99'
|
||||
```
|
||||
|
||||
This rule generates three metrics per service in each evaluation:
|
||||
```
|
||||
requestDurationQuantile{stats_result="p50", service="service-1"}
|
||||
requestDurationQuantile{stats_result="p90", service="service-1"}
|
||||
requestDurationQuantile{stats_result="p99", service="service-1"}
|
||||
|
||||
requestDurationQuantile{stats_result="p50", service="service-2"}
|
||||
requestDurationQuantile{stats_result="p90", service="service-2"}
|
||||
requestDurationQuantile{stats_result="p00", service="service-2"}
|
||||
...
|
||||
```
|
||||
|
||||
For additional tips on writing LogsQL, refer to this [doc](https://docs.victoriametrics.com/victorialogs/logsql/#performance-tips).
|
||||
|
||||
## Frequently Asked Questions
|
||||
|
||||
### How to use [multitenancy](https://docs.victoriametrics.com/victorialogs/#multitenancy) in rules?
|
||||
|
||||
vmalert doesn't support multi-tenancy for VictoriaLogs in the same way as it [supports it for VictoriaMetrics in the enterprise version](https://docs.victoriametrics.com/victoriametrics/vmalert/#multitenancy).
However, it is possible to specify the queried VictoriaLogs tenant via the `headers` param in the [Group config](https://docs.victoriametrics.com/victoriametrics/vmalert/#groups).
For example, the following config executes all the rules within the group against the tenant with `AccountID=1` and `ProjectID=2`:
|
||||
```yaml
|
||||
groups:
|
||||
- name: MyGroup
|
||||
headers:
|
||||
- "AccountID: 1"
|
||||
- "ProjectID: 2"
|
||||
rules: ...
|
||||
```
|
||||
By default, vmalert persists all results to the VictoriaMetrics tenant specified by `-remoteWrite.url`. For example, if `-remoteWrite.url=http://vminsert:8480/insert/0/prometheus/` is set, all data goes to tenant `0`.

To persist different rule results to different tenants in VictoriaMetrics, there are the following approaches:

1. Use the [multitenant endpoint of vminsert](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#multitenancy-via-labels) as the `-remoteWrite.url` and add tenant labels under the group configuration.
|
||||
|
||||
For example, run vmalert with:
|
||||
```sh
|
||||
./bin/vmalert -datasource.url=http://localhost:9428 -remoteWrite.url=http://vminsert:8480/insert/multitenant/prometheus ...
|
||||
```
|
||||
With the rules below, `recordingTenant123` will be queried from VictoriaLogs tenant `123` and persisted to tenant `123` in VictoriaMetrics, while `recordingTenant124-456:789` will be queried from VictoriaLogs tenant `124` and persisted to tenant `456:789` in VictoriaMetrics.
|
||||
```yaml
|
||||
groups:
|
||||
- name: recordingTenant123
|
||||
type: vlogs
|
||||
headers:
|
||||
- "AccountID: 123"
|
||||
labels:
|
||||
vm_account_id: 123
|
||||
rules:
|
||||
- record: recordingTenant123
|
||||
expr: 'tags.path:/var/log/httpd OR tags.path:/var/log/nginx | stats by (tags.host) count() requests'
|
||||
- name: recordingTenant124-456:789
|
||||
type: vlogs
|
||||
headers:
|
||||
- "AccountID: 124"
|
||||
labels:
|
||||
vm_account_id: 456
|
||||
vm_project_id: 789
|
||||
rules:
|
||||
- record: recordingTenant124-456:789
|
||||
expr: 'tags.path:/var/log/httpd OR tags.path:/var/log/nginx | stats by (tags.host) count() requests'
|
||||
```
|
||||
|
||||
2. Run the [enterprise version of vmalert](https://docs.victoriametrics.com/victoriametrics/enterprise/) with `-clusterMode` enabled and specify the `tenant` parameter per group.
|
||||
|
||||
For example, run vmalert with:
|
||||
```sh
|
||||
./bin/vmalert -datasource.url=http://localhost:9428 -clusterMode=true -remoteWrite.url=http://vminsert:8480/ ...
|
||||
```
|
||||
With the rules below, `recordingTenant123` will be queried from VictoriaLogs tenant `123` and persisted to tenant `123` in VictoriaMetrics, while `recordingTenant124-456:789` will be queried from VictoriaLogs tenant `124` and persisted to tenant `456:789` in VictoriaMetrics.
|
||||
```yaml
|
||||
groups:
|
||||
- name: recordingTenant123
|
||||
type: vlogs
|
||||
headers:
|
||||
- "AccountID: 123"
|
||||
tenant: "123"
|
||||
rules:
|
||||
- record: recordingTenant123
|
||||
expr: 'tags.path:/var/log/httpd OR tags.path:/var/log/nginx | stats by (tags.host) count() requests'
|
||||
- name: recordingTenant124-456:789
|
||||
type: vlogs
|
||||
headers:
|
||||
- "AccountID: 124"
|
||||
tenant: "456:789"
|
||||
rules:
|
||||
- record: recordingTenant124-456:789
|
||||
expr: 'tags.path:/var/log/httpd OR tags.path:/var/log/nginx | stats by (tags.host) count() requests'
|
||||
```
|
||||
|
||||
### How to use one vmalert for both VictoriaLogs and VictoriaMetrics rules at the same time?
|
||||
|
||||
We recommend running separate vmalert instances for VictoriaMetrics and VictoriaLogs.
However, vmalert allows mixing groups with different rule types (`vlogs`, `prometheus`, `graphite`) in a single instance.
The limitation is that only one `-datasource.url` cmd-line flag can be specified, so a single instance can't be pointed at more than one datasource directly.
Since VictoriaMetrics and VictoriaLogs datasources have different query path prefixes, it is possible to use
[vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/) to route requests of different types between the datasources.
See an example of a vmauth config for such routing below:
|
||||
```yaml
|
||||
unauthorized_user:
|
||||
url_map:
|
||||
- src_paths:
|
||||
- "/api/v1/query.*"
|
||||
url_prefix: "http://victoriametrics:8428"
|
||||
- src_paths:
|
||||
- "/select/logsql/.*"
|
||||
url_prefix: "http://victorialogs:9428"
|
||||
```
|
||||
Now vmalert can be configured with `-datasource.url=http://vmauth:8427/` to send all queries to vmauth,
and vmauth will route them to the proper destination according to the configuration example above.
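For completeness, a sketch of the corresponding vmalert invocation is shown below; the rule paths and addresses are illustrative and should be adapted to your setup:

```sh
# vmalert talks only to vmauth; vmauth forwards /api/v1/* queries to VictoriaMetrics
# and /select/logsql/* queries to VictoriaLogs according to the url_map above.
./bin/vmalert \
  -rule="vm-rules/*.yaml" \
  -rule="vlogs-rules/*.yaml" \
  -datasource.url=http://vmauth:8427/ \
  -notifier.url=http://alertmanager:9093
```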
|
||||
@@ -1,687 +0,0 @@
[Deleted file: Excalidraw source of the vmalert integration diagram (687 lines of JSON). The diagram shows vmalert executing rules against victorialogs:9428, sending alert notifications to alertmanager:9093, and persisting alerts state and recording-rule results to victoriametrics:8428, from which it also restores state on restart.]