mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git, synced 2026-05-17 00:26:36 +03:00
spellcheck: run
@@ -12,3 +12,13 @@ codespell-check: codespell
 	codespell \
 		--ignore-words=/vm/codespell/stopwords \
 		--skip='*/node_modules/*,*/vmdocs/*,*/vendor/*,*.js,*.pb.go,*.qtpl.go' /vm
+
+# Automatically fixes spelling errors.
+codespell-fix: codespell
+	@-docker run \
+		--mount type=bind,src="$(PWD)",dst=/vm \
+		--rm \
+		codespell \
+		--write-changes \
+		--ignore-words=/vm/codespell/stopwords \
+		--skip='*/node_modules/*,*/vmdocs/*,*/vendor/*,*.js,*.pb.go,*.qtpl.go' /vm
@@ -259,7 +259,7 @@ With the introduction of [online models](https://docs.victoriametrics.com/anomal
 - **Reduced latency**: Online models update incrementally, which can lead to faster response times for anomaly detection since the model continuously adapts to new data without waiting for a batch `fit`.
 - **Scalability**: Handling smaller data chunks at a time reduces memory and computational overhead, making it easier to scale the anomaly detection system.
 - **Optimized resource utilization**: By spreading the computational load over time and reducing peak demands, online models make more efficient use of resources and inducing less data transfer from VictoriaMetrics TSDB, improving overall system performance.
-- **Faster convergence**: Online models can adapt {{% available_from "v1.23.0" anomaly %}} to changes in data patterns more quickly, which is particularly beneficial in dynamic environments where data characteristics may shift frequently. See `decay` argument descrition [here](https://docs.victoriametrics.com/anomaly-detection/components/models/#decay).
+- **Faster convergence**: Online models can adapt {{% available_from "v1.23.0" anomaly %}} to changes in data patterns more quickly, which is particularly beneficial in dynamic environments where data characteristics may shift frequently. See `decay` argument description [here](https://docs.victoriametrics.com/anomaly-detection/components/models/#decay).
 
 Here's an example of how we can switch from (offline) [Z-score model](https://docs.victoriametrics.com/anomaly-detection/components/models/#z-score) to [Online Z-score model](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-z-score):
 
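As an illustration of such a switch, a vmanomaly config fragment might look like the sketch below. The `class` aliases `zscore` / `zscore_online` and the `z_threshold` parameter are taken from the linked models documentation; the exact schema may differ between versions.

```yaml
models:
  zscore_online:
    class: 'zscore_online'  # previously: 'zscore' (offline)
    z_threshold: 2.5
```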
@@ -17,7 +17,7 @@ Through the **Settings** section of a config, you can configure the following pa
 
 ## Anomaly Score Outside Data Range
 
-This argument allows you to override the anomaly score for anomalies that are caused by values outside the expected **data range** of particular [query](https://docs.victoriametrics.com/anomaly-detection/components/models#queries). The reasons for such anomalies can be various, such as improperly constructed metricsQL queries, sensor malfunctions, or other issues that lead to unexpected values in the data and reqire investigation.
+This argument allows you to override the anomaly score for anomalies that are caused by values outside the expected **data range** of particular [query](https://docs.victoriametrics.com/anomaly-detection/components/models#queries). The reasons for such anomalies can be various, such as improperly constructed metricsQL queries, sensor malfunctions, or other issues that lead to unexpected values in the data and require investigation.
 
 > If not set, the [anomaly score](https://docs.victoriametrics.com/anomaly-detection/faq#what-is-anomaly-score) for such anomalies defaults to `1.01` for backward compatibility, however, it is recommended to set it to a higher value, such as `5.0`, to better reflect the severity of anomalies that fall outside the expected data range to catch them faster and check the query for correctness and underlying data for potential issues.
 
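A sketch of the override described in this hunk: the key name `anomaly_score_outside_data_range` is inferred from the section heading and may differ in the actual config schema; the value `5.0` follows the recommendation in the note.

```yaml
settings:
  anomaly_score_outside_data_range: 5.0  # default is 1.01 if unset
```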
@@ -38,7 +38,7 @@ VictoriaLogs is optimized specifically for logs. So it provides the following fe
   and get the best available performance out of the box.
 - Up to 30x less RAM usage than Elasticsearch for the same workload.
   See [the post from a user, who replaced 27-node Elasticsearch cluster with a single-node VictoriaLogs](https://aus.social/@phs/114583927679254536).
-  See also [this article](https://itnext.io/how-do-open-source-solutions-for-logs-work-elasticsearch-loki-and-victorialogs-9f7097ecbc2f) for techincal details.
+  See also [this article](https://itnext.io/how-do-open-source-solutions-for-logs-work-elasticsearch-loki-and-victorialogs-9f7097ecbc2f) for technical details.
 - Up to 15x less disk space usage than Elasticsearch for the same amounts of stored logs.
 - Ability to work efficiently with hundreds of terabytes of logs on a single node.
 - Easy to use query language optimized for typical log analysis tasks - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
@@ -3582,7 +3582,7 @@ over the last 5 minutes:
 _time:5m | stats count(username, password) logs_with_username_or_password
 ```
 
-It is possible to caclulate the number of logs with at least a single non-empty field with common prefix with `count(prefix*)` syntax.
+It is possible to calculate the number of logs with at least a single non-empty field with common prefix with `count(prefix*)` syntax.
 For example, the following query returns the number of logs with at least a single non-empty field with `foo` prefix over the last 5 minutes:
 
 ```logsql
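An illustration of the `count(prefix*)` syntax described in this hunk, modeled on the `count(username, password)` query shown above (the `foo` field prefix and result name are made up for the example):

```logsql
_time:5m | stats count(foo*) logs_with_foo_fields
```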
@@ -173,7 +173,7 @@ VictoriaLogs supports `-syslog.decolorizeFields.tcp` and `-syslog.decolorizeFiel
 which can be used for removing ANSI color codes from the provided list fields during ingestion of Syslog logs
 into `-syslog.listenAddr.tcp` and `-syslog.listenAddr.upd` addresses.
 For example, the following command starts VictoriaLogs, which removes ANSI color codes from [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field)
-at logs recevied via TCP port 514:
+at logs received via TCP port 514:
 
 ```sh
 ./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.decolorizeFields.tcp='["_msg"]'
@@ -1545,7 +1545,7 @@ There are two gauge metrics to monitor the retention filters process:
 - `vm_retention_filters_partitions_scheduled` shows the total number of partitions scheduled for retention filters
 - `vm_retention_filters_partitions_scheduled_size_bytes` shows the total size of scheduled partitions.
 
-Additionally, a log message with the filter expression and the paritition name is written to the log on the start and completion of the operation.
+Additionally, a log message with the filter expression and the partition name is written to the log on the start and completion of the operation.
 
 Important notes:
 
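A sketch of how the two gauges from this hunk could be used in an alerting expression; the idea of alerting on a non-empty backlog is an assumption for illustration, not taken from the docs:

```metricsql
vm_retention_filters_partitions_scheduled > 0
```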
@@ -464,7 +464,7 @@ func (tb *table) historicalMergeWatcher() {
 		if ptw.pt.name == currentPartitionName {
 			// Do not run force merge for the current month.
 			// For the current month, the samples are countinously
-			// deduplicated and retention fileters applied by the background in-memory, small, and big part
+			// deduplicated and retention filters applied by the background in-memory, small, and big part
 			// merge tasks. See:
 			// - partition.mergeParts() in paritiont.go and
 			// - Block.deduplicateSamplesDuringMerge() in block.go.
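The comment patched in this hunk describes skipping the current month's partition during historical force merges, since background merges already handle it. A minimal illustrative sketch of that skip logic (not the actual VictoriaMetrics code; partition names and the helper function are made up):

```go
package main

import "fmt"

// partitionsToForceMerge returns the partitions eligible for a historical
// force merge, skipping the current month's partition: as the comment in
// the diff above explains, the current month is already continuously
// deduplicated and has retention filters applied by the background
// in-memory, small, and big part merge tasks.
func partitionsToForceMerge(all []string, currentPartitionName string) []string {
	var eligible []string
	for _, name := range all {
		if name == currentPartitionName {
			continue // covered by background merge tasks
		}
		eligible = append(eligible, name)
	}
	return eligible
}

func main() {
	// Partition names here are illustrative monthly labels.
	fmt.Println(partitionsToForceMerge([]string{"2025_04", "2025_05", "2025_06"}, "2025_06"))
	// prints: [2025_04 2025_05]
}
```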