diff --git a/codespell/Makefile b/codespell/Makefile index 2a7171f4f1..c2d41214b3 100644 --- a/codespell/Makefile +++ b/codespell/Makefile @@ -12,3 +12,13 @@ codespell-check: codespell codespell \ --ignore-words=/vm/codespell/stopwords \ --skip='*/node_modules/*,*/vmdocs/*,*/vendor/*,*.js,*.pb.go,*.qtpl.go' /vm + +# Automatically fixes spelling errors. +codespell-fix: codespell + @-docker run \ + --mount type=bind,src="$(PWD)",dst=/vm \ + --rm \ + codespell \ + --write-changes \ + --ignore-words=/vm/codespell/stopwords \ + --skip='*/node_modules/*,*/vmdocs/*,*/vendor/*,*.js,*.pb.go,*.qtpl.go' /vm diff --git a/docs/anomaly-detection/FAQ.md b/docs/anomaly-detection/FAQ.md index 68dc9642ac..a6a63c8ad2 100644 --- a/docs/anomaly-detection/FAQ.md +++ b/docs/anomaly-detection/FAQ.md @@ -259,7 +259,7 @@ With the introduction of [online models](https://docs.victoriametrics.com/anomal - **Reduced latency**: Online models update incrementally, which can lead to faster response times for anomaly detection since the model continuously adapts to new data without waiting for a batch `fit`. - **Scalability**: Handling smaller data chunks at a time reduces memory and computational overhead, making it easier to scale the anomaly detection system. -- **Optimized resource utilization**: By spreading the computational load over time and reducing peak demands, online models make more efficient use of resources and inducing less data transfer from VictoriaMetrics TSDB, improving overall system performance. +- **Optimized resource utilization**: By spreading the computational load over time and reducing peak demands, online models make more efficient use of resources and induce less data transfer from VictoriaMetrics TSDB, improving overall system performance. -- **Faster convergence**: Online models can adapt {{% available_from "v1.23.0" anomaly %}} to changes in data patterns more quickly, which is particularly beneficial in dynamic environments where data characteristics may shift frequently. See `decay` argument descrition [here](https://docs.victoriametrics.com/anomaly-detection/components/models/#decay).
+- **Faster convergence**: Online models can adapt {{% available_from "v1.23.0" anomaly %}} to changes in data patterns more quickly, which is particularly beneficial in dynamic environments where data characteristics may shift frequently. See `decay` argument description [here](https://docs.victoriametrics.com/anomaly-detection/components/models/#decay). Here's an example of how we can switch from (offline) [Z-score model](https://docs.victoriametrics.com/anomaly-detection/components/models/#z-score) to [Online Z-score model](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-z-score): diff --git a/docs/anomaly-detection/components/settings.md b/docs/anomaly-detection/components/settings.md index 0235e40a9e..b4ac386e9d 100644 --- a/docs/anomaly-detection/components/settings.md +++ b/docs/anomaly-detection/components/settings.md @@ -17,7 +17,7 @@ Through the **Settings** section of a config, you can configure the following parameters: ## Anomaly Score Outside Data Range -This argument allows you to override the anomaly score for anomalies that are caused by values outside the expected **data range** of particular [query](https://docs.victoriametrics.com/anomaly-detection/components/models#queries). The reasons for such anomalies can be various, such as improperly constructed metricsQL queries, sensor malfunctions, or other issues that lead to unexpected values in the data and reqire investigation. +This argument allows you to override the anomaly score for anomalies that are caused by values outside the expected **data range** of a particular [query](https://docs.victoriametrics.com/anomaly-detection/components/models#queries). The reasons for such anomalies can vary, such as improperly constructed MetricsQL queries, sensor malfunctions, or other issues that lead to unexpected values in the data and require investigation.
-> If not set, the [anomaly score](https://docs.victoriametrics.com/anomaly-detection/faq#what-is-anomaly-score) for such anomalies defaults to `1.01` for backward compatibility, however, it is recommended to set it to a higher value, such as `5.0`, to better reflect the severity of anomalies that fall outside the expected data range to catch them faster and check the query for correctness and underlying data for potential issues. +> If not set, the [anomaly score](https://docs.victoriametrics.com/anomaly-detection/faq#what-is-anomaly-score) for such anomalies defaults to `1.01` for backward compatibility; however, it is recommended to set it to a higher value, such as `5.0`, to better reflect the severity of such anomalies, catch them faster, and prompt checking the query for correctness and the underlying data for potential issues. diff --git a/docs/victorialogs/FAQ.md b/docs/victorialogs/FAQ.md index 9652214e43..e7ca40e4c8 100644 --- a/docs/victorialogs/FAQ.md +++ b/docs/victorialogs/FAQ.md @@ -38,7 +38,7 @@ VictoriaLogs is optimized specifically for logs. So it provides the following features and get the best available performance out of the box. - Up to 30x less RAM usage than Elasticsearch for the same workload. See [the post from a user, who replaced 27-node Elasticsearch cluster with a single-node VictoriaLogs](https://aus.social/@phs/114583927679254536). - See also [this article](https://itnext.io/how-do-open-source-solutions-for-logs-work-elasticsearch-loki-and-victorialogs-9f7097ecbc2f) for techincal details. + See also [this article](https://itnext.io/how-do-open-source-solutions-for-logs-work-elasticsearch-loki-and-victorialogs-9f7097ecbc2f) for technical details. - Up to 15x less disk space usage than Elasticsearch for the same amounts of stored logs. - Ability to work efficiently with hundreds of terabytes of logs on a single node. - Easy to use query language optimized for typical log analysis tasks - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
diff --git a/docs/victorialogs/LogsQL.md b/docs/victorialogs/LogsQL.md index bd402c5ed0..2ff012f5c1 100644 --- a/docs/victorialogs/LogsQL.md +++ b/docs/victorialogs/LogsQL.md @@ -3582,7 +3582,7 @@ over the last 5 minutes: _time:5m | stats count(username, password) logs_with_username_or_password ``` -It is possible to caclulate the number of logs with at least a single non-empty field with common prefix with `count(prefix*)` syntax. +It is possible to calculate the number of logs with at least a single non-empty field with a common prefix using the `count(prefix*)` syntax. For example, the following query returns the number of logs with at least a single non-empty field with `foo` prefix over the last 5 minutes: ```logsql diff --git a/docs/victorialogs/data-ingestion/syslog.md b/docs/victorialogs/data-ingestion/syslog.md index e49a6b0a44..234095db98 100644 --- a/docs/victorialogs/data-ingestion/syslog.md +++ b/docs/victorialogs/data-ingestion/syslog.md @@ -173,7 +173,7 @@ VictoriaLogs supports `-syslog.decolorizeFields.tcp` and `-syslog.decolorizeFields.udp` -which can be used for removing ANSI color codes from the provided list fields during ingestion of Syslog logs into `-syslog.listenAddr.tcp` and `-syslog.listenAddr.upd` addresses. +which can be used for removing ANSI color codes from the provided list of fields during ingestion of Syslog logs into `-syslog.listenAddr.tcp` and `-syslog.listenAddr.udp` addresses.
For example, the following command starts VictoriaLogs, which removes ANSI color codes from [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) -at logs recevied via TCP port 514: +in logs received via TCP port 514: ```sh ./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.decolorizeFields.tcp='["_msg"]' diff --git a/docs/victoriametrics/README.md b/docs/victoriametrics/README.md index c6408c9117..aa78108d64 100644 --- a/docs/victoriametrics/README.md +++ b/docs/victoriametrics/README.md @@ -1545,7 +1545,7 @@ There are two gauge metrics to monitor the retention filters process: - `vm_retention_filters_partitions_scheduled` shows the total number of partitions scheduled for retention filters - `vm_retention_filters_partitions_scheduled_size_bytes` shows the total size of scheduled partitions. -Additionally, a log message with the filter expression and the paritition name is written to the log on the start and completion of the operation. +Additionally, a log message with the filter expression and the partition name is written to the log on the start and completion of the operation. Important notes: diff --git a/lib/storage/table.go b/lib/storage/table.go index a7733c3815..d524a73813 100644 --- a/lib/storage/table.go +++ b/lib/storage/table.go @@ -464,7 +464,7 @@ func (tb *table) historicalMergeWatcher() { if ptw.pt.name == currentPartitionName { // Do not run force merge for the current month. - // For the current month, the samples are countinously - // deduplicated and retention fileters applied by the background in-memory, small, and big part + // For the current month, the samples are continuously + // deduplicated and retention filters applied by the background in-memory, small, and big part merge tasks. See: - // - partition.mergeParts() in paritiont.go and + // - partition.mergeParts() in partition.go and // - Block.deduplicateSamplesDuringMerge() in block.go.