This reverts commit ccf97a4143.
Reason for revert: this change may break tests that expect ServesMetrics.GetMetric() to fail
when the given metric doesn't exist in the output.
It is better to add 'TryGetMetric() (float64, bool)' function, which would return '(0, false)'
when the given metric doesn't exist, so the caller could decide what to do next.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9773
Add ad-hoc filters to query stats and operator dashboards.
These filters are useful for exploring non-uniform metrics sets
without distinct job/instance filters.
The previous text didn't contain links to vmagent's capabilities.
Instead, it contained a misleading multitenancy-mode link that doesn't
seem to be related to the subject.
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Previously, `GetMetric` called `t.Fatalf` immediately when the target metric
did not exist on the `/metrics` page.
However, some metrics may start to appear only after the process has been
running for a while. `t.Fatalf` invalidates the retry mechanism of
assertions: if the metric is not found on the first attempt, the test case
terminates.
This commit changes `t.Fatalf` to `t.Logf` (instead of `t.Errorf`,
because error output may be considered a test case failure in some
scenarios).
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9773
Follow-up for cea9505bab
fastcache.Cache allocates off-heap memory, which must be explicitly
returned to the pool with a Reset method call.
After the change made in the commit above, during the cache transition from
whole to split mode, it's possible that the current cache is still referenced
by the Cache.Get or Cache.Call atomic pointers. This leads to potential
memory leaks, since we don't have any memory synchronization for
atomic.Pointer.Store calls.
This commit adds a `Finalizer` to the `fastcache.Cache` instances.
It properly releases memory once the cache is no longer reachable.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9769
Previously, the cache state transition from split into whole mode could leave
the cache in a broken state if the Reset cache method was called while
switching modes.
Also, cache Reset didn't start background workers and didn't change the
cache size.
This commit properly checks the mode during cache transition. In addition,
it no longer stops background workers after the transition to whole mode and
always starts workers during start-up.
Access to the prev, curr and mode Cache fields is properly locked
in order to mitigate possible race conditions.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9769
It seems like the Go compiler skipped computations and allocations for samples
as they weren't used afterwards. Sinking the results into a global variable removes
this optimization, and the benchmark starts showing allocations within the `pushSamples` fn.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
- Make SearchTSIDs look similar to SearchMetricNames, i.e. search for metricIDs within the method
- Make the corresponding corrupted index test look similar to one for metric names search
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
### Describe Your Changes
Consistently use the `v0.0.0-YYYYMMDDHHMMSS-commit_hash` reference for
internal deps such as the `github.com/VictoriaMetrics/VictoriaMetrics`
dependency, since it allows referring to any commit without waiting for a
release tag.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
### Describe Your Changes
This pull request consists of the following:
1. Markdown fixes
following https://www.markdownguide.org/basic-syntax/
and https://github.com/markdownlint/markdownlint/blob/main/docs/RULES.md
- Add empty lines after headers or lists
- Remove extra lines between paragraphs
- Remove extra spaces at the end of a line
- Add language to code quote
- Consistent lists (don't mix asterisks and dashes in the same file; choose one
and be consistent within the same file)
- Proper URL links
- Use meaningful context to URLs instead of "here".
2. Concise language
3. Grammar fixes
- removing extra spaces between words
- there are multiple ones but I picked the basic ones that caught my
eye :)
4. Spelling fixes
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Co-authored-by: hagen1778 <roman@victoriametrics.com>
Add the ability to limit the dates available in the datePicker using `minDate` and
`maxDate` parameters. All dates before `minDate` and after `maxDate`
cannot be picked. The lower and upper bounds can be set independently.
The `minDate` and `maxDate` parameters aren't set by default in vmui.
The datepicker component with these params is re-used elsewhere.
The change also explicitly mentions the `out-of-order` phrase, as it is commonly
used in the Prometheus ecosystem.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Searching metricName by metricID happens many times during a single API
call. This requires getting the current set of idbs before those calls
happen, which is fine but requires propagating idbs across the code
base. This is also fine in the case of the OSS version, as it is used in
Search only.
Propagating idbs across the code base becomes a problem in the Enterprise
version, as it is used in at least 3 places. As a result, it becomes very
difficult to merge things from OSS to Ent.
Localizing all the dependencies in one searchMetricName type and
reusing this type everywhere should make things simpler.
Related enterprise changes:
https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/compare/search-metric-name-ent?expand=1
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9756
A small refactoring that reduces Search dependency on Storage:
- Move searchTSIDs() from Search to Storage because this method does not
depend on anything Search-specific but does depend on Storage.
- Use metricsTracker instead of storage.metricTracker.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9754
Benchmarking the storage search API requires taking into account many
parameters, such as:
- data configuration: how many series, deleted series, search time range
- where the index data resides: prev and/or curr indexDB
- which search operation to measure
Meanwhile, adding a new benchmark use case involves a lot of boilerplate code.
This PR implements a framework for benchmarking storage search ops that can
be relatively easily extended. This comes in especially handy when adding
new cases for the partition index.
The current set of params results in a lot of benchmarks being run,
which most probably does not make sense because:
- it will take a lot of time, and
- the output data is hard to compare manually.
However, these benchmarks are very useful when only a small set of params
is of interest. For example, if I want to compare the search of 100k
metric names when the index data resides in prevOnly, currOnly or
prevAndCurr indexDBs, this would translate into the following cmd:
```shell
go test ./lib/storage --loggerLevel=ERROR -run=^$ -bench=^BenchmarkSearch/MetricNames/.*/VariableSeries/100000$
```
Why this change:
- I often need to run benchmarks with configs that I did not have
before, which requires either modifying an existing benchmark or writing a
new one. It is easy to get lost and make benchmarks non-comparable.
- I need some way to make legacy and pt index benchmarks comparable.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
- state that it is unsafe to use lifecycle rules and describe the reason
- update formatting according to the latest changes in docs
---------
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
### Describe Your Changes
This pull request consists of the following:
1. Markdown fixes
following https://www.markdownguide.org/basic-syntax/
and https://github.com/markdownlint/markdownlint/blob/main/docs/RULES.md
- Add empty lines after headers or lists
- Remove extra lines between paragraphs
- Remove extra spaces at the end of a line
- Add language to code quote
- Consistent lists (don't mix asterisks and dashes in the same file; choose one
and be consistent within the same file)
- Proper URL links
- Use meaningful context to URLs instead of "here".
2. Concise language
3. Grammar fixes
- removing extra spaces between words
- there are multiple ones but I picked the basic ones that caught my
eye :)
4. Spelling fixes
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
- Rename copyStream to copyStreamToClient in order to make it more clear
that the stream must be copied from backend to client.
- Make sure that the client implements net/http.Flusher interface.
It is a programming error (BUG) if the client passed to copyStreamToClient
doesn't implement net/http.Flusher interface.
- Do not write zero-length data to the backend.
Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/667
1beb629b removed the logic that kept the full backup
location path in the restore mark file. Because of this, backups created
with a short name (e.g. `vmbackupmanager restore create
daily/2025-09-12`) will fail, as the backup location is not prepended.
Fix that by properly constructing the full backup name from the parsed
canonical values.
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Previously, vmagent always set enable.auto.commit to false and manually
committed messages. This added extra pressure on the Kafka brokers and could
slow down data consumption.
This commit allows vmagent to skip the manual commit and use auto-commit
based on the provided configuration, which may improve message read throughput.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/pull/931
### Describe Your Changes
This PR introduces a `make docs-update-flags` command that updates flags
in the documentation using the actual binaries compiled from the latest
`enterprise-single-node` and `enterprise-cluster` branches (hardcoded
for now). The command also normalizes the output format.
It can be run from any branch. All work happens inside temporary
directories under /tmp. The script checks out the required branch,
builds the binaries, and updates the documentation. The current Git
repository is not touched.
The command adjusts default values to more meaningful ones, such as
changing `-maxConcurrentInserts` (default 20) to (default
2*cgroup.AvailableCPUs()).
Currently the logic is implemented only for vminsert, vmstorage,
vmselect, vmagent, vmalert, and victoria-metrics (aka single).
The goal is to make it easy to keep documentation synchronized with real
binaries.
_**Note:** Please ignore xxx_flags.md files for now. Review flags in
`README.md` and `Cluster-VictoriaMetrics.md`, and `vmagent.md`,
`vmalert.md` only. Once we agree on the changes in those files, I'll
replace the flags with the `{{% content "xxx_flags.md" %}}`._
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
* stress the requirement to have an empty destination folder for copying;
* remove extra verbosity from docs;
* remove the list of vmctl migration options, as it became out of sync. Instead
of syncing it, refer to the vmctl docs;
* fix typos.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The application version can then be displayed in vmui. Showing the
application version in vmui should make it easier to determine the currently
used VM version (at least the vmselect version).
------------
@Loori-R it would be good to add the app version in vmui in a follow-up
PR or by pushing a commit to this branch.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
`Table` component:
- add a `format` property for table columns, which allows applying custom
formatting depending on the column type
- add a `rowClasses` table property, which allows passing a function to
customize the row CSS class depending on the row value
- add a `rowAction` table property, which allows executing an action when
clicking on a table row
`Popper` component:
- add `classes` to specify additional CSS classes for a popper to
differentiate it from other poppers, since it's mounted to a DOM root
`Switch` component:
- use gap instead of left-margin
`DateTimeInput` component:
- add a `dateOnly` property to allow accepting only a date in the input
additional fixes:
- fix TopQuery header fields alignment
<img width="1279" height="125" alt="image"
src="https://github.com/user-attachments/assets/08ad4dbc-19e5-47f5-9ccd-a9fb222335a4"
/>
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
### Describe Your Changes
Set rateEnabled to false for probe_success in VMUI
Fix issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9655
Problem:
probe_success is incorrectly initialized with rateEnabled = true because
the regex detecting counters (`/_sum?|_total?|_count?/`) matches partial
strings like `_su`. This causes probe_success (a gauge) to be treated as a
counter, producing slightly misleading graphs. For example, when
rateEnabled is set to true, probe_success often shows as 0 in VMUI when
the probe is actually succeeding.
It is not intuitive for users to have to disable rateEnabled manually
just to get the correct value for probe_success in VMUI.
Solution:
Update the regex to strictly match suffixes:
`/_sum$|_total$|_count$/`
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Signed-off-by: William Wren <william.wren@ericsson.com>
### Describe Your Changes
This pull request consists of the following:
1. Markdown fixes
following https://www.markdownguide.org/basic-syntax/
and https://github.com/markdownlint/markdownlint/blob/main/docs/RULES.md
- Add empty lines after headers or lists
- Remove extra lines between paragraphs
- Remove extra spaces at the end of a line
- Add language to code quote
- Consistent lists (don't mix asterisks and dashes in the same file; choose one
and be consistent within the same file)
- Proper URL links
- Use meaningful context to URLs instead of "here".
2. Concise language
3. Grammar fixes
- removing extra spaces between words
- there are multiple ones but I picked the basic ones that caught my
eye :)
4. Spelling fixes
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
This is needed for VictoriaLogs, which allows limiting query results with the given set of extra filters
specified via extra_filters query arg. The request url can contain multiple extra_filters query args -
they are all applied with AND logic to the query. See https://docs.victoriametrics.com/victorialogs/querying/#extra-filters
The merge_query_args option at vmauth allows merging the extra_filters provided by the client
(such as Grafana plugin for VictoriaLogs or built-in web UI) with the extra_filters specified in the backend
url at vmauth config.
This is needed for https://github.com/VictoriaMetrics/VictoriaLogs/issues/106
`workingsetcache` is built on top of two
[fastcache](https://github.com/VictoriaMetrics/fastcache) instances
(curr and prev) that are rotated periodically (configurable via the
`-cacheExpireDuration` flag). During the rotation curr becomes prev,
prev is discarded, and the new curr is empty. If an entry is not found in
curr, then the prev cache is checked, and if the entry is found there it
is copied to curr.
`workingsetcache` also exports metrics, such as the `EntriesCount`,
`GetCalls`, `SetCalls`, and `Misses` counts. These metrics are currently
implemented as the sum of the same metrics in the prev and curr `fastcache`
instances. Due to the rotation logic, these counts can be incorrect:
1. `EntriesCount`. It is the sum of the prev and curr entry counts. If an
entry is not found in curr but is found in prev (and therefore is copied
from prev to curr), the resulting entry count will be incorrect, i.e. it
will count copied entries twice.
2. `GetCalls`. It is the sum of the prev and curr get calls. If an entry is
not found in curr, the logic will attempt to retrieve it from prev, which
results in double counting, while it is actually one get call to
`workingsetcache`.
3. `SetCalls`. It is the sum of the prev and curr set calls. If an entry is
not found in curr but found in prev, it is copied to curr, resulting
in a set call to curr, while from the `workingsetcache` perspective
there hasn't been any set operation at all.
4. `Misses`. It is the sum of the prev and curr misses. If an entry is not
found in curr, it is recorded as a miss. If it is then found in prev,
the entry is returned to the caller, but that cache miss remains. If it
is not found in prev either, there will be 2 misses for 1
`workingsetcache` get call.
This PR introduces `GetCalls`, `SetCalls`, and `Misses` counts at the
`workingsetcache` level in order to count the calls correctly. It also
excludes duplicates from `EntriesCount`.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9553
### Describe Your Changes
Implemented a script that generates graphs using `gnuplot`.
The graphs show the write speed to the db.
How to use it:
1. From the root, run `make tsbs`;
2. The file will be generated automatically:
`/tmp/tsbs-load-100000-2025-07-22T00:00:00Z-2025-07-23T00:00:00Z-80s.csv`
3. From the root, run `make tsbs-plot-load` and observe the result
4. If you have two files with the `tsbs_load_victoriametrics` output,
just define the second one in
`TSBS_LOAD_RESULT_CSV_FILE_COMPARE=/tmp/tsbs-load-100000-2025-07-22T01:00:00Z-2025-07-23T01:00:00Z-80s.csv`
To plot the measurements from some other benchmark, run
`make tsbs-plot-load TSBS_LOAD_RESULT_CSV_FILE=/path/to/file.csv`
To plot the measurements from two benchmarks, run
`make tsbs-plot-load TSBS_LOAD_RESULT_CSV_FILE=/path/to/file1.csv
TSBS_LOAD_RESULT_CSV_FILE_COMPARE=/path/to/file2.csv`
This command should generate a graph like the one shown in the picture:
<img width="638" height="578" alt="Screenshot 2025-07-25 at 15 35 42"
src="https://github.com/user-attachments/assets/900b05ab-0b98-4f7f-8f2c-18d28ad2eab6"
/>
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
This helps to improve readability of changes, so users
can see more important changes first, and see changes related
to the same component one after another.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Previously, the mock storage `net.Listen("tcp", …)` could succeed even if
another process was bound to the same port, due to dual-stack behavior
(`[::]:port` vs `0.0.0.0:port`). That led to strange test results that were
hard to trace back to port misuse: tests queried not the mock server but
whatever was running on that port.
Switched to `"tcp4"` to ensure conflicts are detected correctly.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
### Describe Your Changes
As there are quite a few files, and each file might have multiple
changes, to make it easy to review I limited the PR to 5 files at
a time.
I suggest you take a look at markdownlint and add it as part of your CI,
similar to
https://github.com/MicrosoftDocs/PowerShell-Docs/blob/main/.markdownlint.yaml
And while at it, take a look at cspell and how it's used in their repo,
and replace the Python one you have in your current implementation
(I might open a PR with it after all the fix PRs).
This pull request consists of the following:
1. Markdown fixes
following https://www.markdownguide.org/basic-syntax/
and https://github.com/markdownlint/markdownlint/blob/main/docs/RULES.md
- Add empty lines after headers or lists
- Remove extra lines between paragraphs
- Remove extra spaces at the end of a line
- Add language to code quote
- Consistent lists (don't mix asterisks and dashes in the same file; choose one
and be consistent within the same file)
- Proper URL links
- Use meaningful context to URLs instead of "here".
2. Concise language
3. Grammar fixes
- removing extra spaces between words
- there are multiple ones but I picked the basic ones that caught my
eye :)
4. Spelling fixes
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
This allows performing a single MustFsyncPath() for the parent directory after multiple calls to these functions.
This clarifies code paths, which call these functions, and makes them more maintainable.
This also removes a redundant fsync() call for the parent directory when creating a file-based part.
Previously the first fsync() was indirectly called when the directory was created via MustMkdirFailIfExist(),
and the second fsync() was called via MustSyncPathAndParentDir() after all the data was written to the part.
The fs.MustWriteSync() already fsyncs the created file, so there is no need for an additional fsync() call.
While at it, add missing fsync for the parent directory after creating a directory for persistent queue.
The source file contents should already be fsynced to disk before creating a hard link,
so there is no sense in calling fsync() on the created hard link.
This commit ensures that the -search.maxQueryLen flag applies to Graphite
queries, matching the behavior already present for Prometheus queries.
Previously, Graphite queries could bypass this limit, creating an
inconsistency and a potential vector for resource exhaustion.
Key changes:
Added getMaxQueryLen() to access the global query length limit.
Enforced query length validation in execExpr() for Graphite queries.
Added comprehensive tests for the new validation logic and edge cases.
Error messages are consistent with Prometheus query validation.
The default limit is 16KB (configurable via -search.maxQueryLen).
Setting the limit to 0 disables validation.
This change closes the gap where Graphite queries could exceed
configured length limits, providing consistent protection against
excessively long queries across both query APIs.
Follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9534
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9600
The vm_deleted_metrics_total metric value represents the number of
metricIDs stored in deletedMetricIDs cache. This cache lives at the
storage level and stores the deleted metrics from both prev and curr
idbs. However, the metric is populated at the idb level. Since there are
always 2 idbs (prev and curr), the value is populated twice. Hence the
doubled value of the metric.
The fix is to populate the metric value at the storage level.
Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9602
- load and parse the static `/vmui/config.json`, modify it according to
runtime values and use it as a replacement for the static config.json
- remove the use of the `/flags` endpoint for checking features that should
be enabled in VMUI
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9635
`router.home` represents the `/` path, which is the same for all UI apps,
but the content and title for the root path differ depending on the
application type. Added a `getDefaultOptions` function, which returns the
proper home route configuration depending on the application type, which
allows removing the renamings in the respective layouts.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9641
The commit
25cd5637bc
introduced the `-enableMetadata` flag and the
`promscrape.IsMetadataEnabled()` function, which is now used in multiple
places, including the `app/vminsert/prometheusimport` [request
handler](b24b76ff08/app/vminsert/prometheusimport/request_handler.go (L36)).
Because of the use of the `promscrape` package, vminsert registered all
`-promscrape.*` service discovery flags, which are not relevant for
`vminsert`.
This change moves the metadata flag logic into a dedicated package,
preventing vminsert from unintentionally loading unrelated promscrape
flags.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9631
This is an attempt to adjust image styles to GitHub themes, because the
existing images with transparent backgrounds become unreadable on the dark theme.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
This is an attempt to adjust image styles to GitHub themes, because the
existing images with transparent backgrounds become unreadable on the dark theme.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
When a labels API request has a `match` filter on the `__name__` key alone,
it may hit the max series limit in the case of a high-cardinality metric
name. Instead, we can skip looking up by `metricIDs` and fall back to an
inverted index scan with a `composite key`, since we only have some
`__name__` and a label name.
Common requests for this optimisation are:
1) /api/v1/labels?match=up or /api/v1/labels?extra_filters=up
2) /api/v1/label/job/values?match=up or /api/v1/labels?extra_filters=up
This is widely used by Grafana variables.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9489
a.Subtract(b) performance degrades as b becomes bigger than a. For
example, if len(b2) == 10x len(b1), then time(a.Subtract(b2)) == 10x
time(a.Subtract(b1)).
A quick fix is to iterate over a's elements when len(b) > len(a). Iterating
over a's elements while deleting at the same time should be safe, since no
elements are actually deleted (i.e. no memory is freed, etc.); deletion here
means setting the corresponding bit from 1 to 0.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9602
### Describe Your Changes
Add the support of all standard TSDB query types that can be executed
against VictoriaMetrics. `double-groupby-all` is commented out as it
attempts to retrieve all 1B samples and fails. While this can be fixed
by setting the `-search.maxSamplesPerQuery` this query is left disabled
anyway because it will consume way too much memory and cpu time.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
### Describe Your Changes
New benchmarks for storage search (data and index):
- Use the same dataset that accounts for prev and curr indexDBs and
deleted series
- The code is more structured
- Account for various numbers of series in response including higher
numbers (>10k) as this appears to be a quite common use case.
These benchmarks were used for investigating the #9602 performance issue and
helped discover that prefetching metric names needed to be restored
(#9619).
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
### Describe Your Changes
Some messages were written to `stdout` using `fmt.Printf` and
`fmt.Println`, while the other messages like import statistics were
written to `stderr` through the `log` package.
This led to ordering problems where the `Import finished!` +
`VictoriaMetrics importer stats` messages, which were expected to be the last
messages, appeared before the `Continue import process with filter`
messages, creating confusing output for users.
```
2025/08/20 13:07:26 Import finished!
2025/08/20 13:07:26 VictoriaMetrics importer stats:
time spent while importing: 20h49m10.8497184s;
total bytes: 277.1 GB;
bytes/s: 3.7 MB;
requests: 7978614;
requests retries: 0;
2025/08/20 13:07:26 Total time: 20h49m10.851006088s
Continue import process with filter
filter: match[]={__name__!=""}
start: 2025-08-08T00:00:00Z
end: 2025-08-15T00:00:00Z:
Continue import process with filter
filter: match[]={__name__!=""}
start: 2025-08-15T00:00:00Z
end: 2025-08-19T16:18:15Z:
```
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
### Describe Your Changes
It seems db39f045e1 accidentally reverted
#9419 changes.
```patch
--- a/app/vmagent/remotewrite/client.go
+++ b/app/vmagent/remotewrite/client.go
@@ -448,7 +448,8 @@ again:
}
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_requests_total{url=%q, status_code="%d"}`, c.sanitizedURL, statusCode)).Inc()
- if statusCode == 409 {
+ switch statusCode {
+ case 409:
logBlockRejected(block, c.sanitizedURL, resp)
// Just drop block on 409 status code like Prometheus does.
@@ -461,7 +462,13 @@ again:
// - Remote Write v2 specification explicitly specifies a `415 Unsupported Media Type` for unsupported encodings.
// - Real-world implementations of v1 use both 400 and 415 status codes.
// See more in research: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786918054
- } else if statusCode == 415 || statusCode == 400 {
+ case 415, 400:
+ if c.canDowngradeVMProto.Swap(false) {
+ logger.Infof("received unsupported media type or bad request from remote storage at %q. Downgrading protocol from VictoriaMetrics to Prometheus remote write for all future requests. "+
+ "See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
+ c.useVMProto.Store(false)
+ }
+
if encoding.IsZstd(block) {
logger.Infof("received unsupported media type or bad request from remote storage at %q. Re-packing the block to Prometheus remote write and retrying."+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
```
cc @makasim
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Previously, if the pushBlockPubSub function returned an error, vmagent stopped
the remote write worker thread assigned to it. The expected behavior in this
scenario is to retry the error inside the pushBlockPubSub function; it must
return only on vmagent shutdown.
This commit properly handles this error and prevents ingestion from stopping.
Add build tag `disable_grpc_modules` for vmbackup, vmrestore and
vmbackupmanager. The binary size increases by only 3MB with it, which is an
acceptable trade-off for security and feature updates.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8008
* check if the built binary is present for `make tsbs-build`. Previously, if
the build failed, the command stopped working.
* make ENV variables configurable from the command line, so `TSBS_STEP=15s
make tsbs-generate-data` respects the configured step.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Another batch of documentation improvements
Fix Spelling in:
- Comments in code
- Displayed strings
One change was in a JSON file used for the anomaly dashboard in docker;
no other code was changed.
Some Markdown changes, related to standards:
- URLs
- List numbering
- Empty spaces at the end of a line
Previously, vmagent treated the following configurations differently:
1) ./bin/vmagent --remoteWrite.url=url-0 --remoteWrite.url=url-1 --remoteWrite.disableOndiskQueue
and
2) ./bin/vmagent --remoteWrite.url=url-0 --remoteWrite.url=url-1 --remoteWrite.disableOndiskQueue=true,true
In the first case, it could produce duplicates and block ingestion requests if one of the remote write targets was not accessible.
In the second case, it implicitly set --remoteWrite.dropSamplesOnOverload to true and silently dropped samples for the inaccessible target.
This commit treats both configurations the same and silently drops samples in both cases to mitigate possible duplicates.
It's expected that vmagent provides delivery guarantees only if it has a single remote write target and the remoteWrite.disableOndiskQueue=true flag is set.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9565
While working on #9431, 2 bugs related to indexDB.searchMetricName() have been
introduced:
1. During the search, the index records are unconditionally placed in the
sparse index.
2. If the search touches index records in both the prev and curr indexDBs,
metricIDs may be unintentionally removed by the `wasMetricIDMissingBefore()`
logic.
Additionally, the PR moves searchMetricName from indexDB and Search
to Storage, which simplifies the code and makes it possible to reuse the
function as-is in enterprise code.
Follow up for #9431.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
`-eula` was deprecated and made no-op in v1.123.0, so examples with
`-eula` will no longer work.
Replace those with proper license configuration.
While at it, remove license flags from vmbackupmanager CLI commands, as
they are not required when using the CLI.
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
### Describe Your Changes
Do not check vmauth_config_last_reload_success_timestamp_seconds since
it may contain a timestamp < time.Now() due to how lib/fasttime works.
Instead, compare the number of config reloads.
follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9369 and
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9572
Also, split the config update and reload into two separate functions.
master:
```
$gotest -race ./apptest/tests/ -run=TestSingleVMAuthRouterWithInternalAddr -count=40
ok github.com/VictoriaMetrics/VictoriaMetrics/apptest/tests 90.176s
```
pr:
```
$gotest -race ./apptest/tests/ -run=TestSingleVMAuthRouterWithInternalAddr -count=40
ok github.com/VictoriaMetrics/VictoriaMetrics/apptest/tests 46.130s
```
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Removing extDB from indexDB makes prev, curr, and next indexDBs independent.
I.e. the search is performed independently in prev and curr, the results are
then merged.
Additionally, since no search is now performed in extDB:
- all indexDB search methods now return the original maps used for populating
the result, without intermediate conversion to slices.
- `NoExtDB` suffix has been removed from method names
This has been extracted from #8134.
Signed-off-by: Andrei Baidarov <baidarov@nebius.com>
Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
Before, we showed the summarized 99th percentile for query complexity across
all available instances. This doesn't make much sense, as it doesn't
answer the following questions:
1. What complexity limits to set per vmselect
2. What the most expensive queries are
The change is to use `max` instead of `sum`, to show only the outliers, i.e.
the heaviest served queries. The update should help answer the questions
above.
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Signed-off-by: hagen1778 <roman@victoriametrics.com>
fixes the case when the `histogram_quantile` result contains gaps that occur
in the same time range where NaNs are present in the first bucket of a
histogram
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
Fix flaky integration test `TestSingleVMAuthRouterWithAuth`.
The flakiness is caused by the
`vmauth_config_last_reload_success_timestamp_seconds` metric, which
reports time with second-level precision.
Update the test to account for this when verifying that the config
reloads correctly.
Previously, if the cardinality limiter limit was reached, vmstorage
started to perform index lookups for any series exceeding the limit. Since
the storage must skip index creation for such series, it's not possible to
cache them. This resulted in the opposite of the cardinality limiter's
intent: instead of reducing resource usage, it increased it.
This commit changes the cardinality limit calculation from the metricID to a
hash of the raw metricName. It could slightly increase CPU usage if the
cardinality limiter is configured, since the hash must be calculated for
each metricName row. But it mitigates excessive CPU and memory usage on
limit hit.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9554
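The metricName-hash approach can be sketched like this (an illustrative re-implementation, not the actual vmstorage limiter; FNV is used here just as an example hash):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// cardinalityLimiter tracks distinct metricName hashes instead of metricIDs,
// so no index lookup is needed for series that exceed the limit.
type cardinalityLimiter struct {
	limit int
	seen  map[uint64]struct{}
}

func (cl *cardinalityLimiter) allow(metricName []byte) bool {
	h := fnv.New64a()
	h.Write(metricName)
	sum := h.Sum64()
	if _, ok := cl.seen[sum]; ok {
		return true // already-tracked series are always accepted
	}
	if len(cl.seen) >= cl.limit {
		return false // over the limit: drop without touching the index
	}
	cl.seen[sum] = struct{}{}
	return true
}

func main() {
	cl := &cardinalityLimiter{limit: 2, seen: map[uint64]struct{}{}}
	fmt.Println(cl.allow([]byte("a")), cl.allow([]byte("b")), cl.allow([]byte("c")), cl.allow([]byte("a")))
	// true true false true
}
```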
The current one-second timeout for individual read or write operations
during the handshake phase has proven to be insufficient in some
scenarios
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9345. For
example, short-lived CPU spikes lasting a few seconds can cause
handshake failures due to the low timeout threshold.
While a small timeout may work well in environments with fast and
reliable networking, such as within a single datacenter, it becomes
problematic in more complex setups, particularly in a [multi-level
cluster
setup](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#multi-level-cluster-setup)
where the top-level vmselect may reside in a different availability zone
and work on a less reliable network.
Another issue with the per-operation timeout approach is that it allows
the total time for a handshake to accumulate significantly in the
worst-case scenario. If each operation experiences a delay just under
the timeout threshold, the entire handshake process could take up to 6s,
which accounts for 60% of `-search.maxQueueDuration` and leaves only 4s
for the actual query.
Introducing a single timeout for the entire handshake process would
provide more predictable behavior and improve usability from a
configuration standpoint. The timeout for the whole handshake op is also
easier to understand from the operator's point of view. Increasing the
timeout value and providing a configuration option for it would make the
system more resilient to transient conditions like CPU contention and
better suited for use cases involving cross-AZ communication.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9345
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
### Describe Your Changes
The commit changes CI behavior:
- Run builds in parallel for different os/arch
- Run unit/integration/lint in parallel
- Remove the custom Go cache step in favor of the logic provided in
`actions/setup-go`. The custom cache was used to build the key based on
go.sum and makefiles. This logic is preserved.
- Introduce cache for golangci-lint.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
vmselect is experiencing memory exhaustion and OOM kills
when processing complex Graphite queries with nested functions and large
numbers of label selectors (30k+ values).
The root cause was unbounded growth of the pathExpression field.
This commit adds configurable truncation for Graphite pathExpression fields to
prevent memory exhaustion while preserving query functionality:
- New flag: -search.maxGraphitePathExpressionLen=1024 (default 1024
characters)
- Safe truncation: long expressions are truncated with a "..." suffix
- Zero disables: set to 0 to disable truncation entirely
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9534/
Previously, the tcp listener performed a synchronous proxy protocol header
read during connection accept. It could significantly reduce vmauth
performance and lead to timeouts when serving http requests.
This commit changes this logic and performs proxy protocol header
parsing during the first Read request from the connection or a RemoteAddr
method call. It significantly improves performance and reduces a possible
bottleneck in the connection accept path.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9546/
- Rename WriteRequestUnmarshaller to WriteRequestUnmarshaler
- Add a description to WriteRequestUnmarshaler struct
Review comments
b98e592752 (r163365472)
Follow up on
b98e592752
* remove duplicated content between single and cluster versions
* mention recommendation to group component types by jobs in scrape
config
* link the example of scrape configs
* update wording
Signed-off-by: hagen1778 <roman@victoriametrics.com>
This metric can be used for building alerts and graphs for free disk space usage percentage by using the following MetricsQL query:
100 * (vm_free_disk_space_bytes / vm_total_disk_space_bytes)
### Describe Your Changes
By default `zstd.Reader` creates multiple goroutines to process a single
connection:
- It doesn't match cgo behavior, which works synchronously, and creates
a lot more concurrent goroutines (0.5k -> 5k on my workload)
- It results in non-zero `vm_tcpdialer_errors_total{type="read"}` errors
on vmselect because an underlying connection is closed while a goroutine
is still reading from it. The goroutine created by
`zstd.NewReader`/`zstd.Reset`
abb348e4db/lib/handshake/buffered_conn.go (L113-L120)
- vmselect (and vmagent) doesn't benefit from async mode since it has
multiple readers in-use at the same time, which usually exceeds the
number of cpu cores
Partly related to #9218
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
This commit introduces a new storage API: `/internal/log_new_series`.
It helps to dynamically debug newly created series. Changing the `-logNewSeries` value requires a storage restart,
which may introduce downtime and is not recommended for production deployments.
In addition, this commit adds the `-logNewSeriesAuthKey` flag, which protects the newly added API.
Fixes: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8879
This change will expose any IPv6 addresses assigned to an instance under
the meta labels:
* `__meta_gce_public_ipv6` - native IPv6 address, globally routed
* `__meta_gce_internal_ipv6` - unique local address (ULA).
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9370
Previously, the tenant selector was hidden when only one tenant
was returned, making it impossible to run queries in multi-tenant mode.
Now, the selector is always shown as long as at least one tenant exists.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9396
The prompb and prompbmarshal packages share exactly the same models and provide
marshal and unmarshal capabilities for them. This creates duplication
(changes in one model have to be made in the other, as was the case with
metadata) and confusion where, for example, you compare identical-looking
models but golang says they are not the same (because of the type).
This commit merges the prompbmarshal logic into prompb so the rest of the
code is aligned on the prompb models.
Moves samplesPool and labelsPool to WriteRequestUnmarshaller.
Makes the WriteRequest struct free of unmarshal logic.
The benchmark shows no significant changes:
```
$benchstat prompbmarshal.bench prompb2.bench
goos: darwin
goarch: arm64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb
cpu: Apple M1 Pro
│ prompbmarshal.bench │ prompb2.bench │
│ sec/op │ sec/op vs base │
WriteRequestUnmarshalProtobuf-10 189.2µ ± 5% 190.8µ ± 8% ~ (p=0.579 n=10)
WriteRequestMarshalProtobuf-10 145.3µ ± 7% 143.6µ ± 2% ~ (p=0.143 n=10)
geomean 165.8µ 165.5µ -0.14%
│ prompbmarshal.bench │ prompb2.bench │
│ B/s │ B/s vs base │
WriteRequestUnmarshalProtobuf-10 50.42Mi ± 5% 49.99Mi ± 8% ~ (p=0.593 n=10)
WriteRequestMarshalProtobuf-10 65.64Mi ± 7% 66.39Mi ± 2% ~ (p=0.143 n=10)
geomean 57.53Mi 57.61Mi +0.14%
│ prompbmarshal.bench │ prompb2.bench │
│ B/op │ B/op vs base │
WriteRequestUnmarshalProtobuf-10 27.70Ki ± 4% 26.90Ki ± 7% ~ (p=0.190 n=10)
WriteRequestMarshalProtobuf-10 3.267Ki ± 12% 3.273Ki ± 12% ~ (p=0.971 n=10)
geomean 9.514Ki 9.383Ki -1.38%
│ prompbmarshal.bench │ prompb2.bench │
│ allocs/op │ allocs/op vs base │
WriteRequestUnmarshalProtobuf-10 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=10) ¹
WriteRequestMarshalProtobuf-10 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=10) ¹
geomean ² +0.00% ²
¹ all samples are equal
² summaries must be >0 to compute geomean
```
This commit introduces a new flag: `-storage.idbPrefillStart` with a
default value of `1h`. It allows adjusting the start time of the prefill
process for the data written into the next indexDB.
By default, VictoriaMetrics starts prefilling the next indexDB at 3 A.M. UTC,
while indexDB rotates at 4 A.M. UTC. It could be useful to change the start
time from 3 A.M. to 1 A.M. or midnight in order to smooth overall resource
usage.
However, setting the value higher than 4 hours for default installations
doesn't make much sense (like 11 P.M. of the day before rotation), since
VictoriaMetrics maintains daily indexes and would have to repopulate them
twice.
But it could be useful in conjunction with `-retentionTimezoneOffset`,
which could delay index rotation for the current day and give more time
for the prefill process.
As an example, `-retentionTimezoneOffset=4h` adds an additional 4 hours
to the rotation time, and `-storage.idbPrefillStart` could be changed
accordingly.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9393
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Previously, vmctl tried to create a tmp directory in a directory with
Prometheus snapshot. This is not always possible as snapshot can be
mounted in read-only mode.
Use a system default temporary location and allow users to customize the
tmp path in order to avoid issues with custom tmp locations.
Closes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9505
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
The `groupWatchersCleaner` iterates through all watchers, attempting to
lock them sequentially while holding the global `groupWatchersLock`.
Therefore, a large number of group watchers can cause the
`groupWatchersLock.Lock()` to block for a noticeable period.
Proposing to use `TryLock` as an optimistic case because if the watcher
lock is held, it's more likely that the watcher is still in-use so no
further cleanup is required.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9354#issuecomment-3089108889
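The optimistic TryLock pass can be sketched as follows (illustrative names, not the actual groupWatchers code; requires Go 1.18+ for Mutex.TryLock):

```go
package main

import (
	"fmt"
	"sync"
)

// watcher is a stand-in for a group watcher guarded by its own mutex.
type watcher struct {
	mu     sync.Mutex
	closed bool
}

// cleanupWatchers skips watchers whose lock is currently held: a busy watcher
// is most likely still in use, so no cleanup is required for it on this pass.
func cleanupWatchers(ws []*watcher) int {
	cleaned := 0
	for _, w := range ws {
		if !w.mu.TryLock() {
			continue // lock is held: revisit on the next cleanup pass
		}
		w.closed = true
		cleaned++
		w.mu.Unlock()
	}
	return cleaned
}

func main() {
	ws := []*watcher{{}, {}, {}}
	ws[1].mu.Lock() // simulate an in-use watcher
	fmt.Println(cleanupWatchers(ws)) // 2
	ws[1].mu.Unlock()
}
```

This way the cleaner never blocks on an individual watcher while holding the global lock.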
The lib/fs.MustCloseParallel() accepts a slice of MustWriter items, which must implement only
a single method - MustWrite(). The previous lib/filestream.MustCloseWritersParallel() was
accepting CloseWriter items, which must implement the Write() and Path() methods
in addition to the MustClose() method. This was adding artificial restrictions
on the applicability of the MustCloseWritersParallel() method. Remove these restrictions.
This should reduce the time needed for the deletion of VictoriaLogs parts
with big number of files, which are created when wide events are ingested into VictoriaLogs
(e.g. logs with big number of log fields).
This may help improve the scalability of VictoriaLogs on systems with a big number of CPU cores
that store data on high-latency storage such as Ceph or NFS.
See https://github.com/VictoriaMetrics/VictoriaLogs/issues/517
The import aliases may complicate maintenance of the code in the long term
if they aren't used consistently, e.g. if one file imports the apptest under the default name
while the other file imports the apptest under the "at" name.
The aliases also complicate grepping the code by apptest.* or prompbmarshal.* .
- Drop the code needed for asynchronous removal of the directory on NFS shares.
This code was needed when VictoriaMetrics could keep open files after their deletion
or renaming. This is no longer the case after the commit 43b24164ef .
Now files are deleted only after all the readers close them.
This updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/61
- Unify MustRemoveAll() and MustRemoveDirAtomic() into MustRemoveDir() and MustRemovePath()
functions:
- The MustRemoveDir() deletes the given directory with all its contents, in an "atomic" way:
it creates a special `.delete-this-dir` file in the directory, then removes all its contents
except of this file, and later removes the `.delete-this-dir` file together with the directory
itself. This makes possible easily determining whether the given directory needs to be deleted
after unclean shutdown - if it contains the `.delete-this-dir` file or if it is empty, it must be deleted.
Add IsPartiallyRemovedDir() function, which can be used for detecting whether the given directory must be removed
at startup.
Previously the MustRemoveDirAtomic() was using a "trick" for atomic directory removal: it was "atomically" renaming
the directory to a temporary directory with '.must-remove.' marker in the directory name, and after that it
was removing the renamed directory. On startup all the directories with the `.must-remove.` marker were deleted
if they are left after unclean shutdown. This "trick" doesn't work for NFS and object storage such as S3,
since these storage systems do not support atomic renaming of directories with multiple entries inside.
The new MustRemoveDir() function doesn't use this "trick", so it can be safely used in NFS and S3-like storage systems.
This is based on the pull request from @func25 - https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9486/files .
- The MustRemovePath() deletes the given file or an empty directory.
- Delete the existing parts and partitions at startup if they were partially deleted.
- Consistently use fs.MustRemoveDir() and fs.MustRemovePath() instead of os.RemoveAll() across the codebase.
This reduces the amount of boilerplate code related to error handling.
- Consistently use fs.MustWriteSync() instead of os.WriteFile() across the codebase.
- Consistently use fs.IsPathExist() for checking whether the given path exists on the filesystem.
- Consistently use fs.MustWriteAtomic() for atomic store of the serialized state into file.
- Consistently panic with 'BUG:' prefix on unexpected errors.
- Read and write the state file contents in one go. This simplifies the code for loading and storing the state.
This shouldn't increase memory usage too much, since the parsed state is already stored in RAM.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9074
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9102
The following renamings were requested in #8134 to be done in a separate
PR:
- Rename putTSIDToCache(s) to storeTSIDToCache(s). This is because
get/put prefixes are normally reserved for pools and ref counters. It is
unclear, though, whether the name `getTSIDFromCache` is okay in this context.
- Rename `addPartitionNolock` to `addPartitionLocked`, because the
`Locked` suffix is used everywhere else
- Rename deleteMetricIDs to saveDeletedMetricIDs, because no deletion is
actually happening. The deleted metric ids are still present in the
index. One needs to filter the deleted metric ids out after the
retrieval.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Previously, it was not possible to compile a NetBSD binary due to missing OS constraints in the lib/fs and lib/filestream packages.
This commit fixes it by:
* applying proper constraints in lib/filestream
* introducing statfs_t and statfs() to abstract unix internals: NetBSD
needs to use unix.Statvfs_t and unix.Statvfs() unlike other Unix-es.
* applying the proper constraint for the vmctl terminal package
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9473
The newly created data part could become missing after unclean shutdown (such as hardware power off),
since the contents of the parent directory weren't synced to disk before storing the newly created data part in the parts.json file.
Fix this by syncing the parent directory contents before storing the newly created part in the parts.json file.
This commit is based on https://github.com/VictoriaMetrics/VictoriaLogs/pull/507
This commit adds tmp in-memory and data block buffers for
index search requests. It allows reducing memory allocations on block
cache misses, since the block cache puts a block into the cache only after a
configured number of cache misses.
Related PR https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9324
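A minimal sketch of the pooled-scratch-buffer pattern (illustrative names; not the actual index search code):

```go
package main

import (
	"fmt"
	"sync"
)

// blockBufPool hands out reusable scratch buffers, so each index search
// request on a block cache miss doesn't allocate a fresh temporary buffer.
var blockBufPool = sync.Pool{
	New: func() any { return make([]byte, 0, 64*1024) },
}

// readBlock copies the block through a pooled scratch buffer and returns
// an owned copy of the result, returning the scratch buffer to the pool.
func readBlock(src []byte) []byte {
	buf := blockBufPool.Get().([]byte)[:0]
	buf = append(buf, src...) // decode/copy the block into the scratch buffer
	out := make([]byte, len(buf))
	copy(out, buf)
	blockBufPool.Put(buf) //nolint:staticcheck // slice reuse is intentional here
	return out
}

func main() {
	fmt.Println(string(readBlock([]byte("index-block")))) // index-block
}
```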
This commit respects the online CPU count for the process_cpu_cores_available metric, since CPUQuota may exceed it.
Detailed description:
There might be a case when `getCPUQuota()` returns a value bigger than the
available logical CPU cores. In this case the lower value should be used.
These changes also reflect upcoming go1.25 changes
https://tip.golang.org/doc/go1.25#container-aware-gomaxprocs
> If the CPU bandwidth limit is lower than the number of logical CPUs
available, GOMAXPROCS will default to the lower limit
In practice that happens with [CPU
Manager](https://kubernetes.io/blog/2022/12/27/cpumanager-ga/) enabled.
It requires setting `/sys/devices/system/cpu/online` which shows the
total number of cores on a node. The container doesn't have any cgroup
limits, but it's pinned to a limited subset of cores.
```
root@vmselect:/# cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
-1
root@vmselect:/# cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
root@vmselect:/# cat /sys/devices/system/cpu/online
0-255
```
go1.25 and go.uber.org/automaxprocs don't look into
`/sys/devices/system/cpu/online`, but VictoriaMetrics does
The chunkedbuffer is released twice, leading to concurrent use of the
buffer after it's acquired from a pool by two different goroutines.
The issue was introduced in v1.115.0 in the following commit 5b87aff830
### Describe Your Changes
Previously, if I run `make publish-vmstorage` it composed the tag name
like:
```
docker.io/victoriametrics/vmstorage:heads-tcpdialer-increase-idle-timeout-0-g1b20c2dbd3-dirty-84b03c0eEXTRA_DOCKER_TAG_SUFFIX
```
It would work okay only if I explicitly provided an empty
`EXTRA_DOCKER_TAG_SUFFIX=` env var.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
This reverts commit 63e1bf5d97. The reason
for revert is that the change makes increase/rate behavior less
predictable for users. See
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9420.
Another negative impact of this change is that it negatively affects
recording rules that calculate increases or rates over time series on
big intervals. For example,
`increase(http_errors_total{instance="foo"}[30d])` would stop producing
results if `http_errors_total{instance="foo"}` was marked as stale. But
this is logically incorrect, as `http_errors_total{instance="foo"}` over
last 30d should still return results.
-------------
This change doesn't revert
63e1bf5d97
completely. It keeps changes made to `removeCounterResets` in order to
preserve original `staleNaN` values. Before, `staleNaN` were compared
with float values and producing `NaN` instead. Which could confuse us in
future. So keeping the fix and tests. It shouldn't have any negative
effect.
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Previously, snapshots were never deleted automatically. This led to
disk space waste in case snapshots were left behind and never deleted,
which happened in case of backup failures. This required manual
investigation and snapshot cleanup.
This change enables removal of snapshots older than 3 days by default.
This should give enough time to upload backups in time and also makes sure
old snapshots are not wasting disk space.
Closes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9344
---------
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
### Describe Your Changes
Related issue: #9388
- removed all the code related to VictoriaLogs
- updated dependencies to the latest versions
- removed unnecessary `React` import from components
- removed deprecated dependencies such as `lodash.get`,
`@babel/plugin-proposal-nullish-coalescing-operator`,
`@babel/plugin-proposal-private-property-in-object`
- removed unused packages
- fixed proxy for local playground development (after refreshing the page,
the app cannot properly parse the `/#/` in the path)
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
Add unit tests that verify the filesystem hierarchy of the indexdb after
opening the storage and after rotation. The tests were originally
added in #8134 but are still applicable to the current version and will
also reduce the diff.
Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
- Move `prefetchMetricNames`, `SearchMetricNames`, and
`SearchLabelValues` from `Storage` to `indexDB`
- Rename `searchLabelValuesWithFiltersOnTimeRange` to
`searchLabelValuesOnTimeRange`
- Rename `searchLabelValuesWithFiltersOnDate` to
`searchLabelValuesOnDate`
Extracted from #8134.
Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
### Describe Your Changes
- Split the release process into two steps.
- Add some links
- Described in more detail what should be done to check that branches are in
sync.
- Added some small checks, like running tests, and testing final release
on sandbox.
- Changed the order of some actions (like building the final image before
publishing the release).
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
When you try to run `make docker-cluster-up` and try to check metrics in
the Explore page of the Grafana, you will see the basic auth request. If
you enter the correct login and password from the `auth-vm-cluster.yml`
it will continue to ask for the username and password. This happens
because the datasource is configured to request data from vmauth, but no
auth information is provided in the configuration.
Added the basic auth info to the datasource provision file.
Adding a note for the -dst config. Adding an additional reference for snapshot troubleshooting for better accessibility.
Making documentation easier to use following customer issues.
TestClusterMultiTenantSelect passes in cluster branch but fails in master.
Integration tests must pass in any branch since they do not distinguish
between master and cluster.
The test fails because the multitenant_test.go is different in master and
cluster. This commit makes the file identical in both branches.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
### Describe Your Changes
The example for using the `activeAt` template in `vmalert.md` was wrong,
with the `from` field being before the `to` field. This switches them
round to be correct.
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
The remoteAddr and requestURI aren't sent to the client in the error response after the commit 524f58c78c .
They are logged locally instead. This increases security by not exposing potentially sensitive information (remoteAddr and requestURI) to the client.
This doesn't reduce debuggability, since remoteAddr and requestURI are logged locally together with the error message.
### Describe Your Changes
This PR adds support for parsing Unix timestamps (both integer and
float) in the format pipe using the `time:` prefix.
The timestamp precision (seconds, milliseconds, microseconds, or
nanoseconds) is automatically determined based on the value.
It might be worth creating a new prefix for this rather than reusing
`time:`, but I haven't found any compelling reason to extend the syntax.
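A minimal sketch of magnitude-based precision detection. The thresholds and the function name below are illustrative assumptions, not the actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// guessUnixTimestampNanos converts a Unix timestamp of unknown precision
// (seconds, milliseconds, microseconds or nanoseconds) into nanoseconds by
// guessing the precision from the value's magnitude. The thresholds are
// illustrative: e.g. values below 1e12 are assumed to be seconds, since
// 1e12 seconds is far in the future, while 1e12 milliseconds is year ~2001.
func guessUnixTimestampNanos(ts float64) int64 {
	switch {
	case ts < 1e12: // seconds
		return int64(ts * 1e9)
	case ts < 1e15: // milliseconds
		return int64(ts * 1e6)
	case ts < 1e18: // microseconds
		return int64(ts * 1e3)
	default: // nanoseconds
		return int64(ts)
	}
}

func main() {
	fmt.Println(time.Unix(0, guessUnixTimestampNanos(1700000000)).UTC())    // interpreted as seconds
	fmt.Println(time.Unix(0, guessUnixTimestampNanos(1700000000123)).UTC()) // interpreted as milliseconds
}
```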
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
This should allow detecting the client that led to the error logged via the SendPrometheusError() function,
i.e. it improves troubleshooting of the logged errors.
This is a follow-up for c9bb4ddeed
This commit renames the flag `remoteWrite.retryMaxTime` to `remoteWrite.retryMaxInterval`. The new name aligns with the corresponding `MinInterval` flag. The previous flag name can still be used, but vmagent will log a warning message with the suggested migration.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9169
While working on #8134 it has been discovered that
`/graphite/index.json` always returns an empty result. This PR adds a
test and a fix. This fix is a no-op for the master but adding it here in
order to reduce the diff in #8134.
Co-authored-by: Artem Fetishev <149964189+rtm0@users.noreply.github.com>
Co-authored-by: Artem Fetishev <rtm@victoriametrics.com>
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
Add metrics to track the behavior of the message processor:
- `vl_insert_processors_count` tracks the current number of active log
message processors.
- `vl_insert_flush_duration_seconds{type="protocol"}` measures the time
taken to flush log data from recently parsed logs to the underlying
storage system (in-memory buffering, remote storage nodes, or local
storage).
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9320
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Re-use the existing `lib/snapshot` for snapshot operations in order to reduce code duplication and unify supported options and error handling.
Previously, vmbackupmanager did not support `snapshot.tls*` options and did not handle errors in the same way as vmbackup does.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9340
When the page loads, the query field automatically receives focus, which
is expected. However, this also immediately opens the autocomplete
popup, triggering an unnecessary request and showing suggestions before
the user starts typing.
This commit prevents the autocomplete popup from opening on initial page
load.
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9342
This commit changes the release process by introducing a new intermediate
docker image tag with the `-rcY` suffix. It's needed to properly test
releases for possible regressions without the need to erase release docker
images, which could be cached by some proxy and actually used in
production.
The release candidate image must be re-tagged into the final release via a
make command.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9136
This commit adds the following new metric `vm_cache_eviction_bytes_total` for tsid, metricName and metricIDs caches backed by workingset cache.
The problem this is attempting to solve is fine-grained tuning of
caches on large, high-churn, high-query-load VictoriaMetrics deployments.
Cache performance in those scenarios is critical for cluster
performance; however, for stability reasons it's imperative not to provide
_too much_ cache space compared to available memory, or there is a
risk of OOM, which can easily destabilize a cluster.
Given the cost of memory, especially for large deployments, there is
therefore a need to optimize cache usage to be near the minimum required
to provide acceptable performance under churn, ingestion and query
loads.
VM provides an excellent set of flags for configuring this cache
performance (size, expiry and removal percentage).
However, specifically when tuning `prevCacheRemovalPercent` and
`cacheExpireDuration`, it can be hard to know exactly what impact
they're having. Sometimes it's very clear, when there's cyclical behavior,
that cacheExpireDuration is involved, but in other scenarios it can be
very unclear why the cache miss rate goes up.
Specifically when attempting to debug spikes in cache miss rate there
are questions that are hard to answer:
* Did a new workload come in that was poorly cached?
* Were caches rotated due to expiry and the `prev` cache contained many
active entries due to limited cache size?
* Were the caches rotated due to miss percentage, and is our tuning too
aggressive?
In many of these debugging scenarios it would be greatly helpful to have
explicit metrics on how many bytes of key caches were evicted and for
what reasons. It then becomes trivial to associate spikes in eviction
for a specific reason with a miss-rate spike that causes poor
performance. The appropriate tuning then becomes much more obvious.
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9293
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
removeCounterResets was operating only on stalenessInterval (when set
by the user) and didn't account for the user-specified [window] in rollup
functions. Since rollup functions in MetricsQL can look behind for up
to the [window] interval, they can capture datapoints that weren't
accounted for in removeCounterResets. With this change, we remove resets
over the stalenessInterval+window interval to avoid this corner case.
Fixes
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8935#issuecomment-2978728661
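The core of counter-reset removal can be sketched as follows. The point of the fix is that the function only sees the samples it is fed, so the lookback used to select those samples must cover stalenessInterval+window (the function shape below is illustrative, not the actual MetricsQL code):

```go
package main

import "fmt"

// removeCounterResets converts a counter series with resets into a
// monotonically increasing series by adding back the value lost at each
// reset. To handle resets correctly at the edge of a rollup window, the
// caller must feed it samples from the extended range
// [start-(stalenessInterval+window), end], not just [start-stalenessInterval, end].
func removeCounterResets(values []float64) []float64 {
	out := make([]float64, len(values))
	var correction float64
	for i, v := range values {
		if i > 0 && v < values[i-1] {
			// counter reset detected: carry over the pre-reset value
			correction += values[i-1]
		}
		out[i] = v + correction
	}
	return out
}

func main() {
	// reset happens between 8 and 2
	fmt.Println(removeCounterResets([]float64{5, 8, 2, 4})) // [5 8 10 12]
}
```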
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
This commit adds a new `concurrency` option for the kafka producer, which adds additional workers
for publishing messages to the kafka cluster.
This setting can be useful on networks with high latency, and it increases write throughput.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9249
Headers sent together with queries could contain important information
about the source of the request. This could help to find sources that send specific queries.
Logging all headers could expose sensitive info in logs.
So this introduces `-search.logSlowQueryStatsHeaders` to whitelist headers that should
be logged. See the test case for an example.
Previously, the regex `.+|^$` was optimised into `.+`. But in fact, it must be
optimised into `.*`, since this regex matches both any non-empty input and the
empty input, which together cover any input.
This commit adds a special case to the isDotStar check, which covers this
expression.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9290
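The equivalence is easy to check with Go's regexp package: as a fully anchored match (the way label-filter regexps are applied), `.+|^$` accepts both the empty string and any non-empty string, just like `.*`:

```go
package main

import (
	"fmt"
	"regexp"
)

// matchesAnchored reports whether s matches the given regexp when it is
// fully anchored, the way label-filter regexps are applied.
func matchesAnchored(expr, s string) bool {
	return regexp.MustCompile(`^(?:` + expr + `)$`).MatchString(s)
}

func main() {
	for _, s := range []string{"", "foo", "a b c"} {
		fmt.Printf("%q: .+|^$ -> %v, .* -> %v\n",
			s, matchesAnchored(`.+|^$`, s), matchesAnchored(`.*`, s))
	}
}
```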
This commit introduces new component - VictoriaLogs Agent (vlagent).
It accepts logs data via any data ingestion protocol supported by
VictoriaLogs and forwards it to the provided remote storages.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8766
It appears the `lib/mergeset.Item` violates the 3rd rule
https://pkg.go.dev/unsafe#Pointer
> (3) Conversion of a Pointer to a uintptr and back, with arithmetic.
>
> Unlike in C, it is not valid to advance a pointer just beyond the end
> of its original allocation:
> ```
> // INVALID: end points outside allocated space.
> b := make([]byte, n)
> end = unsafe.Pointer(uintptr(unsafe.Pointer(&b[0])) + uintptr(n))
> ```
`CGO_ENABLED=0 go test -trimpath -gcflags=all=-d=checkptr -run
TestTableAddItemsSerial`
```
fatal error: checkptr: pointer arithmetic result points to invalid allocation
```
This commit adds a special path to item encoding that prevents invalid memory references.
See this PR for details:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9264
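A minimal illustration of the checkptr-safe pattern: never materialize a pointer past the end of the allocation; keep an integer offset and derive pointers only for in-bounds positions. This is a generic sketch, not the actual mergeset encoding code:

```go
package main

import (
	"fmt"
	"unsafe"
)

// fillSeq writes 0,1,2,... into b using pointer arithmetic that always
// stays inside the allocation, so it passes -gcflags=all=-d=checkptr.
// The INVALID alternative flagged by checkptr is computing an `end`
// pointer one byte past the allocation and comparing pointers against it.
func fillSeq(b []byte) {
	if len(b) == 0 {
		return
	}
	base := unsafe.Pointer(&b[0])
	for off := 0; off < len(b); off++ {
		p := (*byte)(unsafe.Add(base, off)) // always points inside b
		*p = byte(off)
	}
}

func main() {
	b := make([]byte, 8)
	fillSeq(b)
	fmt.Println(b) // [0 1 2 3 4 5 6 7]
}
```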
* reduce time wait on notifiers update
* deterministically sort the notifiers() call. This improves tests and output for users
a9b641531b
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Besides `/select/`-prefixed requests, vmauth also performs `/admin/` requests
to fetch tenant IDs. Without whitelisting, such requests will fail to route.
This change is applied only to generic vmauth configs where /admin/ access
is expected by default.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
This is ready, since https://github.com/VictoriaMetrics/cloud/pull/3229
is merged.
This PR deletes the cloud docs folder after moving it into the private
cloud repo. The main reason is to keep things tidy and handle reviews in
a better way, as the helm charts repo is doing.
Future updates should be protected, since the github actions file with the
rsync command should no longer touch this folder when updating
everything.
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
### Describe Your Changes
The example in the VictoriaLogs docs (Comments section) is currently
missing an aggregate function in the `stats` pipe:
```logsql
error # find logs with `error` word
| stats by (_stream) logs # then count the number of logs per `_stream` label
| sort by (logs) desc # then sort by the found logs in descending order
| limit 5 # and show top 5 streams with the biggest number of logs
```
However, `stats by (_stream) logs` is invalid syntax - `logs` is
interpreted as a function name, which causes a parsing error.
This fix replaces it with a valid version using `count()`:
```logsql
| stats by (_stream) count() as logs
```
Without `count()`, the query fails with:
```
cannot parse 'stats' pipe: unknown stats func "logs"
```
Signed-off-by: Yury Molodov <yurymolodov@gmail.com>
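Putting the fix into the full example, the corrected query from the docs reads:

```logsql
error # find logs with `error` word
| stats by (_stream) count() as logs # then count the number of logs per `_stream` label
| sort by (logs) desc # then sort by the found logs in descending order
| limit 5 # and show top 5 streams with the biggest number of logs
```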
* split migration mods in separate sub-section. This removes conflicting
#-anchors and makes it easier to read&modify in future
* remove duplicating wording
* simplify texts
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8964
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Previously, if an annotation was updated on an object, it could result in
duplicate targets registered on the dropped targets service-discovery page.
Mostly it affects endpoint annotation updates. An Endpoint holds an
annotation with the last update time. If any pod that belongs to the given
endpoint changed, it caused duplicated targets for all pods backed by
the endpoint. This made the service-discovery debug page hard to use.
This commit excludes `__metadata_kubernetes_*_annotation_` from key
generation for the dropped targets map. Instead, it updates the target with
the new label values.
It may lead to some target collisions, but
since the hash function could already produce collisions, it should not be a
problem.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8626
The files with synctest usage are built only if the goexperiment.synctest build tag is set.
This allows running `go test ./lib/...` and `go vet ./lib/...` without the need to pass GOEXPERIMENT=synctest to them.
This is a follow-up for d33d7e20be
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8848
The new variable should help to change the docker image tag name during release preparation.
Instead of publishing an exact version, like v1.1.0, it should use the suffixed version v1.1.0-rc1,
which should be re-tagged later, when the release is promoted to stable.
Related to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9136
Signed-off-by: f41gh7 <nik@victoriametrics.com>
### Describe Your Changes
Revival and modification of original PR
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8762 after
discussion on
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7387.
`group.concurrency` is now respected if and only if
`-replay.rulesDelay=0`, rather than always. This allows rules to be run
concurrently without ambiguity about rule chaining. If
`-replay.rulesDelay` is set greater than zero, concurrency is still
ignored. This will be the default behavior, since it defaults to 1s.
Implementation considerations:
I chose to split some simple logic into a helper function in
preparation for adding `replay.singleRuleEvaluationConcurrency` in a
follow-up PR, as that's where we can share that logic.
cc @Haleygo
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
---------
Co-authored-by: Hui Wang <haley@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
During data retrieval, VictoriaMetrics switches to global index
search if the time range is > 40 days. Prior to
ba0d7dc2fc, this switch was handled in the
indexDB code. That commit moved the logic to the storage level. As a
result, indexDB will search the per-day index for whatever time range
is passed to it.
This broke the BenchmarkHeadPostingForMatchers. It now shows very slow
indexDB performance compared to v1.111.0 (the last release before that
commit). This is because now indexDB tries to search 55 years of data by
spawning a separate goroutine for each day. Even though the data was
inserted only for the current day, running that many goroutines
significantly slows down the search.
The fix is to use global index time range explicitly.
Fixes #9203
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Similar to other storage caches, `storage/metricName` can be very
important to performance; however, it is not independently tunable like
other caches.
In high-cardinality setups where a large amount of that cardinality is
actively queried, we can see a high `metricName` miss rate.
The only way to correct this is to increase available memory, either by
provisioning more or by increasing `memory.allowedPercent`, which is
often expensive or undesirable for stability reasons.
It's possible to work around this by increasing `memory.allowedPercent`
and then adjusting `storage.cacheSizeIndexDBDataBlocks` and
`storage.cacheSizeStorageTSID` down, as they are the largest caches, but
this is a less ideal solution than being able to directly control this
cache size.
Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8843
### Describe Your Changes
The `DataBlock` contains structs with string fields, and while the
original comment mentioned not holding references to `br`, it wasn't
immediately clear that this also applies to fields like strings within
the data.
This change clarifies that the `writeBlockResultFunc` must not retain
references to any part of `br`, including its fields. This makes it
explicit that even seemingly safe types like strings must be copied if
needed.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
The 'select' HTTP request duration metric is currently implemented as a
summary, while the 'insert' HTTP request duration uses a histogram.
All time series under the same metric name must use the same metric
type. Using the summary type for this metric is preferable, as it
significantly reduces the number of unique time series generated. While
summaries have the limitation of not supporting accurate aggregation
across multiple instances or paths, this trade-off is more manageable
than dealing with a high-cardinality explosion caused by histograms.
### Describe Your Changes
- removed unneeded loop for binary field value size extraction, since it
should not be delimited by multiple `\n` symbols
- changed loop condition
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
### Describe Your Changes
Related issue: #8749
Added a query to the `/field_names` and `/field_values` requests to get more
precise results:
- If `filterName` is `_msg` or `_stream_id`, the query cannot be
generated specifically,
so a wildcard query (`"*"`) is returned.
- If `filterName` is `_stream`, the query is generated using regexp
(`{type=~"value.*"}`).
- If `filterName` is `_time`, a simplified query is created by trimming
the value up
to the first occurrence of a delimiter such as `-` or `:`.
- For all other values of `filterName`, a prefix query is returned using
the `query` value with a `*` appended (e.g., `"value*"`).
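The mapping above can be sketched as a simple switch. The function name, the exact regexp shape, and the trimming rule are assumptions for illustration, not the actual frontend code:

```go
package main

import (
	"fmt"
	"strings"
)

// buildFilterQuery sketches the per-field query generation described above.
func buildFilterQuery(filterName, value string) string {
	switch filterName {
	case "_msg", "_stream_id":
		// the query cannot be generated specifically
		return "*"
	case "_stream":
		// match streams via regexp on the typed value
		return fmt.Sprintf(`{type=~%q}`, value+".*")
	case "_time":
		// simplify by trimming the value up to the first `-` or `:` delimiter
		if i := strings.IndexAny(value, "-:"); i >= 0 {
			return value[:i]
		}
		return value
	default:
		// prefix query with `*` appended
		return fmt.Sprintf("%q", value+"*")
	}
}

func main() {
	fmt.Println(buildFilterQuery("_msg", "anything")) // *
	fmt.Println(buildFilterQuery("level", "err"))     // "err*"
}
```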
Related issue: #8806
Enhanced autocomplete with parsed field suggestions from unpack pipe.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: Nikolay <nik@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
### Describe Your Changes
This PR adds a call to optimize pipes after the `limit` pipe has been
appended.
Related: #9200
While this approach is not ideal, since it forces us to re-optimize all
pipes, it is simpler.
An alternative would be to reapply only the relevant optimizations
specifically for this case, something like:
```go
func (q *Query) AddPipeLimit(n uint64) {
	if len(q.pipes) > 0 {
		ps, ok := q.pipes[len(q.pipes)-1].(*pipeSort)
		if ok {
			if ps.limit == 0 || n < ps.limit {
				ps.limit = n
			}
			return
		}
		pu, ok := q.pipes[len(q.pipes)-1].(*pipeUniq)
		if ok {
			if pu.limit == 0 || n < pu.limit {
				pu.limit = n
			}
			return
		}
	}
	q.pipes = append(q.pipes, &pipeLimit{
		limit: n,
	})
}
```
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
The vlinsert package can be used as an imported dependency. Mostly it's
needed for vlagent, but there could be use cases in other applications.
This commit introduces a new interface for the vlinsertutil package, which
represents the actual storage for log row ingestion.
It includes CanWriteData, which could also be used to introduce a
back-pressure mechanism for vlinsert clients.
Related PR: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9034
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Previously, ForEachRow always reset the last row fields after iteration.
This made concurrent iteration with ForEachRow impossible, since
ForEachRow performed a hidden mutation of LogRows.
This commit resolves the issue by removing the fields reference.
Related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9076
This allows parsing an unlimited number of logs in a single HTTP request,
without the need to buffer the logs in memory.
This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9070
Thanks to @AndrewChubatiuk for the initial pull request - https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9153
This commit is based on the https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9153 .
It contains the following changes comparing to the original pull request:
- Remove the ugly function LineReader.NextLineWithLineFn(). Instead, uglify the Journald parser a bit
with hacky calls to LineReader.NextLine() in order to parse binary-encoded field values.
This keeps the maintainability of the LineReader, which is shared among multiple protocol parsers,
under control, while keeping the complexity of Journald parsing inside the app/vlinsert/journald package.
- Fix a typo bug inside isNameValid() - `(r < '0' && r > '9')` must be written as `(r < '0' || r > '9')`.
Rewrite isNameValid() into easier to understand code and rename it to isValidJournaldFieldName() for better readability.
Add tests for this function.
- Remove mentioning of the -journald.maxRequestSize command-line flag from VictoriaLogs docs.
- Add the description of the fix to VictoriaLogs changelog.
- Properly increment errorsTotal metric on every journald parse error.
- Add missing protoparserutil.PutUncompressedReader(reader) call, so the reader could be re-used between client requests.
- Remove improperly working code, which tries continuing parsing the request stream after parse errors.
It is impossible to recover reliably from journald parse errors related to reading the data from the request stream,
since the journald protocol format is completely braindead. So it is better to immediately return the error
to the client instead of trying to recover. The only errors, which could be recovered, are related to invalid field names / values.
Such errors are logged with the WARN level and the corresponding fields are skipped.
- Fix incorrect storage of the re-used name and value strings into fb.fields. The contents of the name and value strings
must be copied per every loop, which reads these strings from the request stream. Otherwise the contents of the previously
added Name and Value fields into fb.fields will be overwritten on the next loop.
- Ensure that LineReader.Line is set to nil after LineReader.NextLine() returns false. This should prevent from subtle bugs
when the LineReader.Line is read after LineReader.NextLine() returns false.
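The aliasing bug described in the re-used name and value strings point can be reduced to this sketch: storing a view of a re-used buffer makes previously stored values change on the next loop iteration, while a per-iteration copy is stable (all names here are illustrative):

```go
package main

import "fmt"

// collect reads values through a re-used buffer, the way a streaming parser
// does, and stores them either by aliasing the buffer (the bug) or by
// copying per iteration (the fix).
func collect(values []string, copyPerIteration bool) []string {
	buf := make([]byte, 0, 64) // re-used between loop iterations
	var stored [][]byte
	for _, v := range values {
		buf = append(buf[:0], v...) // overwrite the buffer with the next value
		if copyPerIteration {
			stored = append(stored, append([]byte(nil), buf...)) // FIX: copy
		} else {
			stored = append(stored, buf) // BUG: aliases the shared buffer
		}
	}
	out := make([]string, len(stored))
	for i, b := range stored {
		out[i] = string(b)
	}
	return out
}

func main() {
	fmt.Println(collect([]string{"foo", "bar"}, false)) // [bar bar] - corrupted
	fmt.Println(collect([]string{"foo", "bar"}, true))  // [foo bar] - correct
}
```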
### Describe Your Changes
> ⚠️ still draft, don't merge even if already approved
Docs update for vmanomaly v1.24.0 release with stateful service option
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
Such filters can be optimized to `*`. This avoids executing the other OR filters.
For example, `foo or * or bar` is optimized to `*`, while `foo` and `bar` filters aren't executed.
Such filters are frequently generated by Grafana, so this should improve query performance there.
Use getCompoundToken() instead of getCompoundFuncArg() for obtaining regexp filter value,
since getCompoundFuncArg() skips trailing '*' chars.
This allows detecting invalid queries in https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8582.
`Samples` could be confusing for users, especially for alerting rules:
* we say in docs that each returned "series" will create a new alert
* we say that there are "series fetched", so both columns should be
either "series" or "samples".
Let's rename it to `series` for consistency.
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
…s `query` template, but only logging a warning
The `query` template is not supported in replay mode, because we perform
range queries on the rule’s expression, but not on the `query` template.
Previously, if users saw the error `query template isn't supported in replay
mode`, they needed to remove the `query` template from the rule for replay
mode.
Also, templating is only used for alerting rules. Replaying alerting
rules doesn't send notifications (rule annotations are included here), and
users mainly focus on the generated `ALERTS`, so the `query` result is
trivial. This pull request shouldn't break things but simplifies the
usage of replay mode for this case.
related https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8746
### Describe Your Changes
Users trying to migrate from systems like Thanos or Prometheus using
`vmctl` and the `prometheus` migration protocol ask about problems when
they specify the snapshot but `vmctl` shows `0` series found for migration.
This issue happens because users specify the block folder instead of the
snapshot folder.
Added clarification about the snapshot structure and its appearance with
multiple blocks inside.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
* packaging/make: fix archiving release binaries for cluster
Cluster binaries for non-windows platforms did not include FIPS binaries; fix that by properly including them.
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
See #9144
VictoriaLogs was not correctly parsing timestamps from journald data
when using the default time field configuration. This caused
VictoriaLogs to look for a `_time` field instead of
`__REALTIME_TIMESTAMP` in journald data by default, resulting in
timestamps falling back to ingestion time rather than the actual log
timestamps.
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
The CommonParams.TimeFields is initialized to []{"_time"} by default. This prevents the proper usage of -journald.timeField
as the default field for reading log timestamps.
This bug has been introduced in the commit a1a731eb61
While at it, make sure that every parsed log entry has its own timestamp if the timestamp couldn't be read from the log entry.
This provides a reliable sort order of the logs.
### Describe Your Changes
fixes #8535
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
A storage node with a large number of partitions (e.g. 150+ partitions with 12TB of data) can take more than 30 seconds to start. This was causing vmbackupmanager restarts when running vmstorage colocated with vmbackupmanager in a single pod.
Use exponential backoff for retries and increase overall timeout for storage node healthcheck to 3 minutes to avoid vmbackupmanager restarts during storage node startup.
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Using SAS token is not required when copying data within a single
storage account. Using a plain object URL is sufficient for this case. A
single storage account is normally used to store backups so it is safe
to remove SAS tokens usage.
This fixes support of server-side copy when using managed identity as
authentication source as SAS token can only be generated when using
"shared key" type of credentials.
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9131
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
### Describe Your Changes
removed a trailing comma in ignoreFields, which led to _msg field removal,
since it is converted to an empty string before filtering
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
### Describe Your Changes
This pull request addresses a bug in the Less method of the tagFilter
struct. The original implementation incorrectly assigned the value of
isCompositeB by calling tf.isComposite() instead of other.isComposite().
This caused both isCompositeA and isCompositeB to always have the same
value, leading to incorrect comparisons when determining the order of
tagFilter instances.
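The bug class is easy to see in a reduced form. With `tf.isComposite()` on both sides, the two flags are always equal and the comparison branch is dead; the fix queries the other operand. The ordering direction below is an illustrative assumption, not the actual sort order:

```go
package main

import "fmt"

type tagFilter struct{ composite bool }

func (tf *tagFilter) isComposite() bool { return tf.composite }

// less compares two filters; composite filters sort first in this sketch.
func less(tf, other *tagFilter) bool {
	isCompositeA := tf.isComposite()
	isCompositeB := other.isComposite() // the bug was tf.isComposite() here,
	// which made isCompositeA == isCompositeB always, so the branch below
	// never fired
	if isCompositeA != isCompositeB {
		return isCompositeA
	}
	return false // the real code falls through to further comparisons
}

func main() {
	composite := &tagFilter{composite: true}
	plain := &tagFilter{composite: false}
	fmt.Println(less(composite, plain), less(plain, composite)) // true false
}
```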
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
---------
Co-authored-by: Zhu Jiekun <jiekun@victoriametrics.com>
### Describe Your Changes
Currently, some PRs have a failed CI due to low code coverage reported
by Codecov. However, the team typically ignore this and relies on other
quality indicators such as thorough code reviews.
This change configures Codecov to continue posting coverage reports
without marking the build as failed.
It also helps reduce confusion for external contributors, who might
otherwise feel pressured to add unnecessary tests just to satisfy
Codecov requirements (for example
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9002#discussion_r2111651046).
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
This commit adds a basic backup/restore test for vmsingle and vmcluster. A
more sophisticated one was originally added to the partition index PR
(#8134) and was aimed at testing backup/restore when switching back and
forth between legacy and partition index. During the code review it was
decided that it would be good to have a separate test as well, since the
legacy code will be removed in the future, and so will that test.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
This speeds up building the Go builder image significantly (from hours to a few minutes),
since the build speed was limited by the download speed from https://musl.cc , and this speed
was extremely slow (e.g. 10kb/s and slower).
This also improves build security, since the local mirror of musl.cc is under our control.
This PR addresses two issues:
When tenant labels (e.g. vm_account_id, vm_project_id) are passed via
extra_filters, they were not included in the rollupCache key. This could
cause cache entries to be reused across different tenants, resulting in
incorrect query results.
If a tenant is specified only via extra_filters, and that tenant does
not exist in TenantsCached, it gets silently filtered out by
GetTenantTokensFromFilters, causing the query to fall back to a global
(non-tenant) query — which is likely unexpected and potentially unsafe.
This fix ensures correct tenant scoping and avoids unintended data
exposure or cache pollution.
Related issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9001
---------
Co-authored-by: Zakhar Bessarab <me@zekker.dev>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
VictoriaMetrics is a fast, cost-saving, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
## Folder Structure
- `/app`: Contains the compilable binaries.
- `/lib`: Contains the golang reusable libraries.
- `/docs/victoriametrics`: Contains documentation for the project.
- `/apptest/tests`: Contains integration tests.
## Libraries and Frameworks
- Backend: Golang, no framework. Use third-party libraries sparingly.
- Frontend: React.
## Code review guidelines
Ensure the feature or bugfix includes a changelog entry in /docs/victoriametrics/changelog/CHANGELOG.md.
Verify the entry is under the ## tip section and matches the structure and style of existing entries.
Chore-only changes may be omitted from the changelog.
@@ -7,3 +7,4 @@ Please provide a brief description of the changes you made. Be as specific as po
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development goals](https://docs.victoriametrics.com/victoriametrics/goals/).
	httpListenAddrs  = flagutil.NewArrayString("httpListenAddr", "TCP address to listen for incoming http requests. See also -httpListenAddr.useProxyProtocol")
	useProxyProtocol = flagutil.NewArrayBool("httpListenAddr.useProxyProtocol", "Whether to use proxy protocol for connections accepted at the given -httpListenAddr . "+
		"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . "+
		"With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing")
)

func main() {
	// Write flags and help message to stdout, since it is easier to grep or pipe.
	flag.CommandLine.SetOutput(os.Stdout)
	flag.Usage = usage
	envflag.Parse()
	buildinfo.Init()
	logger.Init()

	listenAddrs := *httpListenAddrs
	if len(listenAddrs) == 0 {
		listenAddrs = []string{":9428"}
	}
	logger.Infof("starting VictoriaLogs at %q...", listenAddrs)
	datadogStreamFields = flagutil.NewArrayString("datadog.streamFields", "Comma-separated list of fields to use as log stream fields for logs ingested via DataDog protocol. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#stream-fields")
	datadogIgnoreFields = flagutil.NewArrayString("datadog.ignoreFields", "Comma-separated list of fields to ignore for logs ingested via DataDog protocol. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#dropping-fields")
	maxRequestSize  = flagutil.NewBytes("datadog.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single DataDog request")
	defaultMsgValue = flag.String("defaultMsgValue", "missing _msg field; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field",
		"Default value for _msg field if the ingested log entry doesn't contain it; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field")
)
// CommonParams contains common HTTP parameters used by log ingestion APIs.
//
// See https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters
// ParseUnixTimestamp parses s as unix timestamp in seconds, milliseconds, microseconds or nanoseconds and returns the parsed timestamp in nanoseconds.
func ParseUnixTimestamp(s string) (int64, error) {
	if strings.IndexByte(s, '.') >= 0 {
		// Parse timestamp as floating-point value
		f, err := strconv.ParseFloat(s, 64)
		if err != nil {
			return 0, fmt.Errorf("cannot parse unix timestamp from %q: %w", s, err)
		}
		if f < (1<<31) && f >= (-1 << 31) {
			// The timestamp is in seconds.
			return int64(f * 1e9), nil
		}
		if f < 1e3*(1<<31) && f >= 1e3*(-1<<31) {
			// The timestamp is in milliseconds.
			return int64(f * 1e6), nil
		}
		if f < 1e6*(1<<31) && f >= 1e6*(-1<<31) {
			// The timestamp is in microseconds.
			return int64(f * 1e3), nil
		}
		// The timestamp is in nanoseconds
		if f > math.MaxInt64 {
			return 0, fmt.Errorf("too big timestamp in nanoseconds: %v; mustn't exceed %v", f, int64(math.MaxInt64))
		}
		if f < math.MinInt64 {
			return 0, fmt.Errorf("too small timestamp in nanoseconds: %v; must be bigger or equal to %v", f, int64(math.MinInt64))
		}
		return int64(f), nil
	}
	// Parse timestamp as integer
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("cannot parse unix timestamp from %q: %w", s, err)
	maxRequestSize = flagutil.NewBytes("internalinsert.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single request, which can be accepted at /internal/insert HTTP endpoint")
	journaldStreamFields = flagutil.NewArrayString("journald.streamFields", "Comma-separated list of fields to use as log stream fields for logs ingested over journald protocol. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#stream-fields")
	journaldIgnoreFields = flagutil.NewArrayString("journald.ignoreFields", "Comma-separated list of fields to ignore for logs ingested over journald protocol. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#dropping-fields")
	journaldTimeField = flag.String("journald.timeField", "__REALTIME_TIMESTAMP", "Field to use as a log timestamp for logs ingested via journald protocol. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#time-field")
	journaldTenantID = flag.String("journald.tenantID", "0:0", "TenantID for logs ingested via the Journald endpoint. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#multitenancy")
	journaldIncludeEntryMetadata = flag.Bool("journald.includeEntryMetadata", false, "Include journal entry fields whose names start with double underscores.")
	maxRequestSize = flagutil.NewBytes("journald.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single journald request")
	syslogTimezone = flag.String("syslog.timezone", "Local", "Timezone to use when parsing timestamps in RFC3164 syslog messages. Timezone must be a valid IANA Time Zone. "+
		"For example: America/New_York, Europe/Berlin, Etc/GMT+3 . See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")
	streamFieldsTCP = flagutil.NewArrayString("syslog.streamFields.tcp", "Fields to use as log stream labels for logs ingested via the corresponding -syslog.listenAddr.tcp. "+
	streamFieldsUDP = flagutil.NewArrayString("syslog.streamFields.udp", "Fields to use as log stream labels for logs ingested via the corresponding -syslog.listenAddr.udp. "+
	decolorizeFieldsTCP = flagutil.NewArrayString("syslog.decolorizeFields.tcp", "Fields to remove ANSI color codes across logs ingested via the corresponding -syslog.listenAddr.tcp. "+
	decolorizeFieldsUDP = flagutil.NewArrayString("syslog.decolorizeFields.udp", "Fields to remove ANSI color codes across logs ingested via the corresponding -syslog.listenAddr.udp. "+
	tenantIDTCP = flagutil.NewArrayString("syslog.tenantID.tcp", "TenantID for logs ingested via the corresponding -syslog.listenAddr.tcp. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#multitenancy")
	tenantIDUDP = flagutil.NewArrayString("syslog.tenantID.udp", "TenantID for logs ingested via the corresponding -syslog.listenAddr.udp. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#multitenancy")
	listenAddrTCP = flagutil.NewArrayString("syslog.listenAddr.tcp", "Comma-separated list of TCP addresses to listen to for Syslog messages. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")
	listenAddrUDP = flagutil.NewArrayString("syslog.listenAddr.udp", "Comma-separated list of UDP addresses to listen to for Syslog messages. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")
	tlsEnable = flagutil.NewArrayBool("syslog.tls", "Whether to enable TLS for receiving syslog messages at the corresponding -syslog.listenAddr.tcp. "+
		"The corresponding -syslog.tlsCertFile and -syslog.tlsKeyFile must be set if -syslog.tls is set. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
	tlsCertFile = flagutil.NewArrayString("syslog.tlsCertFile", "Path to file with TLS certificate for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. "+
		"Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
	tlsKeyFile = flagutil.NewArrayString("syslog.tlsKeyFile", "Path to file with TLS key for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. "+
		"The provided key file is automatically re-read every second, so it can be dynamically updated. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
	tlsCipherSuites = flagutil.NewArrayString("syslog.tlsCipherSuites", "Optional list of TLS cipher suites for -syslog.listenAddr.tcp if -syslog.tls is set. "+
		"See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants . "+
		"See also https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
	tlsMinVersion = flag.String("syslog.tlsMinVersion", "TLS13", "The minimum TLS version to use for -syslog.listenAddr.tcp if -syslog.tls is set. "+
		"Supported values: TLS10, TLS11, TLS12, TLS13. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
	compressMethodTCP = flagutil.NewArrayString("syslog.compressMethod.tcp", "Compression method for syslog messages received at the corresponding -syslog.listenAddr.tcp. "+
		"Supported values: none, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression")
	compressMethodUDP = flagutil.NewArrayString("syslog.compressMethod.udp", "Compression method for syslog messages received at the corresponding -syslog.listenAddr.udp. "+
		"Supported values: none, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression")
	useLocalTimestampTCP = flagutil.NewArrayBool("syslog.useLocalTimestamp.tcp", "Whether to use local timestamp instead of the original timestamp for the ingested syslog messages "+
		"at the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps")
	useLocalTimestampUDP = flagutil.NewArrayBool("syslog.useLocalTimestamp.udp", "Whether to use local timestamp instead of the original timestamp for the ingested syslog messages "+
		"at the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps")
)
// MustInit initializes syslog parser at the given -syslog.listenAddr.tcp and -syslog.listenAddr.udp ports
//
// This function must be called after flag.Parse().
//
// MustStop() must be called in order to free up resources occupied by the initialized syslog parser.
func MustInit() {
	if workersStopCh != nil {
		logger.Panicf("BUG: MustInit() called twice without MustStop() call")
`<123>1 2023-06-03T17:42:12.345Z mymachine.example.com appname 12345 ID47 [exampleSDID@32473 iut="3" eventSource="Application 123 = ] 56" eventID="11211"] This is a test message with structured data.`,
})
}
func TestSyslogLineReader_Failure(t *testing.T) {
	f := func(data string) {
		t.Helper()
		r := bytes.NewBufferString(data)
		slr := getSyslogLineReader(r)
		defer putSyslogLineReader(slr)
		if slr.nextLine() {
			t.Fatalf("expecting failure to read the first line")
{"priority":"123","facility":"15","severity":"3","format":"rfc5424","hostname":"mymachine.example.com","app_name":"appname","proc_id":"12345","msg_id":"ID47","exampleSDID@32473.iut":"3","exampleSDID@32473.eventSource":"Application 123 = ] 56","exampleSDID@32473.eventID":"11211","_msg":"This is a test message with structured data."}`
	datasourceURL = flag.String("datasource.url", "http://localhost:9428/select/logsql/query", "URL for querying VictoriaLogs; "+
		"see https://docs.victoriametrics.com/victorialogs/querying/#querying-logs . See also -tail.url")
	tailURL = flag.String("tail.url", "", "URL for live tailing queries to VictoriaLogs; see https://docs.victoriametrics.com/victorialogs/querying/#live-tailing . "+
		"The url is automatically detected from -datasource.url by replacing /query with /tail at the end if -tail.url is empty")
	historyFile = flag.String("historyFile", "vlogscli-history", "Path to file with command history")
	header      = flagutil.NewArrayString("header", "Optional header to pass in request -datasource.url in the form 'HeaderName: value'")
	accountID   = flag.Int("accountID", 0, "Account ID to query; see https://docs.victoriametrics.com/victorialogs/#multitenancy")
	projectID   = flag.Int("projectID", 0, "Project ID to query; see https://docs.victoriametrics.com/victorialogs/#multitenancy")
)

const (
	firstLinePrompt = ";> "
	nextLinePrompt  = ""
)

func main() {
// Write flags and help message to stdout, since it is easier to grep or pipe.
The `run_id` field uniquely identifies every `vlogsgenerator` invocation.
### How to write logs to VictoriaLogs?
The generated logs can be written directly to VictoriaLogs by passing the address of [`/insert/jsonline` endpoint](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api)
to `-addr` command-line flag. For example, the following command writes the generated logs to VictoriaLogs running at `localhost`:
`vlogsgenerator` accepts various command-line flags, which can be used for configuring the number and the shape of the generated logs.
These flags can be inspected by running `vlogsgenerator -help`. Below are the most interesting flags:
* `-start` - starting timestamp for generating logs. Logs are evenly generated on the [`-start` ... `-end`] interval.
* `-end` - ending timestamp for generating logs. Logs are evenly generated on the [`-start` ... `-end`] interval.
* `-activeStreams` - the number of active [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) to generate.
* `-logsPerStream` - the number of log entries to generate per each log stream. Log entries are evenly distributed on the [`-start` ... `-end`] interval.
The total number of generated logs can be calculated as `-activeStreams` * `-logsPerStream`.
For example, the following command generates `1_000_000` log entries on the time range `[2024-01-01 - 2024-02-01]` across `100`
[log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields), where every log stream contains `10_000` log entries,
and writes them to `http://localhost:9428/insert/jsonline`:
```
bin/vlogsgenerator \
-start=2024-01-01 -end=2024-02-01 \
-activeStreams=100 \
-logsPerStream=10_000 \
-addr=http://localhost:9428/insert/jsonline
```
### Churn rate
It is possible to generate churn rate for active [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
by specifying a `-totalStreams` command-line flag bigger than `-activeStreams`. For example, the following command generates
logs for `1000` total streams, while the number of active streams equals `100`. This means that at any given time there are logs for `100` streams,
but these streams change over the given [`-start` ... `-end`] time range, so the total number of streams on that time range becomes `1000`:
```
bin/vlogsgenerator \
-start=2024-01-01 -end=2024-02-01 \
-activeStreams=100 \
-totalStreams=1_000 \
-logsPerStream=10_000 \
-addr=http://localhost:9428/insert/jsonline
```
In this case the total number of generated logs equals `-totalStreams` * `-logsPerStream` = `10_000_000`.
### Benchmark tuning
By default `vlogsgenerator` generates and writes logs by a single worker. This may limit the maximum data ingestion rate during benchmarks.
The number of workers can be changed via `-workers` command-line flag. For example, the following command generates and writes logs with `16` workers:
```
bin/vlogsgenerator \
-start=2024-01-01 -end=2024-02-01 \
-activeStreams=100 \
-logsPerStream=10_000 \
-addr=http://localhost:9428/insert/jsonline \
-workers=16
```
### Output statistics
Every 10 seconds `vlogsgenerator` writes statistics about the generated logs into `stderr`. The frequency of the generated statistics can be adjusted via `-statInterval` command-line flag.
For example, the following command writes statistics every 2 seconds:
```
bin/vlogsgenerator \
-start=2024-01-01 -end=2024-02-01 \
-activeStreams=100 \
-logsPerStream=10_000 \
-addr=http://localhost:9428/insert/jsonline \
-statInterval=2s
```
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).
	addr    = flag.String("addr", "stdout", "HTTP address to push the generated logs to; if it is set to stdout, then logs are generated to stdout")
	workers = flag.Int("workers", 1, "The number of workers to use to push logs to -addr")
	start   = newTimeFlag("start", "-1d", "Generated logs start from this time; see https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#timestamp-formats")
	end     = newTimeFlag("end", "0s", "Generated logs end at this time; see https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#timestamp-formats")
	activeStreams = flag.Int("activeStreams", 100, "The number of active log streams to generate; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields")
	totalStreams  = flag.Int("totalStreams", 0, "The number of total log streams; if -totalStreams > -activeStreams, then some active streams are substituted with new streams "+
		"during data generation")
	logsPerStream = flag.Int64("logsPerStream", 1_000, "The number of log entries to generate per each log stream. Log entries are evenly distributed between -start and -end")
	constFieldsPerLog = flag.Int("constFieldsPerLog", 3, "The number of fields with constant values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	varFieldsPerLog = flag.Int("varFieldsPerLog", 1, "The number of fields with variable values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	dictFieldsPerLog = flag.Int("dictFieldsPerLog", 2, "The number of fields with up to 8 different values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	u8FieldsPerLog = flag.Int("u8FieldsPerLog", 1, "The number of fields with uint8 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	u16FieldsPerLog = flag.Int("u16FieldsPerLog", 1, "The number of fields with uint16 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	u32FieldsPerLog = flag.Int("u32FieldsPerLog", 1, "The number of fields with uint32 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	u64FieldsPerLog = flag.Int("u64FieldsPerLog", 1, "The number of fields with uint64 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	i64FieldsPerLog = flag.Int("i64FieldsPerLog", 1, "The number of fields with int64 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	floatFieldsPerLog = flag.Int("floatFieldsPerLog", 1, "The number of fields with float64 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	ipFieldsPerLog = flag.Int("ipFieldsPerLog", 1, "The number of fields with IPv4 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	timestampFieldsPerLog = flag.Int("timestampFieldsPerLog", 1, "The number of fields with ISO8601 timestamps per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	jsonFieldsPerLog = flag.Int("jsonFieldsPerLog", 1, "The number of JSON fields to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	statInterval = flag.Duration("statInterval", 10*time.Second, "The interval between publishing the stats")
)

func main() {
// Write flags and help message to stdout, since it is easier to grep or pipe.
logger.Infof("start -workers=%d workers for ingesting -logsPerStream=%d log entries per each -totalStreams=%d (-activeStreams=%d) on a time range -start=%s, -end=%s to -addr=%s",
	maxConcurrentRequests = flag.Int("search.maxConcurrentRequests", getDefaultMaxConcurrentRequests(), "The maximum number of concurrent search requests. "+
		"It shouldn't be high, since a single request can saturate all the CPU cores, while many concurrently executed requests may require high amounts of memory. "+
		"See also -search.maxQueueDuration")
	maxQueueDuration = flag.Duration("search.maxQueueDuration", 10*time.Second, "The maximum time the search request waits for execution when -search.maxConcurrentRequests "+
		"limit is reached; see also -search.maxQueryDuration")
	maxQueryDuration = flag.Duration("search.maxQueryDuration", time.Second*30, "The maximum duration for query execution. It can be overridden to a smaller value on a per-query basis via 'timeout' query arg")
	disableSelect    = flag.Bool("select.disable", false, "Whether to disable /select/* HTTP endpoints")
	disableInternal  = flag.Bool("internalselect.disable", false, "Whether to disable /internal/select/* HTTP endpoints")
)

func getDefaultMaxConcurrentRequests() int {
	n := cgroup.AvailableCPUs()
	if n <= 4 {
		n *= 2
	}
	if n > 16 {
		// A single request can saturate all the CPU cores, so there is no sense
		// in allowing higher number of concurrent requests - they will just contend
	retentionPeriod = flagutil.NewRetentionDuration("retentionPeriod", "7d", "Log entries with timestamps older than now-retentionPeriod are automatically deleted; "+
		"log entries with timestamps outside the retention are also rejected during data ingestion; the minimum supported retention is 1d (one day); "+
		"see https://docs.victoriametrics.com/victorialogs/#retention ; see also -retention.maxDiskSpaceUsageBytes")
	maxDiskSpaceUsageBytes = flagutil.NewBytes("retention.maxDiskSpaceUsageBytes", 0, "The maximum disk space usage at -storageDataPath before older per-day "+
		"partitions are automatically dropped; see https://docs.victoriametrics.com/victorialogs/#retention-by-disk-space-usage ; see also -retentionPeriod")
	futureRetention = flagutil.NewRetentionDuration("futureRetention", "2d", "Log entries with timestamps bigger than now+futureRetention are rejected during data ingestion; "+
		"see https://docs.victoriametrics.com/victorialogs/#retention")
	storageDataPath = flag.String("storageDataPath", "victoria-logs-data", "Path to directory where to store VictoriaLogs data; "+
		"see https://docs.victoriametrics.com/victorialogs/#storage")
	inmemoryDataFlushInterval = flag.Duration("inmemoryDataFlushInterval", 5*time.Second, "The interval for guaranteed saving of in-memory data to disk. "+
		"The saved data survives unclean shutdowns such as OOM crash, hardware reset, SIGKILL, etc. "+
		"Bigger intervals may help increase the lifetime of flash storage with limited write cycles (e.g. Raspberry PI). "+
		"Smaller intervals increase disk IO load. Minimum supported value is 1s")
	logNewStreams = flag.Bool("logNewStreams", false, "Whether to log creation of new streams; this can be useful for debugging of high cardinality issues with log streams; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields ; see also -logIngestedRows")
	logIngestedRows = flag.Bool("logIngestedRows", false, "Whether to log all the ingested log entries; this can be useful for debugging of data ingestion; "+
		"see https://docs.victoriametrics.com/victorialogs/data-ingestion/ ; see also -logNewStreams")
	minFreeDiskSpaceBytes = flagutil.NewBytes("storage.minFreeDiskSpaceBytes", 10e6, "The minimum free disk space at -storageDataPath after which "+
		"the storage stops accepting new data")
	forceMergeAuthKey = flagutil.NewPassword("forceMergeAuthKey", "authKey, which must be passed in query string to /internal/force_merge . It overrides -httpAuth.* . "+
		"See https://docs.victoriametrics.com/victorialogs/#forced-merge")
	forceFlushAuthKey = flagutil.NewPassword("forceFlushAuthKey", "authKey, which must be passed in query string to /internal/force_flush . It overrides -httpAuth.* . "+
		"See https://docs.victoriametrics.com/victorialogs/#forced-flush")
	storageNodeAddrs = flagutil.NewArrayString("storageNode", "Comma-separated list of TCP addresses for storage nodes to route the ingested logs to and to send select queries to. "+
		"If the list is empty, then the ingested logs are stored and queried locally from -storageDataPath")
	insertConcurrency = flag.Int("insert.concurrency", 2, "The average number of concurrent data ingestion requests, which can be sent to every -storageNode")
	insertDisableCompression = flag.Bool("insert.disableCompression", false, "Whether to disable compression when sending the ingested data to -storageNode nodes. "+
		"Disabled compression reduces CPU usage at the cost of higher network usage")
	selectDisableCompression = flag.Bool("select.disableCompression", false, "Whether to disable compression for select query responses received from -storageNode nodes. "+
		"Disabled compression reduces CPU usage at the cost of higher network usage")
	storageNodeUsername = flagutil.NewArrayString("storageNode.username", "Optional basic auth username to use for the corresponding -storageNode")
	storageNodePassword = flagutil.NewArrayString("storageNode.password", "Optional basic auth password to use for the corresponding -storageNode")
	storageNodePasswordFile = flagutil.NewArrayString("storageNode.passwordFile", "Optional path to basic auth password to use for the corresponding -storageNode. "+
		"The file is re-read every second")
	storageNodeBearerToken = flagutil.NewArrayString("storageNode.bearerToken", "Optional bearer auth token to use for the corresponding -storageNode")
	storageNodeBearerTokenFile = flagutil.NewArrayString("storageNode.bearerTokenFile", "Optional path to bearer token file to use for the corresponding -storageNode. "+
		"The token is re-read from the file every second")
	storageNodeTLS = flagutil.NewArrayBool("storageNode.tls", "Whether to use TLS (HTTPS) protocol for communicating with the corresponding -storageNode. "+
		"By default communication is performed via HTTP")
	storageNodeTLSCAFile = flagutil.NewArrayString("storageNode.tlsCAFile", "Optional path to TLS CA file to use for verifying connections to the corresponding -storageNode. "+
		"By default, system CA is used")
	storageNodeTLSCertFile = flagutil.NewArrayString("storageNode.tlsCertFile", "Optional path to client-side TLS certificate file to use when connecting "+
		"to the corresponding -storageNode")
	storageNodeTLSKeyFile = flagutil.NewArrayString("storageNode.tlsKeyFile", "Optional path to client-side TLS certificate key to use when connecting to the corresponding -storageNode")
	storageNodeTLSServerName = flagutil.NewArrayString("storageNode.tlsServerName", "Optional TLS server name to use for connections to the corresponding -storageNode. "+
		"By default, the server name from -storageNode is used")
	storageNodeTLSInsecureSkipVerify = flagutil.NewArrayBool("storageNode.tlsInsecureSkipVerify", "Whether to skip tls verification when connecting to the corresponding -storageNode")
)

var localStorage *logstorage.Storage
var localStorageMetrics *metrics.Set

var netstorageInsert *netinsert.Storage
var netstorageSelect *netselect.Storage
// Init initializes vlstorage.
//
// Stop must be called when vlstorage is no longer needed
func Init() {
	if len(*storageNodeAddrs) == 0 {
		initLocalStorage()
	} else {
		initNetworkStorage()
	}
}

func initLocalStorage() {
	if localStorage != nil {
		logger.Panicf("BUG: initLocalStorage() has been already called")
	}
	if retentionPeriod.Duration() < 24*time.Hour {
		logger.Fatalf("-retentionPeriod cannot be smaller than a day; got %s", retentionPeriod)
	}
	cfg := &logstorage.StorageConfig{
		Retention:              retentionPeriod.Duration(),
		MaxDiskSpaceUsageBytes: maxDiskSpaceUsageBytes.N,
		FlushInterval:          *inmemoryDataFlushInterval,
		FutureRetention:        futureRetention.Duration(),
		LogNewStreams:          *logNewStreams,
		LogIngestedRows:        *logIngestedRows,
		MinFreeDiskSpaceBytes:  minFreeDiskSpaceBytes.N,
	}
	logger.Infof("opening storage at -storageDataPath=%s", *storageDataPath)
	logger.Infof("forced merge for partition_prefix=%q has been started", partitionNamePrefix)
	startTime := time.Now()
	localStorage.MustForceMerge(partitionNamePrefix)
	logger.Infof("forced merge for partition_prefix=%q has been successfully finished in %.3f seconds", partitionNamePrefix, time.Since(startTime).Seconds())
	logger.Warnf("skipping too long log entry, since its length exceeds %d bytes; the actual log entry length is %d bytes; log entry contents: %s", maxInsertBlockSize, len(b), b)