Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git, synced 2026-05-17 16:59:40 +03:00.

Compare commits (1 commit): ark/implem...logsql-ski

Commit: 971aecd1ae

.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 4 lines changed)
@@ -8,7 +8,7 @@ body:
        Before filling a bug report it would be great to [upgrade](https://docs.victoriametrics.com/#how-to-upgrade)
        to [the latest available release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
        and verify whether the bug is reproducible there.
        It's also recommended to read the [troubleshooting docs](https://docs.victoriametrics.com/troubleshooting/) first.
        It's also recommended to read the [troubleshooting docs](https://docs.victoriametrics.com/Troubleshooting.html) first.
  - type: textarea
    id: describe-the-bug
    attributes:
@@ -65,7 +65,7 @@ body:

        See how to setup monitoring here:
        * [monitoring for single-node VictoriaMetrics](https://docs.victoriametrics.com/#monitoring)
        * [monitoring for VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/#monitoring)
        * [monitoring for VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#monitoring)
    validations:
      required: false
  - type: textarea

.github/ISSUE_TEMPLATE/question.yml (vendored, 8 lines changed)
@@ -24,9 +24,9 @@ body:
      label: Troubleshooting docs
      description: I am familiar with the following troubleshooting docs
      options:
        - label: General - https://docs.victoriametrics.com/troubleshooting/
        - label: General - https://docs.victoriametrics.com/Troubleshooting.html
          required: false
        - label: vmagent - https://docs.victoriametrics.com/vmagent/#troubleshooting
          required: false
        - label: vmalert - https://docs.victoriametrics.com/vmalert/#troubleshooting
        - label: vmagent - https://docs.victoriametrics.com/vmagent.html#troubleshooting
          required: false
        - label: vmalert - https://docs.victoriametrics.com/vmalert.html#troubleshooting
          required: false

.github/PULL_REQUEST_TEMPLATE/pull_request_template.md (vendored, new file, 35 lines)
@@ -0,0 +1,35 @@
### Describe Your Changes

Please provide a brief description of the changes you made. Be as specific as possible to help others understand the purpose and impact of your modifications.

### Checklist

The following checks are mandatory:

- [ ] I have read the [Contributing Guidelines](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/CONTRIBUTING.md)
- [ ] All commits are signed and include a `Signed-off-by` line. Use `git commit -s` to include `Signed-off-by` in your commits. See this [doc](https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work) about how to sign your commits.
- [ ] Tests are passing locally. Use `make test` to run all tests locally.
- [ ] Linting is passing locally. Use `make check-all` to run all linters locally.

Further checks are optional for External Contributions:

- [ ] Include a link to the GitHub issue in the commit message, if the issue exists.
- [ ] Mention the change in the [Changelog](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/CHANGELOG.md). Explain what has changed and why. If there is a related issue or documentation change - link them as well.

Tips for writing a good changelog message:

* Write a human-readable changelog message that describes the problem and solution.
* Include a link to the issue or pull request in your changelog message.
* Use specific language identifying the fix, such as an error message, metric name, or flag name.
* Provide a link to the relevant documentation for any new features you add or modify.

- [ ] After your pull request is merged, please add a message to the issue with instructions for how to test the fix or try the feature you added. Here is an [example](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4048#issuecomment-1546453726)
- [ ] Do not close the original issue before the change is released. Please note that in some cases GitHub can automatically close the issue once the PR is merged. Re-open the issue in such a case.
- [ ] If the change somehow affects public interfaces (a new flag was added or updated, or some behavior has changed) - add the corresponding change to the documentation.

Examples of good changelog messages:

1. FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add support for [VictoriaMetrics remote write protocol](https://docs.victoriametrics.com/vmagent.html#victoriametrics-remote-write-protocol) when [sending / receiving data to / from Kafka](https://docs.victoriametrics.com/vmagent.html#kafka-integration). This protocol allows saving egress network bandwidth costs when sending data from `vmagent` to `Kafka` located in another datacenter or availability zone. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1225).

2. BUGFIX: [stream aggregation](https://docs.victoriametrics.com/stream-aggregation.html): suppress `series after dedup` error message in logs when `-remoteWrite.streamAggr.dedupInterval` command-line flag is set at [vmagent](https://docs.victoriametrics.com/vmagent.html) or when `-streamAggr.dedupInterval` command-line flag is set at [single-node VictoriaMetrics](https://docs.victoriametrics.com/).

.github/pull_request_template.md (vendored, 9 lines changed)
@@ -1,9 +0,0 @@
### Describe Your Changes

Please provide a brief description of the changes you made. Be as specific as possible to help others understand the purpose and impact of your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to the [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/contributing/).

.github/workflows/build.yml (vendored, 54 lines changed)
@@ -1,54 +0,0 @@
name: build

on:
  push:
    branches:
      - cluster
      - master
    paths:
      - '**.go'
      - '**/Dockerfile*' # The trailing * is for app/vmui/Dockerfile-*.
      - '**/Makefile'
  pull_request:
    branches:
      - cluster
      - master
    paths:
      - '**.go'
      - '**/Dockerfile*' # The trailing * is for app/vmui/Dockerfile-*.
      - '**/Makefile'

permissions:
  contents: read

concurrency:
  cancel-in-progress: true
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Code checkout
        uses: actions/checkout@v4

      - name: Setup Go
        id: go
        uses: actions/setup-go@v5
        with:
          go-version: stable
          cache: false

      - name: Cache Go artifacts
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/go-build
            ~/go/bin
            ~/go/pkg/mod
          key: go-artifacts-${{ runner.os }}-crossbuild-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
          restore-keys: go-artifacts-${{ runner.os }}-crossbuild-

      - name: Run crossbuild
        run: make crossbuild

.github/workflows/codeql-analysis-go.yml (vendored, 62 lines changed)
@@ -1,62 +0,0 @@
name: 'CodeQL Go'

on:
  push:
    branches:
      - cluster
      - master
    paths:
      - '**.go'
  pull_request:
    branches:
      - cluster
      - master
    paths:
      - '**.go'

concurrency:
  cancel-in-progress: true
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Go
        id: go
        uses: actions/setup-go@v5
        with:
          cache: false
          go-version: stable

      - name: Cache Go artifacts
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/go-build
            ~/go/bin
            ~/go/pkg/mod
          key: go-artifacts-${{ runner.os }}-codeql-analyze-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
          restore-keys: go-artifacts-${{ runner.os }}-codeql-analyze-

      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: go

      - name: Autobuild
        uses: github/codeql-action/autobuild@v3

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3
        with:
          category: 'language:go'
@@ -1,27 +1,22 @@
name: 'CodeQL JS/TS'
name: "CodeQL - JS"

on:
  push:
    branches:
      - cluster
      - master
    branches: [master, cluster]
    paths:
      - '**.js'
      - '**.ts'
      - '**.tsx'
      - "**.js"
  pull_request:
    branches:
      - cluster
      - master
    # The branches below must be a subset of the branches above
    branches: [master, cluster]
    paths:
      - '**.js'
      - '**.ts'
      - '**.tsx'
      - "**.js"
  schedule:
    - cron: "30 18 * * 2"
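    # (Explanatory note, not part of the diff: "30 18 * * 2" means minute 30,
    # hour 18, any day of month, any month, weekday 2 — i.e. a weekly run at
    # 18:30 UTC on Tuesdays.)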

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  analyze:
    name: Analyze

@@ -31,6 +26,11 @@ jobs:
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: ["javascript"]

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

@@ -38,9 +38,9 @@ jobs:
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: javascript-typescript
          languages: ${{ matrix.language }}

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3
        with:
          category: 'language:js/ts'
          category: "javascript"

.github/workflows/codeql-analysis.yml (vendored, new file, 103 lines)
@@ -0,0 +1,103 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"

on:
  push:
    branches: [master, cluster]
    paths-ignore:
      - "docs/**"
      - "**.md"
      - "**.txt"
      - "**.js"
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [master, cluster]
    paths-ignore:
      - "docs/**"
      - "**.md"
      - "**.txt"
      - "**.js"
  schedule:
    - cron: "30 18 * * 2"

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: ["go"]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
        # Learn more about CodeQL language support at https://git.io/codeql-language-support

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Go
        id: go
        uses: actions/setup-go@v5
        with:
          go-version: stable
          cache: false
        if: ${{ matrix.language == 'go' }}

      - name: Cache Go artifacts
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/go-build
            ~/go/pkg/mod
            ~/go/bin
          key: go-artifacts-${{ runner.os }}-codeql-analyze-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
          restore-keys: go-artifacts-${{ runner.os }}-codeql-analyze-
        if: ${{ matrix.language == 'go' }}

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: ./path/to/local/query, your-org/your-repo/queries@main

      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v3

      # ℹ️ Command-line programs to run using the OS shell.
      # 📚 https://git.io/JvXDl

      # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
      # and modify them (or add more) to build your code if your project
      # uses a compiled language

      #- run: |
      #   make bootstrap
      #   make release

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3

.github/workflows/main.yml (vendored, 75 lines changed)
@@ -1,25 +1,29 @@
name: main

on:
  push:
    branches:
      - cluster
      - master
    paths:
      - '**.go'
      - cluster
    paths-ignore:
      - "docs/**"
      - "**.md"
      - "dashboards/**"
      - "deployment/**.yml"
  pull_request:
    branches:
      - cluster
      - master
    paths:
      - '**.go'

      - cluster
    paths-ignore:
      - "docs/**"
      - "**.md"
      - "dashboards/**"
      - "deployment/**.yml"
permissions:
  contents: read

concurrency:
  cancel-in-progress: true
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  lint:
@@ -33,16 +37,16 @@ jobs:
        id: go
        uses: actions/setup-go@v5
        with:
          cache: false
          go-version: stable
          cache: false

      - name: Cache Go artifacts
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/go-build
            ~/go/bin
            ~/go/pkg/mod
            ~/go/bin
          key: go-artifacts-${{ runner.os }}-check-all-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
          restore-keys: go-artifacts-${{ runner.os }}-check-all-

@@ -51,18 +55,10 @@ jobs:
          make check-all
          git diff --exit-code

  test:
    name: test
  build:
    needs: lint
    name: build
    runs-on: ubuntu-latest

    strategy:
      matrix:
        scenario:
          - 'test-full'
          - 'test-full-386'
          - 'test-pure'

    steps:
      - name: Code checkout
        uses: actions/checkout@v4
@@ -71,20 +67,51 @@ jobs:
        id: go
        uses: actions/setup-go@v5
        with:
          cache: false
          go-version: stable
          cache: false

      - name: Cache Go artifacts
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/go-build
            ~/go/bin
            ~/go/pkg/mod
            ~/go/bin
          key: go-artifacts-${{ runner.os }}-crossbuild-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
          restore-keys: go-artifacts-${{ runner.os }}-crossbuild-

      - name: Build
        run: make crossbuild

  test:
    needs: lint
    strategy:
      matrix:
        scenario: ["test-full", "test-pure", "test-full-386"]
    name: test
    runs-on: ubuntu-latest
    steps:
      - name: Code checkout
        uses: actions/checkout@v4

      - name: Setup Go
        id: go
        uses: actions/setup-go@v5
        with:
          go-version: stable
          cache: false

      - name: Cache Go artifacts
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/go-build
            ~/go/pkg/mod
            ~/go/bin
          key: go-artifacts-${{ runner.os }}-${{ matrix.scenario }}-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
          restore-keys: go-artifacts-${{ runner.os }}-${{ matrix.scenario }}-

      - name: Run tests
      - name: run tests
        run: make ${{ matrix.scenario}}

      - name: Publish coverage

.github/workflows/sync-docs.yml (vendored, 2 lines changed)
@@ -25,7 +25,7 @@ jobs:
          token: ${{ secrets.VM_BOT_GH_TOKEN }}
          path: docs
      - name: Import GPG key
        uses: crazy-max/ghaction-import-gpg@v6
        uses: crazy-max/ghaction-import-gpg@v5
        with:
          gpg_private_key: ${{ secrets.VM_BOT_GPG_PRIVATE_KEY }}
          passphrase: ${{ secrets.VM_BOT_PASSPHRASE }}

.github/workflows/test_package.yml (vendored, 59 lines changed)
@@ -1,59 +0,0 @@
name: Package tests

on:
  push:
    branches:
      - cluster
      - master
    paths:
      - '.github/workflows/test_package.yml'
      - '**/Dockerfile*'
      - '**/Makefile'
  pull_request:
    branches:
      - cluster
      - master
    paths:
      - '.github/workflows/test_package.yml'
      - '**/Dockerfile*'
      - '**/Makefile'

permissions:
  contents: read

concurrency:
  cancel-in-progress: true
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

jobs:
  test_release:
    name: Test Release
    runs-on: ubuntu-latest
    steps:
      - name: Code checkout
        uses: actions/checkout@v4

      - name: Setup Go
        id: go
        uses: actions/setup-go@v5
        with:
          go-version: stable
          cache: false

      - name: Cache Go artifacts
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/go-build
            ~/go/bin
            ~/go/pkg/mod
          key: go-artifacts-${{ runner.os }}-test-release-${{ steps.go.outputs.go-version }}-${{ hashFiles('Makefile', '**/Dockerfile*', '**/Makefile') }}
          restore-keys: go-artifacts-${{ runner.os }}-test-release-

      - name: Run make release
        run: |
          make clean
          make release

      - name: Run release tests
        run: make test-release-victoria-metrics

.gitignore (vendored, 4 lines changed)
@@ -22,6 +22,4 @@ Gemfile.lock
/_site
_site
*.tmp
/docs/.jekyll-metadata
coverage.txt
cspell.json
/docs/.jekyll-metadata
@@ -16,7 +16,4 @@ issues:

linters-settings:
  errcheck:
    exclude-functions:
      - "fmt.Fprintf"
      - "fmt.Fprint"
      - "(net/http.ResponseWriter).Write"
    exclude: ./errcheck_excludes.txt

CODE_OF_CONDUCT_RU.md (new file, 120 lines)
@@ -0,0 +1,120 @@

# Кодекс Поведения участника

## Наши обязательства

Мы, как участники, авторы и лидеры обязуемся сделать участие в сообществе
свободным от притеснений для всех, независимо от возраста, телосложения,
видимых или невидимых ограничений способности, этнической принадлежности,
половых признаков, гендерной идентичности и выражения, уровня опыта,
образования, социо-экономического статуса, национальности, внешности,
расы, религии, или сексуальной идентичности и ориентации.

Мы обещаем действовать и взаимодействовать таким образом, чтобы вносить вклад в открытое,
дружелюбное, многообразное, инклюзивное и здоровое сообщество.

## Наши стандарты

Примеры поведения, создающие условия для благоприятных взаимоотношений включают в себя:

* Проявление доброты и эмпатии к другим участникам проекта
* Уважение к чужой точке зрения и опыту
* Конструктивная критика и принятие конструктивной критики
* Принятие ответственности, принесение извинений тем, кто пострадал от наших ошибок,
  и извлечение уроков из опыта
* Ориентирование на то, что лучше подходит для сообщества, а не только для нас лично

Примеры неприемлемого поведения участников включают в себя:

* Использование выражений или изображений сексуального характера и нежелательное сексуальное внимание или домогательство в любой форме
* Троллинг, оскорбительные или уничижительные комментарии, переход на личности или затрагивание политических убеждений
* Публичное или приватное домогательство
* Публикация личной информации других лиц, например, физического или электронного адреса, без явного разрешения
* Иное поведение, которое обоснованно считать неуместным в профессиональной обстановке

## Обязанности

Лидеры сообщества отвечают за разъяснение и применение наших стандартов приемлемого
поведения и будут предпринимать соответствующие и честные меры по исправлению положения
в ответ на любое поведение, которое они сочтут неприемлемым, угрожающим, оскорбительным или вредным.

Лидеры сообщества обладают правом и обязанностью удалять, редактировать или отклонять
комментарии, коммиты, код, изменения в вики, вопросы и другой вклад, который не совпадает
с Кодексом Поведения, и предоставят причины принятого решения, когда сочтут нужным.

## Область применения

Данный Кодекс Поведения применим во всех публичных физических и цифровых пространствах сообщества,
а также когда человек официально представляет сообщество в публичных местах.
Примеры представления проекта или сообщества включают использование официальной электронной почты,
публикации в официальном аккаунте в социальных сетях,
или упоминания как представителя в онлайн или оффлайн мероприятии.

## Приведение в исполнение

О случаях домогательства, а также оскорбительного или иного неприемлемого
поведения можно сообщить ответственным лидерам сообщества с помощью письма на info@victoriametrics.com.
Все жалобы будут рассмотрены и расследованы оперативно и беспристрастно.

Все лидеры сообщества обязаны уважать неприкосновенность частной жизни и личную
неприкосновенность автора сообщения.

## Руководство по исполнению

Лидеры сообщества будут следовать следующим Принципам Воздействия в Сообществе,
чтобы определить последствия для тех, кого они считают виновными в нарушении данного Кодекса Поведения:

### 1. Исправление

**Общественное влияние**: Использование недопустимой лексики или другое поведение,
считающееся непрофессиональным или нежелательным в сообществе.

**Последствия**: Личное, письменное предупреждение от лидеров сообщества,
объясняющее суть нарушения и почему такое поведение
было неуместно. Лидеры сообщества могут попросить принести публичное извинение.

### 2. Предупреждение

**Общественное влияние**: Нарушение в результате одного инцидента или серии действий.

**Последствия**: Предупреждение о последствиях в случае продолжающегося неуместного поведения.
На определенное время не допускается взаимодействие с людьми, вовлеченными в инцидент,
включая незапрошенное взаимодействие
с теми, кто обеспечивает соблюдение Кодекса. Это включает в себя избегание взаимодействия
в публичных пространствах, а также во внешних каналах,
таких как социальные сети. Нарушение этих правил влечет за собой временный или вечный бан.

### 3. Временный бан

**Общественное влияние**: Серьёзное нарушение стандартов сообщества,
включая продолжительное неуместное поведение.

**Последствия**: Временный запрет (бан) на любое взаимодействие
или публичное общение с сообществом на определенный период времени.
На этот период не допускается публичное или личное взаимодействие с людьми,
вовлеченными в инцидент, включая незапрошенное взаимодействие
с теми, кто обеспечивает соблюдение Кодекса.
Нарушение этих правил влечет за собой вечный бан.

### 4. Вечный бан

**Общественное влияние**: Демонстрация систематических нарушений стандартов сообщества,
включая продолжающееся неуместное поведение, домогательство до отдельных лиц,
или проявление агрессии либо пренебрежительного отношения к категориям лиц.

**Последствия**: Вечный запрет на любое публичное взаимодействие с сообществом.

## Атрибуция

Данный Кодекс Поведения основан на [Кодексе Поведения участника][homepage],
версии 2.0, доступной по адресу
<https://www.contributor-covenant.org/version/2/0/code_of_conduct.html>.

Принципы Воздействия в Сообществе были вдохновлены [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

Ответы на общие вопросы о данном кодексе поведения ищите на странице FAQ:
<https://www.contributor-covenant.org/faq>. Переводы доступны по адресу
<https://www.contributor-covenant.org/translations>.
@@ -1 +1,21 @@
The document has been moved [here](https://docs.victoriametrics.com/contributing/).
If you like VictoriaMetrics and want to contribute, then we need the following:

- Filing issues and feature requests [here](https://github.com/VictoriaMetrics/VictoriaMetrics/issues).
- Spreading the word about VictoriaMetrics: conference talks, articles, comments, experience sharing with colleagues.
- Updating documentation.

We are open to third-party pull requests provided they follow the [KISS design principle](https://en.wikipedia.org/wiki/KISS_principle):

- Prefer simple code and architecture.
- Avoid complex abstractions.
- Avoid magic code and fancy algorithms.
- Avoid [big external dependencies](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d).
- Minimize the number of moving parts in the distributed system.
- Avoid automated decisions, which may hurt cluster availability, consistency or performance.

Adhering to the `KISS` principle simplifies the resulting code and architecture, so it can be reviewed, understood and verified by many people.

Before sending a pull request please check the following:
- [ ] All commits are signed and include a `Signed-off-by` line. Use `git commit -s` to include `Signed-off-by` in your commits. See this [doc](https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work) about how to sign your commits.
- [ ] Tests are passing locally. Use `make test` to run all tests locally.
- [ ] Linting is passing locally. Use `make check-all` to run all linters locally.

Makefile (186 lines changed)
@@ -14,86 +14,15 @@ endif

GO_BUILDINFO = -X '$(PKG_PREFIX)/lib/buildinfo.Version=$(APP_NAME)-$(DATEINFO_TAG)-$(BUILDINFO_TAG)'

VICTORIA_LOGS_COMPONENTS = victoria-logs
VICTORIA_METRICS_COMPONENTS = victoria-metrics
VICTORIA_METRICS_UTILS_COMPONENTS = vmagent vmalert vmalert-tool vmauth vmbackup vmrestore vmctl

.PHONY: $(MAKECMDGOALS)

include app/*/Makefile
include cspell/Makefile
include docs/Makefile
include deployment/*/Makefile
include dashboards/Makefile
include snap/local/Makefile
include package/release/Makefile

define RELEASE_GOOS_GOARCH
$(eval PKG_NAME := $(1))
$(eval PKG_COMPONENTS := $(2))

# Build
$(foreach pkg_component, $(PKG_COMPONENTS), $(MAKE) $(pkg_component)-$(GOOS)-$(GOARCH)-prod)

# Generate SBOM
mkdir -p "bin/$(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom"
$(foreach pkg_component, $(PKG_COMPONENTS),
    cyclonedx-gomod app -assert-licenses -json -licenses -packages \
        -main app/$(pkg_component) \
        -output bin/$(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom/$(pkg_component)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom.json)

# Pack and compress
$(foreach pkg_component, $(PKG_COMPONENTS),
    cd bin && tar --transform="flags=r;s|-$(GOOS)-$(GOARCH)||" -rf $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar \
        $(pkg_component)-$(GOOS)-$(GOARCH)-prod)
cd bin && gzip $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar
cd bin && tar -czf $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom.tar.gz $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom

# Generate checksums
cd bin && \
    sha256sum $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz > $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt && \
    sha256sum $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom.tar.gz >> $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt
$(foreach pkg_component, $(PKG_COMPONENTS),
    cd bin && sha256sum $(pkg_component)-$(GOOS)-$(GOARCH)-prod | sed s/-$(GOOS)-$(GOARCH)-prod/-prod/ >> \
        $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt)

# Clean up
cd bin && rm -rf $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom
$(foreach pkg_component, $(PKG_COMPONENTS), cd bin && rm -f $(pkg_component)-$(GOOS)-$(GOARCH)-prod)
endef
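
(For orientation, not part of the change itself: this macro is expanded per target platform further down the diff, e.g. `$(call RELEASE_GOOS_GOARCH,victoria-metrics,$(VICTORIA_METRICS_COMPONENTS))` under the `release-victoria-metrics-goos-goarch` target.)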

define RELEASE_WINDOWS_GOARCH
$(eval PKG_NAME := $(1))
$(eval PKG_COMPONENTS := $(2))

# Build
$(foreach pkg_component, $(PKG_COMPONENTS), $(MAKE) $(pkg_component)-$(GOOS)-$(GOARCH)-prod)

# Generate SBOM
mkdir -p "bin/$(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom"
$(foreach pkg_component, $(PKG_COMPONENTS),
    cyclonedx-gomod app -assert-licenses -json -licenses -packages \
        -main app/$(pkg_component) \
        -output bin/$(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom/$(pkg_component)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom.json)

# Pack and compress
$(foreach pkg_component, $(PKG_COMPONENTS),
    cd bin && zip -u $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG).zip \
        $(pkg_component)-$(GOOS)-$(GOARCH)-prod.exe)
cd bin && zip $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom.zip -r $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom

# Generate checksums
cd bin && \
    sha256sum $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG).zip > $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt && \
    sha256sum $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom.zip >> $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt
$(foreach pkg_component, $(PKG_COMPONENTS),
    cd bin && sha256sum $(pkg_component)-$(GOOS)-$(GOARCH)-prod.exe >> $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt)

# Clean up
cd bin && rm -rf $(PKG_NAME)-$(GOOS)-$(GOARCH)-$(PKG_TAG)_bom
$(foreach pkg_component, $(PKG_COMPONENTS), cd bin && rm -f $(pkg_component)-$(GOOS)-$(GOARCH)-prod.exe)
endef

all: \
    victoria-metrics-prod \
    victoria-logs-prod \
@@ -312,13 +241,26 @@ release-victoria-metrics-openbsd-amd64:
    GOOS=openbsd GOARCH=amd64 $(MAKE) release-victoria-metrics-goos-goarch

release-victoria-metrics-windows-amd64:
    GOOS=windows GOARCH=amd64 $(MAKE) release-victoria-metrics-windows-goarch
    GOARCH=amd64 $(MAKE) release-victoria-metrics-windows-goarch

release-victoria-metrics-goos-goarch:
    $(call RELEASE_GOOS_GOARCH,victoria-metrics,$(VICTORIA_METRICS_COMPONENTS))
release-victoria-metrics-goos-goarch: victoria-metrics-$(GOOS)-$(GOARCH)-prod
    cd bin && \
        tar --transform="flags=r;s|-$(GOOS)-$(GOARCH)||" -czf victoria-metrics-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
            victoria-metrics-$(GOOS)-$(GOARCH)-prod \
        && sha256sum victoria-metrics-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
            victoria-metrics-$(GOOS)-$(GOARCH)-prod \
        | sed s/-$(GOOS)-$(GOARCH)-prod/-prod/ > victoria-metrics-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt
    cd bin && rm -rf victoria-metrics-$(GOOS)-$(GOARCH)-prod
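# (Explanatory note, not part of the change: the tar --transform rule above strips
# the -$(GOOS)-$(GOARCH) infix while packing, and the sed call applies the same
# rename to the checksum list, so the shipped binary is named victoria-metrics-prod.)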

release-victoria-metrics-windows-goarch:
    $(call RELEASE_WINDOWS_GOARCH,victoria-metrics,$(VICTORIA_METRICS_COMPONENTS))
release-victoria-metrics-windows-goarch: victoria-metrics-windows-$(GOARCH)-prod
    cd bin && \
        zip victoria-metrics-windows-$(GOARCH)-$(PKG_TAG).zip \
            victoria-metrics-windows-$(GOARCH)-prod.exe \
        && sha256sum victoria-metrics-windows-$(GOARCH)-$(PKG_TAG).zip \
            victoria-metrics-windows-$(GOARCH)-prod.exe \
        > victoria-metrics-windows-$(GOARCH)-$(PKG_TAG)_checksums.txt
    cd bin && rm -rf \
        victoria-metrics-windows-$(GOARCH)-prod.exe

release-victoria-logs:
    $(MAKE_PARALLEL) release-victoria-logs-linux-386 \
@@ -413,13 +355,77 @@ release-vmutils-openbsd-amd64:
    GOOS=openbsd GOARCH=amd64 $(MAKE) release-vmutils-goos-goarch

release-vmutils-windows-amd64:
    GOOS=windows GOARCH=amd64 $(MAKE) release-vmutils-windows-goarch
    GOARCH=amd64 $(MAKE) release-vmutils-windows-goarch

release-vmutils-goos-goarch:
    $(call RELEASE_GOOS_GOARCH, vmutils, $(VICTORIA_METRICS_UTILS_COMPONENTS))
release-vmutils-goos-goarch: \
    vmagent-$(GOOS)-$(GOARCH)-prod \
    vmalert-$(GOOS)-$(GOARCH)-prod \
    vmalert-tool-$(GOOS)-$(GOARCH)-prod \
    vmauth-$(GOOS)-$(GOARCH)-prod \
    vmbackup-$(GOOS)-$(GOARCH)-prod \
    vmrestore-$(GOOS)-$(GOARCH)-prod \
    vmctl-$(GOOS)-$(GOARCH)-prod
    cd bin && \
        tar --transform="flags=r;s|-$(GOOS)-$(GOARCH)||" -czf vmutils-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
            vmagent-$(GOOS)-$(GOARCH)-prod \
            vmalert-$(GOOS)-$(GOARCH)-prod \
            vmalert-tool-$(GOOS)-$(GOARCH)-prod \
            vmauth-$(GOOS)-$(GOARCH)-prod \
            vmbackup-$(GOOS)-$(GOARCH)-prod \
            vmrestore-$(GOOS)-$(GOARCH)-prod \
            vmctl-$(GOOS)-$(GOARCH)-prod \
        && sha256sum vmutils-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
            vmagent-$(GOOS)-$(GOARCH)-prod \
            vmalert-$(GOOS)-$(GOARCH)-prod \
            vmalert-tool-$(GOOS)-$(GOARCH)-prod \
            vmauth-$(GOOS)-$(GOARCH)-prod \
            vmbackup-$(GOOS)-$(GOARCH)-prod \
            vmrestore-$(GOOS)-$(GOARCH)-prod \
            vmctl-$(GOOS)-$(GOARCH)-prod \
        | sed s/-$(GOOS)-$(GOARCH)-prod/-prod/ > vmutils-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt
    cd bin && rm -rf \
        vmagent-$(GOOS)-$(GOARCH)-prod \
        vmalert-$(GOOS)-$(GOARCH)-prod \
        vmalert-tool-$(GOOS)-$(GOARCH)-prod \
        vmauth-$(GOOS)-$(GOARCH)-prod \
        vmbackup-$(GOOS)-$(GOARCH)-prod \
        vmrestore-$(GOOS)-$(GOARCH)-prod \
        vmctl-$(GOOS)-$(GOARCH)-prod

release-vmutils-windows-goarch:
    $(call RELEASE_WINDOWS_GOARCH, vmutils, $(VICTORIA_METRICS_UTILS_COMPONENTS))
release-vmutils-windows-goarch: \
    vmagent-windows-$(GOARCH)-prod \
    vmalert-windows-$(GOARCH)-prod \
    vmalert-tool-windows-$(GOARCH)-prod \
    vmauth-windows-$(GOARCH)-prod \
    vmbackup-windows-$(GOARCH)-prod \
    vmrestore-windows-$(GOARCH)-prod \
    vmctl-windows-$(GOARCH)-prod
    cd bin && \
        zip vmutils-windows-$(GOARCH)-$(PKG_TAG).zip \
            vmagent-windows-$(GOARCH)-prod.exe \
            vmalert-windows-$(GOARCH)-prod.exe \
            vmalert-tool-windows-$(GOARCH)-prod.exe \
            vmauth-windows-$(GOARCH)-prod.exe \
            vmbackup-windows-$(GOARCH)-prod.exe \
            vmrestore-windows-$(GOARCH)-prod.exe \
            vmctl-windows-$(GOARCH)-prod.exe \
        && sha256sum vmutils-windows-$(GOARCH)-$(PKG_TAG).zip \
            vmagent-windows-$(GOARCH)-prod.exe \
            vmalert-windows-$(GOARCH)-prod.exe \
            vmalert-tool-windows-$(GOARCH)-prod.exe \
            vmauth-windows-$(GOARCH)-prod.exe \
            vmbackup-windows-$(GOARCH)-prod.exe \
            vmrestore-windows-$(GOARCH)-prod.exe \
            vmctl-windows-$(GOARCH)-prod.exe \
        > vmutils-windows-$(GOARCH)-$(PKG_TAG)_checksums.txt
    cd bin && rm -rf \
        vmagent-windows-$(GOARCH)-prod.exe \
        vmalert-windows-$(GOARCH)-prod.exe \
        vmalert-tool-windows-$(GOARCH)-prod.exe \
        vmauth-windows-$(GOARCH)-prod.exe \
        vmbackup-windows-$(GOARCH)-prod.exe \
        vmrestore-windows-$(GOARCH)-prod.exe \
        vmctl-windows-$(GOARCH)-prod.exe

pprof-cpu:
    go tool pprof -trim_path=github.com/VictoriaMetrics/VictoriaMetrics@ $(PPROF_FILE)
@@ -434,8 +440,6 @@ vet:

check-all: fmt vet golangci-lint govulncheck

clean-checkers: remove-golangci-lint remove-govulncheck

test:
    go test ./lib/... ./app/...

@@ -462,7 +466,7 @@ benchmark-pure:
vendor-update:
    go get -u -d ./lib/...
    go get -u -d ./app/...
    go mod tidy -compat=1.22
    go mod tidy -compat=1.21
    go mod vendor

app-local:
@@ -488,10 +492,7 @@ golangci-lint: install-golangci-lint
    golangci-lint run

install-golangci-lint:
    which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.59.1

remove-golangci-lint:
    rm -rf `which golangci-lint`
    which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.57.1

govulncheck: install-govulncheck
    govulncheck ./...
@@ -499,18 +500,12 @@ govulncheck: install-govulncheck
install-govulncheck:
    which govulncheck || go install golang.org/x/vuln/cmd/govulncheck@latest

remove-govulncheck:
    rm -rf `which govulncheck`

install-wwhrd:
    which wwhrd || go install github.com/frapposelli/wwhrd@latest

check-licenses: install-wwhrd
    wwhrd check -f .wwhrd.yml

cyclonedx-gomod-install:
    which cyclonedx-gomod || go install github.com/CycloneDX/cyclonedx-gomod/cmd/cyclonedx-gomod@latest

copy-docs:
    # The 'printf' function is used instead of 'echo' or 'echo -e' to handle line breaks (e.g. '\n') in the same way on different operating systems (MacOS/Ubuntu Linux/Arch Linux) and their shells (bash/sh/zsh/fish).
    # For details, see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/4548#issue-1782796419 and https://stackoverflow.com/questions/8467424/echo-newline-in-bash-prints-literal-n
@@ -528,7 +523,6 @@ copy-docs:
    echo "---" >> ${DST}
    cat ${SRC} >> ${DST}
    sed -i='.tmp' 's/<img src=\"docs\//<img src=\"/' ${DST}
    sed -i='.tmp' 's/<source srcset=\"docs\//<source srcset=\"/' ${DST}
    rm -rf docs/*.tmp

# Copies docs for all components and adds the order/weight tag, title, menu position and alias with the backward compatible link for the old site.

@@ -6,7 +6,7 @@ The following versions of VictoriaMetrics receive regular security fixes:

| Version | Supported |
|---------|--------------------|
| [latest release](https://docs.victoriametrics.com/changelog/) | :white_check_mark: |
| [latest release](https://docs.victoriametrics.com/CHANGELOG.html) | :white_check_mark: |
| v1.97.x [LTS line](https://docs.victoriametrics.com/lts-releases/) | :white_check_mark: |
| v1.93.x [LTS line](https://docs.victoriametrics.com/lts-releases/) | :white_check_mark: |
| other releases | :x: |

@@ -81,9 +81,6 @@ victoria-logs-linux-ppc64le:
victoria-logs-linux-s390x:
    APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=linux GOARCH=s390x $(MAKE) app-local-goos-goarch

victoria-logs-linux-loong64:
    APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=linux GOARCH=loong64 $(MAKE) app-local-goos-goarch

victoria-logs-linux-386:
    APP_NAME=victoria-logs CGO_ENABLED=0 GOOS=linux GOARCH=386 $(MAKE) app-local-goos-goarch

@@ -104,10 +101,3 @@ victoria-logs-windows-amd64:

victoria-logs-pure:
    APP_NAME=victoria-logs $(MAKE) app-local-pure

run-victoria-logs:
    mkdir -p victoria-logs-data
    DOCKER_OPTS='-v $(shell pwd)/victoria-logs-data:/victoria-logs-data' \
        APP_NAME=victoria-logs \
        ARGS='' \
        $(MAKE) run-via-docker

@@ -47,7 +47,7 @@ func main() {
    vlinsert.Init()

    go httpserver.Serve(listenAddrs, useProxyProtocol, requestHandler)
    logger.Infof("started VictoriaLogs in %.3f seconds; see https://docs.victoriametrics.com/victorialogs/", time.Since(startTime).Seconds())
    logger.Infof("started VictoriaLogs in %.3f seconds; see https://docs.victoriametrics.com/VictoriaLogs/", time.Since(startTime).Seconds())

    pushmetrics.Init()
    sig := procutil.WaitForSigterm()
@@ -77,7 +77,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
    }
    w.Header().Add("Content-Type", "text/html; charset=utf-8")
    fmt.Fprintf(w, "<h2>Single-node VictoriaLogs</h2></br>")
    fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/victorialogs/'>https://docs.victoriametrics.com/victorialogs/</a></br>")
    fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/VictoriaLogs/'>https://docs.victoriametrics.com/VictoriaLogs/</a></br>")
    fmt.Fprintf(w, "Useful endpoints:</br>")
    httpserver.WriteAPIHelp(w, [][2]string{
        {"select/vmui", "Web UI for VictoriaLogs"},
@@ -99,7 +99,7 @@ func usage() {
    const s = `
victoria-logs is a log management and analytics service.

See the docs at https://docs.victoriametrics.com/victorialogs/
See the docs at https://docs.victoriametrics.com/VictoriaLogs/
`
    flagutil.Usage(s)
}

@@ -1,7 +1,7 @@
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
ARG certs_image
ARG root_image
FROM $certs_image AS certs
FROM $certs_image as certs
RUN apk update && apk upgrade && apk --update --no-cache add ca-certificates

FROM $root_image

@@ -88,9 +88,6 @@ victoria-metrics-linux-ppc64le:
victoria-metrics-linux-s390x:
    APP_NAME=victoria-metrics CGO_ENABLED=0 GOOS=linux GOARCH=s390x $(MAKE) app-local-goos-goarch

victoria-metrics-linux-loong64:
    APP_NAME=victoria-metrics CGO_ENABLED=0 GOOS=linux GOARCH=loong64 $(MAKE) app-local-goos-goarch

victoria-metrics-linux-386:
    APP_NAME=victoria-metrics CGO_ENABLED=0 GOOS=linux GOARCH=386 $(MAKE) app-local-goos-goarch

@@ -3,7 +3,6 @@ package main
import (
    "bytes"
    "encoding/json"
    "errors"
    "flag"
    "fmt"
    "io"
@@ -118,7 +117,7 @@ func (q *Query) metrics() ([]Metric, error) {
type QueryInstant struct {
    Result []struct {
        Labels map[string]string `json:"metric"`
        TV     [2]any            `json:"value"`
        TV     [2]interface{}    `json:"value"`
    } `json:"result"`
}

@@ -141,7 +140,7 @@ func (q QueryInstant) metrics() ([]Metric, error) {
type QueryRange struct {
    Result []struct {
        Metric map[string]string `json:"metric"`
        Values [][]any           `json:"values"`
        Values [][]interface{}   `json:"values"`
    } `json:"result"`
}

@@ -250,24 +249,24 @@ func TestWriteRead(t *testing.T) {

func testWrite(t *testing.T) {
    t.Run("prometheus", func(t *testing.T) {
        for _, test := range readIn("prometheus", insertionTime) {
        for _, test := range readIn("prometheus", t, insertionTime) {
            if test.Data == nil {
                continue
            }
            s := newSuite(t)
            r := testutil.WriteRequest{}
            testData := strings.Join(test.Data, "\n")
            if err := json.Unmarshal([]byte(testData), &r.Timeseries); err != nil {
                panic(fmt.Errorf("BUG: cannot unmarshal TimeSeries: %s\ntest data\n%s", err, testData))
            s.noError(json.Unmarshal([]byte(strings.Join(test.Data, "\n")), &r.Timeseries))
            data, err := testutil.Compress(r)
            s.greaterThan(len(r.Timeseries), 0)
            if err != nil {
                t.Errorf("error compressing %v %s", r, err)
                t.Fail()
            }
            if n := len(r.Timeseries); n <= 0 {
                panic(fmt.Errorf("BUG: expecting non-empty Timeseries in test data:\n%s", testData))
            }
            data := testutil.Compress(r)
            httpWrite(t, testPromWriteHTTPPath, test.InsertQuery, bytes.NewBuffer(data))
        }
    })
    t.Run("csv", func(t *testing.T) {
        for _, test := range readIn("csv", insertionTime) {
        for _, test := range readIn("csv", t, insertionTime) {
            if test.Data == nil {
                continue
            }
@@ -276,7 +275,7 @@ func testWrite(t *testing.T) {
    })

    t.Run("influxdb", func(t *testing.T) {
        for _, x := range readIn("influxdb", insertionTime) {
        for _, x := range readIn("influxdb", t, insertionTime) {
            test := x
            t.Run(test.Name, func(t *testing.T) {
                t.Parallel()
@@ -285,7 +284,7 @@ func testWrite(t *testing.T) {
        }
    })
    t.Run("graphite", func(t *testing.T) {
        for _, x := range readIn("graphite", insertionTime) {
        for _, x := range readIn("graphite", t, insertionTime) {
            test := x
            t.Run(test.Name, func(t *testing.T) {
                t.Parallel()
@@ -294,7 +293,7 @@ func testWrite(t *testing.T) {
        }
    })
    t.Run("opentsdb", func(t *testing.T) {
        for _, x := range readIn("opentsdb", insertionTime) {
        for _, x := range readIn("opentsdb", t, insertionTime) {
            test := x
            t.Run(test.Name, func(t *testing.T) {
                t.Parallel()
@@ -303,7 +302,7 @@ func testWrite(t *testing.T) {
        }
    })
    t.Run("opentsdbhttp", func(t *testing.T) {
        for _, x := range readIn("opentsdbhttp", insertionTime) {
        for _, x := range readIn("opentsdbhttp", t, insertionTime) {
            test := x
            t.Run(test.Name, func(t *testing.T) {
                t.Parallel()
@@ -317,7 +316,7 @@ func testWrite(t *testing.T) {

func testRead(t *testing.T) {
    for _, engine := range []string{"csv", "prometheus", "graphite", "opentsdb", "influxdb", "opentsdbhttp"} {
        t.Run(engine, func(t *testing.T) {
            for _, x := range readIn(engine, insertionTime) {
            for _, x := range readIn(engine, t, insertionTime) {
                test := x
                t.Run(test.Name, func(t *testing.T) {
                    t.Parallel()
@@ -366,10 +365,11 @@ func testRead(t *testing.T) {
    }
}

func readIn(readFor string, insertTime time.Time) []test {
    testDir := filepath.Join(testFixturesDir, readFor)
func readIn(readFor string, t *testing.T, insertTime time.Time) []test {
    t.Helper()
    s := newSuite(t)
    var tt []test
    err := filepath.Walk(testDir, func(path string, _ os.FileInfo, err error) error {
    s.noError(filepath.Walk(filepath.Join(testFixturesDir, readFor), func(path string, _ os.FileInfo, err error) error {
        if err != nil {
            return err
        }
@@ -377,129 +377,84 @@ func readIn(readFor string, insertTime time.Time) []test {
            return nil
        }
        b, err := os.ReadFile(path)
        if err != nil {
            panic(fmt.Errorf("BUG: cannot read %s: %s", path, err))
        }
        s.noError(err)
        item := test{}
        if err := json.Unmarshal(b, &item); err != nil {
            panic(fmt.Errorf("cannot parse %T from %s: %s; data:\n%s", &item, path, err, b))
        }
        s.noError(json.Unmarshal(b, &item))
        for i := range item.Data {
            item.Data[i] = testutil.PopulateTimeTplString(item.Data[i], insertTime)
        }
        tt = append(tt, item)
        return nil
    })
    if err != nil {
        panic(fmt.Errorf("BUG: cannot read test data at %s: %w", testDir, err))
    }
    }))
    if len(tt) == 0 {
        panic(fmt.Errorf("BUG: no tests found in %s", testDir))
        t.Fatalf("no test found in %s", filepath.Join(testFixturesDir, readFor))
    }
    return tt
}

func httpWrite(t *testing.T, address, query string, r io.Reader) {
    t.Helper()

    requestURL := address + query
    resp, err := http.Post(requestURL, "", r)
    if err != nil {
        t.Fatalf("cannot send request to %s: %s", requestURL, err)
    }
    _ = resp.Body.Close()
    if resp.StatusCode != 204 {
        t.Fatalf("unexpected status code received from %s; got %d; want 204", requestURL, resp.StatusCode)
    }
    s := newSuite(t)
    resp, err := http.Post(address+query, "", r)
    s.noError(err)
    s.noError(resp.Body.Close())
    s.equalInt(resp.StatusCode, 204)
}

func tcpWrite(t *testing.T, address, data string) {
func tcpWrite(t *testing.T, address string, data string) {
    t.Helper()

    s := newSuite(t)
    conn, err := net.Dial("tcp", address)
    if err != nil {
        t.Fatalf("cannot dial %s: %s", address, err)
    }
    s.noError(err)
    defer func() {
        _ = conn.Close()
    }()
    n, err := conn.Write([]byte(data))
    if err != nil {
        t.Fatalf("cannot write %d bytes to %s: %s", len(data), address, err)
    }
    if n != len(data) {
        panic(fmt.Errorf("BUG: conn.Write() returned unexpected number of written bytes to %s; got %d; want %d", address, n, len(data)))
    }
    s.noError(err)
    s.equalInt(n, len(data))
}

func httpReadMetrics(t *testing.T, address, query string) []Metric {
    t.Helper()

    requestURL := address + query
    resp, err := http.Get(requestURL)
    if err != nil {
        t.Fatalf("cannot send request to %s: %s", requestURL, err)
    }
    s := newSuite(t)
    resp, err := http.Get(address + query)
    s.noError(err)
    defer func() {
        _ = resp.Body.Close()
    }()
    if resp.StatusCode != 200 {
        t.Fatalf("unexpected status code received from %s; got %d; want 200", requestURL, resp.StatusCode)
    }

    s.equalInt(resp.StatusCode, 200)
    var rows []Metric
    dec := json.NewDecoder(resp.Body)
    for {
    for dec := json.NewDecoder(resp.Body); dec.More(); {
        var row Metric
        err := dec.Decode(&row)
        if err != nil {
            if errors.Is(err, io.EOF) {
                return rows
            }
            t.Fatalf("cannot decode %T from response received from %s: %s", &row, requestURL, err)
        }
        s.noError(dec.Decode(&row))
        rows = append(rows, row)
    }
    return rows
}
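// (Explanatory note, not part of the diff: the old `for { ... }` loop and the
// new `for dec.More()` loop terminate the same stream-decoding iteration —
// json.Decoder.More reports whether another value follows in the stream,
// which removes the need to match io.EOF explicitly.)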

func httpReadStruct(t *testing.T, address, query string, dst any) {
func httpReadStruct(t *testing.T, address, query string, dst interface{}) {
    t.Helper()

    requestURL := address + query
    resp, err := http.Get(requestURL)
    if err != nil {
        t.Fatalf("cannot send request to %s: %s", requestURL, err)
    }
    s := newSuite(t)
    resp, err := http.Get(address + query)
    s.noError(err)
    defer func() {
        _ = resp.Body.Close()
    }()
    if resp.StatusCode != 200 {
        t.Fatalf("unexpected status code received from %s; got %d; want 200", requestURL, resp.StatusCode)
    }
    err = json.NewDecoder(resp.Body).Decode(dst)
    if err != nil {
        t.Fatalf("cannot decode %T from response received from %s: %s", dst, requestURL, err)
    }
    s.equalInt(resp.StatusCode, 200)
    s.noError(json.NewDecoder(resp.Body).Decode(dst))
}

func httpReadData(t *testing.T, address, query string) []byte {
    t.Helper()

    requestURL := address + query
    resp, err := http.Get(requestURL)
    if err != nil {
        t.Fatalf("cannot send request to %s: %s", requestURL, err)
    }
    s := newSuite(t)
    resp, err := http.Get(address + query)
    s.noError(err)
    defer func() {
        _ = resp.Body.Close()
    }()
    if resp.StatusCode != 200 {
        t.Fatalf("unexpected status code received from %s; got %d; want 200", requestURL, resp.StatusCode)
    }
    s.equalInt(resp.StatusCode, 200)
    data, err := io.ReadAll(resp.Body)
    if err != nil {
        t.Fatalf("cannot read response from %s: %s", requestURL, err)
    }
    s.noError(err)
    return data
}

@@ -548,6 +503,34 @@ func removeIfFoundSeries(r map[string]string, contains []map[string]string) []ma
    return contains
}

type suite struct{ t *testing.T }

func newSuite(t *testing.T) *suite { return &suite{t: t} }

func (s *suite) noError(err error) {
    s.t.Helper()
    if err != nil {
        s.t.Errorf("unexpected error %v", err)
        s.t.FailNow()
    }
}

func (s *suite) equalInt(a, b int) {
    s.t.Helper()
    if a != b {
        s.t.Errorf("%d not equal %d", a, b)
        s.t.FailNow()
    }
}

func (s *suite) greaterThan(a, b int) {
    s.t.Helper()
    if a <= b {
        s.t.Errorf("%d less or equal than %d", a, b)
        s.t.FailNow()
    }
}
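// (Illustrative usage of the suite helpers above, assembled from call sites
// elsewhere in this diff — not an addition to the change itself:
//
//     s := newSuite(t)
//     resp, err := http.Get(address + query)
//     s.noError(err)
//     s.equalInt(resp.StatusCode, 200)
//     s.greaterThan(len(r.Timeseries), 0)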

func TestImportJSONLines(t *testing.T) {
    f := func(labelsCount, labelLen int) {
        t.Helper()

@@ -1,7 +1,7 @@
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
ARG certs_image
ARG root_image
FROM $certs_image AS certs
FROM $certs_image as certs
RUN apk update && apk upgrade && apk --update --no-cache add ca-certificates

FROM $root_image

@@ -1,16 +1,12 @@
package test

import (
    "fmt"

    "github.com/golang/snappy"
)
import "github.com/golang/snappy"

// Compress marshals and compresses wr.
func Compress(wr WriteRequest) []byte {
func Compress(wr WriteRequest) ([]byte, error) {
    data, err := wr.Marshal()
    if err != nil {
        panic(fmt.Errorf("BUG: cannot compress WriteRequest: %s", err))
        return nil, err
    }
    return snappy.Encode(nil, data)
    return snappy.Encode(nil, data), nil
}
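// (Explanatory note, not part of the diff: the two signatures above match the
// two call styles in testWrite earlier in this diff — `data := testutil.Compress(r)`
// for the panicking variant and `data, err := testutil.Compress(r)` for the
// error-returning variant.)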

@@ -20,6 +20,7 @@ import (
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/logjson"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
@@ -97,10 +98,12 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
        httpserver.Errorf(w, r, "%s", err)
        return true
    }
    lmp := cp.NewLogMessageProcessor()
    lr := logstorage.GetLogRows(cp.StreamFields, cp.IgnoreFields)
    processLogMessage := cp.GetProcessLogMessageFunc(lr)
    isGzip := r.Header.Get("Content-Encoding") == "gzip"
    n, err := readBulkRequest(r.Body, isGzip, cp.TimeField, cp.MsgField, lmp)
    lmp.MustClose()
    n, err := readBulkRequest(r.Body, isGzip, cp.TimeField, cp.MsgField, processLogMessage)
    vlstorage.MustAddRows(lr)
    logstorage.PutLogRows(lr)
    if err != nil {
        logger.Warnf("cannot decode log message #%d in /_bulk request: %s, stream fields: %s", n, err, cp.StreamFields)
        return true
@@ -129,7 +132,9 @@ var (
    bulkRequestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/elasticsearch/_bulk"}`)
)

func readBulkRequest(r io.Reader, isGzip bool, timeField, msgField string, lmp insertutils.LogMessageProcessor) (int, error) {
func readBulkRequest(r io.Reader, isGzip bool, timeField, msgField string,
    processLogMessage func(timestamp int64, fields []logstorage.Field),
) (int, error) {
    // See https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html
|
||||
|
||||
if isGzip {
|
||||
@@ -154,7 +159,7 @@ func readBulkRequest(r io.Reader, isGzip bool, timeField, msgField string, lmp i
|
||||
n := 0
|
||||
nCheckpoint := 0
|
||||
for {
|
||||
ok, err := readBulkLine(sc, timeField, msgField, lmp)
|
||||
ok, err := readBulkLine(sc, timeField, msgField, processLogMessage)
|
||||
wcr.DecConcurrency()
|
||||
if err != nil || !ok {
|
||||
rowsIngestedTotal.Add(n - nCheckpoint)
|
||||
@@ -170,7 +175,9 @@ func readBulkRequest(r io.Reader, isGzip bool, timeField, msgField string, lmp i
|
||||
|
||||
var lineBufferPool bytesutil.ByteBufferPool
|
||||
|
||||
func readBulkLine(sc *bufio.Scanner, timeField, msgField string, lmp insertutils.LogMessageProcessor) (bool, error) {
|
||||
func readBulkLine(sc *bufio.Scanner, timeField, msgField string,
|
||||
processLogMessage func(timestamp int64, fields []logstorage.Field),
|
||||
) (bool, error) {
|
||||
var line []byte
|
||||
|
||||
// Read the command, must be "create" or "index"
|
||||
@@ -203,7 +210,7 @@ func readBulkLine(sc *bufio.Scanner, timeField, msgField string, lmp insertutils
|
||||
return false, fmt.Errorf(`missing log message after the "create" or "index" command`)
|
||||
}
|
||||
line = sc.Bytes()
|
||||
p := logstorage.GetJSONParser()
|
||||
p := logjson.GetParser()
|
||||
if err := p.ParseLogMessage(line); err != nil {
|
||||
return false, fmt.Errorf("cannot parse json-encoded log entry: %w", err)
|
||||
}
|
||||
@@ -215,9 +222,9 @@ func readBulkLine(sc *bufio.Scanner, timeField, msgField string, lmp insertutils
|
||||
if ts == 0 {
|
||||
ts = time.Now().UnixNano()
|
||||
}
|
||||
logstorage.RenameField(p.Fields, msgField, "_msg")
|
||||
lmp.AddRow(ts, p.Fields)
|
||||
logstorage.PutJSONParser(p)
|
||||
p.RenameField(msgField, "_msg")
|
||||
processLogMessage(ts, p.Fields)
|
||||
logjson.PutParser(p)
|
||||
|
||||
return true, nil
|
||||
}
|
||||
@@ -266,9 +273,9 @@ func parseElasticsearchTimestamp(s string) (int64, error) {
|
||||
}
|
||||
return t.UnixNano(), nil
|
||||
}
|
||||
nsecs, ok := logstorage.TryParseTimestampRFC3339Nano(s)
|
||||
if !ok {
|
||||
return 0, fmt.Errorf("cannot parse timestamp %q", s)
|
||||
t, err := time.Parse(time.RFC3339, s)
|
||||
if err != nil {
|
||||
return 0, fmt.Errorf("cannot parse timestamp %q: %w", s, err)
|
||||
}
|
||||
return nsecs, nil
|
||||
return t.UnixNano(), nil
|
||||
}
|
||||
|
||||
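The only behavioral difference in the `parseElasticsearchTimestamp` tail above is the RFC3339 code path. A runnable sketch of the stdlib variant, using a sample timestamp taken from the test fixtures further down (this is an illustration, not the function as shipped):

```go
package main

import (
	"fmt"
	"time"
)

// parseTimestamp mirrors the time.Parse branch shown above; the other branch
// swaps it for logstorage.TryParseTimestampRFC3339Nano.
func parseTimestamp(s string) (int64, error) {
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		return 0, fmt.Errorf("cannot parse timestamp %q: %w", s, err)
	}
	return t.UnixNano(), nil
}

func main() {
	nsecs, err := parseTimestamp("2023-06-06T04:48:11.735Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(nsecs) // 1686026891735000000, matching the test expectations
}
```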
@@ -4,18 +4,23 @@ import (
	"bytes"
	"compress/gzip"
	"fmt"
	"reflect"
	"strings"
	"testing"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

func TestReadBulkRequest_Failure(t *testing.T) {
func TestReadBulkRequestFailure(t *testing.T) {
	f := func(data string) {
		t.Helper()

		tlp := &insertutils.TestLogMessageProcessor{}
		processLogMessage := func(timestamp int64, fields []logstorage.Field) {
			t.Fatalf("unexpected call to processLogMessage with timestamp=%d, fields=%s", timestamp, fields)
		}

		r := bytes.NewBufferString(data)
		rows, err := readBulkRequest(r, false, "_time", "_msg", tlp)
		rows, err := readBulkRequest(r, false, "_time", "_msg", processLogMessage)
		if err == nil {
			t.Fatalf("expecting non-empty error")
		}
@@ -32,38 +37,58 @@ func TestReadBulkRequest_Failure(t *testing.T) {
foobar`)
}

func TestReadBulkRequest_Success(t *testing.T) {
func TestReadBulkRequestSuccess(t *testing.T) {
	f := func(data, timeField, msgField string, rowsExpected int, timestampsExpected []int64, resultExpected string) {
		t.Helper()

		tlp := &insertutils.TestLogMessageProcessor{}
		var timestamps []int64
		var result string
		processLogMessage := func(timestamp int64, fields []logstorage.Field) {
			timestamps = append(timestamps, timestamp)

			a := make([]string, len(fields))
			for i, f := range fields {
				a[i] = fmt.Sprintf("%q:%q", f.Name, f.Value)
			}
			s := "{" + strings.Join(a, ",") + "}\n"
			result += s
		}

		// Read the request without compression
		r := bytes.NewBufferString(data)
		rows, err := readBulkRequest(r, false, timeField, msgField, tlp)
		rows, err := readBulkRequest(r, false, timeField, msgField, processLogMessage)
		if err != nil {
			t.Fatalf("unexpected error: %s", err)
		}
		if rows != rowsExpected {
			t.Fatalf("unexpected rows read; got %d; want %d", rows, rowsExpected)
		}
		if err := tlp.Verify(rowsExpected, timestampsExpected, resultExpected); err != nil {
			t.Fatal(err)

		if !reflect.DeepEqual(timestamps, timestampsExpected) {
			t.Fatalf("unexpected timestamps;\ngot\n%d\nwant\n%d", timestamps, timestampsExpected)
		}
		if result != resultExpected {
			t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
		}

		// Read the request with compression
		tlp = &insertutils.TestLogMessageProcessor{}
		timestamps = nil
		result = ""
		compressedData := compressData(data)
		r = bytes.NewBufferString(compressedData)
		rows, err = readBulkRequest(r, true, timeField, msgField, tlp)
		rows, err = readBulkRequest(r, true, timeField, msgField, processLogMessage)
		if err != nil {
			t.Fatalf("unexpected error: %s", err)
		}
		if rows != rowsExpected {
			t.Fatalf("unexpected rows read; got %d; want %d", rows, rowsExpected)
		}
		if err := tlp.Verify(rowsExpected, timestampsExpected, resultExpected); err != nil {
			t.Fatalf("verification failure after compression: %s", err)

		if !reflect.DeepEqual(timestamps, timestampsExpected) {
			t.Fatalf("unexpected timestamps;\ngot\n%d\nwant\n%d", timestamps, timestampsExpected)
		}
		if result != resultExpected {
			t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
		}
	}

@@ -86,7 +111,8 @@ func TestReadBulkRequest_Success(t *testing.T) {
	timestampsExpected := []int64{1686026891735000000, 1686026892735000000, 1686026893735000000}
	resultExpected := `{"@timestamp":"","log.offset":"71770","log.file.path":"/var/log/auth.log","_msg":"foobar"}
{"@timestamp":"","_msg":"baz"}
{"_msg":"xyz","@timestamp":"","x":"y"}`
{"_msg":"xyz","@timestamp":"","x":"y"}
`
	f(data, timeField, msgField, rowsExpected, timestampsExpected, resultExpected)
}

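`compressData` is called in the compression pass above but is not part of these hunks; a plausible minimal implementation, assuming it simply gzips the bulk body so `readBulkRequest` can be exercised with `isGzip=true`:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
)

// compressData gzips s; the name and string-in/string-out shape are assumed
// from the call site in the test above.
func compressData(s string) string {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write([]byte(s)); err != nil {
		panic(err)
	}
	if err := zw.Close(); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	body := `{"create":{}}` + "\n" + `{"@timestamp":"2023-06-06T04:48:11.735Z","message":"foobar"}` + "\n"
	fmt.Printf("plain=%d bytes, gzipped=%d bytes\n", len(body), len(compressData(body)))
}
```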
@@ -5,8 +5,8 @@ import (
	"fmt"
	"testing"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

func BenchmarkReadBulkRequest(b *testing.B) {
@@ -33,7 +33,7 @@ func benchmarkReadBulkRequest(b *testing.B, isGzip bool) {

	timeField := "@timestamp"
	msgField := "message"
	blp := &insertutils.BenchmarkLogMessageProcessor{}
	processLogMessage := func(_ int64, _ []logstorage.Field) {}

	b.ReportAllocs()
	b.SetBytes(int64(len(data)))
@@ -41,7 +41,7 @@ func benchmarkReadBulkRequest(b *testing.B, isGzip bool) {
		r := &bytes.Reader{}
		for pb.Next() {
			r.Reset(dataBytes)
			_, err := readBulkRequest(r, isGzip, timeField, msgField, blp)
			_, err := readBulkRequest(r, isGzip, timeField, msgField, processLogMessage)
			if err != nil {
				panic(fmt.Errorf("unexpected error: %w", err))
			}

@@ -2,8 +2,6 @@ package insertutils

import (
	"net/http"
	"sync"
	"time"

	"github.com/VictoriaMetrics/metrics"

@@ -12,12 +10,11 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
)

// CommonParams contains common HTTP parameters used by log ingestion APIs.
//
// See https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters
// See https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters
type CommonParams struct {
	TenantID logstorage.TenantID
	TimeField string
@@ -74,126 +71,29 @@ func GetCommonParams(r *http.Request) (*CommonParams, error) {
	return cp, nil
}

// GetCommonParamsForSyslog returns common params needed for parsing syslog messages and storing them to the given tenantID.
func GetCommonParamsForSyslog(tenantID logstorage.TenantID) *CommonParams {
	// See https://docs.victoriametrics.com/victorialogs/logsql/#unpack_syslog-pipe
	cp := &CommonParams{
		TenantID: tenantID,
		TimeField: "timestamp",
		MsgField: "message",
		StreamFields: []string{
			"hostname",
			"app_name",
			"proc_id",
		},
	}

	return cp
}

// LogMessageProcessor is an interface for log message processors.
type LogMessageProcessor interface {
	// AddRow must add row to the LogMessageProcessor with the given timestamp and the given fields.
	//
	// The LogMessageProcessor implementation cannot hold references to fields, since the caller can re-use them.
	AddRow(timestamp int64, fields []logstorage.Field)

	// MustClose() must flush all the remaining fields and free up resources occupied by LogMessageProcessor.
	MustClose()
}

type logMessageProcessor struct {
	mu sync.Mutex
	wg sync.WaitGroup
	stopCh chan struct{}
	lastFlushTime time.Time

	cp *CommonParams
	lr *logstorage.LogRows
}

func (lmp *logMessageProcessor) initPeriodicFlush() {
	lmp.lastFlushTime = time.Now()

	lmp.wg.Add(1)
	go func() {
		defer lmp.wg.Done()

		d := timeutil.AddJitterToDuration(time.Second)
		ticker := time.NewTicker(d)
		defer ticker.Stop()

		for {
			select {
			case <-lmp.stopCh:
				return
			case <-ticker.C:
				lmp.mu.Lock()
				if time.Since(lmp.lastFlushTime) >= d {
					lmp.flushLocked()
				}
				lmp.mu.Unlock()
			}
// GetProcessLogMessageFunc returns a function, which adds parsed log messages to lr.
func (cp *CommonParams) GetProcessLogMessageFunc(lr *logstorage.LogRows) func(timestamp int64, fields []logstorage.Field) {
	return func(timestamp int64, fields []logstorage.Field) {
		if len(fields) > *MaxFieldsPerLine {
			rf := logstorage.RowFormatter(fields)
			logger.Warnf("dropping log line with %d fields; it exceeds -insert.maxFieldsPerLine=%d; %s", len(fields), *MaxFieldsPerLine, rf)
			rowsDroppedTotalTooManyFields.Inc()
			return
		}
	}()
}

// AddRow adds new log message to lmp with the given timestamp and fields.
func (lmp *logMessageProcessor) AddRow(timestamp int64, fields []logstorage.Field) {
	lmp.mu.Lock()
	defer lmp.mu.Unlock()

	if len(fields) > *MaxFieldsPerLine {
		rf := logstorage.RowFormatter(fields)
		logger.Warnf("dropping log line with %d fields; it exceeds -insert.maxFieldsPerLine=%d; %s", len(fields), *MaxFieldsPerLine, rf)
		rowsDroppedTotalTooManyFields.Inc()
		return
		lr.MustAdd(cp.TenantID, timestamp, fields)
		if cp.Debug {
			s := lr.GetRowString(0)
			lr.ResetKeepSettings()
			logger.Infof("remoteAddr=%s; requestURI=%s; ignoring log entry because of `debug` query arg: %s", cp.DebugRemoteAddr, cp.DebugRequestURI, s)
			rowsDroppedTotalDebug.Inc()
			return
		}
		if lr.NeedFlush() {
			vlstorage.MustAddRows(lr)
			lr.ResetKeepSettings()
		}
	}

	lmp.lr.MustAdd(lmp.cp.TenantID, timestamp, fields)
	if lmp.cp.Debug {
		s := lmp.lr.GetRowString(0)
		lmp.lr.ResetKeepSettings()
		logger.Infof("remoteAddr=%s; requestURI=%s; ignoring log entry because of `debug` query arg: %s", lmp.cp.DebugRemoteAddr, lmp.cp.DebugRequestURI, s)
		rowsDroppedTotalDebug.Inc()
		return
	}
	if lmp.lr.NeedFlush() {
		lmp.flushLocked()
	}
}

// flushLocked must be called under locked lmp.mu.
func (lmp *logMessageProcessor) flushLocked() {
	lmp.lastFlushTime = time.Now()
	vlstorage.MustAddRows(lmp.lr)
	lmp.lr.ResetKeepSettings()
}

// MustClose flushes the remaining data to the underlying storage and closes lmp.
func (lmp *logMessageProcessor) MustClose() {
	close(lmp.stopCh)
	lmp.wg.Wait()

	lmp.flushLocked()
	logstorage.PutLogRows(lmp.lr)
	lmp.lr = nil
}

// NewLogMessageProcessor returns new LogMessageProcessor for the given cp.
//
// MustClose() must be called on the returned LogMessageProcessor when it is no longer needed.
func (cp *CommonParams) NewLogMessageProcessor() LogMessageProcessor {
	lr := logstorage.GetLogRows(cp.StreamFields, cp.IgnoreFields)
	lmp := &logMessageProcessor{
		cp: cp,
		lr: lr,

		stopCh: make(chan struct{}),
	}
	lmp.initPeriodicFlush()

	return lmp
}

var rowsDroppedTotalDebug = metrics.NewCounter(`vl_rows_dropped_total{reason="debug"}`)

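To illustrate the `LogMessageProcessor` contract defined above (AddRow may not retain the caller's fields slice, MustClose flushes), here is a minimal sketch of a custom implementation. `Field` is a local stand-in for `logstorage.Field`; everything else is illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// Field stands in for logstorage.Field (a Name/Value string pair).
type Field struct{ Name, Value string }

// countingProcessor satisfies the AddRow/MustClose shape: it is safe for
// concurrent AddRow calls and does not hold references to the fields slice,
// which the caller is allowed to reuse.
type countingProcessor struct {
	mu   sync.Mutex
	rows int
}

func (p *countingProcessor) AddRow(timestamp int64, fields []Field) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.rows++ // a real implementation would copy and buffer the row here
	_ = timestamp
}

func (p *countingProcessor) MustClose() {
	p.mu.Lock()
	defer p.mu.Unlock()
	fmt.Printf("flushed %d rows\n", p.rows)
}

func main() {
	p := &countingProcessor{}
	p.AddRow(1686026891735000000, []Field{{Name: "_msg", Value: "foobar"}})
	p.MustClose()
}
```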
@@ -1,53 +0,0 @@
package insertutils

import (
	"fmt"
	"reflect"
	"strings"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

// TestLogMessageProcessor implements LogMessageProcessor for testing.
type TestLogMessageProcessor struct {
	timestamps []int64
	rows []string
}

// AddRow adds row with the given timestamp and fields to tlp
func (tlp *TestLogMessageProcessor) AddRow(timestamp int64, fields []logstorage.Field) {
	tlp.timestamps = append(tlp.timestamps, timestamp)
	tlp.rows = append(tlp.rows, string(logstorage.MarshalFieldsToJSON(nil, fields)))
}

// MustClose closes tlp.
func (tlp *TestLogMessageProcessor) MustClose() {
}

// Verify verifies the number of rows, timestamps and results after AddRow calls.
func (tlp *TestLogMessageProcessor) Verify(rowsExpected int, timestampsExpected []int64, resultExpected string) error {
	result := strings.Join(tlp.rows, "\n")
	if len(tlp.rows) != rowsExpected {
		return fmt.Errorf("unexpected rows read; got %d; want %d;\nrows read:\n%s\nrows wanted\n%s", len(tlp.rows), rowsExpected, result, resultExpected)
	}

	if !reflect.DeepEqual(tlp.timestamps, timestampsExpected) {
		return fmt.Errorf("unexpected timestamps;\ngot\n%d\nwant\n%d", tlp.timestamps, timestampsExpected)
	}
	if result != resultExpected {
		return fmt.Errorf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
	}

	return nil
}

// BenchmarkLogMessageProcessor implements LogMessageProcessor for benchmarks.
type BenchmarkLogMessageProcessor struct{}

// AddRow implements LogMessageProcessor interface.
func (blp *BenchmarkLogMessageProcessor) AddRow(_ int64, _ []logstorage.Field) {
}

// MustClose implements LogMessageProcessor interface.
func (blp *BenchmarkLogMessageProcessor) MustClose() {
}

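A sketch of how the `TestLogMessageProcessor` above is typically driven from a test; the test name is hypothetical, and the expected row assumes `MarshalFieldsToJSON` renders fields as a flat `{"name":"value"}` object, as the fixtures elsewhere in this diff suggest:

```go
package insertutils_test

import (
	"testing"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

func TestLogMessageProcessorExample(t *testing.T) {
	tlp := &insertutils.TestLogMessageProcessor{}
	tlp.AddRow(1686026891735000000, []logstorage.Field{
		{Name: "_msg", Value: "foobar"},
	})
	tlp.MustClose()

	// Verify checks row count, timestamps and the JSON-marshaled rows.
	if err := tlp.Verify(1, []int64{1686026891735000000}, `{"_msg":"foobar"}`); err != nil {
		t.Fatal(err)
	}
}
```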
@@ -1,33 +0,0 @@
package insertutils

import (
	"fmt"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

// ExtractTimestampRFC3339NanoFromFields extracts RFC3339 timestamp in nanoseconds from the field with the name timeField at fields.
//
// The value for the timeField is set to empty string after returning from the function,
// so it could be ignored during data ingestion.
//
// The current timestamp is returned if fields do not contain a field with timeField name or if the timeField value is empty.
func ExtractTimestampRFC3339NanoFromFields(timeField string, fields []logstorage.Field) (int64, error) {
	for i := range fields {
		f := &fields[i]
		if f.Name != timeField {
			continue
		}
		if f.Value == "" || f.Value == "0" {
			return time.Now().UnixNano(), nil
		}
		nsecs, ok := logstorage.TryParseTimestampRFC3339Nano(f.Value)
		if !ok {
			return 0, fmt.Errorf("cannot unmarshal rfc3339 timestamp from %s=%q", timeField, f.Value)
		}
		f.Value = ""
		return nsecs, nil
	}
	return time.Now().UnixNano(), nil
}

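Usage of `ExtractTimestampRFC3339NanoFromFields`, grounded in the test cases that follow: the matched time field is parsed to nanoseconds and then blanked so ingestion skips it. Importability of the `insertutils` package from an external module is an assumption here:

```go
package main

import (
	"fmt"
	"log"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

func main() {
	fields := []logstorage.Field{
		{Name: "foo", Value: "bar"},
		{Name: "time", Value: "2024-06-18T23:37:20Z"},
	}
	nsecs, err := insertutils.ExtractTimestampRFC3339NanoFromFields("time", fields)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(nsecs)                  // 1718753840000000000 per the tests below
	fmt.Printf("%q\n", fields[1].Value) // "" — blanked so the field is ignored
}
```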
@@ -1,75 +0,0 @@
package insertutils

import (
	"testing"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

func TestExtractTimestampRFC3339NanoFromFields_Success(t *testing.T) {
	f := func(timeField string, fields []logstorage.Field, nsecsExpected int64) {
		t.Helper()

		nsecs, err := ExtractTimestampRFC3339NanoFromFields(timeField, fields)
		if err != nil {
			t.Fatalf("unexpected error: %s", err)
		}
		if nsecs != nsecsExpected {
			t.Fatalf("unexpected nsecs; got %d; want %d", nsecs, nsecsExpected)
		}

		for _, f := range fields {
			if f.Name == timeField {
				if f.Value != "" {
					t.Fatalf("unexpected value for field %s; got %q; want %q", timeField, f.Value, "")
				}
			}
		}
	}

	f("time", []logstorage.Field{
		{Name: "foo", Value: "bar"},
		{Name: "time", Value: "2024-06-18T23:37:20Z"},
	}, 1718753840000000000)

	f("time", []logstorage.Field{
		{Name: "foo", Value: "bar"},
		{Name: "time", Value: "2024-06-18T23:37:20+08:00"},
	}, 1718725040000000000)

	f("time", []logstorage.Field{
		{Name: "foo", Value: "bar"},
		{Name: "time", Value: "2024-06-18T23:37:20.123-05:30"},
	}, 1718773640123000000)

	f("time", []logstorage.Field{
		{Name: "time", Value: "2024-06-18T23:37:20.123456789-05:30"},
		{Name: "foo", Value: "bar"},
	}, 1718773640123456789)
}

func TestExtractTimestampRFC3339NanoFromFields_Error(t *testing.T) {
	f := func(s string) {
		t.Helper()

		fields := []logstorage.Field{
			{Name: "time", Value: s},
		}
		nsecs, err := ExtractTimestampRFC3339NanoFromFields("time", fields)
		if err == nil {
			t.Fatalf("expecting non-nil error")
		}
		if nsecs != 0 {
			t.Fatalf("unexpected nsecs; got %d; want %d", nsecs, 0)
		}
	}

	f("foobar")

	// no Z at the end
	f("2024-06-18T23:37:20")

	// incomplete time
	f("2024-06-18")
	f("2024-06-18T23:37")
}

@@ -4,7 +4,6 @@ import (
	"bufio"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"

@@ -13,6 +12,7 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logjson"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
@@ -20,13 +20,13 @@ import (
)

// RequestHandler processes jsonline insert requests
func RequestHandler(w http.ResponseWriter, r *http.Request) {
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
	startTime := time.Now()
	w.Header().Add("Content-Type", "application/json")

	if r.Method != "POST" {
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
		return true
	}

	requestsTotal.Inc()
@@ -34,40 +34,27 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) {
	cp, err := insertutils.GetCommonParams(r)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
		return true
	}
	if err := vlstorage.CanWriteData(); err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
		return true
	}
	lr := logstorage.GetLogRows(cp.StreamFields, cp.IgnoreFields)
	processLogMessage := cp.GetProcessLogMessageFunc(lr)

	reader := r.Body
	if r.Header.Get("Content-Encoding") == "gzip" {
		zr, err := common.GetGzipReader(reader)
		if err != nil {
			logger.Errorf("cannot read gzipped jsonline request: %s", err)
			return
			logger.Errorf("cannot read gzipped _bulk request: %s", err)
			return true
		}
		defer common.PutGzipReader(zr)
		reader = zr
	}

	lmp := cp.NewLogMessageProcessor()
	err = processStreamInternal(reader, cp.TimeField, cp.MsgField, lmp)
	lmp.MustClose()

	if err != nil {
		logger.Errorf("jsonline: %s", err)
	} else {
		// update requestDuration only for successfully parsed requests.
		// There is no need in updating requestDuration for request errors,
		// since their timings are usually much smaller than the timing for successful request parsing.
		requestDuration.UpdateDuration(startTime)
	}
}

func processStreamInternal(r io.Reader, timeField, msgField string, lmp insertutils.LogMessageProcessor) error {
	wcr := writeconcurrencylimiter.GetReader(r)
	wcr := writeconcurrencylimiter.GetReader(reader)
	defer writeconcurrencylimiter.PutReader(wcr)

	lb := lineBufferPool.Get()
@@ -79,21 +66,31 @@ func processStreamInternal(r io.Reader, timeField, msgField string, lmp insertut

	n := 0
	for {
		ok, err := readLine(sc, timeField, msgField, lmp)
		ok, err := readLine(sc, cp.TimeField, cp.MsgField, processLogMessage)
		wcr.DecConcurrency()
		if err != nil {
			errorsTotal.Inc()
			return fmt.Errorf("cannot read line #%d in /jsonline request: %s", n, err)
			logger.Errorf("cannot read line #%d in /jsonline request: %s", n, err)
			break
		}
		if !ok {
			return nil
			break
		}
		n++
		rowsIngestedTotal.Inc()
	}

	vlstorage.MustAddRows(lr)
	logstorage.PutLogRows(lr)

	// update jsonlineRequestDuration only for successfully parsed requests.
	// There is no need in updating jsonlineRequestDuration for request errors,
	// since their timings are usually much smaller than the timing for successful request parsing.
	jsonlineRequestDuration.UpdateDuration(startTime)

	return true
}

func readLine(sc *bufio.Scanner, timeField, msgField string, lmp insertutils.LogMessageProcessor) (bool, error) {
func readLine(sc *bufio.Scanner, timeField, msgField string, processLogMessage func(timestamp int64, fields []logstorage.Field)) (bool, error) {
	var line []byte
	for len(line) == 0 {
		if !sc.Scan() {
@@ -108,28 +105,57 @@ func readLine(sc *bufio.Scanner, timeField, msgField string, lmp insertutils.Log
		line = sc.Bytes()
	}

	p := logstorage.GetJSONParser()
	p := logjson.GetParser()
	if err := p.ParseLogMessage(line); err != nil {
		return false, fmt.Errorf("cannot parse json-encoded log entry: %w", err)
	}
	ts, err := insertutils.ExtractTimestampRFC3339NanoFromFields(timeField, p.Fields)
	ts, err := extractTimestampFromFields(timeField, p.Fields)
	if err != nil {
		return false, fmt.Errorf("cannot get timestamp: %w", err)
		return false, fmt.Errorf("cannot parse timestamp: %w", err)
	}
	logstorage.RenameField(p.Fields, msgField, "_msg")
	lmp.AddRow(ts, p.Fields)
	logstorage.PutJSONParser(p)
	if ts == 0 {
		ts = time.Now().UnixNano()
	}
	p.RenameField(msgField, "_msg")
	processLogMessage(ts, p.Fields)
	logjson.PutParser(p)

	return true, nil
}

func extractTimestampFromFields(timeField string, fields []logstorage.Field) (int64, error) {
	for i := range fields {
		f := &fields[i]
		if f.Name != timeField {
			continue
		}
		timestamp, err := parseISO8601Timestamp(f.Value)
		if err != nil {
			return 0, err
		}
		f.Value = ""
		return timestamp, nil
	}
	return 0, nil
}

func parseISO8601Timestamp(s string) (int64, error) {
	if s == "0" || s == "" {
		// Special case for returning the current timestamp.
		// It must be automatically converted to the current timestamp by the caller.
		return 0, nil
	}
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		return 0, fmt.Errorf("cannot parse timestamp %q: %w", s, err)
	}
	return t.UnixNano(), nil
}

var lineBufferPool bytesutil.ByteBufferPool

var (
	rowsIngestedTotal = metrics.NewCounter(`vl_rows_ingested_total{type="jsonline"}`)

	requestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/jsonline"}`)
	errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/insert/jsonline"}`)

	requestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/jsonline"}`)
	requestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/jsonline"}`)
	rowsIngestedTotal = metrics.NewCounter(`vl_rows_ingested_total{type="jsonline"}`)
	jsonlineRequestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/jsonline"}`)
)

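`parseISO8601Timestamp` above treats `""` and `"0"` as "use the current time" by returning 0 for the caller to substitute. A self-contained copy of that logic with the caller-side substitution, for illustration:

```go
package main

import (
	"fmt"
	"time"
)

func parseISO8601Timestamp(s string) (int64, error) {
	if s == "0" || s == "" {
		// Signal "current timestamp"; the caller replaces 0.
		return 0, nil
	}
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		return 0, fmt.Errorf("cannot parse timestamp %q: %w", s, err)
	}
	return t.UnixNano(), nil
}

func main() {
	for _, s := range []string{"", "0", "2023-06-06T04:48:11.735Z"} {
		ts, err := parseISO8601Timestamp(s)
		if err != nil {
			panic(err)
		}
		if ts == 0 {
			ts = time.Now().UnixNano() // caller substitutes the current time
		}
		fmt.Printf("%q -> %d\n", s, ts)
	}
}
```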
@@ -1,27 +1,59 @@
package jsonline

import (
	"bufio"
	"bytes"
	"fmt"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
	"reflect"
	"strings"
	"testing"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
)

func TestProcessStreamInternal_Success(t *testing.T) {
func TestReadBulkRequestSuccess(t *testing.T) {
	f := func(data, timeField, msgField string, rowsExpected int, timestampsExpected []int64, resultExpected string) {
		t.Helper()

		tlp := &insertutils.TestLogMessageProcessor{}
		r := bytes.NewBufferString(data)
		if err := processStreamInternal(r, timeField, msgField, tlp); err != nil {
			t.Fatalf("unexpected error: %s", err)
		var timestamps []int64
		var result string
		processLogMessage := func(timestamp int64, fields []logstorage.Field) {
			timestamps = append(timestamps, timestamp)

			a := make([]string, len(fields))
			for i, f := range fields {
				a[i] = fmt.Sprintf("%q:%q", f.Name, f.Value)
			}
			s := "{" + strings.Join(a, ",") + "}\n"
			result += s
		}

		if err := tlp.Verify(rowsExpected, timestampsExpected, resultExpected); err != nil {
			t.Fatal(err)
		// Read the request without compression
		r := bytes.NewBufferString(data)
		sc := bufio.NewScanner(r)
		rows := 0
		for {
			ok, err := readLine(sc, timeField, msgField, processLogMessage)
			if err != nil {
				t.Fatalf("unexpected error: %s", err)
			}
			if !ok {
				break
			}
			rows++
		}
		if rows != rowsExpected {
			t.Fatalf("unexpected rows read; got %d; want %d", rows, rowsExpected)
		}

		if !reflect.DeepEqual(timestamps, timestampsExpected) {
			t.Fatalf("unexpected timestamps;\ngot\n%d\nwant\n%d", timestamps, timestampsExpected)
		}
		if result != resultExpected {
			t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
		}
	}

	// Verify non-empty data
	data := `{"@timestamp":"2023-06-06T04:48:11.735Z","log":{"offset":71770,"file":{"path":"/var/log/auth.log"}},"message":"foobar"}
{"@timestamp":"2023-06-06T04:48:12.735Z","message":"baz"}
{"message":"xyz","@timestamp":"2023-06-06T04:48:13.735Z","x":"y"}
@@ -32,24 +64,7 @@ func TestProcessStreamInternal_Success(t *testing.T) {
	timestampsExpected := []int64{1686026891735000000, 1686026892735000000, 1686026893735000000}
	resultExpected := `{"@timestamp":"","log.offset":"71770","log.file.path":"/var/log/auth.log","_msg":"foobar"}
{"@timestamp":"","_msg":"baz"}
{"_msg":"xyz","@timestamp":"","x":"y"}`
{"_msg":"xyz","@timestamp":"","x":"y"}
`
	f(data, timeField, msgField, rowsExpected, timestampsExpected, resultExpected)
}

func TestProcessStreamInternal_Failure(t *testing.T) {
	f := func(data string) {
		t.Helper()

		tlp := &insertutils.TestLogMessageProcessor{}
		r := bytes.NewBufferString(data)
		if err := processStreamInternal(r, "time", "", tlp); err == nil {
			t.Fatalf("expecting non-nil error")
		}
	}

	// invalid json
	f("foobar")

	// invalid timestamp field
	f(`{"time":"foobar"}`)
}

@@ -11,8 +11,7 @@ import (
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
	switch path {
	case "/api/v1/push":
		handleInsert(r, w)
		return true
		return handleInsert(r, w)
	case "/ready":
		// See https://grafana.com/docs/loki/latest/api/#identify-ready-loki-instance
		w.WriteHeader(http.StatusOK)
@@ -24,14 +23,14 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
}

// See https://grafana.com/docs/loki/latest/api/#push-log-entries-to-loki
func handleInsert(r *http.Request, w http.ResponseWriter) {
func handleInsert(r *http.Request, w http.ResponseWriter) bool {
	contentType := r.Header.Get("Content-Type")
	switch contentType {
	case "application/json":
		handleJSON(r, w)
		return handleJSON(r, w)
	default:
		// Protobuf request body should be handled by default according to https://grafana.com/docs/loki/latest/api/#push-log-entries-to-loki
		handleProtobuf(r, w)
		return handleProtobuf(r, w)
	}
}

@@ -46,7 +45,7 @@ func getCommonParams(r *http.Request) (*insertutils.CommonParams, error) {
	if cp.TenantID.AccountID == 0 && cp.TenantID.ProjectID == 0 {
		org := r.Header.Get("X-Scope-OrgID")
		if org != "" {
			tenantID, err := logstorage.ParseTenantID(org)
			tenantID, err := logstorage.GetTenantIDFromString(org)
			if err != nil {
				return nil, err
			}

@@ -8,7 +8,6 @@ import (
	"strconv"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
@@ -21,15 +20,15 @@ import (

var parserPool fastjson.ParserPool

func handleJSON(r *http.Request, w http.ResponseWriter) {
func handleJSON(r *http.Request, w http.ResponseWriter) bool {
	startTime := time.Now()
	requestsJSONTotal.Inc()
	lokiRequestsJSONTotal.Inc()
	reader := r.Body
	if r.Header.Get("Content-Encoding") == "gzip" {
		zr, err := common.GetGzipReader(reader)
		if err != nil {
			httpserver.Errorf(w, r, "cannot initialize gzip reader: %s", err)
			return
			return true
		}
		defer common.PutGzipReader(zr)
		reader = zr
@@ -40,41 +39,45 @@ func handleJSON(r *http.Request, w http.ResponseWriter) {
	writeconcurrencylimiter.PutReader(wcr)
	if err != nil {
		httpserver.Errorf(w, r, "cannot read request body: %s", err)
		return
		return true
	}

	cp, err := getCommonParams(r)
	if err != nil {
		httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
		return
		return true
	}
	if err := vlstorage.CanWriteData(); err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
		return true
	}
	lmp := cp.NewLogMessageProcessor()
	n, err := parseJSONRequest(data, lmp)
	lmp.MustClose()
	lr := logstorage.GetLogRows(cp.StreamFields, cp.IgnoreFields)
	processLogMessage := cp.GetProcessLogMessageFunc(lr)
	n, err := parseJSONRequest(data, processLogMessage)
	vlstorage.MustAddRows(lr)
	logstorage.PutLogRows(lr)
	if err != nil {
		httpserver.Errorf(w, r, "cannot parse Loki json request: %s", err)
		return
		return true
	}

	rowsIngestedJSONTotal.Add(n)

	// update requestJSONDuration only for successfully parsed requests
	// There is no need in updating requestJSONDuration for request errors,
	// update lokiRequestJSONDuration only for successfully parsed requests
	// There is no need in updating lokiRequestJSONDuration for request errors,
	// since their timings are usually much smaller than the timing for successful request parsing.
	requestJSONDuration.UpdateDuration(startTime)
	lokiRequestJSONDuration.UpdateDuration(startTime)

	return true
}

var (
	requestsJSONTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="json"}`)
	rowsIngestedJSONTotal = metrics.NewCounter(`vl_rows_ingested_total{type="loki",format="json"}`)
	requestJSONDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="json"}`)
	lokiRequestsJSONTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="json"}`)
	rowsIngestedJSONTotal = metrics.NewCounter(`vl_rows_ingested_total{type="loki",format="json"}`)
	lokiRequestJSONDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="json"}`)
)

func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, error) {
func parseJSONRequest(data []byte, processLogMessage func(timestamp int64, fields []logstorage.Field)) (int, error) {
	p := parserPool.Get()
	defer parserPool.Put(p)
	v, err := p.ParseBytes(data)
@@ -167,7 +170,7 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, er
				Name: "_msg",
				Value: bytesutil.ToUnsafeString(msg),
			})
			lmp.AddRow(ts, fields)
			processLogMessage(ts, fields)
		}
		rowsIngested += len(lines)
	}

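End to end, `handleJSON` above consumes the Loki push format the tests below exercise. A sketch of pushing one such body over HTTP; the listen address and port are assumptions (not part of this diff), while the path matches the `vl_http_requests_total` metric label:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// streams -> stream labels + values of [ns-timestamp, message] pairs,
	// the same shape as the test fixtures below.
	body := []byte(`{"streams":[{"stream":{"foo":"bar"},"values":[["1577836800000000001","foo bar"]]}]}`)
	resp, err := http.Post("http://localhost:9428/insert/loki/api/v1/push", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```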
@@ -1,17 +1,19 @@
package loki

import (
	"fmt"
	"strings"
	"testing"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

func TestParseJSONRequest_Failure(t *testing.T) {
func TestParseJSONRequestFailure(t *testing.T) {
	f := func(s string) {
		t.Helper()

		tlp := &insertutils.TestLogMessageProcessor{}
		n, err := parseJSONRequest([]byte(s), tlp)
		n, err := parseJSONRequest([]byte(s), func(_ int64, _ []logstorage.Field) {
			t.Fatalf("unexpected call to parseJSONRequest callback!")
		})
		if err == nil {
			t.Fatalf("expecting non-nil error")
		}
@@ -54,30 +56,39 @@ func TestParseJSONRequest_Failure(t *testing.T) {
	f(`{"streams":[{"values":[["123",1234]]}]}`)
}

func TestParseJSONRequest_Success(t *testing.T) {
	f := func(s string, timestampsExpected []int64, resultExpected string) {
func TestParseJSONRequestSuccess(t *testing.T) {
	f := func(s string, resultExpected string) {
		t.Helper()

		tlp := &insertutils.TestLogMessageProcessor{}

		n, err := parseJSONRequest([]byte(s), tlp)
		var lines []string
		n, err := parseJSONRequest([]byte(s), func(timestamp int64, fields []logstorage.Field) {
			var a []string
			for _, f := range fields {
				a = append(a, f.String())
			}
			line := fmt.Sprintf("_time:%d %s", timestamp, strings.Join(a, " "))
			lines = append(lines, line)
		})
		if err != nil {
			t.Fatalf("unexpected error: %s", err)
		}
		if err := tlp.Verify(n, timestampsExpected, resultExpected); err != nil {
			t.Fatal(err)
		if n != len(lines) {
			t.Fatalf("unexpected number of lines parsed; got %d; want %d", n, len(lines))
		}
		result := strings.Join(lines, "\n")
		if result != resultExpected {
			t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
		}
	}

	// Empty streams
	f(`{"streams":[]}`, nil, ``)
	f(`{"streams":[{"values":[]}]}`, nil, ``)
	f(`{"streams":[{"stream":{},"values":[]}]}`, nil, ``)
	f(`{"streams":[{"stream":{"foo":"bar"},"values":[]}]}`, nil, ``)
	f(`{"streams":[]}`, ``)
	f(`{"streams":[{"values":[]}]}`, ``)
	f(`{"streams":[{"stream":{},"values":[]}]}`, ``)
	f(`{"streams":[{"stream":{"foo":"bar"},"values":[]}]}`, ``)

	// Empty stream labels
	f(`{"streams":[{"values":[["1577836800000000001", "foo bar"]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
	f(`{"streams":[{"stream":{},"values":[["1577836800000000001", "foo bar"]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
	f(`{"streams":[{"values":[["1577836800000000001", "foo bar"]]}]}`, `_time:1577836800000000001 "_msg":"foo bar"`)
	f(`{"streams":[{"stream":{},"values":[["1577836800000000001", "foo bar"]]}]}`, `_time:1577836800000000001 "_msg":"foo bar"`)

	// Non-empty stream labels
	f(`{"streams":[{"stream":{
@@ -87,9 +98,9 @@ func TestParseJSONRequest_Success(t *testing.T) {
		["1577836800000000001", "foo bar"],
		["1477836900005000002", "abc"],
		["147.78369e9", "foobar"]
	]}]}`, []int64{1577836800000000001, 1477836900005000002, 147783690000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
{"label1":"value1","label2":"value2","_msg":"abc"}
{"label1":"value1","label2":"value2","_msg":"foobar"}`)
	]}]}`, `_time:1577836800000000001 "label1":"value1" "label2":"value2" "_msg":"foo bar"
_time:1477836900005000002 "label1":"value1" "label2":"value2" "_msg":"abc"
_time:147783690000 "label1":"value1" "label2":"value2" "_msg":"foobar"`)

	// Multiple streams
	f(`{
@@ -113,7 +124,7 @@ func TestParseJSONRequest_Success(t *testing.T) {
			]
		}
	]
}`, []int64{1577836800000000001, 1577836900005000002, 1877836900005000002}, `{"foo":"bar","a":"b","_msg":"foo bar"}
{"foo":"bar","a":"b","_msg":"abc"}
{"x":"y","_msg":"yx"}`)
}`, `_time:1577836800000000001 "foo":"bar" "a":"b" "_msg":"foo bar"
_time:1577836900005000002 "foo":"bar" "a":"b" "_msg":"abc"
_time:1877836900005000002 "x":"y" "_msg":"yx"`)
}

@@ -6,7 +6,7 @@ import (
	"testing"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

func BenchmarkParseJSONRequest(b *testing.B) {
@@ -22,13 +22,12 @@ func BenchmarkParseJSONRequest(b *testing.B) {
}

func benchmarkParseJSONRequest(b *testing.B, streams, rows, labels int) {
	blp := &insertutils.BenchmarkLogMessageProcessor{}
	b.ReportAllocs()
	b.SetBytes(int64(streams * rows))
	b.RunParallel(func(pb *testing.PB) {
		data := getJSONBody(streams, rows, labels)
		for pb.Next() {
			_, err := parseJSONRequest(data, blp)
			_, err := parseJSONRequest(data, func(_ int64, _ []logstorage.Field) {})
			if err != nil {
				panic(fmt.Errorf("unexpected error: %w", err))
			}

@@ -9,7 +9,6 @@ import (
	"sync"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
@@ -24,49 +23,53 @@ var (
	pushReqsPool sync.Pool
)

func handleProtobuf(r *http.Request, w http.ResponseWriter) {
func handleProtobuf(r *http.Request, w http.ResponseWriter) bool {
	startTime := time.Now()
	requestsProtobufTotal.Inc()
	lokiRequestsProtobufTotal.Inc()
	wcr := writeconcurrencylimiter.GetReader(r.Body)
	data, err := io.ReadAll(wcr)
	writeconcurrencylimiter.PutReader(wcr)
	if err != nil {
		httpserver.Errorf(w, r, "cannot read request body: %s", err)
		return
		return true
	}

	cp, err := getCommonParams(r)
	if err != nil {
		httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
		return
		return true
	}
	if err := vlstorage.CanWriteData(); err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
		return true
	}
	lmp := cp.NewLogMessageProcessor()
	n, err := parseProtobufRequest(data, lmp)
	lmp.MustClose()
	lr := logstorage.GetLogRows(cp.StreamFields, cp.IgnoreFields)
	processLogMessage := cp.GetProcessLogMessageFunc(lr)
	n, err := parseProtobufRequest(data, processLogMessage)
	vlstorage.MustAddRows(lr)
	logstorage.PutLogRows(lr)
	if err != nil {
		httpserver.Errorf(w, r, "cannot parse Loki protobuf request: %s", err)
		return
		return true
	}

	rowsIngestedProtobufTotal.Add(n)

	// update requestProtobufDuration only for successfully parsed requests
	// There is no need in updating requestProtobufDuration for request errors,
	// update lokiRequestProtobufDuration only for successfully parsed requests
	// There is no need in updating lokiRequestProtobufDuration for request errors,
	// since their timings are usually much smaller than the timing for successful request parsing.
	requestProtobufDuration.UpdateDuration(startTime)
	lokiRequestProtobufDuration.UpdateDuration(startTime)

	return true
}

var (
	requestsProtobufTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="protobuf"}`)
	rowsIngestedProtobufTotal = metrics.NewCounter(`vl_rows_ingested_total{type="loki",format="protobuf"}`)
	requestProtobufDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="protobuf"}`)
	lokiRequestsProtobufTotal = metrics.NewCounter(`vl_http_requests_total{path="/insert/loki/api/v1/push",format="protobuf"}`)
	rowsIngestedProtobufTotal = metrics.NewCounter(`vl_rows_ingested_total{type="loki",format="protobuf"}`)
	lokiRequestProtobufDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="protobuf"}`)
)

func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor) (int, error) {
func parseProtobufRequest(data []byte, processLogMessage func(timestamp int64, fields []logstorage.Field)) (int, error) {
	bb := bytesBufPool.Get()
	defer bytesBufPool.Put(bb)

@@ -79,14 +82,12 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor) (int
	req := getPushRequest()
	defer putPushRequest(req)

	err = req.UnmarshalProtobuf(bb.B)
	err = req.Unmarshal(bb.B)
	if err != nil {
		return 0, fmt.Errorf("cannot parse request body: %w", err)
	}

	fields := getFields()
	defer putFields(fields)

	var commonFields []logstorage.Field
	rowsIngested := 0
	streams := req.Streams
	currentTimestamp := time.Now().UnixNano()
@@ -94,60 +95,30 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor) (int
		stream := &streams[i]
		// st.Labels contains labels for the stream.
		// Labels are same for all entries in the stream.
		fields.fields, err = parsePromLabels(fields.fields[:0], stream.Labels)
		commonFields, err = parsePromLabels(commonFields[:0], stream.Labels)
		if err != nil {
			return rowsIngested, fmt.Errorf("cannot parse stream labels %q: %w", stream.Labels, err)
		}
		commonFieldsLen := len(fields.fields)
		fields := commonFields

		entries := stream.Entries
		for j := range entries {
			e := &entries[j]
			fields.fields = fields.fields[:commonFieldsLen]

			for _, lp := range e.StructuredMetadata {
				fields.fields = append(fields.fields, logstorage.Field{
					Name: lp.Name,
					Value: lp.Value,
				})
			}

			fields.fields = append(fields.fields, logstorage.Field{
			entry := &entries[j]
			fields = append(fields[:len(commonFields)], logstorage.Field{
				Name: "_msg",
				Value: e.Line,
				Value: entry.Line,
			})

			ts := e.Timestamp.UnixNano()
			ts := entry.Timestamp.UnixNano()
			if ts == 0 {
				ts = currentTimestamp
			}

			lmp.AddRow(ts, fields.fields)
			processLogMessage(ts, fields)
		}
		rowsIngested += len(stream.Entries)
	}
	return rowsIngested, nil
}

func getFields() *fields {
	v := fieldsPool.Get()
	if v == nil {
		return &fields{}
	}
	return v.(*fields)
}

func putFields(f *fields) {
	f.fields = f.fields[:0]
	fieldsPool.Put(f)
}

var fieldsPool sync.Pool

type fields struct {
	fields []logstorage.Field
}

// parsePromLabels parses log fields in Prometheus text exposition format from s, appends them to dst and returns the result.
//
// See test data of promtail for examples: https://github.com/grafana/loki/blob/a24ef7b206e0ca63ee74ca6ecb0a09b745cd2258/pkg/push/types_test.go
@@ -213,6 +184,6 @@ func getPushRequest() *PushRequest {
}

func putPushRequest(req *PushRequest) {
	req.reset()
	req.Reset()
	pushReqsPool.Put(req)
}

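The `getFields`/`putFields` pair above is the standard `sync.Pool` idiom: pool a wrapper struct so the backing slice's capacity is reused across requests. A generic miniature of the same pattern:

```go
package main

import (
	"fmt"
	"sync"
)

type buf struct{ items []string }

var bufPool = sync.Pool{New: func() any { return &buf{} }}

func main() {
	b := bufPool.Get().(*buf)
	b.items = append(b.items, `label1="value1"`, `_msg="foo bar"`)
	fmt.Println(len(b.items), "fields assembled")
	b.items = b.items[:0] // truncate, keep capacity for the next caller
	bufPool.Put(b)
}
```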
@@ -6,80 +6,83 @@ import (
	"testing"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
	"github.com/golang/snappy"
)

type testLogMessageProcessor struct {
	pr PushRequest
}

func (tlp *testLogMessageProcessor) AddRow(timestamp int64, fields []logstorage.Field) {
	msg := ""
	for _, f := range fields {
		if f.Name == "_msg" {
			msg = f.Value
		}
	}
	var a []string
	for _, f := range fields {
		if f.Name == "_msg" {
			continue
		}
		item := fmt.Sprintf("%s=%q", f.Name, f.Value)
		a = append(a, item)
	}
	labels := "{" + strings.Join(a, ", ") + "}"
	tlp.pr.Streams = append(tlp.pr.Streams, Stream{
		Labels: labels,
		Entries: []Entry{
			{
				Timestamp: time.Unix(0, timestamp),
				Line: strings.Clone(msg),
			},
		},
	})
}

func (tlp *testLogMessageProcessor) MustClose() {
}

func TestParseProtobufRequest_Success(t *testing.T) {
	f := func(s string, timestampsExpected []int64, resultExpected string) {
func TestParseProtobufRequestSuccess(t *testing.T) {
	f := func(s string, resultExpected string) {
		t.Helper()

		tlp := &testLogMessageProcessor{}
		n, err := parseJSONRequest([]byte(s), tlp)
		var pr PushRequest
		n, err := parseJSONRequest([]byte(s), func(timestamp int64, fields []logstorage.Field) {
			msg := ""
			for _, f := range fields {
				if f.Name == "_msg" {
					msg = f.Value
				}
			}
			var a []string
			for _, f := range fields {
				if f.Name == "_msg" {
					continue
				}
				item := fmt.Sprintf("%s=%q", f.Name, f.Value)
				a = append(a, item)
			}
			labels := "{" + strings.Join(a, ", ") + "}"
			pr.Streams = append(pr.Streams, Stream{
				Labels: labels,
				Entries: []Entry{
					{
						Timestamp: time.Unix(0, timestamp),
						Line: msg,
					},
				},
			})
		})
		if err != nil {
			t.Fatalf("unexpected error: %s", err)
		}
		if n != len(tlp.pr.Streams) {
			t.Fatalf("unexpected number of streams; got %d; want %d", len(tlp.pr.Streams), n)
		if n != len(pr.Streams) {
			t.Fatalf("unexpected number of streams; got %d; want %d", len(pr.Streams), n)
		}

		data := tlp.pr.MarshalProtobuf(nil)
		data, err := pr.Marshal()
		if err != nil {
			t.Fatalf("unexpected error when marshaling PushRequest: %s", err)
		}
		encodedData := snappy.Encode(nil, data)

		tlp2 := &insertutils.TestLogMessageProcessor{}
		n, err = parseProtobufRequest(encodedData, tlp2)
		var lines []string
		n, err = parseProtobufRequest(encodedData, func(timestamp int64, fields []logstorage.Field) {
			var a []string
			for _, f := range fields {
				a = append(a, f.String())
			}
			line := fmt.Sprintf("_time:%d %s", timestamp, strings.Join(a, " "))
			lines = append(lines, line)
		})
		if err != nil {
			t.Fatalf("unexpected error: %s", err)
		}
		if err := tlp2.Verify(n, timestampsExpected, resultExpected); err != nil {
			t.Fatal(err)
		if n != len(lines) {
			t.Fatalf("unexpected number of lines parsed; got %d; want %d", n, len(lines))
		}
		result := strings.Join(lines, "\n")
		if result != resultExpected {
			t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
		}
	}

	// Empty streams
	f(`{"streams":[]}`, nil, ``)
	f(`{"streams":[{"values":[]}]}`, nil, ``)
	f(`{"streams":[{"stream":{},"values":[]}]}`, nil, ``)
	f(`{"streams":[{"stream":{"foo":"bar"},"values":[]}]}`, nil, ``)
	f(`{"streams":[]}`, ``)
	f(`{"streams":[{"values":[]}]}`, ``)
	f(`{"streams":[{"stream":{},"values":[]}]}`, ``)
	f(`{"streams":[{"stream":{"foo":"bar"},"values":[]}]}`, ``)

	// Empty stream labels
	f(`{"streams":[{"values":[["1577836800000000001", "foo bar"]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
	f(`{"streams":[{"stream":{},"values":[["1577836800000000001", "foo bar"]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
	f(`{"streams":[{"values":[["1577836800000000001", "foo bar"]]}]}`, `_time:1577836800000000001 "_msg":"foo bar"`)
	f(`{"streams":[{"stream":{},"values":[["1577836800000000001", "foo bar"]]}]}`, `_time:1577836800000000001 "_msg":"foo bar"`)

	// Non-empty stream labels
	f(`{"streams":[{"stream":{
@@ -89,9 +92,9 @@ func TestParseProtobufRequest_Success(t *testing.T) {
		["1577836800000000001", "foo bar"],
		["1477836900005000002", "abc"],
		["147.78369e9", "foobar"]
	]}]}`, []int64{1577836800000000001, 1477836900005000002, 147783690000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
{"label1":"value1","label2":"value2","_msg":"abc"}
{"label1":"value1","label2":"value2","_msg":"foobar"}`)
	]}]}`, `_time:1577836800000000001 "label1":"value1" "label2":"value2" "_msg":"foo bar"
_time:1477836900005000002 "label1":"value1" "label2":"value2" "_msg":"abc"
_time:147783690000 "label1":"value1" "label2":"value2" "_msg":"foobar"`)

	// Multiple streams
	f(`{
@@ -115,12 +118,12 @@ func TestParseProtobufRequest_Success(t *testing.T) {
			]
		}
	]
}`, []int64{1577836800000000001, 1577836900005000002, 1877836900005000002}, `{"foo":"bar","a":"b","_msg":"foo bar"}
{"foo":"bar","a":"b","_msg":"abc"}
{"x":"y","_msg":"yx"}`)
}`, `_time:1577836800000000001 "foo":"bar" "a":"b" "_msg":"foo bar"
_time:1577836900005000002 "foo":"bar" "a":"b" "_msg":"abc"
_time:1877836900005000002 "x":"y" "_msg":"yx"`)
}

func TestParsePromLabels_Success(t *testing.T) {
func TestParsePromLabelsSuccess(t *testing.T) {
	f := func(s string) {
		t.Helper()
		fields, err := parsePromLabels(nil, s)
@@ -144,7 +147,7 @@ func TestParsePromLabels_Success(t *testing.T) {
	f(`{foo="ba\"r\\z\n", a="", b="\"\\"}`)
}

func TestParsePromLabels_Failure(t *testing.T) {
func TestParsePromLabelsFailure(t *testing.T) {
	f := func(s string) {
		t.Helper()
		fields, err := parsePromLabels(nil, s)

@@ -8,8 +8,7 @@ import (

	"github.com/golang/snappy"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

func BenchmarkParseProtobufRequest(b *testing.B) {
@@ -25,13 +24,12 @@ func BenchmarkParseProtobufRequest(b *testing.B) {
}

func benchmarkParseProtobufRequest(b *testing.B, streams, rows, labels int) {
	blp := &insertutils.BenchmarkLogMessageProcessor{}
	b.ReportAllocs()
	b.SetBytes(int64(streams * rows))
	b.RunParallel(func(pb *testing.PB) {
		body := getProtobufBody(streams, rows, labels)
		for pb.Next() {
			_, err := parseProtobufRequest(body, blp)
			_, err := parseProtobufRequest(body, func(_ int64, _ []logstorage.Field) {})
			if err != nil {
				panic(fmt.Errorf("unexpected error: %w", err))
			}
@@ -39,47 +37,29 @@ func benchmarkParseProtobufRequest(b *testing.B, streams, rows, labels int) {
	})
}

func getProtobufBody(streamsCount, rowsCount, labelsCount int) []byte {
	var b []byte
	var entries []Entry
	streams := make([]Stream, streamsCount)
	for i := range streams {
		b = b[:0]
		b = append(b, '{')
		for j := 0; j < labelsCount; j++ {
			b = append(b, "label_"...)
			b = strconv.AppendInt(b, int64(j), 10)
			b = append(b, `="value_`...)
			b = strconv.AppendInt(b, int64(j), 10)
			b = append(b, '"')
			if j < labelsCount-1 {
				b = append(b, ',')
func getProtobufBody(streams, rows, labels int) []byte {
	var pr PushRequest

	for i := 0; i < streams; i++ {
		var st Stream

		st.Labels = `{`
		for j := 0; j < labels; j++ {
			st.Labels += `label_` + strconv.Itoa(j) + `="value_` + strconv.Itoa(j) + `"`
			if j < labels-1 {
				st.Labels += `,`
			}
		}
		b = append(b, '}')
		labels := string(b)
		st.Labels += `}`

		var rowsBuf []byte
		entriesLen := len(entries)
		for j := 0; j < rowsCount; j++ {
			rowsBufLen := len(rowsBuf)
			rowsBuf = append(rowsBuf, "value_"...)
			rowsBuf = strconv.AppendInt(rowsBuf, int64(j), 10)
			entries = append(entries, Entry{
				Timestamp: time.Now(),
				Line:      bytesutil.ToUnsafeString(rowsBuf[rowsBufLen:]),
			})
		for j := 0; j < rows; j++ {
			st.Entries = append(st.Entries, Entry{Timestamp: time.Now(), Line: "value_" + strconv.Itoa(j)})
		}

		st := &streams[i]
		st.Labels = labels
		st.Entries = entries[entriesLen:]
	}
	pr := PushRequest{
		Streams: streams,
	pr.Streams = append(pr.Streams, st)
	}

	body := pr.MarshalProtobuf(nil)
	body, _ := pr.Marshal()
	encodedBody := snappy.Encode(nil, body)

	return encodedBody
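
The benchmark above can be run locally with standard Go tooling; a typical invocation (the package path is taken from the imports above) is:

```
go test -bench=BenchmarkParseProtobufRequest -benchmem ./app/vlinsert/loki/
```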
@@ -1,302 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: push_request.proto
// source: https://raw.githubusercontent.com/grafana/loki/main/pkg/push/push_request.proto
// Licensed under the Apache License, Version 2.0 (the "License");
// https://github.com/grafana/loki/blob/main/pkg/push/LICENSE

package loki

import (
	"fmt"
	"time"

	"github.com/VictoriaMetrics/easyproto"
)

var mp easyproto.MarshalerPool

// PushRequest represents Loki PushRequest
//
// See https://github.com/grafana/loki/blob/4220737a52da7ab6c9346b12d5a5d7bedbcd641d/pkg/push/push.proto#L14C1-L14C20
type PushRequest struct {
	Streams []Stream

	entriesBuf   []Entry
	labelPairBuf []LabelPair
}

func (pr *PushRequest) reset() {
	pr.Streams = pr.Streams[:0]

	pr.entriesBuf = pr.entriesBuf[:0]
	pr.labelPairBuf = pr.labelPairBuf[:0]
}

// UnmarshalProtobuf unmarshals pr from protobuf message at src.
//
// pr remains valid until src is modified.
func (pr *PushRequest) UnmarshalProtobuf(src []byte) error {
	pr.reset()
	var err error
	pr.entriesBuf, pr.labelPairBuf, err = pr.unmarshalProtobuf(pr.entriesBuf, pr.labelPairBuf, src)
	return err
}

// MarshalProtobuf marshals r to protobuf message, appends it to dst and returns the result.
func (pr *PushRequest) MarshalProtobuf(dst []byte) []byte {
	m := mp.Get()
	pr.marshalProtobuf(m.MessageMarshaler())
	dst = m.Marshal(dst)
	mp.Put(m)
	return dst
}

func (pr *PushRequest) marshalProtobuf(mm *easyproto.MessageMarshaler) {
	for _, s := range pr.Streams {
		s.marshalProtobuf(mm.AppendMessage(1))
	}
}

func (pr *PushRequest) unmarshalProtobuf(entriesBuf []Entry, labelPairBuf []LabelPair, src []byte) ([]Entry, []LabelPair, error) {
	// message PushRequest {
	//   repeated Stream streams = 1;
	// }
	var err error
	var fc easyproto.FieldContext
	for len(src) > 0 {
		src, err = fc.NextField(src)
		if err != nil {
			return entriesBuf, labelPairBuf, fmt.Errorf("cannot read next field in PushRequest: %w", err)
		}
		switch fc.FieldNum {
		case 1:
			data, ok := fc.MessageData()
			if !ok {
				return entriesBuf, labelPairBuf, fmt.Errorf("cannot read Stream data")
			}
			pr.Streams = append(pr.Streams, Stream{})
			s := &pr.Streams[len(pr.Streams)-1]
			entriesBuf, labelPairBuf, err = s.unmarshalProtobuf(entriesBuf, labelPairBuf, data)
			if err != nil {
				return entriesBuf, labelPairBuf, fmt.Errorf("cannot unmarshal Stream: %w", err)
			}
		}
	}
	return entriesBuf, labelPairBuf, nil
}

// Stream represents Loki stream.
//
// See https://github.com/grafana/loki/blob/4220737a52da7ab6c9346b12d5a5d7bedbcd641d/pkg/push/push.proto#L23
type Stream struct {
	Labels  string
	Entries []Entry
}

func (s *Stream) marshalProtobuf(mm *easyproto.MessageMarshaler) {
	mm.AppendString(1, s.Labels)
	for _, e := range s.Entries {
		e.marshalProtobuf(mm.AppendMessage(2))
	}
}

func (s *Stream) unmarshalProtobuf(entriesBuf []Entry, labelPairBuf []LabelPair, src []byte) ([]Entry, []LabelPair, error) {
	// message Stream {
	//   string labels = 1;
	//   repeated Entry entries = 2;
	// }
	var err error
	var fc easyproto.FieldContext
	entriesBufLen := len(entriesBuf)
	for len(src) > 0 {
		src, err = fc.NextField(src)
		if err != nil {
			return entriesBuf, labelPairBuf, fmt.Errorf("cannot read next field in Stream: %w", err)
		}
		switch fc.FieldNum {
		case 1:
			labels, ok := fc.String()
			if !ok {
				return entriesBuf, labelPairBuf, fmt.Errorf("cannot read labels")
			}
			s.Labels = labels
		case 2:
			data, ok := fc.MessageData()
			if !ok {
				return entriesBuf, labelPairBuf, fmt.Errorf("cannot read Entry data")
			}
			entriesBuf = append(entriesBuf, Entry{})
			e := &entriesBuf[len(entriesBuf)-1]
			labelPairBuf, err = e.unmarshalProtobuf(labelPairBuf, data)
			if err != nil {
				return entriesBuf, labelPairBuf, fmt.Errorf("cannot unmarshal Entry: %w", err)
			}
		}
	}
	s.Entries = entriesBuf[entriesBufLen:]
	return entriesBuf, labelPairBuf, nil
}

// Entry represents Loki entry.
//
// See https://github.com/grafana/loki/blob/4220737a52da7ab6c9346b12d5a5d7bedbcd641d/pkg/push/push.proto#L38
type Entry struct {
	Timestamp          time.Time
	Line               string
	StructuredMetadata []LabelPair
}

func (e *Entry) marshalProtobuf(mm *easyproto.MessageMarshaler) {
	marshalTime(mm, 1, e.Timestamp)
	mm.AppendString(2, e.Line)
	for _, lp := range e.StructuredMetadata {
		lp.marshalProtobuf(mm.AppendMessage(3))
	}
}

func (e *Entry) unmarshalProtobuf(labelPairBuf []LabelPair, src []byte) ([]LabelPair, error) {
	// message Entry {
	//   Timestamp timestamp = 1;
	//   string line = 2;
	//   repeated LabelPair structuredMetadata = 3;
	// }
	var err error
	var fc easyproto.FieldContext
	labelPairBufLen := len(labelPairBuf)
	for len(src) > 0 {
		src, err = fc.NextField(src)
		if err != nil {
			return labelPairBuf, fmt.Errorf("cannot read next field in Entry: %w", err)
		}
		switch fc.FieldNum {
		case 1:
			data, ok := fc.MessageData()
			if !ok {
				return labelPairBuf, fmt.Errorf("cannot read Timestamp data")
			}
			timestamp, err := unmarshalTime(data)
			if err != nil {
				return labelPairBuf, fmt.Errorf("cannot unmarshal Timestamp: %w", err)
			}
			e.Timestamp = timestamp
		case 2:
			line, ok := fc.String()
			if !ok {
				return labelPairBuf, fmt.Errorf("cannot read Line")
			}
			e.Line = line
		case 3:
			data, ok := fc.MessageData()
			if !ok {
				return labelPairBuf, fmt.Errorf("cannot read StructuredMetadata")
			}
			labelPairBuf = append(labelPairBuf, LabelPair{})
			lp := &labelPairBuf[len(labelPairBuf)-1]
			if err := lp.unmarshalProtobuf(data); err != nil {
				return labelPairBuf, fmt.Errorf("cannot unmarshal StructuredMetadata: %w", err)
			}
		}
	}
	e.StructuredMetadata = labelPairBuf[labelPairBufLen:]
	return labelPairBuf, nil
}

// LabelPair represents Loki label pair.
//
// See https://github.com/grafana/loki/blob/4220737a52da7ab6c9346b12d5a5d7bedbcd641d/pkg/push/push.proto#L33
type LabelPair struct {
	Name  string
	Value string
}

func (lp *LabelPair) marshalProtobuf(mm *easyproto.MessageMarshaler) {
	mm.AppendString(1, lp.Name)
	mm.AppendString(2, lp.Value)
}

func (lp *LabelPair) unmarshalProtobuf(src []byte) (err error) {
	// message LabelPair {
	//   string name = 1;
	//   string value = 2;
	// }
	var fc easyproto.FieldContext
	for len(src) > 0 {
		src, err = fc.NextField(src)
		if err != nil {
			return fmt.Errorf("cannot read next field in LabelPair: %w", err)
		}
		switch fc.FieldNum {
		case 1:
			name, ok := fc.String()
			if !ok {
				return fmt.Errorf("cannot read name")
			}
			lp.Name = name
		case 2:
			value, ok := fc.String()
			if !ok {
				return fmt.Errorf("cannot unmarshal value")
			}
			lp.Value = value
		}
	}
	return nil
}

func marshalTime(mm *easyproto.MessageMarshaler, fieldNum uint32, timestamp time.Time) {
	nsecs := timestamp.UnixNano()
	ts := Timestamp{
		Seconds: nsecs / 1e9,
		Nanos:   int32(nsecs % 1e9),
	}
	ts.marshalProtobuf(mm.AppendMessage(fieldNum))
}

func unmarshalTime(src []byte) (time.Time, error) {
	var ts Timestamp
	if err := ts.unmarshalProtobuf(src); err != nil {
		return time.Time{}, err
	}
	timestamp := time.Unix(ts.Seconds, int64(ts.Nanos)).UTC()
	return timestamp, nil
}

// Timestamp is protobuf well-known timestamp type.
type Timestamp struct {
	Seconds int64
	Nanos   int32
}

func (ts *Timestamp) marshalProtobuf(mm *easyproto.MessageMarshaler) {
	mm.AppendInt64(1, ts.Seconds)
	mm.AppendInt32(2, ts.Nanos)
}

func (ts *Timestamp) unmarshalProtobuf(src []byte) (err error) {
	// message Timestamp {
	//   int64 seconds = 1;
	//   int32 nanos = 2;
	// }
	var fc easyproto.FieldContext
	for len(src) > 0 {
		src, err = fc.NextField(src)
		if err != nil {
			return fmt.Errorf("cannot read next field in Timestamp: %w", err)
		}
		switch fc.FieldNum {
		case 1:
			seconds, ok := fc.Int64()
			if !ok {
				return fmt.Errorf("cannot read Seconds")
			}
			ts.Seconds = seconds
		case 2:
			nanos, ok := fc.Int32()
			if !ok {
				return fmt.Errorf("cannot read Nanos")
			}
			ts.Nanos = nanos
		}
	}
	return nil
}
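
For readers comparing the two serialization approaches in this diff, here is a minimal round-trip sketch using the easyproto-based types deleted above; the `ExamplePushRequestRoundTrip` function name is hypothetical and not part of the change:

```go
func ExamplePushRequestRoundTrip() {
	pr := PushRequest{
		Streams: []Stream{{
			Labels:  `{job="test"}`,
			Entries: []Entry{{Timestamp: time.Unix(0, 1577836800000000001), Line: "foo bar"}},
		}},
	}
	data := pr.MarshalProtobuf(nil) // pooled easyproto marshaler, see MarshalProtobuf above

	var pr2 PushRequest
	if err := pr2.UnmarshalProtobuf(data); err != nil {
		panic(err)
	}
	fmt.Println(pr2.Streams[0].Entries[0].Line)
	// Output: foo bar
}
```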
app/vlinsert/loki/push_request.pb.go (new file, 1036 lines; diff suppressed because it is too large)

app/vlinsert/loki/push_request.proto (new file, 38 lines)
@@ -0,0 +1,38 @@
syntax = "proto3";

// source: https://raw.githubusercontent.com/grafana/loki/main/pkg/push/push.proto
// Licensed under the Apache License, Version 2.0 (the "License");
// https://github.com/grafana/loki/blob/main/pkg/push/LICENSE

package logproto;

import "gogoproto/gogo.proto";
import "google/protobuf/timestamp.proto";

option go_package = "github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/loki";

message PushRequest {
  repeated StreamAdapter streams = 1 [
    (gogoproto.jsontag) = "streams",
    (gogoproto.customtype) = "Stream"
  ];
}

message StreamAdapter {
  string labels = 1 [(gogoproto.jsontag) = "labels"];
  repeated EntryAdapter entries = 2 [
    (gogoproto.nullable) = false,
    (gogoproto.jsontag) = "entries"
  ];
  // hash contains the original hash of the stream.
  uint64 hash = 3 [(gogoproto.jsontag) = "-"];
}

message EntryAdapter {
  google.protobuf.Timestamp timestamp = 1 [
    (gogoproto.stdtime) = true,
    (gogoproto.nullable) = false,
    (gogoproto.jsontag) = "ts"
  ];
  string line = 2 [(gogoproto.jsontag) = "line"];
}
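
For reference, the JSON flavor of the same push payload (exercised by the tests earlier in this diff) maps the `stream` object to the `labels` field and each `values` pair to an `EntryAdapter`:

```json
{"streams":[{"stream":{"label1":"value1"},"values":[["1577836800000000001", "foo bar"]]}]}
```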
app/vlinsert/loki/timestamp.go (new file, 110 lines)
@@ -0,0 +1,110 @@
package loki

// source: https://raw.githubusercontent.com/grafana/loki/main/pkg/push/timestamp.go
// Licensed under the Apache License, Version 2.0 (the "License");
// https://github.com/grafana/loki/blob/main/pkg/push/LICENSE

import (
	"errors"
	"strconv"
	"time"

	"github.com/gogo/protobuf/types"
)

const (
	// Seconds field of the earliest valid Timestamp.
	// This is time.Date(1, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
	minValidSeconds = -62135596800
	// Seconds field just after the latest valid Timestamp.
	// This is time.Date(10000, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
	maxValidSeconds = 253402300800
)

// validateTimestamp determines whether a Timestamp is valid.
// A valid timestamp represents a time in the range
// [0001-01-01, 10000-01-01) and has a Nanos field
// in the range [0, 1e9).
//
// If the Timestamp is valid, validateTimestamp returns nil.
// Otherwise, it returns an error that describes
// the problem.
//
// Every valid Timestamp can be represented by a time.Time, but the converse is not true.
func validateTimestamp(ts *types.Timestamp) error {
	if ts == nil {
		return errors.New("timestamp: nil Timestamp")
	}
	if ts.Seconds < minValidSeconds {
		return errors.New("timestamp: " + formatTimestamp(ts) + " before 0001-01-01")
	}
	if ts.Seconds >= maxValidSeconds {
		return errors.New("timestamp: " + formatTimestamp(ts) + " after 10000-01-01")
	}
	if ts.Nanos < 0 || ts.Nanos >= 1e9 {
		return errors.New("timestamp: " + formatTimestamp(ts) + ": nanos not in range [0, 1e9)")
	}
	return nil
}

// formatTimestamp is equivalent to fmt.Sprintf("%#v", ts)
// but avoids the escape incurred by using fmt.Sprintf, eliminating
// unnecessary heap allocations.
func formatTimestamp(ts *types.Timestamp) string {
	if ts == nil {
		return "nil"
	}

	seconds := strconv.FormatInt(ts.Seconds, 10)
	nanos := strconv.FormatInt(int64(ts.Nanos), 10)
	return "&types.Timestamp{Seconds: " + seconds + ",\nNanos: " + nanos + ",\n}"
}

func sizeOfStdTime(t time.Time) int {
	ts, err := timestampProto(t)
	if err != nil {
		return 0
	}
	return ts.Size()
}

func stdTimeMarshalTo(t time.Time, data []byte) (int, error) {
	ts, err := timestampProto(t)
	if err != nil {
		return 0, err
	}
	return ts.MarshalTo(data)
}

func stdTimeUnmarshal(t *time.Time, data []byte) error {
	ts := &types.Timestamp{}
	if err := ts.Unmarshal(data); err != nil {
		return err
	}
	tt, err := timestampFromProto(ts)
	if err != nil {
		return err
	}
	*t = tt
	return nil
}

func timestampFromProto(ts *types.Timestamp) (time.Time, error) {
	// Don't return the zero value on error, because it corresponds to a valid
	// timestamp. Instead return whatever time.Unix gives us.
	var t time.Time
	if ts == nil {
		t = time.Unix(0, 0).UTC() // treat nil like the empty Timestamp
	} else {
		t = time.Unix(ts.Seconds, int64(ts.Nanos)).UTC()
	}
	return t, validateTimestamp(ts)
}

func timestampProto(t time.Time) (types.Timestamp, error) {
	ts := types.Timestamp{
		Seconds: t.Unix(),
		Nanos:   int32(t.Nanosecond()),
	}
	return ts, validateTimestamp(&ts)
}
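
As a quick illustration of the validation rules above (a hypothetical fragment, not part of the diff), a `Nanos` value outside [0, 1e9) is rejected even when `Seconds` is in range:

```go
ts := types.Timestamp{Seconds: 1685794113, Nanos: -1}
if err := validateTimestamp(&ts); err != nil {
	fmt.Println(err) // timestamp: ...: nanos not in range [0, 1e9)
}
```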
app/vlinsert/loki/types.go (new file, 481 lines)
@@ -0,0 +1,481 @@
package loki

// source: https://raw.githubusercontent.com/grafana/loki/main/pkg/push/types.go
// Licensed under the Apache License, Version 2.0 (the "License");
// https://github.com/grafana/loki/blob/main/pkg/push/LICENSE

import (
	"fmt"
	"io"
	"time"
)

// Stream contains a unique labels set as a string and a set of entries for it.
// We are not using the proto generated version but this custom one so that we
// can improve serialization see benchmark.
type Stream struct {
	Labels  string  `protobuf:"bytes,1,opt,name=labels,proto3" json:"labels"`
	Entries []Entry `protobuf:"bytes,2,rep,name=entries,proto3,customtype=EntryAdapter" json:"entries"`
	Hash    uint64  `protobuf:"varint,3,opt,name=hash,proto3" json:"-"`
}

// Entry is a log entry with a timestamp.
type Entry struct {
	Timestamp time.Time `protobuf:"bytes,1,opt,name=timestamp,proto3,stdtime" json:"ts"`
	Line      string    `protobuf:"bytes,2,opt,name=line,proto3" json:"line"`
}

// Marshal implements the proto.Marshaler interface.
func (m *Stream) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalToSizedBuffer(dAtA[:size])
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

// MarshalTo marshals m to dst.
func (m *Stream) MarshalTo(dAtA []byte) (int, error) {
	size := m.Size()
	return m.MarshalToSizedBuffer(dAtA[:size])
}

// MarshalToSizedBuffer marshals m to the sized buffer.
func (m *Stream) MarshalToSizedBuffer(dAtA []byte) (int, error) {
	i := len(dAtA)
	_ = i
	var l int
	_ = l
	if m.Hash != 0 {
		i = encodeVarintPush(dAtA, i, m.Hash)
		i--
		dAtA[i] = 0x18
	}
	if len(m.Entries) > 0 {
		for iNdEx := len(m.Entries) - 1; iNdEx >= 0; iNdEx-- {
			{
				size, err := m.Entries[iNdEx].MarshalToSizedBuffer(dAtA[:i])
				if err != nil {
					return 0, err
				}
				i -= size
				i = encodeVarintPush(dAtA, i, uint64(size))
			}
			i--
			dAtA[i] = 0x12
		}
	}
	if len(m.Labels) > 0 {
		i -= len(m.Labels)
		copy(dAtA[i:], m.Labels)
		i = encodeVarintPush(dAtA, i, uint64(len(m.Labels)))
		i--
		dAtA[i] = 0xa
	}
	return len(dAtA) - i, nil
}

// Marshal implements the proto.Marshaler interface.
func (m *Entry) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalToSizedBuffer(dAtA[:size])
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

// MarshalTo marshals m to dst.
func (m *Entry) MarshalTo(dAtA []byte) (int, error) {
	size := m.Size()
	return m.MarshalToSizedBuffer(dAtA[:size])
}

// MarshalToSizedBuffer marshals m to the sized buffer.
func (m *Entry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
	i := len(dAtA)
	_ = i
	var l int
	_ = l
	if len(m.Line) > 0 {
		i -= len(m.Line)
		copy(dAtA[i:], m.Line)
		i = encodeVarintPush(dAtA, i, uint64(len(m.Line)))
		i--
		dAtA[i] = 0x12
	}
	n7, err7 := stdTimeMarshalTo(m.Timestamp, dAtA[i-sizeOfStdTime(m.Timestamp):])
	if err7 != nil {
		return 0, err7
	}
	i -= n7
	i = encodeVarintPush(dAtA, i, uint64(n7))
	i--
	dAtA[i] = 0xa
	return len(dAtA) - i, nil
}

// Unmarshal unmarshals the given data into m.
func (m *Stream) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		preIndex := iNdEx
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return ErrIntOverflowPush
			}
			if iNdEx >= l {
				return io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= uint64(b&0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		fieldNum := int32(wire >> 3)
		wireType := int(wire & 0x7)
		if wireType == 4 {
			return fmt.Errorf("proto: StreamAdapter: wiretype end group for non-group")
		}
		if fieldNum <= 0 {
			return fmt.Errorf("proto: StreamAdapter: illegal tag %d (wire type %d)", fieldNum, wire)
		}
		switch fieldNum {
		case 1:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType)
			}
			var stringLen uint64
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowPush
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				stringLen |= uint64(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			intStringLen := int(stringLen)
			if intStringLen < 0 {
				return ErrInvalidLengthPush
			}
			postIndex := iNdEx + intStringLen
			if postIndex < 0 {
				return ErrInvalidLengthPush
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Labels = string(dAtA[iNdEx:postIndex])
			iNdEx = postIndex
		case 2:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Entries", wireType)
			}
			var msglen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowPush
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				msglen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if msglen < 0 {
				return ErrInvalidLengthPush
			}
			postIndex := iNdEx + msglen
			if postIndex < 0 {
				return ErrInvalidLengthPush
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Entries = append(m.Entries, Entry{})
			if err := m.Entries[len(m.Entries)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
				return err
			}
			iNdEx = postIndex
		case 3:
			if wireType != 0 {
				return fmt.Errorf("proto: wrong wireType = %d for field Hash", wireType)
			}
			m.Hash = 0
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowPush
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				m.Hash |= uint64(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
		default:
			iNdEx = preIndex
			skippy, err := skipPush(dAtA[iNdEx:])
			if err != nil {
				return err
			}
			if skippy < 0 {
				return ErrInvalidLengthPush
			}
			if (iNdEx + skippy) < 0 {
				return ErrInvalidLengthPush
			}
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			iNdEx += skippy
		}
	}

	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}

// Unmarshal unmarshals the given data into m.
func (m *Entry) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		preIndex := iNdEx
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return ErrIntOverflowPush
			}
			if iNdEx >= l {
				return io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= uint64(b&0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		fieldNum := int32(wire >> 3)
		wireType := int(wire & 0x7)
		if wireType == 4 {
			return fmt.Errorf("proto: EntryAdapter: wiretype end group for non-group")
		}
		if fieldNum <= 0 {
			return fmt.Errorf("proto: EntryAdapter: illegal tag %d (wire type %d)", fieldNum, wire)
		}
		switch fieldNum {
		case 1:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Timestamp", wireType)
			}
			var msglen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowPush
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				msglen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if msglen < 0 {
				return ErrInvalidLengthPush
			}
			postIndex := iNdEx + msglen
			if postIndex < 0 {
				return ErrInvalidLengthPush
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			if err := stdTimeUnmarshal(&m.Timestamp, dAtA[iNdEx:postIndex]); err != nil {
				return err
			}
			iNdEx = postIndex
		case 2:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Line", wireType)
			}
			var stringLen uint64
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowPush
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				stringLen |= uint64(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			intStringLen := int(stringLen)
			if intStringLen < 0 {
				return ErrInvalidLengthPush
			}
			postIndex := iNdEx + intStringLen
			if postIndex < 0 {
				return ErrInvalidLengthPush
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Line = string(dAtA[iNdEx:postIndex])
			iNdEx = postIndex
		default:
			iNdEx = preIndex
			skippy, err := skipPush(dAtA[iNdEx:])
			if err != nil {
				return err
			}
			if skippy < 0 {
				return ErrInvalidLengthPush
			}
			if (iNdEx + skippy) < 0 {
				return ErrInvalidLengthPush
			}
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			iNdEx += skippy
		}
	}

	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}

// Size returns the size of the serialized Stream.
func (m *Stream) Size() (n int) {
	if m == nil {
		return 0
	}
	var l int
	_ = l
	l = len(m.Labels)
	if l > 0 {
		n += 1 + l + sovPush(uint64(l))
	}
	if len(m.Entries) > 0 {
		for _, e := range m.Entries {
			l = e.Size()
			n += 1 + l + sovPush(uint64(l))
		}
	}
	if m.Hash != 0 {
		n += 1 + sovPush(m.Hash)
	}
	return n
}

// Size returns the size of the serialized Entry
func (m *Entry) Size() (n int) {
	if m == nil {
		return 0
	}
	var l int
	_ = l
	l = sizeOfStdTime(m.Timestamp)
	n += 1 + l + sovPush(uint64(l))
	l = len(m.Line)
	if l > 0 {
		n += 1 + l + sovPush(uint64(l))
	}
	return n
}

// Equal returns true if the two Streams are equal.
func (m *Stream) Equal(that interface{}) bool {
	if that == nil {
		return m == nil
	}

	that1, ok := that.(*Stream)
	if !ok {
		that2, ok := that.(Stream)
		if ok {
			that1 = &that2
		} else {
			return false
		}
	}
	if that1 == nil {
		return m == nil
	} else if m == nil {
		return false
	}
	if m.Labels != that1.Labels {
		return false
	}
	if len(m.Entries) != len(that1.Entries) {
		return false
	}
	for i := range m.Entries {
		if !m.Entries[i].Equal(that1.Entries[i]) {
			return false
		}
	}
	return m.Hash == that1.Hash
}

// Equal returns true if the two Entries are equal.
func (m *Entry) Equal(that interface{}) bool {
	if that == nil {
		return m == nil
	}

	that1, ok := that.(*Entry)
	if !ok {
		that2, ok := that.(Entry)
		if ok {
			that1 = &that2
		} else {
			return false
		}
	}
	if that1 == nil {
		return m == nil
	} else if m == nil {
		return false
	}
	if !m.Timestamp.Equal(that1.Timestamp) {
		return false
	}
	if m.Line != that1.Line {
		return false
	}
	return true
}
@@ -7,17 +7,14 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/elasticsearch"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/jsonline"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/loki"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/syslog"
)

// Init initializes vlinsert
func Init() {
	syslog.MustInit()
}

// Stop stops vlinsert
func Stop() {
	syslog.MustStop()
}

// RequestHandler handles insert requests for VictoriaLogs
@@ -31,8 +28,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
	path = strings.ReplaceAll(path, "//", "/")

	if path == "/jsonline" {
		jsonline.RequestHandler(w, r)
		return true
		return jsonline.RequestHandler(w, r)
	}
	switch {
	case strings.HasPrefix(path, "/elasticsearch/"):
@@ -1,531 +0,0 @@
package syslog

import (
	"bufio"
	"crypto/tls"
	"errors"
	"flag"
	"fmt"
	"io"
	"net"
	"strconv"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"github.com/klauspost/compress/gzip"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
	"github.com/VictoriaMetrics/metrics"
)

var (
	syslogTimezone = flag.String("syslog.timezone", "Local", "Timezone to use when parsing timestamps in RFC3164 syslog messages. Timezone must be a valid IANA Time Zone. "+
		"For example: America/New_York, Europe/Berlin, Etc/GMT+3 . See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")

	syslogTenantIDTCP = flagutil.NewArrayString("syslog.tenantID.tcp", "TenantID for logs ingested via the corresponding -syslog.listenAddr.tcp. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")
	syslogTenantIDUDP = flagutil.NewArrayString("syslog.tenantID.udp", "TenantID for logs ingested via the corresponding -syslog.listenAddr.udp. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")

	listenAddrTCP = flagutil.NewArrayString("syslog.listenAddr.tcp", "Comma-separated list of TCP addresses to listen to for Syslog messages. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")
	listenAddrUDP = flagutil.NewArrayString("syslog.listenAddr.udp", "Comma-separated list of UDP address to listen to for Syslog messages. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/")

	tlsEnable = flagutil.NewArrayBool("syslog.tls", "Whether to enable TLS for receiving syslog messages at the corresponding -syslog.listenAddr.tcp. "+
		"The corresponding -syslog.tlsCertFile and -syslog.tlsKeyFile must be set if -syslog.tls is set. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
	tlsCertFile = flagutil.NewArrayString("syslog.tlsCertFile", "Path to file with TLS certificate for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. "+
		"Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
	tlsKeyFile = flagutil.NewArrayString("syslog.tlsKeyFile", "Path to file with TLS key for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. "+
		"The provided key file is automatically re-read every second, so it can be dynamically updated. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
	tlsCipherSuites = flagutil.NewArrayString("syslog.tlsCipherSuites", "Optional list of TLS cipher suites for -syslog.listenAddr.tcp if -syslog.tls is set. "+
		"See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants . "+
		"See also https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")
	tlsMinVersion = flag.String("syslog.tlsMinVersion", "TLS13", "The minimum TLS version to use for -syslog.listenAddr.tcp if -syslog.tls is set. "+
		"Supported values: TLS10, TLS11, TLS12, TLS13. "+
		"See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security")

	compressMethodTCP = flagutil.NewArrayString("syslog.compressMethod.tcp", "Compression method for syslog messages received at the corresponding -syslog.listenAddr.tcp. "+
		"Supported values: none, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression")
	compressMethodUDP = flagutil.NewArrayString("syslog.compressMethod.udp", "Compression method for syslog messages received at the corresponding -syslog.listenAddr.udp. "+
		"Supported values: none, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression")

	useLocalTimestampTCP = flagutil.NewArrayBool("syslog.useLocalTimestamp.tcp", "Whether to use local timestamp instead of the original timestamp for the ingested syslog messages "+
		"at the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps")
	useLocalTimestampUDP = flagutil.NewArrayBool("syslog.useLocalTimestamp.udp", "Whether to use local timestamp instead of the original timestamp for the ingested syslog messages "+
		"at the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps")
)

// MustInit initializes syslog parser at the given -syslog.listenAddr.tcp and -syslog.listenAddr.udp ports
//
// This function must be called after flag.Parse().
//
// MustStop() must be called in order to free up resources occupied by the initialized syslog parser.
func MustInit() {
	if workersStopCh != nil {
		logger.Panicf("BUG: MustInit() called twice without MustStop() call")
	}
	workersStopCh = make(chan struct{})

	for argIdx, addr := range *listenAddrTCP {
		workersWG.Add(1)
		go func(addr string, argIdx int) {
			runTCPListener(addr, argIdx)
			workersWG.Done()
		}(addr, argIdx)
	}

	for argIdx, addr := range *listenAddrUDP {
		workersWG.Add(1)
		go func(addr string, argIdx int) {
			runUDPListener(addr, argIdx)
			workersWG.Done()
		}(addr, argIdx)
	}

	currentYear := time.Now().Year()
	globalCurrentYear.Store(int64(currentYear))
	workersWG.Add(1)
	go func() {
		ticker := time.NewTicker(time.Minute)
		for {
			select {
			case <-workersStopCh:
				ticker.Stop()
				workersWG.Done()
				return
			case <-ticker.C:
				currentYear := time.Now().Year()
				globalCurrentYear.Store(int64(currentYear))
			}
		}
	}()

	if *syslogTimezone != "" {
		tz, err := time.LoadLocation(*syslogTimezone)
		if err != nil {
			logger.Fatalf("cannot parse -syslog.timezone=%q: %s", *syslogTimezone, err)
		}
		globalTimezone = tz
	} else {
		globalTimezone = time.Local
	}
}

var (
	globalCurrentYear atomic.Int64
	globalTimezone    *time.Location
)

var (
	workersWG     sync.WaitGroup
	workersStopCh chan struct{}
)

// MustStop stops syslog parser initialized via MustInit()
func MustStop() {
	close(workersStopCh)
	workersWG.Wait()
	workersStopCh = nil
}

func runUDPListener(addr string, argIdx int) {
	ln, err := net.ListenPacket(netutil.GetUDPNetwork(), addr)
	if err != nil {
		logger.Fatalf("cannot start UDP syslog server at %q: %s", addr, err)
	}

	tenantIDStr := syslogTenantIDUDP.GetOptionalArg(argIdx)
	tenantID, err := logstorage.ParseTenantID(tenantIDStr)
	if err != nil {
		logger.Fatalf("cannot parse -syslog.tenantID.udp=%q for -syslog.listenAddr.udp=%q: %s", tenantIDStr, addr, err)
	}

	compressMethod := compressMethodUDP.GetOptionalArg(argIdx)
	checkCompressMethod(compressMethod, addr, "udp")

	useLocalTimestamp := useLocalTimestampUDP.GetOptionalArg(argIdx)

	doneCh := make(chan struct{})
	go func() {
		serveUDP(ln, tenantID, compressMethod, useLocalTimestamp)
		close(doneCh)
	}()

	logger.Infof("started accepting syslog messages at -syslog.listenAddr.udp=%q", addr)
	<-workersStopCh
	if err := ln.Close(); err != nil {
		logger.Fatalf("syslog: cannot close UDP listener at %s: %s", addr, err)
	}
	<-doneCh
	logger.Infof("finished accepting syslog messages at -syslog.listenAddr.udp=%q", addr)
}

func runTCPListener(addr string, argIdx int) {
	var tlsConfig *tls.Config
	if tlsEnable.GetOptionalArg(argIdx) {
		certFile := tlsCertFile.GetOptionalArg(argIdx)
		keyFile := tlsKeyFile.GetOptionalArg(argIdx)
		tc, err := netutil.GetServerTLSConfig(certFile, keyFile, *tlsMinVersion, *tlsCipherSuites)
		if err != nil {
			logger.Fatalf("cannot load TLS cert from -syslog.tlsCertFile=%q, -syslog.tlsKeyFile=%q, -syslog.tlsMinVersion=%q, -syslog.tlsCipherSuites=%q: %s",
				certFile, keyFile, *tlsMinVersion, *tlsCipherSuites, err)
		}
		tlsConfig = tc
	}
	ln, err := netutil.NewTCPListener("syslog", addr, false, tlsConfig)
	if err != nil {
		logger.Fatalf("syslog: cannot start TCP listener at %s: %s", addr, err)
	}

	tenantIDStr := syslogTenantIDTCP.GetOptionalArg(argIdx)
	tenantID, err := logstorage.ParseTenantID(tenantIDStr)
	if err != nil {
		logger.Fatalf("cannot parse -syslog.tenantID.tcp=%q for -syslog.listenAddr.tcp=%q: %s", tenantIDStr, addr, err)
	}

	compressMethod := compressMethodTCP.GetOptionalArg(argIdx)
	checkCompressMethod(compressMethod, addr, "tcp")

	useLocalTimestamp := useLocalTimestampTCP.GetOptionalArg(argIdx)

	doneCh := make(chan struct{})
	go func() {
		serveTCP(ln, tenantID, compressMethod, useLocalTimestamp)
		close(doneCh)
	}()

	logger.Infof("started accepting syslog messages at -syslog.listenAddr.tcp=%q", addr)
	<-workersStopCh
	if err := ln.Close(); err != nil {
		logger.Fatalf("syslog: cannot close TCP listener at %s: %s", addr, err)
	}
	<-doneCh
	logger.Infof("finished accepting syslog messages at -syslog.listenAddr.tcp=%q", addr)
}

func checkCompressMethod(compressMethod, addr, protocol string) {
	switch compressMethod {
	case "", "none", "gzip", "deflate":
		return
	default:
		logger.Fatalf("unsupported -syslog.compressMethod.%s=%q for -syslog.listenAddr.%s=%q; supported values: 'none', 'gzip', 'deflate'", protocol, compressMethod, protocol, addr)
	}
}

func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, compressMethod string, useLocalTimestamp bool) {
	gomaxprocs := cgroup.AvailableCPUs()
	var wg sync.WaitGroup
	localAddr := ln.LocalAddr()
	for i := 0; i < gomaxprocs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			cp := insertutils.GetCommonParamsForSyslog(tenantID)
			var bb bytesutil.ByteBuffer
			bb.B = bytesutil.ResizeNoCopyNoOverallocate(bb.B, 64*1024)
			for {
				bb.Reset()
				bb.B = bb.B[:cap(bb.B)]
				n, remoteAddr, err := ln.ReadFrom(bb.B)
				if err != nil {
					udpErrorsTotal.Inc()
					var ne net.Error
					if errors.As(err, &ne) {
						if ne.Temporary() {
							logger.Errorf("syslog: temporary error when listening for UDP at %q: %s", localAddr, err)
							time.Sleep(time.Second)
							continue
						}
						if strings.Contains(err.Error(), "use of closed network connection") {
							break
						}
					}
					logger.Errorf("syslog: cannot read UDP data from %s at %s: %s", remoteAddr, localAddr, err)
					continue
				}
				bb.B = bb.B[:n]
				udpRequestsTotal.Inc()
				if err := processStream(bb.NewReader(), compressMethod, useLocalTimestamp, cp); err != nil {
					logger.Errorf("syslog: cannot process UDP data from %s at %s: %s", remoteAddr, localAddr, err)
				}
			}
		}()
	}
	wg.Wait()
}

func serveTCP(ln net.Listener, tenantID logstorage.TenantID, compressMethod string, useLocalTimestamp bool) {
	var cm ingestserver.ConnsMap
	cm.Init("syslog")

	var wg sync.WaitGroup
	addr := ln.Addr()
	for {
		c, err := ln.Accept()
		if err != nil {
			var ne net.Error
			if errors.As(err, &ne) {
				if ne.Temporary() {
					logger.Errorf("syslog: temporary error when listening for TCP addr %q: %s", addr, err)
					time.Sleep(time.Second)
					continue
				}
				if strings.Contains(err.Error(), "use of closed network connection") {
					break
				}
				logger.Fatalf("syslog: unrecoverable error when accepting TCP connections at %q: %s", addr, err)
			}
			logger.Fatalf("syslog: unexpected error when accepting TCP connections at %q: %s", addr, err)
		}
		if !cm.Add(c) {
			_ = c.Close()
			break
		}

		wg.Add(1)
		go func() {
			cp := insertutils.GetCommonParamsForSyslog(tenantID)
			if err := processStream(c, compressMethod, useLocalTimestamp, cp); err != nil {
				logger.Errorf("syslog: cannot process TCP data at %q: %s", addr, err)
			}

			cm.Delete(c)
			_ = c.Close()
			wg.Done()
		}()
	}

	cm.CloseAll(0)
	wg.Wait()
}

// processStream parses a stream of syslog messages from r and ingests them into vlstorage.
func processStream(r io.Reader, compressMethod string, useLocalTimestamp bool, cp *insertutils.CommonParams) error {
	if err := vlstorage.CanWriteData(); err != nil {
		return err
	}

	lmp := cp.NewLogMessageProcessor()
	err := processStreamInternal(r, compressMethod, useLocalTimestamp, lmp)
	lmp.MustClose()

	return err
}

func processStreamInternal(r io.Reader, compressMethod string, useLocalTimestamp bool, lmp insertutils.LogMessageProcessor) error {
	switch compressMethod {
	case "", "none":
	case "gzip":
		zr, err := common.GetGzipReader(r)
		if err != nil {
			return fmt.Errorf("cannot read gzipped data: %w", err)
		}
		r = zr
	case "deflate":
		zr, err := common.GetZlibReader(r)
		if err != nil {
			return fmt.Errorf("cannot read deflated data: %w", err)
		}
		r = zr
	default:
		logger.Panicf("BUG: unsupported compressMethod=%q; supported values: none, gzip, deflate", compressMethod)
	}

	err := processUncompressedStream(r, useLocalTimestamp, lmp)

	switch compressMethod {
	case "gzip":
		zr := r.(*gzip.Reader)
		common.PutGzipReader(zr)
	case "deflate":
		zr := r.(io.ReadCloser)
		common.PutZlibReader(zr)
	}

	return err
}

func processUncompressedStream(r io.Reader, useLocalTimestamp bool, lmp insertutils.LogMessageProcessor) error {
	wcr := writeconcurrencylimiter.GetReader(r)
	defer writeconcurrencylimiter.PutReader(wcr)

	slr := getSyslogLineReader(wcr)
	defer putSyslogLineReader(slr)

	n := 0
	for {
		ok := slr.nextLine()
		wcr.DecConcurrency()
		if !ok {
			break
		}

		currentYear := int(globalCurrentYear.Load())
		err := processLine(slr.line, currentYear, globalTimezone, useLocalTimestamp, lmp)
		if err != nil {
			errorsTotal.Inc()
			return fmt.Errorf("cannot read line #%d: %s", n, err)
		}
		n++
		rowsIngestedTotal.Inc()
	}
	return slr.Error()
}

type syslogLineReader struct {
	line []byte

	br  *bufio.Reader
	err error
}

func (slr *syslogLineReader) reset(r io.Reader) {
	slr.line = slr.line[:0]
	slr.br.Reset(r)
	slr.err = nil
}

// Error returns the last error occurred in slr.
func (slr *syslogLineReader) Error() error {
	if slr.err == nil || slr.err == io.EOF {
		return nil
	}
	return slr.err
}

// nextLine reads the next syslog line from slr and stores it at slr.line.
//
// false is returned if the next line cannot be read. Error() must be called in this case
// in order to verify whether there is an error or just slr stream has been finished.
func (slr *syslogLineReader) nextLine() bool {
	if slr.err != nil {
		return false
	}

again:
	prefix, err := slr.br.ReadSlice(' ')
	if err != nil {
		if err != io.EOF {
			slr.err = fmt.Errorf("cannot read message frame prefix: %w", err)
			return false
		}
		if len(prefix) == 0 {
			slr.err = err
			return false
		}
	}
	// skip empty lines
	for len(prefix) > 0 && prefix[0] == '\n' {
		prefix = prefix[1:]
	}
	if len(prefix) == 0 {
		// An empty prefix or a prefix with empty lines - try reading yet another prefix.
		goto again
	}

	if prefix[0] >= '0' && prefix[0] <= '9' {
		// This is octet-counting method. See https://www.ietf.org/archive/id/draft-gerhards-syslog-plain-tcp-07.html#msgxfer
		msgLenStr := bytesutil.ToUnsafeString(prefix[:len(prefix)-1])
		msgLen, err := strconv.ParseUint(msgLenStr, 10, 64)
		if err != nil {
			slr.err = fmt.Errorf("cannot parse message length from %q: %w", msgLenStr, err)
			return false
		}
		if maxMsgLen := insertutils.MaxLineSizeBytes.IntN(); msgLen > uint64(maxMsgLen) {
			slr.err = fmt.Errorf("cannot read message longer than %d bytes; msgLen=%d", maxMsgLen, msgLen)
			return false
		}
		slr.line = slicesutil.SetLength(slr.line, int(msgLen))
		if _, err := io.ReadFull(slr.br, slr.line); err != nil {
			slr.err = fmt.Errorf("cannot read message with size %d bytes: %w", msgLen, err)
			return false
		}
		return true
	}

	// This is octet-stuffing method. See https://www.ietf.org/archive/id/draft-gerhards-syslog-plain-tcp-07.html#octet-stuffing-legacy
	slr.line = append(slr.line[:0], prefix...)
	for {
		line, err := slr.br.ReadSlice('\n')
		if err == nil {
			slr.line = append(slr.line, line[:len(line)-1]...)
			return true
		}
		if err == io.EOF {
			slr.line = append(slr.line, line...)
			return true
		}
		if err == bufio.ErrBufferFull {
			slr.line = append(slr.line, line...)
			continue
		}
		slr.err = fmt.Errorf("cannot read message in octet-stuffing method: %w", err)
		return false
	}
}

func getSyslogLineReader(r io.Reader) *syslogLineReader {
	v := syslogLineReaderPool.Get()
	if v == nil {
		br := bufio.NewReaderSize(r, 64*1024)
		return &syslogLineReader{
			br: br,
		}
	}
	slr := v.(*syslogLineReader)
	slr.reset(r)
	return slr
}

func putSyslogLineReader(slr *syslogLineReader) {
	syslogLineReaderPool.Put(slr)
}

var syslogLineReaderPool sync.Pool

func processLine(line []byte, currentYear int, timezone *time.Location, useLocalTimestamp bool, lmp insertutils.LogMessageProcessor) error {
	p := logstorage.GetSyslogParser(currentYear, timezone)
	lineStr := bytesutil.ToUnsafeString(line)
	p.Parse(lineStr)

	var ts int64
	if useLocalTimestamp {
		ts = time.Now().UnixNano()
	} else {
		nsecs, err := insertutils.ExtractTimestampRFC3339NanoFromFields("timestamp", p.Fields)
		if err != nil {
			return fmt.Errorf("cannot get timestamp from syslog line %q: %w", line, err)
		}
		ts = nsecs
	}
	logstorage.RenameField(p.Fields, "message", "_msg")
	lmp.AddRow(ts, p.Fields)
	logstorage.PutSyslogParser(p)

	return nil
}

var (
	rowsIngestedTotal = metrics.NewCounter(`vl_rows_ingested_total{type="syslog"}`)

	errorsTotal = metrics.NewCounter(`vl_errors_total{type="syslog"}`)

	udpRequestsTotal = metrics.NewCounter(`vl_udp_reqests_total{type="syslog"}`)
	udpErrorsTotal   = metrics.NewCounter(`vl_udp_errors_total{type="syslog"}`)
)
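
The `nextLine` reader above auto-detects the two TCP framing methods from the referenced draft: octet-counting (a decimal length prefix followed by a space) and octet-stuffing (a newline-terminated message). A hypothetical pair of helpers for producing such frames, e.g. when building test inputs like the `48 <165>...` sample in the tests below:

```go
// frameOctetCounting prefixes the message with its byte length, e.g. "48 <165>Jun 4 ...".
func frameOctetCounting(msg string) string {
	return strconv.Itoa(len(msg)) + " " + msg
}

// frameOctetStuffing terminates the message with a newline instead of a length prefix.
func frameOctetStuffing(msg string) string {
	return msg + "\n"
}
```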
@@ -1,130 +0,0 @@
package syslog

import (
	"bytes"
	"reflect"
	"testing"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
)

func TestSyslogLineReader_Success(t *testing.T) {
	f := func(data string, linesExpected []string) {
		t.Helper()

		r := bytes.NewBufferString(data)
		slr := getSyslogLineReader(r)
		defer putSyslogLineReader(slr)

		var lines []string
		for slr.nextLine() {
			lines = append(lines, string(slr.line))
		}
		if err := slr.Error(); err != nil {
			t.Fatalf("unexpected error: %s", err)
		}
		if !reflect.DeepEqual(lines, linesExpected) {
			t.Fatalf("unexpected lines read;\ngot\n%q\nwant\n%q", lines, linesExpected)
		}
	}

	f("", nil)
	f("\n", nil)
	f("\n\n\n", nil)

	f("foobar", []string{"foobar"})
	f("foobar\n", []string{"foobar"})
	f("\n\nfoo\n\nbar\n\n", []string{"foo", "bar"})

	f(`Jun 3 12:08:33 abcd systemd: Starting Update the local ESM caches...`, []string{"Jun 3 12:08:33 abcd systemd: Starting Update the local ESM caches..."})

	f(`Jun 3 12:08:33 abcd systemd: Starting Update the local ESM caches...

48 <165>Jun 4 12:08:33 abcd systemd[345]: abc defg<123>1 2023-06-03T17:42:12.345Z mymachine.example.com appname 12345 ID47 [exampleSDID@32473 iut="3" eventSource="Application 123 = ] 56" eventID="11211"] This is a test message with structured data.

`, []string{
		"Jun 3 12:08:33 abcd systemd: Starting Update the local ESM caches...",
		"<165>Jun 4 12:08:33 abcd systemd[345]: abc defg",
		`<123>1 2023-06-03T17:42:12.345Z mymachine.example.com appname 12345 ID47 [exampleSDID@32473 iut="3" eventSource="Application 123 = ] 56" eventID="11211"] This is a test message with structured data.`,
	})
}

func TestSyslogLineReader_Failure(t *testing.T) {
	f := func(data string) {
		t.Helper()

		r := bytes.NewBufferString(data)
		slr := getSyslogLineReader(r)
		defer putSyslogLineReader(slr)

		if slr.nextLine() {
			t.Fatalf("expecting failure to read the first line")
		}
		if err := slr.Error(); err == nil {
			t.Fatalf("expecting non-nil error")
		}
	}

	// invalid format for message size
	f("12foo bar")

	// too big message size
	f("123 aa")
	f("1233423432 abc")
}

func TestProcessStreamInternal_Success(t *testing.T) {
	f := func(data string, currentYear, rowsExpected int, timestampsExpected []int64, resultExpected string) {
		t.Helper()

		MustInit()
		defer MustStop()

		globalTimezone = time.UTC
		globalCurrentYear.Store(int64(currentYear))

		tlp := &insertutils.TestLogMessageProcessor{}
		r := bytes.NewBufferString(data)
		if err := processStreamInternal(r, "", false, tlp); err != nil {
			t.Fatalf("unexpected error: %s", err)
		}
		if err := tlp.Verify(rowsExpected, timestampsExpected, resultExpected); err != nil {
			t.Fatal(err)
		}
	}

	data := `Jun 3 12:08:33 abcd systemd: Starting Update the local ESM caches...

48 <165>Jun 4 12:08:33 abcd systemd[345]: abc defg<123>1 2023-06-03T17:42:12.345Z mymachine.example.com appname 12345 ID47 [exampleSDID@32473 iut="3" eventSource="Application 123 = ] 56" eventID="11211"] This is a test message with structured data.
`
	currentYear := 2023
	rowsExpected := 3
	timestampsExpected := []int64{1685794113000000000, 1685880513000000000, 1685814132345000000}
	resultExpected := `{"format":"rfc3164","timestamp":"","hostname":"abcd","app_name":"systemd","_msg":"Starting Update the local ESM caches..."}
{"priority":"165","facility":"20","severity":"5","format":"rfc3164","timestamp":"","hostname":"abcd","app_name":"systemd","proc_id":"345","_msg":"abc defg"}
{"priority":"123","facility":"15","severity":"3","format":"rfc5424","timestamp":"","hostname":"mymachine.example.com","app_name":"appname","proc_id":"12345","msg_id":"ID47","exampleSDID@32473.iut":"3","exampleSDID@32473.eventSource":"Application 123 = ] 56","exampleSDID@32473.eventID":"11211","_msg":"This is a test message with structured data."}`
	f(data, currentYear, rowsExpected, timestampsExpected, resultExpected)
}

func TestProcessStreamInternal_Failure(t *testing.T) {
	f := func(data string) {
		t.Helper()

		MustInit()
		defer MustStop()

		tlp := &insertutils.TestLogMessageProcessor{}
		r := bytes.NewBufferString(data)
		if err := processStreamInternal(r, "", false, tlp); err == nil {
			t.Fatalf("expecting non-nil error")
		}
	}

	// invalid format for message size
	f("12foo bar")

	// too big message size
	f("123 foo")
	f("123456789 bar")
}
@@ -1,7 +0,0 @@
# All these commands must run from repository root.

vlogsgenerator:
	APP_NAME=vlogsgenerator $(MAKE) app-local

vlogsgenerator-race:
	APP_NAME=vlogsgenerator RACE=-race $(MAKE) app-local
|
||||
@@ -1,158 +0,0 @@
# vlogsgenerator

Logs generator for [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/).

## How to build vlogsgenerator?

Run `make vlogsgenerator` from the repository root. This builds the `bin/vlogsgenerator` binary.

## How to run vlogsgenerator?

`vlogsgenerator` generates logs in [JSON line format](https://jsonlines.org/) suitable for ingestion
via the [`/insert/jsonline` endpoint at VictoriaLogs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api).

By default it writes the generated logs to `stdout`. For example, the following command writes generated logs to `stdout`:

```
bin/vlogsgenerator
```

It is possible to redirect the generated logs to a file. For example, the following command writes the generated logs to the `logs.json` file:

```
bin/vlogsgenerator > logs.json
```

The generated logs in the `logs.json` file can be inspected with the following command:

```
head logs.json | jq .
```

Below is an example output:

```json
{
  "_time": "2024-05-08T14:34:00.854Z",
  "_msg": "message for the stream 8 and worker 0; ip=185.69.136.129; uuid=b4fe8f1a-c93c-dea3-ba11-5b9f0509291e; u64=8996587920687045253",
  "host": "host_8",
  "worker_id": "0",
  "run_id": "f9b3deee-e6b6-7f56-5deb-1586e4e81725",
  "const_0": "some value 0 8",
  "const_1": "some value 1 8",
  "const_2": "some value 2 8",
  "var_0": "some value 0 12752539384823438260",
  "dict_0": "warn",
  "dict_1": "info",
  "u8_0": "6",
  "u16_0": "35202",
  "u32_0": "1964973739",
  "u64_0": "4810489083243239145",
  "float_0": "1.868",
  "ip_0": "250.34.75.125",
  "timestamp_0": "1799-03-16T01:34:18.311Z",
  "json_0": "{\"foo\":\"bar_3\",\"baz\":{\"a\":[\"x\",\"y\"]},\"f3\":NaN,\"f4\":32}"
}
{
  "_time": "2024-05-08T14:34:00.854Z",
  "_msg": "message for the stream 9 and worker 0; ip=164.244.254.194; uuid=7e8373b1-ce0d-1ce7-8e96-4bcab8955598; u64=13949903463741076522",
  "host": "host_9",
  "worker_id": "0",
  "run_id": "f9b3deee-e6b6-7f56-5deb-1586e4e81725",
  "const_0": "some value 0 9",
  "const_1": "some value 1 9",
  "const_2": "some value 2 9",
  "var_0": "some value 0 5371555382075206134",
  "dict_0": "INFO",
  "dict_1": "FATAL",
  "u8_0": "219",
  "u16_0": "31459",
  "u32_0": "3918836777",
  "u64_0": "6593354256620219850",
  "float_0": "1.085",
  "ip_0": "253.151.88.158",
  "timestamp_0": "2042-10-05T16:42:57.082Z",
  "json_0": "{\"foo\":\"bar_5\",\"baz\":{\"a\":[\"x\",\"y\"]},\"f3\":NaN,\"f4\":27}"
}
```

The `run_id` field uniquely identifies every `vlogsgenerator` invocation.
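
For example, assuming VictoriaLogs is running at `localhost:9428`, the logs from a particular run can be selected by filtering on `run_id` via the [`/select/logsql/query` endpoint](https://docs.victoriametrics.com/victorialogs/querying/#http-api); the `run_id` value below is a placeholder - substitute the value printed for your run:

```
curl http://localhost:9428/select/logsql/query \
    -d 'query=run_id:"f9b3deee-e6b6-7f56-5deb-1586e4e81725"'
```
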
### How to write logs to VictoriaLogs?

The generated logs can be written directly to VictoriaLogs by passing the address of the [`/insert/jsonline` endpoint](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api)
to the `-addr` command-line flag. For example, the following command writes the generated logs to VictoriaLogs running at `localhost`:

```
bin/vlogsgenerator -addr=http://localhost:9428/insert/jsonline
```

### Configuration

`vlogsgenerator` accepts various command-line flags, which can be used for configuring the number and the shape of the generated logs.
These flags can be inspected by running `vlogsgenerator -help`. Below are the most interesting flags:

* `-start` - starting timestamp for generating logs. Logs are evenly generated on the [`-start` ... `-end`] interval.
* `-end` - ending timestamp for generating logs. Logs are evenly generated on the [`-start` ... `-end`] interval.
* `-activeStreams` - the number of active [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) to generate.
* `-logsPerStream` - the number of log entries to generate per log stream. Log entries are evenly distributed on the [`-start` ... `-end`] interval.

The total number of generated logs can be calculated as `-activeStreams` * `-logsPerStream`.

For example, the following command generates `1_000_000` log entries on the time range `[2024-01-01 - 2024-02-01]` across `100`
[log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields), where every log stream contains `10_000` log entries,
and writes them to `http://localhost:9428/insert/jsonline`:

```
bin/vlogsgenerator \
    -start=2024-01-01 -end=2024-02-01 \
    -activeStreams=100 \
    -logsPerStream=10_000 \
    -addr=http://localhost:9428/insert/jsonline
```

### Churn rate

It is possible to generate churn rate for active [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
by specifying a `-totalStreams` command-line flag bigger than `-activeStreams`. For example, the following command generates
logs for `1000` total streams, while the number of active streams equals `100`. This means that at any given time there are logs for `100` streams,
but these streams change over the given [`-start` ... `-end`] time range, so the total number of streams on the given time range becomes `1000`:

```
bin/vlogsgenerator \
    -start=2024-01-01 -end=2024-02-01 \
    -activeStreams=100 \
    -totalStreams=1_000 \
    -logsPerStream=10_000 \
    -addr=http://localhost:9428/insert/jsonline
```

In this case the total number of generated logs equals `-totalStreams` * `-logsPerStream` = `10_000_000`.
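
The generated stream churn can be cross-checked by listing the streams seen on the time range via the [`/select/logsql/streams` endpoint](https://docs.victoriametrics.com/victorialogs/querying/#querying-streams). This is a sketch, assuming VictoriaLogs runs at `localhost:9428`:

```
curl http://localhost:9428/select/logsql/streams \
    -d 'query=*' \
    -d 'start=2024-01-01' \
    -d 'end=2024-02-01' \
    -d 'limit=2000'
```
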
### Benchmark tuning

By default `vlogsgenerator` generates and writes logs with a single worker. This may limit the maximum data ingestion rate during benchmarks.
The number of workers can be changed via the `-workers` command-line flag. For example, the following command generates and writes logs with `16` workers:

```
bin/vlogsgenerator \
    -start=2024-01-01 -end=2024-02-01 \
    -activeStreams=100 \
    -logsPerStream=10_000 \
    -addr=http://localhost:9428/insert/jsonline \
    -workers=16
```

### Output statistics

Every 10 seconds `vlogsgenerator` writes statistics about the generated logs into `stderr`. The frequency of the statistics can be adjusted via the `-statInterval` command-line flag.
For example, the following command writes statistics every 2 seconds:

```
bin/vlogsgenerator \
    -start=2024-01-01 -end=2024-02-01 \
    -activeStreams=100 \
    -logsPerStream=10_000 \
    -addr=http://localhost:9428/insert/jsonline \
    -statInterval=2s
```
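
Each statistics line follows the format of the `logger.Infof` call in the generator source below; the numbers in this sample are illustrative only:

```
generated 250K log entries (500K total) at 125K entries/sec, 120MB (240MB total) at 60MB/sec
```
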
@@ -1,344 +0,0 @@
package main

import (
	"bufio"
	"flag"
	"fmt"
	"io"
	"math"
	"math/rand"
	"net/http"
	"net/url"
	"os"
	"strconv"
	"sync"
	"sync/atomic"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
)

var (
	addr    = flag.String("addr", "stdout", "HTTP address to push the generated logs to; if it is set to stdout, then logs are generated to stdout")
	workers = flag.Int("workers", 1, "The number of workers to use to push logs to -addr")

	start         = newTimeFlag("start", "-1d", "Generated logs start from this time; see https://docs.victoriametrics.com/#timestamp-formats")
	end           = newTimeFlag("end", "0s", "Generated logs end at this time; see https://docs.victoriametrics.com/#timestamp-formats")
	activeStreams = flag.Int("activeStreams", 100, "The number of active log streams to generate; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields")
	totalStreams  = flag.Int("totalStreams", 0, "The number of total log streams; if -totalStreams > -activeStreams, then some active streams are substituted with new streams "+
		"during data generation")
	logsPerStream     = flag.Int64("logsPerStream", 1_000, "The number of log entries to generate per each log stream. Log entries are evenly distributed between -start and -end")
	constFieldsPerLog = flag.Int("constFieldsPerLog", 3, "The number of fields with constant values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	varFieldsPerLog = flag.Int("varFieldsPerLog", 1, "The number of fields with variable values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	dictFieldsPerLog = flag.Int("dictFieldsPerLog", 2, "The number of fields with up to 8 different values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	u8FieldsPerLog = flag.Int("u8FieldsPerLog", 1, "The number of fields with uint8 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	u16FieldsPerLog = flag.Int("u16FieldsPerLog", 1, "The number of fields with uint16 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	u32FieldsPerLog = flag.Int("u32FieldsPerLog", 1, "The number of fields with uint32 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	u64FieldsPerLog = flag.Int("u64FieldsPerLog", 1, "The number of fields with uint64 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	floatFieldsPerLog = flag.Int("floatFieldsPerLog", 1, "The number of fields with float64 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	ipFieldsPerLog = flag.Int("ipFieldsPerLog", 1, "The number of fields with IPv4 values to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	timestampFieldsPerLog = flag.Int("timestampFieldsPerLog", 1, "The number of fields with ISO8601 timestamps per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
	jsonFieldsPerLog = flag.Int("jsonFieldsPerLog", 1, "The number of JSON fields to generate per each log entry; "+
		"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")

	statInterval = flag.Duration("statInterval", 10*time.Second, "The interval between publishing the stats")
)

func main() {
	// Write flags and help message to stdout, since it is easier to grep or pipe.
	flag.CommandLine.SetOutput(os.Stdout)
	envflag.Parse()
	buildinfo.Init()
	logger.Init()

	var remoteWriteURL *url.URL
	if *addr != "stdout" {
		urlParsed, err := url.Parse(*addr)
		if err != nil {
			logger.Fatalf("cannot parse -addr=%q: %s", *addr, err)
		}
		qs, err := url.ParseQuery(urlParsed.RawQuery)
		if err != nil {
			logger.Fatalf("cannot parse query string in -addr=%q: %s", *addr, err)
		}
		qs.Set("_stream_fields", "host,worker_id")
		urlParsed.RawQuery = qs.Encode()
		remoteWriteURL = urlParsed
	}

	if start.nsec >= end.nsec {
		logger.Fatalf("-start=%s must be smaller than -end=%s", start, end)
	}
	if *activeStreams <= 0 {
		logger.Fatalf("-activeStreams must be bigger than 0; got %d", *activeStreams)
	}
	if *logsPerStream <= 0 {
		logger.Fatalf("-logsPerStream must be bigger than 0; got %d", *logsPerStream)
	}
	if *totalStreams < *activeStreams {
		*totalStreams = *activeStreams
	}

	cfg := &workerConfig{
		url:           remoteWriteURL,
		activeStreams: *activeStreams,
		totalStreams:  *totalStreams,
	}

	// divide total and active streams among workers
	if *workers <= 0 {
		logger.Fatalf("-workers must be bigger than 0; got %d", *workers)
	}
	if *workers > *activeStreams {
		logger.Fatalf("-workers=%d cannot exceed -activeStreams=%d", *workers, *activeStreams)
	}
	cfg.activeStreams /= *workers
	cfg.totalStreams /= *workers

	logger.Infof("start -workers=%d workers for ingesting -logsPerStream=%d log entries per each -totalStreams=%d (-activeStreams=%d) on a time range -start=%s, -end=%s to -addr=%s",
		*workers, *logsPerStream, *totalStreams, *activeStreams, toRFC3339(start.nsec), toRFC3339(end.nsec), *addr)

	startTime := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < *workers; i++ {
		wg.Add(1)
		go func(workerID int) {
			defer wg.Done()
			generateAndPushLogs(cfg, workerID)
		}(i)
	}

	go func() {
		prevEntries := uint64(0)
		prevBytes := uint64(0)
		ticker := time.NewTicker(*statInterval)
		for range ticker.C {
			currEntries := logEntriesCount.Load()
			deltaEntries := currEntries - prevEntries
			rateEntries := float64(deltaEntries) / statInterval.Seconds()

			currBytes := bytesGenerated.Load()
			deltaBytes := currBytes - prevBytes
			rateBytes := float64(deltaBytes) / statInterval.Seconds()
			logger.Infof("generated %dK log entries (%dK total) at %.0fK entries/sec, %dMB (%dMB total) at %.0fMB/sec",
				deltaEntries/1e3, currEntries/1e3, rateEntries/1e3, deltaBytes/1e6, currBytes/1e6, rateBytes/1e6)

			prevEntries = currEntries
			prevBytes = currBytes
		}
	}()

	wg.Wait()

	dSecs := time.Since(startTime).Seconds()
	currEntries := logEntriesCount.Load()
	currBytes := bytesGenerated.Load()
	rateEntries := float64(currEntries) / dSecs
	rateBytes := float64(currBytes) / dSecs
	logger.Infof("ingested %dK log entries (%dMB) in %.3f seconds; avg ingestion rate: %.0fK entries/sec, %.0fMB/sec", currEntries/1e3, currBytes/1e6, dSecs, rateEntries/1e3, rateBytes/1e6)
}

var logEntriesCount atomic.Uint64

var bytesGenerated atomic.Uint64

type workerConfig struct {
	url           *url.URL
	activeStreams int
	totalStreams  int
}

type statWriter struct {
	w io.Writer
}

func (sw *statWriter) Write(p []byte) (int, error) {
	bytesGenerated.Add(uint64(len(p)))
	return sw.w.Write(p)
}

func generateAndPushLogs(cfg *workerConfig, workerID int) {
	pr, pw := io.Pipe()
	sw := &statWriter{
		w: pw,
	}
	bw := bufio.NewWriter(sw)
	doneCh := make(chan struct{})
	go func() {
		generateLogs(bw, workerID, cfg.activeStreams, cfg.totalStreams)
		_ = bw.Flush()
		_ = pw.Close()
		close(doneCh)
	}()

	if cfg.url == nil {
		_, err := io.Copy(os.Stdout, pr)
		if err != nil {
			logger.Fatalf("unexpected error when writing logs to stdout: %s", err)
		}
		return
	}

	req, err := http.NewRequest("POST", cfg.url.String(), pr)
	if err != nil {
		logger.Fatalf("cannot create request to %q: %s", cfg.url, err)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		logger.Fatalf("cannot perform request to %q: %s", cfg.url, err)
	}
	if resp.StatusCode/100 != 2 {
		logger.Fatalf("unexpected status code got from %q: %d; want 2xx", cfg.url, resp.StatusCode)
	}

	// Wait until the generateLogs goroutine is finished.
	<-doneCh
}

func generateLogs(bw *bufio.Writer, workerID, activeStreams, totalStreams int) {
	streamLifetime := int64(float64(end.nsec-start.nsec) * (float64(activeStreams) / float64(totalStreams)))
	streamStep := int64(float64(end.nsec-start.nsec) / float64(totalStreams-activeStreams+1))
	step := streamLifetime / (*logsPerStream - 1)

	currNsec := start.nsec
	for currNsec < end.nsec {
		firstStreamID := int((currNsec - start.nsec) / streamStep)
		generateLogsAtTimestamp(bw, workerID, currNsec, firstStreamID, activeStreams)
		currNsec += step
	}
}

var runID = toUUID(rand.Uint64(), rand.Uint64())

func generateLogsAtTimestamp(bw *bufio.Writer, workerID int, ts int64, firstStreamID, activeStreams int) {
	streamID := firstStreamID
	timeStr := toRFC3339(ts)
	for i := 0; i < activeStreams; i++ {
		ip := toIPv4(rand.Uint32())
		uuid := toUUID(rand.Uint64(), rand.Uint64())
		fmt.Fprintf(bw, `{"_time":%q,"_msg":"message for the stream %d and worker %d; ip=%s; uuid=%s; u64=%d","host":"host_%d","worker_id":"%d"`,
			timeStr, streamID, workerID, ip, uuid, rand.Uint64(), streamID, workerID)
		fmt.Fprintf(bw, `,"run_id":"%s"`, runID)
		for j := 0; j < *constFieldsPerLog; j++ {
			fmt.Fprintf(bw, `,"const_%d":"some value %d %d"`, j, j, streamID)
		}
		for j := 0; j < *varFieldsPerLog; j++ {
			fmt.Fprintf(bw, `,"var_%d":"some value %d %d"`, j, j, rand.Uint64())
		}
		for j := 0; j < *dictFieldsPerLog; j++ {
			fmt.Fprintf(bw, `,"dict_%d":"%s"`, j, dictValues[rand.Intn(len(dictValues))])
		}
		for j := 0; j < *u8FieldsPerLog; j++ {
			fmt.Fprintf(bw, `,"u8_%d":"%d"`, j, uint8(rand.Uint32()))
		}
		for j := 0; j < *u16FieldsPerLog; j++ {
			fmt.Fprintf(bw, `,"u16_%d":"%d"`, j, uint16(rand.Uint32()))
		}
		for j := 0; j < *u32FieldsPerLog; j++ {
			fmt.Fprintf(bw, `,"u32_%d":"%d"`, j, rand.Uint32())
		}
		for j := 0; j < *u64FieldsPerLog; j++ {
			fmt.Fprintf(bw, `,"u64_%d":"%d"`, j, rand.Uint64())
		}
		for j := 0; j < *floatFieldsPerLog; j++ {
			fmt.Fprintf(bw, `,"float_%d":"%v"`, j, math.Round(10_000*rand.Float64())/1000)
		}
		for j := 0; j < *ipFieldsPerLog; j++ {
			ip := toIPv4(rand.Uint32())
			fmt.Fprintf(bw, `,"ip_%d":"%s"`, j, ip)
		}
		for j := 0; j < *timestampFieldsPerLog; j++ {
			timestamp := toISO8601(int64(rand.Uint64()))
			fmt.Fprintf(bw, `,"timestamp_%d":"%s"`, j, timestamp)
		}
		for j := 0; j < *jsonFieldsPerLog; j++ {
			fmt.Fprintf(bw, `,"json_%d":"{\"foo\":\"bar_%d\",\"baz\":{\"a\":[\"x\",\"y\"]},\"f3\":NaN,\"f4\":%d}"`, j, rand.Intn(10), rand.Intn(100))
		}
		fmt.Fprintf(bw, "}\n")

		logEntriesCount.Add(1)
		streamID++
	}
}

var dictValues = []string{
	"debug",
	"info",
	"warn",
	"error",
	"fatal",
	"ERROR",
	"FATAL",
	"INFO",
}

func newTimeFlag(name, defaultValue, description string) *timeFlag {
	var tf timeFlag
	if err := tf.Set(defaultValue); err != nil {
		logger.Panicf("invalid defaultValue=%q for flag %q: %s", defaultValue, name, err)
	}
	flag.Var(&tf, name, description)
	return &tf
}

type timeFlag struct {
	s    string
	nsec int64
}

func (tf *timeFlag) Set(s string) error {
	msec, err := promutils.ParseTimeMsec(s)
	if err != nil {
		return fmt.Errorf("cannot parse time from %q: %w", s, err)
	}
	tf.s = s
	tf.nsec = msec * 1e6
	return nil
}

func (tf *timeFlag) String() string {
	return tf.s
}

func toRFC3339(nsec int64) string {
	return time.Unix(0, nsec).UTC().Format(time.RFC3339Nano)
}

func toISO8601(nsec int64) string {
	return time.Unix(0, nsec).UTC().Format("2006-01-02T15:04:05.000Z")
}

func toIPv4(n uint32) string {
	dst := make([]byte, 0, len("255.255.255.255"))
	dst = marshalUint64(dst, uint64(n>>24))
	dst = append(dst, '.')
	dst = marshalUint64(dst, uint64((n>>16)&0xff))
	dst = append(dst, '.')
	dst = marshalUint64(dst, uint64((n>>8)&0xff))
	dst = append(dst, '.')
	dst = marshalUint64(dst, uint64(n&0xff))
	return string(dst)
}

func toUUID(a, b uint64) string {
	return fmt.Sprintf("%08x-%04x-%04x-%04x-%012x", a&(1<<32-1), (a>>32)&(1<<16-1), a>>48, b&(1<<16-1), b>>16)
}

// marshalUint64 appends string representation of n to dst and returns the result.
func marshalUint64(dst []byte, n uint64) []byte {
	return strconv.AppendUint(dst, n, 10)
}
@@ -1,47 +0,0 @@
package logsql

import (
	"bufio"
	"io"
	"sync"
)

func getBufferedWriter(w io.Writer) *bufferedWriter {
	v := bufferedWriterPool.Get()
	if v == nil {
		return &bufferedWriter{
			bw: bufio.NewWriter(w),
		}
	}
	bw := v.(*bufferedWriter)
	bw.bw.Reset(w)
	return bw
}

func putBufferedWriter(bw *bufferedWriter) {
	bw.reset()
	bufferedWriterPool.Put(bw)
}

var bufferedWriterPool sync.Pool

type bufferedWriter struct {
	mu sync.Mutex
	bw *bufio.Writer
}

func (bw *bufferedWriter) reset() {
	// nothing to do
}

func (bw *bufferedWriter) WriteIgnoreErrors(p []byte) {
	bw.mu.Lock()
	_, _ = bw.bw.Write(p)
	bw.mu.Unlock()
}

func (bw *bufferedWriter) FlushIgnoreErrors() {
	bw.mu.Lock()
	_ = bw.bw.Flush()
	bw.mu.Unlock()
}
@@ -1,70 +0,0 @@
{% import (
	"slices"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
) %}

{% stripspace %}

// FieldsForHits formats labels for /select/logsql/hits response
{% func FieldsForHits(columns []logstorage.BlockColumn, rowIdx int) %}
{
	{% if len(columns) > 0 %}
		{%q= columns[0].Name %}:{%q= columns[0].Values[rowIdx] %}
		{% for _, c := range columns[1:] %}
			,{%q= c.Name %}:{%q= c.Values[rowIdx] %}
		{% endfor %}
	{% endif %}
}
{% endfunc %}

{% func HitsSeries(m map[string]*hitsSeries) %}
{
	{% code
		sortedKeys := make([]string, 0, len(m))
		for k := range m {
			sortedKeys = append(sortedKeys, k)
		}
		slices.Sort(sortedKeys)
	%}
	"hits":[
		{% if len(sortedKeys) > 0 %}
			{%= hitsSeriesLine(m, sortedKeys[0]) %}
			{% for _, k := range sortedKeys[1:] %}
				,{%= hitsSeriesLine(m, k) %}
			{% endfor %}
		{% endif %}
	]
}
{% endfunc %}

{% func hitsSeriesLine(m map[string]*hitsSeries, k string) %}
{
	{% code
		hs := m[k]
		hs.sort()
		timestamps := hs.timestamps
		hits := hs.hits
	%}
	"fields":{%s= k %},
	"timestamps":[
		{% if len(timestamps) > 0 %}
			{%q= timestamps[0] %}
			{% for _, ts := range timestamps[1:] %}
				,{%q= ts %}
			{% endfor %}
		{% endif %}
	],
	"values":[
		{% if len(hits) > 0 %}
			{%dul= hits[0] %}
			{% for _, v := range hits[1:] %}
				,{%dul= v %}
			{% endfor %}
		{% endif %}
	],
	"total":{%dul= hs.hitsTotal %}
}
{% endfunc %}

{% endstripspace %}
@@ -1,223 +0,0 @@
// Code generated by qtc from "hits_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.

//line app/vlselect/logsql/hits_response.qtpl:1
package logsql

//line app/vlselect/logsql/hits_response.qtpl:1
import (
	"slices"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

// FieldsForHits formats labels for /select/logsql/hits response

//line app/vlselect/logsql/hits_response.qtpl:10
import (
	qtio422016 "io"

	qt422016 "github.com/valyala/quicktemplate"
)

//line app/vlselect/logsql/hits_response.qtpl:10
var (
	_ = qtio422016.Copy
	_ = qt422016.AcquireByteBuffer
)

//line app/vlselect/logsql/hits_response.qtpl:10
func StreamFieldsForHits(qw422016 *qt422016.Writer, columns []logstorage.BlockColumn, rowIdx int) {
//line app/vlselect/logsql/hits_response.qtpl:10
	qw422016.N().S(`{`)
//line app/vlselect/logsql/hits_response.qtpl:12
	if len(columns) > 0 {
//line app/vlselect/logsql/hits_response.qtpl:13
		qw422016.N().Q(columns[0].Name)
//line app/vlselect/logsql/hits_response.qtpl:13
		qw422016.N().S(`:`)
//line app/vlselect/logsql/hits_response.qtpl:13
		qw422016.N().Q(columns[0].Values[rowIdx])
//line app/vlselect/logsql/hits_response.qtpl:14
		for _, c := range columns[1:] {
//line app/vlselect/logsql/hits_response.qtpl:14
			qw422016.N().S(`,`)
//line app/vlselect/logsql/hits_response.qtpl:15
			qw422016.N().Q(c.Name)
//line app/vlselect/logsql/hits_response.qtpl:15
			qw422016.N().S(`:`)
//line app/vlselect/logsql/hits_response.qtpl:15
			qw422016.N().Q(c.Values[rowIdx])
//line app/vlselect/logsql/hits_response.qtpl:16
		}
//line app/vlselect/logsql/hits_response.qtpl:17
	}
//line app/vlselect/logsql/hits_response.qtpl:17
	qw422016.N().S(`}`)
//line app/vlselect/logsql/hits_response.qtpl:19
}

//line app/vlselect/logsql/hits_response.qtpl:19
func WriteFieldsForHits(qq422016 qtio422016.Writer, columns []logstorage.BlockColumn, rowIdx int) {
//line app/vlselect/logsql/hits_response.qtpl:19
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/hits_response.qtpl:19
	StreamFieldsForHits(qw422016, columns, rowIdx)
//line app/vlselect/logsql/hits_response.qtpl:19
	qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/hits_response.qtpl:19
}

//line app/vlselect/logsql/hits_response.qtpl:19
func FieldsForHits(columns []logstorage.BlockColumn, rowIdx int) string {
//line app/vlselect/logsql/hits_response.qtpl:19
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/hits_response.qtpl:19
	WriteFieldsForHits(qb422016, columns, rowIdx)
//line app/vlselect/logsql/hits_response.qtpl:19
	qs422016 := string(qb422016.B)
//line app/vlselect/logsql/hits_response.qtpl:19
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/hits_response.qtpl:19
	return qs422016
//line app/vlselect/logsql/hits_response.qtpl:19
}

//line app/vlselect/logsql/hits_response.qtpl:21
func StreamHitsSeries(qw422016 *qt422016.Writer, m map[string]*hitsSeries) {
//line app/vlselect/logsql/hits_response.qtpl:21
	qw422016.N().S(`{`)
//line app/vlselect/logsql/hits_response.qtpl:24
	sortedKeys := make([]string, 0, len(m))
	for k := range m {
		sortedKeys = append(sortedKeys, k)
	}
	slices.Sort(sortedKeys)

//line app/vlselect/logsql/hits_response.qtpl:29
	qw422016.N().S(`"hits":[`)
//line app/vlselect/logsql/hits_response.qtpl:31
	if len(sortedKeys) > 0 {
//line app/vlselect/logsql/hits_response.qtpl:32
		streamhitsSeriesLine(qw422016, m, sortedKeys[0])
//line app/vlselect/logsql/hits_response.qtpl:33
		for _, k := range sortedKeys[1:] {
//line app/vlselect/logsql/hits_response.qtpl:33
			qw422016.N().S(`,`)
//line app/vlselect/logsql/hits_response.qtpl:34
			streamhitsSeriesLine(qw422016, m, k)
//line app/vlselect/logsql/hits_response.qtpl:35
		}
//line app/vlselect/logsql/hits_response.qtpl:36
	}
//line app/vlselect/logsql/hits_response.qtpl:36
	qw422016.N().S(`]}`)
//line app/vlselect/logsql/hits_response.qtpl:39
}

//line app/vlselect/logsql/hits_response.qtpl:39
func WriteHitsSeries(qq422016 qtio422016.Writer, m map[string]*hitsSeries) {
//line app/vlselect/logsql/hits_response.qtpl:39
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/hits_response.qtpl:39
	StreamHitsSeries(qw422016, m)
//line app/vlselect/logsql/hits_response.qtpl:39
	qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/hits_response.qtpl:39
}

//line app/vlselect/logsql/hits_response.qtpl:39
func HitsSeries(m map[string]*hitsSeries) string {
//line app/vlselect/logsql/hits_response.qtpl:39
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/hits_response.qtpl:39
	WriteHitsSeries(qb422016, m)
//line app/vlselect/logsql/hits_response.qtpl:39
	qs422016 := string(qb422016.B)
//line app/vlselect/logsql/hits_response.qtpl:39
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/hits_response.qtpl:39
	return qs422016
//line app/vlselect/logsql/hits_response.qtpl:39
}

//line app/vlselect/logsql/hits_response.qtpl:41
func streamhitsSeriesLine(qw422016 *qt422016.Writer, m map[string]*hitsSeries, k string) {
//line app/vlselect/logsql/hits_response.qtpl:41
	qw422016.N().S(`{`)
//line app/vlselect/logsql/hits_response.qtpl:44
	hs := m[k]
	hs.sort()
	timestamps := hs.timestamps
	hits := hs.hits

//line app/vlselect/logsql/hits_response.qtpl:48
	qw422016.N().S(`"fields":`)
//line app/vlselect/logsql/hits_response.qtpl:49
	qw422016.N().S(k)
//line app/vlselect/logsql/hits_response.qtpl:49
	qw422016.N().S(`,"timestamps":[`)
//line app/vlselect/logsql/hits_response.qtpl:51
	if len(timestamps) > 0 {
//line app/vlselect/logsql/hits_response.qtpl:52
		qw422016.N().Q(timestamps[0])
//line app/vlselect/logsql/hits_response.qtpl:53
		for _, ts := range timestamps[1:] {
//line app/vlselect/logsql/hits_response.qtpl:53
			qw422016.N().S(`,`)
//line app/vlselect/logsql/hits_response.qtpl:54
			qw422016.N().Q(ts)
//line app/vlselect/logsql/hits_response.qtpl:55
		}
//line app/vlselect/logsql/hits_response.qtpl:56
	}
//line app/vlselect/logsql/hits_response.qtpl:56
	qw422016.N().S(`],"values":[`)
//line app/vlselect/logsql/hits_response.qtpl:59
	if len(hits) > 0 {
//line app/vlselect/logsql/hits_response.qtpl:60
		qw422016.N().DUL(hits[0])
//line app/vlselect/logsql/hits_response.qtpl:61
		for _, v := range hits[1:] {
//line app/vlselect/logsql/hits_response.qtpl:61
			qw422016.N().S(`,`)
//line app/vlselect/logsql/hits_response.qtpl:62
			qw422016.N().DUL(v)
//line app/vlselect/logsql/hits_response.qtpl:63
		}
//line app/vlselect/logsql/hits_response.qtpl:64
	}
//line app/vlselect/logsql/hits_response.qtpl:64
	qw422016.N().S(`],"total":`)
//line app/vlselect/logsql/hits_response.qtpl:66
	qw422016.N().DUL(hs.hitsTotal)
//line app/vlselect/logsql/hits_response.qtpl:66
	qw422016.N().S(`}`)
//line app/vlselect/logsql/hits_response.qtpl:68
}

//line app/vlselect/logsql/hits_response.qtpl:68
func writehitsSeriesLine(qq422016 qtio422016.Writer, m map[string]*hitsSeries, k string) {
//line app/vlselect/logsql/hits_response.qtpl:68
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/hits_response.qtpl:68
	streamhitsSeriesLine(qw422016, m, k)
//line app/vlselect/logsql/hits_response.qtpl:68
	qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/hits_response.qtpl:68
}

//line app/vlselect/logsql/hits_response.qtpl:68
func hitsSeriesLine(m map[string]*hitsSeries, k string) string {
//line app/vlselect/logsql/hits_response.qtpl:68
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/hits_response.qtpl:68
	writehitsSeriesLine(qb422016, m, k)
//line app/vlselect/logsql/hits_response.qtpl:68
	qs422016 := string(qb422016.B)
//line app/vlselect/logsql/hits_response.qtpl:68
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/hits_response.qtpl:68
	return qs422016
//line app/vlselect/logsql/hits_response.qtpl:68
}
@@ -1,771 +1,87 @@
package logsql

import (
	"context"
	"fmt"
	"math"
	"net/http"
	"sort"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/VictoriaMetrics/metrics"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
)

// ProcessHitsRequest handles /select/logsql/hits request.
//
// See https://docs.victoriametrics.com/victorialogs/querying/#querying-hits-stats
func ProcessHitsRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) {
	q, tenantIDs, err := parseCommonArgs(r)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}

	// Obtain step
	stepStr := r.FormValue("step")
	if stepStr == "" {
		stepStr = "1d"
	}
	step, err := promutils.ParseDuration(stepStr)
	if err != nil {
		httpserver.Errorf(w, r, "cannot parse 'step' arg: %s", err)
		return
	}
	if step <= 0 {
		httpserver.Errorf(w, r, "'step' must be bigger than zero")
		return
	}

	// Obtain offset
	offsetStr := r.FormValue("offset")
	if offsetStr == "" {
		offsetStr = "0s"
	}
	offset, err := promutils.ParseDuration(offsetStr)
	if err != nil {
		httpserver.Errorf(w, r, "cannot parse 'offset' arg: %s", err)
		return
	}

	// Obtain field entries
	fields := r.Form["field"]

	// Obtain limit on the number of top fields entries.
	fieldsLimit, err := httputils.GetInt(r, "fields_limit")
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}
	if fieldsLimit < 0 {
		fieldsLimit = 0
	}

	// Prepare the query for hits count.
	q.Optimize()
	q.DropAllPipes()
	q.AddCountByTimePipe(int64(step), int64(offset), fields)

	var mLock sync.Mutex
	m := make(map[string]*hitsSeries)
	writeBlock := func(_ uint, timestamps []int64, columns []logstorage.BlockColumn) {
		if len(columns) == 0 || len(columns[0].Values) == 0 {
			return
		}

		timestampValues := columns[0].Values
		hitsValues := columns[len(columns)-1].Values
		columns = columns[1 : len(columns)-1]

		bb := blockResultPool.Get()
		for i := range timestamps {
			timestampStr := strings.Clone(timestampValues[i])
			hitsStr := strings.Clone(hitsValues[i])
			hits, err := strconv.ParseUint(hitsStr, 10, 64)
			if err != nil {
				logger.Panicf("BUG: cannot parse hitsStr=%q: %s", hitsStr, err)
			}

			bb.Reset()
			WriteFieldsForHits(bb, columns, i)

			mLock.Lock()
			hs, ok := m[string(bb.B)]
			if !ok {
				k := string(bb.B)
				hs = &hitsSeries{}
				m[k] = hs
			}
			hs.timestamps = append(hs.timestamps, timestampStr)
			hs.hits = append(hs.hits, hits)
			hs.hitsTotal += hits
			mLock.Unlock()
		}
		blockResultPool.Put(bb)
	}

	// Execute the query
	if err := vlstorage.RunQuery(ctx, tenantIDs, q, writeBlock); err != nil {
		httpserver.Errorf(w, r, "cannot execute query [%s]: %s", q, err)
		return
	}

	m = getTopHitsSeries(m, fieldsLimit)

	// Write response
	w.Header().Set("Content-Type", "application/json")
	WriteHitsSeries(w, m)
}

func getTopHitsSeries(m map[string]*hitsSeries, fieldsLimit int) map[string]*hitsSeries {
	if fieldsLimit <= 0 || fieldsLimit >= len(m) {
		return m
	}

	type fieldsHits struct {
		fieldsStr string
		hs        *hitsSeries
	}
	a := make([]fieldsHits, 0, len(m))
	for fieldsStr, hs := range m {
		a = append(a, fieldsHits{
			fieldsStr: fieldsStr,
			hs:        hs,
		})
	}
	sort.Slice(a, func(i, j int) bool {
		return a[i].hs.hitsTotal > a[j].hs.hitsTotal
	})

	hitsOther := make(map[string]uint64)
	for _, x := range a[fieldsLimit:] {
		for i, timestampStr := range x.hs.timestamps {
			hitsOther[timestampStr] += x.hs.hits[i]
		}
	}
	var hsOther hitsSeries
	for timestampStr, hits := range hitsOther {
		hsOther.timestamps = append(hsOther.timestamps, timestampStr)
		hsOther.hits = append(hsOther.hits, hits)
		hsOther.hitsTotal += hits
	}

	mNew := make(map[string]*hitsSeries, fieldsLimit+1)
	for _, x := range a[:fieldsLimit] {
		mNew[x.fieldsStr] = x.hs
	}
	mNew["{}"] = &hsOther

	return mNew
}

type hitsSeries struct {
	hitsTotal  uint64
	timestamps []string
	hits       []uint64
}

func (hs *hitsSeries) sort() {
	sort.Sort(hs)
}

func (hs *hitsSeries) Len() int {
	return len(hs.timestamps)
}

func (hs *hitsSeries) Swap(i, j int) {
	hs.timestamps[i], hs.timestamps[j] = hs.timestamps[j], hs.timestamps[i]
	hs.hits[i], hs.hits[j] = hs.hits[j], hs.hits[i]
}

func (hs *hitsSeries) Less(i, j int) bool {
	return hs.timestamps[i] < hs.timestamps[j]
}

// ProcessFieldNamesRequest handles /select/logsql/field_names request.
//
// See https://docs.victoriametrics.com/victorialogs/querying/#querying-field-names
func ProcessFieldNamesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) {
	q, tenantIDs, err := parseCommonArgs(r)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}

	// Obtain field names for the given query
	q.Optimize()
	fieldNames, err := vlstorage.GetFieldNames(ctx, tenantIDs, q)
	if err != nil {
		httpserver.Errorf(w, r, "cannot obtain field names: %s", err)
		return
	}

	// Write results
	w.Header().Set("Content-Type", "application/json")
	WriteValuesWithHitsJSON(w, fieldNames)
}

// ProcessFieldValuesRequest handles /select/logsql/field_values request.
//
// See https://docs.victoriametrics.com/victorialogs/querying/#querying-field-values
func ProcessFieldValuesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) {
	q, tenantIDs, err := parseCommonArgs(r)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}

	// Parse fieldName query arg
	fieldName := r.FormValue("field")
	if fieldName == "" {
		httpserver.Errorf(w, r, "missing 'field' query arg")
		return
	}

	// Parse limit query arg
	limit, err := httputils.GetInt(r, "limit")
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}
	if limit < 0 {
		limit = 0
	}

	// Obtain unique values for the given field
	q.Optimize()
	values, err := vlstorage.GetFieldValues(ctx, tenantIDs, q, fieldName, uint64(limit))
	if err != nil {
		httpserver.Errorf(w, r, "cannot obtain values for field %q: %s", fieldName, err)
		return
	}

	// Write results
	w.Header().Set("Content-Type", "application/json")
	WriteValuesWithHitsJSON(w, values)
}

// ProcessStreamFieldNamesRequest processes /select/logsql/stream_field_names request.
//
// See https://docs.victoriametrics.com/victorialogs/querying/#querying-stream-field-names
func ProcessStreamFieldNamesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) {
	q, tenantIDs, err := parseCommonArgs(r)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}

	// Obtain stream field names for the given query
	q.Optimize()
	names, err := vlstorage.GetStreamFieldNames(ctx, tenantIDs, q)
	if err != nil {
		httpserver.Errorf(w, r, "cannot obtain stream field names: %s", err)
		return
	}

	// Write results
	w.Header().Set("Content-Type", "application/json")
	WriteValuesWithHitsJSON(w, names)
}

// ProcessStreamFieldValuesRequest processes /select/logsql/stream_field_values request.
//
// See https://docs.victoriametrics.com/victorialogs/querying/#querying-stream-field-values
func ProcessStreamFieldValuesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) {
	q, tenantIDs, err := parseCommonArgs(r)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}

	// Parse fieldName query arg
	fieldName := r.FormValue("field")
	if fieldName == "" {
		httpserver.Errorf(w, r, "missing 'field' query arg")
		return
	}

	// Parse limit query arg
	limit, err := httputils.GetInt(r, "limit")
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}
	if limit < 0 {
		limit = 0
	}

	// Obtain stream field values for the given query and the given fieldName
	q.Optimize()
	values, err := vlstorage.GetStreamFieldValues(ctx, tenantIDs, q, fieldName, uint64(limit))
	if err != nil {
		httpserver.Errorf(w, r, "cannot obtain stream field values: %s", err)
		return
	}

	// Write results
	w.Header().Set("Content-Type", "application/json")
	WriteValuesWithHitsJSON(w, values)
}

// ProcessStreamIDsRequest processes /select/logsql/stream_ids request.
//
// See https://docs.victoriametrics.com/victorialogs/querying/#querying-stream_ids
func ProcessStreamIDsRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) {
	q, tenantIDs, err := parseCommonArgs(r)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}

	// Parse limit query arg
	limit, err := httputils.GetInt(r, "limit")
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}
	if limit < 0 {
		limit = 0
	}

	// Obtain streamIDs for the given query
	q.Optimize()
	streamIDs, err := vlstorage.GetStreamIDs(ctx, tenantIDs, q, uint64(limit))
	if err != nil {
		httpserver.Errorf(w, r, "cannot obtain stream_ids: %s", err)
		return
	}

	// Write results
	w.Header().Set("Content-Type", "application/json")
	WriteValuesWithHitsJSON(w, streamIDs)
}

// ProcessStreamsRequest processes /select/logsql/streams request.
//
// See https://docs.victoriametrics.com/victorialogs/querying/#querying-streams
func ProcessStreamsRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) {
	q, tenantIDs, err := parseCommonArgs(r)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}

	// Parse limit query arg
	limit, err := httputils.GetInt(r, "limit")
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}
	if limit < 0 {
		limit = 0
	}

	// Obtain streams for the given query
	q.Optimize()
	streams, err := vlstorage.GetStreams(ctx, tenantIDs, q, uint64(limit))
	if err != nil {
		httpserver.Errorf(w, r, "cannot obtain streams: %s", err)
		return
	}

	// Write results
	w.Header().Set("Content-Type", "application/json")
	WriteValuesWithHitsJSON(w, streams)
}

// ProcessLiveTailRequest processes live tailing request to /select/logsql/tail
func ProcessLiveTailRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) {
	liveTailRequests.Inc()
	defer liveTailRequests.Dec()

	q, tenantIDs, err := parseCommonArgs(r)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}
	if !q.CanLiveTail() {
		httpserver.Errorf(w, r, "the query [%s] cannot be used in live tailing; see https://docs.victoriametrics.com/victorialogs/querying/#live-tailing for details", q)
		return
	}
	q.Optimize()

	refreshIntervalMsecs, err := httputils.GetDuration(r, "refresh_interval", 1000)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}
	refreshInterval := time.Millisecond * time.Duration(refreshIntervalMsecs)

	ctxWithCancel, cancel := context.WithCancel(ctx)
	tp := newTailProcessor(cancel)

	ticker := time.NewTicker(refreshInterval)
	defer ticker.Stop()

	end := time.Now().UnixNano()
	doneCh := ctxWithCancel.Done()
	flusher, ok := w.(http.Flusher)
	if !ok {
		logger.Panicf("BUG: it is expected that http.ResponseWriter (%T) supports http.Flusher interface", w)
	}
	for {
		start := end - tailOffsetNsecs
		end = time.Now().UnixNano()

		qCopy := q.Clone()
		qCopy.AddTimeFilter(start, end)
		if err := vlstorage.RunQuery(ctxWithCancel, tenantIDs, qCopy, tp.writeBlock); err != nil {
			httpserver.Errorf(w, r, "cannot execute tail query [%s]: %s", q, err)
			return
		}
		resultRows, err := tp.getTailRows()
		if err != nil {
			httpserver.Errorf(w, r, "cannot get tail results for query [%q]: %s", q, err)
			return
		}
		if len(resultRows) > 0 {
			WriteJSONRows(w, resultRows)
			flusher.Flush()
		}

		select {
		case <-doneCh:
			return
		case <-ticker.C:
		}
	}
}

var liveTailRequests = metrics.NewCounter(`vl_live_tailing_requests`)

const tailOffsetNsecs = 5e9

type logRow struct {
	timestamp int64
	fields    []logstorage.Field
}

func sortLogRows(rows []logRow) {
	sort.SliceStable(rows, func(i, j int) bool {
		return rows[i].timestamp < rows[j].timestamp
	})
}

type tailProcessor struct {
	cancel func()

	mu sync.Mutex

	perStreamRows  map[string][]logRow
	lastTimestamps map[string]int64

	err error
}

func newTailProcessor(cancel func()) *tailProcessor {
	return &tailProcessor{
		cancel: cancel,

		perStreamRows:  make(map[string][]logRow),
		lastTimestamps: make(map[string]int64),
	}
}

func (tp *tailProcessor) writeBlock(_ uint, timestamps []int64, columns []logstorage.BlockColumn) {
	if len(timestamps) == 0 {
		return
	}

	tp.mu.Lock()
	defer tp.mu.Unlock()

	if tp.err != nil {
		return
	}

	// Make sure columns contain _time field, since it is needed for proper tail work.
	hasTime := false
	for _, c := range columns {
		if c.Name == "_time" {
			hasTime = true
			break
		}
	}
	if !hasTime {
		tp.err = fmt.Errorf("missing _time field")
		tp.cancel()
		return
	}

	// Copy block rows to tp.perStreamRows
	for i, timestamp := range timestamps {
		streamID := ""
		fields := make([]logstorage.Field, len(columns))
		for j, c := range columns {
			name := strings.Clone(c.Name)
			value := strings.Clone(c.Values[i])

			fields[j] = logstorage.Field{
				Name:  name,
				Value: value,
			}

			if name == "_stream_id" {
				streamID = value
			}
		}

		tp.perStreamRows[streamID] = append(tp.perStreamRows[streamID], logRow{
			timestamp: timestamp,
			fields:    fields,
		})
	}
}

func (tp *tailProcessor) getTailRows() ([][]logstorage.Field, error) {
	if tp.err != nil {
		return nil, tp.err
	}

	var resultRows []logRow
	for streamID, rows := range tp.perStreamRows {
		sortLogRows(rows)

		lastTimestamp, ok := tp.lastTimestamps[streamID]
		if ok {
			// Skip already written rows
			for len(rows) > 0 && rows[0].timestamp <= lastTimestamp {
				rows = rows[1:]
			}
		}
		if len(rows) > 0 {
			resultRows = append(resultRows, rows...)
			tp.lastTimestamps[streamID] = rows[len(rows)-1].timestamp
		}
	}
	clear(tp.perStreamRows)

	sortLogRows(resultRows)

	tailRows := make([][]logstorage.Field, len(resultRows))
	for i, row := range resultRows {
		tailRows[i] = row.fields
	}

	return tailRows, nil
}

// ProcessQueryRequest handles /select/logsql/query request.
//
// See https://docs.victoriametrics.com/victorialogs/querying/#http-api
func ProcessQueryRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) {
	q, tenantIDs, err := parseCommonArgs(r)
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}

	// Parse limit query arg
	limit, err := httputils.GetInt(r, "limit")
	if err != nil {
		httpserver.Errorf(w, r, "%s", err)
		return
	}

	bw := getBufferedWriter(w)
	defer func() {
		bw.FlushIgnoreErrors()
		putBufferedWriter(bw)
	}()
	w.Header().Set("Content-Type", "application/stream+json")

	if limit > 0 {
		if q.CanReturnLastNResults() {
			rows, err := getLastNQueryResults(ctx, tenantIDs, q, limit)
			if err != nil {
				httpserver.Errorf(w, r, "%s", err)
				return
			}
			bb := blockResultPool.Get()
			b := bb.B
			for i := range rows {
				b = logstorage.MarshalFieldsToJSON(b[:0], rows[i].fields)
				b = append(b, '\n')
				bw.WriteIgnoreErrors(b)
			}
			bb.B = b
			blockResultPool.Put(bb)
			return
		}

		q.AddPipeLimit(uint64(limit))
	}
	q.Optimize()

	writeBlock := func(_ uint, timestamps []int64, columns []logstorage.BlockColumn) {
		if len(columns) == 0 || len(columns[0].Values) == 0 {
			return
		}

		bb := blockResultPool.Get()
		for i := range timestamps {
			WriteJSONRow(bb, columns, i)
		}
		bw.WriteIgnoreErrors(bb.B)
		blockResultPool.Put(bb)
	}

	if err := vlstorage.RunQuery(ctx, tenantIDs, q, writeBlock); err != nil {
		httpserver.Errorf(w, r, "cannot execute query [%s]: %s", q, err)
		return
	}
}

var blockResultPool bytesutil.ByteBufferPool

type row struct {
	timestamp int64
	fields    []logstorage.Field
}

func getLastNQueryResults(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit int) ([]row, error) {
	limitUpper := 2 * limit
	q.AddPipeLimit(uint64(limitUpper))
	q.Optimize()
	rows, err := getQueryResultsWithLimit(ctx, tenantIDs, q, limitUpper)
	if err != nil {
		return nil, err
	}
	if len(rows) < limitUpper {
		// Fast path - the requested time range contains up to limitUpper rows.
		rows = getLastNRows(rows, limit)
		return rows, nil
	}

	// Slow path - search for the time range containing up to limitUpper rows.
	start, end := q.GetFilterTimeRange()
	d := end/2 - start/2
	start += d

	qOrig := q
	for {
		q = qOrig.Clone()
		q.AddTimeFilter(start, end)
		rows, err := getQueryResultsWithLimit(ctx, tenantIDs, q, limitUpper)
		if err != nil {
			return nil, err
		}

		if len(rows) >= limit && len(rows) < limitUpper || d == 0 {
			rows = getLastNRows(rows, limit)
			return rows, nil
		}

		lastBit := d & 1
		d /= 2
		if len(rows) > limit {
			start += d
		} else {
			start -= d + lastBit
		}
	}
}

func getLastNRows(rows []row, limit int) []row {
	sort.Slice(rows, func(i, j int) bool {
		return rows[i].timestamp < rows[j].timestamp
	})
	if len(rows) > limit {
		rows = rows[len(rows)-limit:]
	}
	return rows
}

func getQueryResultsWithLimit(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit int) ([]row, error) {
	ctxWithCancel, cancel := context.WithCancel(ctx)
	defer cancel()

	var rows []row
	var rowsLock sync.Mutex
	writeBlock := func(_ uint, timestamps []int64, columns []logstorage.BlockColumn) {
		rowsLock.Lock()
		defer rowsLock.Unlock()

		for i, timestamp := range timestamps {
			fields := make([]logstorage.Field, len(columns))
			for j := range columns {
				f := &fields[j]
				f.Name = strings.Clone(columns[j].Name)
				f.Value = strings.Clone(columns[j].Values[i])
			}
			rows = append(rows, row{
				timestamp: timestamp,
				fields:    fields,
			})
		}

		if len(rows) >= limit {
			cancel()
		}
	}
	if err := vlstorage.RunQuery(ctxWithCancel, tenantIDs, q, writeBlock); err != nil {
		return nil, err
	}

	return rows, nil
}
func parseCommonArgs(r *http.Request) (*logstorage.Query, []logstorage.TenantID, error) {
|
||||
var (
|
||||
maxSortBufferSize = flagutil.NewBytes("select.maxSortBufferSize", 1024*1024, "Query results from /select/logsql/query are automatically sorted by _time "+
|
||||
"if their summary size doesn't exceed this value; otherwise, query results are streamed in the response without sorting; "+
|
||||
"too big value for this flag may result in high memory usage since the sorting is performed in memory")
|
||||
)
|
||||
|
||||
// ProcessQueryRequest handles /select/logsql/query request
|
||||
func ProcessQueryRequest(w http.ResponseWriter, r *http.Request, stopCh <-chan struct{}, cancel func()) {
|
||||
// Extract tenantID
|
||||
tenantID, err := logstorage.GetTenantIDFromRequest(r)
|
||||
if err != nil {
|
||||
return nil, nil, fmt.Errorf("cannot obtain tenanID: %w", err)
|
||||
httpserver.Errorf(w, r, "%s", err)
|
||||
return
|
||||
}
|
||||
limit, err := httputils.GetInt(r, "limit")
|
||||
if err != nil {
|
||||
httpserver.Errorf(w, r, "%s", err)
|
||||
return
|
||||
}
|
||||
tenantIDs := []logstorage.TenantID{tenantID}
|
||||
|
||||
// Parse query
|
||||
qStr := r.FormValue("query")
|
||||
q, err := logstorage.ParseQuery(qStr)
|
||||
if err != nil {
|
||||
return nil, nil, fmt.Errorf("cannot parse query [%s]: %s", qStr, err)
|
||||
httpserver.Errorf(w, r, "cannot parse query [%s]: %s", qStr, err)
|
||||
return
|
||||
}
|
||||
w.Header().Set("Content-Type", "application/stream+json; charset=utf-8")
|
||||
|
||||
// Parse optional start and end args
|
||||
start, okStart, err := getTimeNsec(r, "start")
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
end, okEnd, err := getTimeNsec(r, "end")
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
if okStart || okEnd {
|
||||
if !okStart {
|
||||
start = math.MinInt64
|
||||
sw := getSortWriter()
|
||||
sw.Init(w, maxSortBufferSize.IntN(), limit)
|
||||
tenantIDs := []logstorage.TenantID{tenantID}
|
||||
vlstorage.RunQuery(tenantIDs, q, stopCh, func(columns []logstorage.BlockColumn) {
|
||||
if len(columns) == 0 {
|
||||
return
|
||||
}
|
||||
if !okEnd {
|
||||
end = math.MaxInt64
|
||||
}
|
||||
q.AddTimeFilter(start, end)
|
||||
}
|
||||
rowsCount := len(columns[0].Values)
|
||||
|
||||
return q, tenantIDs, nil
|
||||
// skip entries with empty _stream column
|
||||
// _stream is empty in case indexdb entry was not flushed to the storage yet
|
||||
// skipping such entries makes the result more consistent
|
||||
streamCol := 0
|
||||
|
||||
// fast path
|
||||
// _stream column is a built-in column and it is always supposed to be at the same position
|
||||
if len(columns) >= 2 && columns[1].Name == "_stream" {
|
||||
streamCol = 1
|
||||
} else {
|
||||
for i := 1; i < len(columns); i++ {
|
||||
if columns[i].Name == "_stream" {
|
||||
streamCol = i
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
bb := blockResultPool.Get()
|
||||
for rowIdx := 0; rowIdx < rowsCount; rowIdx++ {
|
||||
if columns[streamCol].Values[rowIdx] == "" {
|
||||
continue
|
||||
}
|
||||
WriteJSONRow(bb, columns, rowIdx)
|
||||
}
|
||||
|
||||
if !sw.TryWrite(bb.B) {
|
||||
cancel()
|
||||
}
|
||||
|
||||
blockResultPool.Put(bb)
|
||||
})
|
||||
sw.FinalFlush()
|
||||
putSortWriter(sw)
|
||||
}
|
||||
|
||||
func getTimeNsec(r *http.Request, argName string) (int64, bool, error) {
|
||||
s := r.FormValue(argName)
|
||||
if s == "" {
|
||||
return 0, false, nil
|
||||
}
|
||||
currentTimestamp := time.Now().UnixNano()
|
||||
nsecs, err := promutils.ParseTimeAt(s, currentTimestamp)
|
||||
if err != nil {
|
||||
return 0, false, fmt.Errorf("cannot parse %s=%s: %w", argName, s, err)
|
||||
}
|
||||
return nsecs, true, nil
|
||||
}
|
||||
var blockResultPool bytesutil.ByteBufferPool
|
||||
|
||||
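A minimal sketch of the start/end parsing above, assuming only what the code itself shows about promutils.ParseTimeAt (the arg is resolved against the supplied current time and returned as Unix nanoseconds; the RFC3339 example value is an assumption, since the exact set of accepted formats lives in lib/promutils):

    // reference point for resolving relative time values
    now := time.Now().UnixNano()
    // "2024-01-02T15:04:05Z" is an assumed example value for the query arg
    nsecs, err := promutils.ParseTimeAt("2024-01-02T15:04:05Z", now)
    if err != nil {
        // getTimeNsec wraps such errors with the arg name, as shown above
    }
    _ = nsecs // Unix timestamp in nanoseconds
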
@@ -1,32 +0,0 @@
{% import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
) %}

{% stripspace %}

// ValuesWithHitsJSON generates JSON from the given values.
{% func ValuesWithHitsJSON(values []logstorage.ValueWithHits) %}
{
	"values":{%= valuesWithHitsJSONArray(values) %}
}
{% endfunc %}

{% func valuesWithHitsJSONArray(values []logstorage.ValueWithHits) %}
[
	{% if len(values) > 0 %}
		{%= valueWithHitsJSON(values[0]) %}
		{% for _, v := range values[1:] %}
			,{%= valueWithHitsJSON(v) %}
		{% endfor %}
	{% endif %}
]
{% endfunc %}

{% func valueWithHitsJSON(v logstorage.ValueWithHits) %}
{
	"value":{%q= v.Value %},
	"hits":{%dul= v.Hits %}
}
{% endfunc %}

{% endstripspace %}
@@ -1,152 +0,0 @@
// Code generated by qtc from "logsql.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.

//line app/vlselect/logsql/logsql.qtpl:1
package logsql

//line app/vlselect/logsql/logsql.qtpl:1
import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

// ValuesWithHitsJSON generates JSON from the given values.

//line app/vlselect/logsql/logsql.qtpl:8
import (
	qtio422016 "io"

	qt422016 "github.com/valyala/quicktemplate"
)

//line app/vlselect/logsql/logsql.qtpl:8
var (
	_ = qtio422016.Copy
	_ = qt422016.AcquireByteBuffer
)

//line app/vlselect/logsql/logsql.qtpl:8
func StreamValuesWithHitsJSON(qw422016 *qt422016.Writer, values []logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:8
	qw422016.N().S(`{"values":`)
//line app/vlselect/logsql/logsql.qtpl:10
	streamvaluesWithHitsJSONArray(qw422016, values)
//line app/vlselect/logsql/logsql.qtpl:10
	qw422016.N().S(`}`)
//line app/vlselect/logsql/logsql.qtpl:12
}

//line app/vlselect/logsql/logsql.qtpl:12
func WriteValuesWithHitsJSON(qq422016 qtio422016.Writer, values []logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:12
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/logsql.qtpl:12
	StreamValuesWithHitsJSON(qw422016, values)
//line app/vlselect/logsql/logsql.qtpl:12
	qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/logsql.qtpl:12
}

//line app/vlselect/logsql/logsql.qtpl:12
func ValuesWithHitsJSON(values []logstorage.ValueWithHits) string {
//line app/vlselect/logsql/logsql.qtpl:12
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/logsql.qtpl:12
	WriteValuesWithHitsJSON(qb422016, values)
//line app/vlselect/logsql/logsql.qtpl:12
	qs422016 := string(qb422016.B)
//line app/vlselect/logsql/logsql.qtpl:12
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/logsql.qtpl:12
	return qs422016
//line app/vlselect/logsql/logsql.qtpl:12
}

//line app/vlselect/logsql/logsql.qtpl:14
func streamvaluesWithHitsJSONArray(qw422016 *qt422016.Writer, values []logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:14
	qw422016.N().S(`[`)
//line app/vlselect/logsql/logsql.qtpl:16
	if len(values) > 0 {
//line app/vlselect/logsql/logsql.qtpl:17
		streamvalueWithHitsJSON(qw422016, values[0])
//line app/vlselect/logsql/logsql.qtpl:18
		for _, v := range values[1:] {
//line app/vlselect/logsql/logsql.qtpl:18
			qw422016.N().S(`,`)
//line app/vlselect/logsql/logsql.qtpl:19
			streamvalueWithHitsJSON(qw422016, v)
//line app/vlselect/logsql/logsql.qtpl:20
		}
//line app/vlselect/logsql/logsql.qtpl:21
	}
//line app/vlselect/logsql/logsql.qtpl:21
	qw422016.N().S(`]`)
//line app/vlselect/logsql/logsql.qtpl:23
}

//line app/vlselect/logsql/logsql.qtpl:23
func writevaluesWithHitsJSONArray(qq422016 qtio422016.Writer, values []logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:23
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/logsql.qtpl:23
	streamvaluesWithHitsJSONArray(qw422016, values)
//line app/vlselect/logsql/logsql.qtpl:23
	qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/logsql.qtpl:23
}

//line app/vlselect/logsql/logsql.qtpl:23
func valuesWithHitsJSONArray(values []logstorage.ValueWithHits) string {
//line app/vlselect/logsql/logsql.qtpl:23
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/logsql.qtpl:23
	writevaluesWithHitsJSONArray(qb422016, values)
//line app/vlselect/logsql/logsql.qtpl:23
	qs422016 := string(qb422016.B)
//line app/vlselect/logsql/logsql.qtpl:23
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/logsql.qtpl:23
	return qs422016
//line app/vlselect/logsql/logsql.qtpl:23
}

//line app/vlselect/logsql/logsql.qtpl:25
func streamvalueWithHitsJSON(qw422016 *qt422016.Writer, v logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:25
	qw422016.N().S(`{"value":`)
//line app/vlselect/logsql/logsql.qtpl:27
	qw422016.N().Q(v.Value)
//line app/vlselect/logsql/logsql.qtpl:27
	qw422016.N().S(`,"hits":`)
//line app/vlselect/logsql/logsql.qtpl:28
	qw422016.N().DUL(v.Hits)
//line app/vlselect/logsql/logsql.qtpl:28
	qw422016.N().S(`}`)
//line app/vlselect/logsql/logsql.qtpl:30
}

//line app/vlselect/logsql/logsql.qtpl:30
func writevalueWithHitsJSON(qq422016 qtio422016.Writer, v logstorage.ValueWithHits) {
//line app/vlselect/logsql/logsql.qtpl:30
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vlselect/logsql/logsql.qtpl:30
	streamvalueWithHitsJSON(qw422016, v)
//line app/vlselect/logsql/logsql.qtpl:30
	qt422016.ReleaseWriter(qw422016)
//line app/vlselect/logsql/logsql.qtpl:30
}

//line app/vlselect/logsql/logsql.qtpl:30
func valueWithHitsJSON(v logstorage.ValueWithHits) string {
//line app/vlselect/logsql/logsql.qtpl:30
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vlselect/logsql/logsql.qtpl:30
	writevalueWithHitsJSON(qb422016, v)
//line app/vlselect/logsql/logsql.qtpl:30
	qs422016 := string(qb422016.B)
//line app/vlselect/logsql/logsql.qtpl:30
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vlselect/logsql/logsql.qtpl:30
	return qs422016
//line app/vlselect/logsql/logsql.qtpl:30
}
290
app/vlselect/logsql/sort_writer.go
Normal file
@@ -0,0 +1,290 @@
package logsql

import (
	"bytes"
	"io"
	"sort"
	"sync"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logjson"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)

func getSortWriter() *sortWriter {
	v := sortWriterPool.Get()
	if v == nil {
		return &sortWriter{}
	}
	return v.(*sortWriter)
}

func putSortWriter(sw *sortWriter) {
	sw.reset()
	sortWriterPool.Put(sw)
}

var sortWriterPool sync.Pool

// sortWriter expects JSON line stream to be written to it.
//
// It buffers the incoming data until its size reaches maxBufLen.
// Then it streams the buffered data and all the incoming data to w.
//
// The FinalFlush() must be called when all the data is written.
// If the buf isn't empty at FinalFlush() call, then the buffered data
// is sorted by _time field.
type sortWriter struct {
	mu sync.Mutex
	w  io.Writer

	maxLines     int
	linesWritten int

	maxBufLen  int
	buf        []byte
	bufFlushed bool

	hasErr bool
}

func (sw *sortWriter) reset() {
	sw.w = nil

	sw.maxLines = 0
	sw.linesWritten = 0

	sw.maxBufLen = 0
	sw.buf = sw.buf[:0]
	sw.bufFlushed = false
	sw.hasErr = false
}

// Init initializes sw.
//
// If maxLines is set to positive value, then sw accepts up to maxLines
// and then rejects all the other lines by returning false from TryWrite.
func (sw *sortWriter) Init(w io.Writer, maxBufLen, maxLines int) {
	sw.reset()

	sw.w = w
	sw.maxBufLen = maxBufLen
	sw.maxLines = maxLines
}

// TryWrite writes p to sw.
//
// True is returned on successful write, false otherwise.
//
// Unsuccessful write may occur on underlying write error or when maxLines lines are already written to sw.
func (sw *sortWriter) TryWrite(p []byte) bool {
	sw.mu.Lock()
	defer sw.mu.Unlock()

	if sw.hasErr {
		return false
	}

	if sw.bufFlushed {
		if !sw.writeToUnderlyingWriterLocked(p) {
			sw.hasErr = true
			return false
		}
		return true
	}

	if len(sw.buf)+len(p) < sw.maxBufLen {
		sw.buf = append(sw.buf, p...)
		return true
	}

	sw.bufFlushed = true
	if !sw.writeToUnderlyingWriterLocked(sw.buf) {
		sw.hasErr = true
		return false
	}
	sw.buf = sw.buf[:0]

	if !sw.writeToUnderlyingWriterLocked(p) {
		sw.hasErr = true
		return false
	}
	return true
}

func (sw *sortWriter) writeToUnderlyingWriterLocked(p []byte) bool {
	if len(p) == 0 {
		return true
	}
	if sw.maxLines > 0 {
		if sw.linesWritten >= sw.maxLines {
			return false
		}
		var linesLeft int
		p, linesLeft = trimLines(p, sw.maxLines-sw.linesWritten)
		sw.linesWritten += linesLeft
	}
	if _, err := sw.w.Write(p); err != nil {
		return false
	}
	return true
}

func trimLines(p []byte, maxLines int) ([]byte, int) {
	if maxLines <= 0 {
		return nil, 0
	}
	n := bytes.Count(p, newline)
	if n < maxLines {
		return p, n
	}
	for n >= maxLines {
		idx := bytes.LastIndexByte(p, '\n')
		p = p[:idx]
		n--
	}
	return p[:len(p)+1], maxLines
}

var newline = []byte("\n")

func (sw *sortWriter) FinalFlush() {
	if sw.hasErr || sw.bufFlushed {
		return
	}

	rs := getRowsSorter()
	rs.parseRows(sw.buf)
	rs.sort()

	rows := rs.rows
	if sw.maxLines > 0 && len(rows) > sw.maxLines {
		rows = rows[:sw.maxLines]
	}
	WriteJSONRows(sw.w, rows)

	putRowsSorter(rs)
}

func getRowsSorter() *rowsSorter {
	v := rowsSorterPool.Get()
	if v == nil {
		return &rowsSorter{}
	}
	return v.(*rowsSorter)
}

func putRowsSorter(rs *rowsSorter) {
	rs.reset()
	rowsSorterPool.Put(rs)
}

var rowsSorterPool sync.Pool

type rowsSorter struct {
	buf       []byte
	fieldsBuf []logstorage.Field
	rows      [][]logstorage.Field
	times     []string
}

func (rs *rowsSorter) reset() {
	rs.buf = rs.buf[:0]

	fieldsBuf := rs.fieldsBuf
	for i := range fieldsBuf {
		fieldsBuf[i].Reset()
	}
	rs.fieldsBuf = fieldsBuf[:0]

	rows := rs.rows
	for i := range rows {
		rows[i] = nil
	}
	rs.rows = rows[:0]

	times := rs.times
	for i := range times {
		times[i] = ""
	}
	rs.times = times[:0]
}

func (rs *rowsSorter) parseRows(src []byte) {
	rs.reset()

	buf := rs.buf
	fieldsBuf := rs.fieldsBuf
	rows := rs.rows
	times := rs.times

	p := logjson.GetParser()
	for len(src) > 0 {
		var line []byte
		n := bytes.IndexByte(src, '\n')
		if n < 0 {
			line = src
			src = nil
		} else {
			line = src[:n]
			src = src[n+1:]
		}
		if len(line) == 0 {
			continue
		}

		if err := p.ParseLogMessage(line); err != nil {
			logger.Panicf("BUG: unexpected invalid JSON line: %s", err)
		}

		timeValue := ""
		fieldsBufLen := len(fieldsBuf)
		for _, f := range p.Fields {
			bufLen := len(buf)
			buf = append(buf, f.Name...)
			name := bytesutil.ToUnsafeString(buf[bufLen:])

			bufLen = len(buf)
			buf = append(buf, f.Value...)
			value := bytesutil.ToUnsafeString(buf[bufLen:])

			fieldsBuf = append(fieldsBuf, logstorage.Field{
				Name:  name,
				Value: value,
			})

			if name == "_time" {
				timeValue = value
			}
		}
		rows = append(rows, fieldsBuf[fieldsBufLen:])
		times = append(times, timeValue)
	}
	logjson.PutParser(p)

	rs.buf = buf
	rs.fieldsBuf = fieldsBuf
	rs.rows = rows
	rs.times = times
}

func (rs *rowsSorter) Len() int {
	return len(rs.rows)
}

func (rs *rowsSorter) Less(i, j int) bool {
	times := rs.times
	return times[i] < times[j]
}

func (rs *rowsSorter) Swap(i, j int) {
	times := rs.times
	rows := rs.rows
	times[i], times[j] = times[j], times[i]
	rows[i], rows[j] = rows[j], rows[i]
}

func (rs *rowsSorter) sort() {
	sort.Sort(rs)
}
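A minimal usage sketch of the sortWriter lifecycle above (Init, repeated TryWrite calls carrying newline-terminated JSON rows, FinalFlush); the values are illustrative, and the new test file below exercises the same flow end to end:

    var bb bytes.Buffer
    sw := getSortWriter()
    sw.Init(&bb, 1024*1024, 0) // 1 MiB sort buffer; maxLines=0 disables the line limit
    // While the buffered data stays under maxBufLen, FinalFlush sorts it by _time;
    // once the buffer overflows, rows are streamed to bb in arrival order instead.
    sw.TryWrite([]byte(`{"_time":"2","_msg":"late"}` + "\n"))
    sw.TryWrite([]byte(`{"_time":"1","_msg":"early"}` + "\n"))
    sw.FinalFlush() // bb now holds the "early" row before the "late" one
    putSortWriter(sw)
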
46
app/vlselect/logsql/sort_writer_test.go
Normal file
@@ -0,0 +1,46 @@
package logsql

import (
	"bytes"
	"strings"
	"testing"
)

func TestSortWriter(t *testing.T) {
	f := func(maxBufLen, maxLines int, data string, expectedResult string) {
		t.Helper()

		var bb bytes.Buffer
		sw := getSortWriter()
		sw.Init(&bb, maxBufLen, maxLines)
		for _, s := range strings.Split(data, "\n") {
			if !sw.TryWrite([]byte(s + "\n")) {
				break
			}
		}
		sw.FinalFlush()
		putSortWriter(sw)

		result := bb.String()
		if result != expectedResult {
			t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, expectedResult)
		}
	}

	f(100, 0, "", "")
	f(100, 0, "{}", "{}\n")

	data := `{"_time":"def","_msg":"xxx"}
{"_time":"abc","_msg":"foo"}`
	resultExpected := `{"_time":"abc","_msg":"foo"}
{"_time":"def","_msg":"xxx"}
`
	f(100, 0, data, resultExpected)
	f(10, 0, data, data+"\n")

	// Test with the maxLines
	f(100, 1, data, `{"_time":"abc","_msg":"foo"}`+"\n")
	f(10, 1, data, `{"_time":"def","_msg":"xxx"}`+"\n")
	f(10, 2, data, data+"\n")
	f(100, 2, data, resultExpected)
}
@@ -14,6 +14,7 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
	"github.com/VictoriaMetrics/metrics"
)

@@ -23,7 +24,7 @@ var (
	"See also -search.maxQueueDuration")
	maxQueueDuration = flag.Duration("search.maxQueueDuration", 10*time.Second, "The maximum time the search request waits for execution when -search.maxConcurrentRequests "+
		"limit is reached; see also -search.maxQueryDuration")
	maxQueryDuration = flag.Duration("search.maxQueryDuration", time.Second*30, "The maximum duration for query execution. It can be overridden on a per-query basis via 'timeout' query arg")
	maxQueryDuration = flag.Duration("search.maxQueryDuration", time.Second*30, "The maximum duration for query execution")
)

func getDefaultMaxConcurrentRequests() int {
@@ -75,9 +76,10 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
		// Skip requests, which do not start with /select/, since these aren't our requests.
		return false
	}
	path = strings.TrimPrefix(path, "/select")
	path = strings.ReplaceAll(path, "//", "/")

	if path == "/select/vmui" {
	if path == "/vmui" {
		// VMUI access via incomplete url without `/` in the end. Redirect to complete url.
		// Use relative redirect, since the hostname and path prefix may be incorrect if VictoriaMetrics
		// is hidden behind vmauth or similar proxy.
@@ -86,128 +88,68 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
		httpserver.Redirect(w, newURL)
		return true
	}
	if strings.HasPrefix(path, "/select/vmui/") {
	if strings.HasPrefix(path, "/select/vmui/static/") {
	if strings.HasPrefix(path, "/vmui/") {
		if strings.HasPrefix(path, "/vmui/static/") {
			// Allow clients caching static contents for long period of time, since it shouldn't change over time.
			// Path to static contents (such as js and css) must be changed whenever its contents is changed.
			// See https://developer.chrome.com/docs/lighthouse/performance/uses-long-cache-ttl/
			w.Header().Set("Cache-Control", "max-age=31536000")
		}
		r.URL.Path = strings.TrimPrefix(path, "/select")
		r.URL.Path = path
		vmuiFileServer.ServeHTTP(w, r)
		return true
	}

	// Limit the number of concurrent queries, which can consume big amounts of CPU time.
	// Limit the number of concurrent queries, which can consume big amounts of CPU.
	startTime := time.Now()
	ctx := r.Context()
	d := getMaxQueryDuration(r)
	ctxWithTimeout, cancel := context.WithTimeout(ctx, d)
	defer cancel()

	stopCh := ctxWithTimeout.Done()
	stopCh := ctx.Done()
	select {
	case concurrencyLimitCh <- struct{}{}:
		defer func() { <-concurrencyLimitCh }()
	default:
		// Sleep for a while until giving up. This should resolve short bursts in requests.
		concurrencyLimitReached.Inc()
		d := getMaxQueryDuration(r)
		if d > *maxQueueDuration {
			d = *maxQueueDuration
		}
		t := timerpool.Get(d)
		select {
		case concurrencyLimitCh <- struct{}{}:
			timerpool.Put(t)
			defer func() { <-concurrencyLimitCh }()
		case <-stopCh:
			switch ctxWithTimeout.Err() {
			case context.Canceled:
				remoteAddr := httpserver.GetQuotedRemoteAddr(r)
				requestURI := httpserver.GetRequestURI(r)
				logger.Infof("client has canceled the pending request after %.3f seconds: remoteAddr=%s, requestURI: %q",
					time.Since(startTime).Seconds(), remoteAddr, requestURI)
			case context.DeadlineExceeded:
				concurrencyLimitTimeout.Inc()
				err := &httpserver.ErrorWithStatusCode{
					Err: fmt.Errorf("couldn't start executing the request in %.3f seconds, since -search.maxConcurrentRequests=%d concurrent requests "+
						"are executed. Possible solutions: to reduce query load; to add more compute resources to the server; "+
						"to increase -search.maxQueueDuration=%s; to increase -search.maxQueryDuration=%s; to increase -search.maxConcurrentRequests; "+
						"to pass bigger value to 'timeout' query arg",
						d.Seconds(), *maxConcurrentRequests, maxQueueDuration, maxQueryDuration),
					StatusCode: http.StatusServiceUnavailable,
				}
				httpserver.Errorf(w, r, "%s", err)
			timerpool.Put(t)
			remoteAddr := httpserver.GetQuotedRemoteAddr(r)
			requestURI := httpserver.GetRequestURI(r)
			logger.Infof("client has cancelled the request after %.3f seconds: remoteAddr=%s, requestURI: %q",
				time.Since(startTime).Seconds(), remoteAddr, requestURI)
			return true
		case <-t.C:
			timerpool.Put(t)
			concurrencyLimitTimeout.Inc()
			err := &httpserver.ErrorWithStatusCode{
				Err: fmt.Errorf("couldn't start executing the request in %.3f seconds, since -search.maxConcurrentRequests=%d concurrent requests "+
					"are executed. Possible solutions: to reduce query load; to add more compute resources to the server; "+
					"to increase -search.maxQueueDuration=%s; to increase -search.maxQueryDuration; to increase -search.maxConcurrentRequests",
					d.Seconds(), *maxConcurrentRequests, maxQueueDuration),
				StatusCode: http.StatusServiceUnavailable,
			}
			httpserver.Errorf(w, r, "%s", err)
			return true
		}
	}

	if path == "/select/logsql/tail" {
		logsqlTailRequests.Inc()
		// Process live tailing request without timeout (e.g. use ctx instead of ctxWithTimeout),
		// since it is OK to run live tailing requests for very long time.
		logsql.ProcessLiveTailRequest(ctx, w, r)
		return true
	}
	ctxWithCancel, cancel := context.WithCancel(ctx)
	defer cancel()
	stopCh = ctxWithCancel.Done()

	ok := processSelectRequest(ctxWithTimeout, w, r, path)
	if !ok {
		return false
	}

	err := ctxWithTimeout.Err()
	switch err {
	case nil:
		// nothing to do
	case context.Canceled:
		remoteAddr := httpserver.GetQuotedRemoteAddr(r)
		requestURI := httpserver.GetRequestURI(r)
		logger.Infof("client has canceled the request after %.3f seconds: remoteAddr=%s, requestURI: %q",
			time.Since(startTime).Seconds(), remoteAddr, requestURI)
	case context.DeadlineExceeded:
		err = &httpserver.ErrorWithStatusCode{
			Err: fmt.Errorf("the request couldn't be executed in %.3f seconds; possible solutions: "+
				"to increase -search.maxQueryDuration=%s; to pass bigger value to 'timeout' query arg", d.Seconds(), maxQueryDuration),
			StatusCode: http.StatusServiceUnavailable,
		}
		httpserver.Errorf(w, r, "%s", err)
	default:
		httpserver.Errorf(w, r, "unexpected error: %s", err)
	}

	return true
}

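The select statement above is a bounded-concurrency gate: the buffered concurrencyLimitCh channel acts as a semaphore, the fast path grabs a free slot immediately, and the slow path waits on a pooled timer before giving up with 503. A stripped-down sketch of the same pattern, using a plain time.Timer instead of VictoriaMetrics' timerpool (maxConcurrent and maxQueueDuration are stand-in names):

    concurrencyLimitCh := make(chan struct{}, maxConcurrent) // one slot per in-flight query
    select {
    case concurrencyLimitCh <- struct{}{}: // fast path: a slot is free right now
        defer func() { <-concurrencyLimitCh }()
    default:
        // All slots are busy; wait up to maxQueueDuration for one to free up.
        t := time.NewTimer(maxQueueDuration)
        select {
        case concurrencyLimitCh <- struct{}{}:
            t.Stop()
            defer func() { <-concurrencyLimitCh }()
        case <-t.C:
            // Queue timeout: report 503 Service Unavailable, as the handler above does.
        }
    }
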
func processSelectRequest(ctx context.Context, w http.ResponseWriter, r *http.Request, path string) bool {
	httpserver.EnableCORS(w, r)
	switch path {
	case "/select/logsql/field_names":
		logsqlFieldNamesRequests.Inc()
		logsql.ProcessFieldNamesRequest(ctx, w, r)
		return true
	case "/select/logsql/field_values":
		logsqlFieldValuesRequests.Inc()
		logsql.ProcessFieldValuesRequest(ctx, w, r)
		return true
	case "/select/logsql/hits":
		logsqlHitsRequests.Inc()
		logsql.ProcessHitsRequest(ctx, w, r)
		return true
	case "/select/logsql/query":
	switch {
	case path == "/logsql/query":
		logsqlQueryRequests.Inc()
		logsql.ProcessQueryRequest(ctx, w, r)
		return true
	case "/select/logsql/stream_field_names":
		logsqlStreamFieldNamesRequests.Inc()
		logsql.ProcessStreamFieldNamesRequest(ctx, w, r)
		return true
	case "/select/logsql/stream_field_values":
		logsqlStreamFieldValuesRequests.Inc()
		logsql.ProcessStreamFieldValuesRequest(ctx, w, r)
		return true
	case "/select/logsql/stream_ids":
		logsqlStreamIDsRequests.Inc()
		logsql.ProcessStreamIDsRequest(ctx, w, r)
		return true
	case "/select/logsql/streams":
		logsqlStreamsRequests.Inc()
		logsql.ProcessStreamsRequest(ctx, w, r)
		httpserver.EnableCORS(w, r)
		logsql.ProcessQueryRequest(w, r, stopCh, cancel)
		return true
	default:
		return false
@@ -228,13 +170,5 @@ func getMaxQueryDuration(r *http.Request) time.Duration {
}

var (
	logsqlFieldNamesRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/field_names"}`)
	logsqlFieldValuesRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/field_values"}`)
	logsqlHitsRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/hits"}`)
	logsqlQueryRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/query"}`)
	logsqlStreamFieldNamesRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/stream_field_names"}`)
	logsqlStreamFieldValuesRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/stream_field_values"}`)
	logsqlStreamIDsRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/stream_ids"}`)
	logsqlStreamsRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/streams"}`)
	logsqlTailRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/tail"}`)
	logsqlQueryRequests = metrics.NewCounter(`vl_http_requests_total{path="/select/logsql/query"}`)
)

@@ -1,13 +1,13 @@
{
  "files": {
    "main.css": "./static/css/main.1041c3d4.css",
    "main.js": "./static/js/main.62a82f4a.js",
    "main.css": "./static/css/main.bc07cc78.css",
    "main.js": "./static/js/main.034044a7.js",
    "static/js/685.bebe1265.chunk.js": "./static/js/685.bebe1265.chunk.js",
    "static/media/MetricsQL.md": "./static/media/MetricsQL.aaabf95f2c9bf356bde4.md",
    "static/media/MetricsQL.md": "./static/media/MetricsQL.10add6e7bdf0f1d98cf7.md",
    "index.html": "./index.html"
  },
  "entrypoints": [
    "static/css/main.1041c3d4.css",
    "static/js/main.62a82f4a.js"
    "static/css/main.bc07cc78.css",
    "static/js/main.034044a7.js"
  ]
}
@@ -1 +1 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5"/><meta name="theme-color" content="#000000"/><meta name="description" content="UI for VictoriaMetrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><script src="./dashboards/index.js" type="module"></script><meta name="twitter:card" content="summary_large_image"><meta name="twitter:image" content="./preview.jpg"><meta name="twitter:title" content="UI for VictoriaMetrics"><meta name="twitter:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta name="twitter:site" content="@VictoriaMetrics"><meta property="og:title" content="Metric explorer for VictoriaMetrics"><meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta property="og:image" content="./preview.jpg"><meta property="og:type" content="website"><script defer="defer" src="./static/js/main.62a82f4a.js"></script><link href="./static/css/main.1041c3d4.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5"/><meta name="theme-color" content="#000000"/><meta name="description" content="UI for VictoriaMetrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><script src="./dashboards/index.js" type="module"></script><meta name="twitter:card" content="summary_large_image"><meta name="twitter:image" content="./preview.jpg"><meta name="twitter:title" content="UI for VictoriaMetrics"><meta name="twitter:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta name="twitter:site" content="@VictoriaMetrics"><meta property="og:title" content="Metric explorer for VictoriaMetrics"><meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta property="og:image" content="./preview.jpg"><meta property="og:type" content="website"><script defer="defer" src="./static/js/main.034044a7.js"></script><link href="./static/css/main.bc07cc78.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
File diff suppressed because one or more lines are too long
1
app/vlselect/vmui/static/css/main.bc07cc78.css
Normal file
File diff suppressed because one or more lines are too long
2
app/vlselect/vmui/static/js/main.034044a7.js
Normal file
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -22,7 +22,7 @@ However, there are some [intentional differences](https://medium.com/@romanhavro
[Standalone MetricsQL package](https://godoc.org/github.com/VictoriaMetrics/metricsql) can be used for parsing MetricsQL in external apps.

If you are unfamiliar with PromQL, then it is suggested reading [this tutorial for beginners](https://medium.com/@valyala/promql-tutorial-for-beginners-9ab455142085)
and introduction into [basic querying via MetricsQL](https://docs.victoriametrics.com/keyconcepts/#metricsql).
and introduction into [basic querying via MetricsQL](https://docs.victoriametrics.com/keyConcepts.html#metricsql).

The following functionality is implemented differently in MetricsQL compared to PromQL. This improves user experience:

@@ -70,17 +70,15 @@ The list of MetricsQL features on top of PromQL:
VictoriaMetrics can be used as Graphite datasource in Grafana. See [these docs](https://docs.victoriametrics.com/#graphite-api-usage) for details.
See also [label_graphite_group](#label_graphite_group) function, which can be used for extracting the given groups from Graphite metric name.
* Lookbehind window in square brackets for [rollup functions](#rollup-functions) may be omitted. VictoriaMetrics automatically selects the lookbehind window
depending on the `step` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query)
depending on the `step` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query)
and the real interval between [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) (aka `scrape_interval`).
For instance, the following query is valid in VictoriaMetrics: `rate(node_network_receive_bytes_total)`.
It is roughly equivalent to `rate(node_network_receive_bytes_total[$__interval])` when used in Grafana.
The difference is documented in [rate() docs](#rate).
* Numeric values can contain `_` delimiters for better readability. For example, `1_234_567_890` can be used in queries instead of `1234567890`.
* [Series selectors](https://docs.victoriametrics.com/keyconcepts/#filtering) accept multiple `or` filters. For example, `{env="prod",job="a" or env="dev",job="b"}`
* [Series selectors](https://docs.victoriametrics.com/keyConcepts.html#filtering) accept multiple `or` filters. For example, `{env="prod",job="a" or env="dev",job="b"}`
selects series with `{env="prod",job="a"}` or `{env="dev",job="b"}` labels.
See [these docs](https://docs.victoriametrics.com/keyconcepts/#filtering-by-multiple-or-filters) for details.
* Support for matching against multiple numeric constants via `q == (C1, ..., CN)` and `q != (C1, ..., CN)` syntax. For example, `status_code == (300, 301, 304)`
returns `status_code` metrics with one of `300`, `301` or `304` values.
See [these docs](https://docs.victoriametrics.com/keyConcepts.html#filtering-by-multiple-or-filters) for details.
* Support for `group_left(*)` and `group_right(*)` for copying all the labels from time series on the `one` side
of [many-to-one operations](https://prometheus.io/docs/prometheus/latest/querying/operators/#many-to-one-and-one-to-many-vector-matches).
The copied label names may clash with the existing label names, so MetricsQL provides an ability to add prefix to the copied metric names
@@ -107,7 +105,7 @@ The list of MetricsQL features on top of PromQL:
* Trailing commas on all the lists are allowed - label filters, function args and with expressions.
For instance, the following queries are valid: `m{foo="bar",}`, `f(a, b,)`, `WITH (x=y,) x`.
This simplifies maintenance of multi-line queries.
* Metric names and label names may contain any unicode letter. For example `температура{город="Київ"}` is a valid MetricsQL expression.
* Metric names and label names may contain any unicode letter. For example `температура{город="Київ"}` is a value MetricsQL expression.
* Metric names and labels names may contain escaped chars. For example, `foo\-bar{baz\=aa="b"}` is valid expression.
It returns time series with name `foo-bar` containing label `baz=aa` with value `b`.
Additionally, the following escape sequences are supported:
@@ -154,27 +152,27 @@ MetricsQL provides the following functions:

### Rollup functions

**Rollup functions** (aka range functions or window functions) calculate rollups over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window for the [selected time series](https://docs.victoriametrics.com/keyconcepts/#filtering).
For example, `avg_over_time(temperature[24h])` calculates the average temperature over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) for the last 24 hours.
**Rollup functions** (aka range functions or window functions) calculate rollups over **raw samples**
on the given lookbehind window for the [selected time series](https://docs.victoriametrics.com/keyConcepts.html#filtering).
For example, `avg_over_time(temperature[24h])` calculates the average temperature over raw samples for the last 24 hours.

Additional details:

* If rollup functions are used for building graphs in Grafana, then the rollup is calculated independently per each point on the graph.
For example, every point for `avg_over_time(temperature[24h])` graph shows the average temperature for the last 24 hours ending at this point.
The interval between points is set as `step` query arg passed by Grafana to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query).
* If the given [series selector](https://docs.victoriametrics.com/keyconcepts/#filtering) returns multiple time series,
The interval between points is set as `step` query arg passed by Grafana to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query).
* If the given [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) returns multiple time series,
then rollups are calculated individually per each returned series.
* If lookbehind window in square brackets is missing, then it is automatically set to the following value:
- To `step` value passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query) or [/api/v1/query](https://docs.victoriametrics.com/keyconcepts/#instant-query)
- To `step` value passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query) or [/api/v1/query](https://docs.victoriametrics.com/keyconcepts/#instant-query)
for all the [rollup functions](#rollup-functions) except of [default_rollup](#default_rollup) and [rate](#rate). This value is known as `$__interval` in Grafana or `1i` in MetricsQL.
For example, `avg_over_time(temperature)` is automatically transformed to `avg_over_time(temperature[1i])`.
- To the `max(step, scrape_interval)`, where `scrape_interval` is the interval between [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
for [default_rollup](#default_rollup) and [rate](#rate) functions. This allows avoiding unexpected gaps on the graph when `step` is smaller than `scrape_interval`.
* Every [series selector](https://docs.victoriametrics.com/keyconcepts/#filtering) in MetricsQL must be wrapped into a rollup function.
* Every [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) in MetricsQL must be wrapped into a rollup function.
Otherwise, it is automatically wrapped into [default_rollup](#default_rollup). For example, `foo{bar="baz"}`
is automatically converted to `default_rollup(foo{bar="baz"})` before performing the calculations.
* If something other than [series selector](https://docs.victoriametrics.com/keyconcepts/#filtering) is passed to rollup function,
* If something other than [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) is passed to rollup function,
then the inner arg is automatically converted to a [subquery](#subqueries).
* All the rollup functions accept optional `keep_metric_names` modifier. If it is set, then the function keeps metric names in results.
See [these docs](#keep_metric_names).

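The auto-window rules in the list above are easiest to see side by side. A few illustrative MetricsQL queries, derived directly from those rules (not present in the original file):

    rate(node_network_receive_bytes_total)    # window defaults to max(step, scrape_interval)
    avg_over_time(temperature)                # rewritten to avg_over_time(temperature[1i]), i.e. the step value
    foo{bar="baz"}                            # wrapped into default_rollup(foo{bar="baz"})
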
@@ -186,7 +184,7 @@ The list of supported rollup functions:
|
||||
#### absent_over_time
|
||||
|
||||
`absent_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns 1
|
||||
if the given lookbehind window `d` doesn't contain [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples). Otherwise, it returns an empty result.
|
||||
if the given lookbehind window `d` doesn't contain raw samples. Otherwise, it returns an empty result.
|
||||
|
||||
This function is supported by PromQL.
|
||||
|
||||
@@ -195,9 +193,9 @@ See also [present_over_time](#present_over_time).
|
||||
#### aggr_over_time
|
||||
|
||||
`aggr_over_time(("rollup_func1", "rollup_func2", ...), series_selector[d])` is a [rollup function](#rollup-functions),
|
||||
which calculates all the listed `rollup_func*` for [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d`.
|
||||
which calculates all the listed `rollup_func*` for raw samples on the given lookbehind window `d`.
|
||||
The calculations are performed individually per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
`rollup_func*` can contain any rollup function. For instance, `aggr_over_time(("min_over_time", "max_over_time", "rate"), m[d])`
|
||||
would calculate [min_over_time](#min_over_time), [max_over_time](#max_over_time) and [rate](#rate) for `m[d]`.
|
||||
@@ -205,8 +203,8 @@ would calculate [min_over_time](#min_over_time), [max_over_time](#max_over_time)
|
||||
#### ascent_over_time
|
||||
|
||||
`ascent_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates
|
||||
ascent of [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples) values on the given lookbehind window `d`. The calculations are performed individually
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
ascent of raw sample values on the given lookbehind window `d`. The calculations are performed individually
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
This function is useful for tracking height gains in GPS tracking. Metric names are stripped from the resulting rollups.
|
||||
|
||||
@@ -217,8 +215,8 @@ See also [descent_over_time](#descent_over_time).
|
||||
#### avg_over_time
|
||||
|
||||
`avg_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the average value
|
||||
over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d` per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
over raw samples on the given lookbehind window `d` per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
This function is supported by PromQL.
|
||||
|
||||
@@ -227,8 +225,8 @@ See also [median_over_time](#median_over_time).
|
||||
#### changes
|
||||
|
||||
`changes(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number of times
|
||||
the [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) changed on the given lookbehind window `d` per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
the raw samples changed on the given lookbehind window `d` per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Unlike `changes()` in Prometheus it takes into account the change from the last sample before the given lookbehind window `d`.
|
||||
See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details.
|
||||
@@ -242,8 +240,8 @@ See also [changes_prometheus](#changes_prometheus).
|
||||
#### changes_prometheus
|
||||
|
||||
`changes_prometheus(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number of times
|
||||
the [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) changed on the given lookbehind window `d` per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
the raw samples changed on the given lookbehind window `d` per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
It doesn't take into account the change from the last sample before the given lookbehind window `d` in the same way as Prometheus does.
|
||||
See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details.
|
||||
@@ -256,9 +254,9 @@ See also [changes](#changes).
|
||||
|
||||
#### count_eq_over_time
|
||||
|
||||
`count_eq_over_time(series_selector[d], eq)` is a [rollup function](#rollup-functions), which calculates the number of [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
`count_eq_over_time(series_selector[d], eq)` is a [rollup function](#rollup-functions), which calculates the number of raw samples
|
||||
on the given lookbehind window `d`, which are equal to `eq`. It is calculated independently per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -266,9 +264,9 @@ See also [count_over_time](#count_over_time), [share_eq_over_time](#share_eq_ove
|
||||
|
||||
#### count_gt_over_time
|
||||
|
||||
`count_gt_over_time(series_selector[d], gt)` is a [rollup function](#rollup-functions), which calculates the number of [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
`count_gt_over_time(series_selector[d], gt)` is a [rollup function](#rollup-functions), which calculates the number of raw samples
|
||||
on the given lookbehind window `d`, which are bigger than `gt`. It is calculated independently per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -276,9 +274,9 @@ See also [count_over_time](#count_over_time) and [share_gt_over_time](#share_gt_
|
||||
|
||||
#### count_le_over_time
|
||||
|
||||
`count_le_over_time(series_selector[d], le)` is a [rollup function](#rollup-functions), which calculates the number of [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
`count_le_over_time(series_selector[d], le)` is a [rollup function](#rollup-functions), which calculates the number of raw samples
|
||||
on the given lookbehind window `d`, which don't exceed `le`. It is calculated independently per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -286,9 +284,9 @@ See also [count_over_time](#count_over_time) and [share_le_over_time](#share_le_
|
||||
|
||||
#### count_ne_over_time
|
||||
|
||||
`count_ne_over_time(series_selector[d], ne)` is a [rollup function](#rollup-functions), which calculates the number of [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
`count_ne_over_time(series_selector[d], ne)` is a [rollup function](#rollup-functions), which calculates the number of raw samples
|
||||
on the given lookbehind window `d`, which aren't equal to `ne`. It is calculated independently per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -296,8 +294,8 @@ See also [count_over_time](#count_over_time).
|
||||
|
||||
#### count_over_time
|
||||
|
||||
`count_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number of [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
`count_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number of raw samples
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -307,9 +305,9 @@ See also [count_le_over_time](#count_le_over_time), [count_gt_over_time](#count_
|
||||
|
||||
#### count_values_over_time
|
||||
|
||||
`count_values_over_time("label", series_selector[d])` is a [rollup function](#rollup-functions), which counts the number of [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
`count_values_over_time("label", series_selector[d])` is a [rollup function](#rollup-functions), which counts the number of raw samples
|
||||
with the same value over the given lookbehind window and stores the counts in a time series with an additional `label`, which contains each initial value.
|
||||
The results are calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
The results are calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -317,8 +315,8 @@ See also [count_eq_over_time](#count_eq_over_time), [count_values](#count_values
|
||||
|
||||
#### decreases_over_time
|
||||
|
||||
`decreases_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number of [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
value decreases over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
`decreases_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number of raw sample value decreases
|
||||
over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -326,9 +324,8 @@ See also [increases_over_time](#increases_over_time).
|
||||
|
||||
#### default_rollup
|
||||
|
||||
`default_rollup(series_selector[d])` is a [rollup function](#rollup-functions), which returns the last [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
value on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
Compared to [last_over_time](#last_over_time) it accounts for [staleness markers](https://docs.victoriametrics.com/vmagent/#prometheus-staleness-markers) to detect stale series.
|
||||
`default_rollup(series_selector[d])` is a [rollup function](#rollup-functions), which returns the last raw sample value on the given lookbehind window `d`
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
If the lookbehind window is skipped in square brackets, then it is automatically calculated as `max(step, scrape_interval)`, where `step` is the query arg value
|
||||
passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query) or [/api/v1/query](https://docs.victoriametrics.com/keyconcepts/#instant-query),
|
||||
@@ -339,7 +336,7 @@ This allows avoiding unexpected gaps on the graph when `step` is smaller than th
|
||||
|
||||
`delta(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the difference between
|
||||
the last sample before the given lookbehind window `d` and the last sample at the given lookbehind window `d`
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
The behaviour of `delta()` function in MetricsQL is slightly different to the behaviour of `delta()` function in Prometheus.
|
||||
See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details.
|
||||
@@ -354,7 +351,7 @@ See also [increase](#increase) and [delta_prometheus](#delta_prometheus).
|
||||
|
||||
`delta_prometheus(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the difference between
|
||||
the first and the last samples at the given lookbehind window `d` per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
The behaviour of `delta_prometheus()` is close to the behaviour of `delta()` function in Prometheus.
|
||||
See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details.
|
||||
@@ -366,7 +363,7 @@ See also [delta](#delta).
|
||||
#### deriv
|
||||
|
||||
`deriv(series_selector[d])` is a [rollup function](#rollup-functions), which calculates per-second derivative over the given lookbehind window `d`
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
The derivative is calculated using linear regression.
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
@@ -378,8 +375,8 @@ See also [deriv_fast](#deriv_fast) and [ideriv](#ideriv).
#### deriv_fast

`deriv_fast(series_selector[d])` is a [rollup function](#rollup-functions), which calculates per-second derivative
using the first and the last [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d` per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
using the first and the last raw samples on the given lookbehind window `d` per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -387,9 +384,9 @@ See also [deriv](#deriv) and [ideriv](#ideriv).

#### descent_over_time

`descent_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates descent of [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
values on the given lookbehind window `d`. The calculations are performed individually per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`descent_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates descent of raw sample values
on the given lookbehind window `d`. The calculations are performed individually per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

This function is useful for tracking height loss in GPS tracking.

@@ -399,8 +396,8 @@ See also [ascent_over_time](#ascent_over_time).

#### distinct_over_time

`distinct_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the number of unique [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
values on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`distinct_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the number of distinct raw sample values
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -409,7 +406,7 @@ See also [count_values_over_time](#count_values_over_time).
#### duration_over_time

`duration_over_time(series_selector[d], max_interval)` is a [rollup function](#rollup-functions), which returns the duration in seconds
when time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering) were present
when time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) were present
over the given lookbehind window `d`. It is expected that intervals between adjacent samples per each series don't exceed the `max_interval`.
Otherwise, such intervals are considered gaps and aren't counted.
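
For example, the following query returns for how many seconds per each series the `up` metric had samples during the last 24 hours, treating intervals longer than 10 minutes between adjacent samples as gaps (the metric name is used for illustration):

```
duration_over_time(up[24h], 10m)
```
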
@@ -419,26 +416,26 @@ See also [lifetime](#lifetime) and [lag](#lag).

#### first_over_time

`first_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the first [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
value on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`first_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the first raw sample value
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

See also [last_over_time](#last_over_time) and [tfirst_over_time](#tfirst_over_time).

#### geomean_over_time

`geomean_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates [geometric mean](https://en.wikipedia.org/wiki/Geometric_mean)
over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d` per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
over raw samples on the given lookbehind window `d` per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

#### histogram_over_time

`histogram_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates
[VictoriaMetrics histogram](https://godoc.org/github.com/VictoriaMetrics/metrics#Histogram) over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d`. It is calculated individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
[VictoriaMetrics histogram](https://godoc.org/github.com/VictoriaMetrics/metrics#Histogram) over raw samples on the given lookbehind window `d`.
It is calculated individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
The resulting histograms are useful to pass to [histogram_quantile](#histogram_quantile) for calculating quantiles
over multiple [gauges](https://docs.victoriametrics.com/keyconcepts/#gauge).
over multiple [gauges](https://docs.victoriametrics.com/keyConcepts.html#gauge).
For example, the following query calculates median temperature by country over the last 24 hours:

`histogram_quantile(0.5, sum(histogram_over_time(temperature[24h])) by (vmrange,country))`.
@@ -460,10 +457,10 @@ See also [hoeffding_bound_lower](#hoeffding_bound_lower).
#### holt_winters

`holt_winters(series_selector[d], sf, tf)` is a [rollup function](#rollup-functions), which calculates Holt-Winters value
(aka [double exponential smoothing](https://en.wikipedia.org/wiki/Exponential_smoothing#Double_exponential_smoothing)) for [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
(aka [double exponential smoothing](https://en.wikipedia.org/wiki/Exponential_smoothing#Double_exponential_smoothing)) for raw samples
over the given lookbehind window `d` using the given smoothing factor `sf` and the given trend factor `tf`.
Both `sf` and `tf` must be in the range `[0...1]`. It is expected that the [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering)
returns time series of [gauge type](https://docs.victoriametrics.com/keyconcepts/#gauge).
Both `sf` and `tf` must be in the range `[0...1]`. It is expected that the [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering)
returns time series of [gauge type](https://docs.victoriametrics.com/keyConcepts.html#gauge).

This function is supported by PromQL.
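
For example, the following query returns a double-exponentially smoothed estimate of a hypothetical `temperature` gauge over the last hour; the smoothing factor `0.3` and trend factor `0.7` are illustrative values within the required `[0...1]` range:

```
holt_winters(temperature[1h], 0.3, 0.7)
```
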
@@ -471,8 +468,8 @@ See also [range_linear_regression](#range_linear_regression).

#### idelta

`idelta(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the difference between the last two [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`idelta(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the difference between the last two raw samples
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -482,10 +479,9 @@ See also [delta](#delta).

#### ideriv

`ideriv(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the per-second derivative based
on the last two [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`ideriv(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the per-second derivative based on the last two raw samples
over the given lookbehind window `d`. The derivative is calculated independently per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -494,8 +490,8 @@ See also [deriv](#deriv).
#### increase

`increase(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the increase over the given lookbehind window `d`
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyconcepts/#counter).
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyConcepts.html#counter).

Unlike Prometheus, it takes into account the last sample before the given lookbehind window `d` when calculating the result.
See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details.
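
For example, the following query returns the number of requests served during the last hour per each series of a hypothetical `http_requests_total` counter:

```
increase(http_requests_total[1h])
```
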
@@ -509,8 +505,8 @@ See also [increase_pure](#increase_pure), [increase_prometheus](#increase_promet
#### increase_prometheus

`increase_prometheus(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the increase
over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyconcepts/#counter).
over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyConcepts.html#counter).
It doesn't take into account the last sample before the given lookbehind window `d` when calculating the result, in the same way as Prometheus does.
See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details.

@@ -521,13 +517,13 @@ See also [increase_pure](#increase_pure) and [increase](#increase).
#### increase_pure

`increase_pure(series_selector[d])` is a [rollup function](#rollup-functions), which works the same as [increase](#increase) except
for the following corner case - it assumes that [counters](https://docs.victoriametrics.com/keyconcepts/#counter) always start from 0,
for the following corner case - it assumes that [counters](https://docs.victoriametrics.com/keyConcepts.html#counter) always start from 0,
while [increase](#increase) ignores the first value in a series if it is too big.

#### increases_over_time

`increases_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number of [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
value increases over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`increases_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number of raw sample value increases
over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -535,17 +531,16 @@ See also [decreases_over_time](#decreases_over_time).

#### integrate

`integrate(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the integral over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`integrate(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the integral over raw samples on the given lookbehind window `d`
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

#### irate

`irate(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the "instant" per-second increase rate over
the last two [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyconcepts/#counter).
`irate(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the "instant" per-second increase rate over the last two raw samples
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyConcepts.html#counter).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
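
For example, the following query estimates the most recent per-second request rate from the last two samples on a 5-minute window (the counter name is illustrative); it reacts to changes faster than [rate](#rate), but it is also noisier:

```
irate(http_requests_total[5m])
```
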
@@ -557,7 +552,7 @@ See also [rate](#rate) and [rollup_rate](#rollup_rate).

`lag(series_selector[d])` is a [rollup function](#rollup-functions), which returns the duration in seconds between the last sample
on the given lookbehind window `d` and the timestamp of the current point. It is calculated independently per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -565,8 +560,8 @@ See also [lifetime](#lifetime) and [duration_over_time](#duration_over_time).

#### last_over_time

`last_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the last [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
value on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`last_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the last raw sample value on the given lookbehind window `d`
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

This function is supported by PromQL.

@@ -575,7 +570,7 @@ See also [first_over_time](#first_over_time) and [tlast_over_time](#tlast_over_t
#### lifetime

`lifetime(series_selector[d])` is a [rollup function](#rollup-functions), which returns the duration in seconds between the last and the first sample
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -584,15 +579,14 @@ See also [duration_over_time](#duration_over_time) and [lag](#lag).
#### mad_over_time

`mad_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates [median absolute deviation](https://en.wikipedia.org/wiki/Median_absolute_deviation)
over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d` per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

See also [mad](#mad), [range_mad](#range_mad) and [outlier_iqr_over_time](#outlier_iqr_over_time).

#### max_over_time

`max_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the maximum value over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`max_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the maximum value over raw samples
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

This function is supported by PromQL.

@@ -600,16 +594,16 @@ See also [tmax_over_time](#tmax_over_time).

#### median_over_time

`median_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates median value over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`median_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates median value over raw samples
on the given lookbehind window `d` per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

See also [avg_over_time](#avg_over_time).

#### min_over_time

`min_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the minimum value over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`min_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the minimum value over raw samples
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

This function is supported by PromQL.

@@ -618,16 +612,15 @@ See also [tmin_over_time](#tmin_over_time).
#### mode_over_time

`mode_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates [mode](https://en.wikipedia.org/wiki/Mode_(statistics))
for [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d`. It is calculated individually per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering). It is expected that [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
values are discrete.
for raw samples on the given lookbehind window `d`. It is calculated individually per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). It is expected that raw sample values are discrete.

#### outlier_iqr_over_time

`outlier_iqr_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the last sample on the given lookbehind window `d`
if its value is either smaller than `q25-1.5*iqr` or bigger than `q75+1.5*iqr`, where:
- `iqr` is an [Interquartile range](https://en.wikipedia.org/wiki/Interquartile_range) over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the lookbehind window `d`
- `q25` and `q75` are 25th and 75th [percentiles](https://en.wikipedia.org/wiki/Percentile) over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the lookbehind window `d`.
- `iqr` is an [Interquartile range](https://en.wikipedia.org/wiki/Interquartile_range) over raw samples on the lookbehind window `d`
- `q25` and `q75` are 25th and 75th [percentiles](https://en.wikipedia.org/wiki/Percentile) over raw samples on the lookbehind window `d`.

`outlier_iqr_over_time()` is useful for detecting anomalies in gauge values based on the previous history of values.
For example, `outlier_iqr_over_time(memory_usage_bytes[1h])` triggers when `memory_usage_bytes` suddenly goes outside the usual value range for the last hour.

@@ -637,8 +630,8 @@ See also [outliers_iqr](#outliers_iqr).
#### predict_linear

`predict_linear(series_selector[d], t)` is a [rollup function](#rollup-functions), which calculates the value `t` seconds in the future using
linear interpolation over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d`.
The predicted value is calculated individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
linear interpolation over raw samples on the given lookbehind window `d`. The predicted value is calculated individually per each time series
returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

This function is supported by PromQL.
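
For example, the following query extrapolates the amount of free disk space 4 hours into the future based on the linear trend over the last hour; `node_filesystem_free_bytes` is a node_exporter metric used here only for illustration:

```
predict_linear(node_filesystem_free_bytes[1h], 4 * 3600)
```
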
@@ -646,7 +639,7 @@ See also [range_linear_regression](#range_linear_regression).

#### present_over_time

`present_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns 1 if there is at least a single [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`present_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns 1 if there is at least a single raw sample
on the given lookbehind window `d`. Otherwise, an empty result is returned.

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
@@ -655,8 +648,8 @@ This function is supported by PromQL.

#### quantile_over_time

`quantile_over_time(phi, series_selector[d])` is a [rollup function](#rollup-functions), which calculates `phi`-quantile over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`quantile_over_time(phi, series_selector[d])` is a [rollup function](#rollup-functions), which calculates `phi`-quantile over raw samples
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
The `phi` value must be in the range `[0...1]`.

This function is supported by PromQL.
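
For example, the following query returns the 95th percentile of raw sample values over the last 5 minutes per each series of a hypothetical `request_duration_seconds` metric:

```
quantile_over_time(0.95, request_duration_seconds[5m])
```
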
@@ -666,16 +659,16 @@ See also [quantiles_over_time](#quantiles_over_time).
#### quantiles_over_time

`quantiles_over_time("phiLabel", phi1, ..., phiN, series_selector[d])` is a [rollup function](#rollup-functions), which calculates `phi*`-quantiles
over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d` per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
over raw samples on the given lookbehind window `d` per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
The function returns individual series per each `phi*` with `{phiLabel="phi*"}` label. `phi*` values must be in the range `[0...1]`.

See also [quantile_over_time](#quantile_over_time).
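
For example, the following query returns three series per each input series - with `quantile="0.5"`, `quantile="0.9"` and `quantile="0.99"` labels - calculated over the last 5 minutes (the metric name is illustrative):

```
quantiles_over_time("quantile", 0.5, 0.9, 0.99, request_duration_seconds[5m])
```
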
#### range_over_time

`range_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates value range over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`range_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates value range over raw samples
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
E.g. it calculates `max_over_time(series_selector[d]) - min_over_time(series_selector[d])`.

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
@@ -683,8 +676,8 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k
#### rate

`rate(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the average per-second increase rate
over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyconcepts/#counter).
over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyConcepts.html#counter).
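
For example, both of the following queries return the average per-second request rate for a hypothetical `http_requests_total` counter; the second form omits the lookbehind window, in which case it is calculated automatically as described below:

```
rate(http_requests_total[5m])
rate(http_requests_total)
```
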
If the lookbehind window is skipped in square brackets, then it is automatically calculated as `max(step, scrape_interval)`, where `step` is the query arg value
passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query) or [/api/v1/query](https://docs.victoriametrics.com/keyconcepts/#instant-query),
@@ -699,18 +692,18 @@ See also [irate](#irate) and [rollup_rate](#rollup_rate).

#### rate_over_sum

`rate_over_sum(series_selector[d])` is a [rollup function](#rollup-functions), which calculates per-second rate over the sum of [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`rate_over_sum(series_selector[d])` is a [rollup function](#rollup-functions), which calculates per-second rate over the sum of raw samples
on the given lookbehind window `d`. The calculations are performed individually per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

#### resets

`resets(series_selector[d])` is a [rollup function](#rollup-functions), which returns the number
of [counter](https://docs.victoriametrics.com/keyconcepts/#counter) resets over the given lookbehind window `d`
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyconcepts/#counter).
of [counter](https://docs.victoriametrics.com/keyConcepts.html#counter) resets over the given lookbehind window `d`
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyConcepts.html#counter).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
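
For example, the following query returns how many times a hypothetical `http_requests_total` counter dropped back towards zero during the last day, which usually corresponds to the number of restarts of the service exposing it:

```
resets(http_requests_total[1d])
```
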
@@ -718,9 +711,9 @@ This function is supported by PromQL.

#### rollup

`rollup(series_selector[d])` is a [rollup function](#rollup-functions), which calculates `min`, `max` and `avg` values for [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`rollup(series_selector[d])` is a [rollup function](#rollup-functions), which calculates `min`, `max` and `avg` values for raw samples
on the given lookbehind window `d` and returns them in time series with `rollup="min"`, `rollup="max"` and `rollup="avg"` additional labels.
These values are calculated individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
These values are calculated individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Optional 2nd argument `"min"`, `"max"` or `"avg"` can be passed to keep only one calculation result without adding a label.
See also [label_match](#label_match).
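
For example, the first query below returns three series per each input series - with `rollup="min"`, `rollup="max"` and `rollup="avg"` labels - while the second one keeps only the maximum (the `requests_per_second` metric name is hypothetical):

```
rollup(requests_per_second[5m])
rollup(requests_per_second[5m], "max")
```
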
@@ -728,20 +721,19 @@ See also [label_match](#label_match).
#### rollup_candlestick

`rollup_candlestick(series_selector[d])` is a [rollup function](#rollup-functions), which calculates `open`, `high`, `low` and `close` values (aka OHLC)
over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d` and returns them in time series
with `rollup="open"`, `rollup="high"`, `rollup="low"` and `rollup="close"` additional labels.
over raw samples on the given lookbehind window `d` and returns them in time series with `rollup="open"`, `rollup="high"`, `rollup="low"` and `rollup="close"` additional labels.
The calculations are performed individually per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering). This function is useful for financial applications.
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). This function is useful for financial applications.

Optional 2nd argument `"open"`, `"high"`, `"low"` or `"close"` can be passed to keep only one calculation result without adding a label.
See also [label_match](#label_match).
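
For example, the following query returns only the closing price per 5-minute window for a hypothetical `trade_price` series:

```
rollup_candlestick(trade_price[5m], "close")
```
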
#### rollup_delta

`rollup_delta(series_selector[d])` is a [rollup function](#rollup-functions), which calculates differences between adjacent [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`rollup_delta(series_selector[d])` is a [rollup function](#rollup-functions), which calculates differences between adjacent raw samples
on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated differences
and returns them in time series with `rollup="min"`, `rollup="max"` and `rollup="avg"` additional labels.
The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Optional 2nd argument `"min"`, `"max"` or `"avg"` can be passed to keep only one calculation result without adding a label.
See also [label_match](#label_match).
@@ -753,9 +745,9 @@ See also [rollup_increase](#rollup_increase).
#### rollup_deriv

`rollup_deriv(series_selector[d])` is a [rollup function](#rollup-functions), which calculates per-second derivatives
for adjacent [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d` and returns `min`, `max` and `avg` values
for the calculated per-second derivatives and returns them in time series with `rollup="min"`, `rollup="max"` and `rollup="avg"` additional labels.
The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
for adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated per-second derivatives
and returns them in time series with `rollup="min"`, `rollup="max"` and `rollup="avg"` additional labels.
The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Optional 2nd argument `"min"`, `"max"` or `"avg"` can be passed to keep only one calculation result without adding a label.
See also [label_match](#label_match).
@@ -764,10 +756,10 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k

#### rollup_increase

`rollup_increase(series_selector[d])` is a [rollup function](#rollup-functions), which calculates increases for adjacent [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`rollup_increase(series_selector[d])` is a [rollup function](#rollup-functions), which calculates increases for adjacent raw samples
on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated increases
and returns them in time series with `rollup="min"`, `rollup="max"` and `rollup="avg"` additional labels.
The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Optional 2nd argument `"min"`, `"max"` or `"avg"` can be passed to keep only one calculation result without adding a label.
See also [label_match](#label_match).
@@ -776,8 +768,7 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k

#### rollup_rate

`rollup_rate(series_selector[d])` is a [rollup function](#rollup-functions), which calculates per-second change rates
for adjacent [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`rollup_rate(series_selector[d])` is a [rollup function](#rollup-functions), which calculates per-second change rates for adjacent raw samples
on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated per-second change rates
and returns them in time series with `rollup="min"`, `rollup="max"` and `rollup="avg"` additional labels.

@@ -787,16 +778,16 @@ when to use `rollup_rate()`.
Optional 2nd argument `"min"`, `"max"` or `"avg"` can be passed to keep only one calculation result without adding a label.
See also [label_match](#label_match).

The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

#### rollup_scrape_interval

`rollup_scrape_interval(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the interval in seconds between
adjacent [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated interval
adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated interval
and returns them in time series with `rollup="min"`, `rollup="max"` and `rollup="avg"` additional labels.
The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Optional 2nd argument `"min"`, `"max"` or `"avg"` can be passed to keep only one calculation result without adding a label.
See also [label_match](#label_match).
@@ -805,9 +796,8 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k

#### scrape_interval

`scrape_interval(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the average interval in seconds
between [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`scrape_interval(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the average interval in seconds between raw samples
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -815,10 +805,9 @@ See also [rollup_scrape_interval](#rollup_scrape_interval).

#### share_gt_over_time

`share_gt_over_time(series_selector[d], gt)` is a [rollup function](#rollup-functions), which returns share (in the range `[0...1]`)
of [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`share_gt_over_time(series_selector[d], gt)` is a [rollup function](#rollup-functions), which returns share (in the range `[0...1]`) of raw samples
on the given lookbehind window `d`, which are bigger than `gt`. It is calculated independently per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

This function is useful for calculating SLI and SLO. Example: `share_gt_over_time(up[24h], 0)` - returns service availability for the last 24 hours.

@@ -828,10 +817,9 @@ See also [share_le_over_time](#share_le_over_time) and [count_gt_over_time](#cou

#### share_le_over_time

`share_le_over_time(series_selector[d], le)` is a [rollup function](#rollup-functions), which returns share (in the range `[0...1]`)
of [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`share_le_over_time(series_selector[d], le)` is a [rollup function](#rollup-functions), which returns share (in the range `[0...1]`) of raw samples
on the given lookbehind window `d`, which are smaller or equal to `le`. It is calculated independently per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

This function is useful for calculating SLI and SLO. Example: `share_le_over_time(memory_usage_bytes[24h], 100*1024*1024)` returns
the share of time series values for the last 24 hours when memory usage was below or equal to 100MB.
@@ -842,10 +830,9 @@ See also [share_gt_over_time](#share_gt_over_time) and [count_le_over_time](#cou

#### share_eq_over_time

`share_eq_over_time(series_selector[d], eq)` is a [rollup function](#rollup-functions), which returns share (in the range `[0...1]`)
of [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
`share_eq_over_time(series_selector[d], eq)` is a [rollup function](#rollup-functions), which returns share (in the range `[0...1]`) of raw samples
on the given lookbehind window `d`, which are equal to `eq`. It is calculated independently per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -854,15 +841,15 @@ See also [count_eq_over_time](#count_eq_over_time).
#### stale_samples_over_time

`stale_samples_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number
of [staleness markers](https://docs.victoriametrics.com/vmagent/#prometheus-staleness-markers) on the given lookbehind window `d`
per each time series matching the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
of [staleness markers](https://docs.victoriametrics.com/vmagent.html#prometheus-staleness-markers) on the given lookbehind window `d`
per each time series matching the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

#### stddev_over_time

`stddev_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates standard deviation over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`stddev_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates standard deviation over raw samples
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -872,8 +859,8 @@ See also [stdvar_over_time](#stdvar_over_time).

#### stdvar_over_time

`stdvar_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates standard variance over [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`stdvar_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates standard variance over raw samples
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -883,8 +870,8 @@ See also [stddev_over_time](#stddev_over_time).

#### sum_eq_over_time

`sum_eq_over_time(series_selector[d], eq)` is a [rollup function](#rollup-functions), which calculates the sum of [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
values equal to `eq` on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`sum_eq_over_time(series_selector[d], eq)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values equal to `eq`
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
@@ -892,8 +879,8 @@ See also [sum_over_time](#sum_over_time) and [count_eq_over_time](#count_eq_over

#### sum_gt_over_time

`sum_gt_over_time(series_selector[d], gt)` is a [rollup function](#rollup-functions), which calculates the sum of [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
values bigger than `gt` on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`sum_gt_over_time(series_selector[d], gt)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values bigger than `gt`
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -901,8 +888,8 @@ See also [sum_over_time](#sum_over_time) and [count_gt_over_time](#count_gt_over

#### sum_le_over_time

`sum_le_over_time(series_selector[d], le)` is a [rollup function](#rollup-functions), which calculates the sum of [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
values smaller or equal to `le` on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`sum_le_over_time(series_selector[d], le)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values smaller or equal to `le`
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -910,8 +897,8 @@ See also [sum_over_time](#sum_over_time) and [count_le_over_time](#count_le_over

#### sum_over_time

`sum_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the sum of [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples) values
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`sum_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -919,16 +906,15 @@ This function is supported by PromQL.

#### sum2_over_time

`sum2_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the sum of squares for [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
values on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`sum2_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the sum of squares for raw sample values
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

#### timestamp

`timestamp(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision
for the last [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
`timestamp(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last raw sample
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
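
For example, the following query returns the age in seconds of the most recent sample per each series, which helps to spot series that stopped receiving new data (the `up` metric is used for illustration):

```
time() - timestamp(up[5m])
```
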
@@ -938,9 +924,8 @@ See also [time](#time) and [now](#now).
|
||||
|
||||
#### timestamp_with_name
|
||||
|
||||
`timestamp_with_name(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision
|
||||
for the last [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
`timestamp_with_name(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last raw sample
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are preserved in the resulting rollups.
|
||||
|
||||
@@ -948,9 +933,8 @@ See also [timestamp](#timestamp) and [keep_metric_names](#keep_metric_names) mod
|
||||
|
||||
#### tfirst_over_time
|
||||
|
||||
`tfirst_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision
|
||||
for the first [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
`tfirst_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the first raw sample
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -959,7 +943,7 @@ See also [first_over_time](#first_over_time).
|
||||
#### tlast_change_over_time
|
||||
|
||||
`tlast_change_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last change
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering) on the given lookbehind window `d`.
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) on the given lookbehind window `d`.
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -973,10 +957,9 @@ See also [tlast_change_over_time](#tlast_change_over_time).
|
||||
|
||||
#### tmax_over_time
|
||||
|
||||
`tmax_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision
|
||||
for the [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
`tmax_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the raw sample
|
||||
with the maximum value on the given lookbehind window `d`. It is calculated independently per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -984,10 +967,9 @@ See also [max_over_time](#max_over_time).
|
||||
|
||||
#### tmin_over_time
|
||||
|
||||
`tmin_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision
|
||||
for the [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
|
||||
`tmin_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the raw sample
|
||||
with the minimum value on the given lookbehind window `d`. It is calculated independently per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
@@ -996,8 +978,8 @@ See also [min_over_time](#min_over_time).
#### zscore_over_time

`zscore_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns [z-score](https://en.wikipedia.org/wiki/Standard_score)
for [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) on the given lookbehind window `d`. It is calculated independently per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
for raw samples on the given lookbehind window `d`. It is calculated independently per each time series returned
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
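A common use is outlier detection; the following sketch (with a hypothetical `request_latency` metric) keeps only series whose z-score over the hourly window exceeds three in absolute value:

```metricsql
abs(zscore_over_time(request_latency[1h])) > 3
```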
@@ -1012,7 +994,7 @@ returned from the rollup `delta(temperature[24h])`.

Additional details:

* If transform function is applied directly to a [series selector](https://docs.victoriametrics.com/keyconcepts/#filtering),
* If transform function is applied directly to a [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering),
  then the [default_rollup()](#default_rollup) function is automatically applied before calculating the transformations.
  For example, `abs(temperature)` is implicitly transformed to `abs(default_rollup(temperature))`.
* All the transform functions accept optional `keep_metric_names` modifier. If it is set,
@@ -1248,7 +1230,7 @@ by replacing all the values bigger or equal to 30 with 40.
#### end

`end()` is a [transform function](#transform-functions), which returns the unix timestamp in seconds for the last point.
It is known as `end` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query).
It is known as `end` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query).

See also [start](#start), [time](#time) and [now](#now).

@@ -1671,14 +1653,14 @@ This function is supported by PromQL.

`start()` is a [transform function](#transform-functions), which returns unix timestamp in seconds for the first point.

It is known as `start` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query).
It is known as `start` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query).

See also [end](#end), [time](#time) and [now](#now).

#### step

`step()` is a [transform function](#transform-functions), which returns the step in seconds (aka interval) between the returned points.
It is known as `step` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query).
It is known as `step` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query).

See also [start](#start) and [end](#end).
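Taken together, these functions describe the query window itself; for example, the following query plots the fraction of the selected time range elapsed at each point, rising from roughly 0 to 1:

```metricsql
(time() - start()) / (end() - start())
```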
@@ -1735,7 +1717,7 @@ This function is supported by PromQL.

Additional details:

* If label manipulation function is applied directly to a [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering),
* If label manipulation function is applied directly to a [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering),
  then the [default_rollup()](#default_rollup) function is automatically applied before performing the label transformation.
  For example, `alias(temperature, "foo")` is implicitly transformed to `alias(default_rollup(temperature), "foo")`.

@@ -1912,7 +1894,7 @@ Additional details:
  and calculate the [count](#count) aggregate function independently per each group, while `count(up) without (instance)`
  would group [rollup results](#rollup-functions) by all the labels except `instance` before calculating [count](#count) aggregate function independently per each group.
  Multiple labels can be put in `by` and `without` modifiers.
* If the aggregate function is applied directly to a [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering),
* If the aggregate function is applied directly to a [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering),
  then the [default_rollup()](#default_rollup) function is automatically applied before calculating the aggregate.
  For example, `count(up)` is implicitly transformed to `count(default_rollup(up))`.
* Aggregate functions accept arbitrary number of args. For example, `avg(q1, q2, q3)` would return the average values for every point
@@ -2122,7 +2104,7 @@ See also [quantile](#quantile).

`share(q) by (group_labels)` is [aggregate function](#aggregate-functions), which returns shares in the range `[0..1]`
for every non-negative points returned by `q` per each timestamp, so the sum of shares per each `group_labels` equals 1.

This function is useful for normalizing [histogram bucket](https://docs.victoriametrics.com/keyconcepts/#histogram) shares
This function is useful for normalizing [histogram bucket](https://docs.victoriametrics.com/keyConcepts.html#histogram) shares
into `[0..1]` range:

```metricsql
@@ -2226,11 +2208,10 @@ See also [zscore_over_time](#zscore_over_time), [range_trim_zscore](#range_trim_
## Subqueries

MetricsQL supports and extends PromQL subqueries. See [this article](https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3) for details.
Any [rollup function](#rollup-functions) for something other than [series selector](https://docs.victoriametrics.com/keyconcepts/#filtering) form a subquery.
Any [rollup function](#rollup-functions) for something other than [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) form a subquery.
Nested rollup functions can be implicit thanks to the [implicit query conversions](#implicit-query-conversions).
For example, `delta(sum(m))` is implicitly converted to `delta(sum(default_rollup(m))[1i:1i])`, so it becomes a subquery,
since it contains [default_rollup](#default_rollup) nested into [delta](#delta).
This behavior can be disabled or logged via cmd-line flags `-search.disableImplicitConversion` and `-search.logImplicitConversion` since v1.101.0.

VictoriaMetrics performs subqueries in the following way:
@@ -2238,19 +2219,19 @@ VictoriaMetrics performs subqueries in the following way:
  For example, for expression `max_over_time(rate(http_requests_total[5m])[1h:30s])` the inner function `rate(http_requests_total[5m])`
  is calculated with `step=30s`. The resulting data points are aligned by the `step`.
* It calculates the outer rollup function over the results of the inner rollup function using the `step` value
  passed by Grafana to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query).
  passed by Grafana to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query).

## Implicit query conversions

VictoriaMetrics performs the following implicit conversions for incoming queries before starting the calculations:

* If lookbehind window in square brackets is missing inside [rollup function](#rollup-functions), then it is automatically set to the following value:
  - To `step` value passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query) or [/api/v1/query](https://docs.victoriametrics.com/keyconcepts/#instant-query)
  - To `step` value passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query) or [/api/v1/query](https://docs.victoriametrics.com/keyconcepts/#instant-query)
    for all the [rollup functions](#rollup-functions) except of [default_rollup](#default_rollup) and [rate](#rate). This value is known as `$__interval` in Grafana or `1i` in MetricsQL.
    For example, `avg_over_time(temperature)` is automatically transformed to `avg_over_time(temperature[1i])`.
  - To the `max(step, scrape_interval)`, where `scrape_interval` is the interval between [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
    for [default_rollup](#default_rollup) and [rate](#rate) functions. This allows avoiding unexpected gaps on the graph when `step` is smaller than `scrape_interval`.
* All the [series selectors](https://docs.victoriametrics.com/keyconcepts/#filtering),
* All the [series selectors](https://docs.victoriametrics.com/keyConcepts.html#filtering),
  which aren't wrapped into [rollup functions](#rollup-functions), are automatically wrapped into [default_rollup](#default_rollup) function.
  Examples:
  * `foo` is transformed to `default_rollup(foo)`
@@ -2261,7 +2242,6 @@ VictoriaMetrics performs the following implicit conversions for incoming queries
  it is [transform function](#transform-functions)
* If `step` in square brackets is missing inside [subquery](#subqueries), then `1i` step is automatically added there.
  For example, `avg_over_time(rate(http_requests_total[5m])[1h])` is automatically converted to `avg_over_time(rate(http_requests_total[5m])[1h:1i])`.
* If something other than [series selector](https://docs.victoriametrics.com/keyconcepts/#filtering)
* If something other than [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering)
  is passed to [rollup function](#rollup-functions), then a [subquery](#subqueries) with `1i` lookbehind window and `1i` step is automatically formed.
  For example, `rate(sum(up))` is automatically converted to `rate((sum(default_rollup(up)))[1i:1i])`.
  This behavior can be disabled or logged via cmd-line flags `-search.disableImplicitConversion` and `-search.logImplicitConversion` since v1.101.0.
  For example, `rate(sum(up))` is automatically converted to `rate((sum(default_rollup(up)))[1i:1i])`.
@@ -1,11 +1,10 @@
package vlstorage

import (
    "context"
    "flag"
    "fmt"
    "io"
    "net/http"
    "sync"
    "time"

    "github.com/VictoriaMetrics/metrics"
@@ -20,21 +19,19 @@ import (
var (
    retentionPeriod = flagutil.NewDuration("retentionPeriod", "7d", "Log entries with timestamps older than now-retentionPeriod are automatically deleted; "+
        "log entries with timestamps outside the retention are also rejected during data ingestion; the minimum supported retention is 1d (one day); "+
        "see https://docs.victoriametrics.com/victorialogs/#retention ; see also -retention.maxDiskSpaceUsageBytes")
    maxDiskSpaceUsageBytes = flagutil.NewBytes("retention.maxDiskSpaceUsageBytes", 0, "The maximum disk space usage at -storageDataPath before older per-day "+
        "partitions are automatically dropped; see https://docs.victoriametrics.com/victorialogs/#retention-by-disk-space-usage ; see also -retentionPeriod")
        "see https://docs.victoriametrics.com/VictoriaLogs/#retention")
    futureRetention = flagutil.NewDuration("futureRetention", "2d", "Log entries with timestamps bigger than now+futureRetention are rejected during data ingestion; "+
        "see https://docs.victoriametrics.com/victorialogs/#retention")
    storageDataPath = flag.String("storageDataPath", "victoria-logs-data", "Path to directory where to store VictoriaLogs data; "+
        "see https://docs.victoriametrics.com/victorialogs/#storage")
        "see https://docs.victoriametrics.com/VictoriaLogs/#retention")
    storageDataPath = flag.String("storageDataPath", "victoria-logs-data", "Path to directory with the VictoriaLogs data; "+
        "see https://docs.victoriametrics.com/VictoriaLogs/#storage")
    inmemoryDataFlushInterval = flag.Duration("inmemoryDataFlushInterval", 5*time.Second, "The interval for guaranteed saving of in-memory data to disk. "+
        "The saved data survives unclean shutdowns such as OOM crash, hardware reset, SIGKILL, etc. "+
        "Bigger intervals may help increase the lifetime of flash storage with limited write cycles (e.g. Raspberry PI). "+
        "Smaller intervals increase disk IO load. Minimum supported value is 1s")
    logNewStreams = flag.Bool("logNewStreams", false, "Whether to log creation of new streams; this can be useful for debugging of high cardinality issues with log streams; "+
        "see https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields ; see also -logIngestedRows")
        "see https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields ; see also -logIngestedRows")
    logIngestedRows = flag.Bool("logIngestedRows", false, "Whether to log all the ingested log entries; this can be useful for debugging of data ingestion; "+
        "see https://docs.victoriametrics.com/victorialogs/data-ingestion/ ; see also -logNewStreams")
        "see https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/ ; see also -logNewStreams")
    minFreeDiskSpaceBytes = flagutil.NewBytes("storage.minFreeDiskSpaceBytes", 10e6, "The minimum free disk space at -storageDataPath after which "+
        "the storage stops accepting new data")
)
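For reference, a hypothetical single-node invocation wiring several of these flags together (binary name and values are illustrative, not taken from this diff): `victoria-logs -retentionPeriod=30d -futureRetention=2d -storageDataPath=/var/lib/victoria-logs`.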
@@ -51,13 +48,12 @@ func Init() {
        logger.Fatalf("-retentionPeriod cannot be smaller than a day; got %s", retentionPeriod)
    }
    cfg := &logstorage.StorageConfig{
        Retention:              retentionPeriod.Duration(),
        MaxDiskSpaceUsageBytes: maxDiskSpaceUsageBytes.N,
        FlushInterval:          *inmemoryDataFlushInterval,
        FutureRetention:        futureRetention.Duration(),
        LogNewStreams:          *logNewStreams,
        LogIngestedRows:        *logIngestedRows,
        MinFreeDiskSpaceBytes:  minFreeDiskSpaceBytes.N,
        Retention:             retentionPeriod.Duration(),
        FlushInterval:         *inmemoryDataFlushInterval,
        FutureRetention:       futureRetention.Duration(),
        LogNewStreams:         *logNewStreams,
        LogIngestedRows:       *logIngestedRows,
        MinFreeDiskSpaceBytes: minFreeDiskSpaceBytes.N,
    }
    logger.Infof("opening storage at -storageDataPath=%s", *storageDataPath)
    startTime := time.Now()
@@ -65,22 +61,16 @@ func Init() {

    var ss logstorage.StorageStats
    strg.UpdateStats(&ss)
    logger.Infof("successfully opened storage in %.3f seconds; smallParts: %d; bigParts: %d; smallPartBlocks: %d; bigPartBlocks: %d; smallPartRows: %d; bigPartRows: %d; "+
        "smallPartSize: %d bytes; bigPartSize: %d bytes",
        time.Since(startTime).Seconds(), ss.SmallParts, ss.BigParts, ss.SmallPartBlocks, ss.BigPartBlocks, ss.SmallPartRowsCount, ss.BigPartRowsCount,
        ss.CompressedSmallPartSize, ss.CompressedBigPartSize)
    logger.Infof("successfully opened storage in %.3f seconds; partsCount: %d; blocksCount: %d; rowsCount: %d; sizeBytes: %d",
        time.Since(startTime).Seconds(), ss.FileParts, ss.FileBlocks, ss.FileRowsCount, ss.CompressedFileSize)
    storageMetrics = initStorageMetrics(strg)

    // register storage metrics
    storageMetrics = metrics.NewSet()
    storageMetrics.RegisterMetricsWriter(func(w io.Writer) {
        writeStorageMetrics(w, strg)
    })
    metrics.RegisterSet(storageMetrics)
}

// Stop stops vlstorage.
func Stop() {
    metrics.UnregisterSet(storageMetrics, true)
    metrics.UnregisterSet(storageMetrics)
    storageMetrics = nil

    strg.MustClose()
@@ -109,99 +99,117 @@ func MustAddRows(lr *logstorage.LogRows) {
    strg.MustAddRows(lr)
}

// RunQuery runs the given q and calls writeBlock for the returned data blocks
func RunQuery(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, writeBlock logstorage.WriteBlockFunc) error {
    return strg.RunQuery(ctx, tenantIDs, q, writeBlock)
// RunQuery runs the given q and calls processBlock for the returned data blocks
func RunQuery(tenantIDs []logstorage.TenantID, q *logstorage.Query, stopCh <-chan struct{}, processBlock func(columns []logstorage.BlockColumn)) {
    strg.RunQuery(tenantIDs, q, stopCh, processBlock)
}

// GetFieldNames executes q and returns field names seen in results.
func GetFieldNames(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query) ([]logstorage.ValueWithHits, error) {
    return strg.GetFieldNames(ctx, tenantIDs, q)
}
func initStorageMetrics(strg *logstorage.Storage) *metrics.Set {
    ssCache := &logstorage.StorageStats{}
    var ssCacheLock sync.Mutex
    var lastUpdateTime time.Time

// GetFieldValues executes q and returns unique values for the fieldName seen in results.
//
// If limit > 0, then up to limit unique values are returned.
func GetFieldValues(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, fieldName string, limit uint64) ([]logstorage.ValueWithHits, error) {
    return strg.GetFieldValues(ctx, tenantIDs, q, fieldName, limit)
}

// GetStreamFieldNames executes q and returns stream field names seen in results.
func GetStreamFieldNames(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query) ([]logstorage.ValueWithHits, error) {
    return strg.GetStreamFieldNames(ctx, tenantIDs, q)
}

// GetStreamFieldValues executes q and returns stream field values for the given fieldName seen in results.
//
// If limit > 0, then up to limit unique stream field values are returned.
func GetStreamFieldValues(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, fieldName string, limit uint64) ([]logstorage.ValueWithHits, error) {
    return strg.GetStreamFieldValues(ctx, tenantIDs, q, fieldName, limit)
}

// GetStreams executes q and returns streams seen in query results.
//
// If limit > 0, then up to limit unique streams are returned.
func GetStreams(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit uint64) ([]logstorage.ValueWithHits, error) {
    return strg.GetStreams(ctx, tenantIDs, q, limit)
}

// GetStreamIDs executes q and returns streamIDs seen in query results.
//
// If limit > 0, then up to limit unique streamIDs are returned.
func GetStreamIDs(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit uint64) ([]logstorage.ValueWithHits, error) {
    return strg.GetStreamIDs(ctx, tenantIDs, q, limit)
}

func writeStorageMetrics(w io.Writer, strg *logstorage.Storage) {
    var ss logstorage.StorageStats
    strg.UpdateStats(&ss)

    metrics.WriteGaugeUint64(w, fmt.Sprintf(`vl_free_disk_space_bytes{path=%q}`, *storageDataPath), fs.MustGetFreeSpace(*storageDataPath))

    isReadOnly := uint64(0)
    if ss.IsReadOnly {
        isReadOnly = 1
    m := func() *logstorage.StorageStats {
        ssCacheLock.Lock()
        defer ssCacheLock.Unlock()
        if time.Since(lastUpdateTime) < time.Second {
            return ssCache
        }
        var ss logstorage.StorageStats
        strg.UpdateStats(&ss)
        ssCache = &ss
        lastUpdateTime = time.Now()
        return ssCache
    }
    metrics.WriteGaugeUint64(w, fmt.Sprintf(`vl_storage_is_read_only{path=%q}`, *storageDataPath), isReadOnly)

    metrics.WriteGaugeUint64(w, `vl_active_merges{type="storage/inmemory"}`, ss.InmemoryActiveMerges)
    metrics.WriteGaugeUint64(w, `vl_active_merges{type="storage/small"}`, ss.SmallPartActiveMerges)
    metrics.WriteGaugeUint64(w, `vl_active_merges{type="storage/big"}`, ss.BigPartActiveMerges)
    ms := metrics.NewSet()

    metrics.WriteCounterUint64(w, `vl_merges_total{type="storage/inmemory"}`, ss.InmemoryMergesTotal)
    metrics.WriteCounterUint64(w, `vl_merges_total{type="storage/small"}`, ss.SmallPartMergesTotal)
    metrics.WriteCounterUint64(w, `vl_merges_total{type="storage/big"}`, ss.BigPartMergesTotal)
    ms.NewGauge(fmt.Sprintf(`vl_free_disk_space_bytes{path=%q}`, *storageDataPath), func() float64 {
        return float64(fs.MustGetFreeSpace(*storageDataPath))
    })
    ms.NewGauge(fmt.Sprintf(`vl_storage_is_read_only{path=%q}`, *storageDataPath), func() float64 {
        if m().IsReadOnly {
            return 1
        }
        return 0
    })

    metrics.WriteGaugeUint64(w, `vl_storage_rows{type="storage/inmemory"}`, ss.InmemoryRowsCount)
    metrics.WriteGaugeUint64(w, `vl_storage_rows{type="storage/small"}`, ss.SmallPartRowsCount)
    metrics.WriteGaugeUint64(w, `vl_storage_rows{type="storage/big"}`, ss.BigPartRowsCount)
    ms.NewGauge(`vl_active_merges{type="inmemory"}`, func() float64 {
        return float64(m().InmemoryActiveMerges)
    })
    ms.NewGauge(`vl_merges_total{type="inmemory"}`, func() float64 {
        return float64(m().InmemoryMergesTotal)
    })
    ms.NewGauge(`vl_active_merges{type="file"}`, func() float64 {
        return float64(m().FileActiveMerges)
    })
    ms.NewGauge(`vl_merges_total{type="file"}`, func() float64 {
        return float64(m().FileMergesTotal)
    })

    metrics.WriteGaugeUint64(w, `vl_storage_parts{type="storage/inmemory"}`, ss.InmemoryParts)
    metrics.WriteGaugeUint64(w, `vl_storage_parts{type="storage/small"}`, ss.SmallParts)
    metrics.WriteGaugeUint64(w, `vl_storage_parts{type="storage/big"}`, ss.BigParts)
    ms.NewGauge(`vl_storage_rows{type="inmemory"}`, func() float64 {
        return float64(m().InmemoryRowsCount)
    })
    ms.NewGauge(`vl_storage_rows{type="file"}`, func() float64 {
        return float64(m().FileRowsCount)
    })
    ms.NewGauge(`vl_storage_parts{type="inmemory"}`, func() float64 {
        return float64(m().InmemoryParts)
    })
    ms.NewGauge(`vl_storage_parts{type="file"}`, func() float64 {
        return float64(m().FileParts)
    })
    ms.NewGauge(`vl_storage_blocks{type="inmemory"}`, func() float64 {
        return float64(m().InmemoryBlocks)
    })
    ms.NewGauge(`vl_storage_blocks{type="file"}`, func() float64 {
        return float64(m().FileBlocks)
    })

    metrics.WriteGaugeUint64(w, `vl_storage_blocks{type="storage/inmemory"}`, ss.InmemoryBlocks)
    metrics.WriteGaugeUint64(w, `vl_storage_blocks{type="storage/small"}`, ss.SmallPartBlocks)
    metrics.WriteGaugeUint64(w, `vl_storage_blocks{type="storage/big"}`, ss.BigPartBlocks)
    ms.NewGauge(`vl_partitions`, func() float64 {
        return float64(m().PartitionsCount)
    })
    ms.NewGauge(`vl_streams_created_total`, func() float64 {
        return float64(m().StreamsCreatedTotal)
    })

    metrics.WriteGaugeUint64(w, `vl_partitions`, ss.PartitionsCount)
    metrics.WriteCounterUint64(w, `vl_streams_created_total`, ss.StreamsCreatedTotal)
    ms.NewGauge(`vl_indexdb_rows`, func() float64 {
        return float64(m().IndexdbItemsCount)
    })
    ms.NewGauge(`vl_indexdb_parts`, func() float64 {
        return float64(m().IndexdbPartsCount)
    })
    ms.NewGauge(`vl_indexdb_blocks`, func() float64 {
        return float64(m().IndexdbBlocksCount)
    })

    metrics.WriteGaugeUint64(w, `vl_indexdb_rows`, ss.IndexdbItemsCount)
    metrics.WriteGaugeUint64(w, `vl_indexdb_parts`, ss.IndexdbPartsCount)
    metrics.WriteGaugeUint64(w, `vl_indexdb_blocks`, ss.IndexdbBlocksCount)
    ms.NewGauge(`vl_data_size_bytes{type="indexdb"}`, func() float64 {
        return float64(m().IndexdbSizeBytes)
    })
    ms.NewGauge(`vl_data_size_bytes{type="storage"}`, func() float64 {
        dm := m()
        return float64(dm.CompressedInmemorySize + dm.CompressedFileSize)
    })

    metrics.WriteGaugeUint64(w, `vl_data_size_bytes{type="indexdb"}`, ss.IndexdbSizeBytes)
    metrics.WriteGaugeUint64(w, `vl_data_size_bytes{type="storage"}`, ss.CompressedInmemorySize+ss.CompressedSmallPartSize+ss.CompressedBigPartSize)
    ms.NewGauge(`vl_compressed_data_size_bytes{type="inmemory"}`, func() float64 {
        return float64(m().CompressedInmemorySize)
    })
    ms.NewGauge(`vl_compressed_data_size_bytes{type="file"}`, func() float64 {
        return float64(m().CompressedFileSize)
    })
    ms.NewGauge(`vl_uncompressed_data_size_bytes{type="inmemory"}`, func() float64 {
        return float64(m().UncompressedInmemorySize)
    })
    ms.NewGauge(`vl_uncompressed_data_size_bytes{type="file"}`, func() float64 {
        return float64(m().UncompressedFileSize)
    })

    metrics.WriteGaugeUint64(w, `vl_compressed_data_size_bytes{type="storage/inmemory"}`, ss.CompressedInmemorySize)
    metrics.WriteGaugeUint64(w, `vl_compressed_data_size_bytes{type="storage/small"}`, ss.CompressedSmallPartSize)
    metrics.WriteGaugeUint64(w, `vl_compressed_data_size_bytes{type="storage/big"}`, ss.CompressedBigPartSize)
    ms.NewGauge(`vl_rows_dropped_total{reason="too_big_timestamp"}`, func() float64 {
        return float64(m().RowsDroppedTooBigTimestamp)
    })
    ms.NewGauge(`vl_rows_dropped_total{reason="too_small_timestamp"}`, func() float64 {
        return float64(m().RowsDroppedTooSmallTimestamp)
    })

    metrics.WriteGaugeUint64(w, `vl_uncompressed_data_size_bytes{type="storage/inmemory"}`, ss.UncompressedInmemorySize)
    metrics.WriteGaugeUint64(w, `vl_uncompressed_data_size_bytes{type="storage/small"}`, ss.UncompressedSmallPartSize)
    metrics.WriteGaugeUint64(w, `vl_uncompressed_data_size_bytes{type="storage/big"}`, ss.UncompressedBigPartSize)

    metrics.WriteCounterUint64(w, `vl_rows_dropped_total{reason="too_big_timestamp"}`, ss.RowsDroppedTooBigTimestamp)
    metrics.WriteCounterUint64(w, `vl_rows_dropped_total{reason="too_small_timestamp"}`, ss.RowsDroppedTooSmallTimestamp)
    return ms
}
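The `m()` closure in the hunk above rate-limits the relatively expensive `strg.UpdateStats()` call to at most once per second, so the many gauge callbacks registered on the set share a single snapshot. A minimal generic sketch of the same pattern — `Stats` and `update` are hypothetical stand-ins, not VictoriaMetrics APIs:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Stats is a stand-in for logstorage.StorageStats in the hunk above.
type Stats struct {
	Rows uint64
}

// statsCache caps expensive update() calls at one per second, mirroring
// the m() closure in initStorageMetrics.
type statsCache struct {
	mu         sync.Mutex
	cached     Stats
	lastUpdate time.Time
	update     func() Stats // the expensive call being rate-limited
}

func (c *statsCache) get() Stats {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.lastUpdate) < time.Second {
		return c.cached // serve the recent snapshot
	}
	c.cached = c.update()
	c.lastUpdate = time.Now()
	return c.cached
}

func main() {
	c := &statsCache{update: func() Stats { return Stats{Rows: 42} }}
	// Every metric callback would share the same cheap snapshot.
	fmt.Println(c.get().Rows)
}
```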
@@ -88,9 +88,6 @@ vmagent-linux-ppc64le:
vmagent-linux-s390x:
	APP_NAME=vmagent CGO_ENABLED=0 GOOS=linux GOARCH=s390x $(MAKE) app-local-goos-goarch

vmagent-linux-loong64:
	APP_NAME=vmagent CGO_ENABLED=0 GOOS=linux GOARCH=loong64 $(MAKE) app-local-goos-goarch

vmagent-linux-386:
	APP_NAME=vmagent CGO_ENABLED=0 GOOS=linux GOARCH=386 $(MAKE) app-local-goos-goarch


@@ -1,3 +1,3 @@
See vmagent docs [here](https://docs.victoriametrics.com/vmagent/).
See vmagent docs [here](https://docs.victoriametrics.com/vmagent.html).

vmagent docs can be edited at [docs/vmagent.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/vmagent.md).
vmagent docs can be edited at [docs/vmagent.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/vmagent.md).
@@ -3,15 +3,13 @@ package common
import (
    "sync"

    "github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
)

// PushCtx is a context used for populating WriteRequest.
type PushCtx struct {
    // WriteRequest contains the WriteRequest, which must be pushed later to remote storage.
    //
    // The actual labels and samples for the time series are stored in Labels and Samples fields.
    WriteRequest prompbmarshal.WriteRequest

    // Labels contains flat list of all the labels used in WriteRequest.
@@ -23,7 +21,13 @@ type PushCtx struct {

// Reset resets ctx.
func (ctx *PushCtx) Reset() {
    ctx.WriteRequest.Reset()
    tss := ctx.WriteRequest.Timeseries
    for i := range tss {
        ts := &tss[i]
        ts.Labels = nil
        ts.Samples = nil
    }
    ctx.WriteRequest.Timeseries = ctx.WriteRequest.Timeseries[:0]

    promrelabel.CleanLabels(ctx.Labels)
    ctx.Labels = ctx.Labels[:0]
@@ -35,10 +39,15 @@ func (ctx *PushCtx) Reset() {
//
// Call PutPushCtx when the ctx is no longer needed.
func GetPushCtx() *PushCtx {
    if v := pushCtxPool.Get(); v != nil {
        return v.(*PushCtx)
    select {
    case ctx := <-pushCtxPoolCh:
        return ctx
    default:
        if v := pushCtxPool.Get(); v != nil {
            return v.(*PushCtx)
        }
        return &PushCtx{}
    }
    return &PushCtx{}
}

// PutPushCtx returns ctx to the pool.
@@ -46,7 +55,12 @@ func GetPushCtx() *PushCtx {
// ctx mustn't be used after returning to the pool.
func PutPushCtx(ctx *PushCtx) {
    ctx.Reset()
    pushCtxPool.Put(ctx)
    select {
    case pushCtxPoolCh <- ctx:
    default:
        pushCtxPool.Put(ctx)
    }
}

var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *PushCtx, cgroup.AvailableCPUs())
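The two pool variants in this hunk trade off differently: a plain `sync.Pool` may drop idle objects at any GC cycle, while the channel-backed variant pins up to `cgroup.AvailableCPUs()` hot objects across GCs and overflows into `sync.Pool` under bursts. A self-contained sketch of that hybrid pattern, using stdlib `runtime.GOMAXPROCS` in place of the repo's `cgroup` helper and a hypothetical `Buf` type:

```go
package pool

import (
	"runtime"
	"sync"
)

// Buf is a stand-in for PushCtx in the hunk above.
type Buf struct{ b []byte }

func (b *Buf) reset() { b.b = b.b[:0] }

var bufPool sync.Pool

// bufPoolCh keeps up to GOMAXPROCS hot objects; unlike sync.Pool its
// contents survive GC cycles, while sync.Pool absorbs any overflow.
var bufPoolCh = make(chan *Buf, runtime.GOMAXPROCS(-1))

// GetBuf returns a pooled Buf, preferring the GC-proof channel fast path.
func GetBuf() *Buf {
	select {
	case b := <-bufPoolCh:
		return b
	default:
		if v := bufPool.Get(); v != nil {
			return v.(*Buf)
		}
		return &Buf{}
	}
}

// PutBuf resets b and returns it to the channel, falling back to sync.Pool
// when the channel is full. b mustn't be used afterwards.
func PutBuf(b *Buf) {
	b.reset()
	select {
	case bufPoolCh <- b:
	default:
		bufPool.Put(b)
	}
}
```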
@@ -10,6 +10,7 @@ import (
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
    parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
@@ -159,15 +160,25 @@ func (ctx *pushCtx) reset() {
}

func getPushCtx() *pushCtx {
    if v := pushCtxPool.Get(); v != nil {
        return v.(*pushCtx)
    select {
    case ctx := <-pushCtxPoolCh:
        return ctx
    default:
        if v := pushCtxPool.Get(); v != nil {
            return v.(*pushCtx)
        }
        return &pushCtx{}
    }
    return &pushCtx{}
}

func putPushCtx(ctx *pushCtx) {
    ctx.reset()
    pushCtxPool.Put(ctx)
    select {
    case pushCtxPoolCh <- ctx:
    default:
        pushCtxPool.Put(ctx)
    }
}

var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, cgroup.AvailableCPUs())
@@ -70,8 +70,8 @@ var (
        "See also -opentsdbHTTPListenAddr.useProxyProtocol")
    opentsdbHTTPUseProxyProtocol = flag.Bool("opentsdbHTTPListenAddr.useProxyProtocol", false, "Whether to use proxy protocol for connections accepted "+
        "at -opentsdbHTTPListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt")
    configAuthKey = flagutil.NewPassword("configAuthKey", "Authorization key for accessing /config page. It must be passed via authKey query arg. It overrides -httpAuth.*")
    reloadAuthKey = flagutil.NewPassword("reloadAuthKey", "Auth key for /-/reload http endpoint. It must be passed via authKey query arg. It overrides -httpAuth.*")
    configAuthKey = flagutil.NewPassword("configAuthKey", "Authorization key for accessing /config page. It must be passed via authKey query arg")
    reloadAuthKey = flagutil.NewPassword("reloadAuthKey", "Auth key for /-/reload http endpoint. It must be passed as authKey=...")
    dryRun = flag.Bool("dryRun", false, "Whether to check config files without running vmagent. The following files are checked: "+
        "-promscrape.config, -remoteWrite.relabelConfig, -remoteWrite.urlRelabelConfig, -remoteWrite.streamAggr.config . "+
        "Unknown config entries aren't allowed in -promscrape.config by default. This can be changed by passing -promscrape.config.strictParse=false command-line flag")
@@ -114,7 +114,7 @@ func main() {
        logger.Fatalf("error when checking relabel configs: %s", err)
    }
    if err := remotewrite.CheckStreamAggrConfigs(); err != nil {
        logger.Fatalf("error when checking -streamAggr.config and -remoteWrite.streamAggr.config: %s", err)
        logger.Fatalf("error when checking -remoteWrite.streamAggr.config: %s", err)
    }
    logger.Infof("all the configs are ok; exiting with 0 status code")
    return
@@ -225,7 +225,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
    }
    w.Header().Add("Content-Type", "text/html; charset=utf-8")
    fmt.Fprintf(w, "<h2>vmagent</h2>")
    fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/vmagent/'>https://docs.victoriametrics.com/vmagent/</a></br>")
    fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/vmagent.html'>https://docs.victoriametrics.com/vmagent.html</a></br>")
    fmt.Fprintf(w, "Useful endpoints:</br>")
    httpserver.WriteAPIHelp(w, [][2]string{
        {"targets", "status for discovered active targets"},
@@ -434,7 +434,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
        }
        return true
    case "/prometheus/config", "/config":
        if !httpserver.CheckAuthFlag(w, r, configAuthKey) {
        if !httpserver.CheckAuthFlag(w, r, configAuthKey.Get(), "configAuthKey") {
            return true
        }
        promscrapeConfigRequests.Inc()
@@ -443,7 +443,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
        return true
    case "/prometheus/api/v1/status/config", "/api/v1/status/config":
        // See https://prometheus.io/docs/prometheus/latest/querying/api/#config
        if !httpserver.CheckAuthFlag(w, r, configAuthKey) {
        if !httpserver.CheckAuthFlag(w, r, configAuthKey.Get(), "configAuthKey") {
            return true
        }
        promscrapeStatusConfigRequests.Inc()
@@ -453,7 +453,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
        fmt.Fprintf(w, `{"status":"success","data":{"yaml":%q}}`, bb.B)
        return true
    case "/prometheus/-/reload", "/-/reload":
        if !httpserver.CheckAuthFlag(w, r, reloadAuthKey) {
        if !httpserver.CheckAuthFlag(w, r, reloadAuthKey.Get(), "reloadAuthKey") {
            return true
        }
        promscrapeConfigReloadRequests.Inc()
@@ -718,7 +718,7 @@ func usage() {
    const s = `
vmagent collects metrics data via popular data ingestion protocols and routes it to VictoriaMetrics.

See the docs at https://docs.victoriametrics.com/vmagent/ .
See the docs at https://docs.victoriametrics.com/vmagent.html .
`
    flagutil.Usage(s)
}
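In practice these keys are supplied as a query parameter, as the flag descriptions state; a hypothetical reload request would look like `GET http://vmagent:8429/-/reload?authKey=<secret>` (host and port are illustrative).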
@@ -1,7 +1,7 @@
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
ARG certs_image
ARG root_image
FROM $certs_image AS certs
FROM $certs_image as certs
RUN apk update && apk upgrade && apk --update --no-cache add ca-certificates

FROM $root_image

@@ -26,11 +26,11 @@ func TestInsertHandler(t *testing.T) {
    req := httptest.NewRequest(http.MethodPost, "/insert/0/api/v1/import/prometheus", bytes.NewBufferString(`{"foo":"bar"}
go_memstats_alloc_bytes_total 1`))
    if err := InsertHandler(nil, req); err != nil {
        t.Fatalf("unxepected error %s", err)
        t.Errorf("unxepected error %s", err)
    }
    expectedMsg := "cannot unmarshal Prometheus line"
    if !strings.Contains(testOutput.String(), expectedMsg) {
        t.Fatalf("output %q should contain %q", testOutput.String(), expectedMsg)
        t.Errorf("output %q should contain %q", testOutput.String(), expectedMsg)
    }
}
@@ -11,25 +11,23 @@ import (
    "sync"
    "time"

    "github.com/VictoriaMetrics/metrics"

    "github.com/VictoriaMetrics/VictoriaMetrics/lib/awsapi"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
    "github.com/VictoriaMetrics/metrics"
)

var (
    forcePromProto = flagutil.NewArrayBool("remoteWrite.forcePromProto", "Whether to force Prometheus remote write protocol for sending data "+
        "to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/vmagent/#victoriametrics-remote-write-protocol")
        "to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/vmagent.html#victoriametrics-remote-write-protocol")
    forceVMProto = flagutil.NewArrayBool("remoteWrite.forceVMProto", "Whether to force VictoriaMetrics remote write protocol for sending data "+
        "to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/vmagent/#victoriametrics-remote-write-protocol")
        "to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/vmagent.html#victoriametrics-remote-write-protocol")

    rateLimit = flagutil.NewArrayInt("remoteWrite.rateLimit", 0, "Optional rate limit in bytes per second for data sent to the corresponding -remoteWrite.url. "+
        "By default, the rate limit is disabled. It can be useful for limiting load on remote storage when big amounts of buffered data "+
@@ -38,7 +36,7 @@ var (
    proxyURL = flagutil.NewArrayString("remoteWrite.proxyURL", "Optional proxy URL for writing data to the corresponding -remoteWrite.url. "+
        "Supported proxies: http, https, socks5. Example: -remoteWrite.proxyURL=socks5://proxy:1234")

    tlsHandshakeTimeout = flagutil.NewArrayDuration("remoteWrite.tlsHandshakeTimeout", 20*time.Second, "The timeout for establishing tls connections to the corresponding -remoteWrite.url")
    tlsHandshakeTimeout = flagutil.NewArrayDuration("remoteWrite.tlsHandshakeTimeout", 20*time.Second, "The timeout for estabilishing tls connections to the corresponding -remoteWrite.url")
    tlsInsecureSkipVerify = flagutil.NewArrayBool("remoteWrite.tlsInsecureSkipVerify", "Whether to skip tls verification when connecting to the corresponding -remoteWrite.url")
    tlsCertFile = flagutil.NewArrayString("remoteWrite.tlsCertFile", "Optional path to client-side TLS certificate file to use when connecting "+
        "to the corresponding -remoteWrite.url")
@@ -120,7 +118,7 @@ func newHTTPClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persiste
        logger.Fatalf("cannot initialize AWS Config for -remoteWrite.url=%q: %s", remoteWriteURL, err)
    }
    tr := &http.Transport{
        DialContext:         netutil.NewStatDialFunc("vmagent_remotewrite"),
        DialContext:         statDial,
        TLSHandshakeTimeout: tlsHandshakeTimeout.GetOptionalArg(argIdx),
        MaxConnsPerHost:     2 * concurrency,
        MaxIdleConnsPerHost: 2 * concurrency,
@@ -166,7 +164,7 @@ func newHTTPClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persiste
        useVMProto = common.HandleVMProtoClientHandshake(c.remoteWriteURL, doRequest)
        if !useVMProto {
            logger.Infof("the remote storage at %q doesn't support VictoriaMetrics remote write protocol. Switching to Prometheus remote write protocol. "+
                "See https://docs.victoriametrics.com/vmagent/#victoriametrics-remote-write-protocol", sanitizedURL)
                "See https://docs.victoriametrics.com/vmagent.html#victoriametrics-remote-write-protocol", sanitizedURL)
        }
    }
    c.useVMProto = useVMProto
@@ -300,11 +298,6 @@ func (c *client) runWorker() {
        if !ok {
            return
        }
        if len(block) == 0 {
            // skip empty data blocks from sending
            // see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6241
            continue
        }
        go func() {
            startTime := time.Now()
            ch <- c.sendBlock(block)
@@ -28,7 +28,7 @@ var (
    maxRowsPerBlock = flag.Int("remoteWrite.maxRowsPerBlock", 10000, "The maximum number of samples to send in each block to remote storage. Higher number may improve performance at the cost of the increased memory usage. See also -remoteWrite.maxBlockSize")
    vmProtoCompressLevel = flag.Int("remoteWrite.vmProtoCompressLevel", 0, "The compression level for VictoriaMetrics remote write protocol. "+
        "Higher values reduce network traffic at the cost of higher CPU usage. Negative values reduce CPU usage at the cost of increased network traffic. "+
        "See https://docs.victoriametrics.com/vmagent/#victoriametrics-remote-write-protocol")
        "See https://docs.victoriametrics.com/vmagent.html#victoriametrics-remote-write-protocol")
)

type pendingSeries struct {
@@ -122,7 +122,11 @@ func (wr *writeRequest) reset() {

    wr.wr.Timeseries = nil

    clear(wr.tss)
    for i := range wr.tss {
        ts := &wr.tss[i]
        ts.Labels = nil
        ts.Samples = nil
    }
    wr.tss = wr.tss[:0]

    promrelabel.CleanLabels(wr.labels)
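The hunk above swaps Go 1.21's built-in `clear()` for a manual nil-out loop; both exist to drop references held in the reusable slice's backing array so the garbage collector can reclaim the per-series `Labels` and `Samples`, which truncating with `tss[:0]` alone would keep alive. A standalone sketch with a hypothetical `TimeSeries` type:

```go
package main

// TimeSeries is a stand-in for prompbmarshal.TimeSeries in the hunk above.
type TimeSeries struct {
	Labels  []string
	Samples []float64
}

// resetForReuse truncates tss for reuse while zeroing its elements, so the
// backing array no longer pins the old Labels/Samples slices in memory.
func resetForReuse(tss []TimeSeries) []TimeSeries {
	clear(tss) // Go 1.21+; equivalent in effect to the manual nil-out loop
	return tss[:0]
}

func main() {
	tss := []TimeSeries{{Labels: []string{"job"}, Samples: []float64{1}}}
	tss = resetForReuse(tss)
	_ = tss // the backing array is now free of stale references
}
```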
@@ -34,7 +34,7 @@ func testPushWriteRequest(t *testing.T, rowsCount, expectedBlockLenProm, expecte
        return true
    }
    if !tryPushWriteRequest(wr, pushBlock, isVMRemoteWrite) {
        t.Fatalf("cannot push data to remote storage")
        t.Fatalf("cannot push data to to remote storage")
    }
    if math.Abs(float64(pushBlockLen-expectedBlockLen)/float64(expectedBlockLen)*100) > tolerancePrc {
        t.Fatalf("unexpected block len for rowsCount=%d, isVMRemoteWrite=%v; got %d bytes; expecting %d bytes +- %.0f%%",
@@ -19,10 +19,10 @@ var (
    relabelConfigPathGlobal = flag.String("remoteWrite.relabelConfig", "", "Optional path to file with relabeling configs, which are applied "+
        "to all the metrics before sending them to -remoteWrite.url. See also -remoteWrite.urlRelabelConfig. "+
        "The path can point either to local file or to http url. "+
        "See https://docs.victoriametrics.com/vmagent/#relabeling")
        "See https://docs.victoriametrics.com/vmagent.html#relabeling")
    relabelConfigPaths = flagutil.NewArrayString("remoteWrite.urlRelabelConfig", "Optional path to relabel configs for the corresponding -remoteWrite.url. "+
        "See also -remoteWrite.relabelConfig. The path can point either to local file or to http url. "+
        "See https://docs.victoriametrics.com/vmagent/#relabeling")
        "See https://docs.victoriametrics.com/vmagent.html#relabeling")

    usePromCompatibleNaming = flag.Bool("usePromCompatibleNaming", false, "Whether to replace characters unsupported by Prometheus with underscores "+
        "in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. "+
@@ -46,11 +46,11 @@ func loadRelabelConfigs() (*relabelConfigs, error) {
        }
        rcs.global = global
    }
    if len(*relabelConfigPaths) > len(*remoteWriteURLs) {
        return nil, fmt.Errorf("too many -remoteWrite.urlRelabelConfig args: %d; it mustn't exceed the number of -remoteWrite.url args: %d",
            len(*relabelConfigPaths), (len(*remoteWriteURLs)))
    if len(*relabelConfigPaths) > (len(*remoteWriteURLs) + len(*remoteWriteMultitenantURLs)) {
        return nil, fmt.Errorf("too many -remoteWrite.urlRelabelConfig args: %d; it mustn't exceed the number of -remoteWrite.url or -remoteWrite.multitenantURL args: %d",
            len(*relabelConfigPaths), (len(*remoteWriteURLs) + len(*remoteWriteMultitenantURLs)))
    }
    rcs.perURL = make([]*promrelabel.ParsedConfigs, len(*remoteWriteURLs))
    rcs.perURL = make([]*promrelabel.ParsedConfigs, (len(*remoteWriteURLs) + len(*remoteWriteMultitenantURLs)))
    for i, path := range *relabelConfigPaths {
        if len(path) == 0 {
            // Skip empty relabel config.
@@ -181,7 +181,7 @@ func (rctx *relabelCtx) reset() {
}

var relabelCtxPool = &sync.Pool{
    New: func() any {
    New: func() interface{} {
        return &relabelCtx{}
    },
}
@@ -6,7 +6,6 @@ import (
    "net/http"
    "net/url"
    "path/filepath"
    "slices"
    "strconv"
    "sync"
    "sync/atomic"
@@ -30,6 +29,7 @@ import (
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
    "github.com/VictoriaMetrics/metrics"
    "github.com/cespare/xxhash/v2"
)
@@ -39,23 +39,22 @@ var (
        "or Prometheus remote_write protocol. Example url: http://<victoriametrics-host>:8428/api/v1/write . "+
        "Pass multiple -remoteWrite.url options in order to replicate the collected data to multiple remote storage systems. "+
        "The data can be sharded among the configured remote storage systems if -remoteWrite.shardByURL flag is set")
    remoteWriteMultitenantURLs = flagutil.NewArrayString("remoteWrite.multitenantURL", "Base path for multitenant remote storage URL to write data to. "+
        "See https://docs.victoriametrics.com/vmagent.html#multitenancy for details. Example url: http://<vminsert>:8480 . "+
        "Pass multiple -remoteWrite.multitenantURL flags in order to replicate data to multiple remote storage systems. "+
        "This flag is deprecated in favor of -enableMultitenantHandlers . See https://docs.victoriametrics.com/vmagent.html#multitenancy")
    enableMultitenantHandlers = flag.Bool("enableMultitenantHandlers", false, "Whether to process incoming data via multitenant insert handlers according to "+
        "https://docs.victoriametrics.com/cluster-victoriametrics/#url-format . By default incoming data is processed via single-node insert handlers "+
        "https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format . By default incoming data is processed via single-node insert handlers "+
        "according to https://docs.victoriametrics.com/#how-to-import-time-series-data ."+
        "See https://docs.victoriametrics.com/vmagent/#multitenancy for details")

        "See https://docs.victoriametrics.com/vmagent.html#multitenancy for details")
    shardByURL = flag.Bool("remoteWrite.shardByURL", false, "Whether to shard outgoing series across all the remote storage systems enumerated via -remoteWrite.url . "+
        "By default the data is replicated across all the -remoteWrite.url . See https://docs.victoriametrics.com/vmagent/#sharding-among-remote-storages . "+
        "See also -remoteWrite.shardByURLReplicas")
    shardByURLReplicas = flag.Int("remoteWrite.shardByURLReplicas", 1, "How many copies of data to make among remote storage systems enumerated via -remoteWrite.url "+
        "when -remoteWrite.shardByURL is set. See https://docs.victoriametrics.com/vmagent/#sharding-among-remote-storages")
        "By default the data is replicated across all the -remoteWrite.url . See https://docs.victoriametrics.com/vmagent.html#sharding-among-remote-storages")
    shardByURLLabels = flagutil.NewArrayString("remoteWrite.shardByURL.labels", "Optional list of labels, which must be used for sharding outgoing samples "+
        "among remote storage systems if -remoteWrite.shardByURL command-line flag is set. By default all the labels are used for sharding in order to gain "+
        "even distribution of series over the specified -remoteWrite.url systems. See also -remoteWrite.shardByURL.ignoreLabels")
    shardByURLIgnoreLabels = flagutil.NewArrayString("remoteWrite.shardByURL.ignoreLabels", "Optional list of labels, which must be ignored when sharding outgoing samples "+
        "among remote storage systems if -remoteWrite.shardByURL command-line flag is set. By default all the labels are used for sharding in order to gain "+
        "even distribution of series over the specified -remoteWrite.url systems. See also -remoteWrite.shardByURL.labels")

    tmpDataPath = flag.String("remoteWrite.tmpDataPath", "vmagent-remotewrite-data", "Path to directory for storing pending data, which isn't sent to the configured -remoteWrite.url . "+
        "See also -remoteWrite.maxDiskUsagePerURL and -remoteWrite.disableOnDiskQueue")
    keepDanglingQueues = flag.Bool("remoteWrite.keepDanglingQueues", false, "Keep persistent queues contents at -remoteWrite.tmpDataPath in case there are no matching -remoteWrite.url. "+
@@ -82,55 +81,63 @@ var (
        `For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}`+
        `Enabled sorting for labels can slow down ingestion performance a bit`)
    maxHourlySeries = flag.Int("remoteWrite.maxHourlySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last hour. "+
        "Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/vmagent/#cardinality-limiter")
        "Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/vmagent.html#cardinality-limiter")
    maxDailySeries = flag.Int("remoteWrite.maxDailySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last 24 hours. "+
        "Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/vmagent/#cardinality-limiter")
        "Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/vmagent.html#cardinality-limiter")
    maxIngestionRate = flag.Int("maxIngestionRate", 0, "The maximum number of samples vmagent can receive per second. Data ingestion is paused when the limit is exceeded. "+
        "By default there are no limits on samples ingestion rate. See also -remoteWrite.rateLimit")

    disableOnDiskQueue = flagutil.NewArrayBool("remoteWrite.disableOnDiskQueue", "Whether to disable storing pending data to -remoteWrite.tmpDataPath "+
        "when the remote storage system at the corresponding -remoteWrite.url cannot keep up with the data ingestion rate. "+
        "See https://docs.victoriametrics.com/vmagent#disabling-on-disk-persistence . See also -remoteWrite.dropSamplesOnOverload")
    streamAggrConfig = flagutil.NewArrayString("remoteWrite.streamAggr.config", "Optional path to file with stream aggregation config. "+
        "See https://docs.victoriametrics.com/stream-aggregation.html . "+
        "See also -remoteWrite.streamAggr.keepInput, -remoteWrite.streamAggr.dropInput and -remoteWrite.streamAggr.dedupInterval")
    streamAggrKeepInput = flagutil.NewArrayBool("remoteWrite.streamAggr.keepInput", "Whether to keep all the input samples after the aggregation "+
        "with -remoteWrite.streamAggr.config. By default, only aggregates samples are dropped, while the remaining samples "+
        "are written to the corresponding -remoteWrite.url . See also -remoteWrite.streamAggr.dropInput and https://docs.victoriametrics.com/stream-aggregation.html")
    streamAggrDropInput = flagutil.NewArrayBool("remoteWrite.streamAggr.dropInput", "Whether to drop all the input samples after the aggregation "+
        "with -remoteWrite.streamAggr.config. By default, only aggregates samples are dropped, while the remaining samples "+
        "are written to the corresponding -remoteWrite.url . See also -remoteWrite.streamAggr.keepInput and https://docs.victoriametrics.com/stream-aggregation.html")
    streamAggrDedupInterval = flagutil.NewArrayDuration("remoteWrite.streamAggr.dedupInterval", 0, "Input samples are de-duplicated with this interval before optional aggregation "+
        "with -remoteWrite.streamAggr.config . See also -dedup.minScrapeInterval and https://docs.victoriametrics.com/stream-aggregation.html#deduplication")
    streamAggrIgnoreOldSamples = flagutil.NewArrayBool("remoteWrite.streamAggr.ignoreOldSamples", "Whether to ignore input samples with old timestamps outside the current aggregation interval "+
        "for the corresponding -remoteWrite.streamAggr.config . See https://docs.victoriametrics.com/stream-aggregation.html#ignoring-old-samples")
    streamAggrDropInputLabels = flagutil.NewArrayString("streamAggr.dropInputLabels", "An optional list of labels to drop from samples "+
        "before stream de-duplication and aggregation . See https://docs.victoriametrics.com/stream-aggregation.html#dropping-unneeded-labels")

    disableOnDiskQueue = flag.Bool("remoteWrite.disableOnDiskQueue", false, "Whether to disable storing pending data to -remoteWrite.tmpDataPath "+
        "when the configured remote storage systems cannot keep up with the data ingestion rate. See https://docs.victoriametrics.com/vmagent.html#disabling-on-disk-persistence ."+
        "See also -remoteWrite.dropSamplesOnOverload")
    dropSamplesOnOverload = flag.Bool("remoteWrite.dropSamplesOnOverload", false, "Whether to drop samples when -remoteWrite.disableOnDiskQueue is set and if the samples "+
        "cannot be pushed into the configured -remoteWrite.url systems in a timely manner. See https://docs.victoriametrics.com/vmagent#disabling-on-disk-persistence")
        "cannot be pushed into the configured remote storage systems in a timely manner. See https://docs.victoriametrics.com/vmagent.html#disabling-on-disk-persistence")
)

var (
    // rwctxsGlobal contains statically populated entries when -remoteWrite.url is specified.
    rwctxsGlobal []*remoteWriteCtx
    // rwctxsDefault contains statically populated entries when -remoteWrite.url is specified.
    rwctxsDefault []*remoteWriteCtx

    // Data without tenant id is written to defaultAuthToken if -enableMultitenantHandlers is specified.
    // rwctxsMap contains dynamically populated entries when -remoteWrite.multitenantURL is specified.
    rwctxsMap = make(map[tenantmetrics.TenantID][]*remoteWriteCtx)
    rwctxsMapLock sync.Mutex

    // Data without tenant id is written to defaultAuthToken if -remoteWrite.multitenantURL is specified.
    defaultAuthToken = &auth.Token{}

    // ErrQueueFullHTTPRetry must be returned when TryPush() returns false.
    ErrQueueFullHTTPRetry = &httpserver.ErrorWithStatusCode{
        Err: fmt.Errorf("remote storage systems cannot keep up with the data ingestion rate; retry the request later " +
            "or remove -remoteWrite.disableOnDiskQueue from vmagent command-line flags, so it could save pending data to -remoteWrite.tmpDataPath; " +
            "see https://docs.victoriametrics.com/vmagent/#disabling-on-disk-persistence"),
            "see https://docs.victoriametrics.com/vmagent.html#disabling-on-disk-persistence"),
        StatusCode: http.StatusTooManyRequests,
    }

    // disableOnDiskQueueAny is set to true if at least a single -remoteWrite.url is configured with -remoteWrite.disableOnDiskQueue
    disableOnDiskQueueAny bool

    // dropSamplesOnFailureGlobal is set to true if -remoteWrite.dropSamplesOnOverload is set or if multiple -remoteWrite.disableOnDiskQueue options are set.
    dropSamplesOnFailureGlobal bool
)

// MultitenancyEnabled returns true if -enableMultitenantHandlers is specified.
// MultitenancyEnabled returns true if -enableMultitenantHandlers or -remoteWrite.multitenantURL is specified.
func MultitenancyEnabled() bool {
    return *enableMultitenantHandlers
    return *enableMultitenantHandlers || len(*remoteWriteMultitenantURLs) > 0
}

// Contains the current relabelConfigs.
var allRelabelConfigs atomic.Pointer[relabelConfigs]

// Contains the current global stream aggregators.
var sasGlobal atomic.Pointer[streamaggr.Aggregators]

// Contains the current global deduplicator.
var deduplicatorGlobal *streamaggr.Deduplicator

// maxQueues limits the maximum value for `-remoteWrite.queues`. There is no sense in setting too high value,
// since it may lead to high memory usage due to big number of buffers.
var maxQueues = cgroup.AvailableCPUs() * 16
@@ -156,8 +163,11 @@ var (
|
||||
//
|
||||
// Stop must be called for graceful shutdown.
|
||||
func Init() {
|
||||
if len(*remoteWriteURLs) == 0 {
|
||||
logger.Fatalf("at least one `-remoteWrite.url` command-line flag must be set")
|
||||
if len(*remoteWriteURLs) == 0 && len(*remoteWriteMultitenantURLs) == 0 {
|
||||
logger.Fatalf("at least one `-remoteWrite.url` or `-remoteWrite.multitenantURL` command-line flag must be set")
|
||||
}
|
||||
if len(*remoteWriteURLs) > 0 && len(*remoteWriteMultitenantURLs) > 0 {
|
||||
logger.Fatalf("cannot set both `-remoteWrite.url` and `-remoteWrite.multitenantURL` command-line flags")
|
||||
}
|
||||
if *maxHourlySeries > 0 {
|
||||
hourlySeriesLimiter = bloomfilter.NewLimiter(*maxHourlySeries, time.Hour)
|
||||
@@ -207,19 +217,9 @@ func Init() {
|
||||
relabelConfigSuccess.Set(1)
|
||||
relabelConfigTimestamp.Set(fasttime.UnixTimestamp())
|
||||
|
||||
initStreamAggrConfigGlobal()
|
||||
|
||||
rwctxsGlobal = newRemoteWriteCtxs(nil, *remoteWriteURLs)
|
||||
|
||||
disableOnDiskQueues := []bool(*disableOnDiskQueue)
|
||||
disableOnDiskQueueAny = slices.Contains(disableOnDiskQueues, true)
|
||||
|
||||
// Samples must be dropped if multiple -remoteWrite.disableOnDiskQueue options are configured and at least a single one is set to true.
// In this case it is impossible to prevent sending many duplicates of the samples passed to TryPush() to all the configured -remoteWrite.url
// if these samples couldn't be sent to the -remoteWrite.url with the disabled persistent queue. So it is better to send the samples
// to the remaining -remoteWrite.url and drop them on the blocked queue.
dropSamplesOnFailureGlobal = *dropSamplesOnOverload || disableOnDiskQueueAny && len(disableOnDiskQueues) > 1
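For example (illustrative flag values): with -remoteWrite.disableOnDiskQueue=true,false across two -remoteWrite.url entries, disableOnDiskQueueAny is true and len(disableOnDiskQueues) is 2, so dropSamplesOnFailureGlobal is set even when -remoteWrite.dropSamplesOnOverload is left unset - a retry would re-send the same samples to the healthy URL, so dropping them on the blocked queue is the lesser evil.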
|
||||
|
||||
if len(*remoteWriteURLs) > 0 {
|
||||
rwctxsDefault = newRemoteWriteCtxs(nil, *remoteWriteURLs)
|
||||
}
|
||||
dropDanglingQueues()
|
||||
|
||||
// Start config reloader.
|
||||
@@ -228,9 +228,9 @@ func Init() {
|
||||
defer configReloaderWG.Done()
|
||||
for {
|
||||
select {
|
||||
case <-sighupCh:
|
||||
case <-configReloaderStopCh:
|
||||
return
|
||||
case <-sighupCh:
|
||||
}
|
||||
reloadRelabelConfigs()
|
||||
reloadStreamAggrConfigs()
|
||||
@@ -242,15 +242,18 @@ func dropDanglingQueues() {
|
||||
if *keepDanglingQueues {
|
||||
return
|
||||
}
|
||||
if len(*remoteWriteMultitenantURLs) > 0 {
|
||||
// Do not drop dangling queues for *remoteWriteMultitenantURLs, since it is impossible to determine
|
||||
// unused queues for multitenant urls - they are created on demand when a new sample for the given
|
||||
// tenant is pushed to remote storage.
|
||||
return
|
||||
}
|
||||
// Remove dangling persistent queues, if any.
// This is required for the case when the number of queues or the URL has been changed.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4014
//
// If there were many persistent queues with identical *remoteWriteURLs,
// the queue with the last index will be dropped.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6140
|
||||
existingQueues := make(map[string]struct{}, len(rwctxsGlobal))
|
||||
for _, rwctx := range rwctxsGlobal {
|
||||
existingQueues := make(map[string]struct{}, len(rwctxsDefault))
|
||||
for _, rwctx := range rwctxsDefault {
|
||||
existingQueues[rwctx.fq.Dirname()] = struct{}{}
|
||||
}
|
||||
|
||||
@@ -267,7 +270,7 @@ func dropDanglingQueues() {
|
||||
}
|
||||
}
|
||||
if removed > 0 {
|
||||
logger.Infof("removed %d dangling queues from %q, active queues: %d", removed, *tmpDataPath, len(rwctxsGlobal))
|
||||
logger.Infof("removed %d dangling queues from %q, active queues: %d", removed, *tmpDataPath, len(rwctxsDefault))
|
||||
}
|
||||
}
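The hunk elides the actual scan, but the shape is: list the queue directories under -remoteWrite.tmpDataPath and remove every one that no active context references. A rough sketch, under the assumption that existingQueues holds queue directory names as produced by fq.Dirname() (imports and error handling abbreviated):

// Sketch only: walks tmpDataPath and drops queue dirs that are no longer
// referenced by any active remoteWriteCtx.
func removeDanglingQueueDirs(tmpDataPath string, existingQueues map[string]struct{}) int {
	removed := 0
	des, err := os.ReadDir(tmpDataPath)
	if err != nil {
		return 0
	}
	for _, de := range des {
		if !de.IsDir() {
			continue
		}
		if _, ok := existingQueues[de.Name()]; ok {
			continue // still in use by an active -remoteWrite.url
		}
		_ = os.RemoveAll(filepath.Join(tmpDataPath, de.Name()))
		removed++
	}
	return removed
}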
|
||||
|
||||
@@ -294,6 +297,24 @@ var (
|
||||
relabelConfigTimestamp = metrics.NewCounter(`vmagent_relabel_config_last_reload_success_timestamp_seconds`)
|
||||
)
|
||||
|
||||
func reloadStreamAggrConfigs() {
|
||||
if len(*remoteWriteMultitenantURLs) > 0 {
|
||||
rwctxsMapLock.Lock()
|
||||
for _, rwctxs := range rwctxsMap {
|
||||
reinitStreamAggr(rwctxs)
|
||||
}
|
||||
rwctxsMapLock.Unlock()
|
||||
} else {
|
||||
reinitStreamAggr(rwctxsDefault)
|
||||
}
|
||||
}
|
||||
|
||||
func reinitStreamAggr(rwctxs []*remoteWriteCtx) {
|
||||
for _, rwctx := range rwctxs {
|
||||
rwctx.reinitStreamAggr()
|
||||
}
|
||||
}
|
||||
|
||||
func newRemoteWriteCtxs(at *auth.Token, urls []string) []*remoteWriteCtx {
|
||||
if len(urls) == 0 {
|
||||
logger.Panicf("BUG: urls must be non-empty")
|
||||
@@ -317,7 +338,7 @@ func newRemoteWriteCtxs(at *auth.Token, urls []string) []*remoteWriteCtx {
|
||||
}
|
||||
sanitizedURL := fmt.Sprintf("%d:secret-url", i+1)
|
||||
if at != nil {
|
||||
// Construct full remote_write url for the given tenant according to https://docs.victoriametrics.com/cluster-victoriametrics/#url-format
|
||||
// Construct full remote_write url for the given tenant according to https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format
|
||||
remoteWriteURL.Path = fmt.Sprintf("%s/insert/%d:%d/prometheus/api/v1/write", remoteWriteURL.Path, at.AccountID, at.ProjectID)
|
||||
sanitizedURL = fmt.Sprintf("%s:%d:%d", sanitizedURL, at.AccountID, at.ProjectID)
|
||||
}
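For illustration (made-up values): with a base -remoteWrite.multitenantURL of http://vminsert:8480 and a tenant auth.Token{AccountID: 42, ProjectID: 0}, the code above yields the per-tenant endpoint http://vminsert:8480/insert/42:0/prometheus/api/v1/write, while the sanitized name logged for it becomes 1:secret-url:42:0.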
|
||||
@@ -329,10 +350,8 @@ func newRemoteWriteCtxs(at *auth.Token, urls []string) []*remoteWriteCtx {
|
||||
return rwctxs
|
||||
}
|
||||
|
||||
var (
|
||||
configReloaderStopCh = make(chan struct{})
|
||||
configReloaderWG sync.WaitGroup
|
||||
)
|
||||
var configReloaderStopCh = make(chan struct{})
|
||||
var configReloaderWG sync.WaitGroup
|
||||
|
||||
// StartIngestionRateLimiter starts ingestion rate limiter.
|
||||
//
|
||||
@@ -370,16 +389,18 @@ func Stop() {
|
||||
close(configReloaderStopCh)
|
||||
configReloaderWG.Wait()
|
||||
|
||||
sasGlobal.Load().MustStop()
|
||||
if deduplicatorGlobal != nil {
|
||||
deduplicatorGlobal.MustStop()
|
||||
deduplicatorGlobal = nil
|
||||
}
|
||||
|
||||
for _, rwctx := range rwctxsGlobal {
|
||||
for _, rwctx := range rwctxsDefault {
|
||||
rwctx.MustStop()
|
||||
}
|
||||
rwctxsGlobal = nil
|
||||
rwctxsDefault = nil
|
||||
|
||||
// There is no need in locking rwctxsMapLock here, since nobody should call TryPush during the Stop call.
|
||||
for _, rwctxs := range rwctxsMap {
|
||||
for _, rwctx := range rwctxs {
|
||||
rwctx.MustStop()
|
||||
}
|
||||
}
|
||||
rwctxsMap = nil
|
||||
|
||||
if sl := hourlySeriesLimiter; sl != nil {
|
||||
sl.MustStop()
|
||||
@@ -389,26 +410,30 @@ func Stop() {
|
||||
}
|
||||
}
|
||||
|
||||
// PushDropSamplesOnFailure pushes wr to the configured remote storage systems set via -remoteWrite.url
|
||||
// PushDropSamplesOnFailure pushes wr to the configured remote storage systems set via -remoteWrite.url and -remoteWrite.multitenantURL
|
||||
//
|
||||
// PushDropSamplesOnFailure drops wr samples if they cannot be sent to -remoteWrite.url for any reason.
|
||||
// If at is nil, then the data is pushed to the configured -remoteWrite.url.
|
||||
// If at isn't nil, the data is pushed to the configured -remoteWrite.multitenantURL.
|
||||
//
|
||||
// PushDropSamplesOnFailure can modify wr contents.
|
||||
func PushDropSamplesOnFailure(at *auth.Token, wr *prompbmarshal.WriteRequest) {
|
||||
_ = tryPush(at, wr, true)
|
||||
}
|
||||
|
||||
// TryPush tries sending wr to the configured remote storage systems set via -remoteWrite.url
|
||||
// TryPush tries sending wr to the configured remote storage systems set via -remoteWrite.url and -remoteWrite.multitenantURL
|
||||
//
|
||||
// If at is nil, then the data is pushed to the configured -remoteWrite.url.
|
||||
// If at isn't nil, the data is pushed to the configured -remoteWrite.multitenantURL.
|
||||
//
|
||||
// TryPush can modify wr contents, so the caller must re-initialize wr before calling TryPush() after unsuccessful attempt.
|
||||
// TryPush may send partial data from wr on unsuccessful attempt, so repeated call for the same wr may send the data multiple times.
|
||||
//
|
||||
// The caller must return ErrQueueFullHTTPRetry to the client, which sends wr, if TryPush returns false.
|
||||
func TryPush(at *auth.Token, wr *prompbmarshal.WriteRequest) bool {
|
||||
return tryPush(at, wr, dropSamplesOnFailureGlobal)
|
||||
return tryPush(at, wr, *dropSamplesOnOverload)
|
||||
}
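A caller-side sketch of this contract follows; the buildWriteRequest helper is hypothetical, and the key point is that wr must be rebuilt before any retry, since a failed TryPush may have modified it and already sent part of it:

// Hypothetical HTTP ingestion handler honoring the TryPush contract above.
func handleRemoteWrite(w http.ResponseWriter, r *http.Request) {
	wr, err := buildWriteRequest(r.Body) // assumed helper: decodes the body into a fresh WriteRequest
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if !remotewrite.TryPush(nil, wr) {
		// Ask the client to retry later; this mirrors ErrQueueFullHTTPRetry.
		http.Error(w, remotewrite.ErrQueueFullHTTPRetry.Err.Error(), http.StatusTooManyRequests)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}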
|
||||
|
||||
func tryPush(at *auth.Token, wr *prompbmarshal.WriteRequest, forceDropSamplesOnFailure bool) bool {
|
||||
func tryPush(at *auth.Token, wr *prompbmarshal.WriteRequest, dropSamplesOnFailure bool) bool {
|
||||
tss := wr.Timeseries
|
||||
|
||||
if at == nil && MultitenancyEnabled() {
|
||||
@@ -417,25 +442,45 @@ func tryPush(at *auth.Token, wr *prompbmarshal.WriteRequest, forceDropSamplesOnF
|
||||
}
|
||||
|
||||
var tenantRctx *relabelCtx
|
||||
if at != nil {
|
||||
var rwctxs []*remoteWriteCtx
|
||||
if at == nil {
|
||||
rwctxs = rwctxsDefault
|
||||
} else if len(*remoteWriteMultitenantURLs) == 0 {
|
||||
// Convert at to (vm_account_id, vm_project_id) labels.
|
||||
tenantRctx = getRelabelCtx()
|
||||
defer putRelabelCtx(tenantRctx)
|
||||
rwctxs = rwctxsDefault
|
||||
} else {
|
||||
rwctxsMapLock.Lock()
|
||||
tenantID := tenantmetrics.TenantID{
|
||||
AccountID: at.AccountID,
|
||||
ProjectID: at.ProjectID,
|
||||
}
|
||||
rwctxs = rwctxsMap[tenantID]
|
||||
if rwctxs == nil {
|
||||
rwctxs = newRemoteWriteCtxs(at, *remoteWriteMultitenantURLs)
|
||||
rwctxsMap[tenantID] = rwctxs
|
||||
}
|
||||
rwctxsMapLock.Unlock()
|
||||
}
|
||||
|
||||
// Quick check whether writes to configured remote storage systems are blocked.
|
||||
// This allows saving CPU time spent on relabeling and block compression
|
||||
// if some of remote storage systems cannot keep up with the data ingestion rate.
|
||||
rwctxs, ok := getEligibleRemoteWriteCtxs(tss, forceDropSamplesOnFailure)
|
||||
if !ok {
|
||||
// At least a single remote write queue is blocked and dropSamplesOnFailure isn't set.
|
||||
// Return false to the caller, so it could re-send samples again.
|
||||
return false
|
||||
}
|
||||
if len(rwctxs) == 0 {
|
||||
// All the remote write queues are skipped because they are blocked and dropSamplesOnFailure is set to true.
|
||||
// Return true to the caller, so it doesn't re-send the samples again.
|
||||
return true
|
||||
rowsCount := getRowsCount(tss)
|
||||
|
||||
if *disableOnDiskQueue {
|
||||
// Quick check whether writes to configured remote storage systems are blocked.
|
||||
// This allows saving CPU time spent on relabeling and block compression
|
||||
// if some of remote storage systems cannot keep up with the data ingestion rate.
|
||||
for _, rwctx := range rwctxs {
|
||||
if rwctx.fq.IsWriteBlocked() {
|
||||
pushFailures.Inc()
|
||||
if dropSamplesOnFailure {
|
||||
// Just drop samples
|
||||
samplesDropped.Add(rowsCount)
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var rctx *relabelCtx
|
||||
@@ -445,14 +490,11 @@ func tryPush(at *auth.Token, wr *prompbmarshal.WriteRequest, forceDropSamplesOnF
|
||||
rctx = getRelabelCtx()
|
||||
defer putRelabelCtx(rctx)
|
||||
}
|
||||
rowsCount := getRowsCount(tss)
|
||||
globalRowsPushedBeforeRelabel.Add(rowsCount)
|
||||
maxSamplesPerBlock := *maxRowsPerBlock
|
||||
// Allow up to 10x of labels per each block on average.
|
||||
maxLabelsPerBlock := 10 * maxSamplesPerBlock
|
||||
|
||||
sas := sasGlobal.Load()
|
||||
|
||||
for len(tss) > 0 {
|
||||
// Process big tss in smaller blocks in order to reduce the maximum memory usage
|
||||
samplesCount := 0
|
||||
@@ -487,57 +529,27 @@ func tryPush(at *auth.Token, wr *prompbmarshal.WriteRequest, forceDropSamplesOnF
|
||||
}
|
||||
sortLabelsIfNeeded(tssBlock)
|
||||
tssBlock = limitSeriesCardinality(tssBlock)
|
||||
if sas.IsEnabled() {
|
||||
matchIdxs := matchIdxsPool.Get()
|
||||
matchIdxs.B = sas.Push(tssBlock, matchIdxs.B)
|
||||
if !*streamAggrGlobalKeepInput {
|
||||
tssBlock = dropAggregatedSeries(tssBlock, matchIdxs.B, *streamAggrGlobalDropInput)
|
||||
if !tryPushBlockToRemoteStorages(rwctxs, tssBlock) {
|
||||
if !*disableOnDiskQueue {
|
||||
logger.Panicf("BUG: tryPushBlockToRemoteStorages must return true if -remoteWrite.disableOnDiskQueue isn't set")
|
||||
}
|
||||
pushFailures.Inc()
|
||||
if dropSamplesOnFailure {
|
||||
samplesDropped.Add(rowsCount)
|
||||
return true
|
||||
}
|
||||
matchIdxsPool.Put(matchIdxs)
|
||||
} else if deduplicatorGlobal != nil {
|
||||
deduplicatorGlobal.Push(tssBlock)
|
||||
tssBlock = tssBlock[:0]
|
||||
}
|
||||
if !tryPushBlockToRemoteStorages(rwctxs, tssBlock, forceDropSamplesOnFailure) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func getEligibleRemoteWriteCtxs(tss []prompbmarshal.TimeSeries, forceDropSamplesOnFailure bool) ([]*remoteWriteCtx, bool) {
|
||||
if !disableOnDiskQueueAny {
|
||||
return rwctxsGlobal, true
|
||||
}
|
||||
var (
|
||||
samplesDropped = metrics.NewCounter(`vmagent_remotewrite_samples_dropped_total`)
|
||||
pushFailures = metrics.NewCounter(`vmagent_remotewrite_push_failures_total`)
|
||||
)
|
||||
|
||||
// This code is applicable if at least a single remote storage has -disableOnDiskQueue
|
||||
rwctxs := make([]*remoteWriteCtx, 0, len(rwctxsGlobal))
|
||||
for _, rwctx := range rwctxsGlobal {
|
||||
if !rwctx.fq.IsWriteBlocked() {
|
||||
rwctxs = append(rwctxs, rwctx)
|
||||
} else {
|
||||
rwctx.pushFailures.Inc()
|
||||
if !forceDropSamplesOnFailure {
|
||||
return nil, false
|
||||
}
|
||||
rowsCount := getRowsCount(tss)
|
||||
rwctx.rowsDroppedOnPushFailure.Add(rowsCount)
|
||||
}
|
||||
}
|
||||
return rwctxs, true
|
||||
}
|
||||
|
||||
func pushToRemoteStoragesTrackDropped(tss []prompbmarshal.TimeSeries) {
|
||||
rwctxs, _ := getEligibleRemoteWriteCtxs(tss, true)
|
||||
if len(rwctxs) == 0 {
|
||||
return
|
||||
}
|
||||
if !tryPushBlockToRemoteStorages(rwctxs, tss, true) {
|
||||
logger.Panicf("BUG: tryPushBlockToRemoteStorages() must return true when forceDropSamplesOnFailure=true")
|
||||
}
|
||||
}
|
||||
|
||||
func tryPushBlockToRemoteStorages(rwctxs []*remoteWriteCtx, tssBlock []prompbmarshal.TimeSeries, forceDropSamplesOnFailure bool) bool {
|
||||
func tryPushBlockToRemoteStorages(rwctxs []*remoteWriteCtx, tssBlock []prompbmarshal.TimeSeries) bool {
|
||||
if len(tssBlock) == 0 {
|
||||
// Nothing to push
|
||||
return true
|
||||
@@ -545,22 +557,63 @@ func tryPushBlockToRemoteStorages(rwctxs []*remoteWriteCtx, tssBlock []prompbmar
|
||||
|
||||
if len(rwctxs) == 1 {
|
||||
// Fast path - just push data to the configured single remote storage
|
||||
return rwctxs[0].TryPush(tssBlock, forceDropSamplesOnFailure)
|
||||
return rwctxs[0].TryPush(tssBlock)
|
||||
}
|
||||
|
||||
// We need to push tssBlock to multiple remote storages.
|
||||
// This is either sharding or replication depending on -remoteWrite.shardByURL command-line flag value.
|
||||
if *shardByURL && *shardByURLReplicas < len(rwctxs) {
|
||||
// Shard tssBlock samples among rwctxs.
|
||||
replicas := *shardByURLReplicas
|
||||
if replicas <= 0 {
|
||||
replicas = 1
|
||||
if *shardByURL {
|
||||
// Shard the data among rwctxs
|
||||
tssByURL := make([][]prompbmarshal.TimeSeries, len(rwctxs))
|
||||
tmpLabels := promutils.GetLabels()
|
||||
for _, ts := range tssBlock {
|
||||
hashLabels := ts.Labels
|
||||
if len(shardByURLLabelsMap) > 0 {
|
||||
hashLabels = tmpLabels.Labels[:0]
|
||||
for _, label := range ts.Labels {
|
||||
if _, ok := shardByURLLabelsMap[label.Name]; ok {
|
||||
hashLabels = append(hashLabels, label)
|
||||
}
|
||||
}
|
||||
tmpLabels.Labels = hashLabels
|
||||
} else if len(shardByURLIgnoreLabelsMap) > 0 {
|
||||
hashLabels = tmpLabels.Labels[:0]
|
||||
for _, label := range ts.Labels {
|
||||
if _, ok := shardByURLIgnoreLabelsMap[label.Name]; !ok {
|
||||
hashLabels = append(hashLabels, label)
|
||||
}
|
||||
}
|
||||
tmpLabels.Labels = hashLabels
|
||||
}
|
||||
h := getLabelsHash(hashLabels)
|
||||
idx := h % uint64(len(tssByURL))
|
||||
tssByURL[idx] = append(tssByURL[idx], ts)
|
||||
}
|
||||
return tryShardingBlockAmongRemoteStorages(rwctxs, tssBlock, replicas, forceDropSamplesOnFailure)
|
||||
promutils.PutLabels(tmpLabels)
|
||||
|
||||
// Push sharded data to remote storages in parallel in order to reduce
|
||||
// the time needed for sending the data to multiple remote storage systems.
|
||||
var wg sync.WaitGroup
|
||||
var anyPushFailed atomic.Bool
|
||||
for i, rwctx := range rwctxs {
|
||||
tssShard := tssByURL[i]
|
||||
if len(tssShard) == 0 {
|
||||
continue
|
||||
}
|
||||
wg.Add(1)
|
||||
go func(rwctx *remoteWriteCtx, tss []prompbmarshal.TimeSeries) {
|
||||
defer wg.Done()
|
||||
if !rwctx.TryPush(tss) {
|
||||
anyPushFailed.Store(true)
|
||||
}
|
||||
}(rwctx, tssShard)
|
||||
}
|
||||
wg.Wait()
|
||||
return !anyPushFailed.Load()
|
||||
}
|
||||
|
||||
// Replicate tssBlock samples among rwctxs.
|
||||
// Push tssBlock to remote storage systems in parallel in order to reduce
|
||||
// Replicate data among rwctxs.
|
||||
// Push block to remote storages in parallel in order to reduce
|
||||
// the time needed for sending the data to multiple remote storage systems.
|
||||
var wg sync.WaitGroup
|
||||
wg.Add(len(rwctxs))
|
||||
@@ -568,7 +621,7 @@ func tryPushBlockToRemoteStorages(rwctxs []*remoteWriteCtx, tssBlock []prompbmar
|
||||
for _, rwctx := range rwctxs {
|
||||
go func(rwctx *remoteWriteCtx) {
|
||||
defer wg.Done()
|
||||
if !rwctx.TryPush(tssBlock, forceDropSamplesOnFailure) {
|
||||
if !rwctx.TryPush(tssBlock) {
|
||||
anyPushFailed.Store(true)
|
||||
}
|
||||
}(rwctx)
|
||||
@@ -577,97 +630,6 @@ func tryPushBlockToRemoteStorages(rwctxs []*remoteWriteCtx, tssBlock []prompbmar
|
||||
return !anyPushFailed.Load()
|
||||
}
|
||||
|
||||
func tryShardingBlockAmongRemoteStorages(rwctxs []*remoteWriteCtx, tssBlock []prompbmarshal.TimeSeries, replicas int, forceDropSamplesOnFailure bool) bool {
|
||||
x := getTSSShards(len(rwctxs))
|
||||
defer putTSSShards(x)
|
||||
|
||||
shards := x.shards
|
||||
tmpLabels := promutils.GetLabels()
|
||||
for _, ts := range tssBlock {
|
||||
hashLabels := ts.Labels
|
||||
if len(shardByURLLabelsMap) > 0 {
|
||||
hashLabels = tmpLabels.Labels[:0]
|
||||
for _, label := range ts.Labels {
|
||||
if _, ok := shardByURLLabelsMap[label.Name]; ok {
|
||||
hashLabels = append(hashLabels, label)
|
||||
}
|
||||
}
|
||||
tmpLabels.Labels = hashLabels
|
||||
} else if len(shardByURLIgnoreLabelsMap) > 0 {
|
||||
hashLabels = tmpLabels.Labels[:0]
|
||||
for _, label := range ts.Labels {
|
||||
if _, ok := shardByURLIgnoreLabelsMap[label.Name]; !ok {
|
||||
hashLabels = append(hashLabels, label)
|
||||
}
|
||||
}
|
||||
tmpLabels.Labels = hashLabels
|
||||
}
|
||||
h := getLabelsHash(hashLabels)
|
||||
idx := h % uint64(len(shards))
|
||||
i := 0
|
||||
for {
|
||||
shards[idx] = append(shards[idx], ts)
|
||||
i++
|
||||
if i >= replicas {
|
||||
break
|
||||
}
|
||||
idx++
|
||||
if idx >= uint64(len(shards)) {
|
||||
idx = 0
|
||||
}
|
||||
}
|
||||
}
|
||||
promutils.PutLabels(tmpLabels)
|
||||
|
||||
// Push sharded samples to remote storage systems in parallel in order to reduce
|
||||
// the time needed for sending the data to multiple remote storage systems.
|
||||
var wg sync.WaitGroup
|
||||
var anyPushFailed atomic.Bool
|
||||
for i, rwctx := range rwctxs {
|
||||
shard := shards[i]
|
||||
if len(shard) == 0 {
|
||||
continue
|
||||
}
|
||||
wg.Add(1)
|
||||
go func(rwctx *remoteWriteCtx, tss []prompbmarshal.TimeSeries) {
|
||||
defer wg.Done()
|
||||
if !rwctx.TryPush(tss, forceDropSamplesOnFailure) {
|
||||
anyPushFailed.Store(true)
|
||||
}
|
||||
}(rwctx, shard)
|
||||
}
|
||||
wg.Wait()
|
||||
return !anyPushFailed.Load()
|
||||
}
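The wrap-around replica loop above is easiest to see in isolation. A self-contained sketch of the same index arithmetic (names here are illustrative, not package API):

// shardIndexes returns the shards a series lands on: the shard selected by
// its labels hash plus the next replicas-1 shards, wrapping around the end.
func shardIndexes(h uint64, shardsCount, replicas int) []int {
	idxs := make([]int, 0, replicas)
	idx := int(h % uint64(shardsCount))
	for i := 0; i < replicas; i++ {
		idxs = append(idxs, idx)
		idx = (idx + 1) % shardsCount
	}
	return idxs
}

With shardsCount=3 and replicas=2, a series hashed to shard 2 is queued to shards 2 and 0, matching the loop in tryShardingBlockAmongRemoteStorages.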
|
||||
|
||||
type tssShards struct {
|
||||
shards [][]prompbmarshal.TimeSeries
|
||||
}
|
||||
|
||||
func getTSSShards(n int) *tssShards {
|
||||
v := tssShardsPool.Get()
|
||||
if v == nil {
|
||||
v = &tssShards{}
|
||||
}
|
||||
x := v.(*tssShards)
|
||||
if cap(x.shards) < n {
|
||||
x.shards = make([][]prompbmarshal.TimeSeries, n)
|
||||
}
|
||||
x.shards = x.shards[:n]
|
||||
return x
|
||||
}
|
||||
|
||||
func putTSSShards(x *tssShards) {
|
||||
shards := x.shards
|
||||
for i := range shards {
|
||||
clear(shards[i])
|
||||
shards[i] = shards[i][:0]
|
||||
}
|
||||
tssShardsPool.Put(x)
|
||||
}
|
||||
|
||||
var tssShardsPool sync.Pool
|
||||
|
||||
// sortLabelsIfNeeded sorts labels if -sortLabels command-line flag is set.
|
||||
func sortLabelsIfNeeded(tss []prompbmarshal.TimeSeries) {
|
||||
if !*sortLabels {
|
||||
@@ -772,9 +734,6 @@ type remoteWriteCtx struct {
|
||||
|
||||
rowsPushedAfterRelabel *metrics.Counter
|
||||
rowsDroppedByRelabel *metrics.Counter
|
||||
|
||||
pushFailures *metrics.Counter
|
||||
rowsDroppedOnPushFailure *metrics.Counter
|
||||
}
|
||||
|
||||
func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, maxInmemoryBlocks int, sanitizedURL string) *remoteWriteCtx {
|
||||
@@ -790,9 +749,7 @@ func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, maxInmemoryBlocks in
|
||||
logger.Warnf("rounding the -remoteWrite.maxDiskUsagePerURL=%d to the minimum supported value: %d", maxPendingBytes, persistentqueue.DefaultChunkFileSize)
|
||||
maxPendingBytes = persistentqueue.DefaultChunkFileSize
|
||||
}
|
||||
|
||||
isPQDisabled := disableOnDiskQueue.GetOptionalArg(argIdx)
|
||||
fq := persistentqueue.MustOpenFastQueue(queuePath, sanitizedURL, maxInmemoryBlocks, maxPendingBytes, isPQDisabled)
|
||||
fq := persistentqueue.MustOpenFastQueue(queuePath, sanitizedURL, maxInmemoryBlocks, maxPendingBytes, *disableOnDiskQueue)
|
||||
_ = metrics.GetOrCreateGauge(fmt.Sprintf(`vmagent_remotewrite_pending_data_bytes{path=%q, url=%q}`, queuePath, sanitizedURL), func() float64 {
|
||||
return float64(fq.GetPendingBytes())
|
||||
})
|
||||
@@ -835,20 +792,39 @@ func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, maxInmemoryBlocks in
|
||||
c: c,
|
||||
pss: pss,
|
||||
|
||||
rowsPushedAfterRelabel: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_rows_pushed_after_relabel_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
|
||||
rowsDroppedByRelabel: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_relabel_metrics_dropped_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
|
||||
|
||||
pushFailures: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_push_failures_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
|
||||
rowsDroppedOnPushFailure: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_samples_dropped_total{path=%q,url=%q}`, queuePath, sanitizedURL)),
|
||||
rowsPushedAfterRelabel: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_rows_pushed_after_relabel_total{path=%q, url=%q}`, queuePath, sanitizedURL)),
|
||||
rowsDroppedByRelabel: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_relabel_metrics_dropped_total{path=%q, url=%q}`, queuePath, sanitizedURL)),
|
||||
}
|
||||
|
||||
// Initialize sas
|
||||
sasFile := streamAggrConfig.GetOptionalArg(argIdx)
|
||||
dedupInterval := streamAggrDedupInterval.GetOptionalArg(argIdx)
|
||||
ignoreOldSamples := streamAggrIgnoreOldSamples.GetOptionalArg(argIdx)
|
||||
if sasFile != "" {
|
||||
opts := &streamaggr.Options{
|
||||
DedupInterval: dedupInterval,
|
||||
DropInputLabels: *streamAggrDropInputLabels,
|
||||
IgnoreOldSamples: ignoreOldSamples,
|
||||
}
|
||||
sas, err := streamaggr.LoadFromFile(sasFile, rwctx.pushInternalTrackDropped, opts)
|
||||
if err != nil {
|
||||
logger.Fatalf("cannot initialize stream aggregators from -remoteWrite.streamAggr.config=%q: %s", sasFile, err)
|
||||
}
|
||||
rwctx.sas.Store(sas)
|
||||
rwctx.streamAggrKeepInput = streamAggrKeepInput.GetOptionalArg(argIdx)
|
||||
rwctx.streamAggrDropInput = streamAggrDropInput.GetOptionalArg(argIdx)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, sasFile)).Set(1)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_success_timestamp_seconds{path=%q}`, sasFile)).Set(fasttime.UnixTimestamp())
|
||||
} else if dedupInterval > 0 {
|
||||
rwctx.deduplicator = streamaggr.NewDeduplicator(rwctx.pushInternalTrackDropped, dedupInterval, *streamAggrDropInputLabels)
|
||||
}
|
||||
rwctx.initStreamAggrConfig()
|
||||
|
||||
return rwctx
|
||||
}
|
||||
|
||||
func (rwctx *remoteWriteCtx) MustStop() {
|
||||
// sas and deduplicator must be stopped before rwctx is closed
|
||||
// because they can write pending series to rwctx.pss if there are any
|
||||
// because sas can write pending series to rwctx.pss if there are any
|
||||
sas := rwctx.sas.Swap(nil)
|
||||
sas.MustStop()
|
||||
|
||||
@@ -873,22 +849,10 @@ func (rwctx *remoteWriteCtx) MustStop() {
|
||||
rwctx.rowsDroppedByRelabel = nil
|
||||
}
|
||||
|
||||
// TryPush sends tss series to the configured remote write endpoint
|
||||
//
|
||||
// TryPush doesn't modify tss, so tss can be passed concurrently to TryPush across distinct rwctx instances.
|
||||
func (rwctx *remoteWriteCtx) TryPush(tss []prompbmarshal.TimeSeries, forceDropSamplesOnFailure bool) bool {
|
||||
func (rwctx *remoteWriteCtx) TryPush(tss []prompbmarshal.TimeSeries) bool {
|
||||
// Apply relabeling
|
||||
var rctx *relabelCtx
|
||||
var v *[]prompbmarshal.TimeSeries
|
||||
defer func() {
|
||||
if rctx == nil {
|
||||
return
|
||||
}
|
||||
*v = prompbmarshal.ResetTimeSeries(tss)
|
||||
tssPool.Put(v)
|
||||
putRelabelCtx(rctx)
|
||||
}()
|
||||
|
||||
// Apply relabeling
|
||||
rcs := allRelabelConfigs.Load()
|
||||
pcs := rcs.perURL[rwctx.idx]
|
||||
if pcs.Len() > 0 {
|
||||
@@ -909,7 +873,7 @@ func (rwctx *remoteWriteCtx) TryPush(tss []prompbmarshal.TimeSeries, forceDropSa
|
||||
|
||||
// Apply stream aggregation or deduplication if they are configured
|
||||
sas := rwctx.sas.Load()
|
||||
if sas.IsEnabled() {
|
||||
if sas != nil {
|
||||
matchIdxs := matchIdxsPool.Get()
|
||||
matchIdxs.B = sas.Push(tss, matchIdxs.B)
|
||||
if !rwctx.streamAggrKeepInput {
|
||||
@@ -924,22 +888,21 @@ func (rwctx *remoteWriteCtx) TryPush(tss []prompbmarshal.TimeSeries, forceDropSa
|
||||
matchIdxsPool.Put(matchIdxs)
|
||||
} else if rwctx.deduplicator != nil {
|
||||
rwctx.deduplicator.Push(tss)
|
||||
return true
|
||||
clear(tss)
|
||||
tss = tss[:0]
|
||||
}
|
||||
|
||||
// Try pushing tss to remote storage
|
||||
if rwctx.tryPushInternal(tss) {
|
||||
return true
|
||||
// Try pushing the data to remote storage
|
||||
ok := rwctx.tryPushInternal(tss)
|
||||
|
||||
// Return back relabeling contexts to the pool
|
||||
if rctx != nil {
|
||||
*v = prompbmarshal.ResetTimeSeries(tss)
|
||||
tssPool.Put(v)
|
||||
putRelabelCtx(rctx)
|
||||
}
|
||||
|
||||
// Couldn't push tss to remote storage
|
||||
rwctx.pushFailures.Inc()
|
||||
if forceDropSamplesOnFailure {
|
||||
rowsCount := getRowsCount(tss)
|
||||
rwctx.rowsDroppedOnPushFailure.Add(rowsCount)
|
||||
return true
|
||||
}
|
||||
return false
|
||||
return ok
|
||||
}
|
||||
|
||||
var matchIdxsPool bytesutil.ByteBufferPool
|
||||
@@ -963,26 +926,19 @@ func (rwctx *remoteWriteCtx) pushInternalTrackDropped(tss []prompbmarshal.TimeSe
|
||||
if rwctx.tryPushInternal(tss) {
|
||||
return
|
||||
}
|
||||
if !rwctx.fq.IsPersistentQueueDisabled() {
|
||||
if !*disableOnDiskQueue {
|
||||
logger.Panicf("BUG: tryPushInternal must return true if -remoteWrite.disableOnDiskQueue isn't set")
|
||||
}
|
||||
rwctx.pushFailures.Inc()
|
||||
rowsCount := getRowsCount(tss)
|
||||
rwctx.rowsDroppedOnPushFailure.Add(rowsCount)
|
||||
pushFailures.Inc()
|
||||
if *dropSamplesOnOverload {
|
||||
rowsCount := getRowsCount(tss)
|
||||
samplesDropped.Add(rowsCount)
|
||||
}
|
||||
}
|
||||
|
||||
func (rwctx *remoteWriteCtx) tryPushInternal(tss []prompbmarshal.TimeSeries) bool {
|
||||
var rctx *relabelCtx
|
||||
var v *[]prompbmarshal.TimeSeries
|
||||
defer func() {
|
||||
if rctx == nil {
|
||||
return
|
||||
}
|
||||
*v = prompbmarshal.ResetTimeSeries(tss)
|
||||
tssPool.Put(v)
|
||||
putRelabelCtx(rctx)
|
||||
}()
|
||||
|
||||
if len(labelsGlobal) > 0 {
|
||||
// Make a copy of tss before adding extra labels in order to prevent
|
||||
// from affecting time series for other remoteWrite.url configs.
|
||||
@@ -995,11 +951,53 @@ func (rwctx *remoteWriteCtx) tryPushInternal(tss []prompbmarshal.TimeSeries) boo
|
||||
pss := rwctx.pss
|
||||
idx := rwctx.pssNextIdx.Add(1) % uint64(len(pss))
|
||||
|
||||
return pss[idx].TryPush(tss)
|
||||
ok := pss[idx].TryPush(tss)
|
||||
|
||||
if rctx != nil {
|
||||
*v = prompbmarshal.ResetTimeSeries(tss)
|
||||
tssPool.Put(v)
|
||||
putRelabelCtx(rctx)
|
||||
}
|
||||
|
||||
return ok
|
||||
}
|
||||
|
||||
func (rwctx *remoteWriteCtx) reinitStreamAggr() {
|
||||
sasFile := streamAggrConfig.GetOptionalArg(rwctx.idx)
|
||||
if sasFile == "" {
|
||||
// There is no stream aggregation for rwctx
|
||||
return
|
||||
}
|
||||
|
||||
logger.Infof("reloading stream aggregation configs pointed by -remoteWrite.streamAggr.config=%q", sasFile)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reloads_total{path=%q}`, sasFile)).Inc()
|
||||
opts := &streamaggr.Options{
|
||||
DedupInterval: streamAggrDedupInterval.GetOptionalArg(rwctx.idx),
|
||||
DropInputLabels: *streamAggrDropInputLabels,
|
||||
IgnoreOldSamples: streamAggrIgnoreOldSamples.GetOptionalArg(rwctx.idx),
|
||||
}
|
||||
sasNew, err := streamaggr.LoadFromFile(sasFile, rwctx.pushInternalTrackDropped, opts)
|
||||
if err != nil {
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reloads_errors_total{path=%q}`, sasFile)).Inc()
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, sasFile)).Set(0)
|
||||
logger.Errorf("cannot reload stream aggregation config from -remoteWrite.streamAggr.config=%q; continue using the previously loaded config; error: %s", sasFile, err)
|
||||
return
|
||||
}
|
||||
sas := rwctx.sas.Load()
|
||||
if !sasNew.Equal(sas) {
|
||||
sasOld := rwctx.sas.Swap(sasNew)
|
||||
sasOld.MustStop()
|
||||
logger.Infof("successfully reloaded stream aggregation configs at -remoteWrite.streamAggr.config=%q", sasFile)
|
||||
} else {
|
||||
sasNew.MustStop()
|
||||
logger.Infof("the config at -remoteWrite.streamAggr.config=%q wasn't changed", sasFile)
|
||||
}
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, sasFile)).Set(1)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_success_timestamp_seconds{path=%q}`, sasFile)).Set(fasttime.UnixTimestamp())
|
||||
}
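The reload above is an instance of a load-compare-swap-stop pattern around atomic.Pointer: build the new instance, keep the old one if they are equal, otherwise publish the new one and stop the old. A stripped-down sketch of the pattern, with types simplified to show the ordering:

// component stands in for something like *streamaggr.Aggregators.
type component struct{ /* config-derived state */ }

func (c *component) MustStop()               { /* flush and release resources */ }
func (c *component) Equal(o *component) bool { return false /* compare loaded configs */ }

var current atomic.Pointer[component]

func reload(next *component) {
	if next.Equal(current.Load()) {
		next.MustStop() // config unchanged; discard the freshly built copy
		return
	}
	old := current.Swap(next) // new pushes see next from this point on
	if old != nil {
		old.MustStop() // stop the now-unpublished instance
	}
}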
|
||||
|
||||
var tssPool = &sync.Pool{
|
||||
New: func() any {
|
||||
New: func() interface{} {
|
||||
a := []prompbmarshal.TimeSeries{}
|
||||
return &a
|
||||
},
|
||||
@@ -1013,6 +1011,27 @@ func getRowsCount(tss []prompbmarshal.TimeSeries) int {
|
||||
return rowsCount
|
||||
}
|
||||
|
||||
// CheckStreamAggrConfigs checks configs pointed by -remoteWrite.streamAggr.config
|
||||
func CheckStreamAggrConfigs() error {
|
||||
pushNoop := func(_ []prompbmarshal.TimeSeries) {}
|
||||
for idx, sasFile := range *streamAggrConfig {
|
||||
if sasFile == "" {
|
||||
continue
|
||||
}
|
||||
opts := &streamaggr.Options{
|
||||
DedupInterval: streamAggrDedupInterval.GetOptionalArg(idx),
|
||||
DropInputLabels: *streamAggrDropInputLabels,
|
||||
IgnoreOldSamples: streamAggrIgnoreOldSamples.GetOptionalArg(idx),
|
||||
}
|
||||
sas, err := streamaggr.LoadFromFile(sasFile, pushNoop, opts)
|
||||
if err != nil {
|
||||
return fmt.Errorf("cannot load -remoteWrite.streamAggr.config=%q: %w", sasFile, err)
|
||||
}
|
||||
sas.MustStop()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func newMapFromStrings(a []string) map[string]struct{} {
|
||||
m := make(map[string]struct{}, len(a))
|
||||
for _, s := range a {
|
||||
|
||||
@@ -1,169 +0,0 @@
|
||||
package remotewrite
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math"
|
||||
"reflect"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
|
||||
"github.com/VictoriaMetrics/metrics"
|
||||
)
|
||||
|
||||
func TestGetLabelsHash_Distribution(t *testing.T) {
|
||||
f := func(bucketsCount int) {
|
||||
t.Helper()
|
||||
|
||||
// Distribute itemsCount hashes returned by getLabelsHash() across bucketsCount buckets.
|
||||
itemsCount := 1_000 * bucketsCount
|
||||
m := make([]int, bucketsCount)
|
||||
var labels []prompbmarshal.Label
|
||||
for i := 0; i < itemsCount; i++ {
|
||||
labels = append(labels[:0], prompbmarshal.Label{
|
||||
Name: "__name__",
|
||||
Value: fmt.Sprintf("some_name_%d", i),
|
||||
})
|
||||
for j := 0; j < 10; j++ {
|
||||
labels = append(labels, prompbmarshal.Label{
|
||||
Name: fmt.Sprintf("label_%d", j),
|
||||
Value: fmt.Sprintf("value_%d_%d", i, j),
|
||||
})
|
||||
}
|
||||
h := getLabelsHash(labels)
|
||||
m[h%uint64(bucketsCount)]++
|
||||
}
|
||||
|
||||
// Verify that the distribution is even
|
||||
expectedItemsPerBucket := itemsCount / bucketsCount
|
||||
for _, n := range m {
|
||||
if math.Abs(1-float64(n)/float64(expectedItemsPerBucket)) > 0.04 {
|
||||
t.Fatalf("unexpected items in the bucket for %d buckets; got %d; want around %d", bucketsCount, n, expectedItemsPerBucket)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
f(2)
|
||||
f(3)
|
||||
f(4)
|
||||
f(5)
|
||||
f(10)
|
||||
}
|
||||
|
||||
func TestRemoteWriteContext_TryPush_ImmutableTimeseries(t *testing.T) {
|
||||
f := func(streamAggrConfig, relabelConfig string, dedupInterval time.Duration, keepInput, dropInput bool, input string) {
|
||||
t.Helper()
|
||||
perURLRelabel, err := promrelabel.ParseRelabelConfigsData([]byte(relabelConfig))
|
||||
if err != nil {
|
||||
t.Fatalf("cannot load relabel configs: %s", err)
|
||||
}
|
||||
rcs := &relabelConfigs{
|
||||
perURL: []*promrelabel.ParsedConfigs{
|
||||
perURLRelabel,
|
||||
},
|
||||
}
|
||||
allRelabelConfigs.Store(rcs)
|
||||
|
||||
pss := make([]*pendingSeries, 1)
|
||||
pss[0] = newPendingSeries(nil, true, 0, 100)
|
||||
rwctx := &remoteWriteCtx{
|
||||
idx: 0,
|
||||
streamAggrKeepInput: keepInput,
|
||||
streamAggrDropInput: dropInput,
|
||||
pss: pss,
|
||||
rowsPushedAfterRelabel: metrics.GetOrCreateCounter(`foo`),
|
||||
rowsDroppedByRelabel: metrics.GetOrCreateCounter(`bar`),
|
||||
}
|
||||
if dedupInterval > 0 {
|
||||
rwctx.deduplicator = streamaggr.NewDeduplicator(nil, dedupInterval, nil, "dedup-global")
|
||||
}
|
||||
|
||||
if streamAggrConfig != "" {
|
||||
pushNoop := func(_ []prompbmarshal.TimeSeries) {}
|
||||
sas, err := streamaggr.LoadFromData([]byte(streamAggrConfig), pushNoop, nil, "global")
|
||||
if err != nil {
|
||||
t.Fatalf("cannot load streamaggr configs: %s", err)
|
||||
}
|
||||
defer sas.MustStop()
|
||||
rwctx.sas.Store(sas)
|
||||
}
|
||||
|
||||
offsetMsecs := time.Now().UnixMilli()
|
||||
inputTss := prompbmarshal.MustParsePromMetrics(input, offsetMsecs)
|
||||
expectedTss := make([]prompbmarshal.TimeSeries, len(inputTss))
|
||||
|
||||
// copy inputTss to make sure it is not mutated during TryPush call
|
||||
copy(expectedTss, inputTss)
|
||||
if !rwctx.TryPush(inputTss, false) {
|
||||
t.Fatalf("cannot push samples to rwctx")
|
||||
}
|
||||
|
||||
if !reflect.DeepEqual(expectedTss, inputTss) {
|
||||
t.Fatalf("unexpected samples;\ngot\n%v\nwant\n%v", inputTss, expectedTss)
|
||||
}
|
||||
}
|
||||
|
||||
f(`
|
||||
- interval: 1m
|
||||
outputs: [sum_samples]
|
||||
- interval: 2m
|
||||
outputs: [count_series]
|
||||
`, `
|
||||
- action: keep
|
||||
source_labels: [env]
|
||||
regex: "dev"
|
||||
`, 0, false, false, `
|
||||
metric{env="dev"} 10
|
||||
metric{env="bar"} 20
|
||||
metric{env="dev"} 15
|
||||
metric{env="bar"} 25
|
||||
`)
|
||||
f(``, ``, time.Hour, false, false, `
|
||||
metric{env="dev"} 10
|
||||
metric{env="foo"} 20
|
||||
metric{env="dev"} 15
|
||||
metric{env="foo"} 25
|
||||
`)
|
||||
f(``, `
|
||||
- action: keep
|
||||
source_labels: [env]
|
||||
regex: "dev"
|
||||
`, time.Hour, false, false, `
|
||||
metric{env="dev"} 10
|
||||
metric{env="bar"} 20
|
||||
metric{env="dev"} 15
|
||||
metric{env="bar"} 25
|
||||
`)
|
||||
f(``, `
|
||||
- action: keep
|
||||
source_labels: [env]
|
||||
regex: "dev"
|
||||
`, time.Hour, true, false, `
|
||||
metric{env="test"} 10
|
||||
metric{env="dev"} 20
|
||||
metric{env="foo"} 15
|
||||
metric{env="dev"} 25
|
||||
`)
|
||||
f(``, `
|
||||
- action: keep
|
||||
source_labels: [env]
|
||||
regex: "dev"
|
||||
`, time.Hour, false, true, `
|
||||
metric{env="foo"} 10
|
||||
metric{env="dev"} 20
|
||||
metric{env="foo"} 15
|
||||
metric{env="dev"} 25
|
||||
`)
|
||||
f(``, `
|
||||
- action: keep
|
||||
source_labels: [env]
|
||||
regex: "dev"
|
||||
`, time.Hour, true, true, `
|
||||
metric{env="dev"} 10
|
||||
metric{env="test"} 20
|
||||
metric{env="dev"} 15
|
||||
metric{env="bar"} 25
|
||||
`)
|
||||
}
|
||||
92
app/vmagent/remotewrite/statconn.go
Normal file
@@ -0,0 +1,92 @@
|
||||
package remotewrite
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
|
||||
"github.com/VictoriaMetrics/metrics"
|
||||
)
|
||||
|
||||
func getStdDialer() *net.Dialer {
|
||||
stdDialerOnce.Do(func() {
|
||||
stdDialer = &net.Dialer{
|
||||
Timeout: 30 * time.Second,
|
||||
KeepAlive: 30 * time.Second,
|
||||
DualStack: netutil.TCP6Enabled(),
|
||||
}
|
||||
})
|
||||
return stdDialer
|
||||
}
|
||||
|
||||
var (
|
||||
stdDialer *net.Dialer
|
||||
stdDialerOnce sync.Once
|
||||
)
|
||||
|
||||
func statDial(ctx context.Context, _, addr string) (conn net.Conn, err error) {
|
||||
network := netutil.GetTCPNetwork()
|
||||
d := getStdDialer()
|
||||
conn, err = d.DialContext(ctx, network, addr)
|
||||
dialsTotal.Inc()
|
||||
if err != nil {
|
||||
dialErrors.Inc()
|
||||
return nil, err
|
||||
}
|
||||
conns.Inc()
|
||||
sc := &statConn{
|
||||
Conn: conn,
|
||||
}
|
||||
return sc, nil
|
||||
}
|
||||
|
||||
var (
|
||||
dialsTotal = metrics.NewCounter(`vmagent_remotewrite_dials_total`)
|
||||
dialErrors = metrics.NewCounter(`vmagent_remotewrite_dial_errors_total`)
|
||||
conns = metrics.NewCounter(`vmagent_remotewrite_conns`)
|
||||
)
|
||||
|
||||
type statConn struct {
|
||||
closed atomic.Int32
|
||||
net.Conn
|
||||
}
|
||||
|
||||
func (sc *statConn) Read(p []byte) (int, error) {
|
||||
n, err := sc.Conn.Read(p)
|
||||
connReadsTotal.Inc()
|
||||
if err != nil {
|
||||
connReadErrors.Inc()
|
||||
}
|
||||
connBytesRead.Add(n)
|
||||
return n, err
|
||||
}
|
||||
|
||||
func (sc *statConn) Write(p []byte) (int, error) {
|
||||
n, err := sc.Conn.Write(p)
|
||||
connWritesTotal.Inc()
|
||||
if err != nil {
|
||||
connWriteErrors.Inc()
|
||||
}
|
||||
connBytesWritten.Add(n)
|
||||
return n, err
|
||||
}
|
||||
|
||||
func (sc *statConn) Close() error {
|
||||
err := sc.Conn.Close()
|
||||
if sc.closed.Add(1) == 1 {
|
||||
conns.Dec()
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
var (
|
||||
connReadsTotal = metrics.NewCounter(`vmagent_remotewrite_conn_reads_total`)
|
||||
connWritesTotal = metrics.NewCounter(`vmagent_remotewrite_conn_writes_total`)
|
||||
connReadErrors = metrics.NewCounter(`vmagent_remotewrite_conn_read_errors_total`)
|
||||
connWriteErrors = metrics.NewCounter(`vmagent_remotewrite_conn_write_errors_total`)
|
||||
connBytesRead = metrics.NewCounter(`vmagent_remotewrite_conn_bytes_read_total`)
|
||||
connBytesWritten = metrics.NewCounter(`vmagent_remotewrite_conn_bytes_written_total`)
|
||||
)
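statDial has the same shape as http.Transport.DialContext, so hooking the counters above into an HTTP client is a single assignment. A minimal sketch, assuming the client is constructed inside this package:

// newStatHTTPClient returns a client whose connections are wrapped in
// statConn, so the vmagent_remotewrite_conn_* metrics reflect real traffic.
func newStatHTTPClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: statDial,
		},
	}
}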
|
||||
@@ -1,240 +0,0 @@
|
||||
package remotewrite
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"fmt"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
|
||||
"github.com/VictoriaMetrics/metrics"
|
||||
)
|
||||
|
||||
var (
|
||||
// Global config
|
||||
streamAggrGlobalConfig = flag.String("streamAggr.config", "", "Optional path to file with stream aggregation config. "+
|
||||
"See https://docs.victoriametrics.com/stream-aggregation/ . "+
|
||||
"See also -streamAggr.keepInput, -streamAggr.dropInput and -streamAggr.dedupInterval")
|
||||
streamAggrGlobalKeepInput = flag.Bool("streamAggr.keepInput", false, "Whether to keep all the input samples after the aggregation "+
|
||||
"with -streamAggr.config. By default, only aggregates samples are dropped, while the remaining samples "+
|
||||
"are written to remote storages write. See also -streamAggr.dropInput and https://docs.victoriametrics.com/stream-aggregation/")
|
||||
streamAggrGlobalDropInput = flag.Bool("streamAggr.dropInput", false, "Whether to drop all the input samples after the aggregation "+
|
||||
"with -remoteWrite.streamAggr.config. By default, only aggregates samples are dropped, while the remaining samples "+
|
||||
"are written to remote storages write. See also -streamAggr.keepInput and https://docs.victoriametrics.com/stream-aggregation/")
|
||||
streamAggrGlobalDedupInterval = flagutil.NewDuration("streamAggr.dedupInterval", "0s", "Input samples are de-duplicated with this interval on "+
|
||||
"aggregator before optional aggregation with -streamAggr.config . "+
|
||||
"See also -dedup.minScrapeInterval and https://docs.victoriametrics.com/stream-aggregation/#deduplication")
|
||||
streamAggrGlobalIgnoreOldSamples = flag.Bool("streamAggr.ignoreOldSamples", false, "Whether to ignore input samples with old timestamps outside the "+
|
||||
"current aggregation interval for aggregator. "+
|
||||
"See https://docs.victoriametrics.com/stream-aggregation/#ignoring-old-samples")
|
||||
streamAggrGlobalIgnoreFirstIntervals = flag.Int("streamAggr.ignoreFirstIntervals", 0, "Number of aggregation intervals to skip after the start for "+
|
||||
"aggregator. Increase this value if you observe incorrect aggregation results after vmagent restarts. It could be caused by receiving unordered delayed data from "+
|
||||
"clients pushing data into the vmagent. See https://docs.victoriametrics.com/stream-aggregation/#ignore-aggregation-intervals-on-start")
|
||||
streamAggrGlobalDropInputLabels = flagutil.NewArrayString("streamAggr.dropInputLabels", "An optional list of labels to drop from samples for aggregator "+
|
||||
"before stream de-duplication and aggregation . See https://docs.victoriametrics.com/stream-aggregation/#dropping-unneeded-labels")
|
||||
|
||||
// Per URL config
|
||||
streamAggrConfig = flagutil.NewArrayString("remoteWrite.streamAggr.config", "Optional path to file with stream aggregation config for the corresponding -remoteWrite.url. "+
|
||||
"See https://docs.victoriametrics.com/stream-aggregation/ . "+
|
||||
"See also -remoteWrite.streamAggr.keepInput, -remoteWrite.streamAggr.dropInput and -remoteWrite.streamAggr.dedupInterval")
|
||||
streamAggrDropInput = flagutil.NewArrayBool("remoteWrite.streamAggr.dropInput", "Whether to drop all the input samples after the aggregation "+
|
||||
"with -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. By default, only aggregates samples are dropped, while the remaining samples "+
|
||||
"are written to the corresponding -remoteWrite.url . See also -remoteWrite.streamAggr.keepInput and https://docs.victoriametrics.com/stream-aggregation/")
|
||||
streamAggrKeepInput = flagutil.NewArrayBool("remoteWrite.streamAggr.keepInput", "Whether to keep all the input samples after the aggregation "+
|
||||
"with -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. By default, only aggregates samples are dropped, while the remaining samples "+
|
||||
"are written to the corresponding -remoteWrite.url . See also -remoteWrite.streamAggr.dropInput and https://docs.victoriametrics.com/stream-aggregation/")
|
||||
streamAggrDedupInterval = flagutil.NewArrayDuration("remoteWrite.streamAggr.dedupInterval", 0, "Input samples are de-duplicated with this interval before optional aggregation "+
|
||||
"with -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. See also -dedup.minScrapeInterval and https://docs.victoriametrics.com/stream-aggregation/#deduplication")
|
||||
streamAggrIgnoreOldSamples = flagutil.NewArrayBool("remoteWrite.streamAggr.ignoreOldSamples", "Whether to ignore input samples with old timestamps outside the current "+
|
||||
"aggregation interval for the corresponding -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. "+
|
||||
"See https://docs.victoriametrics.com/stream-aggregation/#ignoring-old-samples")
|
||||
streamAggrIgnoreFirstIntervals = flag.Int("remoteWrite.streamAggr.ignoreFirstIntervals", 0, "Number of aggregation intervals to skip after the start "+
|
||||
"for the corresponding -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. Increase this value if "+
|
||||
"you observe incorrect aggregation results after vmagent restarts. It could be caused by receiving bufferred delayed data from clients pushing data into the vmagent. "+
|
||||
"See https://docs.victoriametrics.com/stream-aggregation/#ignore-aggregation-intervals-on-start")
|
||||
streamAggrDropInputLabels = flagutil.NewArrayString("remoteWrite.streamAggr.dropInputLabels", "An optional list of labels to drop from samples "+
|
||||
"before stream de-duplication and aggregation with -remoteWrite.streamAggr.config and -remoteWrite.streamAggr.dedupInterval at the corresponding -remoteWrite.url. "+
|
||||
"See https://docs.victoriametrics.com/stream-aggregation/#dropping-unneeded-labels")
|
||||
)
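The files referenced by -streamAggr.config and -remoteWrite.streamAggr.config share the YAML shape used by the unit test later in this diff. A minimal config sketch (the aggregation outputs are illustrative):

# Minimal stream aggregation config sketch.
# See https://docs.victoriametrics.com/stream-aggregation/ for all options.
- interval: 1m
  outputs: [sum_samples]
- interval: 2m
  outputs: [count_series]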
|
||||
|
||||
// CheckStreamAggrConfigs checks -remoteWrite.streamAggr.config and -streamAggr.config.
|
||||
func CheckStreamAggrConfigs() error {
|
||||
// Check global config
|
||||
sas, err := newStreamAggrConfigGlobal()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
sas.MustStop()
|
||||
|
||||
if len(*streamAggrConfig) > len(*remoteWriteURLs) {
|
||||
return fmt.Errorf("too many -remoteWrite.streamAggr.config args: %d; it mustn't exceed the number of -remoteWrite.url args: %d", len(*streamAggrConfig), len(*remoteWriteURLs))
|
||||
}
|
||||
|
||||
pushNoop := func(_ []prompbmarshal.TimeSeries) {}
|
||||
for idx := range *streamAggrConfig {
|
||||
sas, err := newStreamAggrConfigPerURL(idx, pushNoop)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
sas.MustStop()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func reloadStreamAggrConfigs() {
|
||||
reloadStreamAggrConfigGlobal()
|
||||
for _, rwctx := range rwctxsGlobal {
|
||||
rwctx.reloadStreamAggrConfig()
|
||||
}
|
||||
}
|
||||
|
||||
func reloadStreamAggrConfigGlobal() {
|
||||
path := *streamAggrGlobalConfig
|
||||
if path == "" {
|
||||
return
|
||||
}
|
||||
|
||||
logger.Infof("reloading stream aggregation configs pointed by -streamAggr.config=%q", path)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reloads_total{path=%q}`, path)).Inc()
|
||||
|
||||
sasNew, err := newStreamAggrConfigGlobal()
|
||||
if err != nil {
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reloads_errors_total{path=%q}`, path)).Inc()
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, path)).Set(0)
|
||||
logger.Errorf("cannot reload -streamAggr.config=%q; continue using the previously loaded config; error: %s", path, err)
|
||||
return
|
||||
}
|
||||
|
||||
sas := sasGlobal.Load()
|
||||
if !sasNew.Equal(sas) {
|
||||
sasOld := sasGlobal.Swap(sasNew)
|
||||
sasOld.MustStop()
|
||||
logger.Infof("successfully reloaded -streamAggr.config=%q", path)
|
||||
} else {
|
||||
sasNew.MustStop()
|
||||
logger.Infof("-streamAggr.config=%q wasn't changed since the last reload", path)
|
||||
}
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, path)).Set(1)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_success_timestamp_seconds{path=%q}`, path)).Set(fasttime.UnixTimestamp())
|
||||
}
|
||||
|
||||
func initStreamAggrConfigGlobal() {
|
||||
sas, err := newStreamAggrConfigGlobal()
|
||||
if err != nil {
|
||||
logger.Fatalf("cannot initialize gloabl stream aggregators: %s", err)
|
||||
}
|
||||
if sas != nil {
|
||||
filePath := sas.FilePath()
|
||||
sasGlobal.Store(sas)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, filePath)).Set(1)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_success_timestamp_seconds{path=%q}`, filePath)).Set(fasttime.UnixTimestamp())
|
||||
} else {
|
||||
dedupInterval := streamAggrGlobalDedupInterval.Duration()
|
||||
if dedupInterval > 0 {
|
||||
deduplicatorGlobal = streamaggr.NewDeduplicator(pushToRemoteStoragesTrackDropped, dedupInterval, *streamAggrDropInputLabels, "dedup-global")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (rwctx *remoteWriteCtx) initStreamAggrConfig() {
|
||||
idx := rwctx.idx
|
||||
|
||||
sas, err := rwctx.newStreamAggrConfig()
|
||||
if err != nil {
|
||||
logger.Fatalf("cannot initialize stream aggregators: %s", err)
|
||||
}
|
||||
if sas != nil {
|
||||
filePath := sas.FilePath()
|
||||
rwctx.sas.Store(sas)
|
||||
rwctx.streamAggrKeepInput = streamAggrKeepInput.GetOptionalArg(idx)
|
||||
rwctx.streamAggrDropInput = streamAggrDropInput.GetOptionalArg(idx)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, filePath)).Set(1)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_success_timestamp_seconds{path=%q}`, filePath)).Set(fasttime.UnixTimestamp())
|
||||
} else {
|
||||
dedupInterval := streamAggrDedupInterval.GetOptionalArg(idx)
|
||||
if dedupInterval > 0 {
|
||||
alias := fmt.Sprintf("dedup-%d", idx+1)
|
||||
rwctx.deduplicator = streamaggr.NewDeduplicator(rwctx.pushInternalTrackDropped, dedupInterval, *streamAggrDropInputLabels, alias)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (rwctx *remoteWriteCtx) reloadStreamAggrConfig() {
|
||||
path := streamAggrConfig.GetOptionalArg(rwctx.idx)
|
||||
if path == "" {
|
||||
return
|
||||
}
|
||||
|
||||
logger.Infof("reloading stream aggregation configs pointed by -remoteWrite.streamAggr.config=%q", path)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reloads_total{path=%q}`, path)).Inc()
|
||||
|
||||
sasNew, err := rwctx.newStreamAggrConfig()
|
||||
if err != nil {
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reloads_errors_total{path=%q}`, path)).Inc()
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, path)).Set(0)
|
||||
logger.Errorf("cannot reload -remoteWrite.streamAggr.config=%q; continue using the previously loaded config; error: %s", path, err)
|
||||
return
|
||||
}
|
||||
|
||||
sas := rwctx.sas.Load()
|
||||
if !sasNew.Equal(sas) {
|
||||
sasOld := rwctx.sas.Swap(sasNew)
|
||||
sasOld.MustStop()
|
||||
logger.Infof("successfully reloaded -remoteWrite.streamAggr.config=%q", path)
|
||||
} else {
|
||||
sasNew.MustStop()
|
||||
logger.Infof("-remoteWrite.streamAggr.config=%q wasn't changed since the last reload", path)
|
||||
}
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, path)).Set(1)
|
||||
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_success_timestamp_seconds{path=%q}`, path)).Set(fasttime.UnixTimestamp())
|
||||
}
|
||||
|
||||
func newStreamAggrConfigGlobal() (*streamaggr.Aggregators, error) {
|
||||
path := *streamAggrGlobalConfig
|
||||
if path == "" {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
opts := &streamaggr.Options{
|
||||
DedupInterval: streamAggrGlobalDedupInterval.Duration(),
|
||||
DropInputLabels: *streamAggrGlobalDropInputLabels,
|
||||
IgnoreOldSamples: *streamAggrGlobalIgnoreOldSamples,
|
||||
IgnoreFirstIntervals: *streamAggrGlobalIgnoreFirstIntervals,
|
||||
}
|
||||
|
||||
sas, err := streamaggr.LoadFromFile(path, pushToRemoteStoragesTrackDropped, opts, "global")
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot load -streamAggr.config=%q: %w", *streamAggrGlobalConfig, err)
|
||||
}
|
||||
return sas, nil
|
||||
}
|
||||
|
||||
func (rwctx *remoteWriteCtx) newStreamAggrConfig() (*streamaggr.Aggregators, error) {
|
||||
return newStreamAggrConfigPerURL(rwctx.idx, rwctx.pushInternalTrackDropped)
|
||||
}
|
||||
|
||||
func newStreamAggrConfigPerURL(idx int, pushFunc streamaggr.PushFunc) (*streamaggr.Aggregators, error) {
|
||||
path := streamAggrConfig.GetOptionalArg(idx)
|
||||
if path == "" {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
alias := fmt.Sprintf("%d:secret-url", idx+1)
|
||||
if *showRemoteWriteURL {
|
||||
alias = fmt.Sprintf("%d:%s", idx+1, remoteWriteURLs.GetOptionalArg(idx))
|
||||
}
|
||||
opts := &streamaggr.Options{
|
||||
DedupInterval: streamAggrDedupInterval.GetOptionalArg(idx),
|
||||
DropInputLabels: *streamAggrDropInputLabels,
|
||||
IgnoreOldSamples: streamAggrIgnoreOldSamples.GetOptionalArg(idx),
|
||||
IgnoreFirstIntervals: *streamAggrIgnoreFirstIntervals,
|
||||
}
|
||||
|
||||
sas, err := streamaggr.LoadFromFile(path, pushFunc, opts, alias)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot load -remoteWrite.streamAggr.config=%q: %w", path, err)
|
||||
}
|
||||
return sas, nil
|
||||
}
|
||||
@@ -81,9 +81,6 @@ vmalert-tool-linux-ppc64le:
|
||||
vmalert-tool-linux-s390x:
|
||||
APP_NAME=vmalert-tool CGO_ENABLED=0 GOOS=linux GOARCH=s390x $(MAKE) app-local-goos-goarch
|
||||
|
||||
vmalert-tool-linux-loong64:
|
||||
APP_NAME=vmalert-tool CGO_ENABLED=0 GOOS=linux GOARCH=loong64 $(MAKE) app-local-goos-goarch
|
||||
|
||||
vmalert-tool-linux-386:
|
||||
APP_NAME=vmalert-tool CGO_ENABLED=0 GOOS=linux GOARCH=386 $(MAKE) app-local-goos-goarch
|
||||
|
||||
|
||||
@@ -26,14 +26,8 @@ func main() {
|
||||
UsageText: "More info in https://docs.victoriametrics.com/vmalert-tool.html#Unit-testing-for-rules",
|
||||
Flags: []cli.Flag{
|
||||
&cli.StringSliceFlag{
|
||||
Name: "files",
|
||||
Usage: `File path or http url with test files. Supports an array of values separated by comma or specified via multiple flags.
|
||||
Supports hierarchical patterns and regexps.
|
||||
Examples:
|
||||
-files="/path/to/file". Path to a single test file.
|
||||
-files="http://<some-server-addr>/path/to/test.yaml". HTTP URL to a test file.
|
||||
-files="dir/**/*.yaml". Includes all the .yaml files in "dir" subfolders recursively.
|
||||
`,
|
||||
Name: "files",
|
||||
Usage: "files to run unittest with. Supports an array of values separated by comma or specified via multiple flags.",
|
||||
Required: true,
|
||||
},
|
||||
&cli.BoolFlag{
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
|
||||
ARG certs_image
|
||||
ARG root_image
|
||||
FROM $certs_image AS certs
|
||||
FROM $certs_image as certs
|
||||
RUN apk update && apk upgrade && apk --update --no-cache add ca-certificates
|
||||
|
||||
FROM $root_image
|
||||
|
||||
@@ -18,8 +18,6 @@ import (
|
||||
"github.com/VictoriaMetrics/metricsql"
|
||||
)
|
||||
|
||||
var numReg = regexp.MustCompile(`\D?\d*\.?\d*\D?`)
|
||||
|
||||
// series holds input_series defined in the test file
|
||||
type series struct {
|
||||
Series string `yaml:"series"`
|
||||
@@ -74,7 +72,10 @@ func writeInputSeries(input []series, interval *promutils.Duration, startStamp t
|
||||
r.Timeseries = append(r.Timeseries, testutil.TimeSeries{Labels: ls, Samples: samples})
|
||||
}
|
||||
|
||||
data := testutil.Compress(r)
|
||||
data, err := testutil.Compress(r)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to compress data: %v", err)
|
||||
}
|
||||
// write input series to vm
|
||||
httpWrite(dst, bytes.NewBuffer(data))
|
||||
vmstorage.Storage.DebugFlush()
|
||||
@@ -85,12 +86,13 @@ func writeInputSeries(input []series, interval *promutils.Duration, startStamp t
|
||||
func parseInputValue(input string, origin bool) ([]sequenceValue, error) {
|
||||
var res []sequenceValue
|
||||
items := strings.Split(input, " ")
|
||||
reg := regexp.MustCompile(`\D?\d*\D?`)
|
||||
for _, item := range items {
|
||||
if item == "stale" {
|
||||
res = append(res, sequenceValue{Value: decimal.StaleNaN})
|
||||
continue
|
||||
}
|
||||
vals := numReg.FindAllString(item, -1)
|
||||
vals := reg.FindAllString(item, -1)
|
||||
switch len(vals) {
|
||||
case 1:
|
||||
if vals[0] == "_" {
|
||||
|
||||
@@ -6,61 +6,88 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
)

func TestParseInputValue_Failure(t *testing.T) {
f := func(input string) {
t.Helper()

_, err := parseInputValue(input, true)
if err == nil {
t.Fatalf("expecting non-nil error")
}
func TestParseInputValue(t *testing.T) {
testCases := []struct {
input string
exp []sequenceValue
failed bool
}{
{
"",
nil,
true,
},
{
"testfailed",
nil,
true,
},
// stale doesn't support operations
{
"stalex3",
nil,
true,
},
{
"-4",
[]sequenceValue{{Value: -4}},
false,
},
{
"_",
[]sequenceValue{{Omitted: true}},
false,
},
{
"stale",
[]sequenceValue{{Value: decimal.StaleNaN}},
false,
},
{
"-4x1",
[]sequenceValue{{Value: -4}, {Value: -4}},
false,
},
{
"_x1",
[]sequenceValue{{Omitted: true}},
false,
},
{
"1+1x4",
[]sequenceValue{{Value: 1}, {Value: 2}, {Value: 3}, {Value: 4}, {Value: 5}},
false,
},
{
"2-1x4",
[]sequenceValue{{Value: 2}, {Value: 1}, {Value: 0}, {Value: -1}, {Value: -2}},
false,
},
{
"1+1x1 _ -4 stale 3+20x1",
[]sequenceValue{{Value: 1}, {Value: 2}, {Omitted: true}, {Value: -4}, {Value: decimal.StaleNaN}, {Value: 3}, {Value: 23}},
false,
},
}
f("")
|
||||
f("testfailed")
|
||||
|
||||
// stale doesn't support operations
|
||||
f("stalex3")
|
||||
}
|
||||
|
||||
func TestParseInputValue_Success(t *testing.T) {
|
||||
f := func(input string, outputExpected []sequenceValue) {
|
||||
t.Helper()
|
||||
|
||||
output, err := parseInputValue(input, true)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error in parseInputValue: %s", err)
|
||||
for _, tc := range testCases {
|
||||
output, err := parseInputValue(tc.input, true)
|
||||
if err != nil != tc.failed {
|
||||
t.Fatalf("failed to parse %s, expect %t, got %t", tc.input, tc.failed, err != nil)
|
||||
}
|
||||
|
||||
if len(outputExpected) != len(output) {
|
||||
t.Fatalf("unexpected output length; got %d; want %d", len(outputExpected), len(output))
|
||||
if len(tc.exp) != len(output) {
|
||||
t.Fatalf("expect %v, got %v", tc.exp, output)
|
||||
}
|
||||
for i := 0; i < len(outputExpected); i++ {
|
||||
if outputExpected[i].Omitted != output[i].Omitted {
|
||||
t.Fatalf("unexpected Omitted field in the output\ngot\n%v\nwant\n%v", output, outputExpected)
|
||||
for i := 0; i < len(tc.exp); i++ {
|
||||
if tc.exp[i].Omitted != output[i].Omitted {
|
||||
t.Fatalf("expect %v, got %v", tc.exp, output)
|
||||
}
|
||||
if outputExpected[i].Value != output[i].Value {
|
||||
if decimal.IsStaleNaN(outputExpected[i].Value) && decimal.IsStaleNaN(output[i].Value) {
|
||||
if tc.exp[i].Value != output[i].Value {
|
||||
if decimal.IsStaleNaN(tc.exp[i].Value) && decimal.IsStaleNaN(output[i].Value) {
|
||||
continue
|
||||
}
|
||||
t.Fatalf("unexpeccted Value field in the output\ngot\n%v\nwant\n%v", output, outputExpected)
|
||||
t.Fatalf("expect %v, got %v", tc.exp, output)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
f("-4", []sequenceValue{{Value: -4}})
|
||||
|
||||
f("_", []sequenceValue{{Omitted: true}})
|
||||
|
||||
f("stale", []sequenceValue{{Value: decimal.StaleNaN}})
|
||||
|
||||
f("-4x1", []sequenceValue{{Value: -4}, {Value: -4}})
|
||||
|
||||
f("_x1", []sequenceValue{{Omitted: true}})
|
||||
|
||||
f("1+1x2 0.1 0.1+0.3x2 3.14", []sequenceValue{{Value: 1}, {Value: 2}, {Value: 3}, {Value: 0.1}, {Value: 0.1}, {Value: 0.4}, {Value: 0.7}, {Value: 3.14}})
|
||||
|
||||
f("2-1x4", []sequenceValue{{Value: 2}, {Value: 1}, {Value: 0}, {Value: -1}, {Value: -2}})
|
||||
|
||||
f("1+1x1 _ -4 stale 3+20x1", []sequenceValue{{Value: 1}, {Value: 2}, {Omitted: true}, {Value: -4}, {Value: decimal.StaleNaN}, {Value: 3}, {Value: 23}})
|
||||
}
|
||||
|
||||
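The hunk above converts between the two common Go test layouts seen throughout this diff: per-case helper closures (`f(...)` plus `t.Helper()`) and a table-driven loop. A neutral skeleton of the table-driven style, for reference:

```go
package example

import "testing"

func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func TestAbs(t *testing.T) {
	testCases := []struct {
		name  string
		input int
		want  int
	}{
		{"positive", 4, 4},
		{"negative", -4, 4},
		{"zero", 0, 0},
	}
	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			if got := abs(tc.input); got != tc.want {
				t.Fatalf("abs(%d) = %d; want %d", tc.input, got, tc.want)
			}
		})
	}
}
```

The trade-off is visible in the diff itself: the helper-closure style keeps failure line numbers pointing at the offending case via t.Helper(), while the table style names cases and funnels them through a single loop.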
@@ -37,6 +37,3 @@ groups:
rules:
- record: t3
expr: t1

- name: emptyGroup
rules: []
@@ -9,12 +9,10 @@ import (
"path/filepath"
"reflect"
"sort"
"strings"
"time"

"gopkg.in/yaml.v2"

"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
vmalertconfig "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
@@ -68,38 +66,36 @@ func UnitTest(files []string, disableGroupLabel bool) bool {
defer vminsert.Stop()
defer vmselect.Stop()
disableAlertgroupLabel = disableGroupLabel
return rulesUnitTest(files)
}

testfiles, err := config.ReadFromFS(files)
if err != nil {
fmt.Println(" FAILED")
fmt.Printf("\nfailed to read test files: \n%v", err)
}
func rulesUnitTest(files []string) bool {
var failed bool
for fileName, file := range testfiles {
if err := ruleUnitTest(fileName, file); err != nil {
for _, f := range files {
if err := ruleUnitTest(f); err != nil {
fmt.Println(" FAILED")
fmt.Printf("\nfailed to run unit test for file %q: \n%v", file, err)
fmt.Printf("\nfailed to run unit test for file %q: \n%v", f, err)
failed = true
} else {
fmt.Println(" SUCCESS")
}
}

return failed
}

func ruleUnitTest(filename string, content []byte) []error {
func ruleUnitTest(filename string) []error {
fmt.Println("\nUnit Testing: ", filename)
var unitTestInp unitTestFile
if err := yaml.UnmarshalStrict(content, &unitTestInp); err != nil {
return []error{fmt.Errorf("failed to unmarshal file: %w", err)}
b, err := os.ReadFile(filename)
if err != nil {
return []error{fmt.Errorf("failed to read file: %w", err)}
}

// add file directory for rule files if needed
for i, rf := range unitTestInp.RuleFiles {
if rf != "" && !filepath.IsAbs(rf) && !strings.HasPrefix(rf, "http") {
unitTestInp.RuleFiles[i] = filepath.Join(filepath.Dir(filename), rf)
}
var unitTestInp unitTestFile
if err := yaml.UnmarshalStrict(b, &unitTestInp); err != nil {
return []error{fmt.Errorf("failed to unmarshal file: %w", err)}
}
if err := resolveAndGlobFilepaths(filepath.Dir(filename), &unitTestInp); err != nil {
return []error{fmt.Errorf("failed to resolve path for `rule_files`: %w", err)}
}

if unitTestInp.EvaluationInterval.Duration() == 0 {
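ruleUnitTest relies on yaml.UnmarshalStrict from gopkg.in/yaml.v2, which rejects keys that have no matching struct field instead of dropping them silently. A standalone sketch (the struct and input are illustrative):

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type unitTest struct {
	RuleFiles []string `yaml:"rule_files"`
}

func main() {
	var ut unitTest
	// "rule_fils" is a deliberate typo: UnmarshalStrict reports it
	// as an unknown field instead of ignoring it.
	err := yaml.UnmarshalStrict([]byte("rule_fils: [a.yaml]"), &ut)
	fmt.Println(err != nil) // true
	fmt.Println(err)        // message names the unmatched field rule_fils
}
```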
@@ -143,24 +139,24 @@ func verifyTestGroup(group testGroup) error {
}
for _, at := range group.AlertRuleTests {
if at.Alertname == "" {
return fmt.Errorf("\n%s missing required field \"alertname\"", testGroupName)
return fmt.Errorf("\n%s missing required filed \"alertname\"", testGroupName)
}
if !disableAlertgroupLabel && at.GroupName == "" {
return fmt.Errorf("\n%s missing required field \"groupname\" when flag \"disableAlertgroupLabel\" is false", testGroupName)
return fmt.Errorf("\n%s missing required filed \"groupname\" when flag \"disableAlertGroupLabel\" is false", testGroupName)
}
if disableAlertgroupLabel && at.GroupName != "" {
return fmt.Errorf("\n%s shouldn't set field \"groupname\" when flag \"disableAlertgroupLabel\" is true", testGroupName)
return fmt.Errorf("\n%s shouldn't set filed \"groupname\" when flag \"disableAlertGroupLabel\" is true", testGroupName)
}
if at.EvalTime == nil {
return fmt.Errorf("\n%s missing required field \"eval_time\"", testGroupName)
return fmt.Errorf("\n%s missing required filed \"eval_time\"", testGroupName)
}
}
for _, et := range group.MetricsqlExprTests {
if et.Expr == "" {
return fmt.Errorf("\n%s missing required field \"expr\"", testGroupName)
return fmt.Errorf("\n%s missing required filed \"expr\"", testGroupName)
}
if et.EvalTime == nil {
return fmt.Errorf("\n%s missing required field \"eval_time\"", testGroupName)
return fmt.Errorf("\n%s missing required filed \"eval_time\"", testGroupName)
}
}
return nil
@@ -239,6 +235,30 @@ func tearDown() {
fs.MustRemoveAll(storagePath)
}

// resolveAndGlobFilepaths joins all relative paths in a configuration
// with a given base directory and replaces all globs with matching files.
func resolveAndGlobFilepaths(baseDir string, utf *unitTestFile) error {
for i, rf := range utf.RuleFiles {
if rf != "" && !filepath.IsAbs(rf) {
utf.RuleFiles[i] = filepath.Join(baseDir, rf)
}
}

var globbedFiles []string
for _, rf := range utf.RuleFiles {
m, err := filepath.Glob(rf)
if err != nil {
return err
}
if len(m) == 0 {
fmt.Fprintln(os.Stderr, " WARNING: no file match pattern", rf)
}
globbedFiles = append(globbedFiles, m...)
}
utf.RuleFiles = globbedFiles
return nil
}

func (tg *testGroup) test(evalInterval time.Duration, groupOrderMap map[string]int, testGroups []vmalertconfig.Group) (checkErrs []error) {
// set up vmstorage and http server for ingest and read queries
setUp()
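resolveAndGlobFilepaths builds on filepath.Glob, which returns an error only for malformed patterns; a pattern that matches nothing yields a nil slice and a nil error, which is exactly the case the WARNING branch above covers. A standalone demonstration:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	for _, pattern := range []string{"*.go", "no-such-*.rules"} {
		m, err := filepath.Glob(pattern)
		if err != nil {
			// Only a malformed pattern gets here (filepath.ErrBadPattern).
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			continue
		}
		if len(m) == 0 {
			fmt.Fprintln(os.Stderr, " WARNING: no file match pattern", pattern)
			continue
		}
		fmt.Println(pattern, "->", m)
	}
}
```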
@@ -296,15 +316,11 @@ func (tg *testGroup) test(evalInterval time.Duration, groupOrderMap map[string]i
maxEvalTime := testStartTime.Add(tg.maxEvalTime())
for ts := testStartTime; ts.Before(maxEvalTime) || ts.Equal(maxEvalTime); ts = ts.Add(evalInterval) {
for _, g := range groups {
if len(g.Rules) == 0 {
continue
}
errs := g.ExecOnce(context.Background(), func() []notifier.Notifier { return nil }, rw, ts)
for err := range errs {
if err != nil {
checkErrs = append(checkErrs, fmt.Errorf("\nfailed to exec group: %q, time: %s, err: %w", g.Name,
ts, err))
return
}
}
// flush series after each group evaluation
@@ -358,7 +374,7 @@ func (tg *testGroup) test(evalInterval time.Duration, groupOrderMap map[string]i
if expAlert.ExpLabels == nil {
expAlert.ExpLabels = make(map[string]string)
}
// alertGroupNameLabel is added as additional labels when `disableAlertgroupLabel` is false
// alertGroupNameLabel is added as additional labels when `disableAlertGroupLabel` is false
if !disableAlertgroupLabel {
expAlert.ExpLabels["alertgroup"] = groupname
}
@@ -14,33 +14,34 @@ func TestMain(m *testing.M) {
os.Exit(m.Run())
}

func TestUnitTest_Failure(t *testing.T) {
f := func(files []string) {
t.Helper()

failed := UnitTest(files, false)
if !failed {
t.Fatalf("expecting failed test")
func TestUnitRule(t *testing.T) {
testCases := []struct {
name string
disableGroupLabel bool
files []string
failed bool
}{
{
name: "run multi files",
files: []string{"./testdata/test1.yaml", "./testdata/test2.yaml"},
failed: false,
},
{
name: "disable group label",
disableGroupLabel: true,
files: []string{"./testdata/disable-group-label.yaml"},
failed: false,
},
{
name: "failing test",
files: []string{"./testdata/failed-test.yaml"},
failed: true,
},
}
for _, tc := range testCases {
fail := UnitTest(tc.files, tc.disableGroupLabel)
if fail != tc.failed {
t.Fatalf("failed to test %s, expect %t, got %t", tc.name, tc.failed, fail)
}
}

// failing test
f([]string{"./testdata/failed-test.yaml"})
}

func TestUnitTest_Success(t *testing.T) {
f := func(disableGroupLabel bool, files []string) {
t.Helper()

failed := UnitTest(files, disableGroupLabel)
if failed {
t.Fatalf("unexpected failed test")
}
}

// run multi files
f(false, []string{"./testdata/test1.yaml", "./testdata/test2.yaml"})

// disable group label
f(true, []string{"./testdata/disable-group-label.yaml"})
}
@@ -101,7 +101,8 @@ replay-vmalert: vmalert
-remoteWrite.url=http://localhost:8428 \
-external.label=cluster=east-1 \
-external.label=replica=a \
-replay.timeFrom=2024-06-01T00:00:00Z
-replay.timeFrom=2021-05-11T07:21:43Z \
-replay.timeTo=2021-05-29T18:40:43Z

vmalert-linux-amd64:
APP_NAME=vmalert CGO_ENABLED=1 GOOS=linux GOARCH=amd64 $(MAKE) app-local-goos-goarch
@@ -118,9 +119,6 @@ vmalert-linux-ppc64le:
vmalert-linux-s390x:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=linux GOARCH=s390x $(MAKE) app-local-goos-goarch

vmalert-linux-loong64:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=linux GOARCH=loong64 $(MAKE) app-local-goos-goarch

vmalert-linux-386:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=linux GOARCH=386 $(MAKE) app-local-goos-goarch
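The replay flags above take RFC3339 timestamps. A quick sanity check of the two values from the Makefile target, using the standard library:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	from, err := time.Parse(time.RFC3339, "2021-05-11T07:21:43Z")
	if err != nil {
		log.Fatal(err)
	}
	to, err := time.Parse(time.RFC3339, "2021-05-29T18:40:43Z")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(to.Sub(from)) // 443h19m0s replay window
}
```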
@@ -1,3 +1,3 @@
See vmalert docs [here](https://docs.victoriametrics.com/vmalert/).
See vmalert docs [here](https://docs.victoriametrics.com/vmalert.html).

vmalert docs can be edited at [docs/vmalert.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/vmalert.md).
@@ -45,11 +45,11 @@ type Group struct {
// EvalAlignment will make the timestamp of group query requests be aligned with interval
EvalAlignment *bool `yaml:"eval_alignment,omitempty"`
// Catches all undefined fields and must be empty after parsing.
XXX map[string]any `yaml:",inline"`
XXX map[string]interface{} `yaml:",inline"`
}

// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (g *Group) UnmarshalYAML(unmarshal func(any) error) error {
func (g *Group) UnmarshalYAML(unmarshal func(interface{}) error) error {
type group Group
if err := unmarshal((*group)(g)); err != nil {
return err
@@ -142,11 +142,11 @@ type Rule struct {
UpdateEntriesLimit *int `yaml:"update_entries_limit,omitempty"`

// Catches all undefined fields and must be empty after parsing.
XXX map[string]any `yaml:",inline"`
XXX map[string]interface{} `yaml:",inline"`
}

// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (r *Rule) UnmarshalYAML(unmarshal func(any) error) error {
func (r *Rule) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rule Rule
if err := unmarshal((*rule)(r)); err != nil {
return err
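The repeated any/interface{} churn in the hunks above and below is purely cosmetic: since Go 1.18 the builtin package declares `type any = interface{}` as an alias, so both UnmarshalYAML signatures denote the identical type. A minimal demonstration:

```go
package main

import "fmt"

func main() {
	// Alias, not a distinct type: values flow both ways with no conversion.
	var a any = "hello"
	var i interface{} = a
	a = i

	// Function types spelled either way are identical too, so this compiles.
	var f func(any) error = func(interface{}) error { return nil }
	fmt.Println(a, i, f("x"))
}
```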
@@ -234,7 +234,7 @@ func ParseSilent(pathPatterns []string, validateTplFn ValidateTplFn, validateExp
cLogger.Suppress(true)
defer cLogger.Suppress(false)

files, err := ReadFromFS(pathPatterns)
files, err := readFromFS(pathPatterns)
if err != nil {
return nil, fmt.Errorf("failed to read from the config: %w", err)
}
@@ -243,7 +243,7 @@ func ParseSilent(pathPatterns []string, validateTplFn ValidateTplFn, validateExp

// Parse parses rule configs from given file patterns
func Parse(pathPatterns []string, validateTplFn ValidateTplFn, validateExpressions bool) ([]Group, error) {
files, err := ReadFromFS(pathPatterns)
files, err := readFromFS(pathPatterns)
if err != nil {
return nil, fmt.Errorf("failed to read from the config: %w", err)
}
@@ -301,7 +301,7 @@ func parseConfig(data []byte) ([]Group, error) {
g := struct {
Groups []Group `yaml:"groups"`
// Catches all undefined fields and must be empty after parsing.
XXX map[string]any `yaml:",inline"`
XXX map[string]interface{} `yaml:",inline"`
}{}
err = yaml.Unmarshal(data, &g)
if err != nil {
@@ -310,7 +310,7 @@ func parseConfig(data []byte) ([]Group, error) {
return g.Groups, checkOverflow(g.XXX, "config")
}

func checkOverflow(m map[string]any, ctx string) error {
func checkOverflow(m map[string]interface{}, ctx string) error {
if len(m) > 0 {
var keys []string
for k := range m {
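checkOverflow pairs with the inline XXX map seen in the Group and Rule structs: gopkg.in/yaml.v2 routes every key without a matching struct field into the `yaml:",inline"` map, and the config is rejected if that map is non-empty after parsing. A standalone sketch (the error wording here is illustrative, not vmalert's exact message):

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type group struct {
	Name string `yaml:"name"`
	// Catches all undefined fields and must be empty after parsing.
	XXX map[string]any `yaml:",inline"`
}

func checkOverflow(m map[string]any, ctx string) error {
	if len(m) > 0 {
		var keys []string
		for k := range m {
			keys = append(keys, k)
		}
		return fmt.Errorf("unknown fields in %s: %v", ctx, keys)
	}
	return nil
}

func main() {
	var g group
	if err := yaml.Unmarshal([]byte("name: x\ntypo_field: y"), &g); err != nil {
		panic(err)
	}
	fmt.Println(checkOverflow(g.XXX, "group")) // unknown fields in group: [typo_field]
}
```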
@@ -23,6 +23,12 @@ func TestMain(m *testing.M) {
os.Exit(m.Run())
}

func TestParseGood(t *testing.T) {
if _, err := Parse([]string{"testdata/rules/*good.rules", "testdata/dir/*good.*"}, notifier.ValidateTemplates, true); err != nil {
t.Errorf("error parsing files %s", err)
}
}

func TestParseFromURL(t *testing.T) {
mux := http.NewServeMux()
mux.HandleFunc("/bad", func(w http.ResponseWriter, _ *http.Request) {
@@ -49,353 +55,438 @@ groups:
defer srv.Close()

if _, err := Parse([]string{srv.URL + "/good-alert", srv.URL + "/good-rr"}, notifier.ValidateTemplates, true); err != nil {
t.Fatalf("error parsing URLs %s", err)
t.Errorf("error parsing URLs %s", err)
}

if _, err := Parse([]string{srv.URL + "/bad"}, notifier.ValidateTemplates, true); err == nil {
t.Fatalf("expected parsing error: %s", err)
t.Errorf("expected parsing error: %s", err)
}
}

func TestParse_Success(t *testing.T) {
_, err := Parse([]string{"testdata/rules/*good.rules", "testdata/dir/*good.*"}, notifier.ValidateTemplates, true)
if err != nil {
t.Fatalf("error parsing files %s", err)
func TestParseBad(t *testing.T) {
testCases := []struct {
path []string
expErr string
}{
{
[]string{"testdata/rules/rules_interval_bad.rules"},
"eval_offset should be smaller than interval",
},
{
[]string{"testdata/rules/rules0-bad.rules"},
"unexpected token",
},
{
[]string{"testdata/dir/rules0-bad.rules"},
"error parsing annotation",
},
{
[]string{"testdata/dir/rules1-bad.rules"},
"duplicate in file",
},
{
[]string{"testdata/dir/rules2-bad.rules"},
"function \"unknown\" not defined",
},
{
[]string{"testdata/dir/rules3-bad.rules"},
"either `record` or `alert` must be set",
},
{
[]string{"testdata/dir/rules4-bad.rules"},
"either `record` or `alert` must be set",
},
{
[]string{"testdata/rules/rules1-bad.rules"},
"bad graphite expr",
},
{
[]string{"testdata/dir/rules6-bad.rules"},
"missing ':' in header",
},
{
[]string{"http://unreachable-url"},
"failed to",
},
}
}

func TestParse_Failure(t *testing.T) {
f := func(paths []string, errStrExpected string) {
t.Helper()

_, err := Parse(paths, notifier.ValidateTemplates, true)
for _, tc := range testCases {
_, err := Parse(tc.path, notifier.ValidateTemplates, true)
if err == nil {
t.Fatalf("expected to get error")
t.Errorf("expected to get error")
return
}
if !strings.Contains(err.Error(), errStrExpected) {
t.Fatalf("expected err to contain %q; got %q instead", errStrExpected, err)
if !strings.Contains(err.Error(), tc.expErr) {
t.Errorf("expected err to contain %q; got %q instead", tc.expErr, err)
}
}

f([]string{"testdata/rules/rules_interval_bad.rules"}, "eval_offset should be smaller than interval")
f([]string{"testdata/rules/rules0-bad.rules"}, "unexpected token")
f([]string{"testdata/dir/rules0-bad.rules"}, "error parsing annotation")
f([]string{"testdata/dir/rules1-bad.rules"}, "duplicate in file")
f([]string{"testdata/dir/rules2-bad.rules"}, "function \"unknown\" not defined")
f([]string{"testdata/dir/rules3-bad.rules"}, "either `record` or `alert` must be set")
f([]string{"testdata/dir/rules4-bad.rules"}, "either `record` or `alert` must be set")
f([]string{"testdata/rules/rules1-bad.rules"}, "bad graphite expr")
f([]string{"testdata/dir/rules6-bad.rules"}, "missing ':' in header")
f([]string{"http://unreachable-url"}, "failed to")
}
func TestRuleValidate(t *testing.T) {
func TestRule_Validate(t *testing.T) {
if err := (&Rule{}).Validate(); err == nil {
t.Fatalf("expected empty name error")
t.Errorf("expected empty name error")
}
if err := (&Rule{Alert: "alert"}).Validate(); err == nil {
t.Fatalf("expected empty expr error")
t.Errorf("expected empty expr error")
}
if err := (&Rule{Alert: "alert", Expr: "test>0"}).Validate(); err != nil {
t.Fatalf("expected valid rule; got %s", err)
t.Errorf("expected valid rule; got %s", err)
}
}
func TestGroupValidate_Failure(t *testing.T) {
f := func(group *Group, validateExpressions bool, errStrExpected string) {
t.Helper()

err := group.Validate(nil, validateExpressions)
if err == nil {
t.Fatalf("expecting non-nil error")
}
errStr := err.Error()
if !strings.Contains(errStr, errStrExpected) {
t.Fatalf("missing %q in the returned error %q", errStrExpected, errStr)
}
}

f(&Group{}, false, "group name must be set")

f(&Group{
Name: "negative interval",
Interval: promutils.NewDuration(-1),
}, false, "interval shouldn't be lower than 0")

f(&Group{
Name: "wrong eval_offset",
Interval: promutils.NewDuration(time.Minute),
EvalOffset: promutils.NewDuration(2 * time.Minute),
}, false, "eval_offset should be smaller than interval")

f(&Group{
Name: "wrong limit",
Limit: -1,
}, false, "invalid limit")

f(&Group{
Name: "wrong concurrency",
Concurrency: -1,
}, false, "invalid concurrency")

f(&Group{
Name: "test",
Rules: []Rule{
{
Alert: "alert",
Expr: "up == 1",
func TestGroup_Validate(t *testing.T) {
testCases := []struct {
group *Group
rules []Rule
validateAnnotations bool
validateExpressions bool
expErr string
}{
{
group: &Group{},
expErr: "group name must be set",
},
{
group: &Group{
Name: "negative interval",
Interval: promutils.NewDuration(-1),
},
{
Alert: "alert",
Expr: "up == 1",
expErr: "interval shouldn't be lower than 0",
},
{
group: &Group{
Name: "wrong eval_offset",
Interval: promutils.NewDuration(time.Minute),
EvalOffset: promutils.NewDuration(2 * time.Minute),
},
expErr: "eval_offset should be smaller than interval",
},
}, false, "duplicate")

f(&Group{
Name: "test",
Rules: []Rule{
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
},
}, false, "duplicate")

f(&Group{
Name: "test",
Rules: []Rule{
{Record: "record", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
{Record: "record", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
},
}, false, "duplicate")

f(&Group{
Name: "test",
Rules: []Rule{
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"description": "{{ value|query }}",
}},
},
}, false, "duplicate")

f(&Group{
Name: "test",
Rules: []Rule{
{Record: "alert", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
},
}, false, "duplicate")

f(&Group{
Name: "test graphite prometheus bad expr",
Type: NewGraphiteType(),
Rules: []Rule{
{
Expr: "sum(up == 0 ) by (host)",
For: promutils.NewDuration(10 * time.Millisecond),
{
group: &Group{
Name: "wrong limit",
Limit: -1,
},
{
Expr: "sumSeries(time('foo.bar',10))",
expErr: "invalid limit",
},
{
group: &Group{
Name: "wrong concurrency",
Concurrency: -1,
},
expErr: "invalid concurrency",
},
}, false, "invalid rule")

f(&Group{
Name: "test graphite inherit",
Type: NewGraphiteType(),
Rules: []Rule{
{
Expr: "sumSeries(time('foo.bar',10))",
For: promutils.NewDuration(10 * time.Millisecond),
},
{
Expr: "sum(up == 0 ) by (host)",
},
},
}, false, "either `record` or `alert` must be set")

// validate expressions
f(&Group{
Name: "test",
Rules: []Rule{
{
Record: "record",
Expr: "up | 0",
},
},
}, true, "invalid expression")

f(&Group{
Name: "test thanos",
Type: NewRawType("thanos"),
Rules: []Rule{
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"description": "{{ value|query }}",
}},
},
}, true, "unknown datasource type")

f(&Group{
Name: "test graphite",
Type: NewGraphiteType(),
Rules: []Rule{
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"description": "some-description",
}},
},
}, true, "bad graphite expr")
}
func TestGroupValidate_Success(t *testing.T) {
f := func(group *Group, validateAnnotations, validateExpressions bool) {
t.Helper()

var validateTplFn ValidateTplFn
if validateAnnotations {
validateTplFn = notifier.ValidateTemplates
}
err := group.Validate(validateTplFn, validateExpressions)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
}

f(&Group{
Name: "test",
Rules: []Rule{
{
Record: "record",
Expr: "up | 0",
},
},
}, false, false)

f(&Group{
Name: "test",
Rules: []Rule{
{
Alert: "alert",
Expr: "up == 1",
Labels: map[string]string{
"summary": "{{ value|query }}",
{
group: &Group{
Name: "test",
Rules: []Rule{
{
Record: "record",
Expr: "up | 0",
},
},
},
expErr: "",
},
}, false, false)
// validate annotiations
f(&Group{
Name: "test",
Rules: []Rule{
{
Alert: "alert",
Expr: "up == 1",
Labels: map[string]string{
"summary": `
{
group: &Group{
Name: "test",
Rules: []Rule{
{
Record: "record",
Expr: "up | 0",
},
},
},
expErr: "invalid expression",
validateExpressions: true,
},
{
group: &Group{
Name: "test",
Rules: []Rule{
{
Alert: "alert",
Expr: "up == 1",
Labels: map[string]string{
"summary": "{{ value|query }}",
},
},
},
},
expErr: "",
},
{
group: &Group{
Name: "test",
Rules: []Rule{
{
Alert: "alert",
Expr: "up == 1",
Labels: map[string]string{
"summary": `
{{ with printf "node_memory_MemTotal{job='node',instance='%s'}" "localhost" | query }}
{{ . | first | value | humanize1024 }}B
{{ end }}`,
},
},
},
},
validateAnnotations: true,
},
{
group: &Group{
Name: "test",
Rules: []Rule{
{
Alert: "alert",
Expr: "up == 1",
},
{
Alert: "alert",
Expr: "up == 1",
},
},
},
expErr: "duplicate",
},
{
group: &Group{
Name: "test",
Rules: []Rule{
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
},
},
expErr: "duplicate",
},
{
group: &Group{
Name: "test",
Rules: []Rule{
{Record: "record", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
{Record: "record", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
},
},
expErr: "duplicate",
},
{
group: &Group{
Name: "test",
Rules: []Rule{
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"description": "{{ value|query }}",
}},
},
},
expErr: "",
},
{
group: &Group{
Name: "test",
Rules: []Rule{
{Record: "alert", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"summary": "{{ value|query }}",
}},
},
},
expErr: "",
},
{
group: &Group{
Name: "test thanos",
Type: NewRawType("thanos"),
Rules: []Rule{
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"description": "{{ value|query }}",
}},
},
},
validateExpressions: true,
expErr: "unknown datasource type",
},
{
group: &Group{
Name: "test graphite",
Type: NewGraphiteType(),
Rules: []Rule{
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"description": "some-description",
}},
},
},
validateExpressions: true,
expErr: "",
},
{
group: &Group{
Name: "test prometheus",
Type: NewPrometheusType(),
Rules: []Rule{
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"description": "{{ value|query }}",
}},
},
},
validateExpressions: true,
expErr: "",
},
{
group: &Group{
Name: "test graphite inherit",
Type: NewGraphiteType(),
Rules: []Rule{
{
Expr: "sumSeries(time('foo.bar',10))",
For: promutils.NewDuration(10 * time.Millisecond),
},
{
Expr: "sum(up == 0 ) by (host)",
},
},
},
},
}, true, false)

// validate expressions
f(&Group{
Name: "test prometheus",
Type: NewPrometheusType(),
Rules: []Rule{
{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"description": "{{ value|query }}",
}},
{
group: &Group{
Name: "test graphite prometheus bad expr",
Type: NewGraphiteType(),
Rules: []Rule{
{
Expr: "sum(up == 0 ) by (host)",
For: promutils.NewDuration(10 * time.Millisecond),
},
{
Expr: "sumSeries(time('foo.bar',10))",
},
},
},
expErr: "invalid rule",
},
}, false, true)
}
func TestHashRule_NotEqual(t *testing.T) {
f := func(a, b Rule) {
t.Helper()

aID, bID := HashRule(a), HashRule(b)
if aID == bID {
t.Fatalf("rule hashes mustn't be equal; got %d", aID)
}
}

f(Rule{Alert: "record", Expr: "up == 1"}, Rule{Record: "record", Expr: "up == 1"})

f(Rule{Record: "record", Expr: "up == 1"}, Rule{Record: "record", Expr: "up == 2"})

f(Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}}, Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"baz": "foo",
"foo": "baz",
}})

f(Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}}, Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"baz": "foo",
}})

f(Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}}, Rule{Alert: "alert", Expr: "up == 1"})
}
func TestHashRule_Equal(t *testing.T) {
f := func(a, b Rule) {
t.Helper()

aID, bID := HashRule(a), HashRule(b)
if aID != bID {
t.Fatalf("rule hashes must be equal; got %d and %d", aID, bID)
for _, tc := range testCases {
var validateTplFn ValidateTplFn
if tc.validateAnnotations {
validateTplFn = notifier.ValidateTemplates
}
err := tc.group.Validate(validateTplFn, tc.validateExpressions)
if err == nil {
if tc.expErr != "" {
t.Errorf("expected to get err %q; got nil insted", tc.expErr)
}
continue
}
if !strings.Contains(err.Error(), tc.expErr) {
t.Errorf("expected err to contain %q; got %q instead", tc.expErr, err)
}
}
}

f(Rule{Record: "record", Expr: "up == 1"}, Rule{Record: "record", Expr: "up == 1"})

f(Rule{Alert: "alert", Expr: "up == 1"}, Rule{Alert: "alert", Expr: "up == 1"})

f(Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}}, Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}})

f(Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}}, Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"baz": "foo",
"foo": "bar",
}})

f(Rule{Alert: "record", Expr: "up == 1"}, Rule{Alert: "record", Expr: "up == 1"})

f(Rule{
Alert: "alert", Expr: "up == 1", For: promutils.NewDuration(time.Minute), KeepFiringFor: promutils.NewDuration(time.Minute),
}, Rule{Alert: "alert", Expr: "up == 1"})
func TestHashRule(t *testing.T) {
testCases := []struct {
a, b Rule
equal bool
}{
{
Rule{Record: "record", Expr: "up == 1"},
Rule{Record: "record", Expr: "up == 1"},
true,
},
{
Rule{Alert: "alert", Expr: "up == 1"},
Rule{Alert: "alert", Expr: "up == 1"},
true,
},
{
Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}},
Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}},
true,
},
{
Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}},
Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"baz": "foo",
"foo": "bar",
}},
true,
},
{
Rule{Alert: "record", Expr: "up == 1"},
Rule{Alert: "record", Expr: "up == 1"},
true,
},
{
Rule{Alert: "alert", Expr: "up == 1", For: promutils.NewDuration(time.Minute), KeepFiringFor: promutils.NewDuration(time.Minute)},
Rule{Alert: "alert", Expr: "up == 1"},
true,
},
{
Rule{Alert: "record", Expr: "up == 1"},
Rule{Record: "record", Expr: "up == 1"},
false,
},
{
Rule{Record: "record", Expr: "up == 1"},
Rule{Record: "record", Expr: "up == 2"},
false,
},
{
Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}},
Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"baz": "foo",
"foo": "baz",
}},
false,
},
{
Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}},
Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"baz": "foo",
}},
false,
},
{
Rule{Alert: "alert", Expr: "up == 1", Labels: map[string]string{
"foo": "bar",
"baz": "foo",
}},
Rule{Alert: "alert", Expr: "up == 1"},
false,
},
}
for i, tc := range testCases {
aID, bID := HashRule(tc.a), HashRule(tc.b)
if tc.equal != (aID == bID) {
t.Fatalf("missmatch for rule %d", i)
}
}
}
func TestGroupChecksum(t *testing.T) {

@@ -32,14 +32,14 @@ var (
fsRegistry = make(map[string]FS)
)

// ReadFromFS parses the given path list and inits FS for each item.
// Once initialed, ReadFromFS will try to read and return files from each FS.
// ReadFromFS returns an error if at least one FS failed to init.
// readFromFS parses the given path list and inits FS for each item.
// Once initialed, readFromFS will try to read and return files from each FS.
// readFromFS returns an error if at least one FS failed to init.
// The function can be called multiple times but each unique path
// will be initialed only once.
//
// It is allowed to mix different FS types in path list.
func ReadFromFS(paths []string) (map[string][]byte, error) {
func readFromFS(paths []string) (map[string][]byte, error) {
var err error
result := make(map[string][]byte)
for _, path := range paths {
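The doc comment above promises that each unique path is initialized only once across calls, with fsRegistry as the memoization map. A minimal sketch of that init-once registry pattern; the FS interface and constructor here are stand-ins, not the actual vmalert types:

```go
package main

import (
	"fmt"
	"sync"
)

// FS is a stand-in for the real fs.FS interface of vmalert.
type FS interface {
	Init() error
}

var (
	fsRegistryMu sync.Mutex
	fsRegistry   = make(map[string]FS)
)

// initFS returns the FS registered for path, initializing it at most once.
func initFS(path string, newFS func(string) FS) (FS, error) {
	fsRegistryMu.Lock()
	defer fsRegistryMu.Unlock()
	if fs, ok := fsRegistry[path]; ok {
		return fs, nil // initialized by an earlier call
	}
	fs := newFS(path)
	if err := fs.Init(); err != nil {
		return nil, fmt.Errorf("cannot init FS for %q: %w", path, err)
	}
	fsRegistry[path] = fs
	return fs, nil
}

type localFS struct{ path string }

func (l *localFS) Init() error { fmt.Println("init", l.path); return nil }

func main() {
	newFS := func(p string) FS { return &localFS{path: p} }
	initFS("rules/*.yaml", newFS)
	initFS("rules/*.yaml", newFS) // no second "init" line: registry hit
}
```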
@@ -29,7 +29,7 @@ func (l *Logger) isDisabled() bool {
}

// Errorf logs error message.
func (l *Logger) Errorf(format string, args ...any) {
func (l *Logger) Errorf(format string, args ...interface{}) {
if l.isDisabled() {
return
}
@@ -37,7 +37,7 @@ func (l *Logger) Errorf(format string, args ...any) {
}

// Warnf logs warning message.
func (l *Logger) Warnf(format string, args ...any) {
func (l *Logger) Warnf(format string, args ...interface{}) {
if l.isDisabled() {
return
}
@@ -45,7 +45,7 @@ func (l *Logger) Warnf(format string, args ...any) {
}

// Infof logs info message.
func (l *Logger) Infof(format string, args ...any) {
func (l *Logger) Infof(format string, args ...interface{}) {
if l.isDisabled() {
return
}
@@ -54,6 +54,6 @@ func (l *Logger) Infof(format string, args ...any) {

// Panicf logs panic message and panics.
// Panicf can't be suppressed
func (l *Logger) Panicf(format string, args ...any) {
func (l *Logger) Panicf(format string, args ...interface{}) {
logger.Panicf(format, args...)
}
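All the Logger methods above share the same isDisabled guard, with Panicf deliberately exempt. A compact standalone version of the suppress-then-log pattern, with types simplified from the utils logger:

```go
package main

import (
	"fmt"
	"os"
	"sync/atomic"
)

// Logger drops Errorf output while suppressed; a real Panicf
// would bypass the guard, as the comment above notes.
type Logger struct{ suppressed atomic.Bool }

func (l *Logger) Suppress(v bool)  { l.suppressed.Store(v) }
func (l *Logger) isDisabled() bool { return l.suppressed.Load() }

func (l *Logger) Errorf(format string, args ...any) {
	if l.isDisabled() {
		return
	}
	fmt.Fprintf(os.Stderr, "error: "+format+"\n", args...)
}

func main() {
	var l Logger
	l.Errorf("shown: %d", 1)
	l.Suppress(true)
	l.Errorf("hidden: %d", 2) // dropped while suppressed
	l.Suppress(false)
	l.Errorf("shown again: %d", 3)
}
```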
@@ -18,14 +18,14 @@ func TestOutput(t *testing.T) {

mustMatch := func(exp string) {
t.Helper()

if exp == "" {
if testOutput.String() != "" {
t.Fatalf("expected output to be empty; got %q", testOutput.String())
t.Errorf("expected output to be empty; got %q", testOutput.String())
return
}
}
if !strings.Contains(testOutput.String(), exp) {
t.Fatalf("output %q should contain %q", testOutput.String(), exp)
t.Errorf("output %q should contain %q", testOutput.String(), exp)
}
fmt.Println(testOutput.String())
testOutput.Reset()