Compare commits

...

72 Commits

Author SHA1 Message Date
Aliaksandr Valialkin
e71519b8b2 app/victoria-metrics/testdata: add a test for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/395 2020-03-31 12:51:25 +03:00
Aliaksandr Valialkin
972713bd79 lib/storage: add fast path for the previous indexdb search if it doesn't contain per-day inverted index yet 2020-03-31 12:51:21 +03:00
Aliaksandr Valialkin
5d99ca6cfc lib/storage: optimize per-day inverted index search for tag filters matching big number of time series
- Sort tag filters in the ascending number of matching time series
  in order to apply the most specific filters first.
- Fall back to metricName search for filters matching big number of time series
  (usually these are negative filters or regexp filters).
2020-03-31 00:48:35 +03:00
Aliaksandr Valialkin
318326c309 lib/storage: properly handle {label=~"foo|"} filters as Prometheus does
Such filters must match all the time series with `label="foo"` plus all the time series without `label`

Previously only time series with `label="foo"` were matched.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/395
2020-03-31 00:48:18 +03:00
Aliaksandr Valialkin
a1e4c6a2be .github/workflows/wiki.yml: fix copying files from docs to wiki 2020-03-30 15:59:12 +03:00
Aliaksandr Valialkin
ac3ee44fa7 docs/robots.txt: trigger github actions 2020-03-30 15:54:39 +03:00
Aliaksandr Valialkin
b98ca56d94 lib/envflag: add -envflag.prefix for setting optional prefix for environment vars 2020-03-30 15:51:19 +03:00
Aliaksandr Valialkin
b41ee5f27d vendor: make vendor-update 2020-03-30 15:06:35 +03:00
Aliaksandr Valialkin
8d35af6fdb .github/workflows: copy all the files from docs folder to wiki and github pages 2020-03-30 15:05:37 +03:00
Aliaksandr Valialkin
0f2dd77a76 go.mod: update the minimum required Go version from go1.12 to go1.13 2020-03-30 14:56:57 +03:00
Aliaksandr Valialkin
0c485f14d1 app/vmselect/prometheus: allow passing relative time to start, end and time args of /api/v1/* queries 2020-03-29 21:57:14 +03:00
Aliaksandr Valialkin
2ebf7d86ff app/vmselect/prometheus: code simplification: (d.Seconds()/1e3) -> d.Milliseconds() 2020-03-29 21:50:28 +03:00
kreedom
bf6c24d0f4 [vmalert] config parser (#393)
* [vmalert] config parser

* make linter be happy

* fix test

* fix sprintf; add test for rule validation
2020-03-29 01:48:30 +02:00
Aliaksandr Valialkin
1f7292675a docs: add robots.txt 2020-03-28 23:22:46 +02:00
Aliaksandr Valialkin
bd156cd088 docs/vmagent.md: add prometheus remote_write proxy use case 2020-03-28 23:16:38 +02:00
Aliaksandr Valialkin
b695087119 docs/CaseStudies.md: add Brandwatch case study 2020-03-28 20:57:54 +02:00
Aliaksandr Valialkin
80f53e5396 deployment/docker: run docker apps under default user (0, root) in order to preserve backwards compatibility
If a docker app is upgraded from root to non-root, then the data pointed to by `-storageDataPath` or similar flags
becomes inaccessible to the non-root user after the upgrade. This breaks the upgrade path, so revert to the default root user
for docker apps.

Users may explicitly execute `docker run --user <non_root_user>` for running docker apps under non-root user.
2020-03-28 19:23:26 +02:00
Roman Khavronenko
7acb797595 Update dashboard according to new Grafana version. (#390)
The way regex for column styles in the Table panel is applied has changed in Grafana 6.7. This change fixes the Flags panel column styles accordingly.
2020-03-28 01:24:39 +02:00
Roman Khavronenko
3a8bbfd6b9 bump Prometheus and Grafana images (#389) 2020-03-28 01:15:07 +02:00
Dmitry Naumov
27373807c1 Rootless docker images by default (#358)
* Rootless docker images by default

* Migrate to rootless base image

Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
2020-03-27 21:23:50 +02:00
Aliaksandr Valialkin
8d7f0aa632 vendor: make vendor-update 2020-03-27 21:23:30 +02:00
Aliaksandr Valialkin
149f365f74 lib/httpserver: add -http.maxGracefulShutdownDuration command-line flag for tuning the maximum duration required for graceful shutdown of http server 2020-03-27 21:23:30 +02:00
kreedom
b22da547a2 [vmalert] - parse template annotations (#387)
* [vmalert] - parse template annotations
2020-03-27 18:31:16 +02:00
Aliaksandr Valialkin
047849e855 lib/uint64set: remove zero buckets after Set.Intersect 2020-03-27 01:15:58 +02:00
Aliaksandr Valialkin
f3ec424e7d lib/uint64set: small code cleanup and perf tuning
* Remember the last accessed bucket on Has() call.
* Inline fast paths inside Add() and Has() calls.
* Remove fragile code with maxUnsortedBuckets inside bucket32.
2020-03-25 15:30:25 +02:00
Aliaksandr Valialkin
ef8aee8a2d deployment/docker: update Go builder from Go1.14.0 to Go1.14.1 2020-03-24 22:35:26 +02:00
Aliaksandr Valialkin
dde4a97534 lib/uint64set: go fmt 2020-03-24 22:28:43 +02:00
Aliaksandr Valialkin
f3e0c55ea1 lib/storage: serialize snapshot creation process with mutex
This guarantees that the snapshot contains all the recently added data
from inmemory buffers when multiple concurrent calls to Storage.CreateSnapshot are performed.
2020-03-24 22:27:05 +02:00
Aliaksandr Valialkin
97fb0edd07 lib/uint64set: added more tests 2020-03-24 22:27:04 +02:00
Aliaksandr Valialkin
25f585ecf2 docs/CaseStudies.md: added a case study from MHI Vestas Offshore Wind 2020-03-14 13:22:12 +02:00
Aliaksandr Valialkin
df91d2d91f lib/storage: remove obsolete code 2020-03-13 22:48:17 +02:00
Aliaksandr Valialkin
3c7c71a49c app/vmselect: adjust label_map() handling for corner cases
The following corner cases are now supported:
* label_map(q, "label", "", "foo") - adds `label="foo"` to series with missing `label`
* label_map(q, "label", "foo", "") - removes `label="foo"` from series

All the unmatched labels are kept unchanged.
2020-03-13 18:45:03 +02:00
Aliaksandr Valialkin
69f1470692 vendor: update github.com/VictoriaMetrics/metrics from v1.11.0 to v1.11.2
This fixes data race in Histogram
2020-03-13 12:39:57 +02:00
Aliaksandr Valialkin
4fc4912f0c app/vmalert/datasource: typo fix in docs: Labels -> Label 2020-03-13 12:22:33 +02:00
kreedom
a746cb62b6 vmalert add vm datasource, change alertmanager (#364)
* vmalert add vm datasource, change alertmanager

* make linter be happy

* make linter be happy.2

* PR comments

* PR comments.1
2020-03-13 12:19:31 +02:00
Aliaksandr Valialkin
499594f421 lib/promscrape: allow overriding external_labels as Prometheus does
Prometheus docs at https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config say:

> In communication with external systems, they are always applied only
> when a time series does not have a given label yet and are ignored otherwise.

Though this may result in consistency chaos when scrape targets override `external_labels`,
let's stick with Prometheus behavior for the sake of backwards compatibility.

There is a last resort in vmagent with `-remoteWrite.label`, which consistently
sets the configured labels to all the metrics before sending them to remote storage.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/366
2020-03-12 20:24:42 +02:00
Aliaksandr Valialkin
fdc2a9d1d7 app/vmselect: add label_map(q, label, srcValue1, dstValue1, ... srcValueN, dstValueN) function to MetricsQL
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/369
2020-03-12 19:13:47 +02:00
Aliaksandr Valialkin
92d67e2592 vendor: update google.golang.org/genproto from fc8f55426688 to da6875a35672 2020-03-12 18:11:33 +02:00
Aliaksandr Valialkin
8a853778d7 vendor: update golang.org/x/tools from 26f6a1b6802d to 5e2df02acb1e 2020-03-12 18:07:52 +02:00
Aliaksandr Valialkin
8d75a5dbd0 vendor: update github.com/aws/aws-sdk-go from v1.29.10 to v1.29.22 2020-03-12 17:54:58 +02:00
Aliaksandr Valialkin
cdd6171af1 vendor: update google.golang.org/api from v0.19.0 to v0.20.0 2020-03-12 17:51:49 +02:00
Aliaksandr Valialkin
cc183bc899 vendor: update golang.org/x/sys from d5e6a3e2c0ae to 5c8b2ff67527 2020-03-12 17:46:24 +02:00
Aliaksandr Valialkin
3935038e20 vendor: update github.com/klauspost/compress from v1.10.1 to v1.10.3 2020-03-12 17:32:24 +02:00
Aliaksandr Valialkin
c8dc1cd218 lib/protoparser/csvimport: add missing metric vm_rows_invalid_total{type="csvimport"} 2020-03-12 15:27:45 +02:00
Aliaksandr Valialkin
c1551a3269 README.md: mention about alternative dashboard for cluster version - https://grafana.com/grafana/dashboards/11831 2020-03-12 15:10:14 +02:00
Aliaksandr Valialkin
8023ad7dbd app/vmselect: add -search.maxStalenessInterval for tuning Prometheus data model closer to Influx-style data model 2020-03-11 16:43:34 +02:00
Aliaksandr Valialkin
d4beb17ebe lib/promscrape: remove possible races when registering and de-registering scrape workers for /targets page 2020-03-11 16:30:21 +02:00
Aliaksandr Valialkin
fcd91795d5 app/vmagent: mention that vmagent can filter data 2020-03-11 16:22:39 +02:00
Aliaksandr Valialkin
650830db79 docs/Articles.md: add a link to https://stas.starikevich.com/posts/disk-usage-for-vm-versus-prometheus/ 2020-03-11 04:56:16 +02:00
Aliaksandr Valialkin
cdf70b7944 lib/promscrape: consistently update /targets page after SIGHUP 2020-03-11 03:20:03 +02:00
Aliaksandr Valialkin
301c2acd61 app/vmstorage: return 500 status code instead of 200 status code on internal errors inside /snapshot/* handlers 2020-03-10 23:51:55 +02:00
Aliaksandr Valialkin
61d0ee857c docs/vmagent.md: sync with app/vmagent/README.md 2020-03-10 21:54:04 +02:00
Aliaksandr Valialkin
e17702fada app/vmselect: add optional max_rows_per_line query arg to /api/v1/export
This arg allows limiting the number of data points that may be exported on a single line.
2020-03-10 21:45:56 +02:00
Aliaksandr Valialkin
1fe66fb3cc app/{vmagent,vminsert}: add support for importing csv data via /api/v1/import/csv 2020-03-10 21:15:35 +02:00
Aliaksandr Valialkin
49d7cb1a3f all: fix golangci-lint issues 2020-03-10 19:41:46 +02:00
Aliaksandr Valialkin
8d3869cd99 docs/FAQ.md: actualize answer about deduplication 2020-03-09 13:37:12 +02:00
Aliaksandr Valialkin
9d89b08cb5 docs: add missing vmagent.png, which is used in vmagent.md 2020-03-09 13:35:49 +02:00
Aliaksandr Valialkin
5fe38a84eb app/vmagent: properly apply -remoteWrite.sendTimeout to fasthttp.HostClient 2020-03-09 13:31:55 +02:00
Aliaksandr Valialkin
7c432da788 lib/promscrape: do not retry idempotent requests when scraping targets
This should prevent the following unexpected side effects of idempotent request retries:
- increased actual timeout when scraping the target compared to the configured scrape_timeout
- increased load on the target

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/357
2020-03-09 13:31:52 +02:00
Aliaksandr Valialkin
986dba5ab3 app/vmagent: do not allow non-supported fields in -remoteWrite.relabelConfig and file_sd_configs
This should reduce possible confusion like in the https://github.com/VictoriaMetrics/VictoriaMetrics/issues/363
2020-03-06 20:19:13 +02:00
Aliaksandr Valialkin
c386c5de57 app/vmagent: properly add labels set via -remoteWrite.label to metrics before sending them to -remoteWrite.url 2020-03-06 19:26:58 +02:00
Artem Navoiev
58a3e59d59 bump version of codecov-action to v1.0.6 2020-03-05 23:25:13 +02:00
Aliaksandr Valialkin
c5f894b361 Makefile: add build and test rules with enabled race detector. These rules have -race suffix
Fix also `unsafe pointer conversion` errors detected by Go1.14. See https://golang.org/doc/go1.14#compiler .
2020-03-05 12:03:38 +02:00
Aliaksandr Valialkin
9be64e34b4 docs/Articles.md: add a link to https://www.percona.com/blog/2020/02/28/better-prometheus-rate-function-with-victoriametrics/ 2020-03-04 20:05:26 +02:00
Aliaksandr Valialkin
e51a0a56f4 README.md: add a link to https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/Articles 2020-03-04 20:05:18 +02:00
Aliaksandr Valialkin
754db0d22e app/vmagent/README.md: small fixes 2020-03-04 18:14:47 +02:00
Aliaksandr Valialkin
772312bf7b app/vmagent/README.md: typo fix 2020-03-04 18:05:09 +02:00
Aliaksandr Valialkin
871abfab7a app/vmagent/README.md: clarification 2020-03-04 18:03:48 +02:00
Aliaksandr Valialkin
007c591de8 app/vmagent/README.md: add iot and edge monitoring use case 2020-03-04 18:01:34 +02:00
Aliaksandr Valialkin
474a09c0f1 app/vmagent/README.md: add use cases section 2020-03-04 17:42:27 +02:00
Aliaksandr Valialkin
d58aa80e9b README.md: add a link to Synthesio case study 2020-03-04 14:18:19 +02:00
Aliaksandr Valialkin
ad927575b7 docs/CaseStudies: add Synthesio 2020-03-04 14:14:39 +02:00
281 changed files with 22574 additions and 94250 deletions


@@ -2,7 +2,7 @@ name: github-pages
on:
push:
paths:
- 'docs/*.md'
- 'docs/*'
- 'README.md'
branches:
- master
@@ -17,14 +17,14 @@ jobs:
TOKEN: ${{secrets.CI_TOKEN}}
run: |
git clone https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.github.io.git gpages
cp docs/*.md gpages
cp docs/* gpages
cp README.md gpages
cd gpages
git config --local user.email "info@victoriametrics.com"
git config --local user.name "Vika"
git add "*.md"
git add .
git commit -m "update github pages"
remote_repo="https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.github.io.git"
git push "${remote_repo}"
cd ..
rm -rf gpages
rm -rf gpages


@@ -44,7 +44,7 @@ jobs:
GOOS=freebsd go build -mod=vendor ./app/victoria-metrics
GOOS=darwin go build -mod=vendor ./app/victoria-metrics
- name: Publish coverage
uses: codecov/codecov-action@v1.0.4
uses: codecov/codecov-action@v1.0.6
with:
token: ${{secrets.CODECOV_TOKEN}}
file: ./coverage.txt


@@ -2,7 +2,7 @@ name: wiki
on:
push:
paths:
- 'docs/*.md'
- 'docs/*'
branches:
- master
jobs:
@@ -15,15 +15,14 @@ jobs:
env:
TOKEN: ${{secrets.CI_TOKEN}}
run: |
cd docs
git clone https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.wiki.git wiki
find ./ -name '*.md' -exec cp -prv '{}' 'wiki' ';'
cp docs/* wiki
cd wiki
git config --local user.email "info@victoriametrics.com"
git config --local user.name "Vika"
git add "*.md"
git add .
git commit -m "update wiki pages"
remote_repo="https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.wiki.git"
git push "${remote_repo}"
cd ..
rm -rf wiki
rm -rf wiki


@@ -90,6 +90,9 @@ check-all: fmt vet lint errcheck golangci-lint
test:
GO111MODULE=on go test -mod=vendor ./lib/... ./app/...
test-race:
GO111MODULE=on go test -mod=vendor -race ./lib/... ./app/...
test-pure:
GO111MODULE=on CGO_ENABLED=0 go test -mod=vendor ./lib/... ./app/...


@@ -23,7 +23,11 @@ Cluster version is available [here](https://github.com/VictoriaMetrics/VictoriaM
* [COLOPL](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#colopl)
* [Wix.com](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#wixcom)
* [Wedos.com](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#wedoscom)
* [Synthesio](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#synthesio)
* [MHI Vestas Offshore Wind](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#mhi-vestas-offshore-wind)
* [Dreamteam](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#dreamteam)
* [Brandwatch](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#brandwatch)
## Prominent features
@@ -60,9 +64,11 @@ Cluster version is available [here](https://github.com/VictoriaMetrics/VictoriaM
if `-graphiteListenAddr` is set.
* [OpenTSDB put message](#sending-data-via-telnet-put-protocol) if `-opentsdbListenAddr` is set.
* [HTTP OpenTSDB /api/put requests](#sending-opentsdb-data-via-http-apiput-requests) if `-opentsdbHTTPListenAddr` is set.
* [/api/v1/import](#how-to-import-time-series-data)
* [/api/v1/import](#how-to-import-time-series-data).
* [Arbitrary CSV data](#how-to-import-csv-data).
* Ideally works with big amounts of time series data from Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data and various Enterprise workloads.
* Has open source [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
* See also technical [Articles about VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/Articles).
## Operation
@@ -422,6 +428,55 @@ The `/api/v1/export` endpoint should return the following response:
{"metric":{"__name__":"x.y.z","t1":"v1","t2":"v2"},"values":[45.34],"timestamps":[1566464763000]}
```
### How to import CSV data
Arbitrary CSV data can be imported via `/api/v1/import/csv`. The CSV data is imported according to the provided `format` query arg.
The `format` query arg must contain a comma-separated list of parsing rules for CSV fields. Each rule consists of three parts delimited by a colon:
```
<column_pos>:<type>:<context>
```
* `<column_pos>` is the position of the CSV column (field). Column numbering starts from 1. The order of parsing rules may be arbitrary.
* `<type>` describes the column type. Supported types are:
* `metric` - the corresponding CSV column at `<column_pos>` contains the metric value. The metric name is read from the `<context>`.
Each CSV line must contain at least one metric field.
* `label` - the corresponding CSV column at `<column_pos>` contains the label value. The label name is read from the `<context>`.
A CSV line may contain an arbitrary number of label fields. All these labels are attached to all the configured metrics.
* `time` - the corresponding CSV column at `<column_pos>` contains the metric timestamp. A CSV line may contain at most one time column.
If a CSV line has no time column, then the current time is used. The time is applied to all the configured metrics.
The format of the time is configured via `<context>`. Supported time formats are:
* `unix_s` - unix timestamp in seconds.
* `unix_ms` - unix timestamp in milliseconds.
* `unix_ns` - unix timestamp in nanoseconds. Note that VictoriaMetrics rounds the timestamp to milliseconds.
* `rfc3339` - timestamp in [RFC3339](https://tools.ietf.org/html/rfc3339) format, i.e. `2006-01-02T15:04:05Z`.
* `custom:<layout>` - custom layout for the timestamp. The `<layout>` may contain arbitrary time layout according to [time.Parse rules in Go](https://golang.org/pkg/time/#Parse).
Each request to `/api/v1/import/csv` may contain an arbitrary number of CSV lines.
Example for importing CSV data via `/api/v1/import/csv`:
```bash
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
After that the data may be read via the [/api/v1/export](#how-to-export-time-series) endpoint:
```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```
The following response should be returned:
```bash
{"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
{"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
{"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
{"metric":{"__name__":"ask","market":"NYSE","ticker":"GOOG"},"values":[1.23],"timestamps":[1583865146495]}
```
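A time column may be imported in the same way. The sketch below (hypothetical data; the server address is a placeholder) reads an explicit `unix_s` timestamp from the third CSV column:
```bash
curl -d "AMZN,2.34,1598456240" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,1:label:ticker,3:time:unix_s'
```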
### Prometheus querying API usage
VictoriaMetrics supports the following handlers from [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/):
@@ -576,6 +631,9 @@ Each JSON line would contain data for a single time series. An example output:
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
Optional `max_rows_per_line` arg may be added to the request in order to limit the maximum number of rows exported per JSON line.
By default each JSON line contains all the rows for a single time series.
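For example, a sketch combining these args (the time range and the row limit are placeholders):
```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={__name__!=""}' \
  -d 'start=2020-03-01T00:00:00Z' -d 'end=2020-03-02T00:00:00Z' -d 'max_rows_per_line=10000'
```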
Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth when exporting big amounts
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:
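A minimal sketch (the output file name is a placeholder):
```bash
curl -H 'Accept-Encoding: gzip' 'http://localhost:8428/api/v1/export' -d 'match[]={__name__!=""}' > exported_data.jsonl.gz
```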
@@ -597,6 +655,7 @@ Time series data can be imported via any supported ingestion protocol:
* [OpenTSDB telnet put protocol](#sending-data-via-telnet-put-protocol)
* [OpenTSDB http /api/put](#sending-opentsdb-data-via-http-apiput-requests)
* `/api/v1/import` http POST handler, which accepts data from [/api/v1/export](#how-to-export-time-series).
* `/api/v1/import/csv` http POST handler, which accepts CSV data. See [these docs](#how-to-import-csv-data) for details.
The most efficient protocol for importing data into VictoriaMetrics is `/api/v1/import`. Example for importing data obtained via `/api/v1/export`:
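A sketch, assuming data exported from one instance is imported into another (hostnames and the file name are placeholders):
```bash
curl 'http://source-vm:8428/api/v1/export' -d 'match[]={__name__!=""}' > exported_data.jsonl
curl -X POST 'http://destination-vm:8428/api/v1/import' -T exported_data.jsonl
```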
@@ -800,6 +859,7 @@ Alternatively they can be self-scraped by setting `-selfScrapeInterval` command-
For example, `-selfScrapeInterval=10s` would enable self-scraping of `/metrics` page with 10 seconds interval.
There are official Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/dashboards/10229) and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176).
There is also an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831).
The most interesting metrics are:


@@ -3,6 +3,9 @@
victoria-metrics:
APP_NAME=victoria-metrics $(MAKE) app-local
victoria-metrics-race:
APP_NAME=victoria-metrics RACE=-race $(MAKE) app-local
victoria-metrics-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker


@@ -1,8 +1,8 @@
ARG certs_image
FROM $certs_image AS certs
FROM scratch
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ARG base_image
FROM $base_image
EXPOSE 8428
ENTRYPOINT ["/victoria-metrics-prod"]
ARG src_binary
COPY $src_binary ./victoria-metrics-prod
EXPOSE 8428
ENTRYPOINT ["/victoria-metrics-prod"]


@@ -0,0 +1,16 @@
{
"name": "empty-label-match",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/395",
"data": [
"empty_label_match 1 {TIME_S-1m}",
"empty_label_match;foo=bar 2 {TIME_S-1m}",
"empty_label_match;foo=baz 3 {TIME_S-1m}"],
"query": ["/api/v1/query_range?query=empty_label_match{foo=~'bar|'}&start={TIME_S}&end={TIME_S}&step=60"],
"result_query_range": {
"status":"success",
"data":{"resultType":"matrix",
"result":[
{"metric":{"__name__":"empty_label_match"},"values":[["{TIME_S}","1"]]},
{"metric":{"__name__":"empty_label_match","foo":"bar"},"values":[["{TIME_S}","2"]]}
]}}
}


@@ -3,6 +3,9 @@
vmagent:
APP_NAME=vmagent $(MAKE) app-local
vmagent-race:
APP_NAME=vmagent RACE=-race $(MAKE) app-local
vmagent-prod:
APP_NAME=vmagent $(MAKE) app-via-docker


@@ -1,7 +1,8 @@
## vmagent
`vmagent` is a tiny but brave agent, which helps you collect metrics from various sources
and store them in [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics).
and store them in [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics)
or any other Prometheus-compatible storage system that supports the `remote_write` protocol.
<img alt="vmagent" src="vmagent.png">
@@ -18,13 +19,14 @@ to `vmagent` (like the ability to push metrics instead of pulling them). We did
* Can be used as drop-in replacement for Prometheus for scraping targets such as [node_exporter](https://github.com/prometheus/node_exporter).
See [Quick Start](#quick-start) for details.
* Can add, remove and modify labels via Prometheus relabeling. See [these docs](#relabeling) for details.
* Can add, remove and modify labels (aka tags) via Prometheus relabeling. Can filter data before sending it to remote storage. See [these docs](#relabeling) for details.
* Accepts data via all the ingestion protocols supported by VictoriaMetrics:
* Influx line protocol via `http://<vmagent>:8429/write`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf).
* JSON lines import protocol via `http://<vmagent>:8429/api/v1/import`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-time-series-data).
* Graphite plaintext protocol if `-graphiteListenAddr` command-line flag is set. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-graphite-compatible-agents-such-as-statsd).
* OpenTSDB telnet and http protocols if `-opentsdbListenAddr` command-line flag is set. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-opentsdb-compatible-agents).
* Prometheus remote write protocol via `http://<vmagent>:8429/api/v1/write`.
* JSON lines import protocol via `http://<vmagent>:8429/api/v1/import`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-time-series-data).
* Arbitrary CSV data via `http://<vmagent>:8429/api/v1/import/csv`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-csv-data).
* Can replicate collected metrics simultaneously to multiple remote storage systems.
* Works in environments with unstable connections to remote storage. If the remote storage is unavailable, the collected metrics
are buffered at `-remoteWrite.tmpDataPath`. The buffered metrics are sent to remote storage as soon as connection
@@ -53,14 +55,67 @@ If you need collecting only Influx data, then the following command line would b
/path/to/vmagent -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write
```
Then send Influx data to `http://vmagent-host:8429/write`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for more details.
Then send Influx data to `http://vmagent-host:8429`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for more details.
`vmagent` is also available in [docker images](https://hub.docker.com/r/victoriametrics/vmagent/).
Pass `-help` to `vmagent` in order to see the full list of supported command-line flags with their descriptions.
### How to collect metrics in Prometheus format?
### Use cases
#### IoT and Edge monitoring
`vmagent` can run and collect metrics in IoT and industrial networks with unreliable or scheduled connections to the remote storage.
It buffers the collected data in local files until the connection to remote storage becomes available and then sends the buffered
data to the remote storage, retrying on any errors.
The maximum buffer size can be limited with `-remoteWrite.maxDiskUsagePerURL`.
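A minimal invocation sketch (paths, URL and the 500MB cap are placeholders):
```bash
/path/to/vmagent -remoteWrite.url=http://remote-storage:8428/api/v1/write \
  -remoteWrite.tmpDataPath=/var/lib/vmagent-buffer \
  -remoteWrite.maxDiskUsagePerURL=500000000
```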
`vmagent` works on various architectures from the IoT world: 32-bit arm, 64-bit arm, ppc64, 386, amd64.
See [the corresponding Makefile rules](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmagent/Makefile) for details.
#### Drop-in replacement for Prometheus
If you use Prometheus only for scraping metrics from various targets and forwarding these metrics to remote storage,
then `vmagent` can replace such a Prometheus setup. Usually `vmagent` requires lower amounts of RAM, CPU and network bandwidth compared to Prometheus in such a setup.
See [these docs](#how-to-collect-metrics-in-prometheus-format) for details.
#### Replication and high availability
`vmagent` replicates the collected metrics among multiple remote storage instances configured via `-remoteWrite.url` args.
If a single remote storage instance temporarily goes out of service, then the collected data remains available in the other remote storage instances.
`vmagent` buffers the collected data in files at `-remoteWrite.tmpDataPath` until the remote storage becomes available again.
Then it sends the buffered data to the remote storage in order to prevent data gaps in the remote storage.
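A sketch replicating the collected data to two storage instances (URLs are placeholders):
```bash
/path/to/vmagent -remoteWrite.url=http://storage-a:8428/api/v1/write \
  -remoteWrite.url=http://storage-b:8428/api/v1/write
```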
#### Relabeling and filtering
`vmagent` can add, remove or update labels on the collected data before sending it to remote storage. Additionally,
it can remove unneeded samples via Prometheus-like relabeling before sending the collected data to remote storage.
See [these docs](#relabeling) for details.
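For instance, a sketch dropping all the samples for metrics whose names start with `debug_` (the file name and the rule itself are illustrative; the file uses the usual Prometheus `relabel_config` syntax):
```bash
cat > relabel.yml <<'EOF'
- action: drop
  source_labels: [__name__]
  regex: "debug_.*"
EOF
/path/to/vmagent -remoteWrite.url=http://remote-storage:8428/api/v1/write \
  -remoteWrite.relabelConfig=relabel.yml
```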
#### Splitting data streams among multiple systems
`vmagent` supports splitting the collected data among multiple destinations with the help of `-remoteWrite.urlRelabelConfig`,
which is applied independently for each configured `-remoteWrite.url` destination. For instance, it is possible to replicate or split
data among long-term remote storage, short-term remote storage and real-time analytical system [built on top of Kafka](https://github.com/Telefonica/prometheus-kafka-adapter).
Note that each destination can receive its own subset of the collected data thanks to per-destination relabeling via `-remoteWrite.urlRelabelConfig`.
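A sketch with two destinations, each with its own relabel config (URLs and file names are placeholders; `-remoteWrite.urlRelabelConfig` values are matched to `-remoteWrite.url` values by position):
```bash
/path/to/vmagent \
  -remoteWrite.url=http://long-term-storage:8428/api/v1/write -remoteWrite.urlRelabelConfig=long-term.yml \
  -remoteWrite.url=http://kafka-adapter:8080/receive -remoteWrite.urlRelabelConfig=kafka.yml
```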
#### Prometheus remote_write proxy
`vmagent` may be used as a proxy for Prometheus data sent via the Prometheus `remote_write` protocol. It can accept data via the `remote_write` API
at the `/api/v1/write` endpoint, apply relabeling and filtering, and then proxy it to other `remote_write` systems.
`vmagent` can be configured to accept the incoming `remote_write` requests over TLS with `-tls*` command-line flags.
Additionally, Basic Auth can be enabled for the incoming `remote_write` requests with `-httpAuth.*` command-line flags.
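A proxy setup sketch (certificate paths and credentials are placeholders):
```bash
/path/to/vmagent -remoteWrite.url=http://victoria-metrics:8428/api/v1/write \
  -tls -tlsCertFile=/path/to/cert.pem -tlsKeyFile=/path/to/key.pem \
  -httpAuth.username=prom -httpAuth.password=secret
```
Then point Prometheus `remote_write` at `https://vmagent-host:8429/api/v1/write`.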
### How to collect metrics in Prometheus format
Pass the path to `prometheus.yml` to `-promscrape.config` command-line flag. `vmagent` takes into account the following
sections from [Prometheus config file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/):
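A minimal invocation sketch (paths and URL are placeholders):
```bash
/path/to/vmagent -promscrape.config=/path/to/prometheus.yml \
  -remoteWrite.url=http://victoria-metrics:8428/api/v1/write
```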
@@ -139,8 +194,8 @@ either via `vmagent` itself or via Prometheus, so the exported metrics could be
and `vmagent_remotewrite_pending_data_bytes` metric exported by `vmagent` at `/metrics` page constantly grows.
* `vmagent` buffers scraped data at `-remoteWrite.tmpDataPath` directory until it is sent to `-remoteWrite.url`.
The directory can grow big when remote storage is unavailable during extended periods of time. If you don't want
to send all the data from the directory to remote storage, just stop `vmagent` and delete the directory.
The directory can grow big when remote storage is unavailable during extended periods of time and if `-remoteWrite.maxDiskUsagePerURL` isn't set.
If you don't want to send all the data from the directory to remote storage, just stop `vmagent` and delete the directory.
### How to build from sources


@@ -0,0 +1,63 @@
package csvimport
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="csvimport"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="csvimport"}`)
)
// InsertHandler processes csv data from req.
func InsertHandler(req *http.Request) error {
return writeconcurrencylimiter.Do(func() error {
return parser.ParseStream(req, insertRows)
})
}
func insertRows(rows []parser.Row) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
tssDst := ctx.WriteRequest.Timeseries[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
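// The slices above are reused across requests via the push context in order to reduce allocations;
// each time series appended below references its own sub-slice of the shared labels and samples buffers.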
for i := range rows {
r := &rows[i]
labelsLen := len(labels)
labels = append(labels, prompbmarshal.Label{
Name: "__name__",
Value: r.Metric,
})
for j := range r.Tags {
tag := &r.Tags[j]
labels = append(labels, prompbmarshal.Label{
Name: tag.Key,
Value: tag.Value,
})
}
samples = append(samples, prompbmarshal.Sample{
Value: r.Value,
Timestamp: r.Timestamp,
})
tssDst = append(tssDst, prompbmarshal.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[len(samples)-1:],
})
}
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
remotewrite.Push(&ctx.WriteRequest)
rowsInserted.Add(len(rows))
rowsPerInsert.Update(float64(len(rows)))
return nil
}


@@ -1,8 +1,8 @@
ARG certs_image
FROM $certs_image AS certs
FROM scratch
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ARG base_image
FROM $base_image
EXPOSE 8429
ENTRYPOINT ["/vmagent-prod"]
ARG src_binary
COPY $src_binary ./vmagent-prod
EXPOSE 8429
ENTRYPOINT ["/vmagent-prod"]


@@ -7,6 +7,7 @@ import (
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/graphite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/opentsdb"
@@ -127,6 +128,15 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
}
w.WriteHeader(http.StatusNoContent)
return true
case "/api/v1/import/csv":
csvimportRequests.Inc()
if err := csvimport.InsertHandler(r); err != nil {
csvimportErrors.Inc()
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "/write", "/api/v2/write":
influxWriteRequests.Inc()
if err := influx.InsertHandlerForHTTP(r); err != nil {
@@ -152,11 +162,14 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
}
var (
prometheusWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/write", protocol="prometheus"}`)
prometheusWriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/write", protocol="prometheus"}`)
prometheusWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/write", protocol="promremotewrite"}`)
prometheusWriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/write", protocol="promremotewrite"}`)
vmimportRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/import", protocol="vm"}`)
vmimportErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/import", protocol="vm"}`)
vmimportRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/import", protocol="vmimport"}`)
vmimportErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/import", protocol="vmimport"}`)
csvimportRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/import/csv", protocol="csvimport"}`)
csvimportErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/import/csv", protocol="csvimport"}`)
influxWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/write", protocol="influx"}`)
influxWriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/write", protocol="influx"}`)


@@ -69,6 +69,7 @@ func newClient(remoteWriteURL, urlLabelValue string, fq *persistentqueue.FastQue
if readTimeout <= 0 {
readTimeout = time.Minute
}
writeTimeout := readTimeout
var u fasthttp.URI
u.Update(remoteWriteURL)
scheme := string(u.Scheme())
@@ -109,7 +110,7 @@ func newClient(remoteWriteURL, urlLabelValue string, fq *persistentqueue.FastQue
MaxConns: maxConns,
MaxIdleConnDuration: 10 * readTimeout,
ReadTimeout: readTimeout,
WriteTimeout: 10 * time.Second,
WriteTimeout: writeTimeout,
MaxResponseBodySize: 1024 * 1024,
}
c := &client{


@@ -89,7 +89,7 @@ func Stop() {
// Each timeseries in wr.Timeseries must contain one sample.
func Push(wr *prompbmarshal.WriteRequest) {
var rctx *relabelCtx
if len(prcsGlobal) > 0 {
if len(prcsGlobal) > 0 || len(labelsGlobal) > 0 {
rctx = getRelabelCtx()
}
tss := wr.Timeseries

app/vmalert/common/alert.go Normal file

@@ -0,0 +1,133 @@
package common
import (
"bytes"
"fmt"
"io"
"strings"
"text/template"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)
// Alert represents the triggered alert
type Alert struct {
Group string
Name string
Labels []datasource.Label
Annotations map[string]string
Start time.Time
End time.Time
Value float64
}
type alertTplData struct {
Labels map[string]string
ExternalLabels map[string]string
Value float64
}
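// tplHeader is prepended to each annotation template so that templates can reference
// the {{$value}}, {{$labels}} and {{$externalLabels}} shortcuts.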
const tplHeader = `{{ $value := .Value }}{{ $labels := .Labels }}{{ $externalLabels := .ExternalLabels }}`
// AlertsFromMetrics converts metrics to alerts for the given alert Rule
func AlertsFromMetrics(metrics []datasource.Metric, group string, rule Rule, start, end time.Time) []Alert {
alerts := make([]Alert, 0, len(metrics))
var err error
for i, m := range metrics {
a := Alert{
Group: group,
Name: rule.Name,
Start: start,
End: end,
Value: m.Value,
}
tplData := alertTplData{Value: m.Value, ExternalLabels: make(map[string]string)}
tplData.Labels, a.Labels = mergeLabels(metrics[i].Labels, rule.Labels)
a.Annotations, err = templateAnnotations(rule.Annotations, tplHeader, tplData)
if err != nil {
logger.Errorf("%s", err)
}
alerts = append(alerts, a)
}
return alerts
}
func mergeLabels(ml []datasource.Label, rl map[string]string) (map[string]string, []datasource.Label) {
set := make(map[string]string, len(ml)+len(rl))
sl := append([]datasource.Label(nil), ml...)
for _, i := range ml {
set[i.Name] = i.Value
}
for name, value := range rl {
if _, ok := set[name]; ok {
continue
}
set[name] = value
sl = append(sl, datasource.Label{
Name: name,
Value: value,
})
}
return set, sl
}
func templateAnnotations(annotations map[string]string, header string, data alertTplData) (map[string]string, error) {
var builder strings.Builder
var buf bytes.Buffer
eg := errGroup{}
r := make(map[string]string, len(annotations))
for key, text := range annotations {
r[key] = text
buf.Reset()
builder.Reset()
builder.Grow(len(header) + len(text))
builder.WriteString(header)
builder.WriteString(text)
if err := templateAnnotation(&buf, builder.String(), data); err != nil {
eg.errs = append(eg.errs, fmt.Sprintf("key %s, template %s:%s", key, text, err))
continue
}
r[key] = buf.String()
}
return r, eg.err()
}
// ValidateAnnotations validates annotations for possible template errors; it uses empty data for template population
func ValidateAnnotations(annotations map[string]string) error {
_, err := templateAnnotations(annotations, tplHeader, alertTplData{
Labels: map[string]string{},
ExternalLabels: map[string]string{},
Value: 0,
})
return err
}
func templateAnnotation(dst io.Writer, text string, data alertTplData) error {
// todo add template helper func from Prometheus
tpl, err := template.New("").Option("missingkey=zero").Parse(text)
if err != nil {
return fmt.Errorf("error parsing annotation:%w", err)
}
if err = tpl.Execute(dst, data); err != nil {
return fmt.Errorf("error evaluating annotation template:%w", err)
}
return nil
}
type errGroup struct {
errs []string
}
func (eg *errGroup) err() error {
if eg == nil || len(eg.errs) == 0 {
return nil
}
return eg
}
func (eg *errGroup) Error() string {
return fmt.Sprintf("errors:%s", strings.Join(eg.errs, "\n"))
}


@@ -0,0 +1,99 @@
package common
import (
"reflect"
"sort"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
)
func TestAlertsFromMetrics(t *testing.T) {
now := time.Now()
metrics := []datasource.Metric{
{
Labels: []datasource.Label{
{Name: "__name__", Value: "foo"},
{Name: "label", Value: "value"},
},
Timestamp: 10,
Value: 20,
},
{
Labels: []datasource.Label{
{Name: "__name__", Value: "bar"},
{Name: "label", Value: "value"},
},
Timestamp: 10,
Value: 30,
},
}
rule := Rule{
Name: "alertname",
Expr: "up==0",
Labels: map[string]string{
"label2": "value",
},
Annotations: map[string]string{
"tpl": "{{$value}} {{ $labels.label}}",
},
}
alerts := AlertsFromMetrics(metrics, "group", rule, now, now)
if len(alerts) != 2 {
t.Fatalf("expecting 2 alerts got %d", len(alerts))
}
f := func(got, exp Alert) {
t.Helper()
if got.Group != exp.Group ||
got.Value != exp.Value ||
got.End != exp.End ||
got.Name != exp.Name ||
got.Start != exp.Start {
t.Errorf("alerts are not equal: \nwant %#v \ngot %#v", exp, got)
}
sort.Slice(got.Labels, func(i, j int) bool {
return got.Labels[i].Name < got.Labels[j].Name
})
sort.Slice(exp.Labels, func(i, j int) bool {
return exp.Labels[i].Name < exp.Labels[j].Name
})
if !reflect.DeepEqual(got.Labels, exp.Labels) {
t.Errorf("alerts labels are not equal: want %+v got %+v", exp.Labels, got.Labels)
}
if !reflect.DeepEqual(got.Annotations, exp.Annotations) {
t.Errorf("alerts annotations are not equal: want %+v got %+v", exp.Annotations, got.Annotations)
}
}
f(alerts[0], Alert{
Group: "group",
Name: "alertname",
Labels: []datasource.Label{
{Name: "__name__", Value: "foo"},
{Name: "label", Value: "value"},
{Name: "label2", Value: "value"},
},
Annotations: map[string]string{
"tpl": "20 value",
},
Start: now,
End: now,
Value: 20,
})
f(alerts[1], Alert{
Group: "group",
Name: "alertname",
Labels: []datasource.Label{
{Name: "__name__", Value: "bar"},
{Name: "label", Value: "value"},
{Name: "label2", Value: "value"},
},
Annotations: map[string]string{
"tpl": "30 value",
},
Start: now,
End: now,
Value: 30,
})
}


@@ -0,0 +1,38 @@
package common
import (
"errors"
"fmt"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/metricsql"
)
// Rule is the basic alerting rule entity
type Rule struct {
Name string `yaml:"alert"`
Expr string `yaml:"expr"`
For time.Duration `yaml:"for"`
Labels map[string]string `yaml:"labels"`
Annotations map[string]string `yaml:"annotations"`
}
// Validate validates rule
func (r Rule) Validate() error {
if r.Name == "" {
return errors.New("rule name can not be empty")
}
if r.Expr == "" {
return fmt.Errorf("rule %s expression can not be empty", r.Name)
}
if _, err := metricsql.Parse(r.Expr); err != nil {
return fmt.Errorf("rule %s invalid expression: %w", r.Name, err)
}
return nil
}
// Group is a named group of alerting rules
type Group struct {
Name string
Rules []Rule
}


@@ -0,0 +1,18 @@
package common
import "testing"
func TestRule_Validate(t *testing.T) {
if err := (Rule{}).Validate(); err == nil {
t.Errorf("expected empty name error")
}
if err := (Rule{Name: "alert"}).Validate(); err == nil {
t.Errorf("expected empty expr error")
}
if err := (Rule{Name: "alert", Expr: "test{"}).Validate(); err == nil {
t.Errorf("expected invalid expr error")
}
if err := (Rule{Name: "alert", Expr: "test>0"}).Validate(); err != nil {
t.Errorf("expected valid rule, got %s", err)
}
}


@@ -1,32 +1,64 @@
package config
import "time"
// Annotations basic annotation for alert rule
type Annotations struct {
Summary string
Description string
}
import (
"fmt"
"io/ioutil"
"path/filepath"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/common"
"gopkg.in/yaml.v2"
)
// Parse parses rule configs from given file patterns
func Parse(pathPatterns []string, validateAnnotations bool) ([]common.Group, error) {
var fp []string
for _, pattern := range pathPatterns {
matches, err := filepath.Glob(pattern)
if err != nil {
return nil, fmt.Errorf("error reading file patther %s:%v", pattern, err)
}
fp = append(fp, matches...)
}
var groups []common.Group
for _, file := range fp {
groupsNames := map[string]struct{}{}
gr, err := parseFile(file)
if err != nil {
return nil, fmt.Errorf("file %s: %w", file, err)
}
for _, group := range gr {
if _, ok := groupsNames[group.Name]; ok {
return nil, fmt.Errorf("one file can not contain groups with the same name %s, filepath:%s", file, group.Name)
}
groupsNames[group.Name] = struct{}{}
for _, rule := range group.Rules {
if err = rule.Validate(); err != nil {
return nil, fmt.Errorf("invalid rule filepath:%s, group %s:%w", file, group.Name, err)
}
if validateAnnotations {
if err = common.ValidateAnnotations(rule.Annotations); err != nil {
return nil, fmt.Errorf("invalida annotations filepath:%s, group %s:%w", file, group.Name, err)
}
}
}
}
groups = append(groups, gr...)
}
if len(groups) < 1 {
return nil, fmt.Errorf("no groups found in %s", strings.Join(pathPatterns, ";"))
}
return groups, nil
}
// Alert basic alert entity rule
type Alert struct {
Name string
Expr string
For time.Duration
Labels map[string]string
Annotations Annotations
Start time.Time
End time.Time
}
// Group grouping array of alert
type Group struct {
Name string
Rules []Alert
}
// Parse parses config from given file
func Parse(filepath string) ([]Group, error) {
return []Group{}, nil
func parseFile(path string) ([]common.Group, error) {
data, err := ioutil.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("error reading alert rule file: %w", err)
}
g := struct {
Groups []common.Group `yaml:"groups"`
}{}
err = yaml.Unmarshal(data, &g)
return g.Groups, err
}


@@ -0,0 +1,26 @@
package config
import (
"testing"
)
func TestParseGood(t *testing.T) {
if _, err := Parse([]string{"testdata/*good.rules", "testdata/dir/*good.*"}, true); err != nil {
t.Errorf("error parsing files %s", err)
}
}
func TestParseBad(t *testing.T) {
if _, err := Parse([]string{"testdata/rules0-bad.rules"}, true); err == nil {
t.Errorf("expected syntaxt error")
}
if _, err := Parse([]string{"testdata/dir/rules0-bad.rules"}, true); err == nil {
t.Errorf("expected template annotation error")
}
if _, err := Parse([]string{"testdata/dir/rules1-bad.rules"}, true); err == nil {
t.Errorf("expected same group error")
}
if _, err := Parse([]string{"testdata/*.yaml"}, true); err == nil {
t.Errorf("expected empty group")
}
}


@@ -0,0 +1,19 @@
groups:
- name: group
rules:
- alert: InvalidAnnotations
for: 5m
expr: vm_rows > 0
labels:
label: bar
annotations:
summary: "{{ $value }"
description: "{{$labels}}"
- alert: UnkownAnnotationsFunction
for: 5m
expr: vm_rows > 0
labels:
label: bar
annotations:
summary: "{{ value|humanize }}"
description: "{{$labels}}"


@@ -0,0 +1,13 @@
groups:
- name: duplicatedGroupDiffFiles
rules:
- alert: VMRows
for: 5m
expr: vm_rows > 0
labels:
label: bar
annotations:
summary: "{{ $value }}"
description: "{{$labels}}"


@@ -0,0 +1,22 @@
groups:
- name: sameGroup
rules:
- alert: alert
for: 5m
expr: vm_rows > 0
labels:
label: bar
annotations:
summary: "{{ $value }}"
description: "{{$labels}}"
- name: sameGroup
rules:
- alert: alert
for: 5m
expr: vm_rows > 0
labels:
label: bar
annotations:
summary: "{{ $value }}"
description: "{{$labels}}"


@@ -0,0 +1,13 @@
groups:
- name: duplicatedGroupDiffFiles
rules:
- alert: VMRows
for: 5m
expr: vm_rows > 0
labels:
label: bar
annotations:
summary: "{{ $value }}"
description: "{{$labels}}"


@@ -0,0 +1,28 @@
groups:
- name: group
rules:
- alert: InvalidExpr
for: 5m
expr: vm_rows{ > 0
labels:
label: bar
annotations:
summary: "{{ $value }}"
description: "{{$labels}}"
- alert: EmptyExpr
for: 5m
expr: ""
labels:
label: bar
annotations:
summary: "{{ $value }}"
description: "{{$labels}}"
- alert: ""
for: 5m
expr: vm_rows > 0
labels:
label: foo
annotations:
summary: "{{ $value }}"
description: "{{$labels}}"


@@ -0,0 +1,12 @@
groups:
- name: groupGorSingleAlert
rules:
- alert: VMRows
for: 5m
expr: vm_rows > 0
labels:
label: bar
annotations:
summary: "{{ $value }}"
description: "{{$labels}}"


@@ -1,14 +1,15 @@
package datasource
import "context"
// Metrics the data returns from storage
type Metrics struct{}
// VMStorage represents vmstorage entity with ability to read and write metrics
type VMStorage struct{}
//Query basic query to the datasource
func (s *VMStorage) Query(ctx context.Context, query string) ([]Metrics, error) {
return nil, nil
// Metric is the basic entity returned by a datasource.
// It represents a single data point with a full list of labels
type Metric struct {
Labels []Label
Timestamp int64
Value float64
}
// Label represents metric's label
type Label struct {
Name string
Value string
}


@@ -0,0 +1,103 @@
package datasource
import (
"context"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strconv"
"strings"
)
type response struct {
Status string `json:"status"`
Data struct {
ResultType string `json:"resultType"`
Result []struct {
Labels map[string]string `json:"metric"`
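// TV holds the [timestamp, value] pair returned by the query API.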
TV [2]interface{} `json:"value"`
} `json:"result"`
} `json:"data"`
ErrorType string `json:"errorType"`
Error string `json:"error"`
}
func (r response) metrics() ([]Metric, error) {
var ms []Metric
var m Metric
var f float64
var err error
for i, res := range r.Data.Result {
f, err = strconv.ParseFloat(res.TV[1].(string), 64)
if err != nil {
return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %s", res, res.TV[1], err)
}
m.Labels = nil
for k, v := range r.Data.Result[i].Labels {
m.Labels = append(m.Labels, Label{Name: k, Value: v})
}
m.Timestamp = int64(res.TV[0].(float64))
m.Value = f
ms = append(ms, m)
}
return ms, nil
}
const queryPath = "/api/v1/query?query="
// VMStorage represents vmstorage entity with ability to read and write metrics
type VMStorage struct {
c *http.Client
queryURL string
basicAuthUser, basicAuthPass string
}
// NewVMStorage is a constructor for VMStorage
func NewVMStorage(baseURL, basicAuthUser, basicAuthPass string, c *http.Client) *VMStorage {
return &VMStorage{
c: c,
basicAuthUser: basicAuthUser,
basicAuthPass: basicAuthPass,
queryURL: strings.TrimSuffix(baseURL, "/") + queryPath,
}
}
// Query reads metrics from datasource by given query
func (s *VMStorage) Query(ctx context.Context, query string) ([]Metric, error) {
const (
statusSuccess, statusError, rtVector = "success", "error", "vector"
)
req, err := http.NewRequest("POST", s.queryURL+url.QueryEscape(query), nil)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "application/json")
if s.basicAuthPass != "" {
req.SetBasicAuth(s.basicAuthUser, s.basicAuthPass)
}
resp, err := s.c.Do(req.WithContext(ctx))
if err != nil {
return nil, fmt.Errorf("error getting response from %s:%s", req.URL, err)
}
defer func() { _ = resp.Body.Close() }()
if resp.StatusCode != http.StatusOK {
body, _ := ioutil.ReadAll(resp.Body)
return nil, fmt.Errorf("datasource returns unxeprected response code %d for %s with err %s. Reponse body %s", resp.StatusCode, req.URL, err, body)
}
r := &response{}
if err := json.NewDecoder(resp.Body).Decode(r); err != nil {
return nil, fmt.Errorf("error parsing metrics for %s:%s", req.URL, err)
}
if r.Status == statusError {
return nil, fmt.Errorf("response error, query: %s, errorType: %s, error: %s", req.URL, r.ErrorType, r.Error)
}
if r.Status != statusSuccess {
return nil, fmt.Errorf("unkown status:%s, Expected success or error ", r.Status)
}
if r.Data.ResultType != rtVector {
return nil, fmt.Errorf("unkown restul type:%s. Expected vector", r.Data.ResultType)
}
return r.metrics()
}


@@ -0,0 +1,93 @@
package datasource
import (
"context"
"net/http"
"net/http/httptest"
"testing"
)
var (
ctx = context.Background()
basicAuthName = "foo"
basicAuthPass = "bar"
query = "vm_rows"
)
func TestVMSelectQuery(t *testing.T) {
mux := http.NewServeMux()
mux.HandleFunc("/", func(_ http.ResponseWriter, _ *http.Request) {
t.Errorf("should not be called")
})
c := -1
mux.HandleFunc("/api/v1/query", func(w http.ResponseWriter, r *http.Request) {
c++
if r.Method != http.MethodPost {
t.Errorf("expected POST method got %s", r.Method)
}
if name, pass, _ := r.BasicAuth(); name != basicAuthName || pass != basicAuthPass {
t.Errorf("expected %s:%s as basic auth got %s:%s", basicAuthName, basicAuthPass, name, pass)
}
if r.URL.Query().Get("query") != query {
t.Errorf("exptected %s in query param, got %s", query, r.URL.Query().Get("query"))
}
switch c {
case 0:
conn, _, _ := w.(http.Hijacker).Hijack()
_ = conn.Close()
case 1:
w.WriteHeader(500)
case 2:
w.Write([]byte("[]"))
case 3:
w.Write([]byte(`{"status":"error", "errorType":"type:", "error":"some error msg"}`))
case 4:
w.Write([]byte(`{"status":"unknown"}`))
case 5:
w.Write([]byte(`{"status":"success","data":{"resultType":"matrix"}}`))
case 6:
w.Write([]byte(`{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"vm_rows"},"value":[1583786142,"13763"]}]}}`))
}
})
srv := httptest.NewServer(mux)
defer srv.Close()
am := NewVMStorage(srv.URL, basicAuthName, basicAuthPass, srv.Client())
if _, err := am.Query(ctx, query); err == nil {
t.Fatalf("expected connection error got nil")
}
if _, err := am.Query(ctx, query); err == nil {
t.Fatalf("expected invalid response status error got nil")
}
if _, err := am.Query(ctx, query); err == nil {
t.Fatalf("expected response body error got nil")
}
if _, err := am.Query(ctx, query); err == nil {
t.Fatalf("expected error status got nil")
}
if _, err := am.Query(ctx, query); err == nil {
t.Fatalf("expected unkown status got nil")
}
if _, err := am.Query(ctx, query); err == nil {
t.Fatalf("expected non-vector resultType error got nil")
}
m, err := am.Query(ctx, query)
if err != nil {
t.Fatalf("unexpected %s", err)
}
if len(m) != 1 {
t.Fatalf("exptected 1 metric got %d in %+v", len(m), m)
}
expected := Metric{
Labels: []Label{{Value: "vm_rows", Name: "__name__"}},
Timestamp: 1583786142,
Value: 13763,
}
if m[0].Timestamp != expected.Timestamp &&
m[0].Value != expected.Value &&
m[0].Labels[0].Value != expected.Labels[0].Value &&
m[0].Labels[0].Name != expected.Labels[0].Name {
t.Fatalf("unexpected metric %+v want %+v", m[0], expected)
}
}


@@ -1,37 +1,64 @@
package main
import (
"context"
"flag"
"fmt"
"net"
"net/http"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/provider"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
)
var (
configPath = flag.String("config", "config.yaml", "Path to alert configuration file")
httpListenAddr = flag.String("httpListenAddr", ":8880", "Address to listen for http connections")
rulePath = flagutil.NewArray("rule", `Path to file with alert rules, accepts patterns.
Flag can be specified multiple times.
Examples:
-rule /path/to/file. Path to single file with alerting rules
-rule dir/*.yaml -rule /*.yaml. Paths to all yaml files in relative dir folder and absolute yaml file in a root.`)
validateAlertAnnotations = flag.Bool("rule.validateAnnotations", true, "Indicates to validate annotation templates")
httpListenAddr = flag.String("httpListenAddr", ":8880", "Address to listen for http connections")
datasourceURL = flag.String("datasource.url", "", "Victoria Metrics or VMSelect url. Required parameter. e.g. http://127.0.0.1:8428")
basicAuthUsername = flag.String("datasource.basicAuth.username", "", "Optional basic auth username to use for -datasource.url")
basicAuthPassword = flag.String("datasource.basicAuth.password", "", "Optional basic auth password to use for -datasource.url")
evaluationInterval = flag.Duration("evaluationInterval", 1*time.Minute, "How often to evaluate the rules. Default 1m")
providerURL = flag.String("provider.url", "", "Prometheus alertmanager url. Required parameter. e.g. http://127.0.0.1:9093")
)
func main() {
envflag.Parse()
buildinfo.Init()
logger.Init()
checkFlags()
ctx, cancel := context.WithCancel(context.Background())
logger.Infof("reading alert rules configuration file from %s", *configPath)
alertGroups, err := config.Parse(*configPath)
logger.Infof("reading alert rules configuration file from %s", strings.Join(*rulePath, ";"))
alertGroups, err := config.Parse(*rulePath, *validateAlertAnnotations)
if err != nil {
logger.Fatalf("Cannot parse configuration file %s", err)
logger.Fatalf("Cannot parse configuration file: %s", err)
}
addr := getWebServerAddr(*httpListenAddr, false)
w := &watchdog{
storage: datasource.NewVMStorage(*datasourceURL, *basicAuthUsername, *basicAuthPassword, &http.Client{}),
alertProvider: provider.NewAlertManager(*providerURL, func(group, name string) string {
return addr + fmt.Sprintf("/%s/%s/status", group, name)
}, &http.Client{}),
}
w := &watchdog{storage: &datasource.VMStorage{}}
for id := range alertGroups {
go func(group config.Group) {
w.run(group)
go func(group common.Group) {
w.run(ctx, group, *evaluationInterval)
}(alertGroups[id])
}
go func() {
@@ -44,17 +71,86 @@ func main() {
if err := httpserver.Stop(*httpListenAddr); err != nil {
logger.Fatalf("cannot stop the webservice: %s", err)
}
cancel()
w.stop()
}
type watchdog struct {
storage *datasource.VMStorage
storage *datasource.VMStorage
alertProvider provider.AlertProvider
}
func (w *watchdog) run(a config.Group) {
func (w *watchdog) run(ctx context.Context, a common.Group, evaluationInterval time.Duration) {
logger.Infof("watchdog for %s has been run", a.Name)
t := time.NewTicker(evaluationInterval)
var metrics []datasource.Metric
var err error
var alerts []common.Alert
defer t.Stop()
for {
select {
case <-t.C:
start := time.Now()
for _, r := range a.Rules {
if metrics, err = w.storage.Query(ctx, r.Expr); err != nil {
logger.Errorf("error reading metrics %s", err)
continue
}
// todo check for and calculate alert states
if len(metrics) < 1 {
continue
}
// todo define alert end time
alerts = common.AlertsFromMetrics(metrics, a.Name, r, start, time.Time{})
// todo save to storage
if err := w.alertProvider.Send(alerts); err != nil {
logger.Errorf("error sending alerts %s", err)
continue
}
// todo is alert still active/pending?
}
case <-ctx.Done():
logger.Infof("%s receive stop signal", a.Name)
return
}
}
}
func getWebServerAddr(httpListenAddr string, isSecure bool) string {
if strings.Index(httpListenAddr, ":") != 0 {
if isSecure {
return "https://" + httpListenAddr
}
return "http://" + httpListenAddr
}
addrs, err := net.InterfaceAddrs()
if err != nil {
panic("error getting the interface addresses ")
}
for _, a := range addrs {
if ipnet, ok := a.(*net.IPNet); ok && !ipnet.IP.IsLoopback() {
if ipnet.IP.To4() != nil {
return "http://" + ipnet.IP.String() + httpListenAddr
}
}
}
// no non-loopback address found; fall back to loopback
return "http://127.0.0.1" + httpListenAddr
}
func (w *watchdog) stop() {
panic("not implemented")
}
func checkFlags() {
if *providerURL == "" {
flag.PrintDefaults()
logger.Fatalf("provider.url is empty")
}
if *datasourceURL == "" {
flag.PrintDefaults()
logger.Fatalf("datasource.url is empty")
}
}
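// Example invocation wiring the flags above together (URLs and paths are illustrative, not from the source):
//
//   ./vmalert -rule="/etc/vmalert/rules/*.yaml" \
//     -datasource.url=http://localhost:8428 \
//     -provider.url=http://localhost:9093 \
//     -evaluationInterval=30s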


@@ -1,26 +1,34 @@
{% import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/common"
) %}
{% stripspace %}
{% func amRequest(alerts []common.Alert, generatorURL func(string, string) string) %}
[
{% for i, alert := range alerts %}
{
"startsAt":{%q= alert.Start.Format(time.RFC3339Nano) %},
"generatorURL": {%q= generatorURL(alert.Group, alert.Name) %},
{% if !alert.End.IsZero() %}
"endsAt":{%q= alert.End.Format(time.RFC3339Nano) %},
{% endif %}
"labels": {
"alertname":{%q= alert.Name %}
{% for _,v := range alert.Labels %}
,{%q= v.Name %}:{%q= v.Value %}
{% endfor %}
},
"annotations": {
{% code c := len(alert.Annotations) %}
{% for k,v := range alert.Annotations %}
{% code c = c-1 %}
{%q= k %}:{%q= v %}{% if c > 0 %},{% endif %}
{% endfor %}
}
}
{% if i != len(alerts)-1 %},{% endif %}
{% endfor %}
]
{% endfunc %}
{% endstripspace %}


@@ -6,7 +6,7 @@ package provider
//line app/vmalert/provider/alert_manager_request.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/common"
"time"
)
@@ -24,78 +24,108 @@ var (
)
//line app/vmalert/provider/alert_manager_request.qtpl:7
func streamamRequest(qw422016 *qt422016.Writer, alerts []common.Alert, generatorURL func(string, string) string) {
//line app/vmalert/provider/alert_manager_request.qtpl:7
qw422016.N().S(`[`)
//line app/vmalert/provider/alert_manager_request.qtpl:9
for i, alert := range alerts {
//line app/vmalert/provider/alert_manager_request.qtpl:9
qw422016.N().S(`{"startsAt":`)
//line app/vmalert/provider/alert_manager_request.qtpl:11
qw422016.N().Q(alert.Start.Format(time.RFC3339Nano))
//line app/vmalert/provider/alert_manager_request.qtpl:11
qw422016.N().S(`,"generatorURL":`)
//line app/vmalert/provider/alert_manager_request.qtpl:12
qw422016.N().Q(generatorURL(alert.Group, alert.Name))
//line app/vmalert/provider/alert_manager_request.qtpl:12
qw422016.N().S(`,`)
//line app/vmalert/provider/alert_manager_request.qtpl:13
if !alert.End.IsZero() {
//line app/vmalert/provider/alert_manager_request.qtpl:13
qw422016.N().S(`"endsAt":`)
//line app/vmalert/provider/alert_manager_request.qtpl:14
qw422016.N().Q(alert.End.Format(time.RFC3339Nano))
//line app/vmalert/provider/alert_manager_request.qtpl:14
qw422016.N().S(`,`)
//line app/vmalert/provider/alert_manager_request.qtpl:15
}
//line app/vmalert/provider/alert_manager_request.qtpl:15
qw422016.N().S(`"labels": {"alertname":`)
//line app/vmalert/provider/alert_manager_request.qtpl:17
qw422016.N().Q(alert.Name)
//line app/vmalert/provider/alert_manager_request.qtpl:18
for _, v := range alert.Labels {
//line app/vmalert/provider/alert_manager_request.qtpl:18
qw422016.N().S(`,`)
//line app/vmalert/provider/alert_manager_request.qtpl:19
qw422016.N().Q(v.Name)
//line app/vmalert/provider/alert_manager_request.qtpl:19
qw422016.N().S(`:`)
//line app/vmalert/provider/alert_manager_request.qtpl:19
qw422016.N().Q(v.Value)
//line app/vmalert/provider/alert_manager_request.qtpl:20
}
//line app/vmalert/provider/alert_manager_request.qtpl:20
qw422016.N().S(`},"annotations": {`)
//line app/vmalert/provider/alert_manager_request.qtpl:23
c := len(alert.Annotations)
//line app/vmalert/provider/alert_manager_request.qtpl:24
for k, v := range alert.Annotations {
//line app/vmalert/provider/alert_manager_request.qtpl:25
c = c - 1
//line app/vmalert/provider/alert_manager_request.qtpl:26
qw422016.N().Q(k)
//line app/vmalert/provider/alert_manager_request.qtpl:26
qw422016.N().S(`:`)
//line app/vmalert/provider/alert_manager_request.qtpl:26
qw422016.N().Q(v)
//line app/vmalert/provider/alert_manager_request.qtpl:26
if c > 0 {
//line app/vmalert/provider/alert_manager_request.qtpl:26
qw422016.N().S(`,`)
//line app/vmalert/provider/alert_manager_request.qtpl:26
}
//line app/vmalert/provider/alert_manager_request.qtpl:27
}
//line app/vmalert/provider/alert_manager_request.qtpl:27
qw422016.N().S(`}}`)
//line app/vmalert/provider/alert_manager_request.qtpl:30
if i != len(alerts)-1 {
//line app/vmalert/provider/alert_manager_request.qtpl:30
qw422016.N().S(`,`)
//line app/vmalert/provider/alert_manager_request.qtpl:30
}
//line app/vmalert/provider/alert_manager_request.qtpl:31
}
//line app/vmalert/provider/alert_manager_request.qtpl:31
qw422016.N().S(`]`)
//line app/vmalert/provider/alert_manager_request.qtpl:33
}
//line app/vmalert/provider/alert_manager_request.qtpl:33
func writeamRequest(qq422016 qtio422016.Writer, alerts []common.Alert, generatorURL func(string, string) string) {
//line app/vmalert/provider/alert_manager_request.qtpl:33
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmalert/provider/alert_manager_request.qtpl:33
streamamRequest(qw422016, alerts, generatorURL)
//line app/vmalert/provider/alert_manager_request.qtpl:33
qt422016.ReleaseWriter(qw422016)
//line app/vmalert/provider/alert_manager_request.qtpl:33
}
//line app/vmalert/provider/alert_manager_request.qtpl:33
func amRequest(alerts []common.Alert, generatorURL func(string, string) string) string {
//line app/vmalert/provider/alert_manager_request.qtpl:33
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmalert/provider/alert_manager_request.qtpl:33
writeamRequest(qb422016, alerts, generatorURL)
//line app/vmalert/provider/alert_manager_request.qtpl:33
qs422016 := string(qb422016.B)
//line app/vmalert/provider/alert_manager_request.qtpl:33
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmalert/provider/alert_manager_request.qtpl:33
return qs422016
//line app/vmalert/provider/alert_manager_request.qtpl:33
}


@@ -8,12 +8,17 @@ import (
"strings"
"sync"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)
const alertsPath = "/api/v2/alerts"
// AlertProvider is a common interface for alert manager providers
type AlertProvider interface {
Send(alerts []common.Alert) error
}
var pool = sync.Pool{New: func() interface{} {
return &bytes.Buffer{}
}}
@@ -26,7 +31,7 @@ type AlertManager struct {
}
// AlertURLGenerator returns the URL for a single alert by the given group and name
type AlertURLGenerator func(group, name string) string
// NewAlertManager is a constructor for AlertManager
func NewAlertManager(alertManagerURL string, fn AlertURLGenerator, c *http.Client) *AlertManager {
@@ -37,19 +42,12 @@ func NewAlertManager(alertManagerURL string, fn AlertURLGenerator, c *http.Clien
}
}
// Send an alert or resolve message
func (am *AlertManager) Send(alerts []common.Alert) error {
b := pool.Get().(*bytes.Buffer)
b.Reset()
defer pool.Put(b)
writeamRequest(b, alerts, am.argFunc)
resp, err := am.client.Post(am.alertURL, "application/json", b)
if err != nil {
return err


@@ -7,7 +7,7 @@ import (
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/common"
)
func TestAlertManager_Send(t *testing.T) {
@@ -42,7 +42,7 @@ func TestAlertManager_Send(t *testing.T) {
if len(a) != 1 {
t.Errorf("expected 1 alert in array got %d", len(a))
}
if a[0].GeneratorURL != "alert0" {
if a[0].GeneratorURL != "group0alert0" {
t.Errorf("exptected alert0 as generatorURL got %s", a[0].GeneratorURL)
}
if a[0].Labels["alertname"] != "alert0" {
@@ -58,20 +58,22 @@ func TestAlertManager_Send(t *testing.T) {
})
srv := httptest.NewServer(mux)
defer srv.Close()
am := NewAlertManager(srv.URL, func(group, name string) string {
return group + name
}, srv.Client())
if err := am.Send(&config.Alert{}); err == nil {
if err := am.Send([]common.Alert{{}, {}}); err == nil {
t.Error("expected connection error got nil")
}
if err := am.Send(&config.Alert{}); err == nil {
if err := am.Send([]common.Alert{}); err == nil {
t.Error("expected wrong http code error got nil")
}
if err := am.Send(&config.Alert{
Name: "alert0",
Start: time.Now().UTC(),
End: time.Now().UTC(),
}); err != nil {
if err := am.Send([]common.Alert{{
Group: "group0",
Name: "alert0",
Start: time.Now().UTC(),
End: time.Now().UTC(),
Annotations: map[string]string{"a": "b", "c": "d", "e": "f"},
}}); err != nil {
t.Errorf("unexpected error %s", err)
}
if c != 2 {


@@ -1,8 +0,0 @@
package provider
import "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
// AlertProvider is common interface for alert manager provider
type AlertProvider interface {
Send(rule config.Alert) error
}


@@ -0,0 +1 @@
package storage


@@ -3,6 +3,9 @@
vmbackup:
APP_NAME=vmbackup $(MAKE) app-local
vmbackup-race:
APP_NAME=vmbackup RACE=-race $(MAKE) app-local
vmbackup-prod:
APP_NAME=vmbackup $(MAKE) app-via-docker


@@ -1,7 +1,6 @@
ARG base_image
FROM $base_image
ENTRYPOINT ["/vmbackup-prod"]
ARG src_binary
COPY $src_binary ./vmbackup-prod


@@ -0,0 +1,44 @@
package csvimport
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="csvimport"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="csvimport"}`)
)
// InsertHandler processes /api/v1/import/csv requests.
func InsertHandler(req *http.Request) error {
return writeconcurrencylimiter.Do(func() error {
return parser.ParseStream(req, func(rows []parser.Row) error {
return insertRows(rows)
})
})
}
func insertRows(rows []parser.Row) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)
ctx.Reset(len(rows))
for i := range rows {
r := &rows[i]
ctx.Labels = ctx.Labels[:0]
ctx.AddLabel("", r.Metric)
for j := range r.Tags {
tag := &r.Tags[j]
ctx.AddLabel(tag.Key, tag.Value)
}
ctx.WriteDataPoint(nil, ctx.Labels, r.Timestamp, r.Value)
}
rowsInserted.Add(len(rows))
rowsPerInsert.Update(float64(len(rows)))
return ctx.FlushBufs()
}


@@ -6,6 +6,7 @@ import (
"net/http"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/graphite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/opentsdb"
@@ -100,6 +101,15 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
}
w.WriteHeader(http.StatusNoContent)
return true
case "/api/v1/import/csv":
csvimportRequests.Inc()
if err := csvimport.InsertHandler(r); err != nil {
csvimportErrors.Inc()
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "/write", "/api/v2/write":
influxWriteRequests.Inc()
if err := influx.InsertHandlerForHTTP(r); err != nil {
@@ -127,11 +137,14 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
}
var (
prometheusWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/write", protocol="prometheus"}`)
prometheusWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/write", protocol="prometheus"}`)
prometheusWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/write", protocol="promremotewrite"}`)
prometheusWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/write", protocol="promremotewrite"}`)
vmimportRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/import", protocol="vm"}`)
vmimportErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/import", protocol="vm"}`)
vmimportRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/import", protocol="vmimport"}`)
vmimportErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/import", protocol="vmimport"}`)
csvimportRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/import/csv", protocol="csvimport"}`)
csvimportErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/import/csv", protocol="csvimport"}`)
influxWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/write", protocol="influx"}`)
influxWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/write", protocol="influx"}`)


@@ -3,6 +3,9 @@
vmrestore:
APP_NAME=vmrestore $(MAKE) app-local
vmrestore-race:
APP_NAME=vmrestore RACE=-race $(MAKE) app-local
vmrestore-prod:
APP_NAME=vmrestore $(MAKE) app-via-docker


@@ -1,7 +1,6 @@
ARG base_image
FROM $base_image
ENTRYPOINT ["/vmrestore-prod"]
ARG src_binary
COPY $src_binary ./vmrestore-prod


@@ -576,6 +576,7 @@ func setupTfss(tagFilterss [][]storage.TagFilter) ([]*storage.TagFilters, error)
}
}
tfss = append(tfss, tfs.Finalize()...)
}
return tfss, nil
}


@@ -3,6 +3,7 @@ package prometheus
import (
"flag"
"fmt"
"io"
"math"
"net/http"
"runtime"
@@ -18,6 +19,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/metricsql"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson/fastfloat"
"github.com/valyala/quicktemplate"
)
@@ -129,11 +131,12 @@ func ExportHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
return err
}
format := r.FormValue("format")
maxRowsPerLine := int(fastfloat.ParseInt64BestEffort(r.FormValue("max_rows_per_line")))
deadline := getDeadlineForExport(r)
if start >= end {
end = start + defaultStep
}
if err := exportHandler(w, matches, start, end, format, maxRowsPerLine, deadline); err != nil {
return fmt.Errorf("error when exporting data for queries=%q on the time range (start=%d, end=%d): %s", matches, start, end, err)
}
exportDuration.UpdateDuration(startTime)
@@ -142,9 +145,37 @@ func ExportHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
var exportDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/export"}`)
func exportHandler(w http.ResponseWriter, matches []string, start, end int64, format string, maxRowsPerLine int, deadline netstorage.Deadline) error {
writeResponseFunc := WriteExportStdResponse
writeLineFunc := WriteExportJSONLine
if maxRowsPerLine > 0 {
writeLineFunc = func(w io.Writer, rs *netstorage.Result) {
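// Split rs.Values/rs.Timestamps into chunks of at most maxRowsPerLine rows and emit
// one JSON line per chunk; the original slices are restored below so rs stays reusable.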
valuesOrig := rs.Values
timestampsOrig := rs.Timestamps
values := valuesOrig
timestamps := timestampsOrig
for len(values) > 0 {
var valuesChunk []float64
var timestampsChunk []int64
if len(values) > maxRowsPerLine {
valuesChunk = values[:maxRowsPerLine]
timestampsChunk = timestamps[:maxRowsPerLine]
values = values[maxRowsPerLine:]
timestamps = timestamps[maxRowsPerLine:]
} else {
valuesChunk = values
timestampsChunk = timestamps
values = nil
timestamps = nil
}
rs.Values = valuesChunk
rs.Timestamps = timestampsChunk
WriteExportJSONLine(w, rs)
}
rs.Values = valuesOrig
rs.Timestamps = timestampsOrig
}
}
contentType := "application/stream+json"
if format == "prometheus" {
contentType = "text/plain"
@@ -545,8 +576,7 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
if err != nil {
return err
}
step, err := getDuration(r, "step", defaultStep)
if err != nil {
return err
}
@@ -559,6 +589,7 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
if len(query) > *maxQueryLen {
return fmt.Errorf("too long query; got %d bytes; mustn't exceed `-search.maxQueryLen=%d` bytes", len(query), *maxQueryLen)
}
queryOffset := getLatencyOffsetMilliseconds()
if !getBool(r, "nocache") && ct-start < queryOffset {
// Adjust start time only if `nocache` arg isn't set.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/241
@@ -576,7 +607,7 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
start -= offset
end := start
start = end - window
if err := exportHandler(w, []string{childQuery}, start, end, "promapi", deadline); err != nil {
if err := exportHandler(w, []string{childQuery}, start, end, "promapi", 0, deadline); err != nil {
return fmt.Errorf("error when exporting data for query=%q on the time range (start=%d, end=%d): %s", childQuery, start, end, err)
}
queryDuration.UpdateDuration(startTime)
@@ -809,7 +840,15 @@ func getTime(r *http.Request, argKey string, defaultValue int64) (int64, error)
case prometheusMaxTimeFormatted:
return maxTimeMsecs, nil
}
return 0, fmt.Errorf("cannot parse %q=%q: %s", argKey, argValue, err)
// Try parsing duration relative to the current time
d, err1 := time.ParseDuration(argValue)
if err1 != nil {
return 0, fmt.Errorf("cannot parse %q=%q: %s", argKey, argValue, err)
}
if d > 0 {
d = -d
}
t = time.Now().Add(d)
}
secs = float64(t.UnixNano()) / 1e9
}
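// Example: with the relative-time support above, any value accepted by time.ParseDuration
// now works for the time, start and end args ("5m" and "-5m" both mean 5 minutes before now):
//
//   curl 'http://localhost:8428/api/v1/query?query=up&time=-5m'   (address is illustrative)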
@@ -860,17 +899,17 @@ func getDuration(r *http.Request, argKey string, defaultValue int64) (int64, err
const maxDurationMsecs = 100 * 365 * 24 * 3600 * 1000
func getMaxLookback(r *http.Request) (int64, error) {
d := maxLookback.Milliseconds()
return getDuration(r, "max_lookback", d)
}
func getDeadlineForQuery(r *http.Request) netstorage.Deadline {
dMax := maxQueryDuration.Milliseconds()
return getDeadlineWithMaxDuration(r, dMax, "-search.maxQueryDuration")
}
func getDeadlineForExport(r *http.Request) netstorage.Deadline {
dMax := maxExportDuration.Milliseconds()
return getDeadlineWithMaxDuration(r, dMax, "-search.maxExportDuration")
}
@@ -913,7 +952,7 @@ func getTagFilterssFromMatches(matches []string) ([][]storage.TagFilter, error)
}
func getLatencyOffsetMilliseconds() int64 {
d := latencyOffset.Milliseconds()
if d <= 1000 {
d = 1000
}


@@ -1,5 +0,0 @@
package promql
import "unsafe"
const maxByteSliceLen = 1<<(31+9*(unsafe.Sizeof(int(0))/8)) - 1


@@ -974,6 +974,65 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`label_map(match)`, func(t *testing.T) {
t.Parallel()
q := `sort(label_map((
label_set(time(), "label", "v1"),
label_set(time()+100, "label", "v2"),
label_set(time()+200, "label", "v3"),
label_set(time()+300, "x", "y"),
label_set(time()+400, "label", "v4"),
), "label", "v1", "foo", "v2", "bar", "", "qwe", "v4", ""))`
r1 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1000, 1200, 1400, 1600, 1800, 2000},
Timestamps: timestampsExpected,
}
r1.MetricName.Tags = []storage.Tag{{
Key: []byte("label"),
Value: []byte("foo"),
}}
r2 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1100, 1300, 1500, 1700, 1900, 2100},
Timestamps: timestampsExpected,
}
r2.MetricName.Tags = []storage.Tag{{
Key: []byte("label"),
Value: []byte("bar"),
}}
r3 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1200, 1400, 1600, 1800, 2000, 2200},
Timestamps: timestampsExpected,
}
r3.MetricName.Tags = []storage.Tag{{
Key: []byte("label"),
Value: []byte("v3"),
}}
r4 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1300, 1500, 1700, 1900, 2100, 2300},
Timestamps: timestampsExpected,
}
r4.MetricName.Tags = []storage.Tag{
{
Key: []byte("label"),
Value: []byte("qwe"),
},
{
Key: []byte("x"),
Value: []byte("y"),
},
}
r5 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1400, 1600, 1800, 2000, 2200, 2400},
Timestamps: timestampsExpected,
}
resultExpected := []netstorage.Result{r1, r2, r3, r4, r5}
f(q, resultExpected)
})
t.Run(`label_copy(new_tag)`, func(t *testing.T) {
t.Parallel()
q := `label_copy(
@@ -5371,6 +5430,8 @@ func TestExecError(t *testing.T) {
f(`label_transform(1)`)
f(`label_set()`)
f(`label_set(1, "foo")`)
f(`label_map()`)
f(`label_map(1)`)
f(`label_del()`)
f(`label_keep()`)
f(`label_match()`)


@@ -1,6 +1,7 @@
package promql
import (
"flag"
"fmt"
"math"
"strings"
@@ -14,6 +15,10 @@ import (
"github.com/valyala/histogram"
)
var maxStalenessInterval = flag.Duration("search.maxStalenessInterval", 0, "The maximum interval for staleness calculations. "+
"By default it is automatically calculated from the median interval between samples. This flag can be useful for tuning "+
"Prometheus data model closer to Influx-style data model. See https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness for details")
var rollupFuncs = map[string]newRollupFunc{
// Standard rollup funcs from PromQL.
// See funcs accepting range-vector on https://prometheus.io/docs/prometheus/latest/querying/functions/ .
@@ -440,6 +445,11 @@ func (rc *rollupConfig) doInternal(dstValues []float64, tsm *timeseriesMap, valu
dstValues = decimal.ExtendFloat64sCapacity(dstValues, len(rc.Timestamps))
scrapeInterval := getScrapeInterval(timestamps)
if *maxStalenessInterval > 0 {
if si := maxStalenessInterval.Milliseconds(); scrapeInterval > si {
scrapeInterval = si
}
}
maxPrevInterval := getMaxPrevInterval(scrapeInterval)
if rc.LookbackDelta > 0 && maxPrevInterval > rc.LookbackDelta {
maxPrevInterval = rc.LookbackDelta


@@ -2,6 +2,7 @@ package promql
import (
"fmt"
"reflect"
"sort"
"strconv"
"sync"
@@ -168,7 +169,7 @@ func (ts *timeseries) marshalFastNoTimestamps(dst []byte) []byte {
// during marshalFastTimestamps.
var valuesBuf []byte
if len(ts.Values) > 0 {
valuesBuf = float64ToByteSlice(ts.Values)
}
dst = append(dst, valuesBuf...)
return dst
@@ -178,7 +179,7 @@ func marshalFastTimestamps(dst []byte, timestamps []int64) []byte {
dst = encoding.MarshalUint32(dst, uint32(len(timestamps)))
var timestampsBuf []byte
if len(timestamps) > 0 {
timestampsBuf = int64ToByteSlice(timestamps)
}
dst = append(dst, timestampsBuf...)
return dst
@@ -199,8 +200,7 @@ func unmarshalFastTimestamps(src []byte) ([]byte, []int64, error) {
if len(src) < bufSize {
return src, nil, fmt.Errorf("cannot unmarshal timestamps; got %d bytes; want at least %d bytes", len(src), bufSize)
}
timestamps := byteSliceToInt64(src[:bufSize])
src = src[bufSize:]
return src, timestamps, nil
@@ -229,12 +229,43 @@ func (ts *timeseries) unmarshalFastNoTimestamps(src []byte) ([]byte, error) {
if len(src) < bufSize {
return src, fmt.Errorf("cannot unmarshal values; got %d bytes; want at least %d bytes", len(src), bufSize)
}
ts.Values = byteSliceToFloat64(src[:bufSize])
return src[bufSize:], nil
}
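// The helpers below reinterpret slice memory in place via unsafe, without copying.
// The returned slice aliases its input, so the input must outlive all uses of the result.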
func float64ToByteSlice(a []float64) (b []byte) {
sh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
sh.Data = uintptr(unsafe.Pointer(&a[0]))
sh.Len = len(a) * int(unsafe.Sizeof(a[0]))
sh.Cap = sh.Len
return
}
func int64ToByteSlice(a []int64) (b []byte) {
sh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
sh.Data = uintptr(unsafe.Pointer(&a[0]))
sh.Len = len(a) * int(unsafe.Sizeof(a[0]))
sh.Cap = sh.Len
return
}
func byteSliceToInt64(b []byte) (a []int64) {
sh := (*reflect.SliceHeader)(unsafe.Pointer(&a))
sh.Data = uintptr(unsafe.Pointer(&b[0]))
sh.Len = len(b) / int(unsafe.Sizeof(a[0]))
sh.Cap = sh.Len
return
}
func byteSliceToFloat64(b []byte) (a []float64) {
sh := (*reflect.SliceHeader)(unsafe.Pointer(&a))
sh.Data = uintptr(unsafe.Pointer(&b[0]))
sh.Len = len(b) / int(unsafe.Sizeof(a[0]))
sh.Cap = sh.Len
return
}
// unmarshalMetricNameFast unmarshals mn from src, so mn members
// hold references to src.
//


@@ -59,6 +59,7 @@ var transformFuncs = map[string]transformFunc{
// New funcs
"label_set": transformLabelSet,
"label_map": transformLabelMap,
"label_del": transformLabelDel,
"label_keep": transformLabelKeep,
"label_copy": transformLabelCopy,
@@ -1026,6 +1027,38 @@ func transformLabelSet(tfa *transformFuncArg) ([]*timeseries, error) {
return rvs, nil
}
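// transformLabelMap implements label_map(q, label, srcValue1, dstValue1, ... srcValueN, dstValueN):
// it rewrites the given label values from src* to dst*; an empty resulting value removes the label.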
func transformLabelMap(tfa *transformFuncArg) ([]*timeseries, error) {
args := tfa.args
if len(args) < 2 {
return nil, fmt.Errorf(`not enough args; got %d; want at least %d`, len(args), 2)
}
label, err := getString(args[1], 1)
if err != nil {
return nil, fmt.Errorf("cannot read label name: %s", err)
}
srcValues, dstValues, err := getStringPairs(args[2:])
if err != nil {
return nil, err
}
m := make(map[string]string, len(srcValues))
for i, srcValue := range srcValues {
m[srcValue] = dstValues[i]
}
rvs := args[0]
for _, ts := range rvs {
mn := &ts.MetricName
dstValue := getDstValue(mn, label)
value, ok := m[string(*dstValue)]
if ok {
*dstValue = append((*dstValue)[:0], value...)
}
if len(*dstValue) == 0 {
mn.RemoveTag(label)
}
}
return rvs, nil
}
func transformLabelCopy(tfa *transformFuncArg) ([]*timeseries, error) {
return transformLabelCopyExt(tfa, false)
}


@@ -162,9 +162,8 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
w.Header().Set("Content-Type", "application/json")
snapshotPath, err := Storage.CreateSnapshot()
if err != nil {
msg := fmt.Sprintf("cannot create snapshot: %s", err)
logger.Errorf("%s", msg)
fmt.Fprintf(w, `{"status":"error","msg":%q}`, msg)
err = fmt.Errorf("cannot create snapshot: %s", err)
jsonResponseError(w, err)
return true
}
if prometheusCompatibleResponse {
@@ -177,9 +176,8 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
w.Header().Set("Content-Type", "application/json")
snapshots, err := Storage.ListSnapshots()
if err != nil {
msg := fmt.Sprintf("cannot list snapshots: %s", err)
logger.Errorf("%s", msg)
fmt.Fprintf(w, `{"status":"error","msg":%q}`, msg)
err = fmt.Errorf("cannot list snapshots: %s", err)
jsonResponseError(w, err)
return true
}
fmt.Fprintf(w, `{"status":"ok","snapshots":[`)
@@ -195,9 +193,8 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
w.Header().Set("Content-Type", "application/json")
snapshotName := r.FormValue("snapshot")
if err := Storage.DeleteSnapshot(snapshotName); err != nil {
msg := fmt.Sprintf("cannot delete snapshot %q: %s", snapshotName, err)
logger.Errorf("%s", msg)
fmt.Fprintf(w, `{"status":"error","msg":%q}`, msg)
err = fmt.Errorf("cannot delete snapshot %q: %s", snapshotName, err)
jsonResponseError(w, err)
return true
}
fmt.Fprintf(w, `{"status":"ok"}`)
@@ -206,16 +203,14 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
w.Header().Set("Content-Type", "application/json")
snapshots, err := Storage.ListSnapshots()
if err != nil {
msg := fmt.Sprintf("cannot list snapshots: %s", err)
logger.Errorf("%s", msg)
fmt.Fprintf(w, `{"status":"error","msg":%q}`, msg)
err = fmt.Errorf("cannot list snapshots: %s", err)
jsonResponseError(w, err)
return true
}
for _, snapshotName := range snapshots {
if err := Storage.DeleteSnapshot(snapshotName); err != nil {
msg := fmt.Sprintf("cannot delete snapshot %q: %s", snapshotName, err)
logger.Errorf("%s", msg)
fmt.Fprintf(w, `{"status":"error","msg":%q}`, msg)
err = fmt.Errorf("cannot delete snapshot %q: %s", snapshotName, err)
jsonResponseError(w, err)
return true
}
}
@@ -567,3 +562,9 @@ func registerStorageMetrics() {
return float64(m().MetricNameCacheCollisions)
})
}
func jsonResponseError(w http.ResponseWriter, err error) {
logger.Errorf("%s", err)
w.WriteHeader(http.StatusInternalServerError)
fmt.Fprintf(w, `{"status":"error","msg":%q}`, err)
}


@@ -14,7 +14,7 @@
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "6.5.2"
"version": "6.7.1"
},
{
"type": "panel",
@@ -60,12 +60,12 @@
}
]
},
"description": "Overview for single node VictoriaMetrics v1.32.8 or higher",
"description": "Overview for single node VictoriaMetrics v1.34.0 or higher",
"editable": true,
"gnetId": 10229,
"graphTooltip": 0,
"id": null,
"iteration": 1580634216692,
"iteration": 1585340890561,
"links": [
{
"icon": "doc",
@@ -127,7 +127,6 @@
}
],
"mode": "html",
"options": {},
"timeFrom": null,
"timeShift": null,
"title": "Version",
@@ -146,7 +145,6 @@
},
"id": 4,
"links": [],
"options": {},
"pageSize": null,
"scroll": true,
"showHeader": true,
@@ -157,6 +155,7 @@
"styles": [
{
"alias": "",
"align": "auto",
"colorMode": null,
"colors": [
"rgba(245, 54, 54, 0.9)",
@@ -171,6 +170,7 @@
},
{
"alias": "",
"align": "auto",
"colorMode": null,
"colors": [
"rgba(245, 54, 54, 0.9)",
@@ -187,6 +187,7 @@
},
{
"alias": "",
"align": "auto",
"colorMode": null,
"colors": [
"rgba(245, 54, 54, 0.9)",
@@ -196,7 +197,7 @@
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"mappingType": 1,
"pattern": ".*",
"pattern": "/.*/",
"thresholds": [],
"type": "hidden",
"unit": "short"
@@ -259,7 +260,6 @@
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"options": {},
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
@@ -344,7 +344,6 @@
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"options": {},
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
@@ -428,7 +427,6 @@
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"options": {},
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
@@ -2878,7 +2876,7 @@
}
],
"refresh": "30s",
"schemaVersion": 21,
"schemaVersion": 22,
"style": "dark",
"tags": [],
"templating": {
@@ -2890,6 +2888,7 @@
"definition": "label_values(vm_app_version, job)",
"hide": 0,
"includeAll": false,
"index": -1,
"label": null,
"multi": false,
"name": "job",
@@ -2912,6 +2911,7 @@
"definition": "label_values(vm_app_version{job=\"$job\"}, version)",
"hide": 2,
"includeAll": false,
"index": -1,
"label": null,
"multi": false,
"name": "version",
@@ -2961,5 +2961,8 @@
"timezone": "",
"title": "VictoriaMetrics",
"uid": "wNf0q_kZk",
"version": 2
"variables": {
"list": []
},
"version": 1
}


@@ -1,18 +1,18 @@
# All these commands must run from repository root.
DOCKER_NAMESPACE := docker.io/victoriametrics
BUILDER_IMAGE := local/builder:go1.14.1
BASE_IMAGE := local/base:1.1.0
package-base:
(docker image ls --format '{{.Repository}}:{{.Tag}}' | grep -q '$(BASE_IMAGE)$$') \
|| docker build -t $(BASE_IMAGE) deployment/docker/base
package-builder:
(docker image ls --format '{{.Repository}}:{{.Tag}}' | grep -q '$(BUILDER_IMAGE)$$') \
|| docker build -t $(BUILDER_IMAGE) deployment/docker/builder
app-via-docker: package-base package-builder
mkdir -p gocache-for-docker
docker run --rm \
--user $(shell id -u):$(shell id -g) \
@@ -31,7 +31,7 @@ package-via-docker:
$(MAKE) app-via-docker && \
docker build \
--build-arg src_binary=$(APP_NAME)$(APP_SUFFIX)-prod \
--build-arg base_image=$(BASE_IMAGE) \
-t $(DOCKER_NAMESPACE)/$(APP_NAME):$(PKG_TAG)$(APP_SUFFIX)$(RACE) \
-f app/$(APP_NAME)/deployment/Dockerfile bin)


@@ -0,0 +1,8 @@
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
FROM alpine:3.10 as base
RUN apk --update --no-cache add ca-certificates
FROM scratch
COPY --from=base /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt


@@ -0,0 +1,2 @@
root:x:0:root
victoriametrics:x:1000:victoriametrics


@@ -0,0 +1,2 @@
root:x:0:0:root:/root:/bin/ash
victoriametrics:x:1000:1000::/:


@@ -1,2 +1,2 @@
FROM golang:1.14.1
STOPSIGNAL SIGINT


@@ -1,3 +0,0 @@
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
FROM alpine:3.10 as certs
RUN apk --update add ca-certificates


@@ -2,7 +2,7 @@ version: '3.5'
services:
prometheus:
container_name: prometheus
image: prom/prometheus:v2.17.1
depends_on:
- "victoriametrics"
ports:
@@ -35,7 +35,7 @@ services:
restart: always
grafana:
container_name: grafana
image: grafana/grafana:6.7.1
entrypoint: >
/bin/sh -c "
cd /var/lib/grafana &&


@@ -21,3 +21,5 @@
* [Improving histogram usability for Prometheus and Grafana](https://medium.com/@valyala/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350)
* [Prometheus storage: tech terms for humans](https://medium.com/@valyala/prometheus-storage-technical-terms-for-humans-4ab4de6c3d48)
* [Billy: how VictoriaMetrics deals with more than 500 billion rows](https://medium.com/@valyala/billy-how-victoriametrics-deals-with-more-than-500-billion-rows-e82ff8f725da)
* [Better Prometheus rate() function with VictoriaMetrics](https://www.percona.com/blog/2020/02/28/better-prometheus-rate-function-with-victoriametrics/)
* [Disk usage: VictoriaMetrics vs Prometheus](https://stas.starikevich.com/posts/disk-usage-for-vm-versus-prometheus/)


@@ -68,6 +68,38 @@ Numbers:
> We like configuration simplicity and zero maintenance for VictoriaMetrics - once installed and forgot about it. It works out of the box without any issues.
### Synthesio
[Synthesio](https://www.synthesio.com/) is the leading social intelligence tool for social media monitoring & social analytics.
> We fully migrated from [Metrictank](https://grafana.com/oss/metrictank/) to Victoria Metrics
Numbers:
- Single node
- Active time series - 5 Million
- Datapoints: 1.25 Trillion
- Ingestion rate - 550k datapoints per second
- Disk usage - 150gb
- Index size - 3gb
- Query duration 99th percentile - 147ms
- Churn rate - 100 new time series per hour
### MHI Vestas Offshore Wind
The mission of [MHI Vestas Offshore Wind](http://www.mhivestasoffshore.com) is to co-develop offshore wind as an economically viable and sustainable energy resource to benefit future generations.
MHI Vestas Offshore Wind is using VictoriaMetrics to ingest and visualize sensor data from offshore wind turbines. The very efficient storage and the ability to backfill were key in choosing VictoriaMetrics. MHI Vestas Offshore Wind is running the cluster version of VictoriaMetrics on Kubernetes, using the Helm charts for deployment, in order to scale up capacity as the solution is rolled out.
Numbers with current limited roll out:
- Active time series: 270K
- Ingestion rate: 70K/sec
- Total number of datapoints: 850 billion
- Data size on disk: 800 GiB
- Retention time: 3 years
### Dreamteam
[Dreamteam](https://dreamteam.gg/) successfully uses single-node VictoriaMetrics in multiple environments.
@@ -82,3 +114,48 @@ Numbers:
VictoriaMetrics in production environment runs on 2 M5 EC2 instances in "HA" mode, managed by Terraform and Ansible TF module.
2 Prometheus instances are writing to both VMs, with 2 [Promxy](https://github.com/jacksontj/promxy) replicas
as load balancer for reads.
### Brandwatch
[Brandwatch](https://www.brandwatch.com/) is the world's pioneering digital consumer intelligence suite,
helping over 2,000 of the world's most admired brands and agencies to make insightful, data-driven business decisions.
The engineering department at Brandwatch has been using InfluxDB for storing application metrics for many years.
When End-of-Life of InfluxDB version 1.x was announced, we decided to re-evaluate our whole metrics collection and storage stack.
Main goals for the new metrics stack were:
- improved performance
- lower maintenance
- support for native clustering in open source version
- the less the metrics shipment had to change, the better
- achieving longer data retention would be great but not critical
We initially looked at CrateDB and TimescaleDB which both turned out to have limitations or requirements in the open source versions
that made them unfit for our use case. Prometheus was also considered but push vs. pull metrics was a big change we did not want
to include in the already significant change.
Once we found VictoriaMetrics it solved the following problems:
- it is very lightweight and we can now run virtual machines instead of dedicated hardware machines for metrics storage
- very short startup time and any possible gaps in data can easily be filled in by using Promxy
- we could continue using Telegraf as our metrics agent and ship identical metrics to both InfluxDB and VictoriaMetrics during a migration period (migration just about to start)
- compression is really good so we can store more metrics and we can spin up new VictoriaMetrics instances
for new data and keep read-only nodes with older data if we need to extend our retention period further
than single virtual machine disks allow and we can aggregate all the data from VictoriaMetrics with Promxy
High availability is done the same way we did with InfluxDB, by running parallel single nodes of VictoriaMetrics.
Numbers:
- active time series: up to 25 million
- ingestion rate: ~300 000
- total number of datapoints: 380 billion and growing
- total number of entries in inverted index: 575 million and growing
- daily time series churn rate: ~550 000
- data size on disk: ~660GB and growing
- index size on disk: ~9.3GB and growing
- average datapoint size on disk: ~1.75 bytes
Query rates are insignificant as we have concentrated on data ingestion so far.
Anders Bomberg, Monitoring and Infrastructure Team Lead, brandwatch.com


@@ -163,14 +163,7 @@ or via [Prometheus datasource in Grafana](http://docs.grafana.org/features/datas
### Does VictoriaMetrics deduplicate data from Prometheus instances scraping the same targets (aka `HA pairs`)?
Yes. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#deduplication) for details.
### Where is the source code of VictoriaMetrics?


@@ -46,6 +46,7 @@ This functionality can be tried at [an editable Grafana dashboard](http://play-g
- Functions for label manipulation:
- `alias(q, name)` for setting metric name across all the time series `q`.
- `label_set(q, label1, value1, ... labelN, valueN)` for setting the given values for the given labels on `q`.
- `label_map(q, label, srcValue1, dstValue1, ... srcValueN, dstValueN)` for mapping `label` values from `src*` to `dst*`. See the example after this list.
- `label_del(q, label1, ... labelN)` for deleting the given labels from `q`.
- `label_keep(q, label1, ... labelN)` for deleting all the labels except the given labels from `q`.
- `label_copy(q, src_label1, dst_label1, ... src_labelN, dst_labelN)` for copying label values from `src_*` to `dst_*`.
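For instance, a sketch of `label_map` usage (the `up` metric and the instance values are illustrative, not from the docs):

```
label_map(up, "instance", "host1:9100", "node-1", "host2:9100", "node-2")
```

This rewrites the `instance` label value `host1:9100` to `node-1` and `host2:9100` to `node-2`; unmatched values are left untouched.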


@@ -13,7 +13,11 @@ Cluster version is available [here](https://github.com/VictoriaMetrics/VictoriaM
* [COLOPL](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#colopl)
* [Wix.com](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#wixcom)
* [Wedos.com](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#wedoscom)
* [Synthesio](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#synthesio)
* [MHI Vestas Offshore Wind](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#mhi-vestas-offshore-wind)
* [Dreamteam](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#dreamteam)
* [Brandwatch](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#brandwatch)
## Prominent features
@@ -50,9 +54,11 @@ Cluster version is available [here](https://github.com/VictoriaMetrics/VictoriaM
if `-graphiteListenAddr` is set.
* [OpenTSDB put message](#sending-data-via-telnet-put-protocol) if `-opentsdbListenAddr` is set.
* [HTTP OpenTSDB /api/put requests](#sending-opentsdb-data-via-http-apiput-requests) if `-opentsdbHTTPListenAddr` is set.
* [/api/v1/import](#how-to-import-time-series-data).
* [Arbitrary CSV data](#how-to-import-csv-data).
* Ideally works with big amounts of time series data from Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data and various Enterprise workloads.
* Has open source [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
* See also technical [Articles about VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/Articles).
## Operation
@@ -412,6 +418,55 @@ The `/api/v1/export` endpoint should return the following response:
{"metric":{"__name__":"x.y.z","t1":"v1","t2":"v2"},"values":[45.34],"timestamps":[1566464763000]}
```
### How to import CSV data
Arbitrary CSV data can be imported via `/api/v1/import/csv`. The CSV data is imported according to the provided `format` query arg.
The `format` query arg must contain a comma-separated list of parsing rules for CSV fields. Each rule consists of three parts delimited by a colon:
```
<column_pos>:<type>:<context>
```
* `<column_pos>` is the position of the CSV column (field). Column numbering starts from 1. The order of parsing rules may be arbitrary.
* `<type>` describes the column type. Supported types are:
* `metric` - the corresponding CSV column at `<column_pos>` contains metric value. The metric name is read from the `<context>`.
CSV line must have at least a single metric field.
* `label` - the corresponding CSV column at `<column_pos>` contains label value. The label name is read from the `<context>`.
CSV line may have an arbitrary number of label fields. All these fields are attached to all the configured metrics.
* `time` - the corresponding CSV column at `<column_pos>` contains metric time. CSV line may contain either one or zero columns with time.
If CSV line has no time, then the current time is used. The time is applied to all the configured metrics.
The format of the time is configured via `<context>`. Supported time formats are:
* `unix_s` - unix timestamp in seconds.
* `unix_ms` - unix timestamp in milliseconds.
* `unix_ns` - unix timestamp in nanoseconds. Note that VictoriaMetrics rounds the timestamp to milliseconds.
* `rfc3339` - timestamp in [RFC3339](https://tools.ietf.org/html/rfc3339) format, i.e. `2006-01-02T15:04:05Z`.
* `custom:<layout>` - custom layout for the timestamp. The `<layout>` may contain arbitrary time layout according to [time.Parse rules in Go](https://golang.org/pkg/time/#Parse).
Each request to `/api/v1/import/csv` can contain an arbitrary number of CSV lines.
Example for importing CSV data via `/api/v1/import/csv`:
```bash
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:
```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```
The following response should be returned:
```bash
{"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
{"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
{"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
{"metric":{"__name__":"ask","market":"NYSE","ticker":"GOOG"},"values":[1.23],"timestamps":[1583865146495]}
```
### Prometheus querying API usage
VictoriaMetrics supports the following handlers from [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/):
@@ -566,6 +621,9 @@ Each JSON line would contain data for a single time series. An example output:
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
Optional `max_rows_per_line` arg may be added to the request in order to limit the maximum number of rows exported per each JSON line.
By default each JSON line contains all the rows for a single time series.
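For example, the following request (assuming the local instance and the CSV data imported above) would split long series into JSON lines of at most 1000 rows each:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}' -d 'max_rows_per_line=1000'
```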
Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth when exporting big amounts
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:
@@ -587,6 +645,7 @@ Time series data can be imported via any supported ingestion protocol:
* [OpenTSDB telnet put protocol](#sending-data-via-telnet-put-protocol)
* [OpenTSDB http /api/put](#sending-opentsdb-data-via-http-apiput-requests)
* `/api/v1/import` http POST handler, which accepts data from [/api/v1/export](#how-to-export-time-series).
* `/api/v1/import/csv` http POST handler, which accepts CSV data. See [these docs](#how-to-import-csv-data) for details.
The most efficient protocol for importing data into VictoriaMetrics is `/api/v1/import`. Example for importing data obtained via `/api/v1/export`:
@@ -790,6 +849,7 @@ Alternatively they can be self-scraped by setting `-selfScrapeInterval` command-
For example, `-selfScrapeInterval=10s` would enable self-scraping of `/metrics` page with 10 seconds interval.
There are official Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/dashboards/10229) and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176).
There is also an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831).
The most interesting metrics are:

docs/robots.txt Normal file

@@ -0,0 +1,2 @@
user-agent: *
allow: /


@@ -1,7 +1,8 @@
## vmagent
`vmagent` is a tiny but brave agent, which helps you collect metrics from various sources
and store them in [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics)
or any other Prometheus-compatible storage system that supports the `remote_write` protocol.
<img alt="vmagent" src="vmagent.png">
@@ -18,17 +19,18 @@ to `vmagent` (like the ability to push metrics instead of pulling them). We did
* Can be used as drop-in replacement for Prometheus for scraping targets such as [node_exporter](https://github.com/prometheus/node_exporter).
See [Quick Start](#quick-start) for details.
* Can add, remove and modify labels (aka tags) via Prometheus relabeling. Can filter data before sending it to remote storage. See [these docs](#relabeling) for details.
* Accepts data via all the ingestion protocols supported by VictoriaMetrics:
* Influx line protocol via `http://<vmagent>:8429/write`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf).
* Graphite plaintext protocol if `-graphiteListenAddr` command-line flag is set. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-graphite-compatible-agents-such-as-statsd).
* OpenTSDB telnet and http protocols if `-opentsdbListenAddr` command-line flag is set. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-opentsdb-compatible-agents).
* Prometheus remote write protocol via `http://<vmagent>:8429/api/v1/write`.
* JSON lines import protocol via `http://<vmagent>:8429/api/v1/import`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-time-series-data).
* Arbitrary CSV data via `http://<vmagent>:8429/api/v1/import/csv`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-csv-data).
* Can replicate collected metrics simultaneously to multiple remote storage systems.
* Works in environments with unstable connections to remote storage. If the remote storage is unavailable, the collected metrics
are buffered at `-remoteWrite.tmpDataPath`. The buffered metrics are sent to remote storage as soon as connection
to remote storage is recovered. The maximum disk usage for the buffer can be limited with `-remoteWrite.maxDiskUsagePerURL`.
* Uses lower amounts of RAM, CPU, disk IO and network bandwidth compared to Prometheus.
@@ -53,14 +55,67 @@ If you need collecting only Influx data, then the following command line would b
/path/to/vmagent -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write
```
Then send Influx data to `http://vmagent-host:8429`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for more details.
`vmagent` is also available in [docker images](https://hub.docker.com/r/victoriametrics/vmagent/).
Pass `-help` to `vmagent` in order to see the full list of supported command-line flags with their descriptions.
### Use cases
#### IoT and Edge monitoring
`vmagent` can run and collect metrics in IoT and industrial networks with unreliable or scheduled connections to the remote storage.
It buffers the collected data in local files until the connection to remote storage becomes available and then sends the buffered
data to the remote storage. It re-tries sending the data to remote storage on any errors.
The maximum buffer size can be limited with `-remoteWrite.maxDiskUsagePerURL`.
`vmagent` works on various architectures from the IoT world - 32-bit arm, 64-bit arm, ppc64, 386, amd64.
See [the corresponding Makefile rules](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmagent/Makefile) for details.
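A minimal sketch of such an edge setup (the paths and the byte limit are illustrative; check `vmagent -help` for the exact flag value formats):

```bash
/path/to/vmagent -promscrape.config=/etc/vmagent/prometheus.yml \
  -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write \
  -remoteWrite.tmpDataPath=/var/lib/vmagent-remotewrite-data \
  -remoteWrite.maxDiskUsagePerURL=1073741824
```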
#### Drop-in replacement for Prometheus
If you use Prometheus only for scraping metrics from various targets and forwarding these metrics to remote storage,
then `vmagent` can replace such a Prometheus setup. Usually `vmagent` requires lower amounts of RAM, CPU and network bandwidth compared to Prometheus for such a setup.
See [these docs](#how-to-collect-metrics-in-prometheus-format) for details.
#### Replication and high availability
`vmagent` replicates the collected metrics among multiple remote storage instances configured via `-remoteWrite.url` args.
If a single remote storage instance temporarily goes out of service, then the collected data remains available in the other remote storage instances.
`vmagent` buffers the collected data in files at `-remoteWrite.tmpDataPath` until the remote storage becomes available again.
Then it sends the buffered data to the remote storage in order to prevent data gaps in the remote storage.
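For example, the following sketch replicates all the collected data to two storage instances (the addresses are illustrative):

```bash
/path/to/vmagent -promscrape.config=/etc/vmagent/prometheus.yml \
  -remoteWrite.url=http://victoria-metrics-a:8428/api/v1/write \
  -remoteWrite.url=http://victoria-metrics-b:8428/api/v1/write
```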
#### Relabeling and filtering
`vmagent` can add, remove or update labels on the collected data before sending it to remote storage. Additionally,
it can remove unneeded samples via Prometheus-like relabeling before sending the collected data to remote storage.
See [these docs](#relabeling) for details.
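For instance, a minimal `-remoteWrite.relabelConfig` file that drops all the metrics with a hypothetical `debug_` name prefix could look like this:

```
- action: drop
  source_labels: [__name__]
  regex: "debug_.*"
```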
#### Splitting data streams among multiple systems
`vmagent` supports splitting the collected data among multiple destinations with the help of `-remoteWrite.urlRelabelConfig`,
which is applied independently for each configured `-remoteWrite.url` destination. For instance, it is possible to replicate or split
data among long-term remote storage, short-term remote storage and real-time analytical system [built on top of Kafka](https://github.com/Telefonica/prometheus-kafka-adapter).
Note that each destination can receive its own subset of the collected data thanks to per-destination relabeling via `-remoteWrite.urlRelabelConfig`.
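A hypothetical split, where each `-remoteWrite.urlRelabelConfig` is applied to the `-remoteWrite.url` at the same position (file names and hostnames are placeholders):

```
/path/to/vmagent -remoteWrite.url=http://long-term-storage:8428/api/v1/write -remoteWrite.urlRelabelConfig=long-term.yml -remoteWrite.url=http://kafka-adapter:8080/receive -remoteWrite.urlRelabelConfig=kafka.yml
```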
#### Prometheus remote_write proxy
`vmagent` may be used as a proxy for Prometheus data sent via the Prometheus `remote_write` protocol. It can accept data via the `remote_write` API
at the `/api/v1/write` endpoint, apply relabeling and filtering and then proxy it to other `remote_write` systems.
`vmagent` can be configured to accept TLS-encrypted (HTTPS) `remote_write` requests via the `-tls*` command-line flags.
Additionally, Basic Auth can be enabled for the incoming `remote_write` requests with `-httpAuth.*` command-line flags.
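For example, TLS and Basic Auth for the incoming requests could be enabled like this (certificate paths and credentials are placeholders):

```
/path/to/vmagent -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write -tls -tlsCertFile=/path/to/cert.pem -tlsKeyFile=/path/to/key.pem -httpAuth.username=scraper -httpAuth.password=secret
```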
### How to collect metrics in Prometheus format
Pass the path to `prometheus.yml` to `-promscrape.config` command-line flag. `vmagent` takes into account the following
sections from [Prometheus config file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/):
@@ -69,11 +124,7 @@ sections from [Prometheus config file](https://prometheus.io/docs/prometheus/lat
* `scrape_configs`
All the other sections are ignored, including [remote_write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) section.
Use `-remoteWrite.*` command-line flags instead for configuring remote write settings.
The following scrape types in [scrape_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) section are supported:
@@ -114,7 +165,8 @@ The relabeling can be defined in the following places:
* At `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to targets when parsing the file during `vmagent` startup
or during config reload after sending `SIGHUP` signal to `vmagent` via `kill -HUP`.
* At `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file. This relabeling is applied to metrics after each scrape for the configured targets.
* At `-remoteWrite.relabelConfig` file. This relabeling is applied to all the collected metrics before sending them to remote storage.
* At `-remoteWrite.urlRelabelConfig` files. This relabeling is applied to metrics before sending them to the corresponding `-remoteWrite.url`.
Read more about relabeling in the following articles:
@@ -142,8 +194,8 @@ either via `vmagent` itself or via Prometheus, so the exported metrics could be
and `vmagent_remotewrite_pending_data_bytes` metric exported by `vmagent` at `/metrics` page constantly grows.
* `vmagent` buffers scraped data at `-remoteWrite.tmpDataPath` directory until it is sent to `-remoteWrite.url`.
The directory can grow big when remote storage is unavailable for extended periods of time and if `-remoteWrite.maxDiskUsagePerURL` isn't set.
If you don't want to send all the data from the directory to remote storage, just stop `vmagent` and delete the directory.
### How to build from sources

docs/vmagent.png (new binary file, 69 KiB; not shown)

go.mod (20 lines changed)

@@ -1,25 +1,27 @@
module github.com/VictoriaMetrics/VictoriaMetrics
require (
cloud.google.com/go v0.55.0 // indirect
cloud.google.com/go/storage v1.6.0
github.com/VictoriaMetrics/fastcache v1.5.7
github.com/VictoriaMetrics/metrics v1.11.2
github.com/aws/aws-sdk-go v1.29.34
github.com/cespare/xxhash/v2 v2.1.1
github.com/golang/snappy v0.0.1
github.com/jmespath/go-jmespath v0.3.0 // indirect
github.com/klauspost/compress v1.10.3
github.com/valyala/fasthttp v1.9.0
github.com/valyala/fastjson v1.5.0
github.com/valyala/fastrand v1.0.0
github.com/valyala/gozstd v1.6.4
github.com/valyala/histogram v1.0.1
github.com/valyala/quicktemplate v1.4.1
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e // indirect
golang.org/x/sys v0.0.0-20200327173247-9dae0f8f5775
golang.org/x/tools v0.0.0-20200330040139-fa3cc9eebcfe // indirect
google.golang.org/api v0.20.0
google.golang.org/genproto v0.0.0-20200330113809-af700f360a68 // indirect
gopkg.in/yaml.v2 v2.2.8
)
go 1.13

go.sum (55 lines changed)

@@ -9,6 +9,8 @@ cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6T
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0 h1:MZQCQQaRwOrAcuKjiHWHrgKykt4fZyuwF2dtiG3fGW8=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.55.0 h1:eoz/lYxKSL4CNAiaUJ0ZfD1J3bfMYbU5B3rwM1C1EIU=
cloud.google.com/go v0.55.0/go.mod h1:ZHmoY+/lIMNkN2+fBmuTiqZ4inFhvQad8ft7MT8IV5Y=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0 h1:sAbMqjY1PEQKZBWfbu6Y6bsupJ9c4QdHnzg/VvYTLcE=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
@@ -34,12 +36,12 @@ github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/VictoriaMetrics/fastcache v1.5.7 h1:4y6y0G8PRzszQUYIQHHssv/jgPHAb5qQuuDNdCbyAgw=
github.com/VictoriaMetrics/fastcache v1.5.7/go.mod h1:ptDBkNMQI4RtmVo8VS/XwRY6RoTu1dAWCbrk+6WsEM8=
github.com/VictoriaMetrics/metrics v1.11.2 h1:t/ceLP6SvagUqypCKU7cI7+tQn54+TIV/tGoxihHvx8=
github.com/VictoriaMetrics/metrics v1.11.2/go.mod h1:LU2j9qq7xqZYXz8tF3/RQnB2z2MbZms5TDiIg9/NHiQ=
github.com/allegro/bigcache v1.2.1-0.20190218064605-e24eb225f156 h1:eMwmnE/GDgah4HI848JfFxHt+iPb26b4zyfspmqY0/8=
github.com/allegro/bigcache v1.2.1-0.20190218064605-e24eb225f156/go.mod h1:Cb/ax3seSYIx7SuZdm2G2xzfwmv3TPSk2ucNfQESPXM=
github.com/aws/aws-sdk-go v1.29.34 h1:yrzwfDaZFe9oT4AmQeNNunSQA7c0m2chz0B43+bJ1ok=
github.com/aws/aws-sdk-go v1.29.34/go.mod h1:1KvfttTE3SPKMpo8g2c6jL3ZKfXtFvKscTgahTma5Xg=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@@ -47,10 +49,13 @@ github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWR
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
@@ -67,12 +72,15 @@ github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfb
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3 h1:gyjaxf+svBWX08ZjK86iN9geUJF0H6gp2IRKX6Nf6/I=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5 h1:F768QJ1E9tib+q5Sc8MkdJi1RxLTbRcTf8LJV56aRls=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
@@ -89,6 +97,7 @@ github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OI
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
@@ -98,6 +107,8 @@ github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.3.0 h1:OS12ieG61fsCg5+qLJ+SsW9NicxNkg3b25OyT2yCeUc=
github.com/jmespath/go-jmespath v0.3.0/go.mod h1:9QtRXoHjLGCJ5IBSaohpXITPlowMeeYCZ7fLUTSywik=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfEUpgAwUN0o=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
@@ -105,8 +116,8 @@ github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+o
github.com/klauspost/compress v1.4.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.4.1/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.8.2/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.10.3 h1:OP96hzwJVBIHYU52pVTI6CczrxPvrGfgqF9N5eTO0Q8=
github.com/klauspost/compress v1.10.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/cpuid v0.0.0-20180405133222-e7e905edc00e/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/cpuid v1.2.1/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
@@ -124,6 +135,8 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.2.0/go.mod h1:4vX61m6KN+xDduDNwXrhIAVZaZaZiQ1luJk8LWSxF3s=
@@ -140,6 +153,7 @@ github.com/valyala/histogram v1.0.1/go.mod h1:lQy0xA4wUz2+IUnf97SivorsJIp8FxsnRd
github.com/valyala/quicktemplate v1.4.1 h1:tEtkSN6mTCJlYVT7As5x4wjtkk2hj2thsb0M+AcAVeM=
github.com/valyala/quicktemplate v1.4.1/go.mod h1:EH+4AkTd43SvgIbQHYu59/cJyxDoOVRUAfrukLPuGJ4=
github.com/valyala/tcplisten v0.0.0-20161114210144-ceec8f93295a/go.mod h1:v3UYOV9WzVtRmSR+PDvWpU/qWl4Wa5LApYYX4ZtKbio=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2 h1:75k/FF0Q2YM8QYo07VPddOLBslDt1MZOdEslOHvmzAs=
@@ -175,6 +189,8 @@ golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f h1:J5lckAjkw6qYlOZNj90mLYNT
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367 h1:0IiAsCRByjO2QjX7ZPkw5oU9x+n1YqRL802rjC0c3Aw=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b h1:Wh+f8QHJXR411sJR8/vRBTZ7YapZaRvUcLFFJhusH0k=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
@@ -204,6 +220,10 @@ golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b h1:0mm1VjtFUOIlE1SbDlwjYaDxZVDP2S5ou6y0gSgXHu8=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a h1:GuSPYbZzB5/dcLNCwLQLsg3obCJtX9IJhpXkvY7kzk0=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e h1:3G+cUijn7XD+S4eJFddp53Pv7+slrESplyjG25HgL+k=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -217,6 +237,8 @@ golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a h1:WXEvlFVvvGxCJLG6REjsT03iWnKLEWinaScsxF2Vm2o=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -236,6 +258,10 @@ golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4 h1:sfkvUWPNGwSV+8/fNqctR5lS2
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae h1:/WDfKMnPU+m5M4xB+6x4kaepxRw6jWvR5iDRdvjHgy8=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200317113312-5766fd39f98d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200327173247-9dae0f8f5775 h1:TC0v2RSO1u2kn1ZugjrFXkRZAEaqMN/RW+OTZkBzmLE=
golang.org/x/sys v0.0.0-20200327173247-9dae0f8f5775/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -273,8 +299,9 @@ golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapK
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200317043434-63da46f3035e/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200330040139-fa3cc9eebcfe h1:sOd+hT8wBUrIFR5Q6uQb/rg50z8NjHk96kC4adwvxjw=
golang.org/x/tools v0.0.0-20200330040139-fa3cc9eebcfe/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
@@ -290,8 +317,8 @@ google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsb
google.golang.org/api v0.17.0 h1:0q95w+VuFtv4PAx4PZVQdBMmYbaCHbnfKaEiDIcVyag=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0 h1:jz2KixHX7EcCPiQrySzPdnYT7DbINAypCqKZ1Z7GM40=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -316,17 +343,21 @@ google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce h1:1mbrb1tUU+Zmt5C94IGKADBTJZjZXAd+BubWi7r9EiI=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200317114155-1f3552e48f24/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200330113809-af700f360a68 h1:ay2fio+sR6N1ccqZQgr/bUoo6pwgbxU8imlLkQc9Nlo=
google.golang.org/genproto v0.0.0-20200330113809-af700f360a68/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0 h1:2dTRdpdFEEhJYQD8EMLB61nnrzSCTbG38PhqdhvOltg=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1 h1:zvIju4sqAGvwKspUQOhwnpcqSbzi7/H6QomNNjTL4sk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.28.0 h1:bO/TA4OxCOummhSf10siHuG7vJOiwh7SpRpFZDkOgl4=
google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=


@@ -1,8 +1,6 @@
package fsnil
import (
"fmt"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/backup/common"
)
@@ -11,7 +9,7 @@ type FS struct{}
// String returns human-readable string representation for fs.
func (fs *FS) String() string {
return fmt.Sprintf("fsnil")
return "fsnil"
}
// ListParts returns all the parts from fs.


@@ -7,9 +7,12 @@ import (
"strings"
)
var (
enable = flag.Bool("envflag.enable", false, "Whether to enable reading flags from environment variables additionally to command line. "+
"Command line flag values have priority over values from environment vars. "+
"Flags are read only from command line if this flag isn't set")
prefix = flag.String("envflag.prefix", "", "Prefix for environment variables if -envflag.enable is set")
)
// Parse parses environment vars and command-line flags.
//
@@ -48,5 +51,6 @@ func Parse() {
func getEnvFlagName(s string) string {
// Substitute dots with underscores, since env var names cannot contain dots.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/311#issuecomment-586354129 for details.
return strings.ReplaceAll(s, ".", "_")
s = strings.ReplaceAll(s, ".", "_")
return *prefix + s
}
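A sketch of the resulting lookup, assuming `vmagent` is started with `-envflag.enable -envflag.prefix=VM_`: the flag `remoteWrite.url` is read from the environment variable `VM_remoteWrite_url`, since dots are substituted with underscores before the prefix is prepended:

```
VM_remoteWrite_url=http://victoria-metrics-host:8428/api/v1/write ./vmagent -envflag.enable -envflag.prefix=VM_
```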


@@ -2,6 +2,7 @@ package fastnum
import (
"bytes"
"reflect"
"unsafe"
)
@@ -84,7 +85,7 @@ func isInt64Data(a, data []int64) bool {
if len(data) != 8*1024 {
panic("len(data) must equal to 8*1024")
}
b := int64ToByteSlice(data)
for len(a) > 0 {
n := len(data)
if n > len(a) {
@@ -92,9 +93,8 @@ func isInt64Data(a, data []int64) bool {
}
x := a[:n]
a = a[n:]
xb := int64ToByteSlice(x)
if !bytes.Equal(xb, b[:len(xb)]) {
return false
}
}
@@ -108,7 +108,7 @@ func isFloat64Data(a, data []float64) bool {
if len(data) != 8*1024 {
panic("len(data) must equal to 8*1024")
}
b := float64ToByteSlice(data)
for len(a) > 0 {
n := len(data)
if n > len(a) {
@@ -116,15 +116,30 @@ func isFloat64Data(a, data []float64) bool {
}
x := a[:n]
a = a[n:]
xb := float64ToByteSlice(x)
if !bytes.Equal(xb, b[:len(xb)]) {
return false
}
}
return true
}
func int64ToByteSlice(a []int64) (b []byte) {
sh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
sh.Data = uintptr(unsafe.Pointer(&a[0]))
sh.Len = len(a) * int(unsafe.Sizeof(a[0]))
sh.Cap = sh.Len
return
}
func float64ToByteSlice(a []float64) (b []byte) {
sh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
sh.Data = uintptr(unsafe.Pointer(&a[0]))
sh.Len = len(a) * int(unsafe.Sizeof(a[0]))
sh.Cap = sh.Len
return
}
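// As an aside, on Go 1.17+ the same zero-copy conversion could be written with
// unsafe.Slice instead of patching reflect.SliceHeader by hand (a sketch, not
// part of this change; the caller must keep `a` alive while the returned bytes
// are in use):
//
//	func int64ToByteSliceModern(a []int64) []byte {
//		return unsafe.Slice((*byte)(unsafe.Pointer(&a[0])), len(a)*8)
//	}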
var (
int64Zeros [8 * 1024]int64
int64Ones = func() (a [8 * 1024]int64) {


@@ -5,7 +5,7 @@ import (
"strings"
)
// NewArray returns new Array with the given name and description.
func NewArray(name, description string) *Array {
var a Array
flag.Var(&a, name, description)


@@ -32,7 +32,9 @@ var (
metricsAuthKey = flag.String("metricsAuthKey", "", "Auth key for /metrics. It overrides httpAuth settings")
pprofAuthKey = flag.String("pprofAuthKey", "", "Auth key for /debug/pprof. It overrides httpAuth settings")
disableResponseCompression = flag.Bool("http.disableResponseCompression", false, "Disable compression of HTTP responses for saving CPU resources. By default compression is enabled to save network bandwidth")
disableResponseCompression = flag.Bool("http.disableResponseCompression", false, "Disable compression of HTTP responses for saving CPU resources. By default compression is enabled to save network bandwidth")
maxGracefulShutdownDuration = flag.Duration("http.maxGracefulShutdownDuration", 7*time.Second, "The maximum duration for graceful shutdown of HTTP server. "+
"Highly loaded server may require increased value for graceful shutdown")
)
var (
@@ -130,10 +132,10 @@ func Stop(addr string) error {
if s == nil {
logger.Panicf("BUG: there is no http server at %q", addr)
}
ctx, cancelFunc := context.WithTimeout(context.Background(), *maxGracefulShutdownDuration)
defer cancelFunc()
if err := s.Shutdown(ctx); err != nil {
return fmt.Errorf("cannot gracefully shutdown http server at %q: %s", addr, err)
return fmt.Errorf("cannot gracefully shutdown http server at %q in %.3fs: %s", addr, maxGracefulShutdownDuration.Seconds(), err)
}
return nil
}


@@ -38,6 +38,7 @@ var transformFuncs = map[string]bool{
// New funcs from MetricsQL
"label_set": true,
"label_map": true,
"label_del": true,
"label_keep": true,
"label_copy": true,


@@ -155,7 +155,7 @@ func TestFastQueueReadUnblockByWrite(t *testing.T) {
mustDeleteDir(path)
fq := MustOpenFastQueue(path, "foobar", 13, 0)
block := "foodsafdsaf sdf"
resultCh := make(chan error)
go func() {
data, ok := fq.MustReadBlock(nil)


@@ -28,7 +28,7 @@ func LoadRelabelConfigs(path string) ([]ParsedRelabelConfig, error) {
return nil, fmt.Errorf("cannot read `relabel_configs` from %q: %s", path, err)
}
var rcs []RelabelConfig
if err := yaml.UnmarshalStrict(data, &rcs); err != nil {
return nil, fmt.Errorf("cannot unmarshal `relabel_configs` from %q: %s", path, err)
}
return ParseRelabelConfigs(nil, rcs)


@@ -46,16 +46,17 @@ func newClient(sw *ScrapeWork) *client {
}
}
hc := &fasthttp.HostClient{
Addr: host,
Name: "vm_promscrape",
Dial: statDial,
DialDualStack: netutil.TCP6Enabled(),
IsTLS: isTLS,
TLSConfig: tlsCfg,
MaxIdleConnDuration: 2 * sw.ScrapeInterval,
ReadTimeout: sw.ScrapeTimeout,
WriteTimeout: 10 * time.Second,
MaxResponseBodySize: *maxScrapeSize,
MaxIdemponentCallAttempts: 1,
}
return &client{
hc: hc,


@@ -102,7 +102,7 @@ func loadStaticConfigs(path string) ([]StaticConfig, error) {
return nil, fmt.Errorf("cannot read `static_configs` from %q: %s", path, err)
}
var stcs []StaticConfig
if err := yaml.UnmarshalStrict(data, &stcs); err != nil {
return nil, fmt.Errorf("cannot unmarshal `static_configs` from %q: %s", path, err)
}
return stcs, nil
@@ -187,8 +187,9 @@ func (sc *ScrapeConfig) appendFileSDScrapeWork(dst, prev []ScrapeWork, baseDir s
label := promrelabel.GetLabelByName(sw.Labels, "__meta_filepath")
if label == nil {
logger.Panicf("BUG: missing `__meta_filepath` label")
} else {
swPrev[label.Value] = append(swPrev[label.Value], *sw)
}
}
for i := range sc.FileSDConfigs {
var err error
@@ -508,39 +509,26 @@ func getParamsFromLabels(labels []prompbmarshal.Label, paramsOrig map[string][]s
func mergeLabels(job, scheme, target, metricsPath string, labels, externalLabels, metaLabels map[string]string, params map[string][]string) ([]prompbmarshal.Label, error) {
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
m := make(map[string]string)
for k, v := range externalLabels {
m[k] = v
}
m["job"] = job
m["__address__"] = target
m["__scheme__"] = scheme
m["__metrics_path__"] = metricsPath
for k, args := range params {
if len(args) == 0 {
continue
}
k = "__param_" + k
v := args[0]
m[k] = v
}
for k, v := range labels {
m[k] = v
}
for k, v := range metaLabels {
m[k] = v
}
result := make([]prompbmarshal.Label, 0, len(m))


@@ -270,43 +270,6 @@ scrape_configs:
- targets: ["a"]
`)
// non-existing ca_file
f(`
scrape_configs:
@@ -1091,6 +1054,62 @@ scrape_configs:
},
})
f(`
global:
external_labels:
job: foobar
foo: xx
q: qwe
__address__: aaasdf
__param_a: jlfd
scrape_configs:
- job_name: aaa
params:
a: [b, xy]
static_configs:
- targets: ["a"]
labels:
foo: bar
__param_a: c
__address__: pp
job: yyy
`, []ScrapeWork{
{
ScrapeURL: "http://pp:80/metrics?a=c&a=xy",
ScrapeInterval: defaultScrapeInterval,
ScrapeTimeout: defaultScrapeTimeout,
Labels: []prompbmarshal.Label{
{
Name: "__address__",
Value: "pp",
},
{
Name: "__metrics_path__",
Value: "/metrics",
},
{
Name: "__param_a",
Value: "c",
},
{
Name: "__scheme__",
Value: "http",
},
{
Name: "foo",
Value: "bar",
},
{
Name: "job",
Value: "yyy",
},
{
Name: "q",
Value: "qwe",
},
},
},
})
f(`
scrape_configs:
- job_name: 'snmp'
static_configs:


@@ -225,6 +225,7 @@ func equalLabel(a, b *prompbmarshal.Label) bool {
//
// This function returns after closing stopCh.
func runScrapeWorkers(sws []ScrapeWork, pushData func(wr *prompbmarshal.WriteRequest), stopCh <-chan struct{}) {
tsmGlobal.RegisterAll(sws)
var wg sync.WaitGroup
for i := range sws {
cfg := &sws[i]
@@ -240,4 +241,5 @@ func runScrapeWorkers(sws []ScrapeWork, pushData func(wr *prompbmarshal.WriteReq
}()
}
wg.Wait()
tsmGlobal.UnregisterAll(sws)
}


@@ -35,6 +35,25 @@ func (tsm *targetStatusMap) Reset() {
tsm.mu.Unlock()
}
func (tsm *targetStatusMap) RegisterAll(sws []ScrapeWork) {
tsm.mu.Lock()
for i := range sws {
sw := &sws[i]
tsm.m[sw.ScrapeURL] = targetStatus{
sw: sw,
}
}
tsm.mu.Unlock()
}
func (tsm *targetStatusMap) UnregisterAll(sws []ScrapeWork) {
tsm.mu.Lock()
for i := range sws {
delete(tsm.m, sws[i].ScrapeURL)
}
tsm.mu.Unlock()
}
func (tsm *targetStatusMap) Update(sw *ScrapeWork, up bool, scrapeTime, scrapeDuration int64, err error) {
tsm.mu.Lock()
tsm.m[sw.ScrapeURL] = targetStatus{
@@ -50,12 +69,7 @@ func (tsm *targetStatusMap) Update(sw *ScrapeWork, up bool, scrapeTime, scrapeDu
func (tsm *targetStatusMap) WriteHumanReadable(w io.Writer) {
byJob := make(map[string][]targetStatus)
tsm.mu.Lock()
for _, st := range tsm.m {
job := ""
label := promrelabel.GetLabelByName(st.sw.Labels, "job")
if label != nil {


@@ -0,0 +1,172 @@
package csvimport
import (
"fmt"
"strconv"
"strings"
"time"
"github.com/valyala/fastjson/fastfloat"
)
// ColumnDescriptor represents parsing rules for a single csv column.
//
// The column is transformed to either timestamp, tag or metric value
// depending on the corresponding non-empty field.
//
// If all the fields are empty, then the given column is ignored.
type ColumnDescriptor struct {
// ParseTimestamp is set to a function, which is used for timestamp
// parsing from the given column.
ParseTimestamp func(s string) (int64, error)
// TagName is set to tag name for tag value, which should be obtained
// from the given column.
TagName string
// MetricName is set to metric name for value obtained from the given column.
MetricName string
}
const maxColumnsPerRow = 64 * 1024
// ParseColumnDescriptors parses column descriptors from s.
//
// s must contain a comma-separated list of the following entries:
//
// <column_pos>:<column_type>:<extension>
//
// Where:
//
// - <column_pos> is numeric csv column position. The first column has position 1.
// - <column_type> is one of the following types:
// - time - the corresponding column contains timestamp. Timestamp format is determined by <extension>. The following formats are supported:
// - unix_s - unix timestamp in seconds
// - unix_ms - unix timestamp in milliseconds
// - unix_ns - unix timestamp in nanoseconds
// - rfc3339 - RFC3339 format in the form `2006-01-02T15:04:05Z07:00`
// - label - the corresponding column contains metric label with the name set in <extension>.
// - metric - the corresponding column contains metric value with the name set in <extension>.
//
// s must contain at least a single 'metric' column and no more than a single `time` column.
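//
// For example, "1:time:unix_ms,2:label:host,3:metric:cpu_usage" (the column
// names here are illustrative) parses the first column as a millisecond
// timestamp, the second column as the `host` label and the third column as
// the `cpu_usage` metric value.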
func ParseColumnDescriptors(s string) ([]ColumnDescriptor, error) {
m := make(map[int]ColumnDescriptor)
cols := strings.Split(s, ",")
hasValueCol := false
hasTimeCol := false
maxPos := 0
for i, col := range cols {
var cd ColumnDescriptor
a := strings.SplitN(col, ":", 3)
if len(a) != 3 {
return nil, fmt.Errorf("entry #%d must have the following form: <column_pos>:<column_type>:<extension>; got %q", i+1, a)
}
pos, err := strconv.Atoi(a[0])
if err != nil {
return nil, fmt.Errorf("cannot parse <column_pos> part from the entry #%d %q: %s", i+1, col, err)
}
if pos <= 0 {
return nil, fmt.Errorf("<column_pos> cannot be smaller than 1; got %d for entry #%d %q", pos, i+1, col)
}
if pos > maxColumnsPerRow {
return nil, fmt.Errorf("<column_pos> cannot be bigger than %d; got %d for entry #%d %q", maxColumnsPerRow, pos, i+1, col)
}
if pos > maxPos {
maxPos = pos
}
typ := a[1]
switch typ {
case "time":
if hasTimeCol {
return nil, fmt.Errorf("duplicate time column has been found at entry #%d %q for %q", i+1, col, s)
}
parseTimestamp, err := parseTimeFormat(a[2])
if err != nil {
return nil, fmt.Errorf("cannot parse time format from the entry #%d %q: %s", i+1, col, err)
}
cd.ParseTimestamp = parseTimestamp
hasTimeCol = true
case "label":
cd.TagName = a[2]
if len(cd.TagName) == 0 {
return nil, fmt.Errorf("label name cannot be empty in the entry #%d %q", i+1, col)
}
case "metric":
cd.MetricName = a[2]
if len(cd.MetricName) == 0 {
return nil, fmt.Errorf("metric name cannot be empty in the entry #%d %q", i+1, col)
}
hasValueCol = true
default:
return nil, fmt.Errorf("unknown <column_type>: %q; allowed values: time, metric, label", typ)
}
pos--
if _, ok := m[pos]; ok {
return nil, fmt.Errorf("duplicate <column_pos> %d for the entry #%d %q", pos, i+1, col)
}
m[pos] = cd
}
if !hasValueCol {
return nil, fmt.Errorf("missing 'metric' column in %q", s)
}
cds := make([]ColumnDescriptor, maxPos)
for pos, cd := range m {
cds[pos] = cd
}
return cds, nil
}
func parseTimeFormat(format string) (func(s string) (int64, error), error) {
if strings.HasPrefix(format, "custom:") {
format = format[len("custom:"):]
return newParseCustomTimeFunc(format), nil
}
switch format {
case "unix_s":
return parseUnixTimestampSeconds, nil
case "unix_ms":
return parseUnixTimestampMilliseconds, nil
case "unix_ns":
return parseUnixTimestampNanoseconds, nil
case "rfc3339":
return parseRFC3339, nil
default:
return nil, fmt.Errorf("unknown format for time parsing: %q; supported formats: unix_s, unix_ms, unix_ns, rfc3339", format)
}
}
func parseUnixTimestampSeconds(s string) (int64, error) {
n := fastfloat.ParseInt64BestEffort(s)
if n > int64(1<<63-1)/1e3 {
return 0, fmt.Errorf("too big unix timestamp in seconds: %d; must be smaller than %d", n, int64(1<<63-1)/1e3)
}
return n * 1e3, nil
}
func parseUnixTimestampMilliseconds(s string) (int64, error) {
n := fastfloat.ParseInt64BestEffort(s)
return n, nil
}
func parseUnixTimestampNanoseconds(s string) (int64, error) {
n := fastfloat.ParseInt64BestEffort(s)
return n / 1e6, nil
}
func parseRFC3339(s string) (int64, error) {
t, err := time.Parse(time.RFC3339, s)
if err != nil {
return 0, fmt.Errorf("cannot parse time in RFC3339 from %q: %s", s, err)
}
return t.UnixNano() / 1e6, nil
}
func newParseCustomTimeFunc(format string) func(s string) (int64, error) {
return func(s string) (int64, error) {
t, err := time.Parse(format, s)
if err != nil {
return 0, fmt.Errorf("cannot parse time in custom format %q from %q: %s", format, s, err)
}
return t.UnixNano() / 1e6, nil
}
}


@@ -0,0 +1,226 @@
package csvimport
import (
"bytes"
"fmt"
"reflect"
"testing"
"time"
"unsafe"
)
func TestParseColumnDescriptorsSuccess(t *testing.T) {
f := func(s string, cdsExpected []ColumnDescriptor) {
t.Helper()
cds, err := ParseColumnDescriptors(s)
if err != nil {
t.Fatalf("unexpected error on ParseColumnDescriptors(%q): %s", s, err)
}
if !equalColumnDescriptors(cds, cdsExpected) {
t.Fatalf("unexpected cds returned from ParseColumnDescriptors(%q);\ngot\n%v\nwant\n%v", s, cds, cdsExpected)
}
}
f("1:time:unix_s,3:metric:temperature", []ColumnDescriptor{
{
ParseTimestamp: parseUnixTimestampSeconds,
},
{},
{
MetricName: "temperature",
},
})
f("2:time:unix_ns,1:metric:temperature,3:label:city,4:label:country", []ColumnDescriptor{
{
MetricName: "temperature",
},
{
ParseTimestamp: parseUnixTimestampNanoseconds,
},
{
TagName: "city",
},
{
TagName: "country",
},
})
f("2:time:unix_ms,1:metric:temperature", []ColumnDescriptor{
{
MetricName: "temperature",
},
{
ParseTimestamp: parseUnixTimestampMilliseconds,
},
})
f("2:time:rfc3339,1:metric:temperature", []ColumnDescriptor{
{
MetricName: "temperature",
},
{
ParseTimestamp: parseRFC3339,
},
})
}
func TestParseColumnDescriptorsFailure(t *testing.T) {
f := func(s string) {
t.Helper()
cds, err := ParseColumnDescriptors(s)
if err == nil {
t.Fatalf("expecting non-nil error for ParseColumnDescriptors(%q)", s)
}
if cds != nil {
t.Fatalf("expecting nil cds; got %v", cds)
}
}
// Empty string
f("")
// Missing metric column
f("1:time:unix_s")
f("1:label:aaa")
// Invalid column number
f("foo:time:unix_s,bar:metric:temp")
f("0:metric:aaa")
f("-123:metric:aaa")
f(fmt.Sprintf("%d:metric:aaa", maxColumnsPerRow+10))
// Duplicate time column
f("1:time:unix_s,2:time:rfc3339,3:metric:aaa")
f("1:time:custom:2006,2:time:rfc3339,3:metric:aaa")
// Invalid time format
f("1:time:foobar,2:metric:aaa")
f("1:time:,2:metric:aaa")
f("1:time:sss:sss,2:metric:aaa")
// empty label name
f("2:label:,1:metric:aaa")
// Empty metric name
f("1:metric:")
// Unknown type
f("1:metric:aaa,2:aaaa:bbb")
// duplicate column number
f("1:metric:a,1:metric:b")
}
func TestParseUnixTimestampSeconds(t *testing.T) {
f := func(s string, tsExpected int64) {
t.Helper()
ts, err := parseUnixTimestampSeconds(s)
if err != nil {
t.Fatalf("unexpected error when parsing %q: %s", s, err)
}
if ts != tsExpected {
t.Fatalf("unexpected ts when parsing %q; got %d; want %d", s, ts, tsExpected)
}
}
f("0", 0)
f("123", 123000)
f("-123", -123000)
}
func TestParseUnixTimestampMilliseconds(t *testing.T) {
f := func(s string, tsExpected int64) {
t.Helper()
ts, err := parseUnixTimestampMilliseconds(s)
if err != nil {
t.Fatalf("unexpected error when parsing %q: %s", s, err)
}
if ts != tsExpected {
t.Fatalf("unexpected ts when parsing %q; got %d; want %d", s, ts, tsExpected)
}
}
f("0", 0)
f("123", 123)
f("-123", -123)
}
func TestParseUnixTimestampNanoseconds(t *testing.T) {
f := func(s string, tsExpected int64) {
t.Helper()
ts, err := parseUnixTimestampNanoseconds(s)
if err != nil {
t.Fatalf("unexpected error when parsing %q: %s", s, err)
}
if ts != tsExpected {
t.Fatalf("unexpected ts when parsing %q; got %d; want %d", s, ts, tsExpected)
}
}
f("0", 0)
f("123", 0)
f("12343567", 12)
f("-12343567", -12)
}
func TestParseRFC3339(t *testing.T) {
f := func(s string, tsExpected int64) {
t.Helper()
ts, err := parseRFC3339(s)
if err != nil {
t.Fatalf("unexpected error when parsing %q: %s", s, err)
}
if ts != tsExpected {
t.Fatalf("unexpected ts when parsing %q; got %d; want %d", s, ts, tsExpected)
}
}
f("2006-01-02T15:04:05Z", 1136214245000)
f("2020-03-11T18:23:46Z", 1583951026000)
}
func TestParseCustomTimeFunc(t *testing.T) {
f := func(format, s string, tsExpected int64) {
t.Helper()
f := newParseCustomTimeFunc(format)
ts, err := f(s)
if err != nil {
t.Fatalf("unexpected error when parsing %q: %s", s, err)
}
if ts != tsExpected {
t.Fatalf("unexpected ts when parsing %q; got %d; want %d", s, ts, tsExpected)
}
}
f(time.RFC1123, "Mon, 29 Oct 2018 07:50:37 GMT", 1540799437000)
f("2006-01-02 15:04:05.999Z", "2015-08-10 20:04:40.123Z", 1439237080123)
}
func equalColumnDescriptors(a, b []ColumnDescriptor) bool {
if len(a) != len(b) {
return false
}
for i, x := range a {
y := b[i]
if !equalColumnDescriptor(x, y) {
return false
}
}
return true
}
func equalColumnDescriptor(x, y ColumnDescriptor) bool {
sh1 := &reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(&x.ParseTimestamp)),
Len: int(unsafe.Sizeof(x.ParseTimestamp)),
Cap: int(unsafe.Sizeof(x.ParseTimestamp)),
}
b1 := *(*[]byte)(unsafe.Pointer(sh1))
sh2 := &reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(&y.ParseTimestamp)),
Len: int(unsafe.Sizeof(y.ParseTimestamp)),
Cap: int(unsafe.Sizeof(y.ParseTimestamp)),
}
b2 := *(*[]byte)(unsafe.Pointer(sh2))
if !bytes.Equal(b1, b2) {
return false
}
if x.TagName != y.TagName {
return false
}
if x.MetricName != y.MetricName {
return false
}
return true
}


@@ -0,0 +1,145 @@
package csvimport
import (
"fmt"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson/fastfloat"
)
// Rows represents csv rows.
type Rows struct {
// Rows contains parsed csv rows after the call to Unmarshal.
Rows []Row
sc scanner
tagsPool []Tag
metricsPool []metric
}
// Reset resets rs.
func (rs *Rows) Reset() {
rows := rs.Rows
for i := range rows {
r := &rows[i]
r.Metric = ""
r.Tags = nil
r.Value = 0
r.Timestamp = 0
}
rs.Rows = rs.Rows[:0]
rs.sc.Init("")
tags := rs.tagsPool
for i := range tags {
t := &tags[i]
t.Key = ""
t.Value = ""
}
rs.tagsPool = rs.tagsPool[:0]
metrics := rs.metricsPool
for i := range metrics {
m := &metrics[i]
m.Name = ""
m.Value = 0
}
rs.metricsPool = rs.metricsPool[:0]
}
// Row represents a single metric row
type Row struct {
Metric string
Tags []Tag
Value float64
Timestamp int64
}
// Tag represents metric tag
type Tag struct {
Key string
Value string
}
type metric struct {
Name string
Value float64
}
// Unmarshal unmarshals csv lines from s according to the given cds.
func (rs *Rows) Unmarshal(s string, cds []ColumnDescriptor) {
rs.sc.Init(s)
rs.Rows, rs.tagsPool, rs.metricsPool = parseRows(&rs.sc, rs.Rows[:0], rs.tagsPool[:0], rs.metricsPool[:0], cds)
}
func parseRows(sc *scanner, dst []Row, tags []Tag, metrics []metric, cds []ColumnDescriptor) ([]Row, []Tag, []metric) {
for sc.NextLine() {
line := sc.Line
var r Row
col := uint(0)
metrics = metrics[:0]
tagsLen := len(tags)
for sc.NextColumn() {
if col >= uint(len(cds)) {
// Skip the superfluous column.
continue
}
cd := &cds[col]
col++
if parseTimestamp := cd.ParseTimestamp; parseTimestamp != nil {
timestamp, err := parseTimestamp(sc.Column)
if err != nil {
sc.Error = fmt.Errorf("cannot parse timestamp from %q: %s", sc.Column, err)
break
}
r.Timestamp = timestamp
continue
}
if tagName := cd.TagName; tagName != "" {
tags = append(tags, Tag{
Key: tagName,
Value: sc.Column,
})
continue
}
metricName := cd.MetricName
if metricName == "" {
// The given field is ignored.
continue
}
value := fastfloat.ParseBestEffort(sc.Column)
metrics = append(metrics, metric{
Name: metricName,
Value: value,
})
}
if col < uint(len(cds)) && sc.Error == nil {
sc.Error = fmt.Errorf("missing columns in the csv line %q; got %d columns; want at least %d columns", line, col, len(cds))
}
if sc.Error != nil {
logger.Errorf("error when parsing csv line %q: %s; skipping this line", line, sc.Error)
invalidLines.Inc()
continue
}
if len(metrics) == 0 {
logger.Panicf("BUG: expecting at least a single metric in columnDescriptors=%#v", cds)
}
r.Metric = metrics[0].Name
r.Tags = tags[tagsLen:]
r.Value = metrics[0].Value
dst = append(dst, r)
for _, m := range metrics[1:] {
dst = append(dst, Row{
Metric: m.Name,
Tags: r.Tags,
Value: m.Value,
Timestamp: r.Timestamp,
})
}
}
return dst, tags, metrics
}
var invalidLines = metrics.NewCounter(`vm_rows_invalid_total{type="csvimport"}`)


@@ -0,0 +1,171 @@
package csvimport
import (
"reflect"
"testing"
)
func TestRowsUnmarshalFailure(t *testing.T) {
f := func(format, s string) {
t.Helper()
cds, err := ParseColumnDescriptors(format)
if err != nil {
t.Fatalf("unexpected error when parsing %q: %s", format, err)
}
var rs Rows
rs.Unmarshal(s, cds)
if len(rs.Rows) != 0 {
t.Fatalf("unexpected rows unmarshaled: %#v", rs.Rows)
}
}
// Invalid timestamp
f("1:metric:foo,2:time:rfc3339", "234,foobar")
// Missing columns
f("3:metric:aaa", "123,456")
}
func TestRowsUnmarshalSuccess(t *testing.T) {
f := func(format, s string, rowsExpected []Row) {
t.Helper()
cds, err := ParseColumnDescriptors(format)
if err != nil {
t.Fatalf("unexpected error when parsing %q: %s", format, err)
}
var rs Rows
rs.Unmarshal(s, cds)
if !reflect.DeepEqual(rs.Rows, rowsExpected) {
t.Fatalf("unexpected rows;\ngot\n%v\nwant\n%v", rs.Rows, rowsExpected)
}
rs.Reset()
// Unmarshal rows the second time
rs.Unmarshal(s, cds)
if !reflect.DeepEqual(rs.Rows, rowsExpected) {
t.Fatalf("unexpected rows on the second unmarshal;\ngot\n%v\nwant\n%v", rs.Rows, rowsExpected)
}
}
f("1:metric:foo", "", nil)
f("1:metric:foo", `123`, []Row{
{
Metric: "foo",
Value: 123,
},
})
f("1:metric:foo,2:time:unix_s,3:label:foo,4:label:bar", `123,456,xxx,yy`, []Row{
{
Metric: "foo",
Tags: []Tag{
{
Key: "foo",
Value: "xxx",
},
{
Key: "bar",
Value: "yy",
},
},
Value: 123,
Timestamp: 456000,
},
})
// Multiple metrics
f("2:metric:bar,1:metric:foo,3:label:foo,4:label:bar,5:time:custom:2006-01-02 15:04:05.999Z",
`"2.34",5.6,"foo"",bar","aa",2015-08-10 20:04:40.123Z`, []Row{
{
Metric: "foo",
Tags: []Tag{
{
Key: "foo",
Value: "foo\",bar",
},
{
Key: "bar",
Value: "aa",
},
},
Value: 2.34,
Timestamp: 1439237080123,
},
{
Metric: "bar",
Tags: []Tag{
{
Key: "foo",
Value: "foo\",bar",
},
{
Key: "bar",
Value: "aa",
},
},
Value: 5.6,
Timestamp: 1439237080123,
},
})
f("2:label:symbol,3:time:custom:2006-01-02 15:04:05.999Z,4:metric:bid,5:metric:ask",
`
"aaa","AUDCAD","2015-08-10 00:00:01.000Z",0.9725,0.97273
"aaa","AUDCAD","2015-08-10 00:00:02.000Z",0.97253,0.97276
`, []Row{
{
Metric: "bid",
Tags: []Tag{
{
Key: "symbol",
Value: "AUDCAD",
},
},
Value: 0.9725,
Timestamp: 1439164801000,
},
{
Metric: "ask",
Tags: []Tag{
{
Key: "symbol",
Value: "AUDCAD",
},
},
Value: 0.97273,
Timestamp: 1439164801000,
},
{
Metric: "bid",
Tags: []Tag{
{
Key: "symbol",
Value: "AUDCAD",
},
},
Value: 0.97253,
Timestamp: 1439164802000,
},
{
Metric: "ask",
Tags: []Tag{
{
Key: "symbol",
Value: "AUDCAD",
},
},
Value: 0.97276,
Timestamp: 1439164802000,
},
})
// Superfluous columns
f("1:metric:foo", `123,456,foo,bar`, []Row{
{
Metric: "foo",
Value: 123,
},
})
f("2:metric:foo", `123,-45.6,foo,bar`, []Row{
{
Metric: "foo",
Value: -45.6,
},
})
}


@@ -0,0 +1,31 @@
package csvimport
import (
"fmt"
"testing"
)
func BenchmarkRowsUnmarshal(b *testing.B) {
cds, err := ParseColumnDescriptors("1:label:symbol,2:metric:bid,3:metric:ask,4:time:unix_ms")
if err != nil {
b.Fatalf("cannot parse column descriptors: %s", err)
}
s := `GOOG,123.456,789.234,1345678999003
GOOG,223.456,889.234,1345678939003
GOOG,323.456,989.234,1345678949003
MSFT,423.456,189.234,1345678959003
AMZN,523.456,189.234,1345678959005
`
const rowsExpected = 10
b.SetBytes(int64(len(s)))
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
var rs Rows
for pb.Next() {
rs.Unmarshal(s, cds)
if len(rs.Rows) != rowsExpected {
panic(fmt.Errorf("unexpected rows parsed; got %d; want %d; rows: %v", len(rs.Rows), rowsExpected, rs.Rows))
}
}
})
}


@@ -0,0 +1,127 @@
package csvimport
import (
"fmt"
"strings"
)
// scanner is a csv scanner.
type scanner struct {
// The line value read after the call to NextLine()
Line string
// The column value read after the call to NextColumn()
Column string
// Error may be set only on NextColumn call.
// It is cleared on NextLine call.
Error error
s string
}
// Init initializes sc with s
func (sc *scanner) Init(s string) {
sc.Line = ""
sc.Column = ""
sc.Error = nil
sc.s = s
}
// NextLine advances the csv scanner to the next line and sets sc.Line to it.
//
// It clears sc.Error.
//
// false is returned if no more lines are left in sc.s.
func (sc *scanner) NextLine() bool {
s := sc.s
sc.Error = nil
for len(s) > 0 {
n := strings.IndexByte(s, '\n')
var line string
if n >= 0 {
line = trimTrailingSpace(s[:n])
s = s[n+1:]
} else {
line = trimTrailingSpace(s)
s = ""
}
sc.Line = line
sc.s = s
if len(line) > 0 {
return true
}
}
return false
}
// NextColumn advances sc.Line to the next Column and sets sc.Column to it.
//
// false is returned if no more columns left in sc.Line or if any error occurs.
// sc.Error is set to error in the case of error.
func (sc *scanner) NextColumn() bool {
s := sc.Line
if len(s) == 0 {
return false
}
if sc.Error != nil {
return false
}
if s[0] == '"' {
sc.Column, sc.Line, sc.Error = readQuotedField(s)
return sc.Error == nil
}
n := strings.IndexByte(s, ',')
if n >= 0 {
sc.Column = s[:n]
sc.Line = s[n+1:]
} else {
sc.Column = s
sc.Line = ""
}
return true
}
func trimTrailingSpace(s string) string {
if len(s) > 0 && s[len(s)-1] == '\r' {
return s[:len(s)-1]
}
return s
}
func readQuotedField(s string) (string, string, error) {
sOrig := s
if len(s) == 0 || s[0] != '"' {
return "", sOrig, fmt.Errorf("missing opening quote for %q", sOrig)
}
s = s[1:]
hasEscapedQuote := false
for {
n := strings.IndexByte(s, '"')
if n < 0 {
return "", sOrig, fmt.Errorf("missing closing quote for %q", sOrig)
}
s = s[n+1:]
if len(s) == 0 {
// The end of string found
return unquote(sOrig[1:len(sOrig)-1], hasEscapedQuote), "", nil
}
if s[0] == '"' {
// Take into account escaped quote
s = s[1:]
hasEscapedQuote = true
continue
}
if s[0] != ',' {
return "", sOrig, fmt.Errorf("missing comma after quoted field in %q", sOrig)
}
return unquote(sOrig[1:len(sOrig)-len(s)-1], hasEscapedQuote), s[1:], nil
}
}
func unquote(s string, hasEscapedQuote bool) string {
if !hasEscapedQuote {
return s
}
return strings.ReplaceAll(s, `""`, `"`)
}


@@ -0,0 +1,85 @@
package csvimport
import (
"testing"
)
func TestScannerSuccess(t *testing.T) {
var sc scanner
sc.Init("foo,bar\n\"aa,\"\"bb\",\"\"")
if !sc.NextLine() {
t.Fatalf("expecting the first line")
}
if sc.Line != "foo,bar" {
t.Fatalf("unexpected line; got %q; want %q", sc.Line, "foo,bar")
}
if !sc.NextColumn() {
t.Fatalf("expecting the first column")
}
if sc.Column != "foo" {
t.Fatalf("unexpected first column; got %q; want %q", sc.Column, "foo")
}
if !sc.NextColumn() {
t.Fatalf("expecting the second column")
}
if sc.Column != "bar" {
t.Fatalf("unexpected second column; got %q; want %q", sc.Column, "bar")
}
if sc.NextColumn() {
t.Fatalf("unexpected next column: %q", sc.Column)
}
if sc.Error != nil {
t.Fatalf("unexpected error: %s", sc.Error)
}
if !sc.NextLine() {
t.Fatalf("expecting the second line")
}
if sc.Line != "\"aa,\"\"bb\",\"\"" {
t.Fatalf("unexpected the second line; got %q; want %q", sc.Line, "\"aa,\"\"bb\",\"\"")
}
if !sc.NextColumn() {
t.Fatalf("expecting the first column on the second line")
}
if sc.Column != "aa,\"bb" {
t.Fatalf("unexpected column on the second line; got %q; want %q", sc.Column, "aa,\"bb")
}
if !sc.NextColumn() {
t.Fatalf("expecting the second column on the second line")
}
if sc.Column != "" {
t.Fatalf("unexpected column on the second line; got %q; want %q", sc.Column, "")
}
if sc.NextColumn() {
t.Fatalf("unexpected next column on the second line: %q", sc.Column)
}
if sc.Error != nil {
t.Fatalf("unexpected error: %s", sc.Error)
}
if sc.NextLine() {
t.Fatalf("unexpected next line: %q", sc.Line)
}
}
func TestScannerFailure(t *testing.T) {
f := func(s string) {
t.Helper()
var sc scanner
sc.Init(s)
for sc.NextLine() {
for sc.NextColumn() {
}
if sc.Error != nil {
if sc.NextColumn() {
t.Fatalf("unexpected NextColumn success after the error %v", sc.Error)
}
return
}
}
t.Fatalf("expecting at least a single error")
}
// Unclosed quote
f("foo\r\n\"bar,")
f(`"foo,"bar`)
f(`foo,"bar",""a`)
f(`foo,"bar","a""`)
}

View File

@@ -0,0 +1,124 @@
package csvimport
import (
"fmt"
"io"
"net/http"
"runtime"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics"
)
// ParseStream parses csv from req and calls callback for the parsed rows.
//
// The callback can be called multiple times for streamed data from req.
//
// The callback must not hold references to rows after returning, since the underlying buffers are reused.
func ParseStream(req *http.Request, callback func(rows []Row) error) error {
readCalls.Inc()
q := req.URL.Query()
format := q.Get("format")
cds, err := ParseColumnDescriptors(format)
if err != nil {
return fmt.Errorf("cannot parse the provided csv format: %s", err)
}
r := req.Body
if req.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(r)
if err != nil {
return fmt.Errorf("cannot read gzipped csv data: %s", err)
}
defer common.PutGzipReader(zr)
r = zr
}
ctx := getStreamContext()
defer putStreamContext(ctx)
for ctx.Read(r, cds) {
if err := callback(ctx.Rows.Rows); err != nil {
return err
}
}
return ctx.Error()
}
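A minimal sketch of calling ParseStream from an HTTP handler (the handler and the row-counting callback are hypothetical; rows must be consumed or copied before the callback returns, since the underlying buffers are reused):

func handleCSVImport(w http.ResponseWriter, req *http.Request) {
	var rowsTotal int
	err := ParseStream(req, func(rows []Row) error {
		// rows is reused after this callback returns, so consume it here.
		rowsTotal += len(rows)
		return nil
	})
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	fmt.Fprintf(w, "imported %d rows\n", rowsTotal)
}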
func (ctx *streamContext) Read(r io.Reader, cds []ColumnDescriptor) bool {
if ctx.err != nil {
return false
}
ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(r, ctx.reqBuf, ctx.tailBuf)
if ctx.err != nil {
if ctx.err != io.EOF {
readErrors.Inc()
ctx.err = fmt.Errorf("cannot read csv data: %s", ctx.err)
}
return false
}
ctx.Rows.Unmarshal(bytesutil.ToUnsafeString(ctx.reqBuf), cds)
rowsRead.Add(len(ctx.Rows.Rows))
// Set missing timestamps
currentTs := time.Now().UnixNano() / 1e6
for i := range ctx.Rows.Rows {
row := &ctx.Rows.Rows[i]
if row.Timestamp == 0 {
row.Timestamp = currentTs
}
}
return true
}
var (
readCalls = metrics.NewCounter(`vm_protoparser_read_calls_total{type="csvimport"}`)
readErrors = metrics.NewCounter(`vm_protoparser_read_errors_total{type="csvimport"}`)
rowsRead = metrics.NewCounter(`vm_protoparser_rows_read_total{type="csvimport"}`)
)
type streamContext struct {
Rows Rows
reqBuf []byte
tailBuf []byte
err error
}
func (ctx *streamContext) Error() error {
if ctx.err == io.EOF {
return nil
}
return ctx.err
}
func (ctx *streamContext) reset() {
ctx.Rows.Reset()
ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0]
ctx.err = nil
}
func getStreamContext() *streamContext {
select {
case ctx := <-streamContextPoolCh:
return ctx
default:
if v := streamContextPool.Get(); v != nil {
return v.(*streamContext)
}
return &streamContext{}
}
}
func putStreamContext(ctx *streamContext) {
ctx.reset()
select {
case streamContextPoolCh <- ctx:
default:
streamContextPool.Put(ctx)
}
}
var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
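The pooling above layers a bounded channel over sync.Pool: the channel keeps up to GOMAXPROCS contexts hot even across GC cycles (sync.Pool may drop its contents on GC), while the Pool absorbs any burst beyond the channel capacity. A standalone sketch of the same two-tier pattern for an arbitrary type (all names hypothetical):

type buffer struct{ b []byte }

var bufPool sync.Pool
var bufCh = make(chan *buffer, runtime.GOMAXPROCS(-1))

func getBuffer() *buffer {
	select {
	case b := <-bufCh:
		return b
	default:
		if v := bufPool.Get(); v != nil {
			return v.(*buffer)
		}
		return &buffer{}
	}
}

func putBuffer(b *buffer) {
	b.b = b.b[:0] // reset before returning to the pool
	select {
	case bufCh <- b:
	default:
		bufPool.Put(b)
	}
}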

View File

@@ -126,10 +126,14 @@ type indexDB struct {
// Cache for fast MetricID -> MetricName lookup.
metricNameCache *workingsetcache.Cache
// Cache holding useless TagFilters entries, which have no tag filters
// Cache for useless TagFilters entries, which have no tag filters
// matching low number of metrics.
uselessTagFiltersCache *workingsetcache.Cache
// Cache for (date, tagFilter) -> metricIDsLen, which is used for reducing
// the amount of work when matching a set of filters.
metricIDsPerDateTagFilterCache *workingsetcache.Cache
indexSearchPool sync.Pool
// An inmemory set of deleted metricIDs.
@@ -178,10 +182,11 @@ func openIndexDB(path string, metricIDCache, metricNameCache *workingsetcache.Ca
tb: tb,
name: name,
tagCache: workingsetcache.New(mem/32, time.Hour),
metricIDCache: metricIDCache,
metricNameCache: metricNameCache,
uselessTagFiltersCache: workingsetcache.New(mem/128, time.Hour),
tagCache: workingsetcache.New(mem/32, time.Hour),
metricIDCache: metricIDCache,
metricNameCache: metricNameCache,
uselessTagFiltersCache: workingsetcache.New(mem/128, time.Hour),
metricIDsPerDateTagFilterCache: workingsetcache.New(mem/128, time.Hour),
currHourMetricIDs: currHourMetricIDs,
prevHourMetricIDs: prevHourMetricIDs,
@@ -348,11 +353,13 @@ func (db *indexDB) decRef() {
// Free space occupied by caches owned by db.
db.tagCache.Stop()
db.uselessTagFiltersCache.Stop()
db.metricIDsPerDateTagFilterCache.Stop()
db.tagCache = nil
db.metricIDCache = nil
db.metricNameCache = nil
db.uselessTagFiltersCache = nil
db.metricIDsPerDateTagFilterCache = nil
if atomic.LoadUint64(&db.mustDrop) == 0 {
return
@@ -1014,8 +1021,7 @@ func (is *indexSearch) getStartDateForPerDayInvertedIndex() (uint64, error) {
item := ts.Item
if !bytes.HasPrefix(item, prefix) {
// The database doesn't contain a per-day inverted index yet.
// Return the next date, since the current date may contain unindexed data.
return minDate + 1, nil
return minDate, nil
}
suffix := item[len(prefix):]
@@ -1024,15 +1030,13 @@ func (is *indexSearch) getStartDateForPerDayInvertedIndex() (uint64, error) {
return 0, fmt.Errorf("unexpected (date, tag)->metricIDs row len; must be at least 8 bytes; got %d bytes", len(suffix))
}
minDate = encoding.UnmarshalUint64(suffix)
// The minDate may contain an incomplete inverted index, so increment it.
return minDate + 1, nil
return minDate, nil
}
if err := ts.Error(); err != nil {
return 0, err
}
// There are no (date,tag)->metricIDs entries in the database yet.
// Return the next date, since the current date may contain unindexed data.
return minDate + 1, nil
return minDate, nil
}
func (is *indexSearch) loadDeletedMetricIDs() (*uint64set.Set, error) {
@@ -1209,14 +1213,36 @@ func mergeTSIDs(a, b []TSID) []TSID {
return tsids
}
func (is *indexSearch) containsTimeRange(tr TimeRange) (bool, error) {
ts := &is.ts
kb := &is.kb
// Verify whether the maximum date in `ts` covers tr.MinTimestamp.
minDate := uint64(tr.MinTimestamp) / msecPerDay
kb.B = marshalCommonPrefix(kb.B[:0], nsPrefixDateToMetricID)
prefix := kb.B
kb.B = encoding.MarshalUint64(kb.B, minDate)
ts.Seek(kb.B)
if !ts.NextItem() {
if err := ts.Error(); err != nil {
return false, fmt.Errorf("error when searching for minDate=%d, prefix %q: %s", minDate, kb.B, err)
}
return false, nil
}
if !bytes.HasPrefix(ts.Item, prefix) {
// minDate exceeds max date from ts.
return false, nil
}
return true, nil
}
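For reference, dates in the per-day index are day numbers obtained by integer division of millisecond timestamps, so a whole UTC day maps to a single key. A sketch making the bucketing explicit (the helper is hypothetical; msecPerDay is assumed to be 24*3600*1000, matching its use in this file):

// dateForTimestamp returns the day bucket used in per-day index keys.
// E.g. 2020-03-31T00:00:00Z = 1585612800000 ms maps to day 18352.
func dateForTimestamp(tsMsec int64) uint64 {
	return uint64(tsMsec) / msecPerDay
}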
func (is *indexSearch) searchTSIDs(tfss []*TagFilters, tr TimeRange, maxMetrics int) ([]TSID, error) {
// Verify whether `is` contains data for the given tr.
ok, err := is.containsTimeRange(tr)
if err != nil {
return nil, fmt.Errorf("error in containsTimeRange(%s): %s", &tr, err)
return nil, err
}
if !ok {
// Fast path: nothing to search.
// Fast path - the index doesn't contain data for the given tr.
return nil, nil
}
metricIDs, err := is.searchMetricIDs(tfss, tr, maxMetrics)
@@ -1684,30 +1710,16 @@ func (is *indexSearch) searchMetricIDs(tfss []*TagFilters, tr TimeRange, maxMetr
func (is *indexSearch) updateMetricIDsForTagFilters(metricIDs *uint64set.Set, tfs *TagFilters, tr TimeRange, maxMetrics int) error {
// Sort tag filters for faster ts.Seek below.
sort.Slice(tfs.tfs, func(i, j int) bool {
// Move regexp and negative filters to the end, since they require scanning
// all the entries for the given label.
a := &tfs.tfs[i]
b := &tfs.tfs[j]
if a.isRegexp != b.isRegexp {
return !a.isRegexp
}
if a.isNegative != b.isNegative {
return !a.isNegative
}
if len(a.orSuffixes) != len(b.orSuffixes) {
return len(a.orSuffixes) < len(b.orSuffixes)
}
return bytes.Compare(a.prefix, b.prefix) < 0
return tfs.tfs[i].Less(&tfs.tfs[j])
})
ok, err := is.tryUpdatingMetricIDsForDateRange(metricIDs, tfs, tr, maxMetrics)
if err != nil {
return err
}
if ok {
err := is.tryUpdatingMetricIDsForDateRange(metricIDs, tfs, tr, maxMetrics)
if err == nil {
// Fast path: found metricIDs by date range.
return nil
}
if err != errFallbackToMetricNameMatch {
return err
}
// Slow path - try searching over the whole inverted index.
minTf, minMetricIDs, err := is.getTagFilterWithMinMetricIDsCountOptimized(tfs, tr, maxMetrics)
@@ -2051,30 +2063,41 @@ func (is *indexSearch) getMetricIDsForTimeRange(tr TimeRange, maxMetrics int) (*
// Too many dates must be covered. Give up.
return nil, errMissingMetricIDsForDate
}
if minDate == maxDate {
// Fast path - query on a single day.
metricIDs, err := is.getMetricIDsForDate(minDate, maxMetrics)
if err != nil {
return nil, err
}
atomic.AddUint64(&is.db.dateMetricIDsSearchHits, 1)
return metricIDs, nil
}
// Search for metricIDs for each day in parallel.
// Slower path - query over multiple days in parallel.
metricIDs = &uint64set.Set{}
var wg sync.WaitGroup
var errGlobal error
var mu sync.Mutex // protects metricIDs + errGlobal from concurrent access below.
for minDate <= maxDate {
date := minDate
isLocal := is.db.getIndexSearch()
wg.Add(1)
go func() {
go func(date uint64) {
defer wg.Done()
isLocal := is.db.getIndexSearch()
defer is.db.putIndexSearch(isLocal)
var result uint64set.Set
err := isLocal.getMetricIDsForDate(date, &result, maxMetrics)
m, err := isLocal.getMetricIDsForDate(date, maxMetrics)
mu.Lock()
if metricIDs.Len() < maxMetrics {
metricIDs.UnionMayOwn(&result)
defer mu.Unlock()
if errGlobal != nil {
return
}
if err != nil {
errGlobal = err
return
}
mu.Unlock()
}()
if metricIDs.Len() < maxMetrics {
metricIDs.UnionMayOwn(m)
}
}(minDate)
minDate++
}
wg.Wait()
@@ -2085,126 +2108,184 @@ func (is *indexSearch) getMetricIDsForTimeRange(tr TimeRange, maxMetrics int) (*
return metricIDs, nil
}
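The multi-day path above follows a fan-out pattern that recurs in this file: one goroutine per day, a mutex guarding the shared result set and the first recorded error, and goroutines bailing out early once an error is set. A condensed sketch (searchOneDay is a hypothetical stand-in for the per-day lookup):

func collectPerDay(minDate, maxDate uint64, maxMetrics int,
	searchOneDay func(date uint64) (*uint64set.Set, error)) (*uint64set.Set, error) {
	var (
		wg        sync.WaitGroup
		mu        sync.Mutex
		errGlobal error
	)
	result := &uint64set.Set{}
	for date := minDate; date <= maxDate; date++ {
		wg.Add(1)
		go func(date uint64) {
			defer wg.Done()
			m, err := searchOneDay(date)
			mu.Lock()
			defer mu.Unlock()
			if errGlobal != nil {
				return // another day already failed; discard this result
			}
			if err != nil {
				errGlobal = err
				return
			}
			if result.Len() < maxMetrics {
				result.UnionMayOwn(m)
			}
		}(date)
	}
	wg.Wait()
	if errGlobal != nil {
		return nil, errGlobal
	}
	return result, nil
}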
func (is *indexSearch) tryUpdatingMetricIDsForDateRange(metricIDs *uint64set.Set, tfs *TagFilters, tr TimeRange, maxMetrics int) (bool, error) {
func (is *indexSearch) tryUpdatingMetricIDsForDateRange(metricIDs *uint64set.Set, tfs *TagFilters, tr TimeRange, maxMetrics int) error {
atomic.AddUint64(&is.db.dateRangeSearchCalls, 1)
minDate := uint64(tr.MinTimestamp) / msecPerDay
maxDate := uint64(tr.MaxTimestamp) / msecPerDay
if minDate < is.db.startDateForPerDayInvertedIndex || maxDate < minDate {
// Per-day inverted index doesn't cover the selected date range.
return false, nil
return errFallbackToMetricNameMatch
}
if maxDate-minDate > maxDaysForDateMetricIDs {
// Too many dates must be covered. Give up, since it may be slow.
return false, nil
return errFallbackToMetricNameMatch
}
if minDate == maxDate {
// Fast path - query only a single date.
m, err := is.getMetricIDsForDateAndFilters(minDate, tfs, maxMetrics)
if err != nil {
return err
}
metricIDs.UnionMayOwn(m)
atomic.AddUint64(&is.db.dateRangeSearchHits, 1)
return nil
}
// Search for metricIDs for each day in parallel.
// Slower path - search for metricIDs for each day in parallel.
var wg sync.WaitGroup
var errGlobal error
okGlobal := true
var mu sync.Mutex // protects metricIDs + *Global vars from concurrent access below
var mu sync.Mutex // protects metricIDs + errGlobal vars from concurrent access below
for minDate <= maxDate {
date := minDate
isLocal := is.db.getIndexSearch()
wg.Add(1)
go func() {
go func(date uint64) {
defer wg.Done()
isLocal := is.db.getIndexSearch()
defer is.db.putIndexSearch(isLocal)
var result uint64set.Set
ok, err := isLocal.tryUpdatingMetricIDsForDate(date, &result, tfs, maxMetrics)
m, err := isLocal.getMetricIDsForDateAndFilters(date, tfs, maxMetrics)
mu.Lock()
if metricIDs.Len() < maxMetrics {
metricIDs.UnionMayOwn(&result)
}
if !ok {
okGlobal = ok
defer mu.Unlock()
if errGlobal != nil {
return
}
if err != nil {
if err == errFallbackToMetricNameMatch {
// The per-date search is too expensive. It is probably faster to perform a global search
// using metric name match.
errGlobal = err
return
}
dateStr := time.Unix(int64(date*24*3600), 0)
errGlobal = fmt.Errorf("cannot search for metricIDs for %s: %s", dateStr, err)
return
}
mu.Unlock()
}()
if metricIDs.Len() < maxMetrics {
metricIDs.UnionMayOwn(m)
}
}(minDate)
minDate++
}
wg.Wait()
if errGlobal != nil {
return false, errGlobal
return errGlobal
}
atomic.AddUint64(&is.db.dateRangeSearchHits, 1)
return okGlobal, nil
return nil
}
func (is *indexSearch) tryUpdatingMetricIDsForDate(date uint64, metricIDs *uint64set.Set, tfs *TagFilters, maxMetrics int) (bool, error) {
var tfFirst *tagFilter
func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilters, maxMetrics int) (*uint64set.Set, error) {
// Sort tfs by the number of metricIDs they matched in previous queries.
// This limits the amount of work below by applying the most specific filters first.
type tagFilterWithCount struct {
tf *tagFilter
count uint64
}
tfsWithCount := make([]tagFilterWithCount, len(tfs.tfs))
kb := &is.kb
var buf []byte
for i := range tfs.tfs {
tf := &tfs.tfs[i]
kb.B = appendDateTagFilterCacheKey(kb.B[:0], date, tf)
buf = is.db.metricIDsPerDateTagFilterCache.Get(buf[:0], kb.B)
count := uint64(0)
if len(buf) == 8 {
count = encoding.UnmarshalUint64(buf)
}
tfsWithCount[i] = tagFilterWithCount{
tf: tf,
count: count,
}
}
sort.Slice(tfsWithCount, func(i, j int) bool {
a, b := &tfsWithCount[i], &tfsWithCount[j]
if a.count != b.count {
return a.count < b.count
}
return a.tf.Less(b.tf)
})
// Populate metricIDs with the first non-negative filter.
var tfFirst *tagFilter
for i := range tfsWithCount {
tf := tfsWithCount[i].tf
if tf.isNegative {
continue
}
tfFirst = tf
break
}
var result *uint64set.Set
var metricIDs *uint64set.Set
maxDateMetrics := maxMetrics * 50
if tfFirst == nil {
result = &uint64set.Set{}
if err := is.updateMetricIDsForDateAll(result, date, maxDateMetrics); err != nil {
// All the filters in tfs are negative. Populate all the metricIDs for the given (date),
// so later they can be filtered out with negative filters.
m, err := is.getMetricIDsForDate(date, maxDateMetrics)
if err != nil {
if err == errMissingMetricIDsForDate {
// Zero data points were written on the given date.
// Zero time series were written on the given date.
// It is OK, since (date, metricID) entries must exist for the given date
// according to startDateForPerDayInvertedIndex.
return true, nil
return nil, nil
}
return false, fmt.Errorf("cannot obtain all the metricIDs: %s", err)
return nil, fmt.Errorf("cannot obtain all the metricIDs: %s", err)
}
metricIDs = m
} else {
// Populate metricIDs for the given tfFirst on the given (date)
m, err := is.getMetricIDsForDateTagFilter(tfFirst, date, tfs.commonPrefix, maxDateMetrics)
if err != nil {
if err == errFallbackToMetricNameMatch {
// The per-date search is too expensive. It is probably better to perform a global search
// using metric name match.
return false, nil
}
return false, err
return nil, err
}
result = m
metricIDs = m
}
if result.Len() >= maxDateMetrics {
if metricIDs.Len() >= maxDateMetrics {
// Too many time series found by a single tag filter. Fall back to global search.
return false, nil
return nil, errFallbackToMetricNameMatch
}
for i := range tfs.tfs {
tf := &tfs.tfs[i]
// Intersect metricIDs with the rest of the filters.
for i := range tfsWithCount {
tfWithCount := &tfsWithCount[i]
tf := tfWithCount.tf
if tf == tfFirst {
continue
}
if n := uint64(metricIDs.Len()); n < 1000 || n < tfWithCount.count/maxIndexScanLoopsPerMetric {
// It should be faster to perform a metricName match for the remaining filters
// instead of scanning a big number of inverted index entries for them.
tfsRemaining := tfsWithCount[i:]
tfsPostponed := make([]*tagFilter, 0, len(tfsRemaining))
for j := range tfsRemaining {
tf := tfsRemaining[j].tf
if tf == tfFirst {
continue
}
tfsPostponed = append(tfsPostponed, tf)
}
var m uint64set.Set
if err := is.updateMetricIDsByMetricNameMatch(&m, metricIDs, tfsPostponed); err != nil {
return nil, err
}
return &m, nil
}
m, err := is.getMetricIDsForDateTagFilter(tf, date, tfs.commonPrefix, maxDateMetrics)
if err != nil {
if err == errFallbackToMetricNameMatch {
// The per-date search is too expensive. It is probably better to perform a global search
// using metric name match.
return false, nil
}
return false, err
return nil, err
}
if m.Len() >= maxDateMetrics {
// Too many time series found by a single tag filter. Fall back to global search.
return false, nil
return nil, errFallbackToMetricNameMatch
}
if tf.isNegative {
result.Subtract(m)
metricIDs.Subtract(m)
} else {
result.Intersect(m)
metricIDs.Intersect(m)
}
if result.Len() == 0 {
return true, nil
if metricIDs.Len() == 0 {
// Short circuit - there is no need to apply the remaining filters to an empty set.
return nil, nil
}
}
metricIDs.UnionMayOwn(result)
return true, nil
return metricIDs, nil
}
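The postponement condition above can be read as a rough cost estimate: scanning a filter's index entries costs about count loops, while matching metric names costs about maxIndexScanLoopsPerMetric (100, per the constant later in this file) loops per remaining metricID. For example, if the candidate set has shrunk to n = 5,000 metricIDs and the next filter previously matched count = 1,000,000 entries, then n < count/100 = 10,000 holds, so the remaining filters are applied via metricName match instead of index scans; the n < 1000 clause additionally postpones unconditionally for small candidate sets.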
func (is *indexSearch) getMetricIDsForRecentHours(tr TimeRange, maxMetrics int) (*uint64set.Set, bool) {
@@ -2329,70 +2410,45 @@ func (is *indexSearch) getMetricIDsForDateTagFilter(tf *tagFilter, date uint64,
tfNew := *tf
tfNew.isNegative = false // isNegative for the original tf is handled by the caller.
tfNew.prefix = kb.B
return is.getMetricIDsForTagFilter(&tfNew, maxMetrics)
metricIDs, err := is.getMetricIDsForTagFilter(&tfNew, maxMetrics)
// Store the number of matching metricIDs in the cache in order to sort tag filters
// in ascending number of matching metricIDs on the next search.
is.kb.B = appendDateTagFilterCacheKey(is.kb.B[:0], date, tf)
metricIDsLen := uint64(metricIDs.Len())
if err != nil {
// Set metricIDsLen to maxMetrics, so the given entry will be moved to the end
// of tag filters on the next search.
metricIDsLen = uint64(maxMetrics)
}
kb.B = encoding.MarshalUint64(kb.B[:0], metricIDsLen)
is.db.metricIDsPerDateTagFilterCache.Set(is.kb.B, kb.B)
return metricIDs, err
}
func (is *indexSearch) getMetricIDsForDate(date uint64, metricIDs *uint64set.Set, maxMetrics int) error {
ts := &is.ts
kb := &is.kb
kb.B = marshalCommonPrefix(kb.B[:0], nsPrefixDateToMetricID)
kb.B = encoding.MarshalUint64(kb.B, date)
ts.Seek(kb.B)
items := 0
for metricIDs.Len() < maxMetrics && ts.NextItem() {
if !bytes.HasPrefix(ts.Item, kb.B) {
break
}
// Extract MetricID from ts.Item (the last 8 bytes).
v := ts.Item[len(kb.B):]
if len(v) != 8 {
return fmt.Errorf("cannot extract metricID from k; want %d bytes; got %d bytes", 8, len(v))
}
metricID := encoding.UnmarshalUint64(v)
metricIDs.Add(metricID)
items++
}
if err := ts.Error(); err != nil {
return fmt.Errorf("error when searching for metricIDs for date %d: %s", date, err)
}
if items == 0 {
// There are no metricIDs for the given date.
// This may be the case for old data when Date -> MetricID wasn't available.
return errMissingMetricIDsForDate
}
return nil
func appendDateTagFilterCacheKey(dst []byte, date uint64, tf *tagFilter) []byte {
dst = encoding.MarshalUint64(dst, date)
dst = tf.Marshal(dst)
return dst
}
func (is *indexSearch) containsTimeRange(tr TimeRange) (bool, error) {
ts := &is.ts
kb := &is.kb
// Verify whether the maximum date in `ts` covers tr.MinTimestamp.
minDate := uint64(tr.MinTimestamp) / msecPerDay
kb.B = marshalCommonPrefix(kb.B[:0], nsPrefixDateToMetricID)
kb.B = encoding.MarshalUint64(kb.B, minDate)
ts.Seek(kb.B)
if !ts.NextItem() {
if err := ts.Error(); err != nil {
return false, fmt.Errorf("error when searching for minDate=%d, prefix %q: %s", minDate, kb.B, err)
}
return false, nil
}
if !bytes.HasPrefix(ts.Item, kb.B[:1]) {
// minDate exceeds max date from ts.
return false, nil
}
return true, nil
}
func (is *indexSearch) updateMetricIDsForDateAll(metricIDs *uint64set.Set, date uint64, maxMetrics int) error {
func (is *indexSearch) getMetricIDsForDate(date uint64, maxMetrics int) (*uint64set.Set, error) {
// Extract all the metricIDs from (date, __name__=value)->metricIDs entries.
kb := kbPool.Get()
defer kbPool.Put(kb)
kb.B = marshalCommonPrefix(kb.B[:0], nsPrefixTagToMetricIDs)
kb.B = marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
kb.B = encoding.MarshalUint64(kb.B, date)
kb.B = marshalTagValue(kb.B, nil)
return is.updateMetricIDsForPrefix(kb.B, metricIDs, maxMetrics)
var metricIDs uint64set.Set
if err := is.updateMetricIDsForPrefix(kb.B, &metricIDs, maxMetrics); err != nil {
return nil, err
}
if metricIDs.Len() == 0 {
// There are no metricIDs for the given date.
// This may be the case for old data where (date, __name__=value)->metricIDs entries weren't available.
return nil, errMissingMetricIDsForDate
}
return &metricIDs, nil
}
func (is *indexSearch) updateMetricIDsAll(metricIDs *uint64set.Set, maxMetrics int) error {
@@ -2441,7 +2497,7 @@ func (is *indexSearch) updateMetricIDsForPrefix(prefix []byte, metricIDs *uint64
// over the found metrics.
const maxIndexScanLoopsPerMetric = 100
// The maximum number of slow index scan loops per.
// The maximum number of slow index scan loops.
// Bigger number of loops is slower than updateMetricIDsByMetricNameMatch
// over the found metrics.
const maxIndexScanSlowLoopsPerMetric = 20

View File

@@ -80,6 +80,12 @@ type Storage struct {
currHourMetricIDsUpdaterWG sync.WaitGroup
retentionWatcherWG sync.WaitGroup
prefetchedMetricIDsCleanerWG sync.WaitGroup
// The snapshotLock prevents from concurrent creation of snapshots,
// since this may result in snapshots without recently added data,
// which may be in the process of flushing to disk by concurrently running
// snapshot process.
snapshotLock sync.Mutex
}
// OpenStorage opens storage on the given path with the given number of retention months.
@@ -178,6 +184,9 @@ func (s *Storage) CreateSnapshot() (string, error) {
logger.Infof("creating Storage snapshot for %q...", s.path)
startTime := time.Now()
s.snapshotLock.Lock()
defer s.snapshotLock.Unlock()
snapshotName := fmt.Sprintf("%s-%08X", time.Now().UTC().Format("20060102150405"), nextSnapshotIdx())
srcDir := s.path
dstDir := fmt.Sprintf("%s/snapshots/%s", srcDir, snapshotName)

View File

@@ -675,35 +675,43 @@ func checkTagKeys(tks []string, tksExpected map[string]bool) error {
return nil
}
func TestStorageAddRows(t *testing.T) {
path := "TestStorageAddRows"
func TestStorageAddRowsSerial(t *testing.T) {
path := "TestStorageAddRowsSerial"
s, err := OpenStorage(path, 0)
if err != nil {
t.Fatalf("cannot open storage: %s", err)
}
t.Run("serial", func(t *testing.T) {
if err := testStorageAddRows(s); err != nil {
t.Fatalf("unexpected error: %s", err)
}
})
t.Run("concurrent", func(t *testing.T) {
ch := make(chan error, 3)
for i := 0; i < cap(ch); i++ {
go func() {
ch <- testStorageAddRows(s)
}()
}
for i := 0; i < cap(ch); i++ {
select {
case err := <-ch:
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
case <-time.After(3 * time.Second):
t.Fatalf("timeout")
if err := testStorageAddRows(s); err != nil {
t.Fatalf("unexpected error: %s", err)
}
s.MustClose()
if err := os.RemoveAll(path); err != nil {
t.Fatalf("cannot remove %q: %s", path, err)
}
}
func TestStorageAddRowsConcurrent(t *testing.T) {
path := "TestStorageAddRowsConcurrent"
s, err := OpenStorage(path, 0)
if err != nil {
t.Fatalf("cannot open storage: %s", err)
}
ch := make(chan error, 3)
for i := 0; i < cap(ch); i++ {
go func() {
ch <- testStorageAddRows(s)
}()
}
for i := 0; i < cap(ch); i++ {
select {
case err := <-ch:
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
case <-time.After(10 * time.Second):
t.Fatalf("timeout")
}
})
}
s.MustClose()
if err := os.RemoveAll(path); err != nil {
t.Fatalf("cannot remove %q: %s", path, err)

View File

@@ -33,6 +33,8 @@ func NewTagFilters() *TagFilters {
// Add adds the given tag filter to tfs.
//
// MetricGroup must be encoded with nil key.
//
// Finalize must be called after tfs is constructed.
func (tfs *TagFilters) Add(key, value []byte, isNegative, isRegexp bool) error {
// Verify whether tag filter is empty.
if len(value) == 0 {
@@ -66,6 +68,34 @@ func (tfs *TagFilters) Add(key, value []byte, isNegative, isRegexp bool) error {
return nil
}
// Finalize finalizes tfs and may return complementary TagFilters,
// which must be added to the resulting set of tag filters.
func (tfs *TagFilters) Finalize() []*TagFilters {
var tfssNew []*TagFilters
for i := range tfs.tfs {
tf := &tfs.tfs[i]
if tf.matchesEmptyValue {
// tf matches an empty value, so it must be accompanied by a `key!~".+"` tag filter
// in order to match time series without the given label.
tfssNew = append(tfssNew, tfs.cloneWithNegativeFilter(tf))
}
}
return tfssNew
}
func (tfs *TagFilters) cloneWithNegativeFilter(tfNegative *tagFilter) *TagFilters {
tfsNew := NewTagFilters()
for i := range tfs.tfs {
tf := &tfs.tfs[i]
if tf == tfNegative {
tfsNew.Add(tf.key, []byte(".+"), true, true)
} else {
tfsNew.Add(tf.key, tf.value, tf.isNegative, tf.isRegexp)
}
}
return tfsNew
}
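A sketch of how Finalize complements a filter that matches the empty value (the helper is hypothetical; error handling elided). For the selector {label=~"foo|"}, which must match both series with label="foo" and series without the label at all:

func buildFooOrEmptyFilters() []*TagFilters {
	tfs := NewTagFilters()
	_ = tfs.Add([]byte("label"), []byte("foo|"), false, true) // {label=~"foo|"}
	tfssExtra := tfs.Finalize()
	// tfssExtra holds one TagFilters equivalent to {label!~".+"}, matching
	// time series without the `label` tag; a search must run over both
	// tfs and tfssExtra and merge the results.
	return append([]*TagFilters{tfs}, tfssExtra...)
}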
// String returns human-readable value for tfs.
func (tfs *TagFilters) String() string {
if len(tfs.tfs) == 0 {
@@ -102,7 +132,7 @@ type tagFilter struct {
// Prefix always contains {nsPrefixTagToMetricIDs, key}.
// Additionally it contains:
// - value ending with tagSeparatorChar if !isRegexp.
// - value if !isRegexp.
// - non-regexp prefix if isRegexp.
prefix []byte
@@ -111,6 +141,26 @@ type tagFilter struct {
// Matches regexp suffix.
reSuffixMatch func(b []byte) bool
// Set to true for a filter that matches an empty value, e.g. "", "|foo" or ".*"
//
// Such a filter must be applied directly to metricNames.
matchesEmptyValue bool
}
func (tf *tagFilter) Less(other *tagFilter) bool {
// Move regexp and negative filters to the end, since they require scanning
// all the entries for the given label.
if tf.isRegexp != other.isRegexp {
return !tf.isRegexp
}
if tf.isNegative != other.isNegative {
return !tf.isNegative
}
if len(tf.orSuffixes) != len(other.orSuffixes) {
return len(tf.orSuffixes) < len(other.orSuffixes)
}
return bytes.Compare(tf.prefix, other.prefix) < 0
}
// String returns human-readable tf value.
@@ -196,6 +246,9 @@ func (tf *tagFilter) Init(commonPrefix, key, value []byte, isNegative, isRegexp
}
tf.orSuffixes = append(tf.orSuffixes[:0], rcv.orValues...)
tf.reSuffixMatch = rcv.reMatch
if len(prefix) == 0 && !tf.isNegative && tf.reSuffixMatch(nil) {
tf.matchesEmptyValue = true
}
return nil
}

View File

@@ -4,6 +4,7 @@ import (
"math/bits"
"sort"
"sync"
"sync/atomic"
"unsafe"
)
@@ -16,6 +17,10 @@ import (
type Set struct {
itemsCount int
buckets bucket32Sorter
// Most likely buckets contains only a single item, so it is kept inline here for performance reasons
// in order to improve memory locality.
scratchBuckets [1]bucket32
}
type bucket32Sorter []bucket32
@@ -38,7 +43,11 @@ func (s *Set) Clone() *Set {
}
var dst Set
dst.itemsCount = s.itemsCount
dst.buckets = make([]bucket32, len(s.buckets))
if len(s.buckets) == 1 {
dst.buckets = dst.scratchBuckets[:]
} else {
dst.buckets = make([]bucket32, len(s.buckets))
}
for i := range s.buckets {
s.buckets[i].copyTo(&dst.buckets[i])
}
@@ -56,6 +65,9 @@ func (s *Set) fixItemsCount() {
func (s *Set) cloneShallow() *Set {
var dst Set
dst.itemsCount = s.itemsCount
if len(s.buckets) == 1 {
dst.buckets = dst.scratchBuckets[:]
}
dst.buckets = append(dst.buckets[:0], s.buckets...)
return &dst
}
@@ -84,18 +96,37 @@ func (s *Set) Len() int {
// Add adds x to s.
func (s *Set) Add(x uint64) {
hi := uint32(x >> 32)
lo := uint32(x)
for i := range s.buckets {
b32 := &s.buckets[i]
if b32.hi == hi {
if b32.add(lo) {
hi32 := uint32(x >> 32)
lo32 := uint32(x)
bs := s.buckets
if len(bs) > 0 && bs[0].hi == hi32 {
// Manually inline bucket32.add for performance reasons.
hi16 := uint16(lo32 >> 16)
lo16 := uint16(lo32)
b32 := &bs[0]
his := b32.b16his
if n := b32.getHint(); n < uint32(len(his)) && his[n] == hi16 {
bs := b32.buckets
if n < uint32(len(bs)) && bs[n].add(lo16) {
s.itemsCount++
}
return
}
if b32.addSlow(hi16, lo16) {
s.itemsCount++
}
return
}
for i := range bs {
b32 := &bs[i]
if b32.hi == hi32 {
if b32.add(lo32) {
s.itemsCount++
}
return
}
}
s.addAlloc(hi, lo)
s.addAlloc(hi32, lo32)
}
func (s *Set) addAlloc(hi, lo uint32) {
@@ -106,7 +137,11 @@ func (s *Set) addAlloc(hi, lo uint32) {
}
func (s *Set) addBucket32() *bucket32 {
s.buckets = append(s.buckets, bucket32{})
if len(s.buckets) == 0 {
s.buckets = s.scratchBuckets[:]
} else {
s.buckets = append(s.buckets, bucket32{})
}
return &s.buckets[len(s.buckets)-1]
}
@@ -115,12 +150,26 @@ func (s *Set) Has(x uint64) bool {
if s == nil {
return false
}
hi := uint32(x >> 32)
lo := uint32(x)
for i := range s.buckets {
b32 := &s.buckets[i]
if b32.hi == hi {
return b32.has(lo)
hi32 := uint32(x >> 32)
lo32 := uint32(x)
bs := s.buckets
if len(bs) > 0 && bs[0].hi == hi32 {
// Manually inline bucket32.has for performance reasons.
hi16 := uint16(lo32 >> 16)
lo16 := uint16(lo32)
b32 := &bs[0]
his := b32.b16his
if n := b32.getHint(); n < uint32(len(his)) && his[n] == hi16 {
// Fast path - check the previously used bucket.
bs := b32.buckets
return n < uint32(len(bs)) && bs[n].has(lo16)
}
return b32.hasSlow(hi16, lo16)
}
for i := range bs {
b32 := &bs[i]
if b32.hi == hi32 {
return b32.has(lo32)
}
}
return false
@@ -130,8 +179,15 @@ func (s *Set) Has(x uint64) bool {
func (s *Set) Del(x uint64) {
hi := uint32(x >> 32)
lo := uint32(x)
for i := range s.buckets {
b32 := &s.buckets[i]
bs := s.buckets
if len(bs) > 0 && bs[0].hi == hi {
if bs[0].del(lo) {
s.itemsCount--
}
return
}
for i := range bs {
b32 := &bs[i]
if b32.hi == hi {
if b32.del(lo) {
s.itemsCount--
@@ -205,12 +261,12 @@ func (s *Set) union(a *Set, mayOwn bool) {
s.sort()
i := 0
j := 0
sbuckets := s.buckets
sBucketsLen := len(s.buckets)
for {
for i < len(sbuckets) && j < len(a.buckets) && sbuckets[i].hi < a.buckets[j].hi {
for i < sBucketsLen && j < len(a.buckets) && s.buckets[i].hi < a.buckets[j].hi {
i++
}
if i >= len(sbuckets) {
if i >= sBucketsLen {
for j < len(a.buckets) {
b32 := s.addBucket32()
a.buckets[j].copyTo(b32)
@@ -218,7 +274,7 @@ func (s *Set) union(a *Set, mayOwn bool) {
}
break
}
for j < len(a.buckets) && a.buckets[j].hi < sbuckets[i].hi {
for j < len(a.buckets) && a.buckets[j].hi < s.buckets[i].hi {
b32 := s.addBucket32()
a.buckets[j].copyTo(b32)
j++
@@ -226,8 +282,8 @@ func (s *Set) union(a *Set, mayOwn bool) {
if j >= len(a.buckets) {
break
}
if sbuckets[i].hi == a.buckets[j].hi {
sbuckets[i].union(&a.buckets[j], mayOwn)
if s.buckets[i].hi == a.buckets[j].hi {
s.buckets[i].union(&a.buckets[j], mayOwn)
i++
j++
}
@@ -323,22 +379,19 @@ func (s *Set) ForEach(f func(part []uint64) bool) {
}
type bucket32 struct {
hi uint32
b16his []uint16
buckets []bucket16
hi uint32
// hint may contain bucket index for the last successful add or del operation.
// hint may contain bucket index for the last successful operation.
// This allows saving CPU time on subsequent calls to the same bucket.
hint int
}
hint uint32
func (b *bucket32) cloneShallow() *bucket32 {
var dst bucket32
dst.hi = b.hi
dst.b16his = append(dst.b16his[:0], b.b16his...)
dst.buckets = append(dst.buckets[:0], b.buckets...)
dst.hint = b.hint
return &dst
// b16his contains high 16 bits for each bucket in buckets.
//
// It is always sorted.
b16his []uint16
// buckets are sorted by b16his
buckets []bucket16
}
func (b *bucket32) getLen() int {
@@ -350,49 +403,60 @@ func (b *bucket32) getLen() int {
}
func (b *bucket32) union(a *bucket32, mayOwn bool) {
if !mayOwn {
a = a.cloneShallow() // clone a, since it is sorted below.
}
a.sort()
b.sort()
i := 0
j := 0
bb16his := b.b16his
bBucketsLen := len(b.buckets)
for {
for i < len(bb16his) && j < len(a.b16his) && bb16his[i] < a.b16his[j] {
for i < bBucketsLen && j < len(a.b16his) && b.b16his[i] < a.b16his[j] {
i++
}
if i >= len(bb16his) {
if i >= bBucketsLen {
for j < len(a.b16his) {
b.b16his = append(b.b16his, a.b16his[j])
b16 := b.addBucket16()
a.buckets[j].copyTo(b16)
b16 := b.addBucket16(a.b16his[j])
if mayOwn {
*b16 = a.buckets[j]
} else {
a.buckets[j].copyTo(b16)
}
j++
}
break
}
for j < len(a.b16his) && a.b16his[j] < bb16his[i] {
b.b16his = append(b.b16his, a.b16his[j])
b16 := b.addBucket16()
a.buckets[j].copyTo(b16)
for j < len(a.b16his) && a.b16his[j] < b.b16his[i] {
b16 := b.addBucket16(a.b16his[j])
if mayOwn {
*b16 = a.buckets[j]
} else {
a.buckets[j].copyTo(b16)
}
j++
}
if j >= len(a.b16his) {
break
}
if bb16his[i] == a.b16his[j] {
if b.b16his[i] == a.b16his[j] {
b.buckets[i].union(&a.buckets[j])
i++
j++
}
}
b.sort()
// Restore the bucket order, which could have been violated by the merge above.
if !sort.IsSorted(b) {
sort.Sort(b)
}
}
// This is for sort.Interface used in bucket32.union
func (b *bucket32) Len() int { return len(b.b16his) }
func (b *bucket32) Less(i, j int) bool { return b.b16his[i] < b.b16his[j] }
func (b *bucket32) Swap(i, j int) {
his := b.b16his
buckets := b.buckets
his[i], his[j] = his[j], his[i]
buckets[i], buckets[j] = buckets[j], buckets[i]
}
func (b *bucket32) intersect(a *bucket32) {
a = a.cloneShallow() // clone a, since it is sorted below.
a.sort()
b.sort()
i := 0
j := 0
for {
@@ -419,6 +483,23 @@ func (b *bucket32) intersect(a *bucket32) {
j++
}
}
// Remove zero buckets
b16his := b.b16his[:0]
bs := b.buckets[:0]
for i := range b.buckets {
b32 := &b.buckets[i]
if b32.isZero() {
continue
}
b16his = append(b16his, b.b16his[i])
bs = append(bs, *b32)
}
for i := len(bs); i < len(b.buckets); i++ {
b.buckets[i] = bucket16{}
}
b.hint = 0
b.b16his = b16his
b.buckets = bs
}
func (b *bucket32) forEach(f func(part []uint64) bool) bool {
@@ -465,73 +546,53 @@ func (b *bucket32) copyTo(dst *bucket32) {
b.buckets[i].copyTo(&dst.buckets[i])
}
}
dst.hint = b.hint
}
// This is for sort.Interface
func (b *bucket32) Len() int { return len(b.b16his) }
func (b *bucket32) Less(i, j int) bool { return b.b16his[i] < b.b16his[j] }
func (b *bucket32) Swap(i, j int) {
his := b.b16his
buckets := b.buckets
his[i], his[j] = his[j], his[i]
buckets[i], buckets[j] = buckets[j], buckets[i]
func (b *bucket32) getHint() uint32 {
return atomic.LoadUint32(&b.hint)
}
const maxUnsortedBuckets = 32
func (b *bucket32) setHint(n int) {
atomic.StoreUint32(&b.hint, uint32(n))
}
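A plausible reason the hint is accessed atomically rather than as a plain int (inferred from the code, not documented here): read-only operations such as Has also update the hint via hasSlow, so two concurrent readers may race on it; atomic load/store keeps that race benign without requiring a mutex, and a stale hint only costs a fallback to the slow path.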
func (b *bucket32) add(x uint32) bool {
hi := uint16(x >> 16)
lo := uint16(x)
if n := b.hint; n < len(b.b16his) && b.b16his[n] == hi {
his := b.b16his
if n := b.getHint(); n < uint32(len(his)) && his[n] == hi {
// Fast path - add to the previously used bucket.
return n < len(b.buckets) && b.buckets[n].add(lo)
bs := b.buckets
return n < uint32(len(bs)) && bs[n].add(lo)
}
return b.addSlow(hi, lo)
}
func (b *bucket32) addSlow(hi, lo uint16) bool {
if len(b.buckets) > maxUnsortedBuckets {
n := binarySearch16(b.b16his, hi)
b.hint = n
if n < 0 || n >= len(b.b16his) || b.b16his[n] != hi {
b.addAllocBig(hi, lo, n)
return true
}
return n < len(b.buckets) && b.buckets[n].add(lo)
his := b.b16his
n := binarySearch16(his, hi)
if n < 0 || n >= len(his) || his[n] != hi {
b.addAlloc(hi, lo, n)
return true
}
for i, hi16 := range b.b16his {
if hi16 == hi {
b.hint = i
return i < len(b.buckets) && b.buckets[i].add(lo)
}
}
b.addAllocSmall(hi, lo)
return true
b.setHint(n)
bs := b.buckets
return n < len(bs) && bs[n].add(lo)
}
func (b *bucket32) addAllocSmall(hi, lo uint16) {
func (b *bucket32) addBucket16(hi uint16) *bucket16 {
b.b16his = append(b.b16his, hi)
b16 := b.addBucket16()
_ = b16.add(lo)
if len(b.buckets) > maxUnsortedBuckets {
sort.Sort(b)
}
}
func (b *bucket32) addBucket16() *bucket16 {
b.buckets = append(b.buckets, bucket16{})
return &b.buckets[len(b.buckets)-1]
}
func (b *bucket32) addAllocBig(hi, lo uint16, n int) {
func (b *bucket32) addAlloc(hi, lo uint16, n int) {
if n < 0 {
// This is a hint to the Go compiler to remove automatic bounds checks below.
return
}
if n >= len(b.b16his) {
b.b16his = append(b.b16his, hi)
b16 := b.addBucket16()
b16 := b.addBucket16(hi)
_ = b16.add(lo)
return
}
@@ -546,57 +607,50 @@ func (b *bucket32) addAllocBig(hi, lo uint16, n int) {
func (b *bucket32) has(x uint32) bool {
hi := uint16(x >> 16)
lo := uint16(x)
if len(b.buckets) > maxUnsortedBuckets {
return b.hasSlow(hi, lo)
his := b.b16his
if n := b.getHint(); n < uint32(len(his)) && his[n] == hi {
// Fast path - check the previously used bucket.
bs := b.buckets
return n < uint32(len(bs)) && bs[n].has(lo)
}
for i, hi16 := range b.b16his {
if hi16 == hi {
return i < len(b.buckets) && b.buckets[i].has(lo)
}
}
return false
return b.hasSlow(hi, lo)
}
func (b *bucket32) hasSlow(hi, lo uint16) bool {
n := binarySearch16(b.b16his, hi)
if n < 0 || n >= len(b.b16his) || b.b16his[n] != hi {
his := b.b16his
n := binarySearch16(his, hi)
if n < 0 || n >= len(his) || his[n] != hi {
return false
}
return n < len(b.buckets) && b.buckets[n].has(lo)
b.setHint(n)
bs := b.buckets
return n < len(bs) && bs[n].has(lo)
}
func (b *bucket32) del(x uint32) bool {
hi := uint16(x >> 16)
lo := uint16(x)
if n := b.hint; n < len(b.b16his) && b.b16his[n] == hi {
his := b.b16his
if n := b.getHint(); n < uint32(len(his)) && his[n] == hi {
// Fast path - use the bucket from the previous operation.
return n < len(b.buckets) && b.buckets[n].del(lo)
bs := b.buckets
return n < uint32(len(bs)) && bs[n].del(lo)
}
return b.delSlow(hi, lo)
}
func (b *bucket32) delSlow(hi, lo uint16) bool {
if len(b.buckets) > maxUnsortedBuckets {
n := binarySearch16(b.b16his, hi)
b.hint = n
if n < 0 || n >= len(b.b16his) || b.b16his[n] != hi {
return false
}
return n < len(b.buckets) && b.buckets[n].del(lo)
his := b.b16his
n := binarySearch16(his, hi)
if n < 0 || n >= len(his) || his[n] != hi {
return false
}
for i, hi16 := range b.b16his {
if hi16 == hi {
b.hint = i
return i < len(b.buckets) && b.buckets[i].del(lo)
}
}
return false
b.setHint(n)
bs := b.buckets
return n < len(bs) && bs[n].del(lo)
}
func (b *bucket32) appendTo(dst []uint64) []uint64 {
if len(b.buckets) <= maxUnsortedBuckets {
b.sort()
}
for i := range b.buckets {
hi16 := b.b16his[i]
dst = b.buckets[i].appendTo(dst, b.hi, hi16)
@@ -604,12 +658,6 @@ func (b *bucket32) appendTo(dst []uint64) []uint64 {
return dst
}
func (b *bucket32) sort() {
if !sort.IsSorted(b) {
sort.Sort(b)
}
}
const (
bitsPerBucket = 1 << 16
wordsPerBucket = bitsPerBucket / 64
@@ -621,6 +669,10 @@ type bucket16 struct {
smallPool [56]uint16
}
func (b *bucket16) isZero() bool {
return b.bits == nil && b.smallPoolLen == 0
}
func (b *bucket16) getLen() int {
if b.bits == nil {
return b.smallPoolLen
@@ -637,10 +689,13 @@ func (b *bucket16) getLen() int {
func (b *bucket16) union(a *bucket16) {
if a.bits != nil && b.bits != nil {
// Fast path - use bitwise ops.
for i, ax := range a.bits {
bx := b.bits[i]
ab := a.bits
bb := b.bits
_ = bb[len(ab)-1]
for i, ax := range ab {
bx := bb[i]
bx |= ax
b.bits[i] = bx
bb[i] = bx
}
return
}
@@ -660,10 +715,13 @@ func (b *bucket16) union(a *bucket16) {
func (b *bucket16) intersect(a *bucket16) {
if a.bits != nil && b.bits != nil {
// Fast path - use bitwise ops
for i, ax := range a.bits {
bx := b.bits[i]
ab := a.bits
bb := b.bits
_ = bb[len(ab)-1]
for i, ax := range ab {
bx := bb[i]
bx &= ax
b.bits[i] = bx
bb[i] = bx
}
return
}
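The `_ = bb[len(ab)-1]` lines above are a common Go bounds-check-elimination hint: the single up-front index expression proves to the compiler that bb is at least as long as ab, so the indexed accesses inside the loop need no per-iteration bounds checks. A standalone illustration (the helper is hypothetical):

// orInto ORs ab into bb; bb must be at least len(ab) long.
func orInto(bb, ab []uint64) {
	_ = bb[len(ab)-1] // hint: fail fast and let the compiler elide checks below
	for i, ax := range ab {
		bb[i] |= ax
	}
}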

View File

@@ -9,8 +9,219 @@ import (
"time"
)
func TestSetOps(t *testing.T) {
f := func(a, b []uint64) {
t.Helper()
mUnion := make(map[uint64]bool)
mIntersect := make(map[uint64]bool)
ma := make(map[uint64]bool)
sa := &Set{}
sb := &Set{}
for _, v := range a {
sa.Add(v)
ma[v] = true
mUnion[v] = true
}
for _, v := range b {
sb.Add(v)
mUnion[v] = true
if ma[v] {
mIntersect[v] = true
}
}
saOrig := sa.Clone()
if !saOrig.Equal(sa) {
t.Fatalf("saOrig must be equal to sa; got\n%v\nvs\n%v", saOrig, sa)
}
sbOrig := sb.Clone()
if !sbOrig.Equal(sb) {
t.Fatalf("sbOrig must be equal to sb; got\n%v\nvs\n%v", sbOrig, sb)
}
// Verify sa.Union(sb)
sa.Union(sb)
if err := expectEqual(sa, mUnion); err != nil {
t.Fatalf("ivalid sa.Union(sb): %s", err)
}
if !sbOrig.Equal(sb) {
t.Fatalf("sbOrig must be equal to sb after sa.Union(sb); got\n%v\nvs\n%v", sbOrig, sb)
}
// Verify sb.Union(sa)
sa = saOrig.Clone()
sb.Union(sa)
if err := expectEqual(sb, mUnion); err != nil {
t.Fatalf("invalid sb.Union(sa): %s", err)
}
if !saOrig.Equal(sa) {
t.Fatalf("saOrig must be equal to sa after sb.Union(sa); got\n%v\nvs\n%v", saOrig, sa)
}
// Verify sa.UnionMayOwn(sb)
sa = saOrig.Clone()
sb = sbOrig.Clone()
sa.UnionMayOwn(sb)
if err := expectEqual(sa, mUnion); err != nil {
t.Fatalf("invalid sa.UnionMayOwn(sb): %s", err)
}
if !sbOrig.Equal(sb) {
t.Fatalf("sbOrig must be equal to sb after sa.UnionMayOwn(sb); got\n%v\nvs\n%v", sbOrig, sb)
}
// Verify sb.UnionMayOwn(sa)
sa = saOrig.Clone()
sb.UnionMayOwn(sa)
if err := expectEqual(sb, mUnion); err != nil {
t.Fatalf("invalid sb.UnionMayOwn(sa): %s", err)
}
if !saOrig.Equal(sa) {
t.Fatalf("saOrig must be equal to sa after sb.UnionMayOwn(sa); got\n%v\nvs\n%v", saOrig, sa)
}
// Verify sa.Intersect(sb)
sa = saOrig.Clone()
sb = sbOrig.Clone()
sa.Intersect(sb)
if err := expectEqual(sa, mIntersect); err != nil {
t.Fatalf("invalid sa.Intersect(sb): %s", err)
}
if !sbOrig.Equal(sb) {
t.Fatalf("sbOrig must be equal to sb after sa.Intersect(sb); got\n%v\nvs\n%v", sbOrig, sb)
}
// Verify sb.Intersect(sa)
sa = saOrig.Clone()
sb.Intersect(sa)
if err := expectEqual(sb, mIntersect); err != nil {
t.Fatalf("invalid sb.Intersect(sa): %s", err)
}
if !saOrig.Equal(sa) {
t.Fatalf("saOrig must be equal to sa after sb.Intersect(sa); got\n%v\nvs\n%v", saOrig, sa)
}
// Verify sa.Subtract(sb)
mSubtractAB := make(map[uint64]bool)
for _, v := range a {
mSubtractAB[v] = true
}
for _, v := range b {
delete(mSubtractAB, v)
}
sa = saOrig.Clone()
sb = sbOrig.Clone()
sa.Subtract(sb)
if err := expectEqual(sa, mSubtractAB); err != nil {
t.Fatalf("invalid sa.Subtract(sb): %s", err)
}
if !sbOrig.Equal(sb) {
t.Fatalf("sbOrig must be equal to sb after sa.Subtract(sb); got\n%v\nvs\n%v", sbOrig, sb)
}
// Verify sb.Subtract(sa)
mSubtractBA := make(map[uint64]bool)
for _, v := range b {
mSubtractBA[v] = true
}
for _, v := range a {
delete(mSubtractBA, v)
}
sa = saOrig.Clone()
sb.Subtract(sa)
if err := expectEqual(sb, mSubtractBA); err != nil {
t.Fatalf("invalid sb.Subtract(sa): %s", err)
}
if !saOrig.Equal(sa) {
t.Fatalf("saOrig must be equal to sa after sb.Subtract(sa); got\n%v\nvs\n%v", saOrig, sa)
}
}
f(nil, nil)
f([]uint64{1}, nil)
f([]uint64{1, 2, 3}, nil)
f([]uint64{1, 2, 3, 1 << 16, 1 << 32, 2 << 32}, nil)
f([]uint64{1}, []uint64{1})
f([]uint64{0}, []uint64{1 << 16})
f([]uint64{1}, []uint64{1 << 16})
f([]uint64{1}, []uint64{4 << 16})
f([]uint64{1}, []uint64{1 << 32})
f([]uint64{1}, []uint64{1 << 32, 2 << 32})
f([]uint64{1}, []uint64{2 << 32})
f([]uint64{1, 1<<16 - 1}, []uint64{1 << 16})
f([]uint64{0, 1<<16 - 1}, []uint64{1 << 16, 1<<16 - 1})
f([]uint64{0, 1<<16 - 1}, []uint64{1 << 16, 1<<16 - 1, 2 << 16, 8 << 16})
f([]uint64{0}, []uint64{1 << 16, 1<<16 - 1, 2 << 16, 8 << 16})
f([]uint64{0, 2 << 16}, []uint64{1 << 16})
f([]uint64{0, 2 << 16}, []uint64{1 << 16, 3 << 16})
f([]uint64{0, 2 << 16}, []uint64{1 << 16, 2 << 16})
f([]uint64{0, 2 << 16}, []uint64{1 << 16, 2 << 16, 3 << 16})
f([]uint64{0, 2 << 32}, []uint64{1 << 32})
f([]uint64{0, 2 << 32}, []uint64{1 << 32, 3 << 32})
f([]uint64{0, 2 << 32}, []uint64{1 << 32, 2 << 32})
f([]uint64{0, 2 << 32}, []uint64{1 << 32, 2 << 32, 3 << 32})
var a []uint64
for i := 0; i < 100; i++ {
a = append(a, uint64(i))
}
var b []uint64
for i := 1 << 16; i < 1<<16+1000; i++ {
b = append(b, uint64(i))
}
f(a, b)
for i := 1<<16 - 100; i < 1<<16+100; i++ {
a = append(a, uint64(i))
}
for i := uint64(1) << 32; i < 1<<32+1<<16+200; i++ {
b = append(b, i)
}
f(a, b)
rng := rand.New(rand.NewSource(0))
for i := 0; i < 10; i++ {
a = nil
b = nil
for j := 0; j < 1000; j++ {
a = append(a, uint64(rng.Intn(1e6)))
b = append(b, uint64(rng.Intn(1e6)))
}
f(a, b)
}
}
func expectEqual(s *Set, m map[uint64]bool) error {
if s.Len() != len(m) {
return fmt.Errorf("unexpected s.Len(); got %d; want %d\ns=%v\nm=%v", s.Len(), len(m), s.AppendTo(nil), m)
}
for _, v := range s.AppendTo(nil) {
if !m[v] {
return fmt.Errorf("missing value %d in m; s=%v\nm=%v", v, s.AppendTo(nil), m)
}
}
// Additional check via s.Has()
for v := range m {
if !s.Has(v) {
return fmt.Errorf("missing value %d in s; s=%v\nm=%v", v, s.AppendTo(nil), m)
}
}
// Extra check via s.ForEach()
var err error
s.ForEach(func(part []uint64) bool {
for _, v := range part {
if !m[v] {
err = fmt.Errorf("miising value %d in m inside s.ForEach; s=%v\nm=%v", v, s.AppendTo(nil), m)
return false
}
}
return true
})
return err
}
func TestSetBasicOps(t *testing.T) {
for _, itemsCount := range []int{1, 2, 3, 4, 5, 6, 1e2, 1e3, 1e4, 1e5, 1e6, maxUnsortedBuckets * bitsPerBucket * 2} {
for _, itemsCount := range []int{1, 2, 3, 4, 5, 6, 1e2, 1e3, 1e4, 1e5, 1e6} {
t.Run(fmt.Sprintf("items_%d", itemsCount), func(t *testing.T) {
testSetBasicOps(t, itemsCount)
})

View File

@@ -1,5 +1,21 @@
# Changes
## v0.55.0
- Various updates to autogenerated clients.
## v0.54.0
- all:
- remove unused golang.org/x/exp from mod file
- update godoc.org links to pkg.go.dev
- compute/metadata:
- use defaultClient when http.Client is nil
- remove subscribeClient
- iam:
- add support for v3 policy and IAM conditions
- Various updates to autogenerated clients.
## v0.53.0
- all: most clients now use transport/grpc.DialPool rather than Dial (see #1777 for outliers).

View File

@@ -22,7 +22,7 @@ to install the code reviewing tool.
1. If you would like, you may want to set up aliases for `git-codereview`,
such that `git codereview change` becomes `git change`. See the
[godoc](https://godoc.org/golang.org/x/review/git-codereview) for details.
[godoc](https://pkg.go.dev/golang.org/x/review/git-codereview) for details.
* Should you run into issues with the `git-codereview` tool, please note
that all error messages will assume that you have set up these aliases.

vendor/cloud.google.com/go/README.md (90 lines changed; generated, vendored)
View File

@@ -1,6 +1,6 @@
# Google Cloud Client Libraries for Go
[![GoDoc](https://godoc.org/cloud.google.com/go?status.svg)](https://godoc.org/cloud.google.com/go)
[![GoDoc](https://pkg.go.dev/cloud.google.com/go?status.svg)](https://pkg.go.dev/cloud.google.com/go)
Go packages for [Google Cloud Platform](https://cloud.google.com) services.
@@ -31,46 +31,46 @@ make backwards-incompatible changes.
Google API | Status | Package
------------------------------------------------|--------------|-----------------------------------------------------------
[Asset][cloud-asset] | alpha | [`cloud.google.com/go/asset/v1beta`](https://godoc.org/cloud.google.com/go/asset/v1beta)
[Automl][cloud-automl] | stable | [`cloud.google.com/go/automl/apiv1`](https://godoc.org/cloud.google.com/go/automl/apiv1)
[BigQuery][cloud-bigquery] | stable | [`cloud.google.com/go/bigquery`](https://godoc.org/cloud.google.com/go/bigquery)
[Bigtable][cloud-bigtable] | stable | [`cloud.google.com/go/bigtable`](https://godoc.org/cloud.google.com/go/bigtable)
[Cloudbuild][cloud-build] | stable | [`cloud.google.com/go/cloudbuild/apiv1`](https://godoc.org/cloud.google.com/go/cloudbuild/apiv1)
[Cloudtasks][cloud-tasks] | stable | [`cloud.google.com/go/cloudtasks/apiv2`](https://godoc.org/cloud.google.com/go/cloudtasks/apiv2)
[Container][cloud-container] | stable | [`cloud.google.com/go/container/apiv1`](https://godoc.org/cloud.google.com/go/container/apiv1)
[ContainerAnalysis][cloud-containeranalysis] | beta | [`cloud.google.com/go/containeranalysis/apiv1`](https://godoc.org/cloud.google.com/go/containeranalysis/apiv1)
[Dataproc][cloud-dataproc] | stable | [`cloud.google.com/go/dataproc/apiv1`](https://godoc.org/cloud.google.com/go/dataproc/apiv1)
[Datastore][cloud-datastore] | stable | [`cloud.google.com/go/datastore`](https://godoc.org/cloud.google.com/go/datastore)
[Debugger][cloud-debugger] | stable | [`cloud.google.com/go/debugger/apiv2`](https://godoc.org/cloud.google.com/go/debugger/apiv2)
[Dialogflow][cloud-dialogflow] | stable | [`cloud.google.com/go/dialogflow/apiv2`](https://godoc.org/cloud.google.com/go/dialogflow/apiv2)
[Data Loss Prevention][cloud-dlp] | stable | [`cloud.google.com/go/dlp/apiv2`](https://godoc.org/cloud.google.com/go/dlp/apiv2)
[ErrorReporting][cloud-errors] | alpha | [`cloud.google.com/go/errorreporting`](https://godoc.org/cloud.google.com/go/errorreporting)
[Firestore][cloud-firestore] | stable | [`cloud.google.com/go/firestore`](https://godoc.org/cloud.google.com/go/firestore)
[IAM][cloud-iam] | stable | [`cloud.google.com/go/iam`](https://godoc.org/cloud.google.com/go/iam)
[IoT][cloud-iot] | stable | [`cloud.google.com/go/iot/apiv1`](https://godoc.org/cloud.google.com/go/iot/apiv1)
[IRM][cloud-irm] | alpha | [`cloud.google.com/go/irm/apiv1alpha2`](https://godoc.org/cloud.google.com/go/irm/apiv1alpha2)
[KMS][cloud-kms] | stable | [`cloud.google.com/go/kms/apiv1`](https://godoc.org/cloud.google.com/go/kms/apiv1)
[Natural Language][cloud-natural-language] | stable | [`cloud.google.com/go/language/apiv1`](https://godoc.org/cloud.google.com/go/language/apiv1)
[Logging][cloud-logging] | stable | [`cloud.google.com/go/logging`](https://godoc.org/cloud.google.com/go/logging)
[Memorystore][cloud-memorystore] | alpha | [`cloud.google.com/go/redis/apiv1`](https://godoc.org/cloud.google.com/go/redis/apiv1)
[Monitoring][cloud-monitoring] | alpha | [`cloud.google.com/go/monitoring/apiv3`](https://godoc.org/cloud.google.com/go/monitoring/apiv3)
[OS Login][cloud-oslogin] | alpha | [`cloud.google.com/go/oslogin/apiv1`](https://godoc.org/cloud.google.com/go/oslogin/apiv1)
[Pub/Sub][cloud-pubsub] | stable | [`cloud.google.com/go/pubsub`](https://godoc.org/cloud.google.com/go/pubsub)
[Phishing Protection][cloud-phishingprotection] | alpha | [`cloud.google.com/go/phishingprotection/apiv1beta1`](https://godoc.org/cloud.google.com/go/phishingprotection/apiv1beta1)
[reCAPTCHA Enterprise][cloud-recaptcha] | alpha | [`cloud.google.com/go/recaptchaenterprise/apiv1beta1`](https://godoc.org/cloud.google.com/go/recaptchaenterprise/apiv1beta1)
[Recommender][cloud-recommender] | beta | [`cloud.google.com/go/recommender/apiv1beta1`](https://godoc.org/cloud.google.com/go/recommender/apiv1beta1)
[Scheduler][cloud-scheduler] | stable | [`cloud.google.com/go/scheduler/apiv1`](https://godoc.org/cloud.google.com/go/scheduler/apiv1)
[Securitycenter][cloud-securitycenter] | alpha | [`cloud.google.com/go/securitycenter/apiv1`](https://godoc.org/cloud.google.com/go/securitycenter/apiv1)
[Spanner][cloud-spanner] | stable | [`cloud.google.com/go/spanner`](https://godoc.org/cloud.google.com/go/spanner)
[Speech][cloud-speech] | stable | [`cloud.google.com/go/speech/apiv1`](https://godoc.org/cloud.google.com/go/speech/apiv1)
[Storage][cloud-storage] | stable | [`cloud.google.com/go/storage`](https://godoc.org/cloud.google.com/go/storage)
[Talent][cloud-talent] | alpha | [`cloud.google.com/go/talent/apiv4beta1`](https://godoc.org/cloud.google.com/go/talent/apiv4beta1)
[Text To Speech][cloud-texttospeech] | alpha | [`cloud.google.com/go/texttospeech/apiv1`](https://godoc.org/cloud.google.com/go/texttospeech/apiv1)
[Trace][cloud-trace] | alpha | [`cloud.google.com/go/trace/apiv2`](https://godoc.org/cloud.google.com/go/trace/apiv2)
[Translate][cloud-translate] | stable | [`cloud.google.com/go/translate`](https://godoc.org/cloud.google.com/go/translate)
[Video Intelligence][cloud-video] | alpha | [`cloud.google.com/go/videointelligence/apiv1beta1`](https://godoc.org/cloud.google.com/go/videointelligence/apiv1beta1)
[Vision][cloud-vision] | stable | [`cloud.google.com/go/vision/apiv1`](https://godoc.org/cloud.google.com/go/vision/apiv1)
[Webrisk][cloud-webrisk] | alpha | [`cloud.google.com/go/webrisk/apiv1beta1`](https://godoc.org/cloud.google.com/go/webrisk/apiv1beta1)
[Asset][cloud-asset] | stable | [`cloud.google.com/go/asset/apiv1`](https://pkg.go.dev/cloud.google.com/go/asset/v1beta)
[Automl][cloud-automl] | stable | [`cloud.google.com/go/automl/apiv1`](https://pkg.go.dev/cloud.google.com/go/automl/apiv1)
[BigQuery][cloud-bigquery] | stable | [`cloud.google.com/go/bigquery`](https://pkg.go.dev/cloud.google.com/go/bigquery)
[Bigtable][cloud-bigtable] | stable | [`cloud.google.com/go/bigtable`](https://pkg.go.dev/cloud.google.com/go/bigtable)
[Cloudbuild][cloud-build] | stable | [`cloud.google.com/go/cloudbuild/apiv1`](https://pkg.go.dev/cloud.google.com/go/cloudbuild/apiv1)
[Cloudtasks][cloud-tasks] | stable | [`cloud.google.com/go/cloudtasks/apiv2`](https://pkg.go.dev/cloud.google.com/go/cloudtasks/apiv2)
[Container][cloud-container] | stable | [`cloud.google.com/go/container/apiv1`](https://pkg.go.dev/cloud.google.com/go/container/apiv1)
[ContainerAnalysis][cloud-containeranalysis] | beta | [`cloud.google.com/go/containeranalysis/apiv1`](https://pkg.go.dev/cloud.google.com/go/containeranalysis/apiv1)
[Dataproc][cloud-dataproc] | stable | [`cloud.google.com/go/dataproc/apiv1`](https://pkg.go.dev/cloud.google.com/go/dataproc/apiv1)
[Datastore][cloud-datastore] | stable | [`cloud.google.com/go/datastore`](https://pkg.go.dev/cloud.google.com/go/datastore)
[Debugger][cloud-debugger] | stable | [`cloud.google.com/go/debugger/apiv2`](https://pkg.go.dev/cloud.google.com/go/debugger/apiv2)
[Dialogflow][cloud-dialogflow] | stable | [`cloud.google.com/go/dialogflow/apiv2`](https://pkg.go.dev/cloud.google.com/go/dialogflow/apiv2)
[Data Loss Prevention][cloud-dlp] | stable | [`cloud.google.com/go/dlp/apiv2`](https://pkg.go.dev/cloud.google.com/go/dlp/apiv2)
[ErrorReporting][cloud-errors] | alpha | [`cloud.google.com/go/errorreporting`](https://pkg.go.dev/cloud.google.com/go/errorreporting)
[Firestore][cloud-firestore] | stable | [`cloud.google.com/go/firestore`](https://pkg.go.dev/cloud.google.com/go/firestore)
[IAM][cloud-iam] | stable | [`cloud.google.com/go/iam`](https://pkg.go.dev/cloud.google.com/go/iam)
[IoT][cloud-iot] | stable | [`cloud.google.com/go/iot/apiv1`](https://pkg.go.dev/cloud.google.com/go/iot/apiv1)
[IRM][cloud-irm] | alpha | [`cloud.google.com/go/irm/apiv1alpha2`](https://pkg.go.dev/cloud.google.com/go/irm/apiv1alpha2)
[KMS][cloud-kms] | stable | [`cloud.google.com/go/kms/apiv1`](https://pkg.go.dev/cloud.google.com/go/kms/apiv1)
[Natural Language][cloud-natural-language] | stable | [`cloud.google.com/go/language/apiv1`](https://pkg.go.dev/cloud.google.com/go/language/apiv1)
[Logging][cloud-logging] | stable | [`cloud.google.com/go/logging`](https://pkg.go.dev/cloud.google.com/go/logging)
[Memorystore][cloud-memorystore] | alpha | [`cloud.google.com/go/redis/apiv1`](https://pkg.go.dev/cloud.google.com/go/redis/apiv1)
[Monitoring][cloud-monitoring] | stable | [`cloud.google.com/go/monitoring/apiv3`](https://pkg.go.dev/cloud.google.com/go/monitoring/apiv3)
[OS Login][cloud-oslogin] | stable | [`cloud.google.com/go/oslogin/apiv1`](https://pkg.go.dev/cloud.google.com/go/oslogin/apiv1)
[Pub/Sub][cloud-pubsub] | stable | [`cloud.google.com/go/pubsub`](https://pkg.go.dev/cloud.google.com/go/pubsub)
[Phishing Protection][cloud-phishingprotection] | alpha | [`cloud.google.com/go/phishingprotection/apiv1beta1`](https://pkg.go.dev/cloud.google.com/go/phishingprotection/apiv1beta1)
[reCAPTCHA Enterprise][cloud-recaptcha] | alpha | [`cloud.google.com/go/recaptchaenterprise/apiv1beta1`](https://pkg.go.dev/cloud.google.com/go/recaptchaenterprise/apiv1beta1)
[Recommender][cloud-recommender] | beta | [`cloud.google.com/go/recommender/apiv1beta1`](https://pkg.go.dev/cloud.google.com/go/recommender/apiv1beta1)
[Scheduler][cloud-scheduler] | stable | [`cloud.google.com/go/scheduler/apiv1`](https://pkg.go.dev/cloud.google.com/go/scheduler/apiv1)
[Securitycenter][cloud-securitycenter] | stable | [`cloud.google.com/go/securitycenter/apiv1`](https://pkg.go.dev/cloud.google.com/go/securitycenter/apiv1)
[Spanner][cloud-spanner] | stable | [`cloud.google.com/go/spanner`](https://pkg.go.dev/cloud.google.com/go/spanner)
[Speech][cloud-speech] | stable | [`cloud.google.com/go/speech/apiv1`](https://pkg.go.dev/cloud.google.com/go/speech/apiv1)
[Storage][cloud-storage] | stable | [`cloud.google.com/go/storage`](https://pkg.go.dev/cloud.google.com/go/storage)
[Talent][cloud-talent] | alpha | [`cloud.google.com/go/talent/apiv4beta1`](https://pkg.go.dev/cloud.google.com/go/talent/apiv4beta1)
[Text To Speech][cloud-texttospeech] | stable | [`cloud.google.com/go/texttospeech/apiv1`](https://pkg.go.dev/cloud.google.com/go/texttospeech/apiv1)
[Trace][cloud-trace] | stable | [`cloud.google.com/go/trace/apiv2`](https://pkg.go.dev/cloud.google.com/go/trace/apiv2)
[Translate][cloud-translate] | stable | [`cloud.google.com/go/translate`](https://pkg.go.dev/cloud.google.com/go/translate)
[Video Intelligence][cloud-video] | beta | [`cloud.google.com/go/videointelligence/apiv1beta2`](https://pkg.go.dev/cloud.google.com/go/videointelligence/apiv1beta2)
[Vision][cloud-vision] | stable | [`cloud.google.com/go/vision/apiv1`](https://pkg.go.dev/cloud.google.com/go/vision/apiv1)
[Webrisk][cloud-webrisk] | alpha | [`cloud.google.com/go/webrisk/apiv1beta1`](https://pkg.go.dev/cloud.google.com/go/webrisk/apiv1beta1)
> **Alpha status**: the API is still being actively developed. As a
> result, it might change in backward-incompatible ways and is not recommended
@@ -83,7 +83,7 @@ Google API | Status | Package
> **Stable status**: the API is mature and ready for production use. We will
> continue addressing bugs and feature requests.
Documentation and examples are available at [godoc.org/cloud.google.com/go](https://godoc.org/cloud.google.com/go)
Documentation and examples are available at [pkg.go.dev/cloud.google.com/go](https://pkg.go.dev/cloud.google.com/go)
## Go Versions Supported
@@ -104,7 +104,7 @@ client, err := storage.NewClient(ctx)
To authorize using a
[JSON key file](https://cloud.google.com/iam/docs/managing-service-account-keys),
pass
[`option.WithCredentialsFile`](https://godoc.org/google.golang.org/api/option#WithCredentialsFile)
[`option.WithCredentialsFile`](https://pkg.go.dev/google.golang.org/api/option#WithCredentialsFile)
to the `NewClient` function of the desired package. For example:
[snip]:# (auth-JSON)
@@ -113,9 +113,9 @@ client, err := storage.NewClient(ctx, option.WithCredentialsFile("path/to/keyfil
```
You can exert more control over authorization by using the
[`golang.org/x/oauth2`](https://godoc.org/golang.org/x/oauth2) package to
[`golang.org/x/oauth2`](https://pkg.go.dev/golang.org/x/oauth2) package to
create an `oauth2.TokenSource`. Then pass
[`option.WithTokenSource`](https://godoc.org/google.golang.org/api/option#WithTokenSource)
[`option.WithTokenSource`](https://pkg.go.dev/google.golang.org/api/option#WithTokenSource)
to the `NewClient` function:
[snip]:# (auth-ts)
```go

View File

@@ -18,7 +18,7 @@ to install the code reviewing tool.
1. If you would like, you may want to set up aliases for `git-codereview`,
such that `git codereview change` becomes `git change`. See the
[godoc](https://godoc.org/golang.org/x/review/git-codereview) for details.
[godoc](https://pkg.go.dev/golang.org/x/review/git-codereview) for details.
* Should you run into issues with the `git-codereview` tool, please note
that all error messages will assume that you have set up these aliases.

Some files were not shown because too many files have changed in this diff.