Compare commits

...

592 Commits

Author SHA1 Message Date
Aliaksandr Valialkin
359c4d6109 docs: add a link to https://medium.com/@valyala/prometheus-storage-technical-terms-for-humans-4ab4de6c3d48 2019-12-03 22:37:16 +02:00
Aliaksandr Valialkin
face3d57bf app/vmselect: add placeholders for /api/v1/rules and /api/v1/alerts 2019-12-03 19:36:33 +02:00
Aliaksandr Valialkin
a247236f61 lib/storage: fall back to the global inverted index if a filter matches too many time series in the per-day index
Previously this resulted in an error message. The query may succeed via a search in the global index.
2019-12-03 14:48:31 +02:00
Aliaksandr Valialkin
54741ee578 lib/storage: fix printing tag filters in TagFilters.String 2019-12-03 14:25:13 +02:00
Aliaksandr Valialkin
efbc83a13e lib/storage: print __name__ instead of empty string in user-visible tag filters 2019-12-03 14:18:28 +02:00
Aliaksandr Valialkin
ade453847f docs: typo fixes 2019-12-03 00:44:50 +02:00
Aliaksandr Valialkin
f52874dab4 lib/storage: optimize regexp filter search 2019-12-03 00:43:12 +02:00
Artem Navoiev
652ba59ce9 [docs] update release page doc 2019-12-02 23:01:51 +02:00
Artem Navoiev
3e81ab2f75 [docs] change titles 2019-12-02 22:53:11 +02:00
Artem Navoiev
a778233877 [docs] change titles 2019-12-02 22:50:54 +02:00
Aliaksandr Valialkin
14100ed643 vendor: update github.com/VictoriaMetrics/metrics from v1.9.1 to v1.9.2
This fixes a possible deadlock when metrics.WritePrometheus calls a Gauge callback, which calls metrics functions while the internal lock is held.
2019-12-02 22:33:33 +02:00
Artem Navoiev
cfc6e7df07 [docs] revert titles 2019-12-02 22:06:39 +02:00
Artem Navoiev
c07a83374c [docs] remove double titles 2019-12-02 22:02:59 +02:00
Artem Navoiev
c76b2be21f [ci] add github pages action 2019-12-02 21:53:33 +02:00
Aliaksandr Valialkin
638a5cbb16 lib/{mergeset,storage}: remove transaction files only after the mentioned dirs are really removed
This should fix the issue on NFS when incompletely removed dirs may be left
after unclean shutdown (OOM, kill -9, hard reset, etc.), while the corresponding transaction
files are already removed.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/162
2019-12-02 21:36:31 +02:00
Aliaksandr Valialkin
20812008a7 lib/storage: remove metricID with missing metricID->metricName entry
The metricID->metricName entry can be missing in the indexdb after an unclean shutdown,
when only a part of the entries for new time series has been written into the indexdb.

Recover from such a situation by removing the broken metricID. A new metricID
will be automatically created for the time series with the given metricName
when a new data point arrives for it.
2019-12-02 20:46:44 +02:00
Aliaksandr Valialkin
62a915f2b2 lib/storage: protect from time drift during indexdb rotation
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/248
2019-12-02 14:44:42 +02:00
Aliaksandr Valialkin
42da569bcd lib/logger: merge file and line labels into location="file:line"
This should improve the usability of the `vm_log_messages_total` metric in practical queries
2019-12-02 14:44:40 +02:00
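For illustration (editorial note; the file, line, and value below are hypothetical), the merged label makes the metric easy to aggregate by code location:

    vm_log_messages_total{level="error", location="lib/storage/table.go:123"} 7

A query such as topk(10, vm_log_messages_total) then surfaces the noisiest log call sites directly.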
Aliaksandr Valialkin
70b8191fab lib/storage: generate more human-friendly result in TagFilters.String 2019-12-02 13:52:22 +02:00
Aliaksandr Valialkin
9476b73527 app/vmselect/promql: estimate per-series scrape interval as 0.6 quantile for the first 100 intervals
This should improve scrape interval estimation for time series with gaps.
2019-12-02 13:42:33 +02:00
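A minimal Go sketch of the idea described above (editorial; the function name and shape are hypothetical, not the actual VictoriaMetrics code):

    import "sort"

    // estimateScrapeInterval returns the 0.6 quantile of the deltas between
    // the first (up to) 100 intervals, in the same units as the timestamps.
    // Picking a quantile above the median keeps the estimate robust when the
    // series contains gaps, which show up as abnormally large deltas.
    func estimateScrapeInterval(timestamps []int64) int64 {
        if len(timestamps) < 2 {
            return 0
        }
        if len(timestamps) > 101 {
            timestamps = timestamps[:101] // at most 100 intervals
        }
        intervals := make([]int64, 0, len(timestamps)-1)
        for i := 1; i < len(timestamps); i++ {
            intervals = append(intervals, timestamps[i]-timestamps[i-1])
        }
        sort.Slice(intervals, func(i, j int) bool { return intervals[i] < intervals[j] })
        return intervals[int(0.6*float64(len(intervals)-1))]
    }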
Aliaksandr Valialkin
542b9c2043 lib/logger: consistency renaming from vm_log_messages_count to vm_log_messages_total, since this is a counter 2019-12-02 00:49:00 +02:00
Aliaksandr Valialkin
c567919f80 lib/logger: track the number of log messages by (level, file, line) in the vm_log_messages_count metric 2019-12-01 18:37:49 +02:00
Aliaksandr Valialkin
761645b20a lib/netutil: use IPv6 for both listening and dialing if -enableTCP6 is set
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/244
2019-12-01 02:57:13 +02:00
Aliaksandr Valialkin
811b7a8303 app/vminsert/influx: allow empty measurement in Influx line protocol
In this case metric names are mapped directly from field names without any prefixes.
2019-11-30 23:18:41 +02:00
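For example (editorial illustration; the exact wire form of an empty measurement is an assumption here): with measurement `cpu`, the first line below produces the metrics cpu_usage_user and cpu_usage_system, while the second line with an empty measurement produces usage_user and usage_system directly:

    cpu,host=h1 usage_user=1.5,usage_system=0.3
    ,host=h1 usage_user=1.5,usage_system=0.3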
Artem Navoiev
4972bd4c96 Update release guide add Wiki section. Change styling 2019-11-30 21:10:42 +02:00
Artem Navoiev
335e0f8f6a Update release guide add Wiki section 2019-11-30 21:08:48 +02:00
Artem Navoiev
505e46980a [ci] push docs/*.md file to wiki 2019-11-30 20:58:28 +02:00
Artem Navoiev
ab88b77515 rename doc to docs 2019-11-30 20:48:40 +02:00
Artem Navoiev
3d8e75e065 [ci] test wiki push 2019-11-30 20:38:37 +02:00
Artem Navoiev
74b4ccfc91 [ci] push to wiki 2019-11-30 20:36:10 +02:00
Aliaksandr Valialkin
75ff524a4e app/vmselect/promql: fix corner case for increase over time series with gaps
In this case `increase` could return an invalid, too high value for the first point after the gap.
2019-11-30 01:34:56 +02:00
Aliaksandr Valialkin
96492348cb deployment/docker/certs: update TLS certs source from alpine:3.9 to alpine:3.10 2019-11-29 19:57:29 +02:00
Aliaksandr Valialkin
f733cb2186 lib/backup: cosmetic fixes after #243 2019-11-29 18:07:04 +02:00
glebsam
15b7406f7b Add option to provide custom endpoint for S3, add option to specify S3 config profile (#243)
* Add option to provide custom endpoint for S3 for use with s3-compatible storages, add option to specify S3 config profile

* make fmt
2019-11-29 17:59:56 +02:00
Aliaksandr Valialkin
9010c6a1d6 lib/netutil: add -enableTCP6 command-line flag for enabling listening on IPv6 in addition to IPv4 TCP ports 2019-11-29 17:32:47 +02:00
Aliaksandr Valialkin
a7125a5b7b lib/backup: remove flock.lock file in empty dirs
This fixes an issue when VictoriaMetrics doesn't see the restored data after the following operations:

1. Stop VictoriaMetrics.
2. Delete `<-storageDataPath>` dir.
3. Start VictoriaMetrics, then stop it.
4. Restore data from backup with `vmrestore`.
5. Start VictoriaMetrics.

`vmrestore` didn't properly delete empty dirs in `<-storageDataPath>/indexdb` because of the remaining `flock.lock` files in these dirs.
2019-11-28 13:38:58 +02:00
Aliaksandr Valialkin
a6d7179286 README.md: remove the unnecessary step during restoring from backups 2019-11-27 19:57:03 +02:00
Aliaksandr Valialkin
e828647d0f vendor: make vendor-update 2019-11-27 15:37:14 +02:00
Aliaksandr Valialkin
31fb6f2b07 vendor: update github.com/VictoriaMetrics/fastcache from v1.5.2 to v1.5.4 2019-11-27 15:30:33 +02:00
Aliaksandr Valialkin
2c86816950 deployment/docker: update Grafana from v6.4.4 to v6.5.0 2019-11-27 15:10:37 +02:00
Aliaksandr Valialkin
4c859d980c app/vmselect/prometheus: consistently apply nocache arg to /api/v1/query the same way as to /api/v1/query_range
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/241
2019-11-26 22:55:43 +02:00
Aliaksandr Valialkin
14bcff6015 lib/httpserver: improve docs for -tls* flags to be more clear
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/242
2019-11-26 18:08:35 +02:00
Aliaksandr Valialkin
110235f789 app/vmselect/prometheus: fix content-type for /api/v1/export responses
The correct Content-Type should be `application/stream+json` instead of `application/json`
Thanks to Joshua Ryder for pointing to this.
2019-11-26 17:45:26 +02:00
Aliaksandr Valialkin
205233d9a7 app/vmselect/promql: remove zero timeseries from prometheus_buckets output 2019-11-25 19:10:23 +02:00
Aliaksandr Valialkin
3f99f39e9b app/vmselect/prometheus: reduce default value for -search.latencyOffset from 60s to 30s
30 seconds should be enough for almost all the cases
2019-11-25 16:33:42 +02:00
Aliaksandr Valialkin
e91cb34c0e app/vmselect/promql: allow nested parens 2019-11-25 16:13:41 +02:00
Aliaksandr Valialkin
826dfd63a5 vendor: update github.com/VictoriaMetrics/metrics from v1.9.0 to v1.9.1 2019-11-25 15:23:01 +02:00
Aliaksandr Valialkin
0401969d78 app/vmselect/promql: re-use metrics.Histogram when calculating histogram function for each point on the graph
This should reduce the amount of memory allocations
2019-11-25 14:24:21 +02:00
Aliaksandr Valialkin
da98703748 app/vmselect/promql: optimize binary search over big number of samples during rollup calculations 2019-11-25 14:01:46 +02:00
Aliaksandr Valialkin
c28876172f app/vmselect/promql: adjust tests after the upgrade of github.com/VictoriaMetrics/metrics from v1.8.3 to v1.9.0 2019-11-25 13:43:57 +02:00
Aliaksandr Valialkin
66c53bf3c6 vendor: update github.com/VictoriaMetrics/metrics from v1.8.3 to v1.9.0 2019-11-25 13:19:43 +02:00
Aliaksandr Valialkin
50ae1879c6 app/vmselect/promql: add histogram aggregate function, which is useful for building heatmaps from multiple time series 2019-11-24 00:04:25 +02:00
Aliaksandr Valialkin
4ff2fbcf3f vendor: update github.com/VictoriaMetrics/metrics from v1.8.2 to v1.8.3 2019-11-24 00:04:24 +02:00
Aliaksandr Valialkin
5285acae3e lib/decimal: calculate ln2/ln10 constant during compile time 2019-11-23 15:52:58 +02:00
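The general Go pattern behind this commit (editorial sketch; the constant name is hypothetical): math.Ln2 and math.Ln10 are untyped constants, so their ratio is a constant expression evaluated by the compiler rather than a run-time division:

    import "math"

    // log10of2 == log10(2); computed at compile time, not at run time.
    const log10of2 = math.Ln2 / math.Ln10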
Aliaksandr Valialkin
8582b50360 app/vmselect/promql: do not take into account buckets with negative counters in prometheus_buckets 2019-11-23 14:19:25 +02:00
Aliaksandr Valialkin
19dfe52254 app/vmselect/promql: properly handle histogram_quantile(0, ...) with zero buckets 2019-11-23 14:02:35 +02:00
Aliaksandr Valialkin
4bb88843cf app/vmselect: add vm_per_query_{rows,series}_processed_count histograms 2019-11-23 13:23:26 +02:00
Aliaksandr Valialkin
0827bb6ce5 vendor: update github.com/VictoriaMetrics/metrics from v1.8.1 to v1.8.2 2019-11-23 11:48:54 +02:00
Aliaksandr Valialkin
7753c8c0a1 app/vmselect/promql: transparently apply prometheus_buckets in histogram_quantile 2019-11-23 11:48:51 +02:00
Aliaksandr Valialkin
ef25e1b049 vendor: update github.com/VictoriaMetrics/metrics from v1.8.0 to v1.8.1 2019-11-23 00:49:13 +02:00
Aliaksandr Valialkin
9d1fcb2be6 vendor: update github.com/VictoriaMetrics/metrics from v1.7.2 to v1.8.0. This version supports histograms 2019-11-23 00:20:27 +02:00
Aliaksandr Valialkin
c4287b3c86 app/vmselect/promql: add prometheus_buckets function for converting the upcoming histogram buckets from github.com/VictoriaMetrics/metrics to Prometheus-compatible buckets 2019-11-23 00:20:20 +02:00
Aliaksandr Valialkin
1f3fd2c910 app/vmselect: adjust end arg instead of adjusting start arg if start > end
The `start` arg has higher chances of being set properly compared to the `end` arg,
so it is expected that the `end` arg could be adjusted if it was set incorrectly.
2019-11-22 16:12:19 +02:00
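A minimal sketch of the adjustment (editorial; not the actual function):

    // adjustTimeRange trusts `start` and clamps `end` to it when the caller
    // passes an inverted range, per the reasoning above.
    func adjustTimeRange(start, end int64) (int64, int64) {
        if start > end {
            end = start
        }
        return start, end
    }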
Aliaksandr Valialkin
90b03309de vendor: updated github.com/valyala/gozstd from v1.6.2 to v1.6.3 2019-11-21 23:57:00 +02:00
Aliaksandr Valialkin
7a4635f853 all: remove the remaining mentions of cluster version 2019-11-21 23:18:22 +02:00
Aliaksandr Valialkin
3e9b7addb1 lib/httpserver: typo fix in -httpAuth.password command-line description 2019-11-21 21:54:26 +02:00
Aliaksandr Valialkin
f652c0f40f lib/storage: move non-matching tag filters to the top at matchTagFilters
This should reduce the amount of useless work needed for matching the next metricNames.
2019-11-21 21:35:13 +02:00
Aliaksandr Valialkin
b8cde6cce1 lib/storage: speed up time series search for queries with multiple filters
Use optimized specialized binary search for uint64 metricIDs instead of generic sort.Search.
2019-11-21 18:43:17 +02:00
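A specialized binary search over a []uint64 can look like this (editorial sketch; it avoids the closure-call overhead of the generic sort.Search on the hot path):

    // binarySearchUint64 returns the smallest index i such that a[i] >= v,
    // assuming a is sorted in ascending order.
    func binarySearchUint64(a []uint64, v uint64) int {
        lo, hi := 0, len(a)
        for lo < hi {
            mid := lo + (hi-lo)/2
            if a[mid] < v {
                lo = mid + 1
            } else {
                hi = mid
            }
        }
        return lo
    }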
Aliaksandr Valialkin
aeea59e280 Makefile: create files with sha256 checksums during make release
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/19
2019-11-20 22:43:37 +02:00
Aliaksandr Valialkin
74e563ca3f README.md: added a link to https://github.com/dreamteam-gg/ansible-victoriametrics-role 2019-11-20 21:26:43 +02:00
Aliaksandr Valialkin
5c1e4143e9 lib/storage: verify the number of returned metricIDs in BenchmarkHeadPostingForMatchers 2019-11-20 15:39:28 +02:00
Aliaksandr Valialkin
52d7ca6bf0 lib/decimal: increase decimal->float conversion speed for integer numbers 2019-11-20 13:04:34 +02:00
Aliaksandr Valialkin
75eeea21ee lib/decimal: reduce rounding error when converting from decimal to float with negative exponent
While at it, slightly increase the conversion performance by moving fast path to the top of the loop.
2019-11-19 23:35:33 +02:00
Artem Navoiev
c03b87dac0 update version of codecov to 1.04 2019-11-19 22:23:14 +02:00
Aliaksandr Valialkin
259dc95366 make vendor-update 2019-11-19 21:35:07 +02:00
Aliaksandr Valialkin
cfb9fa2100 lib/backup: retrieve only the required metadata when reading GCS objects 2019-11-19 21:06:34 +02:00
Aliaksandr Valialkin
355ccba81a make vendor-update 2019-11-19 21:05:37 +02:00
Aliaksandr Valialkin
443189fb0a app/{vmbackup,vmrestore}: add -maxBytesPerSecond command-line flag for limiting the used network bandwidth during backup / restore 2019-11-19 20:31:52 +02:00
Aliaksandr Valialkin
2db06f0ef8 lib/backup: prevent from restoring to directory which is in use by VictoriaMetrics during the restore 2019-11-19 18:36:23 +02:00
Aliaksandr Valialkin
0094bc4fc9 app/vmselect/prometheus: properly adjust too big `time` arg on /api/v1/query
Too big `time` must be adjusted to `now()-queryOffset`.
2019-11-19 00:42:00 +02:00
Aliaksandr Valialkin
b6f22a62cb lib/storage: increase the number of created time series in BenchmarkHeadPostingForMatchers in order to be on par with Prometheus
The previous commit was accidentally creating a 10x smaller number of time series than Prometheus,
and this led to invalid benchmark results.

The updated benchmark results:

benchmark                                                          old ns/op      new ns/op     delta
BenchmarkHeadPostingForMatchers/n="1"                              272756688      6194893       -97.73%
BenchmarkHeadPostingForMatchers/n="1",j="foo"                      138132923      10781372      -92.19%
BenchmarkHeadPostingForMatchers/j="foo",n="1"                      134723762      10632834      -92.11%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"                     195823953      10679975      -94.55%
BenchmarkHeadPostingForMatchers/i=~".*"                            7962582919     100118510     -98.74%
BenchmarkHeadPostingForMatchers/i=~".+"                            7589543864     154955671     -97.96%
BenchmarkHeadPostingForMatchers/i=~""                              1142371741     258003769     -77.42%
BenchmarkHeadPostingForMatchers/i!=""                              9964150263     159783895     -98.40%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"              216995884      10937895      -94.96%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"       202541348      10990027      -94.57%
BenchmarkHeadPostingForMatchers/n="1",i!=""                        486285711      87004349      -82.11%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"                350776931      53342793      -84.79%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"              380888565      54256156      -85.76%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"             89500296       21823279      -75.62%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"       379529654      46671359      -87.70%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"     424563825      53915842      -87.30%

VictoriaMetrics uses 1GB of RAM during the benchmark (vs 3.5GB of RAM for Prometheus)
2019-11-18 19:50:58 +02:00
Aliaksandr Valialkin
8a0dfc6220 lib/storage: add BenchmarkHeadPostingForMatchers similar to the benchmark from Prometheus
See the corresponding benchmark in Prometheus - 23c0299d85/tsdb/head_bench_test.go (L52)

The benchmark allows performing an apples-to-apples comparison of time series search
in Prometheus and VictoriaMetrics. The following article - https://www.robustperception.io/evaluating-performance-and-correctness -
contains incorrect numbers for VictoriaMetrics, since this benchmark didn't exist yet. Fix this.

Benchmarks can be repeated with the following commands from Prometheus and VictoriaMetrics source code roots:

- Prometheus: GOMAXPROCS=1 go test ./tsdb/ -run=111 -bench=BenchmarkHeadPostingForMatchers
- VictoriaMetrics: GOMAXPROCS=1 go test ./lib/storage/ -run=111 -bench=BenchmarkHeadPostingForMatchers

Benchmark results:
benchmark                                                          old ns/op      new ns/op     delta
BenchmarkHeadPostingForMatchers/n="1"                              272756688      364977        -99.87%
BenchmarkHeadPostingForMatchers/n="1",j="foo"                      138132923      1181636       -99.14%
BenchmarkHeadPostingForMatchers/j="foo",n="1"                      134723762      1141578       -99.15%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"                     195823953      1148056       -99.41%
BenchmarkHeadPostingForMatchers/i=~".*"                            7962582919     8716755       -99.89%
BenchmarkHeadPostingForMatchers/i=~".+"                            7589543864     12096587      -99.84%
BenchmarkHeadPostingForMatchers/i=~""                              1142371741     16164560      -98.59%
BenchmarkHeadPostingForMatchers/i!=""                              9964150263     12230021      -99.88%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"              216995884      1173476       -99.46%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"       202541348      1299743       -99.36%
BenchmarkHeadPostingForMatchers/n="1",i!=""                        486285711      11555193      -97.62%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"                350776931      5607506       -98.40%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"              380888565      6380335       -98.32%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"             89500296       2078970       -97.68%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"       379529654      6561368       -98.27%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"     424563825      6757132       -98.41%

The first column (old) is for Prometheus, the second column (new) is for VictoriaMetrics.

As you can see, VictoriaMetrics outperforms Prometheus by more than 100x in almost all the test cases of this benchmark.

Prometheus was using 3.5GB of RAM during the benchmark, while VictoriaMetrics was using 400MB of RAM.
2019-11-18 18:45:06 +02:00
Aliaksandr Valialkin
2ab4cea5e5 lib/storage: always start using per-day inverted index on the next day after its creation
The index for the current day could miss entries for time series that were already stopped
before the per-day index was enabled.

This fixes the issue when queries return empty results during the first hour after
upgrading to v1.29.*
2019-11-16 12:11:25 +02:00
Aliaksandr Valialkin
c050abbbad deployment/docker: update Prometheus version from v2.12.0 to v2.14.0 2019-11-16 00:13:15 +02:00
Aliaksandr Valialkin
3f1637fae8 app/vmselect/promql: properly calculate integrate(q[d]) 2019-11-13 21:10:41 +02:00
Aliaksandr Valialkin
c56b9ed03b app/victoria-metrics: add build rules for GOARCH=ppc64le
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/235
2019-11-13 20:24:33 +02:00
Aliaksandr Valialkin
3fd32e331a app/vmselect/promql: use universal approach for determining maxByteSliceLen on 32-bit and 64-bit archs
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/235
2019-11-13 20:24:26 +02:00
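A common portable Go idiom for such limits (editorial note; not necessarily the exact expression used in the commit) derives the maximum int at compile time, so the same source works on 32-bit and 64-bit architectures:

    // maxInt is 1<<31 - 1 on 32-bit architectures and 1<<63 - 1 on 64-bit ones.
    const maxInt = int(^uint(0) >> 1)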
Aliaksandr Valialkin
119dfd01bb lib/storage: add vm_cache_size_bytes{type="storage/hour_metric_ids"} metric 2019-11-13 20:24:21 +02:00
Aliaksandr Valialkin
86a1cd700b lib/storage: remove inmemory index for recent hour, since it uses too much memory
Production workload shows that the index requires ~4Kb of RAM per active time series.
This is too much for high number of active time series, so let's delete this index.

Now the queries should fall back to the index for the current day instead of the index
for the recent hour. The query performance for the current day index should be good enough
given the 100M rows/sec scan speed per CPU core.
2019-11-13 17:58:07 +02:00
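For scale (editorial note, extrapolating the ~4Kb-per-series figure above): 1M active time series would need ~4GB of RAM for this index alone, and 10M active series ~40GB.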
Aliaksandr Valialkin
33895d4a0f lib/storage: add missing increment for recentHourInvertedIndexSearchCalls 2019-11-13 15:13:51 +02:00
Aliaksandr Valialkin
c57eb0ff83 lib/storage: add -disableRecentHourIndex flag for disabling inmemory index for recent hour
This may be useful for saving RAM on high number of time series aka high cardinality
2019-11-13 15:02:51 +02:00
Aliaksandr Valialkin
e14ab14e54 lib/storage: verify marshaling for iidx.pendingMetricIDs in TestInmemoryInvertedIndexMarshalUnmarshal 2019-11-13 13:35:30 +02:00
Aliaksandr Valialkin
ca259864e2 lib/storage: return back inmemory inverted index for recent hour
Issues fixed:
- Slow startup times. Now the index is loaded from cache during start.
- High memory usage related to superfluous index copies every 10 seconds.
2019-11-13 13:11:04 +02:00
Aliaksandr Valialkin
01bb3c06c7 lib/storage: remove inmemory inverted index for recent hours
Production load with >10M active time series showed it could
slow down VictoriaMetrics startup times and could eat
all the memory leading to OOM.

Remove inmemory inverted index for recent hours until thorough
testing on production data shows it works OK.
2019-11-13 10:45:53 +02:00
Aliaksandr Valialkin
66c4961ff8 README.md: mention that VictoriaMetrics executable is small 2019-11-12 16:58:15 +02:00
Aliaksandr Valialkin
3e16248ed6 README.md: small updates 2019-11-12 16:54:18 +02:00
Aliaksandr Valialkin
5e6c1cd986 README.md: typo fix 2019-11-12 16:48:40 +02:00
Aliaksandr Valialkin
6c2303764e Revert "lib/fs: do not postpone directory removal on NFS error"
This reverts commit 4c02e496f7.

Reason for revert: the commit breaks on NFS - see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/234
2019-11-12 16:18:09 +02:00
Mike Poindexter
f3ad330635 Add test for invalid caching of tsids (#232)
* Add test for invalid caching of tsids

* Clean up error handling
2019-11-12 15:09:33 +02:00
Aliaksandr Valialkin
6c362d82cb README.md: mention that backups are made to S3 or GCS 2019-11-12 14:32:37 +02:00
Aliaksandr Valialkin
661dd190bb Refer to https://medium.com/@valyala/speeding-up-backups-for-big-time-series-databases-533c1a927883 from multiple places in README.md 2019-11-12 13:02:39 +02:00
Aliaksandr Valialkin
630ba810f1 deployment/docker: upgrade Go from v1.13.3 to v1.13.4 2019-11-12 03:49:19 +02:00
Oleg Kovalov
b4f44befa3 fix misspelled words (#229) 2019-11-12 00:16:42 +02:00
Roman Khavronenko
5fc8fb1323 add churn rate panel (#230) 2019-11-12 00:14:53 +02:00
Aliaksandr Valialkin
8e8f98f712 lib/storage: add tests for dateMetricIDCache 2019-11-11 13:21:57 +02:00
Aliaksandr Valialkin
c342f5e37e lib/storage: eliminate data race when updating lastSyncTime in dateMetricIDCache.Has 2019-11-10 22:04:01 +02:00
Aliaksandr Valialkin
56d7cc8a0d app/victoria-metrics: remove deprecated fs.MustStopDirRemover from main_test.go 2019-11-10 13:37:13 +02:00
Aliaksandr Valialkin
4c02e496f7 lib/fs: do not postpone directory removal on NFS error
Continue trying to remove NFS directory on temporary errors for up to a minute.

The previous async removal process breaks in the following case during VictoriaMetrics start

- VictoriaMetrics opens index, finds incomplete merge transactions and starts replaying them.
- The transaction instructs removing old directories for parts, which were already merged into bigger part.
- VictoriaMetrics removes these directories, but their removal is delayed due to NFS errors.
- VictoriaMetrics scans the partition directory after all the incomplete merge transactions are finished
  and finds directories that should have been removed but still weren't due to NFS errors.
- VictoriaMetrics panics when it finds unexpected empty directory.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/162
2019-11-10 13:24:51 +02:00
Aliaksandr Valialkin
3956003dd0 lib/storage: reorganize the code in getStartDateForPerDayInvertedIndex according to golangci-lint 2019-11-10 00:38:59 +02:00
Aliaksandr Valialkin
5c3fa59181 app/vmrestore: the upcoming release would be 1.29.0 2019-11-10 00:20:41 +02:00
Aliaksandr Valialkin
ee7765b10d lib/storage: implement per-day inverted index 2019-11-10 00:02:46 +02:00
Aliaksandr Valialkin
5810ba57c2 lib/storage: use specialized cache for (date, metricID) entries
This improves ingestion performance.
2019-11-09 23:06:11 +02:00
Aliaksandr Valialkin
e573ef2126 lib/storage: remove unused code from getMetricIDsForTimeRange: it is expected that time range is always non-zero 2019-11-09 19:03:34 +02:00
Aliaksandr Valialkin
823fa085ef lib/storage: properly set time range when deleting time series 2019-11-09 18:49:49 +02:00
Aliaksandr Valialkin
695c1dc5eb lib/storage: obtain all the time series ids from (tag->metricIDs) rows instead of (metricID->TSID) rows, since this is much faster 2019-11-09 18:04:33 +02:00
Aliaksandr Valialkin
cdbe848102 lib/storage: small code prettifying 2019-11-09 14:19:52 +02:00
Aliaksandr Valialkin
5c25070556 lib/uint64set: remove superfluous check for item existence before deleting it in Set.Subtract 2019-11-09 14:19:47 +02:00
Aliaksandr Valialkin
bb08bab263 lib/storage: inmemoryInvertedIndex prettifying 2019-11-09 14:19:41 +02:00
Aliaksandr Valialkin
6ad7fe8eeb lib/storage: export vm_new_timeseries_created_total metric for determining time series churn rate 2019-11-08 21:21:07 +02:00
Aliaksandr Valialkin
9ea549ed24 lib/storage: sync with cluster changes 2019-11-08 21:21:07 +02:00
Aliaksandr Valialkin
63b05c0b9f app/vmselect/promql: adjust memory limits calculations for incremental aggregate functions
Incremental aggregate functions don't keep all the selected time series in memory -
they keep only up to GOMAXPROCS time series for incremental aggregations.

Take into account that the number of time series in RAM can be higher if they are split
into many groups with `by (...)` or `without (...)` modifiers.

This should reduce the number of `not enough memory for processing ... data points` false
positive errors.
2019-11-08 21:21:07 +02:00
Aliaksandr Valialkin
d888b21657 lib/storage: add inmemory inverted index for the last hour
It should improve performance for `last N hours` dashboards with update intervals smaller than 1 hour.
2019-11-08 21:21:07 +02:00
Aliaksandr Valialkin
1e46961d68 app/{vmbackup,vmrestore}: add vmbackup and vmrestore tools for creating backups on s3 or gcs from instant snapshots
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/203
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/38
2019-11-08 21:21:07 +02:00
Roman Khavronenko
72756ab8c7 #224: add slow_queries, on-going merges and merge speed panels to dashboard (#226) 2019-11-08 21:20:38 +02:00
Aliaksandr Valialkin
543dc8d337 lib/storage: populate partition names from both small and big directories
Certain partition directories may be missing after restoring from backups
if they had no data. Re-create such directories on start.
2019-11-06 19:49:34 +02:00
Aliaksandr Valialkin
e472f0b23b lib/storage: substitute error message about unsorted items in the index block after metricIDs merge with counter
The origin of the error has been detected and documented in the code,
so it is enough to export a counter for such errors at `vm_index_blocks_with_metric_ids_incorrect_order_total`,
so it could be monitored and alerted on high error rates.

Export also the counter for processed index blocks with metricIDs - `vm_index_blocks_with_metric_ids_processed_total`,
so its rate can be compared to `rate(vm_index_blocks_with_metric_ids_incorrect_order_total)`.
2019-11-06 14:28:11 +02:00
Aliaksandr Valialkin
c51ca04a43 lib/storage: take into account the requested time range when caching TSIDs for the given tag filters 2019-11-06 14:28:11 +02:00
Aliaksandr Valialkin
e37f06dc52 lib/storage: dump incorrectly sorted items on a single line; this should simplify error reporting 2019-11-05 18:44:22 +02:00
Aliaksandr Valialkin
5c2099ecfe lib/storage: return back finalPartsToMerge from 2 to 3 in order to prevent from excessive merges in old partitions 2019-11-05 17:27:48 +02:00
Aliaksandr Valialkin
885ba17905 lib/storage: separate the max inverted index scan loops per metric into fast and slow loops
Slow loops could require seeks and expensive regexp matching, while fast loops just scan
all the metricIDs for the given `tag=value` prefix. So these operations must have separate
max-loop multipliers.
2019-11-05 17:27:48 +02:00
Aliaksandr Valialkin
b9a06e8e74 lib/storage: skip repeated useless work when intersection of metricIDs with the given filter is too expensive
This should improve performance for query filters over big number of time series.
2019-11-05 14:19:13 +02:00
Aliaksandr Valialkin
30c8301b11 lib/storage: reduce the maximum inverted index scans before giving up to label filters matching by metric name
The new value reduces the amount of wasted work during index scans over big number of time series.
2019-11-05 14:19:06 +02:00
Aliaksandr Valialkin
e53f9e553d lib/storage: try potentially faster tag filters at first, then apply slower tag filters
The fastest tag filters are non-negative non-regexp, since they are the most specific.
The slowest tag filters are negative regexp, since they require scanning
all the entries for the given label.
2019-11-05 14:19:01 +02:00
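A sketch of the ordering idea (editorial; the cost values are illustrative, not the actual implementation):

    // filterCost ranks tag filters from cheapest/most specific to
    // slowest/least specific, so filters can be sorted by it and the
    // cheapest ones applied first.
    func filterCost(isNegative, isRegexp bool) int {
        switch {
        case !isNegative && !isRegexp:
            return 0 // exact match: the most specific and the fastest
        case !isNegative && isRegexp:
            return 1
        case isNegative && !isRegexp:
            return 2
        default:
            return 3 // negative regexp: must scan all the entries for the label
        }
    }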
Aliaksandr Valialkin
d6ade02fd3 Makefile: add pprof-cpu rule for inspecting CPU profiles with PPROF_FILE=/path/to/cpu.pprof make pprof-cpu 2019-11-04 12:44:09 +02:00
Aliaksandr Valialkin
3c90d77858 lib/storage: pass pointer to MetricName in Fatalf, so it is properly detected as an interface with String() method
This fixes lint errors
2019-11-04 01:07:19 +02:00
Artem Navoiev
478767d0ed add unittests for bytesutil and storage (#221) 2019-11-04 00:54:46 +02:00
Aliaksandr Valialkin
02e0b19a62 lib/storage: tune the returned value from adjustMaxMetricsAdaptive 2019-11-04 00:44:37 +02:00
Aliaksandr Valialkin
6be4456d88 lib/{storage,uint64set}: add Set.Union() function and use it 2019-11-04 00:44:37 +02:00
Aliaksandr Valialkin
9becc26f4b lib/storage: remove interface conversion in hot path during block merging
This should improve merge speed a bit for parts with big number of small blocks.
2019-11-03 12:33:34 +02:00
Aliaksandr Valialkin
c62399eb3e lib/{storage,mergeset}: create missing partition directories after restoring from backups
Backup tools could skip empty directories. So re-create such directories on the first run.
2019-11-02 02:27:11 +02:00
Aliaksandr Valialkin
55d728c849 lib/{decimal,encoding}: optimize float64<->decimal conversion for arrays with zeros or ones
Time series with only zeros or ones frequently occur in monitoring, so it is worth optimizing their handling.
2019-11-01 16:48:12 +02:00
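A sketch of the fast-path detection (editorial; the real encoding details are elided):

    // isConstValue reports whether all values in a equal v. Checking for the
    // common all-zeros / all-ones blocks first lets the encoder emit a trivial
    // representation instead of running the generic conversion.
    func isConstValue(a []float64, v float64) bool {
        if len(a) == 0 {
            return false
        }
        for _, x := range a {
            if x != v {
                return false
            }
        }
        return true
    }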
Aliaksandr Valialkin
808fc0971f lib/{encoding,decimal}: add benchmarks for blocks containing zeros or ones
Time series with such values are quite common in monitoring space,
so it would be great to have benchmarks for them.
2019-11-01 16:48:12 +02:00
Aliaksandr Valialkin
370cfbb365 lib/uint64set: return an empty set instead of a nil set from Set.Clone, since the caller may add data to the cloned set
This fixes the following panic in v1.28.1:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x783a7e]

goroutine 1155 [running]:
github.com/VictoriaMetrics/VictoriaMetrics/lib/uint64set.(*Set).Add(0x0, 0x15b3bfb41e8b71ec)
  github.com/VictoriaMetrics/VictoriaMetrics@/lib/uint64set/uint64set.go:57 +0x2e
github.com/VictoriaMetrics/VictoriaMetrics/lib/storage.(*indexSearch).getMetricIDsForRecentHours(0xc5bdc0dd40, 0x16e273f6b50, 0x16e2745d3f0, 0x5b8d95, 0x10, 0x4a2f51, 0xaa01000000000000)
  github.com/VictoriaMetrics/VictoriaMetrics@/lib/storage/index_db.go:1951 +0x260
github.com/VictoriaMetrics/VictoriaMetrics/lib/storage.(*indexSearch).getMetricIDsForTimeRange(0xc5bdc0dd40, 0x16e273f6b50, 0x16e2745d3f0, 0x5b8d95, 0x10, 0xb296c0, 0xc00009cd80, 0x9bc640)
2019-11-01 16:12:44 +02:00
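The nil-safe shape of the fix (editorial sketch; the Set internals are elided):

    type Set struct {
        // bucket fields elided
    }

    // Clone returns a copy of s. It must never return nil, because the
    // caller may call Add on the clone - an Add on a nil *Set is exactly
    // what panicked in the trace above.
    func (s *Set) Clone() *Set {
        if s == nil {
            return &Set{}
        }
        c := *s // placeholder for the real deep copy of the buckets
        return &c
    }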
Aliaksandr Valialkin
2f58f37f07 app/vmselect/promql: add lag(q[d]) function, which returns the lag between the current timestamp and the timestamp of the last data point in q 2019-11-01 12:21:33 +02:00
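The semantics for one series, per the description above (editorial sketch; timestamps are assumed to be in milliseconds, the result in seconds):

    import "math"

    func lag(timestamps []int64, currTimestamp int64) float64 {
        if len(timestamps) == 0 {
            return math.NaN()
        }
        return float64(currTimestamp-timestamps[len(timestamps)-1]) / 1e3
    }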
Aliaksandr Valialkin
d18ea0c95b app/vmstorage: add -bigMergeConcurrency and -smallMergeConcurrency flags for tuning the maximum number of CPU cores used during merges 2019-10-31 16:19:13 +02:00
Aliaksandr Valialkin
e0b292c6de lib/storage: small cleanup in Storage.add 2019-10-31 14:30:34 +02:00
Aliaksandr Valialkin
86f6be40db README.md: update information about vm_rows{type="indexdb"} metric
The previous information became outdated after v1.28.0, since now each row in the inverted index
can refer to multiple time series.
2019-10-31 13:30:29 +02:00
Aliaksandr Valialkin
e76e21e4c7 lib/decimal: speed up FromFloat for common case with integers 2019-10-31 13:24:59 +02:00
Aliaksandr Valialkin
cfa5e279c2 lib/decimal: increase float64->decimal conversion precision a bit
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/213
2019-10-30 02:04:56 +02:00
Aliaksandr Valialkin
fa7c3ab93a README.md: fix delimiter between {measurement} and {field_name} in the Influx line protocol example 2019-10-30 02:04:56 +02:00
Aliaksandr Valialkin
26d570bb3a lib/storage: get parts to merge after applying the limit on the number of concurrent merges
This should reduce write amplification under high ingestion rate.
2019-10-30 02:04:56 +02:00
Roman Khavronenko
62ed508546 Bump version requirements in description 2019-10-29 22:29:48 +00:00
Aliaksandr Valialkin
2e2eff90d5 lib/{mergeset,storage}: limit the maximum number of concurrent merges; leave smaller number of parts during final merge 2019-10-29 12:45:28 +02:00
Aliaksandr Valialkin
855e5c8963 vendor: update github.com/VictoriaMetrics/fastcache from v1.5.1 to v1.5.2 2019-10-29 11:31:29 +02:00
Aliaksandr Valialkin
04e48ef064 lib/fs: typo fix in comment to WriteFileAtomically 2019-10-29 11:31:26 +02:00
Roman Khavronenko
971206b514 update single-version dashboard with panels: (#219)
* concurrent inserts
* rows ignored
2019-10-28 13:54:10 +02:00
Aliaksandr Valialkin
d063bfaf83 vendor: make vendor-update 2019-10-28 13:39:05 +02:00
Roman Khavronenko
6ab48838bf #215: update klauspost/compress lib (#217)
* #215: update klauspost/compress lib

* #215: bump klauspost/compress lib to 1.9.1
2019-10-28 13:36:35 +02:00
Aliaksandr Valialkin
a42b5db39f lib/decimal: increase float->decimal conversion precision for big numbers
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/213
2019-10-28 13:23:44 +02:00
Aliaksandr Valialkin
b0295dbf2e app/vmselect: add -search.latencyOffset flag for tuning the time after data collection when data points become visible in query results
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/218
2019-10-28 12:31:07 +02:00
Petr Mikusek
3cea200309 Fix typo s/telergam/telegram/ in README.md 2019-10-23 19:30:36 +03:00
Aliaksandr Valialkin
32600ba4fc deployment/docker: upgrade Go builder from go1.13.1 to go1.13.3 2019-10-20 23:50:05 +03:00
hanzai
b3c946e35a warns during rows addition (#214) 2019-10-20 23:41:07 +03:00
Aliaksandr Valialkin
e83fe938c8 all: make fmt 2019-10-17 20:04:34 +03:00
Aliaksandr Valialkin
f708aa7003 Makefile: disable structcheck in golangci-lint, since it gives false positive on embedded structs 2019-10-17 19:59:10 +03:00
Aliaksandr Valialkin
97ce4e03a5 all: add support for GOARCH=386 and fix all the issues related to 32-bit architectures such as GOARCH=arm
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/212
2019-10-17 18:23:23 +03:00
Aliaksandr Valialkin
a398343bb6 vendor: update github.com/valyala/quicktemplate from v1.2.0 to v1.3.1 2019-10-17 18:23:19 +03:00
Aliaksandr Valialkin
6ebf537153 lib/memory: properly handle int overflow in sysTotalMemory
This should fix builds on 32-bit architectures such as arm.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/212
2019-10-17 00:50:48 +03:00
Aliaksandr Valialkin
f752479cb8 app/victoria-metrics/test: add missing docs to public funcs PopulateTimeTplString and PopulateTimeTpl 2019-10-17 00:50:46 +03:00
Aliaksandr Valialkin
61e956e175 app/victoria-metrics: add a test for max_lookback=<duration> query arg
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/209
2019-10-15 21:31:48 +03:00
Aliaksandr Valialkin
c66a691593 app/vmselect/prometheus: add -search.maxLookback command-line flag for overriding dynamic calculations for max lookback interval
This flag is similar to `-search.lookback-delta` if set. If the flag isn't set, the max lookback interval is determined dynamically
from the interval between data points for each input time series.

The interval can be overridden on a per-query basis by passing the `max_lookback=<duration>` query arg to `/api/v1/query` and `/api/v1/query_range`.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/209
2019-10-15 21:31:48 +03:00
Aliaksandr Valialkin
cc21b31502 app/victoria-metrics/test: add a test for PopulateTimeTplString 2019-10-15 21:31:48 +03:00
Aliaksandr Valialkin
195cefd81a lib/prompb: removed outdated README.md 2019-10-14 22:12:57 +03:00
Aliaksandr Valialkin
c1581c3810 vendor: make vendor-update 2019-10-13 23:17:47 +03:00
Aliaksandr Valialkin
16cae15c45 README.md: add integrations section 2019-10-11 19:14:28 +03:00
Aliaksandr Valialkin
f6334bffa1 lib/storage: harden the check that the original items are sorted after mergeTagToMetricIDsRows fails to preserve sort order 2019-10-09 12:13:17 +03:00
Aliaksandr Valialkin
2abd5154e0 lib/storage: typo fix in comment to maxRowsPerSmallPart. 2019-10-08 18:51:20 +03:00
Aliaksandr Valialkin
c1cf7d9f93 lib/storage: add tests for mergeTagToMetricIDsRows and return the original items if the function breaks the items' ordering.
This should protect from the data corruption issues revealed in previous releases up to v1.28.0-beta5.
2019-10-08 16:27:35 +03:00
Aliaksandr Valialkin
956fdd89d3 app/vmselect/promql: take into account the previous point when calculating max_over_time and min_over_time
This lines up with the `first_over_time` function used in `rollup_candlestick`, so `rollup=low` always returns
the minimum value.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/204
2019-10-08 12:30:05 +03:00
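The gist of the change for min_over_time (editorial sketch; assumes a previous point exists just before the window):

    // minOverTime seeds the running minimum with the point preceding the
    // window, so rollup=low cannot miss a minimum sitting on the boundary.
    func minOverTime(prevValue float64, values []float64) float64 {
        min := prevValue
        for _, v := range values {
            if v < min {
                min = v
            }
        }
        return min
    }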
Alexander Danilov
1bc6377863 Improve documentation a little bit 2019-10-07 22:18:40 +03:00
Artem Navoiev
1e2c511747 Add regression test for query api
Part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/187
cover:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/153
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/150
2019-10-07 22:18:04 +03:00
Aliaksandr Valialkin
0eeffb910f vendor: make vendor-update 2019-10-06 15:47:23 +03:00
Aliaksandr Valialkin
4ba86f501a vendor: update github.com/VictoriaMetrics/metrics from v1.7.1 to v1.7.2 2019-10-06 11:20:45 +03:00
Aliaksandr Valialkin
fdc5cfd838 lib/mergeset: reduce the maximum number of cached blocks, since there are reports on OOMs due to too big caches
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/189
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/195
2019-09-30 12:25:40 +03:00
Artem Navoiev
a116f5e7c1 Add regression test for query api (#194)
Part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/187
cover:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/184
2019-09-30 11:25:54 +03:00
Aliaksandr Valialkin
4e9e1ca0f7 app/vmselect/netstorage: hint the OS that tmpBlocksFile is read almost sequentially
This became the case after b7ee2e7af2.
2019-09-30 00:11:14 +03:00
Aliaksandr Valialkin
c1d3705be0 app/vmselect/netstorage: marshal block outside tmpBlocksFile.WriteBlock
This allows re-using the destination buffer for marshaling in the outer loop.
2019-09-28 21:07:13 +03:00
Aliaksandr Valialkin
b7ee2e7af2 app/vmselect/netstorage: reduce the number of disk seeks when the query processes big number of time series 2019-09-28 21:07:09 +03:00
Aliaksandr Valialkin
67d44b0845 app/vmselect/promql: do not generate timestamps for NaN values in timestamp function according to Prometheus logic 2019-09-27 18:54:43 +03:00
Artem Navoiev
1e6ae9eff4 Add regression test for duplicated labels and series
Part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/187
cover:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/155
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/172
2019-09-27 16:52:16 +03:00
Aliaksandr Valialkin
fa81f82714 deployment/docker: switch Go builder image from v1.13.0 to v1.13.1 2019-09-26 17:09:40 +03:00
Aliaksandr Valialkin
0fa6df94a2 lib/storage: optimize TSID comparison 2019-09-26 14:16:02 +03:00
Aliaksandr Valialkin
c39355921e lib/storage: verify whether items are sorted at the end of the call to mergeTagToMetricIDsRows
This should prevent inverted index corruption if a bug in mergeTagToMetricIDsRows is discovered.
2019-09-26 13:13:41 +03:00
Artem Navoiev
cf4786f34a add test for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/161 2019-09-26 12:45:19 +03:00
Aliaksandr Valialkin
3e67862676 README.md: typo fix 2019-09-26 11:03:14 +03:00
Aliaksandr Valialkin
0db9fcedd5 lib/storage: properly match labels against regexp with (?i) flag
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/161
2019-09-26 11:03:10 +03:00
Aliaksandr Valialkin
391530bb74 README.md: mention recommended ext4 options for mkfs.ext4 when creating multi-TB partition 2019-09-25 23:52:43 +03:00
Aliaksandr Valialkin
60c5b368bc README.md: tiny updates 2019-09-25 23:29:55 +03:00
Aliaksandr Valialkin
26dc21cf64 app/vmselect/promql: add increases_over_time and decreases_over_time functions
`increases_over_time(q[d])` returns the number of `q` increases during the given duration `d`.
`decreases_over_time(q[d])` returns the number of `q` decreases during the given duration `d`.
2019-09-25 20:38:44 +03:00
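For a single window the counting is straightforward (editorial sketch; decreases_over_time is symmetric with the comparison flipped):

    // increasesOverTime counts adjacent pairs where the value goes up.
    func increasesOverTime(values []float64) float64 {
        n := 0
        for i := 1; i < len(values); i++ {
            if values[i] > values[i-1] {
                n++
            }
        }
        return float64(n)
    }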
Aliaksandr Valialkin
2444433d83 lib/storage: add missing break in removeDuplicateMetricIDs 2019-09-25 18:23:43 +03:00
Aliaksandr Valialkin
ea4c828bae lib/storage: remove duplicate MetricIDs in tag->metricIDs items before writing them into inverted index 2019-09-25 17:55:13 +03:00
Aliaksandr Valialkin
aebc45ad26 lib/{mergeset,storage}: do not cache inverted index blocks containing tag->metricIDs items
This should reduce the amounts of used RAM during queries with filters over big number of time series.
2019-09-25 14:02:15 +03:00
Aliaksandr Valialkin
2cb811b42f lib/uint64set: optimize Set.AppendTo 2019-09-25 00:34:17 +03:00
Aliaksandr Valialkin
b986516fbe lib/storage: create and use lib/uint64set instead of map[uint64]struct{}
This should improve inverted index search performance for filters matching big number of time series,
since `lib/uint64set.Set` is faster than `map[uint64]struct{}` for both `Add` and `Has` calls.
See the corresponding benchmarks in `lib/uint64set`.
2019-09-24 21:17:55 +03:00
Aliaksandr Valialkin
ef2296e420 lib/storage: typo fix: return dstData instead of data from mergeTagToMetricIDsRows 2019-09-24 19:32:34 +03:00
Aliaksandr Valialkin
a6086cde78 lib/storage: limit the number of metricIDs in tag->metricIDs row
This reduces the overhead on index and metaindex in lib/mergeset
2019-09-24 00:49:51 +03:00
Aliaksandr Valialkin
c9063ece66 lib/storage: share tsids across all the partSearch instances
This should reduce memory usage when big number of time series matches the given query.
2019-09-23 22:35:15 +03:00
Aliaksandr Valialkin
4e26ad869b lib/{storage,mergeset}: verify PrepareBlock callback results
Do not touch the first and the last item passed to PrepareBlock
in order to preserve sort order of mergeset blocks.
2019-09-23 20:43:13 +03:00
Aliaksandr Valialkin
0772191975 lib/mergeset: detect whether we are in test by executable suffix 2019-09-22 23:12:15 +03:00
Aliaksandr Valialkin
48999e5396 lib/workingsetcache: remove data race when resetting c.misses 2019-09-22 19:36:49 +03:00
Aliaksandr Valialkin
0adebae1f8 lib/storage: generate the first tag->metricIDs item in a mergeset block with a single metricID
The first item from each mergeset block goes into index (lib/mergeset.blockHeader),
so it must be short in order to reduce index size.
2019-09-22 19:21:33 +03:00
Aliaksandr Valialkin
267efde5ae README.md: update troubleshooting and tuning sections according to recent questions from our users 2019-09-22 19:12:24 +03:00
Aliaksandr Valialkin
0686ac52c3 lib/{storage,mergeset}: merge tag->metricID rows into tag->metricIDs rows for common tag values
This should improve lookup performance if the same `label=value` pair exists
in big number of time series.
This should also reduce memory usage for mergeset data cache, since `tag->metricIDs` rows
occupy less space than the original `tag->metricID` rows.
2019-09-20 22:06:41 +03:00
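Schematically (editorial illustration with made-up metricIDs), the merge turns rows like

    job=node -> 1
    job=node -> 2
    job=node -> 5

into a single row:

    job=node -> [1, 2, 5]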
Aliaksandr Valialkin
68722c3c74 lib/encoding: optimize UnmarshalUint* and UnmarshalInt* 2019-09-20 13:08:16 +03:00
Aliaksandr Valialkin
a544f49c2b lib/storage: optimize selecting all the metricIDs by scanning MetricID->TSID entries instead of tag->MetricID entries
The number of MetricID->TSID entries is smaller than the number of tag->MetricID entries
and MetricID->TSID entries are usually shorter than tag->MetricID entries.
This should improve performance when selecting all the metricIDs.
2019-09-20 11:54:10 +03:00
Aliaksandr Valialkin
d32f88c378 app/vminsert/opentsdbhttp: remove FATAL prefix from logger.Fatalf errors for the sake of consistency with other logger.Fatalf calls 2019-09-19 22:15:59 +03:00
Aliaksandr Valialkin
00cfb2d2b9 lib/mergeset: rename misleading mergeSmallParts to mergeExistingParts 2019-09-19 21:48:20 +03:00
Aliaksandr Valialkin
37dc223e25 lib/mergeset: use sort.IsSorted instead of sort.SliceIsSorted in inmemoryBlock.isSorted in order to reduce memory allocations 2019-09-19 20:13:08 +03:00
Aliaksandr Valialkin
a84fe76677 lib/storage: use sort.Sort instead of sort.Slice in getSortedMetricIDs 2019-09-19 20:07:22 +03:00
Aliaksandr Valialkin
3a697a935a lib/storage: skip duplicate call to intersectMetricIDsWithTagFilter on zero successful intersects 2019-09-19 17:49:56 +03:00
Aliaksandr Valialkin
51a21c7d4b lib/mergeset: fill partHeader.firstItem on first block flush 2019-09-19 17:48:09 +03:00
Aliaksandr Valialkin
3d83f5d334 lib/storage: mark tag filter returning errFallbackToMetricNameMatch as useless
This will save CPU on subsequent calls for this filter
2019-09-18 19:10:32 +03:00
Aliaksandr Valialkin
6f3b2fd600 deployment/docker/docker-compose.yml: update Prometheus and Grafana image tags
Prometheus: from v2.10.0 to v2.12.0
Grafana: from v6.2.1 to v6.3.5
2019-09-18 18:29:09 +03:00
Aliaksandr Valialkin
8d35718dc6 lib/storage: properly construct keys for uselessTagFiltersCache and register useless negative tag filters there 2019-09-17 23:20:27 +03:00
Aliaksandr Valialkin
33975513d0 vendor: update github.com/valyala/gozstd from v1.6.1 to v1.6.2 2019-09-16 21:50:49 +03:00
Aliaksandr Valialkin
63f2b539df vendor: make vendor-update 2019-09-13 22:48:56 +03:00
Aliaksandr Valialkin
9428ec9c9f deployment/docker: remove file system paths from the compiled binary 2019-09-13 22:45:59 +03:00
Aliaksandr Valialkin
0c8057924f lib/mergeset: properly check for sorted block headers
Fix a typo for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/181
2019-09-13 21:59:29 +03:00
Aliaksandr Valialkin
d4218d27e6 app/vmselect/promql: properly handle subqueries like aggr_func(rollup_func(metric[window:step]))
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/184
2019-09-13 21:41:04 +03:00
hanzai
e2274714b1 lib/workingsetcache: adjust switching from mode=split to mode=whole smoothly and load cachefile successfully 2019-09-13 19:13:01 +03:00
Aliaksandr Valialkin
4d636c244d app/vmselect/promql: binary operation fixes according to Prometheus behaviour
The following issues were fixed:
- VictoriaMetrics could leave superfluous labels when using `on` or `ignoring` modifiers
- VictoriaMetrics could return a `duplicate timeseries` error when using `group_left` or `group_right` with a non-empty label list
2019-09-13 17:42:52 +03:00
Aliaksandr Valialkin
bad53e4207 lib/mergeset: dynamically calculate the maximum number of items per part, which can be cached in OS page cache 2019-09-11 14:53:45 +03:00
Artem Navoiev
3f581a9860 [ci] github actions - run pipeline on pull request. Fix running of test in external PR from forks 2019-09-11 09:30:11 +03:00
sundy-li
398e00aa54 README.md: fix ExtendedPromQL link url 2019-09-10 14:56:19 +03:00
Artem Navoiev
4fd741f40d [tests] check timestamp in tests (#177) 2019-09-08 19:48:38 +03:00
Artem Navoiev
4a2cd85b92 [ci] bump version of go to 1.13 in github actions config 2019-09-08 14:02:23 +03:00
Aliaksandr Valialkin
6c46afb087 vendor: update github.com/klauspost/compress from v1.7.6 to v1.8.2 2019-09-06 00:47:31 +03:00
Aliaksandr Valialkin
7343e8b408 vendor: update golang.org/x/sys 2019-09-06 00:47:31 +03:00
Artem Navoiev
22e3fabefd Add OpenTSDB and Prometheus integration tests (#168)
* [WIP] open tsdb and prometheus integration tests

* app/victoria-metrics: fix race condition on parallel tests
2019-09-05 17:55:38 +03:00
Aliaksandr Valialkin
88f8670ede lib/fs: add MustStopDirRemover for waiting until pending directories are removed on graceful shutdown
This patch is mainly required for laggy NFS. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/162
2019-09-05 11:13:17 +03:00
Aliaksandr Valialkin
9eb5de334f lib/storage: typo fix 2019-09-04 19:58:01 +03:00
Aliaksandr Valialkin
6954e126fc app/vmselect/promql: ignore grouping by destination label in count_values, since such a grouping is performed automatically 2019-09-04 19:58:01 +03:00
Aliaksandr Valialkin
bce35b8dd9 README.md: mention that Prometheus doesn't drop data when VictoriaMetrics restarts 2019-09-04 18:40:39 +03:00
Aliaksandr Valialkin
16dd145586 lib/storage: remove duplicate tag keys on MetricName.Marshal call
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/172
2019-09-04 18:13:45 +03:00
Aliaksandr Valialkin
cd2c9e39da deployment/docker: switch Go builder from Go 1.12.9 to Go 1.13.0 2019-09-04 17:17:23 +03:00
Aliaksandr Valialkin
305e7bc981 app/vmselect/promql: do not return artificial points beyond the last point in time series 2019-09-04 16:35:34 +03:00
Aliaksandr Valialkin
9721d06c6a app/vmselect/prometheus: do not adjust start and end args in /api/v1/query_range if nocache=1 arg is set
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/171
2019-09-04 13:10:09 +03:00
Aliaksandr Valialkin
4862e93024 lib/fs: try harder with directory removal on NFS in the event of temporary lock
Do not give up after 11 attempts of directory removal on laggy NFS.

Add `vm_nfs_dir_remove_failed_attempts_total` metric for counting the number of failed attempts
on directory removal.

Log failed attempts on directory removal after long sleep times.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/162
2019-09-04 12:24:50 +03:00
Aliaksandr Valialkin
db4560ca31 app/vmselect/promql: reset timeseries name on group_left and group_right as Prometheus does 2019-09-03 20:42:54 +03:00
Aliaksandr Valialkin
1575a560f0 app/vmselect/netstorage: adaptively adjust the maximum inmemory file size for storing temporary blocks
The maximum inmemory file size now depends on `-memory.allowedPercent`.
This should improve performance and reduce the number of filesystem calls
on machines with big amounts of RAM when performing heavy queries
over big number of samples and time series.
2019-09-03 13:32:09 +03:00
Aliaksandr Valialkin
e1d76ec1f3 lib/storage: invalidate tagFilters -> TSIDS cache when newly added index data becomes visible to search
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/163
2019-08-29 15:08:35 +03:00
Aliaksandr Valialkin
aeaa5de5fe lib/prompb: apply ba06b47c16
The following commands used:

gofmt -r '(uint64(x)&0x7F)<<shift -> uint64(x&0x7F)<<shift' -w ./lib/prompb/
gofmt -r '(int64(x)&0x7F)<<shift -> int64(x&0x7F)<<shift' -w ./lib/prompb/
2019-08-29 13:35:27 +03:00
Aliaksandr Valialkin
4c0a262a2e .github/workflows: verify builds on freebsd and darwin 2019-08-28 23:05:15 +03:00
Aliaksandr Valialkin
3685fc18d5 Makefile: extract app-local and app-local-pure build rules 2019-08-28 01:34:58 +03:00
Aliaksandr Valialkin
ede7ad3703 app/victoria-metrics: add missing victoria-metrics prefix to --version output when building with make victoria-metrics 2019-08-28 01:28:08 +03:00
Aliaksandr Valialkin
9196c085a7 all: port to FreeBSD on GOARCH=amd64 2019-08-28 01:19:23 +03:00
Aliaksandr Valialkin
3802ae9269 README.md: recommend checking which metrics will be deleted before deleting them 2019-08-27 15:01:16 +03:00
Artem Navoiev
b0090dbd86 add github actions (#160) 2019-08-27 14:42:46 +03:00
Aliaksandr Valialkin
603a79b357 app/vmstorage: increase default values for search.maxTagKeys, search.maxTagValues and search.maxUniqueTimeseries 2019-08-27 14:29:53 +03:00
Aliaksandr Valialkin
2655220c58 lib/storage: go fmt 2019-08-27 14:29:51 +03:00
Aliaksandr Valialkin
bf915fc0db lib/storage: report proper maxMetrics limit when more than -search.maxUniqueTimeseries series match the given filters 2019-08-27 14:21:42 +03:00
Aliaksandr Valialkin
2fc157ff7a lib/storage: properly handle (?i) in the tag filter regexp
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/161
2019-08-26 00:44:45 +03:00
Aliaksandr Valialkin
0dc0006f34 lib/storage: calculate the maximum number of rows per small part from -memory.allowedPercent
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/159

This simplifies error detection in addition to the `vm_rows_ignored_total` counters.
2019-08-25 15:31:47 +03:00
Aliaksandr Valialkin
4b688fffee lib/storage: calculate the maximum number of rows per small part from -memory.allowedPercent
This should improve query speed over recent data on machines with big amounts of RAM
2019-08-25 14:41:12 +03:00
Aliaksandr Valialkin
1402a6b981 lib/storage: properly limit the number of output rows in small and big parts storage
Previously small parts storage didn't take into account the available disk space for big parts.
2019-08-25 14:41:12 +03:00
Aliaksandr Valialkin
3308279c4e lib/storage: remove outdated comment on maxRowsPerSmallPart
The comment became outdated after the commit ed6ac1a5df027f0dfc22448e3b27c26b6f77c67a,
which stops merging of small parts on graceful shutdown instead of waiting
for their completion.
2019-08-25 13:47:32 +03:00
Aliaksandr Valialkin
fb909cf710 app/vminsert/influx: set db label only if the Influx line doesn't have a db tag 2019-08-24 13:52:48 +03:00
Aliaksandr Valialkin
c4e75f09dc README.md: mention that -retentionPeriod must cover the backfilled data 2019-08-24 13:52:48 +03:00
Aliaksandr Valialkin
fb8840ac38 vendor: update github.com/valyala/quicktemplate from v1.1.1 to v1.2.0 2019-08-24 13:41:15 +03:00
Aliaksandr Valialkin
9c9221d1b2 app/vminsert: skip empty tags 2019-08-24 13:36:29 +03:00
Aliaksandr Valialkin
70ca018a57 app/vminsert/opentsdbhttp: skip invalid rows and continue parsing the remaining rows
Invalid rows are logged and counted in `vm_rows_invalid_total{type="opentsdb-http"}` metric
2019-08-24 13:36:29 +03:00
Aliaksandr Valialkin
4266091e4f app/vminsert/opentsdb: skip invalid rows and continue parsing the remaining rows
Invalid rows are logged and counted in `vm_rows_invalid_total{type="opentsdb"}` metric
2019-08-24 13:36:29 +03:00
Aliaksandr Valialkin
8001d29b6e app/vminsert/graphite: skip invalid rows and continue parsing the remaining rows
Invalid rows are logged and counted in `vm_rows_invalid_total{type="graphite"}` metric
2019-08-24 13:36:29 +03:00
Aliaksandr Valialkin
9d3f1fcbb9 app/vminsert/influx: skip invalid rows and continue parsing the remaining rows
Invalid influx lines are logged and counted in `vm_rows_invalid_total{type="influx"}` metric.
2019-08-24 13:36:29 +03:00
Aliaksandr Valialkin
ba7b3806be app/vminsert/influx: do not allow escaping the newline char, since it doesn't occur in real life
The previous report about escaped newline chars in the Influx line protocol was a false alarm.
2019-08-23 18:42:05 +03:00
Aliaksandr Valialkin
7fa88c6efc app/vminsert/opentsdbhttp: allow timestamp as float64 and as string, since it occurs in real life 2019-08-23 18:35:41 +03:00
Aliaksandr Valialkin
4da34b11f8 app/vminsert/influx: handle \r\n aka crlf influx line endings from windows world
Such lines exist in real life.
2019-08-23 18:28:49 +03:00
Aliaksandr Valialkin
a18317adbc app/vminsert/influx: allow escaping newline char
Though newline char isn't mentioned in escape rules at https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/ ,
there are reports that such chars occur in real life
2019-08-23 15:14:46 +03:00
Aliaksandr Valialkin
44d7fc599d app/vminsert/influx: skip comments starting with # in influx line protocol 2019-08-23 14:43:09 +03:00
Aliaksandr Valialkin
dce6079379 README.md: add a section about Go profiling 2019-08-23 13:37:09 +03:00
Aliaksandr Valialkin
98419c00ef vendor: make vendor-update 2019-08-23 10:02:10 +03:00
Aliaksandr Valialkin
ac004665b5 all: return 503 http error if service is temporarily unavailable
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/156
2019-08-23 09:55:07 +03:00
Aliaksandr Valialkin
8c03a8c4b4 app/vminsert: allow setting the maximum number of labels per time series via -maxLabelsPerTimeseries 2019-08-23 08:45:26 +03:00
Aliaksandr Valialkin
8a126c2865 README.md: mention that VictoriaMetrics supports enterprise workloads 2019-08-22 18:00:47 +03:00
Aliaksandr Valialkin
380cae23a0 lib/storage: add benchmarks for regexp filter match / mismatch
These benchmarks allow estimating the performance of regexp filters in PromQL
2019-08-22 16:36:42 +03:00
Aliaksandr Valialkin
1272e407b2 app/vmselect/promql: attempt to repair invalid bucket counts passed to histogram_quantile
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/136
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/154
2019-08-22 14:39:46 +03:00
Aliaksandr Valialkin
5f33fc8e46 app/vminsert: add ability to ingest data via HTTP OpenTSDB /api/put requests
This is a manual merge of https://github.com/VictoriaMetrics/VictoriaMetrics/pull/152
Thanks to nustinov@gmail.com for the initial pull request.
2019-08-22 12:28:32 +03:00
Aliaksandr Valialkin
ec8125606d app/vminsert/opentsdb: fix BenchmarkRowsUnmarshal by adding missing put prefixes to each line 2019-08-21 19:14:47 +03:00
Aliaksandr Valialkin
f4a38f7fb1 app/vmselect/promql: fix panic on -search.disableCache
Reset the cache if it is disabled instead of stopping, since it is stopped on graceful shutdown.
2019-08-21 17:11:52 +03:00
Aliaksandr Valialkin
ab740afd0d app/vmselect/promql: explain why empty timeseries aren't removed in transformLabelValue 2019-08-21 11:29:24 +03:00
Aliaksandr Valialkin
7b5168adfb app/vmselect/promql: remove NaNs from /api/v1/query_range output like Prometheus does
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/153
2019-08-20 23:01:41 +03:00
Aliaksandr Valialkin
a0d480fbf3 app/vmselect/promql: pre-allocate memory for the map used for checking for duplicate timeseries
This should reduce memory allocations for a big number of timeseries
2019-08-20 23:01:39 +03:00
Aliaksandr Valialkin
0dfc1ace53 README.md: add a section about backfilling 2019-08-20 00:34:51 +03:00
Aliaksandr Valialkin
d3fd113a80 app/vmselect/promql: add label_value(q, label_name) func, which returns numeric values of the label with the name label_name in q 2019-08-20 00:28:34 +03:00
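A hypothetical usage sketch (the metric and label names are invented): if series of `build_info` carry a numeric `revision` label, then

    label_value(build_info, "revision")

should return those label values as sample values.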
Aliaksandr Valialkin
4f738c8a15 lib/storage: try a slower path for searching the tag filter with the minimum number of matching time series before giving up with the `increase -search.maxUniqueTimeseries` error 2019-08-19 16:04:21 +03:00
Aliaksandr Valialkin
dd86e6130c app/vmselect/promql: independently track offset hints for tStart and tEnd
This should improve performance if a time series starts or ends inside the selected time range
2019-08-19 13:40:14 +03:00
Aliaksandr Valialkin
6a27657d73 app/vmselect/promql: optimize search for timestamp boundaries in rollupConfig.Do
This should improve the performance of queries over a big number of time series
with a big number of output points.
2019-08-19 13:03:29 +03:00
Aliaksandr Valialkin
c23b66a1ad lib/storage: pre-allocate memory for blockHeader slice in unmarshalBlockHeaders
This reduces memory usage and memory fragmentation when working with a big number of time series
2019-08-19 12:46:33 +03:00
Aliaksandr Valialkin
be39414f9c deployment/docker: switch Go builder from go1.12.8 to go1.12.9 2019-08-18 22:07:58 +03:00
Aliaksandr Valialkin
e74fb23189 app/vmselect/promql: add scrape_interval(q[d]) function, which returns the scrape interval for q over d 2019-08-18 21:08:26 +03:00
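An illustrative sketch of how such a query could estimate the per-series scrape interval of `up` over the last hour:

    scrape_interval(up[1h])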
Aliaksandr Valialkin
582fdc059a app/vmselect/promql: handle comparisons with NaN similarly to Prometheus
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/150
2019-08-18 00:25:50 +03:00
Aliaksandr Valialkin
1c108fc494 app/vmselect/promql: add lifetime(q[d]) function, which returns the lifetime of q over d in seconds.
This function is useful for determining time series lifetime.
`d` must exceed the expected lifetime of the time series, otherwise
the function would return values close to `d`.
2019-08-16 11:59:32 +03:00
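A hypothetical example with `d` set to one week, which must exceed the expected lifetime of the series (`my_metric` is an invented name):

    lifetime(my_metric[1w])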
Aliaksandr Valialkin
d6b5ed6d39 app/vmselect/promql: fix corner-case calculation for ideriv 2019-08-16 11:59:28 +03:00
Aliaksandr Valialkin
639b14e8ab app/vmselect/promql: properly handle corner cases for rollup functions 2019-08-15 23:29:59 +03:00
Aliaksandr Valialkin
483de1cc06 lib/workingsetcache: automatically detect when it is better to double cache capacity 2019-08-15 22:57:55 +03:00
Aliaksandr Valialkin
9e0896055d deployment/docker: switch Go builder from go1.12.7 to go1.12.8 2019-08-15 20:43:36 +03:00
Aliaksandr Valialkin
5bb61b8b38 vendor: update github.com/valyala/gozstd from v1.5.1 to v1.6.0 2019-08-15 12:56:42 +03:00
Aliaksandr Valialkin
75a58dee02 README.md: typo fix 2019-08-14 03:28:07 +03:00
Aliaksandr Valialkin
5b41122292 lib/storage: properly cache tagFilters -> TSIDs entries from historical index 2019-08-14 02:29:58 +03:00
Aliaksandr Valialkin
964c296f96 lib/storage: compress contents of cache for tagFilters -> TSIDs
This should increase cache capacity
2019-08-14 02:29:52 +03:00
Aliaksandr Valialkin
9ecb994671 app/vmselect/promql: store compressed results in the cache
This should increase rollup results cache capacity.
2019-08-14 02:29:45 +03:00
Aliaksandr Valialkin
9d41e0dcae README.md: reduce the recommended max_shards value according to test results
See https://github.com/prometheus/prometheus/issues/5803#issuecomment-520973662
2019-08-13 22:33:10 +03:00
Aliaksandr Valialkin
09fc6e22e5 all: use workingsetcache instead of fastcache
This should reduce the amount of RAM required for processing time series
with non-zero churn rate.

The previous cache behavior can be restored with `-cache.oldBehavior` command-line flag.
2019-08-13 21:39:34 +03:00
Aliaksandr Valialkin
99c37c2c96 lib/fs: add test for IsTemporaryFileName 2019-08-13 21:33:45 +03:00
Aliaksandr Valialkin
06c2c25544 Makefile: consistency renaming: check_all -> check-all 2019-08-13 21:31:19 +03:00
Aliaksandr Valialkin
ec1b185991 lib/storage: remove broken BenchmarkIndexDBSearchTSIDs 2019-08-13 20:22:08 +03:00
Aliaksandr Valialkin
0967683ae9 lib: move common code for creating flock.lock file into fs.CreateFlockFile 2019-08-13 01:45:46 +03:00
Aliaksandr Valialkin
ad8a43b4e1 README.md: fix metric names in influx line protocol example
Default separator between `measurement` and `field_name` is `_`.
2019-08-12 15:58:34 +03:00
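For illustration, with the default `_` separator a hypothetical Influx line such as

    cpu,host=h1 usage_user=12.5,usage_system=3.1

should map to the metrics `cpu_usage_user{host="h1"}` and `cpu_usage_system{host="h1"}`.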
Aliaksandr Valialkin
7346982763 README.md: mention that Influx line protocol accepts timestamps in nanoseconds by default 2019-08-12 15:31:52 +03:00
Aliaksandr Valialkin
5d8d110010 lib/fs: atomically create file with the given contents on WriteFileAtomically
This should prevent corruption of `transaction` and `metadata.json` files
on unclean shutdown such as OOM, `kill -9`, power loss, etc.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/148
2019-08-12 15:02:55 +03:00
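The usual pattern behind such atomic file creation is write-to-a-temporary-file, fsync, then rename. A minimal Go sketch of this pattern, assuming nothing about the actual lib/fs code (package, function and file names here are illustrative):

    package fsutil

    import (
        "os"
        "path/filepath"
    )

    // atomicWriteFile writes data to path so that readers observe either
    // the old contents or the new contents, never a partially written file.
    func atomicWriteFile(path string, data []byte) error {
        tmpPath := path + ".tmp"
        f, err := os.Create(tmpPath)
        if err != nil {
            return err
        }
        if _, err := f.Write(data); err != nil {
            f.Close()
            return err
        }
        // Flush the file contents to stable storage before the rename.
        if err := f.Sync(); err != nil {
            f.Close()
            return err
        }
        if err := f.Close(); err != nil {
            return err
        }
        // Rename is atomic on POSIX filesystems.
        if err := os.Rename(tmpPath, path); err != nil {
            return err
        }
        // Sync the parent directory so the rename itself survives power loss.
        dir, err := os.Open(filepath.Dir(path))
        if err != nil {
            return err
        }
        defer dir.Close()
        return dir.Sync()
    }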
Aliaksandr Valialkin
0b488f1e37 lib/storage: do not change timestamps to constant rate if values are constant or have constant delta
This breaks the original timestamps, which results in issues like
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/120 and
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/141 .
2019-08-06 15:40:07 +03:00
Aliaksandr Valialkin
b8bb74ffc6 app/vmstorage: add vm_concurrent_addrows_* metrics for tracking concurrency for Storage.AddRows calls
Also track the number of rows dropped when the timeout on the concurrency limit
for Storage.AddRows is exceeded. This number is tracked in `vm_concurrent_addrows_dropped_rows_total`
2019-08-06 15:08:33 +03:00
Aliaksandr Valialkin
5c9e48417a vendor: update github.com/VictoriaMetrics/metrics to v1.7.1 2019-08-05 19:21:36 +03:00
Aliaksandr Valialkin
5c83f8e203 app: add vm_concurrent_ metrics for visibility in concurrency limiters for vminsert and vmselect 2019-08-05 18:30:57 +03:00
Aliaksandr Valialkin
05713469c3 vendor: make vendor-update 2019-08-05 10:33:21 +03:00
Aliaksandr Valialkin
8822079b77 lib/storage: properly reset partSearch.fetchData in partSearch.reset 2019-08-05 09:56:06 +03:00
Aliaksandr Valialkin
99e048c9df app/vmselect: allow passing match[], start and time to /api/v1/label/<label_name>/values
`/api/v1/label/<label_name>/values?match[]=q` emulates `label_values(q, <label_name>)`
call in Grafana templating.
2019-08-04 23:09:21 +03:00
Aliaksandr Valialkin
47e4b50112 app/vmselect: optimize /api/v1/series by skipping storage data
Fetch and process only time series metainfo.
2019-08-04 23:01:28 +03:00
Aliaksandr Valialkin
241170dc05 app/vmselect/prometheus: prevent fetching and scanning all the data on /api/v1/series calls by default 2019-08-04 19:42:36 +03:00
Aliaksandr Valialkin
1c69f4eadc app/vmselect/promql: tune automatic window adjustment
Increase the window adjustment for small scrape intervals,
since they usually have higher jitter.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/139
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/134
2019-08-04 19:34:05 +03:00
Aliaksandr Valialkin
8d93b15b86 app/vmselect/promql: further increase the allowed jitter for scrape interval
Real-world production data shows higher jitter than 1/8 of the scrape interval.
This may result in gaps on the graph, so increase the allowed jitter to 1/4
of the scrape interval in order to reduce the probability of gaps on graphs
for time series with high scrape-interval jitter.
2019-08-02 20:10:23 +03:00
Aliaksandr Valialkin
fcc166622a README.md: mention that monitoring is recommended for VictoriaMetrics 2019-08-02 15:27:10 +03:00
Aliaksandr Valialkin
a9f39168d2 app/vminsert/influx: round automatically generated timestamp according to the given precision arg 2019-08-02 00:24:06 +03:00
Aliaksandr Valialkin
f090b2e917 app/vmselect/promql: tolerate higher jitter in scrape interval
Allow jitter of up to 1/8 instead of 1/16 of the scrape interval.
This should improve graphs when `step` is smaller than the `scrape_interval`.
2019-08-01 23:26:00 +03:00
Aliaksandr Valialkin
10caad4728 lib/decimal: modernize tests a bit 2019-07-31 21:10:03 +03:00
Aliaksandr Valialkin
3b90c2a99a Add CODE_OF_CONDUCT.md 2019-07-31 15:44:26 +03:00
Aliaksandr Valialkin
57ec4f5f92 Update issue templates
Add a template for feature request
2019-07-31 15:41:57 +03:00
Aliaksandr Valialkin
01cb15b6f5 Update issue templates
Add a template for bug report.
2019-07-31 15:39:41 +03:00
Aliaksandr Valialkin
b9256511e8 README.md: add join slack badge 2019-07-31 15:27:11 +03:00
Aliaksandr Valialkin
3a38b23fa3 app/vmselect/promql: add vm_slow_queries_total metric for counting slow queries
The query is slow if its execution time exceeds `-search.logSlowQueryDuration`
2019-07-31 03:36:37 +03:00
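A hedged sketch of a query for tracking the rate of slow queries via this counter (the window choice is arbitrary):

    rate(vm_slow_queries_total[5m])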
Aliaksandr Valialkin
8bd6f1f6df app/vmselect/promql: return NaN from histogram_quantile if at least a single bucket is broken 2019-07-31 01:18:07 +03:00
Aliaksandr Valialkin
4aaa5c2efc app/vmselect/promql: allow adjusting window for default rollup function
The default rollup function is `last_over_time`. It must support adjusting
the provided window in order to prevent gaps on the graph
for window values smaller than the scrape interval.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/134
2019-07-31 00:45:54 +03:00
Aliaksandr Valialkin
10f5a26bec app/vmselect/promql: return NaN values if invalid bucket counts are passed to histogram_quantile
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/136
2019-07-30 22:05:10 +03:00
Aliaksandr Valialkin
c14fd6c43f lib/storage: typo fixes after a77e88db7d 2019-07-30 15:38:52 +03:00
Aliaksandr Valialkin
a77e88db7d lib/storage: fix matching against tag filter with empty name
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/137
2019-07-30 15:15:09 +03:00
Aliaksandr Valialkin
aad7236e5d README.md: formatting fixes 2019-07-28 22:02:42 +03:00
Artem Navoiev
5e5de6be9a Create CONTRIBUTING.md 2019-07-28 20:42:32 +03:00
Anton Patsev
90cf6f3fcb change /usr/bin/victoriametrics to /usr/bin/victoria-metrics-prod (#132) 2019-07-28 20:40:46 +03:00
Artem Navoiev
8e3d69219f Add roadmap (#130)
* Add roadmap
* Fix typos
2019-07-28 18:39:39 +01:00
Aliaksandr Valialkin
b842a2eccc README.md: mention that VictoriaMetrics needs free disk space for background merges 2019-07-28 12:26:16 +03:00
Aliaksandr Valialkin
afcc7fb167 app/vmselect/netstorage: improve error message when reading data blocks from storage
Mention the block number in the error. This should simplify troubleshooting in this code.
2019-07-28 12:12:35 +03:00
Aliaksandr Valialkin
57a57c711a package: changed the remaining /usr/local/bin to /usr/bin
This is a follow-up to 68f260d878
2019-07-28 11:08:07 +03:00
Anton Patsev
68f260d878 change /usr/local/bin to /usr/bin (#131) 2019-07-28 11:06:24 +03:00
Aliaksandr Valialkin
1eade9b358 app/vminsert: add vm_rows_per_insert summary metric
This metric should help tuning batch sizes on clients writing data to VictoriaMetrics
2019-07-27 13:21:46 +03:00
Aliaksandr Valialkin
7e8747f6ed README.md: add a section for production ARM build 2019-07-26 22:34:31 +03:00
Aliaksandr Valialkin
0168a1b658 package: various fixes
- Use `-prod` binaries instead of development binaries for both deb and rpm packages.
- Fix binary directory from /usr/sbin to /usr/local/bin as outlined in package/victoria-metrics.service
- Fix binary name from `victoriametrics` to `victoria-metrics-prod` in package/victoria-metrics.service
2019-07-26 22:31:04 +03:00
Aliaksandr Valialkin
bf6cbb762c app/vminsert: improve error messages for Influx, OpenTSDB and Graphite parsing
Include in the error message the line which failed to parse.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/127
2019-07-26 22:08:52 +03:00
Kostya Vasilyev
6aeac37fc5 pick up .service file from ./rpm (#126)
* pick up .service file from ./rpm

* feedback from @patsevanton

* remove 'start' from ExecStart command
2019-07-26 21:56:30 +03:00
Aliaksandr Valialkin
c98725db55 app/vmstorage: consistency renaming for ignored rows metrics
    vm_too_big_timestamp_rows_total -> vm_rows_ignored_total{reason="big_timestamp"}
    vm_too_small_timestamp_rows_total -> vm_rows_ignored_total{reason="small_timestamp"}
2019-07-26 20:02:06 +03:00
Anton Patsev
d8043f7161 Change default value storageDataPath (#125)
Fixes #124 .
2019-07-26 14:13:55 +03:00
Aliaksandr Valialkin
f586e1f83c lib/storage: add metrics for calculating skipped rows outside the retention
The metrics are:

    - vm_too_big_timestamp_rows_total
    - vm_too_small_timestamp_rows_total
2019-07-26 14:11:01 +03:00
Kostya Vasilyev
d1132bb188 deb packaging fixes: 1) stop the service in prerm 2) reload services in postrm (#123) 2019-07-26 12:38:59 +03:00
Aliaksandr Valialkin
915fb6df79 README.md: mention that arm builds can run on Raspberry Pi 2019-07-26 12:28:40 +03:00
Kostya Vasilyev
89eb6d78a4 RPM packaging (#122) 2019-07-25 23:47:41 +03:00
Aliaksandr Valialkin
17096b5750 app/vmselect/promql: return NaN from count() over zero time series
This aligns `count` behavior with Prometheus.
2019-07-25 22:02:30 +03:00
Aliaksandr Valialkin
66efa5745f app/vmselect/promql: properly calculate incremental aggregations grouped by __name__
Previously the following query could fail when multiple distinct metric names matched:

    sum(count_over_time{__name__!=''}) by (__name__)
2019-07-25 21:53:20 +03:00
Anton Patsev
106ab78a47 Add package/rpm/ (#121) 2019-07-25 11:21:55 +03:00
Aliaksandr Valialkin
8aa474d685 README.md: move how to build VictoriaMetrics section to the bottom
This streamlines the `getting started` experience
2019-07-25 11:17:30 +03:00
Aliaksandr Valialkin
9e059bb330 README.md: add links to ARM build and Pure Go build in TOC 2019-07-25 11:05:35 +03:00
Aliaksandr Valialkin
2346335ea6 README.md: moved advanced topics to the bottom, so they don't clutter getting started workflow 2019-07-25 11:00:41 +03:00
Aliaksandr Valialkin
b339890dca lib/encoding/zstd: go fmt 2019-07-25 01:37:16 +03:00
Aliaksandr Valialkin
6c4ca89d75 lib/encoding/zstd: disable CRC checks in pure Go build
This should give slightly better compression and decompression performance.
Additionally this shaves off 4 bytes from each compressed block.
2019-07-24 19:17:16 +03:00
Roman Khavronenko
f0fe7b5ad6 fix typo (#117) 2019-07-24 07:48:28 +01:00
Aliaksandr Valialkin
22ed4e7fd4 vendor: make vendor-update 2019-07-23 20:00:19 +03:00
Aliaksandr Valialkin
162f1fb1b7 all: small updates after PR #114 2019-07-23 19:54:50 +03:00
Aliaksandr Valialkin
d07f616609 lib/encoding: small fixes in tests after the PR #114 2019-07-23 19:37:51 +03:00
Roman Khavronenko
5bf4e5ffb5 all: add Pure Go build (pull request #114)
Updates #94
2019-07-23 19:26:39 +03:00
Kostya Vasilyev
8c3629a892 Debian packaging (#116)
* initial commit of deb packaging

* Incorporated feedback from @valyala:
- Put data directory under /var/lib
- More beef in systemd file
- Packaging for arm64
- Package all target which builds and packages both amd64 and arm64

* Remove PIDFile from systemd unit, useless

* per PR feedback, move debian specific files into deb subdirectory

Updates #107 .
2019-07-22 17:12:48 +03:00
Aliaksandr Valialkin
ea07cf68ba README.md: add querying Graphite data section
Mention that Graphite data may be read either via Prometheus querying API
or via go-graphite/carbonapi. See https://github.com/go-graphite/carbonapi/blob/master/cmd/carbonapi/carbonapi.example.prometheus.yaml
2019-07-21 16:10:19 +03:00
Roman Khavronenko
4ee41bab43 add versioning to dashboard description (#113) 2019-07-21 14:34:50 +03:00
Roman Khavronenko
1273f31f19 Add CPU usage panel; rename Go runtime to Resource usage (#112)
* add CPU usage panel; rename `Go runtime` to `Resource usage`

* rm irate from CPU usage panel

Updates #92 .
2019-07-20 17:24:24 +03:00
Aliaksandr Valialkin
0f2ecde0e6 lib/encoding: improve gauge series detection
- Series with negative values are always gauges
- Counters may only have increasing values with possible counter resets

This should improve compression ratio for gauge series which
were previously mistakenly detected as counters.
2019-07-20 14:05:09 +03:00
Aliaksandr Valialkin
6cd77d4847 deployment: switch builder from go1.12.6 to go1.12.7 2019-07-20 12:15:05 +03:00
Roman Khavronenko
fb14f23532 mention docker-compose as option to spin up VM (#97) 2019-07-16 00:45:21 +03:00
Aliaksandr Valialkin
daba0cdb05 lib/netutil: do not count timeouts as network errors 2019-07-15 23:05:35 +03:00
Aliaksandr Valialkin
575d2f0a91 app/vminsert: use netutil.TCPListener for collecting network-related metrics for Graphite and OpenTSDB TCP traffic 2019-07-15 22:58:00 +03:00
Aliaksandr Valialkin
ec1b439329 README.md: expand capacity planning section a bit 2019-07-12 21:19:27 +03:00
Aliaksandr Valialkin
6a943a6a58 app/vmselect/promql: remove empty time series after applying filters like q > 0
This should reduce CPU and RAM usage for queries over high number of time series.
2019-07-12 19:59:27 +03:00
Aliaksandr Valialkin
998525999c vendor: update github.com/VictoriaMetrics/metrics to v1.7.0
This version adds support for `process_*` metrics similar
to metrics exposed by https://github.com/prometheus/client_golang .

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/92
2019-07-12 17:22:53 +03:00
Aliaksandr Valialkin
ab88890523 app/vmselect/promql: parallelize incremental aggregation to multiple CPU cores
This may reduce response times for aggregation over a big number of time series
with small step between output data points.
2019-07-12 15:52:22 +03:00
Aliaksandr Valialkin
374d681848 README.md: clarify that Prometheus replicates data to remote storage 2019-07-12 02:51:04 +03:00
Aliaksandr Valialkin
e75d5f47c4 lib/storage: remove unused function isTooBigTimeRangeForDateMetricIDs 2019-07-12 02:28:23 +03:00
Aliaksandr Valialkin
fc90ebf43c lib/storage: do not reduce maxMetrics on time ranges exceeding maxDaysForDateMetricIDs
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/95
2019-07-12 02:20:34 +03:00
Aliaksandr Valialkin
62a7353479 app/vmselect/prometheus: set start arg in /api/v1/series to the minimum allowed time by default as Prometheus does
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/91
2019-07-11 17:10:14 +03:00
Aliaksandr Valialkin
54bd21eb4a app/vmselect/prometheus: convert negative times to 0, since they arent supported by the storage 2019-07-11 17:07:20 +03:00
Aliaksandr Valialkin
2bd1a01d1a lib/storage: do not pollute inverted index with data for samples outside the retention period 2019-07-11 17:04:56 +03:00
Artem Navoiev
cd4833d3d0 integration tests 2019-07-11 15:48:08 +03:00
Aliaksandr Valialkin
101fa258e5 app/vmstorage: prepare for integration tests with multiple Init / Stop cycles 2019-07-11 15:34:50 +03:00
Aliaksandr Valialkin
d031e04023 lib/storage: use fast path for orSuffix when searching for metricIDs against plain tag value 2019-07-11 14:48:37 +03:00
Aliaksandr Valialkin
43ea4ce428 lib/storage: remember and skip individual tag filters matching too many metrics
This saves CPU time by skipping useless matching for individual tag filters.
2019-07-11 14:48:30 +03:00
Aliaksandr Valialkin
a336bb4e22 app/vmselect/promql: reduce RAM usage for aggregates over big number of time series
Calculate incremental aggregates for `aggr(metric_selector)` function instead of
keeping all the time series matching the given `metric_selector` in memory.
2019-07-10 13:04:39 +03:00
Aliaksandr Valialkin
1fe6d784d8 all: consistency renaming: bytesSize -> sizeBytes 2019-07-10 00:47:36 +03:00
Aliaksandr Valialkin
55fe36149c app/vmselect/promql: mention -search.logSlowQueryDuration flag value in the slow query log message 2019-07-10 00:41:24 +03:00
Aliaksandr Valialkin
9203170eb2 app/vmselect/promql: extract removeGroupTags function for removing unneeded tags from MetricName according to the given modifierExpr 2019-07-09 23:20:48 +03:00
Aliaksandr Valialkin
2db685c19c app/vmselect/promql: properly preserve metric name after applying functions in any case from transformFuncsKeepMetricGroup 2019-07-09 23:10:35 +03:00
Aliaksandr Valialkin
6ddfb06b52 README.md: add alerting section 2019-07-08 22:45:34 +03:00
Aliaksandr Valialkin
40a6c0d672 app/vmselect/prometheus: typo fix 2019-07-07 23:34:23 +03:00
Aliaksandr Valialkin
1371024747 app/vmselect/prometheus: handle minTime and maxTime values that may be set by Promxy or Prometheus client
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/88
2019-07-07 21:53:48 +03:00
Roman Khavronenko
c27c6de297 add panels for Active time series, Disk space usage (datapoints) and Disk space usage (index) (#87) 2019-07-04 22:15:24 +03:00
Aliaksandr Valialkin
0c629429de README.md: clarify upgrading and applying new config sections 2019-07-04 20:07:00 +03:00
Aliaksandr Valialkin
4dbd642c86 app/vmselect/promql: remove empty timeseries left after topk call 2019-07-04 19:42:39 +03:00
Aliaksandr Valialkin
56c154f45b all: add vm_data_size_bytes metrics for easy monitoring of on-disk data size and on-disk inverted index size 2019-07-04 19:42:30 +03:00
Aliaksandr Valialkin
8d83dcf332 README.md: update community and contributions section 2019-07-04 09:36:36 +03:00
Aliaksandr Valialkin
9a4b2b8315 app/vmselect/prometheus: update adjustLastPoints function
- Do not overwrite last points by the previous NaNs, since this may result in empty time series.
- Overwrite the last 2 points instead of 3. This should be enough in most cases.
2019-07-04 09:14:18 +03:00
Aliaksandr Valialkin
e06866005d app/vmselect/promql: gracefully handle duplicate timestamps in irate and rollup_rate funcs
Previously such timestamps resulted in `+Inf` results. Now the previous timestamp is used
for the calculations.
2019-07-03 12:39:55 +03:00
Aliaksandr Valialkin
2c76a9c9ab README.md: enumerate the most interesting metrics exported at /metrics page 2019-07-01 23:41:08 +03:00
Aliaksandr Valialkin
b9166a60ff app/vmselect: do not return empty time series in /api/v1/query result 2019-07-01 17:16:34 +03:00
Aliaksandr Valialkin
c7034fc51b lib/memory: attempt #3 to determine memory limit for LXC container
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/84
2019-07-01 14:01:13 +03:00
Aliaksandr Valialkin
715c423f1a README.md: mention Thanos vs VictoriaMetrics article 2019-07-01 12:26:47 +03:00
Aliaksandr Valialkin
ca74e29458 README.md: explain how to configure HA setup for Prometheus HA pairs 2019-06-29 19:54:46 +03:00
Aliaksandr Valialkin
a41955863a lib/mergeset: make fmt 2019-06-29 14:25:26 +03:00
Aliaksandr Valialkin
2ecb117082 lib/storage: skip non-matching metricIDs in sortedFilter
This should improve performance for big sortedFilter lists.
2019-06-29 13:48:32 +03:00
Aliaksandr Valialkin
0c88afa386 lib/mergeset: speed up binarySearchKey by skipping the first item during binary search 2019-06-29 13:45:49 +03:00
Aliaksandr Valialkin
74c0fb04f3 app/vmselect/promql: consistency renaming: candlestick -> rollup_candlestick 2019-06-29 03:13:02 +03:00
Aliaksandr Valialkin
828078eb45 lib/memory: remove TestReadLXCMemoryLimit, since it doesnt work in Travis 2019-06-28 18:22:46 +03:00
Aliaksandr Valialkin
7b59466667 lib/memory: attempt #2 to determine memory limit inside LXC container
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/84
2019-06-28 18:08:15 +03:00
Aliaksandr Valialkin
79ac02ba74 README.md: clean up <img> attributes 2019-06-28 17:57:43 +03:00
Aliaksandr Valialkin
593bd35aaa lib/memory: an attempt to read proper memory limit inside LXC container
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/84
2019-06-28 15:34:30 +03:00
Aliaksandr Valialkin
7354f10336 vendor: update github.com/VictoriaMetrics/metrics to v1.6.2
This fixes Summary printing for *_count and *_sum values with metric names containing labels.
2019-06-28 14:17:17 +03:00
Aliaksandr Valialkin
e8998c69a7 vendor: update github.com/VictoriaMetrics/metrics to v1.6.1 2019-06-28 14:06:49 +03:00
Aliaksandr Valialkin
55bcf60ea6 app/vmselect: fix 32bit arm build
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/83
2019-06-27 19:36:17 +03:00
Aliaksandr Valialkin
796b010139 app/vmselect: add candlestick(m[d]) func for returning open, close, low and high rollups on the given time range d
This function is frequently used in financial apps. See https://en.wikipedia.org/wiki/Candlestick_chart
2019-06-27 18:46:13 +03:00
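A hypothetical usage sketch charting daily open/close/low/high candles for an invented `trade_price` series:

    candlestick(trade_price[1d])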
Aliaksandr Valialkin
0c8a09c8e1 README.md: mention about global query view 2019-06-27 17:38:37 +03:00
Aliaksandr Valialkin
c1be1e4342 lib/storage: optimize time series search by regexp filter
This should improve search speed on label filters like `{foo=~"bar.+baz"}`
2019-06-27 16:17:43 +03:00
Aliaksandr Valialkin
0c8d463307 README.md: mention that Prometheus 2.10.0+ works better with remote_write
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/80
2019-06-27 00:54:48 +03:00
Jiri Tyr
e0fccc6c60 Change the default influxMeasurementFieldSeparator 2019-06-26 13:22:03 +03:00
Aliaksandr Valialkin
1f7d9a213a app/vminsert: fix infinite loop when reading two lines without a trailing newline
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/82
2019-06-26 02:51:56 +03:00
Aliaksandr Valialkin
7ce1f73ada README.md: add more information to rough estimation of the required resources 2019-06-26 02:20:33 +03:00
Aliaksandr Valialkin
e605315f01 README.md: add link to slack chat 2019-06-26 02:05:38 +03:00
Aliaksandr Valialkin
fcef49184b README.md: clarify docs about Influx line protocol support 2019-06-26 00:05:09 +03:00
Aliaksandr Valialkin
844ce4731e app/vmselect/promql: suppress error when template func is used inside modifier list. Just leave it as is
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/78
2019-06-25 20:43:22 +03:00
Aliaksandr Valialkin
683bf2a11f lib/storage: make sure non-nil args are passed to openIndexDB 2019-06-25 20:10:04 +03:00
Aliaksandr Valialkin
eb2283a029 lib/storage: reduce too big maxMetrics in getTagFilterWithMinMetricIDsCountAdaptive
This should improve inverted index search performance for a big amount of unique time series
when a big -search.maxUniqueTimeseries is set.
2019-06-25 19:55:27 +03:00
Aliaksandr Valialkin
e8377011ab lib/storage: free up memory from caches owned by indexDB when it is deleted 2019-06-25 14:42:44 +03:00
Aliaksandr Valialkin
33ea2120c3 lib/storage: use unversioned keys for tag cache in extDB
Data in ExtDB cannot be changed, so it is OK to use unversioned keys for tag cache.
This should improve performance for index lookups over a big amount of time series.
2019-06-25 13:08:58 +03:00
Aliaksandr Valialkin
cf63669303 lib/storage: skip searching in extDB if it doesn't contain items for the given time range
This should improve inverted index search performance for a big amount
of unique time series when the search is performed only on recent data.
2019-06-25 13:00:37 +03:00
Aliaksandr Valialkin
feacfffe89 app/vmselect/promql: increase default value for -search.maxPointsPerTimeSeries from 10k to 30k
This may be required for subqueries with small steps. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/77
2019-06-24 22:53:18 +03:00
Aliaksandr Valialkin
4bb738ddd9 app/vmselect/promql: adjust value returned by linearRegression to the end of time range like Prometheus does
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/71
2019-06-24 22:45:58 +03:00
Aliaksandr Valialkin
90e72c2a42 app/vmselect/promql: add sum2 and sum2_over_time, geomean and geomean_over_time funcs.
These functions may be useful for statistical calculations.
2019-06-24 16:44:44 +03:00
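Assuming the standard statistical definitions, for values v1..vn in the aggregation group or lookbehind window these should behave as

    sum2(q)    = v1^2 + v2^2 + ... + vn^2
    geomean(q) = (v1 * v2 * ... * vn)^(1/n)

i.e. roughly `sum(q * q)` and `exp(avg(ln(q)))` for positive values.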
Aliaksandr Valialkin
ccd8b7a003 README.md: mention how to recover from broken parts due to disk errors
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/76
2019-06-24 14:17:58 +03:00
Aliaksandr Valialkin
d32845781e README.md: remove unused TOC items 2019-06-24 14:12:07 +03:00
Aliaksandr Valialkin
af2ceaaa0b lib/storage: mention source parts on merge error
This should make it easier to determine the broken source part.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/76
2019-06-24 14:08:43 +03:00
Aliaksandr Valialkin
61926bae01 app/vmselect/promql: adjust the provided window only for range functions with dt in denominator
This should fix range function calculations such as `changes(m[d])` where `d` is smaller
than the scrape interval.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/72
2019-06-23 19:27:31 +03:00
Aliaksandr Valialkin
ee13256f74 app/vmselect/promql: use deriv_fast instead of deriv in ttf, since deriv calculations have been changed recently 2019-06-23 15:54:18 +03:00
Aliaksandr Valialkin
3b3b2f1e6e app/vmselect/promql: adjust ttf calculation, so deriv(freev) for freev=m[d] could be properly calculated 2019-06-23 14:31:19 +03:00
Aliaksandr Valialkin
c9cbf5351c vendor: update github.com/valyala/gozstd to v1.5.1 2019-06-22 00:14:19 +03:00
Aliaksandr Valialkin
146c6e1f72 app/vmselect/promql: typo fixes in comments 2019-06-21 23:22:59 +03:00
Aliaksandr Valialkin
d261fa2885 app/vmselect/promql: add deriv_fast function for calculating fast derivative
`deriv_fast` calculates derivative based on the first and the last point on the interval
instead of calculating linear regression based on all the data points on the interval.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/73
2019-06-21 23:05:39 +03:00
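In other words, for the first sample (t_first, v_first) and the last sample (t_last, v_last) on the interval:

    deriv_fast(m[d]) = (v_last - v_first) / (t_last - t_first)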
Aliaksandr Valialkin
5b47c00910 app/vmselect/promql: use linear regression in deriv func like Prometheus does
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/73
2019-06-21 22:59:46 +03:00
Aliaksandr Valialkin
9e1119dab8 app/vmselect/promql: adjust the data model to the model used in Prometheus
Do not take into account data points on the range `[timestamp .. timestamp+step)`
when calculating value on the given `timestamp`.
Use only data points from the past when performing these calculations like Prometheus does.

This should reduce discrepancies between results returned by VictoriaMetrics
and results returned by Prometheus.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/72
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/71
2019-06-21 21:54:48 +03:00
Aliaksandr Valialkin
47a3228108 app/vmselect/promql: do not strip __name__ from time series after a binary comparison operation
Example:

  foo > 10

This would leave the `foo` name on all the matching time series on the left.
2019-06-21 13:09:38 +03:00
Aliaksandr Valialkin
e88a03323a all: initial stubs for Windows support; see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/70 2019-06-20 20:07:10 +03:00
Aliaksandr Valialkin
b75630fcf4 Makefile: enable golangci-lint in make check_all 2019-06-20 14:52:58 +03:00
Aliaksandr Valialkin
80db24386e lib/storage: typo fixes found by golangci-lint; updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:37:55 +03:00
Aliaksandr Valialkin
296c14317f lib/netutil: remove unused TCPListener.name; updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:36:15 +03:00
Aliaksandr Valialkin
973e4b5b76 app/vmselect/promql: remove unused func keepLastValue; updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:35:11 +03:00
Aliaksandr Valialkin
7aadec8e3c app/vmselect/promql: typo fix; updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:33:47 +03:00
Aliaksandr Valialkin
45fc8cb72f Makefile: add make golangci-lint rule for running golangci-lint run; updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:30:55 +03:00
Aliaksandr Valialkin
4b2523fb40 app/vminsert/opentsdb: remove unused const maxReadPacketSize; update https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:30:06 +03:00
Aliaksandr Valialkin
70ba36fa37 app/vmselect/prometheus: return better error messages on missing args to /api/v1/* 2019-06-20 14:07:55 +03:00
Aliaksandr Valialkin
a78b3dba7f app/vmstorage: add vm_cache_entries{type="storage/hour_metric_ids"} metric for tracking active time series count 2019-06-19 18:36:47 +03:00
Aliaksandr Valialkin
a9cfca6a72 README.md: add max_shards: 100 to the recommended Prometheus config
Prometheus establishes a connection per shard in remote_write config.
By default it establishes up to 1000 connections to remote storage (max_shards: 1000).
This is quite a lot, so set `max_shards: 100` in the recommended Prometheus config.
2019-06-19 17:48:09 +03:00
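A sketch of the relevant `remote_write` fragment (the URL is a placeholder):

    remote_write:
      - url: http://victoriametrics:8428/api/v1/write
        queue_config:
          max_shards: 100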
Aliaksandr Valialkin
710d6c33ea lib/prompb: remove superfluous byte copying in ReadSnappy 2019-06-18 20:37:51 +03:00
Aliaksandr Valialkin
a8d4224828 app/vminsert/graphite: allow skipping timestamps in Graphite plaintext protocol
In this case VictoriaMetrics uses the ingestion time as a timestamp.
2019-06-18 19:04:04 +03:00
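For illustration, both of the following hypothetical plaintext lines become valid; the first one gets the ingestion time as its timestamp:

    foo.bar.baz 123
    foo.bar.baz 123 1560000000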
Aliaksandr Valialkin
341bed4822 README.md: mention that arbitrary number of lines may be sent in a single request via supported ingestion protocols 2019-06-18 18:59:12 +03:00
Aliaksandr Valialkin
5982e94c94 vendor: update golang.org/x/sys 2019-06-18 16:19:26 +03:00
Aliaksandr Valialkin
6d6c9eb1f8 lib/flagutil: remove unused package 2019-06-18 10:43:55 +03:00
Aliaksandr Valialkin
86d3d907a5 app/vminsert/influx: add -influxSkipSingleField flag for using {measurement} instead of {measurement}{separator}{field_name} for Influx lines with a single field
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/66
2019-06-17 19:05:57 +03:00
Aliaksandr Valialkin
269285848f app/vminsert/influx: add -influxMeasurementFieldSeparator flag for the ability to change separator for {measurement}{separator}{field_name} metric name 2019-06-14 10:00:12 +03:00
Aliaksandr Valialkin
47e1e5eb4b deployment/docker: switch builder from go1.12.5 to go1.12.6 2019-06-14 09:32:06 +03:00
Aliaksandr Valialkin
d2c801029b lib/storage: persist metric ids for the current and the previous hour on graceful shutdown
This should improve performance after restart when the db contains a lot of time series
with high time series churn (e.g. metrics from Kubernetes with many pods and frequent deployments)
2019-06-14 07:55:14 +03:00
Aliaksandr Valialkin
beb479b8f1 app/vmselect/promql: use dynamic limit on memory for concurrent queries 2019-06-12 23:18:44 +03:00
Aliaksandr Valialkin
611c4401f8 README.md: mention about multi-tenancy 2019-06-12 21:30:36 +03:00
Aliaksandr Valialkin
a8db528930 app/vmselect/promql: merge non-overlapping duplicate time series in group_left and group_right joins 2019-06-12 20:32:32 +03:00
Aliaksandr Valialkin
15613e5338 app/vmselect/promql: swap binary operation with modifier in the error message for improved readability 2019-06-12 17:14:39 +03:00
Aliaksandr Valialkin
3237d0309c app/vmselect/promql: list a sample of duplicate time series in the error message for group_left or group_right
This should improve troubleshooting for complex queries involving `group_left` and `group_right` modifiers.
2019-06-12 16:57:37 +03:00
Aliaksandr Valialkin
26f8d7ea1b lib/fs: sync parent dir in MustRemoveAll only if it exists
The parent directory may not exist when the deleted directory
didn't exist before the MustRemoveAll call
2019-06-12 02:14:44 +03:00
Aliaksandr Valialkin
419197ba08 lib/fs: consolidate *RemoveAll* funcs into a single MustRemoveAll func
The func syncs parent dir in order to persist directory removal
in the event of power loss
2019-06-12 01:53:46 +03:00
Aliaksandr Valialkin
a4b4db9bf6 README.md: add a chapter about downsampling 2019-06-12 01:32:26 +03:00
Aliaksandr Valialkin
c1276edab5 lib/fs: panic with fatal error when directories cannot be removed
Unremoved directories may lead to an inconsistent data directory,
so VictoriaMetrics will fail to start next time.

So panic on the first error when trying to remove a directory in order
to simplify the recovery process.
2019-06-12 01:20:54 +03:00
Aliaksandr Valialkin
2322c9a45a lib/fs: attempt #2 to work around NFS issue with directory removal
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/61
2019-06-12 01:07:05 +03:00
Aliaksandr Valialkin
89b928ff24 vendor: update github.com/VictoriaMetrics/fastcache to v1.5.1 2019-06-11 23:56:08 +03:00
Aliaksandr Valialkin
935bfd7a18 lib/fs: consistency renaming SyncPath -> MustSyncPath, since it doesn't return an error 2019-06-11 23:13:49 +03:00
Aliaksandr Valialkin
3dd36b8088 lib/fs: make sure the created directory remains visible in the fs in the event of power loss
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/63
2019-06-11 23:08:09 +03:00
Aliaksandr Valialkin
afb964670a lib/fs: use filepath.Dir instead of filepath.Split, since the filename is unused 2019-06-11 22:54:26 +03:00
Aliaksandr Valialkin
20fc0e0e54 lib/{storage,mergeset}: sync filenames inside part when finalizing the part
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/63
2019-06-11 21:51:13 +03:00
Aliaksandr Valialkin
4d9f088526 README.md: add examples on how to write data with Graphite and OpenTSDB protocols 2019-06-11 21:24:32 +03:00
Aliaksandr Valialkin
82d1707861 README.md: add missing port to example urls 2019-06-11 21:05:24 +03:00
Aliaksandr Valialkin
70d20ce8de README.md: use proper urls for single-node version in examples 2019-06-11 20:33:52 +03:00
Aliaksandr Valialkin
723bf1af7f README.md: add example on how to write data with Influx line protocol to VictoriaMetrics 2019-06-11 20:31:25 +03:00
Aliaksandr Valialkin
ac7b186f13 all: try hard removing directory with contents
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/61
2019-06-11 01:57:59 +03:00
Roman Khavronenko
cd1bc32158 convert dashboard for provisioning (#62) 2019-06-11 01:07:09 +03:00
Aliaksandr Valialkin
1c33b5937e app/vmselect/promql: prevent count_values from exploding the number of time series, which could result in OOM 2019-06-11 01:03:13 +03:00
Aliaksandr Valialkin
8bb6bc986d app/vmselect/promql: skip superfluous timestamps copying in count_values 2019-06-11 00:44:01 +03:00
Aliaksandr Valialkin
d2be567482 app/vmselect/promql: remove superfluous timeseries copy in histogram_quantile func 2019-06-11 00:39:41 +03:00
Aliaksandr Valialkin
7e7d4d5275 app/vmselect/promql: remove superfluous timeseries copy in union func 2019-06-11 00:35:20 +03:00
Aliaksandr Valialkin
bf9782eaf6 app/vmselect/promql: skip NaN values in count_values func 2019-06-10 22:42:32 +03:00
Aliaksandr Valialkin
cbe692f0e2 app/vmselect: add /api/v1/labels/count handler for quick detection of labels with the maximum number of distinct values 2019-06-10 19:55:38 +03:00
Aliaksandr Valialkin
7b6623558f lib/storage: skip adaptive searching for the tag filter matching the minimum number of metrics if an identical previous search didn't find such a filter
This should improve speed when searching for metrics among a high number of time series
with a high churn rate, as in big Kubernetes clusters with frequent deployments.
2019-06-10 14:07:39 +03:00
Aliaksandr Valialkin
a1351bbaee lib/storage: factor out getTagFilterWithMinMetricIDsCountAdaptive from updateMetricIDsForTagFilters 2019-06-10 13:26:44 +03:00
Aliaksandr Valialkin
b4d707d9bb lib/storage: give clearer names to more functions 2019-06-10 13:01:23 +03:00
Aliaksandr Valialkin
bee7298f81 lib/storage: give clearer names to functions 2019-06-10 12:50:44 +03:00
Aliaksandr Valialkin
dbd217b8f0 lib/storage: test GetSeriesCount 2019-06-10 12:43:34 +03:00
Aliaksandr Valialkin
4d936b1524 lib/storage: make getSeriesCount func indexSearch method 2019-06-10 12:29:11 +03:00
Aliaksandr Valialkin
7354090aad app/vmstorage: add missing _total suffixes to newly added metrics 2019-06-09 22:11:36 +03:00
Aliaksandr Valialkin
d37924900b lib/storage: optimize time series lookup for recent hours when the db contains many millions of time series with high churn rate (aka frequent deployments in Kubernetes) 2019-06-09 19:13:56 +03:00
Aliaksandr Valialkin
c0baa977cf app/vminsert/concurrencylimiter: typo fix in the error message 2019-06-08 22:43:33 +03:00
Aliaksandr Valialkin
f4252f87e6 app/vminsert: really fix #60
ReadLinesBlock may accept a dstBuf with non-zero length. In this case the last line without a trailing newline isn't read.
Fix this by comparing len(dstBuf) against 0 instead of against its original length.
2019-06-07 23:37:03 +03:00
Aliaksandr Valialkin
0b78d228d2 app/vminsert: properly read the trailing line without a newline at the end
This fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/60
2019-06-07 23:17:59 +03:00
Aliaksandr Valialkin
0371c216a7 deployment/docker: move victoriametrics single-node docker image from valyala/victoria-metrics to victoriametrics/victoria-metrics docker hub path 2019-06-07 11:52:53 +03:00
Aliaksandr Valialkin
c1f18ee48d app/vmselect/promql: properly handle {__name__ op "string"} queries
This was broken by 7294ef333ad26f4f6578b783e97649e58b1f8945 .
2019-06-07 02:02:04 +03:00
Roman Khavronenko
fbd7044b2b Dashboard update (#57)
* split "pending datapoints" by storage and index pending entities

* update provisioned dVM dashboard
2019-06-07 01:31:45 +03:00
Roman Khavronenko
2afe511d80 Setup Grafana provisioning for docker-compose setup (#50)
* setup Grafana provisioning for docker-compose setup

* review fixes
2019-06-06 23:37:44 +03:00
Seua Polyakov
f4e63cd070 Add SIGINT as stopsignal to docker file (#54)
Add SIGINT as the stop signal to the Dockerfile. More details: https://docs.docker.com/engine/reference/builder/#usage
With this change, the main process inside the container receives SIGINT and, after a grace period, SIGKILL.
2019-06-06 22:36:21 +03:00
Aliaksandr Valialkin
667115a5c7 app/vmselect/prometheus: report about incorrect time or duration instead of silently using the default value
This should prevent incorrect usage of the querying API.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/52
2019-06-06 22:18:18 +03:00
Aliaksandr Valialkin
1458450dba app/vmselect/promql: return the correct time series from quantile
Previously arbitrary time series could be returned from `quantile`
depending on sort order for the last data point in the selected range.

Fix this by returning the calculated time series.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/55
2019-06-06 17:07:31 +03:00
Aliaksandr Valialkin
5a5ba749f2 README.md: add an example on how Influx line protocol is converted into Prometheus data points 2019-06-06 16:08:29 +03:00
Aliaksandr Valialkin
a3e26de45e lib/procutil: typo fix in comment to WaitForSigterm 2019-06-04 17:31:47 +03:00
Aliaksandr Valialkin
53ea90865d app/vmselect/promql: add -search.disableCache flag for disabling response caching
This may be useful for data back-filling, when the response caching
could interfere badly with newly added data points with timestamps
in the past.
2019-06-04 17:30:45 +03:00
Aliaksandr Valialkin
17f0a53068 app/vminsert: explain that /query request emulation is required for TSBS benchmark 2019-06-03 18:40:27 +03:00
Anton Patsev
b03bdb32ff Prettify Table of contents (#47) 2019-06-03 17:31:15 +03:00
Aliaksandr Valialkin
15f59c6df9 deployment/docker: remove trailing whitespace 2019-06-03 14:53:08 +03:00
Artem Navoiev
da45a20491 docker compose for VM 2019-06-03 09:57:33 +02:00
Roman Khavronenko
5859bb9556 Add grafana dashboard for VM (#46) 2019-06-03 00:25:07 +03:00
Aliaksandr Valialkin
28f6c36ab4 lib/storage: tune updating a map with today's metric ids
- Increase the update interval from 1s to 10s. This should reduce CPU usage
  for large amounts of metric ids with constant churn.
- Reduce pendingTodayMetricIDsLock lock duration during the update.
2019-06-02 21:58:16 +03:00
Aliaksandr Valialkin
4794f894a4 lib/storage: speed up checking metricID existence in the list for the current date 2019-06-02 18:34:08 +03:00
Aliaksandr Valialkin
c7280ba61a vendor: update deps with make vendor-update 2019-06-01 23:39:58 +03:00
Aliaksandr Valialkin
fbd8b03f15 README.md: fixed the link to yum repository source codes 2019-06-01 13:55:44 +03:00
Aliaksandr Valialkin
d17a47e3e0 README.md: add setting up service chapter 2019-05-31 23:34:09 +03:00
Aliaksandr Valialkin
d6862a2d97 README.md: mention that VictoriaMetrics works with time series data from Kubernetes 2019-05-31 22:53:35 +03:00
Aliaksandr Valialkin
f2cf5d8e36 app/vmselect/promql: allow escaping identifiers with \ and \xXX
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/42
2019-05-31 17:35:17 +03:00
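A hypothetical illustration (assuming `\` escapes the next char and `\xXX` denotes a hex escape): a selector such as

    foo\-bar{instance="a"}

could refer to a metric named `foo-bar`.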
Aliaksandr Valialkin
27f0d098bd app/victoria-metrics: add make victoria-metrics-arm64 rule for building GOARCH=arm64 binary 2019-05-29 23:07:14 +03:00
Aliaksandr Valialkin
a51ff2c6cb README.md: add LICENSE shield 2019-05-29 14:09:36 +03:00
Aliaksandr Valialkin
56b952c456 app/vminsert: add -maxConcurrentInserts command-line flag for limiting the number of concurrent inserts 2019-05-29 12:41:23 +03:00
Aliaksandr Valialkin
61bad1e07e Makefile: run go vet with -mod=vendor in order to disable downloading vendored deps 2019-05-29 01:38:13 +03:00
Artem Navoiev
be97f764f5 [ci-ci] enable CI (#39) 2019-05-29 01:32:49 +03:00
Artem Navoiev
a576d1f5d3 README.md: add links to slack and telegrams (#40) 2019-05-29 01:30:37 +03:00
Aliaksandr Valialkin
968d094524 app/vminsert: reduce memory usage for Influx, Graphite and OpenTSDB protocols
Do not buffer per-connection data and just store it as it arrives
2019-05-28 18:47:23 +03:00
Aliaksandr Valialkin
e307a4d92c lib/timerpool: use timer pool in concurrency limiters
This should reduce the number of memory allocations in highly loaded systems
2019-05-28 17:20:10 +03:00
Aliaksandr Valialkin
0eae39daa7 app/vminsert: properly reset InsertCtx.mrs - they must be empty after Reset call 2019-05-28 16:08:01 +03:00
Aliaksandr Valialkin
437e0b2300 README.md: typo fix 2019-05-27 21:37:48 +03:00
Aliaksandr Valialkin
4b3af728ea README.md: add steps for restoring from a snapshot 2019-05-27 20:36:51 +03:00
Aliaksandr Valialkin
4a12c4c982 README.md: add Third-party contributions section 2019-05-27 20:23:39 +03:00
Anton Patsev
2e75efb64e README.md: add unofficial yum repository (#37) 2019-05-27 20:19:54 +03:00
Aliaksandr Valialkin
25900162f6 Makefile: add -mod=vendor to go test, so tests use external deps from vendor folder 2019-05-27 00:35:46 +03:00
Aliaksandr Valialkin
16afcd6aff vendor: update dependencies with make vendor-update 2019-05-26 23:25:12 +03:00
Aliaksandr Valialkin
c2a5eef5e3 Makefile: pass GO111MODULE=on to all the go invocations 2019-05-26 23:23:43 +03:00
Aliaksandr Valialkin
4859ca0cda app/vmselect: update comment according to the updated code 2019-05-26 22:38:58 +03:00
Aliaksandr Valialkin
feb6b203a4 app/vminsert/influx: try converting string values to numeric values, since Influx agents may send numeric values as strings
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/34
2019-05-26 22:11:19 +03:00
Aliaksandr Valialkin
51ee990902 README.md: typo fix 2019-05-26 17:59:04 +03:00
Aliaksandr Valialkin
5262aae5da app/vmselect/promql: misspelling fix 2019-05-25 21:53:11 +03:00
Aliaksandr Valialkin
54fb8b21f9 all: fix misspellings 2019-05-25 21:51:11 +03:00
Aliaksandr Valialkin
d6523ffe90 Makefile: add -s flag to go fmt in make fmt command 2019-05-25 21:43:35 +03:00
Aliaksandr Valialkin
024560b161 README.md: add goreportcard.com badge 2019-05-25 21:38:57 +03:00
Aliaksandr Valialkin
96ac664b27 Add make victoria-metrics Makefile rule for building dev binary 2019-05-25 18:24:51 +03:00
Aliaksandr Valialkin
2ffcf7a4a5 README.md: mention that VictoriaMetrics is scalable 2019-05-25 17:09:43 +03:00
Aliaksandr Valialkin
5cbd4cfca9 app/vmselect: log slow queries if their execution time exceeds -search.logSlowQueryDuration 2019-05-24 16:12:31 +03:00
Aliaksandr Valialkin
718ce33714 app/vmselect: consume resultsCh data in exportHandler if writeResponseFunc failed to consume it 2019-05-24 14:54:31 +03:00
Aliaksandr Valialkin
f332c0d54e README.md: add contacts chapter 2019-05-24 13:58:26 +03:00
0xflotus
eca566ed22 fixed small errors (#31) 2019-05-24 13:27:42 +03:00
Aliaksandr Valialkin
5bbfdff9fe Makefile: add make publish and make package shortcuts for building and publishing docker images 2019-05-24 13:19:24 +03:00
Aliaksandr Valialkin
6b0ae332f8 lib/encoding: add vm_zstd_block_{compress|decompress}_calls_total for determining the number of CompressZSTD / DecompressZSTD calls 2019-05-24 13:01:02 +03:00
Aliaksandr Valialkin
2eb3602d61 app/victoria-metrics: remove -p XXXX:XXXX from docker run options, since it is unnecessary if --net=host is set 2019-05-24 12:54:53 +03:00
Aliaksandr Valialkin
6fb9dd09f5 lib/encoding: add vm_zstd_block_{original|compressed}_bytes_total metrics for rough estimation of block compression ratio 2019-05-24 12:34:32 +03:00
Aliaksandr Valialkin
19b6643e5c lib/encoding: substitute CompressZSTD with CompressZSTDLevel 2019-05-24 12:32:55 +03:00
Aliaksandr Valialkin
08b889ef09 lib/httpserver: add -http.disableResponseCompression flag, which may help saving CPU resources at the cost of higher network bandwidth usage 2019-05-24 12:18:40 +03:00
Aliaksandr Valialkin
d15d0127fe app/vmselect/promql: add alias(q, name) function that sets the given name to all the time series in q 2019-05-24 02:41:45 +03:00
Aliaksandr Valialkin
674888fdc9 lib/decimal: add a comment explaining weird code in maxUpExponent. Fixes #29 2019-05-23 17:18:35 +03:00
Aliaksandr Valialkin
fb140eda33 app/vmselect/promql: add label_transform(q, label, regexp, replacement) function for replacing all the occurrences of regexp with replacement in the given label for q 2019-05-23 16:26:19 +03:00
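A hypothetical usage sketch that strips a port suffix such as `:9090` from the `instance` label:

    label_transform(up, "instance", ":[0-9]+$", "")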
Aliaksandr Valialkin
398ec4383e README.md: typo fix 2019-05-23 02:09:51 +03:00
Aliaksandr Valialkin
eff0debe14 README.md: mention that VictoriaMetrics is high-perf cost-effective TSDB 2019-05-23 00:36:45 +03:00
Aliaksandr Valialkin
1836c415e6 all: open-sourcing single-node version 2019-05-23 00:18:06 +03:00
Artem Navoiev
81bbbf2cae Add link to grafana dashboard (#24) 2019-05-20 15:45:14 +03:00
Aliaksandr Valialkin
8ab67e3803 Streamlined wording for operational simplicity 2019-03-21 18:24:15 +02:00
Aliaksandr Valialkin
1e72faa51f Mention that VictoriaMetrics accepts data in OpenTSDB format 2019-03-20 20:13:15 +02:00
Aliaksandr Valialkin
59307cceba Update README.md 2019-02-07 19:00:38 +02:00
Aliaksandr Valialkin
56bc071ee7 Update README.md 2019-02-07 03:38:48 +02:00
Aliaksandr Valialkin
a6e2ce3ea3 Add yet another link to relevant article 2019-01-31 03:20:48 +02:00
Aliaksandr Valialkin
cd49dbd313 Added links to relevant articles 2019-01-31 03:19:38 +02:00
Aliaksandr Valialkin
c1c2d42598 Mention about new features 2019-01-31 03:18:09 +02:00
Aliaksandr Valialkin
e11a0030b3 Typo fix 2019-01-20 00:20:25 +02:00
Aliaksandr Valialkin
0baea8a0df Mention Graphite plaintext protocol support 2019-01-20 00:19:59 +02:00
1656 changed files with 661921 additions and 34 deletions

6
.dockerignore Normal file

@@ -0,0 +1,6 @@
.git
vendor
gocache-for-docker
victoria-metrics-data
vmstorage-data
vmselect-cache

30
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,30 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Version**
The line returned when passing the `--version` command-line flag to the binary. For example:
```
$ ./victoria-metrics-prod --version
victoria-metrics-20190730-121249-heads-single-node-0-g671d9e55
```
**Additional context**
Add any other context about the problem here such as error logs, `/metrics` output, screenshots from [the official Grafana dashboard for VictoriaMetrics](https://grafana.com/dashboards/10229).

20
.github/ISSUE_TEMPLATE/feature_request.md vendored Normal file

@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

30
.github/workflows/github-pages.yml vendored Normal file

@@ -0,0 +1,30 @@
name: github-pages
on:
  push:
    paths:
      - 'docs/*.md'
      - 'README.md'
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: publish
        shell: bash
        env:
          TOKEN: ${{secrets.CI_TOKEN}}
        run: |
          git clone https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.github.io.git gpages
          cp docs/*.md gpages
          cp README.md gpages
          cd gpages
          git config --local user.email "info@victoriametrics.com"
          git config --local user.name "Vika"
          git add "*.md"
          git commit -m "update github pages"
          remote_repo="https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.github.io.git"
          git push "${remote_repo}"
          cd ..
          rm -rf gpages

51
.github/workflows/main.yml vendored Normal file

@@ -0,0 +1,51 @@
name: main
on:
  push:
    paths-ignore:
      - 'docs/**'
      - '**.md'
  pull_request:
    paths-ignore:
      - 'docs/**'
      - '**.md'
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Setup Go
        uses: actions/setup-go@v1
        with:
          go-version: 1.13
        id: go
      - name: Code checkout
        uses: actions/checkout@v1
      - name: Dependencies
        env:
          GO111MODULE: off
        run: |
          go get -v golang.org/x/lint/golint
          go get -u github.com/kisielk/errcheck
      - name: Build
        env:
          GO111MODULE: on
        run: |
          export PATH=$PATH:$(go env GOPATH)/bin # temporary fix. See https://github.com/actions/setup-go/issues/14
          make check-all
          git diff --exit-code
          make test-full
          make test-pure
          make test-full-386
          make victoria-metrics
          make victoria-metrics-pure
          make victoria-metrics-arm
          make victoria-metrics-arm64
          make vmutils
          GOOS=freebsd go build -mod=vendor ./app/victoria-metrics
          GOOS=darwin go build -mod=vendor ./app/victoria-metrics
      - name: Publish coverage
        uses: codecov/codecov-action@v1.0.4
        with:
          token: ${{secrets.CODECOV_TOKEN}}
          file: ./coverage.txt

29
.github/workflows/wiki.yml vendored Normal file

@@ -0,0 +1,29 @@
name: wiki
on:
  push:
    paths:
      - 'docs/*.md'
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: publish
        shell: bash
        env:
          TOKEN: ${{secrets.CI_TOKEN}}
        run: |
          cd docs
          git clone https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.wiki.git wiki
          find ./ -name '*.md' -exec cp -prv '{}' 'wiki' ';'
          cd wiki
          git config --local user.email "info@victoriametrics.com"
          git config --local user.name "Vika"
          git add "*.md"
          git commit -m "update wiki pages"
          remote_repo="https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.wiki.git"
          git push "${remote_repo}"
          cd ..
          rm -rf wiki

.gitignore vendored Normal file

@@ -0,0 +1,16 @@
/tmp
/tags
/pkg
*.pprof
/bin
.idea
*.test
*.swp
/gocache-for-docker
/victoria-metrics-data
/vmstorage-data
/vmselect-cache
/package/temp-deb-*
/package/temp-rpm-*
/package/*.deb
/package/*.rpm

CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,76 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at info@victoriametrics.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq

CONTRIBUTING.md Normal file

@@ -0,0 +1,16 @@
If you like VictoriaMetrics and want to contribute, then we need the following:
- Filing issues and feature requests [here](https://github.com/VictoriaMetrics/VictoriaMetrics/issues).
- Spreading the word about VictoriaMetrics: conference talks, articles, comments, experience sharing with colleagues.
- Updating documentation.
We are open to third-party pull requests provided they follow [KISS design principle](https://en.wikipedia.org/wiki/KISS_principle):
- Prefer simple code and architecture.
- Avoid complex abstractions.
- Avoid magic code and fancy algorithms.
- Avoid [big external dependencies](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d).
- Minimize the number of moving parts in the distributed system.
- Avoid automated decisions, which may hurt cluster availability, consistency or performance.
Adhering to the `KISS` principle simplifies the resulting code and architecture, so it can be reviewed, understood and verified by many people.

LICENSE Normal file

@@ -0,0 +1,190 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2019 VictoriaMetrics, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Makefile Normal file

@@ -0,0 +1,124 @@
PKG_PREFIX := github.com/VictoriaMetrics/VictoriaMetrics

BUILDINFO_TAG ?= $(shell echo $$(git describe --long --all | tr '/' '-')$$( \
	git diff-index --quiet HEAD -- || echo '-dirty-'$$(git diff-index -u HEAD | openssl sha1 | cut -c 10-17)))

PKG_TAG ?= $(shell git tag -l --points-at HEAD)
ifeq ($(PKG_TAG),)
PKG_TAG := $(BUILDINFO_TAG)
endif

GO_BUILDINFO = -X '$(PKG_PREFIX)/lib/buildinfo.Version=$(APP_NAME)-$(shell date -u +'%Y%m%d-%H%M%S')-$(BUILDINFO_TAG)'

all: \
	victoria-metrics-prod

include app/*/Makefile
include deployment/*/Makefile

clean:
	rm -rf bin/*

publish: \
	publish-victoria-metrics \
	publish-vmbackup \
	publish-vmrestore

package: \
	package-victoria-metrics \
	package-vmbackup \
	package-vmrestore

vmutils: \
	vmbackup \
	vmrestore

release: \
	release-victoria-metrics \
	release-vmutils

release-victoria-metrics: victoria-metrics-prod
	cd bin && tar czf victoria-metrics-$(PKG_TAG).tar.gz victoria-metrics-prod && \
		sha256sum victoria-metrics-$(PKG_TAG).tar.gz > victoria-metrics-$(PKG_TAG)_checksums.txt

release-vmutils: \
	vmbackup-prod \
	vmrestore-prod
	cd bin && tar czf vmutils-$(PKG_TAG).tar.gz vmbackup-prod vmrestore-prod && \
		sha256sum vmutils-$(PKG_TAG).tar.gz > vmutils-$(PKG_TAG)_checksums.txt

pprof-cpu:
	go tool pprof -trim_path=github.com/VictoriaMetrics/VictoriaMetrics@ $(PPROF_FILE)

fmt:
	GO111MODULE=on gofmt -l -w -s ./lib
	GO111MODULE=on gofmt -l -w -s ./app

vet:
	GO111MODULE=on go vet -mod=vendor ./lib/...
	GO111MODULE=on go vet -mod=vendor ./app/...

lint: install-golint
	golint lib/...
	golint app/...

install-golint:
	which golint || GO111MODULE=off go get -u golang.org/x/lint/golint

errcheck: install-errcheck
	errcheck -exclude=errcheck_excludes.txt ./lib/...
	errcheck -exclude=errcheck_excludes.txt ./app/vminsert/...
	errcheck -exclude=errcheck_excludes.txt ./app/vmselect/...
	errcheck -exclude=errcheck_excludes.txt ./app/vmstorage/...
	errcheck -exclude=errcheck_excludes.txt ./app/vmbackup/...
	errcheck -exclude=errcheck_excludes.txt ./app/vmrestore/...

install-errcheck:
	which errcheck || GO111MODULE=off go get -u github.com/kisielk/errcheck

check-all: fmt vet lint errcheck golangci-lint

test:
	GO111MODULE=on go test -tags=integration -mod=vendor ./lib/... ./app/...

test-pure:
	GO111MODULE=on CGO_ENABLED=0 go test -tags=integration -mod=vendor ./lib/... ./app/...

test-full:
	GO111MODULE=on go test -tags=integration -mod=vendor -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...

test-full-386:
	GO111MODULE=on GOARCH=386 go test -tags=integration -mod=vendor -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...

benchmark:
	GO111MODULE=on go test -mod=vendor -bench=. ./lib/...
	GO111MODULE=on go test -mod=vendor -bench=. ./app/...

benchmark-pure:
	GO111MODULE=on CGO_ENABLED=0 go test -mod=vendor -bench=. ./lib/...
	GO111MODULE=on CGO_ENABLED=0 go test -mod=vendor -bench=. ./app/...

vendor-update:
	GO111MODULE=on go get -u ./lib/...
	GO111MODULE=on go get -u ./app/...
	GO111MODULE=on go mod tidy
	GO111MODULE=on go mod vendor

app-local:
	CGO_ENABLED=1 GO111MODULE=on go build $(RACE) -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/$(APP_NAME)$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)

app-local-pure:
	CGO_ENABLED=0 GO111MODULE=on go build $(RACE) -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/$(APP_NAME)-pure$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)

quicktemplate-gen: install-qtc
	qtc

install-qtc:
	which qtc || GO111MODULE=off go get -u github.com/valyala/quicktemplate/qtc

golangci-lint: install-golangci-lint
	golangci-lint run --exclude '(SA4003|SA1019):' -D errcheck -D structcheck

install-golangci-lint:
	which golangci-lint || GO111MODULE=off go get -u github.com/golangci/golangci-lint/cmd/golangci-lint

README.md

@@ -1,36 +1,801 @@
<img align="center" alt="Victoria Metrics" src="logo.png">
## VictoriaMetrics - the best long-term remote storage for Prometheus
[![Latest Release](https://img.shields.io/github/release/VictoriaMetrics/VictoriaMetrics.svg?style=flat-square)](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
[![Slack](https://img.shields.io/badge/join%20slack-%23victoriametrics-brightgreen.svg)](http://slack.victoriametrics.com/)
[![GitHub license](https://img.shields.io/github/license/VictoriaMetrics/VictoriaMetrics.svg)](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/LICENSE)
[![Go Report](https://goreportcard.com/badge/github.com/VictoriaMetrics/VictoriaMetrics)](https://goreportcard.com/report/github.com/VictoriaMetrics/VictoriaMetrics)
[![Build Status](https://github.com/VictoriaMetrics/VictoriaMetrics/workflows/main/badge.svg)](https://github.com/VictoriaMetrics/VictoriaMetrics/actions)
[![codecov](https://codecov.io/gh/VictoriaMetrics/VictoriaMetrics/branch/master/graph/badge.svg)](https://codecov.io/gh/VictoriaMetrics/VictoriaMetrics)
### VictoriaMetrics features
<img alt="Victoria Metrics" src="logo.png">
- Native [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) support. Additionally, VictoriaMetrics extends PromQL with useful features. See [Extended PromQL](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/ExtendedPromQL) for more details.
- Simple configuration. Just copy-n-paste remote storage URL to Prometheus config and that's it! See [Quick Start](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/Quick-Start) for more info.
- Reduced operational overhead. Prometheus local storage retention may be set to the minimum possible value when using VictoriaMetrics remote storage. This effectively makes Prometheus stateless, so it may be run as a stateless service in Kubernetes.
- Insertion rate scales to millions of metric values per second.
- Storage scales to millions of metrics with trillions of metric values.
- Wide range of retention periods - from 1 month to 5 years. Users may create different projects (aka `storage namespaces`) with different retention periods.
- Fast query engine. It excels on heavy queries over thousands of metrics with millions of metric values.
- The same remote storage URL may be used by multiple Prometheus instances collecting distinct metric sets, so all these metrics may be used in a single query (aka `global querying view`). This works ideally for multiple Prometheus instances located in different subnetworks / datacenters.
- Accepts data in [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_reference/), so [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) and other influx-compatible agents may send data to VictoriaMetrics.
## Single-node VictoriaMetrics
VictoriaMetrics is a fast, cost-effective and scalable time-series database. It can be used as long-term remote storage for Prometheus.
It is available in [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases),
[docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and
in [source code](https://github.com/VictoriaMetrics/VictoriaMetrics).
Cluster version is available [here](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
### Useful links
## Prominent features
* [Site](https://victoriametrics.com/)
* [`WITH` templates playground](https://play.victoriametrics.com/promql/expand-with-exprs)
* [Grafana playground](http://play-grafana.victoriametrics.com:3000/d/4ome8yJmz/node-exporter-on-victoriametrics-demo)
* [Docs](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki)
* [FAQ](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/FAQ)
* [Issues](https://github.com/VictoriaMetrics/VictoriaMetrics/issues)
* [Google group](https://groups.google.com/forum/#!forum/victoriametrics)
* [Creating the best remote storage for Prometheus](https://medium.com/devopslinks/victoriametrics-creating-the-best-remote-storage-for-prometheus-5d92d66787ac) - an article with technical details about VictoriaMetrics.
* [Docker images](https://hub.docker.com/r/valyala/victoria-metrics/) and the corresponding [binaries](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) for single-server VictoriaMetrics
* Supports the [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/), so it can be used as a drop-in Prometheus replacement in Grafana.
Additionally, VictoriaMetrics extends PromQL with opt-in [useful features](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/ExtendedPromQL).
* Supports global query view. Multiple Prometheus instances may write data into VictoriaMetrics. Later this data may be used in a single query.
* High performance and good scalability for both [inserts](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b)
and [selects](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4).
[Outperforms InfluxDB and TimescaleDB by up to 20x](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae).
* [Uses 10x less RAM than InfluxDB](https://medium.com/@valyala/insert-benchmarks-with-inch-influxdb-vs-victoriametrics-e31a41ae2893) when working with millions of unique time series (aka high cardinality).
* Optimized for time series with high churn rate. Think about [prometheus-operator](https://github.com/coreos/prometheus-operator) metrics from frequent deployments in Kubernetes.
* High data compression, so [up to 70x more data points](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4)
may be crammed into limited storage compared to TimescaleDB.
* Optimized for storage with high-latency IO and low IOPS (HDD and network storage in AWS, Google Cloud, Microsoft Azure, etc). See [graphs from these benchmarks](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b).
* A single-node VictoriaMetrics may substitute moderately sized clusters built with competing solutions such as Thanos, Uber M3, Cortex, InfluxDB or TimescaleDB.
See [vertical scalability benchmarks](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae)
and [comparing Thanos to VictoriaMetrics cluster](https://medium.com/@valyala/comparing-thanos-to-victoriametrics-cluster-b193bea1683).
* Easy operation:
* VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d) without external dependencies.
* All the configuration is done via explicit command-line flags with reasonable defaults.
* All the data is stored in a single directory pointed by `-storageDataPath` flag.
* Easy and fast backups from [instant snapshots](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282)
to S3 or GCS with [vmbackup](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmbackup/README.md) / [vmrestore](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmrestore/README.md).
See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-series-databases-533c1a927883) for more details.
* Storage is protected from corruption on unclean shutdown (i.e. OOM, hardware reset or `kill -9`) thanks to [the storage architecture](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).
* Supports metric ingestion and [backfilling](#backfilling) via the following protocols:
* [Prometheus remote write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write)
* [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/)
* [Graphite plaintext protocol](https://graphite.readthedocs.io/en/latest/feeding-carbon.html) with [tags](https://graphite.readthedocs.io/en/latest/tags.html#carbon)
if `-graphiteListenAddr` is set.
* [OpenTSDB put message](http://opentsdb.net/docs/build/html/api_telnet/put.html) if `-opentsdbListenAddr` is set.
* [HTTP OpenTSDB /api/put requests](http://opentsdb.net/docs/build/html/api_http/put.html) if `-opentsdbHTTPListenAddr` is set.
* Works well with large amounts of time series data from Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data and various Enterprise workloads.
* Has open source [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
### Victoria Metrics Logo
## Operation
### Table of contents
- [How to start VictoriaMetrics](#how-to-start-victoriametrics)
- [Prometheus setup](#prometheus-setup)
- [Grafana setup](#grafana-setup)
- [How to upgrade VictoriaMetrics?](#how-to-upgrade-victoriametrics)
- [How to apply new config to VictoriaMetrics?](#how-to-apply-new-config-to-victoriametrics)
- [How to send data from InfluxDB-compatible agents such as Telegraf?](#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
- [How to send data from Graphite-compatible agents such as StatsD?](#how-to-send-data-from-graphite-compatible-agents-such-as-statsd)
- [Querying Graphite data](#querying-graphite-data)
- [How to send data from OpenTSDB-compatible agents?](#how-to-send-data-from-opentsdb-compatible-agents)
- [How to build from sources](#how-to-build-from-sources)
- [Development build](#development-build)
- [Production build](#production-build)
- [ARM build](#arm-build)
- [Pure Go build (CGO_ENABLED=0)](#pure-go-build-cgo_enabled0)
- [Building docker images](#building-docker-images)
- [Start with docker-compose](#start-with-docker-compose)
- [Setting up service](#setting-up-service)
- [Third-party contributions](#third-party-contributions)
- [How to work with snapshots?](#how-to-work-with-snapshots)
- [How to delete time series?](#how-to-delete-time-series)
- [How to export time series?](#how-to-export-time-series)
- [Federation](#federation)
- [Capacity planning](#capacity-planning)
- [High availability](#high-availability)
- [Multiple retentions](#multiple-retentions)
- [Downsampling](#downsampling)
- [Multi-tenancy](#multi-tenancy)
- [Scalability and cluster version](#scalability-and-cluster-version)
- [Alerting](#alerting)
- [Security](#security)
- [Tuning](#tuning)
- [Monitoring](#monitoring)
- [Troubleshooting](#troubleshooting)
- [Backfilling](#backfilling)
- [Profiling](#profiling)
- [Integrations](#integrations)
- [Roadmap](#roadmap)
- [Contacts](#contacts)
- [Community and contributions](#community-and-contributions)
- [Reporting bugs](#reporting-bugs)
- [Victoria Metrics Logo](#victoria-metrics-logo)
- [Logo Usage Guidelines](#logo-usage-guidelines)
- [Font used:](#font-used)
- [Color Palette:](#color-palette)
- [We kindly ask:](#we-kindly-ask)
### How to start VictoriaMetrics
Just start VictoriaMetrics [executable](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)
or [docker image](https://hub.docker.com/r/victoriametrics/victoria-metrics/) with the desired command-line flags.
The following command-line flags are used the most:
* `-storageDataPath` - path to the data directory. VictoriaMetrics stores all the data in this directory. The default path is `victoria-metrics-data` in the current working directory.
* `-retentionPeriod` - retention period in months for the data. Older data is automatically deleted. The default period is 1 month.
* `-httpListenAddr` - TCP address to listen to for http requests. By default, it listens on port `8428` on all the network interfaces.
* `-graphiteListenAddr` - TCP and UDP address to listen to for Graphite data. By default, it is disabled.
* `-opentsdbListenAddr` - TCP and UDP address to listen to for OpenTSDB data over telnet protocol. By default, it is disabled.
* `-opentsdbHTTPListenAddr` - TCP address to listen to for HTTP OpenTSDB data over `/api/put`. By default, it is disabled.
Pass `-help` to see all the available flags with their descriptions and default values.
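For instance, a minimal start command may look as follows (the data path and retention values below are just examples):
```
# example values; adjust -storageDataPath and -retentionPeriod for your setup
/path/to/victoria-metrics-prod -storageDataPath=/var/lib/victoria-metrics-data -retentionPeriod=12
```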
It is recommended to set up [monitoring](#monitoring) for VictoriaMetrics.
### Prometheus setup
Add the following lines to the Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`):
```yml
remote_write:
- url: http://<victoriametrics-addr>:8428/api/v1/write
queue_config:
max_samples_per_send: 10000
max_shards: 30
```
Substitute `<victoriametrics-addr>` with the hostname or IP address of VictoriaMetrics.
Then apply the new config via the following command:
```
kill -HUP `pidof prometheus`
```
Prometheus writes incoming data to local storage and replicates it to remote storage in parallel.
This means the data remains available in local storage for `--storage.tsdb.retention.time` duration
even if remote storage is unavailable.
If you plan to send data to VictoriaMetrics from multiple Prometheus instances, then add the following lines into `global` section
of [Prometheus config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file):
```yml
global:
external_labels:
datacenter: dc-123
```
This instructs Prometheus to add `datacenter=dc-123` label to each time series sent to remote storage.
The label name may be arbitrary - `datacenter` is just an example. The label value must be unique
across Prometheus instances, so those time series may be filtered and grouped by this label.
It is recommended to upgrade Prometheus to [v2.12.0](https://github.com/prometheus/prometheus/releases) or newer,
since previous versions may have issues with `remote_write`.
### Grafana setup
Create a [Prometheus datasource](http://docs.grafana.org/features/datasources/prometheus/) in Grafana with the following URL:
```
http://<victoriametrics-addr>:8428
```
Substitute `<victoriametrics-addr>` with the hostname or IP address of VictoriaMetrics.
Then build graphs with the created datasource using [Prometheus query language](https://prometheus.io/docs/prometheus/latest/querying/basics/).
VictoriaMetrics supports native PromQL and [extends it with useful features](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/ExtendedPromQL).
### How to upgrade VictoriaMetrics?
It is safe to upgrade VictoriaMetrics to new versions unless the [release notes](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)
say otherwise. Regular upgrades to the latest version are recommended,
since it may contain important bug fixes, performance optimizations or new features.
Perform the following steps during the upgrade:
1) Send `SIGINT` signal to VictoriaMetrics process in order to gracefully stop it.
2) Wait until the process stops. This can take a few seconds.
3) Start the upgraded VictoriaMetrics.
Prometheus doesn't drop data during VictoriaMetrics restart.
See [this article](https://grafana.com/blog/2019/03/25/whats-new-in-prometheus-2.8-wal-based-remote-write/) for details.
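For example, an upgrade may look as follows, assuming the process runs as `victoria-metrics-prod` and the flag values are placeholders:
```
# gracefully stop the running instance, wait for it to exit,
# then start the new binary with the same flags
kill -INT `pidof victoria-metrics-prod`
/path/to/new/victoria-metrics-prod -storageDataPath=/var/lib/victoria-metrics-data
```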
### How to apply new config to VictoriaMetrics?
VictoriaMetrics must be restarted in order to apply a new config:
1) Send `SIGINT` signal to VictoriaMetrics process in order to gracefully stop it.
2) Wait until the process stops. This can take a few seconds.
3) Start VictoriaMetrics with the new config.
Prometheus doesn't drop data during VictoriaMetrics restart.
See [this article](https://grafana.com/blog/2019/03/25/whats-new-in-prometheus-2.8-wal-based-remote-write/) for details.
### How to send data from InfluxDB-compatible agents such as [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/)?
Just use the `http://<victoriametrics-addr>:8428` URL instead of the InfluxDB URL in the agents' configs.
For instance, put the following lines into `Telegraf` config, so it sends data to VictoriaMetrics instead of InfluxDB:
```
[[outputs.influxdb]]
urls = ["http://<victoriametrics-addr>:8428"]
```
Do not forget to substitute `<victoriametrics-addr>` with the real address where VictoriaMetrics runs.
VictoriaMetrics maps Influx data using the following rules:
* [`db` query arg](https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint) is mapped into `db` label value
unless `db` tag exists in the Influx line.
* Field names are mapped to time series names prefixed with `{measurement}{separator}` value,
where `{separator}` is `_` by default. It can be changed with the `-influxMeasurementFieldSeparator` command-line flag.
See also `-influxSkipSingleField` command-line flag. If `{measurement}` is empty, then time series names correspond to field names.
* Field values are mapped to time series values.
* Tags are mapped to Prometheus labels as-is.
For example, the following Influx line:
```
foo,tag1=value1,tag2=value2 field1=12,field2=40
```
is converted into the following Prometheus data points:
```
foo_field1{tag1="value1", tag2="value2"} 12
foo_field2{tag1="value1", tag2="value2"} 40
```
Example for writing data with [Influx line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/)
to local VictoriaMetrics using `curl`:
```
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
```
An arbitrary number of lines delimited by '\n' may be sent in a single request.
After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:
```
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
```
The `/api/v1/export` endpoint should return the following response:
```
{"metric":{"__name__":"measurement_field1","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560272508147]}
{"metric":{"__name__":"measurement_field2","tag1":"value1","tag2":"value2"},"values":[1.23],"timestamps":[1560272508147]}
```
Note that Influx line protocol expects [timestamps in *nanoseconds* by default](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/#timestamp),
while VictoriaMetrics stores them with *milliseconds* precision.
### How to send data from Graphite-compatible agents such as [StatsD](https://github.com/etsy/statsd)?
1) Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command line flag. For instance,
the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port `2003`:
```
/path/to/victoria-metrics-prod -graphiteListenAddr=:2003
```
2) Use the configured address in Graphite-compatible agents. For instance, set `graphiteHost`
to the VictoriaMetrics host in `StatsD` configs.
Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using `nc`:
```
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003
```
VictoriaMetrics sets the current time if the timestamp is omitted.
An arbitrary number of lines delimited by `\n` may be sent in one go.
After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:
```
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
The `/api/v1/export` endpoint should return the following response:
```
{"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]}
```
### Querying Graphite data
Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read either via
[Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/)
or via [go-graphite/carbonapi](https://github.com/go-graphite/carbonapi/blob/master/cmd/carbonapi/carbonapi.example.prometheus.yaml).
### How to send data from OpenTSDB-compatible agents?
VictoriaMetrics supports [telnet put protocol](http://opentsdb.net/docs/build/html/api_telnet/put.html)
and [HTTP /api/put requests](http://opentsdb.net/docs/build/html/api_http/put.html) for ingesting OpenTSDB data.
#### Sending data via `telnet put` protocol
1) Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command line flag. For instance,
the following command enables OpenTSDB receiver in VictoriaMetrics on TCP and UDP port `4242`:
```
/path/to/victoria-metrics-prod -opentsdbListenAddr=:4242
```
2) Send data to the given address from OpenTSDB-compatible agents.
Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`:
```
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
```
An arbitrary number of lines delimited by `\n` may be sent in one go.
After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:
```
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
The `/api/v1/export` endpoint should return the following response:
```
{"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277292000]}
```
#### Sending OpenTSDB data via HTTP `/api/put` requests
1) Enable HTTP server for OpenTSDB `/api/put` requests by setting `-opentsdbHTTPListenAddr` command line flag. For instance,
the following command enables OpenTSDB HTTP server on port `4242`:
```
/path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242
```
2) Send data to the given address from OpenTSDB-compatible agents.
Example for writing a single data point:
```
curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
```
Example for writing multiple data points in a single request:
```
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```
After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:
```
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
```
The `/api/v1/export` endpoint should return the following response:
```
{"metric":{"__name__":"foo"},"values":[45.34],"timestamps":[1566464846000]}
{"metric":{"__name__":"bar"},"values":[43],"timestamps":[1566464846000]}
{"metric":{"__name__":"x.y.z","t1":"v1","t2":"v2"},"values":[45.34],"timestamps":[1566464763000]}
```
### How to build from sources
We recommend using either [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or
[docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/) instead of building VictoriaMetrics
from sources. Building from sources is reasonable when developing additional features specific
to your needs.
#### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.12.
2. Run `make victoria-metrics` from the root folder of the repository.
It builds `victoria-metrics` binary and puts it into the `bin` folder.
#### Production build
1. [Install docker](https://docs.docker.com/install/).
2. Run `make victoria-metrics-prod` from the root folder of the repository.
It builds `victoria-metrics-prod` binary and puts it into the `bin` folder.
#### ARM build
ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://blog.cloudflare.com/arm-takes-wing/).
#### Development ARM build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.12.
2. Run `make victoria-metrics-arm` or `make victoria-metrics-arm64` from the root folder of the repository.
It builds `victoria-metrics-arm` or `victoria-metrics-arm64` binary respectively and puts it into the `bin` folder.
#### Production ARM build
1. [Install docker](https://docs.docker.com/install/).
2. Run `make victoria-metrics-arm-prod` or `make victoria-metrics-arm64-prod` from the root folder of the repository.
It builds `victoria-metrics-arm-prod` or `victoria-metrics-arm64-prod` binary respectively and puts it into the `bin` folder.
#### Pure Go build (CGO_ENABLED=0)
`Pure Go` mode builds only Go code without [cgo](https://golang.org/cmd/cgo/) dependencies.
This is an experimental mode, which may result in a lower compression ratio and slower decompression performance.
Use it with caution!
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.12.
2. Run `make victoria-metrics-pure` from the root folder of the repository.
It builds `victoria-metrics-pure` binary and puts it into the `bin` folder.
#### Building docker images
Run `make package-victoria-metrics`. It builds `victoriametrics/victoria-metrics:<PKG_TAG>` docker image locally.
`<PKG_TAG>` is an auto-generated image tag, which depends on the source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-victoria-metrics`.
### Start with docker-compose
[Docker-compose](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/docker-compose.yml)
helps to spin up VictoriaMetrics, Prometheus and Grafana with one command.
More details may be found [here](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker#folder-contains-basic-images-and-tools-for-building-and-running-victoria-metrics-in-docker).
### Setting up service
Read [these instructions](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/43) on how to set up VictoriaMetrics as a service in your OS.
### Third-party contributions
* [Unofficial yum repository](https://copr.fedorainfracloud.org/coprs/antonpatsev/VictoriaMetrics/) ([source code](https://github.com/patsevanton/victoriametrics-rpm))
### How to work with snapshots?
VictoriaMetrics can create [instant snapshots](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282)
for all the data stored under `-storageDataPath` directory.
Navigate to `http://<victoriametrics-addr>:8428/snapshot/create` in order to create an instant snapshot.
The page will return the following JSON response:
```
{"status":"ok","snapshot":"<snapshot-name>"}
```
Snapshots are created under `<-storageDataPath>/snapshots` directory, where `<-storageDataPath>`
is the command-line flag value. Snapshots can be archived to backup storage at any time
with [vmbackup](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmbackup/README.md).
The `http://<victoriametrics-addr>:8428/snapshot/list` page contains the list of available snapshots.
Navigate to `http://<victoriametrics-addr>:8428/snapshot/delete?snapshot=<snapshot-name>` in order
to delete `<snapshot-name>` snapshot.
Navigate to `http://<victoriametrics-addr>:8428/snapshot/delete_all` in order to delete all the snapshots.
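For example, the same snapshot operations may be performed with `curl`, assuming VictoriaMetrics listens on `localhost:8428`:
```
# create an instant snapshot, then list the available snapshots
curl http://localhost:8428/snapshot/create
curl http://localhost:8428/snapshot/list
```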
Steps for restoring from a snapshot:
1. Stop VictoriaMetrics with `kill -INT`.
2. Restore snapshot contents from backup with [vmrestore](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmrestore/README.md)
to the directory pointed by `-storageDataPath`.
3. Start VictoriaMetrics.
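A hypothetical invocation for step 2, assuming the snapshot was previously archived to GCS with `vmbackup` (see the vmrestore README for the exact flags):
```
# the bucket and paths below are placeholders for your actual backup location
vmrestore -src=gcs://<bucket>/<path/to/backup> -storageDataPath=/path/to/victoria-metrics-data
```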
### How to delete time series?
Send a request to `http://<victoriametrics-addr>:8428/api/v1/admin/tsdb/delete_series?match[]=<timeseries_selector_for_delete>`,
where `<timeseries_selector_for_delete>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
for metrics to delete. After that all the time series matching the given selector are deleted. Storage space for
the deleted time series isn't freed instantly - it is freed during subsequent merges of data files.
It is recommended to verify which metrics will be deleted by calling `http://<victoria-metrics-addr>:8428/api/v1/series?match[]=<timeseries_selector_for_delete>`
before actually deleting the metrics.
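For example, the following commands first list and then delete all the time series for a hypothetical `old_job` job:
```
# verify what matches the selector first, then delete the matching series
curl -G 'http://localhost:8428/api/v1/series' -d 'match[]={job="old_job"}'
curl -G 'http://localhost:8428/api/v1/admin/tsdb/delete_series' -d 'match[]={job="old_job"}'
```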
### How to export time series?
Send a request to `http://<victoriametrics-addr>:8428/api/v1/export?match[]=<timeseries_selector_for_export>`,
where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
for metrics to export. The response would contain all the data for the selected time series in [JSON streaming format](https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON).
Each JSON line would contain data for a single time series. An example output:
```
{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
{"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}
```
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
a unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
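For example, exporting all `up` series for a single day (the time range values below are just examples):
```
# start/end accept unix timestamps in seconds or RFC3339 values
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=up' -d 'start=2019-11-01T00:00:00Z' -d 'end=2019-11-02T00:00:00Z'
```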
### Federation
VictoriaMetrics exports [Prometheus-compatible federation data](https://prometheus.io/docs/prometheus/latest/federation/)
at `http://<victoriametrics-addr>:8428/federate?match[]=<timeseries_selector_for_federation>`.
Optional `start` and `end` args may be added to the request in order to scrape the last point for each selected time series on the `[start ... end]` interval.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values. By default, the last point
on the interval `[now - max_lookback ... now]` is scraped for each time series. The default value for `max_lookback` is `5m` (5 minutes), but it can be overridden.
For instance, `/federate?match[]=up&max_lookback=1h` would return last points on the `[now - 1h ... now]` interval. This may be useful for time series federation
with scrape intervals exceeding `5m`.
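For example, the following request should return the last point within the last hour for each `up` series:
```
curl -G 'http://localhost:8428/federate' -d 'match[]=up' -d 'max_lookback=1h'
```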
### Capacity planning
A rough estimation of the required resources for ingestion path:
* RAM size: less than 1KB per active time series. So ~1GB of RAM is required for 1M active time series.
A time series is considered active if new data points have been added to it recently or if it has been queried recently.
The number of active time series may be obtained from the `vm_cache_entries{type="storage/hour_metric_ids"}` metric
exported on the `/metrics` page.
VictoriaMetrics stores various caches in RAM. Memory size for these caches may be limited by `-memory.allowedPercent` flag.
* CPU cores: a CPU core per 300K inserted data points per second. So ~4 CPU cores are required for processing
an insert stream of 1M data points per second. The ingestion rate may be lower for high-cardinality data or for time series with a high number of labels.
See [this article](https://medium.com/@valyala/insert-benchmarks-with-inch-influxdb-vs-victoriametrics-e31a41ae2893) for details.
If you see lower numbers per CPU core, then it is likely the active time series info doesn't fit in the caches,
so more RAM is needed in order to lower CPU usage.
* Storage space: less than a byte per data point on average. So ~260GB is required for storing a month-long insert stream
of 100K data points per second (100K points/s × ~2.6M seconds per month ≈ 2.6×10^11 points, i.e. ~260GB at one byte per point).
The actual storage size heavily depends on data randomness (entropy). Higher randomness means higher storage size requirements.
Read [this article](https://medium.com/faun/victoriametrics-achieving-better-compression-for-time-series-data-than-gorilla-317bc1f95932)
for details.
* Network usage: outbound traffic is negligible. Ingress traffic is ~100 bytes per ingested data point via
[Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write).
The actual ingress bandwidth usage depends on the average number of labels per ingested metric and the average size
of label values. More labels per metric and longer label values mean higher ingress bandwidth usage.
The required resources for query path:
* RAM size: depends on the number of time series to scan in each query and the `step`
argument passed to [/api/v1/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries).
A higher number of scanned time series and a lower `step` value result in higher RAM usage.
* CPU cores: a CPU core per 30 million scanned data points per second.
* Network usage: depends on the frequency and the type of incoming requests. Typical Grafana dashboards usually
require negligible network bandwidth.
### High availability
1) Install multiple VictoriaMetrics instances in distinct datacenters (availability zones).
2) Add addresses of these instances to `remote_write` section in Prometheus config:
```yml
remote_write:
- url: http://<victoriametrics-addr-1>:8428/api/v1/write
queue_config:
max_samples_per_send: 10000
# ...
- url: http://<victoriametrics-addr-N>:8428/api/v1/write
queue_config:
max_samples_per_send: 10000
```
3) Apply the updated config:
```
kill -HUP `pidof prometheus`
```
4) Now Prometheus should write data into all the configured `remote_write` URLs in parallel.
5) Set up [Promxy](https://github.com/jacksontj/promxy) in front of all the VictoriaMetrics replicas.
6) Set up Prometheus datasource in Grafana that points to Promxy.
If you have Prometheus HA pairs with replicas `r1` and `r2` in each pair, then configure each `r1`
to write data to `victoriametrics-addr-1`, while each `r2` should write data to `victoriametrics-addr-2`.
### Multiple retentions
Just start multiple VictoriaMetrics instances with distinct values for the following flags:
* `-retentionPeriod`
* `-storageDataPath`, so the data for each retention period is saved in a separate directory
* `-httpListenAddr`, so clients may reach VictoriaMetrics instance with proper retention
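For example, two instances with 1-month and 12-month retentions may run side by side (paths and ports below are just examples):
```
# short-term instance
/path/to/victoria-metrics-prod -retentionPeriod=1 -storageDataPath=/var/lib/vm-short -httpListenAddr=:8428
# long-term instance
/path/to/victoria-metrics-prod -retentionPeriod=12 -storageDataPath=/var/lib/vm-long -httpListenAddr=:8429
```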
### Downsampling
There is no downsampling support at the moment, but:
- VictoriaMetrics is optimized for querying big amounts of raw data. See benchmark results for heavy queries
in [this article](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae).
- VictoriaMetrics has good compression for on-disk data. See [this article](https://medium.com/@valyala/victoriametrics-achieving-better-compression-for-time-series-data-than-gorilla-317bc1f95932)
for details.
These properties reduce the need for downsampling. We plan to implement downsampling in the future.
See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/36) for details.
### Multi-tenancy
Single-node VictoriaMetrics doesn't support multi-tenancy. Use [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster) instead.
### Scalability and cluster version
Though a single-node VictoriaMetrics cannot scale to multiple nodes, it is optimized for resource usage - storage size / bandwidth / IOPS, RAM, CPU.
This means that a single-node VictoriaMetrics may scale vertically and substitute a moderately sized cluster built with competing solutions
such as Thanos, Uber M3, InfluxDB or TimescaleDB. See [vertical scalability benchmarks](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae).
So try the single-node VictoriaMetrics first and then [switch to the cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster) if you still need
horizontally scalable long-term remote storage for really large Prometheus deployments.
[Contact us](mailto:info@victoriametrics.com) for paid support.
### Alerting
VictoriaMetrics doesn't support rule evaluation and alerting yet, so these actions must be performed either
on [Prometheus side](https://prometheus.io/docs/alerting/overview/) or on [Grafana side](https://grafana.com/docs/alerting/rules/).
### Security
Do not forget to protect sensitive endpoints in VictoriaMetrics when exposing it to untrusted networks such as the internet.
Consider setting the following command-line flags:
* `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS.
* `-httpAuth.username` and `-httpAuth.password` for protecting all the HTTP endpoints
with [HTTP Basic Authentication](https://en.wikipedia.org/wiki/Basic_access_authentication).
* `-deleteAuthKey` for protecting `/api/v1/admin/tsdb/delete_series` endpoint. See [how to delete time series](#how-to-delete-time-series).
* `-snapshotAuthKey` for protecting `/snapshot*` endpoints. See [how to work with snapshots](#how-to-work-with-snapshots).
Explicitly set an internal network interface for the TCP and UDP ports used for data ingestion via Graphite and OpenTSDB formats.
For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<internal_iface_ip>:2003`.
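For example, a hardened start command combining these flags may look as follows (cert paths, credentials and the interface IP are placeholders):
```
# placeholder cert paths, credentials and interface IP; substitute your own values
/path/to/victoria-metrics-prod -tls -tlsCertFile=/path/to/cert.pem -tlsKeyFile=/path/to/key.pem \
  -httpAuth.username=admin -httpAuth.password=<secret> \
  -graphiteListenAddr=<internal_iface_ip>:2003
```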
### Tuning
* There is no need to tune VictoriaMetrics, since it uses reasonable defaults for command-line flags,
which are automatically adjusted for the available CPU and RAM resources.
* There is no need to tune the Operating System, since VictoriaMetrics is optimized for default OS settings.
The only option is increasing the limit on [the number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a),
so Prometheus instances can establish more connections to VictoriaMetrics.
* The recommended filesystem is `ext4`, the recommended persistent storage is [persistent HDD-based disk on GCP](https://cloud.google.com/compute/docs/disks/#pdspecs),
since it is protected from hardware failures via internal replication and it can be [resized on the fly](https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd).
If you plan storing more than 1TB of data on `ext4` partition or plan extending it to more than 16TB,
then the following options are recommended to pass to `mkfs.ext4`:
```
mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
```
### Monitoring
VictoriaMetrics exports internal metrics in Prometheus format on the `/metrics` page.
Add this page to Prometheus' scrape config in order to collect VictoriaMetrics metrics.
There is [an official Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/dashboards/10229).
The most interesting metrics are:
* `vm_cache_entries{type="storage/hour_metric_ids"}` - the number of time series with new data points during the last hour,
aka active time series.
* `rate(vm_new_timeseries_created_total[5m])` - time series churn rate.
* `vm_rows{type="indexdb"}` - the number of rows in the inverted index. A high value usually means a high churn rate for time series.
* Sum of `vm_rows{type="storage/big"}` and `vm_rows{type="storage/small"}` - total number of `(timestamp, value)` data points
in the database.
* Sum of all the `vm_cache_size_bytes` metrics - the total size of all the caches in the database.
* `vm_allowed_memory_bytes` - the maximum allowed size for caches in the database. It is calculated as `system_memory * <-memory.allowedPercent> / 100`,
where `system_memory` is the amount of system memory and `-memory.allowedPercent` is the corresponding flag value.
For example, on a host with 64GiB of RAM and the default `-memory.allowedPercent=60`, caches may occupy up to ~38.4GiB.
* `vm_rows_inserted_total` - the total number of inserted rows since VictoriaMetrics start.
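Since all these metrics are exposed in plain Prometheus text format, they can also be inspected directly without Prometheus, for instance:
```
curl -s http://<victoria-metrics-host>:8428/metrics | grep vm_rows
```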
### Troubleshooting
* It is recommended to use default command-line flag values (i.e. don't set them explicitly) until the need
to tweak these flag values arises.
* If VictoriaMetrics works slowly and eats more than a CPU core per 100K ingested data points per second,
then it is likely you have too many active time series for the current amount of RAM.
It is recommended to increase the amount of RAM on the node with VictoriaMetrics in order to improve
ingestion performance.
Another option is to increase the `-memory.allowedPercent` command-line flag value. Be careful with this
option, since too big a value for `-memory.allowedPercent` may result in high I/O usage.
* VictoriaMetrics requires free disk space for [merging data files to bigger ones](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).
It may slow down when not enough free space is left. So make sure the `-storageDataPath` directory
has at least 20% free space compared to the disk size; see the command after this list.
* If VictoriaMetrics doesn't work because certain parts are corrupted due to disk errors,
then just remove the directories with broken parts. This will recover VictoriaMetrics at the cost
of losing the data stored in the broken parts. In the future, a `vmrecover` tool will be created
for automatic recovery from such errors.
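The free space under `-storageDataPath` mentioned above can be checked with standard OS tooling, e.g.:
```
df -h </path/to/victoria-metrics-data>
```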
### Backfilling
Make sure that the configured `-retentionPeriod` covers timestamps for the backfilled data.
It is recommended to disable the query cache with the `-search.disableCache` command-line flag when writing
historical data with timestamps from the past, since the cache assumes that data is written with
current timestamps. The query cache can be re-enabled after the backfilling is complete, as sketched below.
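A hypothetical backfilling session could look like this; the flag is real, while the binary path and the ingestion step are placeholders:
```
# start VictoriaMetrics with the query cache disabled
/path/to/victoria-metrics-prod -search.disableCache

# ... write historical data via any supported ingestion protocol ...

# restart without -search.disableCache once backfilling is complete
```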
### Profiling
VictoriaMetrics provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
- Memory profile. It can be collected with the following command:
```
curl -s http://<victoria-metrics-host>:8428/debug/pprof/heap > mem.pprof
```
- CPU profile. It can be collected with the following command:
```
curl -s http://<victoria-metrics-host>:8428/debug/pprof/profile > cpu.pprof
```
The command for collecting CPU profile waits for 30 seconds before returning.
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
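The collected profiles may then be opened locally, e.g. in the interactive web UI available in recent `pprof` versions (assuming the Go toolchain is installed):
```
go tool pprof -http=:8081 mem.pprof
```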
## Integrations
* [netdata](https://github.com/netdata/netdata) can push data into VictoriaMetrics via `Prometheus remote_write API`.
See [these docs](https://github.com/netdata/netdata#integrations).
* [go-graphite/carbonapi](https://github.com/go-graphite/carbonapi) can use VictoriaMetrics as a time series backend.
See [this example](/blob/master/cmd/carbonapi/carbonapi.example.prometheus.yaml).
* [Ansible role for installing VictoriaMetrics](https://github.com/dreamteam-gg/ansible-victoriametrics-role).
## Roadmap
- [ ] Replication [#118](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/118)
- [ ] Support of Object Storages (GCS, S3, Azure Storage) [#38](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/38)
- [ ] Data downsampling [#36](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/36)
- [ ] Alert Manager Integration [#119](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/119)
- [ ] CLI tool for data migration, re-balancing and adding/removing nodes [#103](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/103)
The discussion happens [here](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/129). Feel free to comment on any item or add your own.
## Contacts
Contact us with any questions regarding VictoriaMetrics at [info@victoriametrics.com](mailto:info@victoriametrics.com).
## Community and contributions
Feel free to ask any questions regarding VictoriaMetrics:
- [slack](http://slack.victoriametrics.com/)
- [telegram-en](https://t.me/VictoriaMetrics_en)
- [telegram-ru](https://t.me/VictoriaMetrics_ru1)
- [google groups](https://groups.google.com/forum/#!forum/victorametrics-users)
If you like VictoriaMetrics and want to contribute, then we need the following:
- Filing issues and feature requests [here](https://github.com/VictoriaMetrics/VictoriaMetrics/issues).
- Spreading the word about VictoriaMetrics: conference talks, articles, comments, experience sharing with colleagues.
- Updating documentation.
We are open to third-party pull requests provided they follow the [KISS design principle](https://en.wikipedia.org/wiki/KISS_principle):
- Prefer simple code and architecture.
- Avoid complex abstractions.
- Avoid magic code and fancy algorithms.
- Avoid [big external dependencies](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d).
- Minimize the number of moving parts in the distributed system.
- Avoid automated decisions, which may hurt cluster availability, consistency or performance.
Adhering to the `KISS` principle simplifies the resulting code and architecture, so it can be reviewed, understood and verified by many people.
## Reporting bugs
Report bugs and propose new features [here](https://github.com/VictoriaMetrics/VictoriaMetrics/issues).
## Victoria Metrics Logo
[Zip](VM_logo.zip) contains three folders with different image orientations (main color and inverted versions).
@@ -41,24 +806,21 @@
Files included in each folder:
* 2 EPS Adobe Illustrator EPS10 files
### Logo Usage Guidelines
#### Font used:
* Lato Black
* Lato Regular
#### Color Palette:
* HEX [#110f0f](https://www.color-hex.com/color/110f0f)
* HEX [#ffffff](https://www.color-hex.com/color/ffffff)
### We kindly ask:
- Please don't use any font other than the suggested one.
- There should be sufficient clear space around the logo.
- Do not change spacing, alignment, or relative locations of the design elements.
- Do not change the proportions of any of the design elements or the design itself. You may resize as needed but must retain all proportions.


@@ -0,0 +1,84 @@
# All these commands must run from repository root.
victoria-metrics:
APP_NAME=victoria-metrics $(MAKE) app-local
victoria-metrics-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker
package-victoria-metrics:
APP_NAME=victoria-metrics \
$(MAKE) package-via-docker
publish-victoria-metrics:
APP_NAME=victoria-metrics $(MAKE) publish-via-docker
run-victoria-metrics:
mkdir -p victoria-metrics-data
DOCKER_OPTS='-v $(shell pwd)/victoria-metrics-data:/victoria-metrics-data' \
APP_NAME=victoria-metrics \
ARGS='-graphiteListenAddr=:2003 -opentsdbListenAddr=:4242 -retentionPeriod=12 -search.maxUniqueTimeseries=1000000 -search.maxQueryDuration=10m' \
$(MAKE) run-via-docker
victoria-metrics-arm:
CGO_ENABLED=0 GOOS=linux GOARCH=arm GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/victoria-metrics-arm ./app/victoria-metrics
victoria-metrics-arm-prod:
APP_NAME=victoria-metrics APP_SUFFIX='-arm' DOCKER_OPTS='--env CGO_ENABLED=0 --env GOARCH=arm' $(MAKE) app-via-docker
victoria-metrics-arm64:
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/victoria-metrics-arm64 ./app/victoria-metrics
victoria-metrics-arm64-prod:
APP_NAME=victoria-metrics APP_SUFFIX='-arm64' DOCKER_OPTS='--env CGO_ENABLED=0 --env GOARCH=arm64' $(MAKE) app-via-docker
victoria-metrics-ppc64le:
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/victoria-metrics-ppc64le ./app/victoria-metrics
victoria-metrics-ppc64le-prod:
APP_NAME=victoria-metrics APP_SUFFIX='-ppc64le' DOCKER_OPTS='--env CGO_ENABLED=0 --env GOARCH=ppc64le' $(MAKE) app-via-docker
victoria-metrics-386:
CGO_ENABLED=0 GOOS=linux GOARCH=386 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/victoria-metrics-386 ./app/victoria-metrics
victoria-metrics-386-prod:
APP_NAME=victoria-metrics APP_SUFFIX='-386' DOCKER_OPTS='--env CGO_ENABLED=0 --env GOARCH=386' $(MAKE) app-via-docker
victoria-metrics-pure:
APP_NAME=victoria-metrics $(MAKE) app-local-pure
victoria-metrics-pure-prod:
APP_NAME=victoria-metrics APP_SUFFIX='-pure' DOCKER_OPTS='--env CGO_ENABLED=0' $(MAKE) app-via-docker
### Packaging as DEB - amd64
victoria-metrics-package-deb: victoria-metrics-prod
./package/package_deb.sh amd64
### Packaging as DEB - arm64
victoria-metrics-package-deb-arm64: victoria-metrics-arm64-prod
./package/package_deb.sh arm64
### Packaging as DEB - all
victoria-metrics-package-deb-all: \
victoria-metrics-package-deb \
victoria-metrics-package-deb-arm64
### Packaging as RPM - amd64
victoria-metrics-package-rpm: victoria-metrics-prod
./package/package_rpm.sh amd64
### Packaging as RPM - arm64
victoria-metrics-package-rpm-arm64: victoria-metrics-arm64-prod
./package/package_rpm.sh arm64
### Packaging as RPM - all
victoria-metrics-package-rpm-all: \
victoria-metrics-package-rpm \
victoria-metrics-package-rpm-arm64
### Packaging as both DEB and RPM - all
victoria-metrics-package-deb-rpm-all: \
victoria-metrics-package-deb \
victoria-metrics-package-deb-arm64 \
victoria-metrics-package-rpm \
victoria-metrics-package-rpm-arm64


@@ -0,0 +1,5 @@
FROM scratch
COPY --from=local/certs:1.0.3 /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY bin/victoria-metrics-prod .
EXPOSE 8428
ENTRYPOINT ["/victoria-metrics-prod"]


@@ -0,0 +1,63 @@
package main
import (
"flag"
"net/http"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
)
var httpListenAddr = flag.String("httpListenAddr", ":8428", "TCP address to listen for http connections")
func main() {
flag.Parse()
buildinfo.Init()
logger.Init()
logger.Infof("starting VictoraMetrics at %q...", *httpListenAddr)
startTime := time.Now()
vmstorage.Init()
vmselect.Init()
vminsert.Init()
go httpserver.Serve(*httpListenAddr, requestHandler)
logger.Infof("started VictoriaMetrics in %s", time.Since(startTime))
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)
startTime = time.Now()
if err := httpserver.Stop(*httpListenAddr); err != nil {
logger.Fatalf("cannot stop the webservice: %s", err)
}
vminsert.Stop()
logger.Infof("successfully shut down the webservice in %s", time.Since(startTime))
vmstorage.Stop()
vmselect.Stop()
fs.MustStopDirRemover()
logger.Infof("the VictoriaMetrics has been stopped in %s", time.Since(startTime))
}
func requestHandler(w http.ResponseWriter, r *http.Request) bool {
if vminsert.RequestHandler(w, r) {
return true
}
if vmselect.RequestHandler(w, r) {
return true
}
if vmstorage.RequestHandler(w, r) {
return true
}
return false
}


@@ -0,0 +1,494 @@
// +build integration
package main
import (
"bytes"
"encoding/json"
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"os"
"path/filepath"
"reflect"
"strings"
"testing"
"time"
testutil "github.com/VictoriaMetrics/VictoriaMetrics/app/victoria-metrics/test"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)
const (
testFixturesDir = "testdata"
testStorageSuffix = "vm-test-storage"
testHTTPListenAddr = ":7654"
testStatsDListenAddr = ":2003"
testOpenTSDBListenAddr = ":4242"
testOpenTSDBHTTPListenAddr = ":4243"
testLogLevel = "INFO"
)
const (
testReadHTTPPath = "http://127.0.0.1" + testHTTPListenAddr
testWriteHTTPPath = "http://127.0.0.1" + testHTTPListenAddr + "/write"
testOpenTSDBWriteHTTPPath = "http://127.0.0.1" + testOpenTSDBHTTPListenAddr + "/api/put"
testPromWriteHTTPPath = "http://127.0.0.1" + testHTTPListenAddr + "/api/v1/write"
testHealthHTTPPath = "http://127.0.0.1" + testHTTPListenAddr + "/health"
)
const (
testStorageInitTimeout = 10 * time.Second
)
var (
storagePath string
insertionTime = time.Now().UTC()
)
type test struct {
Name string `json:"name"`
Data []string `json:"data"`
Query []string `json:"query"`
ResultMetrics []Metric `json:"result_metrics"`
ResultSeries Series `json:"result_series"`
ResultQuery Query `json:"result_query"`
ResultQueryRange QueryRange `json:"result_query_range"`
Issue string `json:"issue"`
}
type Metric struct {
Metric map[string]string `json:"metric"`
Values []float64 `json:"values"`
Timestamps []int64 `json:"timestamps"`
}
func (r *Metric) UnmarshalJSON(b []byte) error {
type plain Metric
return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(r))
}
type Series struct {
Status string `json:"status"`
Data []map[string]string `json:"data"`
}
type Query struct {
Status string `json:"status"`
Data QueryData `json:"data"`
}
type QueryData struct {
ResultType string `json:"resultType"`
Result []QueryDataResult `json:"result"`
}
type QueryDataResult struct {
Metric map[string]string `json:"metric"`
Value []interface{} `json:"value"`
}
func (r *QueryDataResult) UnmarshalJSON(b []byte) error {
type plain QueryDataResult
return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(r))
}
type QueryRange struct {
Status string `json:"status"`
Data QueryRangeData `json:"data"`
}
type QueryRangeData struct {
ResultType string `json:"resultType"`
Result []QueryRangeDataResult `json:"result"`
}
type QueryRangeDataResult struct {
Metric map[string]string `json:"metric"`
Values [][]interface{} `json:"values"`
}
func (r *QueryRangeDataResult) UnmarshalJSON(b []byte) error {
type plain QueryRangeDataResult
return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(r))
}
func TestMain(m *testing.M) {
setUp()
code := m.Run()
tearDown()
os.Exit(code)
}
func setUp() {
storagePath = filepath.Join(os.TempDir(), testStorageSuffix)
processFlags()
logger.Init()
vmstorage.InitWithoutMetrics()
vmselect.Init()
vminsert.Init()
go httpserver.Serve(*httpListenAddr, requestHandler)
readyStorageCheckFunc := func() bool {
resp, err := http.Get(testHealthHTTPPath)
if err != nil {
return false
}
resp.Body.Close()
return resp.StatusCode == 200
}
if err := waitFor(testStorageInitTimeout, readyStorageCheckFunc); err != nil {
log.Fatalf("http server can't start for %s seconds, err %s", testStorageInitTimeout, err)
}
}
func processFlags() {
flag.Parse()
for _, fv := range []struct {
flag string
value string
}{
{flag: "storageDataPath", value: storagePath},
{flag: "httpListenAddr", value: testHTTPListenAddr},
{flag: "graphiteListenAddr", value: testStatsDListenAddr},
{flag: "opentsdbListenAddr", value: testOpenTSDBListenAddr},
{flag: "loggerLevel", value: testLogLevel},
{flag: "opentsdbHTTPListenAddr", value: testOpenTSDBHTTPListenAddr},
} {
// panics if flag doesn't exist
if err := flag.Lookup(fv.flag).Value.Set(fv.value); err != nil {
log.Fatalf("unable to set %q with value %q, err: %v", fv.flag, fv.value, err)
}
}
}
func waitFor(timeout time.Duration, f func() bool) error {
fraction := timeout / 10
for i := fraction; i < timeout; i += fraction {
if f() {
return nil
}
time.Sleep(fraction)
}
return fmt.Errorf("timeout")
}
func tearDown() {
if err := httpserver.Stop(*httpListenAddr); err != nil {
log.Printf("cannot stop the webservice: %s", err)
}
vminsert.Stop()
vmstorage.Stop()
vmselect.Stop()
fs.MustRemoveAll(storagePath)
}
func TestWriteRead(t *testing.T) {
t.Run("write", testWrite)
time.Sleep(1 * time.Second)
vmstorage.Stop()
// re-open the storage after it was stopped in testWrite
vmstorage.InitWithoutMetrics()
t.Run("read", testRead)
}
func testWrite(t *testing.T) {
t.Run("prometheus", func(t *testing.T) {
for _, test := range readIn("prometheus", t, insertionTime) {
s := newSuite(t)
r := testutil.WriteRequest{}
s.noError(json.Unmarshal([]byte(strings.Join(test.Data, "\n")), &r.Timeseries))
data, err := testutil.Compress(r)
s.greaterThan(len(r.Timeseries), 0)
if err != nil {
t.Errorf("error compressing %v %s", r, err)
t.Fail()
}
httpWrite(t, testPromWriteHTTPPath, bytes.NewBuffer(data))
}
})
t.Run("influxdb", func(t *testing.T) {
for _, x := range readIn("influxdb", t, insertionTime) {
test := x
t.Run(test.Name, func(t *testing.T) {
t.Parallel()
httpWrite(t, testWriteHTTPPath, bytes.NewBufferString(strings.Join(test.Data, "\n")))
})
}
})
t.Run("graphite", func(t *testing.T) {
for _, x := range readIn("graphite", t, insertionTime) {
test := x
t.Run(test.Name, func(t *testing.T) {
t.Parallel()
tcpWrite(t, "127.0.0.1"+testStatsDListenAddr, strings.Join(test.Data, "\n"))
})
}
})
t.Run("opentsdb", func(t *testing.T) {
for _, x := range readIn("opentsdb", t, insertionTime) {
test := x
t.Run(test.Name, func(t *testing.T) {
t.Parallel()
tcpWrite(t, "127.0.0.1"+testOpenTSDBListenAddr, strings.Join(test.Data, "\n"))
})
}
})
t.Run("opentsdbhttp", func(t *testing.T) {
for _, x := range readIn("opentsdbhttp", t, insertionTime) {
test := x
t.Run(test.Name, func(t *testing.T) {
t.Parallel()
logger.Infof("writing %s", test.Data)
httpWrite(t, testOpenTSDBWriteHTTPPath, bytes.NewBufferString(strings.Join(test.Data, "\n")))
})
}
})
}
func testRead(t *testing.T) {
for _, engine := range []string{"prometheus", "graphite", "opentsdb", "influxdb", "opentsdbhttp"} {
t.Run(engine, func(t *testing.T) {
for _, x := range readIn(engine, t, insertionTime) {
test := x
t.Run(test.Name, func(t *testing.T) {
t.Parallel()
for _, q := range test.Query {
q = testutil.PopulateTimeTplString(q, insertionTime)
if test.Issue != "" {
test.Issue = "Regression in " + test.Issue
}
switch {
case strings.HasPrefix(q, "/api/v1/export"):
if err := checkMetricsResult(httpReadMetrics(t, testReadHTTPPath, q), test.ResultMetrics); err != nil {
t.Fatalf("Export. %s fails with error %s.%s", q, err, test.Issue)
}
case strings.HasPrefix(q, "/api/v1/series"):
s := Series{}
httpReadStruct(t, testReadHTTPPath, q, &s)
if err := checkSeriesResult(s, test.ResultSeries); err != nil {
t.Fatalf("Series. %s fails with error %s.%s", q, err, test.Issue)
}
case strings.HasPrefix(q, "/api/v1/query_range"):
queryResult := QueryRange{}
httpReadStruct(t, testReadHTTPPath, q, &queryResult)
if err := checkQueryRangeResult(queryResult, test.ResultQueryRange); err != nil {
t.Fatalf("Query Range. %s fails with error %s.%s", q, err, test.Issue)
}
case strings.HasPrefix(q, "/api/v1/query"):
queryResult := Query{}
httpReadStruct(t, testReadHTTPPath, q, &queryResult)
if err := checkQueryResult(queryResult, test.ResultQuery); err != nil {
t.Fatalf("Query. %s fails with error %s.%s", q, err, test.Issue)
}
default:
t.Fatalf("unsupported read query %s", q)
}
}
})
}
})
}
}
func readIn(readFor string, t *testing.T, insertTime time.Time) []test {
t.Helper()
s := newSuite(t)
var tt []test
s.noError(filepath.Walk(filepath.Join(testFixturesDir, readFor), func(path string, info os.FileInfo, err error) error {
if filepath.Ext(path) != ".json" {
return nil
}
b, err := ioutil.ReadFile(path)
s.noError(err)
item := test{}
s.noError(json.Unmarshal(b, &item))
for i := range item.Data {
item.Data[i] = testutil.PopulateTimeTplString(item.Data[i], insertTime)
}
tt = append(tt, item)
return nil
}))
if len(tt) == 0 {
t.Fatalf("no test found in %s", filepath.Join(testFixturesDir, readFor))
}
return tt
}
func httpWrite(t *testing.T, address string, r io.Reader) {
t.Helper()
s := newSuite(t)
resp, err := http.Post(address, "", r)
s.noError(err)
s.noError(resp.Body.Close())
s.equalInt(resp.StatusCode, 204)
}
func tcpWrite(t *testing.T, address string, data string) {
t.Helper()
s := newSuite(t)
conn, err := net.Dial("tcp", address)
s.noError(err)
defer conn.Close()
n, err := conn.Write([]byte(data))
s.noError(err)
s.equalInt(n, len(data))
}
func httpReadMetrics(t *testing.T, address, query string) []Metric {
t.Helper()
s := newSuite(t)
resp, err := http.Get(address + query)
s.noError(err)
defer resp.Body.Close()
s.equalInt(resp.StatusCode, 200)
var rows []Metric
for dec := json.NewDecoder(resp.Body); dec.More(); {
var row Metric
s.noError(dec.Decode(&row))
rows = append(rows, row)
}
return rows
}
func httpReadStruct(t *testing.T, address, query string, dst interface{}) {
t.Helper()
s := newSuite(t)
resp, err := http.Get(address + query)
s.noError(err)
defer resp.Body.Close()
s.equalInt(resp.StatusCode, 200)
s.noError(json.NewDecoder(resp.Body).Decode(dst))
}
func checkMetricsResult(got, want []Metric) error {
for _, r := range append([]Metric(nil), got...) {
want = removeIfFoundMetrics(r, want)
}
if len(want) > 0 {
return fmt.Errorf("exptected metrics %+v not found in %+v", want, got)
}
return nil
}
func removeIfFoundMetrics(r Metric, contains []Metric) []Metric {
for i, item := range contains {
if reflect.DeepEqual(r.Metric, item.Metric) && reflect.DeepEqual(r.Values, item.Values) &&
reflect.DeepEqual(r.Timestamps, item.Timestamps) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
func checkSeriesResult(got, want Series) error {
if got.Status != want.Status {
return fmt.Errorf("status mismatch %q - %q", want.Status, got.Status)
}
wantData := append([]map[string]string(nil), want.Data...)
for _, r := range got.Data {
wantData = removeIfFoundSeries(r, wantData)
}
if len(wantData) > 0 {
return fmt.Errorf("expected seria(s) %+v not found in %+v", wantData, got.Data)
}
return nil
}
func removeIfFoundSeries(r map[string]string, contains []map[string]string) []map[string]string {
for i, item := range contains {
if reflect.DeepEqual(r, item) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
func checkQueryResult(got, want Query) error {
if got.Status != want.Status {
return fmt.Errorf("status mismatch %q - %q", want.Status, got.Status)
}
if got.Data.ResultType != want.Data.ResultType {
return fmt.Errorf("result type mismatch %q - %q", want.Data.ResultType, got.Data.ResultType)
}
wantData := append([]QueryDataResult(nil), want.Data.Result...)
for _, r := range got.Data.Result {
wantData = removeIfFoundQueryData(r, wantData)
}
if len(wantData) > 0 {
return fmt.Errorf("expected query result %+v not found in %+v", wantData, got.Data.Result)
}
return nil
}
func removeIfFoundQueryData(r QueryDataResult, contains []QueryDataResult) []QueryDataResult {
for i, item := range contains {
if reflect.DeepEqual(r.Metric, item.Metric) && reflect.DeepEqual(r.Value[0], item.Value[0]) && reflect.DeepEqual(r.Value[1], item.Value[1]) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
func checkQueryRangeResult(got, want QueryRange) error {
if got.Status != want.Status {
return fmt.Errorf("status mismatch %q - %q", want.Status, got.Status)
}
if got.Data.ResultType != want.Data.ResultType {
return fmt.Errorf("result type mismatch %q - %q", want.Data.ResultType, got.Data.ResultType)
}
wantData := append([]QueryRangeDataResult(nil), want.Data.Result...)
for _, r := range got.Data.Result {
wantData = removeIfFoundQueryRangeData(r, wantData)
}
if len(wantData) > 0 {
return fmt.Errorf("expected query range result %+v not found in %+v", wantData, got.Data.Result)
}
return nil
}
func removeIfFoundQueryRangeData(r QueryRangeDataResult, contains []QueryRangeDataResult) []QueryRangeDataResult {
for i, item := range contains {
if reflect.DeepEqual(r.Metric, item.Metric) && reflect.DeepEqual(r.Values, item.Values) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
type suite struct{ t *testing.T }
func newSuite(t *testing.T) *suite { return &suite{t: t} }
func (s *suite) noError(err error) {
s.t.Helper()
if err != nil {
s.t.Errorf("unexpected error %v", err)
s.t.FailNow()
}
}
func (s *suite) equalInt(a, b int) {
s.t.Helper()
if a != b {
s.t.Errorf("%d not equal %d", a, b)
s.t.FailNow()
}
}
func (s *suite) greaterThan(a, b int) {
s.t.Helper()
if a <= b {
s.t.Errorf("%d less or equal then %d", a, b)
s.t.FailNow()
}
}


@@ -0,0 +1,52 @@
package test
import (
"fmt"
"log"
"regexp"
"strings"
"time"
)
var (
parseTimeExpRegex = regexp.MustCompile(`"?{TIME[^}]*}"?`)
extractRegex = regexp.MustCompile(`"?{([^}]*)}"?`)
)
// PopulateTimeTplString substitutes {TIME_*} with t in s and returns the result.
func PopulateTimeTplString(s string, t time.Time) string {
return string(PopulateTimeTpl([]byte(s), t))
}
// PopulateTimeTpl substitutes {TIME_*} with tGlobal in b and returns the result.
func PopulateTimeTpl(b []byte, tGlobal time.Time) []byte {
return parseTimeExpRegex.ReplaceAllFunc(b, func(repl []byte) []byte {
t := tGlobal
repl = extractRegex.FindSubmatch(repl)[1]
parts := strings.SplitN(string(repl), "-", 2)
if len(parts) == 2 {
duration, err := time.ParseDuration(strings.TrimSpace(parts[1]))
if err != nil {
log.Fatalf("error %s parsing duration %s in %s", err, parts[1], repl)
}
t = t.Add(-duration)
}
switch strings.TrimSpace(parts[0]) {
case `TIME_S`:
return []byte(fmt.Sprintf("%d", t.Unix()))
case `TIME_MSZ`:
return []byte(fmt.Sprintf("%d", t.Unix()*1e3))
case `TIME_MS`:
return []byte(fmt.Sprintf("%d", timeToMillis(t)))
case `TIME_NS`:
return []byte(fmt.Sprintf("%d", t.UnixNano()))
default:
log.Fatalf("unknown time pattern %s in %s", parts[0], repl)
}
return repl
})
}
func timeToMillis(t time.Time) int64 {
return t.UnixNano() / 1e6
}


@@ -0,0 +1,24 @@
package test
import (
"testing"
"time"
)
func TestPopulateTimeTplString(t *testing.T) {
now, err := time.Parse(time.RFC3339, "2006-01-02T15:04:05Z")
if err != nil {
t.Fatalf("unexpected error when parsing time: %s", err)
}
f := func(s, resultExpected string) {
t.Helper()
result := PopulateTimeTplString(s, now)
if result != resultExpected {
t.Fatalf("unexpected result; got %q; want %q", result, resultExpected)
}
}
f("", "")
f("{TIME_S}", "1136214245")
f("now: {TIME_S}, past 30s: {TIME_MS-30s}, now: {TIME_S}", "now: 1136214245, past 30s: 1136214215000, now: 1136214245")
f("now: {TIME_MS}, past 30m: {TIME_MSZ-30m}, past 2h: {TIME_NS-2h}", "now: 1136214245000, past 30m: 1136212445000, past 2h: 1136207045000000000")
}


@@ -0,0 +1,338 @@
// +build integration
// Source: https://github.com/prometheus/prometheus/blob/master/prompb/remote.pb.go . The code is copy-pasted and cleaned up.
package test
import (
"encoding/binary"
"math"
"math/bits"
)
type WriteRequest struct {
Timeseries []TimeSeries `protobuf:"bytes,1,rep,name=timeseries,proto3" json:"timeseries"`
}
func (m *WriteRequest) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Timeseries) > 0 {
for _, e := range m.Timeseries {
l = e.Size()
n += 1 + l + sovRemote(uint64(l))
}
}
return n
}
func sovRemote(x uint64) (n int) {
return (bits.Len64(x|1) + 6) / 7
}
func (m *WriteRequest) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *WriteRequest) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *WriteRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if len(m.Timeseries) > 0 {
for iNdEx := len(m.Timeseries) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Timeseries[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintRemote(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
func encodeVarintRemote(dAtA []byte, offset int, v uint64) int {
offset -= sovRemote(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
type Sample struct {
Value float64 `protobuf:"fixed64,1,opt,name=value,proto3" json:"value,omitempty"`
Timestamp int64 `protobuf:"varint,2,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
}
func (m *Sample) Reset() { *m = Sample{} }
// TimeSeries represents samples and labels for a single time series.
type TimeSeries struct {
Labels []Label `protobuf:"bytes,1,rep,name=labels,proto3" json:"labels"`
Samples []Sample `protobuf:"bytes,2,rep,name=samples,proto3" json:"samples"`
}
func (m *TimeSeries) Reset() { *m = TimeSeries{} }
type Label struct {
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
}
func (m *Label) Reset() { *m = Label{} }
type Labels struct {
Labels []Label `protobuf:"bytes,1,rep,name=labels,proto3" json:"labels"`
}
func (m *Labels) Reset() { *m = Labels{} }
func (m *Sample) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *Sample) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Sample) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if m.Timestamp != 0 {
i = encodeVarintTypes(dAtA, i, uint64(m.Timestamp))
i--
dAtA[i] = 0x10
}
if m.Value != 0 {
i -= 8
binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Value))))
i--
dAtA[i] = 0x9
}
return len(dAtA) - i, nil
}
func (m *TimeSeries) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *TimeSeries) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *TimeSeries) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if len(m.Samples) > 0 {
for iNdEx := len(m.Samples) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Samples[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x12
}
}
if len(m.Labels) > 0 {
for iNdEx := len(m.Labels) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Labels[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
func (m *Label) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *Label) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Label) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Value) > 0 {
i -= len(m.Value)
copy(dAtA[i:], m.Value)
i = encodeVarintTypes(dAtA, i, uint64(len(m.Value)))
i--
dAtA[i] = 0x12
}
if len(m.Name) > 0 {
i -= len(m.Name)
copy(dAtA[i:], m.Name)
i = encodeVarintTypes(dAtA, i, uint64(len(m.Name)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func (m *Labels) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *Labels) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Labels) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if len(m.Labels) > 0 {
for iNdEx := len(m.Labels) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Labels[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
func encodeVarintTypes(dAtA []byte, offset int, v uint64) int {
offset -= sovTypes(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *Sample) Size() (n int) {
if m == nil {
return 0
}
if m.Value != 0 {
n += 9
}
if m.Timestamp != 0 {
n += 1 + sovTypes(uint64(m.Timestamp))
}
return n
}
func (m *TimeSeries) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Labels) > 0 {
for _, e := range m.Labels {
l = e.Size()
n += 1 + l + sovTypes(uint64(l))
}
}
if len(m.Samples) > 0 {
for _, e := range m.Samples {
l = e.Size()
n += 1 + l + sovTypes(uint64(l))
}
}
return n
}
func (m *Label) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Name)
if l > 0 {
n += 1 + l + sovTypes(uint64(l))
}
l = len(m.Value)
if l > 0 {
n += 1 + l + sovTypes(uint64(l))
}
return n
}
func (m *Labels) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Labels) > 0 {
for _, e := range m.Labels {
l = e.Size()
n += 1 + l + sovTypes(uint64(l))
}
}
return n
}
func sovTypes(x uint64) (n int) {
return (bits.Len64(x|1) + 6) / 7
}


@@ -0,0 +1,13 @@
// +build integration
package test
import "github.com/golang/snappy"
func Compress(wr WriteRequest) ([]byte, error) {
data, err := wr.Marshal()
if err != nil {
return nil, err
}
return snappy.Encode(nil, data), nil
}


@@ -0,0 +1,8 @@
{
"name": "basic_insertion",
"data": ["graphite.foo.bar.baz;tag1=value1;tag2=value2 123 {TIME_S}"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"graphite.foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123], "timestamps": ["{TIME_MSZ}"]}
]
}


@@ -0,0 +1,16 @@
{
"name": "comparison-not-inf-not-nan",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/150",
"data": [
"not_nan_not_inf;item=x 1 {TIME_S-1m}",
"not_nan_not_inf;item=x 1 {TIME_S-2m}",
"not_nan_not_inf;item=y 3 {TIME_S-1m}",
"not_nan_not_inf;item=y 1 {TIME_S-2m}"],
"query": ["/api/v1/query_range?query=1/(not_nan_not_inf-1)!=inf!=nan&start={TIME_S-3m}&end={TIME_S}&step=60"],
"result_query_range": {
"status":"success",
"data":{"resultType":"matrix",
"result":[
{"metric":{"item":"y"},"values":[["{TIME_S-1m}","0.5"],["{TIME_S}","0.5"]]}
]}}
}


@@ -0,0 +1,24 @@
{
"name": "max_lookback_set",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/209",
"data": [
"max_lookback_set 1 {TIME_S-30s}",
"max_lookback_set 2 {TIME_S-60s}",
"max_lookback_set 3 {TIME_S-120s}",
"max_lookback_set 4 {TIME_S-150s}"
],
"query": ["/api/v1/query_range?query=max_lookback_set&start={TIME_S-150s}&end={TIME_S}&step=10s&max_lookback=1s"],
"result_query_range": {
"status":"success",
"data":{"resultType":"matrix",
"result":[{"metric":{"__name__":"max_lookback_set"},"values":[
["{TIME_S-150s}","4"],
["{TIME_S-140s}","4"],
["{TIME_S-120s}","3"],
["{TIME_S-110s}","3"],
["{TIME_S-60s}","2"],
["{TIME_S-50s}","2"],
["{TIME_S-30s}","1"],
["{TIME_S-20s}","1"]
]}]}}
}


@@ -0,0 +1,32 @@
{
"name": "max_lookback_unset",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/209",
"data": [
"max_lookback_unset 1 {TIME_S-30s}",
"max_lookback_unset 2 {TIME_S-60s}",
"max_lookback_unset 3 {TIME_S-120s}",
"max_lookback_unset 4 {TIME_S-150s}"
],
"query": ["/api/v1/query_range?query=max_lookback_unset&start={TIME_S-150s}&end={TIME_S}&step=10s"],
"result_query_range": {
"status":"success",
"data":{"resultType":"matrix",
"result":[{"metric":{"__name__":"max_lookback_unset"},"values":[
["{TIME_S-150s}","4"],
["{TIME_S-140s}","4"],
["{TIME_S-130s}","4"],
["{TIME_S-120s}","3"],
["{TIME_S-110s}","3"],
["{TIME_S-100s}","3"],
["{TIME_S-90s}","3"],
["{TIME_S-80s}","3"],
["{TIME_S-70s}","3"],
["{TIME_S-60s}","2"],
["{TIME_S-50s}","2"],
["{TIME_S-40s}","2"],
["{TIME_S-30s}","1"],
["{TIME_S-20s}","1"],
["{TIME_S-10s}","1"],
["{TIME_S}","1"]
]}]}}
}


@@ -0,0 +1,18 @@
{
"name": "not-nan-as-missing-data",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/153",
"data": [
"not_nan_as_missing_data;item=x 2 {TIME_S-2m}",
"not_nan_as_missing_data;item=x 1 {TIME_S-1m}",
"not_nan_as_missing_data;item=y 4 {TIME_S-2m}",
"not_nan_as_missing_data;item=y 3 {TIME_S-1m}"
],
"query": ["/api/v1/query_range?query=not_nan_as_missing_data>1&start={TIME_S-2m}&end={TIME_S}&step=60"],
"result_query_range": {
"status":"success",
"data":{"resultType":"matrix",
"result":[
{"metric":{"__name__":"not_nan_as_missing_data","item":"x"},"values":[["{TIME_S-2m}","2"]]},
{"metric":{"__name__":"not_nan_as_missing_data","item":"y"},"values":[["{TIME_S-2m}","4"],["{TIME_S-1m}","3"],["{TIME_S}","3"]]}
]}}
}


@@ -0,0 +1,14 @@
{
"name": "subquery-aggregation",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/184",
"data": [
"forms_daily_count;item=x 1 {TIME_S-1m}",
"forms_daily_count;item=x 2 {TIME_S-2m}",
"forms_daily_count;item=y 3 {TIME_S-1m}",
"forms_daily_count;item=y 4 {TIME_S-2m}"],
"query": ["/api/v1/query?query=min%20by%20(item)%20(min_over_time(forms_daily_count[10m:1m]))&time={TIME_S-1m}"],
"result_query": {
"status":"success",
"data":{"resultType":"vector","result":[{"metric":{"item":"x"},"value":["{TIME_S-1m}","1"]},{"metric":{"item":"y"},"value":["{TIME_S-1m}","3"]}]}
}
}


@@ -0,0 +1,9 @@
{
"name": "basic_insertion",
"data": ["measurement,tag1=value1,tag2=value2 field1=1.23,field2=123 {TIME_NS}"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"measurement_field2","tag1":"value1","tag2":"value2"},"values":[123], "timestamps": ["{TIME_MS}"]},
{"metric":{"__name__":"measurement_field1","tag1":"value1","tag2":"value2"},"values":[1.23], "timestamps": ["{TIME_MS}"]}
]
}


@@ -0,0 +1,8 @@
{
"name": "basic_insertion",
"data": ["put openstdb.foo.bar.baz {TIME_S} 123 tag1=value1 tag2=value2"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"openstdb.foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123], "timestamps": ["{TIME_MSZ}"]}
]
}


@@ -0,0 +1,8 @@
{
"name": "basic_insertion",
"data": ["{\"metric\": \"opentsdbhttp.foo\", \"value\": 1001, \"timestamp\": {TIME_S}, \"tags\": {\"bar\":\"baz\", \"x\": \"y\"}}"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"opentsdbhttp.foo","bar":"baz","x":"y"},"values":[1001], "timestamps": ["{TIME_MSZ}"]}
]
}


@@ -0,0 +1,9 @@
{
"name": "multiline",
"data": ["[{\"metric\": \"opentsdbhttp.multiline1\", \"value\": 1001, \"timestamp\": \"{TIME_S}\", \"tags\": {\"bar\":\"baz\", \"x\": \"y\"}}, {\"metric\": \"opentsdbhttp.multiline2\", \"value\": 1002, \"timestamp\": {TIME_S}}]"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"opentsdbhttp.multiline1","bar":"baz","x":"y"},"values":[1001], "timestamps": ["{TIME_MSZ}"]},
{"metric":{"__name__":"opentsdbhttp.multiline2"},"values":[1002], "timestamps": ["{TIME_MSZ}"]}
]
}


@@ -0,0 +1,8 @@
{
"name": "basic_insertion",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"prometheus.bar\"},{\"name\":\"baz\",\"value\":\"qux\"}],\"samples\":[{\"value\":100000,\"timestamp\":\"{TIME_MS}\"}]}]"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"prometheus.bar","baz":"qux"},"values":[100000], "timestamps": ["{TIME_MS}"]}
]
}


@@ -0,0 +1,10 @@
{
"name": "case-sensitive-regex",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/161",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"prometheus.sensitiveRegex\"},{\"name\":\"label\",\"value\":\"sensitiveRegex\"}],\"samples\":[{\"value\":2,\"timestamp\":\"{TIME_MS}\"}]},{\"labels\":[{\"name\":\"__name__\",\"value\":\"prometheus.sensitiveRegex\"},{\"name\":\"label\",\"value\":\"SensitiveRegex\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]}]"],
"query": ["/api/v1/export?match={label=~'(?i)sensitiveregex'}"],
"result_metrics": [
{"metric":{"__name__":"prometheus.sensitiveRegex","label":"sensitiveRegex"},"values":[2], "timestamps": ["{TIME_MS}"]},
{"metric":{"__name__":"prometheus.sensitiveRegex","label":"SensitiveRegex"},"values":[1], "timestamps": ["{TIME_MS}"]}
]
}


@@ -0,0 +1,9 @@
{
"name": "duplicate_label",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/172",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"prometheus.duplicate_label\"},{\"name\":\"duplicate\",\"value\":\"label\"},{\"name\":\"duplicate\",\"value\":\"label\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]}]"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"prometheus.duplicate_label","duplicate":"label"},"values":[1], "timestamps": ["{TIME_MS}"]}
]
}


@@ -0,0 +1,15 @@
{
"name": "match_series",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/155",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"MatchSeries\"},{\"name\":\"db\",\"value\":\"TenMinute\"},{\"name\":\"TurbineType\",\"value\":\"V112\"},{\"name\":\"Park\",\"value\":\"1\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]},{\"labels\":[{\"name\":\"__name__\",\"value\":\"MatchSeries\"},{\"name\":\"db\",\"value\":\"TenMinute\"},{\"name\":\"TurbineType\",\"value\":\"V112\"},{\"name\":\"Park\",\"value\":\"2\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]},{\"labels\":[{\"name\":\"__name__\",\"value\":\"MatchSeries\"},{\"name\":\"db\",\"value\":\"TenMinute\"},{\"name\":\"TurbineType\",\"value\":\"V112\"},{\"name\":\"Park\",\"value\":\"3\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]},{\"labels\":[{\"name\":\"__name__\",\"value\":\"MatchSeries\"},{\"name\":\"db\",\"value\":\"TenMinute\"},{\"name\":\"TurbineType\",\"value\":\"V112\"},{\"name\":\"Park\",\"value\":\"4\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]}]"],
"query": ["/api/v1/series?match[]={__name__='MatchSeries'}", "/api/v1/series?match[]={__name__=~'MatchSeries.*'}"],
"result_series": {
"status": "success",
"data": [
{"__name__":"MatchSeries","db":"TenMinute","Park":"1","TurbineType":"V112"},
{"__name__":"MatchSeries","db":"TenMinute","Park":"2","TurbineType":"V112"},
{"__name__":"MatchSeries","db":"TenMinute","Park":"3","TurbineType":"V112"},
{"__name__":"MatchSeries","db":"TenMinute","Park":"4","TurbineType":"V112"}
]
}
}

app/vmbackup/Makefile

@@ -0,0 +1,37 @@
# All these commands must run from repository root.
vmbackup:
APP_NAME=vmbackup $(MAKE) app-local
vmbackup-prod:
APP_NAME=vmbackup $(MAKE) app-via-docker
package-vmbackup:
APP_NAME=vmbackup $(MAKE) package-via-docker
publish-vmbackup:
APP_NAME=vmbackup $(MAKE) publish-via-docker
vmbackup-arm:
CGO_ENABLED=0 GOOS=linux GOARCH=arm GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmbackup-arm ./app/vmbackup
vmbackup-arm-prod:
APP_NAME=vmbackup APP_SUFFIX='-arm' DOCKER_OPTS='--env CGO_ENABLED=0 --env GOARCH=arm' $(MAKE) app-via-docker
vmbackup-arm64:
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmbackup-arm64 ./app/vmbackup
vmbackup-arm64-prod:
APP_NAME=vmbackup APP_SUFFIX='-arm64' DOCKER_OPTS='--env CGO_ENABLED=0 --env GOARCH=arm64' $(MAKE) app-via-docker
vmbackup-386:
CGO_ENABLED=0 GOOS=linux GOARCH=386 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmbackup-386 ./app/vmbackup
vmbackup-386-prod:
APP_NAME=vmbackup APP_SUFFIX='-386' DOCKER_OPTS='--env CGO_ENABLED=0 --env GOARCH=386' $(MAKE) app-via-docker
vmbackup-pure:
APP_NAME=vmbackup $(MAKE) app-local-pure
vmbackup-pure-prod:
APP_NAME=vmbackup APP_SUFFIX='-pure' DOCKER_OPTS='--env CGO_ENABLED=0' $(MAKE) app-via-docker

app/vmbackup/README.md

@@ -0,0 +1,181 @@
## vmbackup
`vmbackup` creates VictoriaMetrics data backups from [instant snapshots](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots).
Supported storage systems for backups:
* [GCS](https://cloud.google.com/storage/). Example: `gcs://<bucket>/<path/to/backup>`
* [S3](https://aws.amazon.com/s3/). Example: `s3://<bucket>/<path/to/backup>`
* Any S3-compatible storage such as [MinIO](https://github.com/minio/minio). See `-customS3Endpoint` command-line flag.
* Local filesystem. Example: `fs://</absolute/path/to/backup>`
Incremental backups and full backups are supported. Incremental backups are created automatically if the destination path already contains data from the previous backup.
Full backups can be sped up with `-origin` pointing to an already existing backup on the same remote storage. In this case `vmbackup` makes a server-side copy of the
data shared between the existing backup and the new backup. This saves time and costs on data transfer.
The backup process can be interrupted at any time. It is automatically resumed from the interruption point when `vmbackup` is restarted with the same args.
Backed up data can be restored with [vmrestore](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmrestore/README.md).
See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-series-databases-533c1a927883) for more details.
### Use cases
#### Regular backups
A regular backup can be performed with the following command:
```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gcs://<bucket>/<path/to/new/backup>
```
* `</path/to/victoria-metrics-data>` - path to VictoriaMetrics data pointed to by the `-storageDataPath` command-line flag in single-node VictoriaMetrics or in cluster `vmstorage`.
There is no need to stop VictoriaMetrics for creating backups, since they are performed from immutable [instant snapshots](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots).
* `<local-snapshot>` is the snapshot to back up. See [how to create instant snapshots](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots).
* `<bucket>` is an already existing name for a [GCS bucket](https://cloud.google.com/storage/docs/creating-buckets).
* `<path/to/new/backup>` is the destination path where the new backup will be placed.
#### Regular backups with server-side copy from existing backup
If the destination GCS bucket already contains the previous backup at the `-origin` path, then the new backup can be sped up
with the following command:
```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gcs://<bucket>/<path/to/new/backup> -origin=gcs://<bucket>/<path/to/existing/backup>
```
This saves time and network bandwidth costs by performing server-side copy for the shared data from the `-origin` to `-dst`.
#### Incremental backups
Incremental backups are performed if `-dst` points to an already existing backup. In this case only new data is uploaded to the remote storage.
This saves time and network bandwidth costs when working with big backups:
```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gcs://<bucket>/<path/to/existing/backup>
```
#### Smart backups
Smart backups mean storing full daily backups into `YYYYMMDD` folders and creating an incremental hourly backup in the `latest` folder:
* Run the following command every hour:
```
vmbackup -snapshotName=<latest-snapshot> -dst=gcs://<bucket>/latest
```
Where `<latest-snapshot>` is the latest [snapshot](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots).
The command will upload only changed data to `gcs://<bucket>/latest`.
* Run the following command once a day:
```
vmbackup -snapshotName=<daily-snapshot> -dst=gcs://<bucket>/<YYYYMMDD> -origin=gcs://<bucket>/latest
```
Where `<daily-snapshot>` is the snapshot for the last day `<YYYYMMDD>`.
This approach saves network bandwidth costs on hourly backups (since they are incremental) and allows recovering data from either the last hour (the `latest` backup)
or from any day (the `YYYYMMDD` backups). Note that the hourly backup shouldn't run while the daily backup is being created.
Do not forget to remove old snapshots and backups when they are no longer needed, in order to save storage costs.
### How does it work?
The backup algorithm is the following:
1. Collect information about files in the `-snapshotName`, in the `-dst` and in the `-origin`.
2. Determine files in `-dst`, which are missing in `-snapshotName`, and delete them. These are usually small files, which are already merged into bigger files in the snapshot.
3. Determine files from `-snapshotName`, which are missing in `-dst`. These are usually small new files and bigger merged files.
4. Determine files from step 3, which exist in the `-origin`, and perform a server-side copy of these files from `-origin` to `-dst`.
These are usually the biggest and oldest files, which are shared between backups.
5. Upload the remaining files from step 3 from `-snapshotName` to `-dst`.
The algorithm splits source files into 100MB chunks in the backup. Each chunk is stored as a separate file in the backup.
Such splitting minimizes the amounts of data to re-transfer after temporary errors.
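A minimal Go sketch of steps 2-5 above, under the assumption that file listings are represented as plain string sets; `planBackup` and the sample paths are hypothetical, and the real implementation in the repository differs:
```
package main

import "fmt"

// planBackup implements steps 2-5 of the algorithm above as pure set logic.
// Each map is a set of relative file paths collected in step 1.
func planBackup(snapshot, dst, origin map[string]bool) (toDelete, toCopy, toUpload []string) {
	for path := range dst {
		if !snapshot[path] {
			toDelete = append(toDelete, path) // step 2: stale files in -dst
		}
	}
	for path := range snapshot {
		if dst[path] {
			continue // already present in the backup
		}
		if origin[path] {
			toCopy = append(toCopy, path) // step 4: server-side copy from -origin
		} else {
			toUpload = append(toUpload, path) // step 5: upload from the snapshot
		}
	}
	return toDelete, toCopy, toUpload
}

func main() {
	snapshot := map[string]bool{"big_part": true, "new_part": true}
	dst := map[string]bool{"big_part": true, "old_small_part": true}
	origin := map[string]bool{"big_part": true}
	toDelete, toCopy, toUpload := planBackup(snapshot, dst, origin)
	fmt.Println(toDelete, toCopy, toUpload) // [old_small_part] [] [new_part]
}
```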
`vmbackup` relies on [instant snapshot](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282) properties:
- All the files in the snapshot are immutable.
- Old files are periodically merged into new files.
- Smaller files have a higher probability of being merged.
- Consecutive snapshots share many identical files.
These properties allow performing fast and cheap incremental backups and server-side copying from `-origin` paths.
See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-series-databases-533c1a927883) for more details.
`vmbackup` can work improperly or slowly when these properties are violated.
### Troubleshooting
* If the backup is slow, then try setting a higher value for the `-concurrency` flag. This will increase the number of concurrent workers that upload data to the backup storage.
* If `vmbackup` eats all the network bandwidth, then set `-maxBytesPerSecond` to the desired value.
* If `vmbackup` has been interrupted due to a temporary error, then just restart it with the same args. It will resume the backup process.
### Advanced usage
Run `vmbackup -help` in order to see all the available options:
```
-concurrency int
The number of concurrent workers. Higher concurrency may reduce backup duration (default 10)
-configFilePath string
Path to file with S3 configs. Configs are loaded from default location if not set.
See https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
-configProfile string
Profile name for S3 configs (default "default")
-credsFilePath string
Path to file with GCS or S3 credentials. Credentials are loaded from default locations if not set.
See https://cloud.google.com/iam/docs/creating-managing-service-account-keys and https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
-customS3Endpoint string
Custom S3 endpoint for use with S3-compatible storages (e.g. MinIO). S3 is used if not set
-dst string
Where to put the backup on the remote storage. Example: gcs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir
-dst can point to the previous backup. In this case incremental backup is performed, i.e. only changed data is uploaded
-loggerLevel string
Minimum level of errors to log. Possible values: INFO, ERROR, FATAL, PANIC (default "INFO")
-maxBytesPerSecond int
The maximum upload speed. There is no limit if it is set to 0
-memory.allowedPercent float
Allowed percent of system memory VictoriaMetrics caches may occupy (default 60)
-origin string
Optional origin directory on the remote storage with old backup for server-side copying when performing full backup. This speeds up full backups
-snapshotName string
Name for the snapshot to backup. See https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots
-storageDataPath string
Path to VictoriaMetrics data. Must match -storageDataPath from VictoriaMetrics or vmstorage (default "victoria-metrics-data")
-version
Show VictoriaMetrics version
```
### How to build from sources
It is recommended to use [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) - see the `vmutils-*` archives there.
#### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.12.
2. Run `make vmbackup` from the root folder of the repository.
It builds `vmbackup` binary and puts it into the `bin` folder.
#### Production build
1. [Install docker](https://docs.docker.com/install/).
2. Run `make vmbackup-prod` from the root folder of the repository.
It builds `vmbackup-prod` binary and puts it into the `bin` folder.
#### Building docker images
Run `make package-vmbackup`. It builds the `victoriametrics/vmbackup:<PKG_TAG>` docker image locally.
`<PKG_TAG>` is an auto-generated image tag, which depends on source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmbackup`.


@@ -0,0 +1,5 @@
FROM scratch
COPY --from=local/certs:1.0.3 /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY bin/vmbackup-prod .
EXPOSE 8428
ENTRYPOINT ["/vmbackup-prod"]

app/vmbackup/main.go

@@ -0,0 +1,114 @@
package main
import (
"flag"
"fmt"
"os"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/backup/actions"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/backup/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/backup/fslocal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)
var (
storageDataPath = flag.String("storageDataPath", "victoria-metrics-data", "Path to VictoriaMetrics data. Must match -storageDataPath from VictoriaMetrics or vmstorage")
snapshotName = flag.String("snapshotName", "", "Name for the snapshot to backup. See https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots")
dst = flag.String("dst", "", "Where to put the backup on the remote storage. "+
"Example: gcs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir\n"+
"-dst can point to the previous backup. In this case incremental backup is performed, i.e. only changed data is uploaded")
origin = flag.String("origin", "", "Optional origin directory on the remote storage with old backup for server-side copying when performing full backup. This speeds up full backups")
concurrency = flag.Int("concurrency", 10, "The number of concurrent workers. Higher concurrency may reduce backup duration")
maxBytesPerSecond = flag.Int("maxBytesPerSecond", 0, "The maximum upload speed. There is no limit if it is set to 0")
)
func main() {
flag.Usage = usage
flag.Parse()
buildinfo.Init()
srcFS, err := newSrcFS()
if err != nil {
logger.Fatalf("%s", err)
}
dstFS, err := newDstFS()
if err != nil {
logger.Fatalf("%s", err)
}
originFS, err := newOriginFS()
if err != nil {
logger.Fatalf("%s", err)
}
a := &actions.Backup{
Concurrency: *concurrency,
Src: srcFS,
Dst: dstFS,
Origin: originFS,
}
if err := a.Run(); err != nil {
logger.Fatalf("cannot create backup: %s", err)
}
}
func usage() {
const s = `
vmbackup performs backups for VictoriaMetrics data from instant snapshots to gcs, s3
or local filesystem. Backed up data can be restored with vmrestore.
See the docs at https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmbackup/README.md .
`
f := flag.CommandLine.Output()
fmt.Fprintf(f, "%s\n", s)
flag.PrintDefaults()
}
func newSrcFS() (*fslocal.FS, error) {
if len(*snapshotName) == 0 {
return nil, fmt.Errorf("`-snapshotName` cannot be empty")
}
snapshotPath := *storageDataPath + "/snapshots/" + *snapshotName
// Verify the snapshot exists.
f, err := os.Open(snapshotPath)
if err != nil {
return nil, fmt.Errorf("cannot open snapshot at %q: %s", snapshotPath, err)
}
fi, err := f.Stat()
_ = f.Close()
if err != nil {
return nil, fmt.Errorf("cannot stat %q: %s", snapshotPath, err)
}
if !fi.IsDir() {
return nil, fmt.Errorf("snapshot %q must be a directory", snapshotPath)
}
fs := &fslocal.FS{
Dir: snapshotPath,
MaxBytesPerSecond: *maxBytesPerSecond,
}
if err := fs.Init(); err != nil {
return nil, fmt.Errorf("cannot initialize fs: %s", err)
}
return fs, nil
}
func newDstFS() (common.RemoteFS, error) {
fs, err := actions.NewRemoteFS(*dst)
if err != nil {
return nil, fmt.Errorf("cannot parse `-dst`=%q: %s", *dst, err)
}
return fs, nil
}
func newOriginFS() (common.RemoteFS, error) {
if len(*origin) == 0 {
return nil, nil
}
fs, err := actions.NewRemoteFS(*origin)
if err != nil {
return nil, fmt.Errorf("cannot parse `-origin`=%q: %s", *origin, err)
}
return fs, nil
}
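
Taken together, the flags above imply an invocation along these lines (the snapshot name and bucket path are placeholders):

vmbackup-prod -storageDataPath=victoria-metrics-data -snapshotName=<snapshot> -dst=gcs://bucket/path/to/backup/dir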

app/vminsert/README.md Normal file

@@ -0,0 +1 @@
`vminsert` routes the ingested data to `vmstorage`.


@@ -0,0 +1,30 @@
package common
import (
"compress/gzip"
"io"
"sync"
)
// GetGzipReader returns a gzip reader from the pool.
//
// Return the reader to the pool via PutGzipReader when it is no longer needed.
func GetGzipReader(r io.Reader) (*gzip.Reader, error) {
v := gzipReaderPool.Get()
if v == nil {
return gzip.NewReader(r)
}
zr := v.(*gzip.Reader)
if err := zr.Reset(r); err != nil {
return nil, err
}
return zr, nil
}
// PutGzipReader returns the gzip reader obtained via GetGzipReader back to the pool.
func PutGzipReader(zr *gzip.Reader) {
_ = zr.Close()
gzipReaderPool.Put(zr)
}
var gzipReaderPool sync.Pool
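
A typical consumer of this pool decompresses a request body and returns the reader once done. A minimal sketch, assuming a hypothetical readBody helper that mirrors the Influx handler later in this diff:

package example

import (
	"io"
	"io/ioutil"
	"net/http"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
)

// readBody reads a possibly gzipped request body via the pooled gzip reader.
func readBody(req *http.Request) ([]byte, error) {
	var r io.Reader = req.Body
	if req.Header.Get("Content-Encoding") == "gzip" {
		zr, err := common.GetGzipReader(r)
		if err != nil {
			return nil, err
		}
		// Return the reader to the pool after the body is consumed.
		defer common.PutGzipReader(zr)
		r = zr
	}
	return ioutil.ReadAll(r)
}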


@@ -0,0 +1,110 @@
package common
import (
"fmt"
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
// InsertCtx contains common bits for data points insertion.
type InsertCtx struct {
Labels []prompb.Label
mrs []storage.MetricRow
metricNamesBuf []byte
}
// Reset resets ctx so it can be refilled with up to rowsLen rows.
func (ctx *InsertCtx) Reset(rowsLen int) {
for _, label := range ctx.Labels {
label.Name = nil
label.Value = nil
}
ctx.Labels = ctx.Labels[:0]
for i := range ctx.mrs {
mr := &ctx.mrs[i]
mr.MetricNameRaw = nil
}
ctx.mrs = ctx.mrs[:0]
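// Pre-allocate up to rowsLen rows while keeping zero length, so subsequent
// addRow calls do not allocate.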
if n := rowsLen - cap(ctx.mrs); n > 0 {
ctx.mrs = append(ctx.mrs[:cap(ctx.mrs)], make([]storage.MetricRow, n)...)
}
ctx.mrs = ctx.mrs[:0]
ctx.metricNamesBuf = ctx.metricNamesBuf[:0]
}
func (ctx *InsertCtx) marshalMetricNameRaw(prefix []byte, labels []prompb.Label) []byte {
start := len(ctx.metricNamesBuf)
ctx.metricNamesBuf = append(ctx.metricNamesBuf, prefix...)
ctx.metricNamesBuf = storage.MarshalMetricNameRaw(ctx.metricNamesBuf, labels)
metricNameRaw := ctx.metricNamesBuf[start:]
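// The full slice expression below caps the result's capacity, so appending
// to the returned slice cannot clobber data written to metricNamesBuf later.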
return metricNameRaw[:len(metricNameRaw):len(metricNameRaw)]
}
// WriteDataPoint writes (timestamp, value) with the given prefix and labels into ctx buffer.
func (ctx *InsertCtx) WriteDataPoint(prefix []byte, labels []prompb.Label, timestamp int64, value float64) {
metricNameRaw := ctx.marshalMetricNameRaw(prefix, labels)
ctx.addRow(metricNameRaw, timestamp, value)
}
// WriteDataPointExt writes (timestamp, value) with the given metricNameRaw and labels into ctx buffer.
//
// It returns metricNameRaw for the given labels if len(metricNameRaw) == 0.
func (ctx *InsertCtx) WriteDataPointExt(metricNameRaw []byte, labels []prompb.Label, timestamp int64, value float64) []byte {
if len(metricNameRaw) == 0 {
metricNameRaw = ctx.marshalMetricNameRaw(nil, labels)
}
ctx.addRow(metricNameRaw, timestamp, value)
return metricNameRaw
}
func (ctx *InsertCtx) addRow(metricNameRaw []byte, timestamp int64, value float64) {
mrs := ctx.mrs
if cap(mrs) > len(mrs) {
mrs = mrs[:len(mrs)+1]
} else {
mrs = append(mrs, storage.MetricRow{})
}
mr := &mrs[len(mrs)-1]
ctx.mrs = mrs
mr.MetricNameRaw = metricNameRaw
mr.Timestamp = timestamp
mr.Value = value
}
// AddLabel adds (name, value) label to ctx.Labels.
//
// name and value must exist until ctx.Labels is used.
func (ctx *InsertCtx) AddLabel(name, value string) {
labels := ctx.Labels
if cap(labels) > len(labels) {
labels = labels[:len(labels)+1]
} else {
labels = append(labels, prompb.Label{})
}
label := &labels[len(labels)-1]
// Do not copy name and value contents for performance reasons.
// This reduces GC overhead on the number of objects and allocations.
label.Name = bytesutil.ToUnsafeBytes(name)
label.Value = bytesutil.ToUnsafeBytes(value)
ctx.Labels = labels
}
// FlushBufs flushes buffered rows to the underlying storage.
func (ctx *InsertCtx) FlushBufs() error {
if err := vmstorage.AddRows(ctx.mrs); err != nil {
return &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("cannot store metrics: %s", err),
StatusCode: http.StatusServiceUnavailable,
}
}
return nil
}
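
For illustration, a single sample can be pushed through InsertCtx as sketched below, following the conventions of the protocol handlers in this diff: the label with the empty name carries the metric name, and timestamps are in milliseconds. The metric and label values are placeholders.

package example

import (
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
)

// insertOne buffers one (timestamp, value) sample and flushes it to storage.
func insertOne() error {
	var ctx common.InsertCtx
	ctx.Reset(1)
	ctx.AddLabel("", "cpu_usage_user") // the empty name denotes the metric name
	ctx.AddLabel("host", "host42")
	ctx.WriteDataPoint(nil, ctx.Labels, time.Now().UnixNano()/1e6, 12.5)
	return ctx.FlushBufs()
}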


@@ -0,0 +1,68 @@
package common
import (
"bytes"
"fmt"
"io"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
)
// The maximum size of a single line returned by ReadLinesBlock.
const maxLineSize = 256 * 1024
// Default size in bytes of a single block returned by ReadLinesBlock.
const defaultBlockSize = 64 * 1024
// ReadLinesBlock reads a block of lines delimited by '\n' from tailBuf and r into dstBuf.
//
// Trailing chars after the last newline are put into tailBuf.
//
// Returns (dstBuf, tailBuf).
func ReadLinesBlock(r io.Reader, dstBuf, tailBuf []byte) ([]byte, []byte, error) {
if cap(dstBuf) < defaultBlockSize {
dstBuf = bytesutil.Resize(dstBuf, defaultBlockSize)
}
dstBuf = append(dstBuf[:0], tailBuf...)
tailBuf = tailBuf[:0]
again:
n, err := r.Read(dstBuf[len(dstBuf):cap(dstBuf)])
// Check for error only if zero bytes read from r, i.e. no forward progress made.
// Otherwise process the read data.
if n == 0 {
if err == nil {
return dstBuf, tailBuf, fmt.Errorf("no forward progress made")
}
if err == io.EOF && len(dstBuf) > 0 {
// Missing newline in the end of stream. This is OK,
// so suppress io.EOF for now. It will be returned during the next
// call to ReadLinesBlock.
// This fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/60 .
return dstBuf, tailBuf, nil
}
return dstBuf, tailBuf, err
}
dstBuf = dstBuf[:len(dstBuf)+n]
// Search for the last newline in dstBuf and put the rest into tailBuf.
nn := bytes.LastIndexByte(dstBuf[len(dstBuf)-n:], '\n')
if nn < 0 {
// Didn't find a single complete line.
if len(dstBuf) > maxLineSize {
return dstBuf, tailBuf, fmt.Errorf("too long line: more than %d bytes", maxLineSize)
}
if cap(dstBuf) < 2*len(dstBuf) {
// Increase dstBuf capacity, so more data could be read into it.
dstBufLen := len(dstBuf)
dstBuf = bytesutil.Resize(dstBuf, 2*cap(dstBuf))
dstBuf = dstBuf[:dstBufLen]
}
goto again
}
// Found at least a single line. Return it.
nn += len(dstBuf) - n
tailBuf = append(tailBuf[:0], dstBuf[nn+1:]...)
dstBuf = dstBuf[:nn]
return dstBuf, tailBuf, nil
}
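
The intended read loop is sketched below with a hypothetical process callback: keep calling ReadLinesBlock until it returns io.EOF, treating each returned dstBuf as a batch of complete lines.

package example

import (
	"io"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
)

// consumeLines streams newline-delimited input through ReadLinesBlock.
func consumeLines(r io.Reader, process func(block []byte)) error {
	var dstBuf, tailBuf []byte
	var err error
	for {
		dstBuf, tailBuf, err = common.ReadLinesBlock(r, dstBuf, tailBuf)
		if err != nil {
			if err == io.EOF {
				return nil // the whole stream has been consumed
			}
			return err
		}
		// dstBuf holds complete lines without the trailing '\n';
		// any incomplete tail is carried into the next call via tailBuf.
		process(dstBuf)
	}
}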


@@ -0,0 +1,213 @@
package common
import (
"bytes"
"fmt"
"io"
"reflect"
"testing"
)
func TestReadLinesBlockFailure(t *testing.T) {
f := func(s string) {
t.Helper()
r := bytes.NewBufferString(s)
if _, _, err := ReadLinesBlock(r, nil, nil); err == nil {
t.Fatalf("expecting non-nil error")
}
sbr := &singleByteReader{
b: []byte(s),
}
if _, _, err := ReadLinesBlock(sbr, nil, nil); err == nil {
t.Fatalf("expecting non-nil error")
}
fr := &failureReader{}
if _, _, err := ReadLinesBlock(fr, nil, nil); err == nil {
t.Fatalf("expecting non-nil error")
}
}
// empty string
f("")
// too long string
b := make([]byte, maxLineSize+1)
f(string(b))
}
type failureReader struct{}
func (fr *failureReader) Read(p []byte) (int, error) {
return 0, fmt.Errorf("some error")
}
func TestReadLinesBlockMultiLinesSingleByteReader(t *testing.T) {
f := func(s string, linesExpected []string) {
t.Helper()
r := &singleByteReader{
b: []byte(s),
}
var err error
var dstBuf, tailBuf []byte
var lines []string
for {
dstBuf, tailBuf, err = ReadLinesBlock(r, dstBuf, tailBuf)
if err != nil {
if err == io.EOF {
break
}
t.Fatalf("unexpected error in ReadLinesBlock(%q): %s", s, err)
}
lines = append(lines, string(dstBuf))
}
if !reflect.DeepEqual(lines, linesExpected) {
t.Fatalf("unexpected lines after reading %q: got %q; want %q", s, lines, linesExpected)
}
}
f("", nil)
f("foo", []string{"foo"})
f("foo\n", []string{"foo"})
f("foo\nbar", []string{"foo", "bar"})
f("\nfoo\nbar", []string{"", "foo", "bar"})
f("\nfoo\nbar\n", []string{"", "foo", "bar"})
f("\nfoo\nbar\n\n", []string{"", "foo", "bar", ""})
}
func TestReadLinesBlockMultiLinesBytesBuffer(t *testing.T) {
f := func(s string, linesExpected []string) {
t.Helper()
r := bytes.NewBufferString(s)
var err error
var dstBuf, tailBuf []byte
var lines []string
for {
dstBuf, tailBuf, err = ReadLinesBlock(r, dstBuf, tailBuf)
if err != nil {
if err == io.EOF {
break
}
t.Fatalf("unexpected error in ReadLinesBlock(%q): %s", s, err)
}
lines = append(lines, string(dstBuf))
}
if !reflect.DeepEqual(lines, linesExpected) {
t.Fatalf("unexpected lines after reading %q: got %q; want %q", s, lines, linesExpected)
}
}
f("", nil)
f("foo", []string{"foo"})
f("foo\n", []string{"foo"})
f("foo\nbar", []string{"foo", "bar"})
f("\nfoo\nbar", []string{"\nfoo", "bar"})
f("\nfoo\nbar\n", []string{"\nfoo\nbar"})
f("\nfoo\nbar\n\n", []string{"\nfoo\nbar\n"})
}
func TestReadLinesBlockSuccessSingleByteReader(t *testing.T) {
f := func(s, dstBufExpected, tailBufExpected string) {
t.Helper()
r := &singleByteReader{
b: []byte(s),
}
dstBuf, tailBuf, err := ReadLinesBlock(r, nil, nil)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if string(dstBuf) != dstBufExpected {
t.Fatalf("unexpected dstBuf; got %q; want %q; tailBuf=%q", dstBuf, dstBufExpected, tailBuf)
}
if string(tailBuf) != tailBufExpected {
t.Fatalf("unexpected tailBuf; got %q; want %q; dstBuf=%q", tailBuf, tailBufExpected, dstBuf)
}
// Verify the same with non-empty dstBuf and tailBuf
r = &singleByteReader{
b: []byte(s),
}
dstBuf, tailBuf, err = ReadLinesBlock(r, dstBuf, tailBuf[:0])
if err != nil {
t.Fatalf("non-empty bufs: unexpected error: %s", err)
}
if string(dstBuf) != dstBufExpected {
t.Fatalf("non-empty bufs: unexpected dstBuf; got %q; want %q; tailBuf=%q", dstBuf, dstBufExpected, tailBuf)
}
if string(tailBuf) != tailBufExpected {
t.Fatalf("non-empty bufs: unexpected tailBuf; got %q; want %q; dstBuf=%q", tailBuf, tailBufExpected, dstBuf)
}
}
f("\n", "", "")
f("foo\n", "foo", "")
f("\nfoo", "", "")
f("foo\nbar", "foo", "")
f("foo\nbar\nbaz", "foo", "")
f("foo", "foo", "")
// The maximum line size
b := make([]byte, maxLineSize+10)
b[maxLineSize] = '\n'
f(string(b), string(b[:maxLineSize]), "")
}
func TestReadLinesBlockSuccessBytesBuffer(t *testing.T) {
f := func(s, dstBufExpected, tailBufExpected string) {
t.Helper()
r := bytes.NewBufferString(s)
dstBuf, tailBuf, err := ReadLinesBlock(r, nil, nil)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if string(dstBuf) != dstBufExpected {
t.Fatalf("unexpected dstBuf; got %q; want %q; tailBuf=%q", dstBuf, dstBufExpected, tailBuf)
}
if string(tailBuf) != tailBufExpected {
t.Fatalf("unexpected tailBuf; got %q; want %q; dstBuf=%q", tailBuf, tailBufExpected, dstBuf)
}
// Verify the same with non-empty dstBuf and tailBuf
r = bytes.NewBufferString(s)
dstBuf, tailBuf, err = ReadLinesBlock(r, dstBuf, tailBuf[:0])
if err != nil {
t.Fatalf("non-empty bufs: unexpected error: %s", err)
}
if string(dstBuf) != dstBufExpected {
t.Fatalf("non-empty bufs: unexpected dstBuf; got %q; want %q; tailBuf=%q", dstBuf, dstBufExpected, tailBuf)
}
if string(tailBuf) != tailBufExpected {
t.Fatalf("non-empty bufs: unexpected tailBuf; got %q; want %q; dstBuf=%q", tailBuf, tailBufExpected, dstBuf)
}
}
f("\n", "", "")
f("foo\n", "foo", "")
f("\nfoo", "", "foo")
f("foo\nbar", "foo", "bar")
f("foo\nbar\nbaz", "foo\nbar", "baz")
// The maximum line size
b := make([]byte, maxLineSize+10)
b[maxLineSize] = '\n'
f(string(b), string(b[:maxLineSize]), string(b[maxLineSize+1:]))
}
type singleByteReader struct {
b []byte
}
func (sbr *singleByteReader) Read(p []byte) (int, error) {
if len(sbr.b) == 0 {
return 0, io.EOF
}
n := copy(p, sbr.b[:1])
sbr.b = sbr.b[n:]
if len(sbr.b) == 0 {
return n, io.EOF
}
return n, nil
}


@@ -0,0 +1,75 @@
package concurrencylimiter
import (
"flag"
"fmt"
"net/http"
"runtime"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
"github.com/VictoriaMetrics/metrics"
)
var maxConcurrentInserts = flag.Int("maxConcurrentInserts", runtime.GOMAXPROCS(-1)*4, "The maximum number of concurrent inserts")
var (
// ch is the channel for limiting concurrent calls to Do.
ch chan struct{}
// waitDuration is the amount of time to wait until at least a single
// concurrent Do call out of cap(ch) inserts is complete.
waitDuration = time.Second * 30
)
// Init initializes concurrencylimiter.
//
// Init must be called after flag.Parse.
func Init() {
ch = make(chan struct{}, *maxConcurrentInserts)
}
// Do calls f with the limited concurrency.
func Do(f func() error) error {
// Limit the number of concurrent f calls in order to prevent excess
// memory usage and CPU thrashing.
select {
case ch <- struct{}{}:
err := f()
<-ch
return err
default:
}
// All the workers are busy.
// Sleep for up to waitDuration.
concurrencyLimitReached.Inc()
t := timerpool.Get(waitDuration)
select {
case ch <- struct{}{}:
timerpool.Put(t)
err := f()
<-ch
return err
case <-t.C:
timerpool.Put(t)
concurrencyLimitTimeout.Inc()
return &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("the server is overloaded with %d concurrent inserts; either increase -maxConcurrentInserts or reduce the load", cap(ch)),
StatusCode: http.StatusServiceUnavailable,
}
}
}
var (
concurrencyLimitReached = metrics.NewCounter(`vm_concurrent_insert_limit_reached_total`)
concurrencyLimitTimeout = metrics.NewCounter(`vm_concurrent_insert_limit_timeout_total`)
_ = metrics.NewGauge(`vm_concurrent_insert_capacity`, func() float64 {
return float64(cap(ch))
})
_ = metrics.NewGauge(`vm_concurrent_insert_current`, func() float64 {
return float64(len(ch))
})
)
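
Handlers wrap their bodies in Do, as the Graphite and Influx handlers below do. A minimal sketch with a hypothetical doInsert callback:

package example

import (
	"io"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/concurrencylimiter"
)

// limitedInsert runs doInsert under the concurrency limiter. If all workers
// stay busy for waitDuration, Do returns a 503 error instead of queueing forever.
func limitedInsert(r io.Reader, doInsert func(io.Reader) error) error {
	return concurrencylimiter.Do(func() error {
		return doInsert(r)
	})
}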


@@ -0,0 +1,190 @@
package graphite
import (
"fmt"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson/fastfloat"
)
// Rows contains parsed graphite rows.
type Rows struct {
Rows []Row
tagsPool []Tag
}
// Reset resets rs.
func (rs *Rows) Reset() {
// Reset items, so they can be GC'ed
for i := range rs.Rows {
rs.Rows[i].reset()
}
rs.Rows = rs.Rows[:0]
for i := range rs.tagsPool {
rs.tagsPool[i].reset()
}
rs.tagsPool = rs.tagsPool[:0]
}
// Unmarshal unmarshals graphite plaintext protocol rows from s.
//
// See https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol
//
// s must remain unchanged while rs is in use.
func (rs *Rows) Unmarshal(s string) {
rs.Rows, rs.tagsPool = unmarshalRows(rs.Rows[:0], s, rs.tagsPool[:0])
}
// Row is a single graphite row.
type Row struct {
Metric string
Tags []Tag
Value float64
Timestamp int64
}
func (r *Row) reset() {
r.Metric = ""
r.Tags = nil
r.Value = 0
r.Timestamp = 0
}
func (r *Row) unmarshal(s string, tagsPool []Tag) ([]Tag, error) {
r.reset()
n := strings.IndexByte(s, ' ')
if n < 0 {
return tagsPool, fmt.Errorf("cannot find whitespace between metric and value in %q", s)
}
metricAndTags := s[:n]
tail := s[n+1:]
n = strings.IndexByte(metricAndTags, ';')
if n < 0 {
// No tags
r.Metric = metricAndTags
} else {
// Tags found
r.Metric = metricAndTags[:n]
tagsStart := len(tagsPool)
var err error
tagsPool, err = unmarshalTags(tagsPool, metricAndTags[n+1:])
if err != nil {
return tagsPool, fmt.Errorf("cannot umarshal tags: %s", err)
}
tags := tagsPool[tagsStart:]
r.Tags = tags[:len(tags):len(tags)]
}
if len(r.Metric) == 0 {
return tagsPool, fmt.Errorf("metric cannot be empty")
}
n = strings.IndexByte(tail, ' ')
if n < 0 {
// There is no timestamp. Use default timestamp instead.
r.Value = fastfloat.ParseBestEffort(tail)
return tagsPool, nil
}
r.Value = fastfloat.ParseBestEffort(tail[:n])
r.Timestamp = fastfloat.ParseInt64BestEffort(tail[n+1:])
return tagsPool, nil
}
func unmarshalRows(dst []Row, s string, tagsPool []Tag) ([]Row, []Tag) {
for len(s) > 0 {
n := strings.IndexByte(s, '\n')
if n < 0 {
// The last line.
return unmarshalRow(dst, s, tagsPool)
}
dst, tagsPool = unmarshalRow(dst, s[:n], tagsPool)
s = s[n+1:]
}
return dst, tagsPool
}
func unmarshalRow(dst []Row, s string, tagsPool []Tag) ([]Row, []Tag) {
if len(s) > 0 && s[len(s)-1] == '\r' {
s = s[:len(s)-1]
}
if len(s) == 0 {
// Skip empty line
return dst, tagsPool
}
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, Row{})
}
r := &dst[len(dst)-1]
var err error
tagsPool, err = r.unmarshal(s, tagsPool)
if err != nil {
dst = dst[:len(dst)-1]
logger.Errorf("cannot unmarshal Graphite line %q: %s", s, err)
invalidLines.Inc()
}
return dst, tagsPool
}
var invalidLines = metrics.NewCounter(`vm_rows_invalid_total{type="graphite"}`)
func unmarshalTags(dst []Tag, s string) ([]Tag, error) {
for {
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, Tag{})
}
tag := &dst[len(dst)-1]
n := strings.IndexByte(s, ';')
if n < 0 {
// The last tag found
if err := tag.unmarshal(s); err != nil {
return dst[:len(dst)-1], err
}
if len(tag.Key) == 0 || len(tag.Value) == 0 {
// Skip empty tag
dst = dst[:len(dst)-1]
}
return dst, nil
}
if err := tag.unmarshal(s[:n]); err != nil {
return dst[:len(dst)-1], err
}
s = s[n+1:]
if len(tag.Key) == 0 || len(tag.Value) == 0 {
// Skip empty tag
dst = dst[:len(dst)-1]
}
}
}
// Tag is a graphite tag.
type Tag struct {
Key string
Value string
}
func (t *Tag) reset() {
t.Key = ""
t.Value = ""
}
func (t *Tag) unmarshal(s string) error {
t.reset()
n := strings.IndexByte(s, '=')
if n < 0 {
return fmt.Errorf("missing tag value for %q", s)
}
t.Key = s[:n]
t.Value = s[n+1:]
return nil
}
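
A short sketch of the parser in use; the sample lines are illustrative and show both the tagged and the untagged plaintext forms:

package example

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/graphite"
)

func parseGraphite() {
	var rows graphite.Rows
	// Format: metric[;tag1=v1;tag2=v2] value [timestamp]
	rows.Unmarshal("cpu.usage_user;host=host42 1.23 1575383830\ncpu.usage_system 0.42 1575383830")
	for _, r := range rows.Rows {
		fmt.Println(r.Metric, r.Tags, r.Value, r.Timestamp)
	}
}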


@@ -0,0 +1,163 @@
package graphite
import (
"reflect"
"testing"
)
func TestRowsUnmarshalFailure(t *testing.T) {
f := func(s string) {
t.Helper()
var rows Rows
rows.Unmarshal(s)
if len(rows.Rows) != 0 {
t.Fatalf("unexpected number of rows parsed; got %d; want 0", len(rows.Rows))
}
// Try again
rows.Unmarshal(s)
if len(rows.Rows) != 0 {
t.Fatalf("unexpected number of rows parsed; got %d; want 0", len(rows.Rows))
}
}
// Missing metric
f(" 123 455")
// Missing value
f("aaa")
// missing tag
f("aa; 12 34")
// missing tag value
f("aa;bb 23 34")
}
func TestRowsUnmarshalSuccess(t *testing.T) {
f := func(s string, rowsExpected *Rows) {
t.Helper()
var rows Rows
rows.Unmarshal(s)
if !reflect.DeepEqual(rows.Rows, rowsExpected.Rows) {
t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows.Rows, rowsExpected.Rows)
}
// Try unmarshaling again
rows.Unmarshal(s)
if !reflect.DeepEqual(rows.Rows, rowsExpected.Rows) {
t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows.Rows, rowsExpected.Rows)
}
rows.Reset()
if len(rows.Rows) != 0 {
t.Fatalf("non-empty rows after reset: %+v", rows.Rows)
}
}
// Empty line
f("", &Rows{})
f("\r", &Rows{})
f("\n\n", &Rows{})
f("\n\r\n", &Rows{})
// Single line
f("foobar -123.456 789", &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -123.456,
Timestamp: 789,
}},
})
f("foo.bar 123.456 789\n", &Rows{
Rows: []Row{{
Metric: "foo.bar",
Value: 123.456,
Timestamp: 789,
}},
})
// Missing timestamp
f("aaa 1123", &Rows{
Rows: []Row{{
Metric: "aaa",
Value: 1123,
}},
})
// Timestamp bigger than 1<<31
f("aaa 1123 429496729600", &Rows{
Rows: []Row{{
Metric: "aaa",
Value: 1123,
Timestamp: 429496729600,
}},
})
// Tags
f("foo;bar=baz 1 2", &Rows{
Rows: []Row{{
Metric: "foo",
Tags: []Tag{{
Key: "bar",
Value: "baz",
}},
Value: 1,
Timestamp: 2,
}},
})
// Empty tags
f("foo;bar=baz;aa=;x=y;=z 1 2", &Rows{
Rows: []Row{{
Metric: "foo",
Tags: []Tag{
{
Key: "bar",
Value: "baz",
},
{
Key: "x",
Value: "y",
},
},
Value: 1,
Timestamp: 2,
}},
})
// Multi lines
f("foo 0.3 2\naaa 3\nbar.baz 0.34 43\n", &Rows{
Rows: []Row{
{
Metric: "foo",
Value: 0.3,
Timestamp: 2,
},
{
Metric: "aaa",
Value: 3,
},
{
Metric: "bar.baz",
Value: 0.34,
Timestamp: 43,
},
},
})
// Multi lines with invalid line
f("foo 0.3 2\naaa\nbar.baz 0.34 43\n", &Rows{
Rows: []Row{
{
Metric: "foo",
Value: 0.3,
Timestamp: 2,
},
{
Metric: "bar.baz",
Value: 0.34,
Timestamp: 43,
},
},
})
}


@@ -0,0 +1,25 @@
package graphite
import (
"fmt"
"testing"
)
func BenchmarkRowsUnmarshal(b *testing.B) {
s := `cpu.usage_user 1.23 1234556768
cpu.usage_system 23.344 1234556768
cpu.usage_iowait 3.3443 1234556769
cpu.usage_irq 0.34432 1234556768
`
b.SetBytes(int64(len(s)))
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
var rows Rows
for pb.Next() {
rows.Unmarshal(s)
if len(rows.Rows) != 4 {
panic(fmt.Errorf("unexpected number of rows unmarshaled: got %d; want 4", len(rows.Rows)))
}
}
})
}


@@ -0,0 +1,161 @@
package graphite
import (
"fmt"
"io"
"net"
"runtime"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/concurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="graphite"}`)
rowsPerInsert = metrics.NewSummary(`vm_rows_per_insert{type="graphite"}`)
)
// insertHandler processes remote write for graphite plaintext protocol.
//
// See https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol
func insertHandler(r io.Reader) error {
return concurrencylimiter.Do(func() error {
return insertHandlerInternal(r)
})
}
func insertHandlerInternal(r io.Reader) error {
ctx := getPushCtx()
defer putPushCtx(ctx)
for ctx.Read(r) {
if err := ctx.InsertRows(); err != nil {
return err
}
}
return ctx.Error()
}
func (ctx *pushCtx) InsertRows() error {
rows := ctx.Rows.Rows
ic := &ctx.Common
ic.Reset(len(rows))
for i := range rows {
r := &rows[i]
ic.Labels = ic.Labels[:0]
ic.AddLabel("", r.Metric)
for j := range r.Tags {
tag := &r.Tags[j]
ic.AddLabel(tag.Key, tag.Value)
}
ic.WriteDataPoint(nil, ic.Labels, r.Timestamp, r.Value)
}
rowsInserted.Add(len(rows))
rowsPerInsert.Update(float64(len(rows)))
return ic.FlushBufs()
}
const flushTimeout = 3 * time.Second
func (ctx *pushCtx) Read(r io.Reader) bool {
graphiteReadCalls.Inc()
if ctx.err != nil {
return false
}
if c, ok := r.(net.Conn); ok {
if err := c.SetReadDeadline(time.Now().Add(flushTimeout)); err != nil {
graphiteReadErrors.Inc()
ctx.err = fmt.Errorf("cannot set read deadline: %s", err)
return false
}
}
ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(r, ctx.reqBuf, ctx.tailBuf)
if ctx.err != nil {
if ne, ok := ctx.err.(net.Error); ok && ne.Timeout() {
// Flush the read data on timeout and try reading again.
ctx.err = nil
} else {
if ctx.err != io.EOF {
graphiteReadErrors.Inc()
ctx.err = fmt.Errorf("cannot read graphite plaintext protocol data: %s", ctx.err)
}
return false
}
}
ctx.Rows.Unmarshal(bytesutil.ToUnsafeString(ctx.reqBuf))
// Fill missing timestamps with the current timestamp rounded to seconds.
currentTimestamp := time.Now().Unix()
rows := ctx.Rows.Rows
for i := range rows {
r := &rows[i]
if r.Timestamp == 0 {
r.Timestamp = currentTimestamp
}
}
// Convert timestamps from seconds to milliseconds.
for i := range rows {
rows[i].Timestamp *= 1e3
}
return true
}
type pushCtx struct {
Rows Rows
Common common.InsertCtx
reqBuf []byte
tailBuf []byte
err error
}
func (ctx *pushCtx) Error() error {
if ctx.err == io.EOF {
return nil
}
return ctx.err
}
func (ctx *pushCtx) reset() {
ctx.Rows.Reset()
ctx.Common.Reset(0)
ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0]
ctx.err = nil
}
var (
graphiteReadCalls = metrics.NewCounter(`vm_read_calls_total{name="graphite"}`)
graphiteReadErrors = metrics.NewCounter(`vm_read_errors_total{name="graphite"}`)
)
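// getPushCtx prefers the channel, which caches up to GOMAXPROCS hot contexts,
// and falls back to sync.Pool, so steady-state inserts avoid extra allocations.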
func getPushCtx() *pushCtx {
select {
case ctx := <-pushCtxPoolCh:
return ctx
default:
if v := pushCtxPool.Get(); v != nil {
return v.(*pushCtx)
}
return &pushCtx{}
}
}
func putPushCtx(ctx *pushCtx) {
ctx.reset()
select {
case pushCtxPoolCh <- ctx:
default:
pushCtxPool.Put(ctx)
}
}
var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, runtime.GOMAXPROCS(-1))


@@ -0,0 +1,138 @@
package graphite
import (
"net"
"runtime"
"strings"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/metrics"
)
var (
writeRequestsTCP = metrics.NewCounter(`vm_graphite_requests_total{name="write", net="tcp"}`)
writeErrorsTCP = metrics.NewCounter(`vm_graphite_request_errors_total{name="write", net="tcp"}`)
writeRequestsUDP = metrics.NewCounter(`vm_graphite_requests_total{name="write", net="udp"}`)
writeErrorsUDP = metrics.NewCounter(`vm_graphite_request_errors_total{name="write", net="udp"}`)
)
// Serve starts graphite server on the given addr.
func Serve(addr string) {
logger.Infof("starting TCP Graphite server at %q", addr)
lnTCP, err := netutil.NewTCPListener("graphite", addr)
if err != nil {
logger.Fatalf("cannot start TCP Graphite server at %q: %s", addr, err)
}
listenerTCP = lnTCP
logger.Infof("starting UDP Graphite server at %q", addr)
lnUDP, err := net.ListenPacket("udp4", addr)
if err != nil {
logger.Fatalf("cannot start UDP Graphite server at %q: %s", addr, err)
}
listenerUDP = lnUDP
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
serveTCP(listenerTCP)
logger.Infof("stopped TCP Graphite server at %q", addr)
}()
wg.Add(1)
go func() {
defer wg.Done()
serveUDP(listenerUDP)
logger.Infof("stopped UDP Graphite server at %q", addr)
}()
wg.Wait()
}
func serveTCP(ln net.Listener) {
for {
c, err := ln.Accept()
if err != nil {
if ne, ok := err.(net.Error); ok {
if ne.Temporary() {
time.Sleep(time.Second)
continue
}
if strings.Contains(err.Error(), "use of closed network connection") {
break
}
logger.Fatalf("unrecoverable error when accepting TCP Graphite connections: %s", err)
}
logger.Fatalf("unexpected error when accepting TCP Graphite connections: %s", err)
}
go func() {
writeRequestsTCP.Inc()
if err := insertHandler(c); err != nil {
writeErrorsTCP.Inc()
logger.Errorf("error in TCP Graphite conn %q<->%q: %s", c.LocalAddr(), c.RemoteAddr(), err)
}
_ = c.Close()
}()
}
}
func serveUDP(ln net.PacketConn) {
gomaxprocs := runtime.GOMAXPROCS(-1)
var wg sync.WaitGroup
for i := 0; i < gomaxprocs; i++ {
wg.Add(1)
go func() {
defer wg.Done()
var bb bytesutil.ByteBuffer
bb.B = bytesutil.Resize(bb.B, 64*1024)
for {
bb.Reset()
bb.B = bb.B[:cap(bb.B)]
n, addr, err := ln.ReadFrom(bb.B)
if err != nil {
writeErrorsUDP.Inc()
if ne, ok := err.(net.Error); ok {
if ne.Temporary() {
time.Sleep(time.Second)
continue
}
if strings.Contains(err.Error(), "use of closed network connection") {
break
}
}
logger.Errorf("cannot read Graphite UDP data: %s", err)
continue
}
bb.B = bb.B[:n]
writeRequestsUDP.Inc()
if err := insertHandler(bb.NewReader()); err != nil {
writeErrorsUDP.Inc()
logger.Errorf("error in UDP Graphite conn %q<->%q: %s", ln.LocalAddr(), addr, err)
continue
}
}
}()
}
wg.Wait()
}
var (
listenerTCP net.Listener
listenerUDP net.PacketConn
)
// Stop stops the server.
func Stop() {
logger.Infof("stopping TCP Graphite server at %q...", listenerTCP.Addr())
if err := listenerTCP.Close(); err != nil {
logger.Errorf("cannot close TCP Graphite server: %s", err)
}
logger.Infof("stopping UDP Graphite server at %q...", listenerUDP.LocalAddr())
if err := listenerUDP.Close(); err != nil {
logger.Errorf("cannot close UDP Graphite server: %s", err)
}
}
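
For a quick end-to-end check, a client can push one plaintext sample over TCP; a sketch assuming the conventional :2003 listen address:

package example

import (
	"fmt"
	"net"
	"time"
)

// pushSample writes a single Graphite plaintext line to the server.
func pushSample() error {
	c, err := net.Dial("tcp", "localhost:2003")
	if err != nil {
		return err
	}
	defer c.Close()
	_, err = fmt.Fprintf(c, "cpu.usage_user;host=host42 1.23 %d\n", time.Now().Unix())
	return err
}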


@@ -0,0 +1,397 @@
package influx
import (
"fmt"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson/fastfloat"
)
// Rows contains parsed influx rows.
type Rows struct {
Rows []Row
tagsPool []Tag
fieldsPool []Field
}
// Reset resets rs.
func (rs *Rows) Reset() {
// Reset rows, tags and fields in order to remove references to old data,
// so GC could collect it.
for i := range rs.Rows {
rs.Rows[i].reset()
}
rs.Rows = rs.Rows[:0]
for i := range rs.tagsPool {
rs.tagsPool[i].reset()
}
rs.tagsPool = rs.tagsPool[:0]
for i := range rs.fieldsPool {
rs.fieldsPool[i].reset()
}
rs.fieldsPool = rs.fieldsPool[:0]
}
// Unmarshal unmarshals influx line protocol rows from s.
//
// See https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/
//
// s must remain unchanged while rs is in use.
func (rs *Rows) Unmarshal(s string) {
rs.Rows, rs.tagsPool, rs.fieldsPool = unmarshalRows(rs.Rows[:0], s, rs.tagsPool[:0], rs.fieldsPool[:0])
}
// Row is a single influx row.
type Row struct {
Measurement string
Tags []Tag
Fields []Field
Timestamp int64
}
func (r *Row) reset() {
r.Measurement = ""
r.Tags = nil
r.Fields = nil
r.Timestamp = 0
}
func (r *Row) unmarshal(s string, tagsPool []Tag, fieldsPool []Field, noEscapeChars bool) ([]Tag, []Field, error) {
r.reset()
n := nextUnescapedChar(s, ' ', noEscapeChars)
if n < 0 {
return tagsPool, fieldsPool, fmt.Errorf("cannot find Whitespace I in %q", s)
}
measurementTags := s[:n]
s = s[n+1:]
// Parse measurement and tags
var err error
n = nextUnescapedChar(measurementTags, ',', noEscapeChars)
if n >= 0 {
tagsStart := len(tagsPool)
tagsPool, err = unmarshalTags(tagsPool, measurementTags[n+1:], noEscapeChars)
if err != nil {
return tagsPool, fieldsPool, err
}
tags := tagsPool[tagsStart:]
r.Tags = tags[:len(tags):len(tags)]
measurementTags = measurementTags[:n]
}
r.Measurement = unescapeTagValue(measurementTags, noEscapeChars)
// Allow empty r.Measurement. In this case the metric name is constructed directly from field keys.
// Parse fields
fieldsStart := len(fieldsPool)
hasQuotedFields := nextUnescapedChar(s, '"', noEscapeChars) >= 0
n = nextUnquotedChar(s, ' ', noEscapeChars, hasQuotedFields)
if n < 0 {
// No timestamp.
fieldsPool, err = unmarshalInfluxFields(fieldsPool, s, noEscapeChars, hasQuotedFields)
if err != nil {
return tagsPool, fieldsPool, err
}
fields := fieldsPool[fieldsStart:]
r.Fields = fields[:len(fields):len(fields)]
return tagsPool, fieldsPool, nil
}
fieldsPool, err = unmarshalInfluxFields(fieldsPool, s[:n], noEscapeChars, hasQuotedFields)
if err != nil {
return tagsPool, fieldsPool, err
}
r.Fields = fieldsPool[fieldsStart:]
s = s[n+1:]
// Parse timestamp
timestamp := fastfloat.ParseInt64BestEffort(s)
if timestamp == 0 && s != "0" {
return tagsPool, fieldsPool, fmt.Errorf("cannot parse timestamp %q", s)
}
r.Timestamp = timestamp
return tagsPool, fieldsPool, nil
}
// Tag represents influx tag.
type Tag struct {
Key string
Value string
}
func (tag *Tag) reset() {
tag.Key = ""
tag.Value = ""
}
func (tag *Tag) unmarshal(s string, noEscapeChars bool) error {
tag.reset()
n := nextUnescapedChar(s, '=', noEscapeChars)
if n < 0 {
return fmt.Errorf("missing tag value for %q", s)
}
tag.Key = unescapeTagValue(s[:n], noEscapeChars)
tag.Value = unescapeTagValue(s[n+1:], noEscapeChars)
return nil
}
// Field represents influx field.
type Field struct {
Key string
Value float64
}
func (f *Field) reset() {
f.Key = ""
f.Value = 0
}
func (f *Field) unmarshal(s string, noEscapeChars, hasQuotedFields bool) error {
f.reset()
n := nextUnescapedChar(s, '=', noEscapeChars)
if n < 0 {
return fmt.Errorf("missing field value for %q", s)
}
f.Key = unescapeTagValue(s[:n], noEscapeChars)
if len(f.Key) == 0 {
return fmt.Errorf("field key cannot be empty")
}
v, err := parseFieldValue(s[n+1:], hasQuotedFields)
if err != nil {
return fmt.Errorf("cannot parse field value for %q: %s", f.Key, err)
}
f.Value = v
return nil
}
func unmarshalRows(dst []Row, s string, tagsPool []Tag, fieldsPool []Field) ([]Row, []Tag, []Field) {
noEscapeChars := strings.IndexByte(s, '\\') < 0
for len(s) > 0 {
n := strings.IndexByte(s, '\n')
if n < 0 {
// The last line.
return unmarshalRow(dst, s, tagsPool, fieldsPool, noEscapeChars)
}
dst, tagsPool, fieldsPool = unmarshalRow(dst, s[:n], tagsPool, fieldsPool, noEscapeChars)
s = s[n+1:]
}
return dst, tagsPool, fieldsPool
}
func unmarshalRow(dst []Row, s string, tagsPool []Tag, fieldsPool []Field, noEscapeChars bool) ([]Row, []Tag, []Field) {
if len(s) > 0 && s[len(s)-1] == '\r' {
s = s[:len(s)-1]
}
if len(s) == 0 {
// Skip empty line
return dst, tagsPool, fieldsPool
}
if s[0] == '#' {
// Skip comment
return dst, tagsPool, fieldsPool
}
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, Row{})
}
r := &dst[len(dst)-1]
var err error
tagsPool, fieldsPool, err = r.unmarshal(s, tagsPool, fieldsPool, noEscapeChars)
if err != nil {
dst = dst[:len(dst)-1]
logger.Errorf("cannot unmarshal Influx line %q: %s; skipping it", s, err)
invalidLines.Inc()
}
return dst, tagsPool, fieldsPool
}
var invalidLines = metrics.NewCounter(`vm_rows_invalid_total{type="influx"}`)
func unmarshalTags(dst []Tag, s string, noEscapeChars bool) ([]Tag, error) {
for {
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, Tag{})
}
tag := &dst[len(dst)-1]
n := nextUnescapedChar(s, ',', noEscapeChars)
if n < 0 {
if err := tag.unmarshal(s, noEscapeChars); err != nil {
return dst[:len(dst)-1], err
}
if len(tag.Key) == 0 || len(tag.Value) == 0 {
// Skip empty tag
dst = dst[:len(dst)-1]
}
return dst, nil
}
if err := tag.unmarshal(s[:n], noEscapeChars); err != nil {
return dst[:len(dst)-1], err
}
s = s[n+1:]
if len(tag.Key) == 0 || len(tag.Value) == 0 {
// Skip empty tag
dst = dst[:len(dst)-1]
}
}
}
func unmarshalInfluxFields(dst []Field, s string, noEscapeChars, hasQuotedFields bool) ([]Field, error) {
for {
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, Field{})
}
f := &dst[len(dst)-1]
n := nextUnquotedChar(s, ',', noEscapeChars, hasQuotedFields)
if n < 0 {
if err := f.unmarshal(s, noEscapeChars, hasQuotedFields); err != nil {
return dst, err
}
return dst, nil
}
if err := f.unmarshal(s[:n], noEscapeChars, hasQuotedFields); err != nil {
return dst, err
}
s = s[n+1:]
}
}
func unescapeTagValue(s string, noEscapeChars bool) string {
if noEscapeChars {
// Fast path - no escape chars.
return s
}
n := strings.IndexByte(s, '\\')
if n < 0 {
return s
}
// Slow path. Remove escape chars.
dst := make([]byte, 0, len(s))
for {
dst = append(dst, s[:n]...)
s = s[n+1:]
if len(s) == 0 {
return string(append(dst, '\\'))
}
ch := s[0]
if ch != ' ' && ch != ',' && ch != '=' && ch != '\\' {
dst = append(dst, '\\')
}
dst = append(dst, ch)
s = s[1:]
n = strings.IndexByte(s, '\\')
if n < 0 {
return string(append(dst, s...))
}
}
}
func parseFieldValue(s string, hasQuotedFields bool) (float64, error) {
if len(s) == 0 {
return 0, fmt.Errorf("field value cannot be empty")
}
if hasQuotedFields && s[0] == '"' {
if len(s) < 2 || s[len(s)-1] != '"' {
return 0, fmt.Errorf("missing closing quote for quoted field value %s", s)
}
// Try converting quoted string to number, since sometimes Influx agents
// send numbers as strings.
s = s[1 : len(s)-1]
return fastfloat.ParseBestEffort(s), nil
}
ch := s[len(s)-1]
if ch == 'i' {
// Integer value
ss := s[:len(s)-1]
n := fastfloat.ParseInt64BestEffort(ss)
return float64(n), nil
}
if ch == 'u' {
// Unsigned integer value
ss := s[:len(s)-1]
n := fastfloat.ParseUint64BestEffort(ss)
return float64(n), nil
}
if s == "t" || s == "T" || s == "true" || s == "True" || s == "TRUE" {
return 1, nil
}
if s == "f" || s == "F" || s == "false" || s == "False" || s == "FALSE" {
return 0, nil
}
return fastfloat.ParseBestEffort(s), nil
}
func nextUnescapedChar(s string, ch byte, noEscapeChars bool) int {
if noEscapeChars {
// Fast path: just search for ch in s, since s has no escape chars.
return strings.IndexByte(s, ch)
}
sOrig := s
again:
n := strings.IndexByte(s, ch)
if n < 0 {
return -1
}
if n == 0 {
return len(sOrig) - len(s) + n
}
if s[n-1] != '\\' {
return len(sOrig) - len(s) + n
}
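// ch is preceded by at least one backslash. Count the run of backslashes:
// an even count means they escape each other, so ch itself is unescaped.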
nOrig := n
slashes := 0
for n > 0 && s[n-1] == '\\' {
slashes++
n--
}
if slashes&1 == 0 {
return len(sOrig) - len(s) + nOrig
}
s = s[nOrig+1:]
goto again
}
func nextUnquotedChar(s string, ch byte, noEscapeChars, hasQuotedFields bool) int {
if !hasQuotedFields {
return nextUnescapedChar(s, ch, noEscapeChars)
}
sOrig := s
for {
n := nextUnescapedChar(s, ch, noEscapeChars)
if n < 0 {
return -1
}
if !isInQuote(s[:n], noEscapeChars) {
return n + len(sOrig) - len(s)
}
s = s[n+1:]
n = nextUnescapedChar(s, '"', noEscapeChars)
if n < 0 {
return -1
}
s = s[n+1:]
}
}
func isInQuote(s string, noEscapeChars bool) bool {
isQuote := false
for {
n := nextUnescapedChar(s, '"', noEscapeChars)
if n < 0 {
return isQuote
}
isQuote = !isQuote
s = s[n+1:]
}
}
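
A short sketch of the parser in use; the sample line is illustrative:

package example

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/influx"
)

func parseInflux() {
	var rows influx.Rows
	// Format: measurement[,tag1=v1] field1=v1[,field2=v2] [timestamp]
	rows.Unmarshal("cpu,host=host42 usage_user=1.23,usage_system=0.42 1575383830000000000")
	for _, r := range rows.Rows {
		fmt.Println(r.Measurement, r.Tags, r.Fields, r.Timestamp)
	}
}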


@@ -0,0 +1,451 @@
package influx
import (
"reflect"
"testing"
)
func TestNextUnquotedChar(t *testing.T) {
f := func(s string, ch byte, noUnescape bool, nExpected int) {
t.Helper()
n := nextUnquotedChar(s, ch, noUnescape, true)
if n != nExpected {
t.Fatalf("unexpected n for nextUnqotedChar(%q, '%c', %v); got %d; want %d", s, ch, noUnescape, n, nExpected)
}
}
f(``, ' ', false, -1)
f(``, ' ', true, -1)
f(`""`, ' ', false, -1)
f(`""`, ' ', true, -1)
f(`"foo bar\" " baz`, ' ', false, 12)
f(`"foo bar\" " baz`, ' ', true, 10)
}
func TestNextUnescapedChar(t *testing.T) {
f := func(s string, ch byte, noUnescape bool, nExpected int) {
t.Helper()
n := nextUnescapedChar(s, ch, noUnescape)
if n != nExpected {
t.Fatalf("unexpected n for nextUnescapedChar(%q, '%c', %v); got %d; want %d", s, ch, noUnescape, n, nExpected)
}
}
f("", ' ', true, -1)
f("", ' ', false, -1)
f(" ", ' ', true, 0)
f(" ", ' ', false, 0)
f("x y", ' ', true, 1)
f("x y", ' ', false, 1)
f(`x\ y`, ' ', true, 2)
f(`x\ y`, ' ', false, 3)
f(`\\,`, ',', true, 2)
f(`\\,`, ',', false, 2)
f(`\\\=`, '=', true, 3)
f(`\\\=`, '=', false, -1)
f(`\\\=aa`, '=', true, 3)
f(`\\\=aa`, '=', false, -1)
f(`\\\=a=a`, '=', true, 3)
f(`\\\=a=a`, '=', false, 5)
f(`a\`, ' ', true, -1)
f(`a\`, ' ', false, -1)
}
func TestUnescapeTagValue(t *testing.T) {
f := func(s, sExpected string) {
t.Helper()
ss := unescapeTagValue(s, false)
if ss != sExpected {
t.Fatalf("unexpected value for %q; got %q; want %q", s, ss, sExpected)
}
}
f("", "")
f("x", "x")
f("foobar", "foobar")
f("привет", "привет")
f(`\a\b\cd`, `\a\b\cd`)
f(`\`, `\`)
f(`foo\`, `foo\`)
f(`\,foo\\\=\ bar`, `,foo\= bar`)
}
func TestRowsUnmarshalFailure(t *testing.T) {
f := func(s string) {
t.Helper()
var rows Rows
rows.Unmarshal(s)
if len(rows.Rows) != 0 {
t.Fatalf("expecting zero rows; got %d rows", len(rows.Rows))
}
// Try again
rows.Unmarshal(s)
if len(rows.Rows) != 0 {
t.Fatalf("expecting zero rows; got %d rows", len(rows.Rows))
}
}
// No fields
f("foo")
f("foo,bar=baz 1234")
// Missing tag value
f("foo,bar")
f("foo,bar baz")
f("foo,bar=123, 123")
// Missing field value
f("foo bar")
f("foo bar=")
f("foo bar=,baz=23 123")
f("foo bar=1, 123")
f(`foo bar=" 123`)
f(`foo bar="123`)
f(`foo bar=",123`)
f(`foo bar=a"", 123`)
// Missing field name
f("foo =123")
f("foo =123\nbar")
// Invalid timestamp
f("foo bar=123 baz")
}
func TestRowsUnmarshalSuccess(t *testing.T) {
f := func(s string, rowsExpected *Rows) {
t.Helper()
var rows Rows
rows.Unmarshal(s)
if !reflect.DeepEqual(rows.Rows, rowsExpected.Rows) {
t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows.Rows, rowsExpected.Rows)
}
// Try unmarshaling again
rows.Unmarshal(s)
if !reflect.DeepEqual(rows.Rows, rowsExpected.Rows) {
t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows.Rows, rowsExpected.Rows)
}
rows.Reset()
if len(rows.Rows) != 0 {
t.Fatalf("non-empty rows after reset: %+v", rows.Rows)
}
}
// Empty line
f("", &Rows{})
f("\n\n", &Rows{})
f("\n\r\n", &Rows{})
// Comment
f("\n# foobar\n", &Rows{})
f("#foobar baz", &Rows{})
f("#foobar baz\n#sss", &Rows{})
// Missing measurement
f(" baz=123", &Rows{
Rows: []Row{{
Measurement: "",
Fields: []Field{{
Key: "baz",
Value: 123,
}},
}},
})
f(",foo=bar baz=123", &Rows{
Rows: []Row{{
Measurement: "",
Tags: []Tag{{
Key: "foo",
Value: "bar",
}},
Fields: []Field{{
Key: "baz",
Value: 123,
}},
}},
})
// Minimal line without tags and timestamp
f("foo bar=123", &Rows{
Rows: []Row{{
Measurement: "foo",
Fields: []Field{{
Key: "bar",
Value: 123,
}},
}},
})
f("# comment\nfoo bar=123\r\n#comment2 sdsf dsf", &Rows{
Rows: []Row{{
Measurement: "foo",
Fields: []Field{{
Key: "bar",
Value: 123,
}},
}},
})
f("foo bar=123\n", &Rows{
Rows: []Row{{
Measurement: "foo",
Fields: []Field{{
Key: "bar",
Value: 123,
}},
}},
})
// Line without tags and with a timestamp.
f("foo bar=123.45 -345", &Rows{
Rows: []Row{{
Measurement: "foo",
Fields: []Field{{
Key: "bar",
Value: 123.45,
}},
Timestamp: -345,
}},
})
// Line with a single tag
f("foo,tag1=xyz bar=123", &Rows{
Rows: []Row{{
Measurement: "foo",
Tags: []Tag{{
Key: "tag1",
Value: "xyz",
}},
Fields: []Field{{
Key: "bar",
Value: 123,
}},
}},
})
// Line with multiple tags
f("foo,tag1=xyz,tag2=43as bar=123", &Rows{
Rows: []Row{{
Measurement: "foo",
Tags: []Tag{
{
Key: "tag1",
Value: "xyz",
},
{
Key: "tag2",
Value: "43as",
},
},
Fields: []Field{{
Key: "bar",
Value: 123,
}},
}},
})
// Line with empty tag values
f("foo,tag1=xyz,tagN=,tag2=43as,=xxx bar=123", &Rows{
Rows: []Row{{
Measurement: "foo",
Tags: []Tag{
{
Key: "tag1",
Value: "xyz",
},
{
Key: "tag2",
Value: "43as",
},
},
Fields: []Field{{
Key: "bar",
Value: 123,
}},
}},
})
// Line with multiple tags, multiple fields and timestamp
f(`system,host=ip-172-16-10-144 uptime_format="3 days, 21:01",quoted_float="-1.23",quoted_int="123" 1557761040000000000`, &Rows{
Rows: []Row{{
Measurement: "system",
Tags: []Tag{{
Key: "host",
Value: "ip-172-16-10-144",
}},
Fields: []Field{
{
Key: "uptime_format",
Value: 0,
},
{
Key: "quoted_float",
Value: -1.23,
},
{
Key: "quoted_int",
Value: 123,
},
},
Timestamp: 1557761040000000000,
}},
})
f(`foo,tag1=xyz,tag2=43as bar=-123e4,x=True,y=-45i,z=f,aa="f,= \"a",bb=23u 48934`, &Rows{
Rows: []Row{{
Measurement: "foo",
Tags: []Tag{
{
Key: "tag1",
Value: "xyz",
},
{
Key: "tag2",
Value: "43as",
},
},
Fields: []Field{
{
Key: "bar",
Value: -123e4,
},
{
Key: "x",
Value: 1,
},
{
Key: "y",
Value: -45,
},
{
Key: "z",
Value: 0,
},
{
Key: "aa",
Value: 0,
},
{
Key: "bb",
Value: 23,
},
},
Timestamp: 48934,
}},
})
// Escape chars
f(`fo\,bar\=baz,x\=\b=\\a\,\=\q\ \\\a\=\,=4.34`, &Rows{
Rows: []Row{{
Measurement: `fo,bar=baz`,
Tags: []Tag{{
Key: `x=\b`,
Value: `\a,=\q `,
}},
Fields: []Field{{
Key: `\\a=,`,
Value: 4.34,
}},
}},
})
// Multiple lines
f("foo,tag=xyz field=1.23 48934\n"+
"bar x=-1i\n\n", &Rows{
Rows: []Row{
{
Measurement: "foo",
Tags: []Tag{{
Key: "tag",
Value: "xyz",
}},
Fields: []Field{{
Key: "field",
Value: 1.23,
}},
Timestamp: 48934,
},
{
Measurement: "bar",
Fields: []Field{{
Key: "x",
Value: -1,
}},
},
},
})
// Multiple lines with invalid line in the middle.
f("foo,tag=xyz field=1.23 48934\n"+
"invalid line\n"+
"bar x=-1i\n\n", &Rows{
Rows: []Row{
{
Measurement: "foo",
Tags: []Tag{{
Key: "tag",
Value: "xyz",
}},
Fields: []Field{{
Key: "field",
Value: 1.23,
}},
Timestamp: 48934,
},
{
Measurement: "bar",
Fields: []Field{{
Key: "x",
Value: -1,
}},
},
},
})
// No newline after the second line.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/82
f("foo,tag=xyz field=1.23 48934\n"+
"bar x=-1i", &Rows{
Rows: []Row{
{
Measurement: "foo",
Tags: []Tag{{
Key: "tag",
Value: "xyz",
}},
Fields: []Field{{
Key: "field",
Value: 1.23,
}},
Timestamp: 48934,
},
{
Measurement: "bar",
Fields: []Field{{
Key: "x",
Value: -1,
}},
},
},
})
f("x,y=z,g=p:\\ \\ 5432\\,\\ gp\\ mon\\ [lol]\\ con10\\ cmd5\\ SELECT f=1", &Rows{
Rows: []Row{{
Measurement: "x",
Tags: []Tag{
{
Key: "y",
Value: "z",
},
{
Key: "g",
Value: "p: 5432, gp mon [lol] con10 cmd5 SELECT",
},
},
Fields: []Field{{
Key: "f",
Value: 1,
}},
}},
})
}


@@ -0,0 +1,25 @@
package influx
import (
"fmt"
"testing"
)
func BenchmarkRowsUnmarshal(b *testing.B) {
s := `cpu usage_user=1.23,usage_system=4.34,usage_iowait=0.1112 1234556768
cpu usage_user=1.23,usage_system=4.34,usage_iowait=0.1112 123455676344
aaa usage_user=1.23,usage_system=4.34,usage_iowait=0.1112 123455676344
bbb usage_user=1.23,usage_system=4.34,usage_iowait=0.1112 123455676344
`
b.SetBytes(int64(len(s)))
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
var rows Rows
for pb.Next() {
rows.Unmarshal(s)
if len(rows.Rows) != 4 {
panic(fmt.Errorf("unexpected number of rows parsed; got %d; want 4", len(rows.Rows)))
}
}
})
}


@@ -0,0 +1,226 @@
package influx
import (
"flag"
"fmt"
"io"
"net/http"
"runtime"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/concurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
)
var (
measurementFieldSeparator = flag.String("influxMeasurementFieldSeparator", "_", "Separator for `{measurement}{separator}{field_name}` metric name when inserted via Influx line protocol")
skipSingleField = flag.Bool("influxSkipSingleField", false, "Uses `{measurement}` instead of `{measurement}{separator}{field_name}` for metric name if Influx line contains only a single field")
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="influx"}`)
rowsPerInsert = metrics.NewSummary(`vm_rows_per_insert{type="influx"}`)
)
// InsertHandler processes remote write for influx line protocol.
//
// See https://github.com/influxdata/influxdb/blob/4cbdc197b8117fee648d62e2e5be75c6575352f0/tsdb/README.md
func InsertHandler(req *http.Request) error {
return concurrencylimiter.Do(func() error {
return insertHandlerInternal(req)
})
}
func insertHandlerInternal(req *http.Request) error {
influxReadCalls.Inc()
r := req.Body
if req.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(r)
if err != nil {
return fmt.Errorf("cannot read gzipped influx line protocol data: %s", err)
}
defer common.PutGzipReader(zr)
r = zr
}
q := req.URL.Query()
tsMultiplier := int64(1e6)
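// Positive multipliers denote precisions finer than milliseconds (timestamps
// are divided in Read); negative multipliers denote coarser precisions
// (timestamps are multiplied and the current time is rounded to that precision).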
switch q.Get("precision") {
case "ns":
tsMultiplier = 1e6
case "u":
tsMultiplier = 1e3
case "ms":
tsMultiplier = 1
case "s":
tsMultiplier = -1e3
case "m":
tsMultiplier = -1e3 * 60
case "h":
tsMultiplier = -1e3 * 3600
}
// Read db tag from https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint
db := q.Get("db")
ctx := getPushCtx()
defer putPushCtx(ctx)
for ctx.Read(r, tsMultiplier) {
if err := ctx.InsertRows(db); err != nil {
return err
}
}
return ctx.Error()
}
func (ctx *pushCtx) InsertRows(db string) error {
rows := ctx.Rows.Rows
rowsLen := 0
for i := range rows {
rowsLen += len(rows[i].Tags)
}
ic := &ctx.Common
ic.Reset(rowsLen)
rowsTotal := 0
for i := range rows {
r := &rows[i]
ic.Labels = ic.Labels[:0]
hasDBLabel := false
for j := range r.Tags {
tag := &r.Tags[j]
if tag.Key == "db" {
hasDBLabel = true
}
ic.AddLabel(tag.Key, tag.Value)
}
if len(db) > 0 && !hasDBLabel {
ic.AddLabel("db", db)
}
ctx.metricNameBuf = storage.MarshalMetricNameRaw(ctx.metricNameBuf[:0], ic.Labels)
ctx.metricGroupBuf = append(ctx.metricGroupBuf[:0], r.Measurement...)
skipFieldKey := len(r.Fields) == 1 && *skipSingleField
if len(ctx.metricGroupBuf) > 0 && !skipFieldKey {
ctx.metricGroupBuf = append(ctx.metricGroupBuf, *measurementFieldSeparator...)
}
metricGroupPrefixLen := len(ctx.metricGroupBuf)
for j := range r.Fields {
f := &r.Fields[j]
if !skipFieldKey {
ctx.metricGroupBuf = append(ctx.metricGroupBuf[:metricGroupPrefixLen], f.Key...)
}
metricGroup := bytesutil.ToUnsafeString(ctx.metricGroupBuf)
ic.Labels = ic.Labels[:0]
ic.AddLabel("", metricGroup)
ic.WriteDataPoint(ctx.metricNameBuf, ic.Labels[:1], r.Timestamp, f.Value)
}
rowsTotal += len(r.Fields)
}
rowsInserted.Add(rowsTotal)
rowsPerInsert.Update(float64(rowsTotal))
return ic.FlushBufs()
}
func (ctx *pushCtx) Read(r io.Reader, tsMultiplier int64) bool {
if ctx.err != nil {
return false
}
ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(r, ctx.reqBuf, ctx.tailBuf)
if ctx.err != nil {
if ctx.err != io.EOF {
influxReadErrors.Inc()
ctx.err = fmt.Errorf("cannot read influx line protocol data: %s", ctx.err)
}
return false
}
ctx.Rows.Unmarshal(bytesutil.ToUnsafeString(ctx.reqBuf))
// Adjust timestamps according to tsMultiplier
currentTs := time.Now().UnixNano() / 1e6
if tsMultiplier >= 1 {
for i := range ctx.Rows.Rows {
row := &ctx.Rows.Rows[i]
if row.Timestamp == 0 {
row.Timestamp = currentTs
} else {
row.Timestamp /= tsMultiplier
}
}
} else if tsMultiplier < 0 {
tsMultiplier = -tsMultiplier
currentTs -= currentTs % tsMultiplier
for i := range ctx.Rows.Rows {
row := &ctx.Rows.Rows[i]
if row.Timestamp == 0 {
row.Timestamp = currentTs
} else {
row.Timestamp *= tsMultiplier
}
}
}
return true
}
var (
influxReadCalls = metrics.NewCounter(`vm_read_calls_total{name="influx"}`)
influxReadErrors = metrics.NewCounter(`vm_read_errors_total{name="influx"}`)
)
type pushCtx struct {
Rows Rows
Common common.InsertCtx
reqBuf []byte
tailBuf []byte
metricNameBuf []byte
metricGroupBuf []byte
err error
}
func (ctx *pushCtx) Error() error {
if ctx.err == io.EOF {
return nil
}
return ctx.err
}
func (ctx *pushCtx) reset() {
ctx.Rows.Reset()
ctx.Common.Reset(0)
ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0]
ctx.metricNameBuf = ctx.metricNameBuf[:0]
ctx.metricGroupBuf = ctx.metricGroupBuf[:0]
ctx.err = nil
}
func getPushCtx() *pushCtx {
select {
case ctx := <-pushCtxPoolCh:
return ctx
default:
if v := pushCtxPool.Get(); v != nil {
return v.(*pushCtx)
}
return &pushCtx{}
}
}
func putPushCtx(ctx *pushCtx) {
ctx.reset()
select {
case pushCtxPoolCh <- ctx:
default:
pushCtxPool.Put(ctx)
}
}
var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, runtime.GOMAXPROCS(-1))
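
From the client side the handler accepts plain HTTP POSTs; a sketch assuming vminsert listens on the default :8428 port (the db and precision values are illustrative):

package example

import (
	"log"
	"net/http"
	"strings"
)

// pushInflux posts one line-protocol sample; vminsert replies with
// 204 No Content on success.
func pushInflux() {
	body := strings.NewReader("cpu,host=host42 usage_user=1.23 1575383830\n")
	resp, err := http.Post("http://localhost:8428/write?db=telegraf&precision=s", "text/plain", body)
	if err != nil {
		log.Fatal(err)
	}
	_ = resp.Body.Close()
}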

app/vminsert/main.go Normal file

@@ -0,0 +1,99 @@
package vminsert
import (
"flag"
"fmt"
"net/http"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/concurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/graphite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/opentsdb"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
)
var (
graphiteListenAddr = flag.String("graphiteListenAddr", "", "TCP and UDP address to listen for Graphite plaintext data. Usually :2003 must be set. Doesn't work if empty")
opentsdbListenAddr = flag.String("opentsdbListenAddr", "", "TCP and UDP address to listen for OpenTSDB put messages. Usually :4242 must be set. Doesn't work if empty")
opentsdbHTTPListenAddr = flag.String("opentsdbHTTPListenAddr", "", "TCP address to listen for OpenTSDB HTTP put requests. Usually :4242 must be set. Doesn't work if empty")
maxInsertRequestSize = flag.Int("maxInsertRequestSize", 32*1024*1024, "The maximum size of a single insert request in bytes")
maxLabelsPerTimeseries = flag.Int("maxLabelsPerTimeseries", 30, "The maximum number of labels accepted per time series. Superfluous labels are dropped")
)
// Init initializes vminsert.
func Init() {
storage.SetMaxLabelsPerTimeseries(*maxLabelsPerTimeseries)
concurrencylimiter.Init()
if len(*graphiteListenAddr) > 0 {
go graphite.Serve(*graphiteListenAddr)
}
if len(*opentsdbListenAddr) > 0 {
go opentsdb.Serve(*opentsdbListenAddr)
}
if len(*opentsdbHTTPListenAddr) > 0 {
go opentsdbhttp.Serve(*opentsdbHTTPListenAddr, int64(*maxInsertRequestSize))
}
}
// Stop stops vminsert.
func Stop() {
if len(*graphiteListenAddr) > 0 {
graphite.Stop()
}
if len(*opentsdbListenAddr) > 0 {
opentsdb.Stop()
}
if len(*opentsdbHTTPListenAddr) > 0 {
opentsdbhttp.Stop()
}
}
// RequestHandler is a handler for Prometheus remote storage write API
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
path := strings.Replace(r.URL.Path, "//", "/", -1)
switch path {
case "/api/v1/write":
prometheusWriteRequests.Inc()
if err := prometheus.InsertHandler(r, int64(*maxInsertRequestSize)); err != nil {
prometheusWriteErrors.Inc()
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "/write", "/api/v2/write":
influxWriteRequests.Inc()
if err := influx.InsertHandler(r); err != nil {
influxWriteErrors.Inc()
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "/query":
// Emulate a fake response for Influx queries.
// This is required for the TSBS benchmark.
influxQueryRequests.Inc()
fmt.Fprintf(w, `{"results":[{"series":[{"values":[]}]}]}`)
return true
default:
// The path is not supported by vminsert.
return false
}
}
var (
prometheusWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/write", protocol="prometheus"}`)
prometheusWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/write", protocol="prometheus"}`)
influxWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/write", protocol="influx"}`)
influxWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/write", protocol="influx"}`)
influxQueryRequests = metrics.NewCounter(`vm_http_requests_total{path="/query", protocol="influx"}`)
)


@@ -0,0 +1,187 @@
package opentsdb
import (
"fmt"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson/fastfloat"
)
// Rows contains parsed OpenTSDB rows.
type Rows struct {
Rows []Row
tagsPool []Tag
}
// Reset resets rs.
func (rs *Rows) Reset() {
// Release references to objects, so they can be GC'ed.
for i := range rs.Rows {
rs.Rows[i].reset()
}
rs.Rows = rs.Rows[:0]
for i := range rs.tagsPool {
rs.tagsPool[i].reset()
}
rs.tagsPool = rs.tagsPool[:0]
}
// Unmarshal unmarshals OpenTSDB put rows from s.
//
// See http://opentsdb.net/docs/build/html/api_telnet/put.html
//
// s must remain unchanged while rs is in use.
func (rs *Rows) Unmarshal(s string) {
rs.Rows, rs.tagsPool = unmarshalRows(rs.Rows[:0], s, rs.tagsPool[:0])
}
// Row is a single OpenTSDB row.
type Row struct {
Metric string
Tags []Tag
Value float64
Timestamp int64
}
func (r *Row) reset() {
r.Metric = ""
r.Tags = nil
r.Value = 0
r.Timestamp = 0
}
func (r *Row) unmarshal(s string, tagsPool []Tag) ([]Tag, error) {
r.reset()
if !strings.HasPrefix(s, "put ") {
return tagsPool, fmt.Errorf("missing `put ` prefix in %q", s)
}
s = s[len("put "):]
n := strings.IndexByte(s, ' ')
if n < 0 {
return tagsPool, fmt.Errorf("cannot find whitespace between metric and timestamp in %q", s)
}
r.Metric = s[:n]
if len(r.Metric) == 0 {
return tagsPool, fmt.Errorf("metric cannot be empty")
}
tail := s[n+1:]
n = strings.IndexByte(tail, ' ')
if n < 0 {
return tagsPool, fmt.Errorf("cannot find whitespace between timestamp and value in %q", s)
}
r.Timestamp = int64(fastfloat.ParseBestEffort(tail[:n]))
tail = tail[n+1:]
n = strings.IndexByte(tail, ' ')
if n < 0 {
return tagsPool, fmt.Errorf("cannot find whitespace between value and the first tag in %q", s)
}
r.Value = fastfloat.ParseBestEffort(tail[:n])
var err error
tagsStart := len(tagsPool)
tagsPool, err = unmarshalTags(tagsPool, tail[n+1:])
if err != nil {
return tagsPool, fmt.Errorf("cannot unmarshal tags in %q: %s", s, err)
}
tags := tagsPool[tagsStart:]
r.Tags = tags[:len(tags):len(tags)]
return tagsPool, nil
}
func unmarshalRows(dst []Row, s string, tagsPool []Tag) ([]Row, []Tag) {
for len(s) > 0 {
n := strings.IndexByte(s, '\n')
if n < 0 {
// The last line.
return unmarshalRow(dst, s, tagsPool)
}
dst, tagsPool = unmarshalRow(dst, s[:n], tagsPool)
s = s[n+1:]
}
return dst, tagsPool
}
func unmarshalRow(dst []Row, s string, tagsPool []Tag) ([]Row, []Tag) {
if len(s) > 0 && s[len(s)-1] == '\r' {
s = s[:len(s)-1]
}
if len(s) == 0 {
// Skip empty line
return dst, tagsPool
}
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, Row{})
}
r := &dst[len(dst)-1]
var err error
tagsPool, err = r.unmarshal(s, tagsPool)
if err != nil {
dst = dst[:len(dst)-1]
logger.Errorf("cannot unmarshal OpenTSDB line %q: %s", s, err)
invalidLines.Inc()
}
return dst, tagsPool
}
var invalidLines = metrics.NewCounter(`vm_rows_invalid_total{type="opentsdb"}`)
func unmarshalTags(dst []Tag, s string) ([]Tag, error) {
for {
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, Tag{})
}
tag := &dst[len(dst)-1]
n := strings.IndexByte(s, ' ')
if n < 0 {
// The last tag found
if err := tag.unmarshal(s); err != nil {
return dst[:len(dst)-1], err
}
if len(tag.Key) == 0 || len(tag.Value) == 0 {
// Skip empty tag
dst = dst[:len(dst)-1]
}
return dst, nil
}
if err := tag.unmarshal(s[:n]); err != nil {
return dst[:len(dst)-1], err
}
s = s[n+1:]
if len(tag.Key) == 0 || len(tag.Value) == 0 {
// Skip empty tag
dst = dst[:len(dst)-1]
}
}
}
// Tag is an OpenTSDB tag.
type Tag struct {
Key string
Value string
}
func (t *Tag) reset() {
t.Key = ""
t.Value = ""
}
func (t *Tag) unmarshal(s string) error {
t.reset()
n := strings.IndexByte(s, '=')
if n < 0 {
return fmt.Errorf("missing tag value for %q", s)
}
t.Key = s[:n]
t.Value = s[n+1:]
return nil
}
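
A short sketch of the parser in use; the sample line is illustrative:

package example

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/opentsdb"
)

func parseOpenTSDB() {
	var rows opentsdb.Rows
	// Telnet put format: put <metric> <timestamp> <value> <tagk=tagv> [...]
	rows.Unmarshal("put cpu.usage_user 1575383830 1.23 host=host42")
	for _, r := range rows.Rows {
		fmt.Println(r.Metric, r.Timestamp, r.Value, r.Tags)
	}
}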


@@ -0,0 +1,222 @@
package opentsdb
import (
"reflect"
"testing"
)
func TestRowsUnmarshalFailure(t *testing.T) {
f := func(s string) {
t.Helper()
var rows Rows
rows.Unmarshal(s)
if len(rows.Rows) != 0 {
t.Fatalf("unexpected number of rows parsed; got %d; want 0", len(rows.Rows))
}
// Try again
rows.Unmarshal(s)
if len(rows.Rows) != 0 {
t.Fatalf("unexpected number of rows parsed; got %d; want 0", len(rows.Rows))
}
}
// Missing put prefix
f("xx")
// Missing metric
f("put 111 34")
// Missing timestamp
f("put aaa")
// Missing value
f("put aaa 1123")
// Invalid timestamp
f("put aaa timestamp")
// Missing first tag
f("put aaa 123 43")
// Invalid value
f("put aaa 123 invalid-value")
// Invalid multiline
f("put aaa\nbbb 123 34")
// Invalid tag
f("put aaa 123 4.5 foo")
}
func TestRowsUnmarshalSuccess(t *testing.T) {
f := func(s string, rowsExpected *Rows) {
t.Helper()
var rows Rows
rows.Unmarshal(s)
if !reflect.DeepEqual(rows.Rows, rowsExpected.Rows) {
t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows.Rows, rowsExpected.Rows)
}
// Try unmarshaling again
rows.Unmarshal(s)
if !reflect.DeepEqual(rows.Rows, rowsExpected.Rows) {
t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows.Rows, rowsExpected.Rows)
}
rows.Reset()
if len(rows.Rows) != 0 {
t.Fatalf("non-empty rows after reset: %+v", rows.Rows)
}
}
// Empty line
f("", &Rows{})
f("\r", &Rows{})
f("\n\n", &Rows{})
f("\n\r\n", &Rows{})
// Single line
f("put foobar 789 -123.456 a=b", &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -123.456,
Timestamp: 789,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
}},
})
// Empty tag
f("put foobar 789 -123.456 a= b=c =d", &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -123.456,
Timestamp: 789,
Tags: []Tag{
{
Key: "b",
Value: "c",
},
},
}},
})
// Fractional timestamp that is supported by Akumuli.
f("put foobar 789.4 -123.456 a=b", &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -123.456,
Timestamp: 789,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
}},
})
f("put foo.bar 789 123.456 a=b\n", &Rows{
Rows: []Row{{
Metric: "foo.bar",
Value: 123.456,
Timestamp: 789,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
}},
})
// Tags
f("put foo 2 1 bar=baz", &Rows{
Rows: []Row{{
Metric: "foo",
Tags: []Tag{{
Key: "bar",
Value: "baz",
}},
Value: 1,
Timestamp: 2,
}},
})
f("put foo 2 1 bar=baz x=y", &Rows{
Rows: []Row{{
Metric: "foo",
Tags: []Tag{
{
Key: "bar",
Value: "baz",
},
{
Key: "x",
Value: "y",
},
},
Value: 1,
Timestamp: 2,
}},
})
f("put foo 2 1 bar=baz=aaa x=y", &Rows{
Rows: []Row{{
Metric: "foo",
Tags: []Tag{
{
Key: "bar",
Value: "baz=aaa",
},
{
Key: "x",
Value: "y",
},
},
Value: 1,
Timestamp: 2,
}},
})
// Multi lines
f("put foo 2 0.3 a=b\nput bar.baz 43 0.34 a=b\n", &Rows{
Rows: []Row{
{
Metric: "foo",
Value: 0.3,
Timestamp: 2,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
},
{
Metric: "bar.baz",
Value: 0.34,
Timestamp: 43,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
},
},
})
// Multi lines with invalid line
f("put foo 2 0.3 a=b\naaa bbb\nput bar.baz 43 0.34 a=b\n", &Rows{
Rows: []Row{
{
Metric: "foo",
Value: 0.3,
Timestamp: 2,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
},
{
Metric: "bar.baz",
Value: 0.34,
Timestamp: 43,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
},
},
})
}


@@ -0,0 +1,25 @@
package opentsdb
import (
"fmt"
"testing"
)
func BenchmarkRowsUnmarshal(b *testing.B) {
s := `put cpu.usage_user 1234556768 1.23 a=b
put cpu.usage_system 1234556768 23.344 a=b
put cpu.usage_iowait 1234556769 3.3443 a=b
put cpu.usage_irq 1234556768 0.34432 a=b
`
b.SetBytes(int64(len(s)))
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
var rows Rows
for pb.Next() {
rows.Unmarshal(s)
if len(rows.Rows) != 4 {
panic(fmt.Errorf("unexpected number of parsed rows; got %d; want 4", len(rows.Rows)))
}
}
})
}


@@ -0,0 +1,160 @@
package opentsdb
import (
"fmt"
"io"
"net"
"runtime"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/concurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="opentsdb"}`)
rowsPerInsert = metrics.NewSummary(`vm_rows_per_insert{type="opentsdb"}`)
)
// insertHandler processes remote write for OpenTSDB put protocol.
//
// See http://opentsdb.net/docs/build/html/api_telnet/put.html
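// Example of an accepted line (hypothetical metric, timestamp and tags):
//
//   put sys.cpu.user 1356998400 42.5 host=webserver01 cpu=0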
func insertHandler(r io.Reader) error {
return concurrencylimiter.Do(func() error {
return insertHandlerInternal(r)
})
}
func insertHandlerInternal(r io.Reader) error {
ctx := getPushCtx()
defer putPushCtx(ctx)
for ctx.Read(r) {
if err := ctx.InsertRows(); err != nil {
return err
}
}
return ctx.Error()
}
func (ctx *pushCtx) InsertRows() error {
rows := ctx.Rows.Rows
ic := &ctx.Common
ic.Reset(len(rows))
for i := range rows {
r := &rows[i]
ic.Labels = ic.Labels[:0]
ic.AddLabel("", r.Metric)
for j := range r.Tags {
tag := &r.Tags[j]
ic.AddLabel(tag.Key, tag.Value)
}
ic.WriteDataPoint(nil, ic.Labels, r.Timestamp, r.Value)
}
rowsInserted.Add(len(rows))
rowsPerInsert.Update(float64(len(rows)))
return ic.FlushBufs()
}
const flushTimeout = 3 * time.Second
func (ctx *pushCtx) Read(r io.Reader) bool {
opentsdbReadCalls.Inc()
if ctx.err != nil {
return false
}
if c, ok := r.(net.Conn); ok {
if err := c.SetReadDeadline(time.Now().Add(flushTimeout)); err != nil {
opentsdbReadErrors.Inc()
ctx.err = fmt.Errorf("cannot set read deadline: %s", err)
return false
}
}
ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(r, ctx.reqBuf, ctx.tailBuf)
if ctx.err != nil {
if ne, ok := ctx.err.(net.Error); ok && ne.Timeout() {
// Flush the read data on timeout and try reading again.
ctx.err = nil
} else {
if ctx.err != io.EOF {
opentsdbReadErrors.Inc()
ctx.err = fmt.Errorf("cannot read OpenTSDB put protocol data: %s", ctx.err)
}
return false
}
}
ctx.Rows.Unmarshal(bytesutil.ToUnsafeString(ctx.reqBuf))
// Fill in missing timestamps
currentTimestamp := time.Now().Unix()
rows := ctx.Rows.Rows
for i := range rows {
r := &rows[i]
if r.Timestamp == 0 {
r.Timestamp = currentTimestamp
}
}
// Convert timestamps from seconds to milliseconds
for i := range rows {
rows[i].Timestamp *= 1e3
}
return true
}
type pushCtx struct {
Rows Rows
Common common.InsertCtx
reqBuf []byte
tailBuf []byte
err error
}
func (ctx *pushCtx) Error() error {
if ctx.err == io.EOF {
return nil
}
return ctx.err
}
func (ctx *pushCtx) reset() {
ctx.Rows.Reset()
ctx.Common.Reset(0)
ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0]
ctx.err = nil
}
var (
opentsdbReadCalls = metrics.NewCounter(`vm_read_calls_total{name="opentsdb"}`)
opentsdbReadErrors = metrics.NewCounter(`vm_read_errors_total{name="opentsdb"}`)
)
func getPushCtx() *pushCtx {
select {
case ctx := <-pushCtxPoolCh:
return ctx
default:
if v := pushCtxPool.Get(); v != nil {
return v.(*pushCtx)
}
return &pushCtx{}
}
}
func putPushCtx(ctx *pushCtx) {
ctx.reset()
select {
case pushCtxPoolCh <- ctx:
default:
pushCtxPool.Put(ctx)
}
}
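// pushCtxPoolCh keeps up to GOMAXPROCS contexts alive across GC cycles,
// while pushCtxPool absorbs the overflow (sync.Pool contents may be dropped on GC).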
var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, runtime.GOMAXPROCS(-1))


@@ -0,0 +1,138 @@
package opentsdb
import (
"net"
"runtime"
"strings"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/metrics"
)
var (
writeRequestsTCP = metrics.NewCounter(`vm_opentsdb_requests_total{name="write", net="tcp"}`)
writeErrorsTCP = metrics.NewCounter(`vm_opentsdb_request_errors_total{name="write", net="tcp"}`)
writeRequestsUDP = metrics.NewCounter(`vm_opentsdb_requests_total{name="write", net="udp"}`)
writeErrorsUDP = metrics.NewCounter(`vm_opentsdb_request_errors_total{name="write", net="udp"}`)
)
// Serve starts OpenTSDB collector on the given addr.
func Serve(addr string) {
logger.Infof("starting TCP OpenTSDB collector at %q", addr)
lnTCP, err := netutil.NewTCPListener("opentsdb", addr)
if err != nil {
logger.Fatalf("cannot start TCP OpenTSDB collector at %q: %s", addr, err)
}
listenerTCP = lnTCP
logger.Infof("starting UDP OpenTSDB collector at %q", addr)
lnUDP, err := net.ListenPacket("udp4", addr)
if err != nil {
logger.Fatalf("cannot start UDP OpenTSDB collector at %q: %s", addr, err)
}
listenerUDP = lnUDP
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
serveTCP(listenerTCP)
logger.Infof("stopped TCP OpenTSDB collector at %q", addr)
}()
wg.Add(1)
go func() {
defer wg.Done()
serveUDP(listenerUDP)
logger.Infof("stopped UDP OpenTSDB collector at %q", addr)
}()
wg.Wait()
}
func serveTCP(ln net.Listener) {
for {
c, err := ln.Accept()
if err != nil {
if ne, ok := err.(net.Error); ok {
if ne.Temporary() {
time.Sleep(time.Second)
continue
}
if strings.Contains(err.Error(), "use of closed network connection") {
break
}
logger.Fatalf("unrecoverable error when accepting TCP OpenTSDB connections: %s", err)
}
logger.Fatalf("unexpected error when accepting TCP OpenTSDB connections: %s", err)
}
go func() {
writeRequestsTCP.Inc()
if err := insertHandler(c); err != nil {
writeErrorsTCP.Inc()
logger.Errorf("error in TCP OpenTSDB conn %q<->%q: %s", c.LocalAddr(), c.RemoteAddr(), err)
}
_ = c.Close()
}()
}
}
func serveUDP(ln net.PacketConn) {
gomaxprocs := runtime.GOMAXPROCS(-1)
var wg sync.WaitGroup
for i := 0; i < gomaxprocs; i++ {
wg.Add(1)
go func() {
defer wg.Done()
var bb bytesutil.ByteBuffer
bb.B = bytesutil.Resize(bb.B, 64*1024)
for {
bb.Reset()
bb.B = bb.B[:cap(bb.B)]
n, addr, err := ln.ReadFrom(bb.B)
if err != nil {
writeErrorsUDP.Inc()
if ne, ok := err.(net.Error); ok {
if ne.Temporary() {
time.Sleep(time.Second)
continue
}
if strings.Contains(err.Error(), "use of closed network connection") {
break
}
}
logger.Errorf("cannot read OpenTSDB UDP data: %s", err)
continue
}
bb.B = bb.B[:n]
writeRequestsUDP.Inc()
if err := insertHandler(bb.NewReader()); err != nil {
writeErrorsUDP.Inc()
logger.Errorf("error in UDP OpenTSDB conn %q<->%q: %s", ln.LocalAddr(), addr, err)
continue
}
}
}()
}
wg.Wait()
}
var (
listenerTCP net.Listener
listenerUDP net.PacketConn
)
// Stop stops the server.
func Stop() {
logger.Infof("stopping TCP OpenTSDB server at %q...", listenerTCP.Addr())
if err := listenerTCP.Close(); err != nil {
logger.Errorf("cannot close TCP OpenTSDB server: %s", err)
}
logger.Infof("stopping UDP OpenTSDB server at %q...", listenerUDP.LocalAddr())
if err := listenerUDP.Close(); err != nil {
logger.Errorf("cannot close UDP OpenTSDB server: %s", err)
}
}


@@ -0,0 +1,198 @@
package opentsdbhttp
import (
"fmt"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson"
"github.com/valyala/fastjson/fastfloat"
)
// Rows contains parsed OpenTSDB rows.
type Rows struct {
Rows []Row
tagsPool []Tag
}
// Reset resets rs.
func (rs *Rows) Reset() {
// Release references to objects, so they can be GC'ed.
for i := range rs.Rows {
rs.Rows[i].reset()
}
rs.Rows = rs.Rows[:0]
for i := range rs.tagsPool {
rs.tagsPool[i].reset()
}
rs.tagsPool = rs.tagsPool[:0]
}
// Unmarshal unmarshals OpenTSDB rows from av.
//
// See http://opentsdb.net/docs/build/html/api_http/put.html
//
// av must be unchanged while rs is in use.
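//
// Example of a single-object body (an array of such objects is accepted too; hypothetical values):
//
//   {"metric": "sys.cpu.user", "timestamp": 1356998400, "value": 42.5, "tags": {"host": "webserver01"}}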
func (rs *Rows) Unmarshal(av *fastjson.Value) {
rs.Rows, rs.tagsPool = unmarshalRows(rs.Rows[:0], av, rs.tagsPool[:0])
}
// Row is a single OpenTSDB row.
type Row struct {
Metric string
Tags []Tag
Value float64
Timestamp int64
}
func (r *Row) reset() {
r.Metric = ""
r.Tags = nil
r.Value = 0
r.Timestamp = 0
}
func (r *Row) unmarshal(o *fastjson.Value, tagsPool []Tag) ([]Tag, error) {
r.reset()
m := o.GetStringBytes("metric")
if len(m) == 0 {
return tagsPool, fmt.Errorf("missing `metric` in %s", o)
}
r.Metric = bytesutil.ToUnsafeString(m)
rawTs := o.Get("timestamp")
if rawTs != nil {
ts, err := getFloat64(rawTs)
if err != nil {
return tagsPool, fmt.Errorf("invalid `timestamp` in %s: %s", o, err)
}
r.Timestamp = int64(ts)
} else {
// Allow missing timestamp. It is automatically populated
// with the current time in this case.
r.Timestamp = 0
}
rawV := o.Get("value")
if rawV == nil {
return tagsPool, fmt.Errorf("missing `value` in %s", o)
}
v, err := getFloat64(rawV)
if err != nil {
return tagsPool, fmt.Errorf("invalid `value` in %s: %s", o, err)
}
r.Value = v
vt := o.Get("tags")
if vt == nil {
// Allow empty tags.
return tagsPool, nil
}
rawTags, err := vt.Object()
if err != nil {
return tagsPool, fmt.Errorf("invalid `tags` in %s: %s", o, err)
}
tagsStart := len(tagsPool)
tagsPool, err = unmarshalTags(tagsPool, rawTags)
if err != nil {
return tagsPool, fmt.Errorf("cannot parse tags %s: %s", rawTags, err)
}
tags := tagsPool[tagsStart:]
r.Tags = tags[:len(tags):len(tags)]
return tagsPool, nil
}
func getFloat64(v *fastjson.Value) (float64, error) {
switch v.Type() {
case fastjson.TypeNumber:
return v.Float64()
case fastjson.TypeString:
vStr, _ := v.StringBytes()
vFloat := fastfloat.ParseBestEffort(bytesutil.ToUnsafeString(vStr))
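// ParseBestEffort returns 0 on malformed input, so a zero result is trusted
// only if the input literally encodes zero.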
if vFloat == 0 && string(vStr) != "0" && string(vStr) != "0.0" {
return 0, fmt.Errorf("invalid float64 value: %q", vStr)
}
return vFloat, nil
default:
return 0, fmt.Errorf("value doesn't contain float64; it contains %s", v.Type())
}
}
func unmarshalRows(dst []Row, av *fastjson.Value, tagsPool []Tag) ([]Row, []Tag) {
switch av.Type() {
case fastjson.TypeObject:
return unmarshalRow(dst, av, tagsPool)
case fastjson.TypeArray:
a, _ := av.Array()
for _, o := range a {
dst, tagsPool = unmarshalRow(dst, o, tagsPool)
}
return dst, tagsPool
default:
logger.Errorf("OpenTSDB JSON must be either object or array; got %s; body=%s", av.Type(), av)
invalidLines.Inc()
return dst, tagsPool
}
}
func unmarshalRow(dst []Row, o *fastjson.Value, tagsPool []Tag) ([]Row, []Tag) {
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, Row{})
}
r := &dst[len(dst)-1]
var err error
tagsPool, err = r.unmarshal(o, tagsPool)
if err != nil {
dst = dst[:len(dst)-1]
logger.Errorf("cannot unmarshal OpenTSDB object %s: %s", o, err)
invalidLines.Inc()
}
return dst, tagsPool
}
var invalidLines = metrics.NewCounter(`vm_rows_invalid_total{type="opentsdb-http"}`)
func unmarshalTags(dst []Tag, o *fastjson.Object) ([]Tag, error) {
var err error
o.Visit(func(k []byte, v *fastjson.Value) {
if v.Type() != fastjson.TypeString {
err = fmt.Errorf("tag value must be string; got %s; value=%s", v.Type(), v)
return
}
if len(k) == 0 {
// Skip empty tags
return
}
vStr, _ := v.StringBytes()
if len(vStr) == 0 {
// Skip empty tags
return
}
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, Tag{})
}
tag := &dst[len(dst)-1]
tag.Key = bytesutil.ToUnsafeString(k)
tag.Value = bytesutil.ToUnsafeString(vStr)
})
return dst, err
}
// Tag is an OpenTSDB tag.
type Tag struct {
Key string
Value string
}
func (t *Tag) reset() {
t.Key = ""
t.Value = ""
}


@@ -0,0 +1,246 @@
package opentsdbhttp
import (
"reflect"
"testing"
)
func TestRowsUnmarshalFailure(t *testing.T) {
f := func(s string) {
t.Helper()
var rows Rows
p := parserPool.Get()
defer parserPool.Put(p)
v, err := p.Parse(s)
if err != nil {
// Expected JSON parser error
return
}
// Verify OpenTSDB body parsing error
rows.Unmarshal(v)
if len(rows.Rows) != 0 {
t.Fatalf("unexpected number of rows parsed; got %d; want 0", len(rows.Rows))
}
// Try again
rows.Unmarshal(v)
if len(rows.Rows) != 0 {
t.Fatalf("unexpected number of rows parsed; got %d; want 0", len(rows.Rows))
}
}
// invalid json
f("{g")
// Invalid json type
f(`1`)
f(`"foo"`)
f(`[1,2]`)
f(`null`)
// Incomplete object
f(`{}`)
f(`{"metric": "aaa"}`)
f(`{"metric": "aaa", "timestamp": 1122}`)
f(`{"metric": "aaa", "timestamp": "tststs"}`)
f(`{"timestamp": 1122, "value": 33}`)
f(`{"value": 33}`)
f(`{"value": 33, "tags": {"fooo":"bar"}}`)
// Invalid value
f(`{"metric": "aaa", "timestamp": 1122, "value": "0.0.0"}`)
// Invalid metric type
f(`{"metric": "", "timestamp": 1122, "value": 0.45, "tags": {"foo": "bar"}}`)
f(`{"metric": ["aaa"], "timestamp": 1122, "value": 0.45, "tags": {"foo": "bar"}}`)
f(`{"metric": {"aaa":1}, "timestamp": 1122, "value": 0.45, "tags": {"foo": "bar"}}`)
f(`{"metric": 1, "timestamp": 1122, "value": 0.45, "tags": {"foo": "bar"}}`)
// Invalid timestamp type
f(`{"metric": "aaa", "timestamp": "foobar", "value": 0.45, "tags": {"foo": "bar"}}`)
f(`{"metric": "aaa", "timestamp": [1,2], "value": 0.45, "tags": {"foo": "bar"}}`)
f(`{"metric": "aaa", "timestamp": {"a":1}, "value": 0.45, "tags": {"foo": "bar"}}`)
// Invalid value type
f(`{"metric": "aaa", "timestamp": 1122, "value": [0,1], "tags": {"foo":"bar"}}`)
f(`{"metric": "aaa", "timestamp": 1122, "value": {"a":1}, "tags": {"foo":"bar"}}`)
f(`{"metric": "aaa", "timestamp": 1122, "value": "foobar", "tags": {"foo":"bar"}}`)
// Invalid tags type
f(`{"metric": "aaa", "timestamp": 1122, "value": 0.45, "tags": 1}`)
f(`{"metric": "aaa", "timestamp": 1122, "value": 0.45, "tags": [1,2]}`)
f(`{"metric": "aaa", "timestamp": 1122, "value": 0.45, "tags": "foo"}`)
// Invalid tag value type
f(`{"metric": "aaa", "timestamp": 1122, "value": 0.45, "tags": {"foo": ["bar"]}}`)
f(`{"metric": "aaa", "timestamp": 1122, "value": 0.45, "tags": {"foo": {"bar":"baz"}}}`)
f(`{"metric": "aaa", "timestamp": 1122, "value": 0.45, "tags": {"foo": 1}}`)
// Invalid multiline
f(`[{"metric": "aaa", "timestamp": 1122, "value": "trt", "tags":{"foo":"bar"}}, {"metric": "aaa", "timestamp": [1122], "value": 111}]`)
}
func TestRowsUnmarshalSuccess(t *testing.T) {
f := func(s string, rowsExpected *Rows) {
t.Helper()
var rows Rows
p := parserPool.Get()
defer parserPool.Put(p)
v, err := p.Parse(s)
if err != nil {
t.Fatalf("cannot parse json %s: %s", s, err)
}
rows.Unmarshal(v)
if !reflect.DeepEqual(rows.Rows, rowsExpected.Rows) {
t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows.Rows, rowsExpected.Rows)
}
// Try unmarshaling again
rows.Unmarshal(v)
if !reflect.DeepEqual(rows.Rows, rowsExpected.Rows) {
t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows.Rows, rowsExpected.Rows)
}
rows.Reset()
if len(rows.Rows) != 0 {
t.Fatalf("non-empty rows after reset: %+v", rows.Rows)
}
}
// Normal line
f(`{"metric": "foobar", "timestamp": 789, "value": -123.456, "tags": {"a":"b"}}`, &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -123.456,
Timestamp: 789,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
}},
})
// Timestamp as string
f(`{"metric": "foobar", "timestamp": "1789", "value": -123.456, "tags": {"a":"b"}}`, &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -123.456,
Timestamp: 1789,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
}},
})
// Timestamp as float64 (it is truncated to integer)
f(`{"metric": "foobar", "timestamp": 17.89, "value": -123.456, "tags": {"a":"b"}}`, &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -123.456,
Timestamp: 17,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
}},
})
// Empty tags
f(`{"metric": "foobar", "timestamp": 789, "value": -123.456, "tags": {}}`, &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -123.456,
Timestamp: 789,
Tags: nil,
}},
})
// Missing tags
f(`{"metric": "foobar", "timestamp": 789, "value": -123.456}`, &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -123.456,
Timestamp: 789,
Tags: nil,
}},
})
// Empty tag value
f(`{"metric": "foobar", "timestamp": 123, "value": -123.456, "tags": {"a":"", "b":"c", "": "d"}}`, &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -123.456,
Timestamp: 123,
Tags: []Tag{
{
Key: "b",
Value: "c",
},
},
}},
})
// Value as string
f(`{"metric": "foobar", "timestamp": 789, "value": "-12.456", "tags": {"a":"b"}}`, &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -12.456,
Timestamp: 789,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
}},
})
// Missing timestamp
f(`{"metric": "foobar", "value": "-12.456", "tags": {"a":"b"}}`, &Rows{
Rows: []Row{{
Metric: "foobar",
Value: -12.456,
Timestamp: 0,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
}},
})
// Multiple tags
f(`{"metric": "foo", "value": 1, "timestamp": 2, "tags": {"bar":"baz", "x": "y"}}`, &Rows{
Rows: []Row{{
Metric: "foo",
Tags: []Tag{
{
Key: "bar",
Value: "baz",
},
{
Key: "x",
Value: "y",
},
},
Value: 1,
Timestamp: 2,
}},
})
// Multi lines
f(`[{"metric": "foo", "value": "0.3", "timestamp": 2, "tags": {"a":"b"}},
{"metric": "bar.baz", "value": 0.34, "timestamp": 43, "tags": {"a":"b"}}]`, &Rows{
Rows: []Row{
{
Metric: "foo",
Value: 0.3,
Timestamp: 2,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
},
{
Metric: "bar.baz",
Value: 0.34,
Timestamp: 43,
Tags: []Tag{{
Key: "a",
Value: "b",
}},
},
},
})
}


@@ -0,0 +1,33 @@
package opentsdbhttp
import (
"fmt"
"testing"
"github.com/valyala/fastjson"
)
func BenchmarkRowsUnmarshal(b *testing.B) {
s := `[{"metric": "cpu.usage_user", "timestamp": 1234556768, "value": 1.23, "tags": {"a":"b", "x": "y"}},
{"metric": "cpu.usage_system", "timestamp": 1234556768, "value": 23.344, "tags": {"a":"b"}},
{"metric": "cpu.usage_iowait", "timestamp": 1234556769, "value":3.3443, "tags": {"a":"b"}},
{"metric": "cpu.usage_irq", "timestamp": 1234556768, "value": 0.34432, "tags": {"a":"b"}}
]
`
b.SetBytes(int64(len(s)))
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
var rows Rows
var p fastjson.Parser
for pb.Next() {
v, err := p.Parse(s)
if err != nil {
panic(fmt.Errorf("cannot parse %q: %s", s, err))
}
rows.Unmarshal(v)
if len(rows.Rows) != 4 {
panic(fmt.Errorf("unexpected number of rows unmarshaled; got %d; want 4", len(rows.Rows)))
}
}
})
}


@@ -0,0 +1,150 @@
package opentsdbhttp
import (
"fmt"
"io"
"net/http"
"runtime"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/concurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson"
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="opentsdb-http"}`)
rowsPerInsert = metrics.NewSummary(`vm_rows_per_insert{type="opentsdb-http"}`)
opentsdbReadCalls = metrics.NewCounter(`vm_read_calls_total{name="opentsdb-http"}`)
opentsdbReadErrors = metrics.NewCounter(`vm_read_errors_total{name="opentsdb-http"}`)
opentsdbUnmarshalErrors = metrics.NewCounter(`vm_unmarshal_errors_total{name="opentsdb-http"}`)
)
// insertHandler processes HTTP OpenTSDB put requests.
// See http://opentsdb.net/docs/build/html/api_http/put.html
func insertHandler(req *http.Request, maxSize int64) error {
return concurrencylimiter.Do(func() error {
return insertHandlerInternal(req, maxSize)
})
}
func insertHandlerInternal(req *http.Request, maxSize int64) error {
opentsdbReadCalls.Inc()
r := req.Body
if req.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(r)
if err != nil {
opentsdbReadErrors.Inc()
return fmt.Errorf("cannot read gzipped http protocol data: %s", err)
}
defer common.PutGzipReader(zr)
r = zr
}
ctx := getPushCtx()
defer putPushCtx(ctx)
// Read the request into ctx.reqBuf
lr := io.LimitReader(r, maxSize+1)
reqLen, err := ctx.reqBuf.ReadFrom(lr)
if err != nil {
opentsdbReadErrors.Inc()
return fmt.Errorf("cannot read HTTP OpenTSDB request: %s", err)
}
if reqLen > maxSize {
opentsdbReadErrors.Inc()
return fmt.Errorf("too big HTTP OpenTSDB request; mustn't exceed %d bytes", maxSize)
}
// Unmarshal the request to ctx.Rows
p := parserPool.Get()
defer parserPool.Put(p)
v, err := p.ParseBytes(ctx.reqBuf.B)
if err != nil {
opentsdbUnmarshalErrors.Inc()
return fmt.Errorf("cannot parse HTTP OpenTSDB json: %s", err)
}
ctx.Rows.Unmarshal(v)
// Fill in missing timestamps
currentTimestamp := time.Now().Unix()
rows := ctx.Rows.Rows
for i := range rows {
r := &rows[i]
if r.Timestamp == 0 {
r.Timestamp = currentTimestamp
}
}
// Convert timestamps in seconds to milliseconds if needed.
// See http://opentsdb.net/docs/javadoc/net/opentsdb/core/Const.html#SECOND_MASK
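// Timestamps that fit in 32 bits (i.e. r.Timestamp&secondMask == 0) are treated as seconds.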
for i := range rows {
r := &rows[i]
if r.Timestamp&secondMask == 0 {
r.Timestamp *= 1e3
}
}
// Insert ctx.Rows into the db.
ic := &ctx.Common
ic.Reset(len(rows))
for i := range rows {
r := &rows[i]
ic.Labels = ic.Labels[:0]
ic.AddLabel("", r.Metric)
for j := range r.Tags {
tag := &r.Tags[j]
ic.AddLabel(tag.Key, tag.Value)
}
ic.WriteDataPoint(nil, ic.Labels, r.Timestamp, r.Value)
}
rowsInserted.Add(len(rows))
rowsPerInsert.Update(float64(len(rows)))
return ic.FlushBufs()
}
const secondMask int64 = 0x7FFFFFFF00000000
var parserPool fastjson.ParserPool
type pushCtx struct {
Rows Rows
Common common.InsertCtx
reqBuf bytesutil.ByteBuffer
}
func (ctx *pushCtx) reset() {
ctx.Rows.Reset()
ctx.Common.Reset(0)
ctx.reqBuf.Reset()
}
func getPushCtx() *pushCtx {
select {
case ctx := <-pushCtxPoolCh:
return ctx
default:
if v := pushCtxPool.Get(); v != nil {
return v.(*pushCtx)
}
return &pushCtx{}
}
}
func putPushCtx(ctx *pushCtx) {
ctx.reset()
select {
case pushCtxPoolCh <- ctx:
default:
pushCtxPool.Put(ctx)
}
}
var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, runtime.GOMAXPROCS(-1))


@@ -0,0 +1,70 @@
package opentsdbhttp
import (
"context"
"net/http"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/metrics"
)
var (
writeRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/put", protocol="opentsdb-http"}`)
writeErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/put", protocol="opentsdb-http"}`)
)
var (
httpServer *http.Server
httpAddr string
maxRequestSize int64
)
// Serve starts HTTP OpenTSDB server on the given addr.
func Serve(addr string, maxReqSize int64) {
logger.Infof("starting HTTP OpenTSDB server at %q", addr)
httpAddr = addr
maxRequestSize = maxReqSize
httpServer = &http.Server{
Addr: addr,
Handler: http.HandlerFunc(requestHandler),
ReadTimeout: 30 * time.Second,
WriteTimeout: 10 * time.Second,
}
go func() {
err := httpServer.ListenAndServe()
if err == http.ErrServerClosed {
return
}
if err != nil {
logger.Fatalf("error serving HTTP OpenTSDB: %s", err)
}
}()
}
// requestHandler handles HTTP OpenTSDB insert request.
func requestHandler(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/put":
writeRequests.Inc()
if err := insertHandler(r, maxRequestSize); err != nil {
writeErrors.Inc()
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return
}
w.WriteHeader(http.StatusNoContent)
default:
httpserver.Errorf(w, "unexpected path requested on HTTP OpenTSDB server: %q", r.URL.Path)
}
}
// Stop stops HTTP OpenTSDB server.
func Stop() {
logger.Infof("stopping HTTP OpenTSDB server at %q...", httpAddr)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := httpServer.Shutdown(ctx); err != nil {
logger.Fatalf("cannot close HTTP OpenTSDB server: %s", err)
}
}


@@ -0,0 +1,112 @@
package prometheus
import (
"fmt"
"net/http"
"runtime"
"sync"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/concurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="prometheus"}`)
rowsPerInsert = metrics.NewSummary(`vm_rows_per_insert{type="prometheus"}`)
)
// InsertHandler processes remote write for prometheus.
func InsertHandler(r *http.Request, maxSize int64) error {
return concurrencylimiter.Do(func() error {
return insertHandlerInternal(r, maxSize)
})
}
func insertHandlerInternal(r *http.Request, maxSize int64) error {
ctx := getPushCtx()
defer putPushCtx(ctx)
if err := ctx.Read(r, maxSize); err != nil {
return err
}
timeseries := ctx.req.Timeseries
rowsLen := 0
for i := range timeseries {
rowsLen += len(timeseries[i].Samples)
}
ic := &ctx.Common
ic.Reset(rowsLen)
rowsTotal := 0
for i := range timeseries {
ts := &timeseries[i]
var metricNameRaw []byte
for i := range ts.Samples {
r := &ts.Samples[i]
metricNameRaw = ic.WriteDataPointExt(metricNameRaw, ts.Labels, r.Timestamp, r.Value)
}
rowsTotal += len(ts.Samples)
}
rowsInserted.Add(rowsTotal)
rowsPerInsert.Update(float64(rowsTotal))
return ic.FlushBufs()
}
type pushCtx struct {
Common common.InsertCtx
req prompb.WriteRequest
reqBuf []byte
}
func (ctx *pushCtx) reset() {
ctx.Common.Reset(0)
ctx.req.Reset()
ctx.reqBuf = ctx.reqBuf[:0]
}
func (ctx *pushCtx) Read(r *http.Request, maxSize int64) error {
prometheusReadCalls.Inc()
var err error
ctx.reqBuf, err = prompb.ReadSnappy(ctx.reqBuf[:0], r.Body, maxSize)
if err != nil {
prometheusReadErrors.Inc()
return fmt.Errorf("cannot read prompb.WriteRequest: %s", err)
}
if err = ctx.req.Unmarshal(ctx.reqBuf); err != nil {
prometheusUnmarshalErrors.Inc()
return fmt.Errorf("cannot unmarshal prompb.WriteRequest with size %d bytes: %s", len(ctx.reqBuf), err)
}
return nil
}
var (
prometheusReadCalls = metrics.NewCounter(`vm_read_calls_total{name="prometheus"}`)
prometheusReadErrors = metrics.NewCounter(`vm_read_errors_total{name="prometheus"}`)
prometheusUnmarshalErrors = metrics.NewCounter(`vm_unmarshal_errors_total{name="prometheus"}`)
)
func getPushCtx() *pushCtx {
select {
case ctx := <-pushCtxPoolCh:
return ctx
default:
if v := pushCtxPool.Get(); v != nil {
return v.(*pushCtx)
}
return &pushCtx{}
}
}
func putPushCtx(ctx *pushCtx) {
ctx.reset()
select {
case pushCtxPoolCh <- ctx:
default:
pushCtxPool.Put(ctx)
}
}
var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, runtime.GOMAXPROCS(-1))

app/vmrestore/Makefile

@@ -0,0 +1,37 @@
# All these commands must run from repository root.
vmrestore:
APP_NAME=vmrestore $(MAKE) app-local
vmrestore-prod:
APP_NAME=vmrestore $(MAKE) app-via-docker
package-vmrestore:
APP_NAME=vmrestore $(MAKE) package-via-docker
publish-vmrestore:
APP_NAME=vmrestore $(MAKE) publish-via-docker
vmrestore-arm:
CGO_ENABLED=0 GOOS=linux GOARCH=arm GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmrestore-arm ./app/vmrestore
vmrestore-arm-prod:
APP_NAME=vmrestore APP_SUFFIX='-arm' DOCKER_OPTS='--env CGO_ENABLED=0 --env GOARCH=arm' $(MAKE) app-via-docker
vmrestore-arm64:
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmrestore-arm64 ./app/vmrestore
vmrestore-arm64-prod:
APP_NAME=vmrestore APP_SUFFIX='-arm64' DOCKER_OPTS='--env CGO_ENABLED=0 --env GOARCH=arm64' $(MAKE) app-via-docker
vmrestore-386:
CGO_ENABLED=0 GOOS=linux GOARCH=386 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmrestore-386 ./app/vmrestore
vmrestore-386-prod:
APP_NAME=vmrestore APP_SUFFIX='-386' DOCKER_OPTS='--env CGO_ENABLED=0 --env GOARCH=386' $(MAKE) app-via-docker
vmrestore-pure:
APP_NAME=vmrestore $(MAKE) app-local-pure
vmrestore-pure-prod:
APP_NAME=vmrestore APP_SUFFIX='-pure' DOCKER_OPTS='--env CGO_ENABLED=0' $(MAKE) app-via-docker

app/vmrestore/README.md

@@ -0,0 +1,86 @@
## vmrestore
`vmrestore` restores data from backups created by [vmbackup](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmbackup/README.md).
VictoriaMetrics `v1.29.0` and newer versions must be used for working with the restored data.
The restore process can be interrupted at any time. It automatically resumes from the interruption point
when `vmrestore` is restarted with the same args.
### Usage
VictoriaMetrics must be stopped during the restore process.
```
vmrestore -src=gcs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore>
```
* `<bucket>` is [GCS bucket](https://cloud.google.com/storage/docs/creating-buckets) name.
* `<path/to/backup>` is the path to backup made with [vmbackup](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmbackup/README.md) on GCS bucket.
* `<local/path/to/restore>` is the path to the folder where data will be restored. This folder must be passed
to VictoriaMetrics in the `-storageDataPath` command-line flag after the restore process is complete.
The original `-storageDataPath` directory may contain old files. They will be substituted by the files from the backup.
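For example, restoring from a hypothetical GCS bucket named `my-bucket` into the default data directory:

```
vmrestore -src=gcs://my-bucket/backups/daily -storageDataPath=victoria-metrics-data
```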
### Troubleshooting
* If `vmrestore` eats all the network bandwidth, then set `-maxBytesPerSecond` to the desired value.
* If `vmrestore` has been interrupted due to a temporary error, then just restart it with the same args. It will resume the restore process.
### Advanced usage
Run `vmrestore -help` in order to see all the available options:
```
-concurrency int
The number of concurrent workers. Higher concurrency may reduce restore duration (default 10)
-configFilePath string
Path to file with S3 configs. Configs are loaded from default location if not set.
See https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
-configProfile string
Profile name for S3 configs (default "default")
-credsFilePath string
Path to file with GCS or S3 credentials. Credentials are loaded from default locations if not set.
See https://cloud.google.com/iam/docs/creating-managing-service-account-keys and https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
-customS3Endpoint string
Custom S3 endpoint for use with S3-compatible storages (e.g. MinIO). S3 is used if not set
-loggerLevel string
Minimum level of errors to log. Possible values: INFO, ERROR, FATAL, PANIC (default "INFO")
-maxBytesPerSecond int
The maximum download speed. There is no limit if it is set to 0
-memory.allowedPercent float
Allowed percent of system memory VictoriaMetrics caches may occupy (default 60)
-src string
Source path with backup on the remote storage. Example: gcs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir
-storageDataPath string
Destination path where backup must be restored. VictoriaMetrics must be stopped when restoring from backup. -storageDataPath dir can be non-empty. In this case only missing data is downloaded from backup (default "victoria-metrics-data")
-version
Show VictoriaMetrics version
```
### How to build from sources
It is recommended to use [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) - see `vmutils-*` archives there.
#### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.12.
2. Run `make vmrestore` from the root folder of the repository.
It builds `vmrestore` binary and puts it into the `bin` folder.
#### Production build
1. [Install docker](https://docs.docker.com/install/).
2. Run `make vmrestore-prod` from the root folder of the repository.
It builds `vmrestore-prod` binary and puts it into the `bin` folder.
#### Building docker images
Run `make package-vmrestore`. It builds `victoriametrics/vmrestore:<PKG_TAG>` docker image locally.
`<PKG_TAG>` is an auto-generated image tag, which depends on the source code in the repository.
The `<PKG_TAG>` may be set manually via `PKG_TAG=foobar make package-vmrestore`.


@@ -0,0 +1,5 @@
FROM scratch
COPY --from=local/certs:1.0.3 /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY bin/vmrestore-prod .
EXPOSE 8428
ENTRYPOINT ["/vmrestore-prod"]

app/vmrestore/main.go

@@ -0,0 +1,78 @@
package main
import (
"flag"
"fmt"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/backup/actions"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/backup/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/backup/fslocal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)
var (
src = flag.String("src", "", "Source path with backup on the remote storage. "+
"Example: gcs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir")
storageDataPath = flag.String("storageDataPath", "victoria-metrics-data", "Destination path where backup must be restored. "+
"VictoriaMetrics must be stopped when restoring from backup. -storageDataPath dir can be non-empty. In this case only missing data is downloaded from backup")
concurrency = flag.Int("concurrency", 10, "The number of concurrent workers. Higher concurrency may reduce restore duration")
maxBytesPerSecond = flag.Int("maxBytesPerSecond", 0, "The maximum download speed. There is no limit if it is set to 0")
)
func main() {
flag.Usage = usage
flag.Parse()
buildinfo.Init()
srcFS, err := newSrcFS()
if err != nil {
logger.Fatalf("%s", err)
}
dstFS, err := newDstFS()
if err != nil {
logger.Fatalf("%s", err)
}
a := &actions.Restore{
Concurrency: *concurrency,
Src: srcFS,
Dst: dstFS,
}
if err := a.Run(); err != nil {
logger.Fatalf("cannot restore from backup: %s", err)
}
}
func usage() {
const s = `
vmrestore restores VictoriaMetrics data from backups made by vmbackup.
See the docs at https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmrestore/README.md .
`
f := flag.CommandLine.Output()
fmt.Fprintf(f, "%s\n", s)
flag.PrintDefaults()
}
func newDstFS() (*fslocal.FS, error) {
if len(*storageDataPath) == 0 {
return nil, fmt.Errorf("`-storageDataPath` cannot be empty")
}
fs := &fslocal.FS{
Dir: *storageDataPath,
MaxBytesPerSecond: *maxBytesPerSecond,
}
if err := fs.Init(); err != nil {
return nil, fmt.Errorf("cannot initialize local fs: %s", err)
}
return fs, nil
}
func newSrcFS() (common.RemoteFS, error) {
fs, err := actions.NewRemoteFS(*src)
if err != nil {
return nil, fmt.Errorf("cannot parse `-src`=%q: %s", *src, err)
}
return fs, nil
}

app/vmselect/README.md

@@ -0,0 +1,2 @@
`vmselect` performs the incoming queries and fetches the required data
from `vmstorage`.

app/vmselect/main.go

@@ -0,0 +1,246 @@
package vmselect
import (
"flag"
"fmt"
"net/http"
"runtime"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
"github.com/VictoriaMetrics/metrics"
)
var (
deleteAuthKey = flag.String("deleteAuthKey", "", "authKey for metrics' deletion via /api/v1/admin/tsdb/delete_series")
maxConcurrentRequests = flag.Int("search.maxConcurrentRequests", runtime.GOMAXPROCS(-1)*2, "The maximum number of concurrent search requests. It shouldn't exceed 2*vCPUs for better performance. See also -search.maxQueueDuration")
maxQueueDuration = flag.Duration("search.maxQueueDuration", 10*time.Second, "The maximum time the request waits for execution when -search.maxConcurrentRequests limit is reached")
)
// Init initializes vmselect
func Init() {
tmpDirPath := *vmstorage.DataPath + "/tmp"
fs.RemoveDirContents(tmpDirPath)
netstorage.InitTmpBlocksDir(tmpDirPath)
promql.InitRollupResultCache(*vmstorage.DataPath + "/cache/rollupResult")
concurrencyCh = make(chan struct{}, *maxConcurrentRequests)
}
// Stop stops vmselect
func Stop() {
promql.StopRollupResultCache()
}
var concurrencyCh chan struct{}
var (
concurrencyLimitReached = metrics.NewCounter(`vm_concurrent_select_limit_reached_total`)
concurrencyLimitTimeout = metrics.NewCounter(`vm_concurrent_select_limit_timeout_total`)
_ = metrics.NewGauge(`vm_concurrent_select_capacity`, func() float64 {
return float64(cap(concurrencyCh))
})
_ = metrics.NewGauge(`vm_concurrent_select_current`, func() float64 {
return float64(len(concurrencyCh))
})
)
// RequestHandler handles remote read API requests for Prometheus
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
// Limit the number of concurrent queries.
select {
case concurrencyCh <- struct{}{}:
defer func() { <-concurrencyCh }()
default:
// Sleep for a while until giving up. This should resolve short bursts in requests.
concurrencyLimitReached.Inc()
t := timerpool.Get(*maxQueueDuration)
select {
case concurrencyCh <- struct{}{}:
timerpool.Put(t)
defer func() { <-concurrencyCh }()
case <-t.C:
timerpool.Put(t)
concurrencyLimitTimeout.Inc()
err := &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("cannot handle more than %d concurrent requests", cap(concurrencyCh)),
StatusCode: http.StatusServiceUnavailable,
}
httpserver.Errorf(w, "%s", err)
return true
}
}
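// Normalize duplicate slashes in the path (e.g. "//api/v1/query" -> "/api/v1/query").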
path := strings.Replace(r.URL.Path, "//", "/", -1)
if strings.HasPrefix(path, "/api/v1/label/") {
s := r.URL.Path[len("/api/v1/label/"):]
if strings.HasSuffix(s, "/values") {
labelValuesRequests.Inc()
labelName := s[:len(s)-len("/values")]
httpserver.EnableCORS(w, r)
if err := prometheus.LabelValuesHandler(labelName, w, r); err != nil {
labelValuesErrors.Inc()
sendPrometheusError(w, r, err)
return true
}
return true
}
}
switch path {
case "/api/v1/query":
queryRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.QueryHandler(w, r); err != nil {
queryErrors.Inc()
sendPrometheusError(w, r, err)
return true
}
return true
case "/api/v1/query_range":
queryRangeRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.QueryRangeHandler(w, r); err != nil {
queryRangeErrors.Inc()
sendPrometheusError(w, r, err)
return true
}
return true
case "/api/v1/series":
seriesRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.SeriesHandler(w, r); err != nil {
seriesErrors.Inc()
sendPrometheusError(w, r, err)
return true
}
return true
case "/api/v1/series/count":
seriesCountRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.SeriesCountHandler(w, r); err != nil {
seriesCountErrors.Inc()
sendPrometheusError(w, r, err)
return true
}
return true
case "/api/v1/labels":
labelsRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.LabelsHandler(w, r); err != nil {
labelsErrors.Inc()
sendPrometheusError(w, r, err)
return true
}
return true
case "/api/v1/labels/count":
labelsCountRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.LabelsCountHandler(w, r); err != nil {
labelsCountErrors.Inc()
sendPrometheusError(w, r, err)
return true
}
return true
case "/api/v1/export":
exportRequests.Inc()
if err := prometheus.ExportHandler(w, r); err != nil {
exportErrors.Inc()
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return true
}
return true
case "/federate":
federateRequests.Inc()
if err := prometheus.FederateHandler(w, r); err != nil {
federateErrors.Inc()
httpserver.Errorf(w, "error int %q: %s", r.URL.Path, err)
return true
}
return true
case "/api/v1/rules":
// Return dumb placeholder
rulesRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, "%s", `{"status":"success","data":{"groups":[]}}`)
return true
case "/api/v1/alerts":
// Return dumb placeholder
alertsRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, "%s", `{"status":"success","data":{"alerts":[]}}`)
return true
case "/api/v1/admin/tsdb/delete_series":
deleteRequests.Inc()
authKey := r.FormValue("authKey")
if authKey != *deleteAuthKey {
httpserver.Errorf(w, "invalid authKey %q. It must match the value from -deleteAuthKey command line flag", authKey)
return true
}
if err := prometheus.DeleteHandler(r); err != nil {
deleteErrors.Inc()
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
default:
return false
}
}
func sendPrometheusError(w http.ResponseWriter, r *http.Request, err error) {
logger.Errorf("error in %q: %s", r.URL.Path, err)
w.Header().Set("Content-Type", "application/json")
statusCode := http.StatusUnprocessableEntity
if esc, ok := err.(*httpserver.ErrorWithStatusCode); ok {
statusCode = esc.StatusCode
}
w.WriteHeader(statusCode)
prometheus.WriteErrorResponse(w, statusCode, err)
}
var (
labelValuesRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/label/{}/values"}`)
labelValuesErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/label/{}/values"}`)
queryRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/query"}`)
queryErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/query"}`)
queryRangeRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/query_range"}`)
queryRangeErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/query_range"}`)
seriesRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/series"}`)
seriesErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/series"}`)
seriesCountRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/series/count"}`)
seriesCountErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/series/count"}`)
labelsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/labels"}`)
labelsErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/labels"}`)
labelsCountRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/labels/count"}`)
labelsCountErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/labels/count"}`)
deleteRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/admin/tsdb/delete_series"}`)
deleteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/admin/tsdb/delete_series"}`)
exportRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/export"}`)
exportErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/export"}`)
federateRequests = metrics.NewCounter(`vm_http_requests_total{path="/federate"}`)
federateErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/federate"}`)
rulesRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/rules"}`)
alertsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/alerts"}`)
)


@@ -0,0 +1,9 @@
package netstorage
import (
"os"
)
func mustFadviseSequentialRead(f *os.File) {
// Do nothing :)
}


@@ -0,0 +1,15 @@
package netstorage
import (
"os"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"golang.org/x/sys/unix"
)
func mustFadviseSequentialRead(f *os.File) {
fd := int(f.Fd())
if err := unix.Fadvise(int(fd), 0, 0, unix.FADV_SEQUENTIAL|unix.FADV_WILLNEED); err != nil {
logger.Panicf("FATAL: error returned from unix.Fadvise(SEQUENTIAL|WILLNEED): %s", err)
}
}


@@ -0,0 +1,15 @@
package netstorage
import (
"os"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"golang.org/x/sys/unix"
)
func mustFadviseSequentialRead(f *os.File) {
fd := int(f.Fd())
if err := unix.Fadvise(int(fd), 0, 0, unix.FADV_SEQUENTIAL|unix.FADV_WILLNEED); err != nil {
logger.Panicf("FATAL: error returned from unix.Fadvise(SEQUENTIAL|WILLNEED): %s", err)
}
}


@@ -0,0 +1,590 @@
package netstorage
import (
"container/heap"
"flag"
"fmt"
"runtime"
"sort"
"sync"
"sync/atomic"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
)
var (
maxTagKeysPerSearch = flag.Int("search.maxTagKeys", 100e3, "The maximum number of tag keys returned per search")
maxTagValuesPerSearch = flag.Int("search.maxTagValues", 100e3, "The maximum number of tag values returned per search")
maxMetricsPerSearch = flag.Int("search.maxUniqueTimeseries", 300e3, "The maximum number of unique time series each search can scan")
)
// Result is a single timeseries result.
//
// ProcessSearchQuery returns Result slice.
type Result struct {
// The name of the metric.
MetricName storage.MetricName
// Values are sorted by Timestamps.
Values []float64
Timestamps []int64
// Marshaled MetricName. Used only for results sorting
// in app/vmselect/promql
MetricNameMarshaled []byte
}
func (r *Result) reset() {
r.MetricName.Reset()
r.Values = r.Values[:0]
r.Timestamps = r.Timestamps[:0]
r.MetricNameMarshaled = r.MetricNameMarshaled[:0]
}
// Results holds results returned from ProcessSearchQuery.
type Results struct {
tr storage.TimeRange
fetchData bool
deadline Deadline
tbf *tmpBlocksFile
packedTimeseries []packedTimeseries
}
// Len returns the number of results in rss.
func (rss *Results) Len() int {
return len(rss.packedTimeseries)
}
// Cancel cancels rss work.
func (rss *Results) Cancel() {
putTmpBlocksFile(rss.tbf)
rss.tbf = nil
}
// RunParallel runs f in parallel for all the results from rss.
//
// f shouldn't hold references to rs after returning.
// workerID is the id of the worker goroutine that calls f.
//
// rss becomes unusable after the call to RunParallel.
func (rss *Results) RunParallel(f func(rs *Result, workerID uint)) error {
defer func() {
putTmpBlocksFile(rss.tbf)
rss.tbf = nil
}()
workersCount := 1 + len(rss.packedTimeseries)/32
if workersCount > gomaxprocs {
workersCount = gomaxprocs
}
if workersCount == 0 {
logger.Panicf("BUG: workersCount cannot be zero")
}
workCh := make(chan *packedTimeseries, workersCount)
doneCh := make(chan error)
// Start workers.
rowsProcessedTotal := uint64(0)
for i := 0; i < workersCount; i++ {
go func(workerID uint) {
rs := getResult()
defer putResult(rs)
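// Give each outer worker an equal share of the CPU budget;
// pts.Unpack may spawn up to maxWorkersCount goroutines per series.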
maxWorkersCount := gomaxprocs / workersCount
var err error
rowsProcessed := 0
for pts := range workCh {
if time.Until(rss.deadline.Deadline) < 0 {
err = fmt.Errorf("timeout exceeded during query execution: %s", rss.deadline.Timeout)
break
}
if err = pts.Unpack(rss.tbf, rs, rss.tr, rss.fetchData, maxWorkersCount); err != nil {
break
}
if len(rs.Timestamps) == 0 && rss.fetchData {
// Skip empty blocks.
continue
}
rowsProcessed += len(rs.Values)
f(rs, workerID)
}
atomic.AddUint64(&rowsProcessedTotal, uint64(rowsProcessed))
// Drain the remaining work
for range workCh {
}
doneCh <- err
}(uint(i))
}
// Feed workers with work.
for i := range rss.packedTimeseries {
workCh <- &rss.packedTimeseries[i]
}
seriesProcessedTotal := len(rss.packedTimeseries)
rss.packedTimeseries = rss.packedTimeseries[:0]
close(workCh)
// Wait until workers finish.
var errors []error
for i := 0; i < workersCount; i++ {
if err := <-doneCh; err != nil {
errors = append(errors, err)
}
}
perQueryRowsProcessed.Update(float64(rowsProcessedTotal))
perQuerySeriesProcessed.Update(float64(seriesProcessedTotal))
if len(errors) > 0 {
// Return just the first error, since the other errors
// are likely duplicates of it.
return errors[0]
}
return nil
}
var perQueryRowsProcessed = metrics.NewHistogram(`vm_per_query_rows_processed_count`)
var perQuerySeriesProcessed = metrics.NewHistogram(`vm_per_query_series_processed_count`)
var gomaxprocs = runtime.GOMAXPROCS(-1)
type packedTimeseries struct {
metricName string
addrs []tmpBlockAddr
}
// Unpack unpacks pts to dst.
func (pts *packedTimeseries) Unpack(tbf *tmpBlocksFile, dst *Result, tr storage.TimeRange, fetchData bool, maxWorkersCount int) error {
dst.reset()
if err := dst.MetricName.Unmarshal(bytesutil.ToUnsafeBytes(pts.metricName)); err != nil {
return fmt.Errorf("cannot unmarshal metricName %q: %s", pts.metricName, err)
}
workersCount := 1 + len(pts.addrs)/32
if workersCount > maxWorkersCount {
workersCount = maxWorkersCount
}
if workersCount == 0 {
logger.Panicf("BUG: workersCount cannot be zero")
}
sbs := make([]*sortBlock, 0, len(pts.addrs))
var sbsLock sync.Mutex
workCh := make(chan tmpBlockAddr, workersCount)
doneCh := make(chan error)
// Start workers
for i := 0; i < workersCount; i++ {
go func() {
var err error
for addr := range workCh {
sb := getSortBlock()
if err = sb.unpackFrom(tbf, addr, tr, fetchData); err != nil {
break
}
sbsLock.Lock()
sbs = append(sbs, sb)
sbsLock.Unlock()
}
// Drain the remaining work
for range workCh {
}
doneCh <- err
}()
}
// Feed workers with work
for _, addr := range pts.addrs {
workCh <- addr
}
pts.addrs = pts.addrs[:0]
close(workCh)
// Wait until workers finish
var errors []error
for i := 0; i < workersCount; i++ {
if err := <-doneCh; err != nil {
errors = append(errors, err)
}
}
if len(errors) > 0 {
// Return the first error only, since other errors are likely the same.
return errors[0]
}
// Merge blocks
mergeSortBlocks(dst, sbs)
return nil
}
func getSortBlock() *sortBlock {
v := sbPool.Get()
if v == nil {
return &sortBlock{}
}
return v.(*sortBlock)
}
func putSortBlock(sb *sortBlock) {
sb.reset()
sbPool.Put(sb)
}
var sbPool sync.Pool
var metricRowsSkipped = metrics.NewCounter(`vm_metric_rows_skipped_total{name="vmselect"}`)
func mergeSortBlocks(dst *Result, sbh sortBlocksHeap) {
// Skip empty sort blocks, since they cannot be passed to heap.Init.
src := sbh
sbh = sbh[:0]
for _, sb := range src {
if len(sb.Timestamps) == 0 {
putSortBlock(sb)
continue
}
sbh = append(sbh, sb)
}
if len(sbh) == 0 {
return
}
heap.Init(&sbh)
for {
top := sbh[0]
heap.Pop(&sbh)
if len(sbh) == 0 {
dst.Timestamps = append(dst.Timestamps, top.Timestamps[top.NextIdx:]...)
dst.Values = append(dst.Values, top.Values[top.NextIdx:]...)
putSortBlock(top)
return
}
sbNext := sbh[0]
tsNext := sbNext.Timestamps[sbNext.NextIdx]
idxNext := len(top.Timestamps)
if top.Timestamps[idxNext-1] > tsNext {
idxNext = top.NextIdx
for top.Timestamps[idxNext] <= tsNext {
idxNext++
}
}
dst.Timestamps = append(dst.Timestamps, top.Timestamps[top.NextIdx:idxNext]...)
dst.Values = append(dst.Values, top.Values[top.NextIdx:idxNext]...)
if idxNext < len(top.Timestamps) {
top.NextIdx = idxNext
heap.Push(&sbh, top)
} else {
// Return top to the pool.
putSortBlock(top)
}
}
}
type sortBlock struct {
// b is used as a temporary storage for unpacked rows before they
// go to Timestamps and Values.
b storage.Block
Timestamps []int64
Values []float64
NextIdx int
}
func (sb *sortBlock) reset() {
sb.b.Reset()
sb.Timestamps = sb.Timestamps[:0]
sb.Values = sb.Values[:0]
sb.NextIdx = 0
}
func (sb *sortBlock) unpackFrom(tbf *tmpBlocksFile, addr tmpBlockAddr, tr storage.TimeRange, fetchData bool) error {
tbf.MustReadBlockAt(&sb.b, addr)
if fetchData {
if err := sb.b.UnmarshalData(); err != nil {
return fmt.Errorf("cannot unmarshal block: %s", err)
}
}
timestamps := sb.b.Timestamps()
// Skip timestamps smaller than tr.MinTimestamp.
i := 0
for i < len(timestamps) && timestamps[i] < tr.MinTimestamp {
i++
}
// Skip timestamps bigger than tr.MaxTimestamp.
j := len(timestamps)
for j > i && timestamps[j-1] > tr.MaxTimestamp {
j--
}
skippedRows := sb.b.RowsCount() - (j - i)
metricRowsSkipped.Add(skippedRows)
// Copy the remaining values.
if i == j {
return nil
}
values := sb.b.Values()
sb.Timestamps = append(sb.Timestamps, timestamps[i:j]...)
sb.Values = decimal.AppendDecimalToFloat(sb.Values, values[i:j], sb.b.Scale())
return nil
}
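// sortBlocksHeap implements heap.Interface, ordering blocks by their next timestamp.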
type sortBlocksHeap []*sortBlock
func (sbh sortBlocksHeap) Len() int {
return len(sbh)
}
func (sbh sortBlocksHeap) Less(i, j int) bool {
a := sbh[i]
b := sbh[j]
return a.Timestamps[a.NextIdx] < b.Timestamps[b.NextIdx]
}
func (sbh sortBlocksHeap) Swap(i, j int) {
sbh[i], sbh[j] = sbh[j], sbh[i]
}
func (sbh *sortBlocksHeap) Push(x interface{}) {
*sbh = append(*sbh, x.(*sortBlock))
}
func (sbh *sortBlocksHeap) Pop() interface{} {
a := *sbh
v := a[len(a)-1]
*sbh = a[:len(a)-1]
return v
}
// DeleteSeries deletes time series matching the given tagFilterss.
func DeleteSeries(sq *storage.SearchQuery) (int, error) {
tfss, err := setupTfss(sq.TagFilterss)
if err != nil {
return 0, err
}
return vmstorage.DeleteMetrics(tfss)
}
// GetLabels returns labels until the given deadline.
func GetLabels(deadline Deadline) ([]string, error) {
labels, err := vmstorage.SearchTagKeys(*maxTagKeysPerSearch)
if err != nil {
return nil, fmt.Errorf("error during labels search: %s", err)
}
// Substitute "" with "__name__"
for i := range labels {
if labels[i] == "" {
labels[i] = "__name__"
}
}
// Sort labels like Prometheus does
sort.Strings(labels)
return labels, nil
}
// GetLabelValues returns label values for the given labelName
// until the given deadline.
func GetLabelValues(labelName string, deadline Deadline) ([]string, error) {
if labelName == "__name__" {
labelName = ""
}
// Search for tag values
labelValues, err := vmstorage.SearchTagValues([]byte(labelName), *maxTagValuesPerSearch)
if err != nil {
return nil, fmt.Errorf("error during label values search for labelName=%q: %s", labelName, err)
}
// Sort labelValues like Prometheus does
sort.Strings(labelValues)
return labelValues, nil
}
// GetLabelEntries returns all the label entries until the given deadline.
func GetLabelEntries(deadline Deadline) ([]storage.TagEntry, error) {
labelEntries, err := vmstorage.SearchTagEntries(*maxTagKeysPerSearch, *maxTagValuesPerSearch)
if err != nil {
return nil, fmt.Errorf("error during label entries request: %s", err)
}
// Substitute "" with "__name__"
for i := range labelEntries {
e := &labelEntries[i]
if e.Key == "" {
e.Key = "__name__"
}
}
// Sort labelEntries by the number of label values in each entry.
sort.Slice(labelEntries, func(i, j int) bool {
a, b := labelEntries[i].Values, labelEntries[j].Values
if len(a) < len(b) {
return true
}
if len(a) > len(b) {
return false
}
return labelEntries[i].Key < labelEntries[j].Key
})
return labelEntries, nil
}
// GetSeriesCount returns the number of unique series.
func GetSeriesCount(deadline Deadline) (uint64, error) {
n, err := vmstorage.GetSeriesCount()
if err != nil {
return 0, fmt.Errorf("error during series count request: %s", err)
}
return n, nil
}
func getStorageSearch() *storage.Search {
v := ssPool.Get()
if v == nil {
return &storage.Search{}
}
return v.(*storage.Search)
}
func putStorageSearch(sr *storage.Search) {
sr.MustClose()
ssPool.Put(sr)
}
var ssPool sync.Pool
// ProcessSearchQuery performs sq on storage nodes until the given deadline.
func ProcessSearchQuery(sq *storage.SearchQuery, fetchData bool, deadline Deadline) (*Results, error) {
// Setup search.
tfss, err := setupTfss(sq.TagFilterss)
if err != nil {
return nil, err
}
tr := storage.TimeRange{
MinTimestamp: sq.MinTimestamp,
MaxTimestamp: sq.MaxTimestamp,
}
vmstorage.WG.Add(1)
defer vmstorage.WG.Done()
sr := getStorageSearch()
defer putStorageSearch(sr)
sr.Init(vmstorage.Storage, tfss, tr, fetchData, *maxMetricsPerSearch)
tbf := getTmpBlocksFile()
m := make(map[string][]tmpBlockAddr)
blocksRead := 0
bb := tmpBufPool.Get()
defer tmpBufPool.Put(bb)
for sr.NextMetricBlock() {
blocksRead++
bb.B = storage.MarshalBlock(bb.B[:0], sr.MetricBlock.Block)
addr, err := tbf.WriteBlockData(bb.B)
if err != nil {
putTmpBlocksFile(tbf)
return nil, fmt.Errorf("cannot write data block #%d to temporary blocks file: %s", blocksRead, err)
}
if time.Until(deadline.Deadline) < 0 {
putTmpBlocksFile(tbf)
return nil, fmt.Errorf("timeout exceeded while fetching data block #%d from storage: %s", blocksRead, deadline.Timeout)
}
metricName := sr.MetricBlock.MetricName
m[string(metricName)] = append(m[string(metricName)], addr)
}
if err := sr.Error(); err != nil {
putTmpBlocksFile(tbf)
return nil, fmt.Errorf("search error after reading %d data blocks: %s", blocksRead, err)
}
if err := tbf.Finalize(); err != nil {
putTmpBlocksFile(tbf)
return nil, fmt.Errorf("cannot finalize temporary blocks file with %d blocks: %s", blocksRead, err)
}
var rss Results
rss.packedTimeseries = make([]packedTimeseries, len(m))
rss.tr = tr
rss.fetchData = fetchData
rss.deadline = deadline
rss.tbf = tbf
i := 0
for metricName, addrs := range m {
pts := &rss.packedTimeseries[i]
i++
pts.metricName = metricName
pts.addrs = addrs
}
// Sort rss.packedTimeseries by the first addr offset in order
// to reduce the number of disk seeks during unpacking in RunParallel.
// This way tmpBlocksFile is read almost sequentially.
sort.Slice(rss.packedTimeseries, func(i, j int) bool {
pts := rss.packedTimeseries
return pts[i].addrs[0].offset < pts[j].addrs[0].offset
})
return &rss, nil
}
func getResult() *Result {
v := rsPool.Get()
if v == nil {
return &Result{}
}
return v.(*Result)
}
func putResult(rs *Result) {
if len(rs.Values) > 8192 {
// Do not pool big results, since they may occupy too much memory.
return
}
rs.reset()
rsPool.Put(rs)
}
var rsPool sync.Pool
func setupTfss(tagFilterss [][]storage.TagFilter) ([]*storage.TagFilters, error) {
tfss := make([]*storage.TagFilters, 0, len(tagFilterss))
for _, tagFilters := range tagFilterss {
tfs := storage.NewTagFilters()
for i := range tagFilters {
tf := &tagFilters[i]
if err := tfs.Add(tf.Key, tf.Value, tf.IsNegative, tf.IsRegexp); err != nil {
return nil, fmt.Errorf("cannot parse tag filter %s: %s", tf, err)
}
}
tfss = append(tfss, tfs)
}
return tfss, nil
}
// Deadline contains deadline with the corresponding timeout for pretty error messages.
type Deadline struct {
Deadline time.Time
Timeout time.Duration
}
// NewDeadline returns deadline for the given timeout.
func NewDeadline(timeout time.Duration) Deadline {
return Deadline{
Deadline: time.Now().Add(timeout),
Timeout: timeout,
}
}
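// Usage sketch (hypothetical timeout): handlers derive a Deadline from the
// request and check it periodically, as ProcessSearchQuery does above:
//
//	d := NewDeadline(30 * time.Second)
//	if time.Until(d.Deadline) < 0 {
//	    // report a timeout; d.Timeout keeps the error message readable
//	}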

@@ -0,0 +1,179 @@
package netstorage
import (
"fmt"
"io/ioutil"
"os"
"sync"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/memory"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
)
// InitTmpBlocksDir initializes the directory for storing temporary search results.
//
// Data is stored in the system-defined temporary directory if tmpDirPath is empty.
func InitTmpBlocksDir(tmpDirPath string) {
if len(tmpDirPath) == 0 {
tmpDirPath = os.TempDir()
}
tmpBlocksDir = tmpDirPath + "/searchResults"
fs.MustRemoveAll(tmpBlocksDir)
if err := fs.MkdirAllIfNotExist(tmpBlocksDir); err != nil {
logger.Panicf("FATAL: cannot create %q: %s", tmpBlocksDir, err)
}
}
var tmpBlocksDir string
func maxInmemoryTmpBlocksFile() int {
mem := memory.Allowed()
maxLen := mem / 1024
if maxLen < 64*1024 {
return 64 * 1024
}
return maxLen
}
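// For example, with memory.Allowed() returning 4GiB, each tmpBlocksFile may
// keep up to 4MiB in memory before spilling to disk; the 64KiB floor means
// small results always stay in memory.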
var _ = metrics.NewGauge(`vm_tmp_blocks_max_inmemory_file_size_bytes`, func() float64 {
return float64(maxInmemoryTmpBlocksFile())
})
type tmpBlocksFile struct {
buf []byte
f *os.File
offset uint64
}
func getTmpBlocksFile() *tmpBlocksFile {
v := tmpBlocksFilePool.Get()
if v == nil {
return &tmpBlocksFile{
buf: make([]byte, 0, maxInmemoryTmpBlocksFile()),
}
}
return v.(*tmpBlocksFile)
}
func putTmpBlocksFile(tbf *tmpBlocksFile) {
tbf.MustClose()
tbf.buf = tbf.buf[:0]
tbf.f = nil
tbf.offset = 0
tmpBlocksFilePool.Put(tbf)
}
var tmpBlocksFilePool sync.Pool
type tmpBlockAddr struct {
offset uint64
size int
}
func (addr tmpBlockAddr) String() string {
return fmt.Sprintf("offset %d, size %d", addr.offset, addr.size)
}
var tmpBlocksFilesCreated = metrics.NewCounter(`vm_tmp_blocks_files_created_total`)
// WriteBlockData writes b to tbf.
//
// It returns an error, since the operation may fail on space shortage,
// and such errors must be handled by the caller.
func (tbf *tmpBlocksFile) WriteBlockData(b []byte) (tmpBlockAddr, error) {
var addr tmpBlockAddr
addr.offset = tbf.offset
addr.size = len(b)
tbf.offset += uint64(addr.size)
if len(tbf.buf)+len(b) <= cap(tbf.buf) {
// Fast path - the data fits tbf.buf
tbf.buf = append(tbf.buf, b...)
return addr, nil
}
// Slow path: flush the data from tbf.buf to file.
if tbf.f == nil {
f, err := ioutil.TempFile(tmpBlocksDir, "")
if err != nil {
return addr, err
}
tbf.f = f
tmpBlocksFilesCreated.Inc()
}
_, err := tbf.f.Write(tbf.buf)
tbf.buf = append(tbf.buf[:0], b...)
if err != nil {
return addr, fmt.Errorf("cannot write block to %q: %s", tbf.f.Name(), err)
}
return addr, nil
}
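// Typical lifecycle (a sketch with a hypothetical marshaledBlock): write
// blocks, call Finalize, then read them back by address. See
// testTmpBlocksFile below for a full round-trip.
//
//	tbf := getTmpBlocksFile()
//	defer putTmpBlocksFile(tbf)
//	addr, err := tbf.WriteBlockData(marshaledBlock)
//	// ... handle err, write more blocks ...
//	if err := tbf.Finalize(); err != nil {
//	    // handle err
//	}
//	var b storage.Block
//	tbf.MustReadBlockAt(&b, addr)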
func (tbf *tmpBlocksFile) Finalize() error {
if tbf.f == nil {
return nil
}
if _, err := tbf.f.Write(tbf.buf); err != nil {
return fmt.Errorf("cannot flush the remaining %d bytes to tmpBlocksFile: %s", len(tbf.buf), err)
}
tbf.buf = tbf.buf[:0]
if _, err := tbf.f.Seek(0, 0); err != nil {
logger.Panicf("FATAL: cannot seek to the start of file: %s", err)
}
// Hint the OS that the file is read almost sequentially.
// This should reduce the number of disk seeks, which is important
// for HDDs.
mustFadviseSequentialRead(tbf.f)
return nil
}
func (tbf *tmpBlocksFile) MustReadBlockAt(dst *storage.Block, addr tmpBlockAddr) {
var buf []byte
if tbf.f == nil {
buf = tbf.buf[addr.offset : addr.offset+uint64(addr.size)]
} else {
bb := tmpBufPool.Get()
defer tmpBufPool.Put(bb)
bb.B = bytesutil.Resize(bb.B, addr.size)
n, err := tbf.f.ReadAt(bb.B, int64(addr.offset))
if err != nil {
logger.Panicf("FATAL: cannot read from %q at %s: %s", tbf.f.Name(), addr, err)
}
if n != len(bb.B) {
logger.Panicf("FATAL: too short number of bytes read at %s; got %d; want %d", addr, n, len(bb.B))
}
buf = bb.B
}
tail, err := storage.UnmarshalBlock(dst, buf)
if err != nil {
logger.Panicf("FATAL: cannot unmarshal data at %s: %s", addr, err)
}
if len(tail) > 0 {
logger.Panicf("FATAL: unexpected non-empty tail left after unmarshaling data at %s; len(tail)=%d", addr, len(tail))
}
}
var tmpBufPool bytesutil.ByteBufferPool
func (tbf *tmpBlocksFile) MustClose() {
if tbf.f == nil {
return
}
fname := tbf.f.Name()
// Remove the file at first, then close it.
// This way the OS shouldn't try to flush file contents to storage
// on close.
if err := os.Remove(fname); err != nil {
logger.Panicf("FATAL: cannot remove %q: %s", fname, err)
}
if err := tbf.f.Close(); err != nil {
logger.Panicf("FATAL: cannot close %q: %s", fname, err)
}
tbf.f = nil
}

@@ -0,0 +1,153 @@
package netstorage
import (
"fmt"
"math/rand"
"os"
"reflect"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
func TestMain(m *testing.M) {
rand.Seed(time.Now().UnixNano())
tmpDir := "TestTmpBlocks"
InitTmpBlocksDir(tmpDir)
statusCode := m.Run()
if err := os.RemoveAll(tmpDir); err != nil {
logger.Panicf("cannot remove %q: %s", tmpDir, err)
}
os.Exit(statusCode)
}
func TestTmpBlocksFileSerial(t *testing.T) {
if err := testTmpBlocksFile(); err != nil {
t.Fatalf("unexpected error: %s", err)
}
}
func TestTmpBlocksFileConcurrent(t *testing.T) {
concurrency := 3
ch := make(chan error, concurrency)
for i := 0; i < concurrency; i++ {
go func() {
ch <- testTmpBlocksFile()
}()
}
for i := 0; i < concurrency; i++ {
select {
case err := <-ch:
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
case <-time.After(30 * time.Second):
t.Fatalf("timeout")
}
}
}
func testTmpBlocksFile() error {
createBlock := func() *storage.Block {
rowsCount := rand.Intn(8000) + 1
var timestamps, values []int64
ts := int64(rand.Intn(1023434))
for i := 0; i < rowsCount; i++ {
ts += int64(rand.Intn(1000) + 1)
timestamps = append(timestamps, ts)
values = append(values, int64(i*i+rand.Intn(20)))
}
tsid := &storage.TSID{
MetricID: 234211,
}
scale := int16(rand.Intn(123))
precisionBits := uint8(rand.Intn(63) + 1)
var b storage.Block
b.Init(tsid, timestamps, values, scale, precisionBits)
_, _, _ = b.MarshalData(0, 0)
return &b
}
for _, size := range []int{1024, 16 * 1024, maxInmemoryTmpBlocksFile() / 2, 2 * maxInmemoryTmpBlocksFile()} {
err := func() error {
tbf := getTmpBlocksFile()
defer putTmpBlocksFile(tbf)
// Write blocks until their total size exceeds `size`.
var addrs []tmpBlockAddr
var blocks []*storage.Block
bb := tmpBufPool.Get()
defer tmpBufPool.Put(bb)
for tbf.offset < uint64(size) {
b := createBlock()
bb.B = storage.MarshalBlock(bb.B[:0], b)
addr, err := tbf.WriteBlockData(bb.B)
if err != nil {
return fmt.Errorf("cannot write block at offset %d: %s", tbf.offset, err)
}
if addr.offset+uint64(addr.size) != tbf.offset {
return fmt.Errorf("unexpected addr=%+v for offset %v", &addr, tbf.offset)
}
addrs = append(addrs, addr)
blocks = append(blocks, b)
}
if err := tbf.Finalize(); err != nil {
return fmt.Errorf("cannot finalize tbf: %s", err)
}
// Read blocks in parallel and verify them
concurrency := 2
workCh := make(chan int)
doneCh := make(chan error)
for i := 0; i < concurrency; i++ {
go func() {
doneCh <- func() error {
var b1 storage.Block
for idx := range workCh {
addr := addrs[idx]
b := blocks[idx]
if err := b.UnmarshalData(); err != nil {
return fmt.Errorf("cannot unmarshal data from the original block: %s", err)
}
b1.Reset()
tbf.MustReadBlockAt(&b1, addr)
if err := b1.UnmarshalData(); err != nil {
return fmt.Errorf("cannot unmarshal data from tbf: %s", err)
}
if b1.RowsCount() != b.RowsCount() {
return fmt.Errorf("unexpected number of rows in tbf block; got %d; want %d", b1.RowsCount(), b.RowsCount())
}
if !reflect.DeepEqual(b1.Timestamps(), b.Timestamps()) {
return fmt.Errorf("unexpected timestamps; got\n%v\nwant\n%v", b1.Timestamps(), b.Timestamps())
}
if !reflect.DeepEqual(b1.Values(), b.Values()) {
return fmt.Errorf("unexpected values; got\n%v\nwant\n%v", b1.Values(), b.Values())
}
}
return nil
}()
}()
}
for i := range addrs {
workCh <- i
}
close(workCh)
for i := 0; i < concurrency; i++ {
select {
case err := <-doneCh:
if err != nil {
return err
}
case <-time.After(time.Second):
return fmt.Errorf("timeout")
}
}
return nil
}()
if err != nil {
return err
}
}
return nil
}

@@ -0,0 +1,11 @@
{% stripspace %}
ErrorResponse generates error response for /api/v1/query.
See https://prometheus.io/docs/prometheus/latest/querying/api/#format-overview
{% func ErrorResponse(statusCode int, err error) %}
{
"status":"error",
"errorType":"{%d statusCode %}",
"error": {%q= err.Error() %}
}
{% endfunc %}
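Example rendered response (hypothetical error):
{"status":"error","errorType":"422","error":"cannot parse query"}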
{% endstripspace %}

@@ -0,0 +1,61 @@
// Code generated by qtc from "error_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// ErrorResponse generates error response for /api/v1/query.See https://prometheus.io/docs/prometheus/latest/querying/api/#format-overview
//line app/vmselect/prometheus/error_response.qtpl:4
package prometheus
//line app/vmselect/prometheus/error_response.qtpl:4
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/error_response.qtpl:4
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/error_response.qtpl:4
func StreamErrorResponse(qw422016 *qt422016.Writer, statusCode int, err error) {
//line app/vmselect/prometheus/error_response.qtpl:4
qw422016.N().S(`{"status":"error","errorType":"`)
//line app/vmselect/prometheus/error_response.qtpl:7
qw422016.N().D(statusCode)
//line app/vmselect/prometheus/error_response.qtpl:7
qw422016.N().S(`","error":`)
//line app/vmselect/prometheus/error_response.qtpl:8
qw422016.N().Q(err.Error())
//line app/vmselect/prometheus/error_response.qtpl:8
qw422016.N().S(`}`)
//line app/vmselect/prometheus/error_response.qtpl:10
}
//line app/vmselect/prometheus/error_response.qtpl:10
func WriteErrorResponse(qq422016 qtio422016.Writer, statusCode int, err error) {
//line app/vmselect/prometheus/error_response.qtpl:10
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/error_response.qtpl:10
StreamErrorResponse(qw422016, statusCode, err)
//line app/vmselect/prometheus/error_response.qtpl:10
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/error_response.qtpl:10
}
//line app/vmselect/prometheus/error_response.qtpl:10
func ErrorResponse(statusCode int, err error) string {
//line app/vmselect/prometheus/error_response.qtpl:10
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/error_response.qtpl:10
WriteErrorResponse(qb422016, statusCode, err)
//line app/vmselect/prometheus/error_response.qtpl:10
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/error_response.qtpl:10
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/error_response.qtpl:10
return qs422016
//line app/vmselect/prometheus/error_response.qtpl:10
}

@@ -0,0 +1,96 @@
{% import (
"github.com/valyala/quicktemplate"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
) %}
{% stripspace %}
{% func ExportPrometheusLine(rs *netstorage.Result) %}
{% if len(rs.Timestamps) == 0 %}{% return %}{% endif %}
{% code bb := quicktemplate.AcquireByteBuffer() %}
{% code writeprometheusMetricName(bb, &rs.MetricName) %}
{% for i, ts := range rs.Timestamps %}
{%z= bb.B %}{% space %}
{%f= rs.Values[i] %}{% space %}
{%dl= ts %}{% newline %}
{% endfor %}
{% code quicktemplate.ReleaseByteBuffer(bb) %}
{% endfunc %}
{% func ExportJSONLine(rs *netstorage.Result) %}
{% if len(rs.Timestamps) == 0 %}{% return %}{% endif %}
{
"metric":{%= metricNameObject(&rs.MetricName) %},
"values":[
{% if len(rs.Values) > 0 %}
{% code values := rs.Values %}
{%f= values[0] %}
{% code values = values[1:] %}
{% for _, v := range values %}
,{%f= v %}
{% endfor %}
{% endif %}
],
"timestamps":[
{% if len(rs.Timestamps) > 0 %}
{% code timestamps := rs.Timestamps %}
{%dl= timestamps[0] %}
{% code timestamps = timestamps[1:] %}
{% for _, ts := range timestamps %}
,{%dl= ts %}
{% endfor %}
{% endif %}
]
}{% newline %}
{% endfunc %}
{% func ExportPromAPILine(rs *netstorage.Result) %}
{
"metric": {%= metricNameObject(&rs.MetricName) %},
"values": {%= valuesWithTimestamps(rs.Values, rs.Timestamps) %}
}
{% endfunc %}
{% func ExportPromAPIResponse(resultsCh <-chan *quicktemplate.ByteBuffer) %}
{
"status":"success",
"data":{
"resultType":"matrix",
"result":[
{% code bb, ok := <-resultsCh %}
{% if ok %}
{%z= bb.B %}
{% code quicktemplate.ReleaseByteBuffer(bb) %}
{% for bb := range resultsCh %}
,{%z= bb.B %}
{% code quicktemplate.ReleaseByteBuffer(bb) %}
{% endfor %}
{% endif %}
]
}
}
{% endfunc %}
{% func ExportStdResponse(resultsCh <-chan *quicktemplate.ByteBuffer) %}
{% for bb := range resultsCh %}
{%z= bb.B %}
{% code quicktemplate.ReleaseByteBuffer(bb) %}
{% endfor %}
{% endfunc %}
{% func prometheusMetricName(mn *storage.MetricName) %}
{%z= mn.MetricGroup %}
{% if len(mn.Tags) > 0 %}
{
{% code tags := mn.Tags %}
{%z= tags[0].Key %}={%qz= tags[0].Value %}
{% code tags = tags[1:] %}
{% for i := range tags %}
{% code tag := &tags[i] %}
,{%z= tag.Key %}={%qz= tag.Value %}
{% endfor %}
}
{% endif %}
{% endfunc %}
{% endstripspace %}

@@ -0,0 +1,385 @@
// Code generated by qtc from "export.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vmselect/prometheus/export.qtpl:1
package prometheus
//line app/vmselect/prometheus/export.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/export.qtpl:9
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/export.qtpl:9
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/export.qtpl:9
func StreamExportPrometheusLine(qw422016 *qt422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:10
if len(rs.Timestamps) == 0 {
//line app/vmselect/prometheus/export.qtpl:10
return
//line app/vmselect/prometheus/export.qtpl:10
}
//line app/vmselect/prometheus/export.qtpl:11
bb := quicktemplate.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:12
writeprometheusMetricName(bb, &rs.MetricName)
//line app/vmselect/prometheus/export.qtpl:13
for i, ts := range rs.Timestamps {
//line app/vmselect/prometheus/export.qtpl:14
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:14
qw422016.N().S(` `)
//line app/vmselect/prometheus/export.qtpl:15
qw422016.N().F(rs.Values[i])
//line app/vmselect/prometheus/export.qtpl:15
qw422016.N().S(` `)
//line app/vmselect/prometheus/export.qtpl:16
qw422016.N().DL(ts)
//line app/vmselect/prometheus/export.qtpl:16
qw422016.N().S(`
`)
//line app/vmselect/prometheus/export.qtpl:17
}
//line app/vmselect/prometheus/export.qtpl:18
quicktemplate.ReleaseByteBuffer(bb)
//line app/vmselect/prometheus/export.qtpl:19
}
//line app/vmselect/prometheus/export.qtpl:19
func WriteExportPrometheusLine(qq422016 qtio422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:19
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:19
StreamExportPrometheusLine(qw422016, rs)
//line app/vmselect/prometheus/export.qtpl:19
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:19
}
//line app/vmselect/prometheus/export.qtpl:19
func ExportPrometheusLine(rs *netstorage.Result) string {
//line app/vmselect/prometheus/export.qtpl:19
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:19
WriteExportPrometheusLine(qb422016, rs)
//line app/vmselect/prometheus/export.qtpl:19
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:19
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:19
return qs422016
//line app/vmselect/prometheus/export.qtpl:19
}
//line app/vmselect/prometheus/export.qtpl:21
func StreamExportJSONLine(qw422016 *qt422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:22
if len(rs.Timestamps) == 0 {
//line app/vmselect/prometheus/export.qtpl:22
return
//line app/vmselect/prometheus/export.qtpl:22
}
//line app/vmselect/prometheus/export.qtpl:22
qw422016.N().S(`{"metric":`)
//line app/vmselect/prometheus/export.qtpl:24
streammetricNameObject(qw422016, &rs.MetricName)
//line app/vmselect/prometheus/export.qtpl:24
qw422016.N().S(`,"values":[`)
//line app/vmselect/prometheus/export.qtpl:26
if len(rs.Values) > 0 {
//line app/vmselect/prometheus/export.qtpl:27
values := rs.Values
//line app/vmselect/prometheus/export.qtpl:28
qw422016.N().F(values[0])
//line app/vmselect/prometheus/export.qtpl:29
values = values[1:]
//line app/vmselect/prometheus/export.qtpl:30
for _, v := range values {
//line app/vmselect/prometheus/export.qtpl:30
qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:31
qw422016.N().F(v)
//line app/vmselect/prometheus/export.qtpl:32
}
//line app/vmselect/prometheus/export.qtpl:33
}
//line app/vmselect/prometheus/export.qtpl:33
qw422016.N().S(`],"timestamps":[`)
//line app/vmselect/prometheus/export.qtpl:36
if len(rs.Timestamps) > 0 {
//line app/vmselect/prometheus/export.qtpl:37
timestamps := rs.Timestamps
//line app/vmselect/prometheus/export.qtpl:38
qw422016.N().DL(timestamps[0])
//line app/vmselect/prometheus/export.qtpl:39
timestamps = timestamps[1:]
//line app/vmselect/prometheus/export.qtpl:40
for _, ts := range timestamps {
//line app/vmselect/prometheus/export.qtpl:40
qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:41
qw422016.N().DL(ts)
//line app/vmselect/prometheus/export.qtpl:42
}
//line app/vmselect/prometheus/export.qtpl:43
}
//line app/vmselect/prometheus/export.qtpl:43
qw422016.N().S(`]}`)
//line app/vmselect/prometheus/export.qtpl:45
qw422016.N().S(`
`)
//line app/vmselect/prometheus/export.qtpl:46
}
//line app/vmselect/prometheus/export.qtpl:46
func WriteExportJSONLine(qq422016 qtio422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:46
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:46
StreamExportJSONLine(qw422016, rs)
//line app/vmselect/prometheus/export.qtpl:46
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:46
}
//line app/vmselect/prometheus/export.qtpl:46
func ExportJSONLine(rs *netstorage.Result) string {
//line app/vmselect/prometheus/export.qtpl:46
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:46
WriteExportJSONLine(qb422016, rs)
//line app/vmselect/prometheus/export.qtpl:46
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:46
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:46
return qs422016
//line app/vmselect/prometheus/export.qtpl:46
}
//line app/vmselect/prometheus/export.qtpl:48
func StreamExportPromAPILine(qw422016 *qt422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:48
qw422016.N().S(`{"metric":`)
//line app/vmselect/prometheus/export.qtpl:50
streammetricNameObject(qw422016, &rs.MetricName)
//line app/vmselect/prometheus/export.qtpl:50
qw422016.N().S(`,"values":`)
//line app/vmselect/prometheus/export.qtpl:51
streamvaluesWithTimestamps(qw422016, rs.Values, rs.Timestamps)
//line app/vmselect/prometheus/export.qtpl:51
qw422016.N().S(`}`)
//line app/vmselect/prometheus/export.qtpl:53
}
//line app/vmselect/prometheus/export.qtpl:53
func WriteExportPromAPILine(qq422016 qtio422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:53
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:53
StreamExportPromAPILine(qw422016, rs)
//line app/vmselect/prometheus/export.qtpl:53
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:53
}
//line app/vmselect/prometheus/export.qtpl:53
func ExportPromAPILine(rs *netstorage.Result) string {
//line app/vmselect/prometheus/export.qtpl:53
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:53
WriteExportPromAPILine(qb422016, rs)
//line app/vmselect/prometheus/export.qtpl:53
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:53
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:53
return qs422016
//line app/vmselect/prometheus/export.qtpl:53
}
//line app/vmselect/prometheus/export.qtpl:55
func StreamExportPromAPIResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:55
qw422016.N().S(`{"status":"success","data":{"resultType":"matrix","result":[`)
//line app/vmselect/prometheus/export.qtpl:61
bb, ok := <-resultsCh
//line app/vmselect/prometheus/export.qtpl:62
if ok {
//line app/vmselect/prometheus/export.qtpl:63
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:64
quicktemplate.ReleaseByteBuffer(bb)
//line app/vmselect/prometheus/export.qtpl:65
for bb := range resultsCh {
//line app/vmselect/prometheus/export.qtpl:65
qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:66
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:67
quicktemplate.ReleaseByteBuffer(bb)
//line app/vmselect/prometheus/export.qtpl:68
}
//line app/vmselect/prometheus/export.qtpl:69
}
//line app/vmselect/prometheus/export.qtpl:69
qw422016.N().S(`]}}`)
//line app/vmselect/prometheus/export.qtpl:73
}
//line app/vmselect/prometheus/export.qtpl:73
func WriteExportPromAPIResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:73
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:73
StreamExportPromAPIResponse(qw422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:73
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:73
}
//line app/vmselect/prometheus/export.qtpl:73
func ExportPromAPIResponse(resultsCh <-chan *quicktemplate.ByteBuffer) string {
//line app/vmselect/prometheus/export.qtpl:73
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:73
WriteExportPromAPIResponse(qb422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:73
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:73
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:73
return qs422016
//line app/vmselect/prometheus/export.qtpl:73
}
//line app/vmselect/prometheus/export.qtpl:75
func StreamExportStdResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:76
for bb := range resultsCh {
//line app/vmselect/prometheus/export.qtpl:77
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:78
quicktemplate.ReleaseByteBuffer(bb)
//line app/vmselect/prometheus/export.qtpl:79
}
//line app/vmselect/prometheus/export.qtpl:80
}
//line app/vmselect/prometheus/export.qtpl:80
func WriteExportStdResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:80
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:80
StreamExportStdResponse(qw422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:80
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:80
}
//line app/vmselect/prometheus/export.qtpl:80
func ExportStdResponse(resultsCh <-chan *quicktemplate.ByteBuffer) string {
//line app/vmselect/prometheus/export.qtpl:80
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:80
WriteExportStdResponse(qb422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:80
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:80
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:80
return qs422016
//line app/vmselect/prometheus/export.qtpl:80
}
//line app/vmselect/prometheus/export.qtpl:82
func streamprometheusMetricName(qw422016 *qt422016.Writer, mn *storage.MetricName) {
//line app/vmselect/prometheus/export.qtpl:83
qw422016.N().Z(mn.MetricGroup)
//line app/vmselect/prometheus/export.qtpl:84
if len(mn.Tags) > 0 {
//line app/vmselect/prometheus/export.qtpl:84
qw422016.N().S(`{`)
//line app/vmselect/prometheus/export.qtpl:86
tags := mn.Tags
//line app/vmselect/prometheus/export.qtpl:87
qw422016.N().Z(tags[0].Key)
//line app/vmselect/prometheus/export.qtpl:87
qw422016.N().S(`=`)
//line app/vmselect/prometheus/export.qtpl:87
qw422016.N().QZ(tags[0].Value)
//line app/vmselect/prometheus/export.qtpl:88
tags = tags[1:]
//line app/vmselect/prometheus/export.qtpl:89
for i := range tags {
//line app/vmselect/prometheus/export.qtpl:90
tag := &tags[i]
//line app/vmselect/prometheus/export.qtpl:90
qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:91
qw422016.N().Z(tag.Key)
//line app/vmselect/prometheus/export.qtpl:91
qw422016.N().S(`=`)
//line app/vmselect/prometheus/export.qtpl:91
qw422016.N().QZ(tag.Value)
//line app/vmselect/prometheus/export.qtpl:92
}
//line app/vmselect/prometheus/export.qtpl:92
qw422016.N().S(`}`)
//line app/vmselect/prometheus/export.qtpl:94
}
//line app/vmselect/prometheus/export.qtpl:95
}
//line app/vmselect/prometheus/export.qtpl:95
func writeprometheusMetricName(qq422016 qtio422016.Writer, mn *storage.MetricName) {
//line app/vmselect/prometheus/export.qtpl:95
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:95
streamprometheusMetricName(qw422016, mn)
//line app/vmselect/prometheus/export.qtpl:95
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:95
}
//line app/vmselect/prometheus/export.qtpl:95
func prometheusMetricName(mn *storage.MetricName) string {
//line app/vmselect/prometheus/export.qtpl:95
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:95
writeprometheusMetricName(qb422016, mn)
//line app/vmselect/prometheus/export.qtpl:95
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:95
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:95
return qs422016
//line app/vmselect/prometheus/export.qtpl:95
}

@@ -0,0 +1,16 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
) %}
{% stripspace %}
// Federate writes rs in /federate format.
// See https://prometheus.io/docs/prometheus/latest/federation/
{% func Federate(rs *netstorage.Result) %}
{% if len(rs.Timestamps) == 0 || len(rs.Values) == 0 %}{% return %}{% endif %}
{%= prometheusMetricName(&rs.MetricName) %}{% space %}
{%f= rs.Values[len(rs.Values)-1] %}{% space %}
{%dl= rs.Timestamps[len(rs.Timestamps)-1] %}{% newline %}
{% endfunc %}
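Example rendered line (hypothetical series; the last value/timestamp pair is exported):
up{job="node",instance="host:9100"} 1 1575555555000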
{% endstripspace %}

@@ -0,0 +1,75 @@
// Code generated by qtc from "federate.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vmselect/prometheus/federate.qtpl:1
package prometheus
//line app/vmselect/prometheus/federate.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
)
// Federate writes rs in /federate format.// See https://prometheus.io/docs/prometheus/latest/federation/
//line app/vmselect/prometheus/federate.qtpl:9
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/federate.qtpl:9
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/federate.qtpl:9
func StreamFederate(qw422016 *qt422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/federate.qtpl:10
if len(rs.Timestamps) == 0 || len(rs.Values) == 0 {
//line app/vmselect/prometheus/federate.qtpl:10
return
//line app/vmselect/prometheus/federate.qtpl:10
}
//line app/vmselect/prometheus/federate.qtpl:11
streamprometheusMetricName(qw422016, &rs.MetricName)
//line app/vmselect/prometheus/federate.qtpl:11
qw422016.N().S(` `)
//line app/vmselect/prometheus/federate.qtpl:12
qw422016.N().F(rs.Values[len(rs.Values)-1])
//line app/vmselect/prometheus/federate.qtpl:12
qw422016.N().S(` `)
//line app/vmselect/prometheus/federate.qtpl:13
qw422016.N().DL(rs.Timestamps[len(rs.Timestamps)-1])
//line app/vmselect/prometheus/federate.qtpl:13
qw422016.N().S(`
`)
//line app/vmselect/prometheus/federate.qtpl:14
}
//line app/vmselect/prometheus/federate.qtpl:14
func WriteFederate(qq422016 qtio422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/federate.qtpl:14
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/federate.qtpl:14
StreamFederate(qw422016, rs)
//line app/vmselect/prometheus/federate.qtpl:14
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/federate.qtpl:14
}
//line app/vmselect/prometheus/federate.qtpl:14
func Federate(rs *netstorage.Result) string {
//line app/vmselect/prometheus/federate.qtpl:14
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/federate.qtpl:14
WriteFederate(qb422016, rs)
//line app/vmselect/prometheus/federate.qtpl:14
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/federate.qtpl:14
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/federate.qtpl:14
return qs422016
//line app/vmselect/prometheus/federate.qtpl:14
}

@@ -0,0 +1,15 @@
{% stripspace %}
LabelValuesResponse generates response for /api/v1/label/<labelName>/values .
See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values
{% func LabelValuesResponse(labelValues []string) %}
{
"status":"success",
"data":[
{% for i, labelValue := range labelValues %}
{%q= labelValue %}
{% if i+1 < len(labelValues) %},{% endif %}
{% endfor %}
]
}
{% endfunc %}
{% endstripspace %}

@@ -0,0 +1,67 @@
// Code generated by qtc from "label_values_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// LabelValuesResponse generates response for /api/v1/label/<labelName>/values .See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values
//line app/vmselect/prometheus/label_values_response.qtpl:4
package prometheus
//line app/vmselect/prometheus/label_values_response.qtpl:4
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/label_values_response.qtpl:4
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/label_values_response.qtpl:4
func StreamLabelValuesResponse(qw422016 *qt422016.Writer, labelValues []string) {
//line app/vmselect/prometheus/label_values_response.qtpl:4
qw422016.N().S(`{"status":"success","data":[`)
//line app/vmselect/prometheus/label_values_response.qtpl:8
for i, labelValue := range labelValues {
//line app/vmselect/prometheus/label_values_response.qtpl:9
qw422016.N().Q(labelValue)
//line app/vmselect/prometheus/label_values_response.qtpl:10
if i+1 < len(labelValues) {
//line app/vmselect/prometheus/label_values_response.qtpl:10
qw422016.N().S(`,`)
//line app/vmselect/prometheus/label_values_response.qtpl:10
}
//line app/vmselect/prometheus/label_values_response.qtpl:11
}
//line app/vmselect/prometheus/label_values_response.qtpl:11
qw422016.N().S(`]}`)
//line app/vmselect/prometheus/label_values_response.qtpl:14
}
//line app/vmselect/prometheus/label_values_response.qtpl:14
func WriteLabelValuesResponse(qq422016 qtio422016.Writer, labelValues []string) {
//line app/vmselect/prometheus/label_values_response.qtpl:14
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/label_values_response.qtpl:14
StreamLabelValuesResponse(qw422016, labelValues)
//line app/vmselect/prometheus/label_values_response.qtpl:14
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/label_values_response.qtpl:14
}
//line app/vmselect/prometheus/label_values_response.qtpl:14
func LabelValuesResponse(labelValues []string) string {
//line app/vmselect/prometheus/label_values_response.qtpl:14
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/label_values_response.qtpl:14
WriteLabelValuesResponse(qb422016, labelValues)
//line app/vmselect/prometheus/label_values_response.qtpl:14
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/label_values_response.qtpl:14
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/label_values_response.qtpl:14
return qs422016
//line app/vmselect/prometheus/label_values_response.qtpl:14
}

@@ -0,0 +1,17 @@
{% import "github.com/VictoriaMetrics/VictoriaMetrics/lib/storage" %}
{% stripspace %}
LabelsCountResponse generates response for /api/v1/labels/count .
{% func LabelsCountResponse(labelEntries []storage.TagEntry) %}
{
"status":"success",
"data":{
{% for i, e := range labelEntries %}
{%q= e.Key %}:{%d= len(e.Values) %}
{% if i+1 < len(labelEntries) %},{% endif %}
{% endfor %}
}
}
{% endfunc %}
{% endstripspace %}

@@ -0,0 +1,74 @@
// Code generated by qtc from "labels_count_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vmselect/prometheus/labels_count_response.qtpl:1
package prometheus
//line app/vmselect/prometheus/labels_count_response.qtpl:1
import "github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
// LabelsCountResponse generates response for /api/v1/labels/count .
//line app/vmselect/prometheus/labels_count_response.qtpl:5
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/labels_count_response.qtpl:5
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/labels_count_response.qtpl:5
func StreamLabelsCountResponse(qw422016 *qt422016.Writer, labelEntries []storage.TagEntry) {
//line app/vmselect/prometheus/labels_count_response.qtpl:5
qw422016.N().S(`{"status":"success","data":{`)
//line app/vmselect/prometheus/labels_count_response.qtpl:9
for i, e := range labelEntries {
//line app/vmselect/prometheus/labels_count_response.qtpl:10
qw422016.N().Q(e.Key)
//line app/vmselect/prometheus/labels_count_response.qtpl:10
qw422016.N().S(`:`)
//line app/vmselect/prometheus/labels_count_response.qtpl:10
qw422016.N().D(len(e.Values))
//line app/vmselect/prometheus/labels_count_response.qtpl:11
if i+1 < len(labelEntries) {
//line app/vmselect/prometheus/labels_count_response.qtpl:11
qw422016.N().S(`,`)
//line app/vmselect/prometheus/labels_count_response.qtpl:11
}
//line app/vmselect/prometheus/labels_count_response.qtpl:12
}
//line app/vmselect/prometheus/labels_count_response.qtpl:12
qw422016.N().S(`}}`)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
}
//line app/vmselect/prometheus/labels_count_response.qtpl:15
func WriteLabelsCountResponse(qq422016 qtio422016.Writer, labelEntries []storage.TagEntry) {
//line app/vmselect/prometheus/labels_count_response.qtpl:15
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
StreamLabelsCountResponse(qw422016, labelEntries)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
}
//line app/vmselect/prometheus/labels_count_response.qtpl:15
func LabelsCountResponse(labelEntries []storage.TagEntry) string {
//line app/vmselect/prometheus/labels_count_response.qtpl:15
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/labels_count_response.qtpl:15
WriteLabelsCountResponse(qb422016, labelEntries)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
return qs422016
//line app/vmselect/prometheus/labels_count_response.qtpl:15
}

@@ -0,0 +1,15 @@
{% stripspace %}
LabelsResponse generates response for /api/v1/labels .
See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names
{% func LabelsResponse(labels []string) %}
{
"status":"success",
"data":[
{% for i, label := range labels %}
{%q= label %}
{% if i+1 < len(labels) %},{% endif %}
{% endfor %}
]
}
{% endfunc %}
{% endstripspace %}

@@ -0,0 +1,67 @@
// Code generated by qtc from "labels_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// LabelsResponse generates response for /api/v1/labels .See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names
//line app/vmselect/prometheus/labels_response.qtpl:4
package prometheus
//line app/vmselect/prometheus/labels_response.qtpl:4
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/labels_response.qtpl:4
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/labels_response.qtpl:4
func StreamLabelsResponse(qw422016 *qt422016.Writer, labels []string) {
//line app/vmselect/prometheus/labels_response.qtpl:4
qw422016.N().S(`{"status":"success","data":[`)
//line app/vmselect/prometheus/labels_response.qtpl:8
for i, label := range labels {
//line app/vmselect/prometheus/labels_response.qtpl:9
qw422016.N().Q(label)
//line app/vmselect/prometheus/labels_response.qtpl:10
if i+1 < len(labels) {
//line app/vmselect/prometheus/labels_response.qtpl:10
qw422016.N().S(`,`)
//line app/vmselect/prometheus/labels_response.qtpl:10
}
//line app/vmselect/prometheus/labels_response.qtpl:11
}
//line app/vmselect/prometheus/labels_response.qtpl:11
qw422016.N().S(`]}`)
//line app/vmselect/prometheus/labels_response.qtpl:14
}
//line app/vmselect/prometheus/labels_response.qtpl:14
func WriteLabelsResponse(qq422016 qtio422016.Writer, labels []string) {
//line app/vmselect/prometheus/labels_response.qtpl:14
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/labels_response.qtpl:14
StreamLabelsResponse(qw422016, labels)
//line app/vmselect/prometheus/labels_response.qtpl:14
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/labels_response.qtpl:14
}
//line app/vmselect/prometheus/labels_response.qtpl:14
func LabelsResponse(labels []string) string {
//line app/vmselect/prometheus/labels_response.qtpl:14
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/labels_response.qtpl:14
WriteLabelsResponse(qb422016, labels)
//line app/vmselect/prometheus/labels_response.qtpl:14
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/labels_response.qtpl:14
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/labels_response.qtpl:14
return qs422016
//line app/vmselect/prometheus/labels_response.qtpl:14
}

@@ -0,0 +1,796 @@
package prometheus
import (
"flag"
"fmt"
"math"
"net/http"
"runtime"
"sort"
"strconv"
"strings"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/quicktemplate"
)
var (
latencyOffset = flag.Duration("search.latencyOffset", time.Second*30, "The time when data points become visible in query results after the collection. "+
"Too small a value can result in incomplete last points for query results")
maxQueryDuration = flag.Duration("search.maxQueryDuration", time.Second*30, "The maximum time for search query execution")
maxQueryLen = flag.Int("search.maxQueryLen", 16*1024, "The maximum search query length in bytes")
maxLookback = flag.Duration("search.maxLookback", 0, "Synonym for `-search.lookback-delta` from Prometheus. "+
"The value is dynamically detected from the interval between time series data points if not set. It can be overridden on a per-query basis via the `max_lookback` arg")
)
// Default step used if not set.
const defaultStep = 5 * 60 * 1000
// FederateHandler implements /federate . See https://prometheus.io/docs/prometheus/latest/federation/
func FederateHandler(w http.ResponseWriter, r *http.Request) error {
startTime := time.Now()
ct := currentTime()
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse request form values: %s", err)
}
matches := r.Form["match[]"]
if len(matches) == 0 {
return fmt.Errorf("missing `match[]` arg")
}
lookbackDelta, err := getMaxLookback(r)
if err != nil {
return err
}
if lookbackDelta <= 0 {
lookbackDelta = defaultStep
}
start, err := getTime(r, "start", ct-lookbackDelta)
if err != nil {
return err
}
end, err := getTime(r, "end", ct)
if err != nil {
return err
}
deadline := getDeadline(r)
if start >= end {
start = end - defaultStep
}
tagFilterss, err := getTagFilterssFromMatches(matches)
if err != nil {
return err
}
sq := &storage.SearchQuery{
MinTimestamp: start,
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
rss, err := netstorage.ProcessSearchQuery(sq, true, deadline)
if err != nil {
return fmt.Errorf("cannot fetch data for %q: %s", sq, err)
}
resultsCh := make(chan *quicktemplate.ByteBuffer)
doneCh := make(chan error)
go func() {
err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
bb := quicktemplate.AcquireByteBuffer()
WriteFederate(bb, rs)
resultsCh <- bb
})
close(resultsCh)
doneCh <- err
}()
w.Header().Set("Content-Type", "text/plain")
for bb := range resultsCh {
w.Write(bb.B)
quicktemplate.ReleaseByteBuffer(bb)
}
err = <-doneCh
if err != nil {
return fmt.Errorf("error during data fetching: %s", err)
}
federateDuration.UpdateDuration(startTime)
return nil
}
var federateDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/federate"}`)
// ExportHandler exports data in raw format from /api/v1/export.
func ExportHandler(w http.ResponseWriter, r *http.Request) error {
startTime := time.Now()
ct := currentTime()
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse request form values: %s", err)
}
matches := r.Form["match[]"]
if len(matches) == 0 {
// Maintain backwards compatibility
match := r.FormValue("match")
if len(match) == 0 {
return fmt.Errorf("missing `match[]` arg")
}
matches = []string{match}
}
start, err := getTime(r, "start", 0)
if err != nil {
return err
}
end, err := getTime(r, "end", ct)
if err != nil {
return err
}
format := r.FormValue("format")
deadline := getDeadline(r)
if start >= end {
end = start + defaultStep
}
if err := exportHandler(w, matches, start, end, format, deadline); err != nil {
return err
}
exportDuration.UpdateDuration(startTime)
return nil
}
var exportDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/export"}`)
func exportHandler(w http.ResponseWriter, matches []string, start, end int64, format string, deadline netstorage.Deadline) error {
writeResponseFunc := WriteExportStdResponse
writeLineFunc := WriteExportJSONLine
contentType := "application/stream+json"
if format == "prometheus" {
contentType = "text/plain"
writeLineFunc = WriteExportPrometheusLine
} else if format == "promapi" {
writeResponseFunc = WriteExportPromAPIResponse
writeLineFunc = WriteExportPromAPILine
}
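// Illustrative requests (hypothetical match[] values):
//   /api/v1/export?match[]=up                    -> newline-delimited JSON (default)
//   /api/v1/export?match[]=up&format=prometheus  -> Prometheus text exposition format
//   /api/v1/export?match[]=up&format=promapi     -> /api/v1/query_range-style JSON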
tagFilterss, err := getTagFilterssFromMatches(matches)
if err != nil {
return err
}
sq := &storage.SearchQuery{
MinTimestamp: start,
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
rss, err := netstorage.ProcessSearchQuery(sq, true, deadline)
if err != nil {
return fmt.Errorf("cannot fetch data for %q: %s", sq, err)
}
resultsCh := make(chan *quicktemplate.ByteBuffer, runtime.GOMAXPROCS(-1))
doneCh := make(chan error)
go func() {
err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
bb := quicktemplate.AcquireByteBuffer()
writeLineFunc(bb, rs)
resultsCh <- bb
})
close(resultsCh)
doneCh <- err
}()
w.Header().Set("Content-Type", contentType)
writeResponseFunc(w, resultsCh)
// Consume all the data from resultsCh in the event writeResponseFunc
// fails to consume all the data.
for bb := range resultsCh {
quicktemplate.ReleaseByteBuffer(bb)
}
err = <-doneCh
if err != nil {
return fmt.Errorf("error during data fetching: %s", err)
}
return nil
}
// DeleteHandler processes /api/v1/admin/tsdb/delete_series prometheus API request.
//
// See https://prometheus.io/docs/prometheus/latest/querying/api/#delete-series
func DeleteHandler(r *http.Request) error {
startTime := time.Now()
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse request form values: %s", err)
}
if r.FormValue("start") != "" || r.FormValue("end") != "" {
return fmt.Errorf("start and end aren't supported. Remove these args from the query in order to delete all the matching metrics")
}
matches := r.Form["match[]"]
if len(matches) == 0 {
return fmt.Errorf("missing `match[]` arg")
}
tagFilterss, err := getTagFilterssFromMatches(matches)
if err != nil {
return err
}
sq := &storage.SearchQuery{
TagFilterss: tagFilterss,
}
deletedCount, err := netstorage.DeleteSeries(sq)
if err != nil {
return fmt.Errorf("cannot delete time series matching %q: %s", matches, err)
}
if deletedCount > 0 {
promql.ResetRollupResultCache()
}
deleteDuration.UpdateDuration(startTime)
return nil
}
var deleteDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/admin/tsdb/delete_series"}`)
// LabelValuesHandler processes /api/v1/label/<labelName>/values request.
//
// See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values
func LabelValuesHandler(labelName string, w http.ResponseWriter, r *http.Request) error {
startTime := time.Now()
deadline := getDeadline(r)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %s", err)
}
var labelValues []string
if len(r.Form["match[]"]) == 0 && len(r.Form["start"]) == 0 && len(r.Form["end"]) == 0 {
var err error
labelValues, err = netstorage.GetLabelValues(labelName, deadline)
if err != nil {
return fmt.Errorf(`cannot obtain label values for %q: %s`, labelName, err)
}
} else {
// Extended functionality that allows filtering by label filters and time range
// i.e. /api/v1/label/foo/values?match[]=foobar{baz="abc"}&start=...&end=...
// is equivalent to `label_values(foobar{baz="abc"}, foo)` call on the selected
// time range in Grafana templating.
matches := r.Form["match[]"]
if len(matches) == 0 {
matches = []string{fmt.Sprintf("{%s!=''}", labelName)}
}
ct := currentTime()
end, err := getTime(r, "end", ct)
if err != nil {
return err
}
start, err := getTime(r, "start", end-defaultStep)
if err != nil {
return err
}
labelValues, err = labelValuesWithMatches(labelName, matches, start, end, deadline)
if err != nil {
return fmt.Errorf("cannot obtain label values for %q, match[]=%q, start=%d, end=%d: %s", labelName, matches, start, end, err)
}
}
w.Header().Set("Content-Type", "application/json")
WriteLabelValuesResponse(w, labelValues)
labelValuesDuration.UpdateDuration(startTime)
return nil
}
func labelValuesWithMatches(labelName string, matches []string, start, end int64, deadline netstorage.Deadline) ([]string, error) {
if len(matches) == 0 {
logger.Panicf("BUG: matches must be non-empty")
}
tagFilterss, err := getTagFilterssFromMatches(matches)
if err != nil {
return nil, err
}
if start >= end {
end = start + defaultStep
}
sq := &storage.SearchQuery{
MinTimestamp: start,
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
rss, err := netstorage.ProcessSearchQuery(sq, false, deadline)
if err != nil {
return nil, fmt.Errorf("cannot fetch data for %q: %s", sq, err)
}
m := make(map[string]struct{})
var mLock sync.Mutex
err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
labelValue := rs.MetricName.GetTagValue(labelName)
if len(labelValue) == 0 {
return
}
mLock.Lock()
m[string(labelValue)] = struct{}{}
mLock.Unlock()
})
if err != nil {
return nil, fmt.Errorf("error when data fetching: %s", err)
}
labelValues := make([]string, 0, len(m))
for labelValue := range m {
labelValues = append(labelValues, labelValue)
}
sort.Strings(labelValues)
return labelValues, nil
}
var labelValuesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/label/{}/values"}`)
// LabelsCountHandler processes /api/v1/labels/count request.
func LabelsCountHandler(w http.ResponseWriter, r *http.Request) error {
startTime := time.Now()
deadline := getDeadline(r)
labelEntries, err := netstorage.GetLabelEntries(deadline)
if err != nil {
return fmt.Errorf(`cannot obtain label entries: %s`, err)
}
w.Header().Set("Content-Type", "application/json")
WriteLabelsCountResponse(w, labelEntries)
labelsCountDuration.UpdateDuration(startTime)
return nil
}
var labelsCountDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/labels/count"}`)
// LabelsHandler processes /api/v1/labels request.
//
// See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names
func LabelsHandler(w http.ResponseWriter, r *http.Request) error {
startTime := time.Now()
deadline := getDeadline(r)
labels, err := netstorage.GetLabels(deadline)
if err != nil {
return fmt.Errorf("cannot obtain labels: %s", err)
}
w.Header().Set("Content-Type", "application/json")
WriteLabelsResponse(w, labels)
labelsDuration.UpdateDuration(startTime)
return nil
}
var labelsDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/labels"}`)
// SeriesCountHandler processes /api/v1/series/count request.
func SeriesCountHandler(w http.ResponseWriter, r *http.Request) error {
startTime := time.Now()
deadline := getDeadline(r)
n, err := netstorage.GetSeriesCount(deadline)
if err != nil {
return fmt.Errorf("cannot obtain series count: %s", err)
}
w.Header().Set("Content-Type", "application/json")
WriteSeriesCountResponse(w, n)
seriesCountDuration.UpdateDuration(startTime)
return nil
}
var seriesCountDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/series/count"}`)
// SeriesHandler processes /api/v1/series request.
//
// See https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers
func SeriesHandler(w http.ResponseWriter, r *http.Request) error {
startTime := time.Now()
ct := currentTime()
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %s", err)
}
matches := r.Form["match[]"]
if len(matches) == 0 {
return fmt.Errorf("missing `match[]` arg")
}
end, err := getTime(r, "end", ct)
if err != nil {
return err
}
// Do not set start to minTimeMsecs by default as Prometheus does,
// since this leads to fetching and scanning all the data from the storage,
// which can take a lot of time for big storages.
// It is better to set start to end-defaultStep by default.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/91
start, err := getTime(r, "start", end-defaultStep)
if err != nil {
return err
}
deadline := getDeadline(r)
tagFilterss, err := getTagFilterssFromMatches(matches)
if err != nil {
return err
}
if start >= end {
end = start + defaultStep
}
sq := &storage.SearchQuery{
MinTimestamp: start,
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
rss, err := netstorage.ProcessSearchQuery(sq, false, deadline)
if err != nil {
return fmt.Errorf("cannot fetch data for %q: %s", sq, err)
}
resultsCh := make(chan *quicktemplate.ByteBuffer)
doneCh := make(chan error)
go func() {
err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
bb := quicktemplate.AcquireByteBuffer()
writemetricNameObject(bb, &rs.MetricName)
resultsCh <- bb
})
close(resultsCh)
doneCh <- err
}()
w.Header().Set("Content-Type", "application/json")
WriteSeriesResponse(w, resultsCh)
// Consume all the data from resultsCh in the event WriteSeriesResponse
// fails to consume all the data.
for bb := range resultsCh {
quicktemplate.ReleaseByteBuffer(bb)
}
err = <-doneCh
if err != nil {
return fmt.Errorf("error during data fetching: %s", err)
}
seriesDuration.UpdateDuration(startTime)
return nil
}
var seriesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/series"}`)
// QueryHandler processes /api/v1/query request.
//
// See https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries
func QueryHandler(w http.ResponseWriter, r *http.Request) error {
startTime := time.Now()
ct := currentTime()
query := r.FormValue("query")
if len(query) == 0 {
return fmt.Errorf("missing `query` arg")
}
start, err := getTime(r, "time", ct)
if err != nil {
return err
}
queryOffset := getLatencyOffsetMilliseconds()
step, err := getDuration(r, "step", queryOffset)
if err != nil {
return err
}
deadline := getDeadline(r)
lookbackDelta, err := getMaxLookback(r)
if err != nil {
return err
}
if len(query) > *maxQueryLen {
return fmt.Errorf(`too long query; got %d bytes; mustn't exceed %d bytes`, len(query), *maxQueryLen)
}
if !getBool(r, "nocache") && ct-start < queryOffset {
// Adjust start time only if `nocache` arg isn't set.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/241
start = ct - queryOffset
}
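// A query that is a plain metric selector with a rollup window, e.g.
// `up{job="node"}[5m]` (hypothetical), is served via the export path below
// instead of full PromQL evaluation.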
if childQuery, windowStr, offsetStr := promql.IsMetricSelectorWithRollup(query); childQuery != "" {
var window int64
if len(windowStr) > 0 {
var err error
window, err = promql.DurationValue(windowStr, step)
if err != nil {
return err
}
}
var offset int64
if len(offsetStr) > 0 {
var err error
offset, err = promql.DurationValue(offsetStr, step)
if err != nil {
return err
}
}
start -= offset
end := start
start = end - window
if err := exportHandler(w, []string{childQuery}, start, end, "promapi", deadline); err != nil {
return err
}
queryDuration.UpdateDuration(startTime)
return nil
}
ec := promql.EvalConfig{
Start: start,
End: start,
Step: step,
Deadline: deadline,
LookbackDelta: lookbackDelta,
}
result, err := promql.Exec(&ec, query, true)
if err != nil {
return fmt.Errorf("cannot execute %q: %s", query, err)
}
w.Header().Set("Content-Type", "application/json")
WriteQueryResponse(w, result)
queryDuration.UpdateDuration(startTime)
return nil
}
var queryDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/query"}`)
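For the rollup shortcut in QueryHandler above, a minimal sketch of the decomposition done by promql.IsMetricSelectorWithRollup; the example selector and the commented results are illustrative assumptions, not outputs verified against this diff:
// A minimal sketch, assuming the promql package from this repository;
// the selector and the expected pieces are illustrative.
childQuery, windowStr, offsetStr := promql.IsMetricSelectorWithRollup(`foo{job="node"}[5m] offset 1h`)
// Presumably childQuery = `foo{job="node"}`, windowStr = "5m", offsetStr = "1h".
// QueryHandler then shifts start back by the offset and exports raw samples
// for childQuery over [start-window, start] via exportHandler.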
// QueryRangeHandler processes /api/v1/query_range request.
//
// See https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries
func QueryRangeHandler(w http.ResponseWriter, r *http.Request) error {
startTime := time.Now()
ct := currentTime()
query := r.FormValue("query")
if len(query) == 0 {
return fmt.Errorf("missing `query` arg")
}
start, err := getTime(r, "start", ct-defaultStep)
if err != nil {
return err
}
end, err := getTime(r, "end", ct)
if err != nil {
return err
}
step, err := getDuration(r, "step", defaultStep)
if err != nil {
return err
}
deadline := getDeadline(r)
mayCache := !getBool(r, "nocache")
lookbackDelta, err := getMaxLookback(r)
if err != nil {
return err
}
// Validate input args.
if len(query) > *maxQueryLen {
return fmt.Errorf(`too long query; got %d bytes; mustn't exceed %d bytes`, len(query), *maxQueryLen)
}
if start > end {
end = start + defaultStep
}
if err := promql.ValidateMaxPointsPerTimeseries(start, end, step); err != nil {
return err
}
if mayCache {
start, end = promql.AdjustStartEnd(start, end, step)
}
ec := promql.EvalConfig{
Start: start,
End: end,
Step: step,
Deadline: deadline,
MayCache: mayCache,
LookbackDelta: lookbackDelta,
}
result, err := promql.Exec(&ec, query, false)
if err != nil {
return fmt.Errorf("cannot execute %q: %s", query, err)
}
queryOffset := getLatencyOffsetMilliseconds()
if ct-end < queryOffset {
result = adjustLastPoints(result)
}
// Remove NaN values as Prometheus does.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/153
removeNaNValuesInplace(result)
w.Header().Set("Content-Type", "application/json")
WriteQueryRangeResponse(w, result)
queryRangeDuration.UpdateDuration(startTime)
return nil
}
func removeNaNValuesInplace(tss []netstorage.Result) {
for i := range tss {
ts := &tss[i]
hasNaNs := false
for _, v := range ts.Values {
if math.IsNaN(v) {
hasNaNs = true
break
}
}
if !hasNaNs {
// Fast path: nothing to remove.
continue
}
// Slow path: remove NaNs.
srcTimestamps := ts.Timestamps
dstValues := ts.Values[:0]
dstTimestamps := ts.Timestamps[:0]
for j, v := range ts.Values {
if math.IsNaN(v) {
continue
}
dstValues = append(dstValues, v)
dstTimestamps = append(dstTimestamps, srcTimestamps[j])
}
ts.Values = dstValues
ts.Timestamps = dstTimestamps
}
}
var queryRangeDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/query_range"}`)
// adjustLastPoints substitutes the last point values with the previous
// point values, since the last points may contain garbage.
func adjustLastPoints(tss []netstorage.Result) []netstorage.Result {
if len(tss) == 0 {
return nil
}
// Search for the last non-NaN value across all the timeseries.
lastNonNaNIdx := -1
for i := range tss {
values := tss[i].Values
j := len(values) - 1
for j >= 0 && math.IsNaN(values[j]) {
j--
}
if j > lastNonNaNIdx {
lastNonNaNIdx = j
}
}
if lastNonNaNIdx == -1 {
// All timeseries contain only NaNs.
return nil
}
// Substitute the values at lastNonNaNIdx and lastNonNaNIdx+1
// with the preceding value in each timeseries.
for i := range tss {
values := tss[i].Values
for j := 0; j < 2; j++ {
idx := lastNonNaNIdx + j
if idx <= 0 || idx >= len(values) || math.IsNaN(values[idx-1]) {
continue
}
values[idx] = values[idx-1]
}
}
return tss
}
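A worked example of the substitution above, using hypothetical data (a single series, no NaNs):
tss := []netstorage.Result{{
	Timestamps: []int64{100, 200, 300, 400, 500},
	Values:     []float64{10, 20, 30, 40, 50},
}}
tss = adjustLastPoints(tss)
// lastNonNaNIdx is 4, so Values[4] is overwritten with Values[3]:
// Values is now [10, 20, 30, 40, 40], replacing the possibly-garbage
// last value with its predecessor.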
func getTime(r *http.Request, argKey string, defaultValue int64) (int64, error) {
argValue := r.FormValue(argKey)
if len(argValue) == 0 {
return defaultValue, nil
}
secs, err := strconv.ParseFloat(argValue, 64)
if err != nil {
// Fall back to parsing the RFC3339 string format.
t, err := time.Parse(time.RFC3339, argValue)
if err != nil {
// Handle the minTime and maxTime values provided by Prometheus.
// See https://github.com/prometheus/client_golang/issues/614
switch argValue {
case prometheusMinTimeFormatted:
return minTimeMsecs, nil
case prometheusMaxTimeFormatted:
return maxTimeMsecs, nil
}
return 0, fmt.Errorf("cannot parse %q=%q: %s", argKey, argValue, err)
}
secs = float64(t.UnixNano()) / 1e9
}
msecs := int64(secs * 1e3)
if msecs < minTimeMsecs {
msecs = 0
}
if msecs > maxTimeMsecs {
msecs = maxTimeMsecs
}
return msecs, nil
}
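Worked examples of the conversions above; these input/output pairs appear in the tests further down:
//   s=1562529662.324        -> 1562529662324 ms
//   s=2019-07-07T20:01:02Z  -> RFC3339 parse -> 1562529662000 ms
//   s=-9223372036.855       -> clamped to minTimeMsecs (0)
//   s=9223372036.855        -> clamped to maxTimeMsecs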
var (
// These constants were obtained from https://github.com/prometheus/prometheus/blob/91d7175eaac18b00e370965f3a8186cc40bf9f55/web/api/v1/api.go#L442
// See https://github.com/prometheus/client_golang/issues/614 for details.
prometheusMinTimeFormatted = time.Unix(math.MinInt64/1000+62135596801, 0).UTC().Format(time.RFC3339Nano)
prometheusMaxTimeFormatted = time.Unix(math.MaxInt64/1000-62135596801, 999999999).UTC().Format(time.RFC3339Nano)
)
const (
// These values prevent overflow when storing msec-precision timestamps in int64.
minTimeMsecs = 0 // use 0 instead of `int64(-1<<63) / 1e6` because the storage engine doesn't actually support negative time
maxTimeMsecs = int64(1<<63-1) / 1e6
)
func getDuration(r *http.Request, argKey string, defaultValue int64) (int64, error) {
argValue := r.FormValue(argKey)
if len(argValue) == 0 {
return defaultValue, nil
}
secs, err := strconv.ParseFloat(argValue, 64)
if err != nil {
// Fall back to parsing the Go duration string format.
d, err := time.ParseDuration(argValue)
if err != nil {
return 0, fmt.Errorf("cannot parse %q=%q: %s", argKey, argValue, err)
}
secs = d.Seconds()
}
msecs := int64(secs * 1e3)
if msecs <= 0 || msecs > maxDurationMsecs {
return 0, fmt.Errorf("%q=%dms is out of allowed range [%d ... %d]", argKey, msecs, 0, int64(maxDurationMsecs))
}
return msecs, nil
}
const maxDurationMsecs = 100 * 365 * 24 * 3600 * 1000
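Similarly, a few worked examples of how getDuration interprets its argument (the step values are illustrative):
//   step=15   -> ParseFloat succeeds: 15 * 1e3 = 15000 ms
//   step=0.5  -> 500 ms
//   step=15s  -> ParseFloat fails; time.ParseDuration("15s") -> 15000 ms
//   step=0    -> rejected: msecs <= 0 is outside the allowed range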
func getMaxLookback(r *http.Request) (int64, error) {
d := int64(*maxLookback / time.Millisecond)
return getDuration(r, "max_lookback", d)
}
func getDeadline(r *http.Request) netstorage.Deadline {
d, err := getDuration(r, "timeout", 0)
if err != nil {
d = 0
}
dMax := int64(maxQueryDuration.Seconds() * 1e3)
if d <= 0 || d > dMax {
d = dMax
}
timeout := time.Duration(d) * time.Millisecond
return netstorage.NewDeadline(timeout)
}
func getBool(r *http.Request, argKey string) bool {
argValue := r.FormValue(argKey)
switch strings.ToLower(argValue) {
case "", "0", "f", "false", "no":
return false
default:
return true
}
}
func currentTime() int64 {
return time.Now().Unix() * 1e3
}
func getTagFilterssFromMatches(matches []string) ([][]storage.TagFilter, error) {
tagFilterss := make([][]storage.TagFilter, 0, len(matches))
for _, match := range matches {
tagFilters, err := promql.ParseMetricSelector(match)
if err != nil {
return nil, fmt.Errorf("cannot parse %q: %s", match, err)
}
tagFilterss = append(tagFilterss, tagFilters)
}
return tagFilterss, nil
}
func getLatencyOffsetMilliseconds() int64 {
d := int64(*latencyOffset / time.Millisecond)
if d <= 1000 {
d = 1000
}
return d
}
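Putting the parameter handling above together, a self-contained client sketch; the listen address localhost:8428 is an assumption for illustration and does not come from this diff:
package main
import (
	"fmt"
	"io/ioutil"
	"net/http"
)
func main() {
	// start/end accept unix seconds (optionally fractional) or RFC3339;
	// step and timeout accept a float in seconds or a Go duration string.
	resp, err := http.Get("http://localhost:8428/api/v1/query_range" +
		"?query=up&start=1562529662&end=1562533262&step=15s&timeout=30s")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}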

View File

@@ -0,0 +1,115 @@
package prometheus
import (
"fmt"
"math"
"net/http"
"net/url"
"reflect"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
)
func TestRemoveNaNValuesInplace(t *testing.T) {
f := func(tss []netstorage.Result, tssExpected []netstorage.Result) {
t.Helper()
removeNaNValuesInplace(tss)
if !reflect.DeepEqual(tss, tssExpected) {
t.Fatalf("unexpected result; got %v; want %v", tss, tssExpected)
}
}
nan := math.NaN()
f(nil, nil)
f([]netstorage.Result{
{
Timestamps: []int64{100, 200, 300},
Values: []float64{1, 2, 3},
},
{
Timestamps: []int64{100, 200, 300, 400},
Values: []float64{nan, nan, 3, nan},
},
}, []netstorage.Result{
{
Timestamps: []int64{100, 200, 300},
Values: []float64{1, 2, 3},
},
{
Timestamps: []int64{300},
Values: []float64{3},
},
})
}
func TestGetTimeSuccess(t *testing.T) {
f := func(s string, timestampExpected int64) {
t.Helper()
urlStr := fmt.Sprintf("http://foo.bar/baz?s=%s", url.QueryEscape(s))
r, err := http.NewRequest("GET", urlStr, nil)
if err != nil {
t.Fatalf("unexpected error in NewRequest: %s", err)
}
// Verify defaultValue
ts, err := getTime(r, "foo", 123)
if err != nil {
t.Fatalf("unexpected error when obtaining default time from getTime(%q): %s", s, err)
}
if ts != 123 {
t.Fatalf("unexpected default value for getTime(%q); got %d; want %d", s, ts, 123)
}
// Verify timestampExpected
ts, err = getTime(r, "s", 123)
if err != nil {
t.Fatalf("unexpected error in getTime(%q): %s", s, err)
}
if ts != timestampExpected {
t.Fatalf("unexpected timestamp for getTime(%q); got %d; want %d", s, ts, timestampExpected)
}
}
f("2019-07-07T20:01:02Z", 1562529662000)
f("2019-07-07T20:47:40+03:00", 1562521660000)
f("-292273086-05-16T16:47:06Z", minTimeMsecs)
f("292277025-08-18T07:12:54.999999999Z", maxTimeMsecs)
f("1562529662.324", 1562529662324)
f("-9223372036.854", minTimeMsecs)
f("-9223372036.855", minTimeMsecs)
f("9223372036.855", maxTimeMsecs)
}
func TestGetTimeError(t *testing.T) {
f := func(s string) {
t.Helper()
urlStr := fmt.Sprintf("http://foo.bar/baz?s=%s", url.QueryEscape(s))
r, err := http.NewRequest("GET", urlStr, nil)
if err != nil {
t.Fatalf("unexpected error in NewRequest: %s", err)
}
// Verify defaultValue
ts, err := getTime(r, "foo", 123)
if err != nil {
t.Fatalf("unexpected error when obtaining default time from getTime(%q): %s", s, err)
}
if ts != 123 {
t.Fatalf("unexpected default value for getTime(%q); got %d; want %d", s, ts, 123)
}
// Verify that getTime returns an error
_, err = getTime(r, "s", 123)
if err == nil {
t.Fatalf("expecting non-nil error in getTime(%q)", s)
}
}
f("foo")
f("2019-07-07T20:01:02Zisdf")
f("2019-07-07T20:47:40+03:00123")
f("-292273086-05-16T16:47:07Z")
f("292277025-08-18T07:12:54.999999998Z")
}

View File

@@ -0,0 +1,33 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
) %}
{% stripspace %}
QueryRangeResponse generates response for /api/v1/query_range.
See https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries
{% func QueryRangeResponse(rs []netstorage.Result) %}
{
"status":"success",
"data":{
"resultType":"matrix",
"result":[
{% if len(rs) > 0 %}
{%= queryRangeLine(&rs[0]) %}
{% code rs = rs[1:] %}
{% for i := range rs %}
,{%= queryRangeLine(&rs[i]) %}
{% endfor %}
{% endif %}
]
}
}
{% endfunc %}
{% func queryRangeLine(r *netstorage.Result) %}
{
"metric": {%= metricNameObject(&r.MetricName) %},
"values": {%= valuesWithTimestamps(r.Values, r.Timestamps) %}
}
{% endfunc %}
{% endstripspace %}
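For reference, the template above renders JSON of roughly the following shape (metric names, timestamps and values are illustrative):
{
  "status":"success",
  "data":{
    "resultType":"matrix",
    "result":[
      {"metric":{"__name__":"up","job":"node"},"values":[[1562529662,"1"],[1562529677,"1"]]}
    ]
  }
}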

View File

@@ -0,0 +1,118 @@
// Code generated by qtc from "query_range_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vmselect/prometheus/query_range_response.qtpl:1
package prometheus
//line app/vmselect/prometheus/query_range_response.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
)
// QueryRangeResponse generates response for /api/v1/query_range. See https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries
//line app/vmselect/prometheus/query_range_response.qtpl:8
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/query_range_response.qtpl:8
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/query_range_response.qtpl:8
func StreamQueryRangeResponse(qw422016 *qt422016.Writer, rs []netstorage.Result) {
//line app/vmselect/prometheus/query_range_response.qtpl:8
qw422016.N().S(`{"status":"success","data":{"resultType":"matrix","result":[`)
//line app/vmselect/prometheus/query_range_response.qtpl:14
if len(rs) > 0 {
//line app/vmselect/prometheus/query_range_response.qtpl:15
streamqueryRangeLine(qw422016, &rs[0])
//line app/vmselect/prometheus/query_range_response.qtpl:16
rs = rs[1:]
//line app/vmselect/prometheus/query_range_response.qtpl:17
for i := range rs {
//line app/vmselect/prometheus/query_range_response.qtpl:17
qw422016.N().S(`,`)
//line app/vmselect/prometheus/query_range_response.qtpl:18
streamqueryRangeLine(qw422016, &rs[i])
//line app/vmselect/prometheus/query_range_response.qtpl:19
}
//line app/vmselect/prometheus/query_range_response.qtpl:20
}
//line app/vmselect/prometheus/query_range_response.qtpl:20
qw422016.N().S(`]}}`)
//line app/vmselect/prometheus/query_range_response.qtpl:24
}
//line app/vmselect/prometheus/query_range_response.qtpl:24
func WriteQueryRangeResponse(qq422016 qtio422016.Writer, rs []netstorage.Result) {
//line app/vmselect/prometheus/query_range_response.qtpl:24
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/query_range_response.qtpl:24
StreamQueryRangeResponse(qw422016, rs)
//line app/vmselect/prometheus/query_range_response.qtpl:24
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/query_range_response.qtpl:24
}
//line app/vmselect/prometheus/query_range_response.qtpl:24
func QueryRangeResponse(rs []netstorage.Result) string {
//line app/vmselect/prometheus/query_range_response.qtpl:24
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/query_range_response.qtpl:24
WriteQueryRangeResponse(qb422016, rs)
//line app/vmselect/prometheus/query_range_response.qtpl:24
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/query_range_response.qtpl:24
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/query_range_response.qtpl:24
return qs422016
//line app/vmselect/prometheus/query_range_response.qtpl:24
}
//line app/vmselect/prometheus/query_range_response.qtpl:26
func streamqueryRangeLine(qw422016 *qt422016.Writer, r *netstorage.Result) {
//line app/vmselect/prometheus/query_range_response.qtpl:26
qw422016.N().S(`{"metric":`)
//line app/vmselect/prometheus/query_range_response.qtpl:28
streammetricNameObject(qw422016, &r.MetricName)
//line app/vmselect/prometheus/query_range_response.qtpl:28
qw422016.N().S(`,"values":`)
//line app/vmselect/prometheus/query_range_response.qtpl:29
streamvaluesWithTimestamps(qw422016, r.Values, r.Timestamps)
//line app/vmselect/prometheus/query_range_response.qtpl:29
qw422016.N().S(`}`)
//line app/vmselect/prometheus/query_range_response.qtpl:31
}
//line app/vmselect/prometheus/query_range_response.qtpl:31
func writequeryRangeLine(qq422016 qtio422016.Writer, r *netstorage.Result) {
//line app/vmselect/prometheus/query_range_response.qtpl:31
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/query_range_response.qtpl:31
streamqueryRangeLine(qw422016, r)
//line app/vmselect/prometheus/query_range_response.qtpl:31
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/query_range_response.qtpl:31
}
//line app/vmselect/prometheus/query_range_response.qtpl:31
func queryRangeLine(r *netstorage.Result) string {
//line app/vmselect/prometheus/query_range_response.qtpl:31
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/query_range_response.qtpl:31
writequeryRangeLine(qb422016, r)
//line app/vmselect/prometheus/query_range_response.qtpl:31
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/query_range_response.qtpl:31
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/query_range_response.qtpl:31
return qs422016
//line app/vmselect/prometheus/query_range_response.qtpl:31
}

View File

@@ -0,0 +1,32 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
) %}
{% stripspace %}
QueryResponse generates response for /api/v1/query.
See https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries
{% func QueryResponse(rs []netstorage.Result) %}
{
"status":"success",
"data":{
"resultType":"vector",
"result":[
{% if len(rs) > 0 %}
{
"metric": {%= metricNameObject(&rs[0].MetricName) %},
"value": {%= metricRow(rs[0].Timestamps[0], rs[0].Values[0]) %}
}
{% code rs = rs[1:] %}
{% for i := range rs %}
{% code r := &rs[i] %}
,{
"metric": {%= metricNameObject(&r.MetricName) %},
"value": {%= metricRow(r.Timestamps[0], r.Values[0]) %}
}
{% endfor %}
{% endif %}
]
}
}
{% endfunc %}
{% endstripspace %}

View File

@@ -0,0 +1,94 @@
// Code generated by qtc from "query_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vmselect/prometheus/query_response.qtpl:1
package prometheus
//line app/vmselect/prometheus/query_response.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
)
// QueryResponse generates response for /api/v1/query. See https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries
//line app/vmselect/prometheus/query_response.qtpl:8
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/query_response.qtpl:8
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/query_response.qtpl:8
func StreamQueryResponse(qw422016 *qt422016.Writer, rs []netstorage.Result) {
//line app/vmselect/prometheus/query_response.qtpl:8
qw422016.N().S(`{"status":"success","data":{"resultType":"vector","result":[`)
//line app/vmselect/prometheus/query_response.qtpl:14
if len(rs) > 0 {
//line app/vmselect/prometheus/query_response.qtpl:14
qw422016.N().S(`{"metric":`)
//line app/vmselect/prometheus/query_response.qtpl:16
streammetricNameObject(qw422016, &rs[0].MetricName)
//line app/vmselect/prometheus/query_response.qtpl:16
qw422016.N().S(`,"value":`)
//line app/vmselect/prometheus/query_response.qtpl:17
streammetricRow(qw422016, rs[0].Timestamps[0], rs[0].Values[0])
//line app/vmselect/prometheus/query_response.qtpl:17
qw422016.N().S(`}`)
//line app/vmselect/prometheus/query_response.qtpl:19
rs = rs[1:]
//line app/vmselect/prometheus/query_response.qtpl:20
for i := range rs {
//line app/vmselect/prometheus/query_response.qtpl:21
r := &rs[i]
//line app/vmselect/prometheus/query_response.qtpl:21
qw422016.N().S(`,{"metric":`)
//line app/vmselect/prometheus/query_response.qtpl:23
streammetricNameObject(qw422016, &r.MetricName)
//line app/vmselect/prometheus/query_response.qtpl:23
qw422016.N().S(`,"value":`)
//line app/vmselect/prometheus/query_response.qtpl:24
streammetricRow(qw422016, r.Timestamps[0], r.Values[0])
//line app/vmselect/prometheus/query_response.qtpl:24
qw422016.N().S(`}`)
//line app/vmselect/prometheus/query_response.qtpl:26
}
//line app/vmselect/prometheus/query_response.qtpl:27
}
//line app/vmselect/prometheus/query_response.qtpl:27
qw422016.N().S(`]}}`)
//line app/vmselect/prometheus/query_response.qtpl:31
}
//line app/vmselect/prometheus/query_response.qtpl:31
func WriteQueryResponse(qq422016 qtio422016.Writer, rs []netstorage.Result) {
//line app/vmselect/prometheus/query_response.qtpl:31
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/query_response.qtpl:31
StreamQueryResponse(qw422016, rs)
//line app/vmselect/prometheus/query_response.qtpl:31
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/query_response.qtpl:31
}
//line app/vmselect/prometheus/query_response.qtpl:31
func QueryResponse(rs []netstorage.Result) string {
//line app/vmselect/prometheus/query_response.qtpl:31
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/query_response.qtpl:31
WriteQueryResponse(qb422016, rs)
//line app/vmselect/prometheus/query_response.qtpl:31
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/query_response.qtpl:31
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/query_response.qtpl:31
return qs422016
//line app/vmselect/prometheus/query_response.qtpl:31
}

View File

@@ -0,0 +1,9 @@
{% stripspace %}
SeriesCountResponse generates response for /api/v1/series/count.
{% func SeriesCountResponse(n uint64) %}
{
"status":"success",
"data":[{%dl int64(n) %}]
}
{% endfunc %}
{% endstripspace %}

View File

@@ -0,0 +1,57 @@
// Code generated by qtc from "series_count_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// SeriesCountResponse generates response for /api/v1/series/count.
//line app/vmselect/prometheus/series_count_response.qtpl:3
package prometheus
//line app/vmselect/prometheus/series_count_response.qtpl:3
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/series_count_response.qtpl:3
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/series_count_response.qtpl:3
func StreamSeriesCountResponse(qw422016 *qt422016.Writer, n uint64) {
//line app/vmselect/prometheus/series_count_response.qtpl:3
qw422016.N().S(`{"status":"success","data":[`)
//line app/vmselect/prometheus/series_count_response.qtpl:6
qw422016.N().DL(int64(n))
//line app/vmselect/prometheus/series_count_response.qtpl:6
qw422016.N().S(`]}`)
//line app/vmselect/prometheus/series_count_response.qtpl:8
}
//line app/vmselect/prometheus/series_count_response.qtpl:8
func WriteSeriesCountResponse(qq422016 qtio422016.Writer, n uint64) {
//line app/vmselect/prometheus/series_count_response.qtpl:8
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/series_count_response.qtpl:8
StreamSeriesCountResponse(qw422016, n)
//line app/vmselect/prometheus/series_count_response.qtpl:8
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/series_count_response.qtpl:8
}
//line app/vmselect/prometheus/series_count_response.qtpl:8
func SeriesCountResponse(n uint64) string {
//line app/vmselect/prometheus/series_count_response.qtpl:8
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/series_count_response.qtpl:8
WriteSeriesCountResponse(qb422016, n)
//line app/vmselect/prometheus/series_count_response.qtpl:8
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/series_count_response.qtpl:8
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/series_count_response.qtpl:8
return qs422016
//line app/vmselect/prometheus/series_count_response.qtpl:8
}

View File

@@ -0,0 +1,24 @@
{% import (
"github.com/valyala/quicktemplate"
) %}
{% stripspace %}
SeriesResponse generates response for /api/v1/series.
See https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers
{% func SeriesResponse(resultsCh <-chan *quicktemplate.ByteBuffer) %}
{
"status":"success",
"data":[
{% code bb, ok := <-resultsCh %}
{% if ok %}
{%z= bb.B %}
{% code quicktemplate.ReleaseByteBuffer(bb) %}
{% for bb := range resultsCh %}
,{%z= bb.B %}
{% code quicktemplate.ReleaseByteBuffer(bb) %}
{% endfor %}
{% endif %}
]
}
{% endfunc %}
{% endstripspace %}
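The resulting /api/v1/series payload is a flat list of the label-set objects produced by writemetricNameObject in the series handler, e.g. (illustrative values):
{
  "status":"success",
  "data":[
    {"__name__":"up","job":"node"},
    {"__name__":"up","job":"prometheus"}
  ]
}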

View File

@@ -0,0 +1,83 @@
// Code generated by qtc from "series_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vmselect/prometheus/series_response.qtpl:1
package prometheus
//line app/vmselect/prometheus/series_response.qtpl:1
import (
"github.com/valyala/quicktemplate"
)
// SeriesResponse generates response for /api/v1/series. See https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers
//line app/vmselect/prometheus/series_response.qtpl:8
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/series_response.qtpl:8
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/series_response.qtpl:8
func StreamSeriesResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/series_response.qtpl:8
qw422016.N().S(`{"status":"success","data":[`)
//line app/vmselect/prometheus/series_response.qtpl:12
bb, ok := <-resultsCh
//line app/vmselect/prometheus/series_response.qtpl:13
if ok {
//line app/vmselect/prometheus/series_response.qtpl:14
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/series_response.qtpl:15
quicktemplate.ReleaseByteBuffer(bb)
//line app/vmselect/prometheus/series_response.qtpl:16
for bb := range resultsCh {
//line app/vmselect/prometheus/series_response.qtpl:16
qw422016.N().S(`,`)
//line app/vmselect/prometheus/series_response.qtpl:17
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/series_response.qtpl:18
quicktemplate.ReleaseByteBuffer(bb)
//line app/vmselect/prometheus/series_response.qtpl:19
}
//line app/vmselect/prometheus/series_response.qtpl:20
}
//line app/vmselect/prometheus/series_response.qtpl:20
qw422016.N().S(`]}`)
//line app/vmselect/prometheus/series_response.qtpl:23
}
//line app/vmselect/prometheus/series_response.qtpl:23
func WriteSeriesResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/series_response.qtpl:23
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/series_response.qtpl:23
StreamSeriesResponse(qw422016, resultsCh)
//line app/vmselect/prometheus/series_response.qtpl:23
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/series_response.qtpl:23
}
//line app/vmselect/prometheus/series_response.qtpl:23
func SeriesResponse(resultsCh <-chan *quicktemplate.ByteBuffer) string {
//line app/vmselect/prometheus/series_response.qtpl:23
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/series_response.qtpl:23
WriteSeriesResponse(qb422016, resultsCh)
//line app/vmselect/prometheus/series_response.qtpl:23
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/series_response.qtpl:23
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/series_response.qtpl:23
return qs422016
//line app/vmselect/prometheus/series_response.qtpl:23
}

View File

@@ -0,0 +1,47 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
) %}
{% stripspace %}
{% func metricNameObject(mn *storage.MetricName) %}
{
{% if len(mn.MetricGroup) > 0 %}
"__name__":{%qz= mn.MetricGroup %}{% if len(mn.Tags) > 0 %},{% endif %}
{% endif %}
{% for j := range mn.Tags %}
{% code tag := &mn.Tags[j] %}
{%qz= tag.Key %}:{%qz= tag.Value %}{% if j+1 < len(mn.Tags) %},{% endif %}
{% endfor %}
}
{% endfunc %}
{% func metricRow(timestamp int64, value float64) %}
[{%f= float64(timestamp)/1e3 %},"{%f= value %}"]
{% endfunc %}
{% func valuesWithTimestamps(values []float64, timestamps []int64) %}
[
{% if len(values) == 0 %}
{% return %}
{% endif %}
{% code /* metricRow is inlined here as a performance optimization */ %}
[{%f= float64(timestamps[0])/1e3 %},"{%f= values[0] %}"]
{% code
timestamps = timestamps[1:]
values = values[1:]
%}
{% if len(values) > 0 %}
{%code
// Remove bounds check inside the loop below
_ = timestamps[len(values)-1]
%}
{% for i, v := range values %}
{% code /* metricRow is inlined here as a performance optimization */ %}
,[{%f= float64(timestamps[i])/1e3 %},"{%f= v %}"]
{% endfor %}
{% endif %}
]
{% endfunc %}
{% endstripspace %}
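Worked examples of the helpers above (label names and values are illustrative):
metricNameObject             -> {"__name__":"foo","instance":"host:9100"}
metricRow(1562529662324, 42) -> [1562529662.324,"42"]
The timestamp is converted from milliseconds to fractional seconds, and the value is emitted as a quoted string, matching the Prometheus API format.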

Some files were not shown because too many files have changed in this diff.