Compare commits


1216 Commits

Author SHA1 Message Date
Aliaksandr Valialkin
0ec43cb8b0 docs/Articles.md: added a link to https://medium.com/@IG1.com/sismology-iguana-solutions-monitoring-system-f46e4170447f 2020-05-28 20:10:21 +03:00
Aliaksandr Valialkin
213c3903c9 docs/Cluster-VictoriaMetrics.md: mention that opentsdb/api/put handler is disabled by default 2020-05-28 14:28:06 +03:00
Aliaksandr Valialkin
a7797dae09 lib/storage: fix Graphite wildcard matching, which was broken in v1.36.0
v1.36.0 always returns empty responses for Graphite wildcards like the following

   {__name__=~"foo\\.[^.]*\\.bar\\.baz"}

Temporary workaround for v1.36.0 is to add `[^.]*` to the end of the regexp.
2020-05-28 12:03:49 +03:00
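For clarity, applying the workaround described above (appending `[^.]*` to the regexp) turns the example filter into:

    {__name__=~"foo\\.[^.]*\\.bar\\.baz[^.]*"}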
Aliaksandr Valialkin
d186472081 lib/storage: improve search speed for time series matching Graphite wildcards such as foo.*.bar.baz
Add an index for reversed Graphite-like metric names with dots. Use this index during search for filters
like `__name__=~"foo\\.[^.]*\\.bar\\.baz"` which end with a non-empty suffix containing dots, i.e. `.bar.baz` in this case.

This change may "hide" historical time series during queries. The workaround is to add `[.]*` to the end of the regexp label filter,
i.e. "foo\\.[^.]*\\.bar\\.baz" should be substituted with "foo\\.[^.]*\\.bar\\.baz[.]*".
2020-05-27 21:45:52 +03:00
Aliaksandr Valialkin
7e2669f733 vendor: make vendor-update 2020-05-27 18:40:53 +03:00
Aliaksandr Valialkin
ff6d093e1b docs/Cluster-VictoriaMetrics.md: mention that nginx can be used as a load balancer in front of vminsert and vmselect 2020-05-27 18:10:08 +03:00
Aliaksandr Valialkin
8311193293 docs: refresh docs about replication support 2020-05-27 17:48:10 +03:00
Aliaksandr Valialkin
80609fdf35 docs/Cluster-VictoriaMetrics.md: sync with upstream docs 2020-05-27 17:40:01 +03:00
Aliaksandr Valialkin
d7291487be vendor: make vendor-update 2020-05-25 00:06:57 +03:00
Aliaksandr Valialkin
f5a4731412 lib/httpserver: properly set status code for empty response 2020-05-24 23:55:28 +03:00
Aliaksandr Valialkin
947009f459 lib/httpserver: fix compression for static files 2020-05-24 22:17:21 +03:00
Aliaksandr Valialkin
4cf7238b73 docs/Single-server-VictoriaMetrics.md: add a video to Zerodha talk about monitoring k8s with VictoriaMetrics 2020-05-24 15:51:56 +03:00
Aliaksandr Valialkin
c602284a99 lib/promscrape: mention about -promscrape.maxScrapeSize in the error message when target returns too big response 2020-05-24 14:41:14 +03:00
Aliaksandr Valialkin
2f35cf13c6 docs/Cluster-VictoriaMetrics.md: mention that cluster components may be monitored with vmagent 2020-05-23 14:29:50 +03:00
Aliaksandr Valialkin
b4103e055a docs/CaseStudies.md: add a link to a post about VictoriaMetrics histograms in Zerodha case study 2020-05-23 12:45:00 +03:00
Aliaksandr Valialkin
dde29c3c18 docs/CaseStudies.md: add Zerodha case based on monitoring K8s with VictoriaMetrics slides at https://docs.google.com/presentation/d/1g7yUyVEaAp4tPuRy-MZbPXKqJ1z78_5VKuV841aQfsg/edit 2020-05-23 12:41:25 +03:00
Aliaksandr Valialkin
b3fcd726e3 lib/httpserver: do not recompress already compressed response
This should help with the vmauth issue - https://github.com/VictoriaMetrics/VictoriaMetrics/issues/514
2020-05-22 16:45:04 +03:00
Aliaksandr Valialkin
5b6a9675d8 app/vmauth: fix make run-vmauth command 2020-05-22 16:45:02 +03:00
Aliaksandr Valialkin
aa647637bf docs/Single-server-VictoriaMetrics.md: mention about vmauth in Security section 2020-05-21 23:47:56 +03:00
Aliaksandr Valialkin
84860167d0 docs/Cluster-VictoriaMetrics.md: mention about vmauth service in Multitenancy chapter 2020-05-21 22:54:19 +03:00
Aliaksandr Valialkin
0101a2d7ca docs/Single-server-VictoriaMetrics.md: sync with single-node README.md 2020-05-21 20:45:39 +03:00
Roman Khavronenko
d2cab369b2 Minor additions to single version Readme (#511)
* docs/Single-server-VictoriaMetrics.md: add link to Wiki page so it may get more attention

* docs/Single-server-VictoriaMetrics.md: mention case for changing `-retentionPeriod` setting
2020-05-21 17:48:42 +03:00
Aliaksandr Valialkin
8905bc2a40 app/vmagent: check for error returned from flag.Set 2020-05-21 16:31:14 +03:00
Aliaksandr Valialkin
f9847352b4 app/vmagent: add -dryRun option for checking all the configs mentioned in command-line flags without running vmagent
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/362
2020-05-21 15:23:27 +03:00
Aliaksandr Valialkin
d1a9d8aa1c lib/promscrape: add -promscrape.config.dryRun flag for checking -promscrape.config for errors or unsupported options
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/508
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/362
2020-05-21 14:55:11 +03:00
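A hedged usage sketch for the two dry-run modes above; the binary names and config path are illustrative:

    # vmagent: check all the configs referenced by command-line flags
    ./vmagent-prod -promscrape.config=/etc/prometheus/prometheus.yml -dryRun

    # check only -promscrape.config for errors or unsupported options
    ./victoria-metrics-prod -promscrape.config=/etc/prometheus/prometheus.yml -promscrape.config.dryRun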
Aliaksandr Valialkin
b93b01bc6d docs/vmagent.md: sync with app/vmagent/README.md 2020-05-21 12:11:15 +03:00
Aliaksandr Valialkin
dbd8beccfa app/vmselect/promql: add ascent_over_time(m[d]) and descent_over_time(m[d]) functions
These functions could be useful in GPS tracking apps for calculating the total height gain/loss
over the given duration `d`.
2020-05-21 12:07:48 +03:00
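An illustrative pair of queries for the new functions, assuming a hypothetical `gps_height_meters` gauge:

    ascent_over_time(gps_height_meters[1h])
    descent_over_time(gps_height_meters[1h])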
kreedom
6b23df2bec vmalert add quotes escape function (#510)
* vmalert add quotes escape function

Co-authored-by: kreedom
2020-05-20 22:20:31 +03:00
Aaron France
619d4959c7 Update README.md 2020-05-20 09:04:31 +03:00
Aliaksandr Valialkin
70ea4e28a7 app/vmselect/promql: update numbers after the upgrade of github.com/VictoriaMetrics/metrics from v1.11.2 to v1.11.3 2020-05-20 03:06:23 +03:00
Aliaksandr Valialkin
74a2943030 vendor: update github.com/VictoriaMetrics/metrics from v1.11.2 to v1.11.3 2020-05-20 02:55:11 +03:00
faceair
b3ec0fb5e2 keep debug symbols (#438)
Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
2020-05-20 01:21:46 +03:00
Roman Khavronenko
9a3afea123 dashboards: update troubleshooting row (#505)
* Slow metrics load panel was removed since it is hard to interpret without
additional metrics and stats;
* Slow inserts panel was updated to display the percentage of slow inserts compared
to the total number of inserts to show the real impact.
2020-05-20 00:48:45 +03:00
Aliaksandr Valialkin
7705c19720 docs/MetricsQL.md: add a link to https://medium.com/@valyala/promql-tutorial-for-beginners-9ab455142085 2020-05-19 23:20:13 +03:00
Aliaksandr Valialkin
80c18b7275 docs/vmagent.md: mention an alternative to refresh_interval option in scrape configs 2020-05-19 23:10:06 +03:00
Aliaksandr Valialkin
cf87b810b7 lib/promscrape: add -promscrape.discovery.concurrency and -promscrape.discovery.concurrentWaitTime flags for tuning the number of concurrent requests to autodiscovery API servers at Consul or Kubernetes 2020-05-19 17:35:53 +03:00
Aliaksandr Valialkin
538fdfe133 app/vmselect/promql: move common code from aggrFuncOutliersK and newAggrFuncRangeTopK into getRangeTopKTimeseries 2020-05-19 16:11:14 +03:00
Aliaksandr Valialkin
f52769f6ee app/vmselect/promql: fix outliersk calculations 2020-05-19 14:44:53 +03:00
Aliaksandr Valialkin
e6a782498a docs/Quick-Start.md: mention that vmagent can be used instead of Prometheus in most cases 2020-05-19 14:09:27 +03:00
Aliaksandr Valialkin
a441cdd1d9 app/vmselect/promql: add outliersk(N, m) aggregate function for anomaly detection across groups of similar time series 2020-05-19 13:52:36 +03:00
Aliaksandr Valialkin
d0f08b4a58 app/vmalert/notifier: go fmt 2020-05-19 12:59:46 +03:00
Roman Khavronenko
aae8064285 dashboards: updates and fixes (#499)
The new update introduces a new row "Troubleshooting" that
contains panels for churn rate and slow-queries/inserts/loads metrics. This row is supposed to reveal the cause of low performance or other issues.

Panels for storage were updated with "bytes-per-datapoint" and "remaining disk size" panels.
2020-05-19 11:51:02 +03:00
kreedom
7e173655ba vmalert - add expr to variables, add escape functions (#495)
* vmalert - add expr to variables, add escape functions

Co-authored-by: kreedom
2020-05-18 11:55:16 +03:00
Roman Khavronenko
92212f04da vmalert: avoid sending resolves for pending alerts (#498)
Before this change we were sending notifications to the notifier
if one of the following conditions was met:
* alert is in Fire state
* alert is in Inactive state

We were sending Inactive notifications to resolve alerts ASAP.
Unfortunately, we were also sending resolves for Pending alerts that became
Inactive, which is wrong.

With this change we delete an alert from the active list if
it was Pending and became Inactive. This way we now
have Inactive alerts only if they were in the Fire state before.
See the test change for an example.
2020-05-17 15:13:22 +01:00
Roman Khavronenko
de60ad0cd6 vmalert: fix potential race during configuration reloads (#497)
Configuration reload and rules evaluation can't be executed
at the same time now. This may make reloads take longer but
prevents potential races.
2020-05-17 15:12:09 +01:00
Aliaksandr Valialkin
7a8ef517ae docs/Articles.md: add https://www.robustperception.io/evaluating-performance-and-correctness to third-party posts 2020-05-17 00:35:44 +03:00
Aliaksandr Valialkin
d61bac9fd9 deployment/docker: update Go builder from v1.14.2 to v1.14.3
This should fix the following issues found in Go v1.14.2.
See https://github.com/golang/go/issues?q=milestone%3AGo1.14.3+label%3ACherryPickApproved for details.
2020-05-16 22:55:27 +03:00
Aliaksandr Valialkin
eac3da478e app/vmalert: run make quicktemplate-gen from the root dir of the repository 2020-05-16 22:46:02 +03:00
Aliaksandr Valialkin
0890c780c2 docs/Single-server-VictoriaMetrics.md: put contact us email to the top of the page 2020-05-16 22:36:59 +03:00
Aliaksandr Valialkin
fd1a6ce9ae docs/Single-server-VictoriaMetrics.md: add Replication and Backups sections 2020-05-16 22:27:48 +03:00
Aliaksandr Valialkin
9b90c841c6 docs/Cluster-VictoriaMetrics.md: add missing endpoints to the list: api/v1/import/csv and api/v1/status/tsdb 2020-05-16 22:13:25 +03:00
Aliaksandr Valialkin
93c87d28f6 all: print --help output to stdout instead of stderr
This is easier to grep and pipe
2020-05-16 11:59:33 +03:00
Aliaksandr Valialkin
23c55181ef docs/Quick-Start.md: update old link to Docker hub to new link 2020-05-16 10:23:08 +03:00
Aliaksandr Valialkin
b19ca3eb5f lib/storage: do not increment vm_slow_metric_name_loads_total counter for metric_ids which shouldnt be prefetched, since this may mislead users 2020-05-16 10:21:17 +03:00
Aliaksandr Valialkin
4e850cd6a7 lib/persistentqueue: a follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/484 2020-05-16 09:31:46 +03:00
Aliaksandr Valialkin
8cb35974af app/vmrestore: document better that vmrestore works like rsync --delete, i.e. it deletes files in -storageDataPath which are missing in the backup 2020-05-16 09:22:17 +03:00
肖贝贝
a0380a0a91 fix: vmagent multiple queues may become one because of a sync bug (#484)
Co-authored-by: xiaobeibei <xiaobeibei@bigo.sg>
2020-05-16 09:19:52 +03:00
Aliaksandr Valialkin
3a68c47de0 app/vmagent/Makefile: fix make run-vmagent rule 2020-05-15 19:35:10 +03:00
Aliaksandr Valialkin
697b6af10f app/vmagent/remotewrite: remove unused import after the commit 93267f143f 2020-05-15 17:42:19 +03:00
Aliaksandr Valialkin
93267f143f app/vmagent/remotewrite: allow ingesting time series with multiple samples at once
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/481
2020-05-15 17:36:36 +03:00
Aliaksandr Valialkin
3412d5d138 lib/backup: remove misleading -dst mention in error message
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/482
2020-05-15 17:13:37 +03:00
Aliaksandr Valialkin
27f7cca7ff lib/backup: download only the remaining parts of partially downloaded files after vmrestore restart
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/487
2020-05-15 17:03:33 +03:00
Aliaksandr Valialkin
cf38c3c62f .github/workflows: an attempt to fix loading of golangci-lint 2020-05-15 15:06:24 +03:00
Aliaksandr Valialkin
c56b66210f vendor: make vendor-update 2020-05-15 15:02:10 +03:00
Aliaksandr Valialkin
82ffbcb9a6 app/vmstorage: add vm_slow_metric_name_loads_total metric, which could be used as an indicator when more RAM is needed for improving query performance 2020-05-15 14:11:45 +03:00
Aliaksandr Valialkin
82ccdfaa91 app/vmstorage: add vm_slow_row_inserts_total and vm_slow_per_day_index_inserts_total metrics for determining whether VictoriaMetrics requires more RAM for the current number of active time series 2020-05-15 13:44:32 +03:00
Aliaksandr Valialkin
ab8f5545bc docs/vmalert.md: sync with app/vmalert/README.md 2020-05-15 13:27:09 +03:00
Aliaksandr Valialkin
0eacea1de1 lib/{storage,mergeset}: further tuning of compression levels depending on block size
This should improve performance for querying newly added data, since it can be unpacked faster.
2020-05-15 13:24:37 +03:00
Aliaksandr Valialkin
737d641920 lib/storage: wait for all the goroutines to finish in TestSearch in order to prevent racy behavior on test finish 2020-05-15 13:24:37 +03:00
Aliaksandr Valialkin
4fc33163c4 lib/storage: optimize ingestion performance for new time series 2020-05-15 13:24:37 +03:00
Aliaksandr Valialkin
f9f3afb6af lib/mergeset: tune compression levels in order to improve ingestion performance a bit 2020-05-15 13:24:37 +03:00
Aliaksandr Valialkin
8b32e7c3a0 lib/storage: reduce indentation in Storage.add 2020-05-15 13:24:37 +03:00
Aliaksandr Valialkin
1573ececb2 lib/storage: return the first error instead of the last error, since the first error usually points to the root cause 2020-05-15 13:24:37 +03:00
Roman Khavronenko
a249cd9d22 vmalert: fix the access to rules slice element by wrong index (#486)
During a group's update, rule deletion caused slice
mutations while the slice index was assumed to be unchanged.
This caused "slice bounds out of range" errors when multiple
rules were deleted sequentially.
2020-05-15 07:55:22 +01:00
hagen1778
ef0e37cb9e vmalert: update README 2020-05-15 09:17:28 +03:00
Aliaksandr Valialkin
0afd48d2ee lib: extract common code for returning fast unix timestamp into lib/fasttime 2020-05-14 23:02:07 +03:00
Aliaksandr Valialkin
42866fa754 lib/{storage,mergeset}: return dst on error from unmarshalBlockHeaders, so it could be reused 2020-05-14 15:32:07 +03:00
Aliaksandr Valialkin
827a3a7866 lib/storage: document that generateUniqueMetricID should return dense ids 2020-05-14 14:08:45 +03:00
Aliaksandr Valialkin
606585f7be lib/{storage,mergeset}: cleanup: remove unused partSearch.indexBlockReuse 2020-05-14 14:03:03 +03:00
Aliaksandr Valialkin
21598ac417 docs/vmalert.md: sync with app/vmalert/README.md 2020-05-13 22:55:35 +03:00
Aliaksandr Valialkin
894e5d2b9b docs/vmbackup.md: add a link to vmbackuper tool 2020-05-13 22:54:22 +03:00
Roman Khavronenko
415b1ddfb5 vmalert: check if the remoteRead object was initialized before calling Restore (#473)
The check for non-nil remoteRead was mistakenly dropped
during refactoring, which caused panics when `vmalert`
wasn't configured with the `remoteRead` flag.
2020-05-13 19:32:58 +01:00
Roman Khavronenko
db7dd96346 vmalert: fix flag names and description in README (#475)
The change also adds a recommendation for the `remotewrite`
queue error.
2020-05-13 19:32:21 +01:00
肖贝贝
ba48438b06 Feat/vmalert add max queue size (#472)
* feat: add remoteWrite.maxQueueSize to reduce queue-full errors
* rename remote(write|read) flags to remote(Write|Read) for the sake of consistency

Co-authored-by: xiaobeibei <xiaobeibei@bigo.sg>
2020-05-13 18:58:56 +01:00
Aliaksandr Valialkin
7882a0dbbf app/vmselect/promql: suppress "SA4006: this value of dstValues is never used" error in golangci-lint 2020-05-13 11:47:08 +03:00
Aliaksandr Valialkin
4fe67504f9 lib/storage: optimize label matching for regexp ending with literal suffix
For example, `{label=~"foo.*bar.+baz"}` contains literal suffix `baz`,
so it should work faster now.
2020-05-13 11:47:07 +03:00
Aliaksandr Valialkin
96e001d254 app/vmagent: fix a bug with improper relabeling when multiple -remoteWrite.urlRelabelConfig args are set
This bug could result in incorrect relabeling and dropped metrics.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/467
2020-05-12 22:02:58 +03:00
Aliaksandr Valialkin
faf92a0965 app/vmselect/promql: fix any(..) calculations - return all the data points instead of the first one 2020-05-12 20:36:42 +03:00
Aliaksandr Valialkin
a6f16dcc11 lib/fs: do not use mmap for 32-bit arches by default, since they cannot map files bigger than 4GB in RAM 2020-05-12 20:22:09 +03:00
Aliaksandr Valialkin
cc311e20fe app/vmselect/promql: add any(x) by (y) aggregate function, which returns any time series from q for each group y 2020-05-12 19:45:56 +03:00
Aliaksandr Valialkin
574289c3fb app/vmselect/promql: support for sum(x) by (y) limit N syntax in order to limit the number of output time series after aggregation 2020-05-12 19:45:54 +03:00
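Hedged examples of the two MetricsQL extensions above (the `any` aggregation and the `limit N` suffix); the metric names are illustrative:

    any(up) by (job)
    sum(rate(http_requests_total[5m])) by (instance) limit 10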
Aliaksandr Valialkin
0a134ace63 app/vmagent: fix scraping mTLS targets, which was broken in v1.35.1
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/470
2020-05-12 17:23:03 +03:00
Aliaksandr Valialkin
8300cc17af app/vmagent,lib/promscrape: do not set HostClient.DialDualStack, since it isn't used if HostClient.Dial is set 2020-05-12 15:24:18 +03:00
Aliaksandr Valialkin
6273385618 app/vmagent/remotewrite: properly dial TCP6 addresses set via -remoteWrite.url
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/469
2020-05-12 15:22:29 +03:00
Aliaksandr Valialkin
3232605524 lib/storage: properly initialize part struct before trying to close it on error
This should prevent the nil pointer dereference bug at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/468.
2020-05-12 14:54:31 +03:00
Aliaksandr Valialkin
18834a9191 vendor: make vendor-update 2020-05-12 14:25:35 +03:00
Aliaksandr Valialkin
d90dc1fbf9 deployment/docker: omit http2 support in *-prod binaries
VictoriaMetrics doesn't use http/2.0, so disable it completely.

Use `nethttpomithttp2` tag defined in Go1.14 for this.
See 2566e21f24 for details.
2020-05-12 14:19:43 +03:00
Aliaksandr Valialkin
dbd0c552d5 lib/storage: gradually pre-populate per-day inverted index for the next day
This should prevent CPU usage spikes at 00:00 UTC every day, when
the inverted index for the new day must be quickly created for all the active time series.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/430
2020-05-12 12:13:05 +03:00
Aliaksandr Valialkin
cc00a2c453 lib/storage: typo fixes in error messages: or -> of 2020-05-12 12:12:42 +03:00
Aliaksandr Valialkin
ce2107bc52 lib/storage: speed up matching for common regexps in label filters
The following regexps have been optimized:

* 'foo.+bar'
* 'foo.+bar.+baz'

This should improve performance for matching Graphite-like metrics.
2020-05-11 22:40:55 +03:00
Aliaksandr Valialkin
12a1a71cc1 lib/storage: add a benchmark for Graphite-like regexps for metric names 2020-05-11 22:37:32 +03:00
Aliaksandr Valialkin
de113806bb docs/CaseStudies.md: add CERN case study 2020-05-11 14:05:20 +03:00
Roman Khavronenko
8c8ff5d0cb vmalert: cleanup and restructure of code to improve maintainability (#471)
The change introduces a new entity `manager`, which replaces
`watchdog` and decouples requestHandler and groups. The manager
is supposed to control the life cycle of groups, rules and
config reloads.

Groups export an ID method which returns a hash
of the filename and group name. The ID is supposed to be a unique
identifier across all loaded groups.

Some tests were added to improve coverage.

A bug with a wrong annotation value when $value is used in
templates after metrics are restored was fixed.

The Notifier interface was extended to accept a context.

A new set of metrics was introduced for config reload.
2020-05-10 17:58:17 +01:00
Nikolay Khramchikhin
9e8733ff65 vmalert config reload
added config hot reload for vmalert via SIGHUP and an API call
2020-05-09 10:32:12 +01:00
Aliaksandr Valialkin
5bc5d6a1f2 docs/Single-server-VictoriaMetrics.md: small updates for Monitoring and How to start VictoriaMetrics sections 2020-05-08 20:34:50 +03:00
Aliaksandr Valialkin
baedb25936 docs/vmauth.md: fix a link to docker images 2020-05-08 14:10:04 +03:00
Aliaksandr Valialkin
6909d845b0 docs/Articles.md: add a link to CERN article at https://indico.cern.ch/event/877333/contributions/3696707/attachments/1972189/3281133/CMS_mon_RD_for_opInt.pdf 2020-05-08 01:25:37 +03:00
Aliaksandr Valialkin
efc8e3c523 Makefile: suppress false positives for golangci-lint on nil pointer dereference 2020-05-07 19:41:42 +03:00
Aliaksandr Valialkin
51291015a5 app/vmagent: return 200 from /-/reload endpoint as Prometheus does 2020-05-07 19:30:30 +03:00
Aliaksandr Valialkin
099e44005b lib/httpserver: add -http.shutdownDelay flag for a grace period before http server shutdown
The http server returns a 503 non-OK error at the `/health` page during the grace period,
so load balancers in front of the http server can re-route incoming requests
to other servers.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/463
2020-05-07 15:30:35 +03:00
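A minimal invocation sketch for the new flag; the 15s value is illustrative:

    ./victoria-metrics-prod -http.shutdownDelay=15s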
Aliaksandr Valialkin
787fcfba0c lib/httpserver: reduce typical duration for http server graceful shutdown
Previously the duration of graceful shutdown for the http server could take more than a minute
because of improperly set timeouts in setNetworkTimeout.
Now the typical duration of graceful shutdown should be reduced to less than 5 seconds.
2020-05-07 14:12:39 +03:00
Aliaksandr Valialkin
6afb25fd08 docs/{vmagent,vmauth}: small clarifications in the docs 2020-05-07 12:55:20 +03:00
Aliaksandr Valialkin
653d51694a app/vmauth: prevent attacks that use .. in the path for accessing resources outside the configured url_prefix 2020-05-07 12:55:18 +03:00
Aliaksandr Valialkin
91a49eecea lib/flagutil: make errcheck happy by explicitly ignoring Array.Set result in tests 2020-05-06 22:37:39 +03:00
Aliaksandr Valialkin
c4c447507d lib/flagutil: properly parse quoted flag values for flagutil.Array 2020-05-06 22:27:21 +03:00
Aliaksandr Valialkin
8a00807f60 app/vmagent: allow setting independent auth configs per each configured -remoteWrite.url 2020-05-06 16:51:41 +03:00
Aliaksandr Valialkin
b69eb7bf38 app/vmagent: properly set client-side TLS certificates for -remoteWrite.url. Previously they were mistakenly set as server-side 2020-05-06 16:50:30 +03:00
Aliaksandr Valialkin
68928bf3df lib/promscrape/discovery/gce: discover per-zone instances for gce_sd_config in parallel. This should reduce discovery latency 2020-05-06 15:00:09 +03:00
Aliaksandr Valialkin
e8936c9cb3 docs/vmagent.md: small fixes 2020-05-06 14:49:18 +03:00
Aliaksandr Valialkin
3f52a97f9b lib/promscrape: add Prometheus-compatible DNS-based service discovery aka dns_sd_configs 2020-05-06 00:01:58 +03:00
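A minimal scrape-config sketch for the new discovery mechanism, following the Prometheus `dns_sd_configs` schema; the job name, hostname and port are hypothetical:

    scrape_configs:
    - job_name: dns-example
      dns_sd_configs:
      - names: ["node-exporter.example.com"]
        type: A
        port: 9100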
Aliaksandr Valialkin
364789c24c lib/promscrape: properly connect to TCP6 addresses if -enableTCP6 is set 2020-05-06 00:01:57 +03:00
Aliaksandr Valialkin
08320cfcf4 docs/{vmauth,vmagent}: fix ports for profiling 2020-05-05 20:15:47 +03:00
Aliaksandr Valialkin
f65930b34d docs/vmauth.md: mention that we can help creating customized proxy 2020-05-05 12:34:42 +03:00
Aliaksandr Valialkin
266327642b docs/{vmagent,vmauth}: add Profiling section 2020-05-05 11:45:13 +03:00
Aliaksandr Valialkin
0c7cddfca6 docs: add vmauth.md 2020-05-05 11:17:23 +03:00
Aliaksandr Valialkin
e767aedd17 app/vmauth: add initial version of vmauth. See https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmauth/README.md for details 2020-05-05 10:54:17 +03:00
Aliaksandr Valialkin
b5a780930d docs/vmagent.md: the /targets page doesn't expose information about improperly configured scrape configs now; it is written to the error log instead 2020-05-05 10:54:14 +03:00
Aliaksandr Valialkin
7b5ef63384 lib/procutil: add NewSighupChan function, which returns a channel, which is triggered on every SIGHUP 2020-05-05 10:54:09 +03:00
Aliaksandr Valialkin
1aea001532 docs/vmalert.md: sync with app/vmalert/README.md 2020-05-05 07:50:57 +03:00
Aliaksandr Valialkin
4fa817be10 lib/promscrape: allow explicitly setting empty token via token: "" in consul_sd_config 2020-05-05 07:50:15 +03:00
Aliaksandr Valialkin
8c77faec96 make vendor update 2020-05-05 00:54:38 +03:00
Roman Khavronenko
0ba1b5c71b app/vmalert: restore alerts state from datasource metrics (#461)
* app/vmalert: restore alerts state from datasource metrics

Vmalert will restore alerts state for rules that have `rule.For` > 0 from previously written timeseries via the `remotewrite.url` flag.

* app/vmalert: mention remotewrite and remoteread configuration in README
2020-05-05 00:51:22 +03:00
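A hedged invocation sketch for the restore feature; the URLs are illustrative, and the flag names follow the spelling in this commit (a later commit in this log renames them to `remoteWrite.url`/`remoteRead.url`):

    ./vmalert-prod -rule=alerts.yml \
        -datasource.url=http://localhost:8428 \
        -remotewrite.url=http://localhost:8428 \
        -remoteread.url=http://localhost:8428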
Aliaksandr Valialkin
40c3ffb359 lib/promscrape: add Prometheus-compatible service discovery for Consul aka consul_sd_configs
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/330
2020-05-04 20:51:17 +03:00
Aliaksandr Valialkin
83f0e35b7b lib/promauth: properly set up client certificate in tls.Config
Previously the client certificate was mistakenly set up as a server certificate
2020-05-04 20:51:08 +03:00
Aliaksandr Valialkin
218e566647 lib/promscrape: move common code for discovery api config map handling into discoveryutils 2020-05-04 20:51:01 +03:00
Aliaksandr Valialkin
6310b20e72 lib/promscrape/discovery/kubernetes/: unify apiConfig creation 2020-05-04 20:50:49 +03:00
Aliaksandr Valialkin
d17381037e vendor: update github.com/valyala/quicktemplate from v1.4.1 to v1.5.0 2020-05-04 01:36:41 +03:00
Aliaksandr Valialkin
6c68b8aa81 docs/Single-server-VictoriaMetrics.md: mention that it is recommended to upgrade to the latest release before reporting issues 2020-05-04 00:41:47 +03:00
Aliaksandr Valialkin
23010e6321 docs/Cluster-VictoriaMetrics.md: add Multitenancy chapter 2020-05-03 18:01:26 +03:00
Aliaksandr Valialkin
66b0ae79a5 lib/promscrape: remove debug line left after the commit e4aac6ea40 2020-05-03 17:15:32 +03:00
Aliaksandr Valialkin
69004a5f67 lib/promscrape: fix tests after the commit 658a8742ac
The original commit copies `__address__` label to `instance` label when generating per-target labels as Prometheus does.

See https://www.robustperception.io/life-of-a-label for details.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/453
2020-05-03 16:56:15 +03:00
DexterZhang
658a8742ac fix(vmagent): different behavior from how Prometheus deals with labels. [Issue#453] (#454) 2020-05-03 16:51:03 +03:00
Aliaksandr Valialkin
e4aac6ea40 lib/promscrape: make consistent scrape time offsets across reloads for the same ScrapeURL and Labels
This should make intervals between data points consistent for scrape targets across reloads.
Previously these intervals were random.
2020-05-03 14:30:21 +03:00
Aliaksandr Valialkin
d9f1b4d6a3 lib/promscrape: fix TestGetFileSDScrapeWorkSuccess after 3b234d82e5 2020-05-03 14:28:31 +03:00
Aliaksandr Valialkin
3b234d82e5 lib/promscrape: reload only modified scrapers on config changes
This should improve scrape stability when a big number of targets is scraped and these targets change frequently.

Thanks to @xbsura for the idea and initial implementation attempts at the following pull requests:

- https://github.com/VictoriaMetrics/VictoriaMetrics/pull/449
- https://github.com/VictoriaMetrics/VictoriaMetrics/pull/458
- https://github.com/VictoriaMetrics/VictoriaMetrics/pull/459
- https://github.com/VictoriaMetrics/VictoriaMetrics/pull/460
2020-05-03 12:45:40 +03:00
Aliaksandr Valialkin
d2af2c8c3e docs/MetricsQL.md: document first_over_time and last_over_time functions 2020-05-02 22:43:29 +03:00
Aliaksandr Valialkin
ee810e5f3a lib/httpserver: rename http.externalURL to http.pathPrefix and improve help message for this flag
The `http.externalURL` flag name was slightly misleading, so it has been renamed to `http.pathPrefix`.
2020-05-02 13:07:34 +03:00
DexterZhang
34743974d5 feat(httpserver): add http.externalUrl config to the http server; it adds a prefix to the http path automatically (#452) 2020-05-02 12:42:53 +03:00
Aliaksandr Valialkin
09f5d0056f docs/Single-server-VictoriaMetrics.md: hint that \n is a single newline char 2020-05-01 13:41:55 +03:00
Aliaksandr Valialkin
432187ac3b app/vminsert: add /-/reload handler in the same way as for vmagent 2020-04-30 02:15:39 +03:00
Aliaksandr Valialkin
825a2dd554 lib/procutil: prevent app termination on SIGHUP signal, since this signal is frequently used for config reload 2020-04-30 02:09:27 +03:00
DexterZhang
67511d4165 feat(vmagent): add promscrape config reload support via http (#450)
* feat(vmagent): add promscrape config reload support via the http endpoint `/-/reload`

* fix: typo fix
2020-04-30 02:00:32 +03:00
Aliaksandr Valialkin
01c17092e1 lib/httpserver: mention that -http.maxGracefulShutdownDuration command-line flag value can be increased on shutdown timeout 2020-04-30 01:38:06 +03:00
Aliaksandr Valialkin
7d36616b93 docs/Single-server-VictoriaMetrics.md: mention that it is better to increase CPU and RAM per vmselect node in order to achieve higher query performance 2020-04-30 00:53:53 +03:00
Aliaksandr Valialkin
d0ebbb166e docs: add vmalert.md 2020-04-29 17:42:06 +03:00
Aliaksandr Valialkin
8b2f54d7cd docs/Single-server-VictoriaMetrics.md: update Alerting section 2020-04-29 17:39:21 +03:00
Aliaksandr Valialkin
5ec036439d lib/promscrape: set 30 seconds timeout for discovery api requests
Previously such requests could hang for a long time. This could make debugging harder.
2020-04-29 17:33:34 +03:00
Aliaksandr Valialkin
43c39dc36c vendor: use github.com/VictoriaMetrics/fasthttp instead of github.com/fasthttp/fasthttp
The upstream fasthttp may contain issues like 996610f021,
plus code that isn't used by VictoriaMetrics. So let's use a private copy under our control instead.
2020-04-29 17:33:34 +03:00
Artem Navoiev
cc1878607a fix link to vmalert 2020-04-29 17:17:08 +03:00
Artem Navoiev
d8cd69895c update README.md change alerting section 2020-04-29 17:16:13 +03:00
Artem Navoiev
4487b454a8 Update README.md 2020-04-29 12:39:15 +03:00
Aliaksandr Valialkin
e3cc329d85 vendor: downgrade github.com/valyala/fasthttp from v1.12.0 to v0.1.0
v0.1.0 points to the last verified changes made by me.
I'm afraid that releases after v0.1.0 may contain completely broken changes like
996610f021
2020-04-29 01:09:02 +03:00
Aliaksandr Valialkin
57407cca83 app/vmselect/promql: remove -search.maxPointsPerTimeseries command-line flag
Limit the estimated time series count after aggregation with grouping by the number of source time series.
2020-04-29 00:20:04 +03:00
Aliaksandr Valialkin
4470308d5b docs/Single-server-VictoriaMetrics.md: mention that basic downsampling could be made with the help of de-duplication 2020-04-28 16:38:32 +03:00
Aliaksandr Valialkin
4e4f57b121 lib/metricsql: move it to a separate repository - github.com/VictoriaMetrics/metricsql 2020-04-28 15:28:22 +03:00
Aliaksandr Valialkin
17d96e4503 app/vmselect: add -search.estimatedSeriesCountAfterAggregation command-line flag for tuning the probability of OOMs vs false-positive not enough memory errors 2020-04-28 12:52:37 +03:00
Aliaksandr Valialkin
83aca79137 lib/storage: recover when metricID->metricName entry is missing in the inverted index after unclean shutdown
Newly added index entries can be missing after an unclean shutdown, since they weren't flushed to persistent storage yet.
Log this and delete the corresponding metricID, so it can be re-created next time.
2020-04-28 12:00:33 +03:00
Aliaksandr Valialkin
1397612117 app/vmalert: added missing comments for public entities 2020-04-28 11:21:07 +03:00
Aliaksandr Valialkin
20b71acf19 docs/Articles.md: add https://zerodha.tech/blog/infra-monitoring-at-zerodha/ 2020-04-28 02:24:16 +03:00
Aliaksandr Valialkin
521df0e2fc lib/promscrape: handle connection reset when a target responds with an http redirect 2020-04-28 02:13:02 +03:00
肖贝贝
2b16c188e8 fix: vmagent does not follow 301/302 redirects (#445)
Co-authored-by: xiaobeibei <xiaobeibei@bigo.sg>
2020-04-28 01:29:37 +03:00
Roman Khavronenko
3bfa41a95c app/vmalert: initial remote-write support for alerts state persistence. (#442)
* app/vmalert: initial remote-write support for alerts state persistence.

If the `remotewrite.url` flag is set, vmalert will send alerts state via the remote-write protocol to remote storage. The sending is asynchronous to avoid blocking calls in the rules evaluation loop.

* app/vmalert: merge with master

* app/vmalert: write both `instant` and `for` alerts timeseries states in remote storage.
2020-04-28 00:18:02 +03:00
Aliaksandr Valialkin
90670cb55e app/vmalert: include it into the next release 2020-04-28 00:10:12 +03:00
Aliaksandr Valialkin
303905cd84 lib/{encoding,decimal}: typo fixes in tests: epxecting->expecting 2020-04-28 00:01:55 +03:00
Aliaksandr Valialkin
36fa3078c2 lib/encoding: reduce possibility of failure in TestMarshalInt64ArraySize 2020-04-28 00:01:54 +03:00
Aliaksandr Valialkin
95942f1ac6 lib/promscrape/discovery/gce: make golangci-lint happy 2020-04-27 19:28:10 +03:00
Aliaksandr Valialkin
b768bc9a6a lib/promscrape: add initial support for Prometheus-compatible service discovery for Amazon EC2 aka ec2_sd_configs 2020-04-27 19:25:53 +03:00
Aliaksandr Valialkin
de59703a16 lib/promscrape/discovery/gce: properly set filter query arg in api url 2020-04-27 16:01:17 +03:00
Aliaksandr Valialkin
b4afe562c1 lib/storage: postpone reading data from blocks during search
This eliminates the need for storing block data in temporary files on single-node VictoriaMetrics
during heavy queries, which touch a big number of time series over long time ranges.

This improves single-node VM performance on heavy queries by up to 2x.
2020-04-27 11:45:24 +03:00
Aliaksandr Valialkin
0224071ebe lib/promscrape/discovery/gce: allow empty project and zone for gce_sd_config 2020-04-27 11:45:02 +03:00
Aliaksandr Valialkin
fcf57f9883 app/vmselect/netstorage: substitute sorting packedTimeseries with the natural order of the fetched blocks
This should minimize the number of disk seeks when reading data from the temporary file.
2020-04-26 16:26:23 +03:00
Aliaksandr Valialkin
6954d0edb7 lib/promscrape/discovery/gce: allow empty zone arg in gce_sd_config - in this case zones for the given project are automatically discovered 2020-04-26 14:34:11 +03:00
kreedom
fb967ae6c8 happy fmt 2020-04-26 14:16:32 +03:00
kreedom
2c18548e08 alert - rename validate function and flags (#440)
* alert - rename validate function and flags
2020-04-26 14:15:04 +03:00
kreedom
5f61d43db9 vmalert - validate template in labels (#439) 2020-04-26 13:53:57 +03:00
肖贝贝
eeadfccdc5 fix: fix the vmalert incomplete template label bug (#435)
Co-authored-by: xiaobeibei <xiaobeibei@bigo.sg>
2020-04-26 13:30:10 +03:00
Aliaksandr Valialkin
d7c1ff8b0c lib/storage: improve deduplication algorithm
Now it leaves only the first data point on each `-dedup.minScrapeInterval` interval.

Previously it could leave two data points on the interval. This could lead to unexpected results
for the `histogram_quantile(phi, sum(rate(buckets)) by (le))` query.
2020-04-26 13:10:02 +03:00
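A minimal sketch of enabling the deduplication described above; the 30s interval is illustrative:

    ./victoria-metrics-prod -dedup.minScrapeInterval=30s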
Aliaksandr Valialkin
1f3fd93b58 docs/{vmbackup,vmrestore}.md: update -help output 2020-04-24 22:44:21 +03:00
Jason Gardner
66af7e40f3 app/vmbackup: added ability to create and delete snapshots during backup (#428)
* app/vmbackup: added ability to create and delete snapshots during backup

Resolves: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/422

* Add snapshot create and delete url flags

* Fixed errcheck warnings in build
2020-04-24 22:35:03 +03:00
Aliaksandr Valialkin
491b31b369 lib/storage: postpone label filters matching too many time series instead of giving up with error
This should reduce the frequency of the following errors:

    cannot find tag filter matching less than N time series; either increase -search.maxUniqueTimeseries or use more specific tag filters

    more than N time series found on the time range [...]; either increase -search.maxUniqueTimeseries or shrink the time range
2020-04-24 21:13:50 +03:00
Aliaksandr Valialkin
4b84c592e9 docs/Single-server-VictoriaMetrics.md: document -search.resetCacheAuthKey 2020-04-24 19:47:52 +03:00
Aliaksandr Valialkin
a596aec82c app/vmselect: fix description for -search.resetCacheAuthKey 2020-04-24 19:45:50 +03:00
Aliaksandr Valialkin
7b8008e0bd lib/promscrape/discovery/gce: make golint happy by ignoring resp.Body.Close() result 2020-04-24 18:13:09 +03:00
Aliaksandr Valialkin
6d3567d65c .github/workflows: install dependencies before code checkout
Otherwise the dependencies' installation mangles go.mod
2020-04-24 17:55:17 +03:00
Aliaksandr Valialkin
9ef5935552 lib/promscrape: initial implementation for gce_sd_configs aka Prometheus-compatible service discovery for Google Compute Engine 2020-04-24 17:51:22 +03:00
Aliaksandr Valialkin
b80e6b4d56 .github/workflows: enable Go modules when installing dependencies
Disabled Go modules broke golangci-lint build
2020-04-24 17:39:58 +03:00
Aliaksandr Valialkin
5f9c23226a docs/Single-server-VictoriaMetrics.md: mention that -search.maxStalenessInterval can be useful for InfluxDB and TimescaleDB users 2020-04-24 16:22:50 +03:00
Aliaksandr Valialkin
ac43075cc9 .github/workflows: install golangci-lint at Dependencies step 2020-04-24 15:37:35 +03:00
Aliaksandr Valialkin
3157fb0186 .github/workflows: update Go version in actions/setup-go from v1.13 to v1.14 2020-04-24 15:31:16 +03:00
Aliaksandr Valialkin
e48822942d vendor: make vendor-update 2020-04-24 15:27:45 +03:00
Aliaksandr Valialkin
77bea69fab .github/workflows: use master branch for 'actions/setup-go' and 'actions/checkout' 2020-04-24 14:41:21 +03:00
Aliaksandr Valialkin
24461153bf lib/promscrape: query /api/v1/namespaces/* for the configured namespaces in kubernetes_sd_config
This should fix authorization issues described at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/432
2020-04-24 14:33:50 +03:00
Aliaksandr Valialkin
00e897119f lib/promscrape: add -promscrape.configCheckInterval command-line flag for automating config checking
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/431
2020-04-23 23:41:08 +03:00
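A hedged usage sketch for the new flag; the config path and interval are illustrative:

    ./vmagent-prod -promscrape.config=/etc/prometheus/prometheus.yml -promscrape.configCheckInterval=30s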
Aliaksandr Valialkin
a9a7a7175e lib/promscrape: access Config entries by reference, so they can be compared by addresses 2020-04-23 14:38:20 +03:00
Aliaksandr Valialkin
a9b83bf512 vendor: update google.golang.org/api from v0.21.0 to v0.22.0 2020-04-23 14:30:46 +03:00
Aliaksandr Valialkin
a87ca3bdf0 vendor: update github.com/aws/aws-sdk-go from v1.30.8 to v1.30.12 2020-04-23 12:36:03 +03:00
Aliaksandr Valialkin
1c5d14a2eb lib/promscrape: move KubernetesSDConfig to lib/promscrape/discovery/kubernetes 2020-04-23 11:34:22 +03:00
Aliaksandr Valialkin
a714568374 lib/promscrape/discovery/kubernetes: hide role switch logic behind GetLabels function 2020-04-22 22:16:11 +03:00
Aliaksandr Valialkin
364db13c9c app/vmselect: add /api/v1/status/tsdb page with useful stats for locating root cause for high cardinality issues
See https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/425
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/268
2020-04-22 22:03:43 +03:00
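A quick way to inspect the new endpoint, assuming a single-node instance listening on the default port 8428:

    curl http://localhost:8428/api/v1/status/tsdb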
Aliaksandr Valialkin
01e33be34a vendor: update github.com/valyala/fastjson from v1.5.0 to v1.5.1 2020-04-21 00:03:56 +03:00
Aliaksandr Valialkin
78ff5f2aa5 vendor: update github.com/valyala/gozstd from v1.6.4 to v1.7.0 2020-04-20 23:03:40 +03:00
Aliaksandr Valialkin
2dc5593b75 lib/writeconcurrencylimiter: improve docs for -maxConcurrentInserts command-line flag 2020-04-20 21:03:00 +03:00
Aliaksandr Valialkin
9ebc937685 app/vmselect: add -search.minStalenessInterval command-line flag for removing gaps on graphs built from time series with irregular intervals between samples
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/426
2020-04-20 19:42:15 +03:00
Aliaksandr Valialkin
fe57d46687 app/vmselect: merge -search.maxLookback and -search.maxStalenessInterval flags, since it has become apparent that they have an identical purpose :(
Both flags are left for backwards compatibility reasons.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/209
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/426
2020-04-20 19:26:31 +03:00
Aliaksandr Valialkin
6cc6ec6d2e deployment/docker/docker-compose.yml: bump Prometheus from v2.17.1 to v2.17.2 and Grafana from v6.7.1 to v6.7.2 2020-04-20 17:29:20 +03:00
Aliaksandr Valialkin
5454b518a6 lib/promscrape/discovery/kubernetes: reuse a client for empty api_server inside different jobs 2020-04-20 17:07:11 +03:00
Aliaksandr Valialkin
5ecb50d7c2 docs/Single-server-VictoriaMetrics.md: mention vmagent at the end of the Prometheus setup section 2020-04-20 16:41:36 +03:00
Aliaksandr Valialkin
851946af1e deployment/docker: allow building docker images on top of any base image set via ROOT_IMAGE environment var
For example, the following command will build the VictoriaMetrics docker image on top of the alpine image:

    ROOT_IMAGE=alpine make package-victoria-metrics
2020-04-20 01:16:57 +03:00
Aliaksandr Valialkin
2de76bca96 deployment/docker/base: remove unused group and passwd files 2020-04-19 23:31:31 +03:00
Aliaksandr Valialkin
94ad531bfe Makefile: increase the timeout for make golangci-lint from 1 minute to 2 minutes
This should fix timeout errors on GitHub actions
2020-04-17 19:14:04 +03:00
Aliaksandr Valialkin
936fb0eac3 app/vmagent/remotewrite: retry sending data if the server closes keep-alive connection
This should fix the following error when sending data to remote storage:

couldn't send a block with size XX bytes to "YYY": the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection
2020-04-17 15:52:42 +03:00
Aliaksandr Valialkin
43375df923 lib/promscrape/discovery/kubernetes: update stale comments 2020-04-17 14:06:20 +03:00
Aliaksandr Valialkin
43bbffebb3 vendor: make vendor-update 2020-04-17 13:24:08 +03:00
Aliaksandr Valialkin
79fb595732 docs/vmagent.md: typo fix: unvailable -> unavailable 2020-04-17 13:11:31 +03:00
Aliaksandr Valialkin
546d26523c app/vmagent/README.md: mention the -promscrape.suppressScrapeErrors flag 2020-04-17 13:08:21 +03:00
Aliaksandr Valialkin
f41e6a7bd9 app/vmselect: properly apply -search.maxLookback to queries sent to /api/v1/query 2020-04-17 12:30:11 +03:00
Dmitry Shihovtsev
830538e290 Fix misspelled Cortex name in the FAQ (#421) 2020-04-17 08:36:12 +01:00
Aliaksandr Valialkin
5d1537a395 lib/promscrape: suppress scrape errors if -promscrape.suppressScrapeErrors flag is set 2020-04-16 23:41:30 +03:00
Aliaksandr Valialkin
600490131f lib/promscrape: print all the labels for the target on error message for failed scrape
This should improve debuggability.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/420
2020-04-16 23:35:05 +03:00
Aliaksandr Valialkin
bd4c6d21dd lib/promscrape: retry target scraping when the target closes a previously established keep-alive connection
This should fix the following error:

the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection
2020-04-16 23:25:29 +03:00
Aliaksandr Valialkin
95da8d410c docs/Single-server-VictoriaMetrics.md: mention that VictoriaMetrics supports Kubernetes service discovery 2020-04-16 18:40:11 +03:00
Aliaksandr Valialkin
bcec5c5429 docs/Single-server-VictoriaMetrics.md: typo fix: unneded -> unneeded 2020-04-16 17:35:08 +03:00
Aliaksandr Valialkin
467279acd2 docs/Single-server-VictoriaMetrics.md: improve docs about metrics deletion 2020-04-16 17:32:09 +03:00
Aliaksandr Valialkin
e0d213f82b docs/Single-server-VictoriaMetrics.md: mention that the delete API can be protected by authKey 2020-04-16 17:19:10 +03:00
Aliaksandr Valialkin
2fd2dec5eb lib/logger: typo fix 2020-04-16 00:19:10 +03:00
Aliaksandr Valialkin
071fdf5518 lib/logger: add WARN level for logging expected errors such as invalid user queries 2020-04-15 20:50:26 +03:00
Aliaksandr Valialkin
30b401ebbf docs/Single-server-VictoriaMetrics.md: typo fix 2020-04-15 15:21:58 +03:00
Aliaksandr Valialkin
a59a7bcc5e vendor: make vendor-update 2020-04-15 14:52:24 +03:00
Aliaksandr Valialkin
ccb887c0f6 docs/Single-server-VictoriaMetrics.md: clarify how to use -influxListenAddr command-line option 2020-04-15 12:33:42 +03:00
Aliaksandr Valialkin
6f7f64f757 app/vmselect: handle timestamp(metric offset X) the same way as Prometheus does
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/415
2020-04-15 12:01:00 +03:00
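An illustrative query for the fixed behavior; the metric name is hypothetical:

    timestamp(node_boot_time_seconds offset 5m)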
Aliaksandr Valialkin
426a0567c4 lib/promscrape: code cleanup in runScraper func 2020-04-15 11:36:24 +03:00
Aliaksandr Valialkin
6e2f6574b8 docs/Single-server-VictoriaMetrics.md: mention that backfilling can be done via any supported ingestion method 2020-04-15 10:56:53 +03:00
Aliaksandr Valialkin
c1de3f67b4 lib/storage: skip metricID if the corresponding metricID->metricName is missing in the inverted index during search
This case is possible when the corresponding metricID->metricName entry didn't propagate to the inverted index yet.

This should fix the following error:

error when searching tsids for tfss [...]: cannot find metricName by metricID 1582417212213420669: EOF
2020-04-15 00:06:43 +03:00
Aliaksandr Valialkin
8a25c1ed71 docs/Single-server-VictoriaMetrics.md: add https://github.com/Slapper/ansible-victoriametrics-cluster-role to integrations chapter 2020-04-14 16:27:20 +03:00
Aliaksandr Valialkin
067c7afebc lib/promscrape: show information on improperly configured scrape targets at the bottom of /targets page
This is a common error with improperly configured target autodiscovery and/or relabeling.
This error leads to duplicate scraping of the same targets with the same set of labels, which results
in duplicate samples in time series.
2020-04-14 14:55:05 +03:00
Aliaksandr Valialkin
ac35635b71 lib/promscrape/discovery/kubernetes: remove only unused client for API server during cleaning 2020-04-14 14:19:21 +03:00
Aliaksandr Valialkin
78863d7066 lib/promscrape: add promrelabel.GetLabelValueByName helper function 2020-04-14 14:12:01 +03:00
Aliaksandr Valialkin
c64f003cfb lib/promscrape: mention job name in error messages when target cannot be scraped
This should improve debuggability
2020-04-14 13:33:13 +03:00
Aliaksandr Valialkin
4718a5d951 lib/promscrape: reset ScrapeWork.ID in tests 2020-04-14 13:31:31 +03:00
Aliaksandr Valialkin
257521a634 lib/promscrape: properly expose statuses for targets with duplicate scrape urls at /targets page
Previously targets with duplicate scrape urls were merged into a single line on the page.
Now each target with a duplicate scrape url is displayed on a separate line.
2020-04-14 13:10:01 +03:00
Aliaksandr Valialkin
6a75c95194 lib/promscrape: remove labels starting with __meta_ after applying relabel_configs as Prometheus does
This should reduce CPU load during scraping when target discovery generates
a big number of `__meta_*` labels (for instance, k8s discovery).

See https://www.robustperception.io/life-of-a-label for details.
2020-04-14 12:23:22 +03:00
Aliaksandr Valialkin
01d7d799dc lib/promscrape: rename 'scrape_config->scrape_limit' to 'scrape_config->sample_limit'
The `scrape_config` block in the Prometheus config contains a `sample_limit` field,
while in `vmagent` this field was mistakenly named `scrape_limit`.
2020-04-14 11:59:57 +03:00
Aliaksandr Valialkin
0b76c27fa1 docs/vmagent.md: mention that vmagent supports kubernetes_sd_configs now 2020-04-13 21:06:36 +03:00
Aliaksandr Valialkin
2e4e202c2b lib/promscrape: add initial support for kubernetes_sd_config
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/334
2020-04-13 21:03:28 +03:00
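A minimal scrape-config sketch using the new discovery, following the Prometheus `kubernetes_sd_configs` schema; the job name and role are illustrative:

    scrape_configs:
    - job_name: k8s-nodes
      kubernetes_sd_configs:
      - role: node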
Aliaksandr Valialkin
2814b1490f lib/promscrape: add -promscrape.config.strictParse flag for detecting errors in -promscrape.config file 2020-04-13 13:15:44 +03:00
Aliaksandr Valialkin
90b4a6dd12 lib/promscrape: extract common auth code to lib/promauth 2020-04-13 12:59:10 +03:00
hagen1778
2eed6c393f vmalert: prepare package for external usage
* update README according to changes
* add Makefile with basic commands
2020-04-12 15:32:42 +03:00
kreedom
948f8b6b5f [vmalert] fix linter issues 2020-04-12 15:08:11 +03:00
kreedom
8fca5f2819 [vmalert] add tests to webserver (#413) 2020-04-12 14:51:03 +03:00
Roman Khavronenko
7c9405f53d Vmalert metrics (#412)
vmalert: add basic list of metrics
2020-04-11 20:42:01 +01:00
Roman Khavronenko
9f8cc8ae1b Extend web responses for alerts: (#411)
vmalert: Extend web responses for alerts

* populate apiAlert object with additional fields
* return all active alerts, not only firing
* sort list of API alerts for deterministic output
* add helper for available path list
2020-04-11 16:49:23 +01:00
kreedom
90de3086b3 [vmalert] add webserver (#410)
* [vmalert] add webserver
2020-04-11 12:40:24 +03:00
Aliaksandr Valialkin
830d5fb1e0 vendor: make vendor-update 2020-04-10 18:40:21 +03:00
Aliaksandr Valialkin
66d8086a5e vendor: update github.com/klauspost/compress from v1.10.3 to v1.10.4 2020-04-10 18:39:19 +03:00
Aliaksandr Valialkin
a30c98c0bc deployment/docker: update Go builder image from go1.14.1 to go1.14.2 2020-04-10 18:19:34 +03:00
Aliaksandr Valialkin
4de6c6bbf0 lib/storage: disable deduplication after dedup tests are complete
The rest of the tests expect de-duplication to be disabled.
2020-04-10 17:28:31 +03:00
Aliaksandr Valialkin
ded0c0d3c7 lib/storage: correctly handle -dedup.minScrapeInterval values smaller than 8ms
Such small values may be used for removing samples with duplicate timestamps.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/409 for details.
2020-04-10 16:36:41 +03:00
Aliaksandr Valialkin
7d73623c69 lib/{storage,mergeset}: make sure that requests and misses cache counters never go down 2020-04-10 14:45:01 +03:00
Aliaksandr Valialkin
e62afc7366 lib/protoparser: add -*TrimTimestamp command-line flags for Influx, Graphite, OpenTSDB and CSV data
These flags can be used for reducing disk space usage for timestamp data ingested over the given protocols
2020-04-10 12:44:39 +03:00
Aliaksandr Valialkin
0681b4c27a lib/workingsetcache: accumulate stat counters on cache rotation
This should prevent cache stats counters from going down after cache rotation,
which may corrupt the `cache hit ratio` graph on the official Grafana dashboards
when using the following query:

    1 - (sum(rate(vm_cache_misses_total[5m])) by (type) / sum(rate(vm_cache_requests_total[5m])) by (type))
2020-04-10 11:51:40 +03:00
Aliaksandr Valialkin
f86947d55c lib/memory: add more details to -memory.allowedPercent help message 2020-04-09 15:28:53 +03:00
Aliaksandr Valialkin
f94a090020 docs: update minimum supported Go version from 1.12 to 1.13 2020-04-07 13:38:37 +03:00
Aliaksandr Valialkin
8064775c02 docs/CaseStudies.md: updated ARNES numbers 2020-04-06 16:20:11 +03:00
Aliaksandr Valialkin
520a704606 docs/CaseStudies.md: prettifying of the formatting 2020-04-06 15:24:37 +03:00
Aliaksandr Valialkin
105f0c78d9 docs/CaseStudies.md: add ARNES case study 2020-04-06 15:17:33 +03:00
Roman Khavronenko
b099d84271 Vmalert/rules eval (#400)
* Initial rules evaluation support.

Rules are now store alerts state in private field `alerts`. Every evaluation updates
the alerts and state. Every unique metric received from datastore represents a unique alert,
uniqueness is guaranteed by hashing ordered labelset.

* merge with master

* cleanup

* support endAt parameter as 3*evaluationInterval for active alerts

* make golint happy
2020-04-06 14:44:03 +03:00
Aliaksandr Valialkin
407bdbf2b9 docs/Single-server-VictoriaMetrics.md: cosmetic fixes in Importing CSV data chapter 2020-04-06 12:29:28 +03:00
Aliaksandr Valialkin
69962a7001 docs/FAQ.md: small fixes 2020-04-05 13:53:08 +03:00
Aliaksandr Valialkin
9f03548e55 docs/FAQ.md: add more articles about VictoriaMetrics performance 2020-04-05 13:48:03 +03:00
Aliaksandr Valialkin
022310f35b docs/Articles.md: added a link to https://www.iunera.com/kraken/fabric/time-series-database/ 2020-04-04 16:40:00 +03:00
Aliaksandr Valialkin
895cadfae7 app/vmagent/remotewrite: add "X-Prometheus-Remote-Write-Version: 0.1.0" http header to remote_write request
This header is required by Cortex (and, probably, other remote storage systems).
See 9c1f44d090/docs/apis.md (remote-api).

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/399
2020-04-04 16:24:56 +03:00
Aliaksandr Valialkin
57704aa584 app/victoria-metrics: add -selfScrapeInstance and -selfScrapeJob flags for tuning labels for self-scraped metrics 2020-04-04 14:57:22 +03:00
Aliaksandr Valialkin
f9b24d4899 app/vmselect/promql: keep metric name after applying first_over_time and last_over_time functions 2020-04-04 14:54:13 +03:00
Aliaksandr Valialkin
fa0554b771 docs/Articles.md: move Percona article to third-party 2020-04-02 15:43:02 +03:00
Aliaksandr Valialkin
35b133bff4 docs/Articles.md: add a link to https://blog.cloudera.com/benchmarking-time-series-workloads-on-apache-kudu-using-tsbs/ 2020-04-02 15:41:09 +03:00
Aliaksandr Valialkin
a884803377 docs/CaseStudies.md: add Adsterra case 2020-04-02 00:49:16 +03:00
Aliaksandr Valialkin
b38d048dd9 app/vmstorage: add vm_free_disk_space_bytes metric for monitoring the remaining disk space at -storageDataPath 2020-04-01 23:08:58 +03:00
Aliaksandr Valialkin
de2cd4231b docs/Single-server-VictoriaMetrics.md: re-organize chapters 2020-04-01 22:38:56 +03:00
kreedom
298eb0a0f8 [vmalert] improve external url handling 2020-04-01 22:29:11 +03:00
kreedom
12fe915b48 [vmalert] add prometheus template function (#396)
* [vmalert] add prometheus template function

* make linter be happy

Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
2020-04-01 18:17:53 +03:00
Aliaksandr Valialkin
cdf0a4cf8f lib/httpserver: remove unnecessary http.HandlerFunc wrapper in gzipHandler 2020-04-01 18:14:17 +03:00
Aliaksandr Valialkin
1c9c57db1c docs/Cluster-VictoriaMetrics.md: small fixes and updates 2020-04-01 18:10:12 +03:00
Aliaksandr Valialkin
8edc72201d docs/Single-server-VictoriaMetrics.md: small fixes and updates 2020-04-01 18:09:07 +03:00
Aliaksandr Valialkin
b024ecd10c docs/Cluster-VictoriaMetrics.md: swap production build and development build chapters 2020-04-01 17:49:51 +03:00
Aliaksandr Valialkin
e0d0348f36 lib/storage: add missing reset for tagFilter.matchesEmptyValue on tagFilter.Init 2020-04-01 17:42:44 +03:00
Aliaksandr Valialkin
3e55c7e069 lib/promscrape: reduce timestamp jitter when scraping targets
This should improve compression for timestamps
2020-04-01 16:11:35 +03:00
Aliaksandr Valialkin
c4acd20d2a lib/storage: remove duplicate data points on 7/8*minScrapeInterval interval instead of 1/2*minScrapeInterval
This should reduce storage usage and should improve deduplication accuracy
2020-04-01 15:48:48 +03:00
Aliaksandr Valialkin
8661dc4624 docs/Single-server-VictoriaMetrics.md: mention that environment vars may be prefixed with -envflag.prefix 2020-03-31 22:37:44 +03:00
Aliaksandr Valialkin
16572c8722 README.md: mention that the response cache must be reset after importing historical data 2020-03-31 19:33:20 +03:00
Aliaksandr Valialkin
b699c46046 lib/storage: handle errors returned from TagFilters.Add when cloning TagFilters with negative filter 2020-03-31 16:18:02 +03:00
Aliaksandr Valialkin
e71519b8b2 app/victoria-metrics/testdata: add a test for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/395 2020-03-31 12:51:25 +03:00
Aliaksandr Valialkin
972713bd79 lib/storage: add fast path for the previous indexdb search if it doesn't contain per-day inverted index yet 2020-03-31 12:51:21 +03:00
Aliaksandr Valialkin
5d99ca6cfc lib/storage: optimize per-day inverted index search for tag filters matching big number of time series
- Sort tag filters in ascending order of the number of matching time series
  in order to apply the most specific filters first.
- Fall back to metricName search for filters matching a big number of time series
  (usually these are negative filters or regexp filters).
2020-03-31 00:48:35 +03:00
Aliaksandr Valialkin
318326c309 lib/storage: properly handle {label=~"foo|"} filters as Prometheus does
Such filters must match all the time series with `label="foo"` plus all the time series without `label`

Previously only time series with `label="foo"` were matched.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/395
2020-03-31 00:48:18 +03:00
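A worked example of the fixed semantics; the `job` value is illustrative:

    {job="app", label=~"foo|"}
    # now matches {job="app", label="foo"} plus {job="app"} series without `label`,
    # as Prometheus does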
Aliaksandr Valialkin
a1e4c6a2be .github/workflows/wiki.yml: fix copying files from docs to wiki 2020-03-30 15:59:12 +03:00
Aliaksandr Valialkin
ac3ee44fa7 docs/robots.txt: trigger github actions 2020-03-30 15:54:39 +03:00
Aliaksandr Valialkin
b98ca56d94 lib/envflag: add -envflag.prefix for setting optional prefix for environment vars 2020-03-30 15:51:19 +03:00
Aliaksandr Valialkin
b41ee5f27d vendor: make vendor-update 2020-03-30 15:06:35 +03:00
Aliaksandr Valialkin
8d35af6fdb .github/workflows: copy all the files from docs folder to wiki and github pages 2020-03-30 15:05:37 +03:00
Aliaksandr Valialkin
0f2dd77a76 go.mod: update the minimum required Go version from go1.12 to go1.13 2020-03-30 14:56:57 +03:00
Aliaksandr Valialkin
0c485f14d1 app/vmselect/prometheus: allow passing relative time to start, end and time args of /api/v1/* queries 2020-03-29 21:57:14 +03:00
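A hedged example; the exact relative-time format accepted here is an assumption based on the commit title:

    # query the last hour; end defaults to the current time
    curl 'http://localhost:8428/api/v1/query_range?query=up&start=-1h&step=15s'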
Aliaksandr Valialkin
2ebf7d86ff app/vmselect/prometheus: code simplification: (d.Seconds()/1e3) -> d.Milliseconds() 2020-03-29 21:50:28 +03:00
kreedom
bf6c24d0f4 [vmalert] config parser (#393)
* [vmalert] config parser

* make linter be happy

* fix test

* fix sprintf add test for rule validation
2020-03-29 01:48:30 +02:00
Aliaksandr Valialkin
1f7292675a docs: add robots.txt 2020-03-28 23:22:46 +02:00
Aliaksandr Valialkin
bd156cd088 docs/vmagent.md: add prometheus remote_write proxy use case 2020-03-28 23:16:38 +02:00
Aliaksandr Valialkin
b695087119 docs/CaseStudies.md: add Brandwatch case study 2020-03-28 20:57:54 +02:00
Aliaksandr Valialkin
80f53e5396 deployment/docker: run docker apps under default user (0, root) in order to preserve backwards compatibility
If a docker app is upgraded from root to non-root, then the data pointed to by `-storageDataPath` or similar flags
becomes inaccessible to the non-root user after the upgrade. This breaks the upgrade path, so revert to the default root user
for docker apps.

Users may explicitly execute `docker run --user <non_root_user>` in order to run docker apps under a non-root user.
2020-03-28 19:23:26 +02:00
Roman Khavronenko
7acb797595 Update dashboard according to new Grafana version. (#390)
The way regexes for column styles in the Table panel are applied has changed in Grafana 6.7. This change fixes the Flags panel column styles accordingly.
2020-03-28 01:24:39 +02:00
Roman Khavronenko
3a8bbfd6b9 bump Prometheus and Grafana images (#389) 2020-03-28 01:15:07 +02:00
Dmitry Naumov
27373807c1 Rootless docker images by default (#358)
* Rootless docker images by default

* Migrate to rootless base image

Co-authored-by: Aliaksandr Valialkin <valyala@gmail.com>
2020-03-27 21:23:50 +02:00
Aliaksandr Valialkin
8d7f0aa632 vendor: make vendor-update 2020-03-27 21:23:30 +02:00
Aliaksandr Valialkin
149f365f74 lib/httpserver: add -http.maxGracefulShutdownDuration command-line flag for tuning the maximum duration required for graceful shutdown of http server 2020-03-27 21:23:30 +02:00
kreedom
b22da547a2 [vmalert] - parse template annotations (#387)
* [vmalert] - parse template annotations
2020-03-27 18:31:16 +02:00
Aliaksandr Valialkin
047849e855 lib/uint64set: remove zero buckets after Set.Intersect 2020-03-27 01:15:58 +02:00
Aliaksandr Valialkin
f3ec424e7d lib/uint64set: small code cleanup and perf tuning
* Remember the last accessed bucket on Has() call.
* Inline fast paths inside Add() and Has() calls.
* Remove fragile code with maxUnsortedBuckets inside bucket32.
2020-03-25 15:30:25 +02:00
Aliaksandr Valialkin
ef8aee8a2d deployment/docker: update Go builder from Go1.14.0 to Go1.14.1 2020-03-24 22:35:26 +02:00
Aliaksandr Valialkin
dde4a97534 lib/uint64set: go fmt 2020-03-24 22:28:43 +02:00
Aliaksandr Valialkin
f3e0c55ea1 lib/storage: serialize snapshot creation process with mutex
This guarantees that the snapshot contains all the recently added data
from inmemory buffers when multiple concurrent calls to Storage.CreateSnapshot are performed.
2020-03-24 22:27:05 +02:00
Aliaksandr Valialkin
97fb0edd07 lib/uint64set: added more tests 2020-03-24 22:27:04 +02:00
Aliaksandr Valialkin
25f585ecf2 docs/CaseStudies.md: added a case study from MHI Vestas Offshore Wind 2020-03-14 13:22:12 +02:00
Aliaksandr Valialkin
df91d2d91f lib/storage: remove obsolete code 2020-03-13 22:48:17 +02:00
Aliaksandr Valialkin
3c7c71a49c app/vmselect: adjust label_map() handling for corner cases
The following corner cases now supported:
* label_map(q, "label", "", "foo") - adds `label="foo"` to series with missing `label`
* label_map(q, "label", "foo", "") - removes `label="foo"` from series

All the unmatched labels are kept unchanged.
2020-03-13 18:45:03 +02:00
Aliaksandr Valialkin
69f1470692 vendor: update github.com/VictoriaMetrics/metrics from v1.11.0 to v1.11.2
This fixes data race in Histogram
2020-03-13 12:39:57 +02:00
Aliaksandr Valialkin
4fc4912f0c app/vmalert/datasource: typo fix in docs: Labels -> Label 2020-03-13 12:22:33 +02:00
kreedom
a746cb62b6 vmalert add vm datasource, change alertmanager (#364)
* vmalert add vm datasource, change alertmanager

* make linter be happy

* make linter be happy.2

* PR comments

* PR comments.1
2020-03-13 12:19:31 +02:00
Aliaksandr Valialkin
499594f421 lib/promscrape: allow overriding external_labels as Prometheus does
Prometheus docs at https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config say:

> In communication with external systems, they are always applied only
> when a time series does not have a given label yet and are ignored otherwise.

Though this may result in consistency chaos when scrape targets override `external_labels`,
let's stick with Prometheus behavior for the sake of backwards compatibility.

There is a last resort in vmagent: `-remoteWrite.label`, which consistently
sets the configured labels to all the metrics before sending them to remote storage.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/366
2020-03-12 20:24:42 +02:00
Aliaksandr Valialkin
fdc2a9d1d7 app/vmselect: add label_map(q, label, srcValue1, dstValue1, ... srcValueN, dstValueN) function to MetricsQL
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/369
2020-03-12 19:13:47 +02:00
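A usage sketch with hypothetical metric and label names: rename the `env` values `prod` and `dev` while keeping all other values unchanged:

    label_map(http_requests_total, "env", "prod", "production", "dev", "development")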
Aliaksandr Valialkin
92d67e2592 vendor: update google.golang.org/genproto from fc8f55426688 to da6875a35672 2020-03-12 18:11:33 +02:00
Aliaksandr Valialkin
8a853778d7 vendor: update golang.org/x/tools from 26f6a1b6802d to 5e2df02acb1e 2020-03-12 18:07:52 +02:00
Aliaksandr Valialkin
8d75a5dbd0 vendor: update github.com/aws/aws-sdk-go from v1.29.10 to v1.29.22 2020-03-12 17:54:58 +02:00
Aliaksandr Valialkin
cdd6171af1 vendor: update google.golang.org/api from v0.19.0 to v0.20.0 2020-03-12 17:51:49 +02:00
Aliaksandr Valialkin
cc183bc899 vendor: update golang.org/x/sys from d5e6a3e2c0ae to 5c8b2ff67527 2020-03-12 17:46:24 +02:00
Aliaksandr Valialkin
3935038e20 vendor: update github.com/klauspost/compress from v1.10.1 to v1.10.3 2020-03-12 17:32:24 +02:00
Aliaksandr Valialkin
c8dc1cd218 lib/protoparser/csvimport: add missing metric vm_rows_invalid_total{type="csvimport"} 2020-03-12 15:27:45 +02:00
Aliaksandr Valialkin
c1551a3269 README.md: mention about alternative dashboard for cluster version - https://grafana.com/grafana/dashboards/11831 2020-03-12 15:10:14 +02:00
Aliaksandr Valialkin
8023ad7dbd app/vmselect: add -search.maxStalenessInterval for tuning Prometheus data model closer to Influx-style data model 2020-03-11 16:43:34 +02:00
Aliaksandr Valialkin
d4beb17ebe lib/promscrape: remove possible races when registering and de-registering scrape workers for /targets page 2020-03-11 16:30:21 +02:00
Aliaksandr Valialkin
fcd91795d5 app/vmagent: mention that vmagent can filter data 2020-03-11 16:22:39 +02:00
Aliaksandr Valialkin
650830db79 docs/Articles.md: add a link to https://stas.starikevich.com/posts/disk-usage-for-vm-versus-prometheus/ 2020-03-11 04:56:16 +02:00
Aliaksandr Valialkin
cdf70b7944 lib/promscrape: consistently update /targets page after SIGHUP 2020-03-11 03:20:03 +02:00
Aliaksandr Valialkin
301c2acd61 app/vmstorage: return 500 status code instead of 200 status code on internal errors inside /snapshot/* handlers 2020-03-10 23:51:55 +02:00
Aliaksandr Valialkin
61d0ee857c docs/vmagent.md: sync with app/vmagent/README.md 2020-03-10 21:54:04 +02:00
Aliaksandr Valialkin
e17702fada app/vmselect: add optional max_rows_per_line query arg to /api/v1/export
This arg allows limiting the number of data points that may be exported on a single line.
2020-03-10 21:45:56 +02:00
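An illustrative request (host, port and selector are hypothetical), limiting each exported line to at most 1000 data points:

    curl -G 'http://localhost:8428/api/v1/export' \
        --data-urlencode 'match[]={__name__!=""}' -d 'max_rows_per_line=1000'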
Aliaksandr Valialkin
1fe66fb3cc app/{vmagent,vminsert}: add support for importing csv data via /api/v1/import/csv 2020-03-10 21:15:35 +02:00
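A usage sketch assuming the column-mapping `format` query arg (all values below are illustrative):

    curl -d 'GOOG,1.23,4.56,NYSE' \
        'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'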
Aliaksandr Valialkin
49d7cb1a3f all: fix golangci-lint issues 2020-03-10 19:41:46 +02:00
Aliaksandr Valialkin
8d3869cd99 docs/FAQ.md: actualize answer about deduplication 2020-03-09 13:37:12 +02:00
Aliaksandr Valialkin
9d89b08cb5 docs: add missing vmagent.png, which is used in vmagent.md 2020-03-09 13:35:49 +02:00
Aliaksandr Valialkin
5fe38a84eb app/vmagent: properly apply -remoteWrite.sendTimeout to fasthttp.HostClient 2020-03-09 13:31:55 +02:00
Aliaksandr Valialkin
7c432da788 lib/promscrape: do not retry idempotent requests when scraping targets
This should prevent the following unexpected side effects of idempotent request retries:
- increased actual timeout when scraping the target compared to the configured scrape_timeout
- increased load on the target

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/357
2020-03-09 13:31:52 +02:00
Aliaksandr Valialkin
986dba5ab3 app/vmagent: do not allow non-supported fields in -remoteWrite.relabelConfig and file_sd_configs
This should reduce possible confusion like in the https://github.com/VictoriaMetrics/VictoriaMetrics/issues/363
2020-03-06 20:19:13 +02:00
Aliaksandr Valialkin
c386c5de57 app/vmagent: properly add labels set via -remoteWrite.label to metrics before sending them to -remoteWrite.url 2020-03-06 19:26:58 +02:00
Artem Navoiev
58a3e59d59 bump version of codecov-action to v1.0.6 2020-03-05 23:25:13 +02:00
Aliaksandr Valialkin
c5f894b361 Makefile: add build and test rules with enabled race detector. These rules have -race suffix
Fix also `unsafe pointer conversion` errors detected by Go1.14. See https://golang.org/doc/go1.14#compiler .
2020-03-05 12:03:38 +02:00
Aliaksandr Valialkin
9be64e34b4 docs/Articles.md: add a link to https://www.percona.com/blog/2020/02/28/better-prometheus-rate-function-with-victoriametrics/ 2020-03-04 20:05:26 +02:00
Aliaksandr Valialkin
e51a0a56f4 README.md: add a link to https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/Articles 2020-03-04 20:05:18 +02:00
Aliaksandr Valialkin
754db0d22e app/vmagent/README.md: small fixes 2020-03-04 18:14:47 +02:00
Aliaksandr Valialkin
772312bf7b app/vmagent/README.md: typo fix 2020-03-04 18:05:09 +02:00
Aliaksandr Valialkin
871abfab7a app/vmagent/README.md: clarification 2020-03-04 18:03:48 +02:00
Aliaksandr Valialkin
007c591de8 app/vmagent/README.md: add iot and edge monitoring use case 2020-03-04 18:01:34 +02:00
Aliaksandr Valialkin
474a09c0f1 app/vmagent/README.md: add use cases section 2020-03-04 17:42:27 +02:00
Aliaksandr Valialkin
d58aa80e9b README.md: add a link to Synthesio case study 2020-03-04 14:18:19 +02:00
Aliaksandr Valialkin
ad927575b7 docs/CaseStudies: add Synthesio 2020-03-04 14:14:39 +02:00
Aliaksandr Valialkin
0b1e877a7d docs/Single-server-VictoriaMetrics.md: sync with README.md 2020-03-03 21:39:05 +02:00
Aliaksandr Valialkin
0ba8ee6022 README.md: mention -search.cacheTimestampOffset in Backfilling section 2020-03-03 21:38:39 +02:00
Aliaksandr Valialkin
9a944fd169 lib/promscrape: consistency renaming: stopCh -> globalStopCh 2020-03-03 20:08:08 +02:00
Aliaksandr Valialkin
032c88561b app/vminsert/prompush: limit memory usage by pushing promscrape data in smaller blocks 2020-03-03 19:58:54 +02:00
Aliaksandr Valialkin
76036c1897 app/vmagent: add -remoteWrite.maxDiskUsagePerURL for limiting the maximum disk usage for each -remoteWrite.url buffer
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/352
2020-03-03 19:49:07 +02:00
Aliaksandr Valialkin
c31d640eb9 app/vmagent/remotewrite: do not reset empty relabelCtx 2020-03-03 15:01:03 +02:00
Aliaksandr Valialkin
02b55c72dc app/vmagent: add -remoteWrite.urlRelabelConfig for applying individual relabeling for each -remoteWrite.url
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/320
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/308
2020-03-03 13:12:16 +02:00
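A hypothetical invocation (URLs and file names are illustrative); presumably the N-th `-remoteWrite.urlRelabelConfig` applies to the N-th `-remoteWrite.url`:

    ./vmagent -remoteWrite.url=http://storage-1:8428/api/v1/write -remoteWrite.urlRelabelConfig=relabel-1.yml \
        -remoteWrite.url=http://storage-2:8428/api/v1/write -remoteWrite.urlRelabelConfig=relabel-2.yml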
Aliaksandr Valialkin
1d7ab78b55 lib/protoparser/prometheus: allow trailing comma in tags list
The trailing comma is generated by cloudwatch exporter.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/350
2020-03-02 22:22:09 +02:00
Aliaksandr Valialkin
7d178a40bd app/vmselect/prometheus: do not add __name__!= filter when searching for all the matching metric names via /api/v1/label/__name__/values with non-empty label filter
This should reduce query time.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/343
2020-02-28 23:35:55 +02:00
Aliaksandr Valialkin
43754ff420 README.md: put https://gitlab.com/optima_public/prometheus_oauth_proxy in third-party contributions section 2020-02-28 21:23:34 +02:00
Aliaksandr Valialkin
b785429ddb lib/protoparser: metrics renaming: vm_protoparser_<type>_* -> vm_protoparser_*{type="<type>"}
This should improve composability of these metrics in PromQL queries
2020-02-28 20:20:10 +02:00
Aliaksandr Valialkin
f9a584b5c1 app/vmagent/remotewrite: yet another typo fix 2020-02-28 20:05:55 +02:00
Aliaksandr Valialkin
e22fdc1073 lib/persistentqueue: reset chunk file when the persistent queue is empty 2020-02-28 20:05:53 +02:00
Aliaksandr Valialkin
b9b46cb8dc app/vmagent/remotewrite: typo fix 2020-02-28 19:03:16 +02:00
Aliaksandr Valialkin
db6f4e4af1 app/vmagent/remotewrite: limit memory usage when big scrape blocks are pushed to remote storage 2020-02-28 18:58:01 +02:00
Aliaksandr Valialkin
8cc88db38d docs/Single-server-VictoriaMetrics.md: sync with README.md 2020-02-28 12:58:32 +02:00
Aliaksandr Valialkin
f3c28d2ae4 README.md: typo fix 2020-02-28 12:58:31 +02:00
Aliaksandr Valialkin
57528ca31c docs: add a doc for vmagent 2020-02-28 12:23:56 +02:00
Aliaksandr Valialkin
5701b2f7bb app/vmselect/prometheus: properly pass filter for labelName=__name__ in labelValuesWithMatches
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/343
2020-02-28 12:18:14 +02:00
Aliaksandr Valialkin
18af31a4c2 all: properly split vm_deduplicated_samples_total among cluster components
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/345
2020-02-27 23:48:07 +02:00
Aliaksandr Valialkin
6819db5686 lib/envflag: typo fix in docs to -envflag.enable: envoronment->environment 2020-02-27 21:47:58 +02:00
Aliaksandr Valialkin
63a88a619b deployment/docker: update Go builder from Go1.13.8 to Go1.14.0 2020-02-26 22:15:44 +02:00
Aliaksandr Valialkin
c458b521a2 app/vmagent: allow setting -httpListenAddr to empty string in order to disable listening for http requests
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/340
2020-02-26 20:58:11 +02:00
Aliaksandr Valialkin
b459919250 make vendor-update 2020-02-26 20:45:27 +02:00
Aliaksandr Valialkin
cc5fe0b315 vendor: update github.com/VictoriaMetrics/metrics from v1.10.1 to v1.11.0 2020-02-26 20:41:02 +02:00
Aliaksandr Valialkin
117c76311c app/vmagent/README.md: list service discovery mechanisms, which will be added soon
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/334
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/330
2020-02-26 19:27:08 +02:00
Aliaksandr Valialkin
b63e4464f4 lib/promscrape: properly reload new configs on SIGHUP
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/335
2020-02-26 13:54:00 +02:00
Edouard Hur
3ad36134f6 Readme markdown linting (#338)
* fixed MD009/no-trailing-spaces

* fixed MD033/no-inline-html: Inline HTML

* fixed MD012/no-multiple-blanks

* fixed MD007/ul-indent

* fixed MD004/ul-style

* fixed MD031/blanks-around-fences

* fixed MD040/fenced-code-language

* fixed MD032/blanks-around-lists

* fixed MD026/no-trailing-punctuation
2020-02-26 13:21:19 +02:00
Edouard Hur
1f0007d0b1 Readme envvars (#332)
* add details about env vars config

* add env var to table of contents

* remove unnecessary words
2020-02-25 22:41:34 +02:00
Aliaksandr Valialkin
6739c2749d lib/promscrape: go fmt 2020-02-25 20:56:44 +02:00
Aliaksandr Valialkin
7a33da8fea lib/promscrape: do not add missing port to __address__ label in order to be consistent with Prometheus behavior
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/331
2020-02-25 20:49:50 +02:00
Aliaksandr Valialkin
be37d762cd app/vmagent: add -remoteWrite.maxBlockSize command-line flag for limiting the maximum size of unpacked block to send to remote storage 2020-02-25 19:57:47 +02:00
Aliaksandr Valialkin
4e24839a2c app/vmagent: do not allow sending unpacked requests with sizes exceeding -maxInsertRequestSize 2020-02-25 19:34:41 +02:00
Aliaksandr Valialkin
6386aeb1e0 app/vmagent: add ability to accept Influx line protocol data via TCP and UDP
Just set `-influxListenAddr` command-line flag

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/333
2020-02-25 19:12:49 +02:00
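A quick smoke test (addresses and the measurement are illustrative), sending a single Influx line over TCP:

    ./vmagent -remoteWrite.url=http://victoria:8428/api/v1/write -influxListenAddr=:8189
    echo 'cpu,host=host1 usage=12.5' | nc localhost 8189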
Aliaksandr Valialkin
e453880084 app/vmagent/README.md: mention that vmagent exposes target statuses at /targets page 2020-02-25 18:15:58 +02:00
Aliaksandr Valialkin
4c4448b66e app/vminsert: add /targets handler, which exposes Prometheus targets defined in -promscrape.config file 2020-02-25 18:13:11 +02:00
Aliaksandr Valialkin
7ef7c9368e lib/fs: typo fix: read blocks bigger than 8KB via pread() call instead of using mmap 2020-02-25 18:05:06 +02:00
Aliaksandr Valialkin
e1ef72af01 app/vmagent: logo fix 2020-02-25 00:09:19 +02:00
Aliaksandr Valialkin
56c70fe856 app/vmagent: update docs 2020-02-25 00:09:18 +02:00
Aliaksandr Valialkin
e7e4aa5243 app/vmagent/README.md: small fixes 2020-02-24 21:25:38 +02:00
Aliaksandr Valialkin
fed2959658 lib/envflag: substitute dots with underscores in env var names if -envflag.enable is set
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/311
2020-02-24 21:14:44 +02:00
Aliaksandr Valialkin
ae51300973 app/vmselect/promql: properly take into account the first datapoint when calculating rollup_candlestick
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/309
2020-02-24 13:24:30 +02:00
Aliaksandr Valialkin
e65ec88779 app/vmselect/promql: do not take into account values outside the current window in rollup_candlestick
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/309
2020-02-23 18:03:57 +02:00
Yaroslav
a6d0645539 fix rollupOpen(), rollupHigh(), rollupLow() functions (#328) 2020-02-23 18:01:53 +02:00
Aliaksandr Valialkin
04762344c6 app/vmagent: initial implementation for vmagent 2020-02-23 13:36:03 +02:00
Aliaksandr Valialkin
4e905d6501 vendor: update github.com/valyala/fastjson from v1.4.5 to v1.5.0 2020-02-23 10:06:00 +02:00
kreedom
49390b8dbc [vmalert] integration with AlertManager (#325) 2020-02-21 23:15:05 +02:00
Aliaksandr Valialkin
2f55cabaa4 app/vmselect/promql: log when rollupResult cache is cleared 2020-02-21 20:07:01 +02:00
Aliaksandr Valialkin
d21cb43e48 lib/storage: add vm_ prefix to deduplicated_samples_total metric to be consistent with other metrics 2020-02-21 19:33:59 +02:00
Aliaksandr Valialkin
ec9bf39b5b app/vmselect: add -search.cacheTimestampOffset command-line flag
This flag can be used for removing gaps on graphs if the difference between the current time
and the timestamps from the ingested data exceeds 5 minutes.

This is the case when clocks on the data sources and on VictoriaMetrics are improperly synchronized.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/312
2020-02-21 13:58:06 +02:00
Aliaksandr Valialkin
539139391c app/vmselect: add /internal/resetRollupResultCache handler for resetting response cache
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/312
2020-02-21 13:58:05 +02:00
Aliaksandr Valialkin
5431f9cd4e deployment/docker: update Go builder from v1.13.7 to v1.13.8 2020-02-20 19:46:20 +02:00
kreedom
3c06179184 basic vmalert backbone (#317)
* basic vmalert backbone

* Resolve code review comments for vmalert backbone

* Second review fixes for vmalert backbone
2020-02-16 20:59:02 +02:00
Aliaksandr Valialkin
71a52f5f90 lib/protoparser/prometheus: skip leading whitespace from tag names 2020-02-16 19:06:33 +02:00
Aliaksandr Valialkin
e7ba18b0d9 vendor: make vendor-update 2020-02-16 16:11:24 +02:00
Aliaksandr Valialkin
ce15cecae4 lib/storage: typo fix 2020-02-16 15:53:44 +02:00
Aliaksandr Valialkin
32e153e834 lib/storage: prevent from clobbering non-nil lastError in Storage.add 2020-02-16 15:51:26 +02:00
Aliaksandr Valialkin
7b1c7051a3 app/vmselect: add sort_by_label(q, label) and sort_by_label_desc(q, label) functions
This is implementation of https://github.com/prometheus/prometheus/pull/1533 for VictoriaMetrics.
2020-02-13 17:01:37 +02:00
Aliaksandr Valialkin
7836ad8907 lib/mergeset: skip creating temporary part objects when merging source inmemory parts
This should reduce CPU usage when adding new entries to the inverted index.
This should also prevent creating stalled cleaner goroutines for the created temporary parts,
since they were never closed.

This should fix the following issue: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/316 .
2020-02-13 14:09:32 +02:00
Aliaksandr Valialkin
eceaf13e5e lib/{storage,mergeset}: use time.Ticker instead of time.Timer where appropriate
It turned out that time.Timer was used in places where time.Ticker must be used instead.
This could result in blocked goroutines as in https://github.com/VictoriaMetrics/VictoriaMetrics/issues/316 .
2020-02-13 13:10:07 +02:00
Aliaksandr Valialkin
8162d58dbd make vendor-update 2020-02-10 23:28:15 +02:00
Aliaksandr Valialkin
848d5da0be vendor: update github.com/VictoriaMetrics/metrics from v1.9.3 to v1.10.1 2020-02-10 23:08:38 +02:00
Aliaksandr Valialkin
4cc0163c7c docs: migrate ExtendedPromQL->MetricsQL in order to be more consistent 2020-02-10 23:02:43 +02:00
Aliaksandr Valialkin
a801a1a6e7 .github/ISSUE_TEMPLATE: ask for command-line flags and Prometheus logs 2020-02-10 22:56:17 +02:00
Aliaksandr Valialkin
02e852854a README.md: refer to the article about data deletion via relabeling 2020-02-10 22:46:52 +02:00
Aliaksandr Valialkin
9e6e2319b9 README.md: mention that flags may be read from env vars if -envflag.enable command-line flag is set 2020-02-10 16:20:15 +02:00
Aliaksandr Valialkin
025297f15d lib/envflag: check for incorrect flag values read from environment vars 2020-02-10 16:08:10 +02:00
Aliaksandr Valialkin
5d207b2025 lib/envflag: add -envflag.enable command-line flag for enabling reading flags from environment vars
By default flags are read only from command line. They can be read from environment vars if `-envflag.enable` is set.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/311
2020-02-10 16:02:37 +02:00
Aliaksandr Valialkin
8466ab0034 all: allow setting flags via environment vars
Now flags can be set via environment vars with the same names as flags.
Command-line flags override flags set via env vars.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/311
2020-02-10 13:29:13 +02:00
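A hypothetical invocation: with `-envflag.enable` set, the `retentionPeriod` flag below is read from the environment variable of the same name, while command-line flags still take precedence:

    retentionPeriod=12 ./victoria-metrics -envflag.enable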
Aliaksandr Valialkin
e210cd9da1 lib/storage: move -dedup.minScrapeInterval flag outside lib/storage, so it doesn't show up in vminsert in cluster version 2020-02-10 13:09:51 +02:00
Aliaksandr Valialkin
6db573470c docs/Single-server-VictoriaMetrics.md: sync with README.md 2020-02-07 00:02:34 +02:00
Ryota Arai
fffe5d4ba4 Fix a typo in README (selfScrapeInterval) (#310) 2020-02-06 13:14:31 +02:00
Aliaksandr Valialkin
a6c6a2debc app/vmselect/promql: do not add step to range end, since this hack became obsolete as of commit 9e1119dab8 2020-02-05 21:22:19 +02:00
Aliaksandr Valialkin
78b62dee87 app/vmselect/promql: properly adjust time range for data to select
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/309
2020-02-05 21:22:18 +02:00
Aliaksandr Valialkin
366693b9f1 app/vmselect: unconditionally apply offset -step to rollup_candlestick. This makes results more consistent 2020-02-04 23:32:12 +02:00
Aliaksandr Valialkin
525101339e app/vmselect/promql: automatically apply offset -step to rollup_candlestick function in order to obtain the expected OHLC results
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/309
2020-02-04 23:24:35 +02:00
Aliaksandr Valialkin
ada6a3da8d app/vmselect/promql: adjust rollup_candlestick calculations to the expected results
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/309
2020-02-04 22:42:13 +02:00
Aliaksandr Valialkin
40c6ae2952 lib/logger: initialize output to os.Stderr by default 2020-02-04 22:40:44 +02:00
Aliaksandr Valialkin
cff0cb297c Do not require checking for errors returned from fmt.Fprint
This fixes `make errcheck` error found in lib/logger
2020-02-04 22:03:37 +02:00
Aliaksandr Valialkin
e0a4c37fc1 lib/logger: add -loggerOutput command-line flag
This flag allows changing log output from `stderr` to `stdout` if `-loggerOutput=stdout` is set.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/306
2020-02-04 21:47:56 +02:00
Aliaksandr Valialkin
7f3e3a6034 lib/logger: do not clutter -loggerFormat=json output with stack trace
This should improve json parsing
2020-02-04 21:37:25 +02:00
Aliaksandr Valialkin
bd4698bb7a lib/storage: do not deduplicate blocks with fewer than 32 samples during merge
This should improve deduplication accuracy for blocks with a higher number of samples.
2020-02-04 18:41:54 +02:00
Aliaksandr Valialkin
36a1ac8360 app/vmselect: take into account the time the requests wait in the queue if -search.maxConcurrentRequests is exceeded
This should prevent excess CPU usage on timed-out queries.
2020-02-04 16:15:08 +02:00
Aliaksandr Valialkin
834051e5b2 app/vmselect: add a placeholder for /api/v1/metadata, which could be requested by Grafana
See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-metric-metadata

VictoriaMetrics doesn't collect any metadata for metrics, so just return an empty response.
2020-02-04 15:53:47 +02:00
Aliaksandr Valialkin
42864bb52f all: do not clash flag description with back-quoted flag types
See https://golang.org/pkg/flag/#PrintDefaults for more details.
2020-02-04 15:46:52 +02:00
Roman Khavronenko
1e023c6a72 Single dashboard (#300)
* improve description for `Pending datapoints` panel

* bump VM version requirement
2020-02-03 02:09:53 +02:00
Artem Navoiev
a47f292295 [vmalert] add vmalert.png.2 2020-02-02 12:17:19 +02:00
Artem Navoiev
354232b62b [vmalert] add vmalert.png 2020-02-02 12:16:05 +02:00
Artem Navoiev
28778be0cc [vmalert] initial 2020-02-02 12:14:09 +02:00
Aliaksandr Valialkin
90cf356ea1 app/vmselect/promql: adjust and and unless binary operator handling to be consistent with Prometheus 2020-01-31 18:52:38 +02:00
Aliaksandr Valialkin
c0b69ed06e deployment/docker: update Go builder from v1.13.6 to v1.13.7 2020-01-31 18:06:58 +02:00
Aliaksandr Valialkin
011a79da85 lib/fs: remove unused readerAt interface 2020-01-31 15:12:43 +02:00
Aliaksandr Valialkin
c3d86eef96 all: add -dedup.minScrapeInterval command-line flag for data de-duplication
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/86
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/278
2020-01-31 01:16:57 +02:00
Aliaksandr Valialkin
2152f6f0cd lib/storage: re-use indexSearch inside Storage.prefetchMetricNames 2020-01-31 01:16:53 +02:00
Aliaksandr Valialkin
d70ba7eb37 lib/fs: optimize small reads for ReaderAt.MustReadAt by reading from memory-mapped space instead of reading from file descriptor
This should improve performance when reading many small blocks.
2020-01-30 15:09:05 +02:00
Aliaksandr Valialkin
ad8af629bb all: rename ReadAt* to MustReadAt* in order not to clash with io.ReaderAt 2020-01-30 15:08:58 +02:00
Aliaksandr Valialkin
d68546aa4a lib/storage: pre-fetch metricNames for the found metricIDs in Search.Init
This should speed up Search.NextMetricBlock loop for big number of found time series.
2020-01-30 15:08:51 +02:00
Aliaksandr Valialkin
5bb9ccb6bf lib/mergeset: properly update lastAccessTime in indexBlockCache entries
This is a follow-up for 6665f10e7b
2020-01-29 21:20:47 +02:00
Aliaksandr Valialkin
a462355b2f app/vmselect/promql: add keep_next_value(q) for filling gaps with the next non-empty value 2020-01-29 00:48:04 +02:00
Aliaksandr Valialkin
bdbb463756 docs/Single-server-VictoriaMetrics.md: fix heading size for Third-party contributions section 2020-01-28 23:13:35 +02:00
Aliaksandr Valialkin
371e86194d app/vminsert: moved -maxInsertRequestSize command-line flag out of lib/prompb in order to prevent its inclusion in vmselect and vmstorage apps 2020-01-28 23:02:08 +02:00
Aliaksandr Valialkin
adbbc4fa1a app/vmselect/promql: return expected results from increase() over the beginning of time series that start from a big value
Examples for such counters: OS-level counters for network or cpu stats.
2020-01-28 16:30:11 +02:00
Aliaksandr Valialkin
75ad47a43c app/victoria-metrics: check for error arg passed to filepath.Walk callback 2020-01-27 20:56:45 +02:00
Aliaksandr Valialkin
6320a19a8c app/victoria-metrics: remove integration build tag from tests
This simplifies testing with `go test ./app/victoria-metrics` without
the need to remember to pass `-tags=integration` to Go commands.
2020-01-27 20:25:28 +02:00
Aliaksandr Valialkin
7b26db5527 docs/Single-server-VictoriaMetrics.md: update Retention section 2020-01-27 18:44:21 +02:00
Alexander Danilov
1a3626bbe1 Add description for retention and how it works (#297) 2020-01-27 18:38:22 +02:00
Aliaksandr Valialkin
8074c10590 README.md: mention https://github.com/AnchorFree/tsdb-remote-write 2020-01-27 18:35:48 +02:00
Aliaksandr Valialkin
2392a359e1 app/vmselect/promql: fix panic on a single zero vmrange bucket in prometheus_buckets() function
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/296
2020-01-27 18:04:55 +02:00
Aliaksandr Valialkin
6caa9bb51b lib/logger: fix improperly set skipframes for all the logging functions
The bug has been introduced in the previous commit f6baee6efe
2020-01-26 18:34:27 +02:00
Aliaksandr Valialkin
f6baee6efe lib/httpserver: log the caller of httpserver.Errorf
Previously the log message contained `httpserver.Errorf`; now it contains the caller of `httpserver.Errorf`, which is more useful.
2020-01-25 20:17:59 +02:00
Aliaksandr Valialkin
9df5b2d1c3 app/victoria-metrics: add -selfScrapeInterval flag for self-scraping /metrics page
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/30
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/180
2020-01-25 19:19:59 +02:00
Aliaksandr Valialkin
2a0a0ed14d lib/protoparser: add parser for Prometheus exposition text format
This parser will be used by vmagent
2020-01-24 20:11:02 +02:00
Aliaksandr Valialkin
6456c93dbb app/vminsert: move ingestion protocol parsers to lib/protoparser, so they could be re-used in the upcoming vmagent 2020-01-24 16:53:00 +02:00
Aliaksandr Valialkin
1efea246b7 docs/Articles.md: add a link to https://medium.com/@valyala/billy-how-victoriametrics-deals-with-more-than-500-billion-rows-e82ff8f725da 2020-01-22 19:08:35 +02:00
Aliaksandr Valialkin
680080887d all: consistently log durations in seconds with millisecond precision
This should improve logs readability
2020-01-22 18:28:27 +02:00
Aliaksandr Valialkin
3992984e10 vendor: make vendor-update 2020-01-22 18:08:39 +02:00
Aliaksandr Valialkin
9773022e50 app/vmselect: mention the original query and time range in error messages
This should simplify debugging invalid or heavy queries.
2020-01-22 17:36:36 +02:00
Aliaksandr Valialkin
f8954c7250 vendor: update github.com/klauspost/compress from v1.9.7 to v1.9.8
New version should have better gzip compression. See https://github.com/klauspost/compress#changelog
2020-01-22 16:50:15 +02:00
Aliaksandr Valialkin
0ef6f91410 docs: Mention Slack and Telegram channels for user questions 2020-01-22 16:50:14 +02:00
Aliaksandr Valialkin
efc7ad88ec app/vmselect: mention command-line flag, which could be used for adjusting query timeouts, in timeout errors 2020-01-22 15:50:48 +02:00
Aliaksandr Valialkin
ec9651e266 app/vmselect/prometheus: increase default value for -maxExportDuration to 30 days, since the previous 10-minute limit hurt users exporting big amounts of data 2020-01-22 15:50:47 +02:00
Aliaksandr Valialkin
a8b2f82fc6 vendor: update github.com/VictoriaMetrics/fastcache from v1.5.5 to v1.5.7 2020-01-22 12:31:32 +02:00
Aliaksandr Valialkin
582dd01f42 app/vmselect/promql: add range_over_time(m[d]) function for calculating value range for m over d 2020-01-21 19:05:17 +02:00
Aliaksandr Valialkin
36973ee975 app/vmselect/promql: add label_match(q, label, regexp) and label_mismatch(q, label, regexp) functions for filtering out time series with labels matching the given regexp 2020-01-21 15:00:20 +02:00
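Illustrative queries (metric, label and regexp are hypothetical): keep only series whose `instance` label matches the regexp, or drop such series, respectively:

    label_match(node_cpu_seconds_total, "instance", "^prod-.*")
    label_mismatch(node_cpu_seconds_total, "instance", "^prod-.*")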
Aliaksandr Valialkin
6665f10e7b lib/{mergeset,storage}: properly update lastAccessTime in index and data block cache entries 2020-01-20 14:59:47 +02:00
Aliaksandr Valialkin
04363d6511 README.md: mention that delete API shouldn't be used on a regular basis due to non-zero overhead 2020-01-20 13:28:36 +02:00
Aliaksandr Valialkin
c97ade4487 docs/FAQ.md: typo fix according to comment from https://www.reddit.com/message/messages/lezkmo 2020-01-18 18:05:13 +02:00
Aliaksandr Valialkin
970f0dfbf2 docs/CaseStudies.md: add links to COLOPL talk about VictoriaMetrics 2020-01-18 17:23:33 +02:00
Aliaksandr Valialkin
227cf53ef9 app/vminsert: increase default value for -insert.maxQueueDuration from 30s to 60s
This should help catch up with a high ingestion rate after VictoriaMetrics restart.
2020-01-18 14:39:36 +02:00
Aliaksandr Valialkin
257e61195a lib/uint64set: add missing bucket32.b16his values 2020-01-18 14:26:04 +02:00
Aliaksandr Valialkin
4cc0c44b9e lib/uint64set: optimize Set.Union
This should improve performance for queries over big number of time series
2020-01-18 13:47:03 +02:00
Aliaksandr Valialkin
1b5f02e293 lib/uint64set: add benchmarks for Set.Union 2020-01-18 13:47:02 +02:00
Aliaksandr Valialkin
3748fb24b6 lib/storage: skip recovering timestamps order for lossless compression (PrecisionBits=64) 2020-01-18 00:09:33 +02:00
Aliaksandr Valialkin
c9472e4f3a all: use github.com/klauspost/compress/gzip instead of compress/gzip
`github.com/klauspost/compress/gzip` is more optimized than `compress/gzip`.
This gives better gzip compression and decompression speeds.
2020-01-17 23:58:46 +02:00
Aliaksandr Valialkin
bc0f897fcb lib/uint64set: reduce memory allocations in Set.AppendTo 2020-01-17 22:33:09 +02:00
Aliaksandr Valialkin
f9289b804a lib/storage: reduce memory allocations when merging metricID sets 2020-01-17 22:10:44 +02:00
Aliaksandr Valialkin
0c8ad08578 lib/uint64set: typo fix in Set.Intersect 2020-01-17 18:10:58 +02:00
Aliaksandr Valialkin
cdcacaea6d app/vmselect/netstorage: make fmt 2020-01-17 17:47:21 +02:00
Aliaksandr Valialkin
7327adbc86 app/vmselect/netstorage: limit the maximum size for in-memory buffer for temporary blocks file
This should reduce memory usage on systems with more than 8GB RAM.
2020-01-17 16:28:21 +02:00
Aliaksandr Valialkin
9f027ec176 lib/uint64set: optimize Intersect, Subtract and Union functions
This should improve performance for queries over big number of time series.
2020-01-17 16:11:49 +02:00
Aliaksandr Valialkin
cd53f7d177 lib/uint64set: improve benchmark for Set.Intersect 2020-01-17 16:08:17 +02:00
Aliaksandr Valialkin
d0d258b314 app/vmselect: limit the default value for -search.maxConcurrentRequests, so it plays well on systems with more than 16 vCPUs
A single heavy request can saturate all the available CPUs, so let's limit the number of concurrent requests to a lower value.
This gives the insert path more chances to execute.
2020-01-17 15:43:54 +02:00
Aliaksandr Valialkin
d88725f133 app/{vminsert,vmselect}: improve error messages when VictoriaMetrics cannot handle too high number of concurrent inserts / selects 2020-01-17 13:24:37 +02:00
Aliaksandr Valialkin
8dbf430469 lib/uint64set: add benchmark for Set.Intersect 2020-01-17 00:31:07 +02:00
Aliaksandr Valialkin
9ef4d32a9a make vendor-update 2020-01-16 14:14:19 +02:00
Aliaksandr Valialkin
0d7505b00e all: mention command-line flags used for limiting the incoming request size in error messages
This should improve error logs usability.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/287
2020-01-16 13:03:30 +02:00
Aliaksandr Valialkin
2839f4688a app/vmselect/promql: fix panic on sum(aggr_over_time(...)) with incorrect number of args 2020-01-15 16:26:09 +02:00
Aliaksandr Valialkin
605d588ba6 lib/uint64set: reduce memory usage in Union, Intersect and Subtract methods
Iterate over items with the newly added Set.ForEach method instead of allocating a `[]uint64`
slice for all the items before the iteration.
2020-01-15 12:12:49 +02:00
Aliaksandr Valialkin
7483deccca docs/FAQ.md: add bullet comparison with Cortex and Thanos 2020-01-15 10:47:40 +02:00
Aliaksandr Valialkin
893b62c682 lib/{mergeset,storage}: fix uint64 counters alignment for 32-bit architectures (GOARCH=386, GOARCH=arm) 2020-01-14 22:47:04 +02:00
Aliaksandr Valialkin
7830c10eb2 lib/{storage,mergeset}: gradually remove stale entries from block cache and index caches
This should reduce memory usage in the long run when old blocks and indexes
aren't accessed anymore.
2020-01-14 21:38:44 +02:00
Aliaksandr Valialkin
e4f1bfd221 deployment/docker: update Prometheus from v2.14.0 to v2.15.2 and Grafana from v6.5.0 to v6.5.2 2020-01-12 23:14:10 +02:00
Aliaksandr Valialkin
91ee1bce2e README.md: add a link to VictoriaMetrics subreddit - https://www.reddit.com/r/VictoriaMetrics/ 2020-01-12 00:06:20 +02:00
Aliaksandr Valialkin
8b14572f70 app/vmselect/promql: add hoeffding_bound_upper(phi, m[d]) and hoeffding_bound_lower(phi, m[d]) functions
These functions can be used for calculating Hoeffding bounds
for `m` over `d` time range and for the given `phi` in the range `[0..1]`.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/283
2020-01-11 14:46:23 +02:00
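An illustrative query (the metric name is hypothetical) calculating the upper Hoeffding bound with `phi=0.9` over 5-minute windows:

    hoeffding_bound_upper(0.9, process_resident_memory_bytes[5m])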
Aliaksandr Valialkin
8eaced8cae app/vmselect/promql: return continuous values for min_over_time and max_over_time when step is smaller than scrape_interval 2020-01-11 12:47:50 +02:00
Aliaksandr Valialkin
1585dab5a3 deployment/docker: switch Go builder from v1.13.5 to v1.13.6 2020-01-11 11:06:00 +02:00
Aliaksandr Valialkin
cd66d3fc43 README.md: mention about Prometheus->VictoriaMetrics exporter https://github.com/ryotarai/prometheus-tsdb-dump
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/93
2020-01-11 01:29:09 +02:00
Aliaksandr Valialkin
ea231f8167 app/victoria-metrics: adjust integration tests after the commit 99facd71cd6ac151d512cea1df73be91c10c7f83 2020-01-11 00:58:16 +02:00
Aliaksandr Valialkin
46bfdbe6cf app/vmselect/promql: do not take into account the previous point before time window in square brackets for min_over_time, max_over_time, rollup_first and rollup_last functions
This makes the behaviour for these functions similar to Prometheus when processing broken time series with irregular data points
like `gitlab_runner_jobs`. See https://gitlab.com/gitlab-org/gitlab-exporter/issues/50 for details.
2020-01-11 00:26:26 +02:00
Aliaksandr Valialkin
4f0a645f77 vendor: update github.com/valyala/fastjson from v1.4.2 to v1.4.5
This should fix parsing Inf values in `/api/v1/import`. The previous attempt to fix this in VictoriaMetrics v1.32.1 was unsuccessful.
2020-01-10 23:15:15 +02:00
Aliaksandr Valialkin
b829fe5e39 app/vmselect/promql: properly handle aggr(aggr_over_time(...)) 2020-01-10 21:57:18 +02:00
Aliaksandr Valialkin
164278151f app/vmselect/promql: add aggr_over_time(("aggr_func1", "aggr_func2", ...), m[d]) function
This function can be used for simultaneously calculating multiple `aggr_func*` functions
that accept range vector. For example, `aggr_over_time(("min_over_time", "max_over_time"), m[d])`
would calculate `min_over_time` and `max_over_time` for `m[d]`.
2020-01-10 21:18:06 +02:00
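A concrete sketch of the example above, with a hypothetical metric name:

    aggr_over_time(("min_over_time", "max_over_time"), node_load1[10m])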
Aliaksandr Valialkin
c4632faa9d app/vmselect/promql: add tmin_over_time(m[d]) and tmax_over_time(m[d]) functions
These functions return timestamp in seconds for the minimum and maximum value for `m` over time range `d`
2020-01-10 19:39:28 +02:00
Aliaksandr Valialkin
a768198814 docs: fix spelling typos 2020-01-09 23:42:55 +02:00
Roman Khavronenko
57f4875024 fix spellcheck issues (#285) 2020-01-09 23:41:52 +02:00
Aliaksandr Valialkin
b8038a14e7 lib/backup/s3remote: check whether the file exists before deleting it
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/284
2020-01-09 23:20:31 +02:00
Aliaksandr Valialkin
f358fb72d1 app/{vmbackup,vmrestore}: add backup complete file to backup when it is complete and check for this file before restoring from backup
This should prevent restoring from incomplete backups.

Add `-skipBackupCompleteCheck` command-line flag to `vmrestore` in order to be able to restore from old backups without the `backup complete` file.
2020-01-09 15:35:38 +02:00
Aliaksandr Valialkin
1c436b2723 vendor: update github.com/valyala/fastjson from v1.4.1 to v1.4.2
This fixes parsing of `inf` and `nan` values in json lines passed to `/api/v1/import`
2020-01-08 20:47:21 +02:00
Aliaksandr Valialkin
a973df6d79 README.md: remove height="200px" from logo image, since it is improperly displayed on smartphones 2020-01-08 20:29:11 +02:00
Aliaksandr Valialkin
d4132a6915 docs: typo fix 2020-01-08 14:45:27 +02:00
Aliaksandr Valialkin
d5aeda0e1a app/vmselect/promql: skip rate calculation for the first point on time series 2020-01-08 14:42:53 +02:00
Aliaksandr Valialkin
bb71b6d47d docs: add references to Remote Write Storage Wars
Also mention that VictoriaMetrics uses less RAM than Thanos Store Gateway - see https://github.com/thanos-io/thanos/issues/448 for details.
2020-01-04 23:57:35 +02:00
Aliaksandr Valialkin
fc71602039 lib/storage: limit maxRawRowsPerPartition to 500K for any number of rawRowsShardsPerPartition
This should reduce write amplification for high ingestion rate on multi-CPU systems
2020-01-04 23:57:31 +02:00
Aliaksandr Valialkin
c60fdbed30 docs/CaseStudies.md: add link to Remote Write Storage Wars talk from Adidas at PromCon 2019 2020-01-04 16:51:45 +02:00
Aliaksandr Valialkin
d410c78c7e app/vmselect/promql: fix calculations for histogram_share 2020-01-04 14:44:48 +02:00
Aliaksandr Valialkin
66f3d1dac8 README.md: update Alerting section 2020-01-04 13:55:09 +02:00
Aliaksandr Valialkin
d9c4ac9978 lib/metricsql: export IsRollupFunc and IsTransformFunc, since they can be used by package users 2020-01-04 13:25:05 +02:00
Aliaksandr Valialkin
4567a59fa0 LICENSE: update year 2020-01-04 13:21:04 +02:00
Aliaksandr Valialkin
d64699bb9f app/vmselect/promql: add missing MetricName into netstorage.Result in tests 2020-01-04 12:52:39 +02:00
Aliaksandr Valialkin
f409f2d050 app/vmselect/promql: add histogram_share(le, buckets) function 2020-01-04 12:45:55 +02:00
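An illustrative query (the metric name is hypothetical) estimating the share of requests finishing within 0.5 seconds:

    histogram_share(0.5, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))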
Aliaksandr Valialkin
b1ded7cf9a app/vmselect/promql: add absent_over_time(m[d]) func similar to the function in Prometheus 2.16
See https://github.com/prometheus/prometheus/issues/2882
2020-01-04 12:45:07 +02:00
Aliaksandr Valialkin
a8360d04c0 app/vmselect/promql: add histogram_over_time(m[d]) rollup function 2020-01-04 12:44:56 +02:00
Aliaksandr Valialkin
3e09d38f29 app/vmselect/promql: fix results caching for multi-arg rollup functions such as quantile_over_time
Previously only a single arg was taken into account, so caching didn't work properly for multi-arg rollup funcs.
2020-01-03 20:49:08 +02:00
Aliaksandr Valialkin
a774120460 app/vmselect/promql: use scrapeInterval instead of window in denominator when calculating rate for the first point on the time series
This should provide a better estimation for `rate` at the beginning of a time series.
2020-01-03 19:01:50 +02:00
Aliaksandr Valialkin
695682232f lib/uint64set: reduce memory usage when storing big number of sparse metric_id values 2020-01-03 18:16:44 +02:00
Aliaksandr Valialkin
b5645ccbdf app/vmselect/promql: increase the estimated number of time series returned by aggr() by (something) from 100 to 1K, since 100 may result in OOM for a high number of time series 2020-01-03 01:02:21 +02:00
Aliaksandr Valialkin
cb3a342882 app/vmselect/promql: add share_le_over_time and share_gt_over_time functions for SLI and SLO calculations 2020-01-03 00:41:16 +02:00
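Illustrative SLI-style queries (the metric name and threshold are hypothetical, and the argument order is assumed): the share of raw samples at or below 0.5, and the share above it, over the last 24 hours:

    share_le_over_time(response_time_seconds[24h], 0.5)
    share_gt_over_time(response_time_seconds[24h], 0.5)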
Aliaksandr Valialkin
0038365206 docs: refer to standalone MetricsQL package 2020-01-02 23:43:35 +02:00
Aliaksandr Valialkin
61c9d320ed vendor: update github.com/VictoriaMetrics/fastcache from v1.5.4 to v1.5.5 2019-12-29 18:17:49 +02:00
Aliaksandr Valialkin
a21d786d3c lib/metricsql: add example for ExpandWithExprs 2019-12-26 21:32:11 +02:00
Aliaksandr Valialkin
192b51c246 vendor: make vendor-update 2019-12-26 19:41:02 +02:00
Aliaksandr Valialkin
17a4dc9782 vendor: update github.com/valyala/gozstd from v1.6.3 to v1.6.4
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/281
2019-12-26 19:30:08 +02:00
Aliaksandr Valialkin
6f67e0b56b lib/metricsql: add ExpandWithExprs 2019-12-25 22:20:30 +02:00
Aliaksandr Valialkin
1925ee038d Rename lib/promql to lib/metricsql and apply small fixes 2019-12-25 22:03:59 +02:00
Mike Poindexter
bec62e4e43 Split Extended PromQL parsing to a separate library 2019-12-25 22:03:51 +02:00
Aliaksandr Valialkin
d880325cf6 app/vmselect/promql: make sure AdjustStartEnd returns time range covering the same number of points as the initial time range
This should prevent the following panic at app/vmselect/promql/binary_op.go:255:

    BUG: len(leftVaues) must match len(rightValues) and len(dstValues)
2019-12-24 22:45:56 +02:00
Aliaksandr Valialkin
c18802af59 lib/fs: typo fix in fadvise_unix.go 2019-12-24 20:59:28 +02:00
Aliaksandr Valialkin
4ba4abe666 lib/encoding: log the compressed block contents if it cannot be decompressed or unmarshaled
This should help detecting the root cause of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/281
2019-12-24 20:48:31 +02:00
Aliaksandr Valialkin
5bb39e757b lib/encoding: mention src contents in error message returned from unmarshalInt64NearestDelta*
This should simplify detecting the root cause of the issue at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/281
2019-12-24 20:41:52 +02:00
Aliaksandr Valialkin
d5c9841220 lib/encoding: mention unpacked block size in the error message if unparsed tail left
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/281
2019-12-24 20:35:13 +02:00
Aliaksandr Valialkin
9e19949c6b app/vmselect/promql: adjust calculations for rate and increase for the first value
These calculations should trigger alerts on `/api/v1/query` for counters starting from values greater than 0.
2019-12-24 19:39:25 +02:00
Aliaksandr Valialkin
0455c03cb9 app/vmselect/promql: properly calculate rate on the first data point
It is calculated as `value / scrape_interval`, since the value was missing on the previous scrape,
i.e. we can assume its value was 0 at this time.
2019-12-24 15:55:52 +02:00
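For example, if the first sample of a counter equals 30 and the scrape interval is 15s, the rate at that point is estimated as 30 / 15 = 2 per second, assuming the counter was 0 one scrape interval earlier.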
Aliaksandr Valialkin
5cb8d97743 all: use gozstd instead of pure Go zstd for GOARCH=amd64 2019-12-24 12:42:42 +02:00
Aliaksandr Valialkin
31d04fb5df Revert "lib/logger: prevent from blocking when log output isn't consumed in timely manner"
This reverts commit e3c462f08a.

Reason to revert: this leaves incomplete logs on app shutdown.
2019-12-24 12:21:39 +02:00
Aliaksandr Valialkin
5b75984aa9 app/vmselect/netstorage: move MustAdviseSequentialRead to lib/fs 2019-12-23 23:16:11 +02:00
Aliaksandr Valialkin
097c21931c docs: sync README.md with Single-server-VictoriaMetrics.md 2019-12-23 20:34:21 +02:00
Roman Khavronenko
85463a7199 update configuration recommendations for Prometheus remote_write (#277) 2019-12-23 20:33:10 +02:00
Aliaksandr Valialkin
6a1499efa3 lib/encoding/zstd: prevent from possible encoder leak when concurrent goroutines create encoders for the same compressionLevel
Thanks to @klauspost for the pointer to this issue. See https://github.com/klauspost/compress/issues/195 for details.
2019-12-23 18:05:41 +02:00
Aliaksandr Valialkin
bf4413e58d README.md: document how to export and import gzipped data 2019-12-23 13:40:22 +02:00
Aliaksandr Valialkin
e3c462f08a lib/logger: prevent from blocking when log output isn't consumed in timely manner
Drop log messages instead of blocking and increment `vm_log_messages_dropped_total` metric.
2019-12-20 11:49:34 +02:00
Aliaksandr Valialkin
bea5a8700a app/vmselect: add -search.maxExportDuration command-line flag for limiting /api/v1/export duration
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/275
2019-12-20 11:35:22 +02:00
Aliaksandr Valialkin
1825893eef lib/storage: scale ingestion performance by sharding rawRows on systems with more than 8 CPU cores 2019-12-19 18:18:29 +02:00
Aliaksandr Valialkin
97f70ccda7 lib/storage: optimize bulk import performance when multiple data points are inserted for the same time series
This should speed up `/api/v1/import` and make it more scalable on multi-core systems.
2019-12-19 18:18:29 +02:00
Andrii Dembitskyi
2fba7b6f35 Fix typo in log message 2019-12-19 14:33:20 +02:00
Aliaksandr Valialkin
d03827c57d app/vminsert: return StatusNoContent http response for /api/v1/import to be consistent with other insert handlers 2019-12-19 01:21:54 +02:00
Aliaksandr Valialkin
bb530a0591 lib/httpserver: inline checkAuth code to make it more clear 2019-12-18 23:06:25 +02:00
koalaty-code
aea4c80dd7 Ignore /health endpoint when checking auth 2019-12-18 23:04:31 +02:00
Aliaksandr Valialkin
5e8e0fbc80 docs/ExtendedPromQL.md: rewording regarding scalar vs instant vector difference 2019-12-18 21:47:24 +02:00
Aliaksandr Valialkin
1e8aa89a3b docs/Home.md: fix link to case studies 2019-12-18 01:04:20 +02:00
Aliaksandr Valialkin
56595ae12a docs: renaming: PromQL extensions -> MetricsQL 2019-12-18 00:56:51 +02:00
Aliaksandr Valialkin
96ff8d9adb app/vmselect: add ability to pass match[], start and end to /api/v1/labels
This makes the `/api/v1/labels` handler consistent with already existing functionality for `/api/v1/label/.../values`.

See https://github.com/prometheus/prometheus/issues/6178 for more details.
2019-12-15 00:20:50 +02:00
Aliaksandr Valialkin
02f6566ce1 app/vmbackup: mention that backups are possible to Ceph and Swift 2019-12-14 01:08:49 +02:00
Aliaksandr Valialkin
7535f20c98 docs: publish vmbackup and vmrestore docs on wiki and victoriametrics.github.io 2019-12-14 01:05:55 +02:00
Aliaksandr Valialkin
bc645152cb app/vminsert: simultaneously accept telnet put and HTTP /api/put OpenTSDB metrics at -opentsdbListenAddr
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/266
2019-12-14 00:30:12 +02:00
Aliaksandr Valialkin
f5ac9b0721 lib/logger: add -loggerFormat for choosing log message formats
Supported formats: default, json

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/265
2019-12-13 15:10:05 +02:00
Aliaksandr Valialkin
d95a43f392 docs: sync with cluster branch 2019-12-12 20:49:55 +02:00
Aliaksandr Valialkin
87a8348062 make vendor-update 2019-12-12 19:39:52 +02:00
Aliaksandr Valialkin
cea5a14853 all: rename Extended PromQL to PromQL extensions 2019-12-12 19:25:58 +02:00
Aliaksandr Valialkin
9787c228a4 docs/CaseStudies.md: add a link to VMQL 2019-12-12 14:53:48 +02:00
Aliaksandr Valialkin
c121608205 README.md: mention that {__name__!=""} selects all the time series in /api/v1/export 2019-12-12 14:48:30 +02:00
Aliaksandr Valialkin
492f032b38 docs: add Dreamteam numbers 2019-12-12 01:01:07 +02:00
Aliaksandr Valialkin
4624c060ac docs/Single-server-VictoriaMetrics.md: sync with README.md 2019-12-12 00:55:14 +02:00
Clémence Saussez
8454679d9f README.md: adds link to Grafana dashboard for clustered version (#259)
Signed-off-by: Clemence Saussez <clemence@zen.ly>
2019-12-12 00:54:24 +02:00
Aliaksandr Valialkin
440a15111e deployment/docker/Makefile: mention that the Makefile rules must be invoked from the repository root 2019-12-11 23:33:02 +02:00
Aliaksandr Valialkin
6ddcd162ed all: publish Docker images for the following GOARCH: amd64, arm, arm64, ppc64le and 386
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/258
2019-12-11 23:32:59 +02:00
Aliaksandr Valialkin
6504f78ce4 README.md: add Docker hub shield 2019-12-11 18:34:26 +02:00
Aliaksandr Valialkin
73b2a3d4b7 app/vmselect/promql: return lower and upper bounds for the estimated percentile from histogram_quantile if third arg is passed
Updates https://github.com/prometheus/prometheus/issues/5706
2019-12-11 13:57:26 +02:00
Aliaksandr Valialkin
07d5bc986b CaseStudies: clarify wording: metrics -> active time series 2019-12-11 12:05:08 +02:00
Aliaksandr Valialkin
caa4eb72d9 app/vmselect/promql: return matrix instead of vector on subqueries to /api/v1/query like Prometheus does 2019-12-11 01:00:26 +02:00
Aliaksandr Valialkin
3c076544bf app/vmselect/promql: allow negative offsets
Updates https://github.com/prometheus/prometheus/issues/6282
2019-12-11 01:00:23 +02:00
Aliaksandr Valialkin
35f5ca1def README.md: typo fixes 2019-12-09 23:30:01 +02:00
Aliaksandr Valialkin
a7d80f62be README.md: add a chapter about Prometheus querying API usage
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/253
2019-12-09 23:27:23 +02:00
Aliaksandr Valialkin
40540397c3 README.md: use relative links to README.md 2019-12-09 23:04:34 +02:00
Aliaksandr Valialkin
c107f46b0e docs: mention about /api/v1/import in Single-server-VictoriaMetrics.md 2019-12-09 23:02:07 +02:00
Aliaksandr Valialkin
8cce513a15 docs: mention about /api/v1/import in Cluster-VictoriaMetrics.md 2019-12-09 23:01:14 +02:00
Aliaksandr Valialkin
b01ddfdd76 deployment/docker: update Go builder from go1.13.4 to go1.13.5 2019-12-09 22:58:26 +02:00
Aliaksandr Valialkin
68e1cf8942 app/vminsert: add /api/v1/import handler
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6
2019-12-09 20:59:04 +02:00
Aliaksandr Valialkin
8501b4a48d app/vminsert: consistency renaming for counters 2019-12-09 16:43:10 +02:00
Aliaksandr Valialkin
0ed9258545 lib/{mergeset,storage}: log info message when both source and destination part paths from txn are missing during startup
This is an expected condition after unclean shutdown (OOM, hard reset, `kill -9`) on an NFS disk.
2019-12-09 15:44:53 +02:00
Roman Khavronenko
b0d88460de #251 - add Logging rate panel (#254) 2019-12-09 13:05:59 +02:00
Aliaksandr Valialkin
8db7660afe docs/CaseStudies.md: mention that additional references and reviews can be obtained from our Slack channel 2019-12-08 14:04:18 +02:00
Aliaksandr Valialkin
18369bca42 docs/ExtendedPromQL.md: add a link to https://medium.com/@valyala/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350 for histogram func 2019-12-08 13:48:33 +02:00
Aliaksandr Valialkin
95328782c3 docs/CaseStudies.md: re-wording 2019-12-08 13:43:49 +02:00
Aliaksandr Valialkin
981cb66a95 docs/CaseStudies.md: improve wording 2019-12-08 13:39:29 +02:00
Aliaksandr Valialkin
f15d89bfe0 vendor: fix broken build for GOARCH=arm64 on golang.org/x/sys/unix 2019-12-08 13:27:38 +02:00
Aliaksandr Valialkin
36feb7d3e4 docs: add draft version of case studies 2019-12-08 13:23:15 +02:00
Aliaksandr Valialkin
d900184d8d vendor: fix arm build for golang.org/x/sys/unix/zptrace_armnn_linux.go 2019-12-08 12:49:05 +02:00
Aliaksandr Valialkin
293b541784 make vendor-update 2019-12-07 23:10:16 +02:00
Aliaksandr Valialkin
84b57e8974 app/vminsert/influx: add a test case from https://community.librenms.org/t/integration-with-victoriametrics/9689 2019-12-07 23:00:40 +02:00
Aliaksandr Valialkin
b458e5a213 README.md: mention that VictoriaMetrics is built on shared nothing architecture 2019-12-05 20:39:44 +02:00
Aliaksandr Valialkin
c09472dfd9 app/vmselect/promql: add {topk|bottomk}_{min|max|avg|median} aggregate functions for returning exactly K time series on the given time range
The full list of functions added:
- `topk_min(k, q)` - returns top K time series with the max minimums on the given time range
- `topk_max(k, q)` - returns top K time series with the max maximums on the given time range
- `topk_avg(k, q)` - returns top K time series with the max averages on the given time range
- `topk_median(k, q)` - returns top K time series with the max medians on the given time range
- `bottomk_min(k, q)` - returns bottom K time series with the min minimums on the given time range
- `bottomk_max(k, q)` - returns bottom K time series with the min maximums on the given time range
- `bottomk_avg(k, q)` - returns bottom K time series with the min averages on the given time range
- `bottomk_median(k, q)` - returns bottom K time series with the min medians on the given time range
2019-12-05 19:26:47 +02:00
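An illustrative query (the metric name is hypothetical) returning exactly three series with the highest average values over the selected time range:

    topk_avg(3, node_cpu_seconds_total)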
Aliaksandr Valialkin
72345eb5bd lib/{mergeset,storage}: make sure pending transaction deletions are finished before and after runTransactions call.
The `runTransactions` call issues async deletions for transaction files. The previously issued transaction deletions
can race with the next call to `runTransactions`. Prevent this by waiting until all the pending transaction
deletions are finished at the beginning of `runTransactions`. Also make sure that all the pending transaction
deletions are finished before returning from `runTransactions`.
2019-12-04 21:40:30 +02:00
Aliaksandr Valialkin
1244ad810d lib/httpserver: add /ping handler for compatibility with Influx agents
Certain Influx agents check the `/ping` endpoint before starting
to send Influx line protocol data. See https://docs.influxdata.com/influxdb/v1.7/tools/api/#ping-http-endpoint
2019-12-04 19:15:52 +02:00
Aliaksandr Valialkin
359c4d6109 docs: add a link to https://medium.com/@valyala/prometheus-storage-technical-terms-for-humans-4ab4de6c3d48 2019-12-03 22:37:16 +02:00
Aliaksandr Valialkin
face3d57bf app/vmselect: add placeholders for /api/v1/rules and /api/v1/alerts 2019-12-03 19:36:33 +02:00
Aliaksandr Valialkin
a247236f61 lib/storage: fall back to the global inverted index if a filter matches too many time series in the per-day index
Previously this resulted in an error message. The query may succeed via a search in the global index.
2019-12-03 14:48:31 +02:00
Aliaksandr Valialkin
54741ee578 lib/storage: fix printing tag filters in TagFilters.String 2019-12-03 14:25:13 +02:00
Aliaksandr Valialkin
efbc83a13e lib/storage: print __name__ instead of empty string in user-visible tag filters 2019-12-03 14:18:28 +02:00
Aliaksandr Valialkin
ade453847f docs: typo fixes 2019-12-03 00:44:50 +02:00
Aliaksandr Valialkin
f52874dab4 lib/storage: optimize regexp filter search 2019-12-03 00:43:12 +02:00
Artem Navoiev
652ba59ce9 [docs] update release page doc 2019-12-02 23:01:51 +02:00
Artem Navoiev
3e81ab2f75 [docs] change titles 2019-12-02 22:53:11 +02:00
Artem Navoiev
a778233877 [docs] change titles 2019-12-02 22:50:54 +02:00
Aliaksandr Valialkin
14100ed643 vendor: update github.com/VictoriaMetrics/metrics from v1.9.1 to v1.9.2
This fixes possible deadlock when metrics.WritePrometheus calls Gauge callback, which calls metrics functions with internal lock.
2019-12-02 22:33:33 +02:00
Artem Navoiev
cfc6e7df07 [docs] revert titles 2019-12-02 22:06:39 +02:00
Artem Navoiev
c07a83374c [docs] remove double titles 2019-12-02 22:02:59 +02:00
Artem Navoiev
c76b2be21f [ci] add github pages action 2019-12-02 21:53:33 +02:00
Aliaksandr Valialkin
638a5cbb16 lib/{mergeset,storage}: remove transaction files only after the mentioned dirs are really removed
This should fix the issue on NFS when incompletely removed dirs may be left
after unclean shutdown (OOM, kill -9, hard reset, etc.), while the corresponding transaction
files are already removed.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/162
2019-12-02 21:36:31 +02:00
Aliaksandr Valialkin
20812008a7 lib/storage: remove metricID with missing metricID->metricName entry
The metricID->metricName entry can be missing in the indexdb after unclean shutdown
when only a part of entries for new time series is written into indexdb.

Recover from such a situation by removing the broken metricID. A new metricID
will be automatically created for the time series with the given metricName
when a new data point arrives for it.
2019-12-02 20:46:44 +02:00
Aliaksandr Valialkin
62a915f2b2 lib/storage: protect from time drift during indexdb rotation
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/248
2019-12-02 14:44:42 +02:00
Aliaksandr Valialkin
42da569bcd lib/logger: merge file and line labels into location="file:line"
This should improve the usability of the `vm_log_messages_total` metric in practical queries
2019-12-02 14:44:40 +02:00
Aliaksandr Valialkin
70b8191fab lib/storage: generate more human-friendly result in TagFilters.String 2019-12-02 13:52:22 +02:00
Aliaksandr Valialkin
9476b73527 app/vmselect/promql: estimate per-series scrape interval as 0.6 quantile for the first 100 intervals
This should improve scrape interval estimation for time series with gaps.
2019-12-02 13:42:33 +02:00
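A Go sketch of the estimation idea, assuming millisecond timestamps (a simplification; the actual app/vmselect/promql code differs):

    package main

    import (
    	"fmt"
    	"sort"
    )

    // estimateScrapeInterval returns the 0.6 quantile of the gaps between
    // the first 100 sample timestamps (in milliseconds). A quantile is
    // robust to occasional long gaps from missed scrapes, unlike a mean.
    func estimateScrapeInterval(timestamps []int64) int64 {
    	n := len(timestamps)
    	if n < 2 {
    		return 0
    	}
    	if n > 101 {
    		n = 101 // consider at most the first 100 intervals
    	}
    	gaps := make([]int64, 0, n-1)
    	for i := 1; i < n; i++ {
    		gaps = append(gaps, timestamps[i]-timestamps[i-1])
    	}
    	sort.Slice(gaps, func(i, j int) bool { return gaps[i] < gaps[j] })
    	return gaps[int(0.6*float64(len(gaps)-1))]
    }

    func main() {
    	ts := []int64{0, 10000, 20000, 45000, 55000} // one missed scrape
    	fmt.Println(estimateScrapeInterval(ts))      // 10000
    }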
Aliaksandr Valialkin
542b9c2043 lib/logger: consistency renaming from vm_log_messages_count to vm_log_messages_total, since this is a counter 2019-12-02 00:49:00 +02:00
Aliaksandr Valialkin
c567919f80 lib/logger: track the number of log messages by (level, file, line) in the vm_log_messages_count metric 2019-12-01 18:37:49 +02:00
Aliaksandr Valialkin
761645b20a lib/netutil: use IPv6 for both listening and dialing if -enabledTCP6 is set
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/244
2019-12-01 02:57:13 +02:00
Aliaksandr Valialkin
811b7a8303 app/vminsert/influx: allow empty measurement in Influx line protocol
In this case metric names are mapped directly from field names without any prefixes.
2019-11-30 23:18:41 +02:00
Artem Navoiev
4972bd4c96 Update release guide add Wiki section. Change styling 2019-11-30 21:10:42 +02:00
Artem Navoiev
335e0f8f6a Update release guide add Wiki section 2019-11-30 21:08:48 +02:00
Artem Navoiev
505e46980a [ci] push docs/*.md file to wiki 2019-11-30 20:58:28 +02:00
Artem Navoiev
ab88b77515 rename doc to docs 2019-11-30 20:48:40 +02:00
Artem Navoiev
3d8e75e065 [ci] test wiki push 2019-11-30 20:38:37 +02:00
Artem Navoiev
74b4ccfc91 [ci] push to wiki 2019-11-30 20:36:10 +02:00
Aliaksandr Valialkin
75ff524a4e app/vmselect/promql: fix corner case for increase over time series with gaps
In this case `increase` could return an invalid high value for the first point after the gap.
2019-11-30 01:34:56 +02:00
Aliaksandr Valialkin
96492348cb deployment/docker/certs: update TLS certs source from alpine:3.9 to alpine:3.10 2019-11-29 19:57:29 +02:00
Aliaksandr Valialkin
f733cb2186 lib/backup: cosmetic fixes after #243 2019-11-29 18:07:04 +02:00
glebsam
15b7406f7b Add option to provide custom endpoint for S3, add option to specify S3 config profile (#243)
* Add option to provide custom endpoint for S3 for use with s3-compatible storages, add option to specify S3 config profile

* make fmt
2019-11-29 17:59:56 +02:00
Aliaksandr Valialkin
9010c6a1d6 lib/netutil: add -enableTCP6 command-line flag for enabling listening for IPv6 additionally to IPv4 TCP ports 2019-11-29 17:32:47 +02:00
Aliaksandr Valialkin
a7125a5b7b lib/backup: remove flock.lock file in empty dirs
This fixes an issue when VictoriaMetrics doesn't see the restored data after the following operations:

1. Stop VictoriaMetrics.
2. Delete `<-storageDataPath>` dir.
3. Start VictoriaMetrics, then stop it.
4. Restore data from backup with `vmrestore`.
5. Start VictoriaMetrics.

`vmrestore` didn't properly delete empty dirs in `<-storageDataPath>/indexdb` because of the remaining `flock.lock` files in these dirs.
2019-11-28 13:38:58 +02:00
Aliaksandr Valialkin
a6d7179286 README.md: remove the unnecessary step during restoring from backups 2019-11-27 19:57:03 +02:00
Aliaksandr Valialkin
e828647d0f vendor: make vendor-update 2019-11-27 15:37:14 +02:00
Aliaksandr Valialkin
31fb6f2b07 vendor: update github.com/VictoriaMetrics/fastcache from v1.5.2 to v1.5.4 2019-11-27 15:30:33 +02:00
Aliaksandr Valialkin
2c86816950 deployment/docker: update Grafana from v6.4.4 to v6.5.0 2019-11-27 15:10:37 +02:00
Aliaksandr Valialkin
4c859d980c app/vmselect/prometheus: consistently apply nocache arg to /api/v1/query the same way as to /api/v1/query_range
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/241
2019-11-26 22:55:43 +02:00
Aliaksandr Valialkin
14bcff6015 lib/httpserver: improve docs for -tls* flags to be more clear
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/242
2019-11-26 18:08:35 +02:00
Aliaksandr Valialkin
110235f789 app/vmselect/prometheus: fix content-type for /api/v1/export responses
The correct Content-Type should be `application/stream+json` instead of `application/json`
Thanks to Joshua Ryder for pointing this out.
2019-11-26 17:45:26 +02:00
Aliaksandr Valialkin
205233d9a7 app/vmselect/promql: remove zero timeseries from prometheus_buckets output 2019-11-25 19:10:23 +02:00
Aliaksandr Valialkin
3f99f39e9b app/vmselect/prometheus: reduce default value for -search.latencyOffset from 60s to 30s
30 seconds should be enough for almost all the cases
2019-11-25 16:33:42 +02:00
Aliaksandr Valialkin
e91cb34c0e app/vmselect/promql: allow nested parens 2019-11-25 16:13:41 +02:00
Aliaksandr Valialkin
826dfd63a5 vendor: update github.com/VictoriaMetrics/metrics from v1.9.0 to v1.9.1 2019-11-25 15:23:01 +02:00
Aliaksandr Valialkin
0401969d78 app/vmselect/promql: re-use metrics.Histogram when calculating histogram function for each point on the graph
This should reduce the amount of memory allocations
2019-11-25 14:24:21 +02:00
Aliaksandr Valialkin
da98703748 app/vmselect/promql: optimize binary search over big number of samples during rollup calculations 2019-11-25 14:01:46 +02:00
Aliaksandr Valialkin
c28876172f app/vmselect/promql: adjust tests after the upgrade of github.com/VictoriaMetrics/metrics from v1.8.3 to v1.9.0 2019-11-25 13:43:57 +02:00
Aliaksandr Valialkin
66c53bf3c6 vendor: update github.com/VictoriaMetrics/metrics from v1.8.3 to v1.9.0 2019-11-25 13:19:43 +02:00
Aliaksandr Valialkin
50ae1879c6 app/vmselect/promql: add histogram aggregate function, which is useful for building heatmaps from multiple time series 2019-11-24 00:04:25 +02:00
Aliaksandr Valialkin
4ff2fbcf3f vendor: update github.com/VictoriaMetrics/metrics from v1.8.2 to v1.8.3 2019-11-24 00:04:24 +02:00
Aliaksandr Valialkin
5285acae3e lib/decimal: calculate ln2/ln10 constant during compile time 2019-11-23 15:52:58 +02:00
Aliaksandr Valialkin
8582b50360 app/vmselect/promql: do not take into account buckets with negative counters in prometheus_buckets 2019-11-23 14:19:25 +02:00
Aliaksandr Valialkin
19dfe52254 app/vmselect/promql: properly handle histogram_quantile(0, ...) with zero buckets 2019-11-23 14:02:35 +02:00
Aliaksandr Valialkin
4bb88843cf app/vmselect: add vm_per_query_{rows,series}_processed_count histograms 2019-11-23 13:23:26 +02:00
Aliaksandr Valialkin
0827bb6ce5 vendor: update github.com/VictoriaMetrics/metrics from v1.8.1 to v1.8.2 2019-11-23 11:48:54 +02:00
Aliaksandr Valialkin
7753c8c0a1 app/vmselect/promql: transparently apply prometheus_buckets in histogram_quantile 2019-11-23 11:48:51 +02:00
Aliaksandr Valialkin
ef25e1b049 vendor: update github.com/VictoriaMetrics/metrics from v1.8.0 to v1.8.1 2019-11-23 00:49:13 +02:00
Aliaksandr Valialkin
9d1fcb2be6 vendor: update github.com/VictoriaMetrics/metrics from v1.7.2 to v1.8.0. This version supports histograms 2019-11-23 00:20:27 +02:00
Aliaksandr Valialkin
c4287b3c86 app/vmselect/promql: add prometheus_buckets function for converting the upcoming histogram buckets from github.com/VictoriaMetrics/metrics to Prometheus-compatible buckets 2019-11-23 00:20:20 +02:00
Aliaksandr Valialkin
1f3fd2c910 app/vmselect: adjust end arg instead of adjusting start arg if start > end
The `start` arg has higher chances of being set properly compared to the `end` arg,
so it is expected that the `end` arg may be adjusted if it was set incorrectly.
2019-11-22 16:12:19 +02:00
Aliaksandr Valialkin
90b03309de vendor: updated github.com/valyala/gozstd from v1.6.2 to v1.6.3 2019-11-21 23:57:00 +02:00
Aliaksandr Valialkin
7a4635f853 all: remove the remaining mentions of cluster version 2019-11-21 23:18:22 +02:00
Aliaksandr Valialkin
3e9b7addb1 lib/httpserver: typo fix in -httpAuth.password command-line description 2019-11-21 21:54:26 +02:00
Aliaksandr Valialkin
f652c0f40f lib/storage: move non-matching tag filters to the top at matchTagFilters
This should reduce the amount of useless work needed for matching the next metricNames.
2019-11-21 21:35:13 +02:00
Aliaksandr Valialkin
b8cde6cce1 lib/storage: speed up time series search for queries with multiple filters
Use optimized specialized binary search for uint64 metricIDs instead of generic sort.Search.
2019-11-21 18:43:17 +02:00
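The gist of this optimization: sort.Search calls a user callback per probe, while a specialized search over a sorted []uint64 keeps the whole loop inlineable. A hedged Go sketch (not the actual lib/storage code):

    package main

    import "fmt"

    // binarySearchUint64 returns the smallest index i such that a[i] >= x,
    // or len(a) if there is no such index. Unlike the generic sort.Search,
    // it avoids a closure call per probe, which adds up on hot lookup paths.
    func binarySearchUint64(a []uint64, x uint64) int {
    	lo, hi := 0, len(a)
    	for lo < hi {
    		mid := int(uint(lo+hi) >> 1) // avoid overflow on huge slices
    		if a[mid] < x {
    			lo = mid + 1
    		} else {
    			hi = mid
    		}
    	}
    	return lo
    }

    func main() {
    	metricIDs := []uint64{3, 7, 7, 10, 42}
    	fmt.Println(binarySearchUint64(metricIDs, 10)) // 3
    }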
Aliaksandr Valialkin
aeea59e280 Makefile: create files with sha256 checksums during make release
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/19
2019-11-20 22:43:37 +02:00
Aliaksandr Valialkin
74e563ca3f README.md: added a link to https://github.com/dreamteam-gg/ansible-victoriametrics-role 2019-11-20 21:26:43 +02:00
Aliaksandr Valialkin
5c1e4143e9 lib/storage: verify the number of returned metricIDs in BenchmarkHeadPostingForMatchers 2019-11-20 15:39:28 +02:00
Aliaksandr Valialkin
52d7ca6bf0 lib/decimal: increase decimal->float conversion speed for integer numbers 2019-11-20 13:04:34 +02:00
Aliaksandr Valialkin
75eeea21ee lib/decimal: reduce rounding error when converting from decimal to float with negative exponent
While at it, slightly increase the conversion performance by moving fast path to the top of the loop.
2019-11-19 23:35:33 +02:00
Artem Navoiev
c03b87dac0 update version of codecove to 1.04 2019-11-19 22:23:14 +02:00
Aliaksandr Valialkin
259dc95366 make vendor-update 2019-11-19 21:35:07 +02:00
Aliaksandr Valialkin
cfb9fa2100 lib/backup: retrieve only the required metadata when reading GCS objects 2019-11-19 21:06:34 +02:00
Aliaksandr Valialkin
355ccba81a make vendor-update 2019-11-19 21:05:37 +02:00
Aliaksandr Valialkin
443189fb0a app/{vmbackup,vmrestore}: add -maxBytesPerSecond command-line flag for limiting the used network bandwidth during backup / restore 2019-11-19 20:31:52 +02:00
Aliaksandr Valialkin
2db06f0ef8 lib/backup: prevent from restoring to directory which is in use by VictoriaMetrics during the restore 2019-11-19 18:36:23 +02:00
Aliaksandr Valialkin
0094bc4fc9 app/vmselect/prometheus: properly adjust too big `time` arg on /api/v1/query
Too big `time` must be adjusted to `now()-queryOffset`.
2019-11-19 00:42:00 +02:00
Aliaksandr Valialkin
b6f22a62cb lib/storage: increase the number of created time series in BenchmarkHeadPostingForMatchers in order to be on par with Prometheus
The previous commit was accidentally creating a 10x smaller number of time series than Prometheus,
and this led to invalid benchmark results.

The updated benchmark results:

benchmark                                                          old ns/op      new ns/op     delta
BenchmarkHeadPostingForMatchers/n="1"                              272756688      6194893       -97.73%
BenchmarkHeadPostingForMatchers/n="1",j="foo"                      138132923      10781372      -92.19%
BenchmarkHeadPostingForMatchers/j="foo",n="1"                      134723762      10632834      -92.11%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"                     195823953      10679975      -94.55%
BenchmarkHeadPostingForMatchers/i=~".*"                            7962582919     100118510     -98.74%
BenchmarkHeadPostingForMatchers/i=~".+"                            7589543864     154955671     -97.96%
BenchmarkHeadPostingForMatchers/i=~""                              1142371741     258003769     -77.42%
BenchmarkHeadPostingForMatchers/i!=""                              9964150263     159783895     -98.40%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"              216995884      10937895      -94.96%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"       202541348      10990027      -94.57%
BenchmarkHeadPostingForMatchers/n="1",i!=""                        486285711      87004349      -82.11%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"                350776931      53342793      -84.79%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"              380888565      54256156      -85.76%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"             89500296       21823279      -75.62%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"       379529654      46671359      -87.70%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"     424563825      53915842      -87.30%

VictoriaMetrics uses 1GB of RAM during the benchmark (vs 3.5GB of RAM for Prometheus)
2019-11-18 19:50:58 +02:00
Aliaksandr Valialkin
8a0dfc6220 lib/storage: add BenchmarkHeadPostingForMatchers similar to the benchmark from Prometheus
See the corresponding benchmark in Prometheus - 23c0299d85/tsdb/head_bench_test.go (L52)

The benchmark allows performing apples-to-apples comparison of time series search
in Prometheus and VictoriaMetrics. The following article - https://www.robustperception.io/evaluating-performance-and-correctness -
contains incorrect numbers for VictoriaMetrics, since this benchmark didn't exist yet. Fix this.

Benchmarks can be repeated with the following commands from Prometheus and VictoriaMetrics source code roots:

- Prometheus: GOMAXPROCS=1 go test ./tsdb/ -run=111 -bench=BenchmarkHeadPostingForMatchers
- VictoriaMetrics: GOMAXPROCS=1 go test ./lib/storage/ -run=111 -bench=BenchmarkHeadPostingForMatchers

Benchmark results:
benchmark                                                          old ns/op      new ns/op     delta
BenchmarkHeadPostingForMatchers/n="1"                              272756688      364977        -99.87%
BenchmarkHeadPostingForMatchers/n="1",j="foo"                      138132923      1181636       -99.14%
BenchmarkHeadPostingForMatchers/j="foo",n="1"                      134723762      1141578       -99.15%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"                     195823953      1148056       -99.41%
BenchmarkHeadPostingForMatchers/i=~".*"                            7962582919     8716755       -99.89%
BenchmarkHeadPostingForMatchers/i=~".+"                            7589543864     12096587      -99.84%
BenchmarkHeadPostingForMatchers/i=~""                              1142371741     16164560      -98.59%
BenchmarkHeadPostingForMatchers/i!=""                              9964150263     12230021      -99.88%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"              216995884      1173476       -99.46%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"       202541348      1299743       -99.36%
BenchmarkHeadPostingForMatchers/n="1",i!=""                        486285711      11555193      -97.62%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"                350776931      5607506       -98.40%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"              380888565      6380335       -98.32%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"             89500296       2078970       -97.68%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"       379529654      6561368       -98.27%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"     424563825      6757132       -98.41%

The first column (old) is for Prometheus, the second column (new) is for VictoriaMetrics.

As you can see, VictoriaMetrics outperforms Prometheus by more than 100x in almost all the test cases of this benchmark.

Prometheus was using 3.5GB of RAM during the benchmark, while VictoriaMetrics was using 400MB of RAM.
2019-11-18 18:45:06 +02:00
Aliaksandr Valialkin
2ab4cea5e5 lib/storage: always start using per-day inverted index on the next day after its creation
The index for the current day could miss entries for time series that had already stopped
before the per-day index was enabled.

This fixes the issue when queries return empty results during the first hour after
upgrading to v1.29.*
2019-11-16 12:11:25 +02:00
Aliaksandr Valialkin
c050abbbad deployment/docker: update Prometheus version from v2.12.0 to v2.14.0 2019-11-16 00:13:15 +02:00
Aliaksandr Valialkin
3f1637fae8 app/vmselect/promql: properly calculate integrate(q[d]) 2019-11-13 21:10:41 +02:00
Aliaksandr Valialkin
c56b9ed03b app/victoria-metrics: add build rules for GOARCH=ppc64le
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/235
2019-11-13 20:24:33 +02:00
Aliaksandr Valialkin
3fd32e331a app/vmselect/promql: use universal approach for determining maxByteSliceLen on 32-bit and 64-bit archs
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/235
2019-11-13 20:24:26 +02:00
Aliaksandr Valialkin
119dfd01bb lib/storage: add vm_cache_size_bytes{type="storage/hour_metric_ids"} metric 2019-11-13 20:24:21 +02:00
Aliaksandr Valialkin
86a1cd700b lib/storage: remove inmemory index for recent hour, since it uses too much memory
Production workload shows that the index requires ~4Kb of RAM per active time series.
This is too much for a high number of active time series, so let's delete this index.

Now the queries should fall back to the index for the current day instead of the index
for the recent hour. The query performance for the current day index should be good enough
given the 100M rows/sec scan speed per CPU core.
2019-11-13 17:58:07 +02:00
Aliaksandr Valialkin
33895d4a0f lib/storage: add missing increment for recentHourInvertedIndexSearchCalls 2019-11-13 15:13:51 +02:00
Aliaksandr Valialkin
c57eb0ff83 lib/storage: add -disableRecentHourIndex flag for disabling inmemory index for recent hour
This may be useful for saving RAM with a high number of time series, aka high cardinality
2019-11-13 15:02:51 +02:00
Aliaksandr Valialkin
e14ab14e54 lib/storage: verify marshaling for iidx.pendingMetricIDs in TestInmemoryInvertedIndexMarshalUnmarshal 2019-11-13 13:35:30 +02:00
Aliaksandr Valialkin
ca259864e2 lib/storage: return back inmemory inverted index for recent hour
Issues fixed:
- Slow startup times. Now the index is loaded from cache during start.
- High memory usage related to superfluous index copies every 10 seconds.
2019-11-13 13:11:04 +02:00
Aliaksandr Valialkin
01bb3c06c7 lib/storage: remove inmemory inverted index for recent hours
Production load with >10M active time series showed it could
slow down VictoriaMetrics startup times and could eat
all the memory leading to OOM.

Remove inmemory inverted index for recent hours until thorough
testing on production data shows it works OK.
2019-11-13 10:45:53 +02:00
Aliaksandr Valialkin
66c4961ff8 README.md: mention that VictoriaMetrics executable is small 2019-11-12 16:58:15 +02:00
Aliaksandr Valialkin
3e16248ed6 README.md: small updates 2019-11-12 16:54:18 +02:00
Aliaksandr Valialkin
5e6c1cd986 README.md: typo fix 2019-11-12 16:48:40 +02:00
Aliaksandr Valialkin
6c2303764e Revert "lib/fs: do not postpone directory removal on NFS error"
This reverts commit 4c02e496f7.

Reason for revert: the commit breaks on NFS - see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/234
2019-11-12 16:18:09 +02:00
Mike Poindexter
f3ad330635 Add test for invalid caching of tsids (#232)
* Add test for invalid caching of tsids

* Clean up error handling
2019-11-12 15:09:33 +02:00
Aliaksandr Valialkin
6c362d82cb README.md: mention that backups are made to S3 or GCS 2019-11-12 14:32:37 +02:00
Aliaksandr Valialkin
661dd190bb Refer to https://medium.com/@valyala/speeding-up-backups-for-big-time-series-databases-533c1a927883 from multiple places in README.md 2019-11-12 13:02:39 +02:00
Aliaksandr Valialkin
630ba810f1 deployment/docker: upgrade Go from v1.13.3 to v1.13.4 2019-11-12 03:49:19 +02:00
Oleg Kovalov
b4f44befa3 fix misspelled words (#229) 2019-11-12 00:16:42 +02:00
Roman Khavronenko
5fc8fb1323 add churn rate panel (#230) 2019-11-12 00:14:53 +02:00
Aliaksandr Valialkin
8e8f98f712 lib/storage: add tests for dateMetricIDCache 2019-11-11 13:21:57 +02:00
Aliaksandr Valialkin
c342f5e37e lib/storage: eliminate data race when updating lastSyncTime in dateMetricIDCache.Has 2019-11-10 22:04:01 +02:00
Aliaksandr Valialkin
56d7cc8a0d app/victoria-metrics: remove deprecated fs.MustStopDirRemover from main_test.go 2019-11-10 13:37:13 +02:00
Aliaksandr Valialkin
4c02e496f7 lib/fs: do not postpone directory removal on NFS error
Continue trying to remove NFS directory on temporary errors for up to a minute.

The previous async removal process breaks in the following case during VictoriaMetrics start

- VictoriaMetrics opens index, finds incomplete merge transactions and starts replaying them.
- The transaction instructs removing old directories for parts, which were already merged into bigger part.
- VictoriaMetrics removes these directories, but their removal is delayed due to NFS errors.
- VictoriaMetrics scans the partition directory after all the incomplete merge transactions are finished
  and finds directories which should have been removed, but still weren't removed due to NFS errors.
- VictoriaMetrics panics when it finds unexpected empty directory.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/162
2019-11-10 13:24:51 +02:00
Aliaksandr Valialkin
3956003dd0 lib/storage: reorganize the code in getStartDateForPerDayInvertedIndex according to golangci-lint 2019-11-10 00:38:59 +02:00
Aliaksandr Valialkin
5c3fa59181 app/vmrestore: the upcoming release would be 1.29.0 2019-11-10 00:20:41 +02:00
Aliaksandr Valialkin
ee7765b10d lib/storage: implement per-day inverted index 2019-11-10 00:02:46 +02:00
Aliaksandr Valialkin
5810ba57c2 lib/storage: use specialized cache for (date, metricID) entries
This improves ingestion performance.
2019-11-09 23:06:11 +02:00
Aliaksandr Valialkin
e573ef2126 lib/storage: remove unused code from getMetricIDsForTimeRange: it is expected that time range is always non-zero 2019-11-09 19:03:34 +02:00
Aliaksandr Valialkin
823fa085ef lib/storage: properly set time range when deleting time series 2019-11-09 18:49:49 +02:00
Aliaksandr Valialkin
695c1dc5eb lib/storage: obtain all the time series ids from (tag->metricIDs) rows instead of (metricID->TSID) rows, since this is much faster 2019-11-09 18:04:33 +02:00
Aliaksandr Valialkin
cdbe848102 lib/storage: small code prettifying 2019-11-09 14:19:52 +02:00
Aliaksandr Valialkin
5c25070556 lib/uint64set: remove superfluous check for item existence before deleting it in Set.Subtract 2019-11-09 14:19:47 +02:00
Aliaksandr Valialkin
bb08bab263 lib/storage: inmemoryInvertedIndex prettifying 2019-11-09 14:19:41 +02:00
Aliaksandr Valialkin
6ad7fe8eeb lib/storage: export vm_new_timeseries_created_total metric for determining time series churn rate 2019-11-08 21:21:07 +02:00
Aliaksandr Valialkin
9ea549ed24 lib/storage: sync with cluster changes 2019-11-08 21:21:07 +02:00
Aliaksandr Valialkin
63b05c0b9f app/vmselect/promql: adjust memory limits calculations for incremental aggregate functions
Incremental aggregate functions don't keep all the selected time series in memory -
they keep only up to GOMAXPROCS time series for incremental aggregations.

Take into account that the number of time series in RAM can be higher if they are split
into many groups with `by (...)` or `without (...)` modifiers.

This should reduce the number of `not enough memory for processing ... data points` false
positive errors.
2019-11-08 21:21:07 +02:00
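A Go sketch of the incremental-aggregation idea, using sum as the aggregate (a simplification; the real memory-limit logic is more involved): each of up to GOMAXPROCS workers folds the series it sees into a per-worker partial result keyed by group, so peak memory scales with workers × groups rather than with the number of selected series.

    package main

    import (
    	"fmt"
    	"runtime"
    	"sync"
    )

    func main() {
    	// series is a toy stand-in for a selected time series reduced
    	// to its group key and aggregated value.
    	type series struct {
    		group string
    		value float64
    	}
    	input := make(chan series)
    	go func() {
    		for i := 0; i < 1000; i++ {
    			input <- series{group: fmt.Sprintf("g%d", i%3), value: 1}
    		}
    		close(input)
    	}()

    	// Each worker folds incoming series into its own partial sums,
    	// so only workers*groups entries live in RAM at once.
    	workers := runtime.GOMAXPROCS(0)
    	partials := make([]map[string]float64, workers)
    	var wg sync.WaitGroup
    	for w := 0; w < workers; w++ {
    		w := w
    		partials[w] = map[string]float64{}
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			for s := range input {
    				partials[w][s.group] += s.value
    			}
    		}()
    	}
    	wg.Wait()

    	// Merge the per-worker partials into the final result.
    	total := map[string]float64{}
    	for _, p := range partials {
    		for g, v := range p {
    			total[g] += v
    		}
    	}
    	fmt.Println(total) // map[g0:334 g1:333 g2:333]
    }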
Aliaksandr Valialkin
d888b21657 lib/storage: add inmemory inverted index for the last hour
It should improve performance for `last N hours` dashboards with update intervals smaller than 1 hour.
2019-11-08 21:21:07 +02:00
Aliaksandr Valialkin
1e46961d68 app/{vmbackup,vmrestore}: add vmbackup and vmrestore tools for creating backups on s3 or gcs from instant snapshots
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/203
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/38
2019-11-08 21:21:07 +02:00
Roman Khavronenko
72756ab8c7 #224: add slow_queries, on-going merges and merge speed panels to dashboard (#226) 2019-11-08 21:20:38 +02:00
Aliaksandr Valialkin
543dc8d337 lib/storage: populate partition names from both small and big directories
Certain partition directories may be missing after restoring from backups
if they had no data. Re-create such directories on start.
2019-11-06 19:49:34 +02:00
Aliaksandr Valialkin
e472f0b23b lib/storage: substitute error message about unsorted items in the index block after metricIDs merge with counter
The origin of the error has been detected and documented in the code,
so it is enough to export a counter for such errors at `vm_index_blocks_with_metric_ids_incorrect_order_total`,
so it can be monitored and alerted on when error rates are high.

Export also the counter for processed index blocks with metricIDs - `vm_index_blocks_with_metric_ids_processed_total`,
so its' rate could be compared to `rate(vm_index_blocks_with_metric_ids_incorrect_order_total)`.
2019-11-06 14:28:11 +02:00
Aliaksandr Valialkin
c51ca04a43 lib/storage: take into account the requested time range when caching TSIDs for the given tag filters 2019-11-06 14:28:11 +02:00
Aliaksandr Valialkin
e37f06dc52 lib/storage: dump incorrectly sorted items on a single line; this should simplify error reporting 2019-11-05 18:44:22 +02:00
Aliaksandr Valialkin
5c2099ecfe lib/storage: return back finalPartsToMerge from 2 to 3 in order to prevent from excessive merges in old partitions 2019-11-05 17:27:48 +02:00
Aliaksandr Valialkin
885ba17905 lib/storage: separate the max inverted index scan loops per metric into fast and slow loops
Slow loops could require seeks and expensive regexp matching, while fast loops just scan
all the metricIDs for the given `tag=value` prefix. So these operations must have separate
max loops multipliers.
2019-11-05 17:27:48 +02:00
Aliaksandr Valialkin
b9a06e8e74 lib/storage: skip repeated useless work when intersection of metricIDs with the given filter is too expensive
This should improve performance for query filters over a big number of time series.
2019-11-05 14:19:13 +02:00
Aliaksandr Valialkin
30c8301b11 lib/storage: reduce the maximum inverted index scans before giving up to label filters matching by metric name
The new value reduces the amount of wasted work during index scans over a big number of time series.
2019-11-05 14:19:06 +02:00
Aliaksandr Valialkin
e53f9e553d lib/storage: try potentially faster tag filters at first, then apply slower tag filters
The fastest tag filters are non-negative non-regexp, since they are the most specific.
The slowest tag filters are negative regexp, since they require scanning
all the entries for the given label.
2019-11-05 14:19:01 +02:00
Aliaksandr Valialkin
d6ade02fd3 Makefile: add pprof-cpu rule for inspecting CPU profiles with PPROF_FILE=/path/to/cpu.pprof make pprof-cpu 2019-11-04 12:44:09 +02:00
Aliaksandr Valialkin
3c90d77858 lib/storage: pass pointer to MetricName in Fatalf, so it is properly detected as an interface with String() method
This fixes lint errors
2019-11-04 01:07:19 +02:00
Artem Navoiev
478767d0ed add unittests for bytesutil and storage (#221) 2019-11-04 00:54:46 +02:00
Aliaksandr Valialkin
02e0b19a62 lib/storage: tune the returned value from adjustMaxMetricsAdaptive 2019-11-04 00:44:37 +02:00
Aliaksandr Valialkin
6be4456d88 lib/{storage,uint64set}: add Set.Union() function and use it 2019-11-04 00:44:37 +02:00
Aliaksandr Valialkin
9becc26f4b lib/storage: remove interface conversion in hot path during block merging
This should improve merge speed a bit for parts with a big number of small blocks.
2019-11-03 12:33:34 +02:00
Aliaksandr Valialkin
c62399eb3e lib/{storage,mergeset}: create missing partition directories after restoring from backups
Backup tools could skip empty directories. So re-create such directories on the first run.
2019-11-02 02:27:11 +02:00
Aliaksandr Valialkin
55d728c849 lib/{decimal,encoding}: optimize float64<->decimal conversion for arrays with zeros or ones
Time series with only zeros or ones frequently occur in monitoring, so it is worth optimizing their handling.
2019-11-01 16:48:12 +02:00
Aliaksandr Valialkin
808fc0971f lib/{encoding,decimal}: add benchmarks for blocks containing zeros or ones
Time series with such values are quite common in monitoring space,
so it would be great to have benchmarks for them.
2019-11-01 16:48:12 +02:00
Aliaksandr Valialkin
370cfbb365 lib/uint64set: return an empty set instead of a nil set from Set.Clone, since the caller may add data to the cloned set
This fixes the following panic in v1.28.1:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x783a7e]

goroutine 1155 [running]:
github.com/VictoriaMetrics/VictoriaMetrics/lib/uint64set.(*Set).Add(0x0, 0x15b3bfb41e8b71ec)
  github.com/VictoriaMetrics/VictoriaMetrics@/lib/uint64set/uint64set.go:57 +0x2e
github.com/VictoriaMetrics/VictoriaMetrics/lib/storage.(*indexSearch).getMetricIDsForRecentHours(0xc5bdc0dd40, 0x16e273f6b50, 0x16e2745d3f0, 0x5b8d95, 0x10, 0x4a2f51, 0xaa01000000000000)
  github.com/VictoriaMetrics/VictoriaMetrics@/lib/storage/index_db.go:1951 +0x260
github.com/VictoriaMetrics/VictoriaMetrics/lib/storage.(*indexSearch).getMetricIDsForTimeRange(0xc5bdc0dd40, 0x16e273f6b50, 0x16e2745d3f0, 0x5b8d95, 0x10, 0xb296c0, 0xc00009cd80, 0x9bc640)
2019-11-01 16:12:44 +02:00
Aliaksandr Valialkin
2f58f37f07 app/vmselect/promql: add lag(q[d]) function, which returns the lag between the current timestamp and the timstamp for the last data point in q 2019-11-01 12:21:33 +02:00
Aliaksandr Valialkin
d18ea0c95b app/vmstorage: add -bigMergeConcurrency and -smallMergeConcurrency flags for tuning the maximum number of CPU cores used during merges 2019-10-31 16:19:13 +02:00
Aliaksandr Valialkin
e0b292c6de lib/storage: small cleanup in Storage.add 2019-10-31 14:30:34 +02:00
Aliaksandr Valialkin
86f6be40db README.md: update information about vm_rows{type="indexdb"} metric
The previous information became outdated after v1.28.0, since now each row in the inverted index
can refer to multiple time series.
2019-10-31 13:30:29 +02:00
Aliaksandr Valialkin
e76e21e4c7 lib/decimal: speed up FromFloat for common case with integers 2019-10-31 13:24:59 +02:00
Aliaksandr Valialkin
cfa5e279c2 lib/decimal: increase float64->decimal conversion precision a bit
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/213
2019-10-30 02:04:56 +02:00
Aliaksandr Valialkin
fa7c3ab93a README.md: fix delimiter between {measurement} and {field_name} in the Influx line protocol example 2019-10-30 02:04:56 +02:00
Aliaksandr Valialkin
26d570bb3a lib/storage: get parts to merge after applying the limit on the number of concurrent merges
This should reduce write amplification under high ingestion rate.
2019-10-30 02:04:56 +02:00
Roman Khavronenko
62ed508546 Bump version requirements in description 2019-10-29 22:29:48 +00:00
Aliaksandr Valialkin
2e2eff90d5 lib/{mergeset,storage}: limit the maximum number of concurrent merges; leave smaller number of parts during final merge 2019-10-29 12:45:28 +02:00
Aliaksandr Valialkin
855e5c8963 vendor: update github.com/VictoriaMetrics/fastcache from v1.5.1 to v1.5.2 2019-10-29 11:31:29 +02:00
Aliaksandr Valialkin
04e48ef064 lib/fs: typo fix in comment to WriteFileAtomically 2019-10-29 11:31:26 +02:00
Roman Khavronenko
971206b514 update single-version dashboard with panels: (#219)
* concurrent inserts
* rows ignored
2019-10-28 13:54:10 +02:00
Aliaksandr Valialkin
d063bfaf83 vendor: make vendor-update 2019-10-28 13:39:05 +02:00
Roman Khavronenko
6ab48838bf #215: update klauspost/compress lib (#217)
* #215: update klauspost/compress lib

* #215: bump klauspost/compress lib to 1.9.1
2019-10-28 13:36:35 +02:00
Aliaksandr Valialkin
a42b5db39f lib/decimal: increase float->decimal conversion precision for big numbers
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/213
2019-10-28 13:23:44 +02:00
Aliaksandr Valialkin
b0295dbf2e app/vmselect: add -search.latencyOffset flag for tuning the time after data collection when data points become visible in query results
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/218
2019-10-28 12:31:07 +02:00
Petr Mikusek
3cea200309 Fix typo s/telergam/telegram/ in README.md 2019-10-23 19:30:36 +03:00
Aliaksandr Valialkin
32600ba4fc deployment/docker: upgrade Go builder from go1.13.1 to go1.13.3 2019-10-20 23:50:05 +03:00
hanzai
b3c946e35a warns during rows addition (#214) 2019-10-20 23:41:07 +03:00
Aliaksandr Valialkin
e83fe938c8 all: make fmt 2019-10-17 20:04:34 +03:00
Aliaksandr Valialkin
f708aa7003 Makefile: disable structcheck in golangci-lint, since it gives false positive on embedded structs 2019-10-17 19:59:10 +03:00
Aliaksandr Valialkin
97ce4e03a5 all: add support for GOARCH=386 and fix all the issues related to 32-bit architectures such as GOARCH=arm
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/212
2019-10-17 18:23:23 +03:00
Aliaksandr Valialkin
a398343bb6 vendor: update github.com/valyala/quicktemplate from v1.2.0 to v1.3.1 2019-10-17 18:23:19 +03:00
Aliaksandr Valialkin
6ebf537153 lib/memory: properly handle int overflow in sysTotalMemory
This should fix builds on 32-bit architectures such as arm.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/212
2019-10-17 00:50:48 +03:00
Aliaksandr Valialkin
f752479cb8 app/victoria-metrics/test: add missing docs to public funcs PopulateTimeTplString and PopulateTimeTpl 2019-10-17 00:50:46 +03:00
Aliaksandr Valialkin
61e956e175 app/victoria-metrics: add a test for max_lookback=<duration> query arg
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/209
2019-10-15 21:31:48 +03:00
Aliaksandr Valialkin
c66a691593 app/vmselect/prometheus: add -search.maxLookback command-line flag for overriding dynamic calculations for max lookback interval
This flag is similar to `-search.lookback-delta` if set. The max lookback interval is determined dynamically
from the interval between data points for each input time series if the flag isn't set.

The interval can be overridden on a per-query basis by passing `max_lookback=<duration>` query arg to `/api/v1/query` and `/api/v1/query_range`.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/209
2019-10-15 21:31:48 +03:00
Aliaksandr Valialkin
cc21b31502 app/victoria-metrics/test: add a test for PopulateTimeTplString 2019-10-15 21:31:48 +03:00
Aliaksandr Valialkin
195cefd81a lib/prompb: removed outdated README.md 2019-10-14 22:12:57 +03:00
Aliaksandr Valialkin
c1581c3810 vendor: make vendor-update 2019-10-13 23:17:47 +03:00
Aliaksandr Valialkin
16cae15c45 README.md: add integrations section 2019-10-11 19:14:28 +03:00
Aliaksandr Valialkin
f6334bffa1 lib/storage: harden the check that the original items are sorted after mergeTagToMetricIDsRows fails to preserve sort order 2019-10-09 12:13:17 +03:00
Aliaksandr Valialkin
2abd5154e0 lib/storage: typo fix in comment to maxRowsPerSmallPart. 2019-10-08 18:51:20 +03:00
Aliaksandr Valialkin
c1cf7d9f93 lib/storage: add tests for mergeTagToMetricIDsRows and return the original items if the function breaks items' ordering.
This should protect from the data corruption issues revealed in previous releases up to v1.28.0-beta5.
2019-10-08 16:27:35 +03:00
Aliaksandr Valialkin
956fdd89d3 app/vmselect/promql: take into account the previous point when calculating max_over_time and min_over_time
This lines up with the `first_over_time` function used in `rollup_candlestick`, so `rollup=low` always returns
the minimum value.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/204
2019-10-08 12:30:05 +03:00
Alexander Danilov
1bc6377863 Improve documentation a little bit 2019-10-07 22:18:40 +03:00
Artem Navoiev
1e2c511747 Add regression test for query api
Part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/187
cover:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/153
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/150
2019-10-07 22:18:04 +03:00
Aliaksandr Valialkin
0eeffb910f vendor: make vendor-update 2019-10-06 15:47:23 +03:00
Aliaksandr Valialkin
4ba86f501a vendor: update github.com/VictoriaMetrics/metrics from v1.7.1 to v1.7.2 2019-10-06 11:20:45 +03:00
Aliaksandr Valialkin
fdc5cfd838 lib/mergeset: reduce the maximum number of cached blocks, since there are reports on OOMs due to too big caches
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/189
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/195
2019-09-30 12:25:40 +03:00
Artem Navoiev
a116f5e7c1 Add regression test for query api (#194)
Part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/187
cover:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/184
2019-09-30 11:25:54 +03:00
Aliaksandr Valialkin
4e9e1ca0f7 app/vmselect/netstorage: hint the OS that tmpBlocksFile is read almost sequentially
This became the case after b7ee2e7af2.
2019-09-30 00:11:14 +03:00
Aliaksandr Valialkin
c1d3705be0 app/vmselect/netstorage: marshal block outside tmpBlocksFile.WriteBlock
This allows re-using the destination buffer for marshaling in the outer loop.
2019-09-28 21:07:13 +03:00
Aliaksandr Valialkin
b7ee2e7af2 app/vmselect/netstorage: reduce the number of disk seeks when the query processes big number of time series 2019-09-28 21:07:09 +03:00
Aliaksandr Valialkin
67d44b0845 app/vmselect/promql: do not generate timestamps for NaN values in timestamp function according to Prometheus logic 2019-09-27 18:54:43 +03:00
Artem Navoiev
1e6ae9eff4 Add regression test for duplicated labels and series
Part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/187
cover:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/155
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/172
2019-09-27 16:52:16 +03:00
Aliaksandr Valialkin
fa81f82714 deployment/docker: switch Go builder image from v1.13.0 to v1.13.1 2019-09-26 17:09:40 +03:00
Aliaksandr Valialkin
0fa6df94a2 lib/storage: optimize TSID comparison 2019-09-26 14:16:02 +03:00
Aliaksandr Valialkin
c39355921e lib/storage: verify whether items are sorted in the end of call to mergeTagToMetricIDsRows
This should prevent inverted index corruption if a bug in mergeTagToMetricIDsRows is discovered.
2019-09-26 13:13:41 +03:00
Artem Navoiev
cf4786f34a add test for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/161 2019-09-26 12:45:19 +03:00
Aliaksandr Valialkin
3e67862676 README.md: typo fix 2019-09-26 11:03:14 +03:00
Aliaksandr Valialkin
0db9fcedd5 lib/storage: properly match labels against regexp with (?i) flag
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/161
2019-09-26 11:03:10 +03:00
Aliaksandr Valialkin
391530bb74 README.md: mention recommended ext4 options for mkfs.ext4 when creating multi-TB partition 2019-09-25 23:52:43 +03:00
Aliaksandr Valialkin
60c5b368bc README.md: tiny updates 2019-09-25 23:29:55 +03:00
Aliaksandr Valialkin
26dc21cf64 app/vmselect/promql: add increases_over_time and decreases_over_time functions
`increases_over_time(q[d])` returns the number of `q` increases during the given duration `d`.
`decreases_over_time(q[d])` returns the number of `q` decreases during the given duration `d`.
2019-09-25 20:38:44 +03:00
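A Go sketch of the counting semantics over the raw samples in a window (illustration only; the real rollup implementation streams samples):

    package main

    import "fmt"

    // countChanges counts the increases and decreases between consecutive
    // raw samples in a window, mirroring the documented semantics of
    // increases_over_time(q[d]) and decreases_over_time(q[d]).
    func countChanges(values []float64) (increases, decreases int) {
    	for i := 1; i < len(values); i++ {
    		switch {
    		case values[i] > values[i-1]:
    			increases++
    		case values[i] < values[i-1]:
    			decreases++
    		}
    	}
    	return increases, decreases
    }

    func main() {
    	up, down := countChanges([]float64{1, 2, 2, 1, 3})
    	fmt.Println(up, down) // 2 1
    }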
Aliaksandr Valialkin
2444433d83 lib/storage: add missing break in removeDuplicateMetricIDs 2019-09-25 18:23:43 +03:00
Aliaksandr Valialkin
ea4c828bae lib/storage: remove duplicate MetricIDs in tag->metricIDs items before writing them into inverted index 2019-09-25 17:55:13 +03:00
Aliaksandr Valialkin
aebc45ad26 lib/{mergeset,storage}: do not cache inverted index blocks containing tag->metricIDs items
This should reduce the amount of RAM used during queries with filters over a big number of time series.
2019-09-25 14:02:15 +03:00
Aliaksandr Valialkin
2cb811b42f lib/uint64set: optimize Set.AppendTo 2019-09-25 00:34:17 +03:00
Aliaksandr Valialkin
b986516fbe lib/storage: create and use lib/uint64set instead of map[uint64]struct{}
This should improve inverted index search performance for filters matching a big number of time series,
since `lib/uint64set.Set` is faster than `map[uint64]struct{}` for both `Add` and `Has` calls.
See the corresponding benchmarks in `lib/uint64set`.
2019-09-24 21:17:55 +03:00
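A toy Go sketch of the bucketing idea behind a specialized uint64 set (the real lib/uint64set uses much denser per-bucket structures; this only illustrates the Add/Has API and the high/low-bits split):

    package main

    import "fmt"

    // toySet is a drastically simplified take on the idea behind
    // lib/uint64set: bucket metricIDs by their high 32 bits, then store
    // only the low 32 bits inside each bucket. Illustration only.
    type toySet struct {
    	buckets map[uint32]map[uint32]struct{}
    }

    func newToySet() *toySet {
    	return &toySet{buckets: map[uint32]map[uint32]struct{}{}}
    }

    func (s *toySet) Add(x uint64) {
    	hi, lo := uint32(x>>32), uint32(x)
    	b := s.buckets[hi]
    	if b == nil {
    		b = map[uint32]struct{}{}
    		s.buckets[hi] = b
    	}
    	b[lo] = struct{}{}
    }

    func (s *toySet) Has(x uint64) bool {
    	hi, lo := uint32(x>>32), uint32(x)
    	_, ok := s.buckets[hi][lo] // indexing a nil bucket is safe in Go
    	return ok
    }

    func main() {
    	s := newToySet()
    	s.Add(42)
    	fmt.Println(s.Has(42), s.Has(43)) // true false
    }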
Aliaksandr Valialkin
ef2296e420 lib/storage: typo fix: return dstData instead of data from mergeTagToMetricIDsRows 2019-09-24 19:32:34 +03:00
Aliaksandr Valialkin
a6086cde78 lib/storage: limit the number of metricIDs in tag->metricIDs row
This reduces the overhead on index and metaindex in lib/mergeset
2019-09-24 00:49:51 +03:00
Aliaksandr Valialkin
c9063ece66 lib/storage: share tsids across all the partSearch instances
This should reduce memory usage when a big number of time series match the given query.
2019-09-23 22:35:15 +03:00
Aliaksandr Valialkin
4e26ad869b lib/{storage,mergeset}: verify PrepareBlock callback results
Do not touch the first and the last item passed to PrepareBlock
in order to preserve sort order of mergeset blocks.
2019-09-23 20:43:13 +03:00
Aliaksandr Valialkin
0772191975 lib/mergeset: detect whether we are in test by executable suffix 2019-09-22 23:12:15 +03:00
Aliaksandr Valialkin
48999e5396 lib/workingsetcache: remove data race when resetting c.misses 2019-09-22 19:36:49 +03:00
Aliaksandr Valialkin
0adebae1f8 lib/storage: generate the first tag->metricIDs item in a mergeset block with a single metricID
The first item from each mergeset block goes into index (lib/mergeset.blockHeader),
so it must be short in order to reduce index size.
2019-09-22 19:21:33 +03:00
Aliaksandr Valialkin
267efde5ae README.md: update troubleshooting and tuning sections according to recent questions from our users 2019-09-22 19:12:24 +03:00
Aliaksandr Valialkin
0686ac52c3 lib/{storage,mergeset}: merge tag->metricID rows into tag->metricIDs rows for common tag values
This should improve lookup performance if the same `label=value` pair exists
in a big number of time series.
This should also reduce memory usage for mergeset data cache, since `tag->metricIDs` rows
occupy less space than the original `tag->metricID` rows.
2019-09-20 22:06:41 +03:00
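The essence of the packing, sketched in Go: consecutive `tag->metricID` rows sorted by tag collapse into one `tag->metricIDs` row per tag value (a simplification; real rows are marshaled byte items inside lib/mergeset blocks):

    package main

    import "fmt"

    // tagRow is a simplified (tag, metricID) pair; rows are assumed to be
    // already sorted by tag, as they are inside a mergeset block.
    type tagRow struct {
    	tag      string
    	metricID uint64
    }

    // mergeTagRows collapses consecutive rows sharing the same tag into a
    // single tag -> metricIDs row, shrinking the index and speeding up
    // lookups for label=value pairs shared by many time series.
    func mergeTagRows(rows []tagRow) (tags []string, ids [][]uint64) {
    	for _, r := range rows {
    		if n := len(tags); n > 0 && tags[n-1] == r.tag {
    			ids[n-1] = append(ids[n-1], r.metricID)
    			continue
    		}
    		tags = append(tags, r.tag)
    		ids = append(ids, []uint64{r.metricID})
    	}
    	return tags, ids
    }

    func main() {
    	rows := []tagRow{{"job=a", 1}, {"job=a", 2}, {"job=b", 3}}
    	tags, ids := mergeTagRows(rows)
    	fmt.Println(tags, ids) // [job=a job=b] [[1 2] [3]]
    }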
Aliaksandr Valialkin
68722c3c74 lib/encoding: optimize UnmarshalUint* and UnmarshalInt* 2019-09-20 13:08:16 +03:00
Aliaksandr Valialkin
a544f49c2b lib/storage: optimize selecting all the metricIDs by scanning MetricID->TSID entries instead of tag->MetricID entries
The number of MetricID->TSID entries is smaller than the number of tag->MetricID entries
and MetricID->TSID entries are usually shorter than tag->MetricID entries.
This should improve performance when selecting all the metricIDs.
2019-09-20 11:54:10 +03:00
Aliaksandr Valialkin
d32f88c378 app/vminsert/opentsdbhttp: remove FATAL prefix from logger.Fatalf errors for the sake of consistency with other logger.Fatalf calls 2019-09-19 22:15:59 +03:00
Aliaksandr Valialkin
00cfb2d2b9 lib/mergeset: rename misleading mergeSmallParts to mergeExistingParts 2019-09-19 21:48:20 +03:00
Aliaksandr Valialkin
37dc223e25 lib/mergeset: use sort.IsSorted instead of sort.SliceIsSorted in inmemoryBlock.isSorted in order to reduce memory allocations 2019-09-19 20:13:08 +03:00
Aliaksandr Valialkin
a84fe76677 lib/storage: use sort.Sort instead of sort.slice in getSortedMetricIDs 2019-09-19 20:07:22 +03:00
Aliaksandr Valialkin
3a697a935a lib/storage: skip duplicate call to intersectMetricIDsWithTagFilter on zero successful intersects 2019-09-19 17:49:56 +03:00
Aliaksandr Valialkin
51a21c7d4b lib/mergeset: fill partHeader.firstItem on first block flush 2019-09-19 17:48:09 +03:00
Aliaksandr Valialkin
3d83f5d334 lib/storage: mark tag filter returning errFallbackToMetricNameMatch as useless
This will save CPU on subsequent calls for this filter
2019-09-18 19:10:32 +03:00
Aliaksandr Valialkin
6f3b2fd600 deployment/docker/docker-compose.yml: update Prometheus and Grafana image tags
Prometheus: from v2.10.0 to v2.12.0
Grafana: from v6.2.1 to v6.3.5
2019-09-18 18:29:09 +03:00
Aliaksandr Valialkin
8d35718dc6 lib/storage: properly construct keys for uselessTagFiltersCache and register useless negative tag filters there 2019-09-17 23:20:27 +03:00
Aliaksandr Valialkin
33975513d0 vendor: update github.com/valyala/gozstd from v1.6.1 to v1.6.2 2019-09-16 21:50:49 +03:00
Aliaksandr Valialkin
63f2b539df vendor: make vendor-update 2019-09-13 22:48:56 +03:00
Aliaksandr Valialkin
9428ec9c9f deployment/docker: remove file system paths from the compiled binary 2019-09-13 22:45:59 +03:00
Aliaksandr Valialkin
0c8057924f lib/mergeset: properly check for sorted block headers
Fix a typo for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/181
2019-09-13 21:59:29 +03:00
Aliaksandr Valialkin
d4218d27e6 app/vmselect/promql: properly handle subqueries like aggr_func(rollup_func(metric[window:step]))
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/184
2019-09-13 21:41:04 +03:00
hanzai
e2274714b1 lib/workingsetcache: smoothly switch from mode=split to mode=whole and successfully load the cache file 2019-09-13 19:13:01 +03:00
Aliaksandr Valialkin
4d636c244d app/vmselect/promql: binary operation fixes according to Prometheus behaviour
The following issues were fixed:
- VictoriaMetrics could leave superfluous labels when using `on` or `ignoring` modifiers
- VictoriaMetrics could return a `duplicate timeseries` error when using `group_left` or `group_right` with a non-empty label list
2019-09-13 17:42:52 +03:00
Aliaksandr Valialkin
bad53e4207 lib/mergeset: dynamically calculate the maximum number of items per part, which can be cached in OS page cache 2019-09-11 14:53:45 +03:00
Artem Navoiev
3f581a9860 [ci] github actions - run pipeline on pull request. Fix running of tests in external PRs from forks 2019-09-11 09:30:11 +03:00
sundy-li
398e00aa54 README.md: fix ExtendedPromQL link url 2019-09-10 14:56:19 +03:00
Artem Navoiev
4fd741f40d [tests] check timestamp in tests (#177) 2019-09-08 19:48:38 +03:00
Artem Navoiev
4a2cd85b92 [ci] bump version of go to 1.13 in github actions config 2019-09-08 14:02:23 +03:00
Aliaksandr Valialkin
6c46afb087 vendor: update github.com/klauspost/compress from v1.7.6 to v1.8.2 2019-09-06 00:47:31 +03:00
Aliaksandr Valialkin
7343e8b408 vendor: update golang.org/x/sys 2019-09-06 00:47:31 +03:00
Artem Navoiev
22e3fabefd Add OpenTSDB and Prometheus integration tests (#168)
* [WIP] open tsdb and prometheus integration tests

* app/victoria-metrics: fix race condition on parallel tests
2019-09-05 17:55:38 +03:00
Aliaksandr Valialkin
88f8670ede lib/fs: add MustStopDirRemover for waiting until pending directories are removed on graceful shutdown
This patch is mainly required for laggy NFS. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/162
2019-09-05 11:13:17 +03:00
Aliaksandr Valialkin
9eb5de334f lib/storage: typo fix 2019-09-04 19:58:01 +03:00
Aliaksandr Valialkin
6954e126fc app/vmselect/promql: ignore grouping by destination label in count_values, since such a grouping is performed automatically 2019-09-04 19:58:01 +03:00
Aliaksandr Valialkin
bce35b8dd9 README.md: mention that Prometheus doesn't drop data when VictoriaMetrics restarts 2019-09-04 18:40:39 +03:00
Aliaksandr Valialkin
16dd145586 lib/storage: remove duplicate tag keys on MetricName.Marshal call
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/172
2019-09-04 18:13:45 +03:00
Aliaksandr Valialkin
cd2c9e39da deployment/docker: switch Go builder from Go 1.12.9 to Go 1.13.0 2019-09-04 17:17:23 +03:00
Aliaksandr Valialkin
305e7bc981 app/vmselect/promql: do not return artificial points beyond the last point in time series 2019-09-04 16:35:34 +03:00
Aliaksandr Valialkin
9721d06c6a app/vmselect/prometheus: do not adjust start and end args in /api/v1/query_range if nocache=1 arg is set
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/171
2019-09-04 13:10:09 +03:00
Aliaksandr Valialkin
4862e93024 lib/fs: try harder with directory removal on NFS in the event of temporary lock
Do not give up after 11 attempts of directory removal on laggy NFS.

Add `vm_nfs_dir_remove_failed_attempts_total` metric for counting the number of failed attempts
on directory removal.

Log failed attempts on directory removal after long sleep times.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/162
2019-09-04 12:24:50 +03:00
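The retry pattern described above, sketched in Go (the metric name comes from the commit message; everything else, including the backoff schedule, is an assumption):

    package main

    import (
    	"log"
    	"os"
    	"time"
    )

    // removeDirWithRetries keeps retrying directory removal on laggy NFS
    // instead of giving up after a fixed number of attempts. The real code
    // counts failures in vm_nfs_dir_remove_failed_attempts_total; here we
    // only log, and only after the sleep grows long, to avoid log spam.
    func removeDirWithRetries(path string) {
    	sleep := 100 * time.Millisecond
    	attempts := 0
    	for {
    		if err := os.RemoveAll(path); err == nil {
    			return
    		}
    		attempts++
    		if sleep >= 10*time.Second {
    			log.Printf("cannot remove %q after %d attempts; retrying", path, attempts)
    		}
    		time.Sleep(sleep)
    		sleep *= 2
    		if sleep > time.Minute {
    			sleep = time.Minute // cap the backoff
    		}
    	}
    }

    func main() {
    	dir, err := os.MkdirTemp("", "example")
    	if err != nil {
    		log.Fatal(err)
    	}
    	removeDirWithRetries(dir)
    }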
Aliaksandr Valialkin
db4560ca31 app/vmselect/promql: reset timeseries name on group_left and group_right as Prometheus does 2019-09-03 20:42:54 +03:00
Aliaksandr Valialkin
1575a560f0 app/vmselect/netstorage: adaptively adjust the maximum inmemory file size for storing temporary blocks
The maximum inmemory file size now depends on `-memory.allowedPercent`.
This should improve performance and reduce the number of filesystem calls
on machines with big amounts of RAM when performing heavy queries
over a big number of samples and time series.
2019-09-03 13:32:09 +03:00
Aliaksandr Valialkin
e1d76ec1f3 lib/storage: invalidate tagFilters -> TSIDS cache when newly added index data becomes visible to search
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/163
2019-08-29 15:08:35 +03:00
Aliaksandr Valialkin
aeaa5de5fe lib/prompb: apply ba06b47c16
The following commands used:

gofmt -r '(uint64(x)&0x7F)<<shift -> uint64(x&0x7F)<<shift' -w ./lib/prompb/
gofmt -r '(int64(x)&0x7F)<<shift -> int64(x&0x7F)<<shift' -w ./lib/prompb/
2019-08-29 13:35:27 +03:00
Aliaksandr Valialkin
4c0a262a2e .github/workflows: verify builds on freebsd and darwin 2019-08-28 23:05:15 +03:00
Aliaksandr Valialkin
3685fc18d5 Makefile: extract app-local and app-local-pure build rules 2019-08-28 01:34:58 +03:00
Aliaksandr Valialkin
ede7ad3703 app/victoria-metrics: add missing victoria-metrics prefix to --version output when building with make victoria-metrics 2019-08-28 01:28:08 +03:00
Aliaksandr Valialkin
9196c085a7 all: port to FreeBSD on GOARCH=amd64 2019-08-28 01:19:23 +03:00
Aliaksandr Valialkin
3802ae9269 README.md: recommend checking which metrics will be deleted before deleting them 2019-08-27 15:01:16 +03:00
Artem Navoiev
b0090dbd86 add github actions (#160) 2019-08-27 14:42:46 +03:00
Aliaksandr Valialkin
603a79b357 app/vmstorage: increase default values for search.maxTagKeys, search.maxTagValues and search.maxUniqueTimeseries 2019-08-27 14:29:53 +03:00
Aliaksandr Valialkin
2655220c58 lib/storage: go fmt 2019-08-27 14:29:51 +03:00
Aliaksandr Valialkin
bf915fc0db lib/storage: report proper maxMetrics limit when more than -search.maxUniqueTimeseries series match the given filters 2019-08-27 14:21:42 +03:00
Aliaksandr Valialkin
2fc157ff7a lib/storage: properly handle (?i) in the tag filter regexp
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/161
2019-08-26 00:44:45 +03:00
Aliaksandr Valialkin
0dc0006f34 lib/storage: calculate the maximum number of rows per small part from -memory.allowedPercent
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/159

This simplifies error detection in addition to the `vm_rows_ignored_total` counters.
2019-08-25 15:31:47 +03:00
Aliaksandr Valialkin
4b688fffee lib/storage: calculate the maximum number of rows per small part from -memory.allowedPercent
This should improve query speed over recent data on machines with big amounts of RAM
2019-08-25 14:41:12 +03:00
Aliaksandr Valialkin
1402a6b981 lib/storage: properly limit the number of output rows in small and big parts storage
Previously small parts storage didn't take into account the available disk space for big parts.
2019-08-25 14:41:12 +03:00
Aliaksandr Valialkin
3308279c4e lib/storage: remove outdated comment on maxRowsPerSmallPart
The comment became outdated after commit ed6ac1a5df027f0dfc22448e3b27c26b6f77c67a,
which stops merging of small parts on graceful shutdown instead of waiting
for their completion.
2019-08-25 13:47:32 +03:00
Aliaksandr Valialkin
fb909cf710 app/vminsert/influx: set db label only if the Influx line doesn't have a db tag 2019-08-24 13:52:48 +03:00
Aliaksandr Valialkin
c4e75f09dc README.md: mention that -retentionPeriod must cover the backfilled data 2019-08-24 13:52:48 +03:00
Aliaksandr Valialkin
fb8840ac38 vendor: update github.com/valyala/quicktemplate from v1.1.1 to v1.2.0 2019-08-24 13:41:15 +03:00
Aliaksandr Valialkin
9c9221d1b2 app/vminsert: skip empty tags 2019-08-24 13:36:29 +03:00
Aliaksandr Valialkin
70ca018a57 app/vminsert/opentsdbhttp: skip invalid rows and continue parsing the remaining rows
Invalid rows are logged and counted in `vm_rows_invalid_total{type="opentsdb-http"}` metric
2019-08-24 13:36:29 +03:00
Aliaksandr Valialkin
4266091e4f app/vminsert/opentsdb: skip invalid rows and continue parsing the remaining rows
Invalid rows are logged and counted in `vm_rows_invalid_total{type="opentsdb"}` metric
2019-08-24 13:36:29 +03:00
Aliaksandr Valialkin
8001d29b6e app/vminsert/graphite: skip invalid rows and continue parsing the remaining rows
Invalid rows are logged and counted in `vm_rows_invalid_total{type="graphite"}` metric
2019-08-24 13:36:29 +03:00
Aliaksandr Valialkin
9d3f1fcbb9 app/vminsert/influx: skip invalid rows and continue parsing the remaining rows
Invalid influx lines are logged and counted in `vm_rows_invalid_total{type="influx"}` metric.
2019-08-24 13:36:29 +03:00
Aliaksandr Valialkin
ba7b3806be app/vminsert/influx: do not allow escaping the newline char, since it doesn't occur in real life
The previous report about escaped newline chars in the Influx line protocol was a false alarm.
2019-08-23 18:42:05 +03:00
Aliaksandr Valialkin
7fa88c6efc app/vminsert/opentsdbhttp: allow timestamp as float64 and as string, since it occurs in real life 2019-08-23 18:35:41 +03:00
Aliaksandr Valialkin
4da34b11f8 app/vminsert/influx: handle \r\n aka crlf influx line endings from windows world
Such lines exist in real life.
2019-08-23 18:28:49 +03:00
Aliaksandr Valialkin
a18317adbc app/vminsert/influx: allow escaping newline char
Though newline char isn't mentioned in escape rules at https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/ ,
there are reports that such chars occur in real life
2019-08-23 15:14:46 +03:00
Aliaksandr Valialkin
44d7fc599d app/vminsert/influx: skip comments starting with # in influx line protocol 2019-08-23 14:43:09 +03:00
Aliaksandr Valialkin
dce6079379 README.md: add a section about Go profiling 2019-08-23 13:37:09 +03:00
Aliaksandr Valialkin
98419c00ef vendor: make vendor-update 2019-08-23 10:02:10 +03:00
Aliaksandr Valialkin
ac004665b5 all: return 503 http error if service is temporarily unavailable
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/156
2019-08-23 09:55:07 +03:00
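Returning 503 matters because clients such as Prometheus remote_write retry on 5xx responses instead of dropping data. A minimal sketch of such a handler, with a placeholder readiness check rather than the actual VictoriaMetrics logic:

```go
package main

import "net/http"

// ready is a placeholder readiness check; the real condition would be
// e.g. storage still opening or a cache not yet loaded.
func ready() bool { return true }

func main() {
	http.HandleFunc("/write", func(w http.ResponseWriter, r *http.Request) {
		if !ready() {
			// 503 signals a temporary condition, so well-behaved clients retry.
			http.Error(w, "service is temporarily unavailable", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusNoContent)
	})
	_ = http.ListenAndServe(":8428", nil)
}
```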
Aliaksandr Valialkin
8c03a8c4b4 app/vminsert: allow setting the maximum number of labels per time series via -maxLabelsPerTimeseries 2019-08-23 08:45:26 +03:00
Aliaksandr Valialkin
8a126c2865 README.md: mention that VictoriaMetrics supports enterprise workloads 2019-08-22 18:00:47 +03:00
Aliaksandr Valialkin
380cae23a0 lib/storage: add benchmarks for regexp filter match / mismatch
These benchmarks allow estimating the performance of regexp filters in promql
2019-08-22 16:36:42 +03:00
Aliaksandr Valialkin
1272e407b2 app/vmselect/promql: attempt to repair invalid bucket counts passed to histogram_quantile
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/136
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/154
2019-08-22 14:39:46 +03:00
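The commit doesn't spell out the repair strategy, but one plausible approach is to enforce monotonicity: cumulative `le` bucket counts must never decrease, so decreasing counts can be clamped before the quantile is computed. A sketch under that assumption:

```go
package main

import "fmt"

// repairBucketCounts makes cumulative histogram bucket counts non-decreasing,
// so a later quantile calculation gets monotonic input.
func repairBucketCounts(counts []float64) {
	for i := 1; i < len(counts); i++ {
		if counts[i] < counts[i-1] {
			counts[i] = counts[i-1]
		}
	}
}

func main() {
	counts := []float64{5, 3, 9, 8} // broken: counts for growing le buckets must not decrease
	repairBucketCounts(counts)
	fmt.Println(counts) // [5 5 9 9]
}
```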
Aliaksandr Valialkin
5f33fc8e46 app/vminsert: add ability to ingest data via HTTP OpenTSDB /api/put requests
This is manual merge of the https://github.com/VictoriaMetrics/VictoriaMetrics/pull/152
Thanks to nustinov@gmail.com for the initial pull request.
2019-08-22 12:28:32 +03:00
Aliaksandr Valialkin
ec8125606d app/vminsert/opentsdb: fix BenchmarkRowsUnmarshal by adding missing put prefixes to each line 2019-08-21 19:14:47 +03:00
Aliaksandr Valialkin
f4a38f7fb1 app/vmselect/promql: fix panic on -search.disableCache
Reset the cache if it is disabled instead of stopping, since it is stopped on graceful shutdown.
2019-08-21 17:11:52 +03:00
Aliaksandr Valialkin
ab740afd0d app/vmselect/promql: explain why empty timeseries aren't removed in transformLabelValue 2019-08-21 11:29:24 +03:00
Aliaksandr Valialkin
7b5168adfb app/vmselect/promql: remove NaNs from /api/v1/query_range output like Prometheus does
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/153
2019-08-20 23:01:41 +03:00
Aliaksandr Valialkin
a0d480fbf3 app/vmselect/promql: pre-allocate memory for the map used to check for duplicate timeseries
This should reduce memory allocations for a big number of timeseries
2019-08-20 23:01:39 +03:00
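The pre-allocation itself is plain Go: sizing the map with the expected entry count avoids rehashing while series are checked. A minimal sketch with an invented series type:

```go
package main

import "fmt"

type timeseries struct{ key string }

// dedup drops duplicate series. Sizing the map up front with len(tss)
// avoids repeated rehashing when the number of series is big.
func dedup(tss []timeseries) []timeseries {
	seen := make(map[string]struct{}, len(tss))
	dst := tss[:0]
	for _, ts := range tss {
		if _, ok := seen[ts.key]; ok {
			continue // duplicate time series
		}
		seen[ts.key] = struct{}{}
		dst = append(dst, ts)
	}
	return dst
}

func main() {
	tss := []timeseries{{"a"}, {"b"}, {"a"}}
	fmt.Println(len(dedup(tss))) // 2
}
```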
Aliaksandr Valialkin
0dfc1ace53 README.md: add a section about backfilling 2019-08-20 00:34:51 +03:00
Aliaksandr Valialkin
d3fd113a80 app/vmselect/promql: add label_value(q, label_name) func, which returns the numeric value of the label with the name label_name in q 2019-08-20 00:28:34 +03:00
Aliaksandr Valialkin
4f738c8a15 lib/storage: try a slower path for searching for the tag filter with the minimum number of matching time series before giving up with the error suggesting to increase -search.maxUniqueTimeseries 2019-08-19 16:04:21 +03:00
Aliaksandr Valialkin
dd86e6130c app/vmselect/promql: independently track offset hints for tStart and tEnd
This should improve performance if a time series starts or ends inside the selected time range
2019-08-19 13:40:14 +03:00
Aliaksandr Valialkin
6a27657d73 app/vmselect/promql: optimize search for timestamp boundaries in rollupConfig.Do
This should improve the performance of queries over a big number of time series
with a big number of output points.
2019-08-19 13:03:29 +03:00
Aliaksandr Valialkin
c23b66a1ad lib/storage: pre-allocate memory for blockHeader slice in unmarshalBlockHeaders
This reduces memory usage and memory fragmentation when working with a big number of time series
2019-08-19 12:46:33 +03:00
Aliaksandr Valialkin
be39414f9c deployment/docker: switch Go builder from go1.12.8 to go1.12.9 2019-08-18 22:07:58 +03:00
Aliaksandr Valialkin
e74fb23189 app/vmselect/promql: add scrape_interval(q[d]) function, which returns the scrape interval for q over d 2019-08-18 21:08:26 +03:00
Aliaksandr Valialkin
582fdc059a app/vmselect/promql: handle comparisons with NaN similarly to Prometheus
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/150
2019-08-18 00:25:50 +03:00
Aliaksandr Valialkin
1c108fc494 app/vmselect/promql: add lifetime(q[d]) function, which returns the lifetime of q over d in seconds.
This function is useful for determining time series lifetime.
`d` must exceed the expected lifetime of the time series, otherwise
the function would return values close to `d`.
2019-08-16 11:59:32 +03:00
Aliaksandr Valialkin
d6b5ed6d39 app/vmselect/promql: fix corner-case calculation for ideriv 2019-08-16 11:59:28 +03:00
Aliaksandr Valialkin
639b14e8ab app/vmselect/promql: properly handle corner cases for rollup functions 2019-08-15 23:29:59 +03:00
Aliaksandr Valialkin
483de1cc06 lib/workingsetcache: automatically detect when it is better to double cache capacity 2019-08-15 22:57:55 +03:00
Aliaksandr Valialkin
9e0896055d deployment/docker: switch Go builder from go1.12.7 to go1.12.8 2019-08-15 20:43:36 +03:00
Aliaksandr Valialkin
5bb61b8b38 vendor: update github.com/valyala/gozstd from v1.5.1 to v1.6.0 2019-08-15 12:56:42 +03:00
Aliaksandr Valialkin
75a58dee02 README.md: typo fix 2019-08-14 03:28:07 +03:00
Aliaksandr Valialkin
5b41122292 lib/storage: properly cache tagFilters -> TSIDs entries from historical index 2019-08-14 02:29:58 +03:00
Aliaksandr Valialkin
964c296f96 lib/storage: compress contents of cache for tagFilters -> TSIDs
This should increase cache capacity
2019-08-14 02:29:52 +03:00
Aliaksandr Valialkin
9ecb994671 app/vmselect/promql: store compressed results in the cache
This should increase rollup results cache capacity.
2019-08-14 02:29:45 +03:00
Aliaksandr Valialkin
9d41e0dcae README.md: reduce the recommended max_shards value according to test results
See https://github.com/prometheus/prometheus/issues/5803#issuecomment-520973662
2019-08-13 22:33:10 +03:00
Aliaksandr Valialkin
09fc6e22e5 all: use workingsetcache instead of fastcache
This should reduce the amount of RAM required for processing time series
with non-zero churn rate.

The previous cache behavior can be restored with `-cache.oldBehavior` command-line flag.
2019-08-13 21:39:34 +03:00
Aliaksandr Valialkin
99c37c2c96 lib/fs: add test for IsTemporaryFileName 2019-08-13 21:33:45 +03:00
Aliaksandr Valialkin
06c2c25544 Makefile: consistency renaming: check_all -> check-all 2019-08-13 21:31:19 +03:00
Aliaksandr Valialkin
ec1b185991 lib/storage: remove broken BenchmarkIndexDBSearchTSIDs 2019-08-13 20:22:08 +03:00
Aliaksandr Valialkin
0967683ae9 lib: move common code for creating flock.lock file into fs.CreateFlockFile 2019-08-13 01:45:46 +03:00
Aliaksandr Valialkin
ad8a43b4e1 README.md: fix metric names in influx line protocol example
Default separator between `measurement` and `field_name` is `_`.
2019-08-12 15:58:34 +03:00
Aliaksandr Valialkin
7346982763 README.md: mention that Influx line protocol accepts timestamps in nanoseconds by default 2019-08-12 15:31:52 +03:00
Aliaksandr Valialkin
5d8d110010 lib/fs: atomically create file with the given contents on WriteFileAtomically
This should prevent corruption of `transaction` and `metadata.json` files
on unclean shutdown such as OOM, `kill -9`, power loss, etc.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/148
2019-08-12 15:02:55 +03:00
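The usual pattern behind an atomic file write is: write to a temporary file, fsync it, rename it over the destination, then fsync the parent directory so the rename itself survives power loss. A self-contained sketch (not the actual lib/fs code):

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// writeFileAtomically writes data to a temporary file, fsyncs it and renames
// it into place, so readers never observe a partially written file.
// The ".tmp" suffix is a simplification; real code would use a unique name.
func writeFileAtomically(path string, data []byte) error {
	tmp := path + ".tmp"
	f, err := os.Create(tmp)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		f.Close()
		return err
	}
	if err := f.Sync(); err != nil { // persist contents before the rename
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	if err := os.Rename(tmp, path); err != nil {
		return err
	}
	// Sync the parent directory so the rename survives power loss.
	dir, err := os.Open(filepath.Dir(path))
	if err != nil {
		return err
	}
	defer dir.Close()
	return dir.Sync()
}

func main() {
	if err := writeFileAtomically("metadata.json", []byte(`{}`)); err != nil {
		log.Fatal(err)
	}
}
```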
Aliaksandr Valialkin
0b488f1e37 lib/storage: do not change timestamps to constant rate if values are constant or have constant delta
Changing them to a constant rate breaks the original timestamps, which results in issues like
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/120 and
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/141 .
2019-08-06 15:40:07 +03:00
Aliaksandr Valialkin
b8bb74ffc6 app/vmstorage: add vm_concurrent_addrows_* metrics for tracking concurrency of Storage.AddRows calls
Also track the number of rows dropped due to an exceeded timeout
on the concurrency limit for Storage.AddRows. This number is tracked in `vm_concurrent_addrows_dropped_rows_total`
2019-08-06 15:08:33 +03:00
Aliaksandr Valialkin
5c9e48417a vendor: update github.com/VictoriaMetrics/metrics to v1.7.1 2019-08-05 19:21:36 +03:00
Aliaksandr Valialkin
5c83f8e203 app: add vm_concurrent_ metrics for visibility in concurrency limiters for vminsert and vmselect 2019-08-05 18:30:57 +03:00
Aliaksandr Valialkin
05713469c3 vendor: make vendor-update 2019-08-05 10:33:21 +03:00
Aliaksandr Valialkin
8822079b77 lib/storage: properly reset partSearch.fetchData in partSearch.reset 2019-08-05 09:56:06 +03:00
Aliaksandr Valialkin
99e048c9df app/vmselect: allow passing match[], start and time to /api/v1/label/<label_name>/values
`/api/v1/label/<label_name>/values?match[]=q` emulates the `label_values(q, <label_name>)`
call in Grafana templating.
2019-08-04 23:09:21 +03:00
Aliaksandr Valialkin
47e4b50112 app/vmselect: optimize /api/v1/series by skipping storage data
Fetch and process only time series metainfo.
2019-08-04 23:01:28 +03:00
Aliaksandr Valialkin
241170dc05 app/vmselect/prometheus: prevent fetching and scanning all the data on /api/v1/series calls by default 2019-08-04 19:42:36 +03:00
Aliaksandr Valialkin
1c69f4eadc app/vmselect/promql: tune automatic window adjustment
Increase the window adjustment for small scrape intervals,
since they usually have higher jitter.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/139
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/134
2019-08-04 19:34:05 +03:00
Aliaksandr Valialkin
8d93b15b86 app/vmselect/promql: further increase the allowed jitter for scrape interval
Real-world production data shows higher jitter than 1/8 of scrape interval.
This may result in gaps on the graph. So increase the allowed jitter to 1/4
of the scrape interval in order to reduce the probability of gaps on the graphs
over time series with high jitter for scrape_interval.
2019-08-02 20:10:23 +03:00
Aliaksandr Valialkin
fcc166622a README.md: mention that monitoring is recommended for VictoriaMetrics 2019-08-02 15:27:10 +03:00
Aliaksandr Valialkin
a9f39168d2 app/vminsert/influx: round automatically generated timestamp according to the given precision arg 2019-08-02 00:24:06 +03:00
Aliaksandr Valialkin
f090b2e917 app/vmselect/promql: tolerate higher jitter in scrape interval
Allow jitter of up to 1/8 instead of 1/16 of the scrape interval.
This should improve graphs when `step` is smaller than the `scrape_interval`.
2019-08-01 23:26:00 +03:00
Aliaksandr Valialkin
10caad4728 lib/decimal: modernize tests a bit 2019-07-31 21:10:03 +03:00
Aliaksandr Valialkin
3b90c2a99a Add CODE_OF_CONDUCT.md 2019-07-31 15:44:26 +03:00
Aliaksandr Valialkin
57ec4f5f92 Update issue templates
Add a template for feature request
2019-07-31 15:41:57 +03:00
Aliaksandr Valialkin
01cb15b6f5 Update issue templates
Add a template for bug report.
2019-07-31 15:39:41 +03:00
Aliaksandr Valialkin
b9256511e8 README.md: add join slack badge 2019-07-31 15:27:11 +03:00
Aliaksandr Valialkin
3a38b23fa3 app/vmselect/promql: add vm_slow_queries_total metric for counting slow queries
A query is considered slow if its execution time exceeds `-search.logSlowQueryDuration`
2019-07-31 03:36:37 +03:00
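The mechanics behind such a counter are simple to sketch: time the query and bump the counter when the duration crosses the flag value. Names below are illustrative, not the actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

var slowQueriesTotal int // stands in for the vm_slow_queries_total counter

// runQuery counts the query as slow if it runs longer than logSlowQueryDuration.
func runQuery(logSlowQueryDuration time.Duration, query func()) {
	start := time.Now()
	query()
	if d := time.Since(start); d > logSlowQueryDuration {
		slowQueriesTotal++
		fmt.Printf("slow query took %s\n", d)
	}
}

func main() {
	runQuery(time.Millisecond, func() { time.Sleep(5 * time.Millisecond) })
	fmt.Println("slow queries:", slowQueriesTotal) // 1
}
```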
Aliaksandr Valialkin
8bd6f1f6df app/vmselect/promql: return NaN from histogram_quantile if at least a single bucket is broken 2019-07-31 01:18:07 +03:00
Aliaksandr Valialkin
4aaa5c2efc app/vmselect/promql: allow adjusting window for default rollup function
Default rollup function is `last_over_time`. It must support adjusting
the provided window in order to prevent gaps on the graph
for window values smaller than scrape interval.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/134
2019-07-31 00:45:54 +03:00
Aliaksandr Valialkin
10f5a26bec app/vmselect/promql: return NaN values if invalid bucket counts are passed to histogram_quantile
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/136
2019-07-30 22:05:10 +03:00
Aliaksandr Valialkin
c14fd6c43f lib/storage: typo fixes after a77e88db7d 2019-07-30 15:38:52 +03:00
Aliaksandr Valialkin
a77e88db7d lib/storage: fix matching against tag filter with empty name
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/137
2019-07-30 15:15:09 +03:00
Aliaksandr Valialkin
aad7236e5d README.md: formatting fixes 2019-07-28 22:02:42 +03:00
Artem Navoiev
5e5de6be9a Create CONTRIBUTING.md 2019-07-28 20:42:32 +03:00
Anton Patsev
90cf6f3fcb change /usr/bin/victoriametrics to /usr/bin/victoria-metrics-prod (#132) 2019-07-28 20:40:46 +03:00
Artem Navoiev
8e3d69219f Add roadmap (#130)
* Add roadmap
* Fix typos
2019-07-28 18:39:39 +01:00
Aliaksandr Valialkin
b842a2eccc README.md: mention that VictoriaMetrics needs free disk space for background merges 2019-07-28 12:26:16 +03:00
Aliaksandr Valialkin
afcc7fb167 app/vmselect/netstorage: improve error message when reading data blocks from storage
Mention the block number in the error. This should simplify troubleshooting in this code.
2019-07-28 12:12:35 +03:00
Aliaksandr Valialkin
57a57c711a package: changed the remaining /usr/local/bin to /usr/bin
This is a follow-up after 68f260d878
2019-07-28 11:08:07 +03:00
Anton Patsev
68f260d878 change /usr/local/bin to /usr/bin (#131) 2019-07-28 11:06:24 +03:00
Aliaksandr Valialkin
1eade9b358 app/vminsert: add vm_rows_per_insert summary metric
This metric should help tune batch sizes on clients writing data to VictoriaMetrics
2019-07-27 13:21:46 +03:00
Aliaksandr Valialkin
7e8747f6ed README.md: add a section for production ARM build 2019-07-26 22:34:31 +03:00
Aliaksandr Valialkin
0168a1b658 package: various fixes
- Use `-prod` binaries instead of development binaries for both deb and rpm packages.
- Fix binary directory from /usr/sbin to /usr/local/bin as outlined in package/victoria-metrics.service
- Fix binary name from `victoriametrics` to `victoria-metrics-prod` in package/victoria-metrics.service
2019-07-26 22:31:04 +03:00
Aliaksandr Valialkin
bf6cbb762c app/vminsert: improve error messages for Influx, OpenTSDB and Graphite parsing
Include in the error message the line which failed to parse.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/127
2019-07-26 22:08:52 +03:00
Kostya Vasilyev
6aeac37fc5 pick up .service file from ./rpm (#126)
* pick up .service file from ./rpm

* feedback from @patsevanton

* remove 'start' from ExecStart command
2019-07-26 21:56:30 +03:00
Aliaksandr Valialkin
c98725db55 app/vmstorage: consistency renaming for ignored rows metrics
    vm_too_big_timestamp_rows_total   -> vm_rows_ignored_total{reason="big_timestamp"}
    vm_too_small_timestamp_rows_total -> vm_rows_ignored_total{reason="small_timestamp"}
2019-07-26 20:02:06 +03:00
Anton Patsev
d8043f7161 Change default value storageDataPath (#125)
Fixes #124 .
2019-07-26 14:13:55 +03:00
Aliaksandr Valialkin
f586e1f83c lib/storage: add metrics for calculating skipped rows outside the retention
The metrics are:

    - vm_too_big_timestamp_rows_total
    - vm_too_small_timestamp_rows_total
2019-07-26 14:11:01 +03:00
Kostya Vasilyev
d1132bb188 deb packaging fixes: 1) stop the service in prerm 2) reload services in postrm (#123) 2019-07-26 12:38:59 +03:00
Aliaksandr Valialkin
915fb6df79 README.md: mention that arm builds can run on Raspberry Pi 2019-07-26 12:28:40 +03:00
Kostya Vasilyev
89eb6d78a4 RPM packaging (#122) 2019-07-25 23:47:41 +03:00
Aliaksandr Valialkin
17096b5750 app/vmselect/promql: return NaN from count() over zero time series
This aligns `count` behavior with Prometheus.
2019-07-25 22:02:30 +03:00
Aliaksandr Valialkin
66efa5745f app/vmselect/promql: properly calculate incremental aggregations grouped by __name__
Previously the following query might fail when multiple distinct metric names matched:

    sum(count_over_time{__name__!=''}) by (__name__)
2019-07-25 21:53:20 +03:00
Anton Patsev
106ab78a47 Add package/rpm/ (#121) 2019-07-25 11:21:55 +03:00
Aliaksandr Valialkin
8aa474d685 README.md: move how to build VictoriaMetrics section to the bottom
This streamlines the `getting started` experience
2019-07-25 11:17:30 +03:00
Aliaksandr Valialkin
9e059bb330 README.md: add links to ARM build and Pure Go build in TOC 2019-07-25 11:05:35 +03:00
Aliaksandr Valialkin
2346335ea6 README.md: moved advanced topics to the bottom, so they don't clutter getting started workflow 2019-07-25 11:00:41 +03:00
Aliaksandr Valialkin
b339890dca lib/encoding/zstd: go fmt 2019-07-25 01:37:16 +03:00
Aliaksandr Valialkin
6c4ca89d75 lib/encoding/zstd: disable CRC checks in pure Go build
This should give slightly better compression and decompression performance.
Additionally this shaves off 4 bytes per compressed block.
2019-07-24 19:17:16 +03:00
Roman Khavronenko
f0fe7b5ad6 fix typo (#117) 2019-07-24 07:48:28 +01:00
Aliaksandr Valialkin
22ed4e7fd4 vendor: make vendor-update 2019-07-23 20:00:19 +03:00
Aliaksandr Valialkin
162f1fb1b7 all: small updates after PR #114 2019-07-23 19:54:50 +03:00
Aliaksandr Valialkin
d07f616609 lib/encoding: small fixes in tests after the PR #114 2019-07-23 19:37:51 +03:00
Roman Khavronenko
5bf4e5ffb5 all: add Pure Go build (pull request #114)
Updates #94
2019-07-23 19:26:39 +03:00
Kostya Vasilyev
8c3629a892 Debian packaging (#116)
* initial commit of deb packaging

* Incorporated feedback from @valyala:
- Put data directory under /var/lib
- More beef in systemd file
- Packaging for arm64
- Package all target which builds and packages both amd64 and arm64

* Remove PIDFile from systemd unit, useless

* per PR feedback, move debian specific files into deb subdirectory

Updates #107 .
2019-07-22 17:12:48 +03:00
Aliaksandr Valialkin
ea07cf68ba README.md: add querying Graphite data section
Mention that Graphite data may be read either via Prometheus querying API
or via go-graphite/carbonapi. See https://github.com/go-graphite/carbonapi/blob/master/cmd/carbonapi/carbonapi.example.prometheus.yaml
2019-07-21 16:10:19 +03:00
Roman Khavronenko
4ee41bab43 add versioning to dashboard description (#113) 2019-07-21 14:34:50 +03:00
Roman Khavronenko
1273f31f19 Add CPU usage panel; rename Go runtime to Resource usage (#112)
* add CPU usage panel; rename `Go runtime` to `Resource usage`

* rm irate from CPU usage panel

Updates #92 .
2019-07-20 17:24:24 +03:00
Aliaksandr Valialkin
0f2ecde0e6 lib/encoding: improve gauge series detection
- Series with negative values are always gauges
- Counters may only have increasing values with possible counter resets

This should improve compression ratio for gauge series which
were previously mistakenly detected as counters.
2019-07-20 14:05:09 +03:00
Aliaksandr Valialkin
6cd77d4847 deployment: switch builder from go1.12.6 to go1.12.7 2019-07-20 12:15:05 +03:00
Roman Khavronenko
fb14f23532 mention docker-compose as option to spin up VM (#97) 2019-07-16 00:45:21 +03:00
Aliaksandr Valialkin
daba0cdb05 lib/netutil: do not count timeouts as network errors 2019-07-15 23:05:35 +03:00
Aliaksandr Valialkin
575d2f0a91 app/vminsert: use netutil.TCPListener for collecting network-related metrics for Graphite and OpenTSDB TCP traffic 2019-07-15 22:58:00 +03:00
Aliaksandr Valialkin
ec1b439329 README.md: expand capacity planning section a bit 2019-07-12 21:19:27 +03:00
Aliaksandr Valialkin
6a943a6a58 app/vmselect/promql: remove empty time series after applying filters like q > 0
This should reduce CPU and RAM usage for queries over a high number of time series.
2019-07-12 19:59:27 +03:00
Aliaksandr Valialkin
998525999c vendor: update github.com/VictoriaMetrics/metrics to v1.7.0
This version adds support for `process_*` metrics similar
to metrics exposed by https://github.com/prometheus/client_golang .

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/92
2019-07-12 17:22:53 +03:00
Aliaksandr Valialkin
ab88890523 app/vmselect/promql: parallelize incremental aggregation to multiple CPU cores
This may reduce response times for aggregation over a big number of time series
with a small step between output data points.
2019-07-12 15:52:22 +03:00
Aliaksandr Valialkin
374d681848 README.md: clarify that Prometheus replicates data to remote storage 2019-07-12 02:51:04 +03:00
Aliaksandr Valialkin
e75d5f47c4 lib/storage: remove unused function isTooBigTimeRangeForDateMetricIDs 2019-07-12 02:28:23 +03:00
Aliaksandr Valialkin
fc90ebf43c lib/storage: do not reduce maxMetrics on time ranges exceeding maxDaysForDateMetricIDs
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/95
2019-07-12 02:20:34 +03:00
Aliaksandr Valialkin
62a7353479 app/vmselect/prometheus: set start arg in /api/v1/series to the minimum allowed time by default as Prometheus does
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/91
2019-07-11 17:10:14 +03:00
Aliaksandr Valialkin
54bd21eb4a app/vmselect/prometheus: convert negative times to 0, since they aren't supported by the storage 2019-07-11 17:07:20 +03:00
Aliaksandr Valialkin
2bd1a01d1a lib/storage: do not pollute inverted index with data for samples outside the retention period 2019-07-11 17:04:56 +03:00
Artem Navoiev
cd4833d3d0 integration tests 2019-07-11 15:48:08 +03:00
Aliaksandr Valialkin
101fa258e5 app/vmstorage: prepare for integration tests with multiple Init / Stop cycles 2019-07-11 15:34:50 +03:00
Aliaksandr Valialkin
d031e04023 lib/storage: use fast path for orSuffix when searching for metricIDs against plain tag value 2019-07-11 14:48:37 +03:00
Aliaksandr Valialkin
43ea4ce428 lib/storage: remember and skip individual tag filters matching too many metrics
This saves CPU time by skipping useless matching for individual tag filters.
2019-07-11 14:48:30 +03:00
Aliaksandr Valialkin
a336bb4e22 app/vmselect/promql: reduce RAM usage for aggregates over big number of time series
Calculate incremental aggregates for `aggr(metric_selector)` function instead of
keeping all the time series matching the given `metric_selector` in memory.
2019-07-10 13:04:39 +03:00
Aliaksandr Valialkin
1fe6d784d8 all: consistency renaming: bytesSize -> sizeBytes 2019-07-10 00:47:36 +03:00
Aliaksandr Valialkin
55fe36149c app/vmselect/promql: mention -search.logSlowQueryDuration flag value in the slow query log message 2019-07-10 00:41:24 +03:00
Aliaksandr Valialkin
9203170eb2 app/vmselect/promql: extract removeGroupTags function for removing unneeded tags from MetricName according to the given modifierExpr 2019-07-09 23:20:48 +03:00
Aliaksandr Valialkin
2db685c19c app/vmselect/promql: properly preserve the metric name after applying any of the functions listed in transformFuncsKeepMetricGroup 2019-07-09 23:10:35 +03:00
Aliaksandr Valialkin
6ddfb06b52 README.md: add alerting section 2019-07-08 22:45:34 +03:00
Aliaksandr Valialkin
40a6c0d672 app/vmselect/prometheus: typo fix 2019-07-07 23:34:23 +03:00
Aliaksandr Valialkin
1371024747 app/vmselect/prometheus: handle minTime and maxTime values that may be set by Promxy or Prometheus client
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/88
2019-07-07 21:53:48 +03:00
Roman Khavronenko
c27c6de297 add panels for Active time series, Disk space usage (datapoints) and Disk space usage (index) (#87) 2019-07-04 22:15:24 +03:00
Aliaksandr Valialkin
0c629429de README.md: clarify upgrading and applying new config sections 2019-07-04 20:07:00 +03:00
Aliaksandr Valialkin
4dbd642c86 app/vmselect/promql: remove empty timeseries left after topk call 2019-07-04 19:42:39 +03:00
Aliaksandr Valialkin
56c154f45b all: add vm_data_size_bytes metrics for easy monitoring of on-disk data size and on-disk inverted index size 2019-07-04 19:42:30 +03:00
Aliaksandr Valialkin
8d83dcf332 README.md: update community and contributions section 2019-07-04 09:36:36 +03:00
Aliaksandr Valialkin
9a4b2b8315 app/vmselect/prometheus: update adjustLastPoints function
- Do not overwrite last points by the previous NaNs, since this may result in empty time series.
- Overwrite the last 2 points instead of 3. This should be enough in most cases.
2019-07-04 09:14:18 +03:00
Aliaksandr Valialkin
e06866005d app/vmselect/promql: gracefully handle duplicate timestamps in irate and rollup_rate funcs
Previously such timestamps resulted in `+Inf` values. Now the previous timestamp is used
for the calculations.
2019-07-03 12:39:55 +03:00
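Assuming the fix steps back to the nearest sample with a strictly smaller timestamp instead of dividing by a zero time delta, the graceful handling might look like this sketch:

```go
package main

import "fmt"

// lastRate computes an irate-style value from the two most recent samples,
// stepping back past duplicate timestamps instead of dividing by zero.
// Timestamps are in milliseconds; the result is per second.
func lastRate(timestamps []int64, values []float64) float64 {
	n := len(timestamps)
	if n < 2 {
		return 0
	}
	tEnd, vEnd := timestamps[n-1], values[n-1]
	for i := n - 2; i >= 0; i-- {
		if timestamps[i] < tEnd {
			return (vEnd - values[i]) / float64(tEnd-timestamps[i]) * 1000
		}
	}
	return 0
}

func main() {
	ts := []int64{1000, 2000, 2000} // duplicate last timestamp
	vs := []float64{1, 2, 3}
	fmt.Println(lastRate(ts, vs)) // falls back to the sample at t=1000 -> 2
}
```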
Aliaksandr Valialkin
2c76a9c9ab README.md: enumerate the most interesting metrics exported at /metrics page 2019-07-01 23:41:08 +03:00
Aliaksandr Valialkin
b9166a60ff app/vmselect: do not return empty time series in /api/v1/query result 2019-07-01 17:16:34 +03:00
Aliaksandr Valialkin
c7034fc51b lib/memory: attempt #3 to determine memory limit for LXC container
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/84
2019-07-01 14:01:13 +03:00
Aliaksandr Valialkin
715c423f1a README.md: mention Thanos vs VictoriaMetrics article 2019-07-01 12:26:47 +03:00
Aliaksandr Valialkin
ca74e29458 README.md: explain how to configure HA setup for Prometheus HA pairs 2019-06-29 19:54:46 +03:00
Aliaksandr Valialkin
a41955863a lib/mergeset: make fmt 2019-06-29 14:25:26 +03:00
Aliaksandr Valialkin
2ecb117082 lib/storage: skip non-matching metricIDs in sortedFilter
This should improve performance for big sortedFilter lists.
2019-06-29 13:48:32 +03:00
Aliaksandr Valialkin
0c88afa386 lib/mergeset: speed up binarySearchKey by skipping the first item during binary search 2019-06-29 13:45:49 +03:00
Aliaksandr Valialkin
74c0fb04f3 app/vmselect/promql: consistency renaming: candlestick -> rollup_candlestick 2019-06-29 03:13:02 +03:00
Aliaksandr Valialkin
828078eb45 lib/memory: remove TestReadLXCMemoryLimit, since it doesn't work in Travis 2019-06-28 18:22:46 +03:00
Aliaksandr Valialkin
7b59466667 lib/memory: attempt #2 to determine memory limit inside LXC container
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/84
2019-06-28 18:08:15 +03:00
Aliaksandr Valialkin
79ac02ba74 README.md: clean up <img> attributes 2019-06-28 17:57:43 +03:00
Aliaksandr Valialkin
593bd35aaa lib/memory: an attempt to read proper memory limit inside LXC container
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/84
2019-06-28 15:34:30 +03:00
Aliaksandr Valialkin
7354f10336 vendor: update github.com/VictoriaMetrics/metrics to v1.6.2
This fixes Summary printing for *_count and *_sum values with metric names containing labels.
2019-06-28 14:17:17 +03:00
Aliaksandr Valialkin
e8998c69a7 vendor: update github.com/VictoriaMetrics/metrics to v1.6.1 2019-06-28 14:06:49 +03:00
Aliaksandr Valialkin
55bcf60ea6 app/vmselect: fix 32bit arm build
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/83
2019-06-27 19:36:17 +03:00
Aliaksandr Valialkin
796b010139 app/vmselect: add candlestick(m[d]) func for returning open, close, low and high rollups on the given time range d
This function is frequently used in financial apps. See https://en.wikipedia.org/wiki/Candlestick_chart
2019-06-27 18:46:13 +03:00
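Per output point, a candlestick rollup reduces the samples in the window to four values: open, high, low and close. A minimal sketch of that reduction:

```go
package main

import "fmt"

// ohlc reduces one window of samples to open, high, low and close values,
// the per-point essence of a candlestick rollup.
func ohlc(values []float64) (open, high, low, last float64) {
	if len(values) == 0 {
		return
	}
	open, last = values[0], values[len(values)-1]
	high, low = values[0], values[0]
	for _, v := range values[1:] {
		if v > high {
			high = v
		}
		if v < low {
			low = v
		}
	}
	return open, high, low, last
}

func main() {
	fmt.Println(ohlc([]float64{10, 12, 9, 11})) // 10 12 9 11
}
```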
Aliaksandr Valialkin
0c8a09c8e1 README.md: mention the global query view 2019-06-27 17:38:37 +03:00
Aliaksandr Valialkin
c1be1e4342 lib/storage: optimize time series search by regexp filter
This should improve search speed on label filters like `{foo=~"bar.+baz"}`
2019-06-27 16:17:43 +03:00
Aliaksandr Valialkin
0c8d463307 README.md: mention that Prometheus 2.10.0+ works better with remote_write
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/80
2019-06-27 00:54:48 +03:00
Jiri Tyr
e0fccc6c60 Change the default influxMeasurementFieldSeparator 2019-06-26 13:22:03 +03:00
Aliaksandr Valialkin
1f7d9a213a app/vminsert: fix infinite loop when reading two lines without a newline at the end
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/82
2019-06-26 02:51:56 +03:00
Aliaksandr Valialkin
7ce1f73ada README.md: add more information to rough estimation of the required resources 2019-06-26 02:20:33 +03:00
Aliaksandr Valialkin
e605315f01 README.md: add link to slack chat 2019-06-26 02:05:38 +03:00
Aliaksandr Valialkin
fcef49184b README.md: clarify docs about Influx line protocol support 2019-06-26 00:05:09 +03:00
Aliaksandr Valialkin
844ce4731e app/vmselect/promql: suppress error when template func is used inside modifier list. Just leave it as is
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/78
2019-06-25 20:43:22 +03:00
Aliaksandr Valialkin
683bf2a11f lib/storage: make sure non-nil args are passed to openIndexDB 2019-06-25 20:10:04 +03:00
Aliaksandr Valialkin
eb2283a029 lib/storage: reduce overly big maxMetrics values in getTagFilterWithMinMetricIDsCountAdaptive
This should improve performance of inverted index search for a big number of unique time series
when a big -search.maxUniqueTimeseries value is set.
2019-06-25 19:55:27 +03:00
Aliaksandr Valialkin
e8377011ab lib/storage: free up memory from caches owned by indexDB when it is deleted 2019-06-25 14:42:44 +03:00
Aliaksandr Valialkin
33ea2120c3 lib/storage: use unversioned keys for tag cache in extDB
Data in ExtDB cannot be changed, so it is OK to use unversioned keys for tag cache.
This should improve performance for index lookups over a big number of time series.
2019-06-25 13:08:58 +03:00
Aliaksandr Valialkin
cf63669303 lib/storage: skip searching in extDB if it doesn't contain items for the given time range
This should improve inverted index search performance for a big number
of unique time series when the search is performed only on recent data.
2019-06-25 13:00:37 +03:00
Aliaksandr Valialkin
feacfffe89 app/vmselect/promql: increase default value for -search.maxPointsPerTimeSeries from 10k to 30k
This may be required for subqueries with small steps. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/77
2019-06-24 22:53:18 +03:00
Aliaksandr Valialkin
4bb738ddd9 app/vmselect/promql: adjust value returned by linearRegression to the end of time range like Prometheus does
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/71
2019-06-24 22:45:58 +03:00
Aliaksandr Valialkin
90e72c2a42 app/vmselect/promql: add sum2 and sum2_over_time, geomean and geomean_over_time funcs.
These functions may be useful for statistical calculations.
2019-06-24 16:44:44 +03:00
Aliaksandr Valialkin
ccd8b7a003 README.md: mention how to recover from broken parts due to disk errors
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/76
2019-06-24 14:17:58 +03:00
Aliaksandr Valialkin
d32845781e README.md: remove unused TOC items 2019-06-24 14:12:07 +03:00
Aliaksandr Valialkin
af2ceaaa0b lib/storage: mention source parts on merge error
This should improve determining the broken source part.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/76
2019-06-24 14:08:43 +03:00
Aliaksandr Valialkin
61926bae01 app/vmselect/promql: adjust the provided window only for range functions with dt in denominator
This should fix range function calculations such as `changes(m[d])` where `d` is smaller
than the scrape interval.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/72
2019-06-23 19:27:31 +03:00
Aliaksandr Valialkin
ee13256f74 app/vmselect/promql: use deriv_fast instead of deriv in ttf, since deriv calculations have been changed recently 2019-06-23 15:54:18 +03:00
Aliaksandr Valialkin
3b3b2f1e6e app/vmselect/promql: adjust ttf calculation so that deriv(freev) for freev=m[d] can be properly calculated 2019-06-23 14:31:19 +03:00
Aliaksandr Valialkin
c9cbf5351c vendor: update github.com/valyala/gozstd to v1.5.1 2019-06-22 00:14:19 +03:00
Aliaksandr Valialkin
146c6e1f72 app/vmselect/promql: typo fixes in comments 2019-06-21 23:22:59 +03:00
Aliaksandr Valialkin
d261fa2885 app/vmselect/promql: add deriv_fast function for calculating fast derivative
`deriv_fast` calculates derivative based on the first and the last point on the interval
instead of calculating linear regression based on all the data points on the interval.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/73
2019-06-21 23:05:39 +03:00
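The difference between the two derivatives is easiest to see in code. A sketch of the two-point version, assuming millisecond timestamps:

```go
package main

import "fmt"

// derivFast estimates the per-second derivative from only the first and
// the last sample on the interval, instead of a full linear regression
// over all the data points.
func derivFast(timestampsMs []int64, values []float64) float64 {
	n := len(values)
	if n < 2 || timestampsMs[n-1] == timestampsMs[0] {
		return 0
	}
	dv := values[n-1] - values[0]
	dtSeconds := float64(timestampsMs[n-1]-timestampsMs[0]) / 1000
	return dv / dtSeconds
}

func main() {
	fmt.Println(derivFast([]int64{0, 5000, 10000}, []float64{1, 4, 6})) // 0.5
}
```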
Aliaksandr Valialkin
5b47c00910 app/vmselect/promql: use linear regression in deriv func like Prometheus does
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/73
2019-06-21 22:59:46 +03:00
Aliaksandr Valialkin
9e1119dab8 app/vmselect/promql: adjust data model to the model used in Prometheus
Do not take into account data points on the range `[timestamp .. timestamp+step)`
when calculating value on the given `timestamp`.
Use only data points from the past when performing these calculations like Prometheus does.

This should reduce discrepancies between results returned by VictoriaMetrics
and results returned by Prometheus.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/72
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/71
2019-06-21 21:54:48 +03:00
Aliaksandr Valialkin
47a3228108 app/vmselect/promql: do not strip __name__ from time series after binary comparison operation
Example:

  foo > 10

This would leave the `foo` name for all the matching time series on the left.
2019-06-21 13:09:38 +03:00
Aliaksandr Valialkin
e88a03323a all: initial stubs for Windows support; see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/70 2019-06-20 20:07:10 +03:00
Aliaksandr Valialkin
b75630fcf4 Makefile: enable golangci-lint in make check_all 2019-06-20 14:52:58 +03:00
Aliaksandr Valialkin
80db24386e lib/storage: typo fixes found by golangci-lint; updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:37:55 +03:00
Aliaksandr Valialkin
296c14317f lib/netutil: remove unused TCPListener.name; updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:36:15 +03:00
Aliaksandr Valialkin
973e4b5b76 app/vmselect/promql: remove unused func keepLastValue; updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:35:11 +03:00
Aliaksandr Valialkin
7aadec8e3c app/vmselect/promql: typo fix; updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:33:47 +03:00
Aliaksandr Valialkin
45fc8cb72f Makefile: add make golangci-lint rule for running golangci-lint run; updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:30:55 +03:00
Aliaksandr Valialkin
4b2523fb40 app/vminsert/opentsdb: remove unused const maxReadPacketSize; update https://github.com/VictoriaMetrics/VictoriaMetrics/issues/69 2019-06-20 14:30:06 +03:00
Aliaksandr Valialkin
70ba36fa37 app/vmselect/prometheus: return better error messages on missing args to /api/v1/* 2019-06-20 14:07:55 +03:00
Aliaksandr Valialkin
a78b3dba7f app/vmstorage: add vm_cache_entries{type="storage/hour_metric_ids"} metric for tracking active time series count 2019-06-19 18:36:47 +03:00
Aliaksandr Valialkin
a9cfca6a72 README.md: add max_shards: 100 to the recommended Prometheus config
Prometheus establishes a connection per shard in remote_write config.
By default it establishes up to 1000 connections to remote storage (max_shards: 1000).
This is quite big, so set `max_shards: 100` in the recommended Prometheus config.
2019-06-19 17:48:09 +03:00
Aliaksandr Valialkin
710d6c33ea lib/prompb: remove superfluous byte copying in ReadSnappy 2019-06-18 20:37:51 +03:00
Aliaksandr Valialkin
a8d4224828 app/vminsert/graphite: allow skipping timestamps in Graphite plaintext protocol
In this case VictoriaMetrics uses the ingestion time as a timestamp.
2019-06-18 19:04:04 +03:00
Aliaksandr Valialkin
341bed4822 README.md: mention that an arbitrary number of lines may be sent in a single request via supported ingestion protocols 2019-06-18 18:59:12 +03:00
Aliaksandr Valialkin
5982e94c94 vendor: update golang.org/x/sys 2019-06-18 16:19:26 +03:00
Aliaksandr Valialkin
6d6c9eb1f8 lib/flagutil: remove unused package 2019-06-18 10:43:55 +03:00
Aliaksandr Valialkin
86d3d907a5 app/vminsert/influx: add -influxSkipSingleField flag for using {measurement} instead of {measurement}{separator}{field_name} for Influx lines with a single field
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/66
2019-06-17 19:05:57 +03:00
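Taken together with -influxMeasurementFieldSeparator from the next entry below, the resulting metric name might be derived as in this sketch (not the actual parser):

```go
package main

import "fmt"

// metricName builds the resulting metric name for an Influx field.
// With skipSingleField enabled, a line with exactly one field keeps
// just the measurement name.
func metricName(measurement, separator, fieldName string, numFields int, skipSingleField bool) string {
	if skipSingleField && numFields == 1 {
		return measurement
	}
	return measurement + separator + fieldName
}

func main() {
	fmt.Println(metricName("cpu", "_", "usage", 1, true))  // cpu
	fmt.Println(metricName("cpu", "_", "usage", 1, false)) // cpu_usage
}
```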
Aliaksandr Valialkin
269285848f app/vminsert/influx: add -influxMeasurementFieldSeparator flag for the ability to change separator for {measurement}{separator}{field_name} metric name 2019-06-14 10:00:12 +03:00
Aliaksandr Valialkin
47e1e5eb4b deployment/docker: switch builder from go1.12.5 to go1.12.6 2019-06-14 09:32:06 +03:00
Aliaksandr Valialkin
d2c801029b lib/storage: persist metric ids for the current and the previous hour on graceful shutdown
This should improve performance after restart when the db contains a lot of time series
with high time series churn (i.e. metrics from Kubernetes with many pods and frequent deployments)
2019-06-14 07:55:14 +03:00
Aliaksandr Valialkin
beb479b8f1 app/vmselect/promql: use dynamic limit on memory for concurrent queries 2019-06-12 23:18:44 +03:00
Aliaksandr Valialkin
611c4401f8 README.md: mention multi-tenancy 2019-06-12 21:30:36 +03:00
Aliaksandr Valialkin
a8db528930 app/vmselect/promql: merge non-overlapping duplicate time series in group_left and group_right joins 2019-06-12 20:32:32 +03:00
Aliaksandr Valialkin
15613e5338 app/vmselect/promql: swap binary operation with modifier in the error message for improved readability 2019-06-12 17:14:39 +03:00
Aliaksandr Valialkin
3237d0309c app/vmselect/promql: list a sample of duplicate time series in the error message for group_left or group_right
This should improve troubleshooting for complex queries involving `group_left` and `group_right` modifiers.
2019-06-12 16:57:37 +03:00
Aliaksandr Valialkin
26f8d7ea1b lib/fs: sync parent dir in MustRemoveAll only if it exists
The parent directory may not exist when the deleted directory
didn't exist before the MustRemoveAll call
2019-06-12 02:14:44 +03:00
Aliaksandr Valialkin
419197ba08 lib/fs: consolidate *RemoveAll* funcs into a single MustRemoveAll func
The func syncs parent dir in order to persist directory removal
in the event of power loss
2019-06-12 01:53:46 +03:00
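A hypothetical sketch of the consolidated func, including the refinement from the entry above that syncs the parent dir only if it exists (not the actual lib/fs implementation):

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// mustRemoveAll removes path with all its contents and syncs the parent
// directory, so the removal is durable across power loss.
func mustRemoveAll(path string) {
	if err := os.RemoveAll(path); err != nil {
		log.Fatalf("cannot remove %q: %s", path, err)
	}
	parent, err := os.Open(filepath.Dir(path))
	if err != nil {
		return // the parent may not exist if path didn't exist either
	}
	defer parent.Close()
	if err := parent.Sync(); err != nil {
		log.Fatalf("cannot sync parent dir of %q: %s", path, err)
	}
}

func main() {
	mustRemoveAll("victoria-metrics-data/tmp")
}
```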
Aliaksandr Valialkin
a4b4db9bf6 README.md: add a chapter about downsampling 2019-06-12 01:32:26 +03:00
Aliaksandr Valialkin
c1276edab5 lib/fs: panic with fatal error when directories cannot be removed
Unremoved directories may lead to an inconsistent data directory,
so VictoriaMetrics will fail to start next time.

So panic on the first error when trying to remove a directory in order
to simplify the recovery process.
2019-06-12 01:20:54 +03:00
Aliaksandr Valialkin
2322c9a45a lib/fs: attempt #2 to work around NFS issue with directory removal
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/61
2019-06-12 01:07:05 +03:00
Aliaksandr Valialkin
89b928ff24 vendor: update github.com/VictoriaMetrics/fastcache to v1.5.1 2019-06-11 23:56:08 +03:00
Aliaksandr Valialkin
935bfd7a18 lib/fs: consistency renaming SyncPath -> MustSyncPath, since it doesn't return an error 2019-06-11 23:13:49 +03:00
Aliaksandr Valialkin
3dd36b8088 lib/fs: make sure the created directory remains visible in the fs in the event of power loss
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/63
2019-06-11 23:08:09 +03:00
Aliaksandr Valialkin
afb964670a lib/fs: use filepath.Dir instead of filepath.Split, since the filename is unused 2019-06-11 22:54:26 +03:00
Aliaksandr Valialkin
20fc0e0e54 lib/{storage,mergeset}: sync filenames inside part when finalizing the part
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/63
2019-06-11 21:51:13 +03:00
Aliaksandr Valialkin
4d9f088526 README.md: add examples on how to write data with Graphite and OpenTSDB protocols 2019-06-11 21:24:32 +03:00
Aliaksandr Valialkin
82d1707861 README.md: add missing port to example urls 2019-06-11 21:05:24 +03:00
Aliaksandr Valialkin
70d20ce8de README.md: use proper urls for single-node version in examples 2019-06-11 20:33:52 +03:00
Aliaksandr Valialkin
723bf1af7f README.md: add example on how to write data with Influx line protocol to VictoriaMetrics 2019-06-11 20:31:25 +03:00
Aliaksandr Valialkin
ac7b186f13 all: try hard to remove a directory with its contents
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/61
2019-06-11 01:57:59 +03:00
Roman Khavronenko
cd1bc32158 convert dashboard for provisioning (#62) 2019-06-11 01:07:09 +03:00
Aliaksandr Valialkin
1c33b5937e app/vmselect/promql: prevent count_values from exploding the number of timeseries, which could result in OOM 2019-06-11 01:03:13 +03:00
Aliaksandr Valialkin
8bb6bc986d app/vmselect/promql: skip superfluous timestamps copying in count_values 2019-06-11 00:44:01 +03:00
Aliaksandr Valialkin
d2be567482 app/vmselect/promql: remove superfluous timeseries copy in histogram_quantile func 2019-06-11 00:39:41 +03:00
Aliaksandr Valialkin
7e7d4d5275 app/vmselect/promql: remove superfluous timeseries copy in union func 2019-06-11 00:35:20 +03:00
Aliaksandr Valialkin
bf9782eaf6 app/vmselect/promql: skip NaN values in count_values func 2019-06-10 22:42:32 +03:00
Aliaksandr Valialkin
cbe692f0e2 app/vmselect: add /api/v1/labels/count handler for quick detection of labels with the maximum number of distinct values 2019-06-10 19:55:38 +03:00
Aliaksandr Valialkin
7b6623558f lib/storage: skip adaptive searching for the tag filter matching the minimum number of metrics if an identical previous search didn't find such a filter
This should improve speed for searching metrics among a high number of time series
with a high churn rate, as in big Kubernetes clusters with frequent deployments.
2019-06-10 14:07:39 +03:00
Aliaksandr Valialkin
a1351bbaee lib/storage: factor out getTagFilterWithMinMetricIDsCountAdaptive from updateMetricIDsForTagFilters 2019-06-10 13:26:44 +03:00
Aliaksandr Valialkin
b4d707d9bb lib/storage: give clearer names to more functions 2019-06-10 13:01:23 +03:00
Aliaksandr Valialkin
bee7298f81 lib/storage: give clearer names to functions 2019-06-10 12:50:44 +03:00
Aliaksandr Valialkin
dbd217b8f0 lib/storage: test GetSeriesCount 2019-06-10 12:43:34 +03:00
Aliaksandr Valialkin
4d936b1524 lib/storage: make the getSeriesCount func an indexSearch method 2019-06-10 12:29:11 +03:00
Aliaksandr Valialkin
7354090aad app/vmstorage: add missing _total suffixes to newly added metrics 2019-06-09 22:11:36 +03:00
Aliaksandr Valialkin
d37924900b lib/storage: optimize time series lookup for recent hours when the db contains many millions of time series with high churn rate (aka frequent deployments in Kubernetes) 2019-06-09 19:13:56 +03:00
Aliaksandr Valialkin
c0baa977cf app/vminsert/concurrencylimiter: typo fix in the error message 2019-06-08 22:43:33 +03:00
Aliaksandr Valialkin
f4252f87e6 app/vminsert: really fix #60
ReadLinesBlock may accept dstBuf with non-zero length. In this case the last line without a trailing newline isn't read.
Fix this by comparing len(dstBuf) to 0 instead of its original length.
2019-06-07 23:37:03 +03:00
Aliaksandr Valialkin
0b78d228d2 app/vminsert: properly read the trailing line without a newline at the end
This fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/60
2019-06-07 23:17:59 +03:00
Aliaksandr Valialkin
0371c216a7 deployment/docker: move victoriametrics single-node docker image from valyala/victoria-metrics to victoriametrics/victoria-metrics docker hub path 2019-06-07 11:52:53 +03:00
Aliaksandr Valialkin
c1f18ee48d app/vmselect/promql: properly handle {__name__ op "string"} queries
This has been broken in 7294ef333ad26f4f6578b783e97649e58b1f8945 .
2019-06-07 02:02:04 +03:00
Roman Khavronenko
fbd7044b2b Dashboard update (#57)
* split "pending datapoints" by storage and index pending entities

* update provisioned dVM dashboard
2019-06-07 01:31:45 +03:00
Roman Khavronenko
2afe511d80 Setup Grafana provisioning for docker-compose setup (#50)
* setup Grafana provisioning for docker-compose setup

* review fixes
2019-06-06 23:37:44 +03:00
Seua Polyakov
f4e63cd070 Add SIGINT as stopsignal to docker file (#54)
Add sigint as stopsignal to docker file. You can find more here: https://docs.docker.com/engine/reference/builder/#usage
With this change, the main process inside the container will receive SIGINT, and after a grace period, SIGKILL.
2019-06-06 22:36:21 +03:00
Aliaksandr Valialkin
667115a5c7 app/vmselect/prometheus: report about incorrect time or duration instead of silently using the default value
This should prevent incorrect usage of the querying API.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/52
2019-06-06 22:18:18 +03:00
Aliaksandr Valialkin
1458450dba app/vmselect/promql: return the correct time series from quantile
Previously arbitrary time series could be returned from `quantile`
depending on the sort order of the last data point in the selected range.

Fix this by returning the calculated time series.

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/55
2019-06-06 17:07:31 +03:00
Aliaksandr Valialkin
5a5ba749f2 README.md: add an example on how Influx line protocol is converted into Prometheus data points 2019-06-06 16:08:29 +03:00
Aliaksandr Valialkin
a3e26de45e lib/procutil: typo fix in comment to WaitForSigterm 2019-06-04 17:31:47 +03:00
Aliaksandr Valialkin
53ea90865d app/vmselect/promql: add -search.disableCache flag for disabling response caching
This may be useful for data back-filling, when the response caching
could interfere badly with newly added data points with timestamps
in the past.
2019-06-04 17:30:45 +03:00
Aliaksandr Valialkin
17f0a53068 app/vminsert: explain that /query request emulation is required for TSBS benchmark 2019-06-03 18:40:27 +03:00
Anton Patsev
b03bdb32ff Prettify Table of contents (#47) 2019-06-03 17:31:15 +03:00
Aliaksandr Valialkin
15f59c6df9 deployment/docker: remove trailing whitespace 2019-06-03 14:53:08 +03:00
Artem Navoiev
da45a20491 docker compose for VM 2019-06-03 09:57:33 +02:00
Roman Khavronenko
5859bb9556 Add grafana dashboard for VM (#46) 2019-06-03 00:25:07 +03:00
Aliaksandr Valialkin
28f6c36ab4 lib/storage: tune updating a map with today's metric ids
- Increase the update interval from 1s to 10s. This should reduce CPU usage
  for large amounts of metric ids with constant churn.
- Reduce pendingTodayMetricIDsLock lock duration during the update.
2019-06-02 21:58:16 +03:00
Aliaksandr Valialkin
4794f894a4 lib/storage: speed up checking metricID existence in the list for the current date 2019-06-02 18:34:08 +03:00
Aliaksandr Valialkin
c7280ba61a vendor: update deps with make vendor-update 2019-06-01 23:39:58 +03:00
Aliaksandr Valialkin
fbd8b03f15 README.md: fixed the link to yum repository source codes 2019-06-01 13:55:44 +03:00
Aliaksandr Valialkin
d17a47e3e0 README.md: add setting up service chapter 2019-05-31 23:34:09 +03:00
Aliaksandr Valialkin
d6862a2d97 README.md: mention that VictoriaMetrics works with time series data from Kubernetes 2019-05-31 22:53:35 +03:00
Aliaksandr Valialkin
f2cf5d8e36 app/vmselect/promql: allow escaping identifiers with \ and \xXX
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/42
2019-05-31 17:35:17 +03:00
Aliaksandr Valialkin
27f0d098bd app/victoria-metrics: add make victoria-metrics-arm64 rule for building GOARCH=arm64 binary 2019-05-29 23:07:14 +03:00
Aliaksandr Valialkin
a51ff2c6cb README.md: add LICENSE shield 2019-05-29 14:09:36 +03:00
Aliaksandr Valialkin
56b952c456 app/vminsert: add -maxConcurrentInserts command-line flag for limiting the number of concurrent inserts 2019-05-29 12:41:23 +03:00
Aliaksandr Valialkin
61bad1e07e Makefile: run go vet with -mod=vendor in order to disable downloading vendored deps 2019-05-29 01:38:13 +03:00
Artem Navoiev
be97f764f5 [ci-ci] enable CI (#39) 2019-05-29 01:32:49 +03:00
Artem Navoiev
a576d1f5d3 README.md: add links to slack and telegrams (#40) 2019-05-29 01:30:37 +03:00
Aliaksandr Valialkin
968d094524 app/vminsert: reduce memory usage for Influx, Graphite and OpenTSDB protocols
Do not buffer per-connection data and just store it as it arrives
2019-05-28 18:47:23 +03:00
Aliaksandr Valialkin
e307a4d92c lib/timerpool: use timer pool in concurrency limiters
This should reduce the number of memory allocations in highly loaded systems
2019-05-28 17:20:10 +03:00
Aliaksandr Valialkin
0eae39daa7 app/vminsert: properly reset InsertCtx.mrs - they must be empty after Reset call 2019-05-28 16:08:01 +03:00
Aliaksandr Valialkin
437e0b2300 README.md: typo fix 2019-05-27 21:37:48 +03:00
Aliaksandr Valialkin
4b3af728ea README.md: add steps for restoring from a snapshot 2019-05-27 20:36:51 +03:00
Aliaksandr Valialkin
4a12c4c982 README.md: add Third-party contributions section 2019-05-27 20:23:39 +03:00
Anton Patsev
2e75efb64e README.md: add unofficial yum repository (#37) 2019-05-27 20:19:54 +03:00
Aliaksandr Valialkin
25900162f6 Makefile: add -mod=vendor to go test, so tests use external deps from vendor folder 2019-05-27 00:35:46 +03:00
Aliaksandr Valialkin
16afcd6aff vendor: update dependencies with make vendor-update 2019-05-26 23:25:12 +03:00
Aliaksandr Valialkin
c2a5eef5e3 Makefile: pass GO111MODULE=on to all the go invocations 2019-05-26 23:23:43 +03:00
Aliaksandr Valialkin
4859ca0cda app/vmselect: update comment according to the updated code 2019-05-26 22:38:58 +03:00
Aliaksandr Valialkin
feb6b203a4 app/vminsert/influx: try converting string values to numeric values, since Influx agents may send numeric values as strings
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/34
2019-05-26 22:11:19 +03:00
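A sketch of the lenient value parsing; the real Influx field parsing has richer quoting and escaping rules than this simplification:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseInfluxValue accepts both numeric Influx field values and numbers
// sent as strings, e.g. value=42 and value="42".
func parseInfluxValue(s string) (float64, error) {
	if len(s) >= 2 && s[0] == '"' && s[len(s)-1] == '"' {
		s = s[1 : len(s)-1] // unquote a string field value
	}
	return strconv.ParseFloat(s, 64)
}

func main() {
	v, err := parseInfluxValue(`"42.5"`)
	fmt.Println(v, err) // 42.5 <nil>
}
```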
Aliaksandr Valialkin
51ee990902 README.md: typo fix 2019-05-26 17:59:04 +03:00
Aliaksandr Valialkin
5262aae5da app/vmselect/promql: misspelling fix 2019-05-25 21:53:11 +03:00
Aliaksandr Valialkin
54fb8b21f9 all: fix misspellings 2019-05-25 21:51:11 +03:00
Aliaksandr Valialkin
d6523ffe90 Makefile: add -s flag to go fmt in make fmt command 2019-05-25 21:43:35 +03:00
Aliaksandr Valialkin
024560b161 README.md: add goreportcard.com badge 2019-05-25 21:38:57 +03:00
Aliaksandr Valialkin
96ac664b27 Add make victoria-metrics Makefile rule for building dev binary 2019-05-25 18:24:51 +03:00
Aliaksandr Valialkin
2ffcf7a4a5 README.md: mention that VictoriaMetrics is scalable 2019-05-25 17:09:43 +03:00
Aliaksandr Valialkin
5cbd4cfca9 app/vmselect: log slow queries if their execution time exceeds -search.logSlowQueryDuration 2019-05-24 16:12:31 +03:00
Aliaksandr Valialkin
718ce33714 app/vmselect: consume resultsCh data in exportHandler if writeResponseFunc failed to consume it 2019-05-24 14:54:31 +03:00
Aliaksandr Valialkin
f332c0d54e README.md: add contacts chapter 2019-05-24 13:58:26 +03:00
0xflotus
eca566ed22 fixed small errors (#31) 2019-05-24 13:27:42 +03:00
Aliaksandr Valialkin
5bbfdff9fe Makefile: add make publish and make package shortcuts for building and publishing docker images 2019-05-24 13:19:24 +03:00
Aliaksandr Valialkin
6b0ae332f8 lib/encoding: add vm_zstd_block_{compress|decompress}_calls_total for determining the number of CompressZSTD / DecompressZSTD calls 2019-05-24 13:01:02 +03:00
Aliaksandr Valialkin
2eb3602d61 app/victoria-metrics: remove -p XXXX:XXXX from docker run options, since it is unnecessary if --net=host is set 2019-05-24 12:54:53 +03:00
Aliaksandr Valialkin
6fb9dd09f5 lib/encoding: add vm_zstd_block_{original|compressed}_bytes_total metrics for rough estimation of block compression ratio 2019-05-24 12:34:32 +03:00
Aliaksandr Valialkin
19b6643e5c lib/encoding: substitute CompressZSTD with CompressZSTDLevel 2019-05-24 12:32:55 +03:00
Aliaksandr Valialkin
08b889ef09 lib/httpserver: add -http.disableResponseCompression flag, which may help saving CPU resources at the cost of higher network bandwidth usage 2019-05-24 12:18:40 +03:00
Aliaksandr Valialkin
d15d0127fe app/vmselect/promql: add alias(q, name) function that sets the given name to all the time series in q 2019-05-24 02:41:45 +03:00
Aliaksandr Valialkin
674888fdc9 lib/decimal: add a comment explaining weird code in maxUpExponent. Fixes #29 2019-05-23 17:18:35 +03:00
Aliaksandr Valialkin
fb140eda33 app/vmselect/promql: add label_transform(q, label, regexp, replacement) function for replacing all the occurrences of regexp with replacement in the given label for q 2019-05-23 16:26:19 +03:00
Aliaksandr Valialkin
398ec4383e README.md: typo fix 2019-05-23 02:09:51 +03:00
Aliaksandr Valialkin
eff0debe14 README.md: mention that VictoriaMetrics is high-perf cost-effective TSDB 2019-05-23 00:36:45 +03:00
7826 changed files with 310823 additions and 1874629 deletions


@@ -4,4 +4,3 @@ gocache-for-docker
victoria-metrics-data
vmstorage-data
vmselect-cache
.vscode

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,41 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Version**
The line returned when passing the `--version` command-line flag to the binary. For example:
```
$ ./victoria-metrics-prod --version
victoria-metrics-20190730-121249-heads-single-node-0-g671d9e55
```

**Used command-line flags**
Command-line flags are listed as `flag{name="httpListenAddr", value=":443"} 1` lines at the `/metrics` page.
See the following docs for details:
* [monitoring for single-node VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#monitoring)
* [monitoring for VictoriaMetrics cluster](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/cluster/README.md#monitoring)

**Additional context**
Add any other context about the problem here such as error logs from VictoriaMetrics and Prometheus,
`/metrics` output, screenshots from the official Grafana dashboards for VictoriaMetrics:
* [Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
* [Grafana dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176)


@@ -1,86 +0,0 @@
name: Bug report
description: Create a report to help us improve
labels: [bug]
body:
  - type: markdown
    attributes:
      value: |
        Before filing a bug report it would be great to [upgrade](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-upgrade)
        to [the latest available release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
        and verify whether the bug is reproducible there.
        It's also recommended to read the [troubleshooting docs](https://docs.victoriametrics.com/victoriametrics/troubleshooting/) first.
  - type: textarea
    id: describe-the-bug
    attributes:
      label: Describe the bug
      description: |
        A clear and concise description of what the bug is.
      placeholder: |
        When I do `A` VictoriaMetrics does `B`. I expect it to do `C`.
    validations:
      required: true
  - type: textarea
    id: to-reproduce
    attributes:
      label: To Reproduce
      description: |
        Steps to reproduce the behavior.
        If reproducing an issue requires some specific configuration file, please paste it here.
      placeholder: |
        Steps to reproduce the behavior.
    validations:
      required: true
  - type: textarea
    id: version
    attributes:
      label: Version
      description: |
        The line returned when passing `--version` command line flag to the binary. For example:
        ```
        $ ./victoria-metrics-prod --version
        victoria-metrics-20190730-121249-heads-single-node-0-g671d9e55
        ```
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Logs
      description: |
        Check if any warnings or errors were logged by VictoriaMetrics components
        or components in communication with VictoriaMetrics (e.g. Prometheus, Grafana).
    validations:
      required: false
  - type: textarea
    id: screenshots
    attributes:
      label: Screenshots
      description: |
        If applicable, add screenshots to help explain your problem.
        For VictoriaMetrics health-state issues please provide full-length screenshots
        of Grafana dashboards if possible:
        * [Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229)
        * [Grafana dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176)
        See how to setup monitoring here:
        * [monitoring for single-node VictoriaMetrics](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#monitoring)
        * [monitoring for VictoriaMetrics cluster](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#monitoring)
    validations:
      required: false
  - type: textarea
    id: flags
    attributes:
      label: Used command-line flags
      description: |
        Please provide the command-line flags used for running VictoriaMetrics and its components.
    validations:
      required: false
  - type: textarea
    id: additional-info
    attributes:
      label: Additional information
      placeholder: |
        Additional information that doesn't fit elsewhere
    validations:
      required: false


@@ -1,5 +0,0 @@
blank_issues_enabled: true
contact_links:
  - name: Ask on Slack
    url: https://slack.victoriametrics.com/
    about: You can ask for help here!


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.


@@ -1,43 +0,0 @@
name: Feature request
description: Suggest an idea for this project
labels: [enhancement]
body:
- type: textarea
id: describe-the-problem
attributes:
label: Is your feature request related to a problem? Please describe
description: |
A clear and concise description of what the problem is.
placeholder: |
Ex. I'm always frustrated when [...]
validations:
required: false
- type: textarea
id: describe-the-solution
attributes:
label: Describe the solution you'd like
description: |
A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: alternative-solutions
attributes:
label: Describe alternatives you've considered
description: |
A clear and concise description of any alternative solutions or features you've considered.
placeholder: |
I have tried to do `A`, but that doesn't solve the problem completely.
I have tried `A` and `B`, but implementing this feature would be a better solution.
validations:
required: false
- type: textarea
id: feature-additional-info
attributes:
label: Additional information
description: |
Additional information that you consider helpful for implementing this feature.
placeholder: |
Add any other context or screenshots about the feature request here.
validations:
required: false

View File

@@ -1,32 +0,0 @@
name: Question
description: Ask a question regarding VictoriaMetrics or its components
labels: [question]
body:
- type: textarea
id: describe-the-component
attributes:
label: Is your question related to a specific component?
placeholder: |
VictoriaMetrics, vmagent, vmalert, vmui, etc...
validations:
required: false
- type: textarea
id: describe-the-question
attributes:
label: Describe the question in detail
description: |
A clear and concise description of the issue and the question.
validations:
required: true
- type: checkboxes
id: troubleshooting
attributes:
label: Troubleshooting docs
description: I am familiar with the following troubleshooting docs
options:
- label: General - https://docs.victoriametrics.com/victoriametrics/troubleshooting/
required: false
- label: vmagent - https://docs.victoriametrics.com/victoriametrics/vmagent/#troubleshooting
required: false
- label: vmalert - https://docs.victoriametrics.com/victoriametrics/vmalert/#troubleshooting
required: false

View File

@@ -1,23 +0,0 @@
# Project Overview
VictoriaMetrics is a fast, cost-effective, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
## Folder Structure
- `/app`: Contains the compilable binaries.
- `/lib`: Contains reusable Go libraries.
- `/docs/victoriametrics`: Contains documentation for the project.
- `/apptest/tests`: Contains integration tests.
## Libraries and Frameworks
- Backend: Golang, no framework. Use third-party libraries sparingly.
- Frontend: React.
## Code review guidelines
Ensure the feature or bugfix includes a changelog entry in /docs/victoriametrics/changelog/CHANGELOG.md.
Verify the entry is under the `## tip` section and matches the structure and style of existing entries (see the illustrative sketch below).
Chore-only changes may be omitted from the changelog.
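For illustration only, a hypothetical changelog entry under the `## tip` section could look like this (the component name, description, and issue number are placeholders, not a real entry):
```
## tip

* FEATURE: vmagent: describe the new capability in one sentence. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/XXXX).
```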

View File

@@ -1,30 +0,0 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 0
- package-ecosystem: "bundler"
directory: "/docs"
schedule:
interval: "weekly"
open-pull-requests-limit: 0
- package-ecosystem: "gomod"
directory: "/app/vmui/packages/vmui/web"
schedule:
interval: "weekly"
open-pull-requests-limit: 0
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "daily"
- package-ecosystem: "npm"
directory: "/app/vmui/packages/vmui"
schedule:
interval: "weekly"
open-pull-requests-limit: 0

View File

@@ -1,10 +0,0 @@
### Describe Your Changes
Please provide a brief description of the changes you made. Be as specific as possible to help others understand the purpose and impact of your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development goals](https://docs.victoriametrics.com/victoriametrics/goals/).

View File

@@ -1,69 +0,0 @@
name: build
on:
push:
branches:
- cluster
- master
paths:
- '**.go'
- '**/Dockerfile'
- '**/Makefile'
- '!app/vmui/**'
- '.github/workflows/build.yml'
pull_request:
branches:
- cluster
- master
paths:
- '**.go'
- '**/Dockerfile'
- '**/Makefile'
- '!app/vmui/**'
- '.github/workflows/build.yml'
permissions:
contents: read
concurrency:
cancel-in-progress: true
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
jobs:
build:
name: ${{ matrix.os }}-${{ matrix.arch }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
- os: linux
arch: amd64
- os: linux
arch: arm64
- os: linux
arch: arm
- os: linux
arch: ppc64le
- os: linux
arch: 386
- os: freebsd
arch: amd64
- os: openbsd
arch: amd64
steps:
- name: Code checkout
uses: actions/checkout@v5
- name: Setup Go
id: go
uses: actions/setup-go@v5
with:
cache-dependency-path: |
go.sum
Makefile
app/**/Makefile
go-version: stable
- name: Build vmcluster for ${{ matrix.os }}-${{ matrix.arch }}
run: make vmcluster-${{ matrix.os }}-${{ matrix.arch }}

View File

@@ -1,38 +0,0 @@
name: license-check
on:
push:
paths:
- 'vendor'
pull_request:
paths:
- 'vendor'
permissions:
contents: read
jobs:
build:
name: Build
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@master
- name: Setup Go
id: go
uses: actions/setup-go@v5
with:
go-version: stable
cache: false
- name: Cache Go artifacts
uses: actions/cache@v4
with:
path: |
~/.cache/go-build
~/go/pkg/mod
~/go/bin
key: go-artifacts-${{ runner.os }}-check-licenses-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
restore-keys: go-artifacts-${{ runner.os }}-check-licenses-
- name: Check License
run: make check-licenses

View File

@@ -1,62 +0,0 @@
name: 'CodeQL Go'
on:
push:
branches:
- cluster
- master
paths:
- '**.go'
pull_request:
branches:
- cluster
- master
paths:
- '**.go'
concurrency:
cancel-in-progress: true
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@v5
- name: Set up Go
id: go
uses: actions/setup-go@v5
with:
cache: false
go-version: stable
- name: Cache Go artifacts
uses: actions/cache@v4
with:
path: |
~/.cache/go-build
~/go/bin
~/go/pkg/mod
key: go-artifacts-${{ runner.os }}-codeql-analyze-${{ steps.go.outputs.go-version }}-${{ hashFiles('go.sum', 'Makefile', 'app/**/Makefile') }}
restore-keys: go-artifacts-${{ runner.os }}-codeql-analyze-
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v3
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: 'language:go'

View File

@@ -1,57 +0,0 @@
name: publish-docs
on:
push:
branches:
- 'master'
paths:
- 'docs/**'
- '.github/workflows/docs.yaml'
workflow_dispatch: {}
permissions:
contents: read # Required for actions/checkout and for committing image updates back
deployments: write
jobs:
build:
name: Build
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v5
with:
path: __vm
- name: Checkout private code
uses: actions/checkout@v5
with:
repository: VictoriaMetrics/vmdocs
token: ${{ secrets.VM_BOT_GH_TOKEN }}
path: __vm-docs
- name: Import GPG key
uses: crazy-max/ghaction-import-gpg@v6
id: import-gpg
with:
gpg_private_key: ${{ secrets.VM_BOT_GPG_PRIVATE_KEY }}
passphrase: ${{ secrets.VM_BOT_PASSPHRASE }}
git_user_signingkey: true
git_commit_gpgsign: true
git_config_global: true
- name: Copy docs
id: update
run: |
find docs -type d -maxdepth 1 -mindepth 1 -exec \
sh -c 'rsync -zarvh --delete {}/ ../__vm-docs/content/$(basename {})/' \;
echo "SHORT_SHA=$(git rev-parse --short $GITHUB_SHA)" >> $GITHUB_OUTPUT
working-directory: __vm
- name: Push to vmdocs
run: |
git config --global user.name "${{ steps.import-gpg.outputs.email }}"
git config --global user.email "${{ steps.import-gpg.outputs.email }}"
if [[ -n $(git status --porcelain) ]]; then
git add .
git commit -S -m "sync docs with VictoriaMetrics/VictoriaMetrics commit: ${{ steps.update.outputs.SHORT_SHA }}"
git push
fi
working-directory: __vm-docs

.github/workflows/github-pages.yml
View File

@@ -0,0 +1,30 @@
name: github-pages
on:
push:
paths:
- 'docs/*'
- 'README.md'
branches:
- master
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: publish
shell: bash
env:
TOKEN: ${{secrets.CI_TOKEN}}
run: |
git clone https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.github.io.git gpages
cp docs/* gpages
cp README.md gpages
cd gpages
git config --local user.email "info@victoriametrics.com"
git config --local user.name "Vika"
git add .
git commit -m "update github pages"
remote_repo="https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.github.io.git"
git push "${remote_repo}"
cd ..
rm -rf gpages

.github/workflows/main.yml
View File

@@ -0,0 +1,52 @@
name: main
on:
push:
paths-ignore:
- 'docs/**'
- '**.md'
pull_request:
paths-ignore:
- 'docs/**'
- '**.md'
jobs:
build:
name: Build
runs-on: ubuntu-latest
steps:
- name: Setup Go
uses: actions/setup-go@master
with:
go-version: 1.14
id: go
- name: Dependencies
env:
GO111MODULE: on
run: |
go get -u golang.org/x/lint/golint
go get -u github.com/kisielk/errcheck
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.27.0
- name: Code checkout
uses: actions/checkout@master
- name: Build
env:
GO111MODULE: on
run: |
export PATH=$PATH:$(go env GOPATH)/bin # temporary fix. See https://github.com/actions/setup-go/issues/14
make check-all
git diff --exit-code
make test-full
make test-pure
make test-full-386
make victoria-metrics
make victoria-metrics-pure
make victoria-metrics-arm
make victoria-metrics-arm64
make vmutils
GOOS=freebsd go build -mod=vendor ./app/victoria-metrics
GOOS=darwin go build -mod=vendor ./app/victoria-metrics
- name: Publish coverage
uses: codecov/codecov-action@v1.0.6
with:
token: ${{secrets.CODECOV_TOKEN}}
file: ./coverage.txt

View File

@@ -1,113 +0,0 @@
name: test
on:
push:
branches:
- cluster
- master
paths:
- '**.go'
- 'go.*'
- '.github/workflows/main.yml'
pull_request:
branches:
- cluster
- master
paths:
- '**.go'
- 'go.*'
- '.github/workflows/main.yml'
permissions:
contents: read
concurrency:
cancel-in-progress: true
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
jobs:
lint:
name: lint
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v5
- name: Setup Go
id: go
uses: actions/setup-go@v5
with:
cache-dependency-path: |
go.sum
Makefile
app/**/Makefile
go-version: stable
- name: Cache golangci-lint
uses: actions/cache@v4
with:
path: |
~/.cache/golangci-lint
~/go/bin
key: golangci-lint-${{ runner.os }}-${{ hashFiles('.golangci.yml') }}
- name: Run check-all
run: |
make check-all
git diff --exit-code
unit:
name: unit
runs-on: ubuntu-latest
strategy:
matrix:
scenario:
- 'test-full'
- 'test-full-386'
- 'test-pure'
steps:
- name: Code checkout
uses: actions/checkout@v5
- name: Setup Go
id: go
uses: actions/setup-go@v5
with:
cache-dependency-path: |
go.sum
Makefile
app/**/Makefile
go-version: stable
- name: Run tests
run: GOGC=10 make ${{ matrix.scenario }}
- name: Publish coverage
uses: codecov/codecov-action@v5
with:
files: ./coverage.txt
integration:
name: integration
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v5
- name: Setup Go
id: go
uses: actions/setup-go@v5
with:
cache-dependency-path: |
go.sum
Makefile
app/**/Makefile
go-version: stable
- name: Run integration tests
run: make integration-test

View File

@@ -1,82 +0,0 @@
name: vmui
on:
push:
branches:
- cluster
- master
paths:
- 'app/vmui/packages/vmui/**'
- '.github/workflows/vmui.yml'
pull_request:
branches:
- cluster
- master
paths:
- 'app/vmui/packages/vmui/**'
- '.github/workflows/vmui.yml'
permissions:
contents: read
packages: read
pull-requests: read
checks: write
concurrency:
cancel-in-progress: true
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
jobs:
vmui-checks:
name: VMUI Checks (lint, test, typecheck)
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v5
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: '24.x'
- name: Cache node-modules
uses: actions/cache@v4
with:
path: |
app/vmui/packages/vmui/node_modules
key: vmui-artifacts-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
restore-keys: vmui-artifacts-${{ runner.os }}-
- name: Run lint
id: lint
run: make vmui-lint
continue-on-error: true
- name: Run tests
id: test
run: make vmui-test
continue-on-error: true
- name: Run typecheck
id: typecheck
run: make vmui-typecheck
continue-on-error: true
- name: Annotate Code Linting Results
uses: ataylorme/eslint-annotate-action@v3
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
report-json: app/vmui/packages/vmui/vmui-lint-report.json
- name: Check overall status
run: |
echo "Lint status: ${{ steps.lint.outcome }}"
echo "Test status: ${{ steps.test.outcome }}"
echo "Typecheck status: ${{ steps.typecheck.outcome }}"
if [[ "${{ steps.lint.outcome }}" == "failure" || "${{ steps.test.outcome }}" == "failure" || "${{ steps.typecheck.outcome }}" == "failure" ]]; then
echo "One or more checks failed"
exit 1
else
echo "All checks passed"
fi

.github/workflows/wiki.yml
View File

@@ -0,0 +1,28 @@
name: wiki
on:
push:
paths:
- 'docs/*'
branches:
- master
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: publish
shell: bash
env:
TOKEN: ${{secrets.CI_TOKEN}}
run: |
git clone https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.wiki.git wiki
cp docs/* wiki
cd wiki
git config --local user.email "info@victoriametrics.com"
git config --local user.name "Vika"
git add .
git commit -m "update wiki pages"
remote_repo="https://vika:${TOKEN}@github.com/VictoriaMetrics/VictoriaMetrics.wiki.git"
git push "${remote_repo}"
cd ..
rm -rf wiki

.gitignore
View File

@@ -4,28 +4,14 @@
*.pprof
/bin
.idea
.vscode
*.test
*.swp
/vmdocs
/gocache-for-docker
/victoria-logs-data
/victoria-metrics-data
/vmagent-remotewrite-data
/vlagent-remotewrite-data
/vmstorage-data
/vmselect-cache
/package/temp-deb-*
/package/temp-rpm-*
/package/*.deb
/package/*.rpm
.DS_store
Gemfile.lock
/_site
_site
*.tmp
/docs/.jekyll-metadata
coverage.txt
cspell.json
*~
deployment/docker/provisioning/plugins/

View File

@@ -1,29 +0,0 @@
version: "2"
linters:
settings:
errcheck:
exclude-functions:
- fmt.Fprintf
- fmt.Fprint
- (net/http.ResponseWriter).Write
exclusions:
generated: lax
presets:
- common-false-positives
- legacy
- std-error-handling
rules:
- linters:
- staticcheck
text: 'SA(4003|1019|5011):'
paths:
- third_party$
- builtin$
- examples$
formatters:
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

View File

@@ -1,7 +0,0 @@
allowlist:
- Apache-2.0
- MIT
- BSD-3-Clause
- BSD-2-Clause
- ISC
- MPL-2.0

View File

@@ -7,7 +7,7 @@ contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion or sexual identity and orientation.
appearance, race, religion, or sexual identity and orientation.
## Our Standards
@@ -24,9 +24,9 @@ Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments and personal or political attacks
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as physical or electronic
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
@@ -38,26 +38,26 @@ behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues and other contributions
that are not aligned to this Code of Conduct or to ban temporarily or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive or harmful.
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account or acting as an appointed
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing or otherwise unacceptable behavior may be
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at info@victoriametrics.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate for the circumstances. The project team is
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
@@ -68,9 +68,9 @@ members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at <https://www.contributor-covenant.org/version/1/4/code-of-conduct.html>
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
<https://www.contributor-covenant.org/faq>
https://www.contributor-covenant.org/faq

View File

@@ -1 +1,16 @@
The document has been moved [here](https://docs.victoriametrics.com/victoriametrics/contributing/).
If you like VictoriaMetrics and want to contribute, then we need the following:
- Filing issues and feature requests [here](https://github.com/VictoriaMetrics/VictoriaMetrics/issues).
- Spreading the word about VictoriaMetrics: conference talks, articles, comments, and sharing experience with colleagues.
- Updating documentation.
We are open to third-party pull requests provided they follow the [KISS design principle](https://en.wikipedia.org/wiki/KISS_principle):
- Prefer simple code and architecture.
- Avoid complex abstractions.
- Avoid magic code and fancy algorithms.
- Avoid [big external dependencies](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d).
- Minimize the number of moving parts in the distributed system.
- Avoid automated decisions, which may hurt cluster availability, consistency or performance.
Adhering to the `KISS` principle simplifies the resulting code and architecture, so it can be reviewed, understood, and verified by many people.

View File

@@ -175,7 +175,7 @@
END OF TERMS AND CONDITIONS
Copyright 2019-2025 VictoriaMetrics, Inc.
Copyright 2019-2020 VictoriaMetrics, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

Makefile
View File

@@ -1,330 +1,147 @@
PKG_PREFIX := github.com/VictoriaMetrics/VictoriaMetrics
MAKE_CONCURRENCY ?= $(shell getconf _NPROCESSORS_ONLN)
MAKE_PARALLEL := $(MAKE) -j $(MAKE_CONCURRENCY)
DATEINFO_TAG ?= $(shell date -u +'%Y%m%d-%H%M%S')
BUILDINFO_TAG ?= $(shell echo $$(git describe --long --all | tr '/' '-')$$( \
git diff-index --quiet HEAD -- || echo '-dirty-'$$(git diff-index -u HEAD | openssl sha1 | cut -d' ' -f2 | cut -c 1-8)))
git diff-index --quiet HEAD -- || echo '-dirty-'$$(git diff-index -u HEAD | openssl sha1 | cut -c 10-17)))
PKG_TAG ?= $(shell git tag -l --points-at HEAD)
ifeq ($(PKG_TAG),)
PKG_TAG := $(BUILDINFO_TAG)
endif
EXTRA_DOCKER_TAG_SUFFIX ?=
EXTRA_GO_BUILD_TAGS ?=
GO_BUILDINFO = -X '$(PKG_PREFIX)/lib/buildinfo.Version=$(APP_NAME)-$(DATEINFO_TAG)-$(BUILDINFO_TAG)'
TAR_OWNERSHIP ?= --owner=1000 --group=1000
GOLANGCI_LINT_VERSION := 2.4.0
.PHONY: $(MAKECMDGOALS)
include app/*/Makefile
include codespell/Makefile
include docs/Makefile
include deployment/*/Makefile
include dashboards/Makefile
include package/release/Makefile
include benchmarks/Makefile
GO_BUILDINFO = -X '$(PKG_PREFIX)/lib/buildinfo.Version=$(APP_NAME)-$(shell date -u +'%Y%m%d-%H%M%S')-$(BUILDINFO_TAG)'
all: \
vminsert \
vmselect \
vmstorage
victoria-metrics-prod \
vmagent-prod \
vmalert-prod \
vmauth-prod \
vmbackup-prod \
vmrestore-prod
all-pure: \
vminsert-pure \
vmselect-pure \
vmstorage-pure
include app/*/Makefile
include deployment/*/Makefile
clean:
rm -rf bin/*
vmcluster-linux-amd64: \
vminsert-linux-amd64 \
vmselect-linux-amd64 \
vmstorage-linux-amd64
vmcluster-linux-arm64: \
vminsert-linux-arm64 \
vmselect-linux-arm64 \
vmstorage-linux-arm64
vmcluster-linux-arm: \
vminsert-linux-arm \
vmselect-linux-arm \
vmstorage-linux-arm
vmcluster-linux-ppc64le: \
vminsert-linux-ppc64le \
vmselect-linux-ppc64le \
vmstorage-linux-ppc64le
vmcluster-linux-386: \
vminsert-linux-386 \
vmselect-linux-386 \
vmstorage-linux-386
vmcluster-freebsd-amd64: \
vminsert-freebsd-amd64 \
vmselect-freebsd-amd64 \
vmstorage-freebsd-amd64
vmcluster-openbsd-amd64: \
vminsert-openbsd-amd64 \
vmselect-openbsd-amd64 \
vmstorage-openbsd-amd64
vmcluster-windows-amd64: \
vminsert-windows-amd64 \
vmselect-windows-amd64 \
vmstorage-windows-amd64
vmcluster-darwin-amd64: \
vminsert-darwin-amd64 \
vmselect-darwin-amd64 \
vmstorage-darwin-amd64
vmcluster-darwin-arm64: \
vminsert-darwin-arm64 \
vmselect-darwin-arm64 \
vmstorage-darwin-arm64
# When adding a new crossbuild target, please also add it to the .github/workflows/build.yml
crossbuild: vmcluster-crossbuild
# When adding a new crossbuild target, please also add it to the .github/workflows/build.yml
vmcluster-crossbuild:
$(MAKE_PARALLEL) vmcluster-linux-amd64 \
vmcluster-linux-arm64 \
vmcluster-linux-arm \
vmcluster-linux-ppc64le \
vmcluster-linux-386 \
vmcluster-freebsd-amd64 \
vmcluster-openbsd-amd64
publish: \
publish-vminsert \
publish-vmselect \
publish-vmstorage
publish-victoria-metrics \
publish-vmagent \
publish-vmalert \
publish-vmauth \
publish-vmbackup \
publish-vmrestore
package: \
package-vminsert \
package-vmselect \
package-vmstorage
package-victoria-metrics \
package-vmagent \
package-vmalert \
package-vmauth \
package-vmbackup \
package-vmrestore
publish-final-images:
PKG_TAG=$(TAG) APP_NAME=victoria-metrics $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmagent $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmalert $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmalert-tool $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmauth $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmbackup $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmrestore $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) APP_NAME=vmctl $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-cluster APP_NAME=vminsert $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-cluster APP_NAME=vmselect $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-cluster APP_NAME=vmstorage $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=victoria-metrics $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmagent $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmalert $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmauth $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmbackup $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmrestore $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise-cluster APP_NAME=vminsert $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise-cluster APP_NAME=vmselect $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise-cluster APP_NAME=vmstorage $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmgateway $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmbackupmanager $(MAKE) publish-via-docker-from-rc && \
PKG_TAG=$(TAG) $(MAKE) publish-latest
vmutils: \
vmagent \
vmalert \
vmauth \
vmbackup \
vmrestore
publish-latest:
PKG_TAG=$(TAG) APP_NAME=victoria-metrics $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmagent $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmalert $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmalert-tool $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmauth $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmbackup $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmrestore $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG) APP_NAME=vmctl $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG)-cluster APP_NAME=vminsert $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG)-cluster APP_NAME=vmselect $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG)-cluster APP_NAME=vmstorage $(MAKE) publish-via-docker-latest && \
PKG_TAG=$(TAG)-enterprise APP_NAME=vmgateway $(MAKE) publish-via-docker-latest
PKG_TAG=$(TAG)-enterprise APP_NAME=vmbackupmanager $(MAKE) publish-via-docker-latest
release: \
release-victoria-metrics \
release-vmutils
publish-release:
rm -rf bin/*
git checkout $(TAG) && $(MAKE) release && $(MAKE) publish && \
git checkout $(TAG)-cluster && $(MAKE) release && $(MAKE) publish && \
git checkout $(TAG)-enterprise && $(MAKE) release && $(MAKE) publish && \
git checkout $(TAG)-enterprise-cluster && $(MAKE) release && $(MAKE) publish
release-victoria-metrics: victoria-metrics-prod
cd bin && tar czf victoria-metrics-$(PKG_TAG).tar.gz victoria-metrics-prod && \
sha256sum victoria-metrics-$(PKG_TAG).tar.gz > victoria-metrics-$(PKG_TAG)_checksums.txt
release:
$(MAKE_PARALLEL) release-vmcluster
release-vmcluster: \
release-vmcluster-linux-amd64 \
release-vmcluster-linux-arm64 \
release-vmcluster-freebsd-amd64 \
release-vmcluster-openbsd-amd64 \
release-vmcluster-windows-amd64 \
release-vmcluster-darwin-amd64 \
release-vmcluster-darwin-arm64
release-vmcluster-linux-amd64:
GOOS=linux GOARCH=amd64 $(MAKE) release-vmcluster-goos-goarch
release-vmcluster-linux-arm64:
GOOS=linux GOARCH=arm64 $(MAKE) release-vmcluster-goos-goarch
release-vmcluster-freebsd-amd64:
GOOS=freebsd GOARCH=amd64 $(MAKE) release-vmcluster-goos-goarch
release-vmcluster-openbsd-amd64:
GOOS=openbsd GOARCH=amd64 $(MAKE) release-vmcluster-goos-goarch
release-vmcluster-windows-amd64:
GOARCH=amd64 $(MAKE) release-vmcluster-windows-goarch
release-vmcluster-darwin-amd64:
GOOS=darwin GOARCH=amd64 $(MAKE) release-vmcluster-goos-goarch
release-vmcluster-darwin-arm64:
GOOS=darwin GOARCH=arm64 $(MAKE) release-vmcluster-goos-goarch
release-vmcluster-goos-goarch: \
vminsert-$(GOOS)-$(GOARCH)-prod \
vmselect-$(GOOS)-$(GOARCH)-prod \
vmstorage-$(GOOS)-$(GOARCH)-prod
cd bin && \
tar $(TAR_OWNERSHIP) --transform="flags=r;s|-$(GOOS)-$(GOARCH)||" -czf victoria-metrics-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
vminsert-$(GOOS)-$(GOARCH)-prod \
vmselect-$(GOOS)-$(GOARCH)-prod \
vmstorage-$(GOOS)-$(GOARCH)-prod \
&& sha256sum victoria-metrics-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
vminsert-$(GOOS)-$(GOARCH)-prod \
vmselect-$(GOOS)-$(GOARCH)-prod \
vmstorage-$(GOOS)-$(GOARCH)-prod \
| sed s/-$(GOOS)-$(GOARCH)-prod/-prod/ > victoria-metrics-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt
cd bin && rm -rf \
vminsert-$(GOOS)-$(GOARCH)-prod \
vmselect-$(GOOS)-$(GOARCH)-prod \
vmstorage-$(GOOS)-$(GOARCH)-prod
release-vmcluster-windows-goarch: \
vminsert-windows-$(GOARCH)-prod \
vmselect-windows-$(GOARCH)-prod \
vmstorage-windows-$(GOARCH)-prod
cd bin && \
zip victoria-metrics-windows-$(GOARCH)-$(PKG_TAG).zip \
vminsert-windows-$(GOARCH)-prod.exe \
vmselect-windows-$(GOARCH)-prod.exe \
vmstorage-windows-$(GOARCH)-prod.exe \
&& sha256sum victoria-metrics-windows-$(GOARCH)-$(PKG_TAG).zip \
vminsert-windows-$(GOARCH)-prod.exe \
vmselect-windows-$(GOARCH)-prod.exe \
vmstorage-windows-$(GOARCH)-prod.exe \
> victoria-metrics-windows-$(GOARCH)-$(PKG_TAG)_checksums.txt
cd bin && rm -rf \
vminsert-windows-$(GOARCH)-prod.exe \
vmselect-windows-$(GOARCH)-prod.exe \
vmstorage-windows-$(GOARCH)-prod.exe
release-vmutils: \
vmagent-prod \
vmalert-prod \
vmauth-prod \
vmbackup-prod \
vmrestore-prod
cd bin && tar czf vmutils-$(PKG_TAG).tar.gz vmagent-prod vmalert-prod vmauth-prod vmbackup-prod vmrestore-prod && \
sha256sum vmutils-$(PKG_TAG).tar.gz > vmutils-$(PKG_TAG)_checksums.txt
pprof-cpu:
go tool pprof -trim_path=github.com/VictoriaMetrics/VictoriaMetrics@ $(PPROF_FILE)
fmt:
gofmt -l -w -s ./lib
gofmt -l -w -s ./app
gofmt -l -w -s ./apptest
GO111MODULE=on gofmt -l -w -s ./lib
GO111MODULE=on gofmt -l -w -s ./app
vet:
GOEXPERIMENT=synctest go vet ./lib/...
go vet ./app/...
go vet ./apptest/...
GO111MODULE=on go vet -mod=vendor ./lib/...
GO111MODULE=on go vet -mod=vendor ./app/...
check-all: fmt vet golangci-lint govulncheck
lint: install-golint
golint lib/...
golint app/...
clean-checkers: remove-golangci-lint remove-govulncheck
install-golint:
which golint || GO111MODULE=off go get -u golang.org/x/lint/golint
errcheck: install-errcheck
errcheck -exclude=errcheck_excludes.txt ./lib/...
errcheck -exclude=errcheck_excludes.txt ./app/vminsert/...
errcheck -exclude=errcheck_excludes.txt ./app/vmselect/...
errcheck -exclude=errcheck_excludes.txt ./app/vmstorage/...
errcheck -exclude=errcheck_excludes.txt ./app/vmagent/...
errcheck -exclude=errcheck_excludes.txt ./app/vmalert/...
errcheck -exclude=errcheck_excludes.txt ./app/vmauth/...
errcheck -exclude=errcheck_excludes.txt ./app/vmbackup/...
errcheck -exclude=errcheck_excludes.txt ./app/vmrestore/...
install-errcheck:
which errcheck || GO111MODULE=off go get -u github.com/kisielk/errcheck
check-all: fmt vet lint errcheck golangci-lint
test:
GOEXPERIMENT=synctest go test ./lib/... ./app/...
GO111MODULE=on go test -mod=vendor ./lib/... ./app/...
test-race:
GOEXPERIMENT=synctest go test -race ./lib/... ./app/...
GO111MODULE=on go test -mod=vendor -race ./lib/... ./app/...
test-pure:
GOEXPERIMENT=synctest CGO_ENABLED=0 go test ./lib/... ./app/...
GO111MODULE=on CGO_ENABLED=0 go test -mod=vendor ./lib/... ./app/...
test-full:
GOEXPERIMENT=synctest go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
GO111MODULE=on go test -mod=vendor -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
test-full-386:
GOEXPERIMENT=synctest GOARCH=386 go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
integration-test:
$(MAKE) apptest
apptest:
$(MAKE) all vmctl vmbackup vmrestore
go test ./apptest/... -skip="^TestSingle.*"
GO111MODULE=on GOARCH=386 go test -mod=vendor -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
benchmark:
GOEXPERIMENT=synctest go test -bench=. ./lib/...
go test -bench=. ./app/...
GO111MODULE=on go test -mod=vendor -bench=. ./lib/...
GO111MODULE=on go test -mod=vendor -bench=. ./app/...
benchmark-pure:
GOEXPERIMENT=synctest CGO_ENABLED=0 go test -bench=. ./lib/...
CGO_ENABLED=0 go test -bench=. ./app/...
GO111MODULE=on CGO_ENABLED=0 go test -mod=vendor -bench=. ./lib/...
GO111MODULE=on CGO_ENABLED=0 go test -mod=vendor -bench=. ./app/...
vendor-update:
go get -u ./lib/...
go get -u ./app/...
go mod tidy -compat=1.24
go mod vendor
GO111MODULE=on go get -u ./lib/...
GO111MODULE=on go get -u ./app/...
GO111MODULE=on go mod tidy
GO111MODULE=on go mod vendor
app-local:
CGO_ENABLED=1 go build $(RACE) -ldflags "$(GO_BUILDINFO)" -tags "$(EXTRA_GO_BUILD_TAGS)" -o bin/$(APP_NAME)$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
CGO_ENABLED=1 GO111MODULE=on go build $(RACE) -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/$(APP_NAME)$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
app-local-pure:
CGO_ENABLED=0 go build $(RACE) -ldflags "$(GO_BUILDINFO)" -tags "$(EXTRA_GO_BUILD_TAGS)" -o bin/$(APP_NAME)-pure$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
app-local-goos-goarch:
CGO_ENABLED=$(CGO_ENABLED) GOOS=$(GOOS) GOARCH=$(GOARCH) go build $(RACE) -ldflags "$(GO_BUILDINFO)" -tags "$(EXTRA_GO_BUILD_TAGS)" -o bin/$(APP_NAME)-$(GOOS)-$(GOARCH)$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
app-local-windows-goarch:
CGO_ENABLED=0 GOOS=windows GOARCH=$(GOARCH) go build $(RACE) -ldflags "$(GO_BUILDINFO)" -tags "$(EXTRA_GO_BUILD_TAGS)" -o bin/$(APP_NAME)-windows-$(GOARCH)$(RACE).exe $(PKG_PREFIX)/app/$(APP_NAME)
CGO_ENABLED=0 GO111MODULE=on go build $(RACE) -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/$(APP_NAME)-pure$(RACE) $(PKG_PREFIX)/app/$(APP_NAME)
quicktemplate-gen: install-qtc
qtc
install-qtc:
which qtc || go install github.com/valyala/quicktemplate/qtc@latest
which qtc || GO111MODULE=off go get -u github.com/valyala/quicktemplate/qtc
golangci-lint: install-golangci-lint
GOEXPERIMENT=synctest golangci-lint run
golangci-lint run --exclude '(SA4003|SA1019|SA5011):' -D errcheck -D structcheck --timeout 2m
install-golangci-lint:
which golangci-lint && (golangci-lint --version | grep -q $(GOLANGCI_LINT_VERSION)) || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v$(GOLANGCI_LINT_VERSION)
remove-golangci-lint:
rm -rf `which golangci-lint`
govulncheck: install-govulncheck
govulncheck ./...
install-govulncheck:
which govulncheck || go install golang.org/x/vuln/cmd/govulncheck@latest
remove-govulncheck:
rm -rf `which govulncheck`
install-wwhrd:
which wwhrd || go install github.com/frapposelli/wwhrd@latest
check-licenses: install-wwhrd
wwhrd check -f .wwhrd.yml
which golangci-lint || GO111MODULE=off go get -u github.com/golangci/golangci-lint/cmd/golangci-lint

README.md

File diff suppressed because it is too large.

View File

@@ -1,18 +0,0 @@
# Security Policy
## Supported Versions
The following versions of VictoriaMetrics receive regular security fixes:
| Version | Supported |
|---------|--------------------|
| [latest release](https://docs.victoriametrics.com/victoriametrics/changelog/) | :white_check_mark: |
| v1.102.x [LTS line](https://docs.victoriametrics.com/victoriametrics/lts-releases/) | :white_check_mark: |
| v1.110.x [LTS line](https://docs.victoriametrics.com/victoriametrics/lts-releases/) | :white_check_mark: |
| other releases | :x: |
See [this page](https://victoriametrics.com/security/) for more details.
## Reporting a Vulnerability
Please report any security issues to <security@victoriametrics.com>


View File

@@ -1 +0,0 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).

View File

@@ -0,0 +1,110 @@
# All these commands must be run from the repository root.
victoria-metrics:
APP_NAME=victoria-metrics $(MAKE) app-local
victoria-metrics-race:
APP_NAME=victoria-metrics RACE=-race $(MAKE) app-local
victoria-metrics-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker
victoria-metrics-pure-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker-pure
victoria-metrics-amd64-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker-amd64
victoria-metrics-arm-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker-arm
victoria-metrics-arm64-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker-arm64
victoria-metrics-ppc64le-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker-ppc64le
victoria-metrics-386-prod:
APP_NAME=victoria-metrics $(MAKE) app-via-docker-386
package-victoria-metrics:
APP_NAME=victoria-metrics $(MAKE) package-via-docker
package-victoria-metrics-pure:
APP_NAME=victoria-metrics $(MAKE) package-via-docker-pure
package-victoria-metrics-amd64:
APP_NAME=victoria-metrics $(MAKE) package-via-docker-amd64
package-victoria-metrics-arm:
APP_NAME=victoria-metrics $(MAKE) package-via-docker-arm
package-victoria-metrics-arm64:
APP_NAME=victoria-metrics $(MAKE) package-via-docker-arm64
package-victoria-metrics-ppc64le:
APP_NAME=victoria-metrics $(MAKE) package-via-docker-ppc64le
package-victoria-metrics-386:
APP_NAME=victoria-metrics $(MAKE) package-via-docker-386
publish-victoria-metrics:
APP_NAME=victoria-metrics $(MAKE) publish-via-docker
run-victoria-metrics:
mkdir -p victoria-metrics-data
DOCKER_OPTS='-v $(shell pwd)/victoria-metrics-data:/victoria-metrics-data' \
APP_NAME=victoria-metrics \
ARGS='-graphiteListenAddr=:2003 -opentsdbListenAddr=:4242 -retentionPeriod=12 -search.maxUniqueTimeseries=1000000 -search.maxQueryDuration=10m' \
$(MAKE) run-via-docker
victoria-metrics-amd64:
CGO_ENABLED=1 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/victoria-metrics-amd64 ./app/victoria-metrics
victoria-metrics-arm:
CGO_ENABLED=0 GOOS=linux GOARCH=arm GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/victoria-metrics-arm ./app/victoria-metrics
victoria-metrics-arm64:
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/victoria-metrics-arm64 ./app/victoria-metrics
victoria-metrics-ppc64le:
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/victoria-metrics-ppc64le ./app/victoria-metrics
victoria-metrics-386:
CGO_ENABLED=0 GOOS=linux GOARCH=386 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/victoria-metrics-386 ./app/victoria-metrics
victoria-metrics-pure:
APP_NAME=victoria-metrics $(MAKE) app-local-pure
### Packaging as DEB - amd64
victoria-metrics-package-deb: victoria-metrics-prod
./package/package_deb.sh amd64
### Packaging as DEB - arm64
victoria-metrics-package-deb-arm64: victoria-metrics-arm64-prod
./package/package_deb.sh arm64
### Packaging as DEB - all
victoria-metrics-package-deb-all: \
victoria-metrics-package-deb \
victoria-metrics-package-deb-arm64
### Packaging as RPM - amd64
victoria-metrics-package-rpm: victoria-metrics-prod
./package/package_rpm.sh amd64
### Packaging as RPM - arm64
victoria-metrics-package-rpm-arm64: victoria-metrics-arm64-prod
./package/package_rpm.sh arm64
### Packaging as RPM - all
victoria-metrics-package-rpm-all: \
victoria-metrics-package-rpm \
victoria-metrics-package-rpm-arm64
### Packaging as both DEB and RPM - all
victoria-metrics-package-deb-rpm-all: \
victoria-metrics-package-deb \
victoria-metrics-package-deb-arm64 \
victoria-metrics-package-rpm \
victoria-metrics-package-rpm-arm64

View File

@@ -0,0 +1,8 @@
ARG base_image
FROM $base_image
EXPOSE 8428
ENTRYPOINT ["/victoria-metrics-prod"]
ARG src_binary
COPY $src_binary ./victoria-metrics-prod

View File

@@ -0,0 +1,77 @@
package main
import (
"flag"
"net/http"
"os"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
var (
httpListenAddr = flag.String("httpListenAddr", ":8428", "TCP address to listen for http connections")
minScrapeInterval = flag.Duration("dedup.minScrapeInterval", 0, "Remove superfluous samples from time series if they are located closer to each other than this duration. "+
"This may be useful for reducing overhead when multiple identically configured Prometheus instances write data to the same VictoriaMetrics. "+
"Deduplication is disabled if the -dedup.minScrapeInterval is 0")
)
func main() {
// Write flags and help message to stdout, since it is easier to grep or pipe.
flag.CommandLine.SetOutput(os.Stdout)
envflag.Parse()
buildinfo.Init()
logger.Init()
logger.Infof("starting VictoriaMetrics at %q...", *httpListenAddr)
startTime := time.Now()
storage.SetMinScrapeIntervalForDeduplication(*minScrapeInterval)
vmstorage.Init()
vmselect.Init()
vminsert.Init()
startSelfScraper()
go httpserver.Serve(*httpListenAddr, requestHandler)
logger.Infof("started VictoriaMetrics in %.3f seconds", time.Since(startTime).Seconds())
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
stopSelfScraper()
logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)
startTime = time.Now()
if err := httpserver.Stop(*httpListenAddr); err != nil {
logger.Fatalf("cannot stop the webservice: %s", err)
}
vminsert.Stop()
logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
vmstorage.Stop()
vmselect.Stop()
fs.MustStopDirRemover()
logger.Infof("the VictoriaMetrics has been stopped in %.3f seconds", time.Since(startTime).Seconds())
}
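// requestHandler routes the incoming request to vminsert, vmselect or vmstorage
// and reports whether the request has been handled.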
func requestHandler(w http.ResponseWriter, r *http.Request) bool {
if vminsert.RequestHandler(w, r) {
return true
}
if vmselect.RequestHandler(w, r) {
return true
}
if vmstorage.RequestHandler(w, r) {
return true
}
return false
}

View File

@@ -0,0 +1,496 @@
package main
import (
"bytes"
"encoding/json"
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"os"
"path/filepath"
"reflect"
"strings"
"testing"
"time"
testutil "github.com/VictoriaMetrics/VictoriaMetrics/app/victoria-metrics/test"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)
const (
testFixturesDir = "testdata"
testStorageSuffix = "vm-test-storage"
testHTTPListenAddr = ":7654"
testStatsDListenAddr = ":2003"
testOpenTSDBListenAddr = ":4242"
testOpenTSDBHTTPListenAddr = ":4243"
testLogLevel = "INFO"
)
const (
testReadHTTPPath = "http://127.0.0.1" + testHTTPListenAddr
testWriteHTTPPath = "http://127.0.0.1" + testHTTPListenAddr + "/write"
testOpenTSDBWriteHTTPPath = "http://127.0.0.1" + testOpenTSDBHTTPListenAddr + "/api/put"
testPromWriteHTTPPath = "http://127.0.0.1" + testHTTPListenAddr + "/api/v1/write"
testHealthHTTPPath = "http://127.0.0.1" + testHTTPListenAddr + "/health"
)
const (
testStorageInitTimeout = 10 * time.Second
)
var (
storagePath string
insertionTime = time.Now().UTC()
)
type test struct {
Name string `json:"name"`
Data []string `json:"data"`
Query []string `json:"query"`
ResultMetrics []Metric `json:"result_metrics"`
ResultSeries Series `json:"result_series"`
ResultQuery Query `json:"result_query"`
ResultQueryRange QueryRange `json:"result_query_range"`
Issue string `json:"issue"`
}
type Metric struct {
Metric map[string]string `json:"metric"`
Values []float64 `json:"values"`
Timestamps []int64 `json:"timestamps"`
}
func (r *Metric) UnmarshalJSON(b []byte) error {
type plain Metric
return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(r))
}
type Series struct {
Status string `json:"status"`
Data []map[string]string `json:"data"`
}
type Query struct {
Status string `json:"status"`
Data QueryData `json:"data"`
}
type QueryData struct {
ResultType string `json:"resultType"`
Result []QueryDataResult `json:"result"`
}
type QueryDataResult struct {
Metric map[string]string `json:"metric"`
Value []interface{} `json:"value"`
}
func (r *QueryDataResult) UnmarshalJSON(b []byte) error {
type plain QueryDataResult
return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(r))
}
type QueryRange struct {
Status string `json:"status"`
Data QueryRangeData `json:"data"`
}
type QueryRangeData struct {
ResultType string `json:"resultType"`
Result []QueryRangeDataResult `json:"result"`
}
type QueryRangeDataResult struct {
Metric map[string]string `json:"metric"`
Values [][]interface{} `json:"values"`
}
func (r *QueryRangeDataResult) UnmarshalJSON(b []byte) error {
type plain QueryRangeDataResult
return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(r))
}
func TestMain(m *testing.M) {
setUp()
code := m.Run()
tearDown()
os.Exit(code)
}
func setUp() {
storagePath = filepath.Join(os.TempDir(), testStorageSuffix)
processFlags()
logger.Init()
vmstorage.InitWithoutMetrics()
vmselect.Init()
vminsert.Init()
go httpserver.Serve(*httpListenAddr, requestHandler)
readyStorageCheckFunc := func() bool {
resp, err := http.Get(testHealthHTTPPath)
if err != nil {
return false
}
resp.Body.Close()
return resp.StatusCode == 200
}
if err := waitFor(testStorageInitTimeout, readyStorageCheckFunc); err != nil {
log.Fatalf("http server can't start for %s seconds, err %s", testStorageInitTimeout, err)
}
}
func processFlags() {
envflag.Parse()
for _, fv := range []struct {
flag string
value string
}{
{flag: "storageDataPath", value: storagePath},
{flag: "httpListenAddr", value: testHTTPListenAddr},
{flag: "graphiteListenAddr", value: testStatsDListenAddr},
{flag: "opentsdbListenAddr", value: testOpenTSDBListenAddr},
{flag: "loggerLevel", value: testLogLevel},
{flag: "opentsdbHTTPListenAddr", value: testOpenTSDBHTTPListenAddr},
} {
// panics if flag doesn't exist
if err := flag.Lookup(fv.flag).Value.Set(fv.value); err != nil {
log.Fatalf("unable to set %q with value %q, err: %v", fv.flag, fv.value, err)
}
}
}
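// waitFor calls f every tenth of the timeout until f returns true;
// it returns an error if f doesn't succeed within the timeout.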
func waitFor(timeout time.Duration, f func() bool) error {
fraction := timeout / 10
for i := fraction; i < timeout; i += fraction {
if f() {
return nil
}
time.Sleep(fraction)
}
return fmt.Errorf("timeout")
}
func tearDown() {
if err := httpserver.Stop(*httpListenAddr); err != nil {
log.Printf("cannot stop the webservice: %s", err)
}
vminsert.Stop()
vmstorage.Stop()
vmselect.Stop()
fs.MustRemoveAll(storagePath)
}
func TestWriteRead(t *testing.T) {
t.Run("write", testWrite)
time.Sleep(1 * time.Second)
vmstorage.Stop()
// re-open the storage after it has been stopped by the write step
vmstorage.InitWithoutMetrics()
t.Run("read", testRead)
}
func testWrite(t *testing.T) {
t.Run("prometheus", func(t *testing.T) {
for _, test := range readIn("prometheus", t, insertionTime) {
s := newSuite(t)
r := testutil.WriteRequest{}
s.noError(json.Unmarshal([]byte(strings.Join(test.Data, "\n")), &r.Timeseries))
data, err := testutil.Compress(r)
s.greaterThan(len(r.Timeseries), 0)
if err != nil {
t.Errorf("error compressing %v %s", r, err)
t.Fail()
}
httpWrite(t, testPromWriteHTTPPath, bytes.NewBuffer(data))
}
})
t.Run("influxdb", func(t *testing.T) {
for _, x := range readIn("influxdb", t, insertionTime) {
test := x
t.Run(test.Name, func(t *testing.T) {
t.Parallel()
httpWrite(t, testWriteHTTPPath, bytes.NewBufferString(strings.Join(test.Data, "\n")))
})
}
})
t.Run("graphite", func(t *testing.T) {
for _, x := range readIn("graphite", t, insertionTime) {
test := x
t.Run(test.Name, func(t *testing.T) {
t.Parallel()
tcpWrite(t, "127.0.0.1"+testStatsDListenAddr, strings.Join(test.Data, "\n"))
})
}
})
t.Run("opentsdb", func(t *testing.T) {
for _, x := range readIn("opentsdb", t, insertionTime) {
test := x
t.Run(test.Name, func(t *testing.T) {
t.Parallel()
tcpWrite(t, "127.0.0.1"+testOpenTSDBListenAddr, strings.Join(test.Data, "\n"))
})
}
})
t.Run("opentsdbhttp", func(t *testing.T) {
for _, x := range readIn("opentsdbhttp", t, insertionTime) {
test := x
t.Run(test.Name, func(t *testing.T) {
t.Parallel()
logger.Infof("writing %s", test.Data)
httpWrite(t, testOpenTSDBWriteHTTPPath, bytes.NewBufferString(strings.Join(test.Data, "\n")))
})
}
})
}
func testRead(t *testing.T) {
for _, engine := range []string{"prometheus", "graphite", "opentsdb", "influxdb", "opentsdbhttp"} {
t.Run(engine, func(t *testing.T) {
for _, x := range readIn(engine, t, insertionTime) {
test := x
t.Run(test.Name, func(t *testing.T) {
t.Parallel()
for _, q := range test.Query {
q = testutil.PopulateTimeTplString(q, insertionTime)
if test.Issue != "" {
test.Issue = "Regression in " + test.Issue
}
switch true {
case strings.HasPrefix(q, "/api/v1/export"):
if err := checkMetricsResult(httpReadMetrics(t, testReadHTTPPath, q), test.ResultMetrics); err != nil {
t.Fatalf("Export. %s fails with error %s.%s", q, err, test.Issue)
}
case strings.HasPrefix(q, "/api/v1/series"):
s := Series{}
httpReadStruct(t, testReadHTTPPath, q, &s)
if err := checkSeriesResult(s, test.ResultSeries); err != nil {
t.Fatalf("Series. %s fails with error %s.%s", q, err, test.Issue)
}
case strings.HasPrefix(q, "/api/v1/query_range"):
queryResult := QueryRange{}
httpReadStruct(t, testReadHTTPPath, q, &queryResult)
if err := checkQueryRangeResult(queryResult, test.ResultQueryRange); err != nil {
t.Fatalf("Query Range. %s fails with error %s.%s", q, err, test.Issue)
}
case strings.HasPrefix(q, "/api/v1/query"):
queryResult := Query{}
httpReadStruct(t, testReadHTTPPath, q, &queryResult)
if err := checkQueryResult(queryResult, test.ResultQuery); err != nil {
t.Fatalf("Query. %s fails with error %s.%s", q, err, test.Issue)
}
default:
t.Fatalf("unsupported read query %s", q)
}
}
})
}
})
}
}
func readIn(readFor string, t *testing.T, insertTime time.Time) []test {
t.Helper()
s := newSuite(t)
var tt []test
s.noError(filepath.Walk(filepath.Join(testFixturesDir, readFor), func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if filepath.Ext(path) != ".json" {
return nil
}
b, err := ioutil.ReadFile(path)
s.noError(err)
item := test{}
s.noError(json.Unmarshal(b, &item))
for i := range item.Data {
item.Data[i] = testutil.PopulateTimeTplString(item.Data[i], insertTime)
}
tt = append(tt, item)
return nil
}))
if len(tt) == 0 {
t.Fatalf("no test found in %s", filepath.Join(testFixturesDir, readFor))
}
return tt
}
func httpWrite(t *testing.T, address string, r io.Reader) {
t.Helper()
s := newSuite(t)
resp, err := http.Post(address, "", r)
s.noError(err)
s.noError(resp.Body.Close())
s.equalInt(resp.StatusCode, 204)
}
func tcpWrite(t *testing.T, address string, data string) {
t.Helper()
s := newSuite(t)
conn, err := net.Dial("tcp", address)
s.noError(err)
defer conn.Close()
n, err := conn.Write([]byte(data))
s.noError(err)
s.equalInt(n, len(data))
}
func httpReadMetrics(t *testing.T, address, query string) []Metric {
t.Helper()
s := newSuite(t)
resp, err := http.Get(address + query)
s.noError(err)
defer resp.Body.Close()
s.equalInt(resp.StatusCode, 200)
var rows []Metric
for dec := json.NewDecoder(resp.Body); dec.More(); {
var row Metric
s.noError(dec.Decode(&row))
rows = append(rows, row)
}
return rows
}
func httpReadStruct(t *testing.T, address, query string, dst interface{}) {
t.Helper()
s := newSuite(t)
resp, err := http.Get(address + query)
s.noError(err)
defer resp.Body.Close()
s.equalInt(resp.StatusCode, 200)
s.noError(json.NewDecoder(resp.Body).Decode(dst))
}
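// checkMetricsResult returns an error if any of the want metrics is missing from got.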
func checkMetricsResult(got, want []Metric) error {
for _, r := range append([]Metric(nil), got...) {
want = removeIfFoundMetrics(r, want)
}
if len(want) > 0 {
return fmt.Errorf("exptected metrics %+v not found in %+v", want, got)
}
return nil
}
func removeIfFoundMetrics(r Metric, contains []Metric) []Metric {
for i, item := range contains {
if reflect.DeepEqual(r.Metric, item.Metric) && reflect.DeepEqual(r.Values, item.Values) &&
reflect.DeepEqual(r.Timestamps, item.Timestamps) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
func checkSeriesResult(got, want Series) error {
if got.Status != want.Status {
return fmt.Errorf("status mismatch %q - %q", want.Status, got.Status)
}
wantData := append([]map[string]string(nil), want.Data...)
for _, r := range got.Data {
wantData = removeIfFoundSeries(r, wantData)
}
if len(wantData) > 0 {
return fmt.Errorf("expected seria(s) %+v not found in %+v", wantData, got.Data)
}
return nil
}
func removeIfFoundSeries(r map[string]string, contains []map[string]string) []map[string]string {
for i, item := range contains {
if reflect.DeepEqual(r, item) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
func checkQueryResult(got, want Query) error {
if got.Status != want.Status {
return fmt.Errorf("status mismatch %q - %q", want.Status, got.Status)
}
if got.Data.ResultType != want.Data.ResultType {
return fmt.Errorf("result type mismatch %q - %q", want.Data.ResultType, got.Data.ResultType)
}
wantData := append([]QueryDataResult(nil), want.Data.Result...)
for _, r := range got.Data.Result {
wantData = removeIfFoundQueryData(r, wantData)
}
if len(wantData) > 0 {
return fmt.Errorf("expected query result %+v not found in %+v", wantData, got.Data.Result)
}
return nil
}
func removeIfFoundQueryData(r QueryDataResult, contains []QueryDataResult) []QueryDataResult {
for i, item := range contains {
if reflect.DeepEqual(r.Metric, item.Metric) && reflect.DeepEqual(r.Value[0], item.Value[0]) && reflect.DeepEqual(r.Value[1], item.Value[1]) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
func checkQueryRangeResult(got, want QueryRange) error {
if got.Status != want.Status {
return fmt.Errorf("status mismatch %q - %q", want.Status, got.Status)
}
if got.Data.ResultType != want.Data.ResultType {
return fmt.Errorf("result type mismatch %q - %q", want.Data.ResultType, got.Data.ResultType)
}
wantData := append([]QueryRangeDataResult(nil), want.Data.Result...)
for _, r := range got.Data.Result {
wantData = removeIfFoundQueryRangeData(r, wantData)
}
if len(wantData) > 0 {
return fmt.Errorf("expected query range result %+v not found in %+v", wantData, got.Data.Result)
}
return nil
}
func removeIfFoundQueryRangeData(r QueryRangeDataResult, contains []QueryRangeDataResult) []QueryRangeDataResult {
for i, item := range contains {
if reflect.DeepEqual(r.Metric, item.Metric) && reflect.DeepEqual(r.Values, item.Values) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
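// suite contains assertion helpers, which stop the current test on the first failure.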
type suite struct{ t *testing.T }
func newSuite(t *testing.T) *suite { return &suite{t: t} }
func (s *suite) noError(err error) {
s.t.Helper()
if err != nil {
s.t.Errorf("unexpected error %v", err)
s.t.FailNow()
}
}
func (s *suite) equalInt(a, b int) {
s.t.Helper()
if a != b {
s.t.Errorf("%d not equal %d", a, b)
s.t.FailNow()
}
}
func (s *suite) greaterThan(a, b int) {
s.t.Helper()
if a <= b {
s.t.Errorf("%d less or equal then %d", a, b)
s.t.FailNow()
}
}

View File

@@ -0,0 +1,103 @@
package main
import (
"flag"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
var (
selfScrapeInterval = flag.Duration("selfScrapeInterval", 0, "Interval for self-scraping own metrics at /metrics page")
selfScrapeInstance = flag.String("selfScrapeInstance", "self", "Value for 'instance' label, which is added to self-scraped metrics")
selfScrapeJob = flag.String("selfScrapeJob", "victoria-metrics", "Value for 'job' label, which is added to self-scraped metrics")
)
var selfScraperStopCh chan struct{}
var selfScraperWG sync.WaitGroup
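// startSelfScraper starts a background goroutine, which periodically scrapes
// the /metrics page of the current process.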
func startSelfScraper() {
selfScraperStopCh = make(chan struct{})
selfScraperWG.Add(1)
go func() {
defer selfScraperWG.Done()
selfScraper(*selfScrapeInterval)
}()
}
func stopSelfScraper() {
close(selfScraperStopCh)
selfScraperWG.Wait()
}
func selfScraper(scrapeInterval time.Duration) {
if scrapeInterval <= 0 {
// Self-scrape is disabled.
return
}
logger.Infof("started self-scraping `/metrics` page with interval %.3f seconds", scrapeInterval.Seconds())
var bb bytesutil.ByteBuffer
var rows prometheus.Rows
var mrs []storage.MetricRow
var labels []prompb.Label
t := time.NewTicker(scrapeInterval)
var currentTimestamp int64
for {
select {
case <-selfScraperStopCh:
t.Stop()
logger.Infof("stopped self-scraping `/metrics` page")
return
case currentTime := <-t.C:
currentTimestamp = currentTime.UnixNano() / 1e6
}
bb.Reset()
httpserver.WritePrometheusMetrics(&bb)
s := bytesutil.ToUnsafeString(bb.B)
rows.Reset()
rows.Unmarshal(s)
mrs = mrs[:0]
for i := range rows.Rows {
r := &rows.Rows[i]
labels = labels[:0]
labels = addLabel(labels, "", r.Metric)
labels = addLabel(labels, "job", *selfScrapeJob)
labels = addLabel(labels, "instance", *selfScrapeInstance)
for j := range r.Tags {
t := &r.Tags[j]
labels = addLabel(labels, t.Key, t.Value)
}
// Reuse spare slice capacity when possible in order to avoid memory allocations.
if len(mrs) < cap(mrs) {
mrs = mrs[:len(mrs)+1]
} else {
mrs = append(mrs, storage.MetricRow{})
}
mr := &mrs[len(mrs)-1]
mr.MetricNameRaw = storage.MarshalMetricNameRaw(mr.MetricNameRaw[:0], labels)
mr.Timestamp = currentTimestamp
mr.Value = r.Value
}
logger.Infof("writing %d rows at timestamp %d", len(mrs), currentTimestamp)
vmstorage.AddRows(mrs)
}
}
func addLabel(dst []prompb.Label, key, value string) []prompb.Label {
if len(dst) < cap(dst) {
dst = dst[:len(dst)+1]
} else {
dst = append(dst, prompb.Label{})
}
lb := &dst[len(dst)-1]
lb.Name = bytesutil.ToUnsafeBytes(key)
lb.Value = bytesutil.ToUnsafeBytes(value)
return dst
}
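Both `addLabel` and the `mrs` handling in `selfScraper` rely on the same allocation-avoiding idiom: re-slice into spare capacity when it exists, and fall back to `append` only when the backing array is full. A standalone sketch of the idiom (the `item` type is hypothetical):

```go
// item stands in for any pooled element type, e.g. a label or a metric row.
type item struct{ key, value string }

// growOne returns dst extended by one zero-value element, reusing spare
// capacity when possible so that steady-state iterations allocate nothing.
func growOne(dst []item) []item {
	if len(dst) < cap(dst) {
		return dst[:len(dst)+1]
	}
	return append(dst, item{})
}
```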

View File

@@ -0,0 +1,52 @@
package test
import (
"fmt"
"log"
"regexp"
"strings"
"time"
)
var (
parseTimeExpRegex = regexp.MustCompile(`"?{TIME[^}]*}"?`)
extractRegex = regexp.MustCompile(`"?{([^}]*)}"?`)
)
// PopulateTimeTplString substitutes {TIME_*} with t in s and returns the result.
func PopulateTimeTplString(s string, t time.Time) string {
return string(PopulateTimeTpl([]byte(s), t))
}
// PopulateTimeTpl substitutes {TIME_*} with tGlobal in b and returns the result.
func PopulateTimeTpl(b []byte, tGlobal time.Time) []byte {
return parseTimeExpRegex.ReplaceAllFunc(b, func(repl []byte) []byte {
t := tGlobal
// Extract the template body between the braces, e.g. `TIME_S-1m`.
repl = extractRegex.FindSubmatch(repl)[1]
// An optional `-<duration>` suffix shifts the timestamp into the past.
parts := strings.SplitN(string(repl), "-", 2)
if len(parts) == 2 {
duration, err := time.ParseDuration(strings.TrimSpace(parts[1]))
if err != nil {
log.Fatalf("error %s parsing duration %s in %s", err, parts[1], repl)
}
t = t.Add(-duration)
}
switch strings.TrimSpace(parts[0]) {
case `TIME_S`:
return []byte(fmt.Sprintf("%d", t.Unix()))
case `TIME_MSZ`:
return []byte(fmt.Sprintf("%d", t.Unix()*1e3))
case `TIME_MS`:
return []byte(fmt.Sprintf("%d", timeToMillis(t)))
case `TIME_NS`:
return []byte(fmt.Sprintf("%d", t.UnixNano()))
default:
log.Fatalf("unknown time pattern %s in %s", parts[0], repl)
}
return repl
})
}
func timeToMillis(t time.Time) int64 {
return t.UnixNano() / 1e6
}

View File

@@ -0,0 +1,24 @@
package test
import (
"testing"
"time"
)
func TestPopulateTimeTplString(t *testing.T) {
now, err := time.Parse(time.RFC3339, "2006-01-02T15:04:05Z")
if err != nil {
t.Fatalf("unexpected error when parsing time: %s", err)
}
f := func(s, resultExpected string) {
t.Helper()
result := PopulateTimeTplString(s, now)
if result != resultExpected {
t.Fatalf("unexpected result; got %q; want %q", result, resultExpected)
}
}
f("", "")
f("{TIME_S}", "1136214245")
f("now: {TIME_S}, past 30s: {TIME_MS-30s}, now: {TIME_S}", "now: 1136214245, past 30s: 1136214215000, now: 1136214245")
f("now: {TIME_MS}, past 30m: {TIME_MSZ-30m}, past 2h: {TIME_NS-2h}", "now: 1136214245000, past 30m: 1136212445000, past 2h: 1136207045000000000")
}

View File

@@ -0,0 +1,364 @@
package test
// Source https://github.com/prometheus/prometheus/blob/master/prompb/remote.pb.go . The code is copy-pasted and cleaned up.
import (
"encoding/binary"
"math"
"math/bits"
)
// WriteRequest is a write request.
type WriteRequest struct {
Timeseries []TimeSeries `protobuf:"bytes,1,rep,name=timeseries,proto3" json:"timeseries"`
}
// Size returns the size of m in bytes after marshaling.
func (m *WriteRequest) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Timeseries) > 0 {
for _, e := range m.Timeseries {
l = e.Size()
n += 1 + l + sovRemote(uint64(l))
}
}
return n
}
func sovRemote(x uint64) (n int) {
return (bits.Len64(x|1) + 6) / 7
}
// Marshal marshals m.
func (m *WriteRequest) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
// MarshalTo marshals m to dAtA.
func (m *WriteRequest) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
// MarshalToSizedBuffer marshals m to dAtA.
func (m *WriteRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if len(m.Timeseries) > 0 {
for iNdEx := len(m.Timeseries) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Timeseries[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintRemote(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
func encodeVarintRemote(dAtA []byte, offset int, v uint64) int {
offset -= sovRemote(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
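As a quick sanity check of the varint helpers: `sovRemote(300)` reports 2 bytes, and encoding 300 yields the canonical base-128 bytes with the continuation bit set on the first byte. A tiny sketch, assuming it sits in the same package as the helpers above and imports `fmt`:

```go
func ExampleEncodeVarint() {
	buf := make([]byte, sovRemote(300))
	encodeVarintRemote(buf, len(buf), 300)
	fmt.Println(buf) // 300 = 0b1_0010_1100: low 7 bits first, 0x80 set on them
	// Output: [172 2]
}
```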
// Sample is a time series sample.
type Sample struct {
Value float64 `protobuf:"fixed64,1,opt,name=value,proto3" json:"value,omitempty"`
Timestamp int64 `protobuf:"varint,2,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
}
// Reset resets m.
func (m *Sample) Reset() { *m = Sample{} }
// TimeSeries represents samples and labels for a single time series.
type TimeSeries struct {
Labels []Label `protobuf:"bytes,1,rep,name=labels,proto3" json:"labels"`
Samples []Sample `protobuf:"bytes,2,rep,name=samples,proto3" json:"samples"`
}
// Reset resets m.
func (m *TimeSeries) Reset() { *m = TimeSeries{} }
// Label is a time series label.
type Label struct {
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
}
// Reset resets m.
func (m *Label) Reset() { *m = Label{} }
// Labels is a set of labels.
type Labels struct {
Labels []Label `protobuf:"bytes,1,rep,name=labels,proto3" json:"labels"`
}
// Reset resets m.
func (m *Labels) Reset() { *m = Labels{} }
// Marshal marshals m.
func (m *Sample) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
// MarshalTo marshals m to dAtA.
func (m *Sample) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
// MarshalToSizedBuffer marshals m to dAtA.
func (m *Sample) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if m.Timestamp != 0 {
i = encodeVarintTypes(dAtA, i, uint64(m.Timestamp))
i--
dAtA[i] = 0x10
}
if m.Value != 0 {
i -= 8
binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Value))))
i--
dAtA[i] = 0x9
}
return len(dAtA) - i, nil
}
// Marshal marshals m.
func (m *TimeSeries) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
// MarshalTo marshals m to dAtA.
func (m *TimeSeries) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
// MarshalToSizedBuffer marshals m to dAtA.
func (m *TimeSeries) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if len(m.Samples) > 0 {
for iNdEx := len(m.Samples) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Samples[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x12
}
}
if len(m.Labels) > 0 {
for iNdEx := len(m.Labels) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Labels[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
// Marshal marshals m.
func (m *Label) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
// MarshalTo marshals m to dAtA.
func (m *Label) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
// MarshalToSizedBuffer marshals m to dAtA.
func (m *Label) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Value) > 0 {
i -= len(m.Value)
copy(dAtA[i:], m.Value)
i = encodeVarintTypes(dAtA, i, uint64(len(m.Value)))
i--
dAtA[i] = 0x12
}
if len(m.Name) > 0 {
i -= len(m.Name)
copy(dAtA[i:], m.Name)
i = encodeVarintTypes(dAtA, i, uint64(len(m.Name)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
// Marshal marshals m.
func (m *Labels) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
// MarshalTo marshals m to dAtA.
func (m *Labels) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
// MarshalToSizedBuffer marshals m to dAtA.
func (m *Labels) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if len(m.Labels) > 0 {
for iNdEx := len(m.Labels) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Labels[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
func encodeVarintTypes(dAtA []byte, offset int, v uint64) int {
offset -= sovTypes(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
// Size returns the size of marshaled m.
func (m *Sample) Size() (n int) {
if m == nil {
return 0
}
if m.Value != 0 {
n += 9
}
if m.Timestamp != 0 {
n += 1 + sovTypes(uint64(m.Timestamp))
}
return n
}
// Size returns the size of marshaled m.
func (m *TimeSeries) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Labels) > 0 {
for _, e := range m.Labels {
l = e.Size()
n += 1 + l + sovTypes(uint64(l))
}
}
if len(m.Samples) > 0 {
for _, e := range m.Samples {
l = e.Size()
n += 1 + l + sovTypes(uint64(l))
}
}
return n
}
// Size returns the size of marshaled m.
func (m *Label) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Name)
if l > 0 {
n += 1 + l + sovTypes(uint64(l))
}
l = len(m.Value)
if l > 0 {
n += 1 + l + sovTypes(uint64(l))
}
return n
}
// Size returns the size of marshaled m.
func (m *Labels) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Labels) > 0 {
for _, e := range m.Labels {
l = e.Size()
n += 1 + l + sovTypes(uint64(l))
}
}
return n
}
func sovTypes(x uint64) (n int) {
return (bits.Len64(x|1) + 6) / 7
}

View File

@@ -0,0 +1,12 @@
package test
import "github.com/golang/snappy"
// Compress marshals and compresses wr.
func Compress(wr WriteRequest) ([]byte, error) {
data, err := wr.Marshal()
if err != nil {
return nil, err
}
return snappy.Encode(nil, data), nil
}
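Combined with the hand-rolled protobuf types above, `Compress` is enough to push samples over the Prometheus remote write protocol. A minimal sketch, assuming the `WriteRequest`, `TimeSeries`, `Label`, `Sample` and `Compress` declarations above are available to this `main` package, and that a VictoriaMetrics instance listens on `localhost:8428` (the address and metric names are illustrative):

```go
package main

import (
	"bytes"
	"log"
	"net/http"
	"time"
)

func main() {
	wr := WriteRequest{
		Timeseries: []TimeSeries{{
			Labels: []Label{
				{Name: "__name__", Value: "example_metric"},
				{Name: "job", Value: "test"},
			},
			Samples: []Sample{
				// Remote write timestamps are in milliseconds.
				{Value: 42, Timestamp: time.Now().UnixNano() / 1e6},
			},
		}},
	}
	body, err := Compress(wr) // marshal to protobuf, then snappy-compress
	if err != nil {
		log.Fatalf("cannot compress write request: %s", err)
	}
	req, err := http.NewRequest("POST", "http://localhost:8428/api/v1/write", bytes.NewReader(body))
	if err != nil {
		log.Fatalf("cannot create request: %s", err)
	}
	req.Header.Set("Content-Type", "application/x-protobuf")
	req.Header.Set("Content-Encoding", "snappy")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatalf("cannot send write request: %s", err)
	}
	defer resp.Body.Close()
	log.Printf("remote write status: %s", resp.Status)
}
```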

View File

@@ -0,0 +1,8 @@
{
"name": "basic_insertion",
"data": ["graphite.foo.bar.baz;tag1=value1;tag2=value2 123 {TIME_S}"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"graphite.foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123], "timestamps": ["{TIME_MSZ}"]}
]
}

View File

@@ -0,0 +1,16 @@
{
"name": "comparison-not-inf-not-nan",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/150",
"data": [
"not_nan_not_inf;item=x 1 {TIME_S-1m}",
"not_nan_not_inf;item=x 1 {TIME_S-2m}",
"not_nan_not_inf;item=y 3 {TIME_S-1m}",
"not_nan_not_inf;item=y 1 {TIME_S-2m}"],
"query": ["/api/v1/query_range?query=1/(not_nan_not_inf-1)!=inf!=nan&start={TIME_S-3m}&end={TIME_S}&step=60"],
"result_query_range": {
"status":"success",
"data":{"resultType":"matrix",
"result":[
{"metric":{"item":"y"},"values":[["{TIME_S-1m}","0.5"],["{TIME_S}","0.5"]]}
]}}
}

View File

@@ -0,0 +1,16 @@
{
"name": "empty-label-match",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/395",
"data": [
"empty_label_match 1 {TIME_S-1m}",
"empty_label_match;foo=bar 2 {TIME_S-1m}",
"empty_label_match;foo=baz 3 {TIME_S-1m}"],
"query": ["/api/v1/query_range?query=empty_label_match{foo=~'bar|'}&start={TIME_S}&end={TIME_S}&step=60"],
"result_query_range": {
"status":"success",
"data":{"resultType":"matrix",
"result":[
{"metric":{"__name__":"empty_label_match"},"values":[["{TIME_S}","1"]]},
{"metric":{"__name__":"empty_label_match","foo":"bar"},"values":[["{TIME_S}","2"]]}
]}}
}

View File

@@ -0,0 +1,21 @@
{
"name": "max_lookback_set",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/209",
"data": [
"max_lookback_set 1 {TIME_S-30s}",
"max_lookback_set 2 {TIME_S-60s}",
"max_lookback_set 3 {TIME_S-120s}",
"max_lookback_set 4 {TIME_S-150s}"
],
"query": ["/api/v1/query_range?query=max_lookback_set&start={TIME_S-150s}&end={TIME_S}&step=10s&max_lookback=1s"],
"result_query_range": {
"status":"success",
"data":{"resultType":"matrix",
"result":[{"metric":{"__name__":"max_lookback_set"},"values":[
["{TIME_S-150s}","4"],
["{TIME_S-120s}","3"],
["{TIME_S-60s}","2"],
["{TIME_S-30s}","1"],
["{TIME_S-20s}","1"]
]}]}}
}

View File

@@ -0,0 +1,30 @@
{
"name": "max_lookback_unset",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/209",
"data": [
"max_lookback_unset 1 {TIME_S-30s}",
"max_lookback_unset 2 {TIME_S-60s}",
"max_lookback_unset 3 {TIME_S-120s}",
"max_lookback_unset 4 {TIME_S-150s}"
],
"query": ["/api/v1/query_range?query=max_lookback_unset&start={TIME_S-150s}&end={TIME_S}&step=10s"],
"result_query_range": {
"status":"success",
"data":{"resultType":"matrix",
"result":[{"metric":{"__name__":"max_lookback_unset"},"values":[
["{TIME_S-150s}","4"],
["{TIME_S-140s}","4"],
["{TIME_S-130s}","4"],
["{TIME_S-120s}","3"],
["{TIME_S-110s}","3"],
["{TIME_S-100s}","3"],
["{TIME_S-90s}","3"],
["{TIME_S-60s}","2"],
["{TIME_S-50s}","2"],
["{TIME_S-40s}","2"],
["{TIME_S-30s}","1"],
["{TIME_S-20s}","1"],
["{TIME_S-10s}","1"],
["{TIME_S-0s}","1"]
]}]}}
}

View File

@@ -0,0 +1,18 @@
{
"name": "not-nan-as-missing-data",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/153",
"data": [
"not_nan_as_missing_data;item=x 2 {TIME_S-2m}",
"not_nan_as_missing_data;item=x 1 {TIME_S-1m}",
"not_nan_as_missing_data;item=y 4 {TIME_S-2m}",
"not_nan_as_missing_data;item=y 3 {TIME_S-1m}"
],
"query": ["/api/v1/query_range?query=not_nan_as_missing_data>1&start={TIME_S-2m}&end={TIME_S}&step=60"],
"result_query_range": {
"status":"success",
"data":{"resultType":"matrix",
"result":[
{"metric":{"__name__":"not_nan_as_missing_data","item":"x"},"values":[["{TIME_S-2m}","2"]]},
{"metric":{"__name__":"not_nan_as_missing_data","item":"y"},"values":[["{TIME_S-2m}","4"],["{TIME_S-1m}","3"],["{TIME_S}","3"]]}
]}}
}

View File

@@ -0,0 +1,14 @@
{
"name": "subquery-aggregation",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/184",
"data": [
"forms_daily_count;item=x 1 {TIME_S-1m}",
"forms_daily_count;item=x 2 {TIME_S-2m}",
"forms_daily_count;item=y 3 {TIME_S-1m}",
"forms_daily_count;item=y 4 {TIME_S-2m}"],
"query": ["/api/v1/query?query=min%20by%20(item)%20(min_over_time(forms_daily_count[10m:1m]))&time={TIME_S-1m}"],
"result_query": {
"status":"success",
"data":{"resultType":"vector","result":[{"metric":{"item":"x"},"value":["{TIME_S-1m}","1"]},{"metric":{"item":"y"},"value":["{TIME_S-1m}","3"]}]}
}
}

View File

@@ -0,0 +1,9 @@
{
"name": "basic_insertion",
"data": ["measurement,tag1=value1,tag2=value2 field1=1.23,field2=123 {TIME_NS}"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"measurement_field2","tag1":"value1","tag2":"value2"},"values":[123], "timestamps": ["{TIME_MS}"]},
{"metric":{"__name__":"measurement_field1","tag1":"value1","tag2":"value2"},"values":[1.23], "timestamps": ["{TIME_MS}"]}
]
}

View File

@@ -0,0 +1,8 @@
{
"name": "basic_insertion",
"data": ["put openstdb.foo.bar.baz {TIME_S} 123 tag1=value1 tag2=value2"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"openstdb.foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123], "timestamps": ["{TIME_MSZ}"]}
]
}

View File

@@ -0,0 +1,8 @@
{
"name": "basic_insertion",
"data": ["{\"metric\": \"opentsdbhttp.foo\", \"value\": 1001, \"timestamp\": {TIME_S}, \"tags\": {\"bar\":\"baz\", \"x\": \"y\"}}"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"opentsdbhttp.foo","bar":"baz","x":"y"},"values":[1001], "timestamps": ["{TIME_MSZ}"]}
]
}

View File

@@ -0,0 +1,9 @@
{
"name": "multiline",
"data": ["[{\"metric\": \"opentsdbhttp.multiline1\", \"value\": 1001, \"timestamp\": \"{TIME_S}\", \"tags\": {\"bar\":\"baz\", \"x\": \"y\"}}, {\"metric\": \"opentsdbhttp.multiline2\", \"value\": 1002, \"timestamp\": {TIME_S}}]"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"opentsdbhttp.multiline1","bar":"baz","x":"y"},"values":[1001], "timestamps": ["{TIME_MSZ}"]},
{"metric":{"__name__":"opentsdbhttp.multiline2"},"values":[1002], "timestamps": ["{TIME_MSZ}"]}
]
}

View File

@@ -0,0 +1,8 @@
{
"name": "basic_insertion",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"prometheus.bar\"},{\"name\":\"baz\",\"value\":\"qux\"}],\"samples\":[{\"value\":100000,\"timestamp\":\"{TIME_MS}\"}]}]"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"prometheus.bar","baz":"qux"},"values":[100000], "timestamps": ["{TIME_MS}"]}
]
}

View File

@@ -0,0 +1,10 @@
{
"name": "case-sensitive-regex",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/161",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"prometheus.sensitiveRegex\"},{\"name\":\"label\",\"value\":\"sensitiveRegex\"}],\"samples\":[{\"value\":2,\"timestamp\":\"{TIME_MS}\"}]},{\"labels\":[{\"name\":\"__name__\",\"value\":\"prometheus.sensitiveRegex\"},{\"name\":\"label\",\"value\":\"SensitiveRegex\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]}]"],
"query": ["/api/v1/export?match={label=~'(?i)sensitiveregex'}"],
"result_metrics": [
{"metric":{"__name__":"prometheus.sensitiveRegex","label":"sensitiveRegex"},"values":[2], "timestamps": ["{TIME_MS}"]},
{"metric":{"__name__":"prometheus.sensitiveRegex","label":"SensitiveRegex"},"values":[1], "timestamps": ["{TIME_MS}"]}
]
}

View File

@@ -0,0 +1,9 @@
{
"name": "duplicate_label",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/172",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"prometheus.duplicate_label\"},{\"name\":\"duplicate\",\"value\":\"label\"},{\"name\":\"duplicate\",\"value\":\"label\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]}]"],
"query": ["/api/v1/export?match={__name__!=''}"],
"result_metrics": [
{"metric":{"__name__":"prometheus.duplicate_label","duplicate":"label"},"values":[1], "timestamps": ["{TIME_MS}"]}
]
}

View File

@@ -0,0 +1,15 @@
{
"name": "match_series",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/155",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"MatchSeries\"},{\"name\":\"db\",\"value\":\"TenMinute\"},{\"name\":\"TurbineType\",\"value\":\"V112\"},{\"name\":\"Park\",\"value\":\"1\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]},{\"labels\":[{\"name\":\"__name__\",\"value\":\"MatchSeries\"},{\"name\":\"db\",\"value\":\"TenMinute\"},{\"name\":\"TurbineType\",\"value\":\"V112\"},{\"name\":\"Park\",\"value\":\"2\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]},{\"labels\":[{\"name\":\"__name__\",\"value\":\"MatchSeries\"},{\"name\":\"db\",\"value\":\"TenMinute\"},{\"name\":\"TurbineType\",\"value\":\"V112\"},{\"name\":\"Park\",\"value\":\"3\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]},{\"labels\":[{\"name\":\"__name__\",\"value\":\"MatchSeries\"},{\"name\":\"db\",\"value\":\"TenMinute\"},{\"name\":\"TurbineType\",\"value\":\"V112\"},{\"name\":\"Park\",\"value\":\"4\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS}\"}]}]"],
"query": ["/api/v1/series?match[]={__name__='MatchSeries'}", "/api/v1/series?match[]={__name__=~'MatchSeries.*'}"],
"result_series": {
"status": "success",
"data": [
{"__name__":"MatchSeries","db":"TenMinute","Park":"1","TurbineType":"V112"},
{"__name__":"MatchSeries","db":"TenMinute","Park":"2","TurbineType":"V112"},
{"__name__":"MatchSeries","db":"TenMinute","Park":"3","TurbineType":"V112"},
{"__name__":"MatchSeries","db":"TenMinute","Park":"4","TurbineType":"V112"}
]
}
}

View File

@@ -1 +0,0 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).

View File

@@ -1 +0,0 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).

View File

@@ -1 +0,0 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).

View File

@@ -1 +0,0 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).

View File

@@ -1 +0,0 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).

View File

@@ -1 +0,0 @@
VictoriaLogs source code has been moved to [github.com/VictoriaMetrics/VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/).

View File

@@ -12,35 +12,20 @@ vmagent-prod:
vmagent-pure-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-pure
vmagent-linux-amd64-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-linux-amd64
vmagent-amd64-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-amd64
vmagent-linux-arm-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-linux-arm
vmagent-arm-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-arm
vmagent-linux-arm64-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-linux-arm64
vmagent-arm64-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-arm64
vmagent-linux-ppc64le-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-linux-ppc64le
vmagent-ppc64le-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-ppc64le
vmagent-linux-386-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-linux-386
vmagent-darwin-amd64-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-darwin-amd64
vmagent-darwin-arm64-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-darwin-arm64
vmagent-freebsd-amd64-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-freebsd-amd64
vmagent-openbsd-amd64-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-openbsd-amd64
vmagent-windows-amd64-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-windows-amd64
vmagent-386-prod:
APP_NAME=vmagent $(MAKE) app-via-docker-386
package-vmagent:
APP_NAME=vmagent $(MAKE) package-via-docker
@@ -73,41 +58,20 @@ run-vmagent:
APP_NAME=vmagent \
$(MAKE) run-via-docker
vmagent-linux-amd64:
APP_NAME=vmagent CGO_ENABLED=1 GOOS=linux GOARCH=amd64 $(MAKE) app-local-goos-goarch
vmagent-amd64:
CGO_ENABLED=1 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmagent-amd64 ./app/vmagent
vmagent-linux-arm:
APP_NAME=vmagent CGO_ENABLED=0 GOOS=linux GOARCH=arm $(MAKE) app-local-goos-goarch
vmagent-arm:
CGO_ENABLED=0 GOOS=linux GOARCH=arm GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmagent-arm ./app/vmagent
vmagent-linux-arm64:
APP_NAME=vmagent CGO_ENABLED=0 GOOS=linux GOARCH=arm64 $(MAKE) app-local-goos-goarch
vmagent-arm64:
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmagent-arm64 ./app/vmagent
vmagent-linux-ppc64le:
APP_NAME=vmagent CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le $(MAKE) app-local-goos-goarch
vmagent-ppc64le:
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmagent-ppc64le ./app/vmagent
vmagent-linux-s390x:
APP_NAME=vmagent CGO_ENABLED=0 GOOS=linux GOARCH=s390x $(MAKE) app-local-goos-goarch
vmagent-linux-loong64:
APP_NAME=vmagent CGO_ENABLED=0 GOOS=linux GOARCH=loong64 $(MAKE) app-local-goos-goarch
vmagent-linux-386:
APP_NAME=vmagent CGO_ENABLED=0 GOOS=linux GOARCH=386 $(MAKE) app-local-goos-goarch
vmagent-darwin-amd64:
APP_NAME=vmagent CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 $(MAKE) app-local-goos-goarch
vmagent-darwin-arm64:
APP_NAME=vmagent CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 $(MAKE) app-local-goos-goarch
vmagent-freebsd-amd64:
APP_NAME=vmagent CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 $(MAKE) app-local-goos-goarch
vmagent-openbsd-amd64:
APP_NAME=vmagent CGO_ENABLED=0 GOOS=openbsd GOARCH=amd64 $(MAKE) app-local-goos-goarch
vmagent-windows-amd64:
GOARCH=amd64 APP_NAME=vmagent $(MAKE) app-local-windows-goarch
vmagent-386:
CGO_ENABLED=0 GOOS=linux GOARCH=386 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmagent-386 ./app/vmagent
vmagent-pure:
APP_NAME=vmagent $(MAKE) app-local-pure

View File

@@ -1,3 +1,263 @@
See vmagent docs [here](https://docs.victoriametrics.com/victoriametrics/vmagent/).
## vmagent
vmagent docs can be edited at [docs/vmagent.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/victoriametrics/vmagent.md).
`vmagent` is a tiny but brave agent, which helps you collect metrics from various sources
and stores them in [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics)
or any other Prometheus-compatible storage system that supports the `remote_write` protocol.
<img alt="vmagent" src="vmagent.png">
### Motivation
While VictoriaMetrics provides an efficient solution to store and observe metrics, our users needed something fast
and RAM-friendly to scrape metrics from Prometheus-compatible exporters into VictoriaMetrics.
Also, we found that our users' infrastructures are snowflakes - no two are alike - so we decided to add more flexibility
to `vmagent` (like the ability to push metrics instead of pulling them). We did our best and plan to do even more.
### Features
* Can be used as a drop-in replacement for Prometheus for scraping targets such as [node_exporter](https://github.com/prometheus/node_exporter).
See [Quick Start](#quick-start) for details.
* Can add, remove and modify labels (aka tags) via Prometheus relabeling. Can filter data before sending it to remote storage. See [these docs](#relabeling) for details.
* Accepts data via all the ingestion protocols supported by VictoriaMetrics:
* Influx line protocol via `http://<vmagent>:8429/write`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf).
* Graphite plaintext protocol if `-graphiteListenAddr` command-line flag is set. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-graphite-compatible-agents-such-as-statsd).
* OpenTSDB telnet and http protocols if `-opentsdbListenAddr` command-line flag is set. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-opentsdb-compatible-agents).
* Prometheus remote write protocol via `http://<vmagent>:8429/api/v1/write`.
* JSON lines import protocol via `http://<vmagent>:8429/api/v1/import`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-time-series-data).
* Arbitrary CSV data via `http://<vmagent>:8429/api/v1/import/csv`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-csv-data).
* Can replicate collected metrics simultaneously to multiple remote storage systems.
* Works in environments with unstable connections to remote storage. If the remote storage is unavailable, the collected metrics
are buffered at `-remoteWrite.tmpDataPath`. The buffered metrics are sent to the remote storage as soon as the connection
to the remote storage is restored. The maximum disk usage for the buffer can be limited with `-remoteWrite.maxDiskUsagePerURL`.
* Uses less RAM, CPU, disk IO and network bandwidth than Prometheus.
### Quick Start
Just download the `vmutils-*` archive from the [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases), unpack it
and pass the following flags to the `vmagent` binary in order to start scraping Prometheus targets:
* `-promscrape.config` with the path to the Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`)
* `-remoteWrite.url` with the remote storage endpoint such as VictoriaMetrics. The `-remoteWrite.url` argument can be specified multiple times in order to replicate data concurrently to an arbitrary number of remote storage systems.
Example command line:
```
/path/to/vmagent -promscrape.config=/path/to/prometheus.yml -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write
```
If you only need to collect Influx data, then the following is sufficient:
```
/path/to/vmagent -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write
```
Then send Influx data to `http://vmagent-host:8429`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for more details.
`vmagent` is also available in [docker images](https://hub.docker.com/r/victoriametrics/vmagent/tags).
Pass `-help` to `vmagent` in order to see the full list of supported command-line flags with their descriptions.
### Use cases
#### IoT and Edge monitoring
`vmagent` can run and collect metrics in IoT and industrial networks with unreliable or scheduled connections to the remote storage.
It buffers the collected data in local files until the connection to remote storage becomes available and then sends the buffered
data to the remote storage. It retries sending the data to remote storage on any error.
The maximum buffer size can be limited with `-remoteWrite.maxDiskUsagePerURL`.
`vmagent` works on various architectures from the IoT world - 32-bit arm, 64-bit arm, ppc64, 386, amd64.
See [the corresponding Makefile rules](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmagent/Makefile) for details.
#### Drop-in replacement for Prometheus
If you use Prometheus only for scraping metrics from various targets and forwarding these metrics to remote storage,
then `vmagent` can replace such a Prometheus setup. Usually `vmagent` requires lower amounts of RAM, CPU and network bandwidth compared to Prometheus for such a setup.
See [these docs](#how-to-collect-metrics-in-prometheus-format) for details.
#### Replication and high availability
`vmagent` replicates the collected metrics among multiple remote storage instances configured via `-remoteWrite.url` args.
If a single remote storage instance is temporarily out of service, the collected data remains available in the other remote storage instances.
`vmagent` buffers the collected data in files at `-remoteWrite.tmpDataPath` until the remote storage becomes available again.
Then it sends the buffered data to the remote storage in order to prevent data gaps in the remote storage.
#### Relabeling and filtering
`vmagent` can add, remove or update labels on the collected data before sending it to remote storage. Additionally,
it can remove unwanted samples via Prometheus-like relabeling before sending the collected data to remote storage.
See [these docs](#relabeling) for details.
#### Splitting data streams among multiple systems
`vmagent` supports splitting the collected data between multiple destinations with the help of `-remoteWrite.urlRelabelConfig`,
which is applied independently for each configured `-remoteWrite.url` destination. For instance, it is possible to replicate or split
data among long-term remote storage, short-term remote storage and real-time analytical system [built on top of Kafka](https://github.com/Telefonica/prometheus-kafka-adapter).
Note that each destination can receive its own subset of the collected data thanks to per-destination relabeling via `-remoteWrite.urlRelabelConfig`.
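For example, a hypothetical invocation replicating data to two storages while applying a separate relabel config to each (host names and file paths are illustrative; the per-URL relabel configs are matched positionally to the `-remoteWrite.url` flags):

```
/path/to/vmagent -promscrape.config=/path/to/prometheus.yml \
  -remoteWrite.url=http://long-term-storage:8428/api/v1/write \
  -remoteWrite.urlRelabelConfig=/path/to/long-term-relabel.yml \
  -remoteWrite.url=http://short-term-storage:8428/api/v1/write \
  -remoteWrite.urlRelabelConfig=/path/to/short-term-relabel.yml
```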
#### Prometheus remote_write proxy
`vmagent` may be used as a proxy for Prometheus data sent via the Prometheus `remote_write` protocol. It can accept data via the `remote_write` API
at the `/api/v1/write` endpoint, apply relabeling and filtering and then proxy it to other `remote_write` systems.
`vmagent` can be configured to accept the incoming `remote_write` requests over TLS via the `-tls*` command-line flags.
Additionally, Basic Auth can be enabled for the incoming `remote_write` requests with the `-httpAuth.*` command-line flags.
### How to collect metrics in Prometheus format
Pass the path to `prometheus.yml` to the `-promscrape.config` command-line flag. `vmagent` takes into account the following
sections from the [Prometheus config file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/):
* `global`
* `scrape_configs`
All the other sections are ignored, including the [remote_write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) section.
Use the `-remoteWrite.*` command-line flags instead for configuring remote write settings.
The following scrape types in the [scrape_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) section are supported:
* `static_configs` - for scraping statically defined targets. See [these docs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#static_config) for details.
* `file_sd_configs` - for scraping targets defined in external files, aka file-based service discovery.
See [these docs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config) for details.
* `kubernetes_sd_configs` - for scraping targets in Kubernetes (k8s).
See [kubernetes_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config) for details.
* `ec2_sd_configs` - for scraping targets in Amazon EC2.
See [ec2_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ec2_sd_config) for details.
`vmagent` doesn't support the `role_arn` config param yet.
* `gce_sd_configs` - for scraping targets in Google Compute Engine (GCE).
See [gce_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config) for details.
`vmagent` provides the following additional functionality for `gce_sd_config`:
* if `project` arg is missing, then `vmagent` uses the project for the instance where it runs;
* if `zone` arg is missing, then `vmagent` uses the zone for the instance where it runs;
* if `zone` arg equals `"*"`, then `vmagent` discovers all the zones for the given project;
* `zone` may contain an arbitrary number of zones, i.e. `zone: [us-east1-a, us-east1-b]`.
* `consul_sd_configs` - for scraping targets registered in Consul.
See [consul_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config) for details.
* `dns_sd_configs` - for scraping targets discovered from DNS records (SRV, A and AAAA).
See [dns_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config) for details.
Note that `vmagent` doesn't support the `refresh_interval` option in these scrape configs. Use the corresponding `-promscrape.*CheckInterval`
command-line flag instead. For example, `-promscrape.consulSDCheckInterval=60s` sets `refresh_interval` for all the `consul_sd_configs`
entries to 60s. Run `vmagent -help` in order to see default values for `-promscrape.*CheckInterval` flags.
File feature requests at [our issue tracker](https://github.com/VictoriaMetrics/VictoriaMetrics/issues) if you need other service discovery mechanisms to be supported by `vmagent`.
### Adding labels to metrics
Labels can be added to metrics via the following mechanisms:
* Via `global -> external_labels` section in `-promscrape.config` file. These labels are added only to metrics scraped from targets configured in `-promscrape.config` file.
* Via `-remoteWrite.label` command-line flag. These labels are added to all the collected metrics before sending them to `-remoteWrite.url`.
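For instance, passing `-remoteWrite.label=datacenter=dc1` (a hypothetical value) attaches a `datacenter="dc1"` label to every metric pushed to the configured remote storages.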
### Relabeling
`vmagent` supports [Prometheus relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config).
Additionally, it provides the following extra actions:
* `replace_all`: replaces all the occurrences of `regex` in the values of `source_labels` with the `replacement` and stores the result in the `target_label`.
* `labelmap_all`: replaces all the occurrences of `regex` in all the label names with the `replacement`.
The relabeling can be defined in the following places:
* At `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels.
* At `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file. This relabeling is applied to all the scraped metrics in the given `scrape_config`.
* At `-remoteWrite.relabelConfig` file. This relabeling is applied to all the collected metrics before sending them to remote storage.
* At `-remoteWrite.urlRelabelConfig` files. This relabeling is applied to metrics before sending them to the corresponding `-remoteWrite.url`.
Read more about relabeling in the following articles:
* [Life of a label](https://www.robustperception.io/life-of-a-label)
* [Discarding targets and timeseries with relabeling](https://www.robustperception.io/relabelling-can-discard-targets-timeseries-and-alerts)
* [Dropping labels at scrape time](https://www.robustperception.io/dropping-metrics-at-scrape-time-with-prometheus)
* [Extracting labels from legacy metric names](https://www.robustperception.io/extracting-labels-from-legacy-metric-names)
* [relabel_configs vs metric_relabel_configs](https://www.robustperception.io/relabel_configs-vs-metric_relabel_configs)
### Monitoring
`vmagent` exports various metrics in Prometheus exposition format at the `http://vmagent-host:8429/metrics` page. It is recommended to set up regular scraping of this page
either via `vmagent` itself or via Prometheus, so that the exported metrics can be analyzed later.
`vmagent` also exports target statuses at the `http://vmagent-host:8429/targets` page in plaintext format.
### Troubleshooting
* It is recommended to increase the maximum number of open files in the system (`ulimit -n`) when scraping a big number of targets,
since `vmagent` establishes at least one TCP connection per target.
* When `vmagent` scrapes many unreliable targets, it can flood the error log with scrape errors. These errors can be suppressed
by passing the `-promscrape.suppressScrapeErrors` command-line flag to `vmagent`. The most recent scrape error for each target can be observed at `http://vmagent-host:8429/targets`.
* It is recommended to increase `-remoteWrite.queues` if `vmagent` collects more than 100K samples per second
and the `vmagent_remotewrite_pending_data_bytes` metric exported at the `http://vmagent-host:8429/metrics` page constantly grows.
* `vmagent` buffers scraped data in the `-remoteWrite.tmpDataPath` directory until it is sent to `-remoteWrite.url`.
The directory can grow large when remote storage is unavailable for extended periods of time and if `-remoteWrite.maxDiskUsagePerURL` isn't set.
If you don't want to send all the data from the directory to remote storage, simply stop `vmagent` and delete the directory.
### How to build from sources
It is recommended to use [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) - `vmagent` is located in the `vmutils-*` archives there.
#### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.13.
2. Run `make vmagent` from the root folder of the repository.
It builds the `vmagent` binary and puts it into the `bin` folder.
#### Production build
1. [Install docker](https://docs.docker.com/install/).
2. Run `make vmagent-prod` from the root folder of the repository.
It builds the `vmagent-prod` binary and puts it into the `bin` folder.
#### Building docker images
Run `make package-vmagent`. It builds the `victoriametrics/vmagent:<PKG_TAG>` docker image locally.
`<PKG_TAG>` is an auto-generated image tag, which depends on the source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmagent`.
By default the image is built on top of the `scratch` image. It is possible to build the package on top of any other base image
by setting it via the `ROOT_IMAGE` environment variable. For example, the following command builds the image on top of the `alpine:3.11` image:
```bash
ROOT_IMAGE=alpine:3.11 make package-vmagent
```
### Profiling
`vmagent` provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
* Memory profile. It can be collected with the following command:
```bash
curl -s http://<vmagent-host>:8429/debug/pprof/heap > mem.pprof
```
* CPU profile. It can be collected with the following command:
```bash
curl -s http://<vmagent-host>:8429/debug/pprof/profile > cpu.pprof
```
The command for collecting the CPU profile waits for 30 seconds before returning.
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).

View File

@@ -1,31 +1,39 @@
package common
import (
"runtime"
"sync"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
)
// PushCtx is a context used for populating WriteRequest.
type PushCtx struct {
// WriteRequest contains the WriteRequest, which must be pushed later to remote storage.
//
// The actual labels and samples for the time series are stored in Labels and Samples fields.
WriteRequest prompb.WriteRequest
WriteRequest prompbmarshal.WriteRequest
// Labels contains flat list of all the labels used in WriteRequest.
Labels []prompb.Label
Labels []prompbmarshal.Label
// Samples contains flat list of all the samples used in WriteRequest.
Samples []prompb.Sample
Samples []prompbmarshal.Sample
}
// Reset resets ctx.
func (ctx *PushCtx) Reset() {
ctx.WriteRequest.Reset()
tss := ctx.WriteRequest.Timeseries
for i := range tss {
ts := &tss[i]
ts.Labels = nil
ts.Samples = nil
}
ctx.WriteRequest.Timeseries = ctx.WriteRequest.Timeseries[:0]
promrelabel.CleanLabels(ctx.Labels)
labels := ctx.Labels
for i := range labels {
label := &labels[i]
label.Name = ""
label.Value = ""
}
ctx.Labels = ctx.Labels[:0]
ctx.Samples = ctx.Samples[:0]
@@ -35,10 +43,15 @@ func (ctx *PushCtx) Reset() {
//
// Call PutPushCtx when the ctx is no longer needed.
func GetPushCtx() *PushCtx {
if v := pushCtxPool.Get(); v != nil {
return v.(*PushCtx)
select {
case ctx := <-pushCtxPoolCh:
return ctx
default:
if v := pushCtxPool.Get(); v != nil {
return v.(*PushCtx)
}
return &PushCtx{}
}
return &PushCtx{}
}
// PutPushCtx returns ctx to the pool.
@@ -46,7 +59,12 @@ func GetPushCtx() *PushCtx {
// ctx mustn't be used after returning to the pool.
func PutPushCtx(ctx *PushCtx) {
ctx.Reset()
pushCtxPool.Put(ctx)
select {
case pushCtxPoolCh <- ctx:
default:
pushCtxPool.Put(ctx)
}
}
var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *PushCtx, runtime.GOMAXPROCS(-1))
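The channel-plus-`sync.Pool` pairing above is a hybrid pooling idiom: the fixed-size channel keeps up to `GOMAXPROCS` hot objects alive across GC cycles, while the `sync.Pool` absorbs any burst beyond that. A generic sketch of the same pattern (the `buffer` type and package name are hypothetical):

```go
package pool

import (
	"runtime"
	"sync"
)

type buffer struct{ b []byte }

var bufPool sync.Pool
var bufPoolCh = make(chan *buffer, runtime.GOMAXPROCS(-1))

// get prefers the channel, whose contents survive GC, and falls back to
// the sync.Pool, which may need to allocate after a collection.
func get() *buffer {
	select {
	case b := <-bufPoolCh:
		return b
	default:
		if v := bufPool.Get(); v != nil {
			return v.(*buffer)
		}
		return &buffer{}
	}
}

// put resets b and returns it for reuse; overflow spills into the sync.Pool.
func put(b *buffer) {
	b.b = b.b[:0]
	select {
	case bufPoolCh <- b:
	default:
		bufPool.Put(b)
	}
}
```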

View File

@@ -5,33 +5,25 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="csvimport"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="csvimport"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="csvimport"}`)
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="csvimport"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="csvimport"}`)
)
// InsertHandler processes csv data from req.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
return stream.Parse(req, func(rows []csvimport.Row) error {
return insertRows(at, rows, extraLabels)
func InsertHandler(req *http.Request) error {
return writeconcurrencylimiter.Do(func() error {
return parser.ParseStream(req, insertRows)
})
}
func insertRows(at *auth.Token, rows []csvimport.Row, extraLabels []prompb.Label) error {
func insertRows(rows []parser.Row) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
@@ -41,23 +33,22 @@ func insertRows(at *auth.Token, rows []csvimport.Row, extraLabels []prompb.Label
for i := range rows {
r := &rows[i]
labelsLen := len(labels)
labels = append(labels, prompb.Label{
labels = append(labels, prompbmarshal.Label{
Name: "__name__",
Value: r.Metric,
})
for j := range r.Tags {
tag := &r.Tags[j]
labels = append(labels, prompb.Label{
labels = append(labels, prompbmarshal.Label{
Name: tag.Key,
Value: tag.Value,
})
}
labels = append(labels, extraLabels...)
samples = append(samples, prompb.Sample{
samples = append(samples, prompbmarshal.Sample{
Value: r.Value,
Timestamp: r.Timestamp,
})
tssDst = append(tssDst, prompb.TimeSeries{
tssDst = append(tssDst, prompbmarshal.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[len(samples)-1:],
})
@@ -65,13 +56,8 @@ func insertRows(at *auth.Token, rows []csvimport.Row, extraLabels []prompb.Label
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
remotewrite.Push(&ctx.WriteRequest)
rowsInserted.Add(len(rows))
if at != nil {
rowsTenantInserted.Get(at).Add(len(rows))
}
rowsPerInsert.Update(float64(len(rows)))
return nil
}

View File

@@ -1,95 +0,0 @@
package datadogsketches
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadogsketches"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadogsketches"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadogsketches"}`)
)
// InsertHandlerForHTTP processes remote write for DataDog POST /api/beta/sketches request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
ce := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, ce, func(sketches []*datadogsketches.Sketch) error {
return insertRows(at, sketches, extraLabels)
})
}
func insertRows(at *auth.Token, sketches []*datadogsketches.Sketch, extraLabels []prompb.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
rowsTotal := 0
tssDst := ctx.WriteRequest.Timeseries[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
for _, sketch := range sketches {
ms := sketch.ToSummary()
for _, m := range ms {
labelsLen := len(labels)
labels = append(labels, prompb.Label{
Name: "__name__",
Value: m.Name,
})
for _, label := range m.Labels {
labels = append(labels, prompb.Label{
Name: label.Name,
Value: label.Value,
})
}
for _, tag := range sketch.Tags {
name, value := datadogutil.SplitTag(tag)
if name == "host" {
name = "exported_host"
}
labels = append(labels, prompb.Label{
Name: name,
Value: value,
})
}
labels = append(labels, extraLabels...)
samplesLen := len(samples)
for _, p := range m.Points {
samples = append(samples, prompb.Sample{
Timestamp: p.Timestamp,
Value: p.Value,
})
}
rowsTotal += len(m.Points)
tssDst = append(tssDst, prompb.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[samplesLen:],
})
}
}
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
rowsInserted.Add(rowsTotal)
if at != nil {
rowsTenantInserted.Get(at).Add(rowsTotal)
}
rowsPerInsert.Update(float64(rowsTotal))
return nil
}

View File

@@ -1,99 +0,0 @@
package datadogv1
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadogv1"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadogv1"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadogv1"}`)
)
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
ce := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, ce, func(series []datadogv1.Series) error {
return insertRows(at, series, extraLabels)
})
}
func insertRows(at *auth.Token, series []datadogv1.Series, extraLabels []prompb.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
rowsTotal := 0
tssDst := ctx.WriteRequest.Timeseries[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
for i := range series {
ss := &series[i]
rowsTotal += len(ss.Points)
labelsLen := len(labels)
labels = append(labels, prompb.Label{
Name: "__name__",
Value: ss.Metric,
})
if ss.Host != "" {
labels = append(labels, prompb.Label{
Name: "host",
Value: ss.Host,
})
}
if ss.Device != "" {
labels = append(labels, prompb.Label{
Name: "device",
Value: ss.Device,
})
}
for _, tag := range ss.Tags {
name, value := datadogutil.SplitTag(tag)
if name == "host" {
name = "exported_host"
}
labels = append(labels, prompb.Label{
Name: name,
Value: value,
})
}
labels = append(labels, extraLabels...)
samplesLen := len(samples)
for _, pt := range ss.Points {
samples = append(samples, prompb.Sample{
Timestamp: pt.Timestamp(),
Value: pt.Value(),
})
}
tssDst = append(tssDst, prompb.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[samplesLen:],
})
}
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
rowsInserted.Add(rowsTotal)
if at != nil {
rowsTenantInserted.Get(at).Add(rowsTotal)
}
rowsPerInsert.Update(float64(rowsTotal))
return nil
}

View File

@@ -1,102 +0,0 @@
package datadogv2
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadogv2"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadogv2"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadogv2"}`)
)
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v2/series request.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
ct := req.Header.Get("Content-Type")
ce := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, ce, ct, func(series []datadogv2.Series) error {
return insertRows(at, series, extraLabels)
})
}
func insertRows(at *auth.Token, series []datadogv2.Series, extraLabels []prompb.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
rowsTotal := 0
tssDst := ctx.WriteRequest.Timeseries[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
for i := range series {
ss := &series[i]
rowsTotal += len(ss.Points)
labelsLen := len(labels)
labels = append(labels, prompb.Label{
Name: "__name__",
Value: ss.Metric,
})
for _, rs := range ss.Resources {
labels = append(labels, prompb.Label{
Name: rs.Type,
Value: rs.Name,
})
}
if ss.SourceTypeName != "" {
labels = append(labels, prompb.Label{
Name: "source_type_name",
Value: ss.SourceTypeName,
})
}
for _, tag := range ss.Tags {
name, value := datadogutil.SplitTag(tag)
if name == "host" {
name = "exported_host"
}
labels = append(labels, prompb.Label{
Name: name,
Value: value,
})
}
labels = append(labels, extraLabels...)
samplesLen := len(samples)
for _, pt := range ss.Points {
samples = append(samples, prompb.Sample{
Timestamp: pt.Timestamp * 1000,
Value: pt.Value,
})
}
tssDst = append(tssDst, prompb.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[samplesLen:],
})
}
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
rowsInserted.Add(rowsTotal)
if at != nil {
rowsTenantInserted.Get(at).Add(rowsTotal)
}
rowsPerInsert.Update(float64(rowsTotal))
return nil
}

View File

@@ -1,8 +1,8 @@
ARG base_image=non-existing
ARG base_image
FROM $base_image
EXPOSE 8429
ENTRYPOINT ["/vmagent-prod"]
ARG src_binary=non-existing
ARG src_binary
COPY $src_binary ./vmagent-prod

View File

@@ -5,10 +5,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/graphite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/graphite/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
@@ -21,12 +20,12 @@ var (
//
// See https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol
func InsertHandler(r io.Reader) error {
return stream.Parse(r, "", func(rows []parser.Row) error {
return insertRows(nil, rows)
return writeconcurrencylimiter.Do(func() error {
return parser.ParseStream(r, insertRows)
})
}
func insertRows(at *auth.Token, rows []parser.Row) error {
func insertRows(rows []parser.Row) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
@@ -36,22 +35,22 @@ func insertRows(at *auth.Token, rows []parser.Row) error {
for i := range rows {
r := &rows[i]
labelsLen := len(labels)
labels = append(labels, prompb.Label{
labels = append(labels, prompbmarshal.Label{
Name: "__name__",
Value: r.Metric,
})
for j := range r.Tags {
tag := &r.Tags[j]
labels = append(labels, prompb.Label{
labels = append(labels, prompbmarshal.Label{
Name: tag.Key,
Value: tag.Value,
})
}
samples = append(samples, prompb.Sample{
samples = append(samples, prompbmarshal.Sample{
Value: r.Value,
Timestamp: r.Timestamp,
})
tssDst = append(tssDst, prompb.TimeSeries{
tssDst = append(tssDst, prompbmarshal.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[len(samples)-1:],
})
@@ -59,9 +58,7 @@ func insertRows(at *auth.Token, rows []parser.Row) error {
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
remotewrite.Push(&ctx.WriteRequest)
rowsInserted.Add(len(rows))
rowsPerInsert.Update(float64(len(rows)))
return nil
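
For reference, the plaintext lines consumed by the Graphite handler above look like `foo.bar.baz 123.4 1700000000`. A minimal sketch of the format, assuming a simplified parser (the real one also supports tagged metric names and omitted timestamps):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePlaintext is a simplified stand-in for the Graphite parser used above.
func parsePlaintext(line string) (metric string, value float64, ts int64, err error) {
	fields := strings.Fields(line)
	if len(fields) != 3 {
		return "", 0, 0, fmt.Errorf("want 3 fields, got %d", len(fields))
	}
	if value, err = strconv.ParseFloat(fields[1], 64); err != nil {
		return "", 0, 0, err
	}
	ts, err = strconv.ParseInt(fields[2], 10, 64)
	return fields[0], value, ts, err
}

func main() {
	m, v, ts, err := parsePlaintext("foo.bar.baz 123.4 1700000000")
	fmt.Println(m, v, ts, err) // foo.bar.baz 123.4 1700000000 <nil>
}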

View File

@@ -4,63 +4,52 @@ import (
"flag"
"io"
"net/http"
"runtime"
"sync"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
var (
measurementFieldSeparator = flag.String("influxMeasurementFieldSeparator", "_", "Separator for '{measurement}{separator}{field_name}' metric name when inserted via InfluxDB line protocol")
skipSingleField = flag.Bool("influxSkipSingleField", false, "Uses '{measurement}' instead of '{measurement}{separator}{field_name}' for metric name if InfluxDB line contains only a single field")
skipMeasurement = flag.Bool("influxSkipMeasurement", false, "Uses '{field_name}' as a metric name while ignoring '{measurement}' and '-influxMeasurementFieldSeparator'")
dbLabel = flag.String("influxDBLabel", "db", "Default label for the DB name sent over '?db={db_name}' query parameter")
measurementFieldSeparator = flag.String("influxMeasurementFieldSeparator", "_", "Separator for '{measurement}{separator}{field_name}' metric name when inserted via Influx line protocol")
skipSingleField = flag.Bool("influxSkipSingleField", false, "Uses '{measurement}' instead of '{measurement}{separator}{field_name}' for metric name if Influx line contains only a single field")
)
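
A minimal sketch of the metric-name composition controlled by the flags above; buildMetricName is a hypothetical helper, not the parser's real API:

package main

import "fmt"

// buildMetricName mirrors the naming rules driven by the flags above:
// '{measurement}{separator}{field}' by default, just '{measurement}' when
// -influxSkipSingleField is set and the line carries a single field, and
// just '{field}' when -influxSkipMeasurement is set.
func buildMetricName(measurement, field, sep string, fieldCount int, skipSingleField, skipMeasurement bool) string {
	if skipMeasurement {
		return field
	}
	if skipSingleField && fieldCount == 1 {
		return measurement
	}
	return measurement + sep + field
}

func main() {
	// "cpu,host=a usage_user=1.5" -> cpu_usage_user with the default "_" separator.
	fmt.Println(buildMetricName("cpu", "usage_user", "_", 1, false, false))
	// The same line with -influxSkipSingleField=true -> cpu.
	fmt.Println(buildMetricName("cpu", "usage_user", "_", 1, true, false))
}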
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="influx"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="influx"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="influx"}`)
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="influx"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="influx"}`)
)
// InsertHandlerForReader processes remote write for influx line protocol.
//
// See https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener/
func InsertHandlerForReader(at *auth.Token, r io.Reader, encoding string) error {
return stream.Parse(r, encoding, true, "", "", func(db string, rows []influx.Row) error {
return insertRows(at, db, rows, nil)
func InsertHandlerForReader(r io.Reader) error {
return writeconcurrencylimiter.Do(func() error {
return parser.ParseStream(r, false, "", "", insertRows)
})
}
// InsertHandlerForHTTP processes remote write for influx line protocol.
//
// See https://github.com/influxdata/influxdb/blob/4cbdc197b8117fee648d62e2e5be75c6575352f0/tsdb/README.md
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
q := req.URL.Query()
precision := q.Get("precision")
// Read db tag from https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint
db := q.Get("db")
encoding := req.Header.Get("Content-Encoding")
isStreamMode := req.Header.Get("Stream-Mode") == "1"
return stream.Parse(req.Body, encoding, isStreamMode, precision, db, func(db string, rows []influx.Row) error {
return insertRows(at, db, rows, extraLabels)
func InsertHandlerForHTTP(req *http.Request) error {
return writeconcurrencylimiter.Do(func() error {
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
q := req.URL.Query()
precision := q.Get("precision")
// Read db tag from https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint
db := q.Get("db")
return parser.ParseStream(req.Body, isGzipped, precision, db, insertRows)
})
}
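
A minimal usage sketch for the HTTP write path above, assuming a vmagent reachable at the hypothetical address vmagent:8429; db and precision travel as query args exactly as the handler reads them:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The handler above reads `db` and `precision` from the query string,
	// e.g. as produced by an Influx client writing to vmagent:
	u := url.URL{
		Scheme:   "http",
		Host:     "vmagent:8429", // hypothetical address
		Path:     "/write",
		RawQuery: url.Values{"db": {"telegraf"}, "precision": {"ms"}}.Encode(),
	}
	fmt.Println(u.String()) // http://vmagent:8429/write?db=telegraf&precision=ms
}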
func insertRows(at *auth.Token, db string, rows []influx.Row, extraLabels []prompb.Label) error {
func insertRows(db string, rows []parser.Row) error {
ctx := getPushCtx()
defer putPushCtx(ctx)
@@ -72,32 +61,26 @@ func insertRows(at *auth.Token, db string, rows []influx.Row, extraLabels []prom
buf := ctx.buf[:0]
for i := range rows {
r := &rows[i]
rowsTotal += len(r.Fields)
commonLabels = commonLabels[:0]
hasDBKey := false
hasDBLabel := false
for j := range r.Tags {
tag := &r.Tags[j]
if tag.Key == *dbLabel {
hasDBKey = true
if tag.Key == "db" {
hasDBLabel = true
}
commonLabels = append(commonLabels, prompb.Label{
commonLabels = append(commonLabels, prompbmarshal.Label{
Name: tag.Key,
Value: tag.Value,
})
}
if len(db) > 0 && !hasDBKey {
commonLabels = append(commonLabels, prompb.Label{
Name: *dbLabel,
if len(db) > 0 && !hasDBLabel {
commonLabels = append(commonLabels, prompbmarshal.Label{
Name: "db",
Value: db,
})
}
commonLabels = append(commonLabels, extraLabels...)
ctx.metricGroupBuf = ctx.metricGroupBuf[:0]
if !*skipMeasurement {
ctx.metricGroupBuf = append(ctx.metricGroupBuf, r.Measurement...)
}
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1139
skipFieldKey := len(r.Measurement) > 0 && len(r.Fields) == 1 && *skipSingleField
ctx.metricGroupBuf = append(ctx.metricGroupBuf[:0], r.Measurement...)
skipFieldKey := len(r.Fields) == 1 && *skipSingleField
if len(ctx.metricGroupBuf) > 0 && !skipFieldKey {
ctx.metricGroupBuf = append(ctx.metricGroupBuf, *measurementFieldSeparator...)
}
@@ -110,33 +93,29 @@ func insertRows(at *auth.Token, db string, rows []influx.Row, extraLabels []prom
}
metricGroup := bytesutil.ToUnsafeString(buf[bufLen:])
labelsLen := len(labels)
labels = append(labels, prompb.Label{
labels = append(labels, prompbmarshal.Label{
Name: "__name__",
Value: metricGroup,
})
labels = append(labels, commonLabels...)
samples = append(samples, prompb.Sample{
samples = append(samples, prompbmarshal.Sample{
Timestamp: r.Timestamp,
Value: f.Value,
})
tssDst = append(tssDst, prompb.TimeSeries{
tssDst = append(tssDst, prompbmarshal.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[len(samples)-1:],
})
}
rowsTotal += len(r.Fields)
}
ctx.buf = buf
ctx.ctx.WriteRequest.Timeseries = tssDst
ctx.ctx.Labels = labels
ctx.ctx.Samples = samples
ctx.commonLabels = commonLabels
if !remotewrite.TryPush(at, &ctx.ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
remotewrite.Push(&ctx.ctx.WriteRequest)
rowsInserted.Add(rowsTotal)
if at != nil {
rowsTenantInserted.Get(at).Add(rowsTotal)
}
rowsPerInsert.Update(float64(rowsTotal))
return nil
@@ -144,7 +123,7 @@ func insertRows(at *auth.Token, db string, rows []influx.Row, extraLabels []prom
type pushCtx struct {
ctx common.PushCtx
commonLabels []prompb.Label
commonLabels []prompbmarshal.Label
metricGroupBuf []byte
buf []byte
}
@@ -152,23 +131,37 @@ type pushCtx struct {
func (ctx *pushCtx) reset() {
ctx.ctx.Reset()
promrelabel.CleanLabels(ctx.commonLabels)
ctx.commonLabels = ctx.commonLabels[:0]
commonLabels := ctx.commonLabels
for i := range commonLabels {
label := &commonLabels[i]
label.Name = ""
label.Value = ""
}
ctx.metricGroupBuf = ctx.metricGroupBuf[:0]
ctx.buf = ctx.buf[:0]
}
func getPushCtx() *pushCtx {
if v := pushCtxPool.Get(); v != nil {
return v.(*pushCtx)
select {
case ctx := <-pushCtxPoolCh:
return ctx
default:
if v := pushCtxPool.Get(); v != nil {
return v.(*pushCtx)
}
return &pushCtx{}
}
return &pushCtx{}
}
func putPushCtx(ctx *pushCtx) {
ctx.reset()
pushCtxPool.Put(ctx)
select {
case pushCtxPoolCh <- ctx:
default:
pushCtxPool.Put(ctx)
}
}
var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, runtime.GOMAXPROCS(-1))
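
The channel-plus-sync.Pool combination above is a deliberate design: the fixed-size channel keeps up to GOMAXPROCS contexts alive across GC cycles, while sync.Pool absorbs any overflow but may be drained on every GC. A minimal self-contained sketch of the same pattern, with a simplified ctxT type:

package main

import (
	"fmt"
	"runtime"
	"sync"
)

type ctxT struct{ buf []byte }

var (
	pool   sync.Pool
	poolCh = make(chan *ctxT, runtime.GOMAXPROCS(-1))
)

func get() *ctxT {
	select {
	case c := <-poolCh: // fast path: reuse a context pinned by the channel
		return c
	default:
		if v := pool.Get(); v != nil {
			return v.(*ctxT)
		}
		return &ctxT{}
	}
}

func put(c *ctxT) {
	c.buf = c.buf[:0] // reset before reuse
	select {
	case poolCh <- c: // pin it if there is room
	default:
		pool.Put(c) // otherwise hand it to the GC-managed pool
	}
}

func main() {
	c := get()
	c.buf = append(c.buf, "hello"...)
	put(c)
	fmt.Println(get() == c) // usually true: the same context comes back
}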

View File

@@ -1,40 +1,24 @@
package main
import (
"embed"
"flag"
"fmt"
"io"
"net/http"
"os"
"strings"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadogsketches"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/graphite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/native"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/newrelic"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/opentelemetry"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/opentsdb"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/prometheusimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/promremotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/influxutil"
graphiteserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/graphite"
influxserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/influx"
opentsdbserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/opentsdb"
@@ -42,46 +26,22 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/pushmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
var (
httpListenAddrs = flagutil.NewArrayString("httpListenAddr", "TCP address to listen for incoming http requests. "+
httpListenAddr = flag.String("httpListenAddr", ":8429", "TCP address to listen for http connections. "+
"Set this flag to empty value in order to disable listening on any port. This mode may be useful for running multiple vmagent instances on the same server. "+
"Note that /targets and /metrics pages aren't available if -httpListenAddr=''. See also -tls and -httpListenAddr.useProxyProtocol")
useProxyProtocol = flagutil.NewArrayBool("httpListenAddr.useProxyProtocol", "Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . "+
"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . "+
"With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing")
influxListenAddr = flag.String("influxListenAddr", "", "TCP and UDP address to listen for InfluxDB line protocol data. Usually :8089 must be set. Doesn't work if empty. "+
"This flag isn't needed when ingesting data over HTTP - just send it to http://<vmagent>:8429/write . "+
"See also -influxListenAddr.useProxyProtocol")
influxUseProxyProtocol = flag.Bool("influxListenAddr.useProxyProtocol", false, "Whether to use proxy protocol for connections accepted at -influxListenAddr . "+
"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt")
graphiteListenAddr = flag.String("graphiteListenAddr", "", "TCP and UDP address to listen for Graphite plaintext data. Usually :2003 must be set. Doesn't work if empty. "+
"See also -graphiteListenAddr.useProxyProtocol")
graphiteUseProxyProtocol = flag.Bool("graphiteListenAddr.useProxyProtocol", false, "Whether to use proxy protocol for connections accepted at -graphiteListenAddr . "+
"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt")
opentsdbListenAddr = flag.String("opentsdbListenAddr", "", "TCP and UDP address to listen for OpenTSDB metrics. "+
"Note that /targets and /metrics pages aren't available if -httpListenAddr=''")
influxListenAddr = flag.String("influxListenAddr", "", "TCP and UDP address to listen for Influx line protocol data. Usually :8189 must be set. Doesn't work if empty")
graphiteListenAddr = flag.String("graphiteListenAddr", "", "TCP and UDP address to listen for Graphite plaintext data. Usually :2003 must be set. Doesn't work if empty")
opentsdbListenAddr = flag.String("opentsdbListenAddr", "", "TCP and UDP address to listen for OpenTSDB metrics. "+
"Telnet put messages and HTTP /api/put messages are simultaneously served on TCP port. "+
"Usually :4242 must be set. Doesn't work if empty. See also -opentsdbListenAddr.useProxyProtocol")
opentsdbUseProxyProtocol = flag.Bool("opentsdbListenAddr.useProxyProtocol", false, "Whether to use proxy protocol for connections accepted at -opentsdbListenAddr . "+
"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt")
opentsdbHTTPListenAddr = flag.String("opentsdbHTTPListenAddr", "", "TCP address to listen for OpenTSDB HTTP put requests. Usually :4242 must be set. Doesn't work if empty. "+
"See also -opentsdbHTTPListenAddr.useProxyProtocol")
opentsdbHTTPUseProxyProtocol = flag.Bool("opentsdbHTTPListenAddr.useProxyProtocol", false, "Whether to use proxy protocol for connections accepted "+
"at -opentsdbHTTPListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt")
configAuthKey = flagutil.NewPassword("configAuthKey", "Authorization key for accessing /config page. It must be passed via authKey query arg. It overrides -httpAuth.*")
reloadAuthKey = flagutil.NewPassword("reloadAuthKey", "Auth key for /-/reload http endpoint. It must be passed via authKey query arg. It overrides -httpAuth.*")
dryRun = flag.Bool("dryRun", false, "Whether to check config files without running vmagent. The following files are checked: "+
"-promscrape.config, -remoteWrite.relabelConfig, -remoteWrite.urlRelabelConfig, -remoteWrite.streamAggr.config . "+
"Unknown config entries aren't allowed in -promscrape.config by default. This can be changed by passing -promscrape.config.strictParse=false command-line flag")
maxLabelsPerTimeseries = flag.Int("maxLabelsPerTimeseries", 0, "The maximum number of labels per time series to be accepted. Series with superfluous labels are ignored. In this case the vm_rows_ignored_total{reason=\"too_many_labels\"} metric at /metrics page is incremented")
maxLabelNameLen = flag.Int("maxLabelNameLen", 0, "The maximum length of label names in the accepted time series. Series with longer label name are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_name\"} metric at /metrics page is incremented")
maxLabelValueLen = flag.Int("maxLabelValueLen", 0, "The maximum length of label values in the accepted time series. Series with longer label value are ignored. In this case the vm_rows_ignored_total{reason=\"too_long_label_value\"} metric at /metrics page is incremented")
"Usually :4242 must be set. Doesn't work if empty")
opentsdbHTTPListenAddr = flag.String("opentsdbHTTPListenAddr", "", "TCP address to listen for OpenTSDB HTTP put requests. Usually :4242 must be set. Doesn't work if empty")
dryRun = flag.Bool("dryRun", false, "Whether to check only config files without running vmagent. The following files are checked: "+
"-promscrape.config, -remoteWrite.relabelConfig, -remoteWrite.urlRelabelConfig . See also -promscrape.config.dryRun")
)
var (
@@ -91,97 +51,62 @@ var (
opentsdbhttpServer *opentsdbhttpserver.Server
)
var (
//go:embed static
staticFiles embed.FS
staticServer = http.FileServer(http.FS(staticFiles))
)
func main() {
// vmagent is optimized for reduced memory allocations,
// so it can run with the reduced GOGC in order to reduce the used memory,
// while keeping CPU usage spent in GC at low levels.
//
// Some workloads may need increased GOGC values. Then such values can be set via GOGC environment variable.
// It is recommended increasing GOGC if go_memstats_gc_cpu_fraction metric exposed at /metrics page
// exceeds 0.05 for extended periods of time.
cgroup.SetGOGC(50)
// Write flags and help message to stdout, since it is easier to grep or pipe.
flag.CommandLine.SetOutput(os.Stdout)
flag.Usage = usage
envflag.Parse()
remotewrite.InitSecretFlags()
buildinfo.Init()
logger.Init()
timeserieslimits.Init(*maxLabelsPerTimeseries, *maxLabelNameLen, *maxLabelValueLen)
if promscrape.IsDryRun() {
if err := promscrape.CheckConfig(); err != nil {
logger.Fatalf("error when checking -promscrape.config: %s", err)
}
logger.Infof("-promscrape.config is ok; exiting with 0 status code")
return
}
if *dryRun {
if err := promscrape.CheckConfig(); err != nil {
logger.Fatalf("error when checking -promscrape.config: %s", err)
if err := flag.Set("promscrape.config.strictParse", "true"); err != nil {
logger.Panicf("BUG: cannot set promscrape.config.strictParse=true: %s", err)
}
if err := remotewrite.CheckRelabelConfigs(); err != nil {
logger.Fatalf("error when checking relabel configs: %s", err)
}
if err := remotewrite.CheckStreamAggrConfigs(); err != nil {
logger.Fatalf("error when checking -streamAggr.config and -remoteWrite.streamAggr.config: %s", err)
if err := promscrape.CheckConfig(); err != nil {
logger.Fatalf("error when checking Prometheus config: %s", err)
}
logger.Infof("all the configs are ok; exiting with 0 status code")
logger.Infof("all the configs are ok; exitting with 0 status code")
return
}
listenAddrs := *httpListenAddrs
if len(listenAddrs) == 0 {
listenAddrs = []string{":8429"}
}
logger.Infof("starting vmagent at %q...", listenAddrs)
logger.Infof("starting vmagent at %q...", *httpListenAddr)
startTime := time.Now()
remotewrite.StartIngestionRateLimiter()
remotewrite.Init()
protoparserutil.StartUnmarshalWorkers()
writeconcurrencylimiter.Init()
if len(*influxListenAddr) > 0 {
influxServer = influxserver.MustStart(*influxListenAddr, *influxUseProxyProtocol, func(r io.Reader) error {
return influx.InsertHandlerForReader(nil, r, "")
})
influxServer = influxserver.MustStart(*influxListenAddr, influx.InsertHandlerForReader)
}
if len(*graphiteListenAddr) > 0 {
graphiteServer = graphiteserver.MustStart(*graphiteListenAddr, *graphiteUseProxyProtocol, graphite.InsertHandler)
graphiteServer = graphiteserver.MustStart(*graphiteListenAddr, graphite.InsertHandler)
}
if len(*opentsdbListenAddr) > 0 {
httpInsertHandler := getOpenTSDBHTTPInsertHandler()
opentsdbServer = opentsdbserver.MustStart(*opentsdbListenAddr, *opentsdbUseProxyProtocol, opentsdb.InsertHandler, httpInsertHandler)
opentsdbServer = opentsdbserver.MustStart(*opentsdbListenAddr, opentsdb.InsertHandler, opentsdbhttp.InsertHandler)
}
if len(*opentsdbHTTPListenAddr) > 0 {
httpInsertHandler := getOpenTSDBHTTPInsertHandler()
opentsdbhttpServer = opentsdbhttpserver.MustStart(*opentsdbHTTPListenAddr, *opentsdbHTTPUseProxyProtocol, httpInsertHandler)
opentsdbhttpServer = opentsdbhttpserver.MustStart(*opentsdbHTTPListenAddr, opentsdbhttp.InsertHandler)
}
promscrape.Init(remotewrite.PushDropSamplesOnFailure)
promscrape.Init(remotewrite.Push)
go httpserver.Serve(listenAddrs, requestHandler, httpserver.ServeOptions{
UseProxyProtocol: useProxyProtocol,
})
if len(*httpListenAddr) > 0 {
go httpserver.Serve(*httpListenAddr, requestHandler)
}
logger.Infof("started vmagent in %.3f seconds", time.Since(startTime).Seconds())
pushmetrics.Init()
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
remotewrite.StopIngestionRateLimiter()
pushmetrics.Stop()
startTime = time.Now()
logger.Infof("gracefully shutting down webservice at %q", listenAddrs)
if err := httpserver.Stop(listenAddrs); err != nil {
logger.Fatalf("cannot stop the webservice: %s", err)
if len(*httpListenAddr) > 0 {
logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)
if err := httpserver.Stop(*httpListenAddr); err != nil {
logger.Fatalf("cannot stop the webservice: %s", err)
}
logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
}
logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
promscrape.Stop()
@@ -197,495 +122,68 @@ func main() {
if len(*opentsdbHTTPListenAddr) > 0 {
opentsdbhttpServer.MustStop()
}
protoparserutil.StopUnmarshalWorkers()
remotewrite.Stop()
logger.Infof("successfully stopped vmagent in %.3f seconds", time.Since(startTime).Seconds())
}
func getOpenTSDBHTTPInsertHandler() func(req *http.Request) error {
if !remotewrite.MultitenancyEnabled() {
return func(req *http.Request) error {
path := strings.ReplaceAll(req.URL.Path, "//", "/")
if path != "/api/put" {
return fmt.Errorf("unsupported path requested: %q; expecting '/api/put'", path)
}
return opentsdbhttp.InsertHandler(nil, req)
}
}
return func(req *http.Request) error {
path := strings.ReplaceAll(req.URL.Path, "//", "/")
at, err := getAuthTokenFromPath(path)
if err != nil {
return fmt.Errorf("cannot obtain auth token from path %q: %w", path, err)
}
return opentsdbhttp.InsertHandler(at, req)
}
}
func getAuthTokenFromPath(path string) (*auth.Token, error) {
p, err := httpserver.ParsePath(path)
if err != nil {
return nil, fmt.Errorf("cannot parse multitenant path: %w", err)
}
if p.Prefix != "insert" {
return nil, fmt.Errorf(`unsupported multitenant prefix: %q; expected "insert"`, p.Prefix)
}
if p.Suffix != "opentsdb/api/put" {
return nil, fmt.Errorf("unsupported path requested: %q; expecting 'opentsdb/api/put'", p.Suffix)
}
return auth.NewTokenPossibleMultitenant(p.AuthToken)
}
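
A minimal sketch of the path shape this helper expects, with splitMultitenantPath standing in for httpserver.ParsePath (the auth token "42:7" is a hypothetical accountID:projectID pair):

package main

import (
	"fmt"
	"strings"
)

// splitMultitenantPath is a simplified stand-in for httpserver.ParsePath: it
// expects "/{prefix}/{authToken}/{suffix}", e.g. "/insert/42:7/opentsdb/api/put".
func splitMultitenantPath(path string) (prefix, authToken, suffix string, err error) {
	parts := strings.SplitN(strings.TrimPrefix(path, "/"), "/", 3)
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("cannot parse multitenant path %q", path)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	prefix, at, suffix, err := splitMultitenantPath("/insert/42:7/opentsdb/api/put")
	fmt.Println(prefix, at, suffix, err) // insert 42:7 opentsdb/api/put <nil>
}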
func requestHandler(w http.ResponseWriter, r *http.Request) bool {
if r.URL.Path == "/" {
if r.Method != http.MethodGet {
return false
}
w.Header().Add("Content-Type", "text/html; charset=utf-8")
fmt.Fprintf(w, "<h2>vmagent</h2>")
fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/victoriametrics/vmagent/'>https://docs.victoriametrics.com/victoriametrics/vmagent/</a></br>")
fmt.Fprintf(w, "Useful endpoints:</br>")
httpserver.WriteAPIHelp(w, [][2]string{
{"targets", "status for discovered active targets"},
{"service-discovery", "labels before and after relabeling for discovered targets"},
{"metric-relabel-debug", "debug metric relabeling"},
{"api/v1/targets", "advanced information about discovered targets in JSON format"},
{"config", "-promscrape.config contents"},
{"metrics", "available service metrics"},
{"flags", "command-line flags"},
{"-/reload", "reload configuration"},
})
return true
}
path := strings.ReplaceAll(r.URL.Path, "//", "/")
if strings.HasPrefix(path, "/prometheus/api/v1/import/prometheus") || strings.HasPrefix(path, "/api/v1/import/prometheus") {
prometheusimportRequests.Inc()
if err := prometheusimport.InsertHandler(nil, r); err != nil {
prometheusimportErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
statusCode := http.StatusNoContent
if strings.HasPrefix(path, "/prometheus/api/v1/import/prometheus/metrics/job/") ||
strings.HasPrefix(path, "/api/v1/import/prometheus/metrics/job/") {
// Return 200 status code for pushgateway requests.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3636
statusCode = http.StatusOK
}
w.WriteHeader(statusCode)
return true
}
if strings.HasPrefix(path, "/datadog/") {
// Trim suffix from paths starting from /datadog/ in order to support legacy DataDog agent.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2670
path = strings.TrimSuffix(path, "/")
}
path := strings.Replace(r.URL.Path, "//", "/", -1)
switch path {
case "/prometheus/api/v1/write", "/api/v1/write", "/api/v1/push", "/prometheus/api/v1/push":
if protoparserutil.HandleVMProtoServerHandshake(w, r) {
return true
}
case "/api/v1/write":
prometheusWriteRequests.Inc()
if err := promremotewrite.InsertHandler(nil, r); err != nil {
if err := promremotewrite.InsertHandler(r); err != nil {
prometheusWriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "/prometheus/api/v1/import", "/api/v1/import":
case "/api/v1/import":
vmimportRequests.Inc()
if err := vmimport.InsertHandler(nil, r); err != nil {
if err := vmimport.InsertHandler(r); err != nil {
vmimportErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "/prometheus/api/v1/import/csv", "/api/v1/import/csv":
case "/api/v1/import/csv":
csvimportRequests.Inc()
if err := csvimport.InsertHandler(nil, r); err != nil {
if err := csvimport.InsertHandler(r); err != nil {
csvimportErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "/prometheus/api/v1/import/native", "/api/v1/import/native":
nativeimportRequests.Inc()
if err := native.InsertHandler(nil, r); err != nil {
nativeimportErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "/influx/write", "/influx/api/v2/write", "/write", "/api/v2/write":
case "/write", "/api/v2/write":
influxWriteRequests.Inc()
if err := influx.InsertHandlerForHTTP(nil, r); err != nil {
if err := influx.InsertHandlerForHTTP(r); err != nil {
influxWriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
httpserver.Errorf(w, "error in %q: %s", r.URL.Path, err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "/influx/query", "/query":
case "/query":
// Emulate fake response for influx query.
// This is required for TSBS benchmark.
influxQueryRequests.Inc()
influxutil.WriteDatabaseNames(w)
fmt.Fprintf(w, `{"results":[{"series":[{"values":[]}]}]}`)
return true
case "/influx/health":
influxHealthRequests.Inc()
influxutil.WriteHealthCheckResponse(w)
return true
case "/opentelemetry/api/v1/push", "/opentelemetry/v1/metrics":
opentelemetryPushRequests.Inc()
if err := opentelemetry.InsertHandler(nil, r); err != nil {
opentelemetryPushErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
firehose.WriteSuccessResponse(w, r)
return true
case "/newrelic":
newrelicCheckRequest.Inc()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/newrelic/inventory/deltas":
newrelicInventoryRequests.Inc()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"payload":{"version": 1, "state": {}, "reset": "false"}}`)
return true
case "/newrelic/infra/v2/metrics/events/bulk":
newrelicWriteRequests.Inc()
if err := newrelic.InsertHandlerForHTTP(nil, r); err != nil {
newrelicWriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/api/v1/series":
datadogv1WriteRequests.Inc()
if err := datadogv1.InsertHandlerForHTTP(nil, r); err != nil {
datadogv1WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/api/v2/series":
datadogv2WriteRequests.Inc()
if err := datadogv2.InsertHandlerForHTTP(nil, r); err != nil {
datadogv2WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/api/beta/sketches":
datadogsketchesWriteRequests.Inc()
if err := datadogsketches.InsertHandlerForHTTP(nil, r); err != nil {
datadogsketchesWriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(202)
return true
case "/datadog/api/v1/validate":
datadogValidateRequests.Inc()
// See https://docs.datadoghq.com/api/latest/authentication/#validate-api-key
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, `{"valid":true}`)
return true
case "/datadog/api/v1/check_run":
datadogCheckRunRequests.Inc()
// See https://docs.datadoghq.com/api/latest/service-checks/#submit-a-service-check
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/intake":
datadogIntakeRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, `{}`)
return true
case "/datadog/api/v1/metadata":
datadogMetadataRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, `{}`)
return true
case "/prometheus/targets", "/targets":
case "/targets":
promscrapeTargetsRequests.Inc()
promscrape.WriteHumanReadableTargetsStatus(w, r)
w.Header().Set("Content-Type", "text/plain")
promscrape.WriteHumanReadableTargetsStatus(w)
return true
case "/prometheus/service-discovery", "/service-discovery":
promscrapeServiceDiscoveryRequests.Inc()
promscrape.WriteServiceDiscovery(w, r)
return true
case "/prometheus/metric-relabel-debug", "/metric-relabel-debug":
promscrapeMetricRelabelDebugRequests.Inc()
promscrape.WriteMetricRelabelDebug(w, r)
return true
case "/prometheus/target-relabel-debug", "/target-relabel-debug":
promscrapeTargetRelabelDebugRequests.Inc()
promscrape.WriteTargetRelabelDebug(w, r)
return true
case "/prometheus/api/v1/targets", "/api/v1/targets":
promscrapeAPIV1TargetsRequests.Inc()
w.Header().Set("Content-Type", "application/json")
// https://prometheus.io/docs/prometheus/latest/querying/api/#targets
state := r.FormValue("state")
scrapePool := r.FormValue("scrapePool")
promscrape.WriteAPIV1Targets(w, state, scrapePool)
return true
case "/prometheus/target_response", "/target_response":
promscrapeTargetResponseRequests.Inc()
if err := promscrape.WriteTargetResponse(w, r); err != nil {
promscrapeTargetResponseErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
return true
case "/prometheus/config", "/config":
if !httpserver.CheckAuthFlag(w, r, configAuthKey) {
return true
}
promscrapeConfigRequests.Inc()
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
promscrape.WriteConfigData(w)
return true
case "/prometheus/api/v1/status/config", "/api/v1/status/config":
// See https://prometheus.io/docs/prometheus/latest/querying/api/#config
if !httpserver.CheckAuthFlag(w, r, configAuthKey) {
return true
}
promscrapeStatusConfigRequests.Inc()
w.Header().Set("Content-Type", "application/json")
var bb bytesutil.ByteBuffer
promscrape.WriteConfigData(&bb)
fmt.Fprintf(w, `{"status":"success","data":{"yaml":%s}}`, stringsutil.JSONString(string(bb.B)))
return true
case "/prometheus/-/reload", "/-/reload":
if !httpserver.CheckAuthFlag(w, r, reloadAuthKey) {
return true
}
case "/-/reload":
promscrapeConfigReloadRequests.Inc()
procutil.SelfSIGHUP()
w.WriteHeader(http.StatusOK)
return true
case "/ready":
if rdy := promscrape.PendingScrapeConfigs.Load(); rdy > 0 {
errMsg := fmt.Sprintf("waiting for scrapes to init, left: %d", rdy)
http.Error(w, errMsg, http.StatusTooEarly)
} else {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.WriteHeader(http.StatusOK)
w.Write([]byte("OK"))
}
return true
default:
if strings.HasPrefix(r.URL.Path, "/static") {
staticServer.ServeHTTP(w, r)
return true
}
if remotewrite.MultitenancyEnabled() {
return processMultitenantRequest(w, r, path)
}
return false
}
}
func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path string) bool {
p, err := httpserver.ParsePath(path)
if err != nil {
// Cannot parse multitenant path. Skip it - probably it will be parsed later.
return false
}
if p.Prefix != "insert" {
httpserver.Errorf(w, r, `unsupported multitenant prefix: %q; expected "insert"`, p.Prefix)
return true
}
at, err := auth.NewTokenPossibleMultitenant(p.AuthToken)
if err != nil {
httpserver.Errorf(w, r, "cannot obtain auth token: %s", err)
return true
}
if strings.HasPrefix(p.Suffix, "prometheus/api/v1/import/prometheus") {
prometheusimportRequests.Inc()
if err := prometheusimport.InsertHandler(at, r); err != nil {
prometheusimportErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
statusCode := http.StatusNoContent
if strings.HasPrefix(p.Suffix, "prometheus/api/v1/import/prometheus/metrics/job/") {
// Return 200 status code for pushgateway requests.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3636
statusCode = http.StatusOK
}
w.WriteHeader(statusCode)
return true
}
if strings.HasPrefix(p.Suffix, "datadog/") {
// Trim suffix from paths starting from /datadog/ in order to support legacy DataDog agent.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2670
p.Suffix = strings.TrimSuffix(p.Suffix, "/")
}
switch p.Suffix {
case "prometheus/", "prometheus", "prometheus/api/v1/write", "prometheus/api/v1/push":
prometheusWriteRequests.Inc()
if err := promremotewrite.InsertHandler(at, r); err != nil {
prometheusWriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "prometheus/api/v1/import":
vmimportRequests.Inc()
if err := vmimport.InsertHandler(at, r); err != nil {
vmimportErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "prometheus/api/v1/import/csv":
csvimportRequests.Inc()
if err := csvimport.InsertHandler(at, r); err != nil {
csvimportErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "prometheus/api/v1/import/native":
nativeimportRequests.Inc()
if err := native.InsertHandler(at, r); err != nil {
nativeimportErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "influx/write", "influx/api/v2/write":
influxWriteRequests.Inc()
if err := influx.InsertHandlerForHTTP(at, r); err != nil {
influxWriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(http.StatusNoContent)
return true
case "influx/query":
influxQueryRequests.Inc()
influxutil.WriteDatabaseNames(w)
return true
case "influx/health":
influxHealthRequests.Inc()
influxutil.WriteHealthCheckResponse(w)
return true
case "opentelemetry/api/v1/push", "opentelemetry/v1/metrics":
opentelemetryPushRequests.Inc()
if err := opentelemetry.InsertHandler(at, r); err != nil {
opentelemetryPushErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
firehose.WriteSuccessResponse(w, r)
return true
case "newrelic":
newrelicCheckRequest.Inc()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "newrelic/inventory/deltas":
newrelicInventoryRequests.Inc()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"payload":{"version": 1, "state": {}, "reset": "false"}}`)
return true
case "newrelic/infra/v2/metrics/events/bulk":
newrelicWriteRequests.Inc()
if err := newrelic.InsertHandlerForHTTP(at, r); err != nil {
newrelicWriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "datadog/api/v1/series":
datadogv1WriteRequests.Inc()
if err := datadogv1.InsertHandlerForHTTP(at, r); err != nil {
datadogv1WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "datadog/api/v2/series":
datadogv2WriteRequests.Inc()
if err := datadogv2.InsertHandlerForHTTP(at, r); err != nil {
datadogv2WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "datadog/api/beta/sketches":
datadogsketchesWriteRequests.Inc()
if err := datadogsketches.InsertHandlerForHTTP(at, r); err != nil {
datadogsketchesWriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(202)
return true
case "datadog/api/v1/validate":
datadogValidateRequests.Inc()
// See https://docs.datadoghq.com/api/latest/authentication/#validate-api-key
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, `{"valid":true}`)
return true
case "datadog/api/v1/check_run":
datadogCheckRunRequests.Inc()
// See https://docs.datadoghq.com/api/latest/service-checks/#submit-a-service-check
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "datadog/intake":
datadogIntakeRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, `{}`)
return true
case "datadog/api/v1/metadata":
datadogMetadataRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, `{}`)
return true
default:
httpserver.Errorf(w, r, "unsupported multitenant path suffix: %q", p.Suffix)
return true
}
return false
}
var (
@@ -698,63 +196,12 @@ var (
csvimportRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/import/csv", protocol="csvimport"}`)
csvimportErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/import/csv", protocol="csvimport"}`)
prometheusimportRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/import/prometheus", protocol="prometheusimport"}`)
prometheusimportErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/import/prometheus", protocol="prometheusimport"}`)
influxWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/write", protocol="influx"}`)
influxWriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/write", protocol="influx"}`)
nativeimportRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/import/native", protocol="nativeimport"}`)
nativeimportErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/import/native", protocol="nativeimport"}`)
influxQueryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/query", protocol="influx"}`)
influxWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/influx/write", protocol="influx"}`)
influxWriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/influx/write", protocol="influx"}`)
influxQueryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/influx/query", protocol="influx"}`)
influxHealthRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/influx/health", protocol="influx"}`)
datadogv1WriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogv1WriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogv2WriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v2/series", protocol="datadog"}`)
datadogv2WriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/v2/series", protocol="datadog"}`)
datadogsketchesWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/beta/sketches", protocol="datadog"}`)
datadogsketchesWriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/beta/sketches", protocol="datadog"}`)
datadogValidateRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
datadogCheckRunRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)
datadogIntakeRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/intake", protocol="datadog"}`)
datadogMetadataRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/metadata", protocol="datadog"}`)
opentelemetryPushRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/opentelemetry/v1/metrics", protocol="opentelemetry"}`)
opentelemetryPushErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/opentelemetry/v1/metrics", protocol="opentelemetry"}`)
newrelicWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/newrelic/infra/v2/metrics/events/bulk", protocol="newrelic"}`)
newrelicWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/newrelic/infra/v2/metrics/events/bulk", protocol="newrelic"}`)
newrelicInventoryRequests = metrics.NewCounter(`vm_http_requests_total{path="/newrelic/inventory/deltas", protocol="newrelic"}`)
newrelicCheckRequest = metrics.NewCounter(`vm_http_requests_total{path="/newrelic", protocol="newrelic"}`)
promscrapeTargetsRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/targets"}`)
promscrapeServiceDiscoveryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/service-discovery"}`)
promscrapeMetricRelabelDebugRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/metric-relabel-debug"}`)
promscrapeTargetRelabelDebugRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/target-relabel-debug"}`)
promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/targets"}`)
promscrapeTargetResponseRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/target_response"}`)
promscrapeTargetResponseErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/target_response"}`)
promscrapeConfigRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/config"}`)
promscrapeStatusConfigRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/status/config"}`)
promscrapeTargetsRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/targets"}`)
promscrapeConfigReloadRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/-/reload"}`)
)
func usage() {
const s = `
vmagent collects metrics data via popular data ingestion protocols and routes it to VictoriaMetrics.
See the docs at https://docs.victoriametrics.com/victoriametrics/vmagent/ .
`
flagutil.Usage(s)
}

View File

@@ -1,13 +0,0 @@
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
ARG certs_image=non-existing
ARG root_image=non-existing
FROM $certs_image AS certs
RUN apk update && apk upgrade && apk --update --no-cache add ca-certificates
FROM $root_image
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
EXPOSE 8429
ENTRYPOINT ["/vmagent-prod"]
ARG TARGETARCH
ARG BINARY_SUFFIX=non-existing
COPY vmagent-linux-${TARGETARCH}-prod${BINARY_SUFFIX} ./vmagent-prod

View File

@@ -1,91 +0,0 @@
package native
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/native/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="native"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="native"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="native"}`)
)
// InsertHandler processes `/api/v1/import` request.
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(block *stream.Block) error {
return insertRows(at, block, extraLabels)
})
}
func insertRows(at *auth.Token, block *stream.Block, extraLabels []prompb.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
// Update rowsInserted and rowsPerInsert before actual inserting,
// since relabeling can prevent from inserting the rows.
rowsLen := len(block.Values)
rowsInserted.Add(rowsLen)
if at != nil {
rowsTenantInserted.Get(at).Add(rowsLen)
}
rowsPerInsert.Update(float64(rowsLen))
tssDst := ctx.WriteRequest.Timeseries[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
mn := &block.MetricName
labelsLen := len(labels)
labels = append(labels, prompb.Label{
Name: "__name__",
Value: bytesutil.ToUnsafeString(mn.MetricGroup),
})
for j := range mn.Tags {
tag := &mn.Tags[j]
labels = append(labels, prompb.Label{
Name: bytesutil.ToUnsafeString(tag.Key),
Value: bytesutil.ToUnsafeString(tag.Value),
})
}
labels = append(labels, extraLabels...)
values := block.Values
timestamps := block.Timestamps
if len(timestamps) != len(values) {
logger.Panicf("BUG: len(timestamps)=%d must match len(values)=%d", len(timestamps), len(values))
}
samplesLen := len(samples)
for j, value := range values {
samples = append(samples, prompb.Sample{
Value: value,
Timestamp: timestamps[j],
})
}
tssDst = append(tssDst, prompb.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[samplesLen:],
})
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
return nil
}
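
A minimal sketch of the parallel-slices invariant enforced above, using a simplified block type (the real stream.Block also carries a full MetricName with tags):

package main

import (
	"fmt"
	"log"
)

// A block in the native import format carries one metric name plus parallel
// values/timestamps slices; both slices must always have the same length.
type block struct {
	metric     string
	values     []float64
	timestamps []int64
}

func samplesFromBlock(b *block) []string {
	if len(b.timestamps) != len(b.values) {
		log.Panicf("BUG: len(timestamps)=%d must match len(values)=%d", len(b.timestamps), len(b.values))
	}
	out := make([]string, 0, len(b.values))
	for i, v := range b.values {
		out = append(out, fmt.Sprintf("%s %g @%d", b.metric, v, b.timestamps[i]))
	}
	return out
}

func main() {
	b := &block{metric: "foo", values: []float64{1, 2}, timestamps: []int64{1700000000000, 1700000001000}}
	for _, s := range samplesFromBlock(b) {
		fmt.Println(s)
	}
}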

View File

@@ -1,87 +0,0 @@
package newrelic
import (
"net/http"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/newrelic"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/newrelic/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="newrelic"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="newrelic"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="newrelic"}`)
)
// InsertHandlerForHTTP processes remote write for NewRelic POST /infra/v2/metrics/events/bulk request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(rows []newrelic.Row) error {
return insertRows(at, rows, extraLabels)
})
}
func insertRows(at *auth.Token, rows []newrelic.Row, extraLabels []prompb.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
samplesCount := 0
tssDst := ctx.WriteRequest.Timeseries[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
for i := range rows {
r := &rows[i]
tags := r.Tags
srcSamples := r.Samples
for j := range srcSamples {
s := &srcSamples[j]
labelsLen := len(labels)
labels = append(labels, prompb.Label{
Name: "__name__",
Value: bytesutil.ToUnsafeString(s.Name),
})
for k := range tags {
t := &tags[k]
labels = append(labels, prompb.Label{
Name: bytesutil.ToUnsafeString(t.Key),
Value: bytesutil.ToUnsafeString(t.Value),
})
}
// extraLabels must be appended before the TimeSeries below is sliced out:
// labels[labelsLen:] is taken at append time, so labels added afterwards
// would never reach the pushed series.
labels = append(labels, extraLabels...)
samples = append(samples, prompb.Sample{
Value: s.Value,
Timestamp: r.Timestamp,
})
tssDst = append(tssDst, prompb.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[len(samples)-1:],
})
}
samplesCount += len(srcSamples)
}
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
rowsInserted.Add(samplesCount) // count emitted samples, matching the tenant counter and histogram below
if at != nil {
rowsTenantInserted.Get(at).Add(samplesCount)
}
rowsPerInsert.Update(float64(samplesCount))
return nil
}
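
A minimal sketch of how one NewRelic event row fans out into several time series, assuming a simplified row type (field names here are illustrative, not the wire format):

package main

import "fmt"

type nrSample struct {
	Name  string
	Value float64
}

type nrRow struct {
	Tags      map[string]string
	Samples   []nrSample
	Timestamp int64
}

func main() {
	// One NewRelic event row with two numeric fields becomes two time series
	// sharing the same tags and timestamp.
	r := nrRow{
		Tags:      map[string]string{"hostname": "web-1"},
		Samples:   []nrSample{{"diskUsedPercent", 12.5}, {"memoryUsedPercent", 40}},
		Timestamp: 1700000000000,
	}
	for _, s := range r.Samples {
		fmt.Printf("%s{hostname=%q} %g @%d\n", s.Name, r.Tags["hostname"], s.Value, r.Timestamp)
	}
}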

View File

@@ -1,99 +0,0 @@
package opentelemetry
import (
"fmt"
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="opentelemetry"}`)
metadataInserted = metrics.NewCounter(`vmagent_metadata_inserted_total{type="opentelemetry"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="opentelemetry"}`)
metadataTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_metadata_total{type="opentelemetry"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="opentelemetry"}`)
)
// InsertHandler processes opentelemetry metrics.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
encoding := req.Header.Get("Content-Encoding")
var processBody func([]byte) ([]byte, error)
if req.Header.Get("Content-Type") == "application/json" {
if req.Header.Get("X-Amz-Firehose-Protocol-Version") != "" {
processBody = firehose.ProcessRequestBody
} else {
return fmt.Errorf("json encoding isn't supported for opentelemetry format. Use protobuf encoding")
}
}
return stream.ParseStream(req.Body, encoding, processBody, func(tss []prompb.TimeSeries, mms []prompb.MetricMetadata) error {
return insertRows(at, tss, mms, extraLabels)
})
}
func insertRows(at *auth.Token, tss []prompb.TimeSeries, mms []prompb.MetricMetadata, extraLabels []prompb.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
rowsTotal := 0
tssDst := ctx.WriteRequest.Timeseries[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
for i := range tss {
ts := &tss[i]
rowsTotal += len(ts.Samples)
labelsLen := len(labels)
labels = append(labels, ts.Labels...)
labels = append(labels, extraLabels...)
samplesLen := len(samples)
samples = append(samples, ts.Samples...)
tssDst = append(tssDst, prompb.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[samplesLen:],
})
}
ctx.WriteRequest.Timeseries = tssDst
var metadataTotal int
if promscrape.IsMetadataEnabled() {
var accountID, projectID uint32
if at != nil {
accountID = at.AccountID
projectID = at.ProjectID
for i := range mms {
mm := &mms[i]
mm.AccountID = accountID
mm.ProjectID = projectID
}
}
ctx.WriteRequest.Metadata = mms
metadataTotal = len(mms)
}
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
rowsInserted.Add(rowsTotal)
metadataInserted.Add(metadataTotal)
if at != nil {
rowsTenantInserted.Get(at).Add(rowsTotal)
metadataTenantInserted.Get(at).Add(metadataTotal)
}
rowsPerInsert.Update(float64(rowsTotal))
return nil
}
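
A minimal sketch of the content-type dispatch in InsertHandler above; chooseBodyProcessor is a hypothetical helper that only reproduces the header checks:

package main

import (
	"fmt"
	"net/http"
)

// chooseBodyProcessor mirrors the dispatch above: JSON is rejected unless the
// request carries the AWS Firehose protocol header, in which case the body
// must first be unwrapped from the Firehose envelope.
func chooseBodyProcessor(h http.Header) (needsFirehoseUnwrap bool, err error) {
	if h.Get("Content-Type") != "application/json" {
		return false, nil // protobuf-encoded OTLP goes straight to the parser
	}
	if h.Get("X-Amz-Firehose-Protocol-Version") != "" {
		return true, nil
	}
	return false, fmt.Errorf("json encoding isn't supported for opentelemetry format. Use protobuf encoding")
}

func main() {
	h := http.Header{}
	h.Set("Content-Type", "application/json")
	h.Set("X-Amz-Firehose-Protocol-Version", "1.0")
	fmt.Println(chooseBodyProcessor(h)) // true <nil>
}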

View File

@@ -5,9 +5,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdb/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
@@ -20,7 +20,9 @@ var (
//
// See http://opentsdb.net/docs/build/html/api_telnet/put.html
func InsertHandler(r io.Reader) error {
return stream.Parse(r, insertRows)
return writeconcurrencylimiter.Do(func() error {
return parser.ParseStream(r, insertRows)
})
}
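
For reference, the telnet lines consumed here look like `put sys.cpu.user 1700000000 42.5 host=web-1`. A minimal sketch, with parsePut standing in for the real parser (which also supports millisecond timestamps and stricter validation):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePut handles lines such as "put sys.cpu.user 1700000000 42.5 host=web-1".
func parsePut(line string) (metric string, ts int64, v float64, tags map[string]string, err error) {
	f := strings.Fields(line)
	if len(f) < 4 || f[0] != "put" {
		return "", 0, 0, nil, fmt.Errorf("unexpected put line %q", line)
	}
	if ts, err = strconv.ParseInt(f[2], 10, 64); err != nil {
		return "", 0, 0, nil, err
	}
	if v, err = strconv.ParseFloat(f[3], 64); err != nil {
		return "", 0, 0, nil, err
	}
	tags = make(map[string]string)
	for _, kv := range f[4:] {
		if k, val, ok := strings.Cut(kv, "="); ok {
			tags[k] = val
		}
	}
	return f[1], ts, v, tags, nil
}

func main() {
	fmt.Println(parsePut("put sys.cpu.user 1700000000 42.5 host=web-1"))
}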
func insertRows(rows []parser.Row) error {
@@ -33,22 +35,22 @@ func insertRows(rows []parser.Row) error {
for i := range rows {
r := &rows[i]
labelsLen := len(labels)
labels = append(labels, prompb.Label{
labels = append(labels, prompbmarshal.Label{
Name: "__name__",
Value: r.Metric,
})
for j := range r.Tags {
tag := &r.Tags[j]
labels = append(labels, prompb.Label{
labels = append(labels, prompbmarshal.Label{
Name: tag.Key,
Value: tag.Value,
})
}
samples = append(samples, prompb.Sample{
samples = append(samples, prompbmarshal.Sample{
Value: r.Value,
Timestamp: r.Timestamp,
})
tssDst = append(tssDst, prompb.TimeSeries{
tssDst = append(tssDst, prompbmarshal.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[len(samples)-1:],
})
@@ -56,9 +58,7 @@ func insertRows(rows []parser.Row) error {
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(nil, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
remotewrite.Push(&ctx.WriteRequest)
rowsInserted.Add(len(rows))
rowsPerInsert.Update(float64(len(rows)))
return nil

View File

@@ -5,11 +5,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
@@ -20,17 +18,13 @@ var (
// InsertHandler processes HTTP OpenTSDB put requests.
// See http://opentsdb.net/docs/build/html/api_http/put.html
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
return stream.Parse(req, func(rows []opentsdbhttp.Row) error {
return insertRows(at, rows, extraLabels)
func InsertHandler(req *http.Request) error {
return writeconcurrencylimiter.Do(func() error {
return parser.ParseStream(req, insertRows)
})
}
func insertRows(at *auth.Token, rows []opentsdbhttp.Row, extraLabels []prompb.Label) error {
func insertRows(rows []parser.Row) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
@@ -40,23 +34,22 @@ func insertRows(at *auth.Token, rows []opentsdbhttp.Row, extraLabels []prompb.La
for i := range rows {
r := &rows[i]
labelsLen := len(labels)
labels = append(labels, prompb.Label{
labels = append(labels, prompbmarshal.Label{
Name: "__name__",
Value: r.Metric,
})
for j := range r.Tags {
tag := &r.Tags[j]
labels = append(labels, prompb.Label{
labels = append(labels, prompbmarshal.Label{
Name: tag.Key,
Value: tag.Value,
})
}
labels = append(labels, extraLabels...)
samples = append(samples, prompb.Sample{
samples = append(samples, prompbmarshal.Sample{
Value: r.Value,
Timestamp: r.Timestamp,
})
tssDst = append(tssDst, prompb.TimeSeries{
tssDst = append(tssDst, prompbmarshal.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[len(samples)-1:],
})
@@ -64,9 +57,7 @@ func insertRows(at *auth.Token, rows []opentsdbhttp.Row, extraLabels []prompb.La
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
remotewrite.Push(&ctx.WriteRequest)
rowsInserted.Add(len(rows))
rowsPerInsert.Update(float64(len(rows)))
return nil
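In both versions `insertRows` carves each row's labels out of one shared slice (`labels[labelsLen:]`) rather than allocating per row. A self-contained sketch of the trick with simplified types (not the real parser structs); note that even if `append` reallocates the backing array, windows taken earlier keep pointing at the old array, whose contents remain valid:

```go
package main

import "fmt"

type Label struct{ Name, Value string }

type Row struct {
	Metric string
	Tags   map[string]string
}

func main() {
	rows := []Row{
		{Metric: "sys.cpu.user", Tags: map[string]string{"host": "web1"}},
		{Metric: "sys.cpu.sys", Tags: map[string]string{"host": "web2"}},
	}
	var labels []Label // shared backing array for all rows
	for _, r := range rows {
		labelsLen := len(labels)
		labels = append(labels, Label{Name: "__name__", Value: r.Metric})
		for k, v := range r.Tags {
			labels = append(labels, Label{Name: k, Value: v})
		}
		// Each series references only its own window of the shared slice.
		fmt.Println(labels[labelsLen:])
	}
}
```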


@@ -1,110 +0,0 @@
package prometheusimport
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="prometheus"}`)
metadataInserted = metrics.NewCounter(`vmagent_metadata_inserted_total{type="prometheus"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="prometheus"}`)
metadataTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_metadata_total{type="prometheus"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="prometheus"}`)
)
// InsertHandler processes `/api/v1/import/prometheus` request.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
defaultTimestamp, err := protoparserutil.GetTimestamp(req)
if err != nil {
return err
}
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, defaultTimestamp, encoding, true, promscrape.IsMetadataEnabled(), func(rows []prometheus.Row, mms []prometheus.Metadata) error {
return insertRows(at, rows, mms, extraLabels)
}, func(s string) {
httpserver.LogError(req, s)
})
}
func insertRows(at *auth.Token, rows []prometheus.Row, mms []prometheus.Metadata, extraLabels []prompb.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
tssDst := ctx.WriteRequest.Timeseries[:0]
mmsDst := ctx.WriteRequest.Metadata[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
for i := range rows {
r := &rows[i]
labelsLen := len(labels)
labels = append(labels, prompb.Label{
Name: "__name__",
Value: r.Metric,
})
for j := range r.Tags {
tag := &r.Tags[j]
labels = append(labels, prompb.Label{
Name: tag.Key,
Value: tag.Value,
})
}
labels = append(labels, extraLabels...)
samples = append(samples, prompb.Sample{
Value: r.Value,
Timestamp: r.Timestamp,
})
tssDst = append(tssDst, prompb.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[len(samples)-1:],
})
}
var accountID, projectID uint32
if at != nil {
accountID = at.AccountID
projectID = at.ProjectID
}
for i := range mms {
mm := &mms[i]
mmsDst = append(mmsDst, prompb.MetricMetadata{
MetricFamilyName: mm.Metric,
Help: mm.Help,
Type: mm.Type,
// there is no unit in Prometheus exposition formats
AccountID: accountID,
ProjectID: projectID,
})
}
ctx.WriteRequest.Timeseries = tssDst
ctx.WriteRequest.Metadata = mmsDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
rowsInserted.Add(len(rows))
metadataInserted.Add(len(mms))
if at != nil {
rowsTenantInserted.Get(at).Add(len(rows))
metadataTenantInserted.Get(at).Add(len(mms))
}
rowsPerInsert.Update(float64(len(rows)))
return nil
}
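The handler above counts ingested rows and metadata per tenant, keyed by the auth token's (AccountID, ProjectID) pair via `tenantmetrics.NewCounterMap`. A rough sketch of that shape with a plain mutex-guarded map (the real `tenantmetrics` package is more elaborate; this only shows the idea):

```go
package main

import (
	"fmt"
	"sync"
)

type tenantKey struct{ AccountID, ProjectID uint32 }

// counterMap lazily creates one counter per tenant, so ingestion stats
// can be exported with per-tenant labels.
type counterMap struct {
	mu sync.Mutex
	m  map[tenantKey]*int64
}

func (cm *counterMap) get(k tenantKey) *int64 {
	cm.mu.Lock()
	defer cm.mu.Unlock()
	if cm.m == nil {
		cm.m = make(map[tenantKey]*int64)
	}
	c, ok := cm.m[k]
	if !ok {
		c = new(int64)
		cm.m[k] = c
	}
	return c
}

func main() {
	var rowsTenantInserted counterMap
	*rowsTenantInserted.get(tenantKey{AccountID: 0, ProjectID: 0}) += 10
	*rowsTenantInserted.get(tenantKey{AccountID: 1, ProjectID: 0}) += 5
	fmt.Println(*rowsTenantInserted.get(tenantKey{AccountID: 0, ProjectID: 0}))
}
```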


@@ -1,60 +0,0 @@
package prometheusimport
import (
"bytes"
"flag"
"log"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var (
srv *httptest.Server
testOutput *bytes.Buffer
)
func TestInsertHandler(t *testing.T) {
setUp()
defer tearDown()
req := httptest.NewRequest(http.MethodPost, "/insert/0/api/v1/import/prometheus", bytes.NewBufferString(`{"foo":"bar"}
go_memstats_alloc_bytes_total 1`))
if err := InsertHandler(nil, req); err != nil {
t.Fatalf("unexpected error %s", err)
}
expectedMsg := "cannot unmarshal Prometheus line"
if !strings.Contains(testOutput.String(), expectedMsg) {
t.Fatalf("output %q should contain %q", testOutput.String(), expectedMsg)
}
}
func setUp() {
srv = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(204)
}))
flag.Parse()
remoteWriteFlag := "remoteWrite.url"
if err := flag.Lookup(remoteWriteFlag).Value.Set(srv.URL); err != nil {
log.Fatalf("unable to set %q with value %q, err: %v", remoteWriteFlag, srv.URL, err)
}
logger.Init()
protoparserutil.StartUnmarshalWorkers()
remotewrite.Init()
testOutput = &bytes.Buffer{}
logger.SetOutputForTests(testOutput)
}
func tearDown() {
protoparserutil.StopUnmarshalWorkers()
srv.Close()
logger.ResetOutputForTest()
tmpDataDir := flag.Lookup("remoteWrite.tmpDataPath").Value.String()
fs.MustRemoveDir(tmpDataDir)
}


@@ -5,105 +5,63 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/promremotewrite/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/promremotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="promremotewrite"}`)
metadataInserted = metrics.NewCounter(`vmagent_metadata_inserted_total{type="promremotewrite"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="promremotewrite"}`)
metadataTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_metadata_total{type="promremotewrite"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="promremotewrite"}`)
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="promremotewrite"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="promremotewrite"}`)
)
// InsertHandler processes remote write for prometheus.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isVMRemoteWrite := req.Header.Get("Content-Encoding") == "zstd"
return stream.Parse(req.Body, isVMRemoteWrite, func(tss []prompb.TimeSeries, mms []prompb.MetricMetadata) error {
return insertRows(at, tss, mms, extraLabels)
func InsertHandler(req *http.Request) error {
return writeconcurrencylimiter.Do(func() error {
return parser.ParseStream(req, insertRows)
})
}
func insertRows(at *auth.Token, timeseries []prompb.TimeSeries, mms []prompb.MetricMetadata, extraLabels []prompb.Label) error {
func insertRows(timeseries []prompb.TimeSeries) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
rowsTotal := 0
tssDst := ctx.WriteRequest.Timeseries[:0]
mmsDst := ctx.WriteRequest.Metadata[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
for i := range timeseries {
ts := &timeseries[i]
rowsTotal += len(ts.Samples)
labelsLen := len(labels)
for i := range ts.Labels {
label := &ts.Labels[i]
labels = append(labels, prompb.Label{
Name: label.Name,
Value: label.Value,
labels = append(labels, prompbmarshal.Label{
Name: bytesutil.ToUnsafeString(label.Name),
Value: bytesutil.ToUnsafeString(label.Value),
})
}
labels = append(labels, extraLabels...)
samplesLen := len(samples)
for i := range ts.Samples {
sample := &ts.Samples[i]
samples = append(samples, prompb.Sample{
samples = append(samples, prompbmarshal.Sample{
Value: sample.Value,
Timestamp: sample.Timestamp,
})
}
tssDst = append(tssDst, prompb.TimeSeries{
tssDst = append(tssDst, prompbmarshal.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[samplesLen:],
})
rowsTotal += len(ts.Samples)
}
ctx.WriteRequest.Timeseries = tssDst
var metadataTotal int
if promscrape.IsMetadataEnabled() {
var accountID, projectID uint32
if at != nil {
accountID = at.AccountID
projectID = at.ProjectID
}
for i := range mms {
mm := &mms[i]
mmsDst = append(mmsDst, prompb.MetricMetadata{
MetricFamilyName: mm.MetricFamilyName,
Help: mm.Help,
Type: mm.Type,
Unit: mm.Unit,
AccountID: accountID,
ProjectID: projectID,
})
}
ctx.WriteRequest.Metadata = mmsDst
metadataTotal = len(mms)
}
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
remotewrite.Push(&ctx.WriteRequest)
rowsInserted.Add(rowsTotal)
if at != nil {
rowsTenantInserted.Get(at).Add(rowsTotal)
metadataTenantInserted.Get(at).Add(metadataTotal)
}
metadataInserted.Add(metadataTotal)
rowsPerInsert.Update(float64(rowsTotal))
return nil
}
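One side of this hunk converts label bytes with `bytesutil.ToUnsafeString`, which reinterprets a `[]byte` as a `string` without copying; this avoids an allocation per label but is only safe while the underlying bytes are not mutated or reused. A hedged sketch of the technique with the standard `unsafe` package (Go 1.20+; not the actual bytesutil implementation):

```go
package main

import (
	"fmt"
	"unsafe"
)

// toUnsafeString returns a string that shares memory with b.
// The caller must guarantee b is not modified while the string lives,
// otherwise the "immutable" string silently changes.
func toUnsafeString(b []byte) string {
	return unsafe.String(unsafe.SliceData(b), len(b))
}

func main() {
	buf := []byte("instance")
	s := toUnsafeString(buf) // no allocation, no copy
	fmt.Println(s)

	buf[0] = 'X' // mutating the slice is visible through s
	fmt.Println(s)
}
```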


@@ -1,210 +1,140 @@
package remotewrite
import (
"bytes"
"errors"
"crypto/tls"
"encoding/base64"
"flag"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/awsapi"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding/zstd"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
"github.com/VictoriaMetrics/fasthttp"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
)
var (
forcePromProto = flagutil.NewArrayBool("remoteWrite.forcePromProto", "Whether to force Prometheus remote write protocol for sending data "+
"to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol")
forceVMProto = flagutil.NewArrayBool("remoteWrite.forceVMProto", "Whether to force VictoriaMetrics remote write protocol for sending data "+
"to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol")
sendTimeout = flag.Duration("remoteWrite.sendTimeout", time.Minute, "Timeout for sending a single block of data to -remoteWrite.url")
rateLimit = flagutil.NewArrayInt("remoteWrite.rateLimit", 0, "Optional rate limit in bytes per second for data sent to the corresponding -remoteWrite.url. "+
"By default, the rate limit is disabled. It can be useful for limiting load on remote storage when big amounts of buffered data "+
"is sent after temporary unavailability of the remote storage. See also -maxIngestionRate")
sendTimeout = flagutil.NewArrayDuration("remoteWrite.sendTimeout", time.Minute, "Timeout for sending a single block of data to the corresponding -remoteWrite.url")
retryMinInterval = flagutil.NewArrayDuration("remoteWrite.retryMinInterval", time.Second, "The minimum delay between retry attempts to send a block of data to the corresponding -remoteWrite.url. Every next retry attempt will double the delay to prevent hammering of remote database. See also -remoteWrite.retryMaxInterval")
// Deprecated: use -remoteWrite.retryMaxInterval instead.
retryMaxTime = flagutil.NewArrayDuration("remoteWrite.retryMaxTime", time.Minute, "The max time spent on retry attempts to send a block of data to the corresponding -remoteWrite.url. This flag is deprecated, use -remoteWrite.retryMaxInterval instead")
retryMaxInterval = flagutil.NewArrayDuration("remoteWrite.retryMaxInterval", time.Minute, "The maximum delay between retry attempts to send a block of data to the corresponding -remoteWrite.url. The delay doubles with each retry until this maximum is reached, after which it remains constant. See also -remoteWrite.retryMinInterval")
proxyURL = flagutil.NewArrayString("remoteWrite.proxyURL", "Optional proxy URL for writing data to the corresponding -remoteWrite.url. "+
"Supported proxies: http, https, socks5. Example: -remoteWrite.proxyURL=socks5://proxy:1234")
tlsInsecureSkipVerify = flag.Bool("remoteWrite.tlsInsecureSkipVerify", false, "Whether to skip tls verification when connecting to -remoteWrite.url")
tlsCertFile = flagutil.NewArray("remoteWrite.tlsCertFile", "Optional path to client-side TLS certificate file to use when connecting to -remoteWrite.url. "+
"If multiple args are set, then they are applied independently for the corresponding -remoteWrite.url")
tlsKeyFile = flagutil.NewArray("remoteWrite.tlsKeyFile", "Optional path to client-side TLS certificate key to use when connecting to -remoteWrite.url. "+
"If multiple args are set, then they are applied independently for the corresponding -remoteWrite.url")
tlsCAFile = flagutil.NewArray("remoteWrite.tlsCAFile", "Optional path to TLS CA file to use for verifying connections to -remoteWrite.url. "+
"By default system CA is used. If multiple args are set, then they are applied independently for the corresponding -remoteWrite.url")
tlsServerName = flagutil.NewArray("remoteWrite.tlsServerName", "Optional TLS server name to use for connections to -remoteWrite.url. "+
"By default the server name from -remoteWrite.url is used. If multiple args are set, then they are applied independently for the corresponding -remoteWrite.url")
tlsHandshakeTimeout = flagutil.NewArrayDuration("remoteWrite.tlsHandshakeTimeout", 20*time.Second, "The timeout for establishing tls connections to the corresponding -remoteWrite.url")
tlsInsecureSkipVerify = flagutil.NewArrayBool("remoteWrite.tlsInsecureSkipVerify", "Whether to skip tls verification when connecting to the corresponding -remoteWrite.url")
tlsCertFile = flagutil.NewArrayString("remoteWrite.tlsCertFile", "Optional path to client-side TLS certificate file to use when connecting "+
"to the corresponding -remoteWrite.url")
tlsKeyFile = flagutil.NewArrayString("remoteWrite.tlsKeyFile", "Optional path to client-side TLS certificate key to use when connecting to the corresponding -remoteWrite.url")
tlsCAFile = flagutil.NewArrayString("remoteWrite.tlsCAFile", "Optional path to TLS CA file to use for verifying connections to the corresponding -remoteWrite.url. "+
"By default, system CA is used")
tlsServerName = flagutil.NewArrayString("remoteWrite.tlsServerName", "Optional TLS server name to use for connections to the corresponding -remoteWrite.url. "+
"By default, the server name from -remoteWrite.url is used")
headers = flagutil.NewArrayString("remoteWrite.headers", "Optional HTTP headers to send with each request to the corresponding -remoteWrite.url. "+
"For example, -remoteWrite.headers='My-Auth:foobar' would send 'My-Auth: foobar' HTTP header with every request to the corresponding -remoteWrite.url. "+
"Multiple headers must be delimited by '^^': -remoteWrite.headers='header1:value1^^header2:value2'")
basicAuthUsername = flagutil.NewArrayString("remoteWrite.basicAuth.username", "Optional basic auth username to use for the corresponding -remoteWrite.url")
basicAuthPassword = flagutil.NewArrayString("remoteWrite.basicAuth.password", "Optional basic auth password to use for the corresponding -remoteWrite.url")
basicAuthPasswordFile = flagutil.NewArrayString("remoteWrite.basicAuth.passwordFile", "Optional path to basic auth password to use for the corresponding -remoteWrite.url. "+
"The file is re-read every second")
bearerToken = flagutil.NewArrayString("remoteWrite.bearerToken", "Optional bearer auth token to use for the corresponding -remoteWrite.url")
bearerTokenFile = flagutil.NewArrayString("remoteWrite.bearerTokenFile", "Optional path to bearer token file to use for the corresponding -remoteWrite.url. "+
"The token is re-read from the file every second")
oauth2ClientID = flagutil.NewArrayString("remoteWrite.oauth2.clientID", "Optional OAuth2 clientID to use for the corresponding -remoteWrite.url")
oauth2ClientSecret = flagutil.NewArrayString("remoteWrite.oauth2.clientSecret", "Optional OAuth2 clientSecret to use for the corresponding -remoteWrite.url")
oauth2ClientSecretFile = flagutil.NewArrayString("remoteWrite.oauth2.clientSecretFile", "Optional OAuth2 clientSecretFile to use for the corresponding -remoteWrite.url")
oauth2EndpointParams = flagutil.NewArrayString("remoteWrite.oauth2.endpointParams", "Optional OAuth2 endpoint parameters to use for the corresponding -remoteWrite.url . "+
`The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL = flagutil.NewArrayString("remoteWrite.oauth2.tokenUrl", "Optional OAuth2 tokenURL to use for the corresponding -remoteWrite.url")
oauth2Scopes = flagutil.NewArrayString("remoteWrite.oauth2.scopes", "Optional OAuth2 scopes to use for the corresponding -remoteWrite.url. Scopes must be delimited by ';'")
awsUseSigv4 = flagutil.NewArrayBool("remoteWrite.aws.useSigv4", "Enables SigV4 request signing for the corresponding -remoteWrite.url. "+
"It is expected that other -remoteWrite.aws.* command-line flags are set if sigv4 request signing is enabled")
awsEC2Endpoint = flagutil.NewArrayString("remoteWrite.aws.ec2Endpoint", "Optional AWS EC2 API endpoint to use for the corresponding -remoteWrite.url if -remoteWrite.aws.useSigv4 is set")
awsSTSEndpoint = flagutil.NewArrayString("remoteWrite.aws.stsEndpoint", "Optional AWS STS API endpoint to use for the corresponding -remoteWrite.url if -remoteWrite.aws.useSigv4 is set")
awsRegion = flagutil.NewArrayString("remoteWrite.aws.region", "Optional AWS region to use for the corresponding -remoteWrite.url if -remoteWrite.aws.useSigv4 is set")
awsRoleARN = flagutil.NewArrayString("remoteWrite.aws.roleARN", "Optional AWS roleARN to use for the corresponding -remoteWrite.url if -remoteWrite.aws.useSigv4 is set")
awsAccessKey = flagutil.NewArrayString("remoteWrite.aws.accessKey", "Optional AWS AccessKey to use for the corresponding -remoteWrite.url if -remoteWrite.aws.useSigv4 is set")
awsService = flagutil.NewArrayString("remoteWrite.aws.service", "Optional AWS Service to use for the corresponding -remoteWrite.url if -remoteWrite.aws.useSigv4 is set. "+
"Defaults to \"aps\"")
awsSecretKey = flagutil.NewArrayString("remoteWrite.aws.secretKey", "Optional AWS SecretKey to use for the corresponding -remoteWrite.url if -remoteWrite.aws.useSigv4 is set")
basicAuthUsername = flagutil.NewArray("remoteWrite.basicAuth.username", "Optional basic auth username to use for -remoteWrite.url. "+
"If multiple args are set, then they are applied independently for the corresponding -remoteWrite.url")
basicAuthPassword = flagutil.NewArray("remoteWrite.basicAuth.password", "Optional basic auth password to use for -remoteWrite.url. "+
"If multiple args are set, then they are applied independently for the corresponding -remoteWrite.url")
bearerToken = flagutil.NewArray("remoteWrite.bearerToken", "Optional bearer auth token to use for -remoteWrite.url. "+
"If multiple args are set, then they are applied independently for the corresponding -remoteWrite.url")
)
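Most of the flags above are `flagutil` array flags: each may be repeated, and the i-th value is applied to the i-th `-remoteWrite.url`, with a single value shared across all URLs. A toy approximation of that `GetOptionalArg` lookup rule (assumed semantics, not the actual flagutil code):

```go
package main

import "fmt"

// getOptionalArg mimics the array-flag lookup: per-index value when
// provided, otherwise the single shared value, otherwise the default.
func getOptionalArg(values []string, argIdx int, def string) string {
	if argIdx < len(values) {
		return values[argIdx]
	}
	if len(values) == 1 {
		return values[0]
	}
	return def
}

func main() {
	users := []string{"alice"} // one value shared by every -remoteWrite.url
	fmt.Println(getOptionalArg(users, 0, ""))
	fmt.Println(getOptionalArg(users, 2, "")) // falls back to the single value
}
```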
type client struct {
sanitizedURL string
urlLabelValue string
remoteWriteURL string
host string
requestURI string
authHeader string
fq *persistentqueue.FastQueue
hc *fasthttp.HostClient
// Whether to use VictoriaMetrics remote write protocol for sending the data to remoteWriteURL
useVMProto atomic.Bool
canDowngradeVMProto atomic.Bool
fq *persistentqueue.FastQueue
hc *http.Client
retryMinInterval time.Duration
retryMaxInterval time.Duration
sendBlock func(block []byte) bool
authCfg *promauth.Config
awsCfg *awsapi.Config
rl *ratelimiter.RateLimiter
bytesSent *metrics.Counter
blocksSent *metrics.Counter
requestDuration *metrics.Histogram
requestsOKCount *metrics.Counter
errorsCount *metrics.Counter
packetsDropped *metrics.Counter
rateLimit *metrics.Gauge
retriesCount *metrics.Counter
sendDuration *metrics.FloatCounter
wg sync.WaitGroup
stopCh chan struct{}
}
func newHTTPClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persistentqueue.FastQueue, concurrency int) *client {
authCfg, err := getAuthConfig(argIdx)
if err != nil {
logger.Fatalf("cannot initialize auth config for -remoteWrite.url=%q: %s", remoteWriteURL, err)
func newClient(argIdx int, remoteWriteURL, urlLabelValue string, fq *persistentqueue.FastQueue, concurrency int) *client {
authHeader := ""
username := basicAuthUsername.GetOptionalArg(argIdx)
password := basicAuthPassword.GetOptionalArg(argIdx)
if len(username) > 0 || len(password) > 0 {
// See https://en.wikipedia.org/wiki/Basic_access_authentication
token := username + ":" + password
token64 := base64.StdEncoding.EncodeToString([]byte(token))
authHeader = "Basic " + token64
}
awsCfg, err := getAWSAPIConfig(argIdx)
if err != nil {
logger.Fatalf("cannot initialize AWS Config for -remoteWrite.url=%q: %s", remoteWriteURL, err)
}
tr := httputil.NewTransport(false, "vmagent_remotewrite")
tr.TLSHandshakeTimeout = tlsHandshakeTimeout.GetOptionalArg(argIdx)
tr.MaxConnsPerHost = 2 * concurrency
tr.MaxIdleConnsPerHost = 2 * concurrency
tr.IdleConnTimeout = time.Minute
tr.WriteBufferSize = 64 * 1024
pURL := proxyURL.GetOptionalArg(argIdx)
if len(pURL) > 0 {
if !strings.Contains(pURL, "://") {
logger.Fatalf("cannot parse -remoteWrite.proxyURL=%q: it must start with `http://`, `https://` or `socks5://`", pURL)
token := bearerToken.GetOptionalArg(argIdx)
if len(token) > 0 {
if authHeader != "" {
logger.Panicf("FATAL: `-remoteWrite.bearerToken`=%q cannot be set when `-remoteWrite.basicAuth.*` flags are set", token)
}
pu, err := url.Parse(pURL)
authHeader = "Bearer " + token
}
readTimeout := *sendTimeout
if readTimeout <= 0 {
readTimeout = time.Minute
}
writeTimeout := readTimeout
var u fasthttp.URI
u.Update(remoteWriteURL)
scheme := string(u.Scheme())
switch scheme {
case "http", "https":
default:
logger.Panicf("FATAL: unsupported scheme in -remoteWrite.url=%q: %q. It must be http or https", remoteWriteURL, scheme)
}
host := string(u.Host())
if len(host) == 0 {
logger.Panicf("FATAL: invalid -remoteWrite.url=%q: host cannot be empty. Make sure the url looks like `http://host:port/path`", remoteWriteURL)
}
requestURI := string(u.RequestURI())
isTLS := scheme == "https"
var tlsCfg *tls.Config
if isTLS {
var err error
tlsCfg, err = getTLSConfig(argIdx)
if err != nil {
logger.Fatalf("cannot parse -remoteWrite.proxyURL=%q: %s", pURL, err)
logger.Panicf("FATAL: cannot initialize TLS config: %s", err)
}
tr.Proxy = http.ProxyURL(pu)
}
hc := &http.Client{
Transport: authCfg.NewRoundTripper(tr),
Timeout: sendTimeout.GetOptionalArg(argIdx),
if !strings.Contains(host, ":") {
if isTLS {
host += ":443"
} else {
host += ":80"
}
}
retryMaxIntervalFlag := retryMaxTime
if retryMaxInterval.String() != "" {
retryMaxIntervalFlag = retryMaxInterval
maxConns := 2 * concurrency
hc := &fasthttp.HostClient{
Addr: host,
Name: "vmagent",
Dial: statDial,
IsTLS: isTLS,
TLSConfig: tlsCfg,
MaxConns: maxConns,
MaxIdleConnDuration: 10 * readTimeout,
ReadTimeout: readTimeout,
WriteTimeout: writeTimeout,
MaxResponseBodySize: 1024 * 1024,
}
c := &client{
sanitizedURL: sanitizedURL,
remoteWriteURL: remoteWriteURL,
authCfg: authCfg,
awsCfg: awsCfg,
fq: fq,
hc: hc,
retryMinInterval: retryMinInterval.GetOptionalArg(argIdx),
retryMaxInterval: retryMaxIntervalFlag.GetOptionalArg(argIdx),
stopCh: make(chan struct{}),
urlLabelValue: urlLabelValue,
remoteWriteURL: remoteWriteURL,
host: host,
requestURI: requestURI,
authHeader: authHeader,
fq: fq,
hc: hc,
stopCh: make(chan struct{}),
}
c.sendBlock = c.sendBlockHTTP
useVMProto := forceVMProto.GetOptionalArg(argIdx)
usePromProto := forcePromProto.GetOptionalArg(argIdx)
if useVMProto && usePromProto {
logger.Fatalf("-remoteWrite.useVMProto and -remoteWrite.usePromProto cannot be set simultaneously for -remoteWrite.url=%s", sanitizedURL)
}
if !useVMProto && !usePromProto {
// The VM protocol could be downgraded later at runtime if unsupported media type response status is received.
useVMProto = true
c.canDowngradeVMProto.Store(true)
}
c.useVMProto.Store(useVMProto)
return c
}
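The older `newClient` assembles the `Authorization` header by hand, per the basic access authentication scheme referenced in the comment. The encoding it performs, extracted into a runnable snippet:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// basicAuthHeader reproduces the header construction from newClient:
// base64("user:pass") prefixed with "Basic ".
func basicAuthHeader(username, password string) string {
	token := username + ":" + password
	return "Basic " + base64.StdEncoding.EncodeToString([]byte(token))
}

func main() {
	fmt.Println(basicAuthHeader("vmagent", "secret"))
	// Output: Basic dm1hZ2VudDpzZWNyZXQ=
}
```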
func (c *client) init(argIdx, concurrency int, sanitizedURL string) {
limitReached := metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_rate_limit_reached_total{url=%q}`, c.sanitizedURL))
if bytesPerSec := rateLimit.GetOptionalArg(argIdx); bytesPerSec > 0 {
logger.Infof("applying %d bytes per second rate limit for -remoteWrite.url=%q", bytesPerSec, sanitizedURL)
c.rl = ratelimiter.New(int64(bytesPerSec), limitReached, c.stopCh)
}
c.bytesSent = metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_bytes_sent_total{url=%q}`, c.sanitizedURL))
c.blocksSent = metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_blocks_sent_total{url=%q}`, c.sanitizedURL))
c.rateLimit = metrics.GetOrCreateGauge(fmt.Sprintf(`vmagent_remotewrite_rate_limit{url=%q}`, c.sanitizedURL), func() float64 {
return float64(rateLimit.GetOptionalArg(argIdx))
})
c.requestDuration = metrics.GetOrCreateHistogram(fmt.Sprintf(`vmagent_remotewrite_duration_seconds{url=%q}`, c.sanitizedURL))
c.requestsOKCount = metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_requests_total{url=%q, status_code="2XX"}`, c.sanitizedURL))
c.errorsCount = metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_errors_total{url=%q}`, c.sanitizedURL))
c.packetsDropped = metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_packets_dropped_total{url=%q}`, c.sanitizedURL))
c.retriesCount = metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_retries_count_total{url=%q}`, c.sanitizedURL))
c.sendDuration = metrics.GetOrCreateFloatCounter(fmt.Sprintf(`vmagent_remotewrite_send_duration_seconds_total{url=%q}`, c.sanitizedURL))
metrics.GetOrCreateGauge(fmt.Sprintf(`vmagent_remotewrite_queues{url=%q}`, c.sanitizedURL), func() float64 {
return float64(*queues)
})
c.requestDuration = metrics.GetOrCreateHistogram(fmt.Sprintf(`vmagent_remotewrite_duration_seconds{url=%q}`, c.urlLabelValue))
c.requestsOKCount = metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_requests_total{url=%q, status_code="2XX"}`, c.urlLabelValue))
c.errorsCount = metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_errors_total{url=%q}`, c.urlLabelValue))
c.retriesCount = metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_retries_count_total{url=%q}`, c.urlLabelValue))
for i := 0; i < concurrency; i++ {
c.wg.Add(1)
go func() {
@@ -212,389 +142,133 @@ func (c *client) init(argIdx, concurrency int, sanitizedURL string) {
c.runWorker()
}()
}
logger.Infof("initialized client for -remoteWrite.url=%q", c.sanitizedURL)
logger.Infof("initialized client for -remoteWrite.url=%q", c.remoteWriteURL)
return c
}
func (c *client) MustStop() {
close(c.stopCh)
c.wg.Wait()
logger.Infof("stopped client for -remoteWrite.url=%q", c.sanitizedURL)
logger.Infof("stopped client for -remoteWrite.url=%q", c.remoteWriteURL)
}
func getAuthConfig(argIdx int) (*promauth.Config, error) {
headersValue := headers.GetOptionalArg(argIdx)
var hdrs []string
if headersValue != "" {
hdrs = strings.Split(headersValue, "^^")
}
username := basicAuthUsername.GetOptionalArg(argIdx)
password := basicAuthPassword.GetOptionalArg(argIdx)
passwordFile := basicAuthPasswordFile.GetOptionalArg(argIdx)
var basicAuthCfg *promauth.BasicAuthConfig
if username != "" || password != "" || passwordFile != "" {
basicAuthCfg = &promauth.BasicAuthConfig{
Username: username,
Password: promauth.NewSecret(password),
PasswordFile: passwordFile,
}
}
token := bearerToken.GetOptionalArg(argIdx)
tokenFile := bearerTokenFile.GetOptionalArg(argIdx)
var oauth2Cfg *promauth.OAuth2Config
clientSecret := oauth2ClientSecret.GetOptionalArg(argIdx)
clientSecretFile := oauth2ClientSecretFile.GetOptionalArg(argIdx)
if clientSecretFile != "" || clientSecret != "" {
endpointParamsJSON := oauth2EndpointParams.GetOptionalArg(argIdx)
endpointParams, err := flagutil.ParseJSONMap(endpointParamsJSON)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -remoteWrite.oauth2.endpointParams=%s: %w", endpointParamsJSON, err)
}
oauth2Cfg = &promauth.OAuth2Config{
ClientID: oauth2ClientID.GetOptionalArg(argIdx),
ClientSecret: promauth.NewSecret(clientSecret),
ClientSecretFile: clientSecretFile,
EndpointParams: endpointParams,
TokenURL: oauth2TokenURL.GetOptionalArg(argIdx),
Scopes: strings.Split(oauth2Scopes.GetOptionalArg(argIdx), ";"),
}
}
tlsCfg := &promauth.TLSConfig{
func getTLSConfig(argIdx int) (*tls.Config, error) {
tlsConfig := &promauth.TLSConfig{
CAFile: tlsCAFile.GetOptionalArg(argIdx),
CertFile: tlsCertFile.GetOptionalArg(argIdx),
KeyFile: tlsKeyFile.GetOptionalArg(argIdx),
ServerName: tlsServerName.GetOptionalArg(argIdx),
InsecureSkipVerify: tlsInsecureSkipVerify.GetOptionalArg(argIdx),
InsecureSkipVerify: *tlsInsecureSkipVerify,
}
opts := &promauth.Options{
BasicAuth: basicAuthCfg,
BearerToken: token,
BearerTokenFile: tokenFile,
OAuth2: oauth2Cfg,
TLSConfig: tlsCfg,
Headers: hdrs,
}
authCfg, err := opts.NewConfig()
cfg, err := promauth.NewConfig(".", nil, "", "", tlsConfig)
if err != nil {
return nil, fmt.Errorf("cannot populate auth config for remoteWrite idx: %d, err: %w", argIdx, err)
return nil, fmt.Errorf("cannot populate TLS config: %s", err)
}
return authCfg, nil
}
func getAWSAPIConfig(argIdx int) (*awsapi.Config, error) {
if !awsUseSigv4.GetOptionalArg(argIdx) {
return nil, nil
}
ec2Endpoint := awsEC2Endpoint.GetOptionalArg(argIdx)
stsEndpoint := awsSTSEndpoint.GetOptionalArg(argIdx)
region := awsRegion.GetOptionalArg(argIdx)
roleARN := awsRoleARN.GetOptionalArg(argIdx)
accessKey := awsAccessKey.GetOptionalArg(argIdx)
secretKey := awsSecretKey.GetOptionalArg(argIdx)
service := awsService.GetOptionalArg(argIdx)
cfg, err := awsapi.NewConfig(ec2Endpoint, stsEndpoint, region, roleARN, accessKey, secretKey, service)
if err != nil {
return nil, err
}
return cfg, nil
tlsCfg := cfg.NewTLSConfig()
return tlsCfg, nil
}
func (c *client) runWorker() {
var ok bool
var block []byte
ch := make(chan bool, 1)
ch := make(chan struct{})
for {
block, ok = c.fq.MustReadBlock(block[:0])
if !ok {
return
}
if len(block) == 0 {
// skip sending empty data blocks
// see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6241
continue
}
go func() {
startTime := time.Now()
ch <- c.sendBlock(block)
c.sendDuration.Add(time.Since(startTime).Seconds())
c.sendBlock(block)
ch <- struct{}{}
}()
select {
case ok := <-ch:
if ok {
// The block has been sent successfully
continue
}
// Return unsent block to the queue.
c.fq.MustWriteBlockIgnoreDisabledPQ(block)
return
case <-ch:
// The block has been sent successfully
continue
case <-c.stopCh:
// c must be stopped. Wait for a while in the hope the block will be sent.
graceDuration := 5 * time.Second
select {
case ok := <-ch:
if !ok {
// Return unsent block to the queue.
c.fq.MustWriteBlockIgnoreDisabledPQ(block)
}
case <-ch:
// The block has been sent successfully.
case <-time.After(graceDuration):
// Return unsent block to the queue.
c.fq.MustWriteBlockIgnoreDisabledPQ(block)
logger.Errorf("couldn't sent block with size %d bytes to %q in %.3f seconds during shutdown; dropping it",
len(block), c.remoteWriteURL, graceDuration.Seconds())
}
return
}
}
}
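Both versions of `runWorker` share a shape: send each block in a goroutine, then select on completion versus `stopCh`, and on shutdown wait a bounded grace period before giving the block back to the queue. A condensed sketch of that select-with-grace-period pattern (error handling and queue mechanics elided):

```go
package main

import (
	"fmt"
	"time"
)

// sendWithGrace starts the send in a goroutine and, once stop is
// requested, waits at most grace for it to finish before giving up.
func sendWithGrace(send func(), stopCh <-chan struct{}, grace time.Duration) bool {
	done := make(chan struct{})
	go func() {
		send()
		close(done)
	}()
	select {
	case <-done:
		return true // sent before shutdown
	case <-stopCh:
		select {
		case <-done:
			return true // finished within the grace period
		case <-time.After(grace):
			return false // caller re-queues the block
		}
	}
}

func main() {
	stopCh := make(chan struct{})
	close(stopCh) // simulate immediate shutdown
	ok := sendWithGrace(func() { time.Sleep(50 * time.Millisecond) }, stopCh, time.Second)
	fmt.Println("sent:", ok)
}
```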
func (c *client) doRequest(url string, body []byte) (*http.Response, error) {
req, err := c.newRequest(url, body)
if err != nil {
return nil, err
func (c *client) sendBlock(block []byte) {
req := fasthttp.AcquireRequest()
req.SetRequestURI(c.requestURI)
req.SetHost(c.host)
req.Header.SetMethod("POST")
req.Header.Add("Content-Type", "application/x-protobuf")
req.Header.Add("Content-Encoding", "snappy")
req.Header.Add("X-Prometheus-Remote-Write-Version", "0.1.0")
if c.authHeader != "" {
req.Header.Set("Authorization", c.authHeader)
}
resp, err := c.hc.Do(req)
if err == nil {
return resp, nil
}
if !errors.Is(err, io.EOF) && !errors.Is(err, io.ErrUnexpectedEOF) {
return nil, err
}
// It is likely the connection became stale or timed out during the first request.
// Make another attempt in the hope the request will succeed.
// If not, the error should be handled by the caller as usual.
// This should help with https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4139
req, err = c.newRequest(url, body)
if err != nil {
return nil, fmt.Errorf("second attempt: %w", err)
}
resp, err = c.hc.Do(req)
if err != nil {
return nil, fmt.Errorf("second attempt: %w", err)
}
return resp, nil
}
req.SetBody(block)
func (c *client) newRequest(url string, body []byte) (*http.Request, error) {
reqBody := bytes.NewBuffer(body)
req, err := http.NewRequest(http.MethodPost, url, reqBody)
if err != nil {
logger.Panicf("BUG: unexpected error from http.NewRequest(%q): %s", url, err)
}
err = c.authCfg.SetHeaders(req, true)
if err != nil {
return nil, err
}
h := req.Header
h.Set("User-Agent", "vmagent")
h.Set("Content-Type", "application/x-protobuf")
if encoding.IsZstd(body) {
h.Set("Content-Encoding", "zstd")
h.Set("X-VictoriaMetrics-Remote-Write-Version", "1")
} else {
h.Set("Content-Encoding", "snappy")
h.Set("X-Prometheus-Remote-Write-Version", "0.1.0")
}
if c.awsCfg != nil {
sigv4Hash := awsapi.HashHex(body)
if err := c.awsCfg.SignRequest(req, sigv4Hash); err != nil {
return nil, fmt.Errorf("cannot sign remoteWrite request with AWS sigv4: %w", err)
}
}
return req, nil
}
// sendBlockHTTP sends the given block to c.remoteWriteURL.
//
// The function returns false only if c.stopCh is closed.
// Otherwise, it tries sending the block to remote storage indefinitely.
func (c *client) sendBlockHTTP(block []byte) bool {
c.rl.Register(len(block))
maxRetryDuration := timeutil.AddJitterToDuration(c.retryMaxInterval)
retryDuration := timeutil.AddJitterToDuration(c.retryMinInterval)
retriesCount := 0
retryDuration := time.Second
resp := fasthttp.AcquireResponse()
again:
select {
case <-c.stopCh:
fasthttp.ReleaseRequest(req)
fasthttp.ReleaseResponse(resp)
return
default:
}
startTime := time.Now()
resp, err := c.doRequest(c.remoteWriteURL, block)
err := doRequestWithPossibleRetry(c.hc, req, resp)
c.requestDuration.UpdateDuration(startTime)
if err != nil {
c.errorsCount.Inc()
retryDuration *= 2
if retryDuration > maxRetryDuration {
retryDuration = maxRetryDuration
}
remoteWriteRetryLogger.Warnf("couldn't send a block with size %d bytes to %q: %s; re-sending the block in %.3f seconds",
len(block), c.sanitizedURL, err, retryDuration.Seconds())
t := timerpool.Get(retryDuration)
select {
case <-c.stopCh:
timerpool.Put(t)
return false
case <-t.C:
timerpool.Put(t)
if retryDuration > time.Minute {
retryDuration = time.Minute
}
logger.Errorf("couldn't send a block with size %d bytes to %q: %s; re-sending the block in %.3f seconds",
len(block), c.remoteWriteURL, err, retryDuration.Seconds())
time.Sleep(retryDuration)
c.retriesCount.Inc()
goto again
}
statusCode := resp.StatusCode
if statusCode/100 == 2 {
_ = resp.Body.Close()
c.requestsOKCount.Inc()
c.bytesSent.Add(len(block))
c.blocksSent.Inc()
return true
}
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_requests_total{url=%q, status_code="%d"}`, c.sanitizedURL, statusCode)).Inc()
switch statusCode {
case 409:
logBlockRejected(block, c.sanitizedURL, resp)
// Just drop block on 409 status code like Prometheus does.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/873
// and https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1149
_ = resp.Body.Close()
c.packetsDropped.Inc()
return true
// - Remote Write v1 specification implicitly expects a `400 Bad Request` when the encoding is not supported.
// - Remote Write v2 specification explicitly specifies a `415 Unsupported Media Type` for unsupported encodings.
// - Real-world implementations of v1 use both 400 and 415 status codes.
// See more in research: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786918054
case 415, 400:
if encoding.IsZstd(block) {
logger.Infof("received unsupported media type or bad request from remote storage at %q. Re-packing the block to Prometheus remote write and retrying."+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
zstdBlockLen := len(block)
block, err = repackBlockFromZstdToSnappy(block)
if err == nil {
if c.canDowngradeVMProto.Swap(false) {
logger.Infof("received unsupported media type or bad request from remote storage at %q. Downgrading protocol from VictoriaMetrics to Prometheus remote write for all future requests. "+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
c.useVMProto.Store(false)
}
c.retriesCount.Inc()
_ = resp.Body.Close()
goto again
}
logger.Warnf("failed to repack zstd block (%s bytes) to snappy: %s; The block will be rejected. "+
"Possible cause: ungraceful shutdown leading to persisted queue corruption.",
zstdBlockLen, err)
statusCode := resp.StatusCode()
if statusCode/100 != 2 {
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_requests_total{url=%q, status_code="%d"}`, c.urlLabelValue, statusCode)).Inc()
retryDuration *= 2
if retryDuration > time.Minute {
retryDuration = time.Minute
}
// Just drop snappy blocks on 400 or 415 status codes like Prometheus does.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/873
// and https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1149
logBlockRejected(block, c.sanitizedURL, resp)
_ = resp.Body.Close()
c.packetsDropped.Inc()
return true
logger.Errorf("unexpected status code received after sending a block with size %d bytes to %q: %d; response body=%q; re-sending the block in %.3f seconds",
len(block), c.remoteWriteURL, statusCode, resp.Body(), retryDuration.Seconds())
time.Sleep(retryDuration)
c.retriesCount.Inc()
goto again
}
c.requestsOKCount.Inc()
// Unexpected status code returned
retriesCount++
retryAfterHeader := parseRetryAfterHeader(resp.Header.Get("Retry-After"))
retryDuration = getRetryDuration(retryAfterHeader, retryDuration, maxRetryDuration)
// Handle response
body, err := io.ReadAll(resp.Body)
_ = resp.Body.Close()
if err != nil {
logger.Errorf("cannot read response body from %q during retry #%d: %s", c.sanitizedURL, retriesCount, err)
} else {
logger.Errorf("unexpected status code received after sending a block with size %d bytes to %q during retry #%d: %d; response body=%q; "+
"re-sending the block in %.3f seconds", len(block), c.sanitizedURL, retriesCount, statusCode, body, retryDuration.Seconds())
}
t := timerpool.Get(retryDuration)
select {
case <-c.stopCh:
timerpool.Put(t)
return false
case <-t.C:
timerpool.Put(t)
}
c.retriesCount.Inc()
goto again
// The block has been successfully sent to the remote storage.
fasthttp.ReleaseResponse(resp)
fasthttp.ReleaseRequest(req)
}
var remoteWriteRejectedLogger = logger.WithThrottler("remoteWriteRejected", 5*time.Second)
var remoteWriteRetryLogger = logger.WithThrottler("remoteWriteRetry", 5*time.Second)
// getRetryDuration returns retry duration.
// retryAfterDuration has the highest priority.
// If retryAfterDuration is not specified, retryDuration gets doubled.
// retryDuration can't exceed maxRetryDuration.
//
// Also see: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6097
func getRetryDuration(retryAfterDuration, retryDuration, maxRetryDuration time.Duration) time.Duration {
// retryAfterDuration has the highest priority
if retryAfterDuration > 0 {
return timeutil.AddJitterToDuration(retryAfterDuration)
func doRequestWithPossibleRetry(hc *fasthttp.HostClient, req *fasthttp.Request, resp *fasthttp.Response) error {
// There is no need to call DoTimeout, since the timeout is already set in hc.ReadTimeout.
err := hc.Do(req, resp)
if err == nil {
return nil
}
// default backoff retry policy
retryDuration *= 2
if retryDuration > maxRetryDuration {
retryDuration = maxRetryDuration
if err != fasthttp.ErrConnectionClosed {
return err
}
return retryDuration
}
// repackBlockFromZstdToSnappy repacks the given zstd-compressed block to snappy-compressed block.
//
// The input block may be corrupted, for example, if vmagent was shut down ungracefully and
// failed to properly update the persisted queue files. In such cases, zstd decompression
// will fail and an error will be returned.
//
// For more details, see: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9417
func repackBlockFromZstdToSnappy(zstdBlock []byte) ([]byte, error) {
plainBlock := make([]byte, 0, len(zstdBlock)*2)
plainBlock, err := zstd.Decompress(plainBlock, zstdBlock)
if err != nil {
return nil, fmt.Errorf("zstd: decompress: %s", err)
}
return snappy.Encode(nil, plainBlock), nil
}
func logBlockRejected(block []byte, sanitizedURL string, resp *http.Response) {
body, err := io.ReadAll(resp.Body)
if err != nil {
remoteWriteRejectedLogger.Errorf("sending a block with size %d bytes to %q was rejected (skipping the block): status code %d; "+
"failed to read response body: %s",
len(block), sanitizedURL, resp.StatusCode, err)
} else {
remoteWriteRejectedLogger.Errorf("sending a block with size %d bytes to %q was rejected (skipping the block): status code %d; response body: %s",
len(block), sanitizedURL, resp.StatusCode, string(body))
}
}
// parseRetryAfterHeader parses `Retry-After` value retrieved from HTTP response header.
// retryAfterString should be either an HTTP-date or a number of seconds.
// It will return time.Duration(0) if `retryAfterString` does not follow RFC 7231.
func parseRetryAfterHeader(retryAfterString string) (retryAfterDuration time.Duration) {
if retryAfterString == "" {
return retryAfterDuration
}
defer func() {
v := retryAfterDuration.Seconds()
logger.Infof("'Retry-After: %s' parsed into %.2f second(s)", retryAfterString, v)
}()
// Retry-After could be in "Mon, 02 Jan 2006 15:04:05 GMT" format.
if parsedTime, err := time.Parse(http.TimeFormat, retryAfterString); err == nil {
return time.Duration(time.Until(parsedTime).Seconds()) * time.Second
}
// Retry-After could be in seconds.
if seconds, err := strconv.Atoi(retryAfterString); err == nil {
return time.Duration(seconds) * time.Second
}
return 0
// Retry request if the server closed the keep-alive connection during the first attempt.
return hc.Do(req, resp)
}
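The retry policy on the newer side combines three inputs: a server-supplied `Retry-After` value wins when present, otherwise the delay doubles, and it is capped at `-remoteWrite.retryMaxInterval` (the old side hard-codes a one-minute cap). A worked sketch of that precedence, with the ~10% jitter from `timeutil.AddJitterToDuration` omitted:

```go
package main

import (
	"fmt"
	"time"
)

// nextRetry mirrors getRetryDuration: Retry-After has priority,
// otherwise the delay doubles, and it never exceeds the maximum.
// (Jitter omitted; the real code adds up to ~10% on top.)
func nextRetry(retryAfter, current, max time.Duration) time.Duration {
	if retryAfter > 0 {
		return retryAfter
	}
	current *= 2
	if current > max {
		return max
	}
	return current
}

func main() {
	d := time.Second
	for i := 0; i < 8; i++ {
		d = nextRetry(0, d, time.Minute)
		fmt.Println(d) // 2s, 4s, 8s, 16s, 32s, 1m, 1m, 1m
	}
	fmt.Println(nextRetry(90*time.Second, d, time.Minute)) // Retry-After wins: 1m30s
}
```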


@@ -1,132 +0,0 @@
package remotewrite
import (
"math"
"net/http"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/golang/snappy"
)
func TestCalculateRetryDuration(t *testing.T) {
// `testFunc` calls `getRetryDuration` `n` times
// and checks that the result of `getRetryDuration` is
// 1. >= expectMinDuration
// 2. <= expectMinDuration + 10% (see timeutil.AddJitterToDuration)
f := func(retryAfterDuration, retryDuration time.Duration, n int, expectMinDuration time.Duration) {
t.Helper()
for i := 0; i < n; i++ {
retryDuration = getRetryDuration(retryAfterDuration, retryDuration, time.Minute)
}
expectMaxDuration := helper(expectMinDuration)
expectMinDuration = expectMinDuration - (1000 * time.Millisecond) // Avoid edge case when calculating time.Until(now)
if retryDuration < expectMinDuration || retryDuration > expectMaxDuration {
t.Fatalf(
"incorrect retry duration, want (ms): [%d, %d], got (ms): %d",
expectMinDuration.Milliseconds(), expectMaxDuration.Milliseconds(),
retryDuration.Milliseconds(),
)
}
}
// Call getRetryDuration once.
{
// default backoff policy
f(0, time.Second, 1, 2*time.Second)
// default backoff policy exceeds max limit
f(0, 10*time.Minute, 1, time.Minute)
// retry after > default backoff policy
f(10*time.Second, 1*time.Second, 1, 10*time.Second)
// retry after < default backoff policy
f(1*time.Second, 10*time.Second, 1, 1*time.Second)
// retry after invalid and < default backoff policy
f(0, time.Second, 1, 2*time.Second)
}
// Call getRetryDuration multiple times.
{
// default backoff policy 2 times
f(0, time.Second, 2, 4*time.Second)
// default backoff policy 3 times
f(0, time.Second, 3, 8*time.Second)
// default backoff policy N times exceed max limit
f(0, time.Second, 10, time.Minute)
// retry after 120s 1 times
f(120*time.Second, time.Second, 1, 120*time.Second)
// retry after 120s 2 times
f(120*time.Second, time.Second, 2, 120*time.Second)
}
}
func TestParseRetryAfterHeader(t *testing.T) {
f := func(retryAfterString string, expectResult time.Duration) {
t.Helper()
result := parseRetryAfterHeader(retryAfterString)
// expect `expectResult == result` when retryAfterString is in seconds or invalid
// expect the difference between result and expectResult to be lower than 10%
if !(expectResult == result || math.Abs(float64(expectResult-result))/float64(expectResult) < 0.10) {
t.Fatalf(
"incorrect retry after duration, want (ms): %d, got (ms): %d",
expectResult.Milliseconds(), result.Milliseconds(),
)
}
}
// retry after header in seconds
f("10", 10*time.Second)
// retry after header in date time
f(time.Now().Add(30*time.Second).UTC().Format(http.TimeFormat), 30*time.Second)
// retry after header invalid
f("invalid-retry-after", 0)
// retry after header not in GMT
f(time.Now().Add(10*time.Second).Format("Mon, 02 Jan 2006 15:04:05 FAKETZ"), 0)
}
// helper calculates the max possible duration returned by timeutil.AddJitterToDuration.
func helper(d time.Duration) time.Duration {
dv := d / 10
if dv > 10*time.Second {
dv = 10 * time.Second
}
return d + dv
}
func TestRepackBlockFromZstdToSnappy(t *testing.T) {
expectedPlainBlock := []byte(`foobar`)
zstdBlock := encoding.CompressZSTDLevel(nil, expectedPlainBlock, 1)
snappyBlock, err := repackBlockFromZstdToSnappy(zstdBlock)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
actualPlainBlock, err := snappy.Decode(nil, snappyBlock)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if string(actualPlainBlock) != string(expectedPlainBlock) {
t.Fatalf("unexpected plain block; got %q; want %q", actualPlainBlock, expectedPlainBlock)
}
}
func TestRepackBlockFromZstdToSnappyInvalidBlock(t *testing.T) {
snappyBlock, err := repackBlockFromZstdToSnappy([]byte("invalid zstd block"))
if err == nil {
t.Fatalf("expected error for invalid zstd block; got nil")
}
if len(snappyBlock) != 0 {
t.Fatalf("expected empty snappy block; got %d bytes", len(snappyBlock))
}
}


@@ -3,36 +3,27 @@ package remotewrite
import (
"flag"
"sync"
"sync/atomic"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding/zstd"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
)
var (
flushInterval = flag.Duration("remoteWrite.flushInterval", time.Second, "Interval for flushing the data to remote storage. "+
"This option takes effect only when less than -remoteWrite.maxRowsPerBlock data points per -remoteWrite.flushInterval are pushed to -remoteWrite.url")
maxUnpackedBlockSize = flagutil.NewBytes("remoteWrite.maxBlockSize", 8*1024*1024, "The maximum block size to send to remote storage. Bigger blocks may improve performance at the cost of the increased memory usage. See also -remoteWrite.maxRowsPerBlock")
maxRowsPerBlock = flag.Int("remoteWrite.maxRowsPerBlock", 10000, "The maximum number of samples to send in each block to remote storage. Higher number may improve performance at the cost of the increased memory usage. See also -remoteWrite.maxBlockSize")
maxMetadataPerBlock = flag.Int("remoteWrite.maxMetadataPerBlock", 5000, "The maximum number of metadata to send in each block to remote storage. Higher number may improve performance at the cost of the increased memory usage. See also -remoteWrite.maxBlockSize")
vmProtoCompressLevel = flag.Int("remoteWrite.vmProtoCompressLevel", 0, "The compression level for VictoriaMetrics remote write protocol. "+
"Higher values reduce network traffic at the cost of higher CPU usage. Negative values reduce CPU usage at the cost of increased network traffic. "+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol")
"Higher value reduces network bandwidth usage at the cost of delayed push of scraped data to remote storage. "+
"Minimum supported interval is 1 second")
maxUnpackedBlockSize = flag.Int("remoteWrite.maxBlockSize", 32*1024*1024, "The maximum size in bytes of unpacked request to send to remote storage. "+
"It shouldn't exceed -maxInsertRequestSize from VictoriaMetrics")
)
// the maximum number of rows to send per each block.
const maxRowsPerBlock = 10000
type pendingSeries struct {
mu sync.Mutex
wr writeRequest
@@ -41,12 +32,9 @@ type pendingSeries struct {
periodicFlusherWG sync.WaitGroup
}
func newPendingSeries(fq *persistentqueue.FastQueue, isVMRemoteWrite *atomic.Bool, significantFigures, roundDigits int) *pendingSeries {
func newPendingSeries(pushBlock func(block []byte)) *pendingSeries {
var ps pendingSeries
ps.wr.fq = fq
ps.wr.isVMRemoteWrite = isVMRemoteWrite
ps.wr.significantFigures = significantFigures
ps.wr.roundDigits = roundDigits
ps.wr.pushBlock = pushBlock
ps.stopCh = make(chan struct{})
ps.periodicFlusherWG.Add(1)
go func() {
@@ -61,18 +49,10 @@ func (ps *pendingSeries) MustStop() {
ps.periodicFlusherWG.Wait()
}
func (ps *pendingSeries) TryPushTimeSeries(tss []prompb.TimeSeries) bool {
func (ps *pendingSeries) Push(tss []prompbmarshal.TimeSeries) {
ps.mu.Lock()
ok := ps.wr.tryPushTimeSeries(tss)
ps.wr.push(tss)
ps.mu.Unlock()
return ok
}
func (ps *pendingSeries) TryPushMetadata(mms []prompb.MetricMetadata) bool {
ps.mu.Lock()
ok := ps.wr.tryPushMetadata(mms)
ps.mu.Unlock()
return ok
}
func (ps *pendingSeries) periodicFlusher() {
@@ -80,348 +60,140 @@ func (ps *pendingSeries) periodicFlusher() {
if flushSeconds <= 0 {
flushSeconds = 1
}
d := timeutil.AddJitterToDuration(*flushInterval)
ticker := time.NewTicker(d)
ticker := time.NewTicker(*flushInterval)
defer ticker.Stop()
for {
mustStop := false
for !mustStop {
select {
case <-ps.stopCh:
ps.mu.Lock()
ps.wr.mustFlushOnStop()
ps.mu.Unlock()
return
mustStop = true
case <-ticker.C:
if fasttime.UnixTimestamp()-ps.wr.lastFlushTime.Load() < uint64(flushSeconds) {
if fasttime.UnixTimestamp()-ps.wr.lastFlushTime < uint64(flushSeconds) {
continue
}
}
ps.mu.Lock()
_ = ps.wr.tryFlush()
ps.wr.flush()
ps.mu.Unlock()
}
}
type writeRequest struct {
lastFlushTime atomic.Uint64
wr prompbmarshal.WriteRequest
pushBlock func(block []byte)
lastFlushTime uint64
// The queue to send blocks to.
fq *persistentqueue.FastQueue
tss []prompbmarshal.TimeSeries
// Whether to encode the write request with VictoriaMetrics remote write protocol.
isVMRemoteWrite *atomic.Bool
// How many significant figures must be left before sending the writeRequest to fq.
significantFigures int
// How many decimal digits after point must be left before sending the writeRequest to fq.
roundDigits int
wr prompb.WriteRequest
tss []prompb.TimeSeries
mms []prompb.MetricMetadata
labels []prompb.Label
samples []prompb.Sample
// buf holds labels data
buf []byte
// metadatabuf holds metadata data
metadatabuf []byte
labels []prompbmarshal.Label
samples []prompbmarshal.Sample
buf []byte
}
func (wr *writeRequest) reset() {
// Do not reset lastFlushTime, fq, isVMRemoteWrite, significantFigures and roundDigits, since they are reused.
wr.wr.Timeseries = nil
wr.wr.Metadata = nil
clear(wr.tss)
for i := range wr.tss {
ts := &wr.tss[i]
ts.Labels = nil
ts.Samples = nil
}
wr.tss = wr.tss[:0]
clear(wr.mms)
wr.mms = wr.mms[:0]
promrelabel.CleanLabels(wr.labels)
for i := range wr.labels {
label := &wr.labels[i]
label.Name = ""
label.Value = ""
}
wr.labels = wr.labels[:0]
wr.samples = wr.samples[:0]
wr.buf = wr.buf[:0]
wr.metadatabuf = wr.metadatabuf[:0]
}
// mustFlushOnStop force pushes wr data into wr.fq
//
// This is needed in order to properly save in-memory data to persistent queue on graceful shutdown.
func (wr *writeRequest) mustFlushOnStop() {
func (wr *writeRequest) flush() {
wr.wr.Timeseries = wr.tss
wr.wr.Metadata = wr.mms
if !tryPushWriteRequest(&wr.wr, wr.mustWriteBlock, wr.isVMRemoteWrite.Load()) {
logger.Panicf("BUG: final flush must always return true")
}
wr.lastFlushTime = fasttime.UnixTimestamp()
pushWriteRequest(&wr.wr, wr.pushBlock)
wr.reset()
}
func (wr *writeRequest) mustWriteBlock(block []byte) bool {
wr.fq.MustWriteBlockIgnoreDisabledPQ(block)
return true
}
func (wr *writeRequest) tryFlush() bool {
wr.wr.Timeseries = wr.tss
wr.wr.Metadata = wr.mms
wr.lastFlushTime.Store(fasttime.UnixTimestamp())
if !tryPushWriteRequest(&wr.wr, wr.fq.TryWriteBlock, wr.isVMRemoteWrite.Load()) {
return false
}
wr.reset()
return true
}
func adjustSampleValues(samples []prompb.Sample, significantFigures, roundDigits int) {
if n := significantFigures; n > 0 {
for i := range samples {
s := &samples[i]
s.Value = decimal.RoundToSignificantFigures(s.Value, n)
}
}
if n := roundDigits; n < 100 {
for i := range samples {
s := &samples[i]
s.Value = decimal.RoundToDecimalDigits(s.Value, n)
}
}
}
func (wr *writeRequest) tryPushMetadata(mms []prompb.MetricMetadata) bool {
mmdDst := wr.mms
maxMetadataPerBlock := *maxMetadataPerBlock
for i := range mms {
if len(wr.mms) >= maxMetadataPerBlock {
if !wr.tryFlush() {
return false
}
mmdDst = wr.mms
}
mmSrc := &mms[i]
mmdDst = append(mmdDst, prompb.MetricMetadata{})
wr.copyMetadata(&mmdDst[len(mmdDst)-1], mmSrc)
}
wr.mms = mmdDst
return true
}
func (wr *writeRequest) copyMetadata(dst, src *prompb.MetricMetadata) {
// Direct copy for non-string fields, which are safe to copy by value.
dst.Type = src.Type
dst.Unit = src.Unit
// Pre-allocate memory for all string fields.
neededBufLen := len(src.MetricFamilyName) + len(src.Help)
bufLen := len(wr.metadatabuf)
wr.metadatabuf = slicesutil.SetLength(wr.metadatabuf, bufLen+neededBufLen)
buf := wr.metadatabuf[:bufLen]
// Copy MetricFamilyName
bufLen = len(buf)
buf = append(buf, src.MetricFamilyName...)
dst.MetricFamilyName = bytesutil.ToUnsafeString(buf[bufLen:])
// Copy Help
bufLen = len(buf)
buf = append(buf, src.Help...)
dst.Help = bytesutil.ToUnsafeString(buf[bufLen:])
wr.metadatabuf = buf
}
func (wr *writeRequest) tryPushTimeSeries(src []prompb.TimeSeries) bool {
func (wr *writeRequest) push(src []prompbmarshal.TimeSeries) {
tssDst := wr.tss
maxSamplesPerBlock := *maxRowsPerBlock
// Allow up to 10x labels per block on average.
maxLabelsPerBlock := 10 * maxSamplesPerBlock
for i := range src {
if len(wr.samples) >= maxSamplesPerBlock || len(wr.labels) >= maxLabelsPerBlock {
wr.tss = tssDst
if !wr.tryFlush() {
return false
}
tssDst = append(tssDst, prompbmarshal.TimeSeries{})
dst := &tssDst[len(tssDst)-1]
wr.copyTimeSeries(dst, &src[i])
if len(wr.tss) >= maxRowsPerBlock {
wr.flush()
tssDst = wr.tss
}
tsSrc := &src[i]
adjustSampleValues(tsSrc.Samples, wr.significantFigures, wr.roundDigits)
tssDst = append(tssDst, prompb.TimeSeries{})
wr.copyTimeSeries(&tssDst[len(tssDst)-1], tsSrc)
}
wr.tss = tssDst
return true
}
func (wr *writeRequest) copyTimeSeries(dst, src *prompb.TimeSeries) {
labelsSrc := src.Labels
// Pre-allocate memory for labels.
labelsLen := len(wr.labels)
wr.labels = slicesutil.SetLength(wr.labels, labelsLen+len(labelsSrc))
labelsDst := wr.labels[labelsLen:]
// Pre-allocate memory for byte slice needed for storing label names and values.
neededBufLen := 0
for i := range labelsSrc {
label := &labelsSrc[i]
neededBufLen += len(label.Name) + len(label.Value)
}
bufLen := len(wr.buf)
wr.buf = slicesutil.SetLength(wr.buf, bufLen+neededBufLen)
buf := wr.buf[:bufLen]
// Copy labels
for i := range labelsSrc {
dstLabel := &labelsDst[i]
srcLabel := &labelsSrc[i]
bufLen = len(buf)
buf = append(buf, srcLabel.Name...)
dstLabel.Name = bytesutil.ToUnsafeString(buf[bufLen:])
bufLen = len(buf)
buf = append(buf, srcLabel.Value...)
dstLabel.Value = bytesutil.ToUnsafeString(buf[bufLen:])
}
wr.buf = buf
dst.Labels = labelsDst
// Copy samples
samplesLen := len(wr.samples)
wr.samples = append(wr.samples, src.Samples...)
dst.Samples = wr.samples[samplesLen:]
}
// marshalConcurrency limits the maximum number of concurrent workers, which marshal and compress WriteRequest.
var marshalConcurrencyCh = make(chan struct{}, cgroup.AvailableCPUs())
func tryPushWriteRequest(wr *prompb.WriteRequest, tryPushBlock func(block []byte) bool, isVMRemoteWrite bool) bool {
if wr.IsEmpty() {
// Nothing to push
return true
}
marshalConcurrencyCh <- struct{}{}
bb := writeRequestBufPool.Get()
bb.B = wr.MarshalProtobuf(bb.B[:0])
if len(bb.B) <= maxUnpackedBlockSize.IntN() {
zb := compressBufPool.Get()
if isVMRemoteWrite {
zb.B = zstd.CompressLevel(zb.B[:0], bb.B, *vmProtoCompressLevel)
} else {
zb.B = snappy.Encode(zb.B[:cap(zb.B)], bb.B)
}
writeRequestBufPool.Put(bb)
<-marshalConcurrencyCh
if len(zb.B) <= persistentqueue.MaxBlockSize {
zbLen := len(zb.B)
ok := tryPushBlock(zb.B)
compressBufPool.Put(zb)
if ok {
blockSizeRows.Update(float64(len(wr.Timeseries)))
blockMetadataRows.Update(float64(len(wr.Metadata)))
blockSizeBytes.Update(float64(zbLen))
}
return ok
}
compressBufPool.Put(zb)
} else {
writeRequestBufPool.Put(bb)
<-marshalConcurrencyCh
}
// Split timeseries or metadata into two smaller blocks
switch len(wr.Timeseries) {
case 0:
if len(wr.Metadata) == 1 {
logger.Warnf("dropping a metadata exceeding -remoteWrite.maxBlockSize=%d bytes", maxUnpackedBlockSize.N)
return true
}
metadata := wr.Metadata
n := len(metadata) / 2
wr.Metadata = metadata[:n]
if !tryPushWriteRequest(wr, tryPushBlock, isVMRemoteWrite) {
wr.Metadata = metadata
return false
}
wr.Metadata = metadata[n:]
if !tryPushWriteRequest(wr, tryPushBlock, isVMRemoteWrite) {
wr.Metadata = metadata
return false
}
wr.Metadata = metadata
return true
case 1:
// A single time series left. Recursively split its samples and metadata into smaller parts if possible.
samples := wr.Timeseries[0].Samples
metaData := wr.Metadata
if len(samples) == 1 && len(metaData) <= 1 {
logger.Warnf("dropping a sample for metric and %d metadata which are exceeding -remoteWrite.maxBlockSize=%d bytes", len(metaData), maxUnpackedBlockSize.N)
return true
}
n := len(samples) / 2
m := len(metaData) / 2
wr.Timeseries[0].Samples = samples[:n]
wr.Metadata = metaData[:m]
if !tryPushWriteRequest(wr, tryPushBlock, isVMRemoteWrite) {
wr.Timeseries[0].Samples = samples
wr.Metadata = metaData
return false
}
wr.Timeseries[0].Samples = samples[n:]
wr.Metadata = metaData[m:]
if !tryPushWriteRequest(wr, tryPushBlock, isVMRemoteWrite) {
wr.Timeseries[0].Samples = samples
wr.Metadata = metaData
return false
}
wr.Timeseries[0].Samples = samples
wr.Metadata = metaData
return true
default:
// Split both timeseries and metadata.
timeseries := wr.Timeseries
metaData := wr.Metadata
n := len(timeseries) / 2
m := len(metaData) / 2
wr.Timeseries = timeseries[:n]
wr.Metadata = metaData[:m]
if !tryPushWriteRequest(wr, tryPushBlock, isVMRemoteWrite) {
wr.Timeseries = timeseries
wr.Metadata = metaData
return false
}
wr.Timeseries = timeseries[n:]
wr.Metadata = metaData[m:]
if !tryPushWriteRequest(wr, tryPushBlock, isVMRemoteWrite) {
wr.Timeseries = timeseries
wr.Metadata = metaData
return false
}
wr.Timeseries = timeseries
wr.Metadata = metaData
return true
}
}
var (
blockSizeBytes = metrics.NewHistogram(`vmagent_remotewrite_block_size_bytes`)
blockSizeRows = metrics.NewHistogram(`vmagent_remotewrite_block_size_rows`)
blockMetadataRows = metrics.NewHistogram(`vmagent_remotewrite_block_metadata_rows`)
)
var (
writeRequestBufPool bytesutil.ByteBufferPool
compressBufPool bytesutil.ByteBufferPool
)
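The recursive halving in tryPushWriteRequest is what guarantees that a too-large WriteRequest eventually fits into persistentqueue.MaxBlockSize. A stripped-down sketch of that strategy, where item counts stand in for marshaled byte sizes (an illustration, not the vmagent code):

```
package main

import "fmt"

const maxItemsPerBlock = 4 // hypothetical limit standing in for the block-size check

// tryPushSplit pushes items as whole blocks when they fit, otherwise halves
// the batch and retries each half recursively, failing fast on the first
// sub-push that fails (mirroring the early returns above).
func tryPushSplit(items []int, tryPush func([]int) bool) bool {
    if len(items) == 0 {
        return true
    }
    if len(items) <= maxItemsPerBlock {
        return tryPush(items)
    }
    n := len(items) / 2
    return tryPushSplit(items[:n], tryPush) && tryPushSplit(items[n:], tryPush)
}

func main() {
    var blocks [][]int
    ok := tryPushSplit([]int{1, 2, 3, 4, 5, 6, 7, 8, 9}, func(b []int) bool {
        blocks = append(blocks, b)
        return true
    })
    fmt.Println(ok, blocks) // true [[1 2 3 4] [5 6] [7 8 9]]
}
```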


@@ -1,73 +0,0 @@
package remotewrite
import (
"fmt"
"math"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
)
func TestPushWriteRequest(t *testing.T) {
rowsCounts := []int{1, 10, 100, 1e3, 1e4}
expectedBlockLensProm := []int{216, 1848, 16424, 169882, 1757876}
expectedBlockLensVM := []int{138, 492, 3927, 34995, 288476}
for i, rowsCount := range rowsCounts {
expectedBlockLenProm := expectedBlockLensProm[i]
expectedBlockLenVM := expectedBlockLensVM[i]
t.Run(fmt.Sprintf("%d", rowsCount), func(t *testing.T) {
testPushWriteRequest(t, rowsCount, expectedBlockLenProm, expectedBlockLenVM)
})
}
}
func testPushWriteRequest(t *testing.T, rowsCount, expectedBlockLenProm, expectedBlockLenVM int) {
f := func(isVMRemoteWrite bool, expectedBlockLen int, tolerancePrc float64) {
t.Helper()
wr := newTestWriteRequest(rowsCount, 20)
pushBlockLen := 0
pushBlock := func(block []byte) bool {
if pushBlockLen > 0 {
panic(fmt.Errorf("BUG: pushBlock called multiple times; pushBlockLen=%d at first call, len(block)=%d at second call", pushBlockLen, len(block)))
}
pushBlockLen = len(block)
return true
}
if !tryPushWriteRequest(wr, pushBlock, isVMRemoteWrite) {
t.Fatalf("cannot push data to remote storage")
}
if math.Abs(float64(pushBlockLen-expectedBlockLen)/float64(expectedBlockLen)*100) > tolerancePrc {
t.Fatalf("unexpected block len for rowsCount=%d, isVMRemoteWrite=%v; got %d bytes; expecting %d bytes +- %.0f%%",
rowsCount, isVMRemoteWrite, pushBlockLen, expectedBlockLen, tolerancePrc)
}
}
// Check Prometheus remote write
f(false, expectedBlockLenProm, 3)
// Check VictoriaMetrics remote write
f(true, expectedBlockLenVM, 15)
}
func newTestWriteRequest(seriesCount, labelsCount int) *prompb.WriteRequest {
var wr prompb.WriteRequest
for i := 0; i < seriesCount; i++ {
var labels []prompb.Label
for j := 0; j < labelsCount; j++ {
labels = append(labels, prompb.Label{
Name: fmt.Sprintf("label_%d_%d", i, j),
Value: fmt.Sprintf("value_%d_%d", i, j),
})
}
wr.Timeseries = append(wr.Timeseries, prompb.TimeSeries{
Labels: labels,
Samples: []prompb.Sample{
{
Value: float64(i),
Timestamp: 1000 * int64(i),
},
},
})
}
return &wr
}


@@ -1,35 +0,0 @@
package remotewrite
import (
"fmt"
"testing"
"github.com/golang/snappy"
"github.com/klauspost/compress/s2"
)
func BenchmarkCompressWriteRequestSnappy(b *testing.B) {
b.Run("snappy", func(b *testing.B) {
benchmarkCompressWriteRequest(b, snappy.Encode)
})
b.Run("s2", func(b *testing.B) {
benchmarkCompressWriteRequest(b, s2.EncodeSnappy)
})
}
func benchmarkCompressWriteRequest(b *testing.B, compressFunc func(dst, src []byte) []byte) {
for _, rowsCount := range []int{1, 10, 100, 1e3, 1e4} {
b.Run(fmt.Sprintf("rows_%d", rowsCount), func(b *testing.B) {
wr := newTestWriteRequest(rowsCount, 10)
data := wr.MarshalProtobuf(nil)
b.ReportAllocs()
b.SetBytes(int64(rowsCount))
b.RunParallel(func(pb *testing.PB) {
var zb []byte
for pb.Next() {
zb = compressFunc(zb[:cap(zb)], data)
}
})
})
}
}


@@ -2,179 +2,78 @@ package remotewrite
import (
"flag"
"fmt"
"strconv"
"strings"
"sync"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/metrics"
)
var (
unparsedLabelsGlobal = flagutil.NewArrayString("remoteWrite.label", "Optional label in the form 'name=value' to add to all the metrics before sending them to -remoteWrite.url. "+
"Pass multiple -remoteWrite.label flags in order to add multiple labels to metrics before sending them to remote storage")
relabelConfigPathGlobal = flag.String("remoteWrite.relabelConfig", "", "Optional path to file with relabeling configs, which are applied "+
"to all the metrics before sending them to -remoteWrite.url. See also -remoteWrite.urlRelabelConfig. "+
"The path can point either to local file or to http url. "+
"See https://docs.victoriametrics.com/victoriametrics/relabeling/")
relabelConfigPaths = flagutil.NewArrayString("remoteWrite.urlRelabelConfig", "Optional path to relabel configs for the corresponding -remoteWrite.url. "+
"See also -remoteWrite.relabelConfig. The path can point either to local file or to http url. "+
"See https://docs.victoriametrics.com/victoriametrics/relabeling/")
usePromCompatibleNaming = flag.Bool("usePromCompatibleNaming", false, "Whether to replace characters unsupported by Prometheus with underscores "+
"in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. "+
"See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels")
unparsedLabelsGlobal = flagutil.NewArray("remoteWrite.label", "Optional label in the form 'name=value' to add to all the metrics before sending them to -remoteWrite.url. "+
"Pass multiple -remoteWrite.label flags in order to add multiple flags to metrics before sending them to remote storage")
relabelConfigPathGlobal = flag.String("remoteWrite.relabelConfig", "", "Optional path to file with relabel_config entries. These entries are applied to all the metrics "+
"before sending them to -remoteWrite.url. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config for details")
)
var labelsGlobal []prompb.Label
var (
relabelConfigReloads *metrics.Counter
relabelConfigReloadErrors *metrics.Counter
relabelConfigSuccess *metrics.Gauge
relabelConfigTimestamp *metrics.Counter
)
func initRelabelMetrics() {
relabelConfigReloads = metrics.NewCounter(`vmagent_relabel_config_reloads_total`)
relabelConfigReloadErrors = metrics.NewCounter(`vmagent_relabel_config_reloads_errors_total`)
relabelConfigSuccess = metrics.NewGauge(`vmagent_relabel_config_last_reload_successful`, nil)
relabelConfigTimestamp = metrics.NewCounter(`vmagent_relabel_config_last_reload_success_timestamp_seconds`)
}
// CheckRelabelConfigs checks -remoteWrite.relabelConfig and -remoteWrite.urlRelabelConfig.
func CheckRelabelConfigs() error {
_, err := loadRelabelConfigs()
return err
}
func initRelabelConfigs() {
rcs, err := loadRelabelConfigs()
if err != nil {
logger.Fatalf("cannot initialize relabel configs: %s", err)
}
allRelabelConfigs.Store(rcs)
if rcs.isSet() {
initRelabelMetrics()
relabelConfigSuccess.Set(1)
relabelConfigTimestamp.Set(fasttime.UnixTimestamp())
}
}
func reloadRelabelConfigs() {
rcs := allRelabelConfigs.Load()
if !rcs.isSet() {
return
}
relabelConfigReloads.Inc()
logger.Infof("reloading relabel configs pointed by -remoteWrite.relabelConfig and -remoteWrite.urlRelabelConfig")
rcs, err := loadRelabelConfigs()
if err != nil {
relabelConfigReloadErrors.Inc()
relabelConfigSuccess.Set(0)
logger.Errorf("cannot reload relabel configs; preserving the previous configs; error: %s", err)
return
}
allRelabelConfigs.Store(rcs)
relabelConfigSuccess.Set(1)
relabelConfigTimestamp.Set(fasttime.UnixTimestamp())
logger.Infof("successfully reloaded relabel configs")
}
func loadRelabelConfigs() (*relabelConfigs, error) {
var rcs relabelConfigs
if *relabelConfigPathGlobal != "" {
global, err := promrelabel.LoadRelabelConfigs(*relabelConfigPathGlobal)
if err != nil {
return nil, fmt.Errorf("cannot load -remoteWrite.relabelConfig=%q: %w", *relabelConfigPathGlobal, err)
}
rcs.global = global
}
if len(*relabelConfigPaths) > len(*remoteWriteURLs) {
return nil, fmt.Errorf("too many -remoteWrite.urlRelabelConfig args: %d; it mustn't exceed the number of -remoteWrite.url args: %d",
len(*relabelConfigPaths), (len(*remoteWriteURLs)))
}
rcs.perURL = make([]*promrelabel.ParsedConfigs, len(*remoteWriteURLs))
for i, path := range *relabelConfigPaths {
if len(path) == 0 {
// Skip empty relabel config.
continue
}
prc, err := promrelabel.LoadRelabelConfigs(path)
if err != nil {
return nil, fmt.Errorf("cannot load relabel configs from -remoteWrite.urlRelabelConfig=%q: %w", path, err)
}
rcs.perURL[i] = prc
}
return &rcs, nil
}
type relabelConfigs struct {
global *promrelabel.ParsedConfigs
perURL []*promrelabel.ParsedConfigs
}
func (rcs *relabelConfigs) isSet() bool {
if rcs == nil {
return false
}
if rcs.global.Len() > 0 {
return true
}
for _, pc := range rcs.perURL {
if pc.Len() > 0 {
return true
}
}
return false
}
// initLabelsGlobal must be called after parsing command-line flags.
func initLabelsGlobal() {
labelsGlobal = nil
for _, s := range *unparsedLabelsGlobal {
if len(s) == 0 {
continue
}
n := strings.IndexByte(s, '=')
if n < 0 {
logger.Fatalf("missing '=' in `-remoteWrite.label`. It must contain label in the form `name=value`; got %q", s)
logger.Panicf("FATAL: missing '=' in `-remoteWrite.label`. It must contain label in the form `name=value`; got %q", s)
}
labelsGlobal = append(labelsGlobal, prompb.Label{
Name: s[:n],
Value: s[n+1:],
})
}
}
func (rctx *relabelCtx) applyRelabeling(tss []prompb.TimeSeries, pcs *promrelabel.ParsedConfigs) []prompb.TimeSeries {
if pcs.Len() == 0 && !*usePromCompatibleNaming {
// Nothing to change.
return tss
}
rctx.reset()
tssDst := tss[:0]
labels := rctx.labels[:0]
for i := range tss {
ts := &tss[i]
labelsLen := len(labels)
labels = append(labels, ts.Labels...)
labels = pcs.Apply(labels, labelsLen)
labels = promrelabel.FinalizeLabels(labels[:labelsLen], labels[labelsLen:])
if len(labels) == labelsLen {
// Drop the current time series, since relabeling removed all the labels.
continue
}
if *usePromCompatibleNaming {
fixPromCompatibleNaming(labels[labelsLen:])
}
tssDst = append(tssDst, prompb.TimeSeries{
Labels: labels[labelsLen:],
Samples: ts.Samples,
})
@@ -183,70 +82,23 @@ func (rctx *relabelCtx) applyRelabeling(tss []prompb.TimeSeries, pcs *promrelabe
return tssDst
}
func (rctx *relabelCtx) appendExtraLabels(tss []prompb.TimeSeries, extraLabels []prompb.Label) {
if len(extraLabels) == 0 {
return
}
rctx.reset()
labels := rctx.labels[:0]
for i := range tss {
ts := &tss[i]
labelsLen := len(labels)
labels = append(labels, ts.Labels...)
for j := range extraLabels {
extraLabel := extraLabels[j]
tmp := promrelabel.GetLabelByName(labels[labelsLen:], extraLabel.Name)
if tmp != nil {
tmp.Value = extraLabel.Value
} else {
labels = append(labels, extraLabel)
}
}
ts.Labels = labels[labelsLen:]
}
rctx.labels = labels
}
func (rctx *relabelCtx) tenantToLabels(tss []prompb.TimeSeries, accountID, projectID uint32) {
rctx.reset()
accountIDStr := strconv.FormatUint(uint64(accountID), 10)
projectIDStr := strconv.FormatUint(uint64(projectID), 10)
labels := rctx.labels[:0]
for i := range tss {
ts := &tss[i]
labelsLen := len(labels)
for _, label := range ts.Labels {
labelName := label.Name
if labelName == "vm_account_id" || labelName == "vm_project_id" {
continue
}
labels = append(labels, label)
}
labels = append(labels, prompb.Label{
Name: "vm_account_id",
Value: accountIDStr,
})
labels = append(labels, prompb.Label{
Name: "vm_project_id",
Value: projectIDStr,
})
ts.Labels = labels[labelsLen:]
}
rctx.labels = labels
}
type relabelCtx struct {
// pool for labels, which are used during the relabeling.
labels []prompb.Label
}
func (rctx *relabelCtx) reset() {
promrelabel.CleanLabels(rctx.labels)
rctx.labels = rctx.labels[:0]
}
var relabelCtxPool = &sync.Pool{
New: func() any {
return &relabelCtx{}
},
}
@@ -256,18 +108,6 @@ func getRelabelCtx() *relabelCtx {
}
func putRelabelCtx(rctx *relabelCtx) {
rctx.reset()
relabelCtxPool.Put(rctx)
}
func fixPromCompatibleNaming(labels []prompb.Label) {
// Replace unsupported Prometheus chars in label names and metric names with underscores.
for i := range labels {
label := &labels[i]
if label.Name == "__name__" {
label.Value = promrelabel.SanitizeMetricName(label.Value)
} else {
label.Name = promrelabel.SanitizeLabelName(label.Name)
}
}
}


@@ -1,69 +0,0 @@
package remotewrite
import (
"reflect"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
)
func TestApplyRelabeling(t *testing.T) {
f := func(pcs *promrelabel.ParsedConfigs, sTss, sExpTss string) {
rctx := &relabelCtx{}
tss, expTss := parseSeries(sTss), parseSeries(sExpTss)
gotTss := rctx.applyRelabeling(tss, pcs)
if !reflect.DeepEqual(gotTss, expTss) {
t.Fatalf("expected to have: \n%v;\ngot: \n%v", expTss, gotTss)
}
}
f(nil, "up", "up")
pcs, err := promrelabel.ParseRelabelConfigsData([]byte(`
- target_label: "foo"
replacement: "aaa"
- action: labeldrop
regex: "env.*"
`))
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
f(pcs, `up{foo="baz", env="prod"}`, `up{foo="aaa"}`)
oldVal := *usePromCompatibleNaming
*usePromCompatibleNaming = true
f(nil, `foo.bar`, `foo_bar`)
*usePromCompatibleNaming = oldVal
}
func TestAppendExtraLabels(t *testing.T) {
f := func(extraLabels []prompb.Label, sTss, sExpTss string) {
t.Helper()
rctx := &relabelCtx{}
tss, expTss := parseSeries(sTss), parseSeries(sExpTss)
rctx.appendExtraLabels(tss, extraLabels)
if !reflect.DeepEqual(tss, expTss) {
t.Fatalf("expected to have: \n%v;\ngot: \n%v", expTss, tss)
}
}
f(nil, "up", "up")
f([]prompb.Label{{Name: "foo", Value: "bar"}}, "up", `up{foo="bar"}`)
f([]prompb.Label{{Name: "foo", Value: "bar"}}, `up{foo="baz"}`, `up{foo="bar"}`)
f([]prompb.Label{{Name: "baz", Value: "qux"}}, `up{foo="baz"}`, `up{foo="baz",baz="qux"}`)
oldVal := *usePromCompatibleNaming
*usePromCompatibleNaming = true
f([]prompb.Label{{Name: "foo.bar", Value: "baz"}}, "up", `up{foo.bar="baz"}`)
*usePromCompatibleNaming = oldVal
}
func parseSeries(data string) []prompb.TimeSeries {
var tss []prompb.TimeSeries
tss = append(tss, prompb.TimeSeries{
Labels: promutil.MustNewLabelsFromString(data).GetLabels(),
})
return tss
}

File diff suppressed because it is too large


@@ -1,349 +0,0 @@
package remotewrite
import (
"fmt"
"math"
"reflect"
"strconv"
"sync/atomic"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/consistenthash"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
"github.com/VictoriaMetrics/metrics"
)
func TestGetLabelsHash_Distribution(t *testing.T) {
f := func(bucketsCount int) {
t.Helper()
// Distribute itemsCount hashes returned by getLabelsHash() across bucketsCount buckets.
itemsCount := 1_000 * bucketsCount
m := make([]int, bucketsCount)
var labels []prompb.Label
for i := 0; i < itemsCount; i++ {
labels = append(labels[:0], prompb.Label{
Name: "__name__",
Value: fmt.Sprintf("some_name_%d", i),
})
for j := 0; j < 10; j++ {
labels = append(labels, prompb.Label{
Name: fmt.Sprintf("label_%d", j),
Value: fmt.Sprintf("value_%d_%d", i, j),
})
}
h := getLabelsHash(labels)
m[h%uint64(bucketsCount)]++
}
// Verify that the distribution is even
expectedItemsPerBucket := itemsCount / bucketsCount
for _, n := range m {
if math.Abs(1-float64(n)/float64(expectedItemsPerBucket)) > 0.04 {
t.Fatalf("unexpected items in the bucket for %d buckets; got %d; want around %d", bucketsCount, n, expectedItemsPerBucket)
}
}
}
f(2)
f(3)
f(4)
f(5)
f(10)
}
func TestRemoteWriteContext_TryPush_ImmutableTimeseries(t *testing.T) {
f := func(streamAggrConfig, relabelConfig string, enableWindows bool, dedupInterval time.Duration, keepInput, dropInput bool, input string) {
t.Helper()
perURLRelabel, err := promrelabel.ParseRelabelConfigsData([]byte(relabelConfig))
if err != nil {
t.Fatalf("cannot load relabel configs: %s", err)
}
rcs := &relabelConfigs{
perURL: []*promrelabel.ParsedConfigs{
perURLRelabel,
},
}
allRelabelConfigs.Store(rcs)
pss := make([]*pendingSeries, 1)
isVMProto := &atomic.Bool{}
isVMProto.Store(true)
pss[0] = newPendingSeries(nil, isVMProto, 0, 100)
rwctx := &remoteWriteCtx{
idx: 0,
streamAggrKeepInput: keepInput,
streamAggrDropInput: dropInput,
pss: pss,
rowsPushedAfterRelabel: metrics.GetOrCreateCounter(`foo`),
rowsDroppedByRelabel: metrics.GetOrCreateCounter(`bar`),
}
if dedupInterval > 0 {
rwctx.deduplicator = streamaggr.NewDeduplicator(nil, enableWindows, dedupInterval, nil, "dedup-global")
}
if streamAggrConfig != "" {
pushNoop := func(_ []prompb.TimeSeries) {}
opts := streamaggr.Options{
EnableWindows: enableWindows,
}
sas, err := streamaggr.LoadFromData([]byte(streamAggrConfig), pushNoop, &opts, "global")
if err != nil {
t.Fatalf("cannot load streamaggr configs: %s", err)
}
defer sas.MustStop()
rwctx.sas.Store(sas)
}
offsetMsecs := time.Now().UnixMilli()
inputTss := prometheus.MustParsePromMetrics(input, offsetMsecs)
expectedTss := make([]prompb.TimeSeries, len(inputTss))
// copy inputTss to make sure it is not mutated during TryPush call
copy(expectedTss, inputTss)
if !rwctx.TryPushTimeSeries(inputTss, false) {
t.Fatalf("cannot push samples to rwctx")
}
if !reflect.DeepEqual(expectedTss, inputTss) {
t.Fatalf("unexpected samples;\ngot\n%v\nwant\n%v", inputTss, expectedTss)
}
}
f(`
- interval: 1m
outputs: [sum_samples]
- interval: 2m
outputs: [count_series]
`, `
- action: keep
source_labels: [env]
regex: "dev"
`, false, 0, false, false, `
metric{env="dev"} 10
metric{env="bar"} 20
metric{env="dev"} 15
metric{env="bar"} 25
`)
f(``, ``, true, time.Hour, false, false, `
metric{env="dev"} 10
metric{env="foo"} 20
metric{env="dev"} 15
metric{env="foo"} 25
`)
f(``, `
- action: keep
source_labels: [env]
regex: "dev"
`, true, time.Hour, false, false, `
metric{env="dev"} 10
metric{env="bar"} 20
metric{env="dev"} 15
metric{env="bar"} 25
`)
f(``, `
- action: keep
source_labels: [env]
regex: "dev"
`, true, time.Hour, true, false, `
metric{env="test"} 10
metric{env="dev"} 20
metric{env="foo"} 15
metric{env="dev"} 25
`)
f(``, `
- action: keep
source_labels: [env]
regex: "dev"
`, true, time.Hour, false, true, `
metric{env="foo"} 10
metric{env="dev"} 20
metric{env="foo"} 15
metric{env="dev"} 25
`)
f(``, `
- action: keep
source_labels: [env]
regex: "dev"
`, true, time.Hour, true, true, `
metric{env="dev"} 10
metric{env="test"} 20
metric{env="dev"} 15
metric{env="bar"} 25
`)
}
func TestShardAmountRemoteWriteCtx(t *testing.T) {
// 1. distribute 100000 series to n nodes.
// 2. remove the last node from healthy list.
// 3. distribute the same 100000 series to (n-1) nodes again.
// 4. check active time series change rate:
// change rate must be < (3/total nodes), e.g. +30% if you have 10 nodes.
f := func(remoteWriteCount int, healthyIdx []int, replicas int) {
t.Helper()
defer func() {
rwctxsGlobal = nil
rwctxsGlobalIdx = nil
rwctxConsistentHashGlobal = nil
}()
rwctxsGlobal = make([]*remoteWriteCtx, remoteWriteCount)
rwctxsGlobalIdx = make([]int, remoteWriteCount)
rwctxs := make([]*remoteWriteCtx, 0, len(healthyIdx))
for i := range remoteWriteCount {
rwCtx := &remoteWriteCtx{
idx: i,
}
rwctxsGlobalIdx[i] = i
if i >= len(healthyIdx) {
rwctxsGlobal[i] = rwCtx
continue
}
hIdx := healthyIdx[i]
if hIdx != i {
rwctxs = append(rwctxs, &remoteWriteCtx{
idx: hIdx,
})
} else {
rwctxs = append(rwctxs, rwCtx)
}
rwctxsGlobal[i] = rwCtx
}
seriesCount := 100000
// build 100000 series
tssBlock := make([]prompb.TimeSeries, 0, seriesCount)
for i := 0; i < seriesCount; i++ {
tssBlock = append(tssBlock, prompb.TimeSeries{
Labels: []prompb.Label{
{
Name: "label",
Value: strconv.Itoa(i),
},
},
Samples: []prompb.Sample{
{
Timestamp: 0,
Value: 0,
},
},
})
}
// build consistent hash for x remote write context
// build active time series set
nodes := make([]string, 0, remoteWriteCount)
activeTimeSeriesByNodes := make([]map[string]struct{}, remoteWriteCount)
for i := 0; i < remoteWriteCount; i++ {
nodes = append(nodes, fmt.Sprintf("node%d", i))
activeTimeSeriesByNodes[i] = make(map[string]struct{})
}
rwctxConsistentHashGlobal = consistenthash.NewConsistentHash(nodes, 0)
// create shards
x := getTSSShards(len(rwctxs))
shards := x.shards
// execute
shardAmountRemoteWriteCtx(tssBlock, shards, rwctxs, replicas)
for i, nodeIdx := range healthyIdx {
for _, ts := range shards[i] {
// add it to node[nodeIdx]'s active time series
activeTimeSeriesByNodes[nodeIdx][prompb.LabelsToString(ts.Labels)] = struct{}{}
}
}
totalActiveTimeSeries := 0
for _, activeTimeSeries := range activeTimeSeriesByNodes {
totalActiveTimeSeries += len(activeTimeSeries)
}
avgActiveTimeSeries1 := totalActiveTimeSeries / remoteWriteCount
putTSSShards(x)
// removed last node
rwctxs = rwctxs[:len(rwctxs)-1]
healthyIdx = healthyIdx[:len(healthyIdx)-1]
x = getTSSShards(len(rwctxs))
shards = x.shards
// execute
shardAmountRemoteWriteCtx(tssBlock, shards, rwctxs, replicas)
for i, nodeIdx := range healthyIdx {
for _, ts := range shards[i] {
// add it to node[nodeIdx]'s active time series
activeTimeSeriesByNodes[nodeIdx][prompb.LabelsToString(ts.Labels)] = struct{}{}
}
}
totalActiveTimeSeries = 0
for _, activeTimeSeries := range activeTimeSeriesByNodes {
totalActiveTimeSeries += len(activeTimeSeries)
}
avgActiveTimeSeries2 := totalActiveTimeSeries / remoteWriteCount
changed := math.Abs(float64(avgActiveTimeSeries2-avgActiveTimeSeries1) / float64(avgActiveTimeSeries1))
threshold := 3 / float64(remoteWriteCount)
if changed >= threshold {
t.Fatalf("average active time series before: %d, after: %d, changed: %.2f. threshold: %.2f", avgActiveTimeSeries1, avgActiveTimeSeries2, changed, threshold)
}
}
f(5, []int{0, 1, 2, 3, 4}, 1)
f(5, []int{0, 1, 2, 3, 4}, 2)
f(10, []int{0, 1, 2, 3, 4, 5, 6, 7, 9}, 1)
f(10, []int{0, 1, 2, 3, 4, 5, 6, 7, 9}, 3)
}
func TestCalculateHealthyRwctxIdx(t *testing.T) {
f := func(total int, healthyIdx []int, unhealthyIdx []int) {
t.Helper()
healthyMap := make(map[int]bool)
for _, idx := range healthyIdx {
healthyMap[idx] = true
}
rwctxsGlobal = make([]*remoteWriteCtx, total)
rwctxsGlobalIdx = make([]int, total)
rwctxs := make([]*remoteWriteCtx, 0, len(healthyIdx))
for i := range rwctxsGlobal {
rwctx := &remoteWriteCtx{idx: i}
rwctxsGlobal[i] = rwctx
if healthyMap[i] {
rwctxs = append(rwctxs, rwctx)
}
rwctxsGlobalIdx[i] = i
}
gotHealthyIdx, gotUnhealthyIdx := calculateHealthyRwctxIdx(rwctxs)
if !reflect.DeepEqual(healthyIdx, gotHealthyIdx) {
t.Errorf("calculateHealthyRwctxIdx want healthyIdx = %v, got %v", healthyIdx, gotHealthyIdx)
}
if !reflect.DeepEqual(unhealthyIdx, gotUnhealthyIdx) {
t.Errorf("calculateHealthyRwctxIdx want unhealthyIdx = %v, got %v", unhealthyIdx, gotUnhealthyIdx)
}
}
f(5, []int{0, 1, 2, 3, 4}, nil)
f(5, []int{0, 1, 2, 4}, []int{3})
f(5, []int{2, 4}, []int{0, 1, 3})
f(5, []int{0, 2, 4}, []int{1, 3})
f(5, []int{}, []int{0, 1, 2, 3, 4})
f(5, []int{4}, []int{0, 1, 2, 3})
f(1, []int{0}, nil)
f(1, []int{}, []int{0})
}


@@ -0,0 +1,76 @@
package remotewrite
import (
"net"
"sync/atomic"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/fasthttp"
"github.com/VictoriaMetrics/metrics"
)
func statDial(addr string) (conn net.Conn, err error) {
if netutil.TCP6Enabled() {
conn, err = fasthttp.DialDualStack(addr)
} else {
conn, err = fasthttp.Dial(addr)
}
dialsTotal.Inc()
if err != nil {
dialErrors.Inc()
return nil, err
}
conns.Inc()
sc := &statConn{
Conn: conn,
}
return sc, nil
}
var (
dialsTotal = metrics.NewCounter(`vmagent_remotewrite_dials_total`)
dialErrors = metrics.NewCounter(`vmagent_remotewrite_dial_errors_total`)
conns = metrics.NewCounter(`vmagent_remotewrite_conns`)
)
type statConn struct {
closed uint64
net.Conn
}
func (sc *statConn) Read(p []byte) (int, error) {
n, err := sc.Conn.Read(p)
connReadsTotal.Inc()
if err != nil {
connReadErrors.Inc()
}
connBytesRead.Add(n)
return n, err
}
func (sc *statConn) Write(p []byte) (int, error) {
n, err := sc.Conn.Write(p)
connWritesTotal.Inc()
if err != nil {
connWriteErrors.Inc()
}
connBytesWritten.Add(n)
return n, err
}
func (sc *statConn) Close() error {
err := sc.Conn.Close()
if atomic.AddUint64(&sc.closed, 1) == 1 {
conns.Dec()
}
return err
}
var (
connReadsTotal = metrics.NewCounter(`vmagent_remotewrite_conn_reads_total`)
connWritesTotal = metrics.NewCounter(`vmagent_remotewrite_conn_writes_total`)
connReadErrors = metrics.NewCounter(`vmagent_remotewrite_conn_read_errors_total`)
connWriteErrors = metrics.NewCounter(`vmagent_remotewrite_conn_write_errors_total`)
connBytesRead = metrics.NewCounter(`vmagent_remotewrite_conn_bytes_read_total`)
connBytesWritten = metrics.NewCounter(`vmagent_remotewrite_conn_bytes_written_total`)
)
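For context, a dial function with this signature is normally plugged into the HTTP client that sends remote-write requests, so every connection flows through the counting statConn wrapper above. A hedged sketch of such wiring (the Addr and Dial field names come from the fasthttp fork imported above; the helper and address are hypothetical):

```
package remotewrite

import "github.com/VictoriaMetrics/fasthttp"

// newStatHostClient shows, as a sketch, how statDial can be attached to a
// fasthttp.HostClient so that dials, reads and writes are all counted.
func newStatHostClient(addr string) *fasthttp.HostClient {
    return &fasthttp.HostClient{
        Addr: addr, // e.g. "victoria-metrics:8428" (hypothetical address)
        Dial: statDial,
    }
}
```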


@@ -1,258 +0,0 @@
package remotewrite
import (
"flag"
"fmt"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
"github.com/VictoriaMetrics/metrics"
)
var (
// Global config
streamAggrGlobalConfig = flag.String("streamAggr.config", "", "Optional path to file with stream aggregation config. "+
"See https://docs.victoriametrics.com/victoriametrics/stream-aggregation/ . "+
"See also -streamAggr.keepInput, -streamAggr.dropInput and -streamAggr.dedupInterval")
streamAggrGlobalKeepInput = flag.Bool("streamAggr.keepInput", false, "Whether to keep all the input samples after the aggregation "+
"with -streamAggr.config. By default, only aggregated samples are dropped, while the remaining samples "+
"are written to remote storages. See also -streamAggr.dropInput and https://docs.victoriametrics.com/victoriametrics/stream-aggregation/")
streamAggrGlobalDropInput = flag.Bool("streamAggr.dropInput", false, "Whether to drop all the input samples after the aggregation "+
"with -streamAggr.config. By default, only aggregated samples are dropped, while the remaining samples "+
"are written to remote storages. See also -streamAggr.keepInput and https://docs.victoriametrics.com/victoriametrics/stream-aggregation/")
streamAggrGlobalDedupInterval = flag.Duration("streamAggr.dedupInterval", 0, "Input samples are de-duplicated with this interval on "+
"aggregator before optional aggregation with -streamAggr.config. "+
"See also -dedup.minScrapeInterval and https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#deduplication")
streamAggrGlobalIgnoreOldSamples = flag.Bool("streamAggr.ignoreOldSamples", false, "Whether to ignore input samples with old timestamps outside the "+
"current aggregation interval for aggregator. "+
"See https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#ignoring-old-samples")
streamAggrGlobalIgnoreFirstIntervals = flag.Int("streamAggr.ignoreFirstIntervals", 0, "Number of aggregation intervals to skip after the start for "+
"aggregator. Increase this value if you observe incorrect aggregation results after vmagent restarts. It could be caused by receiving unordered delayed data from "+
"clients pushing data into the vmagent. See https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#ignore-aggregation-intervals-on-start")
streamAggrGlobalDropInputLabels = flagutil.NewArrayString("streamAggr.dropInputLabels", "An optional list of labels to drop from samples for aggregator "+
"before stream de-duplication and aggregation. See https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#dropping-unneeded-labels")
streamAggrGlobalEnableWindows = flag.Bool("streamAggr.enableWindows", false, "Enables aggregation within fixed windows for all global aggregators. "+
"This allows getting more precise results, but impacts resource usage as it requires twice as much memory to store two states. "+
"See https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#aggregation-windows.")
// Per URL config
streamAggrConfig = flagutil.NewArrayString("remoteWrite.streamAggr.config", "Optional path to file with stream aggregation config for the corresponding -remoteWrite.url. "+
"See https://docs.victoriametrics.com/victoriametrics/stream-aggregation/ . "+
"See also -remoteWrite.streamAggr.keepInput, -remoteWrite.streamAggr.dropInput and -remoteWrite.streamAggr.dedupInterval")
streamAggrDropInput = flagutil.NewArrayBool("remoteWrite.streamAggr.dropInput", "Whether to drop all the input samples after the aggregation "+
"with -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. By default, only aggregated samples are dropped, while the remaining samples "+
"are written to the corresponding -remoteWrite.url. See also -remoteWrite.streamAggr.keepInput and https://docs.victoriametrics.com/victoriametrics/stream-aggregation/")
streamAggrKeepInput = flagutil.NewArrayBool("remoteWrite.streamAggr.keepInput", "Whether to keep all the input samples after the aggregation "+
"with -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. By default, only aggregated samples are dropped, while the remaining samples "+
"are written to the corresponding -remoteWrite.url. See also -remoteWrite.streamAggr.dropInput and https://docs.victoriametrics.com/victoriametrics/stream-aggregation/")
streamAggrDedupInterval = flagutil.NewArrayDuration("remoteWrite.streamAggr.dedupInterval", 0, "Input samples are de-duplicated with this interval before optional aggregation "+
"with -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. See also -dedup.minScrapeInterval and https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#deduplication")
streamAggrIgnoreOldSamples = flagutil.NewArrayBool("remoteWrite.streamAggr.ignoreOldSamples", "Whether to ignore input samples with old timestamps outside the current "+
"aggregation interval for the corresponding -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. "+
"See https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#ignoring-old-samples")
streamAggrIgnoreFirstIntervals = flagutil.NewArrayInt("remoteWrite.streamAggr.ignoreFirstIntervals", 0, "Number of aggregation intervals to skip after the start "+
"for the corresponding -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. Increase this value if "+
"you observe incorrect aggregation results after vmagent restarts. It could be caused by receiving buffered delayed data from clients pushing data into the vmagent. "+
"See https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#ignore-aggregation-intervals-on-start")
streamAggrDropInputLabels = flagutil.NewArrayString("remoteWrite.streamAggr.dropInputLabels", "An optional list of labels to drop from samples "+
"before stream de-duplication and aggregation with -remoteWrite.streamAggr.config and -remoteWrite.streamAggr.dedupInterval at the corresponding -remoteWrite.url. "+
"Multiple labels per remoteWrite.url must be delimited by '^^': -remoteWrite.streamAggr.dropInputLabels='replica^^az,replica'. "+
"See https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#dropping-unneeded-labels")
streamAggrEnableWindows = flagutil.NewArrayBool("remoteWrite.streamAggr.enableWindows", "Enables aggregation within fixed windows for all remote write's aggregators. "+
"This allows getting more precise results, but impacts resource usage as it requires twice as much memory to store two states. "+
"See https://docs.victoriametrics.com/victoriametrics/stream-aggregation/#aggregation-windows.")
)
// CheckStreamAggrConfigs checks -remoteWrite.streamAggr.config and -streamAggr.config.
func CheckStreamAggrConfigs() error {
// Check global config
sas, err := newStreamAggrConfigGlobal()
if err != nil {
return err
}
sas.MustStop()
if len(*streamAggrConfig) > len(*remoteWriteURLs) {
return fmt.Errorf("too many -remoteWrite.streamAggr.config args: %d; it mustn't exceed the number of -remoteWrite.url args: %d", len(*streamAggrConfig), len(*remoteWriteURLs))
}
pushNoop := func(_ []prompb.TimeSeries) {}
for idx := range *streamAggrConfig {
sas, err := newStreamAggrConfigPerURL(idx, pushNoop)
if err != nil {
return err
}
sas.MustStop()
}
return nil
}
func reloadStreamAggrConfigs() {
reloadStreamAggrConfigGlobal()
for _, rwctx := range rwctxsGlobal {
rwctx.reloadStreamAggrConfig()
}
}
func reloadStreamAggrConfigGlobal() {
path := *streamAggrGlobalConfig
if path == "" {
return
}
logger.Infof("reloading stream aggregation configs pointed by -streamAggr.config=%q", path)
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reloads_total{path=%q}`, path)).Inc()
sasNew, err := newStreamAggrConfigGlobal()
if err != nil {
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reloads_errors_total{path=%q}`, path)).Inc()
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, path)).Set(0)
logger.Errorf("cannot reload -streamAggr.config=%q; continue using the previously loaded config; error: %s", path, err)
return
}
sas := sasGlobal.Load()
if !sasNew.Equal(sas) {
sasOld := sasGlobal.Swap(sasNew)
sasOld.MustStop()
logger.Infof("successfully reloaded -streamAggr.config=%q", path)
} else {
sasNew.MustStop()
logger.Infof("-streamAggr.config=%q wasn't changed since the last reload", path)
}
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, path)).Set(1)
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_success_timestamp_seconds{path=%q}`, path)).Set(fasttime.UnixTimestamp())
}
func initStreamAggrConfigGlobal() {
sas, err := newStreamAggrConfigGlobal()
if err != nil {
logger.Fatalf("cannot initialize global stream aggregators: %s", err)
}
if sas != nil {
filePath := sas.FilePath()
sasGlobal.Store(sas)
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, filePath)).Set(1)
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_success_timestamp_seconds{path=%q}`, filePath)).Set(fasttime.UnixTimestamp())
}
dedupInterval := *streamAggrGlobalDedupInterval
if dedupInterval > 0 {
deduplicatorGlobal = streamaggr.NewDeduplicator(pushTimeSeriesToRemoteStoragesTrackDropped, *streamAggrGlobalEnableWindows, dedupInterval, *streamAggrGlobalDropInputLabels, "dedup-global")
}
}
func (rwctx *remoteWriteCtx) initStreamAggrConfig() {
idx := rwctx.idx
sas, err := rwctx.newStreamAggrConfig()
if err != nil {
logger.Fatalf("cannot initialize stream aggregators: %s", err)
}
if sas != nil {
filePath := sas.FilePath()
rwctx.sas.Store(sas)
rwctx.streamAggrKeepInput = streamAggrKeepInput.GetOptionalArg(idx)
rwctx.streamAggrDropInput = streamAggrDropInput.GetOptionalArg(idx)
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, filePath)).Set(1)
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_success_timestamp_seconds{path=%q}`, filePath)).Set(fasttime.UnixTimestamp())
}
dedupInterval := streamAggrDedupInterval.GetOptionalArg(idx)
if dedupInterval > 0 {
alias := fmt.Sprintf("dedup-%d", idx+1)
var dropLabels []string
if streamAggrDropInputLabels.GetOptionalArg(idx) != "" {
dropLabels = strings.Split(streamAggrDropInputLabels.GetOptionalArg(idx), "^^")
}
rwctx.deduplicator = streamaggr.NewDeduplicator(rwctx.pushInternalTrackDropped, *streamAggrGlobalEnableWindows, dedupInterval, dropLabels, alias)
}
}
func (rwctx *remoteWriteCtx) reloadStreamAggrConfig() {
path := streamAggrConfig.GetOptionalArg(rwctx.idx)
if path == "" {
return
}
logger.Infof("reloading stream aggregation configs pointed by -remoteWrite.streamAggr.config=%q", path)
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reloads_total{path=%q}`, path)).Inc()
sasNew, err := rwctx.newStreamAggrConfig()
if err != nil {
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reloads_errors_total{path=%q}`, path)).Inc()
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, path)).Set(0)
logger.Errorf("cannot reload -remoteWrite.streamAggr.config=%q; continue using the previously loaded config; error: %s", path, err)
return
}
sas := rwctx.sas.Load()
if !sasNew.Equal(sas) {
sasOld := rwctx.sas.Swap(sasNew)
sasOld.MustStop()
logger.Infof("successfully reloaded -remoteWrite.streamAggr.config=%q", path)
} else {
sasNew.MustStop()
logger.Infof("-remoteWrite.streamAggr.config=%q wasn't changed since the last reload", path)
}
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_successful{path=%q}`, path)).Set(1)
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_streamaggr_config_reload_success_timestamp_seconds{path=%q}`, path)).Set(fasttime.UnixTimestamp())
}
func newStreamAggrConfigGlobal() (*streamaggr.Aggregators, error) {
path := *streamAggrGlobalConfig
if path == "" {
return nil, nil
}
opts := &streamaggr.Options{
DedupInterval: *streamAggrGlobalDedupInterval,
DropInputLabels: *streamAggrGlobalDropInputLabels,
IgnoreOldSamples: *streamAggrGlobalIgnoreOldSamples,
IgnoreFirstIntervals: *streamAggrGlobalIgnoreFirstIntervals,
KeepInput: *streamAggrGlobalKeepInput,
EnableWindows: *streamAggrGlobalEnableWindows,
}
sas, err := streamaggr.LoadFromFile(path, pushTimeSeriesToRemoteStoragesTrackDropped, opts, "global")
if err != nil {
return nil, fmt.Errorf("cannot load -streamAggr.config=%q: %w", *streamAggrGlobalConfig, err)
}
return sas, nil
}
func (rwctx *remoteWriteCtx) newStreamAggrConfig() (*streamaggr.Aggregators, error) {
return newStreamAggrConfigPerURL(rwctx.idx, rwctx.pushInternalTrackDropped)
}
func newStreamAggrConfigPerURL(idx int, pushFunc streamaggr.PushFunc) (*streamaggr.Aggregators, error) {
path := streamAggrConfig.GetOptionalArg(idx)
if path == "" {
return nil, nil
}
alias := fmt.Sprintf("%d:secret-url", idx+1)
if *showRemoteWriteURL {
alias = fmt.Sprintf("%d:%s", idx+1, remoteWriteURLs.GetOptionalArg(idx))
}
var dropLabels []string
if streamAggrDropInputLabels.GetOptionalArg(idx) != "" {
dropLabels = strings.Split(streamAggrDropInputLabels.GetOptionalArg(idx), "^^")
}
opts := &streamaggr.Options{
DedupInterval: streamAggrDedupInterval.GetOptionalArg(idx),
DropInputLabels: dropLabels,
IgnoreOldSamples: streamAggrIgnoreOldSamples.GetOptionalArg(idx),
IgnoreFirstIntervals: streamAggrIgnoreFirstIntervals.GetOptionalArg(idx),
KeepInput: streamAggrKeepInput.GetOptionalArg(idx),
EnableWindows: streamAggrEnableWindows.GetOptionalArg(idx),
}
sas, err := streamaggr.LoadFromFile(path, pushFunc, opts, alias)
if err != nil {
return nil, fmt.Errorf("cannot load -remoteWrite.streamAggr.config=%q: %w", path, err)
}
return sas, nil
}
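Since both CheckStreamAggrConfigs and the reload paths boil down to loading a config and stopping the resulting aggregators, the same pattern can validate a config in isolation. A sketch under that assumption (the helper is hypothetical; the LoadFromData call mirrors the one used in the tests above):

```
package remotewrite

import (
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
)

// validateStreamAggrConfig loads the given stream aggregation config with a
// no-op push callback and immediately stops it, returning any parse error.
func validateStreamAggrConfig(data []byte) error {
    pushNoop := func(_ []prompb.TimeSeries) {}
    sas, err := streamaggr.LoadFromData(data, pushNoop, &streamaggr.Options{}, "check")
    if err != nil {
        return err
    }
    sas.MustStop()
    return nil
}
```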

File diff suppressed because one or more lines are too long


@@ -5,38 +5,28 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="vmimport"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="vmimport"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="vmimport"}`)
)
// InsertHandler processes `/api/v1/import` request.
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(rows []vmimport.Row) error {
return insertRows(at, rows, extraLabels)
})
}
func insertRows(at *auth.Token, rows []vmimport.Row, extraLabels []prompb.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
@@ -46,43 +36,35 @@ func insertRows(at *auth.Token, rows []vmimport.Row, extraLabels []prompb.Label)
samples := ctx.Samples[:0]
for i := range rows {
r := &rows[i]
rowsTotal += len(r.Values)
labelsLen := len(labels)
for j := range r.Tags {
tag := &r.Tags[j]
labels = append(labels, prompb.Label{
Name: bytesutil.ToUnsafeString(tag.Key),
Value: bytesutil.ToUnsafeString(tag.Value),
})
}
labels = append(labels, extraLabels...)
values := r.Values
timestamps := r.Timestamps
if len(timestamps) != len(values) {
logger.Panicf("BUG: len(timestamps)=%d must match len(values)=%d", len(timestamps), len(values))
}
_ = timestamps[len(values)-1]
samplesLen := len(samples)
for j, value := range values {
samples = append(samples, prompb.Sample{
Value: value,
Timestamp: timestamps[j],
})
}
tssDst = append(tssDst, prompb.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[samplesLen:],
})
}
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
remotewrite.Push(&ctx.WriteRequest)
rowsInserted.Add(rowsTotal)
if at != nil {
rowsTenantInserted.Get(at).Add(rowsTotal)
}
rowsPerInsert.Update(float64(rowsTotal))
return nil
}


@@ -1,3 +0,0 @@
See vmalert-tool docs [here](https://docs.victoriametrics.com/victoriametrics/vmalert-tool/).
vmalert-tool docs can be edited at [docs/vmalert-tool.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/victoriametrics/vmalert-tool.md).


@@ -1,8 +0,0 @@
ARG base_image=non-existing
FROM $base_image
EXPOSE 8880
ENTRYPOINT ["/vmalert-tool-prod"]
ARG src_binary=non-existing
COPY $src_binary ./vmalert-tool-prod


@@ -1,13 +0,0 @@
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
ARG certs_image=non-existing
ARG root_image=non-existing
FROM $certs_image AS certs
RUN apk update && apk upgrade && apk --update --no-cache add ca-certificates
FROM $root_image
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
EXPOSE 8429
ENTRYPOINT ["/vmalert-tool-prod"]
ARG TARGETARCH
ARG BINARY_SUFFIX=non-existing
COPY vmalert-tool-linux-${TARGETARCH}-prod${BINARY_SUFFIX} ./vmalert-tool-prod


@@ -12,35 +12,20 @@ vmalert-prod:
vmalert-pure-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-pure
vmalert-linux-amd64-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-linux-amd64
vmalert-amd64-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-amd64
vmalert-linux-arm-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-linux-arm
vmalert-arm-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-arm
vmalert-linux-arm64-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-linux-arm64
vmalert-arm64-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-arm64
vmalert-linux-ppc64le-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-linux-ppc64le
vmalert-ppc64le-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-ppc64le
vmalert-linux-386-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-linux-386
vmalert-darwin-amd64-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-darwin-amd64
vmalert-darwin-arm64-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-darwin-arm64
vmalert-freebsd-amd64-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-freebsd-amd64
vmalert-openbsd-amd64-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-openbsd-amd64
vmalert-windows-amd64-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-windows-amd64
vmalert-386-prod:
APP_NAME=vmalert $(MAKE) app-via-docker-386
package-vmalert:
APP_NAME=vmalert $(MAKE) package-via-docker
@@ -68,76 +53,31 @@ publish-vmalert:
test-vmalert:
go test -v -race -cover ./app/vmalert -loggerLevel=ERROR
go test -v -race -cover ./app/vmalert/rule
go test -v -race -cover ./app/vmalert/templates
go test -v -race -cover ./app/vmalert/datasource
go test -v -race -cover ./app/vmalert/notifier
go test -v -race -cover ./app/vmalert/config
go test -v -race -cover ./app/vmalert/remotewrite
go test -v -race -cover ./app/vmalert/vmalertutil
run-vmalert: vmalert
./bin/vmalert -rule=app/vmalert/config/testdata/rules/rules2-good.rules \
-datasource.url=http://localhost:8428 \
-notifier.url=http://localhost:9093 \
-remoteWrite.url=http://localhost:8428 \
-remoteRead.url=http://localhost:8428 \
-external.label=cluster=east-1 \
-external.label=replica=a \
-evaluationInterval=3s \
-configCheckInterval=10s
run-vmalert-sd: vmalert
./bin/vmalert -rule=app/vmalert/config/testdata/rules2-good.rules \
-datasource.url=http://localhost:8428 \
-remoteWrite.url=http://localhost:8428 \
-notifier.config=app/vmalert/notifier/testdata/mixed.good.yaml \
-configCheckInterval=10s
vmalert-amd64:
CGO_ENABLED=1 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmalert-amd64 ./app/vmalert
replay-vmalert: vmalert
./bin/vmalert -rule=app/vmalert/config/testdata/rules/rules-replay-good.rules \
-datasource.url=http://localhost:8428 \
-remoteWrite.url=http://localhost:8428 \
-external.label=cluster=east-1 \
-external.label=replica=a \
-replay.timeFrom=2024-06-01T00:00:00Z
vmalert-arm:
CGO_ENABLED=0 GOOS=linux GOARCH=arm GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmalert-arm ./app/vmalert
vmalert-linux-amd64:
APP_NAME=vmalert CGO_ENABLED=1 GOOS=linux GOARCH=amd64 $(MAKE) app-local-goos-goarch
vmalert-arm64:
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmalert-arm64 ./app/vmalert
vmalert-linux-arm:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=linux GOARCH=arm $(MAKE) app-local-goos-goarch
vmalert-ppc64le:
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmalert-ppc64le ./app/vmalert
vmalert-linux-arm64:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=linux GOARCH=arm64 $(MAKE) app-local-goos-goarch
vmalert-linux-ppc64le:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le $(MAKE) app-local-goos-goarch
vmalert-linux-s390x:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=linux GOARCH=s390x $(MAKE) app-local-goos-goarch
vmalert-linux-loong64:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=linux GOARCH=loong64 $(MAKE) app-local-goos-goarch
vmalert-linux-386:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=linux GOARCH=386 $(MAKE) app-local-goos-goarch
vmalert-darwin-amd64:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 $(MAKE) app-local-goos-goarch
vmalert-darwin-arm64:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 $(MAKE) app-local-goos-goarch
vmalert-freebsd-amd64:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 $(MAKE) app-local-goos-goarch
vmalert-openbsd-amd64:
APP_NAME=vmalert CGO_ENABLED=0 GOOS=openbsd GOARCH=amd64 $(MAKE) app-local-goos-goarch
vmalert-windows-amd64:
GOARCH=amd64 APP_NAME=vmalert $(MAKE) app-local-windows-goarch
vmalert-386:
CGO_ENABLED=0 GOOS=linux GOARCH=386 GO111MODULE=on go build -mod=vendor -ldflags "$(GO_BUILDINFO)" -o bin/vmalert-386 ./app/vmalert
vmalert-pure:
APP_NAME=vmalert $(MAKE) app-local-pure


@@ -1,3 +1,141 @@
See vmalert docs [here](https://docs.victoriametrics.com/victoriametrics/vmalert/).
vmalert docs can be edited at [docs/vmalert.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/victoriametrics/vmalert.md).
## VM Alert
`vmalert` executes a list of given MetricsQL expressions (rules) and
sends alerts to [Alert Manager](https://github.com/prometheus/alertmanager).
### Features:
* Integration with [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics) TSDB;
* VictoriaMetrics [MetricsQL](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/MetricsQL)
expressions validation;
* Prometheus [alerting rules definition format](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#defining-alerting-rules)
support;
* Integration with [Alertmanager](https://github.com/prometheus/alertmanager);
* Lightweight without extra dependencies.
### TODO:
* Support recording rules.
### QuickStart
To build `vmalert` from sources:
```
git clone https://github.com/VictoriaMetrics/VictoriaMetrics
cd VictoriaMetrics
make vmalert
```
The built binary will be placed in the `VictoriaMetrics/bin` folder.
To start using `vmalert` you will need the following things:
* list of alert rules - PromQL/MetricsQL expressions to execute;
* datasource address - reachable VictoriaMetrics instance for rules execution;
* notifier address - reachable Alertmanager instance for processing,
aggregating alerts and sending notifications.
Then configure `vmalert` accordingly:
```
./bin/vmalert -rule=alert.rules \
-datasource.url=http://localhost:8428 \
-notifier.url=http://localhost:9093
```
An example `.rules` file may be found [here](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmalert/testdata/rules0-good.rules).
`vmalert` runs evaluation for every group in a separate goroutine.
Rules within a group are evaluated sequentially, one by one.
`vmalert` also runs a web-server (`-httpListenAddr`) for serving metrics and alerts endpoints:
* `http://<vmalert-addr>/api/v1/alerts` - list of all active alerts;
* `http://<vmalert-addr>/api/v1/<groupName>/<alertID>/status` - get alert status by ID.
Used as alert source in AlertManager.
* `http://<vmalert-addr>/metrics` - application metrics.
* `http://<vmalert-addr>/-/reload` - hot configuration reload.
`vmalert` may be configured with the `-remoteWrite` flag to write alerts state in the form of timeseries
via the remote write protocol. Alerts state will be written as the `ALERTS` timeseries. These timeseries
may be used to recover alerts state on `vmalert` restarts if `-remoteRead` is configured.
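For illustration, a firing alert persisted this way would look roughly like the following series (the label set shown here is simplified and hypothetical):
```
ALERTS{alertname="ServiceDown", alertstate="firing", instance="host1:9100"} 1
```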
### Configuration
The shortlist of configuration flags is the following:
```
Usage of vmalert:
-datasource.basicAuth.password string
Optional basic auth password for -datasource.url
-datasource.basicAuth.username string
Optional basic auth username for -datasource.url
-datasource.url string
Victoria Metrics or VMSelect URL. Required parameter. E.g. http://127.0.0.1:8428
-enableTCP6
Whether to enable IPv6 for listening and dialing. By default only IPv4 TCP is used
-evaluationInterval duration
How often to evaluate the rules. Default 1m (default 1m0s)
-external.url string
External URL is used as alert's source for sent alerts to the notifier
-http.maxGracefulShutdownDuration duration
The maximum duration for graceful shutdown of HTTP server. Highly loaded server may require increased value for graceful shutdown (default 7s)
-httpAuth.password string
Password for HTTP Basic Auth. The authentication is disabled if -httpAuth.username is empty
-httpAuth.username string
Username for HTTP Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
-httpListenAddr string
Address to listen for http connections (default ":8880")
-notifier.url string
Prometheus alertmanager URL. Required parameter. e.g. http://127.0.0.1:9093
-remoteRead.basicAuth.password string
Optional basic auth password for -remoteRead.url
-remoteRead.basicAuth.username string
Optional basic auth username for -remoteRead.url
-remoteRead.lookback duration
Lookback defines how far to look into past for alerts timeseries. For example, if lookback=1h then range from now() to now()-1h will be scanned. (default 1h0m0s)
-remoteRead.url string
Optional URL to Victoria Metrics or VMSelect that will be used to restore alerts state. This configuration makes sense only if vmalert was configured with `remoteWrite.url` before and has successfully persisted its state. E.g. http://127.0.0.1:8428
-remoteWrite.basicAuth.password string
Optional basic auth password for -remoteWrite.url
-remoteWrite.basicAuth.username string
Optional basic auth username for -remoteWrite.url
-remoteWrite.maxQueueSize int
Defines the max number of pending datapoints to remote write endpoint
-remoteWrite.url string
Optional URL to Victoria Metrics or VMInsert where to persist alerts state in form of timeseries. E.g. http://127.0.0.1:8428
-rule value
Path to the file with alert rules.
Supports patterns. Flag can be specified multiple times.
Examples:
-rule /path/to/file. Path to a single file with alerting rules
-rule dir/*.yaml -rule /*.yaml. Relative path to all .yaml files in "dir" folder,
absolute path to all .yaml files in root.
-rule.validateTemplates
Indicates to validate annotation and label templates (default true)
```
Pass `-help` to `vmalert` in order to see the full list of supported
command-line flags with their descriptions.
To reload configuration without restarting `vmalert`, send a SIGHUP signal to the process
or send a GET request to the `/-/reload` endpoint.
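For example (assuming a local `vmalert` process and the default `-httpListenAddr`):
```
kill -HUP "$(pidof vmalert)"
# or via HTTP:
curl http://localhost:8880/-/reload
```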
### Contributing
`vmalert` is mostly designed and built by the VictoriaMetrics community.
Feel free to share your experience and ideas for improving this
software. Please keep simplicity as the main priority.
### How to build from sources
It is recommended to use
[binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)
- `vmalert` is located in the `vmutils-*` archives there.
#### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.13.
2. Run `make vmalert` from the root folder of the repository.
It builds `vmalert` binary and puts it into the `bin` folder.
#### Production build
1. [Install docker](https://docs.docker.com/install/).
2. Run `make vmalert-prod` from the root folder of the repository.
It builds `vmalert-prod` binary and puts it into the `bin` folder.

app/vmalert/config.go Normal file

@@ -0,0 +1,74 @@
package main
import (
	"fmt"
	"io/ioutil"
	"path/filepath"
	"strings"

	"gopkg.in/yaml.v2"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
)
// Parse parses rule configs from given file patterns
func Parse(pathPatterns []string, validateAnnotations bool) ([]Group, error) {
var fp []string
for _, pattern := range pathPatterns {
matches, err := filepath.Glob(pattern)
if err != nil {
return nil, fmt.Errorf("error reading file patther %s:%v", pattern, err)
}
fp = append(fp, matches...)
}
var groups []Group
for _, file := range fp {
groupsNames := map[string]struct{}{}
gr, err := parseFile(file)
if err != nil {
return nil, fmt.Errorf("file %s: %w", file, err)
}
for _, g := range gr {
if _, ok := groupsNames[g.Name]; ok {
return nil, fmt.Errorf("one file can not contain groups with the same name %s, filepath:%s", g.Name, file)
}
g.File = file
g.doneCh = make(chan struct{})
g.finishedCh = make(chan struct{})
g.updateCh = make(chan Group)
groupsNames[g.Name] = struct{}{}
for _, rule := range g.Rules {
if err = rule.Validate(); err != nil {
return nil, fmt.Errorf("invalid rule filepath: %s, group %s: %w", file, g.Name, err)
}
if validateAnnotations {
if err = notifier.ValidateTemplates(rule.Annotations); err != nil {
return nil, fmt.Errorf("invalid annotations filepath: %s, group %s: %w", file, g.Name, err)
}
if err = notifier.ValidateTemplates(rule.Labels); err != nil {
return nil, fmt.Errorf("invalid labels filepath: %s, group %s: %w", file, g.Name, err)
}
}
rule.group = g
rule.alerts = make(map[uint64]*notifier.Alert)
}
groups = append(groups, g)
}
}
if len(groups) < 1 {
return nil, fmt.Errorf("no groups found in %s", strings.Join(pathPatterns, ";"))
}
return groups, nil
}
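// An illustrative usage sketch (not part of the original file; the path
// pattern below is hypothetical):
//
//	groups, err := Parse([]string{"testdata/*.rules"}, true)
//	if err != nil {
//		// handle invalid or missing rule files
//	}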
func parseFile(path string) ([]Group, error) {
data, err := ioutil.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("error reading alert rule file: %w", err)
}
g := struct {
Groups []Group `yaml:"groups"`
}{}
err = yaml.Unmarshal(data, &g)
return g.Groups, err
}


@@ -1,349 +0,0 @@
package config
import (
	"bytes"
	"flag"
	"fmt"
	"hash/fnv"
	"io"
	"net/url"
	"sort"
	"strings"

	"gopkg.in/yaml.v2"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config/log"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/vmalertutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/envtemplate"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
)
var defaultRuleType = flag.String("rule.defaultRuleType", "prometheus", `Default type for rule expressions, can be overridden via "type" parameter on the group level, see https://docs.victoriametrics.com/victoriametrics/vmalert/#groups. Supported values: "graphite", "prometheus" and "vlogs".`)
// Group contains a list of Rules grouped into
// an entity with one name and evaluation interval
type Group struct {
Type Type `yaml:"type,omitempty"`
File string
Name string `yaml:"name"`
Interval *promutil.Duration `yaml:"interval,omitempty"`
EvalOffset *promutil.Duration `yaml:"eval_offset,omitempty"`
// EvalDelay will adjust the `time` parameter of rule evaluation requests to compensate for an intentional query delay from the datasource.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5155
EvalDelay *promutil.Duration `yaml:"eval_delay,omitempty"`
Limit int `yaml:"limit,omitempty"`
Rules []Rule `yaml:"rules"`
Concurrency int `yaml:"concurrency"`
// Labels is a set of label-value pairs that will be added to every rule.
// It has priority over the external labels.
Labels map[string]string `yaml:"labels"`
// Checksum stores the hash of yaml definition for this group.
// May be used to detect any changes like rules re-ordering etc.
Checksum string
// Optional HTTP URL parameters added to each rule request
Params url.Values `yaml:"params"`
// Headers contains optional HTTP headers added to each rule request
Headers []Header `yaml:"headers,omitempty"`
// NotifierHeaders contains optional HTTP headers sent to notifiers for generated notifications
NotifierHeaders []Header `yaml:"notifier_headers,omitempty"`
// EvalAlignment will make the timestamp of group query requests be aligned with interval
EvalAlignment *bool `yaml:"eval_alignment,omitempty"`
// Debug enables debug logs for the group
Debug bool `yaml:"debug,omitempty"`
// Catches all undefined fields and must be empty after parsing.
XXX map[string]any `yaml:",inline"`
}
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (g *Group) UnmarshalYAML(unmarshal func(any) error) error {
type group Group
if err := unmarshal((*group)(g)); err != nil {
return err
}
b, err := yaml.Marshal(g)
if err != nil {
return fmt.Errorf("failed to marshal group configuration for checksum: %w", err)
}
if g.Type.Get() == "" {
g.Type = NewRawType(*defaultRuleType)
}
h := fnv.New64a()
h.Write(b)
g.Checksum = fmt.Sprintf("%x", h.Sum(nil))
return nil
}
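// The checksum computed above lets callers detect any change in a group's
// YAML definition, including rule re-ordering, by comparing old and new
// values (an illustrative sketch):
//
//	if oldGroup.Checksum != newGroup.Checksum {
//		// the group definition has changed and should be reloaded
//	}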
// Validate checks configuration errors for group and internal rules
func (g *Group) Validate(validateTplFn ValidateTplFn, validateExpressions bool) error {
if g.Name == "" {
return fmt.Errorf("group name must be set")
}
if g.Interval.Duration() < 0 {
return fmt.Errorf("interval shouldn't be lower than 0")
}
if g.EvalOffset.Duration() < 0 {
return fmt.Errorf("eval_offset shouldn't be lower than 0")
}
// if `eval_offset` is set, the group interval won't fall back to the global evaluationInterval flag and must be bigger than the offset.
if g.EvalOffset.Duration() > g.Interval.Duration() {
return fmt.Errorf("eval_offset should be smaller than interval; now eval_offset: %v, interval: %v", g.EvalOffset.Duration(), g.Interval.Duration())
}
if g.EvalOffset != nil && g.EvalDelay != nil {
return fmt.Errorf("eval_offset cannot be used with eval_delay")
}
if g.Limit < 0 {
return fmt.Errorf("invalid limit %d, shouldn't be less than 0", g.Limit)
}
if g.Concurrency < 0 {
return fmt.Errorf("invalid concurrency %d, shouldn't be less than 0", g.Concurrency)
}
uniqueRules := map[uint64]struct{}{}
for _, r := range g.Rules {
ruleName := r.Record
if r.Alert != "" {
ruleName = r.Alert
}
if _, ok := uniqueRules[r.ID]; ok {
return fmt.Errorf("%q is a duplicate in group", r.String())
}
uniqueRules[r.ID] = struct{}{}
if err := r.Validate(); err != nil {
return fmt.Errorf("invalid rule %q: %w", ruleName, err)
}
if validateExpressions {
// it's needed only for tests,
// because correct types must be inherited after unmarshalling.
exprValidator := g.Type.ValidateExpr
if err := exprValidator(r.Expr); err != nil {
return fmt.Errorf("invalid expression for rule %q: %w", ruleName, err)
}
}
if validateTplFn != nil {
if err := validateTplFn(r.Annotations); err != nil {
return fmt.Errorf("invalid annotations for rule %q: %w", ruleName, err)
}
if err := validateTplFn(r.Labels); err != nil {
return fmt.Errorf("invalid labels for rule %q: %w", ruleName, err)
}
}
}
return checkOverflow(g.XXX, fmt.Sprintf("group %q", g.Name))
}
// Rule describes an entity that represents either
// a recording rule or an alerting rule.
type Rule struct {
ID uint64
Record string `yaml:"record,omitempty"`
Alert string `yaml:"alert,omitempty"`
Expr string `yaml:"expr"`
For *promutil.Duration `yaml:"for,omitempty"`
// Alert will continue firing for this long even when the alerting expression no longer has results.
KeepFiringFor *promutil.Duration `yaml:"keep_firing_for,omitempty"`
Labels map[string]string `yaml:"labels,omitempty"`
Annotations map[string]string `yaml:"annotations,omitempty"`
Debug *bool `yaml:"debug,omitempty"`
// UpdateEntriesLimit defines max number of rule's state updates stored in memory.
// Overrides `-rule.updateEntriesLimit`.
UpdateEntriesLimit *int `yaml:"update_entries_limit,omitempty"`
// Catches all undefined fields and must be empty after parsing.
XXX map[string]any `yaml:",inline"`
}
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (r *Rule) UnmarshalYAML(unmarshal func(any) error) error {
type rule Rule
if err := unmarshal((*rule)(r)); err != nil {
return err
}
r.ID = HashRule(*r)
return nil
}
// Name returns Rule name according to its type
func (r *Rule) Name() string {
if r.Record != "" {
return r.Record
}
return r.Alert
}
// String implements Stringer interface
func (r *Rule) String() string {
ruleType := "recording"
if r.Alert != "" {
ruleType = "alerting"
}
b := strings.Builder{}
b.WriteString(fmt.Sprintf("%s rule %q", ruleType, r.Name()))
b.WriteString(fmt.Sprintf("; expr: %q", r.Expr))
kv := sortMap(r.Labels)
for i := range kv {
if i == 0 {
b.WriteString("; labels:")
}
b.WriteString(" ")
b.WriteString(kv[i].key)
b.WriteString("=")
b.WriteString(kv[i].value)
if i < len(kv)-1 {
b.WriteString(",")
}
}
return b.String()
}
// HashRule hashes significant Rule fields into
// a unique hash that is supposed to define Rule uniqueness
func HashRule(r Rule) uint64 {
h := fnv.New64a()
h.Write([]byte(r.Expr))
if r.Record != "" {
h.Write([]byte("recording"))
h.Write([]byte(r.Record))
} else {
h.Write([]byte("alerting"))
h.Write([]byte(r.Alert))
}
kv := sortMap(r.Labels)
for _, i := range kv {
h.Write([]byte(i.key))
h.Write([]byte(i.value))
h.Write([]byte("\xff"))
}
return h.Sum64()
}
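// For example, two rules that differ only in labels hash differently, while
// the order of labels does not matter because sortMap sorts them first
// (an illustrative sketch):
//
//	a := Rule{Alert: "HighLoad", Expr: "up == 0", Labels: map[string]string{"env": "prod"}}
//	b := Rule{Alert: "HighLoad", Expr: "up == 0", Labels: map[string]string{"env": "dev"}}
//	// HashRule(a) != HashRule(b)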
// Validate checks for Rule configuration errors
func (r *Rule) Validate() error {
if (r.Record == "" && r.Alert == "") || (r.Record != "" && r.Alert != "") {
return fmt.Errorf("either `record` or `alert` must be set")
}
if r.Expr == "" {
return fmt.Errorf("expression can't be empty")
}
return checkOverflow(r.XXX, "rule")
}
// ValidateTplFn must validate the given annotations
type ValidateTplFn func(annotations map[string]string) error
// cLogger is a logger with support for log suppression.
// It is used when logs emitted by the config package need
// to be suppressed.
var cLogger = &log.Logger{}
// ParseSilent parses rule configs from given file patterns without emitting logs
func ParseSilent(pathPatterns []string, validateTplFn ValidateTplFn, validateExpressions bool) ([]Group, error) {
cLogger.Suppress(true)
defer cLogger.Suppress(false)
files, err := ReadFromFS(pathPatterns)
if err != nil {
return nil, fmt.Errorf("failed to read from the config: %w", err)
}
return parse(files, validateTplFn, validateExpressions)
}
// Parse parses rule configs from given file patterns
func Parse(pathPatterns []string, validateTplFn ValidateTplFn, validateExpressions bool) ([]Group, error) {
files, err := ReadFromFS(pathPatterns)
if err != nil {
return nil, fmt.Errorf("failed to read from the config: %w", err)
}
groups, err := parse(files, validateTplFn, validateExpressions)
if err != nil {
return nil, fmt.Errorf("failed to parse %s: %w", pathPatterns, err)
}
if len(groups) < 1 {
cLogger.Warnf("no groups found in %s", strings.Join(pathPatterns, ";"))
}
return groups, nil
}
func parse(files map[string][]byte, validateTplFn ValidateTplFn, validateExpressions bool) ([]Group, error) {
errGroup := new(vmalertutil.ErrGroup)
var groups []Group
for file, data := range files {
uniqueGroups := map[string]struct{}{}
gr, err := parseConfig(data)
if err != nil {
errGroup.Add(fmt.Errorf("failed to parse file %q: %w", file, err))
continue
}
for _, g := range gr {
if err := g.Validate(validateTplFn, validateExpressions); err != nil {
errGroup.Add(fmt.Errorf("invalid group %q in file %q: %w", g.Name, file, err))
continue
}
if _, ok := uniqueGroups[g.Name]; ok {
errGroup.Add(fmt.Errorf("group name %q duplicate in file %q", g.Name, file))
continue
}
uniqueGroups[g.Name] = struct{}{}
g.File = file
groups = append(groups, g)
}
}
if err := errGroup.Err(); err != nil {
return nil, err
}
return groups, nil
}
func parseConfig(data []byte) ([]Group, error) {
data = envtemplate.ReplaceBytes(data)
var result []Group
type cfgFile struct {
Groups []Group `yaml:"groups"`
// Catches all undefined fields and must be empty after parsing.
XXX map[string]any `yaml:",inline"`
}
decoder := yaml.NewDecoder(bytes.NewReader(data))
for {
var cf cfgFile
if err := decoder.Decode(&cf); err != nil {
if err == io.EOF { // EOF indicates no more documents to read
break
}
return nil, err
}
if err := checkOverflow(cf.XXX, "config"); err != nil {
return nil, err
}
result = append(result, cf.Groups...)
}
return result, nil
}
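// Note that parseConfig accepts multi-document YAML, so a single file may
// contain several `---`-separated documents, each contributing its own
// `groups` section (an illustrative sketch):
//
//	groups:
//	  - name: groupA
//	    rules: []
//	---
//	groups:
//	  - name: groupB
//	    rules: []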
func checkOverflow(m map[string]any, ctx string) error {
if len(m) > 0 {
var keys []string
for k := range m {
keys = append(keys, k)
}
return fmt.Errorf("unknown fields in %s: %s", ctx, strings.Join(keys, ", "))
}
return nil
}
type item struct {
key, value string
}
func sortMap(m map[string]string) []item {
var kv []item
for k, v := range m {
kv = append(kv, item{key: k, value: v})
}
sort.Slice(kv, func(i, j int) bool {
return kv[i].key < kv[j].key
})
return kv
}

Some files were not shown because too many files have changed in this diff.