Compare commits

..

25 Commits

Author SHA1 Message Date
func25
f9e5881303 clean 2025-08-27 13:58:07 +07:00
func25
ab6fd0afed clean 2025-08-27 11:33:49 +07:00
func25
8f8ead2c50 clean 2025-08-27 10:47:30 +07:00
func25
2f422bad85 clean 2025-08-27 10:46:54 +07:00
func25
c0a41b41ca missing after cherry-pick 2025-08-27 10:27:05 +07:00
func25
68e493cef3 remove maxDebugSamples flag and limit checking 2025-08-27 10:12:17 +07:00
func25
06572772d4 update 2025-08-27 10:11:55 +07:00
func25
d12f6c280f update 2025-08-27 10:08:30 +07:00
Alexander Frolov
e62e0685dc vmctl: inconsistent vm-native logs (#9607)
### Describe Your Changes

Some messages were written to `stdout` using `fmt.Printf` and
`fmt.Println`, while the other messages like import statistics were
written to `stderr` through the `log` package.

This led to ordering problems: the `Import finished!` and
`VictoriaMetrics importer stats` messages, which are expected to be the
last messages, appeared before the `Continue import process with filter`
messages, creating confusing output for users.

```
2025/08/20 13:07:26 Import finished!
2025/08/20 13:07:26 VictoriaMetrics importer stats:
  time spent while importing: 20h49m10.8497184s;
  total bytes: 277.1 GB;
  bytes/s: 3.7 MB;
  requests: 7978614;
  requests retries: 0;
2025/08/20 13:07:26 Total time: 20h49m10.851006088s
Continue import process with filter
        filter: match[]={__name__!=""}
        start: 2025-08-08T00:00:00Z
        end: 2025-08-15T00:00:00Z:
Continue import process with filter
        filter: match[]={__name__!=""}
        start: 2025-08-15T00:00:00Z
        end: 2025-08-19T16:18:15Z:
```
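
A minimal standalone sketch of the idea behind the fix, using only the standard library: routing ad-hoc progress output through `log.Writer()` keeps it on the same stream (stderr by default) as `log.Printf` output, so message ordering is preserved.

```go
package main

import (
	"fmt"
	"log"
)

func main() {
	// Before: fmt.Printf writes to stdout while log.Printf writes to stderr,
	// so the two streams may interleave in any order.
	// After: write ad-hoc messages to the logger's destination instead.
	fmt.Fprintf(log.Writer(), "Continue import process with filter %s:\n", `match[]={__name__!=""}`)
	log.Printf("Import finished!")
}
```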


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-26 18:53:59 +03:00
Max Kotliar
df92e617db Revert "app/{vminsert,vmagent}: added flags for periodical relabel and stream aggregation configs check (#9598)"
This reverts commit 07291c1d62 and partially
7c0c8cc702.

The reasons are explained in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9598#issuecomment-3223766551
2025-08-26 14:42:35 +03:00
Max Kotliar
7c0c8cc702 docs: sync documented flags with binaries 2025-08-26 10:53:43 +03:00
Andrii Chubatiuk
07291c1d62 app/{vminsert,vmagent}: added flags for periodical relabel and stream aggregation configs check (#9598)
related issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9590

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

Co-authored-by: Max Kotliar <mkotlyar@victoriametrics.com>
2025-08-26 09:46:44 +03:00
Alexander Frolov
7c0015b836 app/vmagent/remotewrite: restore protocol downgrade logic (#9621)
### Describe Your Changes

It seems db39f045e1 accidentally reverted
#9419 changes.
```patch
--- a/app/vmagent/remotewrite/client.go
+++ b/app/vmagent/remotewrite/client.go
@@ -448,7 +448,8 @@ again:
 	}
 
 	metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_requests_total{url=%q, status_code="%d"}`, c.sanitizedURL, statusCode)).Inc()
-	if statusCode == 409 {
+	switch statusCode {
+	case 409:
 		logBlockRejected(block, c.sanitizedURL, resp)
 
 		// Just drop block on 409 status code like Prometheus does.
@@ -461,7 +462,13 @@ again:
 		// - Remote Write v2 specification explicitly specifies a `415 Unsupported Media Type` for unsupported encodings.
 		// - Real-world implementations of v1 use both 400 and 415 status codes.
 		// See more in research: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786918054
-	} else if statusCode == 415 || statusCode == 400 {
+	case 415, 400:
+		if c.canDowngradeVMProto.Swap(false) {
+			logger.Infof("received unsupported media type or bad request from remote storage at %q. Downgrading protocol from VictoriaMetrics to Prometheus remote write for all future requests. "+
+				"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
+			c.useVMProto.Store(false)
+		}
+
 		if encoding.IsZstd(block) {
 			logger.Infof("received unsupported media type or bad request from remote storage at %q. Re-packing the block to Prometheus remote write and retrying."+
 				"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
```
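
For readers skimming the patch, here is a condensed, runnable sketch of the restored behavior (the `client` type below is a simplified stand-in for vmagent's remote-write client, not the real one): `Swap(false)` returns the previous value, so the downgrade and its log line happen at most once even with many concurrent workers.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// client mirrors only the fields relevant to the patch above.
type client struct {
	canDowngradeVMProto atomic.Bool
	useVMProto          atomic.Bool
}

func (c *client) onStatusCode(statusCode int) {
	switch statusCode {
	case 409:
		fmt.Println("dropping block, like Prometheus does")
	case 415, 400:
		// Swap(false) reports whether a downgrade was still allowed,
		// and simultaneously forbids any further downgrade attempts.
		if c.canDowngradeVMProto.Swap(false) {
			fmt.Println("downgrading to Prometheus remote write for all future requests")
			c.useVMProto.Store(false)
		}
	}
}

func main() {
	var c client
	c.canDowngradeVMProto.Store(true)
	c.useVMProto.Store(true)
	c.onStatusCode(415)
	c.onStatusCode(415) // the second 415 no longer triggers the downgrade branch
}
```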

cc @makasim

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-26 09:17:53 +03:00
Hui Wang
06e52a99fd lib/prompb: replace fields hardcoded hex values with their corresponding bitwise operations (#9617)

fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9608
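
For context, a protobuf field key byte is `(fieldNumber << 3) | wireType`, so spelling the tag out as a bitwise expression documents intent that a bare hex literal hides. A small self-contained illustration (not the exact code from the commit):

```go
package main

import "fmt"

const (
	wireTypeVarint = 0 // varint-encoded scalar
	wireTypeBytes  = 2 // length-delimited payload
)

// fieldTag derives a protobuf field key from its number and wire type.
func fieldTag(fieldNum, wireType byte) byte {
	return fieldNum<<3 | wireType
}

func main() {
	fmt.Printf("0x%02x\n", fieldTag(1, wireTypeBytes))  // 0x0a: field 1, length-delimited
	fmt.Printf("0x%02x\n", fieldTag(2, wireTypeVarint)) // 0x10: field 2, varint
}
```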
2025-08-26 09:03:36 +03:00
f41gh7
f5840951a4 app/vmagent: pubsub properly handle ingestion error
Previously, if the pushBlockPubSub function returned an error, vmagent
stopped the remote write worker thread assigned to it. The expected
behavior in this scenario is to retry the error inside pushBlockPubSub;
it must return only on vmagent shutdown.

This commit properly handles the error and prevents ingestion from
stopping.
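
A schematic sketch of the intended behavior (the stop channel and the `pushBlock` callback are hypothetical stand-ins for the real pubsub plumbing): transient errors are retried in place, and the worker returns only once shutdown is signalled.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pushWithRetry retries pushBlock until it succeeds or stopCh is closed.
func pushWithRetry(stopCh <-chan struct{}, pushBlock func() error) error {
	for {
		if err := pushBlock(); err == nil {
			return nil
		}
		select {
		case <-stopCh:
			return errors.New("vmagent is shutting down")
		case <-time.After(time.Second): // back off, then retry
		}
	}
}

func main() {
	stopCh := make(chan struct{})
	attempts := 0
	err := pushWithRetry(stopCh, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("transient pubsub error")
		}
		return nil
	})
	fmt.Println(err, attempts) // <nil> 3
}
```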
2025-08-24 21:37:30 +02:00
Aliaksandr Valialkin
9ca5a8d0f4 lib/netutil: return tls.Conn from TCPListener.Accept for TLS connections
This is needed because servers that may use the TCPListener, such as net/http.Server,
expect to get a tls.Conn for TLS connections in order to properly fill fields such as net/http.Request.TLS.
If the listener returns some other net.Conn, these fields aren't filled properly,
which may prevent proper mTLS-based authorization and request routing
such as https://docs.victoriametrics.com/victoriametrics/vmauth/#mtls-based-request-routing

Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/29
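
A minimal sketch of the idea (simplified, not the actual lib/netutil code): a listener that terminates TLS itself should hand the `*tls.Conn` back from `Accept`, so net/http can populate `Request.TLS` after the handshake.

```go
package main

import (
	"crypto/tls"
	"net"
)

// tlsListener wraps a plain TCP listener and terminates TLS on accepted connections.
type tlsListener struct {
	net.Listener
	cfg *tls.Config
}

func (ln *tlsListener) Accept() (net.Conn, error) {
	conn, err := ln.Listener.Accept()
	if err != nil {
		return nil, err
	}
	// Returning tls.Server(conn, cfg) (a *tls.Conn) rather than a custom
	// wrapper is what allows net/http to detect TLS and fill Request.TLS.
	return tls.Server(conn, ln.cfg), nil
}
```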
2025-08-22 20:25:40 +02:00
Aliaksandr Valialkin
894b22590d docs/victoriametrics/enterprise.md: mention VictoriaLogs enterprise
Updates https://github.com/VictoriaMetrics/VictoriaLogs/issues/120
2025-08-22 18:31:51 +02:00
hagen1778
f85fd161e4 docs: reword -vmalert.proxyURL usage in vmalert
Make it clear that `-vmalert.proxyURL` must be set on
single-node VictoriaMetrics or vmselect.
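
For illustration (addresses are placeholders; vmalert listens on :8880 by default), the flag belongs on the querying component and points at vmalert:

```
# single-node setup
./victoria-metrics -vmalert.proxyURL=http://vmalert:8880
# cluster setup
./vmselect -vmalert.proxyURL=http://vmalert:8880
```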

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-22 09:49:13 +02:00
Max Kotliar
7d552dbd9a metricsql: improve timestamp function compatibility with Prometheus when used with sub-expressions (#9603)
### Describe Your Changes

Fixes
[#9527](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9527)
Related PR: https://github.com/VictoriaMetrics/metricsql/pull/55

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-21 17:38:12 +03:00
Max Kotliar
795c3deaee lib/appmetrics: revert accidental change 2025-08-21 17:30:12 +03:00
Max Kotliar
cb44353a36 docs/changelog: add update note 2025-08-21 17:29:32 +03:00
Andrii Chubatiuk
7e05200c60 deployment/rules: set proper job filters for rules (#9587)
### Describe Your Changes

related issue https://github.com/VictoriaMetrics/helm-charts/issues/2350

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [ ] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).
2025-08-21 15:26:36 +02:00
hagen1778
a2f033ce6c docs: refresh vmui description
* add missing features
* re-organize text without breaking links to improve clarity

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-08-21 15:25:49 +02:00
Artur Minchukou
78b217d70c app/vmui: add export functionality for Query and RawQuery tabs with CSV/JSON support (#9463)
### Describe Your Changes

Related issue: #9332

- add export functionality for Query and RawQuery tabs with CSV/JSON support;
- replace unused icons and update `DebugIcon` usage in `DownloadReport`

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/#pull-request-checklist).
- [x] My change adheres to [VictoriaMetrics development
goals](https://docs.victoriametrics.com/victoriametrics/goals/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-08-21 14:37:27 +02:00
Aliaksandr Valialkin
c9b23de9ce lib/httpserver: add missing whitespace after the dot in the description for the -tlsAutocertEmail command-line flag
This is a follow-up for 1d80e8f860
2025-08-21 11:02:43 +02:00
51 changed files with 1551 additions and 486 deletions

View File

@@ -463,12 +463,6 @@ again:
 		// - Real-world implementations of v1 use both 400 and 415 status codes.
 		// See more in research: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786918054
 	case 415, 400:
-		if c.canDowngradeVMProto.Swap(false) {
-			logger.Infof("received unsupported media type or bad request from remote storage at %q. Downgrading protocol from VictoriaMetrics to Prometheus remote write for all future requests. "+
-				"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
-			c.useVMProto.Store(false)
-		}
-
 		if encoding.IsZstd(block) {
 			logger.Infof("received unsupported media type or bad request from remote storage at %q. Re-packing the block to Prometheus remote write and retrying."+
 				"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)

View File

@@ -121,7 +121,7 @@ func (p *vmNativeProcessor) runSingle(ctx context.Context, f native.Filter, srcU
 		pr := bar.NewProxyReader(reader)
 		if pr != nil {
 			reader = pr
-			fmt.Printf("Continue import process with filter %s:\n", f.String())
+			fmt.Fprintf(log.Writer(), "Continue import process with filter %s:\n", f.String())
 		}
 	}
@@ -191,7 +191,7 @@ func (p *vmNativeProcessor) runBackfilling(ctx context.Context, tenantID string,
 		initParams = []any{srcURL, dstURL, p.filter.String(), tenantID}
 	}
-	fmt.Println("") // extra line for better output formatting
+	fmt.Fprintln(log.Writer(), "") // extra line for better output formatting
 	log.Printf(initMessage, initParams...)
 	if len(ranges) > 1 {
 		log.Printf("Selected time range will be split into %d ranges according to %q step", len(ranges), p.filter.Chunk)

View File

@@ -262,6 +262,13 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 			return true
 		}
 		return true
+	case "/api/v1/config":
+		httpserver.EnableCORS(w, r)
+		if err := prometheus.ConfigHandler(qt, startTime, w, r); err != nil {
+			httpserver.SendPrometheusError(w, r, err)
+			return true
+		}
+		return true
 	case "/api/v1/export":
 		exportRequests.Inc()
 		if err := prometheus.ExportHandler(startTime, w, r); err != nil {
@@ -538,6 +545,13 @@ func handleStaticAndSimpleRequests(w http.ResponseWriter, r *http.Request, path
 		expandWithExprsRequests.Inc()
 		prometheus.ExpandWithExprs(w, r)
 		return true
+	case "/extract-metric-exprs":
+		startTime := time.Now()
+		if err := prometheus.ExtractMetricExprsHandler(startTime, w, r); err != nil {
+			httpserver.Errorf(w, r, "%s", err)
+			return true
+		}
+		return true
 	case "/prettify-query":
 		prettifyQueryRequests.Inc()
 		prometheus.PrettifyQuery(w, r)

View File

@@ -63,10 +63,18 @@ type Results struct {
 	packedTimeseries []packedTimeseries
 	sr               *storage.Search
 	tbf              *tmpBlocksFile
+
+	// the result is simulated
+	isSimulated     bool
+	simulatedSeries []*storage.SimulatedSamples
 }
 
 // Len returns the number of results in rss.
 func (rss *Results) Len() int {
+	if rss.isSimulated {
+		return len(rss.simulatedSeries)
+	}
 	return len(rss.packedTimeseries)
 }
@@ -218,6 +226,10 @@ var defaultMaxWorkersPerQuery = func() int {
 //
 // rss becomes unusable after the call to RunParallel.
 func (rss *Results) RunParallel(qt *querytracer.Tracer, f func(rs *Result, workerID uint) error) error {
+	if rss.isSimulated {
+		return rss.runParallelSimulated(qt, f)
+	}
+
 	qt = qt.NewChild("parallel process of fetched data")
 	defer rss.mustClose()
@@ -233,6 +245,87 @@
 	return err
 }
 
+func (rss *Results) runParallelSimulated(qt *querytracer.Tracer, f func(rs *Result, workerID uint) error) error {
+	qt = qt.NewChild("parallel process of fetched data")
+	cb := f
+
+	tmpResult := getTmpResult()
+	defer putTmpResult(tmpResult)
+
+	// For simplicity, let's process serially first. Parallelization can be added if needed.
+	// If parallelization is desired, it would mirror the worker pool logic of the original runParallel,
+	// but iterating over rss.simulatedSamples entries.
+	workerID := uint(0)
+	var firstErr error
+	for _, metric := range rss.simulatedSeries {
+		r := &tmpResult.rs
+		r.reset()
+		r.MetricName.CopyFrom(&metric.Name)
+		for i, ts := range metric.Timestamps {
+			if ts >= rss.tr.MinTimestamp && ts <= rss.tr.MaxTimestamp {
+				r.Values = append(r.Values, metric.Value[i])
+				r.Timestamps = append(r.Timestamps, ts)
+			}
+		}
+
+		// Sort timestamps chronologically to match real storage behavior.
+		// Real storage ensures chronological order through:
+		// 1. Block-level sorting by MinTimestamp
+		// 2. Within-block timestamp ordering via encoding.EnsureNonDecreasingSequence()
+		if len(r.Timestamps) > 1 {
+			// Create pairs for sorting
+			type timestampValue struct {
+				timestamp int64
+				value     float64
+			}
+			pairs := make([]timestampValue, len(r.Timestamps))
+			for i := range r.Timestamps {
+				pairs[i] = timestampValue{
+					timestamp: r.Timestamps[i],
+					value:     r.Values[i],
+				}
+			}
+			// Sort by timestamp
+			sort.Slice(pairs, func(i, j int) bool {
+				return pairs[i].timestamp < pairs[j].timestamp
+			})
+			// Extract back to separate slices
+			for i := range pairs {
+				r.Timestamps[i] = pairs[i].timestamp
+				r.Values[i] = pairs[i].value
+			}
+		}
+
+		// The input from the client is most likely already deduplicated, since it's emitted by
+		// vmselect. However, the client may modify the input instead of using the returned one.
+		dedupInterval := storage.GetDedupInterval()
+		if dedupInterval > 0 && len(r.Timestamps) > 0 {
+			r.Timestamps, r.Values = storage.DeduplicateSamples(r.Timestamps, r.Values, dedupInterval)
+		}
+
+		rowProcessed := len(r.Timestamps)
+		if rowProcessed > 0 {
+			err := cb(r, workerID)
+			if err != nil {
+				firstErr = err
+				break
+			}
+		}
+	}
+
+	// Count total samples across all series
+	totalSamples := 0
+	for _, metric := range rss.simulatedSeries {
+		totalSamples += len(metric.Timestamps)
+	}
+	qt.Donef("series=%d, samples=%d", len(rss.simulatedSeries), totalSamples)
+
+	return firstErr
+}
+
 func (rss *Results) runParallel(qt *querytracer.Tracer, f func(rs *Result, workerID uint) error) (int, error) {
 	tswsLen := len(rss.packedTimeseries)
 	if tswsLen == 0 {
@@ -1119,6 +1212,10 @@ func SearchMetricNames(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline
 //
 // Results.RunParallel or Results.Cancel must be called on the returned Results.
 func ProcessSearchQuery(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline searchutil.Deadline) (*Results, error) {
+	if len(sq.SimulatedSeries) > 0 {
+		return processSearchSimulated(qt, sq, deadline)
+	}
+
 	qt = qt.NewChild("fetch matching series: %s", sq)
 	defer qt.Done()
 	if deadline.Exceeded() {
@@ -1291,6 +1388,41 @@ func ProcessSearchQuery(qt *querytracer.Tracer, sq *storage.SearchQuery, deadlin
 	return &rss, nil
 }
 
+func processSearchSimulated(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline searchutil.Deadline) (*Results, error) {
+	qt = qt.NewChild("fetch matching series (simulated): %s", sq)
+	defer qt.Done()
+	if deadline.Exceeded() {
+		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
+	}
+	tr := storage.TimeRange{
+		MinTimestamp: sq.MinTimestamp,
+		MaxTimestamp: sq.MaxTimestamp,
+	}
+	// Process simulated samples.
+	matchedSamples, err := storage.MatchSimulatedSamples(sq.SimulatedSeries, sq.TagFilterss)
+	if err != nil {
+		return nil, fmt.Errorf("cannot match simulated samples: %w", err)
+	}
+	// Create a result set similar to ProcessSearchQuery
+	rss := &Results{
+		tr:              tr,
+		deadline:        deadline,
+		isSimulated:     true,
+		simulatedSeries: matchedSamples,
+	}
+	if len(matchedSamples) == 0 {
+		qt.Printf("no matching series found")
+	} else {
+		qt.Printf("found %d series", len(rss.simulatedSeries))
+	}
+	return rss, nil
+}
+
 type blockRef struct {
 	partRef storage.PartRef
 	addr    tmpBlockAddr

View File

@@ -0,0 +1,20 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
) %}
{% stripspace %}
ConfigResponse generates response for /api/v1/config .
{% func ConfigResponse(config *ConfigData, qt *querytracer.Tracer) %}
{
"status":"success",
"data":{
"minStalenessInterval": {%q= config.MinStalenessInterval %},
"maxStalenessInterval": {%q= config.MaxStalenessInterval %}
}
{% code qt.Done() %}
{%= dumpQueryTrace(qt) %}
}
{% endfunc %}
{% endstripspace %}
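
A hypothetical exchange against a single-node instance (host and port are placeholders; the zero flag defaults render as `"0s"` via `Duration.String()`), matching the template above:

```
curl 'http://localhost:8428/api/v1/config'
{"status":"success","data":{"minStalenessInterval":"0s","maxStalenessInterval":"0s"}}
```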

View File

@@ -0,0 +1,73 @@
// Code generated by qtc from "config_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vmselect/prometheus/config_response.qtpl:1
package prometheus
//line app/vmselect/prometheus/config_response.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
)
// ConfigResponse generates response for /api/v1/config .
//line app/vmselect/prometheus/config_response.qtpl:8
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/config_response.qtpl:8
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/config_response.qtpl:8
func StreamConfigResponse(qw422016 *qt422016.Writer, config *ConfigData, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/config_response.qtpl:8
qw422016.N().S(`{"status":"success","data":{"minStalenessInterval":`)
//line app/vmselect/prometheus/config_response.qtpl:12
qw422016.N().Q(config.MinStalenessInterval)
//line app/vmselect/prometheus/config_response.qtpl:12
qw422016.N().S(`,"maxStalenessInterval":`)
//line app/vmselect/prometheus/config_response.qtpl:13
qw422016.N().Q(config.MaxStalenessInterval)
//line app/vmselect/prometheus/config_response.qtpl:13
qw422016.N().S(`}`)
//line app/vmselect/prometheus/config_response.qtpl:15
qt.Done()
//line app/vmselect/prometheus/config_response.qtpl:16
streamdumpQueryTrace(qw422016, qt)
//line app/vmselect/prometheus/config_response.qtpl:16
qw422016.N().S(`}`)
//line app/vmselect/prometheus/config_response.qtpl:18
}
//line app/vmselect/prometheus/config_response.qtpl:18
func WriteConfigResponse(qq422016 qtio422016.Writer, config *ConfigData, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/config_response.qtpl:18
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/config_response.qtpl:18
StreamConfigResponse(qw422016, config, qt)
//line app/vmselect/prometheus/config_response.qtpl:18
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/config_response.qtpl:18
}
//line app/vmselect/prometheus/config_response.qtpl:18
func ConfigResponse(config *ConfigData, qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/config_response.qtpl:18
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/config_response.qtpl:18
WriteConfigResponse(qb422016, config, qt)
//line app/vmselect/prometheus/config_response.qtpl:18
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/config_response.qtpl:18
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/config_response.qtpl:18
return qs422016
//line app/vmselect/prometheus/config_response.qtpl:18
}

View File

@@ -0,0 +1,18 @@
{% stripspace %}
ExtractMetricExprsResponse generates response for /extract-metric-exprs .
{% func ExtractMetricExprsResponse(metrics []string) %}
{
"status":"success",
"data":[
{% if len(metrics) > 0 %}
{%q= metrics[0] %}
{% for i := 1; i < len(metrics); i++ %}
,{%q= metrics[i] %}
{% endfor %}
{% endif %}
]
}
{% endfunc %}
{% endstripspace %}
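
A hypothetical exchange (host, port and the exact route prefix are placeholders): the handler added later in this diff reads the `query` form value and responds with every metric expression found in it.

```
curl 'http://localhost:8428/extract-metric-exprs' --data-urlencode 'query=rate(http_requests_total[5m]) / up'
{"status":"success","data":["http_requests_total","up"]}
```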

View File

@@ -0,0 +1,69 @@
// Code generated by qtc from "extract_metric_exprs_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// ExtractMetricExprsResponse generates response for /extract-metric-exprs .
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:4
package prometheus
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:4
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:4
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:4
func StreamExtractMetricExprsResponse(qw422016 *qt422016.Writer, metrics []string) {
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:4
qw422016.N().S(`{"status":"success","data":[`)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:8
if len(metrics) > 0 {
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:9
qw422016.N().Q(metrics[0])
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:10
for i := 1; i < len(metrics); i++ {
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:10
qw422016.N().S(`,`)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:11
qw422016.N().Q(metrics[i])
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:12
}
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:13
}
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:13
qw422016.N().S(`]}`)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
}
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
func WriteExtractMetricExprsResponse(qq422016 qtio422016.Writer, metrics []string) {
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
StreamExtractMetricExprsResponse(qw422016, metrics)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
}
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
func ExtractMetricExprsResponse(metrics []string) string {
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
WriteExtractMetricExprsResponse(qb422016, metrics)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
return qs422016
//line app/vmselect/prometheus/extract_metric_exprs_response.qtpl:16
}

View File

@@ -1,8 +1,10 @@
 package prometheus
 
 import (
+	"encoding/json"
 	"flag"
 	"fmt"
+	"io"
 	"math"
 	"net/http"
 	"runtime"
@@ -20,6 +22,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql"
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/querystats"
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutil"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bufferedwriter"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
@@ -42,6 +45,9 @@ var (
 	maxLookback = flag.Duration("search.maxLookback", 0, "Synonym to -query.lookback-delta from Prometheus. "+
 		"The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. "+
 		"See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons")
+	minStalenessInterval = flag.Duration("search.minStalenessInterval", 0, "The minimum interval for staleness calculations. "+
+		"This flag could be useful for removing gaps on graphs generated from time series with irregular intervals between samples. "+
+		"See also '-search.maxStalenessInterval'")
 	maxStalenessInterval = flag.Duration("search.maxStalenessInterval", 0, "The maximum interval for staleness calculations. "+
 		"By default, it is automatically calculated from the median interval between samples. This flag could be useful for tuning "+
 		"Prometheus data model closer to Influx-style data model. See https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness for details. "+
@@ -116,7 +122,7 @@ func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request
 	if err != nil {
 		return err
 	}
-	lookbackDelta, err := getMaxLookback(r)
+	lookbackDelta, err := getMaxLookback(r, *maxStalenessInterval)
 	if err != nil {
 		return err
 	}
@@ -611,6 +617,55 @@ func TSDBStatusHandler(qt *querytracer.Tracer, startTime time.Time, w http.Respo
 
 var tsdbStatusDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/status/tsdb"}`)
 
+// ConfigData holds the current configuration values for search-related flags
+type ConfigData struct {
+	MinStalenessInterval string
+	MaxStalenessInterval string
+}
+
+// ConfigHandler processes /api/v1/config request.
+//
+// It returns the current configuration for search-related flags.
+func ConfigHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, _ *http.Request) error {
+	config := &ConfigData{
+		MinStalenessInterval: (*minStalenessInterval).String(),
+		MaxStalenessInterval: (*maxStalenessInterval).String(),
+	}
+	w.Header().Set("Content-Type", "application/json")
+	bw := bufferedwriter.Get(w)
+	defer bufferedwriter.Put(bw)
+	WriteConfigResponse(bw, config, qt)
+	if err := bw.Flush(); err != nil {
+		return fmt.Errorf("cannot send config response to remote client: %w", err)
+	}
+	return nil
+}
+
+// ExtractMetricExprsHandler processes /extract-metric-exprs request.
+//
+// It extracts metric expressions from a given PromQL query.
+func ExtractMetricExprsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
+	query := r.FormValue("query")
+	if len(query) == 0 {
+		return fmt.Errorf("missing `query` arg")
+	}
+	metrics, err := promql.ExtractMetricsFromQuery(query)
+	if err != nil {
+		return fmt.Errorf("cannot extract metrics from query: %w", err)
+	}
+	w.Header().Set("Content-Type", "application/json")
+	bw := bufferedwriter.Get(w)
+	defer bufferedwriter.Put(bw)
+	WriteExtractMetricExprsResponse(bw, metrics)
+	if err := bw.Flush(); err != nil {
+		return fmt.Errorf("cannot send extract metric exprs response to remote client: %w", err)
+	}
+	return nil
+}
+
 // LabelsHandler processes /api/v1/labels request.
 //
 // See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names
@@ -712,7 +767,8 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
 	ct := startTime.UnixNano() / 1e6
 	deadline := searchutil.GetDeadlineForQuery(r, startTime)
-	mayCache := !httputil.GetBool(r, "nocache")
+	isDebug := httputil.GetBool(r, "debug")
+	noCache := httputil.GetBool(r, "nocache") || isDebug
 	query := r.FormValue("query")
 	if len(query) == 0 {
 		return fmt.Errorf("missing `query` arg")
@@ -721,7 +777,7 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
 	if err != nil {
 		return err
 	}
-	lookbackDelta, err := getMaxLookback(r)
+	lookbackDelta, err := getMaxLookback(r, *maxStalenessInterval)
 	if err != nil {
 		return err
 	}
@@ -807,23 +863,14 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
 	} else {
 		queryOffset = 0
 	}
-	ec := &promql.EvalConfig{
-		Start:               start,
-		End:                 start,
-		Step:                step,
-		MaxPointsPerSeries:  *maxPointsPerTimeseries,
-		MaxSeries:           GetMaxUniqueTimeSeries(),
-		QuotedRemoteAddr:    httpserver.GetQuotedRemoteAddr(r),
-		Deadline:            deadline,
-		MayCache:            mayCache,
-		LookbackDelta:       lookbackDelta,
-		RoundDigits:         getRoundDigits(r),
-		EnforcedTagFilterss: etfs,
-		CacheTagFilters:     etfs,
-		GetRequestURI: func() string {
-			return httpserver.GetRequestURI(r)
-		},
-	}
+	ec := newEvalConfig(r, start, start, step, deadline, noCache, lookbackDelta, isDebug, etfs)
+	if isDebug {
+		if err := populateSimulatedData(r, nil, ec); err != nil {
+			_ = r.Body.Close()
+			return fmt.Errorf("cannot read simulated samples: %w", err)
+		}
+	}
+	_ = r.Body.Close()
 	qs := promql.NewQueryStats(query, nil, ec)
 	ec.QueryStats = qs
@@ -897,8 +944,9 @@ func QueryRangeHandler(qt *querytracer.Tracer, startTime time.Time, w http.Respo
 func queryRangeHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, query string,
 	start, end, step int64, r *http.Request, ct int64, etfs [][]storage.TagFilter) error {
 	deadline := searchutil.GetDeadlineForQuery(r, startTime)
-	mayCache := !httputil.GetBool(r, "nocache")
-	lookbackDelta, err := getMaxLookback(r)
+	isDebug := httputil.GetBool(r, "debug")
+	noCache := httputil.GetBool(r, "nocache") || isDebug
+	lookbackDelta, err := getMaxLookback(r, *maxStalenessInterval)
 	if err != nil {
 		return err
 	}
@@ -913,27 +961,19 @@ func queryRangeHandler(qt *querytracer.Tracer, startTime time.Time, w http.Respo
 	if err := promql.ValidateMaxPointsPerSeries(start, end, step, *maxPointsPerTimeseries); err != nil {
 		return fmt.Errorf("%w; (see -search.maxPointsPerTimeseries command-line flag)", err)
 	}
-	if mayCache {
+	if !noCache {
 		start, end = promql.AdjustStartEnd(start, end, step)
 	}
-	ec := &promql.EvalConfig{
-		Start:               start,
-		End:                 end,
-		Step:                step,
-		MaxPointsPerSeries:  *maxPointsPerTimeseries,
-		MaxSeries:           GetMaxUniqueTimeSeries(),
-		QuotedRemoteAddr:    httpserver.GetQuotedRemoteAddr(r),
-		Deadline:            deadline,
-		MayCache:            mayCache,
-		LookbackDelta:       lookbackDelta,
-		RoundDigits:         getRoundDigits(r),
-		EnforcedTagFilterss: etfs,
-		CacheTagFilters:     etfs,
-		GetRequestURI: func() string {
-			return httpserver.GetRequestURI(r)
-		},
-	}
+	ec := newEvalConfig(r, start, end, step, deadline, noCache, lookbackDelta, isDebug, etfs)
+	if isDebug {
+		if err := populateSimulatedData(r, nil, ec); err != nil {
+			_ = r.Body.Close()
+			return fmt.Errorf("cannot read simulated samples: %w", err)
+		}
+	}
+	_ = r.Body.Close()
 	qs := promql.NewQueryStats(query, nil, ec)
 	ec.QueryStats = qs
@@ -969,6 +1009,93 @@ func queryRangeHandler(qt *querytracer.Tracer, startTime time.Time, w http.Respo
 	return nil
 }
 
+func newEvalConfig(r *http.Request, start, end, step int64, deadline searchutil.Deadline, noCache bool, lookbackDelta int64, isDebug bool, etfs [][]storage.TagFilter) *promql.EvalConfig {
+	ec := &promql.EvalConfig{
+		Start:                start,
+		End:                  end,
+		Step:                 step,
+		MaxPointsPerSeries:   *maxPointsPerTimeseries,
+		MaxSeries:            GetMaxUniqueTimeSeries(),
+		MinStalenessInterval: *minStalenessInterval,
+		QuotedRemoteAddr:     httpserver.GetQuotedRemoteAddr(r),
+		Deadline:             deadline,
+		MayCache:             !noCache,
+		LookbackDelta:        lookbackDelta,
+		RoundDigits:          getRoundDigits(r),
+		EnforcedTagFilterss:  etfs,
+		CacheTagFilters:      etfs,
+		GetRequestURI: func() string {
+			return httpserver.GetRequestURI(r)
+		},
+	}
+	return ec
+}
+
+func populateSimulatedData(r *http.Request, at *auth.Token, evalConfig *promql.EvalConfig) error {
+	type jsonExportBlockInput struct {
+		Metric     map[string]string `json:"metric"`
+		Values     []float64         `json:"values"`
+		Timestamps []int64           `json:"timestamps"`
+	}
+
+	// --- Read and Parse Input Samples from r.Body ---
+	var simulatedSeries []*storage.SimulatedSamples
+	decoder := json.NewDecoder(r.Body)
+	lineNum := 0
+	for {
+		var jeb jsonExportBlockInput
+		if err := decoder.Decode(&jeb); err == io.EOF {
+			break
+		} else if err != nil {
+			return fmt.Errorf("error decoding input JSON on line %d: %w", lineNum, err)
+		}
+		// Validate that values and timestamps arrays have the same length
+		if len(jeb.Values) != len(jeb.Timestamps) {
+			return fmt.Errorf("mismatched values and timestamps arrays length in debug data on line %d: values=%d, timestamps=%d", lineNum, len(jeb.Values), len(jeb.Timestamps))
+		}
+		var mn = storage.GetMetricName()
+		defer storage.PutMetricName(mn)
+		for k, v := range jeb.Metric {
+			mn.AddTag(k, v)
+		}
+		ss := &storage.SimulatedSamples{
+			Value:      jeb.Values,
+			Timestamps: jeb.Timestamps,
+		}
+		ss.Name.CopyFrom(mn)
+		simulatedSeries = append(simulatedSeries, ss)
+		lineNum++
+	}
+
+	// It doesn't make sense to debug with empty samples
+	if len(simulatedSeries) == 0 {
+		return fmt.Errorf("no simulated samples found")
+	}
+
+	minStalenessInterval, err := httputil.GetDurationRaw(r, "min_staleness_interval", evalConfig.MinStalenessInterval)
+	if err != nil {
+		return fmt.Errorf("cannot parse `min_staleness_interval` arg: %w", err)
+	}
+	maxStalenessInterval, err := httputil.GetDurationRaw(r, "max_staleness_interval", *maxStalenessInterval)
+	if err != nil {
+		return fmt.Errorf("cannot parse `max_staleness_interval` arg: %w", err)
+	}
+	evalConfig.SimulatedSamples = simulatedSeries
+	evalConfig.MinStalenessInterval = minStalenessInterval
+	evalConfig.LookbackDelta, err = getMaxLookback(r, maxStalenessInterval)
+	if err != nil {
+		return err
+	}
+	return nil
+}
+
 func removeEmptyValuesAndTimeseries(tss []netstorage.Result) []netstorage.Result {
 	dst := tss[:0]
 	for i := range tss {
@@ -1044,7 +1171,7 @@ func adjustLastPoints(tss []netstorage.Result, start, end int64) []netstorage.Re
 	return tss
 }
 
-func getMaxLookback(r *http.Request) (int64, error) {
+func getMaxLookback(r *http.Request, maxStalenessInterval time.Duration) (int64, error) {
 	d := maxLookback.Milliseconds()
 	if d == 0 {
 		d = maxStalenessInterval.Milliseconds()
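
Taken together, the handler changes enable a debug mode: `debug=1` implies `nocache` and makes the handler read simulated series from the request body as JSON lines in `/api/v1/export` format. A hypothetical invocation (host, port and timestamps are placeholders):

```
curl 'http://localhost:8428/api/v1/query?query=test_metric&debug=1' \
  --data-binary '{"metric":{"__name__":"test_metric"},"values":[1,2,3],"timestamps":[1723000000000,1723000015000,1723000030000]}'
```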

View File

@@ -134,6 +134,10 @@ type EvalConfig struct {
 	// LookbackDelta is analog to `-query.lookback-delta` from Prometheus.
 	LookbackDelta int64
 
+	// MinStalenessInterval corresponds to -search.minStalenessInterval,
+	// but customized per query request.
+	MinStalenessInterval time.Duration
+
 	// How many decimal digits after the point to leave in response.
 	RoundDigits int
@@ -158,6 +162,9 @@ type EvalConfig struct {
 	timestamps     []int64
 	timestampsOnce sync.Once
+
+	// Simulated samples
+	SimulatedSamples []*storage.SimulatedSamples
 }
 
 // copyEvalConfig returns src copy.
@@ -176,6 +183,8 @@ func copyEvalConfig(src *EvalConfig) *EvalConfig {
 	ec.CacheTagFilters = src.CacheTagFilters
 	ec.GetRequestURI = src.GetRequestURI
 	ec.QueryStats = src.QueryStats
+	ec.MinStalenessInterval = src.MinStalenessInterval
+	ec.SimulatedSamples = src.SimulatedSamples
 
 	// do not copy src.timestamps - they must be generated again.
 	return &ec
@@ -929,7 +938,7 @@ func evalRollupFuncWithSubquery(qt *querytracer.Tracer, ec *EvalConfig, funcName
 	}
 	ecSQ := copyEvalConfig(ec)
-	ecSQ.Start -= window + step + maxSilenceInterval()
+	ecSQ.Start -= window + step + maxSilenceInterval(ec.MinStalenessInterval)
 	ecSQ.End += step
 	ecSQ.Step = step
 	ecSQ.MaxPointsPerSeries = *maxPointsSubqueryPerTimeseries
@@ -946,7 +955,7 @@ func evalRollupFuncWithSubquery(qt *querytracer.Tracer, ec *EvalConfig, funcName
 		return nil, nil
 	}
 	sharedTimestamps := getTimestamps(ec.Start, ec.End, ec.Step, ec.MaxPointsPerSeries)
-	preFunc, rcs, err := getRollupConfigs(funcName, rf, expr, ec.Start, ec.End, ec.Step, ec.MaxPointsPerSeries, window, ec.LookbackDelta, sharedTimestamps)
+	preFunc, rcs, err := getRollupConfigs(funcName, rf, expr, ec.Start, ec.End, ec.Step, ec.MaxPointsPerSeries, window, ec.LookbackDelta, sharedTimestamps, ec.MinStalenessInterval)
 	if err != nil {
 		return nil, err
 	}
@@ -1684,7 +1693,7 @@ func evalRollupFuncNoCache(qt *querytracer.Tracer, ec *EvalConfig, funcName stri
 	}
 	// Obtain rollup configs before fetching data from db, so type errors could be caught earlier.
 	sharedTimestamps := getTimestamps(ec.Start, ec.End, ec.Step, ec.MaxPointsPerSeries)
-	preFunc, rcs, err := getRollupConfigs(funcName, rf, expr, ec.Start, ec.End, ec.Step, ec.MaxPointsPerSeries, window, ec.LookbackDelta, sharedTimestamps)
+	preFunc, rcs, err := getRollupConfigs(funcName, rf, expr, ec.Start, ec.End, ec.Step, ec.MaxPointsPerSeries, window, ec.LookbackDelta, sharedTimestamps, ec.MinStalenessInterval)
 	if err != nil {
 		return nil, err
 	}
@@ -1694,7 +1703,7 @@ func evalRollupFuncNoCache(qt *querytracer.Tracer, ec *EvalConfig, funcName stri
 	tfss = searchutil.JoinTagFilterss(tfss, ec.EnforcedTagFilterss)
 	minTimestamp := ec.Start
 	if needSilenceIntervalForRollupFunc[funcName] {
-		minTimestamp -= maxSilenceInterval()
+		minTimestamp -= maxSilenceInterval(ec.MinStalenessInterval)
 	}
 	if window > ec.Step {
 		minTimestamp -= window
@@ -1702,6 +1711,8 @@ func evalRollupFuncNoCache(qt *querytracer.Tracer, ec *EvalConfig, funcName stri
 		minTimestamp -= ec.Step
 	}
 	sq := storage.NewSearchQuery(minTimestamp, ec.End, tfss, ec.MaxSeries)
+	sq.SimulatedSeries = ec.SimulatedSamples
+
 	rss, err := netstorage.ProcessSearchQuery(qt, sq, ec.Deadline)
 	if err != nil {
 		return nil, err
@@ -1787,7 +1798,7 @@ func getRollupMemoryLimiter() *memoryLimiter {
 	return &rollupMemoryLimiter
 }
 
-func maxSilenceInterval() int64 {
+func maxSilenceInterval(minStalenessInterval time.Duration) int64 {
 	d := minStalenessInterval.Milliseconds()
 	if d <= 0 {
 		d = 5 * 60 * 1000

View File

@@ -61,12 +61,15 @@ func Exec(qt *querytracer.Tracer, ec *EvalConfig, q string, isFirstPointOnly boo
 		}
 	}
 
+	var rv []*timeseries
 	qid := activeQueriesV.Add(ec, q)
-	rv, err := evalExpr(qt, ec, e)
+	rv, err = evalExpr(qt, ec, e)
 	activeQueriesV.Remove(qid)
 	if err != nil {
 		return nil, err
 	}
+
 	if isFirstPointOnly {
 		// Remove all the points except the first one from every time series.
 		for _, ts := range rv {
@@ -325,3 +328,23 @@ func escapeDots(s string) string {
 	}
 	return string(result)
 }
+
+// ExtractMetricsFromQuery visits all the expressions in query and returns all the metrics found in the query.
+func ExtractMetricsFromQuery(query string) ([]string, error) {
+	expr, err := metricsql.Parse(query)
+	if err != nil {
+		return nil, fmt.Errorf("error parsing query: %w", err)
+	}
+	var metrics []string
+	metricsql.VisitAll(expr, func(e metricsql.Expr) {
+		if me, ok := e.(*metricsql.MetricExpr); ok {
+			metricStr := string(me.AppendString(nil))
+			if metricStr != "" {
+				metrics = append(metrics, metricStr)
+			}
+		}
+	})
+	return metrics, nil
+}
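
A standalone usage sketch of the new helper, built directly on the public `metricsql` API it wraps (the query text is an arbitrary example):

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/metricsql"
)

// extractMetrics mirrors ExtractMetricsFromQuery above: it parses the query
// and collects every MetricExpr node it finds.
func extractMetrics(query string) ([]string, error) {
	expr, err := metricsql.Parse(query)
	if err != nil {
		return nil, err
	}
	var metrics []string
	metricsql.VisitAll(expr, func(e metricsql.Expr) {
		if me, ok := e.(*metricsql.MetricExpr); ok {
			if s := string(me.AppendString(nil)); s != "" {
				metrics = append(metrics, s)
			}
		}
	})
	return metrics, nil
}

func main() {
	ms, err := extractMetrics(`rate(http_requests_total{job="api"}[5m]) / up`)
	fmt.Println(ms, err) // e.g. [http_requests_total{job="api"} up] <nil>
}
```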

View File

@@ -0,0 +1,313 @@
package promql
import (
"math"
"slices"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
func TestSimulatedExec(t *testing.T) {
accountID := uint32(123)
projectID := uint32(567)
start := int64(1000e3)
end := int64(2000e3)
step := int64(200e3)
// Base EvalConfig that will be copied for each test
baseEC := EvalConfig{
Start: start,
End: end,
Step: step,
MaxPointsPerSeries: 1e4,
MaxSeries: 1000,
Deadline: searchutil.NewDeadline(time.Now(), time.Hour, ""),
RoundDigits: 100,
MayCache: false,
}
t.Run(`simple_metric_exact_match`, func(t *testing.T) {
t.Skip()
ec := copyEvalConfig(&baseEC)
mn := newMetric(accountID, projectID,
"__name__", "test_metric",
"a", "b",
)
ec.SimulatedSamples = []*storage.SimulatedSamples{mn.build()}
q := `test_metric{a="b"}`
result, err := Exec(nil, ec, q, false)
if err != nil {
t.Fatalf(`unexpected error when executing %q: %s`, q, err)
}
// Expected result
expectedMN := storage.MetricName{
MetricGroup: []byte("test_metric"),
Tags: []storage.Tag{
{
Key: []byte("a"),
Value: []byte("b"),
},
},
}
expectedResult := []netstorage.Result{
{
MetricName: expectedMN,
Values: mn.Value,
Timestamps: mn.Timestamps,
},
}
testResultsEqual(t, result, expectedResult)
})
t.Run(`filtered_by_tag_value`, func(t *testing.T) {
t.Skip()
// Create a copy of base EvalConfig
ec := copyEvalConfig(&baseEC)
mn := metricBuilders{
newMetric(accountID, projectID,
"__name__", "test_metric",
"a", "b",
"region", "us-west",
),
newMetric(accountID, projectID,
"__name__", "test_metric",
"a", "b",
"region", "us-east",
),
}
ec.SimulatedSamples = mn.build()
q := `test_metric{region="us-west"}`
result, err := Exec(nil, ec, q, false)
if err != nil {
t.Fatalf(`unexpected error when executing %q: %s`, q, err)
}
// Expected result
expectedMN := storage.MetricName{
MetricGroup: []byte("test_metric"),
Tags: []storage.Tag{
{
Key: []byte("a"),
Value: []byte("b"),
},
{
Key: []byte("region"),
Value: []byte("us-west"),
},
},
}
expectedResult := []netstorage.Result{
{
MetricName: expectedMN,
Values: mn[0].Value,
Timestamps: mn[0].Timestamps,
},
}
testResultsEqual(t, result, expectedResult)
})
t.Run(`regex_match_on_tag`, func(t *testing.T) {
ec := copyEvalConfig(&baseEC)
mn := metricBuilders{
newMetric(accountID, projectID,
"__name__", "test_metric",
"env", "prod",
),
newMetric(accountID, projectID,
"__name__", "test_metric",
"env", "staging",
),
newMetric(accountID, projectID,
"__name__", "test_metric",
"env", "dev",
),
}
ec.SimulatedSamples = mn.build()
q := `test_metric{env=~"prod|staging"}`
result, err := Exec(nil, ec, q, false)
if err != nil {
t.Fatalf(`unexpected error when executing %q: %s`, q, err)
}
expectedResult := []netstorage.Result{mn[0].toResult(), mn[1].toResult()}
testResultsEqual(t, result, expectedResult)
})
}
func TestSumOverTime(t *testing.T) {
accountID := uint32(123)
projectID := uint32(567)
start := int64(1000e3)
end := int64(1300e3)
step := int64(30e3)
baseEC := EvalConfig{
Start: start,
End: end,
Step: step,
MaxPointsPerSeries: 1e4,
MaxSeries: 1000,
Deadline: searchutil.NewDeadline(time.Now(), time.Hour, ""),
RoundDigits: 100,
MayCache: false,
}
t.Run(`basic_sum_over_time`, func(t *testing.T) {
ec := copyEvalConfig(&baseEC)
metric := newMetric(accountID, projectID,
"__name__", "test_metric",
"app", "api-server",
).withValues(1, 2, 3, 4, 5, 6).withUnix(1000, 1015, 1030, 1045, 1060, 1075)
ec.SimulatedSamples = []*storage.SimulatedSamples{metric.build()}
q := `sum_over_time(test_metric[30s])`
result, err := Exec(nil, ec, q, false)
if err != nil {
t.Fatalf(`unexpected error when executing %q: %s`, q, err)
}
expectedResult := []netstorage.Result{
newMetric(accountID, projectID,
"app", "api-server",
).withValues(1, 5, 9, 6).withUnix(1000, 1030, 1060, 1090).toResult(),
}
testSimulatedResultsEqual(t, result, expectedResult)
})
}
type metricBuilder storage.SimulatedSamples
func newMetric(accountID uint32, projectID uint32, pairs ...string) *metricBuilder {
mn := storage.MetricName{}
for i := 0; i < len(pairs); i += 2 {
mn.AddTag(pairs[i], pairs[i+1])
}
return &metricBuilder{
Name: mn,
Value: []float64{10, 20, 30, 40, 50, 60},
Timestamps: []int64{1000e3, 1200e3, 1400e3, 1600e3, 1800e3, 2000e3},
}
}
func (b *metricBuilder) withUnix(unix ...int64) *metricBuilder {
b.Timestamps = make([]int64, len(unix))
for i := range unix {
b.Timestamps[i] = unix[i] * 1e3
}
return b
}
func (b *metricBuilder) withValues(values ...float64) *metricBuilder {
b.Value = values
return b
}
func (b *metricBuilder) build() *storage.SimulatedSamples {
return (*storage.SimulatedSamples)(b)
}
func (b *metricBuilder) toResult() netstorage.Result {
return netstorage.Result{
MetricName: b.Name,
Values: b.Value,
Timestamps: b.Timestamps,
}
}
type metricBuilders []*metricBuilder
func (b metricBuilders) build() []*storage.SimulatedSamples {
ss := make([]*storage.SimulatedSamples, len(b))
for i := range b {
ss[i] = b[i].build()
}
return ss
}
func testSimulatedResultsEqual(t *testing.T, result, resultExpected []netstorage.Result) {
t.Helper()
result = removeEmptyValuesAndTimeseries(result)
if len(result) != len(resultExpected) {
t.Fatalf(`unexpected timeseries count; got %d; want %d`, len(result), len(resultExpected))
}
for i := range result {
r := &result[i]
rExpected := &resultExpected[i]
testMetricNamesEqual(t, &r.MetricName, &rExpected.MetricName, i)
testRowsEqual(t, r.Values, r.Timestamps, rExpected.Values, rExpected.Timestamps)
}
}
func removeEmptyValuesAndTimeseries(tss []netstorage.Result) []netstorage.Result {
dst := tss[:0]
for i := range tss {
ts := &tss[i]
hasNaNs := slices.ContainsFunc(ts.Values, math.IsNaN)
if !hasNaNs {
// Fast path: nothing to remove.
if len(ts.Values) > 0 {
dst = append(dst, *ts)
}
continue
}
// Slow path: remove NaNs.
srcTimestamps := ts.Timestamps
dstValues := ts.Values[:0]
// Do not reuse ts.Timestamps for dstTimestamps, since ts.Timestamps
// may be shared among multiple time series.
dstTimestamps := make([]int64, 0, len(ts.Timestamps))
for j, v := range ts.Values {
if math.IsNaN(v) {
continue
}
dstValues = append(dstValues, v)
dstTimestamps = append(dstTimestamps, srcTimestamps[j])
}
ts.Values = dstValues
ts.Timestamps = dstTimestamps
if len(ts.Values) > 0 {
dst = append(dst, *ts)
}
}
return dst
}
func TestExtractMetricsFromQuery(t *testing.T) {
query := `(vm_free_disk_space_bytes{job=~"$job", instance=~"$instance"}-vm_free_disk_space_limit_bytes{job=~"$job", instance=~"$instance"})
/
ignoring(path) (
(rate(vm_rows_added_to_storage_total{job=~"$job", instance=~"$instance"}[1d]) -
sum(rate(vm_deduplicated_samples_total{job=~"$job", instance=~"$instance"}[1d])) without (type)) *
(
sum(vm_data_size_bytes{job=~"$job", instance=~"$instance", type!~"indexdb.*"}) without(type) /
sum(vm_rows{job=~"$job", instance=~"$instance", type!~"indexdb.*"}) without(type)
)
+
rate(vm_new_timeseries_created_total{job=~"$job", instance=~"$instance"}[1d]) *
(
sum(vm_data_size_bytes{job=~"$job", instance=~"$instance", type="indexdb/file"}) /
sum(vm_rows{job=~"$job", instance=~"$instance", type="indexdb/file"})
)
)`
metrics, err := ExtractMetricsFromQuery(query)
if err != nil {
t.Fatalf(`unexpected error when extracting metrics from query: %s`, err)
}
t.Logf(`metrics: %v`, metrics)
}

View File

@@ -1,12 +1,12 @@
 package promql
 
 import (
-	"flag"
 	"fmt"
 	"math"
 	"strconv"
 	"strings"
 	"sync"
+	"time"
 
 	"github.com/VictoriaMetrics/metrics"
 	"github.com/VictoriaMetrics/metricsql"
@@ -17,10 +17,6 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
 )
 
-var minStalenessInterval = flag.Duration("search.minStalenessInterval", 0, "The minimum interval for staleness calculations. "+
-	"This flag could be useful for removing gaps on graphs generated from time series with irregular intervals between samples. "+
-	"See also '-search.maxStalenessInterval'")
-
 var rollupFuncs = map[string]newRollupFunc{
 	"absent_over_time": newRollupFuncOneArg(rollupAbsent),
 	"aggr_over_time":   newRollupFuncTwoArgs(rollupFake),
@@ -372,7 +368,7 @@ func getRollupTag(expr metricsql.Expr) (string, error) {
 }
 
 func getRollupConfigs(funcName string, rf rollupFunc, expr metricsql.Expr, start, end, step int64, maxPointsPerSeries int,
-	window, lookbackDelta int64, sharedTimestamps []int64) (
+	window, lookbackDelta int64, sharedTimestamps []int64, minStalenessInterval time.Duration) (
 	func(values []float64, timestamps []int64), []*rollupConfig, error) {
 	preFunc := func(_ []float64, _ []int64) {}
 	funcName = strings.ToLower(funcName)
@@ -408,6 +404,7 @@ func getRollupConfigs(funcName string, rf rollupFunc, expr metricsql.Expr, start
 			Timestamps:            sharedTimestamps,
 			isDefaultRollup:       funcName == "default_rollup",
 			samplesScannedPerCall: samplesScannedPerCall,
+			minStalenessInterval:  minStalenessInterval,
 		}
 	}
@@ -600,6 +597,9 @@ type rollupConfig struct {
 	//
 	// If zero, then it is considered that Func scans all the samples passed to it.
 	samplesScannedPerCall int
+
+	// The minimum interval for staleness calculations.
+	minStalenessInterval time.Duration
 }
 
 func (rc *rollupConfig) getTimestamps() []int64 {
@@ -723,8 +723,8 @@ func (rc *rollupConfig) doInternal(dstValues []float64, tsm *timeseriesMap, valu
 	if rc.LookbackDelta > 0 && maxPrevInterval > rc.LookbackDelta {
 		maxPrevInterval = rc.LookbackDelta
 	}
-	if *minStalenessInterval > 0 {
-		if msi := minStalenessInterval.Milliseconds(); msi > 0 && maxPrevInterval < msi {
+	if rc.minStalenessInterval > 0 {
+		if msi := rc.minStalenessInterval.Milliseconds(); msi > 0 && maxPrevInterval < msi {
 			maxPrevInterval = msi
 		}
 	}

View File

@@ -15,3 +15,24 @@ export const getExportDataUrl = (server: string, query: string, period: TimePara
   if (reduceMemUsage) params.set("reduce_mem_usage", "1");
   return `${server}/api/v1/export?${params}`;
 };
+
+export const getExportCSVDataUrl = (server: string, query: string[], period: TimeParams, reduceMemUsage: boolean): string => {
+  const params = new URLSearchParams({
+    start: period.start.toString(),
+    end: period.end.toString(),
+    format: "__name__,__value__,__timestamp__:unix_ms",
+  });
+  query.forEach((q => params.append("match[]", q)));
+  if (reduceMemUsage) params.set("reduce_mem_usage", "1");
+  return `${server}/api/v1/export/csv?${params}`;
+};
+
+export const getExportJSONDataUrl = (server: string, query: string[], period: TimeParams, reduceMemUsage: boolean): string => {
+  const params = new URLSearchParams({
+    start: period.start.toString(),
+    end: period.end.toString(),
+  });
+  query.forEach((q => params.append("match[]", q)));
+  if (reduceMemUsage) params.set("reduce_mem_usage", "1");
+  return `${server}/api/v1/export?${params}`;
+};

View File

@@ -1,20 +1,18 @@
-import { FC, useCallback } from "preact/compat";
+import { useCallback, useRef } from "preact/compat";
 import Tooltip from "../Main/Tooltip/Tooltip";
 import Button from "../Main/Button/Button";
 import { DownloadIcon } from "../Main/Icons";
 import Popper from "../Main/Popper/Popper";
-import { useRef } from "react";
 import "./style.scss";
 import useBoolean from "../../hooks/useBoolean";
 
-interface DownloadButtonProps {
+interface DownloadButtonProps<T extends string> {
   title: string;
-  downloadFormatOptions?: string[];
-  onDownload: (format?: string) => void;
+  downloadFormatOptions?: T[];
+  onDownload: (format?: T) => void;
 }
 
-/** TODO: Currently unused, later will be added for the exporting metrics */
-const DownloadButton: FC<DownloadButtonProps> = ({ title, downloadFormatOptions, onDownload }) => {
+const DownloadButton = <T extends string>({ title, downloadFormatOptions, onDownload }: DownloadButtonProps<T>) => {
   const {
     value: isPopupOpen,
     setTrue: onOpenPopup,
@@ -35,9 +33,19 @@ const DownloadButton: FC<DownloadButtonProps> = ({ title, downloadFormatOptions,
     }
   }, [onDownload, onClosePopup, isPopupOpen, onOpenPopup]);
 
+  const isDownloadFormat = useCallback((format: string): format is T => {
+    return (downloadFormatOptions as string[])?.includes(format);
+  }, [downloadFormatOptions]);
+
   const onDownloadFormatClick = useCallback((event: Event) => {
     const button = event.currentTarget as HTMLButtonElement;
-    onDownload(button.textContent ?? undefined);
+    const format = button.textContent;
+    if (format && isDownloadFormat(format)) {
+      onDownload(format);
+    } else {
+      onDownload();
+    }
     onClosePopup();
   }, [onDownload]);
 
   return (

View File

@@ -578,97 +578,13 @@ export const CommentIcon = () => (
   </svg>
 );
 
-export const FilterIcon = () => (
+export const DebugIcon = () => (
   <svg
     viewBox="0 0 24 24"
     fill="currentColor"
   >
-    <path
-      d="M4.25 5.61C6.27 8.2 10 13 10 13v6c0 .55.45 1 1 1h2c.55 0 1-.45 1-1v-6s3.72-4.8 5.74-7.39c.51-.66.04-1.61-.79-1.61H5.04c-.83 0-1.3.95-.79 1.61"
-    ></path>
-  </svg>
-);
-
-export const FilterOffIcon = () => (
-  <svg
-    viewBox="0 0 24 24"
-    fill="currentColor"
-  >
-    <path
-      d="M19.79 5.61C20.3 4.95 19.83 4 19 4H6.83l7.97 7.97zM2.81 2.81 1.39 4.22 10 13v6c0 .55.45 1 1 1h2c.55 0 1-.45 1-1v-2.17l5.78 5.78 1.41-1.41z"
-    ></path>
-  </svg>
-);
-
-export const OpenNewIcon = () => (
-  <svg
-    viewBox="0 0 24 24"
-    fill="currentColor"
-  >
-    <path
-      d="M19 19H5V5h7V3H5c-1.11 0-2 .9-2 2v14c0 1.1.89 2 2 2h14c1.1 0 2-.9 2-2v-7h-2zM14 3v2h3.59l-9.83 9.83 1.41 1.41L19 6.41V10h2V3z"
-    ></path>
-  </svg>
-);
-
-export const ModalIcon = () => (
-  <svg
-    viewBox="0 0 24 24"
-    fill="currentColor"
-  >
-    <path d="M19 4H5c-1.11 0-2 .9-2 2v12c0 1.1.89 2 2 2h14c1.1 0 2-.9 2-2V6c0-1.1-.89-2-2-2m0 14H5V8h14z"></path>
-  </svg>
-);
-
-export const PauseIcon = () => (
-  <svg
-    viewBox="0 0 24 24"
-    fill="currentColor"
-  >
-    <path d="M6 19h4V5H6v14zm8-14v14h4V5h-4z" />
-  </svg>
-);
-
-export const ScrollToTopIcon = () => (
-  <svg
-    viewBox="0 0 24 24"
-    fill="currentColor"
-  >
-    <path
-      d="M8 12l4-4 4 4m-4-4v12"
-      strokeWidth="2"
-      stroke="currentColor"
-      fill="none"
+    <path
+      d="M20 8h-2.81c-.45-.78-1.07-1.45-1.82-1.96L17 4.41 15.59 3l-2.17 2.17C12.96 5.06 12.49 5 12 5c-.49 0-.96.06-1.41.17L8.41 3 7 4.41l1.62 1.63C7.88 6.55 7.26 7.22 6.81 8H4v2h2.09c-.05.33-.09.66-.09 1v1H4v2h2v1c0 .34.04.67.09 1H4v2h2.81c1.04 1.79 2.97 3 5.19 3s4.15-1.21 5.19-3H20v-2h-2.09c.05-.33.09-.66.09-1v-1h2v-2h-2v-1c0-.34-.04-.67-.09-1H20V8zm-6 8h-4v-2h4v2zm0-4h-4v-2h4v2z"
     />
   </svg>
 );
-
-export const SortIcon = () => (
-  <svg
-    viewBox="0 0 24 24"
-    fill="currentColor"
-  >
-    <path d="M4 3 L4 15 L1.5 15 L5.5 21 L9.5 15 L7 15 L7 3 Z"/>
-    <path d="M13 21 L13 9 L10.5 9 L14.5 3 L18.5 9 L16 9 L16 21 Z"/>
-  </svg>
-);
-
-export const SortArrowDownIcon = () => (
-  <svg
-    viewBox="0 0 24 24"
-    fill="currentColor"
-  >
-    <path d="M10.5 3 L10.5 15 L8 15 L12 21 L16 15 L13.5 15 L13.5 3 Z"/>
-  </svg>
-);
-
-export const SortArrowUpIcon = () => (
-  <svg
-    viewBox="0 0 24 24"
-    fill="currentColor"
-  >
-    <path d="M10.5 21 L10.5 9 L8 9 L12 3 L16 9 L13.5 9 L13.5 21 Z"/>
-  </svg>
-);

View File

@@ -1,5 +1,5 @@
 import { FC, useCallback, useEffect, useRef, useState } from "preact/compat";
-import { DownloadIcon } from "../../../components/Main/Icons";
+import { DebugIcon } from "../../../components/Main/Icons";
 import Button from "../../../components/Main/Button/Button";
 import Tooltip from "../../../components/Main/Tooltip/Tooltip";
 import useBoolean from "../../../hooks/useBoolean";
@@ -217,17 +217,17 @@ const DownloadReport: FC<Props> = ({ fetchUrl, reportType = ReportType.QUERY_DAT
 
   return (
     <>
-      <Tooltip title={"Export query"}>
+      <Tooltip title={"Debug query"}>
         <Button
           variant="text"
-          startIcon={<DownloadIcon/>}
+          startIcon={<DebugIcon />}
           onClick={toggleOpen}
-          ariaLabel="export query"
+          ariaLabel="Debug query"
         />
       </Tooltip>
       {openModal && (
         <Modal
-          title={"Export query"}
+          title={"Debug query"}
           onClose={handleClose}
           isOpen={openModal}
         >

View File

@@ -1,4 +1,4 @@
-import { FC, useEffect, useState } from "preact/compat";
+import { FC, useEffect, useState, useMemo, useRef, useCallback } from "preact/compat";
 import QueryConfigurator from "./QueryConfigurator/QueryConfigurator";
 import { useFetchQuery } from "../../hooks/useFetchQuery";
 import { DisplayTypeSwitch } from "./DisplayTypeSwitch";
@@ -12,13 +12,17 @@ import Alert from "../../components/Main/Alert/Alert";
 import classNames from "classnames";
 import useDeviceDetect from "../../hooks/useDeviceDetect";
 import InstantQueryTip from "./InstantQueryTip/InstantQueryTip";
-import { useRef } from "react";
 import CustomPanelTraces from "./CustomPanelTraces/CustomPanelTraces";
 import WarningLimitSeries from "./WarningLimitSeries/WarningLimitSeries";
 import CustomPanelTabs from "./CustomPanelTabs";
 import { DisplayType } from "../../types";
 import DownloadReport from "./DownloadReport/DownloadReport";
 import WarningHeatmapToLine from "./WarningHeatmapToLine/WarningHeatmapToLine";
+import DownloadButton from "../../components/DownloadButton/DownloadButton";
+import { downloadCSV, downloadJSON } from "../../utils/file";
+import { convertMetricsDataToCSV } from "./utils";
+
+type ExportFormats = "csv" | "json";
 
 const CustomPanel: FC = () => {
   useSetQueryParams();
@@ -55,6 +59,27 @@ const CustomPanel: FC = () => {
     showAllSeries
   });
 
+  const fileDownloaders = useMemo(() => {
+    const getFilename = (format: ExportFormats) => {
+      return `vmui_export_${query.join("_")}.${format}`;
+    };
+
+    return {
+      csv: async () => {
+        if (!liveData) return;
+        const csvData = convertMetricsDataToCSV(liveData);
+        downloadCSV(csvData, getFilename("csv"));
+      },
+      json: async () => {
+        downloadJSON(JSON.stringify(liveData), getFilename("json"));
+      },
+    };
+  }, [liveData, query]);
+
+  const onDownloadClick = useCallback((format?: ExportFormats) => {
+    format && fileDownloaders[format]();
+  }, [fileDownloaders]);
+
   const showInstantQueryTip = !liveData?.length && (displayType !== DisplayType.chart);
   const showError = !hideError && error;
@@ -110,7 +135,7 @@ const CustomPanel: FC = () => {
         "vm-block_mobile": isMobile,
       })}
     >
-      {isLoading && <LineLoader />}
+      {isLoading && <LineLoader/>}
       <div
         className="vm-custom-panel-body-header"
         ref={controlsRef}
@@ -118,7 +143,13 @@ const CustomPanel: FC = () => {
       <div className="vm-custom-panel-body-header__tabs">
         <DisplayTypeSwitch/>
       </div>
-      {(graphData || liveData) && <DownloadReport fetchUrl={fetchUrl}/>}
+      {displayType === "table" && (
+        <DownloadButton
+          title={"Export query"}
+          onDownload={onDownloadClick}
+          downloadFormatOptions={["json", "csv"]}
+        />)}
+      {(graphData || liveData) && displayType !== "code" && <DownloadReport fetchUrl={fetchUrl}/>}
     </div>
     <CustomPanelTabs
       graphData={graphData}

View File

@@ -0,0 +1,86 @@
import { describe, expect, it } from "vitest";
import { convertMetricsDataToCSV } from "./utils";
import { InstantMetricResult } from "../../api/types";
describe("convertMetricsDataToCSV", () => {
it("should return an empty string if headers are empty", () => {
const data: InstantMetricResult[] = [];
expect(convertMetricsDataToCSV(data)).toBe("");
});
it("should return a valid CSV string for single metric entry with value", () => {
const data: InstantMetricResult[] = [
{
value: [1623945600, "123"],
group: 0,
metric: {
header1: "123",
header2: "value2"
}
},
];
const result = convertMetricsDataToCSV(data);
expect(result).toBe("header1,header2\n123,value2");
});
it("should return a valid CSV string for multiple metric entries with values", () => {
const data: InstantMetricResult[] = [
{
value: [1623945600, "123"],
group: 0,
metric: {
header1: "123",
header2: "value2"
}
},
{
value: [1623949200, "456"],
group: 0,
metric: {
header1: "456",
header2: "value4"
}
},
];
const result = convertMetricsDataToCSV(data);
expect(result).toBe("header1,header2\n123,value2\n456,value4");
});
it("should handle metric entries with multiple values field", () => {
const data: InstantMetricResult[] = [
{
values: [[1623945600, "123"], [1623949200, "456"]],
group: 0,
metric: {
header1: "123-456",
header2: "values"
}
},
];
const result = convertMetricsDataToCSV(data);
expect(result).toBe("header1,header2\n123-456,values");
});
it("should handle a combination of metric entries with value and values", () => {
const data: InstantMetricResult[] = [
{
value: [1623945600, "123"],
group: 0,
metric: {
header1: "123",
header2: "first"
}
},
{
values: [[1623949200, "456"], [1623952800, "789"]],
group: 0,
metric: {
header1: "456-789",
header2: "second"
}
},
];
const result = convertMetricsDataToCSV(data);
expect(result).toBe("header1,header2\n123,first\n456-789,second");
});
});

View File

@@ -0,0 +1,18 @@
import { InstantMetricResult } from "../../api/types";
import { getColumns, MetricCategory } from "../../hooks/useSortedCategories";
import { formatValueToCSV } from "../../utils/csv";
const getHeaders = (data: InstantMetricResult[]): string => {
return getColumns(data).map(({ key }) => key).join(",");
};
const getRows = (data: InstantMetricResult[], headers: MetricCategory[]) => {
return data?.map(d => headers.map(c => formatValueToCSV(d.metric[c.key] || "-")).join(","));
};
export const convertMetricsDataToCSV = (data: InstantMetricResult[]): string => {
const headers = getHeaders(data);
if (!headers.length) return "";
const rows = getRows(data, getColumns(data));
return [headers, ...rows].join("\n");
};
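
A self-contained sketch of the same CSV-assembly idea, with a simplified stand-in for `getColumns` (the header is just the union of label names across rows) and plain records instead of `InstantMetricResult`:

```ts
// Standalone sketch of the CSV assembly above; types and the column
// collection are simplified stand-ins, not the vmui originals.
type Metric = Record<string, string>;

const formatValueToCSV = (value: string): string =>
  (value.includes(",") || value.includes("\n") || value.includes("\""))
    ? "\"" + value.replace(/"/g, "\"\"") + "\""
    : value;

const toCSV = (rows: Metric[]): string => {
  // Header: the union of label names across all rows.
  const headers = Array.from(new Set(rows.flatMap((r) => Object.keys(r))));
  if (!headers.length) return "";
  const body = rows.map((r) =>
    headers.map((h) => formatValueToCSV(r[h] ?? "-")).join(","),
  );
  return [headers.join(","), ...body].join("\n");
};

// toCSV([{ header1: "123", header2: "value2" }])
//   === "header1,header2\n123,value2"
```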

View File

@@ -1,13 +1,15 @@
import { Dispatch, SetStateAction, useCallback, useEffect, useMemo, useRef, useState } from "preact/compat";
import { MetricBase, MetricResult, ExportMetricResult } from "../../../api/types";
import { ErrorTypes, SeriesLimits } from "../../../types";
import { ErrorTypes, SeriesLimits, TimeParams } from "../../../types";
import { useQueryState } from "../../../state/query/QueryStateContext";
import { useTimeState } from "../../../state/time/TimeStateContext";
import { useAppState } from "../../../state/common/StateContext";
import { useCustomPanelState } from "../../../state/customPanel/CustomPanelStateContext";
import { isValidHttpUrl } from "../../../utils/url";
import { getExportDataUrl } from "../../../api/query-range";
import { getExportCSVDataUrl, getExportDataUrl, getExportJSONDataUrl } from "../../../api/query-range";
import { parseLineToJSON } from "../../../utils/json";
import { downloadCSV, downloadJSON } from "../../../utils/file";
import { useSnack } from "../../../contexts/Snackbar";
interface FetchQueryParams {
hideQuery?: number[];
@@ -16,6 +18,7 @@ interface FetchQueryParams {
interface FetchQueryReturn {
fetchUrl?: string[],
exportData: (format: ExportFormats) => void,
isLoading: boolean,
data?: MetricResult[],
error?: ErrorTypes | string,
@@ -25,11 +28,16 @@ interface FetchQueryReturn {
abortFetch: () => void
}
type ExportFormats = "csv" | "json";
type FormatDownloader = (serverUrl: string, query: string[], period: TimeParams, reduceMemUsage: boolean) => void;
type DownloadFileFormats = Record<ExportFormats, FormatDownloader>
export const useFetchExport = ({ hideQuery, showAllSeries }: FetchQueryParams): FetchQueryReturn => {
const { query } = useQueryState();
const { period } = useTimeState();
const { displayType, reduceMemUsage, seriesLimits: stateSeriesLimits } = useCustomPanelState();
const { serverUrl } = useAppState();
const { showInfoMessage } = useSnack();
const [isLoading, setIsLoading] = useState(false);
const [data, setData] = useState<MetricResult[]>();
@@ -55,6 +63,35 @@ export const useFetchExport = ({ hideQuery, showAllSeries }: FetchQueryParams):
}
}, [serverUrl, period, hideQuery, reduceMemUsage]);
const fileDownloaders: DownloadFileFormats = useMemo(() => {
const getFilename = (format: ExportFormats) => `vmui_export_${query.join("_")}_${period.start}_${period.end}.${format}`;
return {
csv: async () => {
const url = getExportCSVDataUrl(serverUrl, query, period, reduceMemUsage);
const response = await fetch(url);
try {
let text = await response.text();
text = "name,value,timestamp\n" + text;
downloadCSV(text, getFilename("csv"));
} catch (e) {
console.error(e);
showInfoMessage({ text: "Couldn't fetch data for CSV export. Please try again", type: "error" });
}
},
json: async () => {
const url = getExportJSONDataUrl(serverUrl, query, period, reduceMemUsage);
try {
const response = await fetch(url);
const text = await response.text();
downloadJSON(text, getFilename("json"));
} catch (e) {
console.error(e);
showInfoMessage({ text: "Couldn't fetch data for JSON export. Please try again", type: "error" });
}
}
};
}, [query, period, serverUrl, reduceMemUsage]);
const fetchData = useCallback(async ({ fetchUrl, stateSeriesLimits, showAllSeries }: {
fetchUrl: string[];
stateSeriesLimits: SeriesLimits;
@@ -144,6 +181,12 @@ export const useFetchExport = ({ hideQuery, showAllSeries }: FetchQueryParams):
}
}, [displayType, hideQuery]);
const exportData = useCallback((format: ExportFormats) => {
if (error) return;
const updatedPeriod = { ...period };
fileDownloaders[format](serverUrl, query, updatedPeriod, reduceMemUsage);
}, [serverUrl, query, period, reduceMemUsage, error, fileDownloaders]);
const abortFetch = useCallback(() => {
abortControllerRef.current.abort();
setData([]);
@@ -167,5 +210,6 @@ export const useFetchExport = ({ hideQuery, showAllSeries }: FetchQueryParams):
setQueryErrors,
warning,
abortFetch,
exportData
};
};
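
The hook above keys one downloader per export format and lets `exportData` dispatch into that map. A compressed sketch of the pattern, where `buildExportUrl`, `notifyError` and `saveFile` are hypothetical stand-ins for the vmui URL builders, snackbar hook and file helpers:

```ts
// Sketch of the per-format dispatch in useFetchExport; buildExportUrl,
// notifyError and saveFile are hypothetical stand-ins.
type ExportFormats = "csv" | "json";

declare function buildExportUrl(format: ExportFormats): string;
declare function notifyError(text: string): void;
declare function saveFile(text: string, filename: string): void;

const downloaders: Record<ExportFormats, () => Promise<void>> = {
  csv: async () => {
    try {
      const text = await (await fetch(buildExportUrl("csv"))).text();
      // The hook prepends a header row to the fetched CSV before saving it.
      saveFile("name,value,timestamp\n" + text, "vmui_export.csv");
    } catch (e) {
      console.error(e);
      notifyError("Couldn't fetch data for CSV export. Please try again");
    }
  },
  json: async () => {
    try {
      saveFile(await (await fetch(buildExportUrl("json"))).text(), "vmui_export.json");
    } catch (e) {
      console.error(e);
      notifyError("Couldn't fetch data for JSON export. Please try again");
    }
  },
};

// exportData then reduces to: downloaders[format]();
```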

View File

@@ -1,4 +1,4 @@
import { FC, useState } from "preact/compat";
import { FC, useCallback, useState } from "preact/compat";
import LineLoader from "../../components/Main/LineLoader/LineLoader";
import { useCustomPanelState } from "../../state/customPanel/CustomPanelStateContext";
import { useQueryState } from "../../state/query/QueryStateContext";
@@ -17,7 +17,7 @@ import { DisplayType } from "../../types";
import Hyperlink from "../../components/Main/Hyperlink/Hyperlink";
import { CloseIcon } from "../../components/Main/Icons";
import Button from "../../components/Main/Button/Button";
import DownloadReport, { ReportType } from "../CustomPanel/DownloadReport/DownloadReport";
import DownloadButton from "../../components/DownloadButton/DownloadButton";
const RawSamplesLink = () => (
<Hyperlink
@@ -66,7 +66,7 @@ const RawQueryPage: FC = () => {
queryErrors,
setQueryErrors,
abortFetch,
fetchUrl,
exportData
} = useFetchExport({ hideQuery, showAllSeries });
const controlsRef = useRef<HTMLDivElement>(null);
@@ -85,6 +85,11 @@ const RawQueryPage: FC = () => {
setShowPageDescription(false);
};
const onExportClick = useCallback(async (format?: "csv" | "json") => {
if (!format) return;
exportData(format);
}, [exportData]);
return (
<div
className={classNames({
@@ -159,9 +164,10 @@ const RawQueryPage: FC = () => {
<DisplayTypeSwitch tabFilter={(tab) => (tab.value !== DisplayType.table)}/>
</div>
{data && (
<DownloadReport
fetchUrl={fetchUrl}
reportType={ReportType.RAW_DATA}
<DownloadButton
title={"Export query"}
downloadFormatOptions={["json", "csv"]}
onDownload={onExportClick}
/>
)}
</div>
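
Both pages now go through the same `DownloadButton`. A hypothetical sketch of its contract as used above, plus the guard applied in `onExportClick` before delegating (the prop names mirror the diff; the interface itself is an assumption):

```ts
// Hypothetical sketch of the DownloadButton contract as used above;
// the real component lives in components/DownloadButton.
type ExportFormats = "csv" | "json";

interface DownloadButtonProps {
  title: string;
  downloadFormatOptions: ExportFormats[];
  // Called with the picked format, or undefined when nothing was selected.
  onDownload: (format?: ExportFormats) => void;
}

// Page-level handlers guard against a missing format before delegating:
const makeExportHandler =
  (exportData: (format: ExportFormats) => void) =>
  (format?: ExportFormats): void => {
    if (!format) return;
    exportData(format);
  };
```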

View File

@@ -0,0 +1,34 @@
import { describe, expect, it } from "vitest";
import { formatValueToCSV } from "./csv";
describe("formatValueToCSV", () => {
it("should wrap value in quotes if it contains a comma", () => {
const value = "hello,world";
const result = formatValueToCSV(value);
expect(result).toBe("\"hello,world\"");
});
it("should wrap value in quotes if it contains a newline", () => {
const value = "hello\nworld";
const result = formatValueToCSV(value);
expect(result).toBe("\"hello\nworld\"");
});
it("should escape quotes and wrap in quotes if value contains a double quote", () => {
const value = "hello \"world\"";
const result = formatValueToCSV(value);
expect(result).toBe("\"hello \"\"world\"\"\"");
});
it("should return the same value if it does not contain special characters", () => {
const value = "hello world";
const result = formatValueToCSV(value);
expect(result).toBe("hello world");
});
it("should handle empty strings correctly", () => {
const value = "";
const result = formatValueToCSV(value);
expect(result).toBe("");
});
});

View File

@@ -0,0 +1,4 @@
export const formatValueToCSV = (value: string) =>
(value.includes(",") || value.includes("\n") || value.includes("\""))
? "\"" + value.replace(/"/g, "\"\"") + "\""
: value;
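
For quick reference, a few worked inputs and outputs of the escaping rule above, mirroring the unit tests:

```ts
import { formatValueToCSV } from "./csv";

formatValueToCSV("hello world");   // => hello world        (no special chars)
formatValueToCSV("hello,world");   // => "hello,world"      (comma -> quoted)
formatValueToCSV("hello\nworld");  // => "hello\nworld"     (newline -> quoted)
formatValueToCSV("say \"hi\"");    // => "say ""hi"""       (quotes doubled, then quoted)
```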

View File

@@ -11,38 +11,12 @@ export const downloadFile = (data: Blob, filename: string) => {
URL.revokeObjectURL(url);
};
export const downloadCSV = (data: Record<string, string>[], filename: string) => {
const getHeader = (data: Record<string, string>[]) => {
const headersObj = data.reduce<Record<string, boolean>>((headers, row) => {
Object.keys(row).forEach((key) => {
if(key && !headers[key]){
headers[key] = true;
}
});
return headers;
}, {});
return Object.keys(headersObj);
};
const formatValueToCSV= (value: string) =>
(value.includes(",") || value.includes("\n") || value.includes("\""))
? "\"" + value.replace(/"/g, "\"\"") + "\""
: value;
const convertToCSV = (data: Record<string, string>[]): string => {
const header = getHeader(data);
const rows = data.map(item =>
header.map(fieldName => item[fieldName] ? formatValueToCSV(item[fieldName]): "").join(",")
);
return [header.map(formatValueToCSV).join(","), ...rows].join("\r\n");
};
const csvContent = convertToCSV(data);
const blob = new Blob([csvContent], { type: "text/csv;charset=utf-8;" });
export const downloadCSV = (data: string, filename: string) => {
const blob = new Blob([data], { type: "text/csv;charset=utf-8;" });
downloadFile(blob, filename);
};
export const downloadJSON = (data: string, filename: string) => {
const blob = new Blob([data], { type: "application/json" });
downloadFile(blob, filename);
};
};
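
After this change `downloadCSV` receives a pre-formatted string, so both helpers reduce to the same Blob-plus-object-URL flow. A sketch of that flow; only the `downloadFile` signature is shown in the diff, so its anchor-click body below is an assumed implementation:

```ts
// Sketch of the Blob + object-URL download flow; the anchor-click body is
// an assumed implementation of downloadFile, whose signature appears at
// the top of the diff above.
const downloadFile = (data: Blob, filename: string): void => {
  const url = URL.createObjectURL(data);
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  document.body.appendChild(a);
  a.click();
  a.remove();
  URL.revokeObjectURL(url); // release the object URL after the click
};

const downloadCSV = (data: string, filename: string): void =>
  downloadFile(new Blob([data], { type: "text/csv;charset=utf-8;" }), filename);

const downloadJSON = (data: string, filename: string): void =>
  downloadFile(new Blob([data], { type: "application/json" }), filename);
```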

View File

@@ -7,7 +7,7 @@ groups:
# note the `job` filter and update accordingly to your setup
rules:
- alert: TooManyRestarts
expr: changes(process_start_time_seconds{job=~".*(victoriametrics|vmselect|vminsert|vmstorage|vmagent|vmalert|vmsingle|vmalertmanager|vmauth|victorialogs|vlstorage|vlselect|vlinsert).*"}[15m]) > 2
expr: changes(process_start_time_seconds{job=~".*(victoriametrics|vmselect|vminsert|vmstorage|vmagent|vmalert|vmsingle|vmalertmanager|vmauth).*"}[15m]) > 2
labels:
severity: critical
annotations:
@@ -17,7 +17,7 @@ groups:
It might be crashlooping.
- alert: ServiceDown
expr: up{job=~".*(victoriametrics|vmselect|vminsert|vmstorage|vmagent|vmalert|vmsingle|vmalertmanager|vmauth|victorialogs|vlstorage|vlselect|vlinsert).*"} == 0
expr: up{job=~".*(victoriametrics|vmselect|vminsert|vmstorage|vmagent|vmalert|vmsingle|vmalertmanager|vmauth).*"} == 0
for: 2m
labels:
severity: critical
@@ -59,7 +59,7 @@ groups:
Consider to either increase available CPU resources or decrease the load on the process.
- alert: TooHighGoroutineSchedulingLatency
expr: histogram_quantile(0.99, sum(rate(go_sched_latencies_seconds_bucket[5m])) by (le, job, instance)) > 0.1
expr: histogram_quantile(0.99, sum(rate(go_sched_latencies_seconds_bucket{job=~".*(victoriametrics|vmselect|vminsert|vmstorage|vmagent|vmalert|vmsingle|vmalertmanager|vmauth).*"}[5m])) by (le, job, instance)) > 0.1
for: 15m
labels:
severity: critical

View File

@@ -1133,6 +1133,8 @@ Below is the output for `/path/to/vminsert -help`:
Whether to disable re-routing when some of vmstorage nodes are unavailable. Disabled re-routing stops ingestion when some storage nodes are unavailable. On the other side, disabled re-routing minimizes the number of active time series in the cluster during rolling restarts and during spikes in series churn rate. See also -disableRerouting
-dropSamplesOnOverload
Whether to drop incoming samples if the destination vmstorage node is overloaded and/or unavailable. This prioritizes cluster availability over consistency, e.g. the cluster continues accepting all the ingested samples, but some of them may be dropped if vmstorage nodes are temporarily unavailable and/or overloaded. The drop of samples happens before the replication, so it's not recommended to use this flag with -replicationFactor enabled.
-enableMetadata
Whether to enable metadata processing for metrics scraped from targets, received via VictoriaMetrics remote write, Prometheus remote write v1 or OpenTelemetry protocol. See also remoteWrite.maxMetadataPerBlock
-enableTCP6
Whether to enable IPv6 for listening and dialing. By default, only IPv4 TCP and UDP are used
-envflag.enable
@@ -1311,6 +1313,113 @@ Below is the output for `/path/to/vminsert -help`:
Flag value can be read from the given file when using -pprofAuthKey=file:///abs/path/to/file or -pprofAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -pprofAuthKey=http://host/path or -pprofAuthKey=https://host/path
-prevCacheRemovalPercent float
Items in the previous caches are removed when the percent of requests it serves becomes lower than this value. Higher values reduce memory usage at the cost of higher CPU usage. See also -cacheExpireDuration (default 0.1)
-promscrape.azureSDCheckInterval duration
Interval for checking for changes in Azure. This works only if azure_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#azure_sd_configs for details (default 1m0s)
-promscrape.cluster.memberLabel string
If non-empty, then the label with this name and the -promscrape.cluster.memberNum value is added to all the scraped metrics. See https://docs.victoriametrics.com/victoriametrics/vmagent/#scraping-big-number-of-targets for more info
-promscrape.cluster.memberNum string
The number of vmagent instance in the cluster of scrapers. It must be a unique value in the range 0 ... promscrape.cluster.membersCount-1 across scrapers in the cluster. Can be specified as pod name of Kubernetes StatefulSet - pod-name-Num, where Num is a numeric part of pod name. See also -promscrape.cluster.memberLabel . See https://docs.victoriametrics.com/victoriametrics/vmagent/#scraping-big-number-of-targets for more info (default "0")
-promscrape.cluster.memberURLTemplate string
An optional template for URL to access vmagent instance with the given -promscrape.cluster.memberNum value. Every %d occurrence in the template is substituted with -promscrape.cluster.memberNum at urls to vmagent instances responsible for scraping the given target at /service-discovery page. For example -promscrape.cluster.memberURLTemplate='http://vmagent-%d:8429/targets'. See https://docs.victoriametrics.com/victoriametrics/vmagent/#scraping-big-number-of-targets for more details
-promscrape.cluster.membersCount int
The number of members in a cluster of scrapers. Each member must have a unique -promscrape.cluster.memberNum in the range 0 ... promscrape.cluster.membersCount-1 . Each member then scrapes roughly 1/N of all the targets. By default, cluster scraping is disabled, i.e. a single scraper scrapes all the targets. See https://docs.victoriametrics.com/victoriametrics/vmagent/#scraping-big-number-of-targets for more info (default 1)
-promscrape.cluster.name string
Optional name of the cluster. If multiple vmagent clusters scrape the same targets, then each cluster must have unique name in order to properly de-duplicate samples received from these clusters. See https://docs.victoriametrics.com/victoriametrics/vmagent/#scraping-big-number-of-targets for more info
-promscrape.cluster.replicationFactor int
The number of members in the cluster, which scrape the same targets. If the replication factor is greater than 1, then the deduplication must be enabled at remote storage side. See https://docs.victoriametrics.com/victoriametrics/vmagent/#scraping-big-number-of-targets for more info (default 1)
-promscrape.config string
Optional path to Prometheus config file with 'scrape_configs' section containing targets to scrape. The path can point to local file and to http url. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-scrape-prometheus-exporters-such-as-node-exporter for details
-promscrape.config.dryRun
Checks -promscrape.config file for errors and unsupported fields and then exits. Returns non-zero exit code on parsing errors and emits these errors to stderr. See also -promscrape.config.strictParse command-line flag. Pass -loggerLevel=ERROR if you don't need to see info messages in the output.
-promscrape.config.strictParse
Whether to deny unsupported fields in -promscrape.config . Set to false in order to silently skip unsupported fields (default true)
-promscrape.configCheckInterval duration
Interval for checking for changes in -promscrape.config file. By default, the checking is disabled. See how to reload -promscrape.config file at https://docs.victoriametrics.com/victoriametrics/vmagent/#configuration-update
-promscrape.consul.waitTime duration
Wait time used by Consul service discovery. Default value is used if not set
-promscrape.consulSDCheckInterval duration
Interval for checking for changes in Consul. This works only if consul_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#consul_sd_configs for details (default 30s)
-promscrape.consulagentSDCheckInterval duration
Interval for checking for changes in Consul Agent. This works only if consulagent_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#consulagent_sd_configs for details (default 30s)
-promscrape.digitaloceanSDCheckInterval duration
Interval for checking for changes in digital ocean. This works only if digitalocean_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#digitalocean_sd_configs for details (default 1m0s)
-promscrape.disableCompression
Whether to disable sending 'Accept-Encoding: gzip' request headers to all the scrape targets. This may reduce CPU usage on scrape targets at the cost of higher network bandwidth utilization. It is possible to set 'disable_compression: true' individually per each 'scrape_config' section in '-promscrape.config' for fine-grained control
-promscrape.disableKeepAlive
Whether to disable HTTP keep-alive connections when scraping all the targets. This may be useful when targets has no support for HTTP keep-alive connection. It is possible to set 'disable_keepalive: true' individually per each 'scrape_config' section in '-promscrape.config' for fine-grained control. Note that disabling HTTP keep-alive may increase load on both vmagent and scrape targets
-promscrape.discovery.concurrency int
The maximum number of concurrent requests to Prometheus autodiscovery API (Consul, Kubernetes, etc.) (default 100)
-promscrape.discovery.concurrentWaitTime duration
The maximum duration for waiting to perform API requests if more than -promscrape.discovery.concurrency requests are simultaneously performed (default 1m0s)
-promscrape.dnsSDCheckInterval duration
Interval for checking for changes in dns. This works only if dns_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#dns_sd_configs for details (default 30s)
-promscrape.dockerSDCheckInterval duration
Interval for checking for changes in docker. This works only if docker_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#docker_sd_configs for details (default 30s)
-promscrape.dockerswarmSDCheckInterval duration
Interval for checking for changes in dockerswarm. This works only if dockerswarm_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#dockerswarm_sd_configs for details (default 30s)
-promscrape.dropOriginalLabels
Whether to drop original labels for scrape targets at /targets and /api/v1/targets pages. This may be needed for reducing memory usage when original labels for big number of scrape targets occupy big amounts of memory. Note that this reduces debuggability for improper per-target relabeling configs
-promscrape.ec2SDCheckInterval duration
Interval for checking for changes in ec2. This works only if ec2_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#ec2_sd_configs for details (default 1m0s)
-promscrape.eurekaSDCheckInterval duration
Interval for checking for changes in eureka. This works only if eureka_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#eureka_sd_configs for details (default 30s)
-promscrape.fileSDCheckInterval duration
Interval for checking for changes in 'file_sd_config'. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#file_sd_configs for details (default 1m0s)
-promscrape.gceSDCheckInterval duration
Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#gce_sd_configs for details (default 1m0s)
-promscrape.hetznerSDCheckInterval duration
Interval for checking for changes in Hetzner API. This works only if hetzner_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#hetzner_sd_configs for details (default 1m0s)
-promscrape.httpSDCheckInterval duration
Interval for checking for changes in http endpoint service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#http_sd_configs for details (default 1m0s)
-promscrape.kubernetes.apiServerTimeout duration
How frequently to reload the full state from Kubernetes API server (default 30m0s)
-promscrape.kubernetes.attachNodeMetadataAll
Whether to set attach_metadata.node=true for all the kubernetes_sd_configs at -promscrape.config . It is possible to set attach_metadata.node=false individually per each kubernetes_sd_configs . See https://docs.victoriametrics.com/victoriametrics/sd_configs/#kubernetes_sd_configs
-promscrape.kubernetes.useHTTP2Client
Whether to use HTTP/2 client for connection to Kubernetes API server. This may reduce amount of concurrent connections to API server when watching for a big number of Kubernetes objects.
-promscrape.kubernetesSDCheckInterval duration
Interval for checking for changes in Kubernetes API server. This works only if kubernetes_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#kubernetes_sd_configs for details (default 30s)
-promscrape.kumaSDCheckInterval duration
Interval for checking for changes in kuma service discovery. This works only if kuma_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#kuma_sd_configs for details (default 30s)
-promscrape.marathonSDCheckInterval duration
Interval for checking for changes in Marathon REST API. This works only if marathon_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#marathon_sd_configs for details (default 30s)
-promscrape.maxDroppedTargets int
The maximum number of droppedTargets to show at /api/v1/targets page. Increase this value if your setup drops more scrape targets during relabeling and you need investigating labels for all the dropped targets. Note that the increased number of tracked dropped targets may result in increased memory usage (default 10000)
-promscrape.maxResponseHeadersSize size
The maximum size of http response headers from Prometheus scrape targets
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 4096)
-promscrape.maxScrapeSize size
The maximum size of scrape response in bytes to process from Prometheus targets. Bigger responses are rejected. See also max_scrape_size option at https://docs.victoriametrics.com/victoriametrics/sd_configs/#scrape_configs
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 16777216)
-promscrape.minResponseSizeForStreamParse size
The minimum target response size for automatic switching to stream parsing mode, which can reduce memory usage. See https://docs.victoriametrics.com/victoriametrics/vmagent/#stream-parsing-mode
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 1000000)
-promscrape.noStaleMarkers
Whether to disable sending Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. This option also disables populating the scrape_series_added metric. See https://prometheus.io/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series
-promscrape.nomad.waitTime duration
Wait time used by Nomad service discovery. Default value is used if not set
-promscrape.nomadSDCheckInterval duration
Interval for checking for changes in Nomad. This works only if nomad_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#nomad_sd_configs for details (default 30s)
-promscrape.openstackSDCheckInterval duration
Interval for checking for changes in openstack API server. This works only if openstack_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#openstack_sd_configs for details (default 30s)
-promscrape.ovhcloudSDCheckInterval duration
Interval for checking for changes in OVH Cloud API. This works only if ovhcloud_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#ovhcloud_sd_configs for details (default 30s)
-promscrape.puppetdbSDCheckInterval duration
Interval for checking for changes in PuppetDB API. This works only if puppetdb_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#puppetdb_sd_configs for details (default 30s)
-promscrape.seriesLimitPerTarget int
Optional limit on the number of unique time series a single scrape target can expose. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter for more info
-promscrape.streamParse
Whether to enable stream parsing for metrics obtained from scrape targets. This may be useful for reducing memory usage when millions of metrics are exposed per each scrape target. It is possible to set 'stream_parse: true' individually per each 'scrape_config' section in '-promscrape.config' for fine-grained control
-promscrape.suppressDuplicateScrapeTargetErrors
Whether to suppress 'duplicate scrape target' errors; see https://docs.victoriametrics.com/victoriametrics/vmagent/#troubleshooting for details
-promscrape.suppressScrapeErrors
Whether to suppress scrape errors logging. The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed. See also -promscrape.suppressScrapeErrorsDelay
-promscrape.suppressScrapeErrorsDelay duration
The delay for suppressing repeated scrape errors logging per each scrape targets. This may be used for reducing the number of log lines related to scrape errors. See also -promscrape.suppressScrapeErrors
-promscrape.vultrSDCheckInterval duration
Interval for checking for changes in Vultr. This works only if vultr_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#vultr_sd_configs for details (default 30s)
-promscrape.yandexcloudSDCheckInterval duration
Interval for checking for changes in Yandex Cloud API. This works only if yandexcloud_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/victoriametrics/sd_configs/#yandexcloud_sd_configs for details (default 30s)
-pushmetrics.disableCompression
Whether to disable request body compression when pushing metrics to every -pushmetrics.url
-pushmetrics.extraLabel array
@@ -1328,7 +1437,7 @@ Below is the output for `/path/to/vminsert -help`:
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-relabelConfig string
Optional path to a file with relabeling rules, which are applied to all the ingested metrics. The path can point either to local file or to http url. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#relabeling for details. The config is reloaded on SIGHUP signal
Optional path to a file with relabeling rules, which are applied to all the ingested metrics. The path can point either to local file or to http url. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#relabeling for details. The config is reloaded on SIGHUP signal or with the periodicity that is defined via -relabelConfigCheckInterval
-relabelConfigCheckInterval duration
Interval for checking for changes in '-relabelConfig' file. By default the checking is disabled. Send SIGHUP signal in order to force config check for changes
-replicationFactor int
@@ -1336,7 +1445,7 @@ Below is the output for `/path/to/vminsert -help`:
-rpc.disableCompression
Whether to disable compression for the data sent from vminsert to vmstorage. This reduces CPU usage at the cost of higher network bandwidth usage
-rpc.handshakeTimeout duration
Timeout for RPC handshake between vminsert/vmselect and vmstorage. Increase this value if transient handshake failures occur. (default 5s)
Timeout for RPC handshake between vminsert/vmselect and vmstorage. Increase this value if transient handshake failures occur. See https://docs.victoriametrics.com/victoriametrics/troubleshooting/#cluster-instability section for more details. (default 5s)
-search.denyPartialResponse
Whether to deny partial responses if a part of -storageNode instances fail to perform queries; this trades availability over consistency; see also -search.maxQueryDuration
-sortLabels
@@ -1356,7 +1465,7 @@ Below is the output for `/path/to/vminsert -help`:
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.
@@ -1411,7 +1520,7 @@ Below is the output for `/path/to/vmselect -help`:
-clusternative.disableCompression
Whether to disable compression of the data sent to vmselect via -clusternativeListenAddr. This reduces CPU usage at the cost of higher network bandwidth usage
-clusternative.maxConcurrentRequests int
The maximum number of concurrent vmselect requests the server can process at -clusternativeListenAddr. Default value depends on the number of available CPU cores. It shouldn't be high, since a single request usually saturates a CPU core at the underlying vmstorage nodes, and many concurrently executed requests may require high amounts of memory. See also -clusternative.maxQueueDuration
The maximum number of concurrent vmselect requests the server can process at -clusternativeListenAddr. It shouldn't be high, since a single request usually saturates a CPU core at the underlying vmstorage nodes, and many concurrently executed requests may require high amounts of memory. See also -clusternative.maxQueueDuration
-clusternative.maxQueueDuration duration
The maximum time the incoming query to -clusternativeListenAddr waits for execution when -clusternative.maxConcurrentRequests limit is reached (default 10s)
-clusternative.maxTagKeys int
@@ -1579,7 +1688,7 @@ Below is the output for `/path/to/vmselect -help`:
How many copies of every ingested sample is available across -storageNode nodes. vmselect continues returning full responses when up to replicationFactor-1 vmstorage nodes are temporarily unavailable. See also -globalReplicationFactor and -search.skipSlowReplicas (default 1)
Supports an array of `key:value` entries separated by comma or specified via multiple flags.
-rpc.handshakeTimeout duration
Timeout for RPC handshake between vminsert/vmselect and vmstorage. Increase this value if transient handshake failures occur. (default 5s)
Timeout for RPC handshake between vminsert/vmselect and vmstorage. Increase this value if transient handshake failures occur. See https://docs.victoriametrics.com/victoriametrics/troubleshooting/#cluster-instability section for more details. (default 5s)
-search.cacheTimestampOffset duration
The maximum duration since the current time for response data, which is always queried from the original raw data, without using the response cache. Increase this value if you see gaps in responses due to time synchronization issues between VictoriaMetrics and data sources (default 5m0s)
-search.denyPartialResponse
@@ -1609,7 +1718,7 @@ Below is the output for `/path/to/vmselect -help`:
-search.logSlowQueryStats duration
Log query statistics if execution time exceeding this value - see https://docs.victoriametrics.com/victoriametrics/query-stats . Zero disables slow query statistics logging. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-search.logSlowQueryStatsHeaders array
White list of header keys to log for queries exceeding -search.logSlowQueryStats. By default, no headers are logged. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/victoriametrics/enterprise/
White list of header keys to log for queries exceeding -search.logSlowQueryStats. Case insensitive. By default, no headers are logged. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-search.maxBinaryOpPushdownLabelValues instance
@@ -1722,7 +1831,7 @@ Below is the output for `/path/to/vmselect -help`:
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.
@@ -1874,8 +1983,7 @@ Below is the output for `/path/to/vmstorage -help`:
Whether to log new series. This option is for debug purposes only. It can lead to performance issues when big number of new series are ingested into VictoriaMetrics
-logNewSeriesAuthKey value
authKey, which must be passed in query string to /internal/log_new_series. It overrides -httpAuth.*
Flag value can be read from the given file when using -logNewSeriesAuthKey=file:///abs/path/to/file or -logNewSeriesAuthKey=file://./relative/path/to/file .
Flag value can be read from the given http/https url when using -logNewSeriesAuthKey=http://host/path or -logNewSeriesAuthKey=https://host/path
Flag value can be read from the given file when using -logNewSeriesAuthKey=file:///abs/path/to/file or -logNewSeriesAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -logNewSeriesAuthKey=http://host/path or -logNewSeriesAuthKey=https://host/path
-loggerDisableTimestamps
Whether to disable writing timestamps in logs
-loggerErrorsPerSecondLimit int
@@ -1949,7 +2057,7 @@ Below is the output for `/path/to/vmstorage -help`:
-rpc.disableCompression
Whether to disable compression of the data sent from vmstorage to vmselect. This reduces CPU usage at the cost of higher network bandwidth usage
-rpc.handshakeTimeout duration
Timeout for RPC handshake between vminsert/vmselect and vmstorage. Increase this value if transient handshake failures occur. (default 5s)
Timeout for RPC handshake between vminsert/vmselect and vmstorage. Increase this value if transient handshake failures occur. See https://docs.victoriametrics.com/victoriametrics/troubleshooting/#cluster-instability section for more details. (default 5s)
-search.maxConcurrentRequests int
The maximum number of concurrent vmselect requests the vmstorage can process at -vmselectAddr. It shouldn't be high, since a single request usually saturates a CPU core, and many concurrently executed requests may require high amounts of memory. See also -search.maxQueueDuration
-search.maxQueueDuration duration
@@ -1987,11 +2095,16 @@ Below is the output for `/path/to/vmstorage -help`:
-storage.cacheSizeMetricNamesStats size
Overrides max size for storage/metricNamesStatsTracker cache. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cache-tuning
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
-storage.cacheSizeStorageMetricName size
Overrides max size for storage/metricName cache. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cache-tuning
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
-storage.cacheSizeStorageTSID size
Overrides max size for storage/tsid cache. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cache-tuning
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
-storage.finalDedupScheduleCheckInterval duration
The interval for checking when the final deduplication process should be started. Storage unconditionally adds 25% jitter to the interval value on each check evaluation. Changing the interval to bigger values may delay downsampling and deduplication for historical data. See also https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#deduplication (default 1h0m0s)
-storage.idbPrefillStart duration
Specifies how early VictoriaMetrics starts pre-filling indexDB records before indexDB rotation. Starting the pre-fill process earlier can help reduce resource usage spikes during rotation. In most cases, this value should not be changed. The maximum allowed value is 23h. (default 1h0m0s)
-storage.maxDailySeries int
The maximum number of unique series can be added to the storage during the last 24 hours. Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cardinality-limiter . See also -storage.maxHourlySeries
-storage.maxHourlySeries int
@@ -2012,7 +2125,7 @@ Below is the output for `/path/to/vmstorage -help`:
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.

View File

@@ -273,9 +273,13 @@ Prometheus doesn't drop data during VictoriaMetrics restart. See [this article](
VictoriaMetrics provides UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`
(or at `http://<vmselect>:8481/select/<accountID>/vmui/` in [cluster version of VictoriaMetrics](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/)).
The UI allows exploring query results via graphs and tables. It also provides the following features:
- View [raw samples](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#raw-samples) via `Raw Query` tab {{% available_from "v1.107.0" %}}. Helps in debugging of [unexpected query results](https://docs.victoriametrics.com/victoriametrics/troubleshooting/#unexpected-query-results).
> See [VMUI at VictoriaMetrics playground](https://play.victoriametrics.com?g0.expr=up).
VMUI provides the following features:
- `Query` tab for ad-hoc queries in MetricsQL, supporting time series, tables and histogram representation
- `Raw Query` tab {{% available_from "v1.107.0" %}} for viewing [raw samples](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#raw-samples). Helps in debugging of [unexpected query results](https://docs.victoriametrics.com/victoriametrics/troubleshooting/#unexpected-query-results).
- Explore:
- [Metrics explorer](#metrics-explorer) - automatically builds graphs for selected metrics;
- [Cardinality explorer](#cardinality-explorer) - stats about existing metrics in TSDB;
@@ -286,46 +290,71 @@ The UI allows exploring query results via graphs and tables. It also provides th
- [Query analyzer](#query-tracing) - explore query results and traces loaded from JSON. See `Export query` button below;
- [WITH expressions playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/expand-with-exprs) - test how WITH expressions work;
- [Metric relabel debugger](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/relabeling) - debug [relabeling](#relabeling) rules.
- [Downsampling filters debugger](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/downsampling-filters-debug) - debug [downsampling](#downsampling) configs {{% available_from "v1.105.0" %}}.
- [Retention filters debugger](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/retention-filters-debug) - debug [retention filter](#retention-filters) configs {{% available_from "v1.105.0" %}}.
- [Downsampling filters debugger](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/downsampling-filters-debug) {{% available_from "v1.105.0" %}} - debug [downsampling](#downsampling) configs.
- [Retention filters debugger](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/retention-filters-debug) {{% available_from "v1.105.0" %}} - debug [retention filter](#retention-filters) configs.
VMUI provides auto-completion for [MetricsQL](https://docs.victoriametrics.com/victoriametrics/metricsql/) functions, metric names, label names and label values. The auto-completion can be enabled
by checking the `Autocomplete` toggle. When the auto-completion is disabled, it can still be triggered for the current cursor position by pressing `ctrl+space`.
**Querying**
Enter the MetricsQL query in the `Query` field and hit `Enter`. Multi-line queries can be entered by pressing `Shift-Enter`.
VMUI provides auto-completion for [MetricsQL](https://docs.victoriametrics.com/victoriametrics/metricsql/) functions, metric names, label names and label values.
The auto-completion can be enabled by checking the `Autocomplete` toggle. When the auto-completion is disabled, it can
still be triggered for the current cursor position by pressing `ctrl+space`.
To correlate between multiple queries on the same graph, click the `Add Query` button and enter an additional query.
Results for all the queries are displayed simultaneously on the same graph.
Results of a particular query can be hidden by clicking the `eye` icon on the right side of the input field.
Clicking on the `eye` icon while holding the `ctrl` key hides results of all other queries.
VMUI automatically adjusts the interval between datapoints on the graph depending on the horizontal resolution and on the selected time range.
The step value can be customized by changing `Step` value in the top-right corner.
Clicking on a line on the graph pins its tooltip. Multiple tooltips can be pinned. Press the `x` icon to unpin a tooltip.
Query history can be navigated by holding `Ctrl` (or `Cmd` on MacOS) and pressing `up` or `down` arrows on the keyboard while the cursor is located in the query input field.
VMUI automatically switches from graph view to heatmap view when the query returns [histogram](https://docs.victoriametrics.com/victoriametrics/keyconcepts/#histogram) buckets
(both [Prometheus histograms](https://prometheus.io/docs/concepts/metric_types/#histogram)
and [VictoriaMetrics histograms](https://valyala.medium.com/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350) are supported).
Try, for example, [this query](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/?g0.expr=sum%28rate%28vm_promscrape_scrape_duration_seconds_bucket%29%29+by+%28vmrange%29&g0.range_input=24h&g0.end_input=2023-04-10T17%3A46%3A12&g0.relative_time=last_24_hours&g0.step_input=31m).
To disable the heatmap view, press the settings icon in the top-right corner of the graph area and disable the `Histogram mode` toggle.
Graphs in `vmui` support scrolling and zooming:
**Time range**
* Select the needed time range on the graph in order to zoom in on it. Hold `ctrl` (or `cmd` on MacOS) and scroll down in order to zoom out.
* Hold `ctrl` (or `cmd` on MacOS) and scroll up in order to zoom in on the area under the cursor.
* Hold `ctrl` (or `cmd` on MacOS) and drag the graph to the left / right in order to move the displayed time range into the future / past.
The time range for graphs can be adjusted in multiple ways:
Query history can be navigated by holding `Ctrl` (or `Cmd` on MacOS) and pressing `up` or `down` arrows on the keyboard while the cursor is located in the query input field.
* Click on time picker in the top-right corner to select a relative (`Last N minutes`) or absolute time range (specify `From` and `To`);
* Zoom-in into graph by click-and-drag motion over the graph area;
* When hovering the cursor over the graph area, hold `ctrl` (or `cmd` on MacOS) and scroll up or down to zoom in or zoom out;
* When hovering the cursor over the graph area, hold `ctrl` (or `cmd` on MacOS) and drag the graph to the left / right to move the displayed time range into the future / past.
Multi-line queries can be entered by pressing `Shift-Enter` in query input field.
**Legend**
Legend is displayed below the graph area.
Clicking on an item in the legend hides all other items. Clicking on the item while holding the `ctrl` key hides
only this item.
Clicking on a label-value pair in an item automatically copies it to the clipboard, so it can be pasted later.
There are additional visualization settings in the top-right corner of the legend view: switching to table view,
hiding common labels, etc.
**Troubleshooting**
When querying the [backfilled data](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#backfilling)
or during [query troubleshooting](https://docs.victoriametrics.com/victoriametrics/troubleshooting/#unexpected-query-results),
it may be useful to disable the response cache by clicking the `Disable cache` checkbox.
VMUI automatically adjusts the interval between datapoints on the graph depending on the horizontal resolution and on the selected time range.
The step value can be customized by changing `Step value` input.
A query can be [traced](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#query-tracing)
by clicking on the `Trace query` toggle below the query input area and executing the query again. Once the trace is generated, click
on it to expand it for more details.
VMUI allows investigating correlations between multiple queries on the same graph. Just click the `Add Query` button,
enter an additional query in the newly appeared input field and press `Enter`.
Results for all the queries are displayed simultaneously on the same graph.
Graphs for a particular query can be temporarily hidden by clicking the `eye` icon on the right side of the input field.
When the `eye` icon is clicked while holding the `ctrl` key, query results for all the other queries become hidden,
leaving only the current query results.
The query and its trace can be exported by clicking the `debug` icon in the top-right corner of the trace block. The exported
file can be loaded again in VMUI on the `Tools=>Query Analyzer` page.
VMUI allows sharing query and [trace](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#query-tracing) results by clicking the
`Export query` button in the top-right corner of the graph area. The query and trace are exported as a file that can later
be loaded in VMUI via the `Query Analyzer` tool.
See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
The `Raw query` page displays raw, unmodified data. It can be useful for seeing the actual scrape interval or detecting
duplicate samples.
### Top queries
@@ -2421,6 +2450,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-dryRun
Whether to check config files without running VictoriaMetrics. The following config files are checked: -promscrape.config, -relabelConfig and -streamAggr.config. Unknown config entries aren't allowed in -promscrape.config by default. This can be changed with -promscrape.config.strictParse=false command-line flag
-enableMetadata
Whether to enable metadata processing for metrics scraped from targets, received via VictoriaMetrics remote write, Prometheus remote write v1 or OpenTelemetry protocol. See also remoteWrite.maxMetadataPerBlock
-enableTCP6
Whether to enable IPv6 for listening and dialing. By default, only IPv4 TCP and UDP are used
-envflag.enable
@@ -2538,8 +2569,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
Whether to log new series. This option is for debug purposes only. It can lead to performance issues when big number of new series are ingested into VictoriaMetrics
-logNewSeriesAuthKey value
authKey, which must be passed in query string to /internal/log_new_series. It overrides -httpAuth.*
Flag value can be read from the given file when using -logNewSeriesAuthKey=file:///abs/path/to/file or -logNewSeriesAuthKey=file://./relative/path/to/file .
Flag value can be read from the given http/https url when using -logNewSeriesAuthKey=http://host/path or -logNewSeriesAuthKey=https://host/path
Flag value can be read from the given file when using -logNewSeriesAuthKey=file:///abs/path/to/file or -logNewSeriesAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -logNewSeriesAuthKey=http://host/path or -logNewSeriesAuthKey=https://host/path
-loggerDisableTimestamps
Whether to disable writing timestamps in logs
-loggerErrorsPerSecondLimit int
@@ -2788,7 +2818,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.logSlowQueryStats duration
Log query statistics if execution time exceeding this value - see https://docs.victoriametrics.com/victoriametrics/query-stats . Zero disables slow query statistics logging. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-search.logSlowQueryStatsHeaders array
White list of header keys to log for queries exceeding -search.logSlowQueryStats. By default, no headers are logged. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/victoriametrics/enterprise/
White list of header keys to log for queries exceeding -search.logSlowQueryStats. Case insensitive. By default, no headers are logged. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-search.maxBinaryOpPushdownLabelValues instance
@@ -2915,14 +2945,16 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-storage.cacheSizeMetricNamesStats size
Overrides max size for storage/metricNamesStatsTracker cache. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cache-tuning
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
-storage.cacheSizeStorageMetricName size
Overrides max size for storage/metricName cache. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cache-tuning
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
-storage.cacheSizeStorageTSID size
Overrides max size for storage/tsid cache. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cache-tuning
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
-storage.finalDedupScheduleCheckInterval duration
The interval for checking when the final deduplication process should be started. Storage unconditionally adds 25% jitter to the interval value on each check evaluation. Changing the interval to bigger values may delay downsampling and deduplication for historical data. See also https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#deduplication (default 1h0m0s)
-storage.idbPrefillStart duration
Specifies how early VictoriaMetrics starts pre-filling indexDB records before indexDB rotation. Starting the pre-fill process earlier can help reduce resource usage spikes during rotation.
In most cases, this value should not be changed. The maximum allowed value is 23h. (default 1h0m0s)
Specifies how early VictoriaMetrics starts pre-filling indexDB records before indexDB rotation. Starting the pre-fill process earlier can help reduce resource usage spikes during rotation. In most cases, this value should not be changed. The maximum allowed value is 23h. (default 1h0m0s)
-storage.maxDailySeries int
The maximum number of unique series that can be added to the storage during the last 24 hours. Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#cardinality-limiter . See also -storage.maxHourlySeries
-storage.maxHourlySeries int
@@ -2959,7 +2991,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.


@@ -25,11 +25,18 @@ See also [LTS releases](https://docs.victoriametrics.com/victoriametrics/lts-rel
## tip
* FEATURE: upgrade Go builder from Go1.24.6 to Go1.25. See [Go1.25 release notes](https://tip.golang.org/doc/go1.25).
* FEATURE: [vmui](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#vmui): add export functionality for Query (Table view) and RawQuery tabs in CSV/JSON format. See [#9332](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9332).
* BUGFIX: [vmagent](https://docs.victoriametrics.com/victoriametrics/vmagent/): prevent remote write ingestion from stopping on push errors for [Google Pub/Sub](https://docs.victoriametrics.com/victoriametrics/vmagent/#writing-metrics-to-pubsub) integration.
* BUGFIX: [vmauth](https://docs.victoriametrics.com/victoriametrics/vmauth/): properly handle [mTLS authorization and routing](https://docs.victoriametrics.com/victoriametrics/vmauth/#mtls-based-request-routing). Previously it didn't work. See [#29](https://github.com/VictoriaMetrics/VictoriaLogs/issues/29).
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/victoriametrics/metricsql/): fix `timestamp` function compatibility with Prometheus when used with sub-expressions such as `timestamp(sum(foo))`. The fix applies only when `-search.disableImplicitConversion` flag is set. See more in [#9527-comment](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9527#issuecomment-3200646020) and [metricsql#55](https://github.com/VictoriaMetrics/metricsql/pull/55).
## [v1.124.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.124.0)
Released at 2025-08-15
**Update Note 1:** [vmsingle](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/): performance regression for queries that match [previously deleted time series](https://docs.victoriametrics.com/#how-to-delete-time-series). The issue affects installations that previously deleted a big number of time series but continue querying them. More details in [#9602](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9602). The degradation will be addressed in upcoming releases.
* SECURITY: upgrade Go builder from Go1.24.5 to Go1.24.6. See [the list of issues addressed in Go1.24.6](https://github.com/golang/go/issues?q=milestone%3AGo1.24.6+label%3ACherryPickApproved).
* FEATURE: [vmsingle](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/) and [vmselect](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/) in [VictoriaMetrics cluster](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/): protect graphite `/render` API endpoint with new flag `-search.maxGraphitePathExpressionLen`. See this PR [#9534](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/9534) for details.


@@ -14,18 +14,26 @@ aliases:
- /enterprise/index.html
- /enterprise/
---
VictoriaMetrics components are provided in two kinds - [Community edition](https://victoriametrics.com/products/open-source/)
and [Enterprise edition](https://victoriametrics.com/products/enterprise/).
VictoriaMetrics community components are open source and are free to use - see [the source code](https://github.com/VictoriaMetrics/VictoriaMetrics/)
and [the license](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/LICENSE).
VictoriaMetrics and [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) components are provided
in two kinds - [Community edition](https://victoriametrics.com/products/open-source/) and [Enterprise edition](https://victoriametrics.com/products/enterprise/).
VictoriaMetrics Enterprise components are available in binary form at [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
and at [Docker Hub](https://hub.docker.com/u/victoriametrics) and [Quay](https://quay.io/organization/victoriametrics). Enterprise binaries and packages have `enterprise` suffix in their names.
VictoriaMetrics and VictoriaLogs community components are open source and are free to use:
- See [VictoriaMetrics source code](https://github.com/VictoriaMetrics/VictoriaMetrics/) and [VictoriaMetrics license](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/LICENSE).
- See [VictoriaLogs source code](https://github.com/VictoriaMetrics/VictoriaLogs/) and [VictoriaLogs license](https://github.com/VictoriaMetrics/VictoriaLogs/blob/master/LICENSE).
Enterprise components of VictoriaMetrics and VictoriaLogs are available at the following places:
- Binary executables are available at [the releases page for VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
and [the release page for VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/releases/latest).
- Docker images are available at [Docker Hub](https://hub.docker.com/u/victoriametrics) and [Quay](https://quay.io/organization/victoriametrics).
Enterprise executables and Docker images have `enterprise` suffix in their names and tags.
## Valid cases for VictoriaMetrics Enterprise
The use of VictoriaMetrics Enterprise components is permitted in the following cases:
The use of Enterprise components of VictoriaMetrics and VictoriaLogs is permitted in the following cases:
- Evaluation use in non-production setups. Please request a trial license [here](https://victoriametrics.com/products/enterprise/trial/)
and then pass it via `-license` or `-licenseFile` command-line flags as described [in these docs](#running-victoriametrics-enterprise).
@@ -35,7 +43,7 @@ The use of VictoriaMetrics Enterprise components is permitted in the following c
- [VictoriaMetrics Cloud](https://docs.victoriametrics.com/victoriametrics-cloud/) is built on top of VictoriaMetrics Enterprise.
See [these docs](#running-victoriametrics-enterprise) for details on how to run VictoriaMetrics Enterprise.
See [these docs](#running-victoriametrics-enterprise) for details on how to run Enterprise components of VictoriaMetrics and VictoriaLogs.
## VictoriaMetrics Enterprise features
@@ -75,9 +83,28 @@ On top of this, Enterprise package of VictoriaMetrics includes the following fea
Contact us via [this page](https://victoriametrics.com/products/enterprise/) if you are interested in VictoriaMetrics Enterprise.
## VictoriaLogs Enterprise features
VictoriaLogs enterprise includes [all the features of the community edition](https://docs.victoriametrics.com/victorialogs/),
plus the following additional features:
- First-class consulting and technical support provided by the core VictoriaMetrics dev team.
- [Monitoring of monitoring](https://victoriametrics.com/products/mom/) - this feature allows forecasting
and preventing possible issues in VictoriaMetrics setups.
- [Enterprise security compliance](https://victoriametrics.com/security/).
- Prioritizing of feature requests from Enterprise customers.
On top of this, Enterprise package of VictoriaLogs includes the following features:
- [Automatic issuing of TLS certificates](https://docs.victoriametrics.com/victorialogs/#automatic-issuing-of-tls-certificates).
- [mTLS for all the VictoriaMetrics components](https://docs.victoriametrics.com/victorialogs/#mtls).
- [mTLS for communications between cluster components](https://docs.victoriametrics.com/victorialogs/cluster/#mtls).
Contact us via [this page](https://victoriametrics.com/products/enterprise/) if you are interested in VictoriaLogs Enterprise.
## Running VictoriaMetrics Enterprise
VictoriaMetrics Enterprise components are available in the following forms:
Enterprise components of VictoriaMetrics and VictoriaLogs are available in the following forms:
- [Binary releases](#binary-releases)
- [Docker images](#docker-images)
@@ -86,15 +113,16 @@ VictoriaMetrics Enterprise components are available in the following forms:
### Binary releases
It is allowed to run VictoriaMetrics Enterprise components in [cases listed here](#valid-cases-for-victoriametrics-enterprise).
It is allowed to run VictoriaMetrics and VictoriaLogs Enterprise components in [cases listed here](#valid-cases-for-victoriametrics-enterprise).
Binary releases of VictoriaMetrics Enterprise are available [at the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest).
Binary releases of Enterprise components are available at [the releases page for VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
and [the releases page for VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaLogs/releases/latest).
Enterprise binaries and packages have `enterprise` suffix in their names. For example, `victoria-metrics-linux-amd64-v1.124.0-enterprise.tar.gz`.
In order to run binary release of VictoriaMetrics Enterprise component, please download the `*-enterprise.tar.gz` archive for your OS and architecture
from the [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest) and unpack it. Then run the unpacked binary.
In order to run binary release of Enterprise component, please download the `*-enterprise.tar.gz` archive for your OS and architecture
from the corresponding releases page and unpack it. Then run the unpacked binary.
All the VictoriaMetrics Enterprise components require specifying the following command-line flags:
All the Enterprise components of VictoriaMetrics and VictoriaLogs require specifying the following command-line flags:
* `-license` - this flag accepts VictoriaMetrics Enterprise license key, which can be obtained at [this page](https://victoriametrics.com/products/enterprise/trial/)
* `-licenseFile` - this flag accepts a path to file with VictoriaMetrics Enterprise license key,
@@ -120,9 +148,9 @@ Alternatively, VictoriaMetrics Enterprise license can be stored in the file and
### Docker images
It is allowed to run VictoriaMetrics Enterprise components in [cases listed here](#valid-cases-for-victoriametrics-enterprise).
It is allowed to run VictoriaMetrics and VictoriaLogs Enterprise components in [cases listed here](#valid-cases-for-victoriametrics-enterprise).
Docker images for VictoriaMetrics Enterprise are available at VictoriaMetrics [Docker Hub](https://hub.docker.com/u/victoriametrics) and [Quay](https://quay.io/organization/victoriametrics).
Docker images for Enterprise components are available at [VictoriaMetrics Docker Hub](https://hub.docker.com/u/victoriametrics) and [VictoriaMetrics Quay](https://quay.io/organization/victoriametrics).
Enterprise docker images have `enterprise` suffix in their names. For example, `victoriametrics/victoria-metrics:v1.124.0-enterprise`.
In order to run Docker image of VictoriaMetrics Enterprise component, it is required to provide the license key via command-line
@@ -165,17 +193,17 @@ The example assumes that the license file is stored at `/vm-license` on the host
### Helm charts
It is allowed to run VictoriaMetrics Enterprise components in [cases listed here](#valid-cases-for-victoriametrics-enterprise).
It is allowed to run VictoriaMetrics and VictoriaLogs Enterprise components in [cases listed here](#valid-cases-for-victoriametrics-enterprise).
Helm charts for VictoriaMetrics Enterprise components are available [here](https://github.com/VictoriaMetrics/helm-charts).
Helm charts for Enterprise components are available [here](https://github.com/VictoriaMetrics/helm-charts).
In order to run VictoriaMetrics Enterprise helm chart it is required to provide the license key via `license` value in `values.yaml` file
In order to run Enterprise helm chart it is required to provide the license key via `license` value in `values.yaml` file
and adjust the image tag to the Enterprise one as described [here](#docker-images).
Enterprise license key can be obtained at [this page](https://victoriametrics.com/products/enterprise/trial/).
For example, the following `values` file for [VictoriaMetrics single-node chart](https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-single)
is used to provide key in plain-text:
is used to provide the license key in plain-text:
```yaml
server:
@@ -186,7 +214,7 @@ license:
key: {BASE64_ENCODED_LICENSE_KEY}
```
In order to provide key via existing secret, the following values file is used:
In order to provide the license key via existing secret, the following values file is used:
```yaml
server:
@@ -220,15 +248,15 @@ Note that license key provided by using secret is mounted in a file. This allows
### Kubernetes operator
It is allowed to run VictoriaMetrics Enterprise components in [cases listed here](#valid-cases-for-victoriametrics-enterprise).
It is allowed to run VictoriaMetrics and VictoriaLogs Enterprise components in [cases listed here](#valid-cases-for-victoriametrics-enterprise).
VictoriaMetrics Enterprise components can be deployed via [VictoriaMetrics operator](https://docs.victoriametrics.com/operator/).
Enterprise components can be deployed via [VictoriaMetrics operator](https://docs.victoriametrics.com/operator/).
In order to use Enterprise components it is required to provide the license key via `license` field and adjust the image tag to the enterprise one.
Enterprise license key can be obtained at [this page](https://victoriametrics.com/products/enterprise/trial/).
For example, the following custom resource for [VictoriaMetrics single-node](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/)
is used to provide key in plain-text:
is used to provide the license key in plain-text:
```yaml
apiVersion: operator.victoriametrics.com/v1beta1
@@ -243,7 +271,7 @@ spec:
tag: v1.124.0-enterprise
```
In order to provide key via existing secret, the following custom resource is used:
In order to provide the license key via existing secret, the following custom resource is used:
```yaml
apiVersion: operator.victoriametrics.com/v1beta1
@@ -283,10 +311,10 @@ See full list of CRD specifications [here](https://docs.victoriametrics.com/oper
### FIPS Compatibility
VictoriaMetrics Enterprise supports [FIPS 140-3](https://en.wikipedia.org/wiki/FIPS_140-3) compliant mode starting with version {{% available_from "v1.118.0" %}}, using the [Go FIPS 140-3 Cryptographic Module](https://go.dev/blog/fips140).
This ensures all cryptographic operations use a validated FIPS module.
VictoriaMetrics Enterprise supports [FIPS 140-3](https://en.wikipedia.org/wiki/FIPS_140-3) compliant mode starting with version {{% available_from "v1.118.0" %}},
using the [Go FIPS 140-3 Cryptographic Module](https://go.dev/blog/fips140). This ensures all cryptographic operations use a validated FIPS module.
Builds are available for amd64 and arm64
Builds are available for amd64 and arm64 architectures.
Example archive:
@@ -303,7 +331,7 @@ Example Docker image:
## Monitoring license expiration
All the VictoriaMetrics Enterprise components expose the following metrics at the `/metrics` page:
All the VictoriaMetrics and VictoriaLogs Enterprise components expose the following metrics at the `/metrics` page:
* `vm_license_expires_at` - license expiration date in unix timestamp format
* `vm_license_expires_in_seconds` - the number of seconds left until the license expires


@@ -1484,7 +1484,7 @@ It is safe sharing the collected profiles from security point of view, since the
```bash
./vmagent -help
vmagent collects metrics data via popular data ingestion protocols and routes them to VictoriaMetrics.
vmagent collects metrics data via popular data ingestion protocols and routes it to VictoriaMetrics.
See the docs at https://docs.victoriametrics.com/victoriametrics/vmagent/ .
@@ -1556,7 +1556,7 @@ See the docs at https://docs.victoriametrics.com/victoriametrics/vmagent/ .
Supports array of values separated by comma or specified via multiple flags.
Empty values are set to false.
-gcp.pubsub.subscribe.topicSubscription.messageFormat array
Message format for the corresponding -gcp.pubsub.subscribe.topicSubscription. Valid formats: influx, prometheus, promremotewrite, graphite, jsonline . See https://docs.victoriametrics.com/victoriametrics/vmagent/#reading-metrics-from-pubsub . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Message format for the corresponding -gcp.pubsub.subcribe.topicSubscription. Valid formats: influx, prometheus, promremotewrite, graphite, jsonline . See https://docs.victoriametrics.com/victoriametrics/vmagent/#reading-metrics-from-pubsub . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-graphite.sanitizeMetricName
@@ -2018,12 +2018,16 @@ See the docs at https://docs.victoriametrics.com/victoriametrics/vmagent/ .
Empty values are set to default value.
-remoteWrite.relabelConfig string
Optional path to file with relabeling configs, which are applied to all the metrics before sending them to -remoteWrite.url. See also -remoteWrite.urlRelabelConfig. The path can point either to local file or to http url. See https://docs.victoriametrics.com/victoriametrics/relabeling/
-remoteWrite.retryMaxInterval array
The maximum delay between retry attempts to send a block of data to the corresponding -remoteWrite.url. The delay doubles with each retry until this maximum is reached, after which it remains constant. See also -remoteWrite.retryMinInterval (default 1m0s)
Supports array of values separated by comma or specified via multiple flags.
Empty values are set to default value.
-remoteWrite.retryMaxTime array
The max time spent on retry attempts to send a block of data to the corresponding -remoteWrite.url. Change this value if it is expected for -remoteWrite.url to be unreachable for more than -remoteWrite.retryMaxTime. See also -remoteWrite.retryMinInterval (default 1m0s)
The max time spent on retry attempts to send a block of data to the corresponding -remoteWrite.url. This flag is deprecated, use -remoteWrite.retryMaxInterval instead (default 1m0s)
Supports array of values separated by comma or specified via multiple flags.
Empty values are set to default value.
-remoteWrite.retryMinInterval array
The minimum delay between retry attempts to send a block of data to the corresponding -remoteWrite.url. Every next retry attempt will double the delay to prevent hammering of remote database. See also -remoteWrite.retryMaxTime (default 1s)
The minimum delay between retry attempts to send a block of data to the corresponding -remoteWrite.url. Every next retry attempt will double the delay to prevent hammering of remote database. See also -remoteWrite.retryMaxInterval (default 1s)
Supports array of values separated by comma or specified via multiple flags.
Empty values are set to default value.
-remoteWrite.roundDigits array
@@ -2147,7 +2151,7 @@ See the docs at https://docs.victoriametrics.com/victoriametrics/vmagent/ .
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.


@@ -22,12 +22,11 @@ protocol and require `-remoteWrite.url` to be configured.
`vmalert` is heavily inspired by [Prometheus](https://prometheus.io/docs/alerting/latest/overview/)
implementation and aims to be compatible with its syntax.
A [single-node](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#vmalert)
or [cluster version](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#vmalert)
of VictoriaMetrics are capable of proxying requests to `vmalert` via `-vmalert.proxyURL` command-line flag.
Use this feature for the following cases:
* for proxying requests from [Grafana Alerting UI](https://grafana.com/docs/grafana/latest/alerting/);
* for accessing `vmalert`'s UI through VictoriaMetrics Web interface.
Configure `-vmalert.proxyURL` on VictoriaMetrics [single-node](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#vmalert)
or [vmselect in cluster version](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#vmalert)
to proxy requests to `vmalert`. Proxying is needed for the following cases:
* to proxy requests from [Grafana Alerting UI](https://grafana.com/docs/grafana/latest/alerting/);
* to access `vmalert`'s UI through [VictoriaMetrics Web interface](https://docs.victoriametrics.com/#vmui).
[VictoriaMetrics Cloud](https://console.victoriametrics.cloud/signUp?utm_source=website&utm_campaign=docs_vm_vmalert_intro)
provides out-of-the-box alerting functionality based on `vmalert`. This service simplifies the setup
@@ -1621,7 +1620,7 @@ The shortlist of configuration flags is the following:
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.


@@ -1415,7 +1415,7 @@ See the docs at https://docs.victoriametrics.com/victoriametrics/vmauth/ .
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.


@@ -504,7 +504,7 @@ Run `vmbackup -help` in order to see all the available options:
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.


@@ -636,7 +636,7 @@ command-line flags:
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.


@@ -508,7 +508,7 @@ Below is the list of configuration flags (it can be viewed by running `./vmgatew
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.


@@ -208,7 +208,7 @@ Run `vmrestore -help` in order to see all the available options:
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/victoriametrics/enterprise/
Supports an array of values separated by comma or specified via multiple flags.

go.mod

@@ -30,7 +30,7 @@ require (
github.com/VictoriaMetrics/easyproto v0.1.4
github.com/VictoriaMetrics/fastcache v1.13.0
github.com/VictoriaMetrics/metrics v1.39.1
github.com/VictoriaMetrics/metricsql v0.84.6
github.com/VictoriaMetrics/metricsql v0.84.7
github.com/aws/aws-sdk-go-v2 v1.37.1
github.com/aws/aws-sdk-go-v2/config v1.30.2
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.18.2

go.sum

@@ -60,8 +60,8 @@ github.com/VictoriaMetrics/fastcache v1.13.0 h1:AW4mheMR5Vd9FkAPUv+NH6Nhw+fmbTMG
github.com/VictoriaMetrics/fastcache v1.13.0/go.mod h1:hHXhl4DA2fTL2HTZDJFXWgW0LNjo6B+4aj2Wmng3TjU=
github.com/VictoriaMetrics/metrics v1.39.1 h1:AT7jz7oSpAK9phDl5O5Tmy06nXnnzALwqVnf4ros3Ow=
github.com/VictoriaMetrics/metrics v1.39.1/go.mod h1:XE4uudAAIRaJE614Tl5HMrtoEU6+GDZO4QTnNSsZRuA=
github.com/VictoriaMetrics/metricsql v0.84.6 h1:r1rl05prim/r+Me4BUULaZQYXn2eZa3dnrtk+hY3X90=
github.com/VictoriaMetrics/metricsql v0.84.6/go.mod h1:d4EisFO6ONP/HIGDYTAtwrejJBBeKGQYiRl095bS4QQ=
github.com/VictoriaMetrics/metricsql v0.84.7 h1:zMONjtEULMbwEYU/qL4Hkc3GDfTTrv1bO+a9lmJf3do=
github.com/VictoriaMetrics/metricsql v0.84.7/go.mod h1:d4EisFO6ONP/HIGDYTAtwrejJBBeKGQYiRl095bS4QQ=
github.com/VividCortex/ewma v1.2.0 h1:f58SaIzcDXrSy3kWaHNvuJgJ3Nmz59Zji6XoJR/q1ow=
github.com/VividCortex/ewma v1.2.0/go.mod h1:nz4BbCtbLyFDeC9SUHbtcT5644juEuWfUAUnGx7j5l4=
github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b h1:mimo19zliBX/vSQ6PWWSL9lK8qwHozUj03+zLoEB8O0=


@@ -28,7 +28,7 @@ func initExposeMetadata() {
metrics.ExposeMetadata(*exposeMetadata)
}
var versionRe = regexp.MustCompile(`v\d+\.\d+\.\d+(?:-enterprise)?(?:-cluster.*)?`)
var versionRe = regexp.MustCompile(`v\d+\.\d+\.\d+(?:-enterprise)?(?:-cluster)?`)
// WritePrometheusMetrics writes all the registered metrics to w in Prometheus exposition format.
func WritePrometheusMetrics(w io.Writer) {
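The tightened `versionRe` above stops matching at the literal `-cluster` suffix instead of greedily consuming the rest of the string. A minimal sketch of the difference, using a hypothetical version string:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	before := regexp.MustCompile(`v\d+\.\d+\.\d+(?:-enterprise)?(?:-cluster.*)?`)
	after := regexp.MustCompile(`v\d+\.\d+\.\d+(?:-enterprise)?(?:-cluster)?`)

	s := "v1.124.0-enterprise-cluster-scratch-build" // hypothetical input
	fmt.Println(before.FindString(s))                // v1.124.0-enterprise-cluster-scratch-build
	fmt.Println(after.FindString(s))                 // v1.124.0-enterprise-cluster
}
```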


@@ -4,6 +4,7 @@ import (
"fmt"
"net/http"
"strconv"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
)
@@ -34,4 +35,31 @@ func GetDuration(r *http.Request, argKey string, defaultValue int64) (int64, err
return msecs, nil
}
// GetDurationRaw returns time.Duration from the given argKey query arg.
func GetDurationRaw(r *http.Request, argKey string, defaultValue time.Duration) (time.Duration, error) {
argValue := r.FormValue(argKey)
if len(argValue) == 0 {
return defaultValue, nil
}
if argValue == "undefined" {
// This hack is needed for Grafana, which may send undefined value
return defaultValue, nil
}
secs, err := strconv.ParseFloat(argValue, 64)
if err != nil {
// Try parsing string format
d, err := timeutil.ParseDuration(argValue)
if err != nil {
return 0, fmt.Errorf("cannot parse %q=%q: %w", argKey, argValue, err)
}
return d, nil
}
d := time.Duration(secs * float64(time.Second))
msecs := d.Milliseconds()
if msecs <= 0 || msecs > maxDurationMsecs {
return 0, fmt.Errorf("%s=%s is out of allowed range [%s ... %s]", argKey, d, time.Millisecond, time.Duration(maxDurationMsecs)*time.Millisecond)
}
return d, nil
}
const maxDurationMsecs = 100 * 365 * 24 * 3600 * 1000
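A hedged usage sketch for the new `GetDurationRaw` helper, assuming it lives in the `lib/httputil` package shown in this diff; the handler and the `timeout` query-arg name are illustrative only:

```go
package main

import (
	"net/http"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil" // assumed import path
)

func handler(w http.ResponseWriter, r *http.Request) {
	// "timeout" accepts float seconds ("1.5") or duration strings ("90s", "1h5m");
	// Grafana's literal "undefined" falls back to the default value.
	d, err := httputil.GetDurationRaw(r, "timeout", 30*time.Second)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	w.Write([]byte(d.String()))
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```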


@@ -26,12 +26,10 @@ func NewTCPListener(name, addr string, useProxyProtocol bool, tlsConfig *tls.Con
if err != nil {
return nil, err
}
if tlsConfig != nil {
ln = tls.NewListener(ln, tlsConfig)
}
ms := metrics.GetDefaultSet()
tln := &TCPListener{
Listener: ln,
tlsConfig: tlsConfig,
useProxyProtocol: useProxyProtocol,
accepts: ms.NewCounter(fmt.Sprintf(`vm_tcplistener_accepts_total{name=%q, addr=%q}`, name, addr)),
@@ -75,6 +73,8 @@ func GetTCPNetwork() string {
type TCPListener struct {
net.Listener
tlsConfig *tls.Config
accepts *metrics.Counter
acceptErrors *metrics.Counter
@@ -109,6 +109,16 @@ func (ln *TCPListener) Accept() (net.Conn, error) {
Conn: conn,
cm: &ln.cm,
}
return sc, nil
if ln.tlsConfig == nil {
return sc, nil
}
// Make sure we return tls.Conn instead of statConn, since servers, which use this listener,
// such as net/http.Server, assume that the TLS connection must be represented as tls.Conn.
// Otherwise they cannot initialize internal fields such as net/http.Request.TLS.
// This results in non-working mTLS-based authorization.
//
// See https://github.com/VictoriaMetrics/VictoriaLogs/issues/29
return tls.Server(sc, ln.tlsConfig), nil
}
}
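The fix above moves TLS wrapping from the listener to the accepted connection, so the `*tls.Conn` ends up outermost. A minimal sketch of why the ordering matters; the type and function names are illustrative, not the actual listener code:

```go
package main

import (
	"crypto/tls"
	"net"
)

// countingConn is a hypothetical stand-in for the statConn metrics wrapper.
type countingConn struct{ net.Conn }

// wrapAccepted keeps the TLS wrapper outermost: net/http type-asserts the
// accepted connection to *tls.Conn in order to populate http.Request.TLS,
// which mTLS-based authorization relies on. Wrapping the other way around
// (stats wrapper outside the tls.Conn) hides the type and breaks that assertion.
func wrapAccepted(raw net.Conn, cfg *tls.Config) net.Conn {
	return tls.Server(countingConn{Conn: raw}, cfg)
}

func main() {} // sketch only; wrapAccepted shows the wrapping order
```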


@@ -38,13 +38,13 @@ func (m *Sample) marshalToSizedBuffer(dst []byte) (int, error) {
if m.Timestamp != 0 {
i = encodeVarint(dst, i, uint64(m.Timestamp))
i--
dst[i] = 0x10
dst[i] = (2 << 3)
}
if m.Value != 0 {
i -= 8
binary.LittleEndian.PutUint64(dst[i:], uint64(math.Float64bits(float64(m.Value))))
i--
dst[i] = 0x9
dst[i] = (1 << 3) | 1
}
return len(dst) - i, nil
}
@@ -59,7 +59,7 @@ func (m *TimeSeries) marshalToSizedBuffer(dst []byte) (int, error) {
i -= size
i = encodeVarint(dst, i, uint64(size))
i--
dst[i] = 0x12
dst[i] = (2 << 3) | 2
}
for j := len(m.Labels) - 1; j >= 0; j-- {
size, err := m.Labels[j].marshalToSizedBuffer(dst[:i])
@@ -69,7 +69,7 @@ func (m *TimeSeries) marshalToSizedBuffer(dst []byte) (int, error) {
i -= size
i = encodeVarint(dst, i, uint64(size))
i--
dst[i] = 0xa
dst[i] = (1 << 3) | 2
}
return len(dst) - i, nil
}
@@ -81,14 +81,14 @@ func (m *Label) marshalToSizedBuffer(dst []byte) (int, error) {
copy(dst[i:], m.Value)
i = encodeVarint(dst, i, uint64(len(m.Value)))
i--
dst[i] = 0x12
dst[i] = (2 << 3) | 2
}
if len(m.Name) > 0 {
i -= len(m.Name)
copy(dst[i:], m.Name)
i = encodeVarint(dst, i, uint64(len(m.Name)))
i--
dst[i] = 0xa
dst[i] = (1 << 3) | 2
}
return len(dst) - i, nil
}
@@ -144,7 +144,7 @@ func (m *WriteRequest) marshalToSizedBuffer(dst []byte) (int, error) {
i -= size
i = encodeVarint(dst, i, uint64(size))
i--
dst[i] = 0x1a
dst[i] = (3 << 3) | 2
}
for j := len(m.Timeseries) - 1; j >= 0; j-- {
size, err := m.Timeseries[j].marshalToSizedBuffer(dst[:i])
@@ -154,7 +154,7 @@ func (m *WriteRequest) marshalToSizedBuffer(dst []byte) (int, error) {
i -= size
i = encodeVarint(dst, i, uint64(size))
i--
dst[i] = 0xa
dst[i] = (1 << 3) | 2
}
return len(dst) - i, nil
}
@@ -196,36 +196,36 @@ func (m *MetricMetadata) marshalToSizedBuffer(dst []byte) (int, error) {
copy(dst[i:], m.Unit)
i = encodeVarint(dst, i, uint64(len(m.Unit)))
i--
dst[i] = 0x2a
dst[i] = (5 << 3) | 2
}
if len(m.Help) > 0 {
i -= len(m.Help)
copy(dst[i:], m.Help)
i = encodeVarint(dst, i, uint64(len(m.Help)))
i--
dst[i] = 0x22
dst[i] = (4 << 3) | 2
}
if len(m.MetricFamilyName) > 0 {
i -= len(m.MetricFamilyName)
copy(dst[i:], m.MetricFamilyName)
i = encodeVarint(dst, i, uint64(len(m.MetricFamilyName)))
i--
dst[i] = 0x12
dst[i] = (2 << 3) | 2
}
if m.Type != 0 {
i = encodeVarint(dst, i, uint64(m.Type))
i--
dst[i] = 0x8
dst[i] = (1 << 3)
}
if m.AccountID != 0 {
i = encodeVarint(dst, i, uint64(m.AccountID))
i--
dst[i] = 0x58
dst[i] = (11 << 3)
}
if m.ProjectID != 0 {
i = encodeVarint(dst, i, uint64(m.ProjectID))
i--
dst[i] = 0x60
dst[i] = (12 << 3)
}
return len(dst) - i, nil
}
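The rewritten byte constants above encode the same values, just spelled out as protobuf key bytes: for field numbers below 16 the key is a single byte `(field_number << 3) | wire_type`. A quick self-contained check:

```go
package main

import "fmt"

// wireKey computes the single-byte protobuf key for small field numbers.
// Wire types: 0 = varint, 1 = fixed64, 2 = length-delimited.
func wireKey(fieldNum, wireType byte) byte {
	return fieldNum<<3 | wireType
}

func main() {
	fmt.Printf("%#x\n", wireKey(2, 0)) // 0x10 - Sample.Timestamp (varint)
	fmt.Printf("%#x\n", wireKey(1, 1)) // 0x9  - Sample.Value (fixed64)
	fmt.Printf("%#x\n", wireKey(2, 2)) // 0x12 - TimeSeries.Samples (length-delimited)
}
```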


@@ -1,57 +0,0 @@
package promutil
import (
"reflect"
"runtime"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
)
func TestLabelsCompressorV2(t *testing.T) {
lc := NewLabelsCompressorV2()
labels1 := []prompb.Label{
{Name: "label1", Value: "value1"},
{Name: "label2", Value: "value2"},
{Name: "label3", Value: "value3"},
}
labels2 := []prompb.Label{
{Name: "label3", Value: "value3"},
{Name: "label4", Value: "value4"},
{Name: "label5", Value: "value5"},
}
compressed1 := lc.Compress(labels1)
compressed2 := lc.Compress(labels2)
runtime.GC()
cleaned := lc.Cleanup()
if cleaned != 0 {
t.Fatalf("lc.Cleanup() should've cleaned zero unused labels, got %d", cleaned)
}
decompressed1 := compressed1.Decompress()
if !reflect.DeepEqual(labels1, decompressed1) {
t.Fatalf("decompressed labels1 do not match original: got %+v, want %+v", decompressed1, labels1)
}
compressed1 = Key{}
runtime.GC()
cleaned = lc.Cleanup()
if cleaned != 2 {
t.Fatalf("lc.Cleanup() should've cleaned two unused labels, got %d", cleaned)
}
decompressed2 := compressed2.Decompress()
if !reflect.DeepEqual(labels2, decompressed2) {
t.Fatalf("decompressed labels2 do not match original: got %+v, want %+v", decompressed2, labels2)
}
compressed2 = Key{}
runtime.GC()
cleaned = lc.Cleanup()
if cleaned != 3 {
t.Fatalf("lc.Cleanup() should've cleaned two unused labels, got %d", cleaned)
}
}


@@ -1,102 +0,0 @@
package promutil
import (
"log"
"sync"
"time"
"weak"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
)
type Key struct {
labelRefs []labelRef
}
func (k Key) Decompress() []prompb.Label {
res := make([]prompb.Label, 0, len(k.labelRefs))
for i := range k.labelRefs {
res = append(res, cloneLabel(*k.labelRefs[i].label))
}
return res
}
type labelRef struct {
label *prompb.Label
}
type LabelsCompressorV2 struct {
mux sync.Mutex
labels map[prompb.Label]weak.Pointer[prompb.Label]
}
func NewLabelsCompressorV2() *LabelsCompressorV2 {
lc := &LabelsCompressorV2{
labels: make(map[prompb.Label]weak.Pointer[prompb.Label]),
}
go lc.cleanup()
return lc
}
func (lc *LabelsCompressorV2) Compress(labels []prompb.Label) Key {
lc.mux.Lock()
defer lc.mux.Unlock()
labelRefs := make([]labelRef, 0, len(labels))
for i := range labels {
wl := lc.labels[labels[i]]
l := wl.Value()
if l == nil {
labelKey := cloneLabel(labels[i])
labelVal := cloneLabel(labels[i])
wl = weak.Make(&labelVal)
lc.labels[labelKey] = wl
l = wl.Value()
}
labelRefs = append(labelRefs, labelRef{
label: l,
})
}
return Key{
labelRefs: labelRefs,
}
}
func (lc *LabelsCompressorV2) cleanup() {
t := time.NewTicker(5 * time.Minute)
defer t.Stop()
for {
select {
case <-t.C:
lc.Cleanup()
}
}
}
func (lc *LabelsCompressorV2) Cleanup() int {
lc.mux.Lock()
defer lc.mux.Unlock()
count := 0
for l, wl := range lc.labels {
if wl.Value() != nil {
continue
}
log.Println(l)
count++
delete(lc.labels, l)
}
return count
}
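The deleted `LabelsCompressorV2` leaned on the `weak` package (Go 1.24+): entries whose `weak.Pointer` no longer resolves are removed by `Cleanup`. A minimal sketch of that pattern; reclamation timing is up to the GC, so the final print is expected but not guaranteed after a single cycle:

```go
package main

import (
	"fmt"
	"runtime"
	"weak"
)

func main() {
	v := new(int)
	*v = 42
	p := weak.Make(v) // weak pointer: does not keep *v alive
	fmt.Println(*p.Value())

	v = nil // drop the last strong reference
	runtime.GC()
	if p.Value() == nil {
		fmt.Println("reclaimed: Cleanup would delete this map entry")
	}
}
```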


@@ -2105,6 +2105,43 @@ func hasCompositeTagFilters(tfs []*tagFilter, prefix []byte) bool {
return false
}
// MatchSimulatedSamples filters the given simulatedSamples against the provided tag filters.
// It returns only the simulated samples that match any of the given tag filter sets.
// This function is used for debugging and testing purposes to simulate metric queries.
func MatchSimulatedSamples(simulatedSamples []*SimulatedSamples, tagFilterss [][]TagFilter) ([]*SimulatedSamples, error) {
var kb bytesutil.ByteBuffer
matchedSamples := make([]*SimulatedSamples, 0, 1)
for _, rawTfs := range tagFilterss {
tfs := NewTagFilters()
for _, tf := range rawTfs {
err := tfs.Add(tf.Key, tf.Value, tf.IsNegative, tf.IsRegexp)
if err != nil {
return nil, fmt.Errorf("cannot add tagFilter %s: %w", tf.String(), err)
}
}
for idx, mn := range simulatedSamples {
ok, err := matchTagFilters(&mn.Name, toTFPointers(tfs.tfs), &kb)
if err != nil {
return nil, fmt.Errorf("cannot match MetricName %s against tagFilters: %w", mn.Name.String(), err)
}
if ok {
matchedSamples = append(matchedSamples, simulatedSamples[idx])
}
}
}
return matchedSamples, nil
}
func toTFPointers(tfs []tagFilter) []*tagFilter {
tfps := make([]*tagFilter, len(tfs))
for i := range tfs {
tfps[i] = &tfs[i]
}
return tfps
}
func matchTagFilters(mn *MetricName, tfs []*tagFilter, kb *bytesutil.ByteBuffer) (bool, error) {
kb.B = marshalCommonPrefix(kb.B[:0], nsPrefixTagToMetricIDs)
for i, tf := range tfs {


@@ -2045,14 +2045,6 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
fs.MustRemoveDir(path)
}
func toTFPointers(tfs []tagFilter) []*tagFilter {
tfps := make([]*tagFilter, len(tfs))
for i := range tfs {
tfps[i] = &tfs[i]
}
return tfps
}
func newTestStorage() *Storage {
s := &Storage{
cachePath: "test-storage-cache",


@@ -309,6 +309,33 @@ type SearchQuery struct {
// The maximum number of time series the search query can return.
MaxMetrics int
// SimulatedSeries is used for simulating samples returned from storage nodes.
SimulatedSeries []*SimulatedSamples
}
// SimulatedSamples represents simulated metric samples for debug and testing purposes.
// It contains metric name, timestamps and corresponding values.
type SimulatedSamples struct {
Name MetricName
Timestamps []int64
Value []float64
}
// NewSimulatedSeries creates a new SimulatedSamples instance with the given parameters.
// It constructs a metric name from the provided metric labels.
func NewSimulatedSeries(metric map[string]string, timestamp []int64, value []float64) *SimulatedSamples {
ss := &SimulatedSamples{
Timestamps: timestamp,
Value: value,
}
mn := GetMetricName()
defer PutMetricName(mn)
for k, v := range metric {
mn.AddTag(k, v)
}
ss.Name.CopyFrom(mn)
return ss
}
// GetTimeRange returns time range for the given sq.
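A sketch of wiring simulated series into a `SearchQuery` so the query path can be exercised without hitting real storage; the `MinTimestamp`/`MaxTimestamp` fields are assumed from the surrounding struct (per `GetTimeRange` above) and the values are illustrative:

```go
package main

import "github.com/VictoriaMetrics/VictoriaMetrics/lib/storage" // assumed import path

func main() {
	sq := &storage.SearchQuery{
		MinTimestamp: 1693000000000, // assumed field
		MaxTimestamp: 1693000030000, // assumed field
		MaxMetrics:   1000,
		SimulatedSeries: []*storage.SimulatedSamples{
			storage.NewSimulatedSeries(
				map[string]string{"job": "api"},
				[]int64{1693000000000, 1693000015000},
				[]float64{1, 2},
			),
		},
	}
	_ = sq // hand to the query path under test instead of real storage
}
```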


@@ -65,10 +65,10 @@ func VisitAll(e Expr, f func(expr Expr)) {
//
// These expressions are implicitly converted into other expressions, which return unexpected results most of the time:
//
// rate(default_rollup(sum(foo))[1i:1i])
// rate(default_rollup(abs(foo))[1i:1i])
// rate(default_rollup(foo + bar)[1i:1i])
// rate(default_rollup(foo > 10)[1i:1i])
// rate(sum(default_rollup(foo[1i:1i])))
// rate(abs(default_rollup(foo[1i:1i])))
// rate(default_rollup(foo[1i:1i]) + default_rollup(bar[1i:1i]))
// rate(default_rollup(foo[1i:1i]) > 10)
//
// See https://docs.victoriametrics.com/victoriametrics/metricsql/#implicit-query-conversions
//
@@ -83,6 +83,17 @@ func IsLikelyInvalid(e Expr) bool {
if !ok {
return
}
if fe.Name == `timestamp` {
// In Prometheus, timestamp is defined as a transform function on instant vectors,
// but its behavior is closer to a rollup since it returns raw sample timestamps.
// VictoriaMetrics explicitly defines timestamp as a rollup function.
// To remain consistent with Prometheus, IsLikelyInvalid does not treat timestamp
// as an implicit conversion even when applied to non-metric expressions, like timestamp(sum(foo)).
//
// See more in https://github.com/VictoriaMetrics/VictoriaMetrics/issues/9527#issuecomment-3191439447
return
}
idx := GetRollupArgIdx(fe)
if idx < 0 || idx >= len(fe.Args) {
return
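With the `timestamp` special case above, `IsLikelyInvalid` no longer flags `timestamp(...)` applied to non-metric expressions, while other rollups keep triggering the implicit-conversion check. A small sketch against the metricsql package, with the expected results per the linked issue:

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/metricsql"
)

func main() {
	for _, q := range []string{
		"timestamp(sum(foo))", // expected: false after this change
		"rate(sum(foo))",      // expected: true (implicit default_rollup conversion)
	} {
		expr, err := metricsql.Parse(q)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%-22s IsLikelyInvalid=%v\n", q, metricsql.IsLikelyInvalid(expr))
	}
}
```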

vendor/modules.txt

@@ -142,7 +142,7 @@ github.com/VictoriaMetrics/fastcache
# github.com/VictoriaMetrics/metrics v1.39.1
## explicit; go 1.18
github.com/VictoriaMetrics/metrics
# github.com/VictoriaMetrics/metricsql v0.84.6
# github.com/VictoriaMetrics/metricsql v0.84.7
## explicit; go 1.24.2
github.com/VictoriaMetrics/metricsql
github.com/VictoriaMetrics/metricsql/binaryop