// VictoriaMetrics/lib/persistentqueue/fastqueue_timing_test.go
package persistentqueue

import (
	"fmt"
	"testing"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
)

func BenchmarkFastQueueThroughputSerial(b *testing.B) {
	const iterationsCount = 10
	for _, blockSize := range []int{1e0, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6} {
		block := make([]byte, blockSize)
		b.Run(fmt.Sprintf("block-size-%d", blockSize), func(b *testing.B) {
			b.ReportAllocs()
			b.SetBytes(int64(blockSize) * iterationsCount)
			path := fmt.Sprintf("bench-fast-queue-throughput-serial-%d", blockSize)
			fs.MustRemoveDir(path)
			fq := MustOpenFastQueue(path, "foobar", iterationsCount*2, 0, false)
			defer func() {
				fq.MustClose()
				fs.MustRemoveDir(path)
			}()
			for range b.N {
				writeReadIterationFastQueue(fq, block, iterationsCount)
			}
		})
	}
}

func BenchmarkFastQueueThroughputConcurrent(b *testing.B) {
	const iterationsCount = 10
	for _, blockSize := range []int{1e0, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6} {
		block := make([]byte, blockSize)
		b.Run(fmt.Sprintf("block-size-%d", blockSize), func(b *testing.B) {
			b.ReportAllocs()
			b.SetBytes(int64(blockSize) * iterationsCount)
			path := fmt.Sprintf("bench-fast-queue-throughput-concurrent-%d", blockSize)
			fs.MustRemoveDir(path)
			fq := MustOpenFastQueue(path, "foobar", iterationsCount*cgroup.AvailableCPUs()*2, 0, false)
			defer func() {
				fq.MustClose()
				fs.MustRemoveDir(path)
			}()
			b.RunParallel(func(pb *testing.PB) {
				for pb.Next() {
					writeReadIterationFastQueue(fq, block, iterationsCount)
				}
			})
		})
	}
}

// writeReadIterationFastQueue writes iterationsCount copies of block to fq
// and then reads them all back, panicking if any write or read fails.
func writeReadIterationFastQueue(fq *FastQueue, block []byte, iterationsCount int) {
	for range iterationsCount {
		if !fq.TryWriteBlock(block) {
			panic(fmt.Errorf("TryWriteBlock must return true"))
		}
	}
	var ok bool
	bb := bbPool.Get()
	for range iterationsCount {
		bb.B, ok = fq.MustReadBlock(bb.B[:0])
		if !ok {
			panic(fmt.Errorf("unexpected ok=false"))
		}
	}
	bbPool.Put(bb)
}