Compare commits

...

420 Commits

Author SHA1 Message Date
Zakhar Bessarab
c32fa83d38 Merge tag 'v1.109.1' into oss/pmm-6401-read-prometheus-data-files 2025-01-17 18:58:38 +04:00
Zakhar Bessarab
1c599d9661 Merge tag 'v1.109.0' into oss/pmm-6401-read-prometheus-data-files 2025-01-14 18:51:09 +04:00
f41gh7
ec08a408d2 Merge tag 'v1.108.0' into pmm-6401-read-prometheus-data-files-cpc 2024-12-16 12:24:24 +01:00
f41gh7
b5e4499c29 CHANGELOG.md: cut v1.108.0 release 2024-12-13 17:06:34 +01:00
f41gh7
d6cb7d09e5 Merge tag 'v1.107.0' into pmm-6401-read-prometheus-data-files-cpc 2024-12-02 11:34:23 +01:00
f41gh7
61b84e9021 CHANGELOG.md: cut v1.107.0 release 2024-11-29 17:45:47 +01:00
f41gh7
54df0fa870 Merge tag 'v1.106.1' into pmm-6401-read-prometheus-data-files-cpc 2024-11-18 15:09:17 +01:00
Zakhar Bessarab
cd513b9758 Merge tag 'v1.106.0' into pmm-6401-read-prometheus-data-files 2024-11-04 13:56:36 -03:00
f41gh7
cf7eb6bc7c Merge tag 'v1.105.0' into pmm-6401-read-prometheus-data-files-cpc
v1.105.0
2024-10-21 19:01:12 +02:00
hagen1778
2404b4bc00 Merge tag 'v1.104.0' into pmm-6401-read-prometheus-data-files
v1.104.0

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-10-02 16:53:41 +02:00
f41gh7
e3e06b1f47 Merge remote-tracking branch 'origin/master' into pmm-6401-read-prometheus-data-files-cpc 2024-08-29 15:47:43 +02:00
f41gh7
1d0ad32b30 make vendor-update 2024-08-02 11:24:52 +02:00
f41gh7
2557e66ee0 Merge tag 'tags/v1.102.1' into pmm-6401-read-prometheus-data-files-cpc 2024-08-02 11:20:14 +02:00
hagen1778
381d4494e9 Merge tag 'v1.101.0' into pmm-6401-read-prometheus-data-files
v1.101.0

Signed-off-by: hagen1778 <roman@victoriametrics.com>

# gpg: Signature made Thu 25 Apr 16:52:11 2024 CEST
# gpg:                using RSA key 9212FA37DBE64938E0D154953BF75F3741CA9640
# gpg: Good signature from "hagen1778 (VM GPG key) <roman@victoriametrics.com>" [ultimate]

# Conflicts:
#	go.mod
2024-04-26 13:30:14 +02:00
Aliaksandr Valialkin
b7b731d340 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2024-04-04 03:49:49 +03:00
Aliaksandr Valialkin
1016aae126 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2024-03-01 03:34:49 +02:00
Aliaksandr Valialkin
5c2f85f38d vendor: run make vendor-update 2024-03-01 02:38:41 +02:00
Aliaksandr Valialkin
2d8f54f831 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2024-02-14 16:40:40 +02:00
Aliaksandr Valialkin
778c092740 vendor: run make vendor-update 2024-02-14 15:53:41 +02:00
Aliaksandr Valialkin
9f8ada83b6 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2024-02-01 15:28:24 +02:00
Aliaksandr Valialkin
0b503fba0b Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2024-01-30 22:55:54 +02:00
Aliaksandr Valialkin
f6c91b49a2 Merge branch 'public-single-node' into HEAD 2023-12-13 01:17:25 +02:00
Aliaksandr Valialkin
2faa23c495 vendor: run make vendor-update 2023-12-11 11:00:42 +02:00
Aliaksandr Valialkin
fd49331671 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-11-16 21:42:09 +01:00
Aliaksandr Valialkin
4de0514731 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-11-15 21:56:57 +01:00
Aliaksandr Valialkin
b65a9f2057 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-11-15 20:05:11 +01:00
Aliaksandr Valialkin
0eb733a31e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-11-14 03:03:15 +01:00
Aliaksandr Valialkin
6be10fb2ff Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-11-02 21:33:03 +01:00
Aliaksandr Valialkin
7a503e0c91 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-10-31 20:28:01 +01:00
Aliaksandr Valialkin
31a3672982 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-10-02 22:36:23 +02:00
Aliaksandr Valialkin
1590ddecba Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-09-22 12:33:27 +02:00
Aliaksandr Valialkin
b80ebb8bfd Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-09-19 01:15:00 +02:00
Aliaksandr Valialkin
58ecb90665 Merge remote-tracking branch 'public/pmm-6401-read-prometheus-data-files' into pmm-6401-read-prometheus-data-files 2023-09-09 06:26:34 +02:00
Aliaksandr Valialkin
f7d0d3a229 app/vmstorage: fix after 0c7d46d637: retentionPeriod.Msecs -> retentionPeriod.Milliseconds() 2023-09-09 06:20:42 +02:00
Aliaksandr Valialkin
af85055f3a Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-09-09 06:18:18 +02:00
f41gh7
ca20478a69 Merge remote-tracking branch 'origin/lts-1.93' into pmm-6401-read-prometheus-data-files 2023-08-23 15:15:47 +02:00
Dmytro Kozlov
c8c20b7f7a docs: cut 1.93.1-lts in changelog
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2023-08-23 14:13:36 +02:00
Nikolay
35263983a6 lib/storage: properly calculate nextRotationTimestamp (#4874)
Because of a typo, unix milliseconds were used instead of unix seconds for the current
timestamp calculation
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4873

(cherry picked from commit c5aac34b68)
2023-08-23 14:12:21 +02:00
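
A minimal Go illustration of the bug class described above (not the VictoriaMetrics code itself): Unix() returns seconds while UnixMilli() returns milliseconds, so mixing them skews rotation-timestamp math by a factor of 1000.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        now := time.Now()
        fmt.Println(now.Unix())      // seconds since epoch, e.g. 1692792741
        fmt.Println(now.UnixMilli()) // milliseconds since epoch, e.g. 1692792741123
    }
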
Aliaksandr Valialkin
a2c901423b docs/stream-aggregation.md: typo fix after 54f522ac25 2023-08-17 15:28:26 +02:00
Aliaksandr Valialkin
382721a3ac docs/stream-aggregation.md: clarify the usage of -remoteWrite.label after the fix at a27c2f3773
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4247
2023-08-17 15:19:32 +02:00
Aliaksandr Valialkin
d688f9a744 app/vmagent/remotewrite: follow-up after a27c2f3773
- Fix Prometheus-compatible naming after applying the relabeling if the -usePromCompatibleNaming command-line flag is set.
  This should prevent Prometheus-incompatible metric names and label names from being generated by the relabeling.
- Do not return anything from the relabelCtx.appendExtraLabels() function, since it cannot change the number of time series
  passed to it. Append labels for the passed time series in-place.
- Remove the promrelabel.FinalizeLabels() call after adding extra labels to time series, since this call has already been
  made at relabelCtx.applyRelabeling(). It is the user's responsibility if they pass labels with double underscore prefixes
  to -remoteWrite.label.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4247
2023-08-17 14:47:11 +02:00
Alexander Marshalov
c060c6d839 vmagent: fixed premature release of the context (after #4247 / #4824) (#4849)
Follow-up after a27c2f3773

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4247

Signed-off-by: Alexander Marshalov <_@marshalov.org>
2023-08-17 14:46:54 +02:00
Alexander Marshalov
927ded6c3b fixed applying remoteWrite.label for pushed metrics (#4247) (#4824)
vmagent: properly add extra labels before sending data to remote storage

labels from `remoteWrite.label` are now added to sent metrics just before they
 are pushed to `remoteWrite.url` after all relabelings, including stream aggregation relabelings (#4247)

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4247

Signed-off-by: Alexander Marshalov <_@marshalov.org>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2023-08-17 14:46:41 +02:00
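
An illustrative vmagent invocation showing the behaviour above; the URL and label values are hypothetical, only the -remoteWrite.url and -remoteWrite.label flag names come from the commit message:

    ./vmagent-prod \
      -remoteWrite.url=https://vm.example.com/api/v1/write \
      -remoteWrite.label=datacenter=eu-1 \
      -remoteWrite.label=replica=0
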
Aliaksandr Valialkin
d4123e135f lib/envflag: do not allow the unsupported form -boolFlag value for boolean command-line flags
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4845
2023-08-17 14:15:39 +02:00
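
A generic illustration of the flag forms in question; -boolFlag is a placeholder name taken from the commit title, not a real VictoriaMetrics flag:

    -boolFlag          # accepted: sets the flag to true
    -boolFlag=false    # accepted: explicit value
    -boolFlag false    # rejected: the unsupported form disallowed by this commit
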
Aliaksandr Valialkin
4b86a18105 docs/CHANGELOG.md: mention that this is v1.93.x LTS release line 2023-08-17 13:57:29 +02:00
Aliaksandr Valialkin
c6154f8f52 lib/promrelabel: stop emitting DEBUG log lines when parsing if expressions
These lines were accidentally left in the commit 62651570bb

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4635
2023-08-17 13:56:35 +02:00
Aliaksandr Valialkin
b4c79fc606 lib/promrelabel: properly replace : char with _ in metric names when -usePromCompatibleNaming command-line flag is set
This addresses https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3113#issuecomment-1275077071 comment from @johnseekins
2023-08-17 13:52:45 +02:00
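
A rough Go sketch of the ':' to '_' replacement described above; it is not the actual lib/promrelabel implementation, which also handles other characters and label names:

    package main

    import (
        "fmt"
        "strings"
    )

    // toPromCompatibleName mimics the replacement applied to metric names
    // when -usePromCompatibleNaming is set.
    func toPromCompatibleName(name string) string {
        return strings.ReplaceAll(name, ":", "_")
    }

    func main() {
        fmt.Println(toPromCompatibleName("node:cpu:rate5m")) // node_cpu_rate5m
    }
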
Roman Khavronenko
b4529df08d vmbackup: correctly check if specified -dst belongs to specified -storageDataPath (#4841)
See this issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4837

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2023-08-17 13:50:42 +02:00
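
Hypothetical paths illustrating the check above: a -dst that lives inside -storageDataPath should now be rejected, while an external destination is accepted.

    ./vmbackup -storageDataPath=/var/lib/vmdata -dst=fs:///var/lib/vmdata/backup   # rejected
    ./vmbackup -storageDataPath=/var/lib/vmdata -dst=fs:///backups/vm              # accepted
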
Dmytro Kozlov
a63fb21ab2 app/vmctl: fix migration process if tenant has no data (#4799)
app/vmctl: don't interrupt migration process if tenant has no data

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Alexander Marshalov <_@marshalov.org>
2023-08-17 13:48:55 +02:00
Aliaksandr Valialkin
7a19b2a14c docs/CHANGELOG.md: document that v1.93.x is a new line of LTS releases 2023-08-12 15:30:13 -07:00
Aliaksandr Valialkin
e06d855636 deployment/docker/Makefile: do not overwrite latest tag when pushing Docker images for LTS release
The `latest` tag is reserved for the latest release
2023-08-12 15:28:55 -07:00
Aliaksandr Valialkin
e29fe89791 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-08-12 06:01:30 -07:00
Aliaksandr Valialkin
978594f50f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-08-11 06:45:26 -07:00
Aliaksandr Valialkin
e16015fa3b Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-07-28 11:20:54 -07:00
Aliaksandr Valialkin
8033f1705c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-07-27 15:01:46 -07:00
Aliaksandr Valialkin
9f1e9c54c8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-07-26 14:59:49 -07:00
Aliaksandr Valialkin
d59e66caa8 Merge remote-tracking branch 'public/pmm-6401-read-prometheus-data-files' into pmm-6401-read-prometheus-data-files 2023-07-06 23:53:39 -07:00
Aliaksandr Valialkin
a2e224593e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-07-06 23:51:50 -07:00
hagen1778
a2d68d249b Merge tag 'v1.91.2' into pmm-6401-read-prometheus-data-files 2023-06-07 09:30:59 +02:00
hagen1778
713d3431fe app/vmstorage/promdb: check if promdb is available before doing API calls
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2023-06-07 09:29:26 +02:00
Aliaksandr Valialkin
02642248cf deployment/docker/Makefile: use alpine 3.17.3 instead of alpine 3.18.0 for certs image, since alpine 3.18.0 doesn't work for cross-platform builds 2023-05-18 14:11:47 -07:00
Aliaksandr Valialkin
1aebd15549 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-05-18 12:44:45 -07:00
Aliaksandr Valialkin
43f0baabcd Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-05-18 12:34:50 -07:00
Aliaksandr Valialkin
eba0e6dbc0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-05-09 23:33:49 -07:00
Aliaksandr Valialkin
f0f1eb07dc Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-04-06 19:19:02 -07:00
Aliaksandr Valialkin
bb7b59033d app/vmctl/terminal: fix builds for GOOS=freebsd and GOOS=openbsd
This is a follow-up for 8da9502df6
2023-04-06 17:11:15 -07:00
Aliaksandr Valialkin
e0cef082f4 vendor: make vendor-update 2023-04-06 16:30:47 -07:00
Aliaksandr Valialkin
20fedaf7c2 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-04-03 01:08:56 -07:00
Aliaksandr Valialkin
efc5190950 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-31 23:43:02 -07:00
Aliaksandr Valialkin
14ab18375f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-31 22:55:41 -07:00
Aliaksandr Valialkin
4280cc281a Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-27 20:06:34 -07:00
Aliaksandr Valialkin
740638ad30 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-27 15:37:34 -07:00
Aliaksandr Valialkin
3d377d0c22 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-25 22:45:03 -07:00
Aliaksandr Valialkin
99aeb3b21b Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-25 15:16:07 -07:00
Aliaksandr Valialkin
d60c212784 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-25 01:44:30 -07:00
Aliaksandr Valialkin
dc9537f44e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-24 18:00:37 -07:00
Aliaksandr Valialkin
1b9a279494 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-20 22:20:56 -07:00
Aliaksandr Valialkin
f42572e049 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-20 20:39:18 -07:00
Aliaksandr Valialkin
827cde4c64 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-14 16:26:58 -07:00
Aliaksandr Valialkin
7c271d6a39 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-13 00:24:33 -07:00
Aliaksandr Valialkin
b61e9297a1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-12 20:06:05 -07:00
Aliaksandr Valialkin
88b4c30021 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-12 19:16:13 -07:00
Aliaksandr Valialkin
ab535bf127 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-12 17:30:41 -07:00
Aliaksandr Valialkin
fee8a30f1a Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-12 17:13:41 -07:00
Aliaksandr Valialkin
02ffbfb8dc Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-12 03:40:02 -07:00
Aliaksandr Valialkin
3822d83276 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-12 01:01:41 -08:00
Aliaksandr Valialkin
8561bb48fd Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-03-08 01:45:02 -08:00
Aliaksandr Valialkin
a32a9070c1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-27 19:25:57 -08:00
Aliaksandr Valialkin
b596228765 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-27 17:36:37 -08:00
Aliaksandr Valialkin
d0f9a5d4c4 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-27 15:39:19 -08:00
Aliaksandr Valialkin
472a9360e6 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-27 14:19:26 -08:00
Aliaksandr Valialkin
b00fcad604 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-26 12:28:11 -08:00
Aliaksandr Valialkin
3d755041c3 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-24 18:59:30 -08:00
Aliaksandr Valialkin
e22a9d6ba6 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-24 17:55:04 -08:00
Aliaksandr Valialkin
9d7dc73038 vendor: make vendor-update 2023-02-24 17:33:28 -08:00
Aliaksandr Valialkin
63d9048990 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-24 17:32:15 -08:00
Aliaksandr Valialkin
8db1fd2f78 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-24 17:17:34 -08:00
Aliaksandr Valialkin
8f0afc656e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-24 13:49:32 -08:00
Aliaksandr Valialkin
be94882ada Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-23 19:27:31 -08:00
Aliaksandr Valialkin
ff990ab0c5 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-20 20:00:43 -08:00
Aliaksandr Valialkin
5c8a01aecc Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-18 22:44:46 -08:00
Aliaksandr Valialkin
2ce4d04d8e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-13 11:11:49 -08:00
Aliaksandr Valialkin
b026ebe91e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-11 20:54:02 -08:00
Aliaksandr Valialkin
c2b724d3ab Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-11 14:43:19 -08:00
Aliaksandr Valialkin
e4a61581e1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-11 12:53:25 -08:00
Aliaksandr Valialkin
a38bf70679 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-11 12:09:55 -08:00
Aliaksandr Valialkin
7b41c9ac72 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-11 01:07:13 -08:00
Aliaksandr Valialkin
c1d42f3288 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-11 00:33:44 -08:00
Aliaksandr Valialkin
4167344edb Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-09 19:13:40 -08:00
Aliaksandr Valialkin
44e388ee6a Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-09 15:07:22 -08:00
Aliaksandr Valialkin
b8ab0b2f31 .github/workflows: remove unneeded workflows 2023-02-09 14:28:01 -08:00
Aliaksandr Valialkin
dcc4b84319 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-09 14:26:43 -08:00
Aliaksandr Valialkin
37f48cdaa5 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-09 14:07:32 -08:00
Aliaksandr Valialkin
a39140baef Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-09 13:08:34 -08:00
Aliaksandr Valialkin
30c0a37032 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-01 13:04:00 -08:00
Aliaksandr Valialkin
32e46ea35f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-02-01 13:02:21 -08:00
Aliaksandr Valialkin
6faaefef7b Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-27 11:38:43 -08:00
Aliaksandr Valialkin
5cd89aaaa1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-27 00:05:34 -08:00
Aliaksandr Valialkin
3a21fde0f3 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-26 23:54:43 -08:00
Aliaksandr Valialkin
274627943e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-25 09:23:08 -08:00
Aliaksandr Valialkin
21140318cc Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-24 09:33:54 -08:00
Aliaksandr Valialkin
3f5bc2adce Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-18 14:06:05 -08:00
Aliaksandr Valialkin
a5975c31c2 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-18 12:02:26 -08:00
Aliaksandr Valialkin
fad61eafc1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-18 01:42:05 -08:00
Aliaksandr Valialkin
30453af768 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-18 01:14:19 -08:00
Aliaksandr Valialkin
7737321133 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-18 00:01:57 -08:00
Aliaksandr Valialkin
a2ab1f0ec9 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-17 21:49:03 -08:00
Aliaksandr Valialkin
a092df3f84 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-12 01:13:39 -08:00
Aliaksandr Valialkin
c3f178aa53 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-11 01:34:37 -08:00
Aliaksandr Valialkin
393e7636be Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-10 16:23:24 -08:00
Aliaksandr Valialkin
ebc200846c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2023-01-10 16:11:27 -08:00
Aliaksandr Valialkin
0158237875 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-12-20 14:52:33 -08:00
Aliaksandr Valialkin
be5bbb7ba7 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-12-19 13:37:52 -08:00
Aliaksandr Valialkin
b79f02de21 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-12-14 17:54:17 -08:00
Aliaksandr Valialkin
ac58ab9664 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-12-14 12:59:54 -08:00
Aliaksandr Valialkin
0613ac5d02 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-12-11 03:24:33 -08:00
Aliaksandr Valialkin
22e48e6517 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-12-11 02:08:00 -08:00
Aliaksandr Valialkin
1f0432b5c1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-12-05 23:21:20 -08:00
Aliaksandr Valialkin
079953b4ea Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-12-02 19:18:34 -08:00
Aliaksandr Valialkin
d92da32041 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-11-29 21:47:16 -08:00
Aliaksandr Valialkin
8548650c2d Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-11-25 20:13:43 -08:00
Aliaksandr Valialkin
2dd82e8355 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-11-17 01:33:16 +02:00
Aliaksandr Valialkin
bf0b5602d0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-11-11 01:28:54 +02:00
Aliaksandr Valialkin
e25d05f992 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-11-11 01:25:29 +02:00
Aliaksandr Valialkin
5ce8fa8b10 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-11-10 14:13:54 +02:00
Aliaksandr Valialkin
881f22ca62 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-11-07 15:00:11 +02:00
Aliaksandr Valialkin
38294e2f17 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-11-05 11:12:14 +02:00
Aliaksandr Valialkin
2d909f4979 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-29 02:58:19 +03:00
Aliaksandr Valialkin
0821298471 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-28 22:16:15 +03:00
Aliaksandr Valialkin
fa5cda60d9 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-28 14:25:57 +03:00
Aliaksandr Valialkin
700eb5bb1d Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-28 00:33:10 +03:00
Aliaksandr Valialkin
70bcc97d1c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-26 14:57:17 +03:00
Aliaksandr Valialkin
0074539441 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-26 01:11:17 +03:00
Aliaksandr Valialkin
fe0ab3840f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-25 17:55:02 +03:00
Aliaksandr Valialkin
c4fc87f8b8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-24 21:30:41 +03:00
Aliaksandr Valialkin
8e3198ba29 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-24 18:04:48 +03:00
Aliaksandr Valialkin
6c7c0790a0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-24 16:32:43 +03:00
Aliaksandr Valialkin
33343695a9 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-23 14:09:38 +03:00
Aliaksandr Valialkin
db553f12bc Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-23 14:02:45 +03:00
Aliaksandr Valialkin
07fe2c5361 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-21 15:03:12 +03:00
Aliaksandr Valialkin
22e87b0088 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-21 01:11:40 +03:00
Aliaksandr Valialkin
f105e2e8c3 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-18 20:55:52 +03:00
Aliaksandr Valialkin
20414b3038 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-14 15:31:43 +03:00
Aliaksandr Valialkin
fcb7ef68f8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-07 03:41:08 +03:00
Aliaksandr Valialkin
626142ab90 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-07 03:16:29 +03:00
Aliaksandr Valialkin
fd1b8be2e5 go.mod: go mod tidy 2022-10-07 01:21:34 +03:00
Aliaksandr Valialkin
d39ba2536e app/victoria-metrics: flagutil.NewArray -> flagutil.NewArrayString after c1fa9828b3 2022-10-07 01:16:52 +03:00
Aliaksandr Valialkin
e2c4578751 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-10-07 01:15:41 +03:00
Aliaksandr Valialkin
6ad7b0619c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-09-30 18:42:04 +03:00
Aliaksandr Valialkin
3a15bc761b Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-09-30 13:21:21 +03:00
Aliaksandr Valialkin
bd79706eb3 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-09-26 18:03:30 +03:00
Aliaksandr Valialkin
e69fb9f3cf vendor: make vendor-update 2022-09-26 16:40:54 +03:00
Aliaksandr Valialkin
1a9cb85647 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-09-26 16:36:50 +03:00
Aliaksandr Valialkin
a80f0c9f42 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-09-21 11:32:46 +03:00
Aliaksandr Valialkin
4db1d24973 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-09-19 15:31:39 +03:00
Aliaksandr Valialkin
1c9f5b3580 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-09-14 17:54:06 +03:00
Aliaksandr Valialkin
9682c23786 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-09-08 19:02:16 +03:00
Aliaksandr Valialkin
bd2bb272f0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-09-02 22:01:12 +03:00
Aliaksandr Valialkin
6111abd0e6 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-09-02 21:49:51 +03:00
Aliaksandr Valialkin
3f3f664b76 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-31 05:05:04 +03:00
Aliaksandr Valialkin
d1c6fb74fc Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-31 02:35:29 +03:00
Aliaksandr Valialkin
b9668d5294 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-31 02:32:32 +03:00
Aliaksandr Valialkin
96160000e0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-30 12:40:52 +03:00
Aliaksandr Valialkin
28e961e511 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-24 01:26:12 +03:00
Aliaksandr Valialkin
628e87e727 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-22 00:40:04 +03:00
Aliaksandr Valialkin
3600c97ad7 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-21 19:17:23 +03:00
Aliaksandr Valialkin
bb154f8829 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-18 01:31:49 +03:00
Aliaksandr Valialkin
d2e293b5c9 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-15 01:43:51 +03:00
Aliaksandr Valialkin
e80ddbebd4 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-08 19:59:28 +03:00
Aliaksandr Valialkin
bdd4940140 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-08 14:06:18 +03:00
Aliaksandr Valialkin
a8fee2d9b6 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-08 13:56:56 +03:00
Aliaksandr Valialkin
2dbbf51ea9 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-07 22:49:37 +03:00
Aliaksandr Valialkin
cd5cc4ec81 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-04 18:04:51 +03:00
Aliaksandr Valialkin
549d430907 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-08-02 13:33:59 +03:00
Aliaksandr Valialkin
69aef55ae7 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-21 21:21:04 +03:00
Aliaksandr Valialkin
274145af2d Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-21 20:23:41 +03:00
Aliaksandr Valialkin
c444f7e2b9 docs/Cluster-VictoriaMetrics.md: update after fe68bb3ba7 2022-07-21 20:21:58 +03:00
Aliaksandr Valialkin
10f41ea5f9 all: follow-up after 46f803fa7a
Add -pushmetrics.* command-line flags to all the VictoriaMetrics apps
2022-07-21 20:14:27 +03:00
Aliaksandr Valialkin
46f803fa7a all: add ability to push internal metrics to remote storage system specified via -pushmetrics.url 2022-07-21 19:49:52 +03:00
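
A hypothetical invocation of the new flag; only the -pushmetrics.url flag name comes from the commit message, the push target URL is made up:

    ./victoria-metrics-prod -pushmetrics.url=http://victoria:8428/api/v1/import/prometheus
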
Aliaksandr Valialkin
ffe9bd248c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-14 16:18:18 +03:00
Aliaksandr Valialkin
151286f5a8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-14 11:04:42 +03:00
Aliaksandr Valialkin
77a1af4f7f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-14 00:56:50 +03:00
Aliaksandr Valialkin
c83ff99e0d Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-13 18:07:46 +03:00
Aliaksandr Valialkin
4a0c9a1069 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-11 18:22:49 +03:00
Aliaksandr Valialkin
2fd56ddb38 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-07 20:38:21 +03:00
Aliaksandr Valialkin
b42e5627fb Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-07 02:37:06 +03:00
Aliaksandr Valialkin
57375e72fa Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-06 13:50:59 +03:00
Aliaksandr Valialkin
0746766d95 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-06 13:31:28 +03:00
Aliaksandr Valialkin
6712a8269c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-06 13:04:08 +03:00
Aliaksandr Valialkin
4e20ea4b59 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-07-04 12:17:00 +03:00
Aliaksandr Valialkin
44dfb2ec0d Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-30 20:20:19 +03:00
Aliaksandr Valialkin
e7b4e657a1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-28 20:22:11 +03:00
Aliaksandr Valialkin
cd91c29243 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-28 13:26:58 +03:00
Aliaksandr Valialkin
8b8e547dc8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-23 19:33:36 +03:00
Aliaksandr Valialkin
34a6b1fa3b Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-22 13:20:58 +03:00
Aliaksandr Valialkin
af37ec8020 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-21 15:50:01 +03:00
Aliaksandr Valialkin
fff8ff946f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-21 14:02:43 +03:00
Aliaksandr Valialkin
fdccca238a Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-20 18:11:39 +03:00
Aliaksandr Valialkin
1b24afec36 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-20 17:42:27 +03:00
Aliaksandr Valialkin
cacd3d6f6d Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-19 23:05:31 +03:00
Aliaksandr Valialkin
8632b8200e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-15 18:42:09 +03:00
Aliaksandr Valialkin
0445ad59db Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-14 13:29:48 +03:00
Aliaksandr Valialkin
f7b52b64a3 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-14 12:26:32 +03:00
Aliaksandr Valialkin
7fc62feddc Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-09 13:33:07 +03:00
Aliaksandr Valialkin
0ea0168d98 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-06-04 01:13:48 +03:00
Aliaksandr Valialkin
3dec16702a Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-05-23 11:00:31 +03:00
Aliaksandr Valialkin
993ecbb141 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-05-21 02:27:04 +03:00
Aliaksandr Valialkin
35eb512efa Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-05-20 15:03:38 +03:00
Aliaksandr Valialkin
7f01217c3c Makefile: explicitly specify go1.17 compatibility when running go mod tidy at make vendor-update
This is needed because go1.17 is the minimum supported Go version
required for building VictoriaMetrics
2022-05-20 14:41:09 +03:00
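
The change presumably boils down to pinning the module-graph compatibility version when tidying, along the lines of:

    go mod tidy -compat=1.17
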
Aliaksandr Valialkin
2398b4a10a vendor: make vendor-update 2022-05-20 14:40:09 +03:00
Aliaksandr Valialkin
5a60387eea Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-05-20 14:34:02 +03:00
Aliaksandr Valialkin
2685992ca9 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-05-07 02:02:31 +03:00
Aliaksandr Valialkin
ee63748753 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-05-05 11:01:55 +03:00
Aliaksandr Valialkin
620b0d11b7 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-05-05 10:33:08 +03:00
Aliaksandr Valialkin
316cac2c0b Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-05-05 00:16:55 +03:00
Aliaksandr Valialkin
9eb61e67af Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-05-05 00:02:14 +03:00
Aliaksandr Valialkin
a7333a7380 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-05-02 22:06:17 +03:00
Aliaksandr Valialkin
ee5bd20157 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-04-29 19:37:07 +03:00
Aliaksandr Valialkin
d713bdec20 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-04-29 14:23:04 +03:00
Aliaksandr Valialkin
6a5d6244d4 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-04-20 22:55:51 +03:00
Aliaksandr Valialkin
095feeee41 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-04-12 16:23:16 +03:00
Aliaksandr Valialkin
9dd493363c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-04-07 15:34:32 +03:00
Aliaksandr Valialkin
d964b04efd Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-04-07 15:33:09 +03:00
Aliaksandr Valialkin
ec01a188fd Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-04-07 15:25:52 +03:00
Aliaksandr Valialkin
40112df441 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-04-05 19:24:37 +03:00
Aliaksandr Valialkin
9e74fe3145 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-04-04 13:11:51 +03:00
Aliaksandr Valialkin
2c22e168f5 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-04-01 12:38:34 +03:00
Aliaksandr Valialkin
5747b78f6f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-03-28 12:31:27 +03:00
Aliaksandr Valialkin
d9166e899e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-03-28 12:17:18 +03:00
Aliaksandr Valialkin
38699170c9 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-03-24 19:22:42 +02:00
Aliaksandr Valialkin
5b4f7bbc0c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-03-18 19:54:43 +02:00
Aliaksandr Valialkin
db85f4a1cb Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-03-18 17:59:12 +02:00
Aliaksandr Valialkin
780b2a139a Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-03-17 20:11:56 +02:00
Aliaksandr Valialkin
9d2805320b Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-03-16 13:40:03 +02:00
Aliaksandr Valialkin
e636cab272 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-03-16 12:59:06 +02:00
Aliaksandr Valialkin
90a1502335 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-03-03 19:31:14 +02:00
Aliaksandr Valialkin
f8a05d4ada Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-02-22 21:12:18 +02:00
Aliaksandr Valialkin
ae64c2db61 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-02-22 21:10:53 +02:00
Aliaksandr Valialkin
37a4347a37 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-02-14 18:32:41 +02:00
Aliaksandr Valialkin
20cdb879e7 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-02-14 17:56:09 +02:00
Aliaksandr Valialkin
7917486d78 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-02-14 17:52:50 +02:00
Aliaksandr Valialkin
107607bf47 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-02-07 18:34:47 +02:00
Aliaksandr Valialkin
78b028064f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-02-02 23:58:11 +02:00
Aliaksandr Valialkin
db286fdd73 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-01-25 15:33:35 +02:00
Aliaksandr Valialkin
e8ff658b2e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-01-23 13:24:13 +02:00
Aliaksandr Valialkin
e1668e7441 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-01-18 23:29:16 +02:00
Aliaksandr Valialkin
0d0469cc80 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-01-18 22:44:04 +02:00
Aliaksandr Valialkin
8d6d4e8033 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-01-18 22:38:35 +02:00
Aliaksandr Valialkin
b894f25f21 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-01-17 15:51:07 +02:00
Aliaksandr Valialkin
b6bae2f05f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-01-07 12:56:47 +02:00
Aliaksandr Valialkin
9e15858baf vendor: make vendor-update 2022-01-07 12:37:58 +02:00
Aliaksandr Valialkin
3f5b1084eb Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2022-01-07 12:36:24 +02:00
Aliaksandr Valialkin
c2e9be96a7 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-12-20 19:11:26 +02:00
Aliaksandr Valialkin
a72dadb8f4 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-12-20 13:53:03 +02:00
Aliaksandr Valialkin
08219faf8d Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-12-17 20:21:56 +02:00
Aliaksandr Valialkin
288620ca40 lib/storage: initial support for multi-level downsampling
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/36

Based on https://github.com/valyala/VictoriaMetrics/pull/203
2021-12-15 16:42:44 +02:00
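
A sketch of what a multi-level configuration could look like, using the 'offset:period' flag format shown in the diff at the bottom of this page; the concrete values are hypothetical:

    -downsampling.period=30d:5m,180d:1h
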
Aliaksandr Valialkin
2847c84a7b Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-12-15 16:41:56 +02:00
Aliaksandr Valialkin
6a64823581 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-12-14 19:57:21 +02:00
Aliaksandr Valialkin
b94e986710 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-12-02 15:03:46 +02:00
Aliaksandr Valialkin
a29565d1bd Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-11-08 15:48:09 +02:00
Aliaksandr Valialkin
39332cfc5c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-11-08 13:56:29 +02:00
Aliaksandr Valialkin
d07d2811d4 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-10-22 19:41:51 +03:00
Aliaksandr Valialkin
206e451cae Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-10-08 17:56:22 +03:00
Aliaksandr Valialkin
307034fc2f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-10-08 16:10:22 +03:00
Aliaksandr Valialkin
c149132b14 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-09-23 22:55:56 +03:00
Aliaksandr Valialkin
6dd7a90c7c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-09-22 01:49:36 +03:00
Aliaksandr Valialkin
dc5507754f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-09-20 15:22:36 +03:00
Aliaksandr Valialkin
c68663deee Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-09-20 14:55:21 +03:00
Aliaksandr Valialkin
114a40e63f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-09-15 18:26:16 +03:00
Aliaksandr Valialkin
163f2a46fd Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-09-15 18:18:59 +03:00
Aliaksandr Valialkin
375c46cb1f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-09-01 17:13:09 +03:00
Aliaksandr Valialkin
bb2d1128b8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-09-01 16:37:40 +03:00
Aliaksandr Valialkin
479b9da827 vendor: make vendor-update 2021-09-01 12:53:52 +03:00
Aliaksandr Valialkin
62857fc30e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-09-01 12:49:13 +03:00
Aliaksandr Valialkin
253315b1fe Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-08-19 10:35:13 +03:00
Aliaksandr Valialkin
efe6e30008 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-08-18 22:07:32 +03:00
Aliaksandr Valialkin
bc2512abdd docs/CHANGELOG.md: update urls to Prometheus 2.29 release
Previously these urls were pointing to rc0 release
2021-08-16 14:52:23 +03:00
Aliaksandr Valialkin
a07f8017ba docs/CHANGELOG.md: typo fix: satureated -> saturated 2021-08-16 14:51:04 +03:00
Aliaksandr Valialkin
cf70b766eb Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-08-15 23:52:55 +03:00
Aliaksandr Valialkin
b00732074c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-08-15 23:51:06 +03:00
Aliaksandr Valialkin
8df8c414de Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-07-15 14:05:30 +03:00
Aliaksandr Valialkin
ce844238a4 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-06-25 13:30:19 +03:00
Aliaksandr Valialkin
452720c5dc Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-06-25 13:24:42 +03:00
Aliaksandr Valialkin
bbca1740c1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-06-18 10:55:54 +03:00
Aliaksandr Valialkin
e1c85395eb Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-06-11 13:03:20 +03:00
Aliaksandr Valialkin
b348114dab Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-06-11 13:00:09 +03:00
Aliaksandr Valialkin
bb54e34dc5 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-06-09 20:43:33 +03:00
Aliaksandr Valialkin
e0d0b9447e vendor: make vendor-update 2021-06-09 20:43:19 +03:00
Aliaksandr Valialkin
fae6e4fc85 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-06-09 19:10:11 +03:00
Aliaksandr Valialkin
e49bf9bc73 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-05-24 16:03:14 +03:00
Aliaksandr Valialkin
a142390014 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-05-01 10:05:48 +03:00
Aliaksandr Valialkin
bceb8082f6 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-04-30 10:11:24 +03:00
Aliaksandr Valialkin
276969500e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-04-24 01:42:54 +03:00
Aliaksandr Valialkin
030e3a63f2 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-04-21 11:00:07 +03:00
Aliaksandr Valialkin
1c5e0564af lib/promscrape: create a single swosFunc per scrape_config 2021-04-08 09:31:55 +03:00
Aliaksandr Valialkin
b8300338f0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-04-08 01:03:03 +03:00
Aliaksandr Valialkin
660c3c7251 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-04-08 00:54:19 +03:00
Aliaksandr Valialkin
80ba07dc95 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-03-30 15:41:16 +03:00
Aliaksandr Valialkin
11ded82e60 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-03-29 19:15:52 +03:00
Aliaksandr Valialkin
558b390ebc vendor: make vendor-update 2021-03-25 20:57:46 +02:00
Aliaksandr Valialkin
343f444e87 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-03-25 19:15:08 +02:00
Aliaksandr Valialkin
16884c20c0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-03-17 02:05:46 +02:00
Aliaksandr Valialkin
7d44cdd8ce Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-03-15 22:44:24 +02:00
Aliaksandr Valialkin
5d2394ad9b vendor: make vendor-update 2021-03-09 11:51:21 +02:00
Aliaksandr Valialkin
8582fba4b1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-03-09 11:46:39 +02:00
Aliaksandr Valialkin
b045f506f2 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-03-03 11:51:32 +02:00
Aliaksandr Valialkin
6197440bb9 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-03-02 21:46:03 +02:00
Aliaksandr Valialkin
966e9c227a Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-27 01:48:33 +02:00
Aliaksandr Valialkin
edb2ab7d8e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-27 00:25:01 +02:00
Aliaksandr Valialkin
0ad887fd4d Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-26 23:00:40 +02:00
Aliaksandr Valialkin
d5dde7f6b1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-18 19:14:41 +02:00
Aliaksandr Valialkin
a54ca9bd8f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-18 15:47:41 +02:00
Aliaksandr Valialkin
3588687f84 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-18 14:53:19 +02:00
Aliaksandr Valialkin
687eb4ab00 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-16 22:29:45 +02:00
Aliaksandr Valialkin
b04fece006 lib/promscrape/discovery/kubernetes: add __meta_kubernetes_endpoints_label_* and __meta_kubernetes_endpoints_annotation_* labels to role: endpoints
This syncs kubernetes SD with Prometheus 2.25
See 617c56f55a
2021-02-15 02:50:46 +02:00
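
A hypothetical Prometheus-style scrape config sketch using the new meta labels; the job name and label choices are made up:

    scrape_configs:
      - job_name: k8s-endpoints
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_endpoints_label_app]
            target_label: app
          - source_labels: [__meta_kubernetes_endpoints_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"
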
Aliaksandr Valialkin
d0c364d93d Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-15 01:45:39 +02:00
Aliaksandr Valialkin
63c88d8ea2 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-03 23:52:43 +02:00
Aliaksandr Valialkin
dc6636e2b2 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-03 12:30:44 +02:00
Aliaksandr Valialkin
c13f1d99e0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-02-01 20:19:50 +02:00
Aliaksandr Valialkin
079888f719 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-01-27 01:12:45 +02:00
Aliaksandr Valialkin
b68264b4f5 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-01-22 12:10:57 +02:00
Aliaksandr Valialkin
aed049f660 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-01-13 13:55:45 +02:00
Aliaksandr Valialkin
7fcc0a1ef0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-01-13 12:55:45 +02:00
Aliaksandr Valialkin
48951073c4 lib/backup: increase backup chunk size from 128MB to 1GB
This should reduce costs for object storage API calls by 8x. See https://cloud.google.com/storage/pricing#operations-pricing
2021-01-13 12:15:38 +02:00
Aliaksandr Valialkin
d0dfcb72b4 vendor: make vendor-update 2021-01-13 12:09:54 +02:00
Nikolay
4cf7a55808 fixes tmpBlockFile remove on prometheus search error (#109) 2021-01-13 11:53:11 +02:00
Aliaksandr Valialkin
d72fc60108 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-01-13 11:41:15 +02:00
Aliaksandr Valialkin
0b92e18047 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-01-11 21:16:32 +02:00
Aliaksandr Valialkin
aa8ea16160 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-01-11 13:12:44 +02:00
Nikolay
f5e70f0ab9 adds support for multiple match args in prometheusSearch (#106)
Results are merged according to the Prometheus ChainedSeriesMerge.
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1001
2021-01-11 13:06:54 +02:00
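
An illustration only, assuming the multi-selector support is reachable through the standard Prometheus-compatible series API; endpoint, port and selectors are hypothetical:

    curl 'http://localhost:8428/api/v1/series?match[]=up&match[]=node_cpu_seconds_total'
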
Aliaksandr Valialkin
9e10d5083e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-01-11 13:04:58 +02:00
Aliaksandr Valialkin
30c2d75815 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2021-01-08 00:26:00 +02:00
Aliaksandr Valialkin
0e80f3f45a go.mod: add missing dependency on github.com/oklog/ulid
This is a follow up for a5583ddaff
2021-01-07 23:44:50 +02:00
Aliaksandr Valialkin
6e3cbae0b3 app/vmstorage/promdb: code prettifying after a5583ddaff 2021-01-07 23:30:19 +02:00
Nikolay
a5583ddaff adds period compaction to prometheus data (#105)
* adds period compaction to prometheus data
and filtering for datapoints outside retention period

* lint fix

* adds custom retention func

* fixes compaction,
fixes search query adjustment
2021-01-07 22:55:35 +02:00
Aliaksandr Valialkin
5db9e82e54 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-29 11:45:16 +02:00
Aliaksandr Valialkin
80676cf1fd Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-28 12:09:22 +02:00
Aliaksandr Valialkin
ba4c49dde6 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-27 14:10:59 +02:00
Aliaksandr Valialkin
35e5e8ff1e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-27 13:32:54 +02:00
Aliaksandr Valialkin
4cdbc4642d Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-25 17:41:24 +02:00
Aliaksandr Valialkin
23c0fb1efc Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-24 17:21:19 +02:00
Aliaksandr Valialkin
441d3e4b3f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-24 12:50:11 +02:00
Aliaksandr Valialkin
a0ea5777f0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-24 11:49:03 +02:00
Aliaksandr Valialkin
fb006fc6c0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-21 08:44:04 +02:00
Aliaksandr Valialkin
8593358965 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-19 16:43:03 +02:00
Aliaksandr Valialkin
d0311b7fe5 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-15 22:44:35 +02:00
Aliaksandr Valialkin
4edd38a906 Merge remote-tracking branch 'public/pmm-6401-read-prometheus-data-files' into pmm-6401-read-prometheus-data-files 2020-12-15 14:35:38 +02:00
Aliaksandr Valialkin
56054f4eb7 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-15 14:33:39 +02:00
Nikolay
0ff0787797 adds custom apiPathLinks for victoria-metrics / api help (#968)
* adds custom apiPathLinks for victoria-metrics / api help

* adds custom paths for PMM
2020-12-15 14:20:51 +02:00
Aliaksandr Valialkin
f9c706e186 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-15 12:43:44 +02:00
Aliaksandr Valialkin
d74d22460c .github/workflows/main.yml: set GO111MODULE=off when installing auxiliary tools via go install 2020-12-15 01:01:22 +02:00
Aliaksandr Valialkin
d1193c87a8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-14 20:21:19 +02:00
Aliaksandr Valialkin
4f311e5827 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-14 14:20:52 +02:00
Aliaksandr Valialkin
142e6b6ecf Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-14 11:50:16 +02:00
Aliaksandr Valialkin
1b4ef473b9 .github/ISSUE_TEMPLATE/bug_report.md: add a link to upgrade procedure 2020-12-11 22:08:59 +02:00
Aliaksandr Valialkin
8beb1f9519 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-11 21:29:22 +02:00
Aliaksandr Valialkin
501fd8efd9 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-11 12:10:21 +02:00
Aliaksandr Valialkin
45f2ba2572 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-05 14:57:17 +02:00
Aliaksandr Valialkin
cb2342029e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-12-05 13:24:28 +02:00
Aliaksandr Valialkin
ff0088ceec Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-26 02:07:16 +02:00
Aliaksandr Valialkin
afe6d2e736 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-25 23:03:44 +02:00
Aliaksandr Valialkin
e1a6262302 lib/fs: replace fs.OpenReaderAt with fs.MustOpenReaderAt
All the callers of fs.OpenReaderAt expect the file to be opened successfully.
So it is better to log a fatal error inside fs.MustOpenReaderAt instead of leaving this to the caller.
2020-11-23 09:55:41 +02:00
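
A minimal sketch of the Must* convention described above; the real lib/fs.MustOpenReaderAt returns a dedicated reader type rather than *os.File:

    package main

    import (
        "log"
        "os"
    )

    // mustOpenReaderAt fails fatally on error, since every caller expects the
    // open to succeed; this avoids returning an error to each call site.
    func mustOpenReaderAt(path string) *os.File {
        f, err := os.Open(path)
        if err != nil {
            log.Fatalf("FATAL: cannot open %q for reading: %s", path, err)
        }
        return f
    }

    func main() {
        f := mustOpenReaderAt("/etc/hostname") // example path
        defer f.Close()
        log.Printf("opened %s", f.Name())
    }
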
Aliaksandr Valialkin
f000a10cd0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-23 09:41:15 +02:00
Aliaksandr Valialkin
4aee6ef4c0 vendor: make vendor-update 2020-11-19 19:07:10 +02:00
Aliaksandr Valialkin
f4dfacd493 vendor: update prometheus dependency 2020-11-19 19:00:46 +02:00
Aliaksandr Valialkin
fb2d4e56ce Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-19 18:56:02 +02:00
Aliaksandr Valialkin
36b748dfc7 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-16 21:06:21 +02:00
Aliaksandr Valialkin
c625dc5b96 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-10 00:28:50 +02:00
Aliaksandr Valialkin
e32620afa1 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-10 00:21:55 +02:00
Aliaksandr Valialkin
3f298272a8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-08 13:41:41 +02:00
Aliaksandr Valialkin
7a473798b7 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-07 01:05:52 +02:00
Aliaksandr Valialkin
00ce906d97 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-05 17:14:27 +02:00
Aliaksandr Valialkin
41c9565aa1 update github.com/prometheus/prometheus from v1.8.2-0.20200911110723-e83ef207b6c2 to v1.8.2-0.20201029103703-63be30dceed9 2020-11-05 02:55:46 +02:00
Aliaksandr Valialkin
56303aee5b Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-05 02:51:08 +02:00
Aliaksandr Valialkin
8d8e2ccf5f Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-04 19:10:41 +02:00
Aliaksandr Valialkin
8772cb617c Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-03 14:15:45 +02:00
Aliaksandr Valialkin
65fbfc5cbc vendor: add missing files 2020-11-02 22:01:42 +02:00
Aliaksandr Valialkin
1b389674c0 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-02 21:58:57 +02:00
Aliaksandr Valialkin
98529e16ee Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-02 02:12:19 +02:00
Aliaksandr Valialkin
1b112405a8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-11-02 00:47:34 +02:00
Aliaksandr Valialkin
8bbc83e85e Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-10-17 12:13:56 +03:00
Aliaksandr Valialkin
8349140744 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-10-16 15:19:09 +03:00
Aliaksandr Valialkin
4dc13754d8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-10-13 18:37:48 +03:00
Aliaksandr Valialkin
83b7eb8ca6 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-10-09 12:36:16 +03:00
Aliaksandr Valialkin
e5ef3288dd Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-10-08 19:27:30 +03:00
Aliaksandr Valialkin
e7f2907138 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-10-07 21:40:45 +03:00
Aliaksandr Valialkin
757c5cfbe0 Merge branch 'pmm-6401-read-prometheus-data-files' of github.com:valyala/VictoriaMetrics into pmm-6401-read-prometheus-data-files 2020-10-06 19:07:38 +03:00
Aliaksandr Valialkin
317ddb84b9 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-10-06 16:18:01 +03:00
Aliaksandr Valialkin
2b1d0510fa Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-10-06 16:12:25 +03:00
Aliaksandr Valialkin
40d2f6fee4 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-10-05 18:14:51 +03:00
Aliaksandr Valialkin
9fbb84d5c2 app/vmselect/promql: fill gaps on graphs for range_* and running_* functions
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/806
2020-10-02 13:58:19 +03:00
Aliaksandr Valialkin
bdaa9a91f3 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-10-01 19:31:26 +03:00
Aliaksandr Valialkin
1a91da35be Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-09-30 09:58:18 +03:00
Aliaksandr Valialkin
f85be226bb vendor: make vendor-update 2020-09-30 08:58:52 +03:00
Aliaksandr Valialkin
8df5a3c5f6 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-09-30 08:55:57 +03:00
Aliaksandr Valialkin
9d3eb3f4b8 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-09-24 20:49:16 +03:00
Aliaksandr Valialkin
2cd48959d4 lib/storage: correctly use maxBlockSize in various checks
Previously `maxBlockSize` was multiplied by 8 in certain checks. This is unnecessary.
2020-09-24 20:17:51 +03:00
Aliaksandr Valialkin
8fc8874db4 Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files 2020-09-23 23:25:34 +03:00
Aliaksandr Valialkin
ff1cbb524e app/vmselect: obtain labels and label values from Prometheus storage inside netstorage package 2020-09-23 22:43:01 +03:00
Aliaksandr Valialkin
a70df4bd83 PMM-6401 Initial implementation for reading data from Prometheus files 2020-09-23 14:26:39 +03:00
157 changed files with 2623 additions and 936 deletions

View File

@@ -40,6 +40,8 @@ var (
"The saved data survives unclean shutdowns such as OOM crash, hardware reset, SIGKILL, etc. "+
"Bigger intervals may help increase the lifetime of flash storage with limited write cycles (e.g. Raspberry PI). "+
"Smaller intervals increase disk IO load. Minimum supported value is 1s")
downsamplingPeriods = flagutil.NewArrayString("downsampling.period", "Comma-separated downsampling periods in the format 'offset:period'. For example, '30d:10m' instructs "+
"to leave a single sample per 10 minutes for samples older than 30 days. See https://docs.victoriametrics.com/#downsampling for details")
maxIngestionRate = flag.Int("maxIngestionRate", 0, "The maximum number of samples vmsingle can receive per second. Data ingestion is paused when the limit is exceeded. "+
"By default there are no limits on samples ingestion rate.")
finalDedupScheduleInterval = flag.Duration("storage.finalDedupScheduleCheckInterval", time.Hour, "The interval for checking when final deduplication process should be started."+
@@ -48,6 +50,12 @@ var (
" See also https://docs.victoriametrics.com/#deduplication")
)
// customAPIPathList holds custom API help links in the form {"/api", "doc"}; paths are listed without the http.pathPrefix.
var customAPIPathList = [][]string{
{"/graph/explore", "explore metrics grafana page"},
{"/graph/d/prometheus-advanced/advanced-data-exploration", "PMM grafana dashboard"},
}
func main() {
// VictoriaMetrics is optimized for reduced memory allocations,
// so it can run with the reduced GOGC in order to reduce the used memory,
@@ -88,7 +96,10 @@ func main() {
}
logger.Infof("starting VictoriaMetrics at %q...", listenAddrs)
startTime := time.Now()
storage.SetDedupInterval(*minScrapeInterval)
err := storage.SetDownsamplingPeriods(*downsamplingPeriods, *minScrapeInterval)
if err != nil {
logger.Fatalf("cannot parse -downsampling.period: %s", err)
}
storage.SetDataFlushInterval(*inmemoryDataFlushInterval)
if *finalDedupScheduleInterval < time.Hour {
logger.Fatalf("-dedup.finalDedupScheduleCheckInterval cannot be smaller than 1 hour; got %s", *finalDedupScheduleInterval)
@@ -152,6 +163,10 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
{"api/v1/status/active_queries", "active queries"},
{"-/reload", "reload configuration"},
})
for _, p := range customAPIPathList {
p, doc := p[0], p[1]
fmt.Fprintf(w, "<a href=%q>%s</a> - %s<br/>", p, p, doc)
}
return true
}
if vminsert.RequestHandler(w, r) {
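For illustration only, here is a minimal standalone sketch of how the customAPIPathList entries added in this file end up rendered on the index page. It writes to os.Stdout instead of the HTTP response writer, and the list contents are simply copied from the diff above.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Copied from the diff above; paths are listed without the http.pathPrefix.
	customAPIPathList := [][]string{
		{"/graph/explore", "explore metrics grafana page"},
		{"/graph/d/prometheus-advanced/advanced-data-exploration", "PMM grafana dashboard"},
	}
	for _, p := range customAPIPathList {
		path, doc := p[0], p[1]
		// Same format string as in requestHandler, written to stdout for the sketch.
		fmt.Fprintf(os.Stdout, "<a href=%q>%s</a> - %s<br/>\n", path, path, doc)
	}
}
```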

View File

@@ -16,6 +16,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage/promdb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
@@ -337,6 +338,12 @@ var (
type packedTimeseries struct {
metricName string
brs []blockRef
pd *promData
}
type promData struct {
values []float64
timestamps []int64
}
type unpackWork struct {
@@ -440,9 +447,21 @@ func (pts *packedTimeseries) Unpack(dst *Result, tbf *tmpBlocksFile, tr storage.
putSortBlocksHeap(sbh)
return err
}
dedupInterval := storage.GetDedupInterval()
if pts.pd != nil {
// Add data from Prometheus to dst.
// It usually has smaller timestamps than the data from sbs, so put it first.
dst.Values = append(dst.Values, pts.pd.values...)
dst.Timestamps = append(dst.Timestamps, pts.pd.timestamps...)
}
dedupInterval := storage.GetDedupInterval(tr.MinTimestamp)
mergeSortBlocks(dst, sbh, dedupInterval)
putSortBlocksHeap(sbh)
if pts.pd != nil {
if !sort.IsSorted(dst) {
sort.Sort(dst)
}
pts.pd = nil
}
return nil
}
@@ -559,6 +578,27 @@ func (pts *packedTimeseries) unpackTo(dst []*sortBlock, tbf *tmpBlocksFile, tr s
return dst, firstErr
}
// sort.Interface implementation for Result
// Len implements sort.Interface
func (r *Result) Len() int {
return len(r.Timestamps)
}
// Less implements sort.Interface
func (r *Result) Less(i, j int) bool {
timestamps := r.Timestamps
return timestamps[i] < timestamps[j]
}
// Swap implements sort.Interface
func (r *Result) Swap(i, j int) {
timestamps := r.Timestamps
values := r.Values
timestamps[i], timestamps[j] = timestamps[j], timestamps[i]
values[i], values[j] = values[j], values[i]
}
func getSortBlock() *sortBlock {
v := sbPool.Get()
if v == nil {
@@ -796,6 +836,15 @@ func LabelNames(qt *querytracer.Tracer, sq *storage.SearchQuery, maxLabelNames i
if err != nil {
return nil, fmt.Errorf("error during labels search on time range: %w", err)
}
// Merge labels obtained from Prometheus storage.
promLabels, err := promdb.GetLabelNamesOnTimeRange(tr, deadline)
if err != nil {
return nil, fmt.Errorf("cannot obtain labels from Prometheus storage: %w", err)
}
qt.Printf("get %d label names from Prometheus storage", len(promLabels))
labels = mergeStrings(labels, promLabels)
// Sort labels like Prometheus does
sort.Strings(labels)
qt.Printf("sort %d labels", len(labels))
@@ -867,14 +916,44 @@ func LabelValues(qt *querytracer.Tracer, labelName string, sq *storage.SearchQue
}
labelValues, err := vmstorage.SearchLabelValuesWithFiltersOnTimeRange(qt, labelName, tfss, tr, maxLabelValues, sq.MaxMetrics, deadline.Deadline())
if err != nil {
return nil, fmt.Errorf("error during label values search on time range for labelName=%q: %w", labelName, err)
return nil, fmt.Errorf("error during label values search on time range: %w", err)
}
// Merge label values obtained from Prometheus storage.
promLabelValues, err := promdb.GetLabelValuesOnTimeRange(labelName, tr, deadline)
if err != nil {
return nil, fmt.Errorf("cannot obtain label values on time range for %q from Prometheus storage: %w", labelName, err)
}
qt.Printf("get %d label values from Prometheus storage", len(promLabelValues))
labelValues = mergeStrings(labelValues, promLabelValues)
// Sort labelValues like Prometheus does
sort.Strings(labelValues)
qt.Printf("sort %d label values", len(labelValues))
return labelValues, nil
}
func mergeStrings(a, b []string) []string {
if len(a) == 0 {
return b
}
if len(b) == 0 {
return a
}
m := make(map[string]struct{}, len(a)+len(b))
for _, s := range a {
m[s] = struct{}{}
}
for _, s := range b {
m[s] = struct{}{}
}
result := make([]string, 0, len(m))
for s := range m {
result = append(result, s)
}
return result
}
// GraphiteTagValues returns tag values for the given tagName until the given deadline.
func GraphiteTagValues(qt *querytracer.Tracer, tagName, filter string, limit int, deadline searchutils.Deadline) ([]string, error) {
qt = qt.NewChild("get graphite tag values for tagName=%s, filter=%s, limit=%d", tagName, filter, limit)
@@ -1280,6 +1359,26 @@ func ProcessSearchQuery(qt *querytracer.Tracer, sq *storage.SearchQuery, deadlin
}
qt.Printf("fetch unique series=%d, blocks=%d, samples=%d, bytes=%d", len(m), blocksRead, samples, tbf.Len())
// Fetch data from promdb.
pm := make(map[string]*promData)
err = promdb.VisitSeries(sq, deadline, func(metricName []byte, values []float64, timestamps []int64) {
pd := pm[string(metricName)]
if pd == nil {
if _, ok := m[string(metricName)]; !ok {
orderedMetricNames = append(orderedMetricNames, string(metricName))
}
pd = &promData{}
pm[string(metricName)] = pd
}
pd.values = append(pd.values, values...)
pd.timestamps = append(pd.timestamps, timestamps...)
})
if err != nil {
putTmpBlocksFile(tbf)
putStorageSearch(sr)
return nil, fmt.Errorf("error when searching in Prometheus data: %w", err)
}
var rss Results
rss.tr = tr
rss.deadline = deadline
@@ -1288,6 +1387,7 @@ func ProcessSearchQuery(qt *querytracer.Tracer, sq *storage.SearchQuery, deadlin
pts[i] = packedTimeseries{
metricName: metricName,
brs: brssPool[m[metricName]].brs,
pd: pm[metricName],
}
}
rss.packedTimeseries = pts
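A minimal, self-contained sketch (the `result` type and the sample values below are hypothetical, not part of the diff) of the merge order used in `Unpack` above: samples read from the Prometheus TSDB are appended first, samples unpacked from VictoriaMetrics blocks second, and the combined slices are re-sorted by timestamp only when they turn out to be unordered.

```go
package main

import (
	"fmt"
	"sort"
)

type result struct {
	Timestamps []int64
	Values     []float64
}

func (r *result) Len() int           { return len(r.Timestamps) }
func (r *result) Less(i, j int) bool { return r.Timestamps[i] < r.Timestamps[j] }
func (r *result) Swap(i, j int) {
	r.Timestamps[i], r.Timestamps[j] = r.Timestamps[j], r.Timestamps[i]
	r.Values[i], r.Values[j] = r.Values[j], r.Values[i]
}

func main() {
	var dst result
	// Data read from the Prometheus TSDB (usually older), appended first.
	dst.Timestamps = append(dst.Timestamps, 100, 200, 300)
	dst.Values = append(dst.Values, 1, 2, 3)
	// Data unpacked from VictoriaMetrics blocks (usually newer, but may overlap).
	dst.Timestamps = append(dst.Timestamps, 250, 400)
	dst.Values = append(dst.Values, 2.5, 4)
	if !sort.IsSorted(&dst) {
		sort.Sort(&dst)
	}
	fmt.Println(dst.Timestamps, dst.Values) // [100 200 250 300 400] [1 2 2.5 3 4]
}
```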

View File

@@ -12,6 +12,7 @@ import (
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage/promdb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
@@ -127,9 +128,11 @@ func Init(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
// register storage metrics
storageMetrics = metrics.NewSet()
storageMetrics.RegisterMetricsWriter(func(w io.Writer) {
writeStorageMetrics(w, strg)
writeStorageMetrics(w, Storage)
})
metrics.RegisterSet(storageMetrics)
promdb.Init(retentionPeriod.Milliseconds())
}
var storageMetrics *metrics.Set
@@ -250,6 +253,7 @@ func Stop() {
logger.Infof("gracefully closing the storage at %s", *DataPath)
startTime := time.Now()
WG.WaitAndBlock()
promdb.MustClose()
stopStaleSnapshotsRemover()
Storage.MustClose()
logger.Infof("successfully closed the storage in %.3f seconds", time.Since(startTime).Seconds())

View File

@@ -0,0 +1,268 @@
package promdb
import (
"context"
"flag"
"fmt"
"log/slog"
"time"
"github.com/oklog/ulid"
"github.com/prometheus/prometheus/model/labels"
promstorage "github.com/prometheus/prometheus/storage"
"github.com/prometheus/prometheus/tsdb"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
var prometheusDataPath = flag.String("prometheusDataPath", "", "Optional path to readonly historical Prometheus data")
var prometheusRetentionMsecs int64
// Init must be called after flag.Parse and before using the package.
//
// See also MustClose.
func Init(retentionMsecs int64) {
if promDB != nil {
logger.Fatalf("BUG: promdb.Init is called multiple times without promdb.MustClose call")
}
prometheusRetentionMsecs = retentionMsecs
if *prometheusDataPath == "" {
return
}
l := slog.New(slog.Default().Handler())
opts := tsdb.DefaultOptions()
opts.RetentionDuration = retentionMsecs
// Set the max block duration to 10% of the retention period, capped at 31 days,
// according to https://prometheus.io/docs/prometheus/latest/storage/#compaction
maxBlockDuration := int64((31 * 24 * time.Hour) / time.Millisecond)
if maxBlockDuration > retentionMsecs/10 {
maxBlockDuration = retentionMsecs / 10
}
if maxBlockDuration < opts.MinBlockDuration {
maxBlockDuration = opts.MinBlockDuration
}
opts.MaxBlockDuration = maxBlockDuration
// A custom delete function is needed because, by default, Prometheus doesn't delete
// blocks that fall outside the retention period unless new blocks with samples at current timestamps are being created.
// See https://github.com/prometheus/prometheus/blob/997bb7134fcfd7279f250e183e78681e48a56aff/tsdb/db.go#L1116
opts.BlocksToDelete = func(blocks []*tsdb.Block) map[ulid.ULID]struct{} {
m := make(map[ulid.ULID]struct{})
minRetentionTime := time.Now().Unix()*1000 - retentionMsecs
for _, block := range blocks {
meta := block.Meta()
// Delete blocks marked for deletion by the compaction code.
if meta.Compaction.Deletable {
m[meta.ULID] = struct{}{}
continue
}
if block.MaxTime() < minRetentionTime {
m[meta.ULID] = struct{}{}
}
}
return m
}
pdb, err := tsdb.Open(*prometheusDataPath, l, nil, opts, nil)
if err != nil {
logger.Panicf("FATAL: cannot open Prometheus data at -prometheusDataPath=%q: %s", *prometheusDataPath, err)
}
promDB = pdb
logger.Infof("successfully opened historical Prometheus data at -prometheusDataPath=%q with retentionMsecs=%d", *prometheusDataPath, retentionMsecs)
}
// MustClose must be called on graceful shutdown.
//
// Package functionality cannot be used after this call.
func MustClose() {
if *prometheusDataPath == "" {
return
}
if promDB == nil {
logger.Panicf("BUG: promdb.MustClose is called without promdb.Init call")
}
if err := promDB.Close(); err != nil {
logger.Panicf("FATAL: cannot close promDB: %s", err)
}
promDB = nil
logger.Infof("successfully closed historical Prometheus data at -prometheusDataPath=%q", *prometheusDataPath)
}
var promDB *tsdb.DB
// GetLabelNamesOnTimeRange returns label names.
func GetLabelNamesOnTimeRange(tr storage.TimeRange, deadline searchutils.Deadline) ([]string, error) {
if *prometheusDataPath == "" {
return nil, nil
}
d := time.Unix(int64(deadline.Deadline()), 0)
ctx, cancel := context.WithDeadline(context.Background(), d)
defer cancel()
q, err := promDB.Querier(tr.MinTimestamp, tr.MaxTimestamp)
if err != nil {
return nil, err
}
defer mustCloseQuerier(q)
names, _, err := q.LabelNames(ctx, nil)
// Make full copy of names, since they cannot be used after q is closed.
names = copyStringsWithMemory(names)
return names, err
}
// GetLabelValuesOnTimeRange returns values for the given labelName on the given tr.
func GetLabelValuesOnTimeRange(labelName string, tr storage.TimeRange, deadline searchutils.Deadline) ([]string, error) {
if *prometheusDataPath == "" {
return nil, nil
}
d := time.Unix(int64(deadline.Deadline()), 0)
ctx, cancel := context.WithDeadline(context.Background(), d)
defer cancel()
q, err := promDB.Querier(tr.MinTimestamp, tr.MaxTimestamp)
if err != nil {
return nil, err
}
defer mustCloseQuerier(q)
values, _, err := q.LabelValues(ctx, labelName, nil)
// Make full copy of values, since they cannot be used after q is closed.
values = copyStringsWithMemory(values)
return values, err
}
func copyStringsWithMemory(a []string) []string {
result := make([]string, len(a))
for i, s := range a {
result[i] = string(append([]byte{}, s...))
}
return result
}
// SeriesVisitor is called by VisitSeries for each matching time series.
//
// The caller shouldn't hold references to metricName, values and timestamps after returning.
type SeriesVisitor func(metricName []byte, values []float64, timestamps []int64)
// VisitSeries calls f for each series found in the pdb.
func VisitSeries(sq *storage.SearchQuery, deadline searchutils.Deadline, f SeriesVisitor) error {
if *prometheusDataPath == "" {
return nil
}
d := time.Unix(int64(deadline.Deadline()), 0)
ctx, cancel := context.WithDeadline(context.Background(), d)
defer cancel()
minTime, maxTime := getSearchTimeRange(sq)
q, err := promDB.Querier(minTime, maxTime)
if err != nil {
return err
}
defer mustCloseQuerier(q)
var seriesSet []promstorage.SeriesSet
for _, tf := range sq.TagFilterss {
ms, err := convertTagFiltersToMatchers(tf)
if err != nil {
return fmt.Errorf("cannot convert tag filters to matchers: %w", err)
}
s := q.Select(ctx, false, nil, ms...)
seriesSet = append(seriesSet, s)
}
ss := promstorage.NewMergeSeriesSet(seriesSet, 0, promstorage.ChainedSeriesMerge)
var (
mn storage.MetricName
metricName []byte
values []float64
timestamps []int64
)
var it chunkenc.Iterator
for ss.Next() {
s := ss.At()
convertPromLabelsToMetricName(&mn, s.Labels())
metricName = mn.SortAndMarshal(metricName[:0])
values = values[:0]
timestamps = timestamps[:0]
it = s.Iterator(it)
for {
typ := it.Next()
if typ == chunkenc.ValNone {
break
}
if typ != chunkenc.ValFloat {
// Skip unsupported values
continue
}
ts, v := it.At()
values = append(values, v)
timestamps = append(timestamps, ts)
}
if err := it.Err(); err != nil {
return fmt.Errorf("error when iterating Prometheus series: %w", err)
}
f(metricName, values, timestamps)
}
return ss.Err()
}
func getSearchTimeRange(sq *storage.SearchQuery) (int64, int64) {
maxTime := sq.MaxTimestamp
minTime := sq.MinTimestamp
minRetentionTime := time.Now().Unix()*1000 - prometheusRetentionMsecs
if maxTime < minRetentionTime {
maxTime = minRetentionTime
}
if minTime < minRetentionTime {
minTime = minRetentionTime
}
return minTime, maxTime
}
func convertPromLabelsToMetricName(dst *storage.MetricName, labels []labels.Label) {
dst.Reset()
for _, label := range labels {
if label.Name == "__name__" {
dst.MetricGroup = append(dst.MetricGroup[:0], label.Value...)
} else {
dst.AddTag(label.Name, label.Value)
}
}
}
func convertTagFiltersToMatchers(tfs []storage.TagFilter) ([]*labels.Matcher, error) {
ms := make([]*labels.Matcher, 0, len(tfs))
for _, tf := range tfs {
var mt labels.MatchType
if tf.IsNegative {
if tf.IsRegexp {
mt = labels.MatchNotRegexp
} else {
mt = labels.MatchNotEqual
}
} else {
if tf.IsRegexp {
mt = labels.MatchRegexp
} else {
mt = labels.MatchEqual
}
}
key := string(tf.Key)
if key == "" {
key = "__name__"
}
value := string(tf.Value)
m, err := labels.NewMatcher(mt, key, value)
if err != nil {
return nil, err
}
ms = append(ms, m)
}
return ms, nil
}
func mustCloseQuerier(q promstorage.Querier) {
if err := q.Close(); err != nil {
logger.Panicf("FATAL: cannot close querier: %s", err)
}
}
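As a rough illustration of the `MaxBlockDuration` selection in `Init` above — 10% of the retention period, capped at 31 days and floored at the TSDB minimum block duration — here is a standalone sketch using plain millisecond arithmetic; the 2h minimum is an assumption standing in for `tsdb.DefaultOptions().MinBlockDuration`.

```go
package main

import (
	"fmt"
	"time"
)

// maxBlockDurationMs mirrors the selection in promdb.Init:
// min(31 days, retention/10), floored at the minimum block duration.
func maxBlockDurationMs(retentionMs int64) int64 {
	// Assumption: 2h stands in for tsdb.DefaultOptions().MinBlockDuration.
	const minBlockMs = int64(2 * time.Hour / time.Millisecond)
	d := int64((31 * 24 * time.Hour) / time.Millisecond)
	if d > retentionMs/10 {
		d = retentionMs / 10
	}
	if d < minBlockMs {
		d = minBlockMs
	}
	return d
}

func main() {
	for _, days := range []int64{7, 100, 3650} {
		retentionMs := days * 24 * 3600 * 1000
		fmt.Printf("retention=%dd -> max block duration %v\n",
			days, time.Duration(maxBlockDurationMs(retentionMs))*time.Millisecond)
	}
}
```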

View File

@@ -93,7 +93,6 @@ publish-via-docker:
--label "org.opencontainers.image.version=$(PKG_TAG)" \
--label "org.opencontainers.image.created=$(shell date -u +"%Y-%m-%dT%H:%M:%SZ")" \
--tag $(DOCKER_NAMESPACE)/$(APP_NAME):$(PKG_TAG)$(RACE) \
--tag $(DOCKER_NAMESPACE)/$(APP_NAME):$(LATEST_TAG)$(RACE) \
-o type=image \
--provenance=false \
-f app/$(APP_NAME)/multiarch/Dockerfile \

56
go.mod
View File

@@ -13,17 +13,17 @@ replace cloud.google.com/go/storage => cloud.google.com/go/storage v1.43.0
require (
cloud.google.com/go/storage v1.50.0
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.1
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.5.0
github.com/VictoriaMetrics/easyproto v0.1.4
github.com/VictoriaMetrics/fastcache v1.12.2
github.com/VictoriaMetrics/metrics v1.35.1
github.com/VictoriaMetrics/metricsql v0.81.1
github.com/aws/aws-sdk-go-v2 v1.32.8
github.com/aws/aws-sdk-go-v2/config v1.28.10
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.48
github.com/aws/aws-sdk-go-v2/service/s3 v1.72.2
github.com/bmatcuk/doublestar/v4 v4.7.1
github.com/aws/aws-sdk-go-v2 v1.33.0
github.com/aws/aws-sdk-go-v2/config v1.29.0
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.51
github.com/aws/aws-sdk-go-v2/service/s3 v1.73.1
github.com/bmatcuk/doublestar/v4 v4.8.0
github.com/cespare/xxhash/v2 v2.3.0
github.com/cheggaaa/pb/v3 v3.1.5
github.com/ergochat/readline v0.1.3
@@ -34,6 +34,7 @@ require (
github.com/influxdata/influxdb v1.11.8
github.com/klauspost/compress v1.17.11
github.com/mattn/go-isatty v0.0.20
github.com/oklog/ulid v1.3.1
github.com/prometheus/prometheus v0.301.0
github.com/urfave/cli/v2 v2.27.5
github.com/valyala/fastjson v1.6.4
@@ -45,7 +46,7 @@ require (
golang.org/x/net v0.34.0
golang.org/x/oauth2 v0.25.0
golang.org/x/sys v0.29.0
google.golang.org/api v0.216.0
google.golang.org/api v0.217.0
gopkg.in/yaml.v2 v2.4.0
)
@@ -59,21 +60,21 @@ require (
github.com/AzureAD/microsoft-authentication-library-for-go v1.3.2 // indirect
github.com/VividCortex/ewma v1.2.0 // indirect
github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b // indirect
github.com/aws/aws-sdk-go v1.55.5 // indirect
github.com/aws/aws-sdk-go v1.55.6 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.17.51 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.23 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.27 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.27 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.17.53 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.28 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.27 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.28 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.8 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.8 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.8 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.24.9 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.8 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.33.6 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.5.1 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.9 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.9 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.24.10 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.9 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.33.8 // indirect
github.com/aws/smithy-go v1.22.1 // indirect
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3 // indirect
github.com/beorn7/perks v1.0.1 // indirect
@@ -102,12 +103,11 @@ require (
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.20.5 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.61.0 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/prometheus/sigv4 v0.1.1 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
@@ -133,14 +133,14 @@ require (
golang.org/x/sync v0.10.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.9.0 // indirect
google.golang.org/genproto v0.0.0-20250106144421-5f5ef82da422 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250106144421-5f5ef82da422 // indirect
google.golang.org/grpc v1.69.2 // indirect
google.golang.org/protobuf v1.36.2 // indirect
google.golang.org/genproto v0.0.0-20250115164207-1a7da9e5054f // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f // indirect
google.golang.org/grpc v1.69.4 // indirect
google.golang.org/protobuf v1.36.3 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apimachinery v0.32.0 // indirect
k8s.io/client-go v0.32.0 // indirect
k8s.io/apimachinery v0.32.1 // indirect
k8s.io/client-go v0.32.1 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/utils v0.0.0-20241210054802-24370beab758 // indirect
)

120
go.sum
View File

@@ -14,10 +14,10 @@ cloud.google.com/go/storage v1.43.0 h1:CcxnSohZwizt4LCzQHWvBf1/kvtHUn7gk9QERXPyX
cloud.google.com/go/storage v1.43.0/go.mod h1:ajvxEa7WmZS1PxvKRq4bq0tFT3vMd502JwstCcYv0Q0=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0 h1:g0EZJwz7xkXQiZAI5xi9f3WWFYBlX1CPTrR+NDToRkQ=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0/go.mod h1:XCW7KnZet0Opnr7HccfUw1PLc4CjHqpcaxW8DHklNkQ=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0 h1:B/dfvscEQtew9dVuoxqxrUKKv8Ih2f55PydknDamU+g=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0/go.mod h1:fiPSssYvltE08HJchL04dOy+RD4hgrjph0cwGGMntdI=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.0 h1:+m0M/LFxN43KvULkDNfdXOgrjtg6UYJPFBJyuEcRCAw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.0/go.mod h1:PwOyop78lveYMRs6oCxjiVyBdyCgIYH6XHIVZO9/SFQ=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.1 h1:1mvYtZfWQAnwNah/C+Z+Jb9rQH95LPE2vlmMuWAHJk8=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.1/go.mod h1:75I/mXtme1JyWFtz8GocPHVFyH421IBoZErnO16dd0k=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.1 h1:Bk5uOhSAenHyR5P61D/NzeQCv+4fEVV8mOkJ82NqpWw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.1/go.mod h1:QZ4pw3or1WPmRBxf0cHd1tknzrT54WPBOQoGutCPvSU=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5 v5.7.0 h1:LkHbJbgF3YyvC53aqYGR+wWQDn2Rdp9AQdGndf9QvY4=
@@ -53,52 +53,52 @@ github.com/allegro/bigcache v1.2.1-0.20190218064605-e24eb225f156 h1:eMwmnE/GDgah
github.com/allegro/bigcache v1.2.1-0.20190218064605-e24eb225f156/go.mod h1:Cb/ax3seSYIx7SuZdm2G2xzfwmv3TPSk2ucNfQESPXM=
github.com/armon/go-metrics v0.4.1 h1:hR91U9KYmb6bLBYLQjyM+3j+rcd/UhE+G78SFnF8gJA=
github.com/armon/go-metrics v0.4.1/go.mod h1:E6amYzXo6aW1tqzoZGT755KkbgrJsSdpwZ+3JqfkOG4=
github.com/aws/aws-sdk-go v1.55.5 h1:KKUZBfBoyqy5d3swXyiC7Q76ic40rYcbqH7qjh59kzU=
github.com/aws/aws-sdk-go v1.55.5/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/aws/aws-sdk-go-v2 v1.32.8 h1:cZV+NUS/eGxKXMtmyhtYPJ7Z4YLoI/V8bkTdRZfYhGo=
github.com/aws/aws-sdk-go-v2 v1.32.8/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U=
github.com/aws/aws-sdk-go v1.55.6 h1:cSg4pvZ3m8dgYcgqB97MrcdjUmZ1BeMYKUxMMB89IPk=
github.com/aws/aws-sdk-go v1.55.6/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/aws/aws-sdk-go-v2 v1.33.0 h1:Evgm4DI9imD81V0WwD+TN4DCwjUMdc94TrduMLbgZJs=
github.com/aws/aws-sdk-go-v2 v1.33.0/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7 h1:lL7IfaFzngfx0ZwUGOZdsFFnQ5uLvR0hWqqhyE7Q9M8=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7/go.mod h1:QraP0UcVlQJsmHfioCrveWOC1nbiWUl3ej08h4mXWoc=
github.com/aws/aws-sdk-go-v2/config v1.28.10 h1:fKODZHfqQu06pCzR69KJ3GuttraRJkhlC8g80RZ0Dfg=
github.com/aws/aws-sdk-go-v2/config v1.28.10/go.mod h1:PvdxRYZ5Um9QMq9PQ0zHHNdtKK+he2NHtFCUFMXWXeg=
github.com/aws/aws-sdk-go-v2/credentials v1.17.51 h1:F/9Sm6Y6k4LqDesZDPJCLxQGXNNHd/ZtJiWd0lCZKRk=
github.com/aws/aws-sdk-go-v2/credentials v1.17.51/go.mod h1:TKbzCHm43AoPyA+iLGGcruXd4AFhF8tOmLex2R9jWNQ=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.23 h1:IBAoD/1d8A8/1aA8g4MBVtTRHhXRiNAgwdbo/xRM2DI=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.23/go.mod h1:vfENuCM7dofkgKpYzuzf1VT1UKkA/YL3qanfBn7HCaA=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.48 h1:XnXVe2zRyPf0+fAW5L05esmngvBpC6DQZK7oZB/z/Co=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.48/go.mod h1:S3wey90OrS4f7kYxH6PT175YyEcHTORY07++HurMaRM=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.27 h1:jSJjSBzw8VDIbWv+mmvBSP8ezsztMYJGH+eKqi9AmNs=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.27/go.mod h1:/DAhLbFRgwhmvJdOfSm+WwikZrCuUJiA4WgJG0fTNSw=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.27 h1:l+X4K77Dui85pIj5foXDhPlnqcNRG2QUyvca300lXh8=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.27/go.mod h1:KvZXSFEXm6x84yE8qffKvT3x8J5clWnVFXphpohhzJ8=
github.com/aws/aws-sdk-go-v2/config v1.29.0 h1:Vk/u4jof33or1qAQLdofpjKV7mQQT7DcUpnYx8kdmxY=
github.com/aws/aws-sdk-go-v2/config v1.29.0/go.mod h1:iXAZK3Gxvpq3tA+B9WaDYpZis7M8KFgdrDPMmHrgbJM=
github.com/aws/aws-sdk-go-v2/credentials v1.17.53 h1:lwrVhiEDW5yXsuVKlFVUnR2R50zt2DklhOyeLETqDuE=
github.com/aws/aws-sdk-go-v2/credentials v1.17.53/go.mod h1:CkqM1bIw/xjEpBMhBnvqUXYZbpCFuj6dnCAyDk2AtAY=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24 h1:5grmdTdMsovn9kPZPI23Hhvp0ZyNm5cRO+IZFIYiAfw=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24/go.mod h1:zqi7TVKTswH3Ozq28PkmBmgzG1tona7mo9G2IJg4Cis=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.51 h1:Q0FNHs6JTGuoBWNQycD5LRSf+/WVHWEl+FwJ0tEDZUE=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.51/go.mod h1:B9sW5/AD5bStKdTyUdz1xWRKOwnyUwJ4eJ4olQBtZo0=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28 h1:igORFSiH3bfq4lxKFkTSYDhJEUCYo6C8VKiWJjYwQuQ=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28/go.mod h1:3So8EA/aAYm36L7XIvCVwLa0s5N0P7o2b1oqnx/2R4g=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.28 h1:1mOW9zAUMhTSrMDssEHS/ajx8JcAj/IcftzcmNlmVLI=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.28/go.mod h1:kGlXVIWDfvt2Ox5zEaNglmq0hXPHgQFNMix33Tw22jA=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.27 h1:AmB5QxnD+fBFrg9LcqzkgF/CaYvMyU/BTlejG4t1S7Q=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.27/go.mod h1:Sai7P3xTiyv9ZUYO3IFxMnmiIP759/67iQbU4kdmkyU=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.28 h1:7kpeALOUeThs2kEjlAxlADAVfxKmkYAedlpZ3kdoSJ4=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.28/go.mod h1:pyaOYEdp1MJWgtXLy6q80r3DhsVdOIOZNB9hdTcJIvI=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 h1:iXtILhvDxB6kPvEXgsDhGaZCSC6LQET5ZHSdJozeI0Y=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1/go.mod h1:9nu0fVANtYiAePIBh2/pFUSwtJ402hLnp854CNoDOeE=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.8 h1:iwYS40JnrBeA9e9aI5S6KKN4EB2zR4iUVYN0nwVivz4=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.8/go.mod h1:Fm9Mi+ApqmFiknZtGpohVcBGvpTu542VC4XO9YudRi0=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.8 h1:cWno7lefSH6Pp+mSznagKCgfDGeZRin66UvYUqAkyeA=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.8/go.mod h1:tPD+VjU3ABTBoEJ3nctu5Nyg4P4yjqSH5bJGGkY4+XE=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.8 h1:/Mn7gTedG86nbpjT4QEKsN1D/fThiYe1qvq7WsBGNHg=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.8/go.mod h1:Ae3va9LPmvjj231ukHB6UeT8nS7wTPfC3tMZSZMwNYg=
github.com/aws/aws-sdk-go-v2/service/s3 v1.72.2 h1:a7aQ3RW+ug4IbhoQp29NZdc7vqrzKZZfWZSaQAXOZvQ=
github.com/aws/aws-sdk-go-v2/service/s3 v1.72.2/go.mod h1:xMekrnhmJ5aqmyxtmALs7mlvXw5xRh+eYjOjvrIIFJ4=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.9 h1:YqtxripbjWb2QLyzRK9pByfEDvgg95gpC2AyDq4hFE8=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.9/go.mod h1:lV8iQpg6OLOfBnqbGMBKYjilBlf633qwHnBEiMSPoHY=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.8 h1:6dBT1Lz8fK11m22R+AqfRsFn8320K0T5DTGxxOQBSMw=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.8/go.mod h1:/kiBvRQXBc6xeJTYzhSdGvJ5vm1tjaDEjH+MSeRJnlY=
github.com/aws/aws-sdk-go-v2/service/sts v1.33.6 h1:VwhTrsTuVn52an4mXx29PqRzs2Dvu921NpGk7y43tAM=
github.com/aws/aws-sdk-go-v2/service/sts v1.33.6/go.mod h1:+8h7PZb3yY5ftmVLD7ocEoE98hdc8PoKS0H3wfx1dlc=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.5.1 h1:mJ9FRktB8v1Ihpqwfk0AWvYEd0FgQtLsshc2Qb2TVc8=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.5.1/go.mod h1:dIW8puxSbYLSPv/ju0d9A3CpwXdtqvJtYKDMVmPLOWE=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.9 h1:TQmKDyETFGiXVhZfQ/I0cCFziqqX58pi4tKJGYGFSz0=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.9/go.mod h1:HVLPK2iHQBUx7HfZeOQSEu3v2ubZaAY2YPbAm5/WUyY=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.9 h1:2aInXbh02XsbO0KobPGMNXyv2QP73VDKsWPNJARj/+4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.9/go.mod h1:dgXS1i+HgWnYkPXqNoPIPKeUsUUYHaUbThC90aDnNiE=
github.com/aws/aws-sdk-go-v2/service/s3 v1.73.1 h1:OzmyfYGiMCOIAq5pa0KWcaZoA9F8FqajOJevh+hhFdY=
github.com/aws/aws-sdk-go-v2/service/s3 v1.73.1/go.mod h1:K+0a0kWDHAUXBH8GvYGS3cQRwIuRjO9bMWUz6vpNCaU=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.10 h1:DyZUj3xSw3FR3TXSwDhPhuZkkT14QHBiacdbUVcD0Dg=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.10/go.mod h1:Ro744S4fKiCCuZECXgOi760TiYylUM8ZBf6OGiZzJtY=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.9 h1:I1TsPEs34vbpOnR81GIcAq4/3Ud+jRHVGwx6qLQUHLs=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.9/go.mod h1:Fzsj6lZEb8AkTE5S68OhcbBqeWPsR8RnGuKPr8Todl8=
github.com/aws/aws-sdk-go-v2/service/sts v1.33.8 h1:pqEJQtlKWvnv3B6VRt60ZmsHy3SotlEBvfUBPB1KVcM=
github.com/aws/aws-sdk-go-v2/service/sts v1.33.8/go.mod h1:f6vjfZER1M17Fokn0IzssOTMT2N8ZSq+7jnNF0tArvw=
github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro=
github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3 h1:6df1vn4bBlDDo4tARvBm7l6KA9iVMnE3NWizDeWSrps=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3/go.mod h1:CIWtjkly68+yqLPbvwwR/fjNJA/idrtULjZWh2v1ys0=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bmatcuk/doublestar/v4 v4.7.1 h1:fdDeAqgT47acgwd9bd9HxJRDmc9UAmPpc+2m0CXv75Q=
github.com/bmatcuk/doublestar/v4 v4.7.1/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc=
github.com/bmatcuk/doublestar/v4 v4.8.0 h1:DSXtrypQddoug1459viM9X9D3dp1Z7993fw36I2kNcQ=
github.com/bmatcuk/doublestar/v4 v4.8.0/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@@ -298,16 +298,16 @@ github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.61.0 h1:3gv/GThfX0cV2lpO7gkTUwZru38mxevy90Bj8YFSRQQ=
github.com/prometheus/common v0.61.0/go.mod h1:zr29OCN/2BsJRaFwG8QOBr41D6kkchKbpeNH7pAjb/s=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/prometheus/prometheus v0.301.0 h1:0z8dgegmILivNomCd79RKvVkIols8vBGPKmcIBc7OyY=
github.com/prometheus/prometheus v0.301.0/go.mod h1:BJLjWCKNfRfjp7Q48DrAjARnCi7GhfUVvUFEAWTssZM=
github.com/prometheus/sigv4 v0.1.1 h1:UJxjOqVcXctZlwDjpUpZ2OiMWJdFijgSofwLzO1Xk0Q=
github.com/prometheus/sigv4 v0.1.1/go.mod h1:RAmWVKqx0bwi0Qm4lrKMXFM0nhpesBcenfCtz9qRyH8=
github.com/redis/go-redis/v9 v9.6.1 h1:HHDteefn6ZkTtY5fGUE8tj8uy85AHk6zP7CpzIAM0y4=
github.com/redis/go-redis/v9 v9.6.1/go.mod h1:0C0c6ycQsdpVNQpxb1njEQIqkx5UcsM8FJCQLgE9+RA=
github.com/redis/go-redis/v9 v9.7.0 h1:HhLSs+B6O021gwzl+locl0zEDnyNkxMtf/Z3NNBMa9E=
github.com/redis/go-redis/v9 v9.7.0/go.mod h1:f6zhXITC7JUJIlPEiBOTXxJgPLdZcA93GewI7inzyWw=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
@@ -437,18 +437,18 @@ golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.216.0 h1:xnEHy+xWFrtYInWPy8OdGFsyIfWJjtVnO39g7pz2BFY=
google.golang.org/api v0.216.0/go.mod h1:K9wzQMvWi47Z9IU7OgdOofvZuw75Ge3PPITImZR/UyI=
google.golang.org/genproto v0.0.0-20250106144421-5f5ef82da422 h1:6GUHKGv2huWOHKmDXLMNE94q3fBDlEHI+oTRIZSebK0=
google.golang.org/genproto v0.0.0-20250106144421-5f5ef82da422/go.mod h1:1NPAxoesyw/SgLPqaUp9u1f9PWCLAk/jVmhx7gJZStg=
google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422 h1:GVIKPyP/kLIyVOgOnTwFOrvQaQUzOzGMCxgFUOEmm24=
google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422/go.mod h1:b6h1vNKhxaSoEI+5jc3PJUCustfli/mRab7295pY7rw=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250106144421-5f5ef82da422 h1:3UsHvIr4Wc2aW4brOaSCmcxh9ksica6fHEr8P1XhkYw=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250106144421-5f5ef82da422/go.mod h1:3ENsm/5D1mzDyhpzeRi1NR784I0BcofWBoSc5QqqMK4=
google.golang.org/grpc v1.69.2 h1:U3S9QEtbXC0bYNvRtcoklF3xGtLViumSYxWykJS+7AU=
google.golang.org/grpc v1.69.2/go.mod h1:vyjdE6jLBI76dgpDojsFGNaHlxdjXN9ghpnd2o7JGZ4=
google.golang.org/protobuf v1.36.2 h1:R8FeyR1/eLmkutZOM5CWghmo5itiG9z0ktFlTVLuTmU=
google.golang.org/protobuf v1.36.2/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/api v0.217.0 h1:GYrUtD289o4zl1AhiTZL0jvQGa2RDLyC+kX1N/lfGOU=
google.golang.org/api v0.217.0/go.mod h1:qMc2E8cBAbQlRypBTBWHklNJlaZZJBwDv81B1Iu8oSI=
google.golang.org/genproto v0.0.0-20250115164207-1a7da9e5054f h1:387Y+JbxF52bmesc8kq1NyYIp33dnxCw6eiA7JMsTmw=
google.golang.org/genproto v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:0joYwWwLQh18AOj8zMYeZLjzuqcYTU3/nC5JdCvC3JI=
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f h1:gap6+3Gk41EItBuyi4XX/bp4oqJ3UwuIMl25yGinuAA=
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:Ic02D47M+zbarjYYUlK57y316f2MoN0gjAwI3f2S95o=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f h1:OxYkA3wjPsZyBylwymxSHa7ViiW1Sml4ToBrncvFehI=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:+2Yz8+CLJbIfL9z73EW45avw8Lmge3xVElCP9zEKi50=
google.golang.org/grpc v1.69.4 h1:MF5TftSMkd8GLw/m0KM6V8CMOCY6NZ1NQDPGFgbTt4A=
google.golang.org/grpc v1.69.4/go.mod h1:vyjdE6jLBI76dgpDojsFGNaHlxdjXN9ghpnd2o7JGZ4=
google.golang.org/protobuf v1.36.3 h1:82DV7MYdb8anAVi3qge1wSnMDrnKK7ebr+I0hHRN1BU=
google.golang.org/protobuf v1.36.3/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
@@ -464,12 +464,12 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.32.0 h1:OL9JpbvAU5ny9ga2fb24X8H6xQlVp+aJMFlgtQjR9CE=
k8s.io/api v0.32.0/go.mod h1:4LEwHZEf6Q/cG96F3dqR965sYOfmPM7rq81BLgsE0p0=
k8s.io/apimachinery v0.32.0 h1:cFSE7N3rmEEtv4ei5X6DaJPHHX0C+upp+v5lVPiEwpg=
k8s.io/apimachinery v0.32.0/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE=
k8s.io/client-go v0.32.0 h1:DimtMcnN/JIKZcrSrstiwvvZvLjG0aSxy8PxN8IChp8=
k8s.io/client-go v0.32.0/go.mod h1:boDWvdM1Drk4NJj/VddSLnx59X3OPgwrOo0vGbtq9+8=
k8s.io/api v0.32.1 h1:f562zw9cy+GvXzXf0CKlVQ7yHJVYzLfL6JAS4kOAaOc=
k8s.io/api v0.32.1/go.mod h1:/Yi/BqkuueW1BgpoePYBRdDYfjPF5sgTr5+YqDZra5k=
k8s.io/apimachinery v0.32.1 h1:683ENpaCBjma4CYqsmZyhEzrGz6cjn1MY/X2jB2hkZs=
k8s.io/apimachinery v0.32.1/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE=
k8s.io/client-go v0.32.1 h1:otM0AxdhdBIaQh7l1Q0jQpmo7WOFIk5FFa4bg6YMdUU=
k8s.io/client-go v0.32.1/go.mod h1:aTTKZY7MdxUaJ/KiUs8D+GssR9zJZi77ZqtzcGXIiDg=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f h1:GA7//TjRY9yWGy1poLzYYJJ4JRdzg3+O6e8I+e+8T5Y=

View File

@@ -163,7 +163,8 @@ func (b *Block) deduplicateSamplesDuringMerge() {
// Nothing to dedup.
return
}
dedupInterval := GetDedupInterval()
maxTimestamp := srcTimestamps[len(srcTimestamps)-1]
dedupInterval := GetDedupInterval(maxTimestamp)
if dedupInterval <= 0 {
// Deduplication is disabled.
return

View File

@@ -15,15 +15,10 @@ func SetDedupInterval(dedupInterval time.Duration) {
globalDedupInterval = dedupInterval.Milliseconds()
}
// GetDedupInterval returns the dedup interval in milliseconds, which has been set via SetDedupInterval.
func GetDedupInterval() int64 {
return globalDedupInterval
}
var globalDedupInterval int64
func isDedupEnabled() bool {
return globalDedupInterval > 0
return len(downsamplingPeriods) > 0
}
// DeduplicateSamples removes samples from src* if they are closer to each other than dedupInterval in milliseconds.

123
lib/storage/downsampling.go Normal file
View File

@@ -0,0 +1,123 @@
package storage
import (
"fmt"
"sort"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/metricsql"
)
// SetDownsamplingPeriods configures downsampling.
//
// The function must be called before opening or creating any storage.
func SetDownsamplingPeriods(periods []string, dedupInterval time.Duration) error {
dsps, err := parseDownsamplingPeriods(periods)
if err != nil {
return err
}
dedupIntervalMs := dedupInterval.Milliseconds()
if dedupIntervalMs > 0 {
if len(dsps) > 0 && dsps[len(dsps)-1].Offset == 0 {
return fmt.Errorf("-dedup.minScrapeInterval=%s cannot be used if -downsampling.period=%s contains zero offset", dedupInterval, periods)
}
// Deduplication is a special case of downsampling with zero offset.
dsps = append(dsps, DownsamplingPeriod{
Offset: 0,
Interval: dedupIntervalMs,
})
}
downsamplingPeriods = dsps
return nil
}
// DownsamplingPeriod describes downsampling period
type DownsamplingPeriod struct {
// Offset is the offset in milliseconds from the current time, starting at which samples are downsampled with the given Interval
Offset int64
// Interval is the downsampling interval in milliseconds - only a single sample is kept per interval
Interval int64
}
// String implements the fmt.Stringer interface.
func (dsp DownsamplingPeriod) String() string {
offset := time.Duration(dsp.Offset) * time.Millisecond
interval := time.Duration(dsp.Interval) * time.Millisecond
return fmt.Sprintf("%s:%s", offset, interval)
}
func (dsp *DownsamplingPeriod) parse(s string) error {
idx := strings.Index(s, ":")
if idx <= 0 {
return fmt.Errorf("incorrect format for downsampling period: %s, want `offset:interval` format", s)
}
offsetStr, intervalStr := s[:idx], s[idx+1:]
interval, err := metricsql.DurationValue(intervalStr, 0)
if err != nil {
return fmt.Errorf("incorrect interval: %s format for downsampling interval: %s err: %w", intervalStr, s, err)
}
offset, err := metricsql.DurationValue(offsetStr, 0)
if err != nil {
return fmt.Errorf("incorrect duration: %s format for downsampling offset: %s err: %w", offsetStr, s, err)
}
dsp.Interval = interval
dsp.Offset = offset
// sanity check
if offset > 0 && interval > offset {
return fmt.Errorf("downsampling interval=%d cannot exceed offset=%d", dsp.Interval, dsp.Offset)
}
return nil
}
var downsamplingPeriods []DownsamplingPeriod
// GetDedupInterval returns the dedup interval in milliseconds, which must be applied to samples with the given timestamp.
func GetDedupInterval(timestamp int64) int64 {
dsp := getDownsamplingPeriod(timestamp)
return dsp.Interval
}
// getDownsamplingPeriod returns downsampling period, which must be used for the given timestamp
func getDownsamplingPeriod(timestamp int64) DownsamplingPeriod {
offset := int64(fasttime.UnixTimestamp())*1000 - timestamp
for _, dsp := range downsamplingPeriods {
if offset >= dsp.Offset {
return dsp
}
}
return DownsamplingPeriod{}
}
func parseDownsamplingPeriods(periods []string) ([]DownsamplingPeriod, error) {
if len(periods) == 0 {
return nil, nil
}
var dsps []DownsamplingPeriod
for _, period := range periods {
var dsp DownsamplingPeriod
if err := dsp.parse(period); err != nil {
return nil, fmt.Errorf("cannot parse downsampling period %q: %w", period, err)
}
dsps = append(dsps, dsp)
}
sort.Slice(dsps, func(i, j int) bool {
return dsps[i].Offset > dsps[j].Offset
})
dspPrev := dsps[0]
// sanity checks.
for _, dsp := range dsps[1:] {
if dspPrev.Interval <= dsp.Interval {
return nil, fmt.Errorf("prev downsampling interval %d must be bigger than the next interval %d", dspPrev.Interval, dsp.Interval)
}
if dspPrev.Offset == dsp.Offset {
return nil, fmt.Errorf("duplicate downsampling offset: %d", dsp.Offset)
}
if dspPrev.Interval%dsp.Interval != 0 {
return nil, fmt.Errorf("downsamping intervals must be multiples; prev: %d, current: %d", dspPrev.Interval, dsp.Interval)
}
dspPrev = dsp
}
return dsps, nil
}
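The following standalone sketch (the `period` type and the sample ages are illustrative, not part of the diff) mirrors how `getDownsamplingPeriod` picks the dedup interval for a sample: periods are ordered by offset, largest first, and the first period whose offset the sample's age reaches wins. With something like `-downsampling.period=30d:1m,90d:10m`, raw samples are kept for the first 30 days, one sample per minute between 30 and 90 days, and one sample per 10 minutes beyond 90 days.

```go
package main

import (
	"fmt"
	"time"
)

type period struct {
	offset   time.Duration // samples older than this ...
	interval time.Duration // ... keep one sample per this interval
}

// intervalForAge assumes periods are sorted by offset in descending order,
// as parseDownsamplingPeriods guarantees.
func intervalForAge(age time.Duration, periods []period) time.Duration {
	for _, p := range periods {
		if age >= p.offset {
			return p.interval
		}
	}
	return 0 // no downsampling for fresh samples
}

func main() {
	// Roughly equivalent to -downsampling.period=30d:1m,90d:10m
	periods := []period{
		{offset: 90 * 24 * time.Hour, interval: 10 * time.Minute},
		{offset: 30 * 24 * time.Hour, interval: time.Minute},
	}
	for _, age := range []time.Duration{24 * time.Hour, 45 * 24 * time.Hour, 120 * 24 * time.Hour} {
		fmt.Printf("age=%v -> dedup interval %v\n", age, intervalForAge(age, periods))
	}
}
```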

View File

@@ -0,0 +1,62 @@
package storage
import (
"strings"
"testing"
)
func TestParseDownsamplingPeriodsFailure(t *testing.T) {
f := func(name string, src []string) {
t.Helper()
t.Run(name, func(t *testing.T) {
if _, err := parseDownsamplingPeriods(src); err == nil {
t.Fatalf("want fail for input: %s", strings.Join(src, ","))
}
})
}
f("empty duration", []string{"15d"})
f("empty interval", []string{":1m"})
f("incorrect duration decrease", []string{"30d:15h", "60d:1h"})
f("duplicate offset", []string{"30d:15h", "30d:1h"})
f("duplicate interval", []string{"60d:1h", "30d:1h"})
f("not multiple intervals", []string{"90d:12h", "60:9h", "30d:7h"})
}
func TestParseDownsamplingPeriodsSuccess(t *testing.T) {
f := func(name string, src []string, expected []DownsamplingPeriod) {
t.Helper()
t.Run(name, func(t *testing.T) {
dsps, err := parseDownsamplingPeriods(src)
if err != nil {
t.Fatalf("cannot parse downsampling configuration for: %s, err: %s", strings.Join(src, ","), err)
}
assertDownsamplingPeriods(t, expected, dsps)
})
}
f("one period", []string{"30d:1m"}, []DownsamplingPeriod{
{Offset: 30 * 24 * 3600 * 1000, Interval: 60 * 1000},
})
f("three periods", []string{"15d:30s", "30d:1m", "60d:15m"}, []DownsamplingPeriod{
{Offset: 60 * 24 * 3600 * 1000, Interval: 15 * 60 * 1000},
{Offset: 30 * 24 * 3600 * 1000, Interval: 60 * 1000},
{Offset: 15 * 24 * 3600 * 1000, Interval: 30 * 1000},
})
f("with the same divider periods", []string{"15d:1m", "30d:7m", "60d:14m", "90d:28m"}, []DownsamplingPeriod{
{Offset: 90 * 24 * 3600 * 1000, Interval: 28 * 60 * 1000},
{Offset: 60 * 24 * 3600 * 1000, Interval: 14 * 60 * 1000},
{Offset: 30 * 24 * 3600 * 1000, Interval: 7 * 60 * 1000},
{Offset: 15 * 24 * 3600 * 1000, Interval: 60 * 1000},
})
}
func assertDownsamplingPeriods(t *testing.T, want, got []DownsamplingPeriod) {
t.Helper()
if len(want) != len(got) {
t.Fatalf("len mismatch, want: %d, got: %d", len(want), len(got))
}
for i := 0; i < len(want); i++ {
if want[i] != got[i] {
t.Fatalf("want period: %s, got period: %s, idx: %d", want[i], got[i], i)
}
}
}

View File

@@ -404,6 +404,12 @@ func (mn *MetricName) String() string {
return fmt.Sprintf("%s{%s}", mnCopy.MetricGroup, tagsStr)
}
// SortAndMarshal sorts mn tags and then marshals them to dst.
func (mn *MetricName) SortAndMarshal(dst []byte) []byte {
mn.sortTags()
return mn.Marshal(dst)
}
// Marshal appends marshaled mn to dst and returns the result.
//
// mn.sortTags must be called before calling this function

View File

@@ -1344,7 +1344,7 @@ func (pt *partition) runFinalDedup(stopCh <-chan struct{}) error {
}
func (pt *partition) isFinalDedupNeeded() bool {
dedupInterval := GetDedupInterval()
dedupInterval := GetDedupInterval(pt.tr.MaxTimestamp)
pws := pt.GetParts(nil, false)
minDedupInterval := getMinDedupInterval(pws)
@@ -1577,7 +1577,7 @@ func (pt *partition) mergePartsInternal(dstPartPath string, bsw *blockStreamWrit
return nil, fmt.Errorf("cannot merge %d parts to %s: %w", len(bsrs), dstPartPath, err)
}
if dstPartPath != "" {
ph.MinDedupInterval = GetDedupInterval()
ph.MinDedupInterval = GetDedupInterval(ph.MaxTimestamp)
ph.MustWriteMetadata(dstPartPath)
}
return &ph, nil

View File

@@ -1,5 +1,15 @@
# Breaking Changes
## v1.8.0
### New errors from `NewManagedIdentityCredential` in some environments
`NewManagedIdentityCredential` now returns an error when `ManagedIdentityCredentialOptions.ID` is set in a hosting environment whose managed identity API doesn't support user-assigned identities. `ManagedIdentityCredential.GetToken()` formerly logged a warning in these cases. Returning an error instead prevents the credential authenticating an unexpected identity. The affected hosting environments are:
* Azure Arc
* Azure ML (when a resource or object ID is specified; client IDs are supported)
* Cloud Shell
* Service Fabric
## v1.6.0
### Behavioral change to `DefaultAzureCredential` in IMDS managed identity scenarios

View File

@@ -1,5 +1,19 @@
# Release History
## 1.8.1 (2025-01-15)
### Bugs Fixed
* User credential types inconsistently log access token scopes
* `DefaultAzureCredential` skips managed identity in Azure Container Instances
* Credentials having optional tenant IDs such as `AzureCLICredential` and
`InteractiveBrowserCredential` require setting `AdditionallyAllowedTenants`
when used with some clients
### Other Changes
* `ChainedTokenCredential` and `DefaultAzureCredential` continue to their next
credential after `ManagedIdentityCredential` receives an unexpected response
from IMDS, indicating the response is from something else such as a proxy
## 1.8.0 (2024-10-08)
### Other Changes

View File

@@ -54,17 +54,7 @@ The `azidentity` module focuses on OAuth authentication with Microsoft Entra ID.
### DefaultAzureCredential
`DefaultAzureCredential` simplifies authentication while developing applications that deploy to Azure by combining credentials used in Azure hosting environments and credentials used in local development. In production, it's better to use a specific credential type so authentication is more predictable and easier to debug. `DefaultAzureCredential` attempts to authenticate via the following mechanisms in this order, stopping when one succeeds:
![DefaultAzureCredential authentication flow](img/mermaidjs/DefaultAzureCredentialAuthFlow.svg)
1. **Environment** - `DefaultAzureCredential` will read account information specified via [environment variables](#environment-variables) and use it to authenticate.
1. **Workload Identity** - If the app is deployed on Kubernetes with environment variables set by the workload identity webhook, `DefaultAzureCredential` will authenticate the configured identity.
1. **Managed Identity** - If the app is deployed to an Azure host with managed identity enabled, `DefaultAzureCredential` will authenticate with it.
1. **Azure CLI** - If a user or service principal has authenticated via the Azure CLI `az login` command, `DefaultAzureCredential` will authenticate that identity.
1. **Azure Developer CLI** - If the developer has authenticated via the Azure Developer CLI `azd auth login` command, the `DefaultAzureCredential` will authenticate with that account.
> Note: `DefaultAzureCredential` is intended to simplify getting started with the SDK by handling common scenarios with reasonable default behaviors. Developers who want more control or whose scenario isn't served by the default settings should use other credential types.
`DefaultAzureCredential` simplifies authentication while developing apps that deploy to Azure by combining credentials used in Azure hosting environments with credentials used in local development. For more information, see [DefaultAzureCredential overview][dac_overview].
## Managed Identity
@@ -128,10 +118,10 @@ client := armresources.NewResourceGroupsClient("subscription ID", chain, nil)
### Credential chains
|Credential|Usage
|-|-
|[DefaultAzureCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DefaultAzureCredential)|Simplified authentication experience for getting started developing Azure apps
|[ChainedTokenCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ChainedTokenCredential)|Define custom authentication flows, composing multiple credentials
|Credential|Usage|Reference
|-|-|-
|[DefaultAzureCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DefaultAzureCredential)|Simplified authentication experience for getting started developing Azure apps|[DefaultAzureCredential overview][dac_overview]|
|[ChainedTokenCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ChainedTokenCredential)|Define custom authentication flows, composing multiple credentials|[ChainedTokenCredential overview][ctc_overview]|
### Authenticating Azure-Hosted Applications
@@ -260,4 +250,8 @@ For more information, see the
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any
additional questions or comments.
<!-- LINKS -->
[ctc_overview]: https://aka.ms/azsdk/go/identity/credential-chains#chainedtokencredential-overview
[dac_overview]: https://aka.ms/azsdk/go/identity/credential-chains#defaultazurecredential-overview
![Impressions](https://azure-sdk-impressions.azurewebsites.net/api/impressions/azure-sdk-for-go%2Fsdk%2Fazidentity%2FREADME.png)

View File

@@ -22,13 +22,13 @@ Some credential types support opt-in persistent token caching (see [the below ta
Persistent caches are encrypted at rest using a mechanism that depends on the operating system:
| Operating system | Encryption facility |
|------------------|---------------------------------------|
| Linux | kernel key retention service (keyctl) |
| macOS | Keychain |
| Windows | Data Protection API (DPAPI) |
| Operating system | Encryption facility |
| ---------------- | ---------------------------------------------- |
| Linux | kernel key retention service (keyctl) |
| macOS | Keychain (requires cgo and native build tools) |
| Windows | Data Protection API (DPAPI) |
Persistent caching requires encryption. When the required encryption facility is unusable, or the application is running on an unsupported OS, the persistent cache constructor returns an error. This doesn't mean that authentication is impossible, only that credentials can't persist authentication data and the application will need to reauthenticate the next time it runs. See the [package documentation][example] for example code showing how to configure persistent caching and access cached data.
Persistent caching requires encryption. When the required encryption facility is unusable, or the application is running on an unsupported OS, the persistent cache constructor returns an error. This doesn't mean that authentication is impossible, only that credentials can't persist authentication data and the application will need to reauthenticate the next time it runs. See the package documentation for examples showing how to configure persistent caching and access cached data for [users][user_example] and [service principals][sp_example].
### Credentials supporting token caching
@@ -37,7 +37,7 @@ The following table indicates the state of in-memory and persistent caching in e
**Note:** in-memory caching is enabled by default for every type supporting it. Persistent token caching must be enabled explicitly. See the [package documentation][user_example] for an example showing how to do this for credential types authenticating users. For types that authenticate service principals, set the `Cache` field on the constructor's options as shown in [this example][sp_example].
| Credential | In-memory token caching | Persistent token caching |
|--------------------------------|---------------------------------------------------------------------|--------------------------|
| ------------------------------ | ------------------------------------------------------------------- | ------------------------ |
| `AzureCLICredential` | Not Supported | Not Supported |
| `AzureDeveloperCLICredential` | Not Supported | Not Supported |
| `AzurePipelinesCredential` | Supported | Supported |

View File

@@ -8,6 +8,7 @@ This troubleshooting guide covers failure investigation techniques, common error
- [Permission issues](#permission-issues)
- [Find relevant information in errors](#find-relevant-information-in-errors)
- [Enable and configure logging](#enable-and-configure-logging)
- [Troubleshoot persistent token caching issues](#troubleshoot-persistent-token-caching-issues)
- [Troubleshoot AzureCLICredential authentication issues](#troubleshoot-azureclicredential-authentication-issues)
- [Troubleshoot AzureDeveloperCLICredential authentication issues](#troubleshoot-azuredeveloperclicredential-authentication-issues)
- [Troubleshoot AzurePipelinesCredential authentication issues](#troubleshoot-azurepipelinescredential-authentication-issues)
@@ -236,6 +237,29 @@ azd auth token --output json --scope https://management.core.windows.net/.defaul
| No service connection found with identifier |The `serviceConnectionID` argument to `NewAzurePipelinesCredential` is incorrect| Verify the service connection ID. This parameter refers to the `resourceId` of the Azure Service Connection. It can also be found in the query string of the service connection's configuration in Azure DevOps. [Azure Pipelines documentation](https://learn.microsoft.com/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml) has more information about service connections.|
|401 (Unauthorized) response from OIDC endpoint|The `systemAccessToken` argument to `NewAzurePipelinesCredential` is incorrect|Check pipeline configuration. This value comes from the predefined variable `System.AccessToken` [as described in Azure Pipelines documentation](https://learn.microsoft.com/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml#systemaccesstoken).|
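For reference, the constructor takes the service connection ID and the pipeline's `System.AccessToken` value. A sketch with placeholder IDs, assuming the access token has been mapped into the step's environment as `SYSTEM_ACCESSTOKEN`:

```go
package main

import (
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

func main() {
	// Tenant, client and service connection IDs are placeholders; SYSTEM_ACCESSTOKEN is assumed
	// to be mapped from the predefined System.AccessToken pipeline variable.
	cred, err := azidentity.NewAzurePipelinesCredential(
		"<tenant ID>",
		"<client ID>",
		"<service connection ID>",
		os.Getenv("SYSTEM_ACCESSTOKEN"),
		nil,
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = cred
}
```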
## Troubleshoot persistent token caching issues
### macOS
[azidentity/cache](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache) encrypts persistent caches with the system Keychain on macOS. You may see build and runtime errors on that platform because calling the Keychain API requires cgo, and because macOS prohibits Keychain access in some scenarios.
#### Build errors
Build errors about undefined `accessor` symbols indicate that cgo wasn't enabled. For example:
```
$ GOOS=darwin go build
# github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache
../../go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/azidentity/cache@v0.3.0/darwin.go:18:19: undefined: accessor.New
../../go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/azidentity/cache@v0.3.0/darwin.go:18:38: undefined: accessor.WithAccount
```
Try `go build` again with `CGO_ENABLED=1`. You may need to install native build tools.
#### Runtime errors
macOS prohibits Keychain access from environments without a GUI such as SSH sessions. If your application calls the persistent cache constructor ([cache.New](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache#New)) from an SSH session on a macOS host, you'll see an error like
`persistent storage isn't available due to error "User interaction is not allowed. (-25308)"`. This doesn't mean authentication is impossible, only that credentials can't persist data and the application must reauthenticate the next time it runs.
## Get additional help
Additional information on ways to reach out for support can be found in [SUPPORT.md](https://github.com/Azure/azure-sdk-for-go/blob/main/SUPPORT.md).

View File

@@ -42,6 +42,8 @@ const (
developerSignOnClientID = "04b07795-8ddb-461a-bbee-02f9e1bf7b46"
defaultSuffix = "/.default"
scopeLogFmt = "%s.GetToken() acquired a token for scope %q"
traceNamespace = "Microsoft.Entra"
traceOpGetToken = "GetToken"
traceOpAuthenticate = "Authenticate"
@@ -103,7 +105,16 @@ func resolveAdditionalTenants(tenants []string) []string {
return cp
}
// resolveTenant returns the correct tenant for a token request
// resolveTenant returns the correct tenant for a token request, or "" when the calling credential doesn't
// have an explicitly configured tenant and the caller didn't specify a tenant for the token request.
//
// - defaultTenant: tenant set when constructing the credential, if any. "" is valid for credentials
// having an optional or implicit tenant such as dev tool and interactive user credentials. Those
// default to the tool's configured tenant or the user's home tenant, respectively.
// - specified: tenant specified for this token request i.e., TokenRequestOptions.TenantID. May be "".
// - credName: name of the calling credential type; for error messages
// - additionalTenants: optional allow list of tenants the credential may acquire tokens from in
// addition to defaultTenant i.e., the credential's AdditionallyAllowedTenants option
func resolveTenant(defaultTenant, specified, credName string, additionalTenants []string) (string, error) {
if specified == "" || specified == defaultTenant {
return defaultTenant, nil
@@ -119,6 +130,17 @@ func resolveTenant(defaultTenant, specified, credName string, additionalTenants
return specified, nil
}
}
if len(additionalTenants) == 0 {
switch defaultTenant {
case "", organizationsTenantID:
// The application didn't specify a tenant or allow list when constructing the credential. Allow the
// tenant specified for this token request because we have nothing to compare it to (i.e., it vacuously
// satisfies the credential's configuration); don't know whether the application is multitenant; and
// don't want to return an error in the common case that the specified tenant matches the credential's
// default tenant determined elsewhere e.g., in some dev tool's configuration.
return specified, nil
}
}
return "", fmt.Errorf(`%s isn't configured to acquire tokens for tenant %q. To enable acquiring tokens for this tenant add it to the AdditionallyAllowedTenants on the credential options, or add "*" to allow acquiring tokens for any tenant`, credName, specified)
}
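In practice the new fall-through means a tenant specified per request is honored when the credential has no explicitly configured tenant or allow list. An illustrative sketch of the behavior described above, using AzureCLICredential with a placeholder tenant:

```go
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

func main() {
	// No TenantID configured on the credential...
	cred, err := azidentity.NewAzureCLICredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	// ...so a tenant specified only for this token request is accepted rather than rejected.
	_, err = cred.GetToken(context.TODO(), policy.TokenRequestOptions{
		Scopes:   []string{"https://management.azure.com/.default"},
		TenantID: "<tenant ID>", // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}
}
```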

View File

@@ -30,9 +30,9 @@ type azTokenProvider func(ctx context.Context, scopes []string, tenant, subscrip
// AzureCLICredentialOptions contains optional parameters for AzureCLICredential.
type AzureCLICredentialOptions struct {
// AdditionallyAllowedTenants specifies tenants for which the credential may acquire tokens, in addition
// to TenantID. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant the
// logged in account can access.
// AdditionallyAllowedTenants specifies tenants to which the credential may authenticate, in addition to
// TenantID. When TenantID is empty, this option has no effect and the credential will authenticate to
// any requested tenant. Add the wildcard value "*" to allow the credential to authenticate to any tenant.
AdditionallyAllowedTenants []string
// Subscription is the name or ID of a subscription. Set this to acquire tokens for an account other

View File

@@ -30,9 +30,9 @@ type azdTokenProvider func(ctx context.Context, scopes []string, tenant string)
// AzureDeveloperCLICredentialOptions contains optional parameters for AzureDeveloperCLICredential.
type AzureDeveloperCLICredentialOptions struct {
// AdditionallyAllowedTenants specifies tenants for which the credential may acquire tokens, in addition
// to TenantID. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant the
// logged in account can access.
// AdditionallyAllowedTenants specifies tenants to which the credential may authenticate, in addition to
// TenantID. When TenantID is empty, this option has no effect and the credential will authenticate to
// any requested tenant. Add the wildcard value "*" to allow the credential to authenticate to any tenant.
AdditionallyAllowedTenants []string
// TenantID identifies the tenant the credential should authenticate in. Defaults to the azd environment,

View File

@@ -27,7 +27,10 @@ type ChainedTokenCredentialOptions struct {
}
// ChainedTokenCredential links together multiple credentials and tries them sequentially when authenticating. By default,
// it tries all the credentials until one authenticates, after which it always uses that credential.
// it tries all the credentials until one authenticates, after which it always uses that credential. For more information,
// see [ChainedTokenCredential overview].
//
// [ChainedTokenCredential overview]: https://aka.ms/azsdk/go/identity/credential-chains#chainedtokencredential-overview
type ChainedTokenCredential struct {
cond *sync.Cond
iterating bool
@@ -46,6 +49,9 @@ func NewChainedTokenCredential(sources []azcore.TokenCredential, options *Chaine
if source == nil { // cannot have a nil credential in the chain or else the application will panic when GetToken() is called on nil
return nil, errors.New("sources cannot contain nil")
}
if mc, ok := source.(*ManagedIdentityCredential); ok {
mc.mic.chained = true
}
}
cp := make([]azcore.TokenCredential, len(sources))
copy(cp, sources)

View File

@@ -26,27 +26,16 @@ extends:
parameters:
CloudConfig:
Public:
ServiceConnection: azure-sdk-tests
SubscriptionConfigurationFilePaths:
- eng/common/TestResources/sub-config/AzurePublicMsft.json
SubscriptionConfigurations:
- $(sub-config-azure-cloud-test-resources)
- $(sub-config-identity-test-resources)
EnableRaceDetector: true
Location: westus2
RunLiveTests: true
ServiceDirectory: azidentity
UsePipelineProxy: false
${{ if endsWith(variables['Build.DefinitionName'], 'weekly') }}:
PreSteps:
- task: AzureCLI@2
displayName: Set OIDC token
inputs:
addSpnToEnvironment: true
azureSubscription: azure-sdk-tests
inlineScript: Write-Host "##vso[task.setvariable variable=OIDC_TOKEN;]$($env:idToken)"
scriptLocation: inlineScript
scriptType: pscore
PersistOidcToken: true
MatrixConfigs:
- Name: managed_identity_matrix
GenerateVMJobs: true

View File

@@ -115,7 +115,7 @@ func (c *confidentialClient) GetToken(ctx context.Context, tro policy.TokenReque
err = newAuthenticationFailedErrorFromMSAL(c.name, err)
}
} else {
msg := fmt.Sprintf("%s.GetToken() acquired a token for scope %q", c.name, strings.Join(ar.GrantedScopes, ", "))
msg := fmt.Sprintf(scopeLogFmt, c.name, strings.Join(ar.GrantedScopes, ", "))
log.Write(EventAuthentication, msg)
}
return azcore.AccessToken{Token: ar.AccessToken, ExpiresOn: ar.ExpiresOn.UTC()}, err

View File

@@ -23,15 +23,19 @@ type DefaultAzureCredentialOptions struct {
// to credential types that authenticate via external tools such as the Azure CLI.
azcore.ClientOptions
// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens. Add
// the wildcard value "*" to allow the credential to acquire tokens for any tenant. This value can also be
// set as a semicolon delimited list of tenants in the environment variable AZURE_ADDITIONALLY_ALLOWED_TENANTS.
// AdditionallyAllowedTenants specifies tenants to which the credential may authenticate, in addition to
// TenantID. When TenantID is empty, this option has no effect and the credential will authenticate to
// any requested tenant. Add the wildcard value "*" to allow the credential to authenticate to any tenant.
// This value can also be set as a semicolon delimited list of tenants in the environment variable
// AZURE_ADDITIONALLY_ALLOWED_TENANTS.
AdditionallyAllowedTenants []string
// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
// the application responsible for ensuring the configured authority is valid and trustworthy.
DisableInstanceDiscovery bool
// TenantID sets the default tenant for authentication via the Azure CLI and workload identity.
TenantID string
}
@@ -39,7 +43,7 @@ type DefaultAzureCredentialOptions struct {
// DefaultAzureCredential simplifies authentication while developing applications that deploy to Azure by
// combining credentials used in Azure hosting environments and credentials used in local development. In
// production, it's better to use a specific credential type so authentication is more predictable and easier
// to debug.
// to debug. For more information, see [DefaultAzureCredential overview].
//
// DefaultAzureCredential attempts to authenticate with each of these credential types, in the following order,
// stopping when one provides a token:
@@ -55,6 +59,8 @@ type DefaultAzureCredentialOptions struct {
// Consult the documentation for these credential types for more information on how they authenticate.
// Once a credential has successfully authenticated, DefaultAzureCredential will use that credential for
// every subsequent authentication.
//
// [DefaultAzureCredential overview]: https://aka.ms/azsdk/go/identity/credential-chains#defaultazurecredential-overview
type DefaultAzureCredential struct {
chain *ChainedTokenCredential
}

View File

@@ -21,8 +21,9 @@ const credNameDeviceCode = "DeviceCodeCredential"
type DeviceCodeCredentialOptions struct {
azcore.ClientOptions
// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire
// tokens. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant.
// AdditionallyAllowedTenants specifies tenants to which the credential may authenticate, in addition to
// TenantID. When TenantID is empty, this option has no effect and the credential will authenticate to
// any requested tenant. Add the wildcard value "*" to allow the credential to authenticate to any tenant.
AdditionallyAllowedTenants []string
// AuthenticationRecord returned by a call to a credential's Authenticate method. Set this option

View File

@@ -20,8 +20,9 @@ const credNameBrowser = "InteractiveBrowserCredential"
type InteractiveBrowserCredentialOptions struct {
azcore.ClientOptions
// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire
// tokens. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant.
// AdditionallyAllowedTenants specifies tenants to which the credential may authenticate, in addition to
// TenantID. When TenantID is empty, this option has no effect and the credential will authenticate to
// any requested tenant. Add the wildcard value "*" to allow the credential to authenticate to any tenant.
AdditionallyAllowedTenants []string
// AuthenticationRecord returned by a call to a credential's Authenticate method. Set this option

View File

@@ -65,6 +65,9 @@ type managedIdentityClient struct {
id ManagedIDKind
msiType msiType
probeIMDS bool
// chained indicates whether the client is part of a credential chain. If true, the client will return
// a credentialUnavailableError instead of an AuthenticationFailedError for an unexpected IMDS response.
chained bool
}
// arcKeyDirectory returns the directory expected to contain Azure Arc keys
@@ -144,7 +147,7 @@ func newManagedIdentityClient(options *ManagedIdentityCredentialOptions) (*manag
if _, ok := os.LookupEnv(identityHeader); ok {
if _, ok := os.LookupEnv(identityServerThumbprint); ok {
if options.ID != nil {
return nil, errors.New("the Service Fabric API doesn't support specifying a user-assigned managed identity at runtime")
return nil, errors.New("the Service Fabric API doesn't support specifying a user-assigned identity at runtime. The identity is determined by cluster resource configuration. See https://aka.ms/servicefabricmi")
}
env = "Service Fabric"
c.endpoint = endpoint
@@ -215,6 +218,7 @@ func (c *managedIdentityClient) authenticate(ctx context.Context, id ManagedIDKi
// no need to synchronize around this value because it's true only when DefaultAzureCredential constructed the client,
// and in that case ChainedTokenCredential.GetToken synchronizes goroutines that would execute this block
if c.probeIMDS {
// send a malformed request (no Metadata header) to IMDS to determine whether the endpoint is available
cx, cancel := context.WithTimeout(ctx, imdsProbeTimeout)
defer cancel()
cx = policy.WithRetryOptions(cx, policy.RetryOptions{MaxRetries: -1})
@@ -222,24 +226,14 @@ func (c *managedIdentityClient) authenticate(ctx context.Context, id ManagedIDKi
if err != nil {
return azcore.AccessToken{}, fmt.Errorf("failed to create IMDS probe request: %s", err)
}
res, err := c.azClient.Pipeline().Do(req)
if err != nil {
if _, err = c.azClient.Pipeline().Do(req); err != nil {
msg := err.Error()
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
msg = "managed identity timed out. See https://aka.ms/azsdk/go/identity/troubleshoot#dac for more information"
}
return azcore.AccessToken{}, newCredentialUnavailableError(credNameManagedIdentity, msg)
}
// because IMDS always responds with JSON, assume a non-JSON response is from something else, such
// as a proxy, and return credentialUnavailableError so DefaultAzureCredential continues iterating
b, err := azruntime.Payload(res)
if err != nil {
return azcore.AccessToken{}, newCredentialUnavailableError(credNameManagedIdentity, fmt.Sprintf("failed to read IMDS probe response: %s", err))
}
if !json.Valid(b) {
return azcore.AccessToken{}, newCredentialUnavailableError(credNameManagedIdentity, "unexpected response to IMDS probe")
}
// send normal token requests from now on because IMDS responded
// send normal token requests from now on because something responded
c.probeIMDS = false
}
@@ -254,13 +248,21 @@ func (c *managedIdentityClient) authenticate(ctx context.Context, id ManagedIDKi
}
if azruntime.HasStatusCode(resp, http.StatusOK, http.StatusCreated) {
return c.createAccessToken(resp)
tk, err := c.createAccessToken(resp)
if err != nil && c.chained && c.msiType == msiTypeIMDS {
// failure to unmarshal a 2xx implies the response is from something other than IMDS such as a proxy listening at
// the same address. Return a credentialUnavailableError so credential chains continue to their next credential
err = newCredentialUnavailableError(credNameManagedIdentity, err.Error())
}
return tk, err
}
if c.msiType == msiTypeIMDS {
switch resp.StatusCode {
case http.StatusBadRequest:
if id != nil {
// return authenticationFailedError, halting any encompassing credential chain,
// because the explicit user-assigned identity implies the developer expected this to work
return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, "the requested identity isn't assigned to this resource", resp)
}
msg := "failed to authenticate a system assigned identity"
@@ -276,6 +278,13 @@ func (c *managedIdentityClient) authenticate(ctx context.Context, id ManagedIDKi
return azcore.AccessToken{}, newCredentialUnavailableError(credNameManagedIdentity, fmt.Sprintf("unexpected response %q", string(body)))
}
}
if c.chained {
// the response may be from something other than IMDS, for example a proxy returning
// 404. Return credentialUnavailableError so credential chains continue to their
// next credential, include the response in the error message to help debugging
err = newAuthenticationFailedError(credNameManagedIdentity, "", resp)
return azcore.AccessToken{}, newCredentialUnavailableError(credNameManagedIdentity, err.Error())
}
}
return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, "", resp)
@@ -290,7 +299,7 @@ func (c *managedIdentityClient) createAccessToken(res *http.Response) (azcore.Ac
ExpiresOn interface{} `json:"expires_on,omitempty"` // the value returned in this field varies between a number and a date string
}{}
if err := azruntime.UnmarshalAsJSON(res, &value); err != nil {
return azcore.AccessToken{}, fmt.Errorf("internal AccessToken: %v", err)
return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, "Unexpected response content", res)
}
if value.ExpiresIn != "" {
expiresIn, err := json.Number(value.ExpiresIn).Int64()

View File

@@ -154,12 +154,7 @@ func (p *publicClient) GetToken(ctx context.Context, tro policy.TokenRequestOpti
if p.opts.DisableAutomaticAuthentication {
return azcore.AccessToken{}, newAuthenticationRequiredError(p.name, tro)
}
at, err := p.reqToken(ctx, client, tro)
if err == nil {
msg := fmt.Sprintf("%s.GetToken() acquired a token for scope %q", p.name, strings.Join(ar.GrantedScopes, ", "))
log.Write(EventAuthentication, msg)
}
return at, err
return p.reqToken(ctx, client, tro)
}
// reqToken requests a token from the MSAL public client. It's separate from GetToken() to enable Authenticate() to bypass the cache.
@@ -242,6 +237,8 @@ func (p *publicClient) newMSALClient(enableCAE bool) (msalPublicClient, error) {
func (p *publicClient) token(ar public.AuthResult, err error) (azcore.AccessToken, error) {
if err == nil {
msg := fmt.Sprintf(scopeLogFmt, p.name, strings.Join(ar.GrantedScopes, ", "))
log.Write(EventAuthentication, msg)
p.record, err = newAuthenticationRecord(ar)
} else {
err = newAuthenticationFailedErrorFromMSAL(p.name, err)

View File

@@ -7,6 +7,10 @@ param (
[hashtable] $AdditionalParameters = @{},
[hashtable] $DeploymentOutputs,
[Parameter(Mandatory = $true)]
[ValidateNotNullOrEmpty()]
[string] $SubscriptionId,
[Parameter(ParameterSetName = 'Provisioner', Mandatory = $true)]
[ValidateNotNullOrEmpty()]
[string] $TenantId,
@@ -15,6 +19,10 @@ param (
[ValidatePattern('^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$')]
[string] $TestApplicationId,
[Parameter(Mandatory = $true)]
[ValidateNotNullOrEmpty()]
[string] $Environment,
# Captures any arguments from eng/New-TestResources.ps1 not declared here (no parameter errors).
[Parameter(ValueFromRemainingArguments = $true)]
$RemainingArguments
@@ -28,8 +36,9 @@ if ($CI) {
Write-Host "Skipping post-provisioning script because resources weren't deployed"
return
}
az login --federated-token $env:OIDC_TOKEN --service-principal -t $TenantId -u $TestApplicationId
az account set --subscription $DeploymentOutputs['AZIDENTITY_SUBSCRIPTION_ID']
az cloud set -n $Environment
az login --federated-token $env:ARM_OIDC_TOKEN --service-principal -t $TenantId -u $TestApplicationId
az account set --subscription $SubscriptionId
}
Write-Host "Building container"
@@ -62,6 +71,9 @@ $aciName = "azidentity-test"
az container create -g $rg -n $aciName --image $image `
--acr-identity $($DeploymentOutputs['AZIDENTITY_USER_ASSIGNED_IDENTITY']) `
--assign-identity [system] $($DeploymentOutputs['AZIDENTITY_USER_ASSIGNED_IDENTITY']) `
--cpu 1 `
--memory 1.0 `
--os-type Linux `
--role "Storage Blob Data Reader" `
--scope $($DeploymentOutputs['AZIDENTITY_STORAGE_ID']) `
-e AZIDENTITY_STORAGE_NAME=$($DeploymentOutputs['AZIDENTITY_STORAGE_NAME']) `

View File

@@ -14,5 +14,5 @@ const (
module = "github.com/Azure/azure-sdk-for-go/sdk/" + component
// Version is the semantic version (see http://semver.org) of this module.
version = "v1.8.0"
version = "v1.8.1"
)

vendor/github.com/aws/aws-sdk-go-v2/aws/checksum.go (new generated vendored file, +33 lines)
View File

@@ -0,0 +1,33 @@
package aws
// RequestChecksumCalculation controls request checksum calculation workflow
type RequestChecksumCalculation int
const (
// RequestChecksumCalculationUnset is the unset value for RequestChecksumCalculation
RequestChecksumCalculationUnset RequestChecksumCalculation = iota
// RequestChecksumCalculationWhenSupported indicates request checksum will be calculated
// if the operation supports input checksums
RequestChecksumCalculationWhenSupported
// RequestChecksumCalculationWhenRequired indicates request checksum will be calculated
// if required by the operation or if user elects to set a checksum algorithm in request
RequestChecksumCalculationWhenRequired
)
// ResponseChecksumValidation controls response checksum validation workflow
type ResponseChecksumValidation int
const (
// ResponseChecksumValidationUnset is the unset value for ResponseChecksumValidation
ResponseChecksumValidationUnset ResponseChecksumValidation = iota
// ResponseChecksumValidationWhenSupported indicates response checksum will be validated
// if the operation supports output checksums
ResponseChecksumValidationWhenSupported
// ResponseChecksumValidationWhenRequired indicates response checksum will only
// be validated if the operation requires output checksum validation
ResponseChecksumValidationWhenRequired
)
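A rough sketch of setting these values directly on an `aws.Config` and passing it to a client (region is a placeholder; the S3 import is only illustrative):

```go
package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg := aws.Config{
		Region:                     "us-east-1", // placeholder
		RequestChecksumCalculation: aws.RequestChecksumCalculationWhenRequired,
		ResponseChecksumValidation: aws.ResponseChecksumValidationWhenRequired,
	}
	// Clients built from this config only calculate/validate checksums when required.
	_ = s3.NewFromConfig(cfg)
}
```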

View File

@@ -165,6 +165,33 @@ type Config struct {
// Controls how a resolved AWS account ID is handled for endpoint routing.
AccountIDEndpointMode AccountIDEndpointMode
// RequestChecksumCalculation determines when request checksum calculation is performed.
//
// There are two possible values for this setting:
//
// 1. RequestChecksumCalculationWhenSupported (default): The checksum is always calculated
// if the operation supports it, regardless of whether the user sets an algorithm in the request.
//
// 2. RequestChecksumCalculationWhenRequired: The checksum is only calculated if the user
// explicitly sets a checksum algorithm in the request.
//
// This setting is sourced from the environment variable AWS_REQUEST_CHECKSUM_CALCULATION
// or the shared config profile attribute "request_checksum_calculation".
RequestChecksumCalculation RequestChecksumCalculation
// ResponseChecksumValidation determines when response checksum validation is performed
//
// There are two possible values for this setting:
//
// 1. ResponseChecksumValidationWhenSupported (default): The checksum is always validated
// if the operation supports it, regardless of whether the user sets the validation mode to ENABLED in request.
//
// 2. ResponseChecksumValidationWhenRequired: The checksum is only validated if the user
// explicitly sets the validation mode to ENABLED in the request
// This variable is sourced from environment variable AWS_RESPONSE_CHECKSUM_VALIDATION or
// the shared config profile attribute "response_checksum_validation".
ResponseChecksumValidation ResponseChecksumValidation
}
// NewConfig returns a new Config pointer that can be chained with builder

View File

@@ -3,4 +3,4 @@
package aws
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.32.8"
const goModuleVersion = "1.33.0"

View File

@@ -76,19 +76,28 @@ type UserAgentFeature string
// Enumerates UserAgentFeature.
const (
UserAgentFeatureResourceModel UserAgentFeature = "A" // n/a (we don't generate separate resource types)
UserAgentFeatureWaiter = "B"
UserAgentFeaturePaginator = "C"
UserAgentFeatureRetryModeLegacy = "D" // n/a (equivalent to standard)
UserAgentFeatureRetryModeStandard = "E"
UserAgentFeatureRetryModeAdaptive = "F"
UserAgentFeatureS3Transfer = "G"
UserAgentFeatureS3CryptoV1N = "H" // n/a (crypto client is external)
UserAgentFeatureS3CryptoV2 = "I" // n/a
UserAgentFeatureS3ExpressBucket = "J"
UserAgentFeatureS3AccessGrants = "K" // not yet implemented
UserAgentFeatureGZIPRequestCompression = "L"
UserAgentFeatureProtocolRPCV2CBOR = "M"
UserAgentFeatureResourceModel UserAgentFeature = "A" // n/a (we don't generate separate resource types)
UserAgentFeatureWaiter = "B"
UserAgentFeaturePaginator = "C"
UserAgentFeatureRetryModeLegacy = "D" // n/a (equivalent to standard)
UserAgentFeatureRetryModeStandard = "E"
UserAgentFeatureRetryModeAdaptive = "F"
UserAgentFeatureS3Transfer = "G"
UserAgentFeatureS3CryptoV1N = "H" // n/a (crypto client is external)
UserAgentFeatureS3CryptoV2 = "I" // n/a
UserAgentFeatureS3ExpressBucket = "J"
UserAgentFeatureS3AccessGrants = "K" // not yet implemented
UserAgentFeatureGZIPRequestCompression = "L"
UserAgentFeatureProtocolRPCV2CBOR = "M"
UserAgentFeatureRequestChecksumCRC32 = "U"
UserAgentFeatureRequestChecksumCRC32C = "V"
UserAgentFeatureRequestChecksumCRC64 = "W"
UserAgentFeatureRequestChecksumSHA1 = "X"
UserAgentFeatureRequestChecksumSHA256 = "Y"
UserAgentFeatureRequestChecksumWhenSupported = "Z"
UserAgentFeatureRequestChecksumWhenRequired = "a"
UserAgentFeatureResponseChecksumWhenSupported = "b"
UserAgentFeatureResponseChecksumWhenRequired = "c"
)
// RequestUserAgent is a build middleware that set the User-Agent for the request.

View File

@@ -1,3 +1,12 @@
# v1.29.0 (2025-01-15)
* **Feature**: S3 client behavior is updated to always calculate a checksum by default for operations that support it (such as PutObject or UploadPart), or require it (such as DeleteObjects). The checksum algorithm used by default now becomes CRC32. Checksum behavior can be configured using `when_supported` and `when_required` options - in code using RequestChecksumCalculation, in shared config using request_checksum_calculation, or as env variable using AWS_REQUEST_CHECKSUM_CALCULATION. The S3 client attempts to validate response checksums for all S3 API operations that support checksums. However, if the SDK has not implemented the specified checksum algorithm then this validation is skipped. Checksum validation behavior can be configured using `when_supported` and `when_required` options - in code using ResponseChecksumValidation, in shared config using response_checksum_validation, or as env variable using AWS_RESPONSE_CHECKSUM_VALIDATION.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.28.11 (2025-01-14)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.28.10 (2025-01-10)
* **Dependency Update**: Updated to the latest SDK module versions

View File

@@ -83,6 +83,12 @@ var defaultAWSConfigResolvers = []awsConfigResolver{
// Sets the AccountIDEndpointMode if present in env var or shared config profile
resolveAccountIDEndpointMode,
// Sets the RequestChecksumCalculation if present in env var or shared config profile
resolveRequestChecksumCalculation,
// Sets the ResponseChecksumValidation if present in env var or shared config profile
resolveResponseChecksumValidation,
}
// A Config represents a generic configuration value or set of values. This type

View File

@@ -83,6 +83,9 @@ const (
awsAccountIDEnv = "AWS_ACCOUNT_ID"
awsAccountIDEndpointModeEnv = "AWS_ACCOUNT_ID_ENDPOINT_MODE"
awsRequestChecksumCalculation = "AWS_REQUEST_CHECKSUM_CALCULATION"
awsResponseChecksumValidation = "AWS_RESPONSE_CHECKSUM_VALIDATION"
)
var (
@@ -296,6 +299,12 @@ type EnvConfig struct {
// Indicates whether account ID will be required/ignored in endpoint2.0 routing
AccountIDEndpointMode aws.AccountIDEndpointMode
// Indicates whether request checksum should be calculated
RequestChecksumCalculation aws.RequestChecksumCalculation
// Indicates whether response checksum should be validated
ResponseChecksumValidation aws.ResponseChecksumValidation
}
// loadEnvConfig reads configuration values from the OS's environment variables.
@@ -400,6 +409,13 @@ func NewEnvConfig() (EnvConfig, error) {
return cfg, err
}
if err := setRequestChecksumCalculationFromEnvVal(&cfg.RequestChecksumCalculation, []string{awsRequestChecksumCalculation}); err != nil {
return cfg, err
}
if err := setResponseChecksumValidationFromEnvVal(&cfg.ResponseChecksumValidation, []string{awsResponseChecksumValidation}); err != nil {
return cfg, err
}
return cfg, nil
}
@@ -432,6 +448,14 @@ func (c EnvConfig) getAccountIDEndpointMode(context.Context) (aws.AccountIDEndpo
return c.AccountIDEndpointMode, len(c.AccountIDEndpointMode) > 0, nil
}
func (c EnvConfig) getRequestChecksumCalculation(context.Context) (aws.RequestChecksumCalculation, bool, error) {
return c.RequestChecksumCalculation, c.RequestChecksumCalculation > 0, nil
}
func (c EnvConfig) getResponseChecksumValidation(context.Context) (aws.ResponseChecksumValidation, bool, error) {
return c.ResponseChecksumValidation, c.ResponseChecksumValidation > 0, nil
}
// GetRetryMaxAttempts returns the value of AWS_MAX_ATTEMPTS if was specified,
// and not 0.
func (c EnvConfig) GetRetryMaxAttempts(ctx context.Context) (int, bool, error) {
@@ -528,6 +552,45 @@ func setAIDEndPointModeFromEnvVal(m *aws.AccountIDEndpointMode, keys []string) e
return nil
}
func setRequestChecksumCalculationFromEnvVal(m *aws.RequestChecksumCalculation, keys []string) error {
for _, k := range keys {
value := os.Getenv(k)
if len(value) == 0 {
continue
}
switch strings.ToLower(value) {
case checksumWhenSupported:
*m = aws.RequestChecksumCalculationWhenSupported
case checksumWhenRequired:
*m = aws.RequestChecksumCalculationWhenRequired
default:
return fmt.Errorf("invalid value for environment variable, %s=%s, must be when_supported/when_required", k, value)
}
}
return nil
}
func setResponseChecksumValidationFromEnvVal(m *aws.ResponseChecksumValidation, keys []string) error {
for _, k := range keys {
value := os.Getenv(k)
if len(value) == 0 {
continue
}
switch strings.ToLower(value) {
case checksumWhenSupported:
*m = aws.ResponseChecksumValidationWhenSupported
case checksumWhenRequired:
*m = aws.ResponseChecksumValidationWhenRequired
default:
return fmt.Errorf("invalid value for environment variable, %s=%s, must be when_supported/when_required", k, value)
}
}
return nil
}
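Equivalently, the same settings can come from the environment variables handled above; an illustrative sketch:

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/config"
)

func main() {
	// Accepted values are "when_supported" and "when_required" (case-insensitive).
	os.Setenv("AWS_REQUEST_CHECKSUM_CALCULATION", "when_required")
	os.Setenv("AWS_RESPONSE_CHECKSUM_VALIDATION", "when_required")

	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	_ = cfg
}
```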
// GetRegion returns the AWS Region if set in the environment. Returns an empty
// string if not set.
func (c EnvConfig) getRegion(ctx context.Context) (string, bool, error) {

View File

@@ -3,4 +3,4 @@
package config
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.28.10"
const goModuleVersion = "1.29.0"

View File

@@ -216,8 +216,15 @@ type LoadOptions struct {
// Whether S3 Express auth is disabled.
S3DisableExpressAuth *bool
// Whether account id should be built into endpoint resolution
AccountIDEndpointMode aws.AccountIDEndpointMode
// Specify if request checksum should be calculated
RequestChecksumCalculation aws.RequestChecksumCalculation
// Specifies if response checksum should be validated
ResponseChecksumValidation aws.ResponseChecksumValidation
// Service endpoint override. This value is not necessarily final and is
// passed to the service's EndpointResolverV2 for further delegation.
BaseEndpoint string
@@ -288,6 +295,14 @@ func (o LoadOptions) getAccountIDEndpointMode(ctx context.Context) (aws.AccountI
return o.AccountIDEndpointMode, len(o.AccountIDEndpointMode) > 0, nil
}
func (o LoadOptions) getRequestChecksumCalculation(ctx context.Context) (aws.RequestChecksumCalculation, bool, error) {
return o.RequestChecksumCalculation, o.RequestChecksumCalculation > 0, nil
}
func (o LoadOptions) getResponseChecksumValidation(ctx context.Context) (aws.ResponseChecksumValidation, bool, error) {
return o.ResponseChecksumValidation, o.ResponseChecksumValidation > 0, nil
}
func (o LoadOptions) getBaseEndpoint(context.Context) (string, bool, error) {
return o.BaseEndpoint, o.BaseEndpoint != "", nil
}
@@ -357,6 +372,26 @@ func WithAccountIDEndpointMode(m aws.AccountIDEndpointMode) LoadOptionsFunc {
}
}
// WithRequestChecksumCalculation is a helper function to construct functional options
// that sets RequestChecksumCalculation on config's LoadOptions
func WithRequestChecksumCalculation(c aws.RequestChecksumCalculation) LoadOptionsFunc {
return func(o *LoadOptions) error {
if c > 0 {
o.RequestChecksumCalculation = c
}
return nil
}
}
// WithResponseChecksumValidation is a helper function to construct functional options
// that sets ResponseChecksumValidation on config's LoadOptions
func WithResponseChecksumValidation(v aws.ResponseChecksumValidation) LoadOptionsFunc {
return func(o *LoadOptions) error {
o.ResponseChecksumValidation = v
return nil
}
}
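A sketch of opting in through these load options and passing the resolved config to an S3 client:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO(),
		config.WithRequestChecksumCalculation(aws.RequestChecksumCalculationWhenRequired),
		config.WithResponseChecksumValidation(aws.ResponseChecksumValidationWhenRequired),
	)
	if err != nil {
		log.Fatal(err)
	}
	// The S3 client inherits the checksum behavior from the resolved config.
	_ = s3.NewFromConfig(cfg)
}
```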
// getDefaultRegion returns DefaultRegion from config's LoadOptions
func (o LoadOptions) getDefaultRegion(ctx context.Context) (string, bool, error) {
if len(o.DefaultRegion) == 0 {

View File

@@ -242,6 +242,40 @@ func getAccountIDEndpointMode(ctx context.Context, configs configs) (value aws.A
return
}
// requestChecksumCalculationProvider provides access to the RequestChecksumCalculation
type requestChecksumCalculationProvider interface {
getRequestChecksumCalculation(context.Context) (aws.RequestChecksumCalculation, bool, error)
}
func getRequestChecksumCalculation(ctx context.Context, configs configs) (value aws.RequestChecksumCalculation, found bool, err error) {
for _, cfg := range configs {
if p, ok := cfg.(requestChecksumCalculationProvider); ok {
value, found, err = p.getRequestChecksumCalculation(ctx)
if err != nil || found {
break
}
}
}
return
}
// responseChecksumValidationProvider provides access to the ResponseChecksumValidation
type responseChecksumValidationProvider interface {
getResponseChecksumValidation(context.Context) (aws.ResponseChecksumValidation, bool, error)
}
func getResponseChecksumValidation(ctx context.Context, configs configs) (value aws.ResponseChecksumValidation, found bool, err error) {
for _, cfg := range configs {
if p, ok := cfg.(responseChecksumValidationProvider); ok {
value, found, err = p.getResponseChecksumValidation(ctx)
if err != nil || found {
break
}
}
}
return
}
// ec2IMDSRegionProvider provides access to the ec2 imds region
// configuration value
type ec2IMDSRegionProvider interface {

View File

@@ -182,6 +182,36 @@ func resolveAccountIDEndpointMode(ctx context.Context, cfg *aws.Config, configs
return nil
}
// resolveRequestChecksumCalculation extracts the RequestChecksumCalculation from the configs slice's
// SharedConfig or EnvConfig
func resolveRequestChecksumCalculation(ctx context.Context, cfg *aws.Config, configs configs) error {
c, found, err := getRequestChecksumCalculation(ctx, configs)
if err != nil {
return err
}
if !found {
c = aws.RequestChecksumCalculationWhenSupported
}
cfg.RequestChecksumCalculation = c
return nil
}
// resolveResponseValidation extracts the ResponseChecksumValidation from the configs slice's
// SharedConfig or EnvConfig
func resolveResponseChecksumValidation(ctx context.Context, cfg *aws.Config, configs configs) error {
c, found, err := getResponseChecksumValidation(ctx, configs)
if err != nil {
return err
}
if !found {
c = aws.ResponseChecksumValidationWhenSupported
}
cfg.ResponseChecksumValidation = c
return nil
}
// resolveDefaultRegion extracts the first instance of a default region and sets `aws.Config.Region` to the default
// region if region had not been resolved from other sources.
func resolveDefaultRegion(ctx context.Context, cfg *aws.Config, configs configs) error {

View File

@@ -118,6 +118,11 @@ const (
accountIDKey = "aws_account_id"
accountIDEndpointMode = "account_id_endpoint_mode"
requestChecksumCalculationKey = "request_checksum_calculation"
responseChecksumValidationKey = "response_checksum_validation"
checksumWhenSupported = "when_supported"
checksumWhenRequired = "when_required"
)
// defaultSharedConfigProfile allows for swapping the default profile for testing
@@ -346,6 +351,12 @@ type SharedConfig struct {
S3DisableExpressAuth *bool
AccountIDEndpointMode aws.AccountIDEndpointMode
// RequestChecksumCalculation indicates if the request checksum should be calculated
RequestChecksumCalculation aws.RequestChecksumCalculation
// ResponseChecksumValidation indicates if the response checksum should be validated
ResponseChecksumValidation aws.ResponseChecksumValidation
}
func (c SharedConfig) getDefaultsMode(ctx context.Context) (value aws.DefaultsMode, ok bool, err error) {
@@ -1133,6 +1144,13 @@ func (c *SharedConfig) setFromIniSection(profile string, section ini.Section) er
return fmt.Errorf("failed to load %s from shared config, %w", accountIDEndpointMode, err)
}
if err := updateRequestChecksumCalculation(&c.RequestChecksumCalculation, section, requestChecksumCalculationKey); err != nil {
return fmt.Errorf("failed to load %s from shared config, %w", requestChecksumCalculationKey, err)
}
if err := updateResponseChecksumValidation(&c.ResponseChecksumValidation, section, responseChecksumValidationKey); err != nil {
return fmt.Errorf("failed to load %s from shared config, %w", responseChecksumValidationKey, err)
}
// Shared Credentials
creds := aws.Credentials{
AccessKeyID: section.String(accessKeyIDKey),
@@ -1207,6 +1225,42 @@ func updateAIDEndpointMode(m *aws.AccountIDEndpointMode, sec ini.Section, key st
return nil
}
func updateRequestChecksumCalculation(m *aws.RequestChecksumCalculation, sec ini.Section, key string) error {
if !sec.Has(key) {
return nil
}
v := sec.String(key)
switch strings.ToLower(v) {
case checksumWhenSupported:
*m = aws.RequestChecksumCalculationWhenSupported
case checksumWhenRequired:
*m = aws.RequestChecksumCalculationWhenRequired
default:
return fmt.Errorf("invalid value for shared config profile field, %s=%s, must be when_supported/when_required", key, v)
}
return nil
}
func updateResponseChecksumValidation(m *aws.ResponseChecksumValidation, sec ini.Section, key string) error {
if !sec.Has(key) {
return nil
}
v := sec.String(key)
switch strings.ToLower(v) {
case checksumWhenSupported:
*m = aws.ResponseChecksumValidationWhenSupported
case checksumWhenRequired:
*m = aws.ResponseChecksumValidationWhenRequired
default:
return fmt.Errorf("invalid value for shared config profile field, %s=%s, must be when_supported/when_required", key, v)
}
return nil
}
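The same keys can also live in a shared config profile. A sketch, with the profile content shown as comments and the profile name as a placeholder:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
)

func main() {
	// ~/.aws/config (illustrative):
	//
	//   [profile checksums]
	//   request_checksum_calculation = when_required
	//   response_checksum_validation = when_required
	cfg, err := config.LoadDefaultConfig(context.TODO(),
		config.WithSharedConfigProfile("checksums"),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = cfg
}
```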
func (c SharedConfig) getRequestMinCompressSizeBytes(ctx context.Context) (int64, bool, error) {
if c.RequestMinCompressSizeBytes == nil {
return 0, false, nil
@@ -1225,6 +1279,14 @@ func (c SharedConfig) getAccountIDEndpointMode(ctx context.Context) (aws.Account
return c.AccountIDEndpointMode, len(c.AccountIDEndpointMode) > 0, nil
}
func (c SharedConfig) getRequestChecksumCalculation(ctx context.Context) (aws.RequestChecksumCalculation, bool, error) {
return c.RequestChecksumCalculation, c.RequestChecksumCalculation > 0, nil
}
func (c SharedConfig) getResponseChecksumValidation(ctx context.Context) (aws.ResponseChecksumValidation, bool, error) {
return c.ResponseChecksumValidation, c.ResponseChecksumValidation > 0, nil
}
func updateDefaultsMode(mode *aws.DefaultsMode, section ini.Section, key string) error {
if !section.Has(key) {
return nil

View File

@@ -1,3 +1,11 @@
# v1.17.53 (2025-01-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.17.52 (2025-01-14)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.17.51 (2025-01-10)
* **Dependency Update**: Updated to the latest SDK module versions

View File

@@ -3,4 +3,4 @@
package credentials
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.17.51"
const goModuleVersion = "1.17.53"

View File

@@ -1,3 +1,7 @@
# v1.16.24 (2025-01-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.23 (2025-01-09)
* **Dependency Update**: Updated to the latest SDK module versions

View File

@@ -3,4 +3,4 @@
package imds
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.16.23"
const goModuleVersion = "1.16.24"

View File

@@ -1,3 +1,15 @@
# v1.17.51 (2025-01-16)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.17.50 (2025-01-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.17.49 (2025-01-14)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.17.48 (2025-01-10)
* **Dependency Update**: Updated to the latest SDK module versions

View File

@@ -3,4 +3,4 @@
package manager
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.17.48"
const goModuleVersion = "1.17.51"

View File

@@ -584,6 +584,8 @@ func (a completedParts) Less(i, j int) bool {
// upload will perform a multipart upload using the firstBuf buffer containing
// the first chunk of data.
func (u *multiuploader) upload(firstBuf io.ReadSeeker, cleanup func()) (*UploadOutput, error) {
u.initChecksumAlgorithm()
var params s3.CreateMultipartUploadInput
awsutil.Copy(&params, u.in)
@@ -752,6 +754,25 @@ func (u *multiuploader) send(c chunk) error {
return nil
}
func (u *multiuploader) initChecksumAlgorithm() {
if u.in.ChecksumAlgorithm != "" {
return
}
switch {
case u.in.ChecksumCRC32 != nil:
u.in.ChecksumAlgorithm = types.ChecksumAlgorithmCrc32
case u.in.ChecksumCRC32C != nil:
u.in.ChecksumAlgorithm = types.ChecksumAlgorithmCrc32c
case u.in.ChecksumSHA1 != nil:
u.in.ChecksumAlgorithm = types.ChecksumAlgorithmSha1
case u.in.ChecksumSHA256 != nil:
u.in.ChecksumAlgorithm = types.ChecksumAlgorithmSha256
default:
u.in.ChecksumAlgorithm = types.ChecksumAlgorithmCrc32
}
}
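From the caller's side, an Upload without an explicit algorithm now defaults to CRC32, while setting one of the checksum input fields selects that algorithm. A sketch with placeholder bucket and key:

```go
package main

import (
	"context"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	uploader := manager.NewUploader(s3.NewFromConfig(cfg))
	_, err = uploader.Upload(context.TODO(), &s3.PutObjectInput{
		Bucket:            aws.String("my-bucket"), // placeholder
		Key:               aws.String("my-key"),    // placeholder
		Body:              strings.NewReader("hello"),
		ChecksumAlgorithm: types.ChecksumAlgorithmSha256, // omit to get the CRC32 default
	})
	if err != nil {
		log.Fatal(err)
	}
}
```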
// geterr is a thread-safe getter for the error object
func (u *multiuploader) geterr() error {
u.m.Lock()

View File

@@ -1,3 +1,7 @@
# v1.3.28 (2025-01-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.27 (2025-01-09)
* **Dependency Update**: Updated to the latest SDK module versions

View File

@@ -3,4 +3,4 @@
package configsources
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.3.27"
const goModuleVersion = "1.3.28"

View File

@@ -1,3 +1,7 @@
# v2.6.28 (2025-01-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v2.6.27 (2025-01-09)
* **Dependency Update**: Updated to the latest SDK module versions

View File

@@ -3,4 +3,4 @@
package endpoints
// goModuleVersion is the tagged release for this module
const goModuleVersion = "2.6.27"
const goModuleVersion = "2.6.28"

View File

@@ -1,3 +1,7 @@
# v1.3.28 (2025-01-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.27 (2025-01-09)
* **Dependency Update**: Updated to the latest SDK module versions

View File

@@ -3,4 +3,4 @@
package v4a
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.3.27"
const goModuleVersion = "1.3.28"

View File

@@ -1,3 +1,12 @@
# v1.5.1 (2025-01-16)
* **Bug Fix**: Fix nil dereference panic for operations that require checksums, but do not have an input setting for which algorithm to use.
# v1.5.0 (2025-01-15)
* **Feature**: S3 client behavior is updated to always calculate a checksum by default for operations that support it (such as PutObject or UploadPart), or require it (such as DeleteObjects). The checksum algorithm used by default now becomes CRC32. Checksum behavior can be configured using `when_supported` and `when_required` options - in code using RequestChecksumCalculation, in shared config using request_checksum_calculation, or as env variable using AWS_REQUEST_CHECKSUM_CALCULATION. The S3 client attempts to validate response checksums for all S3 API operations that support checksums. However, if the SDK has not implemented the specified checksum algorithm then this validation is skipped. Checksum validation behavior can be configured using `when_supported` and `when_required` options - in code using ResponseChecksumValidation, in shared config using response_checksum_validation, or as env variable using AWS_RESPONSE_CHECKSUM_VALIDATION.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.4.8 (2025-01-09)
* **Dependency Update**: Updated to the latest SDK module versions

View File

@@ -30,6 +30,9 @@ const (
// AlgorithmSHA256 represents SHA256 hash algorithm
AlgorithmSHA256 Algorithm = "SHA256"
// AlgorithmCRC64NVME represents CRC64NVME hash algorithm
AlgorithmCRC64NVME Algorithm = "CRC64NVME"
)
var supportedAlgorithms = []Algorithm{

View File

@@ -3,4 +3,4 @@
package checksum
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.4.8"
const goModuleVersion = "1.5.1"

View File

@@ -1,6 +1,7 @@
package checksum
import (
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/smithy-go/middleware"
)
@@ -14,11 +15,16 @@ type InputMiddlewareOptions struct {
// and true, or false if no algorithm is specified.
GetAlgorithm func(interface{}) (string, bool)
// Forces the middleware to compute the input payload's checksum. The
// request will fail if the algorithm is not specified or unable to compute
// the checksum.
// RequireChecksum indicates whether the operation model forces the middleware to compute the input payload's checksum.
// If RequireChecksum is true, the checksum is always calculated and RequestChecksumCalculation is ignored;
// otherwise RequestChecksumCalculation determines whether the checksum is calculated.
RequireChecksum bool
// RequestChecksumCalculation is the user config to opt in to or out of request checksum calculation. If RequireChecksum
// is true, the checksum is always calculated and this field is ignored; otherwise this field determines whether
// the checksum is calculated.
RequestChecksumCalculation aws.RequestChecksumCalculation
// Enables support for wrapping the serialized input payload with a
// content-encoding: aws-check wrapper, and including a trailer for the
// algorithm's checksum value.
@@ -72,7 +78,9 @@ func AddInputMiddleware(stack *middleware.Stack, options InputMiddlewareOptions)
// Initial checksum configuration look up middleware
err = stack.Initialize.Add(&setupInputContext{
GetAlgorithm: options.GetAlgorithm,
GetAlgorithm: options.GetAlgorithm,
RequireChecksum: options.RequireChecksum,
RequestChecksumCalculation: options.RequestChecksumCalculation,
}, middleware.Before)
if err != nil {
return err
@@ -81,7 +89,6 @@ func AddInputMiddleware(stack *middleware.Stack, options InputMiddlewareOptions)
stack.Build.Remove("ContentChecksum")
inputChecksum := &computeInputPayloadChecksum{
RequireChecksum: options.RequireChecksum,
EnableTrailingChecksum: options.EnableTrailingChecksum,
EnableComputePayloadHash: options.EnableComputeSHA256PayloadHash,
EnableDecodedContentLengthHeader: options.EnableDecodedContentLengthHeader,
@@ -94,7 +101,6 @@ func AddInputMiddleware(stack *middleware.Stack, options InputMiddlewareOptions)
if options.EnableTrailingChecksum {
trailerMiddleware := &addInputChecksumTrailer{
EnableTrailingChecksum: inputChecksum.EnableTrailingChecksum,
RequireChecksum: inputChecksum.RequireChecksum,
EnableComputePayloadHash: inputChecksum.EnableComputePayloadHash,
EnableDecodedContentLengthHeader: inputChecksum.EnableDecodedContentLengthHeader,
}
@@ -126,6 +132,9 @@ type OutputMiddlewareOptions struct {
// mode and true, or false if no mode is specified.
GetValidationMode func(interface{}) (string, bool)
// ResponseChecksumValidation is the user config to opt-in/out response checksum validation
ResponseChecksumValidation aws.ResponseChecksumValidation
// The set of checksum algorithms that should be used for response payload
// checksum validation. The algorithm(s) used will be a union of the
// output's returned algorithms and this set.
@@ -134,7 +143,7 @@ type OutputMiddlewareOptions struct {
ValidationAlgorithms []string
// If set the middleware will ignore output multipart checksums. Otherwise
// an checksum format error will be returned by the middleware.
// a checksum format error will be returned by the middleware.
IgnoreMultipartValidation bool
// When set the middleware will log when output does not have checksum or
@@ -150,7 +159,8 @@ type OutputMiddlewareOptions struct {
// checksum.
func AddOutputMiddleware(stack *middleware.Stack, options OutputMiddlewareOptions) error {
err := stack.Initialize.Add(&setupOutputContext{
GetValidationMode: options.GetValidationMode,
GetValidationMode: options.GetValidationMode,
ResponseChecksumValidation: options.ResponseChecksumValidation,
}, middleware.Before)
if err != nil {
return err

View File

@@ -0,0 +1,90 @@
package checksum
import (
"context"
"fmt"
"github.com/aws/aws-sdk-go-v2/aws"
awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
"github.com/aws/smithy-go/middleware"
smithyhttp "github.com/aws/smithy-go/transport/http"
)
var supportedChecksumFeatures = map[Algorithm]awsmiddleware.UserAgentFeature{
AlgorithmCRC32: awsmiddleware.UserAgentFeatureRequestChecksumCRC32,
AlgorithmCRC32C: awsmiddleware.UserAgentFeatureRequestChecksumCRC32C,
AlgorithmSHA1: awsmiddleware.UserAgentFeatureRequestChecksumSHA1,
AlgorithmSHA256: awsmiddleware.UserAgentFeatureRequestChecksumSHA256,
AlgorithmCRC64NVME: awsmiddleware.UserAgentFeatureRequestChecksumCRC64,
}
// RequestChecksumMetricsTracking is the middleware to track operation request's checksum usage
type RequestChecksumMetricsTracking struct {
RequestChecksumCalculation aws.RequestChecksumCalculation
UserAgent *awsmiddleware.RequestUserAgent
}
// ID provides the middleware identifier
func (m *RequestChecksumMetricsTracking) ID() string {
return "AWSChecksum:RequestMetricsTracking"
}
// HandleBuild checks request checksum config and checksum value sent
// and sends corresponding feature id to user agent
func (m *RequestChecksumMetricsTracking) HandleBuild(
ctx context.Context, in middleware.BuildInput, next middleware.BuildHandler,
) (
out middleware.BuildOutput, metadata middleware.Metadata, err error,
) {
req, ok := in.Request.(*smithyhttp.Request)
if !ok {
return out, metadata, fmt.Errorf("unknown request type %T", req)
}
switch m.RequestChecksumCalculation {
case aws.RequestChecksumCalculationWhenSupported:
m.UserAgent.AddUserAgentFeature(awsmiddleware.UserAgentFeatureRequestChecksumWhenSupported)
case aws.RequestChecksumCalculationWhenRequired:
m.UserAgent.AddUserAgentFeature(awsmiddleware.UserAgentFeatureRequestChecksumWhenRequired)
}
for algo, feat := range supportedChecksumFeatures {
checksumHeader := AlgorithmHTTPHeader(algo)
if checksum := req.Header.Get(checksumHeader); checksum != "" {
m.UserAgent.AddUserAgentFeature(feat)
}
}
return next.HandleBuild(ctx, in)
}
// ResponseChecksumMetricsTracking is the middleware to track operation response's checksum usage
type ResponseChecksumMetricsTracking struct {
ResponseChecksumValidation aws.ResponseChecksumValidation
UserAgent *awsmiddleware.RequestUserAgent
}
// ID provides the middleware identifier
func (m *ResponseChecksumMetricsTracking) ID() string {
return "AWSChecksum:ResponseMetricsTracking"
}
// HandleBuild checks the response checksum config and sends corresponding feature id to user agent
func (m *ResponseChecksumMetricsTracking) HandleBuild(
ctx context.Context, in middleware.BuildInput, next middleware.BuildHandler,
) (
out middleware.BuildOutput, metadata middleware.Metadata, err error,
) {
req, ok := in.Request.(*smithyhttp.Request)
if !ok {
return out, metadata, fmt.Errorf("unknown request type %T", req)
}
switch m.ResponseChecksumValidation {
case aws.ResponseChecksumValidationWhenSupported:
m.UserAgent.AddUserAgentFeature(awsmiddleware.UserAgentFeatureResponseChecksumWhenSupported)
case aws.ResponseChecksumValidationWhenRequired:
m.UserAgent.AddUserAgentFeature(awsmiddleware.UserAgentFeatureResponseChecksumWhenRequired)
}
return next.HandleBuild(ctx, in)
}

View File

@@ -7,6 +7,7 @@ import (
"hash"
"io"
"strconv"
"strings"
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
internalcontext "github.com/aws/aws-sdk-go-v2/internal/context"
@@ -16,7 +17,6 @@ import (
)
const (
contentMD5Header = "Content-Md5"
streamingUnsignedPayloadTrailerPayloadHash = "STREAMING-UNSIGNED-PAYLOAD-TRAILER"
)
@@ -49,13 +49,6 @@ type computeInputPayloadChecksum struct {
// the Algorithm's header is already set on the request.
EnableTrailingChecksum bool
// States that a checksum is required to be included for the operation. If
// Input does not specify a checksum, fallback to built in MD5 checksum is
// used.
//
// Replaces smithy-go's ContentChecksum middleware.
RequireChecksum bool
// Enables support for computing the SHA256 checksum of input payloads
// along with the algorithm specified checksum. Prevents downstream
// middleware handlers (computePayloadSHA256) re-reading the payload.
@@ -110,6 +103,15 @@ func (m *computeInputPayloadChecksum) HandleFinalize(
) (
out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
) {
var checksum string
algorithm, ok, err := getInputAlgorithm(ctx)
if err != nil {
return out, metadata, err
}
if !ok {
return next.HandleFinalize(ctx, in)
}
req, ok := in.Request.(*smithyhttp.Request)
if !ok {
return out, metadata, computeInputHeaderChecksumError{
@@ -117,8 +119,6 @@ func (m *computeInputPayloadChecksum) HandleFinalize(
}
}
var algorithm Algorithm
var checksum string
defer func() {
if algorithm == "" || checksum == "" || err != nil {
return
@@ -130,29 +130,14 @@ func (m *computeInputPayloadChecksum) HandleFinalize(
})
}()
// If no algorithm was specified, and the operation requires a checksum,
// fallback to the legacy content MD5 checksum.
algorithm, ok, err = getInputAlgorithm(ctx)
if err != nil {
return out, metadata, err
} else if !ok {
if m.RequireChecksum {
checksum, err = setMD5Checksum(ctx, req)
if err != nil {
return out, metadata, computeInputHeaderChecksumError{
Msg: "failed to compute stream's MD5 checksum",
Err: err,
}
}
algorithm = Algorithm("MD5")
// If any checksum header is already set nothing to do.
for header := range req.Header {
h := strings.ToUpper(header)
if strings.HasPrefix(h, "X-AMZ-CHECKSUM-") {
algorithm = Algorithm(strings.TrimPrefix(h, "X-AMZ-CHECKSUM-"))
checksum = req.Header.Get(header)
return next.HandleFinalize(ctx, in)
}
return next.HandleFinalize(ctx, in)
}
// If the checksum header is already set nothing to do.
checksumHeader := AlgorithmHTTPHeader(algorithm)
if checksum = req.Header.Get(checksumHeader); checksum != "" {
return next.HandleFinalize(ctx, in)
}
computePayloadHash := m.EnableComputePayloadHash
@@ -217,6 +202,7 @@ func (m *computeInputPayloadChecksum) HandleFinalize(
}
}
checksumHeader := AlgorithmHTTPHeader(algorithm)
req.Header.Set(checksumHeader, checksum)
if computePayloadHash {
@@ -248,7 +234,6 @@ func (e computeInputTrailingChecksumError) Unwrap() error { return e.Err }
// - Trailing checksums are supported.
type addInputChecksumTrailer struct {
EnableTrailingChecksum bool
RequireChecksum bool
EnableComputePayloadHash bool
EnableDecodedContentLengthHeader bool
}
@@ -264,6 +249,16 @@ func (m *addInputChecksumTrailer) HandleFinalize(
) (
out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
) {
algorithm, ok, err := getInputAlgorithm(ctx)
if err != nil {
return out, metadata, computeInputTrailingChecksumError{
Msg: "failed to get algorithm",
Err: err,
}
} else if !ok {
return next.HandleFinalize(ctx, in)
}
if enabled, _ := middleware.GetStackValue(ctx, useTrailer{}).(bool); !enabled {
return next.HandleFinalize(ctx, in)
}
@@ -281,24 +276,11 @@ func (m *addInputChecksumTrailer) HandleFinalize(
}
}
// If no algorithm was specified, there is nothing to do.
algorithm, ok, err := getInputAlgorithm(ctx)
if err != nil {
return out, metadata, computeInputTrailingChecksumError{
Msg: "failed to get algorithm",
Err: err,
// If any checksum header is already set nothing to do.
for header := range req.Header {
if strings.HasPrefix(strings.ToLower(header), "x-amz-checksum-") {
return next.HandleFinalize(ctx, in)
}
} else if !ok {
return out, metadata, computeInputTrailingChecksumError{
Msg: "no algorithm specified",
}
}
// If the checksum header is already set before finalize could run, there
// is nothing to do.
checksumHeader := AlgorithmHTTPHeader(algorithm)
if req.Header.Get(checksumHeader) != "" {
return next.HandleFinalize(ctx, in)
}
stream := req.GetStream()
@@ -444,39 +426,3 @@ func getRequestStreamLength(req *smithyhttp.Request) (int64, error) {
return -1, nil
}
// setMD5Checksum computes the MD5 of the request payload and sets it to the
// Content-MD5 header. Returning the MD5 base64 encoded string or error.
//
// If the MD5 is already set as the Content-MD5 header, that value will be
// returned, and nothing else will be done.
//
// If the payload is empty, no MD5 will be computed. No error will be returned.
// Empty payloads do not have an MD5 value.
//
// Replaces the smithy-go middleware for httpChecksum trait.
func setMD5Checksum(ctx context.Context, req *smithyhttp.Request) (string, error) {
if v := req.Header.Get(contentMD5Header); len(v) != 0 {
return v, nil
}
stream := req.GetStream()
if stream == nil {
return "", nil
}
if !req.IsStreamSeekable() {
return "", fmt.Errorf(
"unseekable stream is not supported for computing md5 checksum")
}
v, err := computeMD5Checksum(stream)
if err != nil {
return "", err
}
if err := req.RewindStream(); err != nil {
return "", fmt.Errorf("failed to rewind stream after computing MD5 checksum, %w", err)
}
// set the 'Content-MD5' header
req.Header.Set(contentMD5Header, string(v))
return string(v), nil
}

View File

@@ -2,11 +2,16 @@ package checksum
import (
"context"
"github.com/aws/aws-sdk-go-v2/aws"
internalcontext "github.com/aws/aws-sdk-go-v2/internal/context"
"github.com/aws/smithy-go/middleware"
)
const (
checksumValidationModeEnabled = "ENABLED"
)
// setupChecksumContext is the initial middleware that looks up the input
// used to configure checksum behavior. This middleware must be executed before
// input validation step or any other checksum middleware.
@@ -17,6 +22,16 @@ type setupInputContext struct {
// Given the input parameter value, the function must return the algorithm
// and true, or false if no algorithm is specified.
GetAlgorithm func(interface{}) (string, bool)
// RequireChecksum indicates whether operation model forces middleware to compute the input payload's checksum.
// If RequireChecksum is set to true, checksum will be calculated and RequestChecksumCalculation will be ignored,
// otherwise RequestChecksumCalculation will be used to indicate if checksum will be calculated
RequireChecksum bool
// RequestChecksumCalculation is the user config to opt-in/out request checksum calculation. If RequireChecksum is
// set to true, checksum will be calculated and this field will be ignored, otherwise
// RequestChecksumCalculation will be used to indicate if checksum will be calculated
RequestChecksumCalculation aws.RequestChecksumCalculation
}
// ID for the middleware
@@ -31,15 +46,18 @@ func (m *setupInputContext) HandleInitialize(
) (
out middleware.InitializeOutput, metadata middleware.Metadata, err error,
) {
// Check if validation algorithm is specified.
// nil check here is for operations that require checksum but do not have input algorithm setting
if m.GetAlgorithm != nil {
// check if input resource has a checksum algorithm
algorithm, ok := m.GetAlgorithm(in.Parameters)
if ok && len(algorithm) != 0 {
if algorithm, ok := m.GetAlgorithm(in.Parameters); ok {
ctx = internalcontext.SetChecksumInputAlgorithm(ctx, algorithm)
return next.HandleInitialize(ctx, in)
}
}
if m.RequireChecksum || m.RequestChecksumCalculation == aws.RequestChecksumCalculationWhenSupported {
ctx = internalcontext.SetChecksumInputAlgorithm(ctx, string(AlgorithmCRC32))
}
return next.HandleInitialize(ctx, in)
}
@@ -50,6 +68,9 @@ type setupOutputContext struct {
// Given the input parameter value, the function must return the validation
// mode and true, or false if no mode is specified.
GetValidationMode func(interface{}) (string, bool)
// ResponseChecksumValidation states user config to opt-in/out checksum validation
ResponseChecksumValidation aws.ResponseChecksumValidation
}
// ID for the middleware
@@ -64,13 +85,11 @@ func (m *setupOutputContext) HandleInitialize(
) (
out middleware.InitializeOutput, metadata middleware.Metadata, err error,
) {
// Check if validation mode is specified.
if m.GetValidationMode != nil {
// check is input resource has a checksum algorithm
mode, ok := m.GetValidationMode(in.Parameters)
if ok && len(mode) != 0 {
ctx = setContextOutputValidationMode(ctx, mode)
}
mode, _ := m.GetValidationMode(in.Parameters)
if m.ResponseChecksumValidation == aws.ResponseChecksumValidationWhenSupported || mode == checksumValidationModeEnabled {
ctx = setContextOutputValidationMode(ctx, checksumValidationModeEnabled)
}
return next.HandleInitialize(ctx, in)

View File

@@ -55,7 +55,7 @@ func (m *validateOutputPayloadChecksum) ID() string {
}
// HandleDeserialize is a Deserialize middleware that wraps the HTTP response
// body with an io.ReadCloser that will validate the its checksum.
// body with an io.ReadCloser that will validate its checksum.
func (m *validateOutputPayloadChecksum) HandleDeserialize(
ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler,
) (
@@ -66,8 +66,7 @@ func (m *validateOutputPayloadChecksum) HandleDeserialize(
return out, metadata, err
}
// If there is no validation mode specified nothing is supported.
if mode := getContextOutputValidationMode(ctx); mode != "ENABLED" {
if mode := getContextOutputValidationMode(ctx); mode != checksumValidationModeEnabled {
return out, metadata, err
}
@@ -90,8 +89,6 @@ func (m *validateOutputPayloadChecksum) HandleDeserialize(
algorithmToUse = algorithm
}
// TODO this must validate the validation mode is set to enabled.
logger := middleware.GetLogger(ctx)
// Skip validation if no checksum algorithm or checksum is available.
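
The hunks above gate output validation on the context validation mode and wrap the deserialized response body in a checksum-validating io.ReadCloser. As a hedged sketch of the user-visible effect (assuming the `client` and `ctx` from the configuration sketch further below, placeholder bucket/key names, and imports of "io", "log", and the aws/s3 packages), a checksum mismatch surfaces as an error while the body is read:

```go
// With response checksum validation enabled ("when supported" is now the
// default), the response body is wrapped so its checksum is verified as it
// is consumed.
out, err := client.GetObject(ctx, &s3.GetObjectInput{
	Bucket: aws.String("example-bucket"), // placeholder
	Key:    aws.String("example-key"),    // placeholder
})
if err != nil {
	log.Fatal(err)
}
defer out.Body.Close()

body, err := io.ReadAll(out.Body)
if err != nil {
	// A checksum mismatch (or a transfer error) is reported here.
	log.Fatalf("reading object body: %v", err)
}
_ = body
```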

View File

@@ -1,3 +1,7 @@
# v1.12.9 (2025-01-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.12.8 (2025-01-09)
* **Dependency Update**: Updated to the latest SDK module versions

View File

@@ -3,4 +3,4 @@
package presignedurl
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.12.8"
const goModuleVersion = "1.12.9"

View File

@@ -1,3 +1,7 @@
# v1.18.9 (2025-01-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.18.8 (2025-01-09)
* **Dependency Update**: Updated to the latest SDK module versions

View File

@@ -3,4 +3,4 @@
package s3shared
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.18.8"
const goModuleVersion = "1.18.9"

View File

@@ -1,3 +1,17 @@
# v1.73.1 (2025-01-16)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.73.0 (2025-01-15)
* **Feature**: S3 client behavior is updated to always calculate a checksum by default for operations that support it (such as PutObject or UploadPart), or require it (such as DeleteObjects). The checksum algorithm used by default now becomes CRC32. Checksum behavior can be configured using `when_supported` and `when_required` options - in code using RequestChecksumCalculation, in shared config using request_checksum_calculation, or as env variable using AWS_REQUEST_CHECKSUM_CALCULATION. The S3 client attempts to validate response checksums for all S3 API operations that support checksums. However, if the SDK has not implemented the specified checksum algorithm then this validation is skipped. Checksum validation behavior can be configured using `when_supported` and `when_required` options - in code using ResponseChecksumValidation, in shared config using response_checksum_validation, or as env variable using AWS_RESPONSE_CHECKSUM_VALIDATION.
* **Feature**: This change enhances integrity protections for new SDK requests to S3. S3 SDKs now support the CRC64NVME checksum algorithm, full object checksums for multipart S3 objects, and new default integrity protections for S3 requests.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.72.3 (2025-01-14)
* **Bug Fix**: Fix issue where waiters were not failing on unmatched errors as they should. This may have breaking behavioral changes for users in fringe cases. See [this announcement](https://github.com/aws/aws-sdk-go-v2/discussions/2954) for more information.
# v1.72.2 (2025-01-09)
* **Dependency Update**: Updated to the latest SDK module versions
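
The v1.73.0 entry above changes the client defaults for request checksum calculation and response checksum validation. As a minimal, illustrative sketch (not part of this diff), this is how a caller could keep the old "only when required" behaviour through the new client options; per the changelog, the same settings can also come from shared config (request_checksum_calculation / response_checksum_validation) or the AWS_REQUEST_CHECKSUM_CALCULATION / AWS_RESPONSE_CHECKSUM_VALIDATION environment variables:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Only compute request checksums and validate response checksums when the
	// operation requires it, instead of the new "when supported" default.
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.RequestChecksumCalculation = aws.RequestChecksumCalculationWhenRequired
		o.ResponseChecksumValidation = aws.ResponseChecksumValidationWhenRequired
	})
	_ = client
}
```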

View File

@@ -449,15 +449,17 @@ func setResolvedDefaultsMode(o *Options) {
// NewFromConfig returns a new client from the provided config.
func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client {
opts := Options{
Region: cfg.Region,
DefaultsMode: cfg.DefaultsMode,
RuntimeEnvironment: cfg.RuntimeEnvironment,
HTTPClient: cfg.HTTPClient,
Credentials: cfg.Credentials,
APIOptions: cfg.APIOptions,
Logger: cfg.Logger,
ClientLogMode: cfg.ClientLogMode,
AppID: cfg.AppID,
Region: cfg.Region,
DefaultsMode: cfg.DefaultsMode,
RuntimeEnvironment: cfg.RuntimeEnvironment,
HTTPClient: cfg.HTTPClient,
Credentials: cfg.Credentials,
APIOptions: cfg.APIOptions,
Logger: cfg.Logger,
ClientLogMode: cfg.ClientLogMode,
AppID: cfg.AppID,
RequestChecksumCalculation: cfg.RequestChecksumCalculation,
ResponseChecksumValidation: cfg.ResponseChecksumValidation,
}
resolveAWSRetryerProvider(cfg, &opts)
resolveAWSRetryMaxAttempts(cfg, &opts)
@@ -845,6 +847,30 @@ func addUserAgentRetryMode(stack *middleware.Stack, options Options) error {
return nil
}
func addRequestChecksumMetricsTracking(stack *middleware.Stack, options Options) error {
ua, err := getOrAddRequestUserAgent(stack)
if err != nil {
return err
}
return stack.Build.Insert(&internalChecksum.RequestChecksumMetricsTracking{
RequestChecksumCalculation: options.RequestChecksumCalculation,
UserAgent: ua,
}, "UserAgent", middleware.Before)
}
func addResponseChecksumMetricsTracking(stack *middleware.Stack, options Options) error {
ua, err := getOrAddRequestUserAgent(stack)
if err != nil {
return err
}
return stack.Build.Insert(&internalChecksum.ResponseChecksumMetricsTracking{
ResponseChecksumValidation: options.ResponseChecksumValidation,
UserAgent: ua,
}, "UserAgent", middleware.Before)
}
func resolveTracerProvider(options *Options) {
if options.TracerProvider == nil {
options.TracerProvider = &tracing.NopTracerProvider{}
@@ -1148,6 +1174,10 @@ func (c presignConverter) convertToPresignMiddleware(stack *middleware.Stack, op
return nil
}
func withNoDefaultChecksumAPIOption(options *Options) {
options.RequestChecksumCalculation = aws.RequestChecksumCalculationWhenRequired
}
func addRequestResponseLogging(stack *middleware.Stack, o Options) error {
return stack.Deserialize.Add(&smithyhttp.RequestResponseLogger{
LogRequest: o.ClientLogMode.IsRequest(),
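
The presign converter above applies withNoDefaultChecksumAPIOption, which flips RequestChecksumCalculation back to "when required" for presigned requests so a default CRC32 header is not baked into the signed URL. A small sketch (assuming the `client` and `ctx` from the configuration example above and placeholder bucket/key names):

```go
// Presigned requests do not get the default request checksum; only
// operations that actually require a checksum would still include one.
presigner := s3.NewPresignClient(client)
req, err := presigner.PresignPutObject(ctx, &s3.PutObjectInput{
	Bucket: aws.String("example-bucket"), // placeholder
	Key:    aws.String("example-key"),    // placeholder
})
if err != nil {
	log.Fatal(err)
}
log.Println(req.Method, req.URL)
```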

View File

@@ -38,7 +38,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information
// about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions
//
@@ -73,13 +73,13 @@ import (
// [ListMultipartUploads]
//
// [ListParts]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [UploadPart]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html
// [ListMultipartUploads]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
// [Multipart Upload and Permissions]: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html
// [CompleteMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [CreateMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html
func (c *Client) AbortMultipartUpload(ctx context.Context, params *AbortMultipartUploadInput, optFns ...func(*Options)) (*AbortMultipartUploadOutput, error) {
if params == nil {

View File

@@ -57,7 +57,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information about
// endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions
// - General purpose bucket permissions - For information about permissions
@@ -133,16 +133,16 @@ import (
//
// [Uploading Objects Using Multipart Upload]: https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
// [Amazon S3 Error Best Practices]: https://docs.aws.amazon.com/AmazonS3/latest/dev/ErrorBestPractices.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [ListParts]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html
// [UploadPart]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html
// [additional checksum value]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_Checksum.html
// [UploadPartCopy]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [CreateMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html
// [AbortMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
// [ListMultipartUploads]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html
// [Multipart Upload and Permissions]: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
//
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
func (c *Client) CompleteMultipartUpload(ctx context.Context, params *CompleteMultipartUploadInput, optFns ...func(*Options)) (*CompleteMultipartUploadOutput, error) {
@@ -212,7 +212,7 @@ type CompleteMultipartUploadInput struct {
// This header can be used as a data integrity check to verify that the data
// received is the same data that was originally sent. This header specifies the
// base64-encoded, 32-bit CRC-32 checksum of the object. For more information, see [Checking object integrity]
// Base64 encoded, 32-bit CRC-32 checksum of the object. For more information, see [Checking object integrity]
// in the Amazon S3 User Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
@@ -220,15 +220,23 @@ type CompleteMultipartUploadInput struct {
// This header can be used as a data integrity check to verify that the data
// received is the same data that was originally sent. This header specifies the
// base64-encoded, 32-bit CRC-32C checksum of the object. For more information, see
// [Checking object integrity]in the Amazon S3 User Guide.
// Base64 encoded, 32-bit CRC-32C checksum of the object. For more information,
// see [Checking object integrity]in the Amazon S3 User Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumCRC32C *string
// This header can be used as a data integrity check to verify that the data
// received is the same data that was originally sent. This header specifies the
// base64-encoded, 160-bit SHA-1 digest of the object. For more information, see [Checking object integrity]
// Base64 encoded, 64-bit CRC-64NVME checksum of the object. The CRC-64NVME
// checksum is always a full object checksum. For more information, see [Checking object integrity in the Amazon S3 User Guide].
//
// [Checking object integrity in the Amazon S3 User Guide]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumCRC64NVME *string
// This header can be used as a data integrity check to verify that the data
// received is the same data that was originally sent. This header specifies the
// Base64 encoded, 160-bit SHA-1 digest of the object. For more information, see [Checking object integrity]
// in the Amazon S3 User Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
@@ -236,12 +244,22 @@ type CompleteMultipartUploadInput struct {
// This header can be used as a data integrity check to verify that the data
// received is the same data that was originally sent. This header specifies the
// base64-encoded, 256-bit SHA-256 digest of the object. For more information, see [Checking object integrity]
// Base64 encoded, 256-bit SHA-256 digest of the object. For more information, see [Checking object integrity]
// in the Amazon S3 User Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumSHA256 *string
// This header specifies the checksum type of the object, which determines how
// part-level checksums are combined to create an object-level checksum for
// multipart objects. You can use this header as a data integrity check to verify
// that the checksum type that is received is the same checksum that was specified.
// If the checksum type doesnt match the checksum type that was specified for the
// object during the CreateMultipartUpload request, itll result in a BadDigest
// error. For more information, see Checking object integrity in the Amazon S3 User
// Guide.
ChecksumType types.ChecksumType
// The account ID of the expected bucket owner. If the account ID that you provide
// does not match the actual owner of the bucket, the request fails with the HTTP
// status code 403 Forbidden (access denied).
@@ -281,6 +299,11 @@ type CompleteMultipartUploadInput struct {
// [RFC 7232]: https://tools.ietf.org/html/rfc7232
IfNoneMatch *string
// The expected total object size of the multipart upload request. If there's a
// mismatch between the specified object size value and the actual object size
// value, it results in an HTTP 400 InvalidRequest error.
MpuObjectSize *string
// The container for the multipart upload request information.
MultipartUpload *types.CompletedMultipartUpload
@@ -346,50 +369,67 @@ type CompleteMultipartUploadOutput struct {
// encryption with Key Management Service (KMS) keys (SSE-KMS).
BucketKeyEnabled *bool
// The base64-encoded, 32-bit CRC-32 checksum of the object. This will only be
// present if it was uploaded with the object. When you use an API operation on an
// object that was uploaded using multipart uploads, this value may not be a direct
// checksum value of the full object. Instead, it's a calculation based on the
// checksum values of each individual part. For more information about how
// checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// The Base64 encoded, 32-bit CRC-32 checksum of the object. This checksum is only
// present if the checksum was uploaded with the object. When you use an API
// operation on an object that was uploaded using multipart uploads, this value may
// not be a direct checksum value of the full object. Instead, it's a calculation
// based on the checksum values of each individual part. For more information about
// how checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums
ChecksumCRC32 *string
// The base64-encoded, 32-bit CRC-32C checksum of the object. This will only be
// present if it was uploaded with the object. When you use an API operation on an
// object that was uploaded using multipart uploads, this value may not be a direct
// checksum value of the full object. Instead, it's a calculation based on the
// checksum values of each individual part. For more information about how
// checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// The Base64 encoded, 32-bit CRC-32C checksum of the object. This checksum is
// only present if the checksum was uploaded with the object. When you use an API
// operation on an object that was uploaded using multipart uploads, this value may
// not be a direct checksum value of the full object. Instead, it's a calculation
// based on the checksum values of each individual part. For more information about
// how checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums
ChecksumCRC32C *string
// The base64-encoded, 160-bit SHA-1 digest of the object. This will only be
// present if it was uploaded with the object. When you use the API operation on an
// object that was uploaded using multipart uploads, this value may not be a direct
// checksum value of the full object. Instead, it's a calculation based on the
// checksum values of each individual part. For more information about how
// checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// This header can be used as a data integrity check to verify that the data
// received is the same data that was originally sent. This header specifies the
// Base64 encoded, 64-bit CRC-64NVME checksum of the object. The CRC-64NVME
// checksum is always a full object checksum. For more information, see [Checking object integrity in the Amazon S3 User Guide].
//
// [Checking object integrity in the Amazon S3 User Guide]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumCRC64NVME *string
// The Base64 encoded, 160-bit SHA-1 digest of the object. This will only be
// present if the checksum was uploaded with the object. When you use the API
// operation on an object that was uploaded using multipart uploads, this value may
// not be a direct checksum value of the full object. Instead, it's a calculation
// based on the checksum values of each individual part. For more information about
// how checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums
ChecksumSHA1 *string
// The base64-encoded, 256-bit SHA-256 digest of the object. This will only be
// present if it was uploaded with the object. When you use an API operation on an
// object that was uploaded using multipart uploads, this value may not be a direct
// checksum value of the full object. Instead, it's a calculation based on the
// checksum values of each individual part. For more information about how
// checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// The Base64 encoded, 256-bit SHA-256 digest of the object. This will only be
// present if the checksum was uploaded with the object. When you use an API
// operation on an object that was uploaded using multipart uploads, this value may
// not be a direct checksum value of the full object. Instead, it's a calculation
// based on the checksum values of each individual part. For more information about
// how checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums
ChecksumSHA256 *string
// The checksum type, which determines how part-level checksums are combined to
// create an object-level checksum for multipart objects. You can use this header
// as a data integrity check to verify that the checksum type that is received is
// the same checksum type that was specified during the CreateMultipartUpload
// request. For more information, see [Checking object integrity in the Amazon S3 User Guide].
//
// [Checking object integrity in the Amazon S3 User Guide]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumType types.ChecksumType
// Entity tag that identifies the newly created object's data. Objects with
// different object data will have different entity tags. The entity tag is an
// opaque string. The entity tag may or may not be an MD5 digest of the object

View File

@@ -34,7 +34,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information
// about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// - VPC endpoints don't support cross-Region requests (including copies). If
// you're using VPC endpoints, your source and destination buckets should be in the
@@ -144,7 +144,6 @@ import (
//
// [GetObject]
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam-identity-policies.html
// [Resolve the Error 200 response when copying objects to Amazon S3]: https://repost.aws/knowledge-center/s3-resolve-200-internalerror
// [Copy Object Using the REST Multipart Upload API]: https://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjctsUsingRESTMPUapi.html
@@ -154,7 +153,8 @@ import (
// [Transfer Acceleration]: https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
// [PutObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html
// [GetObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Amazon S3 pricing]: http://aws.amazon.com/s3/pricing/
func (c *Client) CopyObject(ctx context.Context, params *CopyObjectInput, optFns ...func(*Options)) (*CopyObjectOutput, error) {
if params == nil {
@@ -859,7 +859,7 @@ type CopyObjectOutput struct {
SSECustomerKeyMD5 *string
// If present, indicates the Amazon Web Services KMS Encryption Context to use for
// object encryption. The value of this header is a base64-encoded UTF-8 string
// object encryption. The value of this header is a Base64 encoded UTF-8 string
// holding JSON with the encryption context key-value pairs.
SSEKMSEncryptionContext *string

View File

@@ -39,7 +39,7 @@ import (
// https://s3express-control.region-code.amazonaws.com/bucket-name .
// Virtual-hosted-style requests aren't supported. For more information about
// endpoints in Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more
// information about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// information about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions
//
@@ -114,12 +114,12 @@ import (
// [DeleteBucket]
//
// [Creating, configuring, and working with Amazon S3 buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [PutObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [DeleteBucket]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html
// [CreateBucket]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateBucket.html
// [Virtual hosting of buckets]: https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
//
// [DeletePublicAccessBlock]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html
// [Directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html

View File

@@ -170,6 +170,9 @@ func (c *Client) addOperationCreateBucketMetadataTableConfigurationMiddlewares(s
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addRequestChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpCreateBucketMetadataTableConfigurationValidationMiddleware(stack); err != nil {
return err
}
@@ -253,6 +256,7 @@ func addCreateBucketMetadataTableConfigurationInputChecksumMiddlewares(stack *mi
return internalChecksum.AddInputMiddleware(stack, internalChecksum.InputMiddlewareOptions{
GetAlgorithm: getCreateBucketMetadataTableConfigurationRequestAlgorithmMember,
RequireChecksum: true,
RequestChecksumCalculation: options.RequestChecksumCalculation,
EnableTrailingChecksum: false,
EnableComputeSHA256PayloadHash: true,
EnableDecodedContentLengthHeader: true,

View File

@@ -41,7 +41,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information
// about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Request signing For request signing, multipart upload is just a series of
// regular requests. You initiate a multipart upload, send one or more requests to
@@ -202,7 +202,6 @@ import (
//
// [ListMultipartUploads]
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [ListParts]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html
// [UploadPart]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html
// [Protecting Data Using Server-Side Encryption with KMS keys]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html
@@ -213,12 +212,13 @@ import (
// [Multipart upload API and permissions]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html#mpuAndPermissions
// [UploadPartCopy]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html
// [CompleteMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Authenticating Requests (Amazon Web Services Signature Version 4)]: https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
// [AbortMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
// [Multipart Upload Overview]: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
// [Protecting data using server-side encryption with Amazon Web Services KMS]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html
// [ListMultipartUploads]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
//
// [Specifying server-side encryption with KMS for new object uploads]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html
// [Protecting data using server-side encryption with customer-provided encryption keys (SSE-C)]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html
@@ -332,6 +332,12 @@ type CreateMultipartUploadInput struct {
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumAlgorithm types.ChecksumAlgorithm
// Indicates the checksum type that you want Amazon S3 to use to calculate the
// object's checksum value. For more information, see [Checking object integrity in the Amazon S3 User Guide].
//
// [Checking object integrity in the Amazon S3 User Guide]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumType types.ChecksumType
// Specifies presentational information for the object.
ContentDisposition *string
@@ -632,7 +638,7 @@ type CreateMultipartUploadInput struct {
SSECustomerKeyMD5 *string
// Specifies the Amazon Web Services KMS Encryption Context to use for object
// encryption. The value of this header is a Base64-encoded string of a UTF-8
// encryption. The value of this header is a Base64 encoded string of a UTF-8
// encoded JSON, which contains the encryption context as key-value pairs.
//
// Directory buckets - You can optionally provide an explicit encryption context
@@ -779,6 +785,12 @@ type CreateMultipartUploadOutput struct {
// The algorithm that was used to create a checksum of the object.
ChecksumAlgorithm types.ChecksumAlgorithm
// Indicates the checksum type that you want Amazon S3 to use to calculate the
// object's checksum value. For more information, see [Checking object integrity in the Amazon S3 User Guide].
//
// [Checking object integrity in the Amazon S3 User Guide]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumType types.ChecksumType
// Object key for which the multipart upload was initiated.
Key *string
@@ -803,7 +815,7 @@ type CreateMultipartUploadOutput struct {
SSECustomerKeyMD5 *string
// If present, indicates the Amazon Web Services KMS Encryption Context to use for
// object encryption. The value of this header is a Base64-encoded string of a
// object encryption. The value of this header is a Base64 encoded string of a
// UTF-8 encoded JSON, which contains the encryption context as key-value pairs.
SSEKMSEncryptionContext *string

View File

@@ -50,7 +50,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com . Path-style
// requests are not supported. For more information about endpoints in Availability
// Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information about endpoints
// in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// - CopyObject API operation - Unlike other Zonal endpoint API operations, the
// CopyObject API operation doesn't use the temporary security credentials
@@ -124,13 +124,13 @@ import (
// Bucket-name.s3express-zone-id.region-code.amazonaws.com .
//
// [Specifying server-side encryption with KMS for new object uploads]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Performance guidelines and design patterns]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-optimizing-performance-guidelines-design-patterns.html#s3-express-optimizing-performance-session-authentication
// [CopyObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
// [S3 Express One Zone APIs]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-APIs.html
// [HeadBucket]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
// [UploadPartCopy]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Amazon Web Services managed key]: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk
// [Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam-identity-policies.html
// [Example bucket policies for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam-example-bucket-policies.html
@@ -138,7 +138,7 @@ import (
// [Protecting data with server-side encryption]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-serv-side-encryption.html
// [x-amz-create-session-mode]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html#API_CreateSession_RequestParameters
// [Zonal endpoint (object-level) API operations]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-differences.html#s3-express-differences-api-operations
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
func (c *Client) CreateSession(ctx context.Context, params *CreateSessionInput, optFns ...func(*Options)) (*CreateSessionOutput, error) {
if params == nil {
params = &CreateSessionInput{}
@@ -179,7 +179,7 @@ type CreateSessionInput struct {
// Specifies the Amazon Web Services KMS Encryption Context as an additional
// encryption context to use for object encryption. The value of this header is a
// Base64-encoded string of a UTF-8 encoded JSON, which contains the encryption
// Base64 encoded string of a UTF-8 encoded JSON, which contains the encryption
// context as key-value pairs. This value is stored as object metadata and
// automatically gets passed on to Amazon Web Services KMS for future GetObject
// operations on this object.
@@ -251,7 +251,7 @@ type CreateSessionOutput struct {
BucketKeyEnabled *bool
// If present, indicates the Amazon Web Services KMS Encryption Context to use for
// object encryption. The value of this header is a Base64-encoded string of a
// object encryption. The value of this header is a Base64 encoded string of a
// UTF-8 encoded JSON, which contains the encryption context as key-value pairs.
// This value is stored as object metadata and automatically gets passed on to
// Amazon Web Services KMS for future GetObject operations on this object.

View File

@@ -26,7 +26,7 @@ import (
// https://s3express-control.region-code.amazonaws.com/bucket-name .
// Virtual-hosted-style requests aren't supported. For more information about
// endpoints in Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more
// information about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// information about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions
//
@@ -49,10 +49,10 @@ import (
//
// [DeleteObject]
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [DeleteObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html
// [CreateBucket]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html
func (c *Client) DeleteBucket(ctx context.Context, params *DeleteBucketInput, optFns ...func(*Options)) (*DeleteBucketOutput, error) {
if params == nil {

View File

@@ -47,7 +47,7 @@ import (
// in the format https://s3express-control.region-code.amazonaws.com/bucket-name
// . Virtual-hosted-style requests aren't supported. For more information about
// endpoints in Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more
// information about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// information about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// HTTP Host header syntax Directory buckets - The HTTP Host header syntax is
// s3express-control.region.amazonaws.com .
@@ -66,8 +66,8 @@ import (
// [Authorizing Regional endpoint APIs with IAM]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html
// [Managing Access Permissions to Your Amazon S3 Resources]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
func (c *Client) DeleteBucketLifecycle(ctx context.Context, params *DeleteBucketLifecycleInput, optFns ...func(*Options)) (*DeleteBucketLifecycleOutput, error) {
if params == nil {
params = &DeleteBucketLifecycleInput{}

View File

@@ -20,7 +20,7 @@ import (
// in the format https://s3express-control.region-code.amazonaws.com/bucket-name .
// Virtual-hosted-style requests aren't supported. For more information about
// endpoints in Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more
// information about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// information about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions If you are using an identity other than the root user of the Amazon
// Web Services account that owns the bucket, the calling identity must both have
@@ -60,11 +60,11 @@ import (
//
// [DeleteObject]
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [DeleteObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html
// [Using Bucket Policies and User Policies]: https://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html
// [CreateBucket]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html
func (c *Client) DeleteBucketPolicy(ctx context.Context, params *DeleteBucketPolicyInput, optFns ...func(*Options)) (*DeleteBucketPolicyOutput, error) {
if params == nil {

View File

@@ -44,7 +44,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information
// about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// To remove a specific version, you must use the versionId query parameter. Using
// this query parameter permanently deletes the version. If the object deleted is a
@@ -97,13 +97,13 @@ import (
// [PutObject]
//
// [Sample Request]: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETE.html#ExampleVersionObjectDelete
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Deleting objects from versioning-suspended buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjectsfromVersioningSuspendedBuckets.html
// [PutObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html
// [PutBucketLifecycle]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycle.html
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
// [Deleting object versions from a versioning-enabled bucket]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjectVersions.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Using MFA Delete]: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
func (c *Client) DeleteObject(ctx context.Context, params *DeleteObjectInput, optFns ...func(*Options)) (*DeleteObjectOutput, error) {
if params == nil {

View File

@@ -36,7 +36,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information
// about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// The operation supports two modes for the response: verbose and quiet. By
// default, the operation uses verbose mode in which the response includes the
@@ -104,13 +104,13 @@ import (
//
// [AbortMultipartUpload]
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [ListParts]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html
// [AbortMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
// [UploadPart]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
// [CompleteMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [MFA Delete]: https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete
// [CreateMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html
func (c *Client) DeleteObjects(ctx context.Context, params *DeleteObjectsInput, optFns ...func(*Options)) (*DeleteObjectsOutput, error) {
@@ -189,21 +189,22 @@ type DeleteObjectsInput struct {
// For the x-amz-checksum-algorithm header, replace algorithm with the
// supported algorithm from the following list:
//
// - CRC32
// - CRC-32
//
// - CRC32C
// - CRC-32C
//
// - SHA1
// - CRC-64NVME
//
// - SHA256
// - SHA-1
//
// - SHA-256
//
// For more information, see [Checking object integrity] in the Amazon S3 User Guide.
//
// If the individual checksum value you provide through x-amz-checksum-algorithm
// doesn't match the checksum algorithm you set through
// x-amz-sdk-checksum-algorithm , Amazon S3 ignores any provided ChecksumAlgorithm
// parameter and uses the checksum algorithm that matches the provided value in
// x-amz-checksum-algorithm .
// x-amz-sdk-checksum-algorithm , Amazon S3 fails the request with a BadDigest
// error.
//
// If you provide an individual checksum, Amazon S3 ignores any provided
// ChecksumAlgorithm parameter.
@@ -347,6 +348,9 @@ func (c *Client) addOperationDeleteObjectsMiddlewares(stack *middleware.Stack, o
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addRequestChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpDeleteObjectsValidationMiddleware(stack); err != nil {
return err
}
@@ -430,6 +434,7 @@ func addDeleteObjectsInputChecksumMiddlewares(stack *middleware.Stack, options O
return internalChecksum.AddInputMiddleware(stack, internalChecksum.InputMiddlewareOptions{
GetAlgorithm: getDeleteObjectsRequestAlgorithmMember,
RequireChecksum: true,
RequestChecksumCalculation: options.RequestChecksumCalculation,
EnableTrailingChecksum: false,
EnableComputeSHA256PayloadHash: true,
EnableDecodedContentLengthHeader: true,
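
For DeleteObjects the middleware options above keep RequireChecksum: true while also passing the user's RequestChecksumCalculation, so a request checksum is always computed (CRC32 by default after the setup.go change) even when the caller sets nothing. A hedged sketch (assuming the `client` and `ctx` from the configuration example above, placeholder names, and an import of github.com/aws/aws-sdk-go-v2/service/s3/types) showing how a caller can still pin the algorithm explicitly:

```go
// DeleteObjects requires a request checksum; without ChecksumAlgorithm the
// SDK now injects a default CRC32 checksum. Setting ChecksumAlgorithm pins
// the algorithm instead.
_, err := client.DeleteObjects(ctx, &s3.DeleteObjectsInput{
	Bucket: aws.String("example-bucket"), // placeholder
	Delete: &types.Delete{
		Objects: []types.ObjectIdentifier{
			{Key: aws.String("example-key")}, // placeholder
		},
		Quiet: aws.Bool(true),
	},
	ChecksumAlgorithm: types.ChecksumAlgorithmSha256,
})
if err != nil {
	log.Fatal(err)
}
```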

View File

@@ -57,7 +57,7 @@ import (
// in the format https://s3express-control.region-code.amazonaws.com/bucket-name
// . Virtual-hosted-style requests aren't supported. For more information about
// endpoints in Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more
// information about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// information about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// HTTP Host header syntax Directory buckets - The HTTP Host header syntax is
// s3express-control.region.amazonaws.com .
@@ -87,8 +87,8 @@ import (
// [Managing Access Permissions to Your Amazon S3 Resources]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html
// [DeleteBucketLifecycle]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
func (c *Client) GetBucketLifecycleConfiguration(ctx context.Context, params *GetBucketLifecycleConfigurationInput, optFns ...func(*Options)) (*GetBucketLifecycleConfigurationOutput, error) {
if params == nil {
params = &GetBucketLifecycleConfigurationInput{}
@@ -136,7 +136,7 @@ type GetBucketLifecycleConfigurationOutput struct {
// Indicates which default minimum object size behavior is applied to the
// lifecycle configuration.
//
// This parameter applies to general purpose buckets only. It is not supported for
// This parameter applies to general purpose buckets only. It isn't supported for
// directory bucket lifecycle configurations.
//
// - all_storage_classes_128K - Objects smaller than 128 KB will not transition


@@ -20,7 +20,7 @@ import (
// in the format https://s3express-control.region-code.amazonaws.com/bucket-name .
// Virtual-hosted-style requests aren't supported. For more information about
// endpoints in Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more
// information about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// information about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions If you are using an identity other than the root user of the Amazon
// Web Services account that owns the bucket, the calling identity must both have
@@ -64,11 +64,11 @@ import (
// [GetObject]
//
// [Bucket policy examples]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Example bucket policies for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam-example-bucket-policies.html
// [Using Bucket Policies and User Policies]: https://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html
// [GetObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html
func (c *Client) GetBucketPolicy(ctx context.Context, params *GetBucketPolicyInput, optFns ...func(*Options)) (*GetBucketPolicyOutput, error) {
if params == nil {


@@ -39,7 +39,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information about
// endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions
// - General purpose bucket permissions - You must have the required permissions
@@ -152,7 +152,6 @@ import (
//
// [GetObjectAcl]
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [RestoreObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html
// [Protecting data with server-side encryption]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-serv-side-encryption.html
// [ListBuckets]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html
@@ -160,7 +159,8 @@ import (
// [Restoring Archived Objects]: https://docs.aws.amazon.com/AmazonS3/latest/dev/restoring-objects.html
// [GetObjectAcl]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html
// [Specifying permissions in a policy]: https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
//
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
func (c *Client) GetObject(ctx context.Context, params *GetObjectInput, optFns ...func(*Options)) (*GetObjectOutput, error) {
@@ -449,34 +449,49 @@ type GetObjectOutput struct {
// Specifies caching behavior along the request/reply chain.
CacheControl *string
// The base64-encoded, 32-bit CRC-32 checksum of the object. This will only be
// present if it was uploaded with the object. For more information, see [Checking object integrity]in the
// Amazon S3 User Guide.
// The Base64 encoded, 32-bit CRC-32 checksum of the object. This checksum is only
// present if the checksum was uploaded with the object. For more information, see [Checking object integrity]
// in the Amazon S3 User Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumCRC32 *string
// The base64-encoded, 32-bit CRC-32C checksum of the object. This will only be
// present if it was uploaded with the object. For more information, see [Checking object integrity]in the
// Amazon S3 User Guide.
// The Base64 encoded, 32-bit CRC-32C checksum of the object. This will only be
// present if the checksum was uploaded with the object. For more information, see [Checking object integrity]
// in the Amazon S3 User Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumCRC32C *string
// The base64-encoded, 160-bit SHA-1 digest of the object. This will only be
// present if it was uploaded with the object. For more information, see [Checking object integrity]in the
// Amazon S3 User Guide.
// The Base64 encoded, 64-bit CRC-64NVME checksum of the object. For more
// information, see [Checking object integrity in the Amazon S3 User Guide].
//
// [Checking object integrity in the Amazon S3 User Guide]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumCRC64NVME *string
// The Base64 encoded, 160-bit SHA-1 digest of the object. This will only be
// present if the checksum was uploaded with the object. For more information, see [Checking object integrity]
// in the Amazon S3 User Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumSHA1 *string
// The base64-encoded, 256-bit SHA-256 digest of the object. This will only be
// present if it was uploaded with the object. For more information, see [Checking object integrity]in the
// Amazon S3 User Guide.
// The Base64 encoded, 256-bit SHA-256 digest of the object. This will only be
// present if the checksum was uploaded with the object. For more information, see [Checking object integrity]
// in the Amazon S3 User Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumSHA256 *string
// The checksum type, which determines how part-level checksums are combined to
// create an object-level checksum for multipart objects. You can use this response
// header to verify that the checksum type that is received is the same checksum
// type that was specified in the CreateMultipartUpload request. For more
// information, see [Checking object integrity]in the Amazon S3 User Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumType types.ChecksumType
// Specifies presentational information for the object.
ContentDisposition *string
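A short sketch of reading the checksum members documented above from GetObjectOutput. Bucket and key are hypothetical; ChecksumMode must be enabled for the stored checksum headers to be returned.

package main

import (
    "context"
    "fmt"
    "io"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)

    out, err := client.GetObject(context.TODO(), &s3.GetObjectInput{
        Bucket:       aws.String("amzn-s3-demo-bucket"), // hypothetical
        Key:          aws.String("reports/2025-01.csv"), // hypothetical
        ChecksumMode: types.ChecksumModeEnabled,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer out.Body.Close()
    if _, err := io.Copy(io.Discard, out.Body); err != nil {
        log.Fatal(err)
    }

    // Only the checksum(s) actually stored with the object are populated.
    if out.ChecksumCRC64NVME != nil {
        fmt.Println("CRC-64NVME:", *out.ChecksumCRC64NVME)
    }
    if out.ChecksumCRC32 != nil {
        fmt.Println("CRC-32:", *out.ChecksumCRC32)
    }
    // For multipart objects this reports how part checksums were combined.
    fmt.Println("checksum type:", out.ChecksumType)
}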
@@ -720,6 +735,9 @@ func (c *Client) addOperationGetObjectMiddlewares(stack *middleware.Stack, optio
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addResponseChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpGetObjectValidationMiddleware(stack); err != nil {
return err
}
@@ -799,7 +817,8 @@ func getGetObjectRequestValidationModeMember(input interface{}) (string, bool) {
func addGetObjectOutputChecksumMiddlewares(stack *middleware.Stack, options Options) error {
return internalChecksum.AddOutputMiddleware(stack, internalChecksum.OutputMiddlewareOptions{
GetValidationMode: getGetObjectRequestValidationModeMember,
ValidationAlgorithms: []string{"CRC32", "CRC32C", "SHA256", "SHA1"},
ResponseChecksumValidation: options.ResponseChecksumValidation,
ValidationAlgorithms: []string{"CRC64NVME", "CRC32", "CRC32C", "SHA256", "SHA1"},
IgnoreMultipartValidation: true,
LogValidationSkipped: true,
LogMultipartValidationSkipped: true,
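ResponseChecksumValidation, passed to the output middleware above alongside the extended ValidationAlgorithms list, is likewise a client option. A minimal sketch of setting it explicitly; the aws.ResponseChecksumValidationWhenSupported constant is an assumption based on recent SDK releases.

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    // Validate any supported checksum (CRC-64NVME, CRC-32, CRC-32C, SHA-256,
    // SHA-1) returned in a response. The constant name is an assumption.
    client := s3.NewFromConfig(cfg, func(o *s3.Options) {
        o.ResponseChecksumValidation = aws.ResponseChecksumValidationWhenSupported
    })
    _ = client
}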


@@ -27,7 +27,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information about
// endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions
//
@@ -149,20 +149,20 @@ import (
//
// [Specifying server-side encryption with KMS for new object uploads]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html
// [GetObjectLegalHold]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLegalHold.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [ListParts]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html
// [Server-Side Encryption (Using Customer-Provided Encryption Keys)]: https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
// [GetObjectTagging]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html
// [Specifying Permissions in a Policy]: https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
// [RFC 7232]: https://tools.ietf.org/html/rfc7232
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [HeadObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html
// [GetObjectLockConfiguration]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html
// [Protecting data with server-side encryption]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-serv-side-encryption.html
// [GetObjectAcl]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html
// [GetObjectRetention]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectRetention.html
// [GetObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
func (c *Client) GetObjectAttributes(ctx context.Context, params *GetObjectAttributesInput, optFns ...func(*Options)) (*GetObjectAttributesOutput, error) {
if params == nil {
params = &GetObjectAttributesInput{}


@@ -64,14 +64,14 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com . Path-style
// requests are not supported. For more information about endpoints in Availability
// Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information about endpoints in
// Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// [Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam-identity-policies.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [REST Authentication]: https://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
// [Example bucket policies for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam-example-bucket-policies.html
// [Managing access permissions to your Amazon S3 resources]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
func (c *Client) HeadBucket(ctx context.Context, params *HeadBucketInput, optFns ...func(*Options)) (*HeadBucketOutput, error) {
if params == nil {
params = &HeadBucketInput{}
@@ -465,6 +465,9 @@ func bucketExistsStateRetryable(ctx context.Context, input *HeadBucketInput, out
}
}
if err != nil {
return false, err
}
return true, nil
}
@@ -634,6 +637,9 @@ func bucketNotExistsStateRetryable(ctx context.Context, input *HeadBucketInput,
}
}
if err != nil {
return false, err
}
return true, nil
}
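The retryable-state helpers above back the HeadBucket waiters. For reference, a minimal caller-side sketch with a hypothetical bucket name:

package main

import (
    "context"
    "log"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)

    // Poll HeadBucket until the bucket is reachable or the deadline expires.
    waiter := s3.NewBucketExistsWaiter(client)
    err = waiter.Wait(context.TODO(), &s3.HeadBucketInput{
        Bucket: aws.String("amzn-s3-demo-bucket"), // hypothetical
    }, 2*time.Minute)
    if err != nil {
        log.Fatal("bucket did not become available: ", err)
    }
}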


@@ -120,7 +120,7 @@ import (
// format https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name
// . Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information about
// endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// The following actions are related to HeadObject :
//
@@ -128,14 +128,14 @@ import (
//
// [GetObjectAttributes]
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Server-Side Encryption (Using Customer-Provided Encryption Keys)]: https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
// [GetObjectAttributes]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html
// [Protecting data with server-side encryption]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-serv-side-encryption.html
// [Actions, resources, and condition keys for Amazon S3]: https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html
// [GetObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
// [Common Request Headers]: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
//
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
func (c *Client) HeadObject(ctx context.Context, params *HeadObjectInput, optFns ...func(*Options)) (*HeadObjectOutput, error) {
@@ -381,50 +381,65 @@ type HeadObjectOutput struct {
// Specifies caching behavior along the request/reply chain.
CacheControl *string
// The base64-encoded, 32-bit CRC-32 checksum of the object. This will only be
// present if it was uploaded with the object. When you use an API operation on an
// object that was uploaded using multipart uploads, this value may not be a direct
// checksum value of the full object. Instead, it's a calculation based on the
// checksum values of each individual part. For more information about how
// checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// The Base64 encoded, 32-bit CRC-32 checksum of the object. This checksum is only
// present if the checksum was uploaded with the object. When you use an API
// operation on an object that was uploaded using multipart uploads, this value may
// not be a direct checksum value of the full object. Instead, it's a calculation
// based on the checksum values of each individual part. For more information about
// how checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums
ChecksumCRC32 *string
// The base64-encoded, 32-bit CRC-32C checksum of the object. This will only be
// present if it was uploaded with the object. When you use an API operation on an
// object that was uploaded using multipart uploads, this value may not be a direct
// checksum value of the full object. Instead, it's a calculation based on the
// checksum values of each individual part. For more information about how
// checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// The Base64 encoded, 32-bit CRC-32C checksum of the object. This checksum is
// only present if the checksum was uploaded with the object. When you use an API
// operation on an object that was uploaded using multipart uploads, this value may
// not be a direct checksum value of the full object. Instead, it's a calculation
// based on the checksum values of each individual part. For more information about
// how checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums
ChecksumCRC32C *string
// The base64-encoded, 160-bit SHA-1 digest of the object. This will only be
// present if it was uploaded with the object. When you use the API operation on an
// object that was uploaded using multipart uploads, this value may not be a direct
// checksum value of the full object. Instead, it's a calculation based on the
// checksum values of each individual part. For more information about how
// checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// The Base64 encoded, 64-bit CRC-64NVME checksum of the object. For more
// information, see [Checking object integrity in the Amazon S3 User Guide].
//
// [Checking object integrity in the Amazon S3 User Guide]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumCRC64NVME *string
// The Base64 encoded, 160-bit SHA-1 digest of the object. This will only be
// present if the checksum was uploaded with the object. When you use the API
// operation on an object that was uploaded using multipart uploads, this value may
// not be a direct checksum value of the full object. Instead, it's a calculation
// based on the checksum values of each individual part. For more information about
// how checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums
ChecksumSHA1 *string
// The base64-encoded, 256-bit SHA-256 digest of the object. This will only be
// present if it was uploaded with the object. When you use an API operation on an
// object that was uploaded using multipart uploads, this value may not be a direct
// checksum value of the full object. Instead, it's a calculation based on the
// checksum values of each individual part. For more information about how
// checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// The Base64 encoded, 256-bit SHA-256 digest of the object. This will only be
// present if the checksum was uploaded with the object. When you use an API
// operation on an object that was uploaded using multipart uploads, this value may
// not be a direct checksum value of the full object. Instead, it's a calculation
// based on the checksum values of each individual part. For more information about
// how checksums are calculated with multipart uploads, see [Checking object integrity]in the Amazon S3 User
// Guide.
//
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums
ChecksumSHA256 *string
// The checksum type, which determines how part-level checksums are combined to
// create an object-level checksum for multipart objects. You can use this response
// header to verify that the checksum type that is received is the same checksum
// type that was specified in the CreateMultipartUpload request. For more information,
// see [Checking object integrity in the Amazon S3 User Guide].
//
// [Checking object integrity in the Amazon S3 User Guide]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumType types.ChecksumType
// Specifies presentational information for the object.
ContentDisposition *string
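As with GetObject, the checksum members above are only returned on HeadObjectOutput when checksum mode is enabled. A brief sketch with hypothetical names:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)

    out, err := client.HeadObject(context.TODO(), &s3.HeadObjectInput{
        Bucket:       aws.String("amzn-s3-demo-bucket"), // hypothetical
        Key:          aws.String("backups/archive.tar"), // hypothetical
        ChecksumMode: types.ChecksumModeEnabled,
    })
    if err != nil {
        log.Fatal(err)
    }

    // For multipart uploads, ChecksumType reports whether the stored checksum
    // is a composite of part checksums or covers the full object.
    fmt.Println("checksum type:", out.ChecksumType)
    if out.ChecksumCRC64NVME != nil {
        fmt.Println("CRC-64NVME:", *out.ChecksumCRC64NVME)
    }
}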
@@ -925,6 +940,9 @@ func objectExistsStateRetryable(ctx context.Context, input *HeadObjectInput, out
}
}
if err != nil {
return false, err
}
return true, nil
}
@@ -1094,6 +1112,9 @@ func objectNotExistsStateRetryable(ctx context.Context, input *HeadObjectInput,
}
}
if err != nil {
return false, err
}
return true, nil
}
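And the matching caller-side view of the HeadObject waiters whose retryable checks are updated above; with this change, errors the waiter does not recognize are returned instead of being retried until the deadline. Names are hypothetical.

package main

import (
    "context"
    "log"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)

    // Wait until the key exists, up to five minutes.
    waiter := s3.NewObjectExistsWaiter(client)
    err = waiter.Wait(context.TODO(), &s3.HeadObjectInput{
        Bucket: aws.String("amzn-s3-demo-bucket"),    // hypothetical
        Key:    aws.String("exports/latest.parquet"), // hypothetical
    }, 5*time.Minute)
    if err != nil {
        log.Fatal("object did not appear: ", err)
    }
}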


@@ -23,7 +23,7 @@ import (
// in the format https://s3express-control.region-code.amazonaws.com/bucket-name .
// Virtual-hosted-style requests aren't supported. For more information about
// endpoints in Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more
// information about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// information about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions You must have the s3express:ListAllMyDirectoryBuckets permission in
// an IAM identity-based policy instead of a bucket policy. Cross-account access to
@@ -37,9 +37,9 @@ import (
// The BucketRegion response element is not part of the ListDirectoryBuckets
// Response Syntax.
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html
func (c *Client) ListDirectoryBuckets(ctx context.Context, params *ListDirectoryBucketsInput, optFns ...func(*Options)) (*ListDirectoryBucketsOutput, error) {
if params == nil {


@@ -49,7 +49,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information about
// endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions
//
@@ -100,14 +100,14 @@ import (
// [AbortMultipartUpload]
//
// [Uploading Objects Using Multipart Upload]: https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [ListParts]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html
// [AbortMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
// [UploadPart]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
// [Multipart Upload and Permissions]: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html
// [CompleteMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [CreateMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html
func (c *Client) ListMultipartUploads(ctx context.Context, params *ListMultipartUploadsInput, optFns ...func(*Options)) (*ListMultipartUploadsOutput, error) {
if params == nil {


@@ -32,7 +32,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information
// about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions
//
@@ -79,7 +79,6 @@ import (
// [CreateBucket]
//
// [ListObjects]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Permissions Related to Bucket Subresource Operations]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-actions.html#using-with-s3-actions-related-to-bucket-subresources
// [Listing object keys programmatically]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ListingKeysUsingAPIs.html
// [ListBuckets]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html
@@ -88,7 +87,8 @@ import (
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
// [GetObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
// [CreateBucket]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
func (c *Client) ListObjectsV2(ctx context.Context, params *ListObjectsV2Input, optFns ...func(*Options)) (*ListObjectsV2Output, error) {
if params == nil {
params = &ListObjectsV2Input{}


@@ -36,7 +36,7 @@ import (
// https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name .
// Path-style requests are not supported. For more information about endpoints in
// Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more information about
// endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions
// - General purpose bucket permissions - For information about permissions
@@ -78,7 +78,6 @@ import (
// [ListMultipartUploads]
//
// [Uploading Objects Using Multipart Upload]: https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [AbortMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
// [UploadPart]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html
// [GetObjectAttributes]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html
@@ -86,7 +85,8 @@ import (
// [Multipart Upload and Permissions]: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html
// [CompleteMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
// [CreateMultipartUpload]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
//
// [CreateSession]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html
func (c *Client) ListParts(ctx context.Context, params *ListPartsInput, optFns ...func(*Options)) (*ListPartsOutput, error) {
@@ -245,6 +245,15 @@ type ListPartsOutput struct {
// The algorithm that was used to create a checksum of the object.
ChecksumAlgorithm types.ChecksumAlgorithm
// The checksum type, which determines how part-level checksums are combined to
// create an object-level checksum for multipart objects. You can use this response
// header to verify that the checksum type that is received is the same checksum
// type that was specified in the CreateMultipartUpload request. For more information,
// see [Checking object integrity in the Amazon S3 User Guide].
//
// [Checking object integrity in the Amazon S3 User Guide]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumType types.ChecksumType
// Container element that identifies who initiated the multipart upload. If the
// initiator is an Amazon Web Services account, this element provides the same
// information as the Owner element. If the initiator is an IAM User, this element


@@ -185,6 +185,9 @@ func (c *Client) addOperationPutBucketAccelerateConfigurationMiddlewares(stack *
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addRequestChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpPutBucketAccelerateConfigurationValidationMiddleware(stack); err != nil {
return err
}
@@ -265,6 +268,7 @@ func addPutBucketAccelerateConfigurationInputChecksumMiddlewares(stack *middlewa
return internalChecksum.AddInputMiddleware(stack, internalChecksum.InputMiddlewareOptions{
GetAlgorithm: getPutBucketAccelerateConfigurationRequestAlgorithmMember,
RequireChecksum: false,
RequestChecksumCalculation: options.RequestChecksumCalculation,
EnableTrailingChecksum: false,
EnableComputeSHA256PayloadHash: true,
EnableDecodedContentLengthHeader: true,


@@ -209,7 +209,7 @@ type PutBucketAclInput struct {
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumAlgorithm types.ChecksumAlgorithm
// The base64-encoded 128-bit MD5 digest of the data. This header must be used as
// The Base64 encoded 128-bit MD5 digest of the data. This header must be used as
// a message integrity check to verify that the request body was not corrupted in
// transit. For more information, go to [RFC 1864.]
//
@@ -329,6 +329,9 @@ func (c *Client) addOperationPutBucketAclMiddlewares(stack *middleware.Stack, op
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addRequestChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpPutBucketAclValidationMiddleware(stack); err != nil {
return err
}
@@ -412,6 +415,7 @@ func addPutBucketAclInputChecksumMiddlewares(stack *middleware.Stack, options Op
return internalChecksum.AddInputMiddleware(stack, internalChecksum.InputMiddlewareOptions{
GetAlgorithm: getPutBucketAclRequestAlgorithmMember,
RequireChecksum: true,
RequestChecksumCalculation: options.RequestChecksumCalculation,
EnableTrailingChecksum: false,
EnableComputeSHA256PayloadHash: true,
EnableDecodedContentLengthHeader: true,


@@ -106,7 +106,7 @@ type PutBucketCorsInput struct {
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumAlgorithm types.ChecksumAlgorithm
// The base64-encoded 128-bit MD5 digest of the data. This header must be used as
// The Base64 encoded 128-bit MD5 digest of the data. This header must be used as
// a message integrity check to verify that the request body was not corrupted in
// transit. For more information, go to [RFC 1864.]
//
@@ -207,6 +207,9 @@ func (c *Client) addOperationPutBucketCorsMiddlewares(stack *middleware.Stack, o
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addRequestChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpPutBucketCorsValidationMiddleware(stack); err != nil {
return err
}
@@ -290,6 +293,7 @@ func addPutBucketCorsInputChecksumMiddlewares(stack *middleware.Stack, options O
return internalChecksum.AddInputMiddleware(stack, internalChecksum.InputMiddlewareOptions{
GetAlgorithm: getPutBucketCorsRequestAlgorithmMember,
RequireChecksum: true,
RequestChecksumCalculation: options.RequestChecksumCalculation,
EnableTrailingChecksum: false,
EnableComputeSHA256PayloadHash: true,
EnableDecodedContentLengthHeader: true,


@@ -23,7 +23,7 @@ import (
// in the format https://s3express-control.region-code.amazonaws.com/bucket-name .
// Virtual-hosted-style requests aren't supported. For more information about
// endpoints in Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more
// information about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// information about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// By default, all buckets have a default encryption configuration that uses
// server-side encryption with Amazon S3 managed keys (SSE-S3).
@@ -107,13 +107,13 @@ import (
// [DeleteBucketEncryption]
//
// [Specifying server-side encryption with KMS for new object uploads]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [KMS customer managed key]: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk
// [Amazon S3 Bucket Default Encryption]: https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html
// [CopyObject]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html
// [Managing Access Permissions to Your Amazon S3 Resources]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html
// [Permissions Related to Bucket Operations]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-actions.html#using-with-s3-actions-related-to-bucket-subresources
// [UploadPartCopy]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Amazon Web Services managed key]: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk
// [Authenticating Requests (Amazon Web Services Signature Version 4)]: https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
// [Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html
@@ -124,7 +124,7 @@ import (
// [default bucket encryption]: https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html
// [the import jobs]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-import-job
// [the Copy operation in Batch Operations]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-objects-Batch-Ops
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
func (c *Client) PutBucketEncryption(ctx context.Context, params *PutBucketEncryptionInput, optFns ...func(*Options)) (*PutBucketEncryptionOutput, error) {
if params == nil {
params = &PutBucketEncryptionInput{}
@@ -180,7 +180,7 @@ type PutBucketEncryptionInput struct {
// [Checking object integrity]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
ChecksumAlgorithm types.ChecksumAlgorithm
// The base64-encoded 128-bit MD5 digest of the server-side encryption
// The Base64 encoded 128-bit MD5 digest of the server-side encryption
// configuration.
//
// For requests made using the Amazon Web Services Command Line Interface (CLI) or
@@ -284,6 +284,9 @@ func (c *Client) addOperationPutBucketEncryptionMiddlewares(stack *middleware.St
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addRequestChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpPutBucketEncryptionValidationMiddleware(stack); err != nil {
return err
}
@@ -367,6 +370,7 @@ func addPutBucketEncryptionInputChecksumMiddlewares(stack *middleware.Stack, opt
return internalChecksum.AddInputMiddleware(stack, internalChecksum.InputMiddlewareOptions{
GetAlgorithm: getPutBucketEncryptionRequestAlgorithmMember,
RequireChecksum: true,
RequestChecksumCalculation: options.RequestChecksumCalculation,
EnableTrailingChecksum: false,
EnableComputeSHA256PayloadHash: true,
EnableDecodedContentLengthHeader: true,
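A minimal sketch of the bucket-encryption call described above, setting SSE-S3 (AES-256) as the default; the bucket name is hypothetical and the SDK supplies the required payload checksum for this operation.

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)

    _, err = client.PutBucketEncryption(context.TODO(), &s3.PutBucketEncryptionInput{
        Bucket: aws.String("amzn-s3-demo-bucket"), // hypothetical
        ServerSideEncryptionConfiguration: &types.ServerSideEncryptionConfiguration{
            Rules: []types.ServerSideEncryptionRule{{
                // Default every new object to SSE-S3 unless the request
                // specifies another encryption setting.
                ApplyServerSideEncryptionByDefault: &types.ServerSideEncryptionByDefault{
                    SSEAlgorithm: types.ServerSideEncryptionAes256,
                },
            }},
        },
    })
    if err != nil {
        log.Fatal(err)
    }
}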


@@ -100,7 +100,7 @@ import (
// in the format https://s3express-control.region-code.amazonaws.com/bucket-name
// . Virtual-hosted-style requests aren't supported. For more information about
// endpoints in Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more
// information about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// information about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Directory buckets - The HTTP Host header syntax is
// s3express-control.region.amazonaws.com .
@@ -120,8 +120,8 @@ import (
// [DeleteBucketLifecycle]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html
// [Managing your storage lifecycle]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html
//
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
func (c *Client) PutBucketLifecycleConfiguration(ctx context.Context, params *PutBucketLifecycleConfigurationInput, optFns ...func(*Options)) (*PutBucketLifecycleConfigurationOutput, error) {
if params == nil {
params = &PutBucketLifecycleConfigurationInput{}
@@ -293,6 +293,9 @@ func (c *Client) addOperationPutBucketLifecycleConfigurationMiddlewares(stack *m
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addRequestChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpPutBucketLifecycleConfigurationValidationMiddleware(stack); err != nil {
return err
}
@@ -376,6 +379,7 @@ func addPutBucketLifecycleConfigurationInputChecksumMiddlewares(stack *middlewar
return internalChecksum.AddInputMiddleware(stack, internalChecksum.InputMiddlewareOptions{
GetAlgorithm: getPutBucketLifecycleConfigurationRequestAlgorithmMember,
RequireChecksum: true,
RequestChecksumCalculation: options.RequestChecksumCalculation,
EnableTrailingChecksum: false,
EnableComputeSHA256PayloadHash: true,
EnableDecodedContentLengthHeader: true,


@@ -214,6 +214,9 @@ func (c *Client) addOperationPutBucketLoggingMiddlewares(stack *middleware.Stack
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addRequestChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpPutBucketLoggingValidationMiddleware(stack); err != nil {
return err
}
@@ -297,6 +300,7 @@ func addPutBucketLoggingInputChecksumMiddlewares(stack *middleware.Stack, option
return internalChecksum.AddInputMiddleware(stack, internalChecksum.InputMiddlewareOptions{
GetAlgorithm: getPutBucketLoggingRequestAlgorithmMember,
RequireChecksum: true,
RequestChecksumCalculation: options.RequestChecksumCalculation,
EnableTrailingChecksum: false,
EnableComputeSHA256PayloadHash: true,
EnableDecodedContentLengthHeader: true,


@@ -156,6 +156,9 @@ func (c *Client) addOperationPutBucketOwnershipControlsMiddlewares(stack *middle
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addRequestChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpPutBucketOwnershipControlsValidationMiddleware(stack); err != nil {
return err
}
@@ -229,6 +232,7 @@ func addPutBucketOwnershipControlsInputChecksumMiddlewares(stack *middleware.Sta
return internalChecksum.AddInputMiddleware(stack, internalChecksum.InputMiddlewareOptions{
GetAlgorithm: nil,
RequireChecksum: true,
RequestChecksumCalculation: options.RequestChecksumCalculation,
EnableTrailingChecksum: false,
EnableComputeSHA256PayloadHash: true,
EnableDecodedContentLengthHeader: true,


@@ -22,7 +22,7 @@ import (
// in the format https://s3express-control.region-code.amazonaws.com/bucket-name .
// Virtual-hosted-style requests aren't supported. For more information about
// endpoints in Availability Zones, see [Regional and Zonal endpoints for directory buckets in Availability Zones]in the Amazon S3 User Guide. For more
// information about endpoints in Local Zones, see [Concepts for directory buckets in Local Zones]in the Amazon S3 User Guide.
// information about endpoints in Local Zones, see [Available Local Zone for directory buckets]in the Amazon S3 User Guide.
//
// Permissions If you are using an identity other than the root user of the Amazon
// Web Services account that owns the bucket, the calling identity must both have
@@ -68,12 +68,12 @@ import (
// [DeleteBucket]
//
// [Bucket policy examples]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html
// [Concepts for directory buckets in Local Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Example bucket policies for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam-example-bucket-policies.html
// [DeleteBucket]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html
// [Using Bucket Policies and User Policies]: https://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html
// [CreateBucket]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html
// [Regional and Zonal endpoints for directory buckets in Availability Zones]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Regions-and-Zones.html
// [Available Local Zone for directory buckets]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html
// [Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html
func (c *Client) PutBucketPolicy(ctx context.Context, params *PutBucketPolicyInput, optFns ...func(*Options)) (*PutBucketPolicyOutput, error) {
if params == nil {
@@ -125,21 +125,22 @@ type PutBucketPolicyInput struct {
// For the x-amz-checksum-algorithm header, replace algorithm with the
// supported algorithm from the following list:
//
// - CRC32
// - CRC-32
//
// - CRC32C
// - CRC-32C
//
// - SHA1
// - CRC-64NVME
//
// - SHA256
// - SHA-1
//
// - SHA-256
//
// For more information, see [Checking object integrity] in the Amazon S3 User Guide.
//
// If the individual checksum value you provide through x-amz-checksum-algorithm
// doesn't match the checksum algorithm you set through
// x-amz-sdk-checksum-algorithm , Amazon S3 ignores any provided ChecksumAlgorithm
// parameter and uses the checksum algorithm that matches the provided value in
// x-amz-checksum-algorithm .
// x-amz-sdk-checksum-algorithm , Amazon S3 fails the request with a BadDigest
// error.
//
// For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the
// default checksum algorithm that's used for performance.
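A compact sketch of PutBucketPolicy with an explicit checksum algorithm; bucket name and account ID are hypothetical. Supplying a precomputed x-amz-checksum-* value that does not correspond to this algorithm would fail with BadDigest, as described above.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)

    bucket := "amzn-s3-demo-bucket" // hypothetical
    policy := fmt.Sprintf(`{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowAccountRead",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::%s/*"
  }]
}`, bucket)

    // The SDK computes the SHA-256 checksum of the policy document and sends
    // it with the request along with x-amz-sdk-checksum-algorithm.
    _, err = client.PutBucketPolicy(context.TODO(), &s3.PutBucketPolicyInput{
        Bucket:            aws.String(bucket),
        Policy:            aws.String(policy),
        ChecksumAlgorithm: types.ChecksumAlgorithmSha256,
    })
    if err != nil {
        log.Fatal(err)
    }
}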
@@ -256,6 +257,9 @@ func (c *Client) addOperationPutBucketPolicyMiddlewares(stack *middleware.Stack,
if err = addIsExpressUserAgent(stack); err != nil {
return err
}
if err = addRequestChecksumMetricsTracking(stack, options); err != nil {
return err
}
if err = addOpPutBucketPolicyValidationMiddleware(stack); err != nil {
return err
}
@@ -339,6 +343,7 @@ func addPutBucketPolicyInputChecksumMiddlewares(stack *middleware.Stack, options
return internalChecksum.AddInputMiddleware(stack, internalChecksum.InputMiddlewareOptions{
GetAlgorithm: getPutBucketPolicyRequestAlgorithmMember,
RequireChecksum: true,
RequestChecksumCalculation: options.RequestChecksumCalculation,
EnableTrailingChecksum: false,
EnableComputeSHA256PayloadHash: true,
EnableDecodedContentLengthHeader: true,

Some files were not shown because too many files have changed in this diff.