Compare commits


23 Commits

Author SHA1 Message Date
Georges Chaudy
fc8376bff9 Refactor: Remove fallback logic from Check and Compile methods in authzLimitedClient 2026-01-06 17:27:28 +01:00
Matthew Jacobson
5eb0e6f432 Alerting: Update RuleGroupConfig definitions with missing fields (#115850)
* Alerting: Update RuleGroupConfig definitions with missing fields

This update adds previously missing fields to the `RuleGroupConfig` structs to
ensure compatibility with external Prometheus-like rulers.

Includes:
- `labels`: per https://github.com/prometheus/prometheus/pull/11474
- `remote_write`: per https://github.com/grafana/mimir/blob/56f33fed6254fee5a53bde1eab36c604863e3d5f/pkg/mimirtool/rules/rwrulefmt/rulefmt.go#L16

Note: This does not add full support in Grafana; it only allows these fields to
pass through the alerting proxy without causing unmarshal errors when using
external rulers.

* Update OpenAPI spec
2026-01-06 11:05:01 -05:00
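For reference, a rough sketch of the shape these additions take. This is illustrative only, not Grafana's actual definition; the field and type names follow the Prometheus/Mimir rulefmt conventions the commit links to.

    // Illustrative sketch; the real definitions live in Grafana's alerting code.
    package rulefmt

    // RemoteWriteConfig mirrors Mimir's rwrulefmt remote-write entry (assumed shape).
    type RemoteWriteConfig struct {
        URL string `yaml:"url" json:"url"`
    }

    type RuleGroupConfig struct {
        Name     string `yaml:"name" json:"name"`
        Interval string `yaml:"interval,omitempty" json:"interval,omitempty"`
        // Group-level labels applied to every rule in the group
        // (see prometheus/prometheus#11474).
        Labels map[string]string `yaml:"labels,omitempty" json:"labels,omitempty"`
        // Per-group remote-write targets, as in Mimir's rwrulefmt.
        RWConfigs []RemoteWriteConfig `yaml:"remote_write,omitempty" json:"remote_write,omitempty"`
    }

With these fields present (even if otherwise ignored), rule group payloads from external rulers unmarshal cleanly instead of failing in the alerting proxy.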
Larissa Wandzura
1465b44d5a Docs: Created a troubleshooting guide for CloudWatch (#115603)
* created new troubleshooting doc

* fixed dropdown

* fixed another linter issue

* ran prettier

* updates based on feedback
2026-01-06 09:56:08 -06:00
Gilles De Mey
ee62baea2c Alerting: Update alerting API client package paths (#115883) 2026-01-06 16:40:30 +01:00
Isabel Matwawana
1f20ca5a3d Docs: Add private preview notice for restoring dashboards (#115856) 2026-01-06 09:57:12 -05:00
Ryan McKinley
97b241d4ab Stars: Return an error when trying to save non-dashboard stars with legacy storage (#115761) 2026-01-06 06:53:38 -08:00
Will Browne
466a27deff Plugins: Remove pkg/infra/log as dependency (#115832)
* remove pkg/infra/log as dependency

* add pluginslog

* add slog caching
2026-01-06 14:44:49 +00:00
Matt Cowley
264131a390 OpenFeature: Add OFREP provider type (#115857)
Add new OFREP provider for OpenFeature
2026-01-06 14:39:07 +00:00
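OFREP (the OpenFeature Remote Evaluation Protocol) defines a vendor-neutral HTTP API for flag evaluation. Below is a minimal sketch of the single-flag call such a provider builds on; the endpoint path follows the published OFREP spec, while the request/response types, base URL, and flag key are illustrative.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    type evalRequest struct {
        // Evaluation context, e.g. a targeting key.
        Context map[string]any `json:"context"`
    }

    type evalResponse struct {
        Key     string `json:"key"`
        Value   any    `json:"value"`
        Reason  string `json:"reason"`
        Variant string `json:"variant"`
    }

    // evaluateFlag POSTs to the OFREP single-flag endpoint and decodes the result.
    func evaluateFlag(baseURL, key string, evalCtx map[string]any) (*evalResponse, error) {
        body, err := json.Marshal(evalRequest{Context: evalCtx})
        if err != nil {
            return nil, err
        }
        resp, err := http.Post(baseURL+"/ofrep/v1/evaluate/flags/"+key, "application/json", bytes.NewReader(body))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var out evalResponse
        err = json.NewDecoder(resp.Body).Decode(&out)
        return &out, err
    }

    func main() {
        res, err := evaluateFlag("http://localhost:8016", "my-flag", map[string]any{"targetingKey": "user-1"})
        if err != nil {
            fmt.Println("evaluation failed:", err)
            return
        }
        fmt.Printf("%s = %v (reason: %s)\n", res.Key, res.Value, res.Reason)
    }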
Bruno
7698970f22 Secrets: changes to allow a 3rd party keeper / secret references (#115156)
* Secrets: changes to allow a 3rd party keeper / secret references

* fix test

* make gofmt

* lint

* fix tests

* assign aws secrets manager to @grafana/grafana-operator-experience-squad

* rename Keeper.Reference to Keeper.RetrieveReference

* rename ModelSecretsManager to ModelAWSSecretsManager

* validator: ensure that only one of keeper.Spec.Aws.AccessKey or keeper.Spec.Aws.AssumeRole is set

* move secrets manager dep / go mod tidy

* move secrets manager dep

* keeper validator: move 3rd party secret stores validation to their own functions

* add github.com/aws/aws-sdk-go-v2/service/secretsmanager pkg/extensions/enterprise_imports

* make update-workspace

* undo go.mod changes in /apps

* make update-workspace

* fix test

* add github.com/aws/aws-sdk-go-v2/service/secretsmanager to enterprise_imports

* make update-workspace

* gcworker: handle refs

* make update-workspace

* create toggle: FeatureStageExperimental

* allow features.IsEnabled for now

* format
2026-01-06 11:30:04 -03:00
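The "validator" bullet above describes a mutual-exclusion check between the two AWS auth methods. A minimal sketch of that check, using pared-down stand-ins for the KeeperAWSConfig types shown in the diffs further down; the function body is illustrative, not the actual Grafana implementation.

    package main

    import (
        "errors"
        "fmt"
    )

    type AWSAccessKey struct{ AccessKeyID, SecretAccessKey string }
    type AWSAssumeRole struct{ AssumeRoleArn, ExternalID string }

    type AWSConfig struct {
        Region     string
        AccessKey  *AWSAccessKey
        AssumeRole *AWSAssumeRole
    }

    // validateAWSConfig enforces that exactly one auth method is configured.
    func validateAWSConfig(cfg *AWSConfig) error {
        if cfg.AccessKey != nil && cfg.AssumeRole != nil {
            return errors.New("only one of accessKey or assumeRole may be set")
        }
        if cfg.AccessKey == nil && cfg.AssumeRole == nil {
            return errors.New("one of accessKey or assumeRole must be set")
        }
        return nil
    }

    func main() {
        err := validateAWSConfig(&AWSConfig{
            Region:     "us-east-1",
            AccessKey:  &AWSAccessKey{AccessKeyID: "id", SecretAccessKey: "secret"},
            AssumeRole: &AWSAssumeRole{AssumeRoleArn: "arn:aws:iam::123:role/x"},
        })
        fmt.Println(err) // only one of accessKey or assumeRole may be set
    }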
Gilles De Mey
bbaf91ed9c Alerting: Move alerting RTKQ client to api-clients package (#114546) 2026-01-06 14:25:36 +00:00
Ayush Kaithwas
92464b2dc8 Dynamic Dashboards: Fix Content outline not being scrollable (#115827)
Enhancement: Add ScrollContainer to DashboardOutline for improved scrolling experience
2026-01-06 14:25:57 +01:00
Joe Elliott
5fe192a893 Tempo: Fix multiple streaming TraceQL metrics queries being conflated into one (#114360)
* Correctly stream multiple metrics series

Signed-off-by: Joe Elliott <number101010@gmail.com>

* cleanup

Signed-off-by: Joe Elliott <number101010@gmail.com>

* prettier fix

---------

Signed-off-by: Joe Elliott <number101010@gmail.com>
Co-authored-by: Andre Pereira <adrapereira@gmail.com>
Co-authored-by: Zoltán Bedi <zoltan.bedi@gmail.com>
2026-01-06 12:47:52 +00:00
Alexander Akhmetov
380154707b Alerting: Fix hyphen escaping in rule labels filter (#115869) 2026-01-06 12:39:28 +01:00
Peter Nguyen
217427e072 Loki Language Provider: Add missing interpolation to fetchLabelsByLabelsEndpoint (#114608)
* Plugins: Implement bug fix for loki label selectors w/ variable interpolation

* Chore: Add test to ensure result is interpolated

---------

Co-authored-by: Zoltán Bedi <zoltan.bedi@gmail.com>
2026-01-06 10:29:51 +00:00
grafana-pr-automation[bot]
585d24dafa I18n: Download translations from Crowdin (#115860)
New Crowdin translations by GitHub Action

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-06 10:01:38 +00:00
Josh Hunt
fccece3ca0 Refactor: Remove jQuery from AppWrapper (#115842) 2026-01-06 09:58:42 +00:00
Juan Cabanas
d44cab9eaf DashboardLibrary: Add validations to visualize community dashboards (#114562)
* dashboard library check added

* community dashboard section tests in progress

* tests added

* translations added

* pagination removed

* total pages removed

* test updated. pagination removed

* filters applied

* tracking event removed to be created in another pr

* slug added so url is correctly generated

* ui fix

* improvements after review

* improvements after review

* more tests added. new logic created

* fix

* changes applied

* tests removed. pattern updated

* preset of 6 elements applied

* Improve code comments and adjust variable name based on PR feedback

* Fix unit test and add extra case for regex pattern

* Fix interaction event: we were missing contentKind on the BasicProvisioned flow, and datasource types were not being sent

---------

Co-authored-by: nmarrs <nathanielmarrs@gmail.com>
Co-authored-by: alexandra vargas <alexa1866@gmail.com>
2026-01-06 10:38:15 +01:00
Saurabh Yadav
3d3b4dd213 Clean up packages/grafana-prometheus/src/dashboards (#115861)
* remove: Dashboard JSON files

* removed: dashboards from packages/grafana-prometheus/src/dashboards
2026-01-06 09:26:04 +00:00
Larissa Wandzura
2947d41ea8 Docs: Fixed broken links for Cloudwatch (#115848)
* updates broken links and aliases

* fixed query editor links
2026-01-05 20:56:50 +00:00
Stephanie Hingtgen
0acb030f46 Revert: OSS Seeding (115729) (#115839) 2026-01-05 12:33:55 -06:00
Stephanie Hingtgen
658a1c8228 Dashboards: Allow editing provisioned dashboards if AllowUIUpdates is set (#115804) 2026-01-05 11:46:14 -06:00
Will Browne
618316a2f7 Revert "App Plugins: Allow to define experimental pages" (#115841)
Revert "App Plugins: Allow to define experimental pages (#114232)"

This reverts commit e1a2f178e7.
2026-01-05 17:04:07 +00:00
vesalaakso-oura
a9c2117aa7 Transformers: Add smoothing transformer (#111077)
* Transformers: Add smoothing transformer

Added a smoothing transformer to help clean up noisy time series data.
It uses the ASAP algorithm to pick the most important data points while
keeping the overall shape and trends intact.

The transformer always keeps the first and last points so you get the
complete time range. I also added a test for it.

* Change category

Change category from Reformat to CalculateNewFields

* Remove first/last point preservation

* Fix operator recreation

* Simplify ASAP code

Include performance optimization as well

* Refactor interpolateFromSmoothedCurve

Break function into smaller focused functions and lift functions to the
top level

* Add isApplicable Check

Make sure the transformer is applicable for timeseries data

* Add tests for isApplicable check

* UI/UX improvements: Display effective resolution when limited by data points

Show "Effective: X" indicator when resolution is capped by the 2x data
points multiplier. Includes tooltip explaining the limit.

Memoizes calculation to prevent unnecessary recalculation on re-renders.

Example: With 72 data points and resolution set to 150, displays
"Effective: 144" since the limit is 72 x 2 = 144.

Plus added tests

* Improve discoverability by adding tags

* Preserve Original Data

Let's preserve the original data as well; it makes the UX much better.
Changed from appending "(smoothed)" to frame names to using a "Smoothed" frame name. This matches the pattern used by other transformers (e.g., regression).
Updated tests accordingly
Updated tooltip note

* Add asap tests

Basic functionality:
* returns valid DataPoint objects
* Maintain x-axis ordering

Edge cases:
* Empty array
* single data point
* filter NaN values
* all NaN values
* sort unsorted data
* negative values

* Update dark and light images

* Clear state cache

* Add feature toggle

* Conditionally add new transformation to the registry

* chore: update and regenerate feature toggles

* chore: update yarn.lock

* chore: fix transformers and imports
2026-01-05 17:53:45 +01:00
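The effective-resolution rule described in this commit is simple arithmetic: the requested resolution is clamped to twice the number of data points. A small sketch (the function name is hypothetical; the real transformer lives in Grafana's frontend):

    package main

    import "fmt"

    // effectiveResolution clamps the requested resolution to the 2x data
    // points multiplier described in the commit above.
    func effectiveResolution(requested, dataPoints int) int {
        limit := 2 * dataPoints
        if requested > limit {
            return limit
        }
        return requested
    }

    func main() {
        fmt.Println(effectiveResolution(150, 72)) // 144, matching the commit's example
    }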
195 changed files with 6295 additions and 8451 deletions

View File

@@ -68,14 +68,14 @@ require (
 	github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
 	github.com/at-wat/mqtt-go v0.19.6 // indirect
 	github.com/aws/aws-sdk-go v1.55.7 // indirect
-	github.com/aws/aws-sdk-go-v2 v1.39.1 // indirect
+	github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
 	github.com/aws/aws-sdk-go-v2/credentials v1.18.14 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
 	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 // indirect
 	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8 // indirect
 	github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 // indirect
-	github.com/aws/smithy-go v1.23.1 // indirect
+	github.com/aws/smithy-go v1.23.2 // indirect
 	github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df // indirect
 	github.com/benbjohnson/clock v1.3.5 // indirect
 	github.com/beorn7/perks v1.0.1 // indirect

View File

@@ -173,8 +173,8 @@ github.com/aws/aws-sdk-go v1.17.7/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN
 github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
 github.com/aws/aws-sdk-go v1.55.7 h1:UJrkFq7es5CShfBwlWAC8DA077vp8PyVbQd3lqLiztE=
 github.com/aws/aws-sdk-go v1.55.7/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
-github.com/aws/aws-sdk-go-v2 v1.39.1 h1:fWZhGAwVRK/fAN2tmt7ilH4PPAE11rDj7HytrmbZ2FE=
-github.com/aws/aws-sdk-go-v2 v1.39.1/go.mod h1:sDioUELIUO9Znk23YVmIk86/9DOpkbyyVb1i/gUNFXY=
+github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
+github.com/aws/aws-sdk-go-v2 v1.40.0/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
 github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 h1:12SpdwU8Djs+YGklkinSSlcrPyj3H4VifVsKf78KbwA=
 github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11/go.mod h1:dd+Lkp6YmMryke+qxW/VnKyhMBDTYP41Q2Bb+6gNZgY=
 github.com/aws/aws-sdk-go-v2/config v1.31.10 h1:7LllDZAegXU3yk41mwM6KcPu0wmjKGQB1bg99bNdQm4=
@@ -185,10 +185,10 @@ github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8 h1:gLD09eaJUdiszm7vd1btiQU
 github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8/go.mod h1:4RW3oMPt1POR74qVOC4SbubxAwdP4pCT0nSw3jycOU4=
 github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84 h1:cTXRdLkpBanlDwISl+5chq5ui1d1YWg4PWMR9c3kXyw=
 github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84/go.mod h1:kwSy5X7tfIHN39uucmjQVs2LvDdXEjQucgQQEqCggEo=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 h1:6bgAZgRyT4RoFWhxS+aoGMFyE0cD1bSzFnEEi4bFPGI=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8/go.mod h1:KcGkXFVU8U28qS4KvLEcPxytPZPBcRawaH2Pf/0jptE=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 h1:HhJYoES3zOz34yWEpGENqJvRVPqpmJyR3+AFg9ybhdY=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8/go.mod h1:JnA+hPWeYAVbDssp83tv+ysAG8lTfLVXvSsyKg/7xNA=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 h1:PZHqQACxYb8mYgms4RZbhZG0a7dPW06xOjmaH0EJC/I=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14/go.mod h1:VymhrMJUWs69D8u0/lZ7jSB6WgaG/NqHi3gX0aYf6U0=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 h1:bOS19y6zlJwagBfHxs0ESzr1XCOU2KXJCWcq3E2vfjY=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14/go.mod h1:1ipeGBMAxZ0xcTm6y6paC2C/J6f6OO7LBODV9afuAyM=
 github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
 github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
 github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 h1:GMYy2EOWfzdP3wfVAGXBNKY5vK4K8vMET4sYOYltmqs=
@@ -209,8 +209,8 @@ github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0 h1:I7ghctfGXrscr7r1Ga/mDqSJ
 github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0/go.mod h1:Zo9id81XP6jbayIFWNuDpA6lMBWhsVy+3ou2jLa4JnA=
 github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 h1:+LVB0xBqEgjQoqr9bGZbRzvg212B0f17JdflleJRNR4=
 github.com/aws/aws-sdk-go-v2/service/sts v1.38.5/go.mod h1:xoaxeqnnUaZjPjaICgIy5B+MHCSb/ZSOn4MvkFNOUA0=
-github.com/aws/smithy-go v1.23.1 h1:sLvcH6dfAFwGkHLZ7dGiYF7aK6mg4CgKA/iDKjLDt9M=
-github.com/aws/smithy-go v1.23.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
+github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
+github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
 github.com/bahlo/generic-list-go v0.2.0 h1:5sz/EEAK+ls5wF+NeqDpk5+iNdMDXrh3z3nPnH1Wvgk=
 github.com/bahlo/generic-list-go v0.2.0/go.mod h1:2KvAjgMlE5NNynlg/5iLrrCCZ2+5xWbdbCW3pNTGyYg=
 github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df h1:GSoSVRLoBaFpOOds6QyY1L8AX7uoY+Ln3BHc22W40X0=

View File

@@ -106,14 +106,14 @@ require (
 	github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
 	github.com/at-wat/mqtt-go v0.19.6 // indirect
 	github.com/aws/aws-sdk-go v1.55.7 // indirect
-	github.com/aws/aws-sdk-go-v2 v1.39.1 // indirect
+	github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
 	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 // indirect
 	github.com/aws/aws-sdk-go-v2/config v1.31.10 // indirect
 	github.com/aws/aws-sdk-go-v2/credentials v1.18.14 // indirect
 	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8 // indirect
 	github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
 	github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
 	github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 // indirect
 	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 // indirect
@@ -124,7 +124,7 @@ require (
 	github.com/aws/aws-sdk-go-v2/service/sso v1.29.4 // indirect
 	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0 // indirect
 	github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 // indirect
-	github.com/aws/smithy-go v1.23.1 // indirect
+	github.com/aws/smithy-go v1.23.2 // indirect
 	github.com/bahlo/generic-list-go v0.2.0 // indirect
 	github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df // indirect
 	github.com/benbjohnson/clock v1.3.5 // indirect

View File

@@ -238,8 +238,8 @@ github.com/aws/aws-sdk-go v1.17.7/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN
 github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
 github.com/aws/aws-sdk-go v1.55.7 h1:UJrkFq7es5CShfBwlWAC8DA077vp8PyVbQd3lqLiztE=
 github.com/aws/aws-sdk-go v1.55.7/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
-github.com/aws/aws-sdk-go-v2 v1.39.1 h1:fWZhGAwVRK/fAN2tmt7ilH4PPAE11rDj7HytrmbZ2FE=
-github.com/aws/aws-sdk-go-v2 v1.39.1/go.mod h1:sDioUELIUO9Znk23YVmIk86/9DOpkbyyVb1i/gUNFXY=
+github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
+github.com/aws/aws-sdk-go-v2 v1.40.0/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
 github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 h1:12SpdwU8Djs+YGklkinSSlcrPyj3H4VifVsKf78KbwA=
 github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11/go.mod h1:dd+Lkp6YmMryke+qxW/VnKyhMBDTYP41Q2Bb+6gNZgY=
 github.com/aws/aws-sdk-go-v2/config v1.31.10 h1:7LllDZAegXU3yk41mwM6KcPu0wmjKGQB1bg99bNdQm4=
@@ -250,10 +250,10 @@ github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8 h1:gLD09eaJUdiszm7vd1btiQU
 github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8/go.mod h1:4RW3oMPt1POR74qVOC4SbubxAwdP4pCT0nSw3jycOU4=
 github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84 h1:cTXRdLkpBanlDwISl+5chq5ui1d1YWg4PWMR9c3kXyw=
 github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84/go.mod h1:kwSy5X7tfIHN39uucmjQVs2LvDdXEjQucgQQEqCggEo=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 h1:6bgAZgRyT4RoFWhxS+aoGMFyE0cD1bSzFnEEi4bFPGI=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8/go.mod h1:KcGkXFVU8U28qS4KvLEcPxytPZPBcRawaH2Pf/0jptE=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 h1:HhJYoES3zOz34yWEpGENqJvRVPqpmJyR3+AFg9ybhdY=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8/go.mod h1:JnA+hPWeYAVbDssp83tv+ysAG8lTfLVXvSsyKg/7xNA=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 h1:PZHqQACxYb8mYgms4RZbhZG0a7dPW06xOjmaH0EJC/I=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14/go.mod h1:VymhrMJUWs69D8u0/lZ7jSB6WgaG/NqHi3gX0aYf6U0=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 h1:bOS19y6zlJwagBfHxs0ESzr1XCOU2KXJCWcq3E2vfjY=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14/go.mod h1:1ipeGBMAxZ0xcTm6y6paC2C/J6f6OO7LBODV9afuAyM=
 github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
 github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
 github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 h1:GMYy2EOWfzdP3wfVAGXBNKY5vK4K8vMET4sYOYltmqs=
@@ -280,14 +280,16 @@ github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.6 h1:Pwbxovp
 github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.6/go.mod h1:Z4xLt5mXspLKjBV92i165wAJ/3T6TIv4n7RtIS8pWV0=
 github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0 h1:0reDqfEN+tB+sozj2r92Bep8MEwBZgtAXTND1Kk9OXg=
 github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0/go.mod h1:kUklwasNoCn5YpyAqC/97r6dzTA1SRKJfKq16SXeoDU=
+github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.40.1 h1:w6a0H79HrHf3lr+zrw+pSzR5B+caiQFAKiNHlrUcnoc=
+github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.40.1/go.mod h1:c6Vg0BRiU7v0MVhHupw90RyL120QBwAMLbDCzptGeMk=
 github.com/aws/aws-sdk-go-v2/service/sso v1.29.4 h1:FTdEN9dtWPB0EOURNtDPmwGp6GGvMqRJCAihkSl/1No=
 github.com/aws/aws-sdk-go-v2/service/sso v1.29.4/go.mod h1:mYubxV9Ff42fZH4kexj43gFPhgc/LyC7KqvUKt1watc=
 github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0 h1:I7ghctfGXrscr7r1Ga/mDqSJKm7Fkpl5Mwq79Z+rZqU=
 github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0/go.mod h1:Zo9id81XP6jbayIFWNuDpA6lMBWhsVy+3ou2jLa4JnA=
 github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 h1:+LVB0xBqEgjQoqr9bGZbRzvg212B0f17JdflleJRNR4=
 github.com/aws/aws-sdk-go-v2/service/sts v1.38.5/go.mod h1:xoaxeqnnUaZjPjaICgIy5B+MHCSb/ZSOn4MvkFNOUA0=
-github.com/aws/smithy-go v1.23.1 h1:sLvcH6dfAFwGkHLZ7dGiYF7aK6mg4CgKA/iDKjLDt9M=
-github.com/aws/smithy-go v1.23.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
+github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
+github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
 github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27 h1:60m4tnanN1ctzIu4V3bfCNJ39BiOPSm1gHFlFjTkRE0=
 github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27/go.mod h1:k08r+Yj1PRAmuayFiRK6MYuR5Ve4IuZtTfxErMIh0+c=
 github.com/bahlo/generic-list-go v0.2.0 h1:5sz/EEAK+ls5wF+NeqDpk5+iNdMDXrh3z3nPnH1Wvgk=
@@ -2321,6 +2323,8 @@ modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
 modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
 modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
 modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
+pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
+pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
 rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
 rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
 rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=

View File

@@ -30,14 +30,14 @@ require (
 	github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
 	github.com/apache/arrow-go/v18 v18.4.1 // indirect
 	github.com/armon/go-metrics v0.4.1 // indirect
-	github.com/aws/aws-sdk-go-v2 v1.39.1 // indirect
+	github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
 	github.com/aws/aws-sdk-go-v2/credentials v1.18.14 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
 	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 // indirect
 	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8 // indirect
 	github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 // indirect
-	github.com/aws/smithy-go v1.23.1 // indirect
+	github.com/aws/smithy-go v1.23.2 // indirect
 	github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df // indirect
 	github.com/beorn7/perks v1.0.1 // indirect
 	github.com/blang/semver v3.5.1+incompatible // indirect

View File

@@ -28,22 +28,22 @@ github.com/apache/thrift v0.22.0 h1:r7mTJdj51TMDe6RtcmNdQxgn9XcyfGDOzegMDRg47uc=
 github.com/apache/thrift v0.22.0/go.mod h1:1e7J/O1Ae6ZQMTYdy9xa3w9k+XHWPfRvdPyJeynQ+/g=
 github.com/armon/go-metrics v0.4.1 h1:hR91U9KYmb6bLBYLQjyM+3j+rcd/UhE+G78SFnF8gJA=
 github.com/armon/go-metrics v0.4.1/go.mod h1:E6amYzXo6aW1tqzoZGT755KkbgrJsSdpwZ+3JqfkOG4=
-github.com/aws/aws-sdk-go-v2 v1.39.1 h1:fWZhGAwVRK/fAN2tmt7ilH4PPAE11rDj7HytrmbZ2FE=
-github.com/aws/aws-sdk-go-v2 v1.39.1/go.mod h1:sDioUELIUO9Znk23YVmIk86/9DOpkbyyVb1i/gUNFXY=
+github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
+github.com/aws/aws-sdk-go-v2 v1.40.0/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
 github.com/aws/aws-sdk-go-v2/credentials v1.18.14 h1:TxkI7QI+sFkTItN/6cJuMZEIVMFXeu2dI1ZffkXngKI=
 github.com/aws/aws-sdk-go-v2/credentials v1.18.14/go.mod h1:12x4Uw/vijC11XkctTjy92TNCQ+UnNJkT7fzX0Yd93E=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 h1:6bgAZgRyT4RoFWhxS+aoGMFyE0cD1bSzFnEEi4bFPGI=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8/go.mod h1:KcGkXFVU8U28qS4KvLEcPxytPZPBcRawaH2Pf/0jptE=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 h1:HhJYoES3zOz34yWEpGENqJvRVPqpmJyR3+AFg9ybhdY=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8/go.mod h1:JnA+hPWeYAVbDssp83tv+ysAG8lTfLVXvSsyKg/7xNA=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 h1:PZHqQACxYb8mYgms4RZbhZG0a7dPW06xOjmaH0EJC/I=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14/go.mod h1:VymhrMJUWs69D8u0/lZ7jSB6WgaG/NqHi3gX0aYf6U0=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 h1:bOS19y6zlJwagBfHxs0ESzr1XCOU2KXJCWcq3E2vfjY=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14/go.mod h1:1ipeGBMAxZ0xcTm6y6paC2C/J6f6OO7LBODV9afuAyM=
 github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 h1:oegbebPEMA/1Jny7kvwejowCaHz1FWZAQ94WXFNCyTM=
 github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1/go.mod h1:kemo5Myr9ac0U9JfSjMo9yHLtw+pECEHsFtJ9tqCEI8=
 github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8 h1:M6JI2aGFEzYxsF6CXIuRBnkge9Wf9a2xU39rNeXgu10=
 github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8/go.mod h1:Fw+MyTwlwjFsSTE31mH211Np+CUslml8mzc0AFEG09s=
 github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 h1:+LVB0xBqEgjQoqr9bGZbRzvg212B0f17JdflleJRNR4=
 github.com/aws/aws-sdk-go-v2/service/sts v1.38.5/go.mod h1:xoaxeqnnUaZjPjaICgIy5B+MHCSb/ZSOn4MvkFNOUA0=
-github.com/aws/smithy-go v1.23.1 h1:sLvcH6dfAFwGkHLZ7dGiYF7aK6mg4CgKA/iDKjLDt9M=
-github.com/aws/smithy-go v1.23.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
+github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
+github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
 github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df h1:GSoSVRLoBaFpOOds6QyY1L8AX7uoY+Ln3BHc22W40X0=
 github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df/go.mod h1:hiVxq5OP2bUGBRNS3Z/bt/reCLFNbdcST6gISi1fiOM=
 github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=

View File

@@ -30,11 +30,22 @@ KeeperSpec: {
 }
 
 #AWSConfig: {
-	accessKeyID:     #CredentialValue
-	secretAccessKey: #CredentialValue
+	region:      string
+	accessKey?:  #AWSAccessKey
+	assumeRole?: #AWSAssumeRole
 	kmsKeyID?: string
 }
 
+#AWSAccessKey: {
+	accessKeyID:     #CredentialValue
+	secretAccessKey: #CredentialValue
+}
+
+#AWSAssumeRole: {
+	assumeRoleArn: string
+	externalID:    string
+}
+
 #AzureConfig: {
 	keyVaultName: string
 	tenantID:     string
View File

@@ -4,14 +4,26 @@ package v1beta1
 
 // +k8s:openapi-gen=true
 type KeeperAWSConfig struct {
-	AccessKeyID     KeeperCredentialValue `json:"accessKeyID"`
-	SecretAccessKey KeeperCredentialValue `json:"secretAccessKey"`
-	KmsKeyID        *string               `json:"kmsKeyID,omitempty"`
+	Region     string               `json:"region"`
+	AccessKey  *KeeperAWSAccessKey  `json:"accessKey,omitempty"`
+	AssumeRole *KeeperAWSAssumeRole `json:"assumeRole,omitempty"`
+	KmsKeyID   *string              `json:"kmsKeyID,omitempty"`
 }
 
 // NewKeeperAWSConfig creates a new KeeperAWSConfig object.
 func NewKeeperAWSConfig() *KeeperAWSConfig {
-	return &KeeperAWSConfig{
+	return &KeeperAWSConfig{}
+}
+
+// +k8s:openapi-gen=true
+type KeeperAWSAccessKey struct {
+	AccessKeyID     KeeperCredentialValue `json:"accessKeyID"`
+	SecretAccessKey KeeperCredentialValue `json:"secretAccessKey"`
+}
+
+// NewKeeperAWSAccessKey creates a new KeeperAWSAccessKey object.
+func NewKeeperAWSAccessKey() *KeeperAWSAccessKey {
+	return &KeeperAWSAccessKey{
 		AccessKeyID:     *NewKeeperCredentialValue(),
 		SecretAccessKey: *NewKeeperCredentialValue(),
 	}
@@ -36,6 +48,17 @@ func NewKeeperCredentialValue() *KeeperCredentialValue {
 	return &KeeperCredentialValue{}
 }
 
+// +k8s:openapi-gen=true
+type KeeperAWSAssumeRole struct {
+	AssumeRoleArn string `json:"assumeRoleArn"`
+	ExternalID    string `json:"externalID"`
+}
+
+// NewKeeperAWSAssumeRole creates a new KeeperAWSAssumeRole object.
+func NewKeeperAWSAssumeRole() *KeeperAWSAssumeRole {
+	return &KeeperAWSAssumeRole{}
+}
+
 // +k8s:openapi-gen=true
 type KeeperAzureConfig struct {
 	KeyVaultName string `json:"keyVaultName"`
View File

@@ -12,6 +12,7 @@ const (
 	AzureKeeperType     KeeperType = "azure"
 	GCPKeeperType       KeeperType = "gcp"
 	HashiCorpKeeperType KeeperType = "hashicorp"
+	SystemKeeperType    KeeperType = "system"
 )
 
 func (kt KeeperType) String() string {
@@ -20,9 +21,31 @@ func (kt KeeperType) String() string {
 
 // KeeperConfig is an interface that all keeper config types must implement.
 type KeeperConfig interface {
+	// Returns the name of the keeper
+	GetName() string
 	Type() KeeperType
 }
 
+type NamedKeeperConfig[T interface {
+	Type() KeeperType
+}] struct {
+	Name string
+	Cfg  T
+}
+
+func NewNamedKeeperConfig[T interface {
+	Type() KeeperType
+}](keeperName string, cfg T) *NamedKeeperConfig[T] {
+	return &NamedKeeperConfig[T]{Name: keeperName, Cfg: cfg}
+}
+
+func (c *NamedKeeperConfig[T]) GetName() string {
+	return c.Name
+}
+
+func (c *NamedKeeperConfig[T]) Type() KeeperType {
+	return c.Cfg.Type()
+}
+
 func (s *KeeperSpec) GetType() KeeperType {
 	if s.Aws != nil {
 		return AWSKeeperType
@@ -43,7 +66,7 @@ func (s *KeeperSpec) GetType() KeeperType {
 type SystemKeeperConfig struct{}
 
 func (*SystemKeeperConfig) Type() KeeperType {
-	return "system"
+	return SystemKeeperType
 }
 
 func (s *KeeperAWSConfig) Type() KeeperType {

View File

@@ -14,6 +14,8 @@ import (
func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
return map[string]common.OpenAPIDefinition{
"github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.Keeper": schema_pkg_apis_secret_v1beta1_Keeper(ref),
"github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperAWSAccessKey": schema_pkg_apis_secret_v1beta1_KeeperAWSAccessKey(ref),
"github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperAWSAssumeRole": schema_pkg_apis_secret_v1beta1_KeeperAWSAssumeRole(ref),
"github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperAWSConfig": schema_pkg_apis_secret_v1beta1_KeeperAWSConfig(ref),
"github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperAzureConfig": schema_pkg_apis_secret_v1beta1_KeeperAzureConfig(ref),
"github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperCredentialValue": schema_pkg_apis_secret_v1beta1_KeeperCredentialValue(ref),
@@ -79,7 +81,7 @@ func schema_pkg_apis_secret_v1beta1_Keeper(ref common.ReferenceCallback) common.
}
}
func schema_pkg_apis_secret_v1beta1_KeeperAWSConfig(ref common.ReferenceCallback) common.OpenAPIDefinition {
func schema_pkg_apis_secret_v1beta1_KeeperAWSAccessKey(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
@@ -97,6 +99,65 @@ func schema_pkg_apis_secret_v1beta1_KeeperAWSConfig(ref common.ReferenceCallback
Ref: ref("github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperCredentialValue"),
},
},
},
Required: []string{"accessKeyID", "secretAccessKey"},
},
},
Dependencies: []string{
"github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperCredentialValue"},
}
}
func schema_pkg_apis_secret_v1beta1_KeeperAWSAssumeRole(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"assumeRoleArn": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"externalID": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"assumeRoleArn", "externalID"},
},
},
}
}
func schema_pkg_apis_secret_v1beta1_KeeperAWSConfig(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"region": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"accessKey": {
SchemaProps: spec.SchemaProps{
Ref: ref("github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperAWSAccessKey"),
},
},
"assumeRole": {
SchemaProps: spec.SchemaProps{
Ref: ref("github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperAWSAssumeRole"),
},
},
"kmsKeyID": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
@@ -104,11 +165,11 @@ func schema_pkg_apis_secret_v1beta1_KeeperAWSConfig(ref common.ReferenceCallback
},
},
},
Required: []string{"accessKeyID", "secretAccessKey"},
Required: []string{"region"},
},
},
Dependencies: []string{
"github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperCredentialValue"},
"github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperAWSAccessKey", "github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1.KeeperAWSAssumeRole"},
}
}

View File

@@ -105,6 +105,11 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/visualizations/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/visualizations/
cloudwatch-troubleshooting:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/troubleshooting/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/troubleshooting/
---
# Amazon CloudWatch data source
@@ -119,6 +124,7 @@ The following documents will help you get started working with the CloudWatch da
- [CloudWatch query editor](ref:cloudwatch-query-editor)
- [Templates and variables](ref:cloudwatch-template-variables)
- [Configure AWS authentication](ref:cloudwatch-aws-authentication)
- [Troubleshoot CloudWatch issues](ref:cloudwatch-troubleshooting)
## Import pre-configured dashboards

View File

@@ -1,11 +1,12 @@
---
aliases:
- ../data-sources/aws-CloudWatch/
- ../data-sources/aws-CloudWatch/preconfig-CloudWatch-dashboards/
- ../data-sources/aws-CloudWatch/provision-CloudWatch/
- CloudWatch/
- preconfig-CloudWatch-dashboards/
- provision-CloudWatch/
- ../../data-sources/aws-cloudwatch/configure/
- ../../data-sources/aws-cloudwatch/
- ../../data-sources/aws-cloudwatch/preconfig-cloudwatch-dashboards/
- ../../data-sources/aws-cloudwatch/provision-cloudwatch/
- ../cloudwatch/
- ../preconfig-cloudwatch-dashboards/
- ../provision-cloudwatch/
description: This document provides configuration instructions for the CloudWatch data source.
keywords:
- grafana
@@ -25,11 +26,6 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/visualizations/logs/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/visualizations/logs/
explore:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/
provisioning-data-sources:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/provisioning/#data-sources
@@ -40,16 +36,6 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#aws
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#aws
alerting:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/
build-dashboards:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/
data-source-management:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
@@ -153,7 +139,7 @@ You must use both an access key ID and a secret access key to authenticate.
Grafana automatically creates a link to a trace in the X-Ray data source if logs contain the `@xrayTraceId` field. To use this feature, you must already have an X-Ray data source configured. For details, see the [X-Ray data source docs](/grafana/plugins/grafana-X-Ray-datasource/). To view the X-Ray link, select the log row in either the Explore view or the dashboard [Logs panel](ref:logs) to open the log details section.
To log the `@xrayTraceId`, refer to the [AWS X-Ray documentation](https://docs.amazonaws.cn/en_us/xray/latest/devguide/xray-services.html). To provide the field to Grafana, your log queries must also contain the `@xrayTraceId` field, for example by using the query `fields @message, @xrayTraceId`.
To log the `@xrayTraceId`, refer to the [AWS X-Ray documentation](https://docs.aws.amazon.com/xray/latest/devguide/xray-services.html). To provide the field to Grafana, your log queries must also contain the `@xrayTraceId` field, for example by using the query `fields @message, @xrayTraceId`.
**Private data source connect** - _Only for Grafana Cloud users._

View File

@@ -34,11 +34,6 @@ refs:
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/#navigate-the-query-tab
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/#navigate-the-query-tab
explore:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/
alerting:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/
@@ -183,7 +178,7 @@ If you use the expression field to reference another query, such as `queryA * 2`
When you select `Builder` mode within the Metric search editor, a new Account field is displayed. Use the `Account` field to specify which of the linked monitoring accounts to target for the given query. By default, the `All` option is specified, which will target all linked accounts.
While in `Code` mode, you can specify any math expression. If the Monitoring account badge displays in the query editor header, all `SEARCH` expressions entered in this field will be cross-account by default and can query metrics from linked accounts. Note that while queries run cross-account, the autocomplete feature currently doesn't fetch cross-account resources, so you'll need to manually specify resource names when writing cross-account queries.
You can limit the search to one or a set of accounts, as documented in the [AWS documentation](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html).
You can limit the search to one or a set of accounts, as documented in the [AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html).
### Period macro
@@ -198,7 +193,7 @@ The link provided is valid for any account but displays the expected metrics onl
{{< figure src="/media/docs/cloudwatch/cloudwatch-deep-link-v12.1.png" caption="CloudWatch deep linking" >}}
This feature is not available for metrics based on [metric math expressions](#metric-math-expressions).
This feature is not available for metrics based on [metric math expressions](#use-metric-math-expressions).
### Use Metric Insights syntax
@@ -319,9 +314,9 @@ The CloudWatch plugin monitors and troubleshoots applications that span multiple
To enable cross-account observability, complete the following steps:
1. Go to the [Amazon CloudWatch documentation](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html) and follow the instructions for enabling cross-account observability.
1. Go to the [Amazon CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html) and follow the instructions for enabling cross-account observability.
1. Add [two API actions](https://grafana.com//docs/grafana/latest/datasources/aws-cloudwatch/configure/#cross-account-observability-permissions) to the IAM policy attached to the role/user running the plugin.
1. Add [two API actions](https://grafana.com/docs/grafana/latest/datasources/aws-cloudwatch/configure/#cross-account-observability-permissions) to the IAM policy attached to the role/user running the plugin.
Cross-account querying is available in the plugin through the **Logs**, **Metric search**, and **Metric Insights** modes.
After you have configured it, you'll see a **Monitoring account** badge in the query editor header.

View File

@@ -0,0 +1,519 @@
---
aliases:
- ../../data-sources/aws-cloudwatch/troubleshooting/
description: Troubleshooting guide for the Amazon CloudWatch data source in Grafana
keywords:
- grafana
- cloudwatch
- aws
- troubleshooting
- errors
- authentication
- query
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Troubleshooting
title: Troubleshoot Amazon CloudWatch data source issues
weight: 500
refs:
configure-cloudwatch:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/configure/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/configure/
cloudwatch-aws-authentication:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/aws-authentication/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/aws-authentication/
cloudwatch-template-variables:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/template-variables/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/template-variables/
cloudwatch-query-editor:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/query-editor/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/query-editor/
private-data-source-connect:
- pattern: /docs/grafana/
destination: /docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/
---
# Troubleshoot Amazon CloudWatch data source issues
This document provides solutions to common issues you may encounter when configuring or using the Amazon CloudWatch data source. For configuration instructions, refer to [Configure CloudWatch](ref:configure-cloudwatch).
{{< admonition type="note" >}}
The data source health check validates both metrics and logs permissions. If your IAM policy only grants access to one of these (for example, metrics-only or logs-only), the health check displays a red status. However, the service you have permissions for is still usable—you can query metrics or logs based on whichever permissions are configured.
{{< /admonition >}}
## Authentication errors
These errors occur when AWS credentials are invalid, missing, or don't have the required permissions.
### "Access Denied" or "Not authorized to perform this operation"
**Symptoms:**
- Save & test fails with "Access Denied"
- Queries return authorization errors
- Namespaces, metrics, or dimensions don't load
**Possible causes and solutions:**
| Cause | Solution |
| --------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| IAM policy missing required permissions | Attach the appropriate IAM policy to your user or role. For metrics, you need `cloudwatch:ListMetrics`, `cloudwatch:GetMetricData`, and related permissions. For logs, you need `logs:DescribeLogGroups`, `logs:StartQuery`, `logs:GetQueryResults`, and related permissions. Refer to [Configure CloudWatch](ref:configure-cloudwatch) for complete policy examples; a minimal metrics-only sketch follows this table. |
| Incorrect access key or secret key | Verify the credentials in the AWS Console under **IAM** > **Users** > your user > **Security credentials**. Generate new credentials if necessary. |
| Credentials have expired | For temporary credentials, generate new ones. For access keys, verify they haven't been deactivated or deleted. |
| Wrong AWS region | Verify the default region in the data source configuration matches where your resources are located. |
| Assume Role ARN is incorrect | Verify the role ARN format: `arn:aws:iam::<account-id>:role/<role-name>`. Check that the role exists in the AWS Console. |
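For reference, a minimal metrics-only statement covering the actions named above might look like this sketch; use the complete policy from [Configure CloudWatch](ref:configure-cloudwatch) in production:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["cloudwatch:ListMetrics", "cloudwatch:GetMetricData"],
      "Resource": "*"
    }
  ]
}
```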
### "Unable to assume role"
**Symptoms:**
- Authentication fails when using Assume Role ARN
- Error message references STS or AssumeRole
**Solutions:**
1. Verify the trust relationship on the IAM role allows the Grafana credentials to assume it.
1. Check the trust policy includes the correct principal (the user or role running Grafana).
1. If using an external ID, ensure it matches exactly in both the role's trust policy and the Grafana data source configuration.
1. Verify the base credentials have the `sts:AssumeRole` permission.
1. Check that the role ARN is correct and the role exists.
**Example trust policy:**
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<your-account-id>:user/<grafana-user>"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "<your-external-id>"
}
}
}
]
}
```
### AWS SDK Default authentication not working
**Symptoms:**
- Data source test fails when using AWS SDK Default
- Works locally but fails in production
**Solutions:**
1. Verify AWS credentials are configured in the environment where Grafana runs.
1. Check for credentials in the default locations:
- Environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`)
- Shared credentials file (`~/.aws/credentials`)
- EC2 instance metadata (if running on EC2)
- ECS task role (if running in ECS)
- EKS service account (if running in EKS)
1. Ensure the Grafana process has permission to read the credentials file.
1. For EKS with IRSA, set the pod's security context to allow user 472 (grafana) to access the projected token. Refer to [AWS authentication](ref:cloudwatch-aws-authentication) for details.
### Credentials file not found
**Symptoms:**
- Error indicates credentials file cannot be read
- Authentication fails with "Credentials file" option
**Solutions:**
1. Create the credentials file at `~/.aws/credentials` for the user running the `grafana-server` service (a minimal example follows this list).
1. Verify the file has correct permissions (`0644`).
1. If the file exists but isn't working, move it to `/usr/share/grafana/` and set permissions to `0644`.
1. Ensure the profile name in the data source configuration matches a profile in the credentials file.
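A minimal credentials file with a single default profile looks like this (placeholder values):
```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```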
## Connection errors
These errors occur when Grafana cannot reach AWS CloudWatch endpoints.
### "Request timed out" or connection failures
**Symptoms:**
- Data source test times out
- Queries fail with timeout errors
- Intermittent connection issues
**Solutions:**
1. Verify network connectivity from the Grafana server to AWS endpoints (see the reachability check after this list).
1. Check firewall rules allow outbound HTTPS (port 443) to AWS services.
1. If using a VPC, ensure proper NAT gateway or VPC endpoint configuration.
1. For Grafana Cloud connecting to private resources, configure [Private data source connect](ref:private-data-source-connect).
1. Check if the default region is correct—incorrect regions may cause longer timeouts.
1. Increase the timeout settings if queries involve large data volumes.
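As a quick reachability check, you can attempt a TLS connection to a regional CloudWatch endpoint from the Grafana host; the endpoint below assumes `us-east-1`, so substitute your region:
```bash
# A completed TLS handshake indicates outbound HTTPS (port 443) works.
curl -sv https://monitoring.us-east-1.amazonaws.com >/dev/null
```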
### Custom endpoint configuration issues
**Symptoms:**
- Connection fails when using a custom endpoint
- Endpoint URL rejected
**Solutions:**
1. Verify the endpoint URL format is correct.
1. Ensure the endpoint is accessible from the Grafana server.
1. Check that the endpoint supports the required AWS APIs.
1. For VPC endpoints, verify the endpoint policy allows the required actions.
## CloudWatch Metrics query errors
These errors occur when querying CloudWatch Metrics.
### "No data" or empty results
**Symptoms:**
- Query executes without error but returns no data
- Charts show "No data" message
**Possible causes and solutions:**
| Cause | Solution |
| ------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| Time range doesn't contain data | Expand the dashboard time range. CloudWatch metrics have different retention periods based on resolution. |
| Wrong namespace or metric name | Verify the namespace (for example, `AWS/EC2`) and metric name (for example, `CPUUtilization`) are correct. |
| Incorrect dimensions | Ensure dimension names and values match your AWS resources exactly. |
| Match Exact enabled incorrectly | When Match Exact is enabled, all dimensions must be specified. Try disabling it to see if metrics appear. |
| Period too large | Reduce the period setting or set it to "auto" to ensure data points are returned for your time range. |
| Custom metrics not configured | Add custom metric namespaces in the data source configuration under **Namespaces of Custom Metrics**. |
### "Metric not found" or metrics don't appear in drop-down
**Symptoms:**
- Expected metrics don't appear in the query editor
- Metric drop-down is empty for a namespace
**Solutions:**
1. Verify the metric exists in the selected region.
1. For custom metrics, add the namespace to **Namespaces of Custom Metrics** in the data source configuration.
1. Check that the IAM policy includes `cloudwatch:ListMetrics` permission.
1. CloudWatch limits `ListMetrics` to 500 results per page. To retrieve more metrics, increase the `list_metrics_page_limit` setting in the [Grafana configuration file](https://grafana.com/docs/grafana/latest/datasources/aws-cloudwatch/configure/#configure-the-data-source-with-grafanaini) (see the sketch after this list).
1. Use the Query Inspector to verify the API request and response.
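Raising the page limit in `grafana.ini` might look like the following; confirm the section and key against your Grafana version:
```ini
[aws]
list_metrics_page_limit = 1000
```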
### Dimension values not loading
**Symptoms:**
- Dimension value drop-down doesn't populate
- Wildcard searches return no results
**Solutions:**
1. Verify the IAM policy includes `cloudwatch:ListMetrics` permission.
1. Check that the namespace and metric are selected before dimension values can load.
1. For EC2 dimensions, ensure `ec2:DescribeTags` and `ec2:DescribeInstances` permissions are granted.
1. Dimension values require existing metrics—if no metrics match, no values appear.
### "Too many data points" or API throttling
**Symptoms:**
- Queries fail with throttling errors
- Performance degrades with multiple panels
**Solutions:**
1. Increase the period setting to reduce the number of data points.
1. Reduce the time range of your queries.
1. Use fewer dimensions or wildcard queries per panel.
1. Request a quota increase for `GetMetricData` requests per second in the [AWS Service Quotas console](https://console.aws.amazon.com/servicequotas/).
1. Enable query caching in Grafana to reduce API calls.
### Metric math expression errors
**Symptoms:**
- Expression returns errors
- Referenced metrics not found
**Solutions:**
1. Verify each referenced metric has a unique ID set (see the example after this list).
1. Check that metric IDs start with a lowercase letter and contain only letters, numbers, and underscores.
1. Ensure all referenced metrics are in the same query.
1. Verify the expression syntax follows [AWS Metric Math](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html) documentation.
1. Metric math expressions can't be used with Grafana alerting if they reference other query rows.
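For example, assuming two metric queries in the same panel with IDs `m1` and `m2`, a valid expression referencing them could be:
```
SUM([m1, m2])
```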
## CloudWatch Logs query errors
These errors occur when querying CloudWatch Logs.
### "Query failed" or logs don't appear
**Symptoms:**
- Log queries return errors
- No log data is displayed
**Solutions:**
1. Verify log group names are correct and exist in the selected region.
1. Check the IAM policy includes `logs:StartQuery`, `logs:GetQueryResults`, and `logs:DescribeLogGroups` permissions.
1. Ensure the time range contains log data.
1. Verify the query syntax is valid. For CloudWatch Logs Insights QL, test the query in the AWS Console (a minimal starting query follows this list).
1. Select the correct query language (Logs Insights QL, OpenSearch PPL, or OpenSearch SQL) based on your query syntax.
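To isolate syntax problems, you can start from the minimal default Logs Insights QL query and add filters back incrementally:
```
fields @timestamp, @message
| sort @timestamp desc
| limit 20
```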
### Log query timeout
**Symptoms:**
- Query runs for a long time then fails
- Error mentions timeout
**Solutions:**
1. Increase the **Query result timeout** setting in the data source configuration (the default is 30 minutes).
1. Narrow the time range to reduce the amount of data scanned.
1. Add filters to your query to limit results.
1. Break complex queries into smaller, more focused queries.
1. For alerting, the timeout defined in the [Grafana configuration file](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#unified_alerting) takes precedence.
### Log groups not appearing in selector
**Symptoms:**
- Log group selector is empty
- Can't find expected log groups
**Solutions:**
1. Verify the IAM policy includes `logs:DescribeLogGroups` permission.
1. Check that log groups exist in the selected region.
1. For cross-account observability, ensure proper IAM permissions for `oam:ListSinks` and `oam:ListAttachedLinks`.
1. Use prefix search to filter log groups if you have many groups.
1. Verify the selected account (for cross-account) contains the expected log groups.
### OpenSearch SQL query errors
**Symptoms:**
- OpenSearch SQL queries fail
- Syntax errors with SQL queries
**Solutions:**
1. Specify the log group identifier or ARN in the `FROM` clause:
```sql
SELECT * FROM `log_group_name` WHERE `@message` LIKE '%error%'
```
1. For multiple log groups, use the `logGroups` function:
```sql
SELECT * FROM `logGroups(logGroupIdentifier: ['LogGroup1', 'LogGroup2'])`
```
1. Amazon CloudWatch supports only a subset of OpenSearch SQL commands. Refer to the [CloudWatch Logs documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData_Languages.html) for supported syntax.
## Template variable errors
These errors occur when using template variables with the CloudWatch data source.
### Variables return no values
**Symptoms:**
- Variable drop-down is empty
- Dashboard fails to load with variable errors
**Solutions:**
1. Verify the data source connection is working.
1. Check that the IAM policy includes permissions for the variable query type:
- **Regions:** No additional permissions needed.
- **Namespaces:** No additional permissions needed.
- **Metrics:** Requires `cloudwatch:ListMetrics`.
- **Dimension Values:** Requires `cloudwatch:ListMetrics`.
- **EC2 Instance Attributes:** Requires `ec2:DescribeInstances`.
- **EBS Volume IDs:** Requires `ec2:DescribeVolumes`.
- **Resource ARNs:** Requires `tag:GetResources`.
- **Log Groups:** Requires `logs:DescribeLogGroups`.
1. For dependent variables, ensure parent variables have valid selections.
1. Verify the region is set correctly (use "default" for the data source's default region).
For more information on template variables, refer to [CloudWatch template variables](ref:cloudwatch-template-variables).
### Multi-value template variables cause query failures
**Symptoms:**
- Queries fail when selecting multiple dimension values
- Error about search expression limits
**Solutions:**
1. Search expressions are limited to 1,024 characters. Reduce the number of selected values.
1. Use the asterisk (`*`) wildcard instead of selecting "All" to query all metrics for a dimension.
1. Multi-valued template variables are only supported for dimension values—not for Region, Namespace, or Metric Name.
## Cross-account observability errors
These errors occur when using CloudWatch cross-account observability features.
### Cross-account queries fail
**Symptoms:**
- Can't query metrics or logs from linked accounts
- Monitoring account badge doesn't appear
**Solutions:**
1. Verify cross-account observability is configured in the AWS CloudWatch console.
1. Add the required IAM permissions:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["oam:ListSinks", "oam:ListAttachedLinks"],
"Effect": "Allow",
"Resource": "*"
}
]
}
```
1. Check that the monitoring account and source accounts are properly linked in AWS.
1. Cross-account observability works within a single region—verify all accounts are in the same region.
1. EC2 Instance Attributes can't be queried across accounts because they use the EC2 API, not the CloudWatch API.
## Quota and pricing issues
These issues relate to AWS service quotas and cost management.
### API throttling errors
**Symptoms:**
- "Rate exceeded" errors
- Dashboard panels intermittently fail to load
**Solutions:**
1. Reduce the frequency of dashboard refreshes.
1. Increase the period setting to reduce `GetMetricData` requests.
1. Enable query caching in Grafana (available in Grafana Enterprise and Grafana Cloud).
1. Request a quota increase in the [AWS Service Quotas console](https://console.aws.amazon.com/servicequotas/).
1. Consider consolidating similar queries using metric math.
### Unexpectedly high CloudWatch costs
**Symptoms:**
- AWS CloudWatch costs are higher than expected
- Frequent API calls from Grafana
**Solutions:**
1. Be aware that the `GetMetricData` API doesn't qualify for the CloudWatch API free tier, so every metrics query incurs a cost.
1. Reduce dashboard auto-refresh frequency.
1. Increase the period setting to reduce data points returned.
1. Use query caching to reduce repeated API calls.
1. Review variable query settings—set variable refresh to "On dashboard load" instead of "On time range change."
1. Avoid using wildcards in dimensions when possible, as they generate search expressions with multiple API calls.
## Other common issues
These issues don't produce specific error messages but are commonly encountered.
### Custom metrics don't appear
**Symptoms:**
- Custom metrics from applications or agents don't show in the namespace drop-down
- Only standard AWS namespaces are visible
**Solutions:**
1. Add your custom metric namespace to the **Namespaces of Custom Metrics** field in the data source configuration (a provisioning sketch follows this list).
1. Separate multiple namespaces with commas (for example, `CWAgent,CustomNamespace`).
1. Verify custom metrics have been published to CloudWatch in the selected region.
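If you provision the data source, the equivalent setting is typically the `customMetricsNamespaces` key in `jsonData`; a sketch with hypothetical values:
```yaml
apiVersion: 1
datasources:
  - name: CloudWatch
    type: cloudwatch
    jsonData:
      authType: default
      defaultRegion: us-east-1
      customMetricsNamespaces: 'CWAgent,CustomNamespace'
```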
### Pre-configured dashboards not working
**Symptoms:**
- Imported dashboards show no data
- Dashboard variables don't load
**Solutions:**
1. Verify the data source name in the dashboard matches your CloudWatch data source.
1. Check that the dashboard's AWS region setting matches where your resources are located.
1. Ensure the IAM policy grants access to the required services (EC2, Lambda, RDS, etc.).
1. Verify resources exist and are emitting metrics in the selected region.
### X-Ray trace links not appearing
**Symptoms:**
- Log entries don't show X-Ray trace links
- `@xrayTraceId` field not appearing
**Solutions:**
1. Verify an X-Ray data source is configured and linked in the CloudWatch data source settings.
1. Ensure your logs contain the `@xrayTraceId` field.
1. Update log queries to include `@xrayTraceId` in the fields, for example: `fields @message, @xrayTraceId`.
1. Configure your application to log X-Ray trace IDs. Refer to the [AWS X-Ray documentation](https://docs.aws.amazon.com/xray/latest/devguide/xray-services.html).
## Enable debug logging
To capture detailed error information for troubleshooting:
1. Set the Grafana log level to `debug` in the configuration file:
```ini
[log]
level = debug
```
1. Review logs in `/var/log/grafana/grafana.log` (or your configured log location).
1. Look for CloudWatch-specific entries that include request and response details.
1. Reset the log level to `info` after troubleshooting to avoid excessive log volume.
## Get additional help
If you've tried the solutions above and still encounter issues:
1. Check the [Grafana community forums](https://community.grafana.com/) for similar issues.
1. Review the [CloudWatch plugin GitHub issues](https://github.com/grafana/grafana/issues) for known bugs.
1. Consult the [AWS CloudWatch documentation](https://docs.aws.amazon.com/cloudwatch/) for service-specific guidance.
1. Contact Grafana Support if you're an Enterprise, Cloud Pro, or Cloud Contracted user.
1. When reporting issues, include:
- Grafana version
- AWS region
- Error messages (redact sensitive information)
- Steps to reproduce
- Query configuration (redact credentials and account IDs)

View File

@@ -124,6 +124,8 @@ For more information about dashboard permissions, refer to [Dashboard permission
## Restore deleted dashboards
{{% admonition type="caution" %}}
Restoring deleted dashboards is currently in private preview. Grafana Labs offers support on a best-effort basis, and breaking changes might occur prior to the feature being made generally available.
The feature is only available in Grafana Cloud.
{{% /admonition %}}

11
go.mod
View File

@@ -32,13 +32,14 @@ require (
github.com/apache/arrow-go/v18 v18.4.1 // @grafana/plugins-platform-backend
github.com/armon/go-radix v1.0.0 // @grafana/grafana-app-platform-squad
github.com/aws/aws-sdk-go v1.55.7 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2 v1.39.1 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2 v1.40.0 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/cloudwatch v1.45.3 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.51.0 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/ec2 v1.225.2 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/oam v1.18.3 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.6 // @grafana/aws-datasources
github.com/aws/smithy-go v1.23.1 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.40.1 // @grafana/grafana-operator-experience-squad
github.com/aws/smithy-go v1.23.2 // @grafana/aws-datasources
github.com/beevik/etree v1.4.1 // @grafana/grafana-backend-group
github.com/benbjohnson/clock v1.3.5 // @grafana/alerting-backend
github.com/blang/semver/v4 v4.0.0 // indirect; @grafana/grafana-developer-enablement-squad
@@ -146,6 +147,7 @@ require (
github.com/olekukonko/tablewriter v0.0.5 // @grafana/grafana-backend-group
github.com/open-feature/go-sdk v1.16.0 // @grafana/grafana-backend-group
github.com/open-feature/go-sdk-contrib/providers/go-feature-flag v0.2.6 // @grafana/grafana-backend-group
github.com/open-feature/go-sdk-contrib/providers/ofrep v0.1.6 // @grafana/grafana-backend-group
github.com/openfga/api/proto v0.0.0-20250909172242-b4b2a12f5c67 // @grafana/identity-access-team
github.com/openfga/language/pkg/go v0.2.0-beta.2.0.20251027165255-0f8f255e5f6c // @grafana/identity-access-team
github.com/openfga/openfga v1.11.1 // @grafana/identity-access-team
@@ -344,8 +346,8 @@ require (
github.com/aws/aws-sdk-go-v2/credentials v1.18.14 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8 // indirect
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 // indirect
@@ -543,7 +545,6 @@ require (
github.com/oklog/run v1.1.0 // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/oklog/ulid/v2 v2.1.1 // indirect
github.com/open-feature/go-sdk-contrib/providers/ofrep v0.1.6 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal v0.124.1 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/core/xidutils v0.124.1 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/translator/jaeger v0.124.1 // indirect

18
go.sum
View File

@@ -850,8 +850,8 @@ github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2z
github.com/aws/aws-sdk-go v1.55.5/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/aws/aws-sdk-go v1.55.7 h1:UJrkFq7es5CShfBwlWAC8DA077vp8PyVbQd3lqLiztE=
github.com/aws/aws-sdk-go v1.55.7/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/aws/aws-sdk-go-v2 v1.39.1 h1:fWZhGAwVRK/fAN2tmt7ilH4PPAE11rDj7HytrmbZ2FE=
github.com/aws/aws-sdk-go-v2 v1.39.1/go.mod h1:sDioUELIUO9Znk23YVmIk86/9DOpkbyyVb1i/gUNFXY=
github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
github.com/aws/aws-sdk-go-v2 v1.40.0/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 h1:12SpdwU8Djs+YGklkinSSlcrPyj3H4VifVsKf78KbwA=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11/go.mod h1:dd+Lkp6YmMryke+qxW/VnKyhMBDTYP41Q2Bb+6gNZgY=
github.com/aws/aws-sdk-go-v2/config v1.31.10 h1:7LllDZAegXU3yk41mwM6KcPu0wmjKGQB1bg99bNdQm4=
@@ -862,10 +862,10 @@ github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8 h1:gLD09eaJUdiszm7vd1btiQU
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8/go.mod h1:4RW3oMPt1POR74qVOC4SbubxAwdP4pCT0nSw3jycOU4=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84 h1:cTXRdLkpBanlDwISl+5chq5ui1d1YWg4PWMR9c3kXyw=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84/go.mod h1:kwSy5X7tfIHN39uucmjQVs2LvDdXEjQucgQQEqCggEo=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 h1:6bgAZgRyT4RoFWhxS+aoGMFyE0cD1bSzFnEEi4bFPGI=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8/go.mod h1:KcGkXFVU8U28qS4KvLEcPxytPZPBcRawaH2Pf/0jptE=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 h1:HhJYoES3zOz34yWEpGENqJvRVPqpmJyR3+AFg9ybhdY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8/go.mod h1:JnA+hPWeYAVbDssp83tv+ysAG8lTfLVXvSsyKg/7xNA=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 h1:PZHqQACxYb8mYgms4RZbhZG0a7dPW06xOjmaH0EJC/I=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14/go.mod h1:VymhrMJUWs69D8u0/lZ7jSB6WgaG/NqHi3gX0aYf6U0=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 h1:bOS19y6zlJwagBfHxs0ESzr1XCOU2KXJCWcq3E2vfjY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14/go.mod h1:1ipeGBMAxZ0xcTm6y6paC2C/J6f6OO7LBODV9afuAyM=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 h1:GMYy2EOWfzdP3wfVAGXBNKY5vK4K8vMET4sYOYltmqs=
@@ -892,14 +892,16 @@ github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.6 h1:Pwbxovp
github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.6/go.mod h1:Z4xLt5mXspLKjBV92i165wAJ/3T6TIv4n7RtIS8pWV0=
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0 h1:0reDqfEN+tB+sozj2r92Bep8MEwBZgtAXTND1Kk9OXg=
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0/go.mod h1:kUklwasNoCn5YpyAqC/97r6dzTA1SRKJfKq16SXeoDU=
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.40.1 h1:w6a0H79HrHf3lr+zrw+pSzR5B+caiQFAKiNHlrUcnoc=
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.40.1/go.mod h1:c6Vg0BRiU7v0MVhHupw90RyL120QBwAMLbDCzptGeMk=
github.com/aws/aws-sdk-go-v2/service/sso v1.29.4 h1:FTdEN9dtWPB0EOURNtDPmwGp6GGvMqRJCAihkSl/1No=
github.com/aws/aws-sdk-go-v2/service/sso v1.29.4/go.mod h1:mYubxV9Ff42fZH4kexj43gFPhgc/LyC7KqvUKt1watc=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0 h1:I7ghctfGXrscr7r1Ga/mDqSJKm7Fkpl5Mwq79Z+rZqU=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0/go.mod h1:Zo9id81XP6jbayIFWNuDpA6lMBWhsVy+3ou2jLa4JnA=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 h1:+LVB0xBqEgjQoqr9bGZbRzvg212B0f17JdflleJRNR4=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.5/go.mod h1:xoaxeqnnUaZjPjaICgIy5B+MHCSb/ZSOn4MvkFNOUA0=
github.com/aws/smithy-go v1.23.1 h1:sLvcH6dfAFwGkHLZ7dGiYF7aK6mg4CgKA/iDKjLDt9M=
github.com/aws/smithy-go v1.23.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/axiomhq/hyperloglog v0.0.0-20191112132149-a4c4c47bc57f/go.mod h1:2stgcRjl6QmW+gU2h5E7BQXg4HU0gzxKWDuT5HviN9s=
github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27 h1:60m4tnanN1ctzIu4V3bfCNJ39BiOPSm1gHFlFjTkRE0=
github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27/go.mod h1:k08r+Yj1PRAmuayFiRK6MYuR5Ve4IuZtTfxErMIh0+c=

View File

@@ -423,6 +423,7 @@ github.com/aws/aws-msk-iam-sasl-signer-go v1.0.1 h1:nMp7diZObd4XEVUR0pEvn7/E13JI
github.com/aws/aws-msk-iam-sasl-signer-go v1.0.1/go.mod h1:MVYeeOhILFFemC/XlYTClvBjYZrg/EPd3ts885KrNTI=
github.com/aws/aws-sdk-go-v2 v1.36.5/go.mod h1:EYrzvCCN9CMUTa5+6lf6MM4tq3Zjp8UhSGR/cBsjai0=
github.com/aws/aws-sdk-go-v2 v1.38.1/go.mod h1:9Q0OoGQoboYIAJyslFyF1f5K1Ryddop8gqMhWx/n4Wg=
github.com/aws/aws-sdk-go-v2 v1.39.1/go.mod h1:sDioUELIUO9Znk23YVmIk86/9DOpkbyyVb1i/gUNFXY=
github.com/aws/aws-sdk-go-v2/config v1.29.17/go.mod h1:9P4wwACpbeXs9Pm9w1QTh6BwWwJjwYvJ1iCt5QbCXh8=
github.com/aws/aws-sdk-go-v2/config v1.31.2/go.mod h1:17ft42Yb2lF6OigqSYiDAiUcX4RIkEMY6XxEMJsrAes=
github.com/aws/aws-sdk-go-v2/credentials v1.17.70/go.mod h1:M+lWhhmomVGgtuPOhO85u4pEa3SmssPTdcYpP/5J/xc=
@@ -436,8 +437,10 @@ github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.4/go.mod h1:9xzb8/SV62W6gHQG
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.69/go.mod h1:GJj8mmO6YT6EqgduWocwhMoxTLFitkhIrK+owzrYL2I=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36/go.mod h1:Q1lnJArKRXkenyog6+Y+zr7WDpk4e6XlR6gs20bbeNo=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4/go.mod h1:l4bdfCD7XyyZA9BolKBo1eLqgaJxl0/x91PL4Yqe0ao=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8/go.mod h1:KcGkXFVU8U28qS4KvLEcPxytPZPBcRawaH2Pf/0jptE=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36/go.mod h1:UdyGa7Q91id/sdyHPwth+043HhmP6yP9MBHgbZM0xo8=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4/go.mod h1:yDmJgqOiH4EA8Hndnv4KwAo8jCGTSnM5ASG1nBI+toA=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8/go.mod h1:JnA+hPWeYAVbDssp83tv+ysAG8lTfLVXvSsyKg/7xNA=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34/go.mod h1:zf7Vcd1ViW7cPqYWEHLHJkS50X0JS2IKz9Cgaj6ugrs=
github.com/aws/aws-sdk-go-v2/service/dynamodb v1.44.0 h1:A99gjqZDbdhjtjJVZrmVzVKO2+p3MSg35bDWtbMQVxw=
github.com/aws/aws-sdk-go-v2/service/dynamodb v1.44.0/go.mod h1:mWB0GE1bqcVSvpW7OtFA0sKuHk52+IqtnsYU2jUfYAs=
@@ -491,6 +494,7 @@ github.com/aws/smithy-go v1.22.4/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp
github.com/aws/smithy-go v1.22.5 h1:P9ATCXPMb2mPjYBgueqJNCA5S9UfktsW0tTxi+a7eqw=
github.com/aws/smithy-go v1.22.5/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/aws/smithy-go v1.23.0/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/aws/smithy-go v1.23.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/awslabs/aws-lambda-go-api-proxy v0.16.2 h1:CJyGEyO1CIwOnXTU40urf0mchf6t3voxpvUDikOU9LY=
github.com/awslabs/aws-lambda-go-api-proxy v0.16.2/go.mod h1:vxxjwBHe/KbgFeNlAP/Tvp4SsVRL3WQamcWRxqVh0z0=
github.com/aymanbagabas/go-udiff v0.2.0 h1:TK0fH4MteXUDspT88n8CKzvK0X9O2xu9yQjWpi6yML8=

View File

@@ -347,6 +347,7 @@
"date-fns": "4.1.0",
"debounce-promise": "3.1.2",
"diff": "^8.0.0",
"downsample": "1.4.0",
"fast-deep-equal": "^3.1.3",
"fast-json-patch": "3.1.1",
"file-saver": "2.0.5",

View File

@@ -58,14 +58,12 @@
"bundle": "rollup -c rollup.config.ts --configPlugin esbuild",
"clean": "rimraf ./dist ./compiled ./unstable ./testing ./package.tgz",
"typecheck": "tsc --emitDeclarationOnly false --noEmit",
"codegen": "rtk-query-codegen-openapi ./scripts/codegen.ts",
"prepack": "cp package.json package.json.bak && node ../../scripts/prepare-npm-package.js",
"postpack": "mv package.json.bak package.json",
"i18n-extract": "i18next-cli extract --sync-primary"
},
"devDependencies": {
"@grafana/test-utils": "workspace:*",
"@rtk-query/codegen-openapi": "^2.0.0",
"@testing-library/jest-dom": "^6.6.3",
"@testing-library/react": "^16.3.0",
"@testing-library/user-event": "^14.6.1",
@@ -96,6 +94,7 @@
"dependencies": {
"@emotion/css": "11.13.5",
"@faker-js/faker": "^9.8.0",
"@grafana/api-clients": "12.4.0-pre",
"@grafana/i18n": "12.4.0-pre",
"@reduxjs/toolkit": "^2.9.0",
"fishery": "^2.3.1",

View File

@@ -1,23 +0,0 @@
# Re-generate the clients
⚠️ This guide assumes the Backend definitions have been updated in `apps/alerting`.
## Re-create OpenAPI specification
Start by regenerating the OpenAPI snapshots: run the test in `pkg/tests/apis/openapi_test.go`.
This will output the OpenAPI JSON spec file(s) in `pkg/tests/apis/openapi_snapshots`.
## Process OpenAPI specifications
Next, run the post-processing of the snapshots with `yarn run process-specs`; this copies the processed specifications to `./data/openapi/`.
## Generate RTKQ files
These files are built using the `yarn run codegen` command; make sure to run it from the Grafana Alerting package working directory:
`yarn --cwd ./packages/grafana-alerting run codegen`.
API clients will be written to `src/grafana/api/<version>/api.gen.ts`.
Make sure to create a versioned API client for each API version; see `src/grafana/api/v0alpha1/api.ts` as an example.

View File

@@ -1,55 +0,0 @@
/**
* This script will generate TypeScript type definitions and RTKQ clients for the alerting k8s APIs.
*
* Run `yarn run codegen` from the "grafana-alerting" package to invoke this script.
*
* API clients will be placed in "src/grafana/api/<version>/api.gen.ts"
*/
import type { ConfigFile } from '@rtk-query/codegen-openapi';
// append API groups and versions here to generate additional API clients
const SPECS = [
['notifications.alerting.grafana.app', ['v0alpha1']],
['rules.alerting.grafana.app', ['v0alpha1']],
// keep this in Grafana Enterprise
// ['alertenrichment.grafana.app', ['v1beta1']],
] as const;
type OutputFile = Omit<ConfigFile, 'outputFile'>;
type OutputFiles = Record<string, OutputFile>;
const outputFiles = SPECS.reduce<OutputFiles>((groupAcc, [group, versions]) => {
return versions.reduce<OutputFiles>((versionAcc, version) => {
// Create a unique export name based on the group
const groupName = group.split('.')[0]; // e.g., 'notifications', 'rules', 'alertenrichment'
const exportName = `${groupName}API`;
// these snapshots are generated by running "go test pkg/tests/apis/openapi_test.go" and "scripts/process-specs.ts",
// see the README in the "openapi_snapshots" directory
const schemaFile = `../../../data/openapi/${group}-${version}.json`;
// make sure there is an API file in each versioned directory
const apiFile = `../src/grafana/api/${groupName}/${version}/api.ts`;
// output each api client into a versioned directory with group-specific naming
const outputPath = `../src/grafana/api/${groupName}/${version}/${groupName}.api.gen.ts`;
versionAcc[outputPath] = {
exportName,
schemaFile,
apiFile,
tag: true, // generate tags for cache invalidation
} satisfies OutputFile;
return versionAcc;
}, groupAcc);
}, {});
export default {
// these are intentionally empty but will be set for each versioned config file
exportName: '',
schemaFile: '',
apiFile: '',
outputFiles,
} satisfies ConfigFile;

View File

@@ -1,18 +0,0 @@
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';
import { getAPIBaseURL, getAPIReducerPath } from '../../util';
import { GROUP, VERSION } from './const';
const baseUrl = getAPIBaseURL(GROUP, VERSION);
const reducerPath = getAPIReducerPath(GROUP, VERSION);
export const api = createApi({
reducerPath,
baseQuery: fetchBaseQuery({
// Set URL correctly so MSW can intercept requests
// https://mswjs.io/docs/runbook#rtk-query-requests-are-not-intercepted
baseUrl: new URL(baseUrl, globalThis.location.origin).href,
}),
endpoints: () => ({}),
});

View File

@@ -1,2 +0,0 @@
export const VERSION = 'v0alpha1' as const;
export const GROUP = 'notifications.alerting.grafana.app' as const;

View File

@@ -1,8 +1,9 @@
import { faker } from '@faker-js/faker';
import { Factory } from 'fishery';
import { API_GROUP, API_VERSION } from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { DEFAULT_NAMESPACE, generateResourceVersion, generateTitle, generateUID } from '../../../../../mocks/util';
import { GROUP, VERSION } from '../../const';
import {
ContactPoint,
ContactPointMetadataAnnotations,
@@ -14,7 +15,7 @@ import { AlertingEntityMetadataAnnotationsFactory } from './common';
export const ListReceiverApiResponseFactory = Factory.define<EnhancedListReceiverApiResponse>(() => ({
kind: 'ReceiverList',
apiVersion: `${GROUP}/${VERSION}`,
apiVersion: `${API_GROUP}/${API_VERSION}`,
metadata: {
resourceVersion: generateResourceVersion(),
},
@@ -26,7 +27,7 @@ export const ContactPointFactory = Factory.define<ContactPoint>(() => {
return {
kind: 'Receiver',
apiVersion: `${GROUP}/${VERSION}`,
apiVersion: `${API_GROUP}/${API_VERSION}`,
metadata: {
name: btoa(title),
namespace: DEFAULT_NAMESPACE,

View File

@@ -1,13 +1,17 @@
import { HttpResponse, http } from 'msw';
import {
API_GROUP,
API_VERSION,
CreateReceiverApiResponse,
} from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { getAPIBaseURLForMocks } from '../../../../../../mocks/util';
import { CreateReceiverApiResponse } from '../../../../v0alpha1/notifications.api.gen';
import { GROUP, VERSION } from '../../../const';
export function createReceiverHandler(
data: CreateReceiverApiResponse | ((info: Parameters<Parameters<typeof http.post>[1]>[0]) => Response)
) {
return http.post(getAPIBaseURLForMocks(GROUP, VERSION, '/receivers'), function handler(info) {
return http.post(getAPIBaseURLForMocks(API_GROUP, API_VERSION, '/receivers'), function handler(info) {
if (typeof data === 'function') {
return data(info);
}

View File

@@ -1,13 +1,17 @@
import { HttpResponse, http } from 'msw';
import {
API_GROUP,
API_VERSION,
DeleteReceiverApiResponse,
} from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { getAPIBaseURLForMocks } from '../../../../../../mocks/util';
import { DeleteReceiverApiResponse } from '../../../../v0alpha1/notifications.api.gen';
import { GROUP, VERSION } from '../../../const';
export function deleteReceiverHandler(
data: DeleteReceiverApiResponse | ((info: Parameters<Parameters<typeof http.delete>[1]>[0]) => Response)
) {
return http.delete(getAPIBaseURLForMocks(GROUP, VERSION, '/receivers/:name'), function handler(info) {
return http.delete(getAPIBaseURLForMocks(API_GROUP, API_VERSION, '/receivers/:name'), function handler(info) {
if (typeof data === 'function') {
return data(info);
}

View File

@@ -1,13 +1,17 @@
import { HttpResponse, http } from 'msw';
import {
API_GROUP,
API_VERSION,
DeletecollectionReceiverApiResponse,
} from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { getAPIBaseURLForMocks } from '../../../../../../mocks/util';
import { DeletecollectionReceiverApiResponse } from '../../../../v0alpha1/notifications.api.gen';
import { GROUP, VERSION } from '../../../const';
export function deletecollectionReceiverHandler(
data: DeletecollectionReceiverApiResponse | ((info: Parameters<Parameters<typeof http.delete>[1]>[0]) => Response)
) {
return http.delete(getAPIBaseURLForMocks(GROUP, VERSION, '/receivers'), function handler(info) {
return http.delete(getAPIBaseURLForMocks(API_GROUP, API_VERSION, '/receivers'), function handler(info) {
if (typeof data === 'function') {
return data(info);
}

View File

@@ -1,13 +1,17 @@
import { HttpResponse, http } from 'msw';
import {
API_GROUP,
API_VERSION,
GetReceiverApiResponse,
} from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { getAPIBaseURLForMocks } from '../../../../../../mocks/util';
import { GetReceiverApiResponse } from '../../../../v0alpha1/notifications.api.gen';
import { GROUP, VERSION } from '../../../const';
export function getReceiverHandler(
data: GetReceiverApiResponse | ((info: Parameters<Parameters<typeof http.get>[1]>[0]) => Response)
) {
return http.get(getAPIBaseURLForMocks(GROUP, VERSION, '/receivers/:name'), function handler(info) {
return http.get(getAPIBaseURLForMocks(API_GROUP, API_VERSION, '/receivers/:name'), function handler(info) {
if (typeof data === 'function') {
return data(info);
}

View File

@@ -1,13 +1,14 @@
import { HttpResponse, http } from 'msw';
import { API_GROUP, API_VERSION } from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { getAPIBaseURLForMocks } from '../../../../../../mocks/util';
import { GROUP, VERSION } from '../../../const';
import { EnhancedListReceiverApiResponse } from '../../../types';
export function listReceiverHandler(
data: EnhancedListReceiverApiResponse | ((info: Parameters<Parameters<typeof http.get>[1]>[0]) => Response)
) {
return http.get(getAPIBaseURLForMocks(GROUP, VERSION, '/receivers'), function handler(info) {
return http.get(getAPIBaseURLForMocks(API_GROUP, API_VERSION, '/receivers'), function handler(info) {
if (typeof data === 'function') {
return data(info);
}

View File

@@ -1,13 +1,17 @@
import { HttpResponse, http } from 'msw';
import {
API_GROUP,
API_VERSION,
ReplaceReceiverApiResponse,
} from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { getAPIBaseURLForMocks } from '../../../../../../mocks/util';
import { ReplaceReceiverApiResponse } from '../../../../v0alpha1/notifications.api.gen';
import { GROUP, VERSION } from '../../../const';
export function replaceReceiverHandler(
data: ReplaceReceiverApiResponse | ((info: Parameters<Parameters<typeof http.put>[1]>[0]) => Response)
) {
return http.put(getAPIBaseURLForMocks(GROUP, VERSION, '/receivers/:name'), function handler(info) {
return http.put(getAPIBaseURLForMocks(API_GROUP, API_VERSION, '/receivers/:name'), function handler(info) {
if (typeof data === 'function') {
return data(info);
}

View File

@@ -1,13 +1,17 @@
import { HttpResponse, http } from 'msw';
import {
API_GROUP,
API_VERSION,
UpdateReceiverApiResponse,
} from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { getAPIBaseURLForMocks } from '../../../../../../mocks/util';
import { UpdateReceiverApiResponse } from '../../../../v0alpha1/notifications.api.gen';
import { GROUP, VERSION } from '../../../const';
export function updateReceiverHandler(
data: UpdateReceiverApiResponse | ((info: Parameters<Parameters<typeof http.patch>[1]>[0]) => Response)
) {
return http.patch(getAPIBaseURLForMocks(GROUP, VERSION, '/receivers/:name'), function handler(info) {
return http.patch(getAPIBaseURLForMocks(API_GROUP, API_VERSION, '/receivers/:name'), function handler(info) {
if (typeof data === 'function') {
return data(info);
}

View File

@@ -3,7 +3,11 @@
*/
import { MergeDeep, MergeExclusive, OverrideProperties } from 'type-fest';
import type { ListReceiverApiResponse, Receiver, ReceiverIntegration } from './notifications.api.gen';
import type {
ListReceiverApiResponse,
Receiver,
ReceiverIntegration,
} from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
type GenericIntegration = OverrideProperties<
ReceiverIntegration,

View File

@@ -1,18 +0,0 @@
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';
import { getAPIBaseURL, getAPIReducerPath } from '../../util';
import { GROUP, VERSION } from './const';
const baseUrl = getAPIBaseURL(GROUP, VERSION);
const reducerPath = getAPIReducerPath(GROUP, VERSION);
export const api = createApi({
reducerPath,
baseQuery: fetchBaseQuery({
// Set URL correctly so MSW can intercept requests
// https://mswjs.io/docs/runbook#rtk-query-requests-are-not-intercepted
baseUrl: new URL(baseUrl, globalThis.location.origin).href,
}),
endpoints: () => ({}),
});

View File

@@ -1,2 +0,0 @@
export const VERSION = 'v0alpha1' as const;
export const GROUP = 'rules.alerting.grafana.app' as const;

View File

@@ -7,9 +7,10 @@ import { OverrideProperties } from 'type-fest';
import {
CreateReceiverApiArg,
type ListReceiverApiArg,
notificationsAPI,
} from '../../../api/notifications/v0alpha1/notifications.api.gen';
ListReceiverApiArg,
generatedAPI as notificationsAPIv0alpha1,
} from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import type { ContactPoint, EnhancedListReceiverApiResponse } from '../../../api/notifications/v0alpha1/types';
// this is a workaround for the fact that the generated types are not narrow enough
@@ -22,17 +23,17 @@ type ListContactPointsHookResult = TypedUseQueryHookResult<
// Type for the options that can be passed to the hook
// Based on the pattern used for mutation options in this file
type ListContactPointsQueryArgs = Parameters<
typeof notificationsAPI.endpoints.listReceiver.useQuery<ListContactPointsHookResult>
typeof notificationsAPIv0alpha1.endpoints.listReceiver.useQuery<ListContactPointsHookResult>
>[0];
type ListContactPointsQueryOptions = Parameters<
typeof notificationsAPI.endpoints.listReceiver.useQuery<ListContactPointsHookResult>
typeof notificationsAPIv0alpha1.endpoints.listReceiver.useQuery<ListContactPointsHookResult>
>[1];
/**
* useListContactPoints is a hook that fetches a list of contact points
*
* This function wraps the notificationsAPI.useListReceiverQuery with proper typing
* This function wraps the notificationsAPIv0alpha1.useListReceiverQuery with proper typing
* to ensure that the returned ContactPoints are correctly typed in the data.items array.
*
* It automatically uses the configured namespace for the query.
@@ -43,8 +44,8 @@ type ListContactPointsQueryOptions = Parameters<
export function useListContactPoints(
queryArgs: ListContactPointsQueryArgs = {},
queryOptions: ListContactPointsQueryOptions = {}
) {
return notificationsAPI.useListReceiverQuery<ListContactPointsHookResult>(queryArgs, queryOptions);
): ListContactPointsHookResult {
return notificationsAPIv0alpha1.useListReceiverQuery<ListContactPointsHookResult>(queryArgs, queryOptions);
}
// type narrowing mutations requires us to define a few helper types
@@ -60,7 +61,7 @@ type CreateContactPointMutation = TypedUseMutationResult<
>;
type UseCreateContactPointOptions = Parameters<
typeof notificationsAPI.endpoints.createReceiver.useMutation<CreateContactPointMutation>
typeof notificationsAPIv0alpha1.endpoints.createReceiver.useMutation<CreateContactPointMutation>
>[0];
/**
@@ -69,8 +70,16 @@ type UseCreateContactPointOptions = Parameters<
* This function wraps the notificationsAPIv0alpha1.useCreateReceiverMutation with proper typing
* to ensure that the payload supports type narrowing.
*/
export function useCreateContactPoint(options?: UseCreateContactPointOptions) {
const [updateFn, result] = notificationsAPI.endpoints.createReceiver.useMutation<CreateContactPointMutation>(options);
export function useCreateContactPoint(
options?: UseCreateContactPointOptions
): readonly [
(
args: CreateContactPointArgs
) => ReturnType<ReturnType<typeof notificationsAPIv0alpha1.endpoints.createReceiver.useMutation>[0]>,
ReturnType<typeof notificationsAPIv0alpha1.endpoints.createReceiver.useMutation<CreateContactPointMutation>>[1],
] {
const [updateFn, result] =
notificationsAPIv0alpha1.endpoints.createReceiver.useMutation<CreateContactPointMutation>(options);
const typedUpdateFn = (args: CreateContactPointArgs) => {
// @ts-expect-error this one is just impossible for me to figure out

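For orientation, a minimal usage sketch of the wrapped hooks above. The consumer-side helper and the relative import path are hypothetical; the hook name and its argument defaults come from this file:

// Hypothetical consumer sketch; assumes the client's reducer and middleware
// are registered in the Redux store (see the store setup later in this diff).
import { useListContactPoints } from './useContactPoints';

function useContactPointCount(): number {
  // queryArgs and queryOptions both default to {}
  const { data } = useListContactPoints();
  // data.items is narrowed to ContactPoint[] via ListContactPointsHookResult
  return data?.items.length ?? 0;
}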
View File

@@ -1,6 +1,7 @@
import { countBy, isEmpty } from 'lodash';
import { Receiver } from '../api/notifications/v0alpha1/notifications.api.gen';
import { Receiver } from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { ContactPoint } from '../api/notifications/v0alpha1/types';
/**

View File

@@ -1,4 +1,4 @@
import { RoutingTreeMatcher } from '../api/notifications/v0alpha1/notifications.api.gen';
import { RoutingTreeMatcher } from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
export type Label = [string, string];

View File

@@ -1,6 +1,6 @@
import { VERSION } from '../../api/notifications/v0alpha1/const';
import { API_VERSION, RoutingTree } from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { LabelMatcherFactory, RouteFactory } from '../../api/notifications/v0alpha1/mocks/fakes/Routes';
import { RoutingTree } from '../../api/notifications/v0alpha1/notifications.api.gen';
import { Label } from '../../matchers/types';
import { matchInstancesToRouteTrees } from './useMatchPolicies';
@@ -16,7 +16,7 @@ describe('matchInstancesToRouteTrees', () => {
const trees: RoutingTree[] = [
{
kind: 'RoutingTree',
apiVersion: VERSION,
apiVersion: API_VERSION,
metadata: { name: treeName },
spec: {
defaults: {
@@ -24,7 +24,6 @@ describe('matchInstancesToRouteTrees', () => {
},
routes: [route],
},
status: {},
},
];
@@ -51,7 +50,7 @@ describe('matchInstancesToRouteTrees', () => {
const trees: RoutingTree[] = [
{
kind: 'RoutingTree',
apiVersion: VERSION,
apiVersion: API_VERSION,
metadata: { name: treeName },
spec: {
defaults: {
@@ -59,7 +58,6 @@ describe('matchInstancesToRouteTrees', () => {
},
routes: [route],
},
status: {},
},
];

View File

@@ -1,6 +1,10 @@
import { useCallback } from 'react';
import { RoutingTree, notificationsAPI } from '../../api/notifications/v0alpha1/notifications.api.gen';
import {
RoutingTree,
generatedAPI as notificationsAPIv0alpha1,
} from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { Label } from '../../matchers/types';
import { USER_DEFINED_TREE_NAME } from '../consts';
import { Route, RouteWithID } from '../types';
@@ -24,6 +28,11 @@ export type InstanceMatchResult = {
matchedRoutes: RouteMatch[];
};
interface UseMatchInstancesToRouteTreesReturnType
extends ReturnType<typeof notificationsAPIv0alpha1.endpoints.listRoutingTree.useQuery> {
matchInstancesToRouteTrees: (instances: Label[][]) => InstanceMatchResult[];
}
/**
* React hook that finds notification policy routes in all routing trees that match the provided set of alert instances.
*
@@ -35,8 +44,8 @@ export type InstanceMatchResult = {
* @returns An object containing a `matchInstancesToRouteTrees` function that takes alert instances
* and returns an array of InstanceMatchResult objects, each containing the matched routes and matching details
*/
export function useMatchInstancesToRouteTrees() {
const { data, ...rest } = notificationsAPI.endpoints.listRoutingTree.useQuery(
export function useMatchInstancesToRouteTrees(): UseMatchInstancesToRouteTreesReturnType {
const { data, ...rest } = notificationsAPIv0alpha1.endpoints.listRoutingTree.useQuery(
{},
{
refetchOnFocus: true,

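A hedged usage sketch of the hook documented above; the label values are invented, and each instance is a list of [name, value] Label tuples as defined in matchers/types:

// Hypothetical call site inside a component.
const { matchInstancesToRouteTrees, isLoading } = useMatchInstancesToRouteTrees();

const results = isLoading
  ? []
  : matchInstancesToRouteTrees([
      [
        ['alertname', 'HighCPU'],
        ['severity', 'critical'],
      ],
    ]);
// each InstanceMatchResult pairs one instance with its matchedRoutes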
View File

@@ -1,6 +1,7 @@
import { OverrideProperties } from 'type-fest';
import { RoutingTreeRoute } from '../api/notifications/v0alpha1/notifications.api.gen';
import { RoutingTreeRoute } from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { LabelMatcher } from '../matchers/types';
// type-narrow the route tree

View File

@@ -1,6 +1,7 @@
import { groupBy, isArray, pick, reduce, uniqueId } from 'lodash';
import { RoutingTree, RoutingTreeRoute } from '../api/notifications/v0alpha1/notifications.api.gen';
import { RoutingTree, RoutingTreeRoute } from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { Label } from '../matchers/types';
import { LabelMatchDetails, matchLabels } from '../matchers/utils';

View File

@@ -19,5 +19,5 @@ export { type LabelMatcher, type Label } from './grafana/matchers/types';
export { matchLabelsSet, matchLabels, isLabelMatch, type LabelMatchDetails } from './grafana/matchers/utils';
// API endpoints
export { notificationsAPI as notificationsAPIv0alpha1 } from './grafana/api/notifications/v0alpha1/notifications.api.gen';
export { rulesAPI as rulesAPIv0alpha1 } from './grafana/api/rules/v0alpha1/rules.api.gen';
export { generatedAPI as notificationsAPIv0alpha1 } from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
export { generatedAPI as rulesAPIv0alpha1 } from '@grafana/api-clients/rtkq/rules.alerting/v0alpha1';

View File

@@ -2,13 +2,25 @@ import { configureStore } from '@reduxjs/toolkit';
import { useEffect } from 'react';
import { Provider } from 'react-redux';
import { notificationsAPIv0alpha1 } from '../src/unstable';
import { MockBackendSrv } from '@grafana/api-clients';
import { generatedAPI as notificationsAPIv0alpha1 } from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';
import { generatedAPI as rulesAPIv0alpha1 } from '@grafana/api-clients/rtkq/rules.alerting/v0alpha1';
import { setBackendSrv } from '@grafana/runtime';
// Initialize BackendSrv for tests - this allows RTKQ to make HTTP requests
// The actual HTTP requests will be intercepted by MSW (setupMockServer)
// We only need to implement fetch() which is what RTKQ uses
// we could remove this once @grafana/api-clients no longer uses the BackendSrv
// @ts-ignore
setBackendSrv(new MockBackendSrv());
// create an empty store
export const store = configureStore({
middleware: (getDefaultMiddleware) => getDefaultMiddleware().concat(notificationsAPIv0alpha1.middleware),
export const store: ReturnType<typeof configureStore> = configureStore({
middleware: (getDefaultMiddleware) =>
getDefaultMiddleware().concat(notificationsAPIv0alpha1.middleware).concat(rulesAPIv0alpha1.middleware),
reducer: {
[notificationsAPIv0alpha1.reducerPath]: notificationsAPIv0alpha1.reducer,
[rulesAPIv0alpha1.reducerPath]: rulesAPIv0alpha1.reducer,
},
});
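As a sketch of how the exported store is typically consumed in tests (the wrapper component is hypothetical; Provider is already imported above):

// Hypothetical wrapper so rendered components can reach the RTKQ cache.
const TestProviders = ({ children }: { children: React.ReactNode }) => (
  <Provider store={store}>{children}</Provider>
);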

View File

@@ -13,7 +13,14 @@ import '@testing-library/jest-dom';
* method which wraps the passed element in all of the necessary Providers,
* so it can render correctly in the context of the application
*/
const customRender = (ui: React.ReactNode, renderOptions: RenderOptions = {}) => {
const customRender = (
ui: React.ReactNode,
renderOptions: RenderOptions = {}
): {
renderResult: ReturnType<typeof render>;
user: ReturnType<typeof userEvent.setup>;
store: typeof store;
} => {
const user = userEvent.setup();
const Providers = renderOptions.wrapper || getDefaultWrapper();

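A hedged example of the test helper in use; the component and button label are invented:

// In a test body: customRender returns the RTL render result, a userEvent
// instance, and the store.
const { renderResult, user } = customRender(<ContactPointList />);
await user.click(await renderResult.findByRole('button', { name: 'Create contact point' }));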
View File

@@ -116,6 +116,18 @@
"import": "./dist/esm/clients/rtkq/shorturl/v1beta1/index.mjs",
"require": "./dist/cjs/clients/rtkq/shorturl/v1beta1/index.cjs"
},
"./rtkq/notifications.alerting/v0alpha1": {
"@grafana-app/source": "./src/clients/rtkq/notifications.alerting/v0alpha1/index.ts",
"types": "./dist/types/clients/rtkq/notifications.alerting/v0alpha1/index.d.ts",
"import": "./dist/esm/clients/rtkq/notifications.alerting/v0alpha1/index.mjs",
"require": "./dist/cjs/clients/rtkq/notifications.alerting/v0alpha1/index.cjs"
},
"./rtkq/rules.alerting/v0alpha1": {
"@grafana-app/source": "./src/clients/rtkq/rules.alerting/v0alpha1/index.ts",
"types": "./dist/types/clients/rtkq/rules.alerting/v0alpha1/index.d.ts",
"import": "./dist/esm/clients/rtkq/rules.alerting/v0alpha1/index.mjs",
"require": "./dist/cjs/clients/rtkq/rules.alerting/v0alpha1/index.cjs"
},
"./rtkq/historian.alerting/v0alpha1": {
"@grafana-app/source": "./src/clients/rtkq/historian.alerting/v0alpha1/index.ts",
"types": "./dist/types/clients/rtkq/historian.alerting/v0alpha1/index.d.ts",

View File

@@ -10,10 +10,12 @@ import { generatedAPI as historianAlertingAPIv0alpha1 } from './historian.alerti
import { generatedAPI as iamAPIv0alpha1 } from './iam/v0alpha1';
import { generatedAPI as logsdrilldownAPIv1alpha1 } from './logsdrilldown/v1alpha1';
import { generatedAPI as migrateToCloudAPI } from './migrate-to-cloud';
import { generatedAPI as notificationsAlertingAPIv0alpha1 } from './notifications.alerting/v0alpha1';
import { generatedAPI as playlistAPIv0alpha1 } from './playlist/v0alpha1';
import { generatedAPI as preferencesUserAPI } from './preferences/user';
import { generatedAPI as preferencesAPIv1alpha1 } from './preferences/v1alpha1';
import { generatedAPI as provisioningAPIv0alpha1 } from './provisioning/v0alpha1';
import { generatedAPI as rulesAlertingAPIv0alpha1 } from './rules.alerting/v0alpha1';
import { generatedAPI as shortURLAPIv1beta1 } from './shorturl/v1beta1';
import { generatedAPI as legacyUserAPI } from './user';
// PLOP_INJECT_IMPORT
@@ -33,6 +35,8 @@ export const allMiddleware = [
shortURLAPIv1beta1.middleware,
correlationsAPIv0alpha1.middleware,
legacyUserAPI.middleware,
notificationsAlertingAPIv0alpha1.middleware,
rulesAlertingAPIv0alpha1.middleware,
historianAlertingAPIv0alpha1.middleware,
logsdrilldownAPIv1alpha1.middleware,
// PLOP_INJECT_MIDDLEWARE
@@ -53,6 +57,8 @@ export const allReducers = {
[shortURLAPIv1beta1.reducerPath]: shortURLAPIv1beta1.reducer,
[correlationsAPIv0alpha1.reducerPath]: correlationsAPIv0alpha1.reducer,
[legacyUserAPI.reducerPath]: legacyUserAPI.reducer,
[notificationsAlertingAPIv0alpha1.reducerPath]: notificationsAlertingAPIv0alpha1.reducer,
[rulesAlertingAPIv0alpha1.reducerPath]: rulesAlertingAPIv0alpha1.reducer,
[historianAlertingAPIv0alpha1.reducerPath]: historianAlertingAPIv0alpha1.reducer,
[logsdrilldownAPIv1alpha1.reducerPath]: logsdrilldownAPIv1alpha1.reducer,
// PLOP_INJECT_REDUCER

View File

@@ -0,0 +1,16 @@
import { createApi } from '@reduxjs/toolkit/query/react';
import { getAPIBaseURL } from '../../../../utils/utils';
import { createBaseQuery } from '../../createBaseQuery';
export const API_GROUP = 'notifications.alerting.grafana.app' as const;
export const API_VERSION = 'v0alpha1' as const;
export const BASE_URL = getAPIBaseURL(API_GROUP, API_VERSION);
export const api = createApi({
reducerPath: 'notificationsAlertingAPIv0alpha1',
baseQuery: createBaseQuery({
baseURL: BASE_URL,
}),
endpoints: () => ({}),
});
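With the matching subpath export in package.json (shown earlier), consumers can import the generated client directly. A minimal sketch, assuming the consuming store registers this client's reducer and middleware:

// Hook names come from the generated endpoints file below.
import { useListReceiverQuery } from '@grafana/api-clients/rtkq/notifications.alerting/v0alpha1';

const { data: receivers, isLoading } = useListReceiverQuery({});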

View File

@@ -1,4 +1,4 @@
import { api } from './api';
import { api } from './baseAPI';
export const addTagTypes = ['API Discovery', 'Receiver', 'RoutingTree', 'TemplateGroup', 'TimeInterval'] as const;
const injectedRtkApi = api
.enhanceEndpoints({
@@ -7,7 +7,7 @@ const injectedRtkApi = api
.injectEndpoints({
endpoints: (build) => ({
getApiResources: build.query<GetApiResourcesApiResponse, GetApiResourcesApiArg>({
query: () => ({ url: `/apis/notifications.alerting.grafana.app/v0alpha1/` }),
query: () => ({ url: `/` }),
providesTags: ['API Discovery'],
}),
listReceiver: build.query<ListReceiverApiResponse, ListReceiverApiArg>({
@@ -119,44 +119,6 @@ const injectedRtkApi = api
}),
invalidatesTags: ['Receiver'],
}),
getReceiverStatus: build.query<GetReceiverStatusApiResponse, GetReceiverStatusApiArg>({
query: (queryArg) => ({
url: `/receivers/${queryArg.name}/status`,
params: {
pretty: queryArg.pretty,
},
}),
providesTags: ['Receiver'],
}),
replaceReceiverStatus: build.mutation<ReplaceReceiverStatusApiResponse, ReplaceReceiverStatusApiArg>({
query: (queryArg) => ({
url: `/receivers/${queryArg.name}/status`,
method: 'PUT',
body: queryArg.receiver,
params: {
pretty: queryArg.pretty,
dryRun: queryArg.dryRun,
fieldManager: queryArg.fieldManager,
fieldValidation: queryArg.fieldValidation,
},
}),
invalidatesTags: ['Receiver'],
}),
updateReceiverStatus: build.mutation<UpdateReceiverStatusApiResponse, UpdateReceiverStatusApiArg>({
query: (queryArg) => ({
url: `/receivers/${queryArg.name}/status`,
method: 'PATCH',
body: queryArg.patch,
params: {
pretty: queryArg.pretty,
dryRun: queryArg.dryRun,
fieldManager: queryArg.fieldManager,
fieldValidation: queryArg.fieldValidation,
force: queryArg.force,
},
}),
invalidatesTags: ['Receiver'],
}),
listRoutingTree: build.query<ListRoutingTreeApiResponse, ListRoutingTreeApiArg>({
query: (queryArg) => ({
url: `/routingtrees`,
@@ -269,44 +231,6 @@ const injectedRtkApi = api
}),
invalidatesTags: ['RoutingTree'],
}),
getRoutingTreeStatus: build.query<GetRoutingTreeStatusApiResponse, GetRoutingTreeStatusApiArg>({
query: (queryArg) => ({
url: `/routingtrees/${queryArg.name}/status`,
params: {
pretty: queryArg.pretty,
},
}),
providesTags: ['RoutingTree'],
}),
replaceRoutingTreeStatus: build.mutation<ReplaceRoutingTreeStatusApiResponse, ReplaceRoutingTreeStatusApiArg>({
query: (queryArg) => ({
url: `/routingtrees/${queryArg.name}/status`,
method: 'PUT',
body: queryArg.routingTree,
params: {
pretty: queryArg.pretty,
dryRun: queryArg.dryRun,
fieldManager: queryArg.fieldManager,
fieldValidation: queryArg.fieldValidation,
},
}),
invalidatesTags: ['RoutingTree'],
}),
updateRoutingTreeStatus: build.mutation<UpdateRoutingTreeStatusApiResponse, UpdateRoutingTreeStatusApiArg>({
query: (queryArg) => ({
url: `/routingtrees/${queryArg.name}/status`,
method: 'PATCH',
body: queryArg.patch,
params: {
pretty: queryArg.pretty,
dryRun: queryArg.dryRun,
fieldManager: queryArg.fieldManager,
fieldValidation: queryArg.fieldValidation,
force: queryArg.force,
},
}),
invalidatesTags: ['RoutingTree'],
}),
listTemplateGroup: build.query<ListTemplateGroupApiResponse, ListTemplateGroupApiArg>({
query: (queryArg) => ({
url: `/templategroups`,
@@ -419,47 +343,6 @@ const injectedRtkApi = api
}),
invalidatesTags: ['TemplateGroup'],
}),
getTemplateGroupStatus: build.query<GetTemplateGroupStatusApiResponse, GetTemplateGroupStatusApiArg>({
query: (queryArg) => ({
url: `/templategroups/${queryArg.name}/status`,
params: {
pretty: queryArg.pretty,
},
}),
providesTags: ['TemplateGroup'],
}),
replaceTemplateGroupStatus: build.mutation<
ReplaceTemplateGroupStatusApiResponse,
ReplaceTemplateGroupStatusApiArg
>({
query: (queryArg) => ({
url: `/templategroups/${queryArg.name}/status`,
method: 'PUT',
body: queryArg.templateGroup,
params: {
pretty: queryArg.pretty,
dryRun: queryArg.dryRun,
fieldManager: queryArg.fieldManager,
fieldValidation: queryArg.fieldValidation,
},
}),
invalidatesTags: ['TemplateGroup'],
}),
updateTemplateGroupStatus: build.mutation<UpdateTemplateGroupStatusApiResponse, UpdateTemplateGroupStatusApiArg>({
query: (queryArg) => ({
url: `/templategroups/${queryArg.name}/status`,
method: 'PATCH',
body: queryArg.patch,
params: {
pretty: queryArg.pretty,
dryRun: queryArg.dryRun,
fieldManager: queryArg.fieldManager,
fieldValidation: queryArg.fieldValidation,
force: queryArg.force,
},
}),
invalidatesTags: ['TemplateGroup'],
}),
listTimeInterval: build.query<ListTimeIntervalApiResponse, ListTimeIntervalApiArg>({
query: (queryArg) => ({
url: `/timeintervals`,
@@ -572,48 +455,10 @@ const injectedRtkApi = api
}),
invalidatesTags: ['TimeInterval'],
}),
getTimeIntervalStatus: build.query<GetTimeIntervalStatusApiResponse, GetTimeIntervalStatusApiArg>({
query: (queryArg) => ({
url: `/timeintervals/${queryArg.name}/status`,
params: {
pretty: queryArg.pretty,
},
}),
providesTags: ['TimeInterval'],
}),
replaceTimeIntervalStatus: build.mutation<ReplaceTimeIntervalStatusApiResponse, ReplaceTimeIntervalStatusApiArg>({
query: (queryArg) => ({
url: `/timeintervals/${queryArg.name}/status`,
method: 'PUT',
body: queryArg.timeInterval,
params: {
pretty: queryArg.pretty,
dryRun: queryArg.dryRun,
fieldManager: queryArg.fieldManager,
fieldValidation: queryArg.fieldValidation,
},
}),
invalidatesTags: ['TimeInterval'],
}),
updateTimeIntervalStatus: build.mutation<UpdateTimeIntervalStatusApiResponse, UpdateTimeIntervalStatusApiArg>({
query: (queryArg) => ({
url: `/timeintervals/${queryArg.name}/status`,
method: 'PATCH',
body: queryArg.patch,
params: {
pretty: queryArg.pretty,
dryRun: queryArg.dryRun,
fieldManager: queryArg.fieldManager,
fieldValidation: queryArg.fieldValidation,
force: queryArg.force,
},
}),
invalidatesTags: ['TimeInterval'],
}),
}),
overrideExisting: false,
});
export { injectedRtkApi as notificationsAPI };
export { injectedRtkApi as generatedAPI };
export type GetApiResourcesApiResponse = /** status 200 OK */ ApiResourceList;
export type GetApiResourcesApiArg = void;
export type ListReceiverApiResponse = /** status 200 OK */ ReceiverList;
@@ -781,43 +626,6 @@ export type UpdateReceiverApiArg = {
force?: boolean;
patch: Patch;
};
export type GetReceiverStatusApiResponse = /** status 200 OK */ Receiver;
export type GetReceiverStatusApiArg = {
/** name of the Receiver */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
};
export type ReplaceReceiverStatusApiResponse = /** status 200 OK */ Receiver | /** status 201 Created */ Receiver;
export type ReplaceReceiverStatusApiArg = {
/** name of the Receiver */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
/** When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed */
dryRun?: string;
/** fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. */
fieldManager?: string;
/** fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. */
fieldValidation?: string;
receiver: Receiver;
};
export type UpdateReceiverStatusApiResponse = /** status 200 OK */ Receiver | /** status 201 Created */ Receiver;
export type UpdateReceiverStatusApiArg = {
/** name of the Receiver */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
/** When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed */
dryRun?: string;
/** fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). */
fieldManager?: string;
/** fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. */
fieldValidation?: string;
/** Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. */
force?: boolean;
patch: Patch;
};
export type ListRoutingTreeApiResponse = /** status 200 OK */ RoutingTreeList;
export type ListRoutingTreeApiArg = {
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
@@ -983,47 +791,6 @@ export type UpdateRoutingTreeApiArg = {
force?: boolean;
patch: Patch;
};
export type GetRoutingTreeStatusApiResponse = /** status 200 OK */ RoutingTree;
export type GetRoutingTreeStatusApiArg = {
/** name of the RoutingTree */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
};
export type ReplaceRoutingTreeStatusApiResponse = /** status 200 OK */
| RoutingTree
| /** status 201 Created */ RoutingTree;
export type ReplaceRoutingTreeStatusApiArg = {
/** name of the RoutingTree */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
/** When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed */
dryRun?: string;
/** fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. */
fieldManager?: string;
/** fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. */
fieldValidation?: string;
routingTree: RoutingTree;
};
export type UpdateRoutingTreeStatusApiResponse = /** status 200 OK */
| RoutingTree
| /** status 201 Created */ RoutingTree;
export type UpdateRoutingTreeStatusApiArg = {
/** name of the RoutingTree */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
/** When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed */
dryRun?: string;
/** fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). */
fieldManager?: string;
/** fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. */
fieldValidation?: string;
/** Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. */
force?: boolean;
patch: Patch;
};
export type ListTemplateGroupApiResponse = /** status 200 OK */ TemplateGroupList;
export type ListTemplateGroupApiArg = {
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
@@ -1193,47 +960,6 @@ export type UpdateTemplateGroupApiArg = {
force?: boolean;
patch: Patch;
};
export type GetTemplateGroupStatusApiResponse = /** status 200 OK */ TemplateGroup;
export type GetTemplateGroupStatusApiArg = {
/** name of the TemplateGroup */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
};
export type ReplaceTemplateGroupStatusApiResponse = /** status 200 OK */
| TemplateGroup
| /** status 201 Created */ TemplateGroup;
export type ReplaceTemplateGroupStatusApiArg = {
/** name of the TemplateGroup */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
/** When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed */
dryRun?: string;
/** fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. */
fieldManager?: string;
/** fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. */
fieldValidation?: string;
templateGroup: TemplateGroup;
};
export type UpdateTemplateGroupStatusApiResponse = /** status 200 OK */
| TemplateGroup
| /** status 201 Created */ TemplateGroup;
export type UpdateTemplateGroupStatusApiArg = {
/** name of the TemplateGroup */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
/** When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed */
dryRun?: string;
/** fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). */
fieldManager?: string;
/** fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. */
fieldValidation?: string;
/** Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. */
force?: boolean;
patch: Patch;
};
export type ListTimeIntervalApiResponse = /** status 200 OK */ TimeIntervalList;
export type ListTimeIntervalApiArg = {
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
@@ -1399,47 +1125,6 @@ export type UpdateTimeIntervalApiArg = {
force?: boolean;
patch: Patch;
};
export type GetTimeIntervalStatusApiResponse = /** status 200 OK */ TimeInterval;
export type GetTimeIntervalStatusApiArg = {
/** name of the TimeInterval */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
};
export type ReplaceTimeIntervalStatusApiResponse = /** status 200 OK */
| TimeInterval
| /** status 201 Created */ TimeInterval;
export type ReplaceTimeIntervalStatusApiArg = {
/** name of the TimeInterval */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
/** When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed */
dryRun?: string;
/** fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. */
fieldManager?: string;
/** fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. */
fieldValidation?: string;
timeInterval: TimeInterval;
};
export type UpdateTimeIntervalStatusApiResponse = /** status 200 OK */
| TimeInterval
| /** status 201 Created */ TimeInterval;
export type UpdateTimeIntervalStatusApiArg = {
/** name of the TimeInterval */
name: string;
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
pretty?: string;
/** When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed */
dryRun?: string;
/** fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). */
fieldManager?: string;
/** fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. */
fieldValidation?: string;
/** Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. */
force?: boolean;
patch: Patch;
};
export type ApiResource = {
/** categories is a list of the grouped resources this resource belongs to (e.g. 'all') */
categories?: string[];
@@ -1572,34 +1257,6 @@ export type ReceiverSpec = {
integrations: ReceiverIntegration[];
title: string;
};
export type ReceiverOperatorState = {
/** descriptiveState is an optional more descriptive state field which has no requirements on format */
descriptiveState?: string;
/** details contains any extra information that is operator-specific */
details?: {
[key: string]: {
[key: string]: any;
};
};
/** lastEvaluation is the ResourceVersion last evaluated */
lastEvaluation: string;
/** state describes the state of the lastEvaluation.
It is limited to three possible states for machine evaluation. */
state: 'success' | 'in_progress' | 'failed';
};
export type ReceiverStatus = {
/** additionalFields is reserved for future use */
additionalFields?: {
[key: string]: {
[key: string]: any;
};
};
/** operatorStates is a map of operator ID to operator state evaluations.
Any operator which consumes this kind SHOULD add its state evaluation information to this field. */
operatorStates?: {
[key: string]: ReceiverOperatorState;
};
};
export type Receiver = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
apiVersion: string;
@@ -1607,7 +1264,6 @@ export type Receiver = {
kind: string;
metadata: ObjectMeta;
spec: ReceiverSpec;
status?: ReceiverStatus;
};
export type ListMeta = {
/** continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the next set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message. */
@@ -1700,34 +1356,6 @@ export type RoutingTreeSpec = {
defaults: RoutingTreeRouteDefaults;
routes: RoutingTreeRoute[];
};
export type RoutingTreeOperatorState = {
/** descriptiveState is an optional more descriptive state field which has no requirements on format */
descriptiveState?: string;
/** details contains any extra information that is operator-specific */
details?: {
[key: string]: {
[key: string]: any;
};
};
/** lastEvaluation is the ResourceVersion last evaluated */
lastEvaluation: string;
/** state describes the state of the lastEvaluation.
It is limited to three possible states for machine evaluation. */
state: 'success' | 'in_progress' | 'failed';
};
export type RoutingTreeStatus = {
/** additionalFields is reserved for future use */
additionalFields?: {
[key: string]: {
[key: string]: any;
};
};
/** operatorStates is a map of operator ID to operator state evaluations.
Any operator which consumes this kind SHOULD add its state evaluation information to this field. */
operatorStates?: {
[key: string]: RoutingTreeOperatorState;
};
};
export type RoutingTree = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
apiVersion: string;
@@ -1735,7 +1363,6 @@ export type RoutingTree = {
kind: string;
metadata: ObjectMeta;
spec: RoutingTreeSpec;
status?: RoutingTreeStatus;
};
export type RoutingTreeList = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
@@ -1745,38 +1372,12 @@ export type RoutingTreeList = {
kind?: string;
metadata: ListMeta;
};
export type TemplateGroupTemplateKind = 'grafana' | 'mimir';
export type TemplateGroupSpec = {
content: string;
kind: TemplateGroupTemplateKind;
title: string;
};
export type TemplateGroupOperatorState = {
/** descriptiveState is an optional more descriptive state field which has no requirements on format */
descriptiveState?: string;
/** details contains any extra information that is operator-specific */
details?: {
[key: string]: {
[key: string]: any;
};
};
/** lastEvaluation is the ResourceVersion last evaluated */
lastEvaluation: string;
/** state describes the state of the lastEvaluation.
It is limited to three possible states for machine evaluation. */
state: 'success' | 'in_progress' | 'failed';
};
export type TemplateGroupStatus = {
/** additionalFields is reserved for future use */
additionalFields?: {
[key: string]: {
[key: string]: any;
};
};
/** operatorStates is a map of operator ID to operator state evaluations.
Any operator which consumes this kind SHOULD add its state evaluation information to this field. */
operatorStates?: {
[key: string]: TemplateGroupOperatorState;
};
};
export type TemplateGroup = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
apiVersion: string;
@@ -1784,7 +1385,6 @@ export type TemplateGroup = {
kind: string;
metadata: ObjectMeta;
spec: TemplateGroupSpec;
status?: TemplateGroupStatus;
};
export type TemplateGroupList = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
@@ -1810,34 +1410,6 @@ export type TimeIntervalSpec = {
name: string;
time_intervals: TimeIntervalInterval[];
};
export type TimeIntervalOperatorState = {
/** descriptiveState is an optional more descriptive state field which has no requirements on format */
descriptiveState?: string;
/** details contains any extra information that is operator-specific */
details?: {
[key: string]: {
[key: string]: any;
};
};
/** lastEvaluation is the ResourceVersion last evaluated */
lastEvaluation: string;
/** state describes the state of the lastEvaluation.
It is limited to three possible states for machine evaluation. */
state: 'success' | 'in_progress' | 'failed';
};
export type TimeIntervalStatus = {
/** additionalFields is reserved for future use */
additionalFields?: {
[key: string]: {
[key: string]: any;
};
};
/** operatorStates is a map of operator ID to operator state evaluations.
Any operator which consumes this kind SHOULD add its state evaluation information to this field. */
operatorStates?: {
[key: string]: TimeIntervalOperatorState;
};
};
export type TimeInterval = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
apiVersion: string;
@@ -1845,7 +1417,6 @@ export type TimeInterval = {
kind: string;
metadata: ObjectMeta;
spec: TimeIntervalSpec;
status?: TimeIntervalStatus;
};
export type TimeIntervalList = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
@@ -1855,3 +1426,43 @@ export type TimeIntervalList = {
kind?: string;
metadata: ListMeta;
};
export const {
useGetApiResourcesQuery,
useLazyGetApiResourcesQuery,
useListReceiverQuery,
useLazyListReceiverQuery,
useCreateReceiverMutation,
useDeletecollectionReceiverMutation,
useGetReceiverQuery,
useLazyGetReceiverQuery,
useReplaceReceiverMutation,
useDeleteReceiverMutation,
useUpdateReceiverMutation,
useListRoutingTreeQuery,
useLazyListRoutingTreeQuery,
useCreateRoutingTreeMutation,
useDeletecollectionRoutingTreeMutation,
useGetRoutingTreeQuery,
useLazyGetRoutingTreeQuery,
useReplaceRoutingTreeMutation,
useDeleteRoutingTreeMutation,
useUpdateRoutingTreeMutation,
useListTemplateGroupQuery,
useLazyListTemplateGroupQuery,
useCreateTemplateGroupMutation,
useDeletecollectionTemplateGroupMutation,
useGetTemplateGroupQuery,
useLazyGetTemplateGroupQuery,
useReplaceTemplateGroupMutation,
useDeleteTemplateGroupMutation,
useUpdateTemplateGroupMutation,
useListTimeIntervalQuery,
useLazyListTimeIntervalQuery,
useCreateTimeIntervalMutation,
useDeletecollectionTimeIntervalMutation,
useGetTimeIntervalQuery,
useLazyGetTimeIntervalQuery,
useReplaceTimeIntervalMutation,
useDeleteTimeIntervalMutation,
useUpdateTimeIntervalMutation,
} = injectedRtkApi;

View File

@@ -0,0 +1,5 @@
export { BASE_URL, API_GROUP, API_VERSION } from './baseAPI';
import { generatedAPI as rawAPI } from './endpoints.gen';
export * from './endpoints.gen';
export const generatedAPI = rawAPI.enhanceEndpoints({});
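The empty enhanceEndpoints({}) call leaves a seam for hand-written customization on top of the generated client. A hypothetical example of what could be layered in here:

// Sketch only: tweak one generated endpoint without touching endpoints.gen.
export const generatedAPI = rawAPI.enhanceEndpoints({
  endpoints: {
    createReceiver: {
      // e.g. broaden cache invalidation beyond the generated tags
      invalidatesTags: ['Receiver', 'RoutingTree'],
    },
  },
});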

View File

@@ -0,0 +1,16 @@
import { createApi } from '@reduxjs/toolkit/query/react';
import { getAPIBaseURL } from '../../../../utils/utils';
import { createBaseQuery } from '../../createBaseQuery';
export const API_GROUP = 'rules.alerting.grafana.app' as const;
export const API_VERSION = 'v0alpha1' as const;
export const BASE_URL = getAPIBaseURL(API_GROUP, API_VERSION);
export const api = createApi({
reducerPath: 'rulesAlertingAPIv0alpha1',
baseQuery: createBaseQuery({
baseURL: BASE_URL,
}),
endpoints: () => ({}),
});

View File

@@ -1,4 +1,4 @@
import { api } from './api';
import { api } from './baseAPI';
export const addTagTypes = ['API Discovery', 'AlertRule', 'RecordingRule'] as const;
const injectedRtkApi = api
.enhanceEndpoints({
@@ -7,7 +7,7 @@ const injectedRtkApi = api
.injectEndpoints({
endpoints: (build) => ({
getApiResources: build.query<GetApiResourcesApiResponse, GetApiResourcesApiArg>({
query: () => ({ url: `/apis/rules.alerting.grafana.app/v0alpha1/` }),
query: () => ({ url: `/` }),
providesTags: ['API Discovery'],
}),
listAlertRule: build.query<ListAlertRuleApiResponse, ListAlertRuleApiArg>({
@@ -313,7 +313,7 @@ const injectedRtkApi = api
}),
overrideExisting: false,
});
export { injectedRtkApi as rulesAPI };
export { injectedRtkApi as generatedAPI };
export type GetApiResourcesApiResponse = /** status 200 OK */ ApiResourceList;
export type GetApiResourcesApiArg = void;
export type ListAlertRuleApiResponse = /** status 200 OK */ AlertRuleList;
@@ -1085,3 +1085,33 @@ export type RecordingRuleList = {
kind?: string;
metadata: ListMeta;
};
export const {
useGetApiResourcesQuery,
useLazyGetApiResourcesQuery,
useListAlertRuleQuery,
useLazyListAlertRuleQuery,
useCreateAlertRuleMutation,
useDeletecollectionAlertRuleMutation,
useGetAlertRuleQuery,
useLazyGetAlertRuleQuery,
useReplaceAlertRuleMutation,
useDeleteAlertRuleMutation,
useUpdateAlertRuleMutation,
useGetAlertRuleStatusQuery,
useLazyGetAlertRuleStatusQuery,
useReplaceAlertRuleStatusMutation,
useUpdateAlertRuleStatusMutation,
useListRecordingRuleQuery,
useLazyListRecordingRuleQuery,
useCreateRecordingRuleMutation,
useDeletecollectionRecordingRuleMutation,
useGetRecordingRuleQuery,
useLazyGetRecordingRuleQuery,
useReplaceRecordingRuleMutation,
useDeleteRecordingRuleMutation,
useUpdateRecordingRuleMutation,
useGetRecordingRuleStatusQuery,
useLazyGetRecordingRuleStatusQuery,
useReplaceRecordingRuleStatusMutation,
useUpdateRecordingRuleStatusMutation,
} = injectedRtkApi;

View File

@@ -0,0 +1,5 @@
export { BASE_URL, API_GROUP, API_VERSION } from './baseAPI';
import { generatedAPI as rawAPI } from './endpoints.gen';
export * from './endpoints.gen';
export const generatedAPI = rawAPI.enhanceEndpoints({});

View File

@@ -1 +1,4 @@
export { getAPINamespace, getAPIBaseURL, normalizeError, handleRequestError } from './utils/utils';
/* @TODO figure out how to automatically set the MockBackendSrv when consumers of this package write tests using the exported clients */
export { MockBackendSrv } from './utils/backendSrv.mock';

View File

@@ -108,6 +108,8 @@ const config: ConfigFile = {
...createAPIConfig('preferences', 'v1alpha1'),
...createAPIConfig('provisioning', 'v0alpha1'),
...createAPIConfig('shorturl', 'v1beta1'),
...createAPIConfig('notifications.alerting', 'v0alpha1'),
...createAPIConfig('rules.alerting', 'v0alpha1'),
...createAPIConfig('historian.alerting', 'v0alpha1'),
...createAPIConfig('logsdrilldown', 'v1alpha1'),
// PLOP_INJECT_API_CLIENT - Used by the API client generator

View File

@@ -0,0 +1,46 @@
import { Observable } from 'rxjs';
import { fromFetch } from 'rxjs/fetch';
import { BackendSrv, BackendSrvRequest, FetchResponse } from '@grafana/runtime';
/**
* Minimal mock implementation of BackendSrv for testing.
* Only implements the fetch() method which is used by RTKQ.
* HTTP requests are intercepted by MSW in tests.
*/
export class MockBackendSrv implements Partial<BackendSrv> {
fetch<T>(options: BackendSrvRequest): Observable<FetchResponse<T>> {
const init: RequestInit = {
method: options.method || 'GET',
headers: options.headers,
body: options.data ? JSON.stringify(options.data) : undefined,
credentials: options.credentials,
signal: options.abortSignal,
};
return new Observable((observer) => {
fromFetch(options.url, init).subscribe({
next: async (response) => {
try {
const data = await response.json();
observer.next({
data,
status: response.status,
statusText: response.statusText,
ok: response.ok,
headers: response.headers,
redirected: response.redirected,
type: response.type,
url: response.url,
config: options,
});
observer.complete();
} catch (error) {
observer.error(error);
}
},
error: (error) => observer.error(error),
});
});
}
}
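A sketch of how this mock gets wired up in test bootstrap code, mirroring the store setup earlier in this diff:

// Route RTKQ's requests through the mock so MSW can intercept them.
import { setBackendSrv } from '@grafana/runtime';
import { MockBackendSrv } from '@grafana/api-clients';

// @ts-ignore MockBackendSrv implements only the fetch() subset of BackendSrv
setBackendSrv(new MockBackendSrv());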

View File

@@ -42,5 +42,6 @@ export enum DataTransformerID {
formatTime = 'formatTime',
formatString = 'formatString',
regression = 'regression',
smoothing = 'smoothing',
groupToNestedTable = 'groupToNestedTable',
}

View File

@@ -1255,4 +1255,12 @@ export interface FeatureToggles {
* Enables support for variables whose values can have multiple properties
*/
multiPropsVariables?: boolean;
/**
* Enables the ASAP smoothing transformation for time series data
*/
smoothingTransformation?: boolean;
/**
* Enables the creation of keepers that manage secrets stored on AWS secrets manager
*/
secretsManagementAppPlatformAwsKeeper?: boolean;
}
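Code gating on the new toggles would read them from the runtime config; a minimal sketch using @grafana/runtime:

import { config } from '@grafana/runtime';

// Register the ASAP smoothing transformer only when its toggle is enabled
// (the registration call itself is omitted here).
if (config.featureToggles.smoothingTransformation) {
  // ...
}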

File diff suppressed because it is too large

View File

@@ -1,834 +0,0 @@
{
"_comment": "Core Grafana history https://github.com/grafana/grafana/blob/v11.0.0-preview/public/app/plugins/datasource/prometheus/dashboards/prometheus_stats.json",
"__inputs": [
{
"name": "DS_GDEV-PROMETHEUS",
"label": "gdev-prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "8.1.0-pre"
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "stat",
"name": "Stat",
"version": ""
},
{
"type": "panel",
"id": "text",
"name": "Text",
"version": ""
},
{
"type": "panel",
"id": "timeseries",
"name": "Time series",
"version": ""
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": null,
"iteration": 1624859749459,
"links": [
{
"icon": "info",
"tags": [],
"targetBlank": true,
"title": "Grafana Docs",
"tooltip": "",
"type": "link",
"url": "https://grafana.com/docs/grafana/latest/"
},
{
"icon": "info",
"tags": [],
"targetBlank": true,
"title": "Prometheus Docs",
"type": "link",
"url": "http://prometheus.io/docs/introduction/overview/"
}
],
"panels": [
{
"cacheTimeout": null,
"datasource": "${DS_GDEV-PROMETHEUS}",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"decimals": 1,
"mappings": [
{
"options": {
"match": "null",
"result": {
"text": "N/A"
}
},
"type": "special"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 0,
"y": 0
},
"id": 5,
"interval": null,
"links": [],
"maxDataPoints": 100,
"options": {
"colorMode": "none",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "horizontal",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"text": {},
"textMode": "auto"
},
"pluginVersion": "8.1.0-pre",
"targets": [
{
"expr": "(time() - process_start_time_seconds{job=\"prometheus\", instance=~\"$node\"})",
"intervalFactor": 2,
"refId": "A"
}
],
"title": "Uptime",
"type": "stat"
},
{
"cacheTimeout": null,
"datasource": "${DS_GDEV-PROMETHEUS}",
"fieldConfig": {
"defaults": {
"color": {
"fixedColor": "rgb(31, 120, 193)",
"mode": "fixed"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "rgba(50, 172, 45, 0.97)",
"value": null
},
{
"color": "rgba(237, 129, 40, 0.89)",
"value": 1
},
{
"color": "rgba(245, 54, 54, 0.9)",
"value": 5
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 6,
"y": 0
},
"id": 6,
"interval": null,
"links": [],
"maxDataPoints": 100,
"options": {
"colorMode": "none",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "horizontal",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"text": {},
"textMode": "auto"
},
"pluginVersion": "8.1.0-pre",
"targets": [
{
"expr": "prometheus_local_storage_memory_series{instance=~\"$node\"}",
"intervalFactor": 2,
"refId": "A"
}
],
"title": "Local Storage Memory Series",
"type": "stat"
},
{
"cacheTimeout": null,
"datasource": "${DS_GDEV-PROMETHEUS}",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [
{
"options": {
"0": {
"text": "Empty"
}
},
"type": "value"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "rgba(50, 172, 45, 0.97)",
"value": null
},
{
"color": "rgba(237, 129, 40, 0.89)",
"value": 500
},
{
"color": "rgba(245, 54, 54, 0.9)",
"value": 4000
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 12,
"y": 0
},
"id": 7,
"interval": null,
"links": [],
"maxDataPoints": 100,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "horizontal",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"text": {},
"textMode": "auto"
},
"pluginVersion": "8.1.0-pre",
"targets": [
{
"expr": "prometheus_local_storage_indexing_queue_length{instance=~\"$node\"}",
"intervalFactor": 2,
"refId": "A"
}
],
"title": "Internal Storage Queue Length",
"type": "stat"
},
{
"datasource": null,
"editable": true,
"error": false,
"gridPos": {
"h": 5,
"w": 6,
"x": 18,
"y": 0
},
"id": 9,
"links": [],
"options": {
"content": "<span style=\"font-family: 'Open Sans', 'Helvetica Neue', Helvetica; font-size: 25px;vertical-align: text-top;color: #bbbfc2;margin-left: 10px;\">Prometheus</span>\n\n<p style=\"margin-top: 10px;\">You're using Prometheus, an open-source systems monitoring and alerting toolkit originally built at SoundCloud. For more information, check out the <a href=\"https://grafana.com/\">Grafana</a> and <a href=\"http://prometheus.io/\">Prometheus</a> projects.</p>",
"mode": "html"
},
"pluginVersion": "8.1.0-pre",
"style": {},
"transparent": true,
"type": "text"
},
{
"datasource": "${DS_GDEV-PROMETHEUS}",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [],
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "prometheus"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "#C15C17",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "{instance=\"localhost:9090\",job=\"prometheus\"}"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "#C15C17",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 6,
"w": 18,
"x": 0,
"y": 5
},
"id": 3,
"links": [],
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"pluginVersion": "8.1.0-pre",
"targets": [
{
"expr": "rate(prometheus_local_storage_ingested_samples_total{instance=~\"$node\"}[5m])",
"interval": "",
"intervalFactor": 2,
"legendFormat": "{{job}}",
"metric": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Samples ingested (rate-5m)",
"type": "timeseries"
},
{
"datasource": null,
"editable": true,
"error": false,
"gridPos": {
"h": 6,
"w": 4,
"x": 18,
"y": 5
},
"id": 8,
"links": [],
"options": {
"content": "#### Samples Ingested\nThis graph displays the count of samples ingested by the Prometheus server, as measured over the last 5 minutes, per time series in the range vector. When troubleshooting an issue on IRC or GitHub, this is often the first stat requested by the Prometheus team. ",
"mode": "markdown"
},
"pluginVersion": "8.1.0-pre",
"style": {},
"transparent": true,
"type": "text"
},
{
"datasource": "${DS_GDEV-PROMETHEUS}",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [],
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "prometheus"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "#F9BA8F",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "{instance=\"localhost:9090\",interval=\"5s\",job=\"prometheus\"}"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "#F9BA8F",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 7,
"w": 10,
"x": 0,
"y": 11
},
"id": 2,
"links": [],
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"pluginVersion": "8.1.0-pre",
"targets": [
{
"expr": "rate(prometheus_target_interval_length_seconds_count{instance=~\"$node\"}[5m])",
"intervalFactor": 2,
"legendFormat": "{{job}}",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Target Scrapes (last 5m)",
"type": "timeseries"
},
{
"datasource": "${DS_GDEV-PROMETHEUS}",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [],
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 8,
"x": 10,
"y": 11
},
"id": 14,
"links": [],
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"pluginVersion": "8.1.0-pre",
"targets": [
{
"expr": "prometheus_target_interval_length_seconds{quantile!=\"0.01\", quantile!=\"0.05\",instance=~\"$node\"}",
"interval": "",
"intervalFactor": 2,
"legendFormat": "{{quantile}} ({{interval}})",
"metric": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Scrape Duration",
"type": "timeseries"
},
{
"datasource": null,
"editable": true,
"error": false,
"gridPos": {
"h": 7,
"w": 6,
"x": 18,
"y": 11
},
"id": 11,
"links": [],
"options": {
"content": "#### Scrapes\nPrometheus scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. Target scrapes will show how frequently targets are scraped, as measured over the last 5 minutes, per time series in the range vector. Scrape Duration will show how long the scrapes are taking, with percentiles available as series. ",
"mode": "markdown"
},
"pluginVersion": "8.1.0-pre",
"style": {},
"transparent": true,
"type": "text"
},
{
"datasource": "${DS_GDEV-PROMETHEUS}",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [],
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 18,
"x": 0,
"y": 18
},
"id": 12,
"links": [],
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"pluginVersion": "8.1.0-pre",
"targets": [
{
"expr": "prometheus_evaluator_duration_seconds{quantile!=\"0.01\", quantile!=\"0.05\",instance=~\"$node\"}",
"interval": "",
"intervalFactor": 2,
"legendFormat": "{{quantile}}",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Rule Eval Duration",
"type": "timeseries"
},
{
"datasource": null,
"editable": true,
"error": false,
"gridPos": {
"h": 7,
"w": 6,
"x": 18,
"y": 18
},
"id": 15,
"links": [],
"options": {
"content": "#### Rule Evaluation Duration\nThis graph panel plots the duration for all evaluations to execute. The 50th percentile, 90th percentile and 99th percentile are shown as three separate series to help identify outliers that may be skewing the data.",
"mode": "markdown"
},
"pluginVersion": "8.1.0-pre",
"style": {},
"transparent": true,
"type": "text"
}
],
"refresh": false,
"revision": "1.0",
"schemaVersion": 30,
"tags": ["prometheus"],
"templating": {
"list": [
{
"allValue": null,
"current": {},
"datasource": "${DS_GDEV-PROMETHEUS}",
"definition": "",
"description": null,
"error": null,
"hide": 0,
"includeAll": false,
"label": "HOST:",
"multi": false,
"name": "node",
"options": [],
"query": {
"query": "label_values(prometheus_build_info, instance)",
"refId": "gdev-prometheus-node-Variable-Query"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-5m",
"to": "now"
},
"timepicker": {
"now": true,
"refresh_intervals": ["5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d"]
},
"timezone": "browser",
"title": "Prometheus Stats",
"uid": "rpfmFFz7z",
"version": 2
}
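A note on the panel queries above: each stat panel reduces its query to the last non-null sample (the lastNotNull calc), so the same PromQL expressions can be sanity-checked directly against the Prometheus HTTP API before editing the dashboard. A minimal Go sketch, assuming a Prometheus server reachable at localhost:9090 (the address is hypothetical, not taken from the dashboard):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Same expression as the "Uptime" stat panel, with the $node
	// template variable expanded to a concrete instance.
	expr := `time() - process_start_time_seconds{job="prometheus", instance="localhost:9090"}`

	resp, err := http.Get("http://localhost:9090/api/v1/query?query=" + url.QueryEscape(expr))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON vector result; the value is uptime in seconds, matching "unit": "s"
}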

View File

@@ -46,6 +46,7 @@ import (
 	_ "sigs.k8s.io/randfill"
 	_ "xorm.io/builder"
+	_ "github.com/aws/aws-sdk-go-v2/service/secretsmanager"
 	_ "github.com/grafana/authlib/authn"
 	_ "github.com/grafana/authlib/authz"
 	_ "github.com/grafana/authlib/cache"

View File

@@ -1,7 +1,6 @@
 package middleware
 
 import (
-	"context"
 	"errors"
 	"net/http"
 	"net/url"
@@ -22,13 +21,6 @@ import (
 	"github.com/grafana/grafana/pkg/services/pluginsintegration/pluginstore"
 	"github.com/grafana/grafana/pkg/setting"
 	"github.com/grafana/grafana/pkg/web"
-	"github.com/open-feature/go-sdk/openfeature"
 )
 
-var openfeatureClient = openfeature.NewDefaultClient()
-
-const (
-	pluginPageFeatureFlagPrefix = "plugin-page-visible."
-)
-
 type AuthOptions struct {
@@ -154,12 +146,6 @@ func RoleAppPluginAuth(accessControl ac.AccessControl, ps pluginstore.Store, log
 		return
 	}
 
-	if !PageIsFeatureToggleEnabled(c.Req.Context(), c.Req.URL.Path) {
-		logger.Debug("Forbidden experimental plugin page", "plugin", pluginID, "path", c.Req.URL.Path)
-		accessForbidden(c)
-		return
-	}
-
 	permitted := true
 	path := normalizeIncludePath(c.Req.URL.Path)
 	hasAccess := ac.HasAccess(accessControl, c)
@@ -308,18 +294,3 @@ func shouldForceLogin(c *contextmodel.ReqContext) bool {
 	return forceLogin
 }
-
-// PageIsFeatureToggleEnabled checks if a page is enabled via OpenFeature feature flags.
-// It returns false if the feature flag is set and set to false.
-// The feature flag key format is: "plugin-page-visible.<path>"
-func PageIsFeatureToggleEnabled(ctx context.Context, path string) bool {
-	flagKey := pluginPageFeatureFlagPrefix + filepath.Clean(path)
-
-	enabled := openfeatureClient.Boolean(
-		ctx,
-		flagKey,
-		true,
-		openfeature.TransactionContext(ctx),
-	)
-	return enabled
-}

View File

@@ -1,17 +1,12 @@
 package middleware
 
 import (
-	"context"
 	"errors"
 	"fmt"
 	"net/http"
 	"net/http/httptest"
-	"sync"
 	"testing"
 
-	"github.com/open-feature/go-sdk/openfeature"
-	"github.com/open-feature/go-sdk/openfeature/memprovider"
-	oftesting "github.com/open-feature/go-sdk/openfeature/testing"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
@@ -33,8 +28,6 @@ import (
 	"github.com/grafana/grafana/pkg/web"
 )
 
-var openfeatureTestMutex sync.Mutex
-
 func setupAuthMiddlewareTest(t *testing.T, identity *authn.Identity, authErr error) *contexthandler.ContextHandler {
 	return contexthandler.ProvideService(setting.NewCfg(), &authntest.FakeService{
 		ExpectedErr: authErr,
@@ -429,60 +422,6 @@ func TestCanAdminPlugin(t *testing.T) {
 	}
 }
 
-func TestPageIsFeatureToggleEnabled(t *testing.T) {
-	type testCase struct {
-		desc           string
-		path           string
-		flags          map[string]bool
-		expectedResult bool
-	}
-
-	tests := []testCase{
-		{
-			desc: "returns true when feature flag is enabled",
-			path: "/a/my-plugin/settings",
-			flags: map[string]bool{
-				pluginPageFeatureFlagPrefix + "/a/my-plugin/settings": true,
-			},
-			expectedResult: true,
-		},
-		{
-			desc: "returns false when feature flag is disabled",
-			path: "/a/my-plugin/settings",
-			flags: map[string]bool{
-				pluginPageFeatureFlagPrefix + "/a/my-plugin/settings": false,
-			},
-			expectedResult: false,
-		},
-		{
-			desc: "returns false when feature flag is disabled with trailing slash",
-			path: "/a/my-plugin/settings/",
-			flags: map[string]bool{
-				pluginPageFeatureFlagPrefix + "/a/my-plugin/settings": false,
-			},
-			expectedResult: false,
-		},
-		{
-			desc:           "returns true when feature flag does not exist",
-			path:           "/a/my-plugin/settings",
-			flags:          map[string]bool{},
-			expectedResult: true,
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.desc, func(t *testing.T) {
-			ctx := context.Background()
-			setupTestProvider(t, tt.flags)
-
-			result := PageIsFeatureToggleEnabled(ctx, tt.path)
-			assert.Equal(t, tt.expectedResult, result)
-		})
-	}
-}
-
 func contextProvider(modifiers ...func(c *contextmodel.ReqContext)) web.Handler {
 	return func(c *web.Context) {
 		reqCtx := &contextmodel.ReqContext{
@@ -498,38 +437,3 @@ func contextProvider(modifiers ...func(c *contextmodel.ReqContext)) web.Handler
 		c.Req = c.Req.WithContext(ctxkey.Set(c.Req.Context(), reqCtx))
 	}
 }
-
-// setupTestProvider creates a test OpenFeature provider with the given flags.
-// Uses a global lock to prevent concurrent provider changes across tests.
-func setupTestProvider(t *testing.T, flags map[string]bool) oftesting.TestProvider {
-	t.Helper()
-
-	// Lock to prevent concurrent provider changes
-	openfeatureTestMutex.Lock()
-
-	testProvider := oftesting.NewTestProvider()
-	flagsMap := map[string]memprovider.InMemoryFlag{}
-	for key, value := range flags {
-		flagsMap[key] = memprovider.InMemoryFlag{
-			DefaultVariant: "defaultVariant",
-			Variants: map[string]any{
-				"defaultVariant": value,
-			},
-		}
-	}
-	testProvider.UsingFlags(t, flagsMap)
-
-	err := openfeature.SetProviderAndWait(testProvider)
-	require.NoError(t, err)
-
-	t.Cleanup(func() {
-		testProvider.Cleanup()
-		_ = openfeature.SetProviderAndWait(openfeature.NoopProvider{})
-		// Unlock after cleanup to allow other tests to run
-		openfeatureTestMutex.Unlock()
-	})
-
-	return testProvider
-}

View File

@@ -2,54 +2,84 @@ package log
 
 import (
 	"context"
-	"github.com/grafana/grafana/pkg/infra/log"
+	"log/slog"
+	"sync"
 )
 
+// loggerFactory is a function that creates a Logger given a name.
+// It can be set by calling SetLoggerFactory to use a custom logger implementation.
+var loggerFactory func(name string) Logger
+
+// SetLoggerFactory sets the factory function used to create loggers.
+// This should be called during initialization to register a custom logger implementation.
+// If not set, a default slog-based logger will be used.
+func SetLoggerFactory(factory func(name string) Logger) {
+	loggerFactory = factory
+}
+
+var slogLogManager = &slogLoggerManager{
+	cache: sync.Map{},
+}
+
 func New(name string) Logger {
-	return &grafanaInfraLogWrapper{
-		l: log.New(name),
+	if loggerFactory != nil {
+		return loggerFactory(name)
 	}
+
+	// add a caching layer since slog doesn't perform any caching itself
+	return slogLogManager.getOrCreate(name)
 }
 
-type grafanaInfraLogWrapper struct {
-	l *log.ConcreteLogger
+type slogLoggerManager struct {
+	cache sync.Map
 }
 
-func (d *grafanaInfraLogWrapper) New(ctx ...any) Logger {
+func (m *slogLoggerManager) getOrCreate(name string) Logger {
+	if cached, ok := m.cache.Load(name); ok {
+		return cached.(*slogLogger)
+	}
+
+	logger := &slogLogger{
+		logger: slog.Default().With("logger", name),
+		name:   name,
+	}
+
+	actual, _ := m.cache.LoadOrStore(name, logger)
+	return actual.(*slogLogger)
+}
+
+type slogLogger struct {
+	logger *slog.Logger
+	name   string
+}
+
+func (l *slogLogger) New(ctx ...any) Logger {
 	if len(ctx) == 0 {
-		return &grafanaInfraLogWrapper{
-			l: d.l.New(),
+		return &slogLogger{
+			logger: l.logger,
+			name:   l.name,
 		}
 	}
 
-	return &grafanaInfraLogWrapper{
-		l: d.l.New(ctx...),
+	return &slogLogger{
+		logger: l.logger.With(ctx...),
+		name:   l.name,
 	}
 }
 
-func (d *grafanaInfraLogWrapper) Debug(msg string, ctx ...any) {
-	d.l.Debug(msg, ctx...)
+func (l *slogLogger) Debug(msg string, ctx ...any) {
+	l.logger.Debug(msg, ctx...)
 }
 
-func (d *grafanaInfraLogWrapper) Info(msg string, ctx ...any) {
-	d.l.Info(msg, ctx...)
+func (l *slogLogger) Info(msg string, ctx ...any) {
	l.logger.Info(msg, ctx...)
 }
 
-func (d *grafanaInfraLogWrapper) Warn(msg string, ctx ...any) {
-	d.l.Warn(msg, ctx...)
+func (l *slogLogger) Warn(msg string, ctx ...any) {
+	l.logger.Warn(msg, ctx...)
 }
 
-func (d *grafanaInfraLogWrapper) Error(msg string, ctx ...any) {
-	d.l.Error(msg, ctx...)
+func (l *slogLogger) Error(msg string, ctx ...any) {
+	l.logger.Error(msg, ctx...)
 }
 
-func (d *grafanaInfraLogWrapper) FromContext(ctx context.Context) Logger {
-	concreteInfraLogger, ok := d.l.FromContext(ctx).(*log.ConcreteLogger)
-	if !ok {
-		return d.New()
-	}
-	return &grafanaInfraLogWrapper{
-		l: concreteInfraLogger,
-	}
+func (l *slogLogger) FromContext(_ context.Context) Logger {
+	return l
 }
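The replacement logger caches one instance per name because slog.Default().With allocates a new logger on every call. A self-contained sketch of the same memoization pattern, using only the standard library (names here are illustrative, not the package's exported API):

package main

import (
	"fmt"
	"log/slog"
	"sync"
)

// cache memoizes one *slog.Logger per name, mirroring the
// slogLoggerManager.getOrCreate pattern in the diff above.
var cache sync.Map

func getLogger(name string) *slog.Logger {
	if cached, ok := cache.Load(name); ok {
		return cached.(*slog.Logger)
	}
	logger := slog.Default().With("logger", name)
	actual, _ := cache.LoadOrStore(name, logger) // first writer wins under concurrency
	return actual.(*slog.Logger)
}

func main() {
	a := getLogger("plugin.loader")
	b := getLogger("plugin.loader")
	fmt.Println(a == b) // true: repeated lookups share one cached instance
	a.Info("loaded plugin", "pluginId", "example-app")
}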

View File

@@ -170,6 +170,13 @@ func (s *DashboardStarsStorage) write(ctx context.Context, obj *collections.Star
 		return nil, err
 	}
 
+	// Send an error if we try to save a non-dashboard star
+	for _, res := range obj.Spec.Resource {
+		if res.Group != "dashboard.grafana.app" || res.Kind != "Dashboard" {
+			return nil, fmt.Errorf("only dashboard stars are supported until the migration to unified storage is complete")
+		}
+	}
+
 	user, err := s.users.GetByUID(ctx, &user.GetUserByUIDQuery{
 		UID: owner.Identifier,
 	})
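The added guard rejects anything that is not a dashboard reference before the write reaches legacy storage. A stripped-down sketch of the same check, with a hypothetical ResourceRef type standing in for the real collections types:

package main

import "fmt"

// ResourceRef is a hypothetical stand-in for the starred-resource
// entries validated in the write path above.
type ResourceRef struct {
	Group string
	Kind  string
}

func validateDashboardStarsOnly(refs []ResourceRef) error {
	for _, res := range refs {
		if res.Group != "dashboard.grafana.app" || res.Kind != "Dashboard" {
			return fmt.Errorf("only dashboard stars are supported until the migration to unified storage is complete")
		}
	}
	return nil
}

func main() {
	ok := []ResourceRef{{Group: "dashboard.grafana.app", Kind: "Dashboard"}}
	bad := []ResourceRef{{Group: "playlist.grafana.app", Kind: "Playlist"}}
	fmt.Println(validateDashboardStarsOnly(ok))  // <nil>
	fmt.Println(validateDashboardStarsOnly(bad)) // only dashboard stars are supported ...
}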

View File

@@ -276,7 +276,7 @@ func (b *APIBuilder) oneFlagHandler(w http.ResponseWriter, r *http.Request) {
 		return
 	}
 
-	if b.providerType == setting.GOFFProviderType {
+	if b.providerType == setting.GOFFProviderType || b.providerType == setting.OFREPProviderType {
 		b.proxyFlagReq(ctx, flagKey, isAuthedReq, w, r)
 		return
 	}
@@ -304,7 +304,7 @@ func (b *APIBuilder) allFlagsHandler(w http.ResponseWriter, r *http.Request) {
 	isAuthedReq := b.isAuthenticatedRequest(r)
 	span.SetAttributes(attribute.Bool("authenticated", isAuthedReq))
 
-	if b.providerType == setting.GOFFProviderType {
+	if b.providerType == setting.GOFFProviderType || b.providerType == setting.OFREPProviderType {
 		b.proxyAllFlagReq(ctx, isAuthedReq, w, r)
 		return
 	}
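Both handlers now treat GOFF and OFREP the same way: remote evaluation providers get their flag requests proxied upstream, while any other provider is evaluated in-process. A minimal sketch of that dispatch (the constant values are abbreviated stand-ins, not the setting package's actual definitions):

package main

import "fmt"

type ProviderType string

const (
	GOFFProviderType   ProviderType = "goff"
	OFREPProviderType  ProviderType = "ofrep"
	StaticProviderType ProviderType = "static"
)

// shouldProxy mirrors the condition added in both handlers above:
// remote providers forward the request, local ones evaluate it here.
func shouldProxy(t ProviderType) bool {
	return t == GOFFProviderType || t == OFREPProviderType
}

func main() {
	for _, t := range []ProviderType{GOFFProviderType, OFREPProviderType, StaticProviderType} {
		fmt.Printf("%-6s proxy=%v\n", t, shouldProxy(t))
	}
}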

View File

@@ -507,64 +507,6 @@ func TestFolderMetaAccessor(t *testing.T) {
})
}
func TestPathCreationError(t *testing.T) {
t.Run("Error method returns formatted message", func(t *testing.T) {
underlyingErr := fmt.Errorf("underlying error")
pathErr := &resources.PathCreationError{
Path: "grafana/folder-1",
Err: underlyingErr,
}
expectedMsg := "failed to create path grafana/folder-1: underlying error"
require.Equal(t, expectedMsg, pathErr.Error())
})
t.Run("Unwrap returns underlying error", func(t *testing.T) {
underlyingErr := fmt.Errorf("underlying error")
pathErr := &resources.PathCreationError{
Path: "grafana/folder-1",
Err: underlyingErr,
}
unwrapped := pathErr.Unwrap()
require.Equal(t, underlyingErr, unwrapped)
require.EqualError(t, unwrapped, "underlying error")
})
t.Run("errors.Is finds underlying error", func(t *testing.T) {
underlyingErr := fmt.Errorf("underlying error")
pathErr := &resources.PathCreationError{
Path: "grafana/folder-1",
Err: underlyingErr,
}
require.True(t, errors.Is(pathErr, underlyingErr))
require.False(t, errors.Is(pathErr, fmt.Errorf("different error")))
})
t.Run("errors.As extracts PathCreationError", func(t *testing.T) {
underlyingErr := fmt.Errorf("underlying error")
pathErr := &resources.PathCreationError{
Path: "grafana/folder-1",
Err: underlyingErr,
}
var extractedErr *resources.PathCreationError
require.True(t, errors.As(pathErr, &extractedErr))
require.NotNil(t, extractedErr)
require.Equal(t, "grafana/folder-1", extractedErr.Path)
require.Equal(t, underlyingErr, extractedErr.Err)
})
t.Run("errors.As returns false for non-PathCreationError", func(t *testing.T) {
regularErr := fmt.Errorf("regular error")
var extractedErr *resources.PathCreationError
require.False(t, errors.As(regularErr, &extractedErr))
require.Nil(t, extractedErr)
})
}
// mockDynamicInterface implements a simplified version of the dynamic.ResourceInterface
type mockDynamicInterface struct {
dynamic.ResourceInterface

View File

@@ -71,98 +71,6 @@ func (_c *MockJobProgressRecorder_Complete_Call) RunAndReturn(run func(context.C
return _c
}
// HasDirPathFailedDeletion provides a mock function with given fields: folderPath
func (_m *MockJobProgressRecorder) HasDirPathFailedDeletion(folderPath string) bool {
ret := _m.Called(folderPath)
if len(ret) == 0 {
panic("no return value specified for HasDirPathFailedDeletion")
}
var r0 bool
if rf, ok := ret.Get(0).(func(string) bool); ok {
r0 = rf(folderPath)
} else {
r0 = ret.Get(0).(bool)
}
return r0
}
// MockJobProgressRecorder_HasDirPathFailedDeletion_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'HasDirPathFailedDeletion'
type MockJobProgressRecorder_HasDirPathFailedDeletion_Call struct {
*mock.Call
}
// HasDirPathFailedDeletion is a helper method to define mock.On call
// - folderPath string
func (_e *MockJobProgressRecorder_Expecter) HasDirPathFailedDeletion(folderPath interface{}) *MockJobProgressRecorder_HasDirPathFailedDeletion_Call {
return &MockJobProgressRecorder_HasDirPathFailedDeletion_Call{Call: _e.mock.On("HasDirPathFailedDeletion", folderPath)}
}
func (_c *MockJobProgressRecorder_HasDirPathFailedDeletion_Call) Run(run func(folderPath string)) *MockJobProgressRecorder_HasDirPathFailedDeletion_Call {
_c.Call.Run(func(args mock.Arguments) {
run(args[0].(string))
})
return _c
}
func (_c *MockJobProgressRecorder_HasDirPathFailedDeletion_Call) Return(_a0 bool) *MockJobProgressRecorder_HasDirPathFailedDeletion_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *MockJobProgressRecorder_HasDirPathFailedDeletion_Call) RunAndReturn(run func(string) bool) *MockJobProgressRecorder_HasDirPathFailedDeletion_Call {
_c.Call.Return(run)
return _c
}
// HasDirPathFailedCreation provides a mock function with given fields: path
func (_m *MockJobProgressRecorder) HasDirPathFailedCreation(path string) bool {
ret := _m.Called(path)
if len(ret) == 0 {
panic("no return value specified for HasDirPathFailedCreation")
}
var r0 bool
if rf, ok := ret.Get(0).(func(string) bool); ok {
r0 = rf(path)
} else {
r0 = ret.Get(0).(bool)
}
return r0
}
// MockJobProgressRecorder_HasDirPathFailedCreation_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'HasDirPathFailedCreation'
type MockJobProgressRecorder_HasDirPathFailedCreation_Call struct {
*mock.Call
}
// HasDirPathFailedCreation is a helper method to define mock.On call
// - path string
func (_e *MockJobProgressRecorder_Expecter) HasDirPathFailedCreation(path interface{}) *MockJobProgressRecorder_HasDirPathFailedCreation_Call {
return &MockJobProgressRecorder_HasDirPathFailedCreation_Call{Call: _e.mock.On("HasDirPathFailedCreation", path)}
}
func (_c *MockJobProgressRecorder_HasDirPathFailedCreation_Call) Run(run func(path string)) *MockJobProgressRecorder_HasDirPathFailedCreation_Call {
_c.Call.Run(func(args mock.Arguments) {
run(args[0].(string))
})
return _c
}
func (_c *MockJobProgressRecorder_HasDirPathFailedCreation_Call) Return(_a0 bool) *MockJobProgressRecorder_HasDirPathFailedCreation_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *MockJobProgressRecorder_HasDirPathFailedCreation_Call) RunAndReturn(run func(string) bool) *MockJobProgressRecorder_HasDirPathFailedCreation_Call {
_c.Call.Return(run)
return _c
}
// Record provides a mock function with given fields: ctx, result
func (_m *MockJobProgressRecorder) Record(ctx context.Context, result JobResourceResult) {
_m.Called(ctx, result)

View File

@@ -2,7 +2,6 @@ package jobs
 
 import (
 	"context"
-	"errors"
 	"fmt"
 	"sync"
 	"time"
@@ -10,8 +9,6 @@ import (
 	"github.com/grafana/grafana-app-sdk/logging"
 
 	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
 	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
-	"github.com/grafana/grafana/apps/provisioning/pkg/safepath"
-	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
 )
 
 // maybeNotifyProgress will only notify if a certain amount of time has passed
@@ -61,8 +58,6 @@ type jobProgressRecorder struct
 	notifyImmediatelyFn ProgressFn
 	maybeNotifyFn       ProgressFn
 	summaries           map[string]*provisioning.JobResourceSummary
-	failedCreations     []string // Tracks folder paths that failed to be created
-	failedDeletions     []string // Tracks resource paths that failed to be deleted
 }
 
 func newJobProgressRecorder(ProgressFn ProgressFn) JobProgressRecorder {
@@ -89,26 +84,10 @@ func (r *jobProgressRecorder) Record(ctx context.Context, result JobResourceResu
 	if result.Error != nil {
 		shouldLogError = true
 		logErr = result.Error
-		// Don't count ignored actions as errors in error count or error list
-		if result.Action != repository.FileActionIgnored {
-			if len(r.errors) < 20 {
-				r.errors = append(r.errors, result.Error.Error())
-			}
-			r.errorCount++
-		}
-
-		// Automatically track failed operations based on error type and action
-		// Check if this is a PathCreationError (folder creation failure)
-		var pathErr *resources.PathCreationError
-		if errors.As(result.Error, &pathErr) {
-			r.failedCreations = append(r.failedCreations, pathErr.Path)
-		}
-
-		// Track failed deletions, any deletion will stop the deletion of the parent folder (as it won't be empty)
-		if result.Action == repository.FileActionDeleted {
-			r.failedDeletions = append(r.failedDeletions, result.Path)
+		if len(r.errors) < 20 {
+			r.errors = append(r.errors, result.Error.Error())
 		}
+		r.errorCount++
 	}
 
 	r.updateSummary(result)
@@ -133,8 +112,6 @@ func (r *jobProgressRecorder) ResetResults() {
 	r.errorCount = 0
 	r.errors = nil
 	r.summaries = make(map[string]*provisioning.JobResourceSummary)
-	r.failedCreations = nil
-	r.failedDeletions = nil
 }
 
 func (r *jobProgressRecorder) SetMessage(ctx context.Context, msg string) {
@@ -332,29 +309,3 @@ func (r *jobProgressRecorder) Complete(ctx context.Context, err error) provision
 	return jobStatus
 }
-
-// HasDirPathFailedCreation checks if a path is nested under any failed folder creation
-func (r *jobProgressRecorder) HasDirPathFailedCreation(path string) bool {
-	r.mu.RLock()
-	defer r.mu.RUnlock()
-
-	for _, failedCreation := range r.failedCreations {
-		if safepath.InDir(path, failedCreation) {
-			return true
-		}
-	}
-
-	return false
-}
-
-// HasDirPathFailedDeletion checks if any resource deletions failed under a folder path
-func (r *jobProgressRecorder) HasDirPathFailedDeletion(folderPath string) bool {
-	r.mu.RLock()
-	defer r.mu.RUnlock()
-
-	for _, failedDeletion := range r.failedDeletions {
-		if safepath.InDir(failedDeletion, folderPath) {
-			return true
-		}
-	}
-
-	return false
-}
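After the removal, Record keeps the one bookkeeping rule that survives: store at most 20 error messages but count every error. Isolated as a runnable sketch:

package main

import "fmt"

// recorder reproduces the bounded error bookkeeping kept in Record above:
// the message list is capped at 20 entries while the counter stays exact.
type recorder struct {
	errors     []string
	errorCount int
}

func (r *recorder) record(err error) {
	if len(r.errors) < 20 {
		r.errors = append(r.errors, err.Error())
	}
	r.errorCount++
}

func main() {
	r := &recorder{}
	for i := 0; i < 25; i++ {
		r.record(fmt.Errorf("sync failure %d", i))
	}
	fmt.Println(len(r.errors), r.errorCount) // 20 25
}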

View File

@@ -7,7 +7,6 @@ import (
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -253,221 +252,3 @@ func TestJobProgressRecorderWarningOnlyNoErrors(t *testing.T) {
require.NotNil(t, finalStatus.Warnings)
assert.Len(t, finalStatus.Warnings, 1)
}
func TestJobProgressRecorderFolderFailureTracking(t *testing.T) {
ctx := context.Background()
// Create a progress recorder
mockProgressFn := func(ctx context.Context, status provisioning.JobStatus) error {
return nil
}
recorder := newJobProgressRecorder(mockProgressFn).(*jobProgressRecorder)
// Record a folder creation failure with PathCreationError
pathErr := &resources.PathCreationError{
Path: "folder1/",
Err: assert.AnError,
}
recorder.Record(ctx, JobResourceResult{
Path: "folder1/file.json",
Action: repository.FileActionCreated,
Error: pathErr,
})
// Record another PathCreationError for a different folder
pathErr2 := &resources.PathCreationError{
Path: "folder2/subfolder/",
Err: assert.AnError,
}
recorder.Record(ctx, JobResourceResult{
Path: "folder2/subfolder/file.json",
Action: repository.FileActionCreated,
Error: pathErr2,
})
// Record a deletion failure
recorder.Record(ctx, JobResourceResult{
Path: "folder3/file1.json",
Action: repository.FileActionDeleted,
Error: assert.AnError,
})
// Record another deletion failure
recorder.Record(ctx, JobResourceResult{
Path: "folder4/subfolder/file2.json",
Action: repository.FileActionDeleted,
Error: assert.AnError,
})
// Verify failed creations are tracked
recorder.mu.RLock()
assert.Len(t, recorder.failedCreations, 2)
assert.Contains(t, recorder.failedCreations, "folder1/")
assert.Contains(t, recorder.failedCreations, "folder2/subfolder/")
// Verify failed deletions are tracked
assert.Len(t, recorder.failedDeletions, 2)
assert.Contains(t, recorder.failedDeletions, "folder3/file1.json")
assert.Contains(t, recorder.failedDeletions, "folder4/subfolder/file2.json")
recorder.mu.RUnlock()
}
func TestJobProgressRecorderHasDirPathFailedCreation(t *testing.T) {
ctx := context.Background()
// Create a progress recorder
mockProgressFn := func(ctx context.Context, status provisioning.JobStatus) error {
return nil
}
recorder := newJobProgressRecorder(mockProgressFn).(*jobProgressRecorder)
// Add failed creations via Record
pathErr1 := &resources.PathCreationError{
Path: "folder1/",
Err: assert.AnError,
}
recorder.Record(ctx, JobResourceResult{
Path: "folder1/file.json",
Action: repository.FileActionCreated,
Error: pathErr1,
})
pathErr2 := &resources.PathCreationError{
Path: "folder2/subfolder/",
Err: assert.AnError,
}
recorder.Record(ctx, JobResourceResult{
Path: "folder2/subfolder/file.json",
Action: repository.FileActionCreated,
Error: pathErr2,
})
// Test nested paths
assert.True(t, recorder.HasDirPathFailedCreation("folder1/file.json"))
assert.True(t, recorder.HasDirPathFailedCreation("folder1/nested/file.json"))
assert.True(t, recorder.HasDirPathFailedCreation("folder2/subfolder/file.json"))
// Test non-nested paths
assert.False(t, recorder.HasDirPathFailedCreation("folder2/file2.json"))
assert.False(t, recorder.HasDirPathFailedCreation("folder2/othersubfolder/inside.json"))
assert.False(t, recorder.HasDirPathFailedCreation("other/file.json"))
assert.False(t, recorder.HasDirPathFailedCreation("folder3/file.json"))
assert.False(t, recorder.HasDirPathFailedCreation("file.json"))
}
func TestJobProgressRecorderHasDirPathFailedDeletion(t *testing.T) {
ctx := context.Background()
// Create a progress recorder
mockProgressFn := func(ctx context.Context, status provisioning.JobStatus) error {
return nil
}
recorder := newJobProgressRecorder(mockProgressFn).(*jobProgressRecorder)
// Add failed deletions via Record
recorder.Record(ctx, JobResourceResult{
Path: "folder1/file1.json",
Action: repository.FileActionDeleted,
Error: assert.AnError,
})
recorder.Record(ctx, JobResourceResult{
Path: "folder2/subfolder/file2.json",
Action: repository.FileActionDeleted,
Error: assert.AnError,
})
recorder.Record(ctx, JobResourceResult{
Path: "folder3/nested/deep/file3.json",
Action: repository.FileActionDeleted,
Error: assert.AnError,
})
// Test folder paths with failed deletions
assert.True(t, recorder.HasDirPathFailedDeletion("folder1/"))
assert.True(t, recorder.HasDirPathFailedDeletion("folder2/"))
assert.True(t, recorder.HasDirPathFailedDeletion("folder2/subfolder/"))
assert.True(t, recorder.HasDirPathFailedDeletion("folder3/"))
assert.True(t, recorder.HasDirPathFailedDeletion("folder3/nested/"))
assert.True(t, recorder.HasDirPathFailedDeletion("folder3/nested/deep/"))
// Test folder paths without failed deletions
assert.False(t, recorder.HasDirPathFailedDeletion("other/"))
assert.False(t, recorder.HasDirPathFailedDeletion("different/"))
assert.False(t, recorder.HasDirPathFailedDeletion("folder2/othersubfolder/"))
assert.False(t, recorder.HasDirPathFailedDeletion("folder2/subfolder/othersubfolder/"))
assert.False(t, recorder.HasDirPathFailedDeletion("folder3/nested/anotherdeep/"))
assert.False(t, recorder.HasDirPathFailedDeletion("folder3/nested/deep/insidedeep/"))
}
func TestJobProgressRecorderResetResults(t *testing.T) {
ctx := context.Background()
// Create a progress recorder
mockProgressFn := func(ctx context.Context, status provisioning.JobStatus) error {
return nil
}
recorder := newJobProgressRecorder(mockProgressFn).(*jobProgressRecorder)
// Add some data via Record
pathErr := &resources.PathCreationError{
Path: "folder1/",
Err: assert.AnError,
}
recorder.Record(ctx, JobResourceResult{
Path: "folder1/file.json",
Action: repository.FileActionCreated,
Error: pathErr,
})
recorder.Record(ctx, JobResourceResult{
Path: "folder2/file.json",
Action: repository.FileActionDeleted,
Error: assert.AnError,
})
// Verify data is stored
recorder.mu.RLock()
assert.Len(t, recorder.failedCreations, 1)
assert.Len(t, recorder.failedDeletions, 1)
recorder.mu.RUnlock()
// Reset results
recorder.ResetResults()
// Verify data is cleared
recorder.mu.RLock()
assert.Nil(t, recorder.failedCreations)
assert.Nil(t, recorder.failedDeletions)
recorder.mu.RUnlock()
}
func TestJobProgressRecorderIgnoredActionsDontCountAsErrors(t *testing.T) {
ctx := context.Background()
// Create a progress recorder
mockProgressFn := func(ctx context.Context, status provisioning.JobStatus) error {
return nil
}
recorder := newJobProgressRecorder(mockProgressFn).(*jobProgressRecorder)
// Record an ignored action with error
recorder.Record(ctx, JobResourceResult{
Path: "folder1/file1.json",
Action: repository.FileActionIgnored,
Error: assert.AnError,
})
// Record a real error for comparison
recorder.Record(ctx, JobResourceResult{
Path: "folder2/file2.json",
Action: repository.FileActionCreated,
Error: assert.AnError,
})
// Verify error count doesn't include ignored actions
recorder.mu.RLock()
assert.Equal(t, 1, recorder.errorCount, "ignored actions should not be counted as errors")
assert.Len(t, recorder.errors, 1, "ignored action errors should not be in error list")
recorder.mu.RUnlock()
}

View File

@@ -29,10 +29,6 @@ type JobProgressRecorder interface {
 	StrictMaxErrors(maxErrors int)
 	SetRefURLs(ctx context.Context, refURLs *provisioning.RepositoryURLs)
 	Complete(ctx context.Context, err error) provisioning.JobStatus
-	// HasDirPathFailedCreation checks if a path has any folder creations that failed
-	HasDirPathFailedCreation(path string) bool
-	// HasDirPathFailedDeletion checks if a folderPath has any folder deletions that failed
-	HasDirPathFailedDeletion(folderPath string) bool
 }
 
 // Worker is a worker that can process a job
// Worker is a worker that can process a job

View File

@@ -75,47 +75,11 @@ func FullSync(
 	return applyChanges(ctx, changes, clients, repositoryResources, progress, tracer, maxSyncWorkers, metrics)
 }
 
-// shouldSkipChange checks if a change should be skipped based on previous failures on parent/child folders.
-// If there is a previous failure on the path, we don't need to process the change as it will fail anyway.
-func shouldSkipChange(ctx context.Context, change ResourceFileChange, progress jobs.JobProgressRecorder, tracer tracing.Tracer) bool {
-	if change.Action != repository.FileActionDeleted && progress.HasDirPathFailedCreation(change.Path) {
-		skipCtx, skipSpan := tracer.Start(ctx, "provisioning.sync.full.apply_changes.skip_nested_resource")
-		skipSpan.SetAttributes(attribute.String("path", change.Path))
-		progress.Record(skipCtx, jobs.JobResourceResult{
-			Path:    change.Path,
-			Action:  repository.FileActionIgnored,
-			Warning: fmt.Errorf("resource was not processed because the parent folder could not be created"),
-		})
-		skipSpan.End()
-		return true
-	}
-
-	if change.Action == repository.FileActionDeleted && safepath.IsDir(change.Path) && progress.HasDirPathFailedDeletion(change.Path) {
-		skipCtx, skipSpan := tracer.Start(ctx, "provisioning.sync.full.apply_changes.skip_folder_with_failed_deletions")
-		skipSpan.SetAttributes(attribute.String("path", change.Path))
-		progress.Record(skipCtx, jobs.JobResourceResult{
-			Path:    change.Path,
-			Action:  repository.FileActionIgnored,
-			Group:   resources.FolderKind.Group,
-			Kind:    resources.FolderKind.Kind,
-			Warning: fmt.Errorf("folder was not processed because children resources in its path could not be deleted"),
-		})
-		skipSpan.End()
-		return true
-	}
-
-	return false
-}
-
 func applyChange(ctx context.Context, change ResourceFileChange, clients resources.ResourceClients, repositoryResources resources.RepositoryResources, progress jobs.JobProgressRecorder, tracer tracing.Tracer) {
 	if ctx.Err() != nil {
 		return
 	}
 
-	if shouldSkipChange(ctx, change, progress, tracer) {
-		return
-	}
-
 	if change.Action == repository.FileActionDeleted {
 		deleteCtx, deleteSpan := tracer.Start(ctx, "provisioning.sync.full.apply_changes.delete")
 		result := jobs.JobResourceResult{
@@ -174,7 +138,6 @@ func applyChange(ctx context.Context, change ResourceFileChange, clients resourc
 		ensureFolderSpan.RecordError(err)
 		ensureFolderSpan.End()
 		progress.Record(ctx, result)
-		return
 	}

View File

@@ -1,432 +0,0 @@
package sync
import (
"context"
"fmt"
"testing"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
dynamicfake "k8s.io/client-go/dynamic/fake"
k8testing "k8s.io/client-go/testing"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
)
/*
TestFullSync_HierarchicalErrorHandling tests the hierarchical error handling behavior:
FOLDER CREATION FAILURES:
- When a folder fails to be created with PathCreationError, all nested resources are skipped
- Nested resources are recorded with FileActionIgnored and warning "resource was not processed because the parent folder could not be created"
- Only the folder creation error counts toward error limits
- Nested resource skips do NOT count toward error limits
FOLDER DELETION FAILURES:
- When a file deletion fails, it's tracked in failedDeletions
- When cleaning up folders, we check HasDirPathFailedDeletion()
- If children failed to delete, folder deletion is skipped with FileActionIgnored
- This prevents orphaning resources that still exist
DELETIONS NOT AFFECTED BY CREATION FAILURES:
- If a folder creation fails, deletion operations for resources in that folder still proceed
- This is because the resource might already exist from a previous sync
- Only creations/updates/renames are affected by failed folder creation
AUTOMATIC TRACKING:
- Record() automatically detects PathCreationError and adds to failedCreations
- Record() automatically detects deletion failures and adds to failedDeletions
- No manual calls to AddFailedCreation/AddFailedDeletion needed
*/
func TestFullSync_HierarchicalErrorHandling(t *testing.T) { // nolint:gocyclo
tests := []struct {
name string
setupMocks func(*repository.MockRepository, *resources.MockRepositoryResources, *resources.MockResourceClients, *jobs.MockJobProgressRecorder, *dynamicfake.FakeDynamicClient)
changes []ResourceFileChange
description string
expectError bool
errorContains string
}{
{
name: "folder creation fails, nested file skipped",
description: "When folder1/ fails to create, folder1/file.json should be skipped with FileActionIgnored",
changes: []ResourceFileChange{
{Path: "folder1/file.json", Action: repository.FileActionCreated},
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, _ *dynamicfake.FakeDynamicClient) {
// First, check if nested under failed creation - not yet
progress.On("HasDirPathFailedCreation", "folder1/file.json").Return(false).Once()
// WriteResourceFromFile fails with PathCreationError for folder1/
folderErr := &resources.PathCreationError{Path: "folder1/", Err: fmt.Errorf("permission denied")}
repoResources.On("WriteResourceFromFile", mock.Anything, "folder1/file.json", "").
Return("", schema.GroupVersionKind{}, folderErr).Once()
// File will be recorded with error, triggering automatic tracking of folder1/ failure
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file.json" && r.Error != nil && r.Action == repository.FileActionCreated
})).Return().Once()
},
},
{
name: "folder creation fails, multiple nested resources skipped",
description: "When folder1/ fails to create, all nested resources (subfolder, files) are skipped",
changes: []ResourceFileChange{
{Path: "folder1/file1.json", Action: repository.FileActionCreated},
{Path: "folder1/subfolder/file2.json", Action: repository.FileActionCreated},
{Path: "folder1/file3.json", Action: repository.FileActionCreated},
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, _ *dynamicfake.FakeDynamicClient) {
// First file triggers folder creation failure
progress.On("HasDirPathFailedCreation", "folder1/file1.json").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "folder1/", Err: fmt.Errorf("permission denied")}
repoResources.On("WriteResourceFromFile", mock.Anything, "folder1/file1.json", "").
Return("", schema.GroupVersionKind{}, folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file1.json" && r.Error != nil
})).Return().Once()
// Subsequent files in same folder are skipped
progress.On("HasDirPathFailedCreation", "folder1/subfolder/file2.json").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/subfolder/file2.json" &&
r.Action == repository.FileActionIgnored &&
r.Warning != nil &&
r.Warning.Error() == "resource was not processed because the parent folder could not be created"
})).Return().Once()
progress.On("HasDirPathFailedCreation", "folder1/file3.json").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file3.json" &&
r.Action == repository.FileActionIgnored &&
r.Warning != nil &&
r.Warning.Error() == "resource was not processed because the parent folder could not be created"
})).Return().Once()
},
},
{
name: "file deletion failure tracked",
description: "When a file deletion fails, it's automatically tracked in failedDeletions",
changes: []ResourceFileChange{
{
Path: "folder1/file.json",
Action: repository.FileActionDeleted,
Existing: &provisioning.ResourceListItem{
Name: "file1",
Group: "dashboard.grafana.app",
Resource: "dashboards",
},
},
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, dynamicClient *dynamicfake.FakeDynamicClient) {
gvk := schema.GroupVersionKind{Group: "dashboard.grafana.app", Kind: "Dashboard", Version: "v1"}
gvr := schema.GroupVersionResource{Group: "dashboard.grafana.app", Resource: "dashboards", Version: "v1"}
clients.On("ForResource", mock.Anything, mock.MatchedBy(func(gvr schema.GroupVersionResource) bool {
return gvr.Group == "dashboard.grafana.app"
})).Return(dynamicClient.Resource(gvr), gvk, nil)
// File deletion fails
dynamicClient.PrependReactor("delete", "dashboards", func(action k8testing.Action) (bool, runtime.Object, error) {
return true, nil, fmt.Errorf("permission denied")
})
// File deletion recorded with error, automatically tracked in failedDeletions
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file.json" &&
r.Action == repository.FileActionDeleted &&
r.Error != nil
})).Return().Once()
},
},
{
name: "deletion proceeds despite creation failure",
description: "When folder1/ fails to create, deletion of folder1/file2.json still proceeds (resource might exist from previous sync)",
changes: []ResourceFileChange{
{Path: "folder1/file1.json", Action: repository.FileActionCreated},
{
Path: "folder1/file2.json",
Action: repository.FileActionDeleted,
Existing: &provisioning.ResourceListItem{
Name: "file2",
Group: "dashboard.grafana.app",
Resource: "dashboards",
},
},
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, dynamicClient *dynamicfake.FakeDynamicClient) {
// Creation fails
progress.On("HasDirPathFailedCreation", "folder1/file1.json").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "folder1/", Err: fmt.Errorf("permission denied")}
repoResources.On("WriteResourceFromFile", mock.Anything, "folder1/file1.json", "").
Return("", schema.GroupVersionKind{}, folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file1.json" && r.Error != nil
})).Return().Once()
// Deletion proceeds (NOT checking HasDirPathFailedCreation for deletions)
// Note: deletion will fail because resource doesn't exist, but that's fine for this test
gvk := schema.GroupVersionKind{Group: "dashboard.grafana.app", Kind: "Dashboard", Version: "v1"}
gvr := schema.GroupVersionResource{Group: "dashboard.grafana.app", Resource: "dashboards", Version: "v1"}
clients.On("ForResource", mock.Anything, mock.MatchedBy(func(gvr schema.GroupVersionResource) bool {
return gvr.Group == "dashboard.grafana.app"
})).Return(dynamicClient.Resource(gvr), gvk, nil)
// Record deletion attempt (will have error since resource doesn't exist, but that's ok)
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file2.json" &&
r.Action == repository.FileActionDeleted
// Not checking r.Error because resource doesn't exist in fake client
})).Return().Once()
},
},
{
name: "multi-level nesting - all skipped",
description: "When level1/ fails, level1/level2/level3/file.json is also skipped",
changes: []ResourceFileChange{
{Path: "level1/file1.json", Action: repository.FileActionCreated},
{Path: "level1/level2/file2.json", Action: repository.FileActionCreated},
{Path: "level1/level2/level3/file3.json", Action: repository.FileActionCreated},
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, _ *dynamicfake.FakeDynamicClient) {
// First file triggers level1/ failure
progress.On("HasDirPathFailedCreation", "level1/file1.json").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "level1/", Err: fmt.Errorf("permission denied")}
repoResources.On("WriteResourceFromFile", mock.Anything, "level1/file1.json", "").
Return("", schema.GroupVersionKind{}, folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "level1/file1.json" && r.Error != nil
})).Return().Once()
// All nested files are skipped
for _, path := range []string{"level1/level2/file2.json", "level1/level2/level3/file3.json"} {
progress.On("HasDirPathFailedCreation", path).Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == path && r.Action == repository.FileActionIgnored
})).Return().Once()
}
},
},
{
name: "mixed success and failure",
description: "When success/ works and failure/ fails, only failure/* are skipped",
changes: []ResourceFileChange{
{Path: "success/file1.json", Action: repository.FileActionCreated},
{Path: "failure/file2.json", Action: repository.FileActionCreated},
{Path: "failure/nested/file3.json", Action: repository.FileActionCreated},
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, _ *dynamicfake.FakeDynamicClient) {
// Success path works
progress.On("HasDirPathFailedCreation", "success/file1.json").Return(false).Once()
repoResources.On("WriteResourceFromFile", mock.Anything, "success/file1.json", "").
Return("resource1", schema.GroupVersionKind{Kind: "Dashboard"}, nil).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "success/file1.json" && r.Error == nil
})).Return().Once()
// Failure path fails
progress.On("HasDirPathFailedCreation", "failure/file2.json").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "failure/", Err: fmt.Errorf("disk full")}
repoResources.On("WriteResourceFromFile", mock.Anything, "failure/file2.json", "").
Return("", schema.GroupVersionKind{}, folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "failure/file2.json" && r.Error != nil
})).Return().Once()
// Nested file in failure path is skipped
progress.On("HasDirPathFailedCreation", "failure/nested/file3.json").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "failure/nested/file3.json" && r.Action == repository.FileActionIgnored
})).Return().Once()
},
},
{
name: "folder creation fails with explicit folder in changes",
description: "When folder1/ is explicitly in changes and fails to create, all nested resources (subfolders and files) are skipped",
changes: []ResourceFileChange{
{Path: "folder1/", Action: repository.FileActionCreated},
{Path: "folder1/subfolder/", Action: repository.FileActionCreated},
{Path: "folder1/file1.json", Action: repository.FileActionCreated},
{Path: "folder1/subfolder/file2.json", Action: repository.FileActionCreated},
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, _ *dynamicfake.FakeDynamicClient) {
progress.On("HasDirPathFailedCreation", "folder1/").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "folder1/", Err: fmt.Errorf("permission denied")}
repoResources.On("EnsureFolderPathExist", mock.Anything, "folder1/").Return("", folderErr).Once()
progress.On("HasDirPathFailedCreation", "folder1/subfolder/").Return(true).Once()
progress.On("HasDirPathFailedCreation", "folder1/file1.json").Return(true).Once()
progress.On("HasDirPathFailedCreation", "folder1/subfolder/file2.json").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/" && r.Error != nil
})).Return().Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/subfolder/" && r.Action == repository.FileActionIgnored
})).Return().Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file1.json" && r.Action == repository.FileActionIgnored
})).Return().Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/subfolder/file2.json" && r.Action == repository.FileActionIgnored
})).Return().Once()
},
},
{
name: "folder deletion prevented when child deletion fails",
description: "When a file deletion fails, folder deletion is skipped with FileActionIgnored to prevent orphaning resources",
changes: []ResourceFileChange{
{
Path: "folder1/file1.json",
Action: repository.FileActionDeleted,
Existing: &provisioning.ResourceListItem{Name: "file1", Group: "dashboard.grafana.app", Resource: "dashboards"},
},
{Path: "folder1/", Action: repository.FileActionDeleted, Existing: &provisioning.ResourceListItem{Name: "folder1", Group: "folder.grafana.app", Resource: "Folder"}},
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, dynamicClient *dynamicfake.FakeDynamicClient) {
gvk := schema.GroupVersionKind{Group: "dashboard.grafana.app", Kind: "Dashboard", Version: "v1"}
gvr := schema.GroupVersionResource{Group: "dashboard.grafana.app", Resource: "dashboards", Version: "v1"}
clients.On("ForResource", mock.Anything, mock.MatchedBy(func(gvr schema.GroupVersionResource) bool {
return gvr.Group == "dashboard.grafana.app"
})).Return(dynamicClient.Resource(gvr), gvk, nil)
dynamicClient.PrependReactor("delete", "dashboards", func(action k8testing.Action) (bool, runtime.Object, error) {
return true, nil, fmt.Errorf("permission denied")
})
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file1.json" && r.Error != nil
})).Return().Once()
progress.On("HasDirPathFailedDeletion", "folder1/").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/" && r.Action == repository.FileActionIgnored
})).Return().Once()
},
},
{
name: "multiple folder deletion failures",
description: "When multiple independent folders have child deletion failures, all folder deletions are skipped",
changes: []ResourceFileChange{
{Path: "folder1/file1.json", Action: repository.FileActionDeleted, Existing: &provisioning.ResourceListItem{Name: "file1", Group: "dashboard.grafana.app", Resource: "dashboards"}},
{Path: "folder1/", Action: repository.FileActionDeleted},
{Path: "folder2/file2.json", Action: repository.FileActionDeleted, Existing: &provisioning.ResourceListItem{Name: "file2", Group: "dashboard.grafana.app", Resource: "dashboards"}},
{Path: "folder2/", Action: repository.FileActionDeleted},
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, dynamicClient *dynamicfake.FakeDynamicClient) {
gvk := schema.GroupVersionKind{Group: "dashboard.grafana.app", Kind: "Dashboard", Version: "v1"}
gvr := schema.GroupVersionResource{Group: "dashboard.grafana.app", Resource: "dashboards", Version: "v1"}
clients.On("ForResource", mock.Anything, mock.MatchedBy(func(gvr schema.GroupVersionResource) bool {
return gvr.Group == "dashboard.grafana.app"
})).Return(dynamicClient.Resource(gvr), gvk, nil)
dynamicClient.PrependReactor("delete", "dashboards", func(action k8testing.Action) (bool, runtime.Object, error) {
return true, nil, fmt.Errorf("permission denied")
})
for _, path := range []string{"folder1/file1.json", "folder2/file2.json"} {
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == path && r.Error != nil
})).Return().Once()
}
progress.On("HasDirPathFailedDeletion", "folder1/").Return(true).Once()
progress.On("HasDirPathFailedDeletion", "folder2/").Return(true).Once()
for _, path := range []string{"folder1/", "folder2/"} {
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == path && r.Action == repository.FileActionIgnored
})).Return().Once()
}
},
},
{
name: "nested subfolder deletion failure",
description: "When a file deletion fails in a nested subfolder, both the subfolder and parent folder deletions are skipped",
changes: []ResourceFileChange{
{Path: "parent/subfolder/file.json", Action: repository.FileActionDeleted, Existing: &provisioning.ResourceListItem{Name: "file1", Group: "dashboard.grafana.app", Resource: "dashboards"}},
{Path: "parent/subfolder/", Action: repository.FileActionDeleted, Existing: &provisioning.ResourceListItem{Name: "subfolder", Group: "folder.grafana.app", Resource: "Folder"}},
{Path: "parent/", Action: repository.FileActionDeleted, Existing: &provisioning.ResourceListItem{Name: "parent", Group: "folder.grafana.app", Resource: "Folder"}},
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, dynamicClient *dynamicfake.FakeDynamicClient) {
gvk := schema.GroupVersionKind{Group: "dashboard.grafana.app", Kind: "Dashboard", Version: "v1"}
gvr := schema.GroupVersionResource{Group: "dashboard.grafana.app", Resource: "dashboards", Version: "v1"}
clients.On("ForResource", mock.Anything, mock.MatchedBy(func(gvr schema.GroupVersionResource) bool {
return gvr.Group == "dashboard.grafana.app"
})).Return(dynamicClient.Resource(gvr), gvk, nil)
dynamicClient.PrependReactor("delete", "dashboards", func(action k8testing.Action) (bool, runtime.Object, error) {
return true, nil, fmt.Errorf("permission denied")
})
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "parent/subfolder/file.json" && r.Error != nil
})).Return().Once()
progress.On("HasDirPathFailedDeletion", "parent/subfolder/").Return(true).Once()
progress.On("HasDirPathFailedDeletion", "parent/").Return(true).Once()
for _, path := range []string{"parent/subfolder/", "parent/"} {
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == path && r.Action == repository.FileActionIgnored
})).Return().Once()
}
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
scheme := runtime.NewScheme()
dynamicClient := dynamicfake.NewSimpleDynamicClient(scheme)
repo := repository.NewMockRepository(t)
repoResources := resources.NewMockRepositoryResources(t)
clients := resources.NewMockResourceClients(t)
progress := jobs.NewMockJobProgressRecorder(t)
compareFn := NewMockCompareFn(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{Name: "test-repo"},
Spec: provisioning.RepositorySpec{Title: "Test Repo"},
})
tt.setupMocks(repo, repoResources, clients, progress, dynamicClient)
compareFn.On("Execute", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(tt.changes, nil)
progress.On("SetTotal", mock.Anything, len(tt.changes)).Return()
progress.On("TooManyErrors").Return(nil).Maybe()
err := FullSync(context.Background(), repo, compareFn.Execute, clients, "ref", repoResources, progress, tracing.NewNoopTracerService(), 10, jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()))
if tt.expectError {
require.Error(t, err)
if tt.errorContains != "" {
require.Contains(t, err.Error(), tt.errorContains)
}
} else {
require.NoError(t, err)
}
progress.AssertExpectations(t)
repoResources.AssertExpectations(t)
})
}
}
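The HasDirPathFailedDeletion expectations above stand in for simple prefix bookkeeping: once a file fails to delete, its directory and every ancestor directory must answer true so that their own deletions are skipped. A self-contained sketch of one way such a check could work — recorder and its methods are hypothetical stand-ins, not the real JobProgressRecorder:

package main

import (
	"fmt"
	"strings"
)

// recorder is a hypothetical stand-in for the progress recorder's
// failed-deletion bookkeeping mocked in the tests above.
type recorder struct {
	failedDirs []string // directories such as "parent/subfolder/" containing a failed deletion
}

// recordFailedDeletion notes the directory of a file whose deletion failed.
func (r *recorder) recordFailedDeletion(filePath string) {
	if i := strings.LastIndex(filePath, "/"); i >= 0 {
		r.failedDirs = append(r.failedDirs, filePath[:i+1])
	}
}

// hasDirPathFailedDeletion reports whether dir is, or is an ancestor of,
// a directory with a failed deletion, so folder cleanup can be skipped.
func (r *recorder) hasDirPathFailedDeletion(dir string) bool {
	for _, failed := range r.failedDirs {
		if strings.HasPrefix(failed, dir) {
			return true
		}
	}
	return false
}

func main() {
	r := &recorder{}
	r.recordFailedDeletion("parent/subfolder/file.json")
	fmt.Println(r.hasDirPathFailedDeletion("parent/subfolder/")) // true
	fmt.Println(r.hasDirPathFailedDeletion("parent/"))           // true: ancestor of the failed directory
	fmt.Println(r.hasDirPathFailedDeletion("other/"))            // false
}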

View File

@@ -213,10 +213,6 @@ func TestFullSync_ApplyChanges(t *testing.T) { //nolint:gocyclo
return nil
})
progress.On("HasDirPathFailedCreation", mock.MatchedBy(func(path string) bool {
return path == "dashboards/one.json" || path == "dashboards/two.json" || path == "dashboards/three.json"
})).Return(false).Maybe()
repoResources.On("WriteResourceFromFile", mock.Anything, mock.MatchedBy(func(path string) bool {
return path == "dashboards/one.json" || path == "dashboards/two.json" || path == "dashboards/three.json"
}), "").Return("test-dashboard", schema.GroupVersionKind{Kind: "Dashboard", Group: "dashboards"}, nil).Maybe()
@@ -239,7 +235,6 @@ func TestFullSync_ApplyChanges(t *testing.T) { //nolint:gocyclo
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, compareFn *MockCompareFn) {
progress.On("TooManyErrors").Return(nil)
progress.On("HasDirPathFailedCreation", "dashboards/test.json").Return(false)
repoResources.On("WriteResourceFromFile", mock.Anything, "dashboards/test.json", "").
Return("test-dashboard", schema.GroupVersionKind{Kind: "Dashboard", Group: "dashboards"}, nil)
@@ -264,7 +259,6 @@ func TestFullSync_ApplyChanges(t *testing.T) { //nolint:gocyclo
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, compareFn *MockCompareFn) {
progress.On("TooManyErrors").Return(nil)
progress.On("HasDirPathFailedCreation", "dashboards/test.json").Return(false)
repoResources.On("WriteResourceFromFile", mock.Anything, "dashboards/test.json", "").
Return("test-dashboard", schema.GroupVersionKind{Kind: "Dashboard", Group: "dashboards"}, fmt.Errorf("write error"))
@@ -291,7 +285,6 @@ func TestFullSync_ApplyChanges(t *testing.T) { //nolint:gocyclo
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, compareFn *MockCompareFn) {
progress.On("TooManyErrors").Return(nil)
progress.On("HasDirPathFailedCreation", "dashboards/test.json").Return(false)
repoResources.On("WriteResourceFromFile", mock.Anything, "dashboards/test.json", "").
Return("test-dashboard", schema.GroupVersionKind{Kind: "Dashboard", Group: "dashboards"}, nil)
@@ -316,7 +309,6 @@ func TestFullSync_ApplyChanges(t *testing.T) { //nolint:gocyclo
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, compareFn *MockCompareFn) {
progress.On("TooManyErrors").Return(nil)
progress.On("HasDirPathFailedCreation", "dashboards/test.json").Return(false)
repoResources.On("WriteResourceFromFile", mock.Anything, "dashboards/test.json", "").
Return("test-dashboard", schema.GroupVersionKind{Kind: "Dashboard", Group: "dashboards"}, fmt.Errorf("write error"))
@@ -343,7 +335,6 @@ func TestFullSync_ApplyChanges(t *testing.T) { //nolint:gocyclo
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, compareFn *MockCompareFn) {
progress.On("TooManyErrors").Return(nil)
progress.On("HasDirPathFailedCreation", "one/two/three/").Return(false)
repoResources.On("EnsureFolderPathExist", mock.Anything, "one/two/three/").Return("some-folder", nil)
progress.On("Record", mock.Anything, jobs.JobResourceResult{
@@ -366,7 +357,6 @@ func TestFullSync_ApplyChanges(t *testing.T) { //nolint:gocyclo
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, compareFn *MockCompareFn) {
progress.On("TooManyErrors").Return(nil)
progress.On("HasDirPathFailedCreation", "one/two/three/").Return(false)
repoResources.On(
"EnsureFolderPathExist",
@@ -591,7 +581,6 @@ func TestFullSync_ApplyChanges(t *testing.T) { //nolint:gocyclo
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, compareFn *MockCompareFn) {
progress.On("TooManyErrors").Return(nil)
progress.On("HasDirPathFailedDeletion", "to-be-deleted/").Return(false)
scheme := runtime.NewScheme()
require.NoError(t, metav1.AddMetaToScheme(scheme))
@@ -651,7 +640,6 @@ func TestFullSync_ApplyChanges(t *testing.T) { //nolint:gocyclo
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, compareFn *MockCompareFn) {
progress.On("TooManyErrors").Return(nil)
progress.On("HasDirPathFailedDeletion", "to-be-deleted/").Return(false)
scheme := runtime.NewScheme()
require.NoError(t, metav1.AddMetaToScheme(scheme))
@@ -707,7 +695,6 @@ func TestFullSync_ApplyChanges(t *testing.T) { //nolint:gocyclo
},
setupMocks: func(repo *repository.MockRepository, repoResources *resources.MockRepositoryResources, clients *resources.MockResourceClients, progress *jobs.MockJobProgressRecorder, compareFn *MockCompareFn) {
progress.On("TooManyErrors").Return(nil)
progress.On("HasDirPathFailedCreation", "dashboards/slow.json").Return(false)
repoResources.On("WriteResourceFromFile", mock.Anything, "dashboards/slow.json", "").
Run(func(args mock.Arguments) {

View File

@@ -60,7 +60,7 @@ func IncrementalSync(ctx context.Context, repo repository.Versioned, previousRef
if len(affectedFolders) > 0 {
cleanupStart := time.Now()
span.AddEvent("checking if impacted folders should be deleted", trace.WithAttributes(attribute.Int("affected_folders", len(affectedFolders))))
- err := cleanupOrphanedFolders(ctx, repo, affectedFolders, repositoryResources, tracer, progress)
+ err := cleanupOrphanedFolders(ctx, repo, affectedFolders, repositoryResources, tracer)
metrics.RecordIncrementalSyncPhase(jobs.IncrementalSyncPhaseCleanup, time.Since(cleanupStart))
if err != nil {
return tracing.Error(span, fmt.Errorf("cleanup orphaned folders: %w", err))
@@ -85,20 +85,6 @@ func applyIncrementalChanges(ctx context.Context, diff []repository.VersionedFil
return nil, tracing.Error(span, err)
}
- // Check if this resource is nested under a failed folder creation
- // This only applies to creation/update/rename operations, not deletions
- if change.Action != repository.FileActionDeleted && progress.HasDirPathFailedCreation(change.Path) {
- // Skip this resource since its parent folder failed to be created
- skipCtx, skipSpan := tracer.Start(ctx, "provisioning.sync.incremental.skip_nested_resource")
- progress.Record(skipCtx, jobs.JobResourceResult{
- Path: change.Path,
- Action: repository.FileActionIgnored,
- Warning: fmt.Errorf("resource was not processed because the parent folder could not be created"),
- })
- skipSpan.End()
- continue
- }
if err := resources.IsPathSupported(change.Path); err != nil {
ensureFolderCtx, ensureFolderSpan := tracer.Start(ctx, "provisioning.sync.incremental.ensure_folder_path_exist")
// Maintain the safe segment for empty folders
@@ -112,13 +98,7 @@ func applyIncrementalChanges(ctx context.Context, diff []repository.VersionedFil
if err != nil {
ensureFolderSpan.RecordError(err)
ensureFolderSpan.End()
- progress.Record(ensureFolderCtx, jobs.JobResourceResult{
- Path: change.Path,
- Action: repository.FileActionIgnored,
- Error: err,
- })
- continue
+ return nil, tracing.Error(span, fmt.Errorf("unable to create empty file folder: %w", err))
}
progress.Record(ensureFolderCtx, jobs.JobResourceResult{
@@ -205,7 +185,6 @@ func cleanupOrphanedFolders(
affectedFolders map[string]string,
repositoryResources resources.RepositoryResources,
tracer tracing.Tracer,
- progress jobs.JobProgressRecorder,
) error {
ctx, span := tracer.Start(ctx, "provisioning.sync.incremental.cleanup_orphaned_folders")
defer span.End()
@@ -219,12 +198,6 @@ func cleanupOrphanedFolders(
for path, folderName := range affectedFolders {
span.SetAttributes(attribute.String("folder", folderName))
- // Check if any resources under this folder failed to delete
- if progress.HasDirPathFailedDeletion(path) {
- span.AddEvent("skipping orphaned folder cleanup: a child resource in its path failed to be deleted")
- continue
- }
// if we can no longer find the folder in git, then we can delete it from grafana
_, err := readerRepo.Read(ctx, path, "")
if err != nil && (errors.Is(err, repository.ErrFileNotFound) || apierrors.IsNotFound(err)) {

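The kept branch above only removes a Grafana folder when the repository read comes back with a definitive not-found, from either the repository's own sentinel or a Kubernetes-style NotFound. A standalone sketch of that predicate; errFileNotFound stands in for repository.ErrFileNotFound, and treating any other read error as fatal is an assumption, since the hunk does not show that branch:

package main

import (
	"errors"
	"fmt"
)

// errFileNotFound substitutes for repository.ErrFileNotFound; the real check
// also accepts apierrors.IsNotFound for Kubernetes API errors.
var errFileNotFound = errors.New("file not found")

// shouldRemoveFolder decides whether an orphaned folder can be deleted from
// Grafana based on the result of reading it back from git.
func shouldRemoveFolder(readErr error) (bool, error) {
	switch {
	case readErr == nil:
		return false, nil // still present in git: keep the Grafana folder
	case errors.Is(readErr, errFileNotFound):
		return true, nil // gone from git: safe to delete from Grafana
	default:
		return false, fmt.Errorf("read folder: %w", readErr) // assumed: surface other errors
	}
}

func main() {
	fmt.Println(shouldRemoveFolder(nil))
	fmt.Println(shouldRemoveFolder(fmt.Errorf("wrapped: %w", errFileNotFound)))
	fmt.Println(shouldRemoveFolder(errors.New("timeout")))
}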
View File

@@ -1,623 +0,0 @@
package sync
import (
"context"
"fmt"
"testing"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
)
/*
TestIncrementalSync_HierarchicalErrorHandling tests the hierarchical error handling behavior:
FOLDER CREATION FAILURES:
- When EnsureFolderPathExist fails with PathCreationError, the path is tracked
- Subsequent resources under that path are skipped with FileActionIgnored
- Only the initial folder creation error counts toward error limits
- WriteResourceFromFile can also return PathCreationError for implicit folder creation
FOLDER DELETION FAILURES (cleanupOrphanedFolders):
- When RemoveResourceFromFile fails, path is tracked in failedDeletions
- In cleanupOrphanedFolders, HasDirPathFailedDeletion() is checked before RemoveFolder
- If children failed to delete, folder cleanup is skipped with a span event
DELETIONS NOT AFFECTED BY CREATION FAILURES:
- HasDirPathFailedCreation is NOT checked for FileActionDeleted
- Deletions proceed even if their parent folder failed to be created
- This handles cleanup of resources that exist from previous syncs
RENAME OPERATIONS:
- RenameResourceFile can return PathCreationError for the destination folder
- Renames are affected by failed destination folder creation
- Renames are NOT skipped due to source folder creation failures
AUTOMATIC TRACKING:
- Record() automatically detects PathCreationError via errors.As() and adds to failedCreations
- Record() automatically detects FileActionDeleted with error and adds to failedDeletions
- No manual tracking calls needed
*/
func TestIncrementalSync_HierarchicalErrorHandling(t *testing.T) { // nolint:gocyclo
tests := []struct {
name string
setupMocks func(*repository.MockVersioned, *resources.MockRepositoryResources, *jobs.MockJobProgressRecorder)
changes []repository.VersionedFileChange
previousRef string
currentRef string
description string
expectError bool
errorContains string
}{
{
name: "folder creation fails, nested file skipped",
description: "When unsupported/ fails to create via EnsureFolderPathExist, nested file is skipped",
previousRef: "old-ref",
currentRef: "new-ref",
changes: []repository.VersionedFileChange{
{Action: repository.FileActionCreated, Path: "unsupported/file.txt", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "unsupported/nested/file2.txt", Ref: "new-ref"},
},
setupMocks: func(repo *repository.MockVersioned, repoResources *resources.MockRepositoryResources, progress *jobs.MockJobProgressRecorder) {
// First file triggers folder creation which fails
progress.On("HasDirPathFailedCreation", "unsupported/file.txt").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "unsupported/", Err: fmt.Errorf("permission denied")}
repoResources.On("EnsureFolderPathExist", mock.Anything, "unsupported/").Return("", folderErr).Once()
// First file recorded with error (note: error is from folder creation, but recorded against file)
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "unsupported/file.txt" &&
r.Action == repository.FileActionIgnored &&
r.Error != nil
})).Return().Once()
// Second file is skipped because parent folder failed
progress.On("HasDirPathFailedCreation", "unsupported/nested/file2.txt").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "unsupported/nested/file2.txt" &&
r.Action == repository.FileActionIgnored &&
r.Warning != nil &&
r.Warning.Error() == "resource was not processed because the parent folder could not be created"
})).Return().Once()
},
},
{
name: "WriteResourceFromFile returns PathCreationError, nested resources skipped",
description: "When WriteResourceFromFile implicitly creates a folder and fails, nested resources are skipped",
previousRef: "old-ref",
currentRef: "new-ref",
changes: []repository.VersionedFileChange{
{Action: repository.FileActionCreated, Path: "folder1/file1.json", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "folder1/file2.json", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "folder1/nested/file3.json", Ref: "new-ref"},
},
setupMocks: func(repo *repository.MockVersioned, repoResources *resources.MockRepositoryResources, progress *jobs.MockJobProgressRecorder) {
// First file write fails with PathCreationError
progress.On("HasDirPathFailedCreation", "folder1/file1.json").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "folder1/", Err: fmt.Errorf("permission denied")}
repoResources.On("WriteResourceFromFile", mock.Anything, "folder1/file1.json", "new-ref").
Return("", schema.GroupVersionKind{}, folderErr).Once()
// First file recorded with error, automatically tracked
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file1.json" &&
r.Action == repository.FileActionCreated &&
r.Error != nil
})).Return().Once()
// Subsequent files are skipped
progress.On("HasDirPathFailedCreation", "folder1/file2.json").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file2.json" && r.Action == repository.FileActionIgnored && r.Warning != nil
})).Return().Once()
progress.On("HasDirPathFailedCreation", "folder1/nested/file3.json").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/nested/file3.json" && r.Action == repository.FileActionIgnored && r.Warning != nil
})).Return().Once()
},
},
{
name: "file deletion fails, folder cleanup skipped",
description: "When RemoveResourceFromFile fails, cleanupOrphanedFolders skips folder removal",
previousRef: "old-ref",
currentRef: "new-ref",
changes: []repository.VersionedFileChange{
{Action: repository.FileActionDeleted, Path: "dashboards/file1.json", PreviousRef: "old-ref"},
},
setupMocks: func(repo *repository.MockVersioned, repoResources *resources.MockRepositoryResources, progress *jobs.MockJobProgressRecorder) {
// File deletion fails (deletions don't check HasDirPathFailedCreation)
repoResources.On("RemoveResourceFromFile", mock.Anything, "dashboards/file1.json", "old-ref").
Return("dashboard-1", "folder-uid", schema.GroupVersionKind{Kind: "Dashboard"}, fmt.Errorf("permission denied")).Once()
// Error recorded, automatically tracked in failedDeletions
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "dashboards/file1.json" &&
r.Action == repository.FileActionDeleted &&
r.Error != nil
})).Return().Once()
// During cleanup, folder deletion is skipped
progress.On("HasDirPathFailedDeletion", "dashboards/").Return(true).Once()
// Note: RemoveFolder should NOT be called (verified via AssertNotCalled in test)
},
},
{
name: "deletion proceeds despite creation failure",
description: "When folder1/ creation fails, deletion of folder1/old.json still proceeds",
previousRef: "old-ref",
currentRef: "new-ref",
changes: []repository.VersionedFileChange{
{Action: repository.FileActionCreated, Path: "folder1/new.json", Ref: "new-ref"},
{Action: repository.FileActionDeleted, Path: "folder1/old.json", PreviousRef: "old-ref"},
},
setupMocks: func(repo *repository.MockVersioned, repoResources *resources.MockRepositoryResources, progress *jobs.MockJobProgressRecorder) {
// Creation fails
progress.On("HasDirPathFailedCreation", "folder1/new.json").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "folder1/", Err: fmt.Errorf("permission denied")}
repoResources.On("WriteResourceFromFile", mock.Anything, "folder1/new.json", "new-ref").
Return("", schema.GroupVersionKind{}, folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/new.json" && r.Error != nil
})).Return().Once()
// Deletion proceeds (NOT checking HasDirPathFailedCreation for deletions)
repoResources.On("RemoveResourceFromFile", mock.Anything, "folder1/old.json", "old-ref").
Return("old-resource", "", schema.GroupVersionKind{Kind: "Dashboard"}, nil).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/old.json" &&
r.Action == repository.FileActionDeleted &&
r.Error == nil // Deletion succeeds!
})).Return().Once()
},
},
{
name: "multi-level nesting cascade",
description: "When level1/ fails, level1/level2/level3/file.json is also skipped",
previousRef: "old-ref",
currentRef: "new-ref",
changes: []repository.VersionedFileChange{
{Action: repository.FileActionCreated, Path: "level1/file.txt", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "level1/level2/file.txt", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "level1/level2/level3/file.txt", Ref: "new-ref"},
},
setupMocks: func(repo *repository.MockVersioned, repoResources *resources.MockRepositoryResources, progress *jobs.MockJobProgressRecorder) {
// First file triggers level1/ failure
progress.On("HasDirPathFailedCreation", "level1/file.txt").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "level1/", Err: fmt.Errorf("permission denied")}
repoResources.On("EnsureFolderPathExist", mock.Anything, "level1/").Return("", folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "level1/file.txt" && r.Action == repository.FileActionIgnored
})).Return().Once()
// All nested files are skipped
for _, path := range []string{"level1/level2/file.txt", "level1/level2/level3/file.txt"} {
progress.On("HasDirPathFailedCreation", path).Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == path && r.Action == repository.FileActionIgnored
})).Return().Once()
}
},
},
{
name: "mixed success and failure",
description: "When success/ works and failure/ fails, only failure/* are skipped",
previousRef: "old-ref",
currentRef: "new-ref",
changes: []repository.VersionedFileChange{
{Action: repository.FileActionCreated, Path: "success/file1.json", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "success/nested/file2.json", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "failure/file3.txt", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "failure/nested/file4.txt", Ref: "new-ref"},
},
setupMocks: func(repo *repository.MockVersioned, repoResources *resources.MockRepositoryResources, progress *jobs.MockJobProgressRecorder) {
// Success path works
progress.On("HasDirPathFailedCreation", "success/file1.json").Return(false).Once()
repoResources.On("WriteResourceFromFile", mock.Anything, "success/file1.json", "new-ref").
Return("resource-1", schema.GroupVersionKind{Kind: "Dashboard"}, nil).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "success/file1.json" && r.Error == nil
})).Return().Once()
progress.On("HasDirPathFailedCreation", "success/nested/file2.json").Return(false).Once()
repoResources.On("WriteResourceFromFile", mock.Anything, "success/nested/file2.json", "new-ref").
Return("resource-2", schema.GroupVersionKind{Kind: "Dashboard"}, nil).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "success/nested/file2.json" && r.Error == nil
})).Return().Once()
// Failure path fails
progress.On("HasDirPathFailedCreation", "failure/file3.txt").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "failure/", Err: fmt.Errorf("disk full")}
repoResources.On("EnsureFolderPathExist", mock.Anything, "failure/").Return("", folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "failure/file3.txt" && r.Action == repository.FileActionIgnored
})).Return().Once()
// Nested file in failure path is skipped
progress.On("HasDirPathFailedCreation", "failure/nested/file4.txt").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "failure/nested/file4.txt" && r.Action == repository.FileActionIgnored
})).Return().Once()
},
},
{
name: "rename with failed destination folder",
description: "When RenameResourceFile fails with PathCreationError for destination, rename is skipped",
previousRef: "old-ref",
currentRef: "new-ref",
changes: []repository.VersionedFileChange{
{
Action: repository.FileActionRenamed,
Path: "newfolder/file.json",
PreviousPath: "oldfolder/file.json",
Ref: "new-ref",
PreviousRef: "old-ref",
},
},
setupMocks: func(repo *repository.MockVersioned, repoResources *resources.MockRepositoryResources, progress *jobs.MockJobProgressRecorder) {
// Rename fails with PathCreationError for destination folder
progress.On("HasDirPathFailedCreation", "newfolder/file.json").Return(false).Once()
folderErr := &resources.PathCreationError{Path: "newfolder/", Err: fmt.Errorf("permission denied")}
repoResources.On("RenameResourceFile", mock.Anything, "oldfolder/file.json", "old-ref", "newfolder/file.json", "new-ref").
Return("", "", schema.GroupVersionKind{}, folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "newfolder/file.json" &&
r.Action == repository.FileActionRenamed &&
r.Error != nil
})).Return().Once()
},
},
{
name: "renamed file still checked, subsequent nested resources skipped",
description: "After rename fails for folder1/file.json, other folder1/* files are skipped",
previousRef: "old-ref",
currentRef: "new-ref",
changes: []repository.VersionedFileChange{
{Action: repository.FileActionRenamed, Path: "folder1/file1.json", PreviousPath: "old/file1.json", Ref: "new-ref", PreviousRef: "old-ref"},
{Action: repository.FileActionCreated, Path: "folder1/file2.json", Ref: "new-ref"},
},
setupMocks: func(repo *repository.MockVersioned, repoResources *resources.MockRepositoryResources, progress *jobs.MockJobProgressRecorder) {
// Rename is skipped here: the destination path check reports a failed parent folder creation (a source-folder failure alone would not skip it)
progress.On("HasDirPathFailedCreation", "folder1/file1.json").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file1.json" &&
r.Action == repository.FileActionIgnored &&
r.Warning != nil && r.Warning.Error() == "resource was not processed because the parent folder could not be created"
})).Return().Once()
// Second file also skipped
progress.On("HasDirPathFailedCreation", "folder1/file2.json").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file2.json" && r.Action == repository.FileActionIgnored && r.Warning != nil
})).Return().Once()
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
runHierarchicalErrorHandlingTest(t, tt)
})
}
}
type compositeRepoForTest struct {
*repository.MockVersioned
*repository.MockReader
}
func runHierarchicalErrorHandlingTest(t *testing.T, tt struct {
name string
setupMocks func(*repository.MockVersioned, *resources.MockRepositoryResources, *jobs.MockJobProgressRecorder)
changes []repository.VersionedFileChange
previousRef string
currentRef string
description string
expectError bool
errorContains string
}) {
var repo repository.Versioned
mockVersioned := repository.NewMockVersioned(t)
repoResources := resources.NewMockRepositoryResources(t)
progress := jobs.NewMockJobProgressRecorder(t)
// For tests that need cleanup (folder deletion), use composite repo
if tt.name == "file deletion fails, folder cleanup skipped" {
mockReader := repository.NewMockReader(t)
repo = &compositeRepoForTest{
MockVersioned: mockVersioned,
MockReader: mockReader,
}
} else {
repo = mockVersioned
}
mockVersioned.On("CompareFiles", mock.Anything, tt.previousRef, tt.currentRef).Return(tt.changes, nil)
progress.On("SetTotal", mock.Anything, len(tt.changes)).Return()
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
progress.On("TooManyErrors").Return(nil).Maybe()
tt.setupMocks(mockVersioned, repoResources, progress)
err := IncrementalSync(context.Background(), repo, tt.previousRef, tt.currentRef, repoResources, progress, tracing.NewNoopTracerService(), jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()))
if tt.expectError {
require.Error(t, err)
if tt.errorContains != "" {
require.Contains(t, err.Error(), tt.errorContains)
}
} else {
require.NoError(t, err)
}
progress.AssertExpectations(t)
repoResources.AssertExpectations(t)
// For deletion tests, verify RemoveFolder was NOT called
if tt.name == "file deletion fails, folder cleanup skipped" {
repoResources.AssertNotCalled(t, "RemoveFolder", mock.Anything, mock.Anything)
}
}
// TestIncrementalSync_HierarchicalErrorHandling_FailedFolderCreation tests nested resource skipping
func TestIncrementalSync_HierarchicalErrorHandling_FailedFolderCreation(t *testing.T) {
repo := repository.NewMockVersioned(t)
repoResources := resources.NewMockRepositoryResources(t)
progress := jobs.NewMockJobProgressRecorder(t)
changes := []repository.VersionedFileChange{
{Action: repository.FileActionCreated, Path: "unsupported/file.txt", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "unsupported/subfolder/file2.txt", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "unsupported/file3.json", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "other/file.json", Ref: "new-ref"},
}
repo.On("CompareFiles", mock.Anything, "old-ref", "new-ref").Return(changes, nil)
progress.On("SetTotal", mock.Anything, 4).Return()
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
progress.On("TooManyErrors").Return(nil).Maybe()
folderErr := &resources.PathCreationError{Path: "unsupported/", Err: fmt.Errorf("permission denied")}
// The first file's check runs before any failure has been recorded.
progress.On("HasDirPathFailedCreation", "unsupported/file.txt").Return(false).Once()
repoResources.On("EnsureFolderPathExist", mock.Anything, "unsupported/").Return("", folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "unsupported/file.txt" && r.Action == repository.FileActionIgnored && r.Error != nil
})).Return().Once()
progress.On("HasDirPathFailedCreation", "unsupported/subfolder/file2.txt").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "unsupported/subfolder/file2.txt" && r.Action == repository.FileActionIgnored &&
r.Warning != nil && r.Warning.Error() == "resource was not processed because the parent folder could not be created"
})).Return().Once()
progress.On("HasDirPathFailedCreation", "unsupported/file3.json").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "unsupported/file3.json" && r.Action == repository.FileActionIgnored &&
r.Warning != nil && r.Warning.Error() == "resource was not processed because the parent folder could not be created"
})).Return().Once()
progress.On("HasDirPathFailedCreation", "other/file.json").Return(false).Once()
repoResources.On("WriteResourceFromFile", mock.Anything, "other/file.json", "new-ref").
Return("test-resource", schema.GroupVersionKind{Kind: "Dashboard"}, nil).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "other/file.json" && r.Action == repository.FileActionCreated && r.Error == nil
})).Return().Once()
err := IncrementalSync(context.Background(), repo, "old-ref", "new-ref", repoResources, progress, tracing.NewNoopTracerService(), jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()))
require.NoError(t, err)
progress.AssertExpectations(t)
}
// TestIncrementalSync_HierarchicalErrorHandling_FailedFileDeletion tests folder cleanup prevention
func TestIncrementalSync_HierarchicalErrorHandling_FailedFileDeletion(t *testing.T) {
mockVersioned := repository.NewMockVersioned(t)
mockReader := repository.NewMockReader(t)
repo := &compositeRepoForTest{MockVersioned: mockVersioned, MockReader: mockReader}
repoResources := resources.NewMockRepositoryResources(t)
progress := jobs.NewMockJobProgressRecorder(t)
changes := []repository.VersionedFileChange{
{Action: repository.FileActionDeleted, Path: "dashboards/file1.json", PreviousRef: "old-ref"},
}
mockVersioned.On("CompareFiles", mock.Anything, "old-ref", "new-ref").Return(changes, nil)
progress.On("SetTotal", mock.Anything, 1).Return()
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
progress.On("TooManyErrors").Return(nil).Maybe()
// Deletions don't check HasDirPathFailedCreation, they go straight to removal
repoResources.On("RemoveResourceFromFile", mock.Anything, "dashboards/file1.json", "old-ref").
Return("dashboard-1", "folder-uid", schema.GroupVersionKind{Kind: "Dashboard"}, fmt.Errorf("permission denied")).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "dashboards/file1.json" && r.Action == repository.FileActionDeleted &&
r.Error != nil && r.Error.Error() == "removing resource from file dashboards/file1.json: permission denied"
})).Return().Once()
progress.On("HasDirPathFailedDeletion", "dashboards/").Return(true).Once()
err := IncrementalSync(context.Background(), repo, "old-ref", "new-ref", repoResources, progress, tracing.NewNoopTracerService(), jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()))
require.NoError(t, err)
progress.AssertExpectations(t)
repoResources.AssertNotCalled(t, "RemoveFolder", mock.Anything, mock.Anything)
}
// TestIncrementalSync_HierarchicalErrorHandling_DeletionNotAffectedByCreationFailure tests deletions proceed despite creation failures
func TestIncrementalSync_HierarchicalErrorHandling_DeletionNotAffectedByCreationFailure(t *testing.T) {
repo := repository.NewMockVersioned(t)
repoResources := resources.NewMockRepositoryResources(t)
progress := jobs.NewMockJobProgressRecorder(t)
changes := []repository.VersionedFileChange{
{Action: repository.FileActionCreated, Path: "folder1/file.json", Ref: "new-ref"},
{Action: repository.FileActionDeleted, Path: "folder1/old.json", PreviousRef: "old-ref"},
}
repo.On("CompareFiles", mock.Anything, "old-ref", "new-ref").Return(changes, nil)
progress.On("SetTotal", mock.Anything, 2).Return()
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
progress.On("TooManyErrors").Return(nil).Maybe()
// Creation fails
progress.On("HasDirPathFailedCreation", "folder1/file.json").Return(false).Once()
repoResources.On("WriteResourceFromFile", mock.Anything, "folder1/file.json", "new-ref").
Return("", schema.GroupVersionKind{}, &resources.PathCreationError{Path: "folder1/", Err: fmt.Errorf("permission denied")}).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/file.json" && r.Error != nil
})).Return().Once()
// Deletion should NOT be skipped (not checking HasDirPathFailedCreation for deletions)
// Deletions don't check HasDirPathFailedCreation, they go straight to removal
repoResources.On("RemoveResourceFromFile", mock.Anything, "folder1/old.json", "old-ref").
Return("old-resource", "", schema.GroupVersionKind{Kind: "Dashboard"}, nil).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "folder1/old.json" && r.Action == repository.FileActionDeleted && r.Error == nil
})).Return().Once()
err := IncrementalSync(context.Background(), repo, "old-ref", "new-ref", repoResources, progress, tracing.NewNoopTracerService(), jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()))
require.NoError(t, err)
progress.AssertExpectations(t)
}
// TestIncrementalSync_HierarchicalErrorHandling_MultiLevelNesting tests multi-level cascade
func TestIncrementalSync_HierarchicalErrorHandling_MultiLevelNesting(t *testing.T) {
repo := repository.NewMockVersioned(t)
repoResources := resources.NewMockRepositoryResources(t)
progress := jobs.NewMockJobProgressRecorder(t)
changes := []repository.VersionedFileChange{
{Action: repository.FileActionCreated, Path: "level1/file.txt", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "level1/level2/file.txt", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "level1/level2/level3/file.txt", Ref: "new-ref"},
}
repo.On("CompareFiles", mock.Anything, "old-ref", "new-ref").Return(changes, nil)
progress.On("SetTotal", mock.Anything, 3).Return()
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
progress.On("TooManyErrors").Return(nil).Maybe()
folderErr := &resources.PathCreationError{Path: "level1/", Err: fmt.Errorf("permission denied")}
// The first file's check runs before any failure has been recorded.
progress.On("HasDirPathFailedCreation", "level1/file.txt").Return(false).Once()
repoResources.On("EnsureFolderPathExist", mock.Anything, "level1/").Return("", folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "level1/file.txt" && r.Action == repository.FileActionIgnored && r.Error != nil
})).Return().Once()
progress.On("HasDirPathFailedCreation", "level1/level2/file.txt").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "level1/level2/file.txt" && r.Action == repository.FileActionIgnored &&
r.Warning != nil && r.Warning.Error() == "resource was not processed because the parent folder could not be created"
})).Return().Once()
progress.On("HasDirPathFailedCreation", "level1/level2/level3/file.txt").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "level1/level2/level3/file.txt" && r.Action == repository.FileActionIgnored &&
r.Warning != nil && r.Warning.Error() == "resource was not processed because the parent folder could not be created"
})).Return().Once()
err := IncrementalSync(context.Background(), repo, "old-ref", "new-ref", repoResources, progress, tracing.NewNoopTracerService(), jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()))
require.NoError(t, err)
progress.AssertExpectations(t)
}
// TestIncrementalSync_HierarchicalErrorHandling_MixedSuccessAndFailure tests partial failures
func TestIncrementalSync_HierarchicalErrorHandling_MixedSuccessAndFailure(t *testing.T) {
repo := repository.NewMockVersioned(t)
repoResources := resources.NewMockRepositoryResources(t)
progress := jobs.NewMockJobProgressRecorder(t)
changes := []repository.VersionedFileChange{
{Action: repository.FileActionCreated, Path: "success/file1.json", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "success/nested/file2.json", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "failure/file3.txt", Ref: "new-ref"},
{Action: repository.FileActionCreated, Path: "failure/nested/file4.txt", Ref: "new-ref"},
}
repo.On("CompareFiles", mock.Anything, "old-ref", "new-ref").Return(changes, nil)
progress.On("SetTotal", mock.Anything, 4).Return()
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
progress.On("TooManyErrors").Return(nil).Maybe()
progress.On("HasDirPathFailedCreation", "success/file1.json").Return(false).Once()
repoResources.On("WriteResourceFromFile", mock.Anything, "success/file1.json", "new-ref").
Return("resource-1", schema.GroupVersionKind{Kind: "Dashboard"}, nil).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "success/file1.json" && r.Action == repository.FileActionCreated && r.Error == nil
})).Return().Once()
progress.On("HasDirPathFailedCreation", "success/nested/file2.json").Return(false).Once()
repoResources.On("WriteResourceFromFile", mock.Anything, "success/nested/file2.json", "new-ref").
Return("resource-2", schema.GroupVersionKind{Kind: "Dashboard"}, nil).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "success/nested/file2.json" && r.Action == repository.FileActionCreated && r.Error == nil
})).Return().Once()
folderErr := &resources.PathCreationError{Path: "failure/", Err: fmt.Errorf("disk full")}
progress.On("HasDirPathFailedCreation", "failure/file3.txt").Return(false).Once()
repoResources.On("EnsureFolderPathExist", mock.Anything, "failure/").Return("", folderErr).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "failure/file3.txt" && r.Action == repository.FileActionIgnored
})).Return().Once()
progress.On("HasDirPathFailedCreation", "failure/nested/file4.txt").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "failure/nested/file4.txt" && r.Action == repository.FileActionIgnored &&
r.Warning != nil && r.Warning.Error() == "resource was not processed because the parent folder could not be created"
})).Return().Once()
err := IncrementalSync(context.Background(), repo, "old-ref", "new-ref", repoResources, progress, tracing.NewNoopTracerService(), jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()))
require.NoError(t, err)
progress.AssertExpectations(t)
repoResources.AssertExpectations(t)
}
// TestIncrementalSync_HierarchicalErrorHandling_RenameWithFailedFolderCreation tests rename operations affected by folder failures
func TestIncrementalSync_HierarchicalErrorHandling_RenameWithFailedFolderCreation(t *testing.T) {
repo := repository.NewMockVersioned(t)
repoResources := resources.NewMockRepositoryResources(t)
progress := jobs.NewMockJobProgressRecorder(t)
changes := []repository.VersionedFileChange{
{Action: repository.FileActionRenamed, Path: "newfolder/file.json", PreviousPath: "oldfolder/file.json", Ref: "new-ref", PreviousRef: "old-ref"},
}
repo.On("CompareFiles", mock.Anything, "old-ref", "new-ref").Return(changes, nil)
progress.On("SetTotal", mock.Anything, 1).Return()
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
progress.On("TooManyErrors").Return(nil).Maybe()
progress.On("HasDirPathFailedCreation", "newfolder/file.json").Return(true).Once()
progress.On("Record", mock.Anything, mock.MatchedBy(func(r jobs.JobResourceResult) bool {
return r.Path == "newfolder/file.json" && r.Action == repository.FileActionIgnored &&
r.Warning != nil && r.Warning.Error() == "resource was not processed because the parent folder could not be created"
})).Return().Once()
err := IncrementalSync(context.Background(), repo, "old-ref", "new-ref", repoResources, progress, tracing.NewNoopTracerService(), jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()))
require.NoError(t, err)
progress.AssertExpectations(t)
}

View File

@@ -92,10 +92,6 @@ func TestIncrementalSync(t *testing.T) {
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
- // Mock HasDirPathFailedCreation checks
- progress.On("HasDirPathFailedCreation", "dashboards/test.json").Return(false)
- progress.On("HasDirPathFailedCreation", "alerts/alert.yaml").Return(false)
// Mock successful resource writes
repoResources.On("WriteResourceFromFile", mock.Anything, "dashboards/test.json", "new-ref").
Return("test-dashboard", schema.GroupVersionKind{Kind: "Dashboard", Group: "dashboards"}, nil)
@@ -131,9 +127,6 @@ func TestIncrementalSync(t *testing.T) {
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
- // Mock HasDirPathFailedCreation check
- progress.On("HasDirPathFailedCreation", "unsupported/path/file.txt").Return(false)
// Mock folder creation
repoResources.On("EnsureFolderPathExist", mock.Anything, "unsupported/path/").
Return("test-folder", nil)
@@ -168,9 +161,6 @@ func TestIncrementalSync(t *testing.T) {
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
- // Mock HasDirPathFailedCreation check
- progress.On("HasDirPathFailedCreation", ".unsupported/path/file.txt").Return(false)
progress.On("Record", mock.Anything, jobs.JobResourceResult{
Action: repository.FileActionIgnored,
Path: ".unsupported/path/file.txt",
@@ -232,9 +222,6 @@ func TestIncrementalSync(t *testing.T) {
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
- // Mock HasDirPathFailedCreation check
- progress.On("HasDirPathFailedCreation", "dashboards/new.json").Return(false)
// Mock resource rename
repoResources.On("RenameResourceFile", mock.Anything, "dashboards/old.json", "old-ref", "dashboards/new.json", "new-ref").
Return("renamed-dashboard", "", schema.GroupVersionKind{Kind: "Dashboard", Group: "dashboards"}, nil)
@@ -267,10 +254,6 @@ func TestIncrementalSync(t *testing.T) {
progress.On("SetTotal", mock.Anything, 1).Return()
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
- // Mock HasDirPathFailedCreation check
- progress.On("HasDirPathFailedCreation", "dashboards/ignored.json").Return(false)
progress.On("Record", mock.Anything, jobs.JobResourceResult{
Action: repository.FileActionIgnored,
Path: "dashboards/ignored.json",
@@ -294,28 +277,16 @@ func TestIncrementalSync(t *testing.T) {
repo.On("CompareFiles", mock.Anything, "old-ref", "new-ref").Return(changes, nil)
progress.On("SetTotal", mock.Anything, 1).Return()
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
- // Mock HasDirPathFailedCreation check
- progress.On("HasDirPathFailedCreation", "unsupported/path/file.txt").Return(false)
// Mock folder creation error
repoResources.On("EnsureFolderPathExist", mock.Anything, "unsupported/path/").
Return("", fmt.Errorf("failed to create folder"))
- // Mock progress recording with error
- progress.On("Record", mock.Anything, mock.MatchedBy(func(result jobs.JobResourceResult) bool {
- return result.Action == repository.FileActionIgnored &&
- result.Path == "unsupported/path/file.txt" &&
- result.Error != nil &&
- result.Error.Error() == "failed to create folder"
- })).Return()
- progress.On("TooManyErrors").Return(nil)
},
previousRef: "old-ref",
currentRef: "new-ref",
expectedCalls: 1,
expectedError: "unable to create empty file folder: failed to create folder",
},
{
name: "error writing resource",
@@ -332,9 +303,6 @@ func TestIncrementalSync(t *testing.T) {
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
- // Mock HasDirPathFailedCreation check
- progress.On("HasDirPathFailedCreation", "dashboards/test.json").Return(false)
// Mock resource write error
repoResources.On("WriteResourceFromFile", mock.Anything, "dashboards/test.json", "new-ref").
Return("test-dashboard", schema.GroupVersionKind{Kind: "Dashboard", Group: "dashboards"}, fmt.Errorf("write failed"))
@@ -404,8 +372,7 @@ func TestIncrementalSync(t *testing.T) {
repo.On("CompareFiles", mock.Anything, "old-ref", "new-ref").Return(changes, nil)
progress.On("SetTotal", mock.Anything, 1).Return()
progress.On("SetMessage", mock.Anything, "replicating versioned changes").Return()
- // Mock too many errors - this is checked before processing files, so HasDirPathFailedCreation won't be called
+ // Mock too many errors
progress.On("TooManyErrors").Return(fmt.Errorf("too many errors occurred"))
},
previousRef: "old-ref",
@@ -461,9 +428,6 @@ func TestIncrementalSync_CleanupOrphanedFolders(t *testing.T) {
repoResources.On("RemoveResourceFromFile", mock.Anything, "dashboards/old.json", "old-ref").
Return("old-dashboard", "folder-uid", schema.GroupVersionKind{Kind: "Dashboard", Group: "dashboards"}, nil)
- // Mock HasDirPathFailedDeletion check for cleanup
- progress.On("HasDirPathFailedDeletion", "dashboards/").Return(false)
// if the folder is not found in git, there should be a call to remove the folder from grafana
repo.MockReader.On("Read", mock.Anything, "dashboards/", "").
Return((*repository.FileInfo)(nil), repository.ErrFileNotFound)
@@ -489,10 +453,6 @@ func TestIncrementalSync_CleanupOrphanedFolders(t *testing.T) {
progress.On("SetMessage", mock.Anything, "versioned changes replicated").Return()
repoResources.On("RemoveResourceFromFile", mock.Anything, "dashboards/old.json", "old-ref").
Return("old-dashboard", "folder-uid", schema.GroupVersionKind{Kind: "Dashboard", Group: "dashboards"}, nil)
- // Mock HasDirPathFailedDeletion check for cleanup
- progress.On("HasDirPathFailedDeletion", "dashboards/").Return(false)
// if the folder still exists in git, there should not be a call to delete it from grafana
repo.MockReader.On("Read", mock.Anything, "dashboards/", "").
Return(&repository.FileInfo{}, nil)
@@ -525,13 +485,6 @@ func TestIncrementalSync_CleanupOrphanedFolders(t *testing.T) {
repoResources.On("RemoveResourceFromFile", mock.Anything, "alerts/old-alert.yaml", "old-ref").
Return("old-alert", "folder-uid-2", schema.GroupVersionKind{Kind: "Alert", Group: "alerts"}, nil)
progress.On("Record", mock.Anything, mock.Anything).Return()
progress.On("TooManyErrors").Return(nil)
// Mock HasDirPathFailedDeletion checks for cleanup
progress.On("HasDirPathFailedDeletion", "dashboards/").Return(false)
progress.On("HasDirPathFailedDeletion", "alerts/").Return(false)
// both not found in git, both should be deleted
repo.MockReader.On("Read", mock.Anything, "dashboards/", "").
Return((*repository.FileInfo)(nil), repository.ErrFileNotFound)
@@ -539,6 +492,9 @@ func TestIncrementalSync_CleanupOrphanedFolders(t *testing.T) {
Return((*repository.FileInfo)(nil), repository.ErrFileNotFound)
repoResources.On("RemoveFolder", mock.Anything, "folder-uid-1").Return(nil)
repoResources.On("RemoveFolder", mock.Anything, "folder-uid-2").Return(nil)
progress.On("Record", mock.Anything, mock.Anything).Return()
progress.On("TooManyErrors").Return(nil)
},
},
}

View File

@@ -20,21 +20,6 @@ import (
const MaxNumberOfFolders = 10000
- // PathCreationError represents an error that occurred while creating a folder path.
- // It contains the path that failed and the underlying error.
- type PathCreationError struct {
- Path string
- Err error
- }
- func (e *PathCreationError) Unwrap() error {
- return e.Err
- }
- func (e *PathCreationError) Error() string {
- return fmt.Sprintf("failed to create path %s: %v", e.Path, e.Err)
- }
type FolderManager struct {
repo repository.ReaderWriter
tree FolderTree
@@ -88,11 +73,7 @@ func (fm *FolderManager) EnsureFolderPathExist(ctx context.Context, filePath str
}
if err := fm.EnsureFolderExists(ctx, f, parent); err != nil {
- // Wrap in PathCreationError to indicate which path failed
- return &PathCreationError{
- Path: f.Path,
- Err: fmt.Errorf("ensure folder exists: %w", err),
- }
+ return fmt.Errorf("ensure folder exists: %w", err)
}
fm.tree.Add(f, parent)
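For context on what is being removed: the deleted incremental-sync tests earlier in this compare relied on Record() finding this typed error anywhere in a wrap chain via errors.As, as their doc comment states. A standalone sketch of that detection, reproducing the removed type:

package main

import (
	"errors"
	"fmt"
)

// PathCreationError mirrors the type deleted in the hunk above.
type PathCreationError struct {
	Path string
	Err  error
}

func (e *PathCreationError) Error() string {
	return fmt.Sprintf("failed to create path %s: %v", e.Path, e.Err)
}

func (e *PathCreationError) Unwrap() error { return e.Err }

func main() {
	// The error may be wrapped several times between the folder manager and Record().
	err := fmt.Errorf("write resource: %w",
		&PathCreationError{Path: "folder1/", Err: errors.New("permission denied")})

	// errors.As walks the wrap chain, so no manual tracking calls were needed.
	var pce *PathCreationError
	if errors.As(err, &pce) {
		fmt.Println("track failed creation for", pce.Path) // folder1/
	}
}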

View File

@@ -9,6 +9,11 @@ import (
"k8s.io/apimachinery/pkg/util/validation/field"
)
+ const (
+ // This constant can be used as a key in resource tags
+ GrafanaSecretsManagerName = "grafana-secrets-manager"
+ )
var (
// The name used to refer to the system keeper
SystemKeeperName = "system"
@@ -102,8 +107,8 @@ func (s ExternalID) String() string {
// Keeper is the interface for secret keepers.
type Keeper interface {
Store(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64, exposedValueOrRef string) (ExternalID, error)
Update(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64, exposedValueOrRef string) error
Expose(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64) (secretv1beta1.ExposedSecureValue, error)
+ RetrieveReference(ctx context.Context, cfg secretv1beta1.KeeperConfig, ref string) (secretv1beta1.ExposedSecureValue, error)
Delete(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64) error
}
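A reference-capable keeper resolves RetrieveReference against the external store instead of decrypting a Grafana-held value. A hypothetical sketch of the contract only — refKeeper and its map are invented for illustration, and package/imports follow the surrounding file; the real AWS keeper added in this change will differ:

// refKeeper is an illustrative 3rd party keeper backed by a plain map.
type refKeeper struct {
	external map[string]string // ref -> secret value held by the external store
}

func (k *refKeeper) RetrieveReference(ctx context.Context, cfg secretv1beta1.KeeperConfig, ref string) (secretv1beta1.ExposedSecureValue, error) {
	v, ok := k.external[ref]
	if !ok {
		return "", fmt.Errorf("reference %q not found in external store", ref)
	}
	return secretv1beta1.NewExposedSecureValue(v), nil
}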

View File

@@ -21,8 +21,10 @@ type DecryptSecureValue struct {
}
var (
- ErrSecureValueNotFound = errors.New("secure value not found")
- ErrSecureValueAlreadyExists = errors.New("secure value already exists")
+ ErrSecureValueNotFound = errors.New("secure value not found")
+ ErrSecureValueAlreadyExists = errors.New("secure value already exists")
+ ErrReferenceWithSystemKeeper = errors.New("tried to create secure value using reference with system keeper, references can only be used with 3rd party keepers")
+ ErrSecureValueMissingSecretAndRef = errors.New("secure value spec has neither a secret nor a reference")
)
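Taken together, the two new errors imply a validation rule: a secure value needs exactly one source, and references are only legal with 3rd party keepers. A hypothetical sketch of that check — validateSecureValue and its parameters are illustrative, not the actual validator added in this change:

// validateSecureValue enforces the invariants behind the two errors above.
func validateSecureValue(spec secretv1beta1.SecureValueSpec, keeperName string) error {
	switch {
	case spec.Value == nil && spec.Ref == nil:
		// Neither an inline secret nor an external reference was provided.
		return ErrSecureValueMissingSecretAndRef
	case spec.Ref != nil && keeperName == SystemKeeperName:
		// References require a 3rd party keeper; the system keeper stores values itself.
		return ErrReferenceWithSystemKeeper
	default:
		return nil
	}
}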
type ReadOpts struct {

View File

@@ -103,9 +103,12 @@ func (w *Worker) Cleanup(ctx context.Context, sv *secretv1beta1.SecureValue) err
return fmt.Errorf("getting keeper for config: namespace=%+v keeperName=%+v %w", sv.Namespace, sv.Status.Keeper, err)
}
- // Keeper deletion is idempotent
- if err := keeper.Delete(ctx, keeperCfg, xkube.Namespace(sv.Namespace), sv.Name, sv.Status.Version); err != nil {
- return fmt.Errorf("deleting secure value from keeper: %w", err)
+ // If the secure value doesn't use a reference, delete the secret
+ if sv.Spec.Ref == nil {
+ // Keeper deletion is idempotent
+ if err := keeper.Delete(ctx, keeperCfg, xkube.Namespace(sv.Namespace), sv.Name, sv.Status.Version); err != nil {
+ return fmt.Errorf("deleting secure value from keeper: %w", err)
+ }
}
// Metadata deletion is not idempotent but not found errors are ignored

View File

@@ -1,7 +1,6 @@
package garbagecollectionworker_test
import (
"slices"
"testing"
"time"
@@ -97,27 +96,33 @@ func TestBasic(t *testing.T) {
require.NoError(t, sut.GarbageCollectionWorker.Cleanup(t.Context(), sv))
require.NoError(t, sut.GarbageCollectionWorker.Cleanup(t.Context(), sv))
})
}
- var (
- decryptersGen = rapid.SampledFrom([]string{"svc1", "svc2", "svc3", "svc4", "svc5"})
- nameGen = rapid.SampledFrom([]string{"n1", "n2", "n3", "n4", "n5"})
- namespaceGen = rapid.SampledFrom([]string{"ns1", "ns2", "ns3", "ns4", "ns5"})
- anySecureValueGen = rapid.Custom(func(t *rapid.T) *secretv1beta1.SecureValue {
- return &secretv1beta1.SecureValue{
+ t.Run("cleaning up secure values that use references", func(t *testing.T) {
+ sut := testutils.Setup(t)
+ keeper, err := sut.CreateAWSKeeper(t.Context())
+ require.NoError(t, err)
+ require.NoError(t, sut.KeeperMetadataStorage.SetAsActive(t.Context(), xkube.Namespace(keeper.Namespace), keeper.Name))
+ sv, err := sut.CreateSv(t.Context(), testutils.CreateSvWithSv(&secretv1beta1.SecureValue{
ObjectMeta: metav1.ObjectMeta{
Name: nameGen.Draw(t, "name"),
Namespace: namespaceGen.Draw(t, "ns"),
Namespace: keeper.Namespace,
Name: "sv1",
},
Spec: secretv1beta1.SecureValueSpec{
- Description: rapid.SampledFrom([]string{"d1", "d2", "d3", "d4", "d5"}).Draw(t, "description"),
- Value: ptr.To(secretv1beta1.NewExposedSecureValue(rapid.SampledFrom([]string{"v1", "v2", "v3", "v4", "v5"}).Draw(t, "value"))),
- Decrypters: rapid.SliceOfDistinct(decryptersGen, func(v string) string { return v }).Draw(t, "decrypters"),
+ Description: "desc1",
+ Ref: ptr.To("ref1"),
+ Decrypters: []string{"decrypter1"},
},
- Status: secretv1beta1.SecureValueStatus{},
- }
+ }))
+ require.NoError(t, err)
+ _, err = sut.DeleteSv(t.Context(), sv.Namespace, sv.Name)
+ require.NoError(t, err)
+ require.NoError(t, sut.GarbageCollectionWorker.Cleanup(t.Context(), sv))
})
- )
}
func TestProperty(t *testing.T) {
t.Parallel()
@@ -126,26 +131,59 @@ func TestProperty(t *testing.T) {
rapid.Check(t, func(t *rapid.T) {
sut := testutils.Setup(tt)
- model := newModel()
+ model := testutils.NewModelGsm(nil)
t.Repeat(map[string]func(*rapid.T){
"create": func(t *rapid.T) {
sv := anySecureValueGen.Draw(t, "sv")
var sv *secretv1beta1.SecureValue
if rapid.Bool().Draw(t, "withRef") {
sv = testutils.AnySecureValueWithRefGen.Draw(t, "sv")
} else {
sv = testutils.AnySecureValueGen.Draw(t, "sv")
}
svCopy := sv.DeepCopy()
createdSv, err := sut.CreateSv(t.Context(), testutils.CreateSvWithSv(sv))
- svCopy.UID = createdSv.UID
- modelErr := model.create(sut.Clock.Now(), svCopy)
+ if err == nil {
+ svCopy.UID = createdSv.UID
+ }
+ _, modelErr := model.Create(sut.Clock.Now(), svCopy)
require.ErrorIs(t, err, modelErr)
},
"createKeeper": func(t *rapid.T) {
input := testutils.AnyKeeperGen.Draw(t, "keeper")
modelKeeper, modelErr := model.CreateKeeper(input)
keeper, err := sut.KeeperMetadataStorage.Create(t.Context(), input, "actor-uid")
if err != nil || modelErr != nil {
require.ErrorIs(t, err, modelErr)
return
}
require.Equal(t, modelKeeper.Name, keeper.Name)
},
"setKeeperAsActive": func(t *rapid.T) {
namespace := testutils.NamespaceGen.Draw(t, "namespace")
var keeper string
if rapid.Bool().Draw(t, "systemKeeper") {
keeper = contracts.SystemKeeperName
} else {
keeper = testutils.KeeperNameGen.Draw(t, "keeper")
}
modelErr := model.SetKeeperAsActive(namespace, keeper)
err := sut.KeeperMetadataStorage.SetAsActive(t.Context(), xkube.Namespace(namespace), keeper)
if err != nil || modelErr != nil {
require.ErrorIs(t, err, modelErr)
return
}
},
"delete": func(t *rapid.T) {
- if len(model.items) == 0 {
+ if len(model.SecureValues) == 0 {
return
}
- i := rapid.IntRange(0, len(model.items)-1).Draw(t, "index")
- sv := model.items[i]
- modelErr := model.delete(sv.Namespace, sv.Name)
+ i := rapid.IntRange(0, len(model.SecureValues)-1).Draw(t, "index")
+ sv := model.SecureValues[i]
+ _, modelErr := model.Delete(sv.Namespace, sv.Name)
_, err := sut.DeleteSv(t.Context(), sv.Namespace, sv.Name)
require.ErrorIs(t, err, modelErr)
},
@@ -153,7 +191,7 @@ func TestProperty(t *testing.T) {
// Taken from secureValueMetadataStorage.acquireLeases
minAge := 300 * time.Second
maxBatchSize := sut.GarbageCollectionWorker.Cfg.SecretsManagement.GCWorkerMaxBatchSize
- modelDeleted, modelErr := model.cleanupInactiveSecureValues(sut.Clock.Now(), minAge, maxBatchSize)
+ modelDeleted, modelErr := model.CleanupInactiveSecureValues(sut.Clock.Now(), minAge, maxBatchSize)
deleted, err := sut.GarbageCollectionWorker.CleanupInactiveSecureValues(t.Context())
require.ErrorIs(t, err, modelErr)
@@ -174,77 +212,3 @@ func TestProperty(t *testing.T) {
})
})
}
type model struct {
items []*modelSecureValue
}
type modelSecureValue struct {
*secretv1beta1.SecureValue
active bool
created time.Time
}
func newModel() *model {
return &model{
items: make([]*modelSecureValue, 0),
}
}
func (m *model) create(now time.Time, sv *secretv1beta1.SecureValue) error {
created := now
for _, item := range m.items {
if item.active && item.Namespace == sv.Namespace && item.Name == sv.Name {
item.active = false
created = item.created
break
}
}
m.items = append(m.items, &modelSecureValue{SecureValue: sv, active: true, created: created})
return nil
}
func (m *model) delete(ns string, name string) error {
for _, sv := range m.items {
if sv.active && sv.Namespace == ns && sv.Name == name {
sv.active = false
return nil
}
}
return contracts.ErrSecureValueNotFound
}
func (m *model) cleanupInactiveSecureValues(now time.Time, minAge time.Duration, maxBatchSize uint16) ([]*modelSecureValue, error) {
// Using a slice to allow duplicates
toDelete := make([]*modelSecureValue, 0)
// The implementation's query sorts by creation time in ascending order
slices.SortFunc(m.items, func(a, b *modelSecureValue) int {
if a.created.Before(b.created) {
return -1
} else if a.created.After(b.created) {
return 1
}
return 0
})
for _, sv := range m.items {
if len(toDelete) >= int(maxBatchSize) {
break
}
if !sv.active && now.Sub(sv.created) > minAge {
toDelete = append(toDelete, sv)
}
}
// PERF: The slices are always small
m.items = slices.DeleteFunc(m.items, func(v1 *modelSecureValue) bool {
return slices.ContainsFunc(toDelete, func(v2 *modelSecureValue) bool {
return v2.UID == v1.UID
})
})
return toDelete, nil
}
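This in-file model is superseded by the shared testutils.ModelGsm shown later in this compare, but the harness keeps the same shape: rapid generates random command sequences and the model serves as an oracle for the real implementation. For readers new to the pattern, a standalone sketch with a hypothetical counter standing in for the secrets manager (not repository code):

package sketch

import (
	"testing"

	"pgregory.net/rapid"
)

// counter is a stand-in system under test; the plain int is its model.
type counter struct{ n int }

func TestCounterProperty(t *testing.T) {
	rapid.Check(t, func(t *rapid.T) {
		sut := &counter{}
		model := 0
		t.Repeat(map[string]func(*rapid.T){
			"add": func(t *rapid.T) {
				d := rapid.IntRange(1, 10).Draw(t, "d")
				sut.n += d
				model += d
			},
			// The empty-string action is rapid's invariant hook,
			// run after every other action.
			"": func(t *rapid.T) {
				if sut.n != model {
					t.Fatalf("sut=%d, model=%d", sut.n, model)
				}
			},
		})
	})
}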

@@ -107,6 +107,10 @@ func (s *SQLKeeper) Expose(ctx context.Context, cfg secretv1beta1.KeeperConfig,
return exposedValue, nil
}
func (s *SQLKeeper) RetrieveReference(ctx context.Context, cfg secretv1beta1.KeeperConfig, ref string) (secretv1beta1.ExposedSecureValue, error) {
return "", fmt.Errorf("reference is not implemented by the SQLKeeper")
}
func (s *SQLKeeper) Delete(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64) error {
ctx, span := s.tracer.Start(ctx, "SQLKeeper.Delete", trace.WithAttributes(
attribute.String("namespace", namespace.String()),
@@ -125,27 +129,3 @@ func (s *SQLKeeper) Delete(ctx context.Context, cfg secretv1beta1.KeeperConfig,
return nil
}
func (s *SQLKeeper) Update(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64, exposedValueOrRef string) error {
ctx, span := s.tracer.Start(ctx, "SQLKeeper.Update", trace.WithAttributes(
attribute.String("namespace", namespace.String()),
attribute.String("name", name),
attribute.Int64("version", version),
))
defer span.End()
start := time.Now()
encryptedData, err := s.encryptionManager.Encrypt(ctx, namespace, []byte(exposedValueOrRef))
if err != nil {
return fmt.Errorf("unable to encrypt value: %w", err)
}
err = s.store.Update(ctx, namespace, name, version, encryptedData)
if err != nil {
return fmt.Errorf("failed to update encrypted value: %w", err)
}
s.metrics.UpdateDuration.WithLabelValues(string(cfg.Type())).Observe(time.Since(start).Seconds())
return nil
}
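Taken together, the methods in this file trace the keeper contract that this change extends. The following is a reading aid only, reconstructed from the signatures visible in this compare (imports elided); the authoritative interface lives in the contracts package:

// Keeper, as exercised by this diff. SQLKeeper holds encrypted payloads
// itself, so RetrieveReference is the one method it rejects: references
// only make sense for 3rd party stores.
type Keeper interface {
	Store(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64, exposedValueOrRef string) (contracts.ExternalID, error)
	Update(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64, exposedValueOrRef string) error
	Expose(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64) (secretv1beta1.ExposedSecureValue, error)
	// New in this change: resolve a reference into a secret held by a
	// 3rd party store.
	RetrieveReference(ctx context.Context, cfg secretv1beta1.KeeperConfig, ref string) (secretv1beta1.ExposedSecureValue, error)
	Delete(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64) error
}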

@@ -26,7 +26,7 @@ func Test_SQLKeeperSetup(t *testing.T) {
plaintext1 := "very secret string in namespace 1"
plaintext2 := "very secret string in namespace 2"
keeperCfg := &secretv1beta1.SystemKeeperConfig{}
keeperCfg := secretv1beta1.NewNamedKeeperConfig("k1", &secretv1beta1.SystemKeeperConfig{})
t.Run("storing an encrypted value returns no error", func(t *testing.T) {
sut := testutils.Setup(t)
@@ -123,31 +123,6 @@ func Test_SQLKeeperSetup(t *testing.T) {
require.NoError(t, err)
})
t.Run("updating an existent encrypted value returns no error", func(t *testing.T) {
sut := testutils.Setup(t)
_, err := sut.SQLKeeper.Store(t.Context(), keeperCfg, namespace1, name1, version1, plaintext1)
require.NoError(t, err)
err = sut.SQLKeeper.Update(t.Context(), keeperCfg, namespace1, name1, version1, plaintext2)
require.NoError(t, err)
exposedVal, err := sut.SQLKeeper.Expose(t.Context(), keeperCfg, namespace1, name1, version1)
require.NoError(t, err)
assert.NotNil(t, exposedVal)
assert.Equal(t, plaintext2, exposedVal.DangerouslyExposeAndConsumeValue())
})
t.Run("updating a non existent encrypted value returns error", func(t *testing.T) {
sut := testutils.Setup(t)
_, err := sut.SQLKeeper.Store(t.Context(), keeperCfg, namespace1, name1, version1, plaintext1)
require.NoError(t, err)
err = sut.SQLKeeper.Update(t.Context(), nil, namespace1, "non_existing_name", version1, plaintext2)
require.Error(t, err)
})
t.Run("data key migration only runs if both secrets db migrations are enabled", func(t *testing.T) {
t.Parallel()

@@ -141,7 +141,7 @@ func (s *SecureValueService) Update(ctx context.Context, newSecureValue *secretv
return nil, false, fmt.Errorf("fetching keeper config: namespace=%+v keeper: %q %w", newSecureValue.Namespace, currentVersion.Status.Keeper, err)
}
if newSecureValue.Spec.Value == nil {
if newSecureValue.Spec.Value == nil && newSecureValue.Spec.Ref == nil {
keeper, err := s.keeperService.KeeperForConfig(keeperCfg)
if err != nil {
return nil, false, fmt.Errorf("getting keeper for config: namespace=%+v keeperName=%+v %w", newSecureValue.Namespace, newSecureValue.Status.Keeper, err)
@@ -150,7 +150,7 @@ func (s *SecureValueService) Update(ctx context.Context, newSecureValue *secretv
secret, err := keeper.Expose(ctx, keeperCfg, xkube.Namespace(newSecureValue.Namespace), newSecureValue.Name, currentVersion.Status.Version)
if err != nil {
return nil, false, fmt.Errorf("reading secret value from keeper: %w", err)
return nil, false, fmt.Errorf("reading secret value from keeper: %w %w", contracts.ErrSecureValueMissingSecretAndRef, err)
}
newSecureValue.Spec.Value = &secret
@@ -174,6 +174,10 @@ func (s *SecureValueService) createNewVersion(ctx context.Context, keeperName st
return nil, contracts.NewErrValidateSecureValue(errorList)
}
if sv.Spec.Ref != nil && keeperCfg.Type() == secretv1beta1.SystemKeeperType {
return nil, contracts.ErrReferenceWithSystemKeeper
}
createdSv, err := s.secureValueMetadataStorage.Create(ctx, keeperName, sv, actorUID)
if err != nil {
return nil, fmt.Errorf("creating secure value: %w", err)
@@ -189,18 +193,28 @@ func (s *SecureValueService) createNewVersion(ctx context.Context, keeperName st
return nil, fmt.Errorf("getting keeper for config: namespace=%+v keeperName=%+v %w", createdSv.Namespace, keeperName, err)
}
logging.FromContext(ctx).Debug("retrieved keeper", "namespace", createdSv.Namespace, "type", keeperCfg.Type())
// TODO: can we stop using external id?
// TODO: Store uses only the namespace and returns an id. It could be a kv instead.
// TODO: check that the encrypted store works with multiple versions
externalID, err := keeper.Store(ctx, keeperCfg, xkube.Namespace(createdSv.Namespace), createdSv.Name, createdSv.Status.Version, sv.Spec.Value.DangerouslyExposeAndConsumeValue())
if err != nil {
return nil, fmt.Errorf("storing secure value in keeper: %w", err)
}
createdSv.Status.ExternalID = string(externalID)
switch {
case sv.Spec.Value != nil:
externalID, err := keeper.Store(ctx, keeperCfg, xkube.Namespace(createdSv.Namespace), createdSv.Name, createdSv.Status.Version, sv.Spec.Value.DangerouslyExposeAndConsumeValue())
if err != nil {
return nil, fmt.Errorf("storing secure value in keeper: %w", err)
}
createdSv.Status.ExternalID = string(externalID)
if err := s.secureValueMetadataStorage.SetExternalID(ctx, xkube.Namespace(createdSv.Namespace), createdSv.Name, createdSv.Status.Version, externalID); err != nil {
return nil, fmt.Errorf("setting secure value external id: %w", err)
if err := s.secureValueMetadataStorage.SetExternalID(ctx, xkube.Namespace(createdSv.Namespace), createdSv.Name, createdSv.Status.Version, externalID); err != nil {
return nil, fmt.Errorf("setting secure value external id: %w", err)
}
case sv.Spec.Ref != nil:
// No-op, there's nothing to store in the keeper since the
// secret is already stored in the 3rd party secret store
// and it's being referenced.
default:
return nil, fmt.Errorf("secure value doesn't specify either a secret value or a reference")
}
if err := s.secureValueMetadataStorage.SetVersionToActive(ctx, xkube.Namespace(createdSv.Namespace), createdSv.Name, createdSv.Status.Version); err != nil {
@@ -366,3 +380,20 @@ func (s *SecureValueService) Delete(ctx context.Context, namespace xkube.Namespa
return sv, nil
}
func (s *SecureValueService) SetKeeperAsActive(ctx context.Context, namespace xkube.Namespace, name string) error {
// The system keeper is not in the database, so skip checking it exists.
// TODO: should the system keeper be in the database?
if name != contracts.SystemKeeperName {
// Check that the keeper exists. No need to worry about time-of-check to
// time-of-use: activating a just-deleted keeper simply leaves every keeper
// in the namespace inactive, so reads fall back to the system keeper.
if _, err := s.keeperMetadataStorage.Read(ctx, namespace, name, contracts.ReadOpts{}); err != nil {
return fmt.Errorf("reading keeper before setting as active: %w", err)
}
}
if err := s.keeperMetadataStorage.SetAsActive(ctx, namespace, name); err != nil {
return fmt.Errorf("calling keeper metadata storage to set keeper as active: %w", err)
}
return nil
}
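After this change a secure value reaches the service in one of two shapes: an inline value, or a reference into a 3rd party store. A short sketch of both payloads as the tests below build them (placeholder values; imports and error handling elided):

// Inline secret: written to the active keeper, external ID recorded.
inline := &secretv1beta1.SecureValue{
	ObjectMeta: metav1.ObjectMeta{Name: "sv1", Namespace: "ns1"},
	Spec: secretv1beta1.SecureValueSpec{
		Description: "inline secret",
		Value:       ptr.To(secretv1beta1.NewExposedSecureValue("s3cr3t")),
		Decrypters:  []string{"decrypter1"},
	},
}

// Reference: nothing is written to the keeper; the 3rd party store already
// holds the secret. Requires a non-system active keeper, otherwise creation
// fails with ErrReferenceWithSystemKeeper.
ref := "path-to-secret"
byRef := &secretv1beta1.SecureValue{
	ObjectMeta: metav1.ObjectMeta{Name: "sv2", Namespace: "ns1"},
	Spec: secretv1beta1.SecureValueSpec{
		Description: "referenced secret",
		Ref:         &ref,
		Decrypters:  []string{"decrypter1"},
	},
}
_, _ = inline, byRef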

@@ -8,6 +8,7 @@ import (
"github.com/grafana/grafana/pkg/registry/apis/secret/testutils"
"github.com/grafana/grafana/pkg/registry/apis/secret/xkube"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/ptr"
)
@@ -93,4 +94,149 @@ func TestCrud(t *testing.T) {
_, err = sut.SecureValueMetadataStorage.Read(t.Context(), xkube.Namespace(sv1.Namespace), sv1.Name, contracts.ReadOpts{})
require.ErrorIs(t, err, contracts.ErrSecureValueNotFound)
})
t.Run("secret can be referenced only when the active keeper is a 3rd party keeper", func(t *testing.T) {
t.Parallel()
sut := testutils.Setup(t)
ref := "path-to-secret"
sv := &secretv1beta1.SecureValue{
ObjectMeta: metav1.ObjectMeta{
Name: "sv1",
Namespace: "ns1",
},
Spec: secretv1beta1.SecureValueSpec{
Description: "desc1",
Ref: &ref,
Decrypters: []string{"decrypter1"},
},
Status: secretv1beta1.SecureValueStatus{},
}
// Creating a secure value using ref with the system keeper
createdSv, err := sut.CreateSv(t.Context(), testutils.CreateSvWithSv(sv))
require.NotNil(t, err)
require.Nil(t, createdSv)
require.Contains(t, err.Error(), "tried to create secure value using reference with system keeper, references can only be used with 3rd party keepers")
// Create a 3rd party keeper
keeper := &secretv1beta1.Keeper{
ObjectMeta: metav1.ObjectMeta{
Name: "k1",
Namespace: "ns1",
},
Spec: secretv1beta1.KeeperSpec{
Description: "desc",
Aws: &secretv1beta1.KeeperAWSConfig{
Region: "us-east-1",
AssumeRole: &secretv1beta1.KeeperAWSAssumeRole{
AssumeRoleArn: "arn",
ExternalID: "id",
},
},
},
}
// Create a 3rd party keeper
_, err = sut.KeeperMetadataStorage.Create(t.Context(), keeper, "actor-uid")
require.NoError(t, err)
// Set the new keeper as active
require.NoError(t, sut.KeeperMetadataStorage.SetAsActive(t.Context(), xkube.Namespace(keeper.Namespace), keeper.Name))
// Create a secure value using a ref
createdSv, err = sut.CreateSv(t.Context(), testutils.CreateSvWithSv(sv))
require.NoError(t, err)
require.Equal(t, keeper.Name, createdSv.Status.Keeper)
})
t.Run("creating secure value with reference", func(t *testing.T) {
t.Parallel()
sut := testutils.Setup(t)
// Create a keeper because references cannot be used with the system keeper
keeper, err := sut.KeeperMetadataStorage.Create(t.Context(), &secretv1beta1.Keeper{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns",
Name: "k1",
},
Spec: secretv1beta1.KeeperSpec{
Aws: &secretv1beta1.KeeperAWSConfig{},
},
}, "actor-uid")
require.NoError(t, err)
require.NoError(t, sut.KeeperMetadataStorage.SetAsActive(t.Context(), xkube.Namespace(keeper.Namespace), keeper.Name))
sv, err := sut.CreateSv(t.Context())
require.NoError(t, err)
require.NotNil(t, sv)
})
}
func Test_SetAsActive(t *testing.T) {
t.Parallel()
t.Run("setting the system keeper as the active keeper", func(t *testing.T) {
t.Parallel()
sut := testutils.Setup(t)
namespace := "ns"
// Create a new keeper
keeper, err := sut.KeeperMetadataStorage.Create(t.Context(), &secretv1beta1.Keeper{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns",
Name: "k1",
},
Spec: secretv1beta1.KeeperSpec{
Description: "description",
Aws: &secretv1beta1.KeeperAWSConfig{},
},
}, "actor-uid")
require.NoError(t, err)
// Set the new keeper as active
require.NoError(t, sut.KeeperMetadataStorage.SetAsActive(t.Context(), xkube.Namespace(keeper.Namespace), keeper.Name))
keeperName, _, err := sut.KeeperMetadataStorage.GetActiveKeeperConfig(t.Context(), namespace)
require.NoError(t, err)
require.Equal(t, keeper.Name, keeperName)
// Set the system keeper as active
require.NoError(t, sut.KeeperMetadataStorage.SetAsActive(t.Context(), xkube.Namespace(namespace), contracts.SystemKeeperName))
keeperName, _, err = sut.KeeperMetadataStorage.GetActiveKeeperConfig(t.Context(), namespace)
require.NoError(t, err)
require.Equal(t, contracts.SystemKeeperName, keeperName)
})
t.Run("each namespace can have one active keeper", func(t *testing.T) {
t.Parallel()
sut := testutils.Setup(t)
k1, err := sut.CreateKeeper(t.Context(), func(ckc *testutils.CreateKeeperConfig) {
ckc.Keeper.Namespace = "ns1"
ckc.Keeper.Name = "k1"
})
require.NoError(t, err)
k2, err := sut.CreateKeeper(t.Context(), func(ckc *testutils.CreateKeeperConfig) {
ckc.Keeper.Namespace = "ns2"
ckc.Keeper.Name = "k2"
})
require.NoError(t, err)
require.NoError(t, sut.KeeperMetadataStorage.SetAsActive(t.Context(), xkube.Namespace(k1.Namespace), k1.Name))
require.NoError(t, sut.KeeperMetadataStorage.SetAsActive(t.Context(), xkube.Namespace(k2.Namespace), k2.Name))
keeperName, _, err := sut.KeeperMetadataStorage.GetActiveKeeperConfig(t.Context(), k1.Namespace)
require.NoError(t, err)
require.Equal(t, k1.Name, keeperName)
keeperName, _, err = sut.KeeperMetadataStorage.GetActiveKeeperConfig(t.Context(), k2.Namespace)
require.NoError(t, err)
require.Equal(t, k2.Name, keeperName)
})
}
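The assertions above pin down a small invariant: each namespace has at most one active keeper, and the system keeper is the implicit default when none is active. A toy restatement of that rule (hypothetical helper, not repository code):

// activeKeepers maps namespace -> active keeper name. Activating a keeper
// replaces the namespace's previous active keeper; lookups fall back to
// the system keeper when nothing is active.
type activeKeepers map[string]string

func (a activeKeepers) SetAsActive(namespace, keeper string) {
	a[namespace] = keeper
}

func (a activeKeepers) Active(namespace string) string {
	if k, ok := a[namespace]; ok {
		return k
	}
	return "system" // contracts.SystemKeeperName in the real code
}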

@@ -0,0 +1,96 @@
package testutils
import (
"fmt"
secretv1beta1 "github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/ptr"
"pgregory.net/rapid"
)
var (
DecryptersGen = rapid.SampledFrom([]string{"svc1", "svc2", "svc3", "svc4", "svc5"})
SecureValueNameGen = rapid.SampledFrom([]string{"n1", "n2", "n3", "n4", "n5"})
KeeperNameGen = rapid.SampledFrom([]string{"k1", "k2", "k3", "k4", "k5"})
NamespaceGen = rapid.SampledFrom([]string{"ns1", "ns2", "ns3", "ns4", "ns5"})
SecretsToRefGen = rapid.SampledFrom([]string{"ref1", "ref2", "ref3", "ref4", "ref5"})
// Generator for secure values that specify a secret value
AnySecureValueGen = rapid.Custom(func(t *rapid.T) *secretv1beta1.SecureValue {
return &secretv1beta1.SecureValue{
ObjectMeta: metav1.ObjectMeta{
Name: SecureValueNameGen.Draw(t, "name"),
Namespace: NamespaceGen.Draw(t, "ns"),
},
Spec: secretv1beta1.SecureValueSpec{
Description: rapid.SampledFrom([]string{"d1", "d2", "d3", "d4", "d5"}).Draw(t, "description"),
Value: ptr.To(secretv1beta1.NewExposedSecureValue(rapid.SampledFrom([]string{"v1", "v2", "v3", "v4", "v5"}).Draw(t, "value"))),
Decrypters: rapid.SliceOfDistinct(DecryptersGen, func(v string) string { return v }).Draw(t, "decrypters"),
},
Status: secretv1beta1.SecureValueStatus{},
}
})
// Generator for secure values that reference values from 3rd party stores
AnySecureValueWithRefGen = rapid.Custom(func(t *rapid.T) *secretv1beta1.SecureValue {
return &secretv1beta1.SecureValue{
ObjectMeta: metav1.ObjectMeta{
Name: SecureValueNameGen.Draw(t, "name"),
Namespace: NamespaceGen.Draw(t, "ns"),
},
Spec: secretv1beta1.SecureValueSpec{
Description: rapid.SampledFrom([]string{"d1", "d2", "d3", "d4", "d5"}).Draw(t, "description"),
Ref: ptr.To(SecretsToRefGen.Draw(t, "ref")),
Decrypters: rapid.SliceOfDistinct(DecryptersGen, func(v string) string { return v }).Draw(t, "decrypters"),
},
Status: secretv1beta1.SecureValueStatus{},
}
})
UpdateSecureValueGen = rapid.Custom(func(t *rapid.T) *secretv1beta1.SecureValue {
sv := AnySecureValueGen.Draw(t, "sv")
// Maybe update the secret value, maybe not
if !rapid.Bool().Draw(t, "should_update_value") {
sv.Spec.Value = nil
}
return sv
})
DecryptGen = rapid.Custom(func(t *rapid.T) DecryptInput {
return DecryptInput{
Namespace: NamespaceGen.Draw(t, "ns"),
Name: SecureValueNameGen.Draw(t, "name"),
Decrypter: DecryptersGen.Draw(t, "decrypter"),
}
})
AnyKeeperGen = rapid.Custom(func(t *rapid.T) *secretv1beta1.Keeper {
spec := secretv1beta1.KeeperSpec{
Description: rapid.String().Draw(t, "description"),
}
keeperType := rapid.SampledFrom([]string{"isAwsKeeper", "isAzureKeeper", "isGcpKeeper", "isVaultKeeper"}).Draw(t, "keeperType")
switch keeperType {
case "isAwsKeeper":
spec.Aws = &secretv1beta1.KeeperAWSConfig{}
case "isAzureKeeper":
spec.Azure = &secretv1beta1.KeeperAzureConfig{}
case "isGcpKeeper":
spec.Gcp = &secretv1beta1.KeeperGCPConfig{}
case "isVaultKeeper":
spec.HashiCorpVault = &secretv1beta1.KeeperHashiCorpConfig{}
default:
panic(fmt.Sprintf("unhandled keeper type '%+v', did you forget a switch case?", keeperType))
}
return &secretv1beta1.Keeper{
ObjectMeta: metav1.ObjectMeta{
Name: KeeperNameGen.Draw(t, "name"),
Namespace: NamespaceGen.Draw(t, "ns"),
},
Spec: spec,
}
})
)
type DecryptInput struct {
Namespace string
Name string
Decrypter string
}
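These generators are consumed through rapid's Draw, exactly as in the TestProperty loop earlier in this compare. A minimal sketch (assertions and model wiring elided; assumes the testing and rapid imports of the surrounding package):

func TestGeneratorsSketch(t *testing.T) {
	rapid.Check(t, func(t *rapid.T) {
		sv := AnySecureValueGen.Draw(t, "sv")
		keeper := AnyKeeperGen.Draw(t, "keeper")
		// ... feed sv and keeper to both the SUT and the model ...
		_, _ = sv, keeper
	})
}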

@@ -0,0 +1,321 @@
package testutils
import (
"context"
"fmt"
"slices"
"time"
secretv1beta1 "github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1"
"github.com/grafana/grafana/apps/secret/pkg/decrypt"
"github.com/grafana/grafana/pkg/registry/apis/secret/contracts"
)
type ModelSecureValue struct {
*secretv1beta1.SecureValue
active bool
created time.Time
leaseCreated time.Time
}
type ModelKeeper struct {
namespace string
name string
active bool
keeperType secretv1beta1.KeeperType
}
// A simplified in-memory model of the Grafana secrets manager.
type ModelGsm struct {
SecureValues []*ModelSecureValue
Keepers []*ModelKeeper
modelSecretsManager *ModelAWSSecretsManager
}
func NewModelGsm(modelSecretsManager *ModelAWSSecretsManager) *ModelGsm {
return &ModelGsm{modelSecretsManager: modelSecretsManager}
}
func (m *ModelGsm) getNewVersionNumber(namespace, name string) int64 {
latestVersion := int64(0)
for _, sv := range m.SecureValues {
if sv.Namespace == namespace && sv.Name == name {
latestVersion = max(latestVersion, sv.Status.Version)
}
}
return latestVersion + 1
}
func (m *ModelGsm) SetVersionToActive(namespace, name string, version int64) {
for _, sv := range m.SecureValues {
if sv.Namespace == namespace && sv.Name == name {
sv.active = sv.Status.Version == version
}
}
}
func (m *ModelGsm) SetVersionToInactive(namespace, name string, version int64) {
for _, sv := range m.SecureValues {
if sv.Namespace == namespace && sv.Name == name && sv.Status.Version == version {
sv.active = false
return
}
}
}
func (m *ModelGsm) ReadActiveVersion(namespace, name string) *ModelSecureValue {
for _, sv := range m.SecureValues {
if sv.Namespace == namespace && sv.Name == name && sv.active {
return sv
}
}
return nil
}
func (m *ModelGsm) Create(now time.Time, sv *secretv1beta1.SecureValue) (*secretv1beta1.SecureValue, error) {
keeper := m.getActiveKeeper(sv.Namespace)
if sv.Spec.Ref != nil && keeper.keeperType == secretv1beta1.SystemKeeperType {
return nil, contracts.ErrReferenceWithSystemKeeper
}
sv = sv.DeepCopy()
// Preserve the original creation time if this secure value already exists
created := now
if sv := m.ReadActiveVersion(sv.Namespace, sv.Name); sv != nil {
created = sv.created
}
modelSv := &ModelSecureValue{SecureValue: sv, active: false, created: created}
modelSv.Status.Version = m.getNewVersionNumber(modelSv.Namespace, modelSv.Name)
modelSv.Status.ExternalID = fmt.Sprintf("%d", modelSv.Status.Version)
modelSv.Status.Keeper = keeper.name
m.SecureValues = append(m.SecureValues, modelSv)
m.SetVersionToActive(modelSv.Namespace, modelSv.Name, modelSv.Status.Version)
return modelSv.SecureValue, nil
}
func (m *ModelGsm) getActiveKeeper(namespace string) *ModelKeeper {
for _, k := range m.Keepers {
if k.namespace == namespace && k.active {
return k
}
}
// Default to the system keeper when there are no active keepers in the namespace
return &ModelKeeper{
namespace: namespace,
name: contracts.SystemKeeperName,
active: true,
keeperType: secretv1beta1.SystemKeeperType,
}
}
func (m *ModelGsm) keeperExists(namespace, name string) bool {
return m.findKeeper(namespace, name) != nil
}
func (m *ModelGsm) findKeeper(namespace, name string) *ModelKeeper {
// The system keeper is not in the list of keepers
if name == contracts.SystemKeeperName {
return &ModelKeeper{namespace: namespace, name: contracts.SystemKeeperName, active: true, keeperType: secretv1beta1.SystemKeeperType}
}
for _, k := range m.Keepers {
if k.namespace == namespace && k.name == name {
return k
}
}
return nil
}
func (m *ModelGsm) CreateKeeper(keeper *secretv1beta1.Keeper) (*secretv1beta1.Keeper, error) {
if m.keeperExists(keeper.Namespace, keeper.Name) {
return nil, contracts.ErrKeeperAlreadyExists
}
var keeperType secretv1beta1.KeeperType
switch {
case keeper.Spec.Aws != nil:
keeperType = secretv1beta1.AWSKeeperType
case keeper.Spec.Gcp != nil:
keeperType = secretv1beta1.GCPKeeperType
case keeper.Spec.Azure != nil:
keeperType = secretv1beta1.AzureKeeperType
case keeper.Spec.HashiCorpVault != nil:
keeperType = secretv1beta1.HashiCorpKeeperType
default:
keeperType = secretv1beta1.SystemKeeperType
}
m.Keepers = append(m.Keepers, &ModelKeeper{namespace: keeper.Namespace, name: keeper.Name, keeperType: keeperType})
return keeper.DeepCopy(), nil
}
func (m *ModelGsm) SetKeeperAsActive(namespace, keeperName string) error {
// Activate the named keeper and set every other keeper in the namespace as inactive
for _, k := range m.Keepers {
if k.namespace == namespace {
k.active = k.name == keeperName
}
}
return nil
}
func (m *ModelGsm) Update(now time.Time, newSecureValue *secretv1beta1.SecureValue) (*secretv1beta1.SecureValue, bool, error) {
sv := m.ReadActiveVersion(newSecureValue.Namespace, newSecureValue.Name)
if sv == nil {
return nil, false, contracts.ErrSecureValueNotFound
}
// If the keeper doesn't exist, return an error
if !m.keeperExists(sv.Namespace, sv.Status.Keeper) {
return nil, false, contracts.ErrKeeperNotFound
}
// If the payload contains neither a value nor a reference, carry over the value from the current version
if newSecureValue.Spec.Value == nil && newSecureValue.Spec.Ref == nil {
// Tried to update a secure value without providing a new value or a ref
if sv.Spec.Value == nil {
return nil, false, contracts.ErrSecureValueMissingSecretAndRef
}
newSecureValue.Spec.Value = sv.Spec.Value
}
createdSv, err := m.Create(now, newSecureValue)
return createdSv, true, err
}
func (m *ModelGsm) Delete(namespace, name string) (*secretv1beta1.SecureValue, error) {
modelSv := m.ReadActiveVersion(namespace, name)
if modelSv == nil {
return nil, contracts.ErrSecureValueNotFound
}
m.SetVersionToInactive(namespace, name, modelSv.Status.Version)
return modelSv.SecureValue, nil
}
func (m *ModelGsm) List(namespace string) (*secretv1beta1.SecureValueList, error) {
out := make([]secretv1beta1.SecureValue, 0)
for _, v := range m.SecureValues {
if v.Namespace == namespace && v.active {
out = append(out, *v.SecureValue)
}
}
return &secretv1beta1.SecureValueList{Items: out}, nil
}
func (m *ModelGsm) Decrypt(ctx context.Context, decrypter, namespace, name string) (map[string]decrypt.DecryptResult, error) {
for _, v := range m.SecureValues {
if v.Namespace == namespace &&
v.Name == name &&
v.active {
if slices.ContainsFunc(v.Spec.Decrypters, func(d string) bool { return d == decrypter }) {
switch {
// It's a secure value that specifies the secret
case v.Spec.Value != nil:
return map[string]decrypt.DecryptResult{
name: decrypt.NewDecryptResultValue(v.DeepCopy().Spec.Value),
}, nil
// It's a secure value that references a secret on a 3rd party store
case v.Spec.Ref != nil:
keeper := m.findKeeper(v.Namespace, v.Status.Keeper)
switch keeper.keeperType {
case secretv1beta1.AWSKeeperType:
exposedValue, err := m.modelSecretsManager.RetrieveReference(ctx, nil, *v.Spec.Ref)
if err != nil {
return map[string]decrypt.DecryptResult{
name: decrypt.NewDecryptResultErr(fmt.Errorf("%w: %w", contracts.ErrDecryptFailed, err)),
}, nil
}
return map[string]decrypt.DecryptResult{
name: decrypt.NewDecryptResultValue(&exposedValue),
}, nil
// Other keepers are not implemented, so we fall back to the system keeper
default:
// The system keeper doesn't implement RetrieveReference, so decryption always fails
return map[string]decrypt.DecryptResult{
name: decrypt.NewDecryptResultErr(contracts.ErrDecryptFailed),
}, nil
}
default:
panic("bug: secure value where Spec.Value and Spec.Ref are nil")
}
}
return map[string]decrypt.DecryptResult{
name: decrypt.NewDecryptResultErr(contracts.ErrDecryptNotAuthorized),
}, nil
}
}
return map[string]decrypt.DecryptResult{
name: decrypt.NewDecryptResultErr(contracts.ErrDecryptNotFound),
}, nil
}
func (m *ModelGsm) Read(namespace, name string) (*secretv1beta1.SecureValue, error) {
modelSv := m.ReadActiveVersion(namespace, name)
if modelSv == nil {
return nil, contracts.ErrSecureValueNotFound
}
return modelSv.SecureValue, nil
}
func (m *ModelGsm) LeaseInactiveSecureValues(now time.Time, minAge, leaseTTL time.Duration, maxBatchSize uint16) ([]*ModelSecureValue, error) {
out := make([]*ModelSecureValue, 0)
for _, sv := range m.SecureValues {
if len(out) >= int(maxBatchSize) {
break
}
if !sv.active && now.Sub(sv.created) > minAge && now.Sub(sv.leaseCreated) > leaseTTL {
sv.leaseCreated = now
out = append(out, sv)
}
}
return out, nil
}
func (m *ModelGsm) CleanupInactiveSecureValues(now time.Time, minAge time.Duration, maxBatchSize uint16) ([]*ModelSecureValue, error) {
// Using a slice to allow duplicates
toDelete := make([]*ModelSecureValue, 0)
// The implementation's query sorts by creation time in ascending order
slices.SortFunc(m.SecureValues, func(a, b *ModelSecureValue) int {
if a.created.Before(b.created) {
return -1
} else if a.created.After(b.created) {
return 1
}
return 0
})
for _, sv := range m.SecureValues {
if len(toDelete) >= int(maxBatchSize) {
break
}
if !sv.active && now.Sub(sv.created) > minAge {
toDelete = append(toDelete, sv)
}
}
// PERF: The slices are always small
m.SecureValues = slices.DeleteFunc(m.SecureValues, func(v1 *ModelSecureValue) bool {
return slices.ContainsFunc(toDelete, func(v2 *ModelSecureValue) bool {
return v2.UID == v1.UID
})
})
return toDelete, nil
}
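Read on its own, ModelGsm encodes the versioning rules the property test relies on: Create appends a new version and activates it, Delete deactivates the active version, and CleanupInactiveSecureValues garbage-collects versions that have been inactive longer than minAge. A hedged walk-through (placeholder values; imports and errors elided):

func sketchModelGsm() {
	now := time.Now()
	sv := &secretv1beta1.SecureValue{
		ObjectMeta: metav1.ObjectMeta{Name: "n1", Namespace: "ns1"},
		Spec: secretv1beta1.SecureValueSpec{
			Description: "d1",
			Value:       ptr.To(secretv1beta1.NewExposedSecureValue("v1")),
			Decrypters:  []string{"svc1"},
		},
	}

	m := NewModelGsm(nil)      // nil: no 3rd party store, system keeper only
	v1, _ := m.Create(now, sv) // version 1 becomes active
	v2, _ := m.Create(now, sv) // version 2 supersedes version 1
	_, _ = m.Delete("ns1", "n1") // deactivates version 2; version 1 was already superseded

	// Both versions are now inactive; once older than minAge they are
	// eligible for cleanup, mirroring the GC worker.
	gone, _ := m.CleanupInactiveSecureValues(now.Add(301*time.Second), 300*time.Second, 10)
	_, _, _ = v1, v2, gone
}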

@@ -2,6 +2,7 @@ package testutils
import (
"context"
"fmt"
"testing"
"time"
@@ -143,7 +144,8 @@ func Setup(t *testing.T, opts ...func(*SetupConfig)) Sut {
realMigrationExecutor, err := encryptionstorage.ProvideEncryptedValueMigrationExecutor(database, tracer, encryptedValueStorage, globalEncryptedValueStorage)
require.NoError(t, err)
var keeperService contracts.KeeperService = newKeeperServiceWrapper(sqlKeeper)
mockAwsKeeper := NewModelSecretsManager()
var keeperService contracts.KeeperService = newKeeperServiceWrapper(sqlKeeper, mockAwsKeeper)
if setupCfg.KeeperService != nil {
keeperService = setupCfg.KeeperService
@@ -190,6 +192,7 @@ func Setup(t *testing.T, opts ...func(*SetupConfig)) Sut {
Clock: clock,
KeeperService: keeperService,
KeeperMetadataStorage: keeperMetadataStorage,
ModelSecretsManager: mockAwsKeeper,
}
}
@@ -212,6 +215,8 @@ type Sut struct {
Clock *FakeClock
KeeperService contracts.KeeperService
KeeperMetadataStorage contracts.KeeperMetadataStorage
// A mock of AWS secrets manager that implements contracts.Keeper
ModelSecretsManager *ModelAWSSecretsManager
}
type CreateSvConfig struct {
@@ -260,16 +265,54 @@ func (s *Sut) DeleteSv(ctx context.Context, namespace, name string) (*secretv1be
return sv, err
}
type keeperServiceWrapper struct {
keeper contracts.Keeper
type CreateKeeperConfig struct {
// The default keeper payload. Mutate it to change which keeper ends up being created
Keeper *secretv1beta1.Keeper
}
func newKeeperServiceWrapper(keeper contracts.Keeper) *keeperServiceWrapper {
return &keeperServiceWrapper{keeper: keeper}
func (s *Sut) CreateAWSKeeper(ctx context.Context) (*secretv1beta1.Keeper, error) {
return s.CreateKeeper(ctx, func(cfg *CreateKeeperConfig) {
cfg.Keeper.Spec = secretv1beta1.KeeperSpec{
Aws: &secretv1beta1.KeeperAWSConfig{},
}
})
}
func (s *Sut) CreateKeeper(ctx context.Context, opts ...func(*CreateKeeperConfig)) (*secretv1beta1.Keeper, error) {
cfg := CreateKeeperConfig{
Keeper: &secretv1beta1.Keeper{
ObjectMeta: metav1.ObjectMeta{
Name: "sv1",
Namespace: "ns1",
},
Spec: secretv1beta1.KeeperSpec{
Aws: &secretv1beta1.KeeperAWSConfig{},
},
},
}
for _, opt := range opts {
opt(&cfg)
}
return s.KeeperMetadataStorage.Create(ctx, cfg.Keeper, "actor-uid")
}
type keeperServiceWrapper struct {
sqlKeeper *sqlkeeper.SQLKeeper
awsKeeper *ModelAWSSecretsManager
}
func newKeeperServiceWrapper(sqlKeeper *sqlkeeper.SQLKeeper, awsKeeper *ModelAWSSecretsManager) *keeperServiceWrapper {
return &keeperServiceWrapper{sqlKeeper: sqlKeeper, awsKeeper: awsKeeper}
}
func (wrapper *keeperServiceWrapper) KeeperForConfig(cfg secretv1beta1.KeeperConfig) (contracts.Keeper, error) {
return wrapper.keeper, nil
switch cfg.(type) {
case *secretv1beta1.NamedKeeperConfig[*secretv1beta1.KeeperAWSConfig]:
return wrapper.awsKeeper, nil
default:
return wrapper.sqlKeeper, nil
}
}
func CreateUserAuthContext(ctx context.Context, namespace string, permissions map[string][]string) context.Context {
@@ -390,3 +433,113 @@ type NoopMigrationExecutor struct {
func (e *NoopMigrationExecutor) Execute(ctx context.Context) (int, error) {
return 0, nil
}
// A mock of AWS secrets manager, used for testing.
type ModelAWSSecretsManager struct {
secrets map[string]entry
alreadyDeleted map[string]bool
}
type entry struct {
exposedValueOrRef string
externalID string
}
func NewModelSecretsManager() *ModelAWSSecretsManager {
return &ModelAWSSecretsManager{
secrets: make(map[string]entry),
alreadyDeleted: make(map[string]bool),
}
}
func (m *ModelAWSSecretsManager) Store(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64, exposedValueOrRef string) (externalID contracts.ExternalID, err error) {
if exposedValueOrRef == "" {
return "", fmt.Errorf("failed to satisfy constraint: Member must have length greater than or equal to 1")
}
versionID := buildVersionID(namespace, name, version)
if e, ok := m.secrets[versionID]; ok {
// Ignore duplicated requests
if e.exposedValueOrRef == exposedValueOrRef {
return contracts.ExternalID(e.externalID), nil
}
// Tried to create a secret that already exists
return "", fmt.Errorf("ResourceExistsException: The operation failed because the secret %+v already exists", versionID)
}
// First time creating the secret
entry := entry{
exposedValueOrRef: exposedValueOrRef,
externalID: "external-id",
}
m.secrets[versionID] = entry
return contracts.ExternalID(entry.externalID), nil
}
// Used to simulate the creation of secrets in the 3rd party secret store
func (m *ModelAWSSecretsManager) Create(name, value string) {
m.secrets[name] = entry{
exposedValueOrRef: value,
externalID: fmt.Sprintf("external_id_%+v", value),
}
}
func (m *ModelAWSSecretsManager) Expose(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64) (exposedValue secretv1beta1.ExposedSecureValue, err error) {
versionID := buildVersionID(namespace, name, version)
if m.deleted(versionID) {
return "", fmt.Errorf("InvalidRequestException: You can't perform this operation on the secret because it was marked for deletion")
}
entry, ok := m.secrets[versionID]
if !ok {
return "", fmt.Errorf("ResourceNotFoundException: Secrets Manager can't find the specified secret")
}
return secretv1beta1.ExposedSecureValue(entry.exposedValueOrRef), nil
}
// TODO: this could be namespaced to make it more realistic
func (m *ModelAWSSecretsManager) RetrieveReference(ctx context.Context, _ secretv1beta1.KeeperConfig, ref string) (secretv1beta1.ExposedSecureValue, error) {
entry, ok := m.secrets[ref]
if !ok {
return "", fmt.Errorf("ResourceNotFoundException: Secrets Manager can't find the specified secret")
}
return secretv1beta1.ExposedSecureValue(entry.exposedValueOrRef), nil
}
func (m *ModelAWSSecretsManager) Delete(ctx context.Context, cfg secretv1beta1.KeeperConfig, namespace xkube.Namespace, name string, version int64) (err error) {
versionID := buildVersionID(namespace, name, version)
// Deleting a secret that existed at some point is idempotent
if m.deleted(versionID) {
return nil
}
// If the secret is being deleted for the first time
if m.exists(versionID) {
m.delete(versionID)
}
return nil
}
func (m *ModelAWSSecretsManager) deleted(versionID string) bool {
return m.alreadyDeleted[versionID]
}
func (m *ModelAWSSecretsManager) exists(versionID string) bool {
_, ok := m.secrets[versionID]
return ok
}
func (m *ModelAWSSecretsManager) delete(versionID string) {
m.alreadyDeleted[versionID] = true
delete(m.secrets, versionID)
}
func buildVersionID(namespace xkube.Namespace, name string, version int64) string {
return fmt.Sprintf("%s/%s/%d", namespace, name, version)
}
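In tests the mock plays both roles: Create seeds a secret as if it already lived in AWS, while Store and Expose cover versioned secrets written by Grafana itself. A short usage sketch (context and imports assumed; errors elided):

func sketchAwsMock(ctx context.Context) {
	m := NewModelSecretsManager()

	// Seed a secret "out of band", as if it already existed in AWS.
	m.Create("ref1", "s3cr3t")

	// Resolving a reference goes through the keeper contract.
	exposed, _ := m.RetrieveReference(ctx, nil, "ref1")
	fmt.Println(exposed.DangerouslyExposeAndConsumeValue()) // s3cr3t

	// Store/Expose address versioned secrets written by Grafana itself.
	extID, _ := m.Store(ctx, nil, xkube.Namespace("ns1"), "n1", 1, "v1")
	_ = extID
}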

@@ -1,6 +1,8 @@
package validator
import (
"context"
"fmt"
"strings"
"k8s.io/apimachinery/pkg/util/validation"
@@ -9,14 +11,17 @@ import (
secretv1beta1 "github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1"
"github.com/grafana/grafana/pkg/registry/apis/secret/contracts"
"github.com/grafana/grafana/pkg/services/featuremgmt"
)
type keeperValidator struct{}
type keeperValidator struct {
features featuremgmt.FeatureToggles
}
var _ contracts.KeeperValidator = &keeperValidator{}
func ProvideKeeperValidator() contracts.KeeperValidator {
return &keeperValidator{}
func ProvideKeeperValidator(features featuremgmt.FeatureToggles) contracts.KeeperValidator {
return &keeperValidator{features: features}
}
func (v *keeperValidator) Validate(keeper *secretv1beta1.Keeper, oldKeeper *secretv1beta1.Keeper, operation admission.Operation) field.ErrorList {
@@ -57,51 +62,110 @@ func (v *keeperValidator) Validate(keeper *secretv1beta1.Keeper, oldKeeper *secr
}
if keeper.Spec.Aws != nil {
if err := validateCredentialValue(field.NewPath("spec", "aws", "accessKeyID"), keeper.Spec.Aws.AccessKeyID); err != nil {
errs = append(errs, err)
}
if err := validateCredentialValue(field.NewPath("spec", "aws", "secretAccessKey"), keeper.Spec.Aws.SecretAccessKey); err != nil {
errs = append(errs, err)
//nolint
if !v.features.IsEnabled(context.Background(), featuremgmt.FlagSecretsManagementAppPlatformAwsKeeper) {
errs = append(errs,
field.Forbidden(field.NewPath("spec", "aws"),
fmt.Sprintf("enable aws keeper feature toggle to create aws keepers: %s", featuremgmt.FlagSecretsManagementAppPlatformAwsKeeper)))
} else {
errs = append(errs, validateAws(keeper.Spec.Aws)...)
}
}
if keeper.Spec.Azure != nil {
if keeper.Spec.Azure.KeyVaultName == "" {
errs = append(errs, field.Required(field.NewPath("spec", "azure", "keyVaultName"), "a `keyVaultName` is required"))
}
if keeper.Spec.Azure.TenantID == "" {
errs = append(errs, field.Required(field.NewPath("spec", "azure", "tenantID"), "a `tenantID` is required"))
}
if keeper.Spec.Azure.ClientID == "" {
errs = append(errs, field.Required(field.NewPath("spec", "azure", "clientID"), "a `clientID` is required"))
}
if err := validateCredentialValue(field.NewPath("spec", "azure", "clientSecret"), keeper.Spec.Azure.ClientSecret); err != nil {
errs = append(errs, err)
}
errs = append(errs, validateAzure(keeper.Spec.Azure)...)
}
if keeper.Spec.Gcp != nil {
if keeper.Spec.Gcp.ProjectID == "" {
errs = append(errs, field.Required(field.NewPath("spec", "gcp", "projectID"), "a `projectID` is required"))
}
if keeper.Spec.Gcp.CredentialsFile == "" {
errs = append(errs, field.Required(field.NewPath("spec", "gcp", "credentialsFile"), "a `credentialsFile` is required"))
}
errs = append(errs, validateGcp(keeper.Spec.Gcp)...)
}
if keeper.Spec.HashiCorpVault != nil {
if keeper.Spec.HashiCorpVault.Address == "" {
errs = append(errs, field.Required(field.NewPath("spec", "hashiCorpVault", "address"), "an `address` is required"))
}
errs = append(errs, validateHashiCorpVault(keeper.Spec.HashiCorpVault)...)
}
if err := validateCredentialValue(field.NewPath("spec", "hashiCorpVault", "token"), keeper.Spec.HashiCorpVault.Token); err != nil {
return errs
}
func validateAws(cfg *secretv1beta1.KeeperAWSConfig) field.ErrorList {
errs := make(field.ErrorList, 0)
if cfg.Region == "" {
errs = append(errs, field.Required(field.NewPath("spec", "aws", "region"), "region must be present"))
}
switch {
case cfg.AccessKey == nil && cfg.AssumeRole == nil:
errs = append(errs, field.Required(field.NewPath("spec", "aws"), "one of `accessKey` or `assumeRole` must be present"))
case cfg.AccessKey != nil && cfg.AssumeRole != nil:
errs = append(errs, field.Required(field.NewPath("spec", "aws"), "only one of `accessKey` or `assumeRole` can be present"))
case cfg.AccessKey != nil:
if err := validateCredentialValue(field.NewPath("spec", "aws", "accessKey", "accessKeyID"), cfg.AccessKey.AccessKeyID); err != nil {
errs = append(errs, err)
}
if err := validateCredentialValue(field.NewPath("spec", "aws", "accessKey", "secretAccessKey"), cfg.AccessKey.SecretAccessKey); err != nil {
errs = append(errs, err)
}
case cfg.AssumeRole != nil:
if cfg.AssumeRole.AssumeRoleArn == "" {
errs = append(errs, field.Required(field.NewPath("spec", "aws", "assumeRole", "assumeRoleArn"), "arn of the role to assume must be present"))
}
if cfg.AssumeRole.ExternalID == "" {
errs = append(errs, field.Required(field.NewPath("spec", "aws", "assumeRole", "externalId"), "externalId must be present"))
}
}
return errs
}
func validateAzure(cfg *secretv1beta1.KeeperAzureConfig) field.ErrorList {
errs := make(field.ErrorList, 0)
if cfg.KeyVaultName == "" {
errs = append(errs, field.Required(field.NewPath("spec", "azure", "keyVaultName"), "a `keyVaultName` is required"))
}
if cfg.TenantID == "" {
errs = append(errs, field.Required(field.NewPath("spec", "azure", "tenantID"), "a `tenantID` is required"))
}
if cfg.ClientID == "" {
errs = append(errs, field.Required(field.NewPath("spec", "azure", "clientID"), "a `clientID` is required"))
}
if err := validateCredentialValue(field.NewPath("spec", "azure", "clientSecret"), cfg.ClientSecret); err != nil {
errs = append(errs, err)
}
return errs
}
func validateGcp(cfg *secretv1beta1.KeeperGCPConfig) field.ErrorList {
errs := make(field.ErrorList, 0)
if cfg.ProjectID == "" {
errs = append(errs, field.Required(field.NewPath("spec", "gcp", "projectID"), "a `projectID` is required"))
}
if cfg.CredentialsFile == "" {
errs = append(errs, field.Required(field.NewPath("spec", "gcp", "credentialsFile"), "a `credentialsFile` is required"))
}
return errs
}
func validateHashiCorpVault(cfg *secretv1beta1.KeeperHashiCorpConfig) field.ErrorList {
errs := make(field.ErrorList, 0)
if cfg.Address == "" {
errs = append(errs, field.Required(field.NewPath("spec", "hashiCorpVault", "address"), "an `address` is required"))
}
if err := validateCredentialValue(field.NewPath("spec", "hashiCorpVault", "token"), cfg.Token); err != nil {
errs = append(errs, err)
}
return errs
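The extracted helpers can also be exercised directly; a minimal sketch for the new AWS rules (illustrative only, the validator normally reaches them via Validate; imports elided):

func sketchValidateAws() {
	cfg := &secretv1beta1.KeeperAWSConfig{
		Region: "us-east-1",
		// Exactly one of AccessKey or AssumeRole may be set.
		AccessKey: &secretv1beta1.KeeperAWSAccessKey{
			AccessKeyID:     secretv1beta1.KeeperCredentialValue{ValueFromEnv: "AWS_ACCESS_KEY_ID"},
			SecretAccessKey: secretv1beta1.KeeperCredentialValue{ValueFromEnv: "AWS_SECRET_ACCESS_KEY"},
		},
	}
	if errs := validateAws(cfg); len(errs) > 0 {
		// Missing region, or both/neither of accessKey and assumeRole, etc.
	}
}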

@@ -10,11 +10,12 @@ import (
"k8s.io/utils/ptr"
secretv1beta1 "github.com/grafana/grafana/apps/secret/pkg/apis/secret/v1beta1"
"github.com/grafana/grafana/pkg/services/featuremgmt"
)
func TestValidateKeeper(t *testing.T) {
objectMeta := metav1.ObjectMeta{Name: "test", Namespace: "test"}
validator := ProvideKeeperValidator()
validator := ProvideKeeperValidator(featuremgmt.WithFeatures(featuremgmt.FlagSecretsManagementAppPlatformAwsKeeper))
t.Run("when creating a new keeper", func(t *testing.T) {
t.Run("the `description` must be present", func(t *testing.T) {
@@ -22,9 +23,12 @@ func TestValidateKeeper(t *testing.T) {
ObjectMeta: objectMeta,
Spec: secretv1beta1.KeeperSpec{
Aws: &secretv1beta1.KeeperAWSConfig{
AccessKeyID: secretv1beta1.KeeperCredentialValue{ValueFromEnv: "some-value"},
SecretAccessKey: secretv1beta1.KeeperCredentialValue{ValueFromEnv: "some-value"},
KmsKeyID: ptr.To("kms-key-id"),
Region: "us-east-1",
AccessKey: &secretv1beta1.KeeperAWSAccessKey{
AccessKeyID: secretv1beta1.KeeperCredentialValue{ValueFromEnv: "some-value"},
SecretAccessKey: secretv1beta1.KeeperCredentialValue{ValueFromEnv: "some-value"},
},
KmsKeyID: ptr.To("kms-key-id"),
},
},
}
@@ -41,30 +45,42 @@ func TestValidateKeeper(t *testing.T) {
Spec: secretv1beta1.KeeperSpec{
Description: "description",
Aws: &secretv1beta1.KeeperAWSConfig{
AccessKeyID: secretv1beta1.KeeperCredentialValue{
ValueFromEnv: "some-value",
},
SecretAccessKey: secretv1beta1.KeeperCredentialValue{
SecureValueName: "some-value",
Region: "us-east-1",
AccessKey: &secretv1beta1.KeeperAWSAccessKey{
AccessKeyID: secretv1beta1.KeeperCredentialValue{
ValueFromEnv: "some-value",
},
SecretAccessKey: secretv1beta1.KeeperCredentialValue{
SecureValueName: "some-value",
},
},
KmsKeyID: ptr.To("optional"),
},
},
}
t.Run("aws keeper feature flag must be enabled", func(t *testing.T) {
// Validator with feature disabled
validator := ProvideKeeperValidator(featuremgmt.WithFeatures())
errs := validator.Validate(validKeeperAWS.DeepCopy(), nil, admission.Create)
require.Len(t, errs, 1)
require.Equal(t, "spec.aws", errs[0].Field)
require.Contains(t, errs[0].Detail, "secretsManagementAppPlatformAwsKeeper")
})
t.Run("`accessKeyID` must be present", func(t *testing.T) {
t.Run("at least one of the credential value must be present", func(t *testing.T) {
keeper := validKeeperAWS.DeepCopy()
keeper.Spec.Aws.AccessKeyID = secretv1beta1.KeeperCredentialValue{}
keeper.Spec.Aws.AccessKey.AccessKeyID = secretv1beta1.KeeperCredentialValue{}
errs := validator.Validate(keeper, nil, admission.Create)
require.Len(t, errs, 1)
require.Equal(t, "spec.aws.accessKeyID", errs[0].Field)
require.Equal(t, "spec.aws.accessKey.accessKeyID", errs[0].Field)
})
t.Run("at most one of the credential value must be present", func(t *testing.T) {
keeper := validKeeperAWS.DeepCopy()
keeper.Spec.Aws.AccessKeyID = secretv1beta1.KeeperCredentialValue{
keeper.Spec.Aws.AccessKey.AccessKeyID = secretv1beta1.KeeperCredentialValue{
SecureValueName: "a",
ValueFromEnv: "b",
ValueFromConfig: "c",
@@ -72,23 +88,23 @@ func TestValidateKeeper(t *testing.T) {
errs := validator.Validate(keeper, nil, admission.Create)
require.Len(t, errs, 1)
require.Equal(t, "spec.aws.accessKeyID", errs[0].Field)
require.Equal(t, "spec.aws.accessKey.accessKeyID", errs[0].Field)
})
})
t.Run("`secretAccessKey` must be present", func(t *testing.T) {
t.Run("at least one of the credential value must be present", func(t *testing.T) {
keeper := validKeeperAWS.DeepCopy()
keeper.Spec.Aws.SecretAccessKey = secretv1beta1.KeeperCredentialValue{}
keeper.Spec.Aws.AccessKey.SecretAccessKey = secretv1beta1.KeeperCredentialValue{}
errs := validator.Validate(keeper, nil, admission.Create)
require.Len(t, errs, 1)
require.Equal(t, "spec.aws.secretAccessKey", errs[0].Field)
require.Equal(t, "spec.aws.accessKey.secretAccessKey", errs[0].Field)
})
t.Run("at most one of the credential value must be present", func(t *testing.T) {
keeper := validKeeperAWS.DeepCopy()
keeper.Spec.Aws.SecretAccessKey = secretv1beta1.KeeperCredentialValue{
keeper.Spec.Aws.AccessKey.SecretAccessKey = secretv1beta1.KeeperCredentialValue{
SecureValueName: "a",
ValueFromEnv: "b",
ValueFromConfig: "c",
@@ -96,7 +112,23 @@ func TestValidateKeeper(t *testing.T) {
errs := validator.Validate(keeper, nil, admission.Create)
require.Len(t, errs, 1)
require.Equal(t, "spec.aws.secretAccessKey", errs[0].Field)
require.Equal(t, "spec.aws.accessKey.secretAccessKey", errs[0].Field)
})
t.Run("only one of accessKey or assumeRole can be present", func(t *testing.T) {
keeper := validKeeperAWS.DeepCopy()
keeper.Spec.Aws.AccessKey.SecretAccessKey = secretv1beta1.KeeperCredentialValue{
SecureValueName: "a",
}
keeper.Spec.Aws.AssumeRole = &secretv1beta1.KeeperAWSAssumeRole{
AssumeRoleArn: "arn",
ExternalID: "id",
}
errs := validator.Validate(keeper, nil, admission.Create)
require.Len(t, errs, 1)
require.Equal(t, "spec.aws", errs[0].Field)
require.Equal(t, "only one of `accessKey` or `assumeRole` can be present", errs[0].Detail)
})
})
})

@@ -1,44 +0,0 @@
package acimpl
import (
"context"
"time"
"github.com/grafana/grafana/pkg/services/accesscontrol"
)
const (
ossBasicRoleSeedLockName = "oss-ac-basic-role-seeder"
ossBasicRoleSeedTimeout = 2 * time.Minute
)
// refreshBasicRolePermissionsInDB ensures basic role permissions are fully derived from in-memory registrations
func (s *Service) refreshBasicRolePermissionsInDB(ctx context.Context, rolesSnapshot map[string][]accesscontrol.Permission) error {
if s.sql == nil || s.seeder == nil {
return nil
}
run := func(ctx context.Context) error {
desired := map[accesscontrol.SeedPermission]struct{}{}
for role, permissions := range rolesSnapshot {
for _, permission := range permissions {
desired[accesscontrol.SeedPermission{BuiltInRole: role, Action: permission.Action, Scope: permission.Scope}] = struct{}{}
}
}
s.seeder.SetDesiredPermissions(desired)
return s.seeder.Seed(ctx)
}
if s.serverLock == nil {
return run(ctx)
}
var err error
errLock := s.serverLock.LockExecuteAndRelease(ctx, ossBasicRoleSeedLockName, ossBasicRoleSeedTimeout, func(ctx context.Context) {
err = run(ctx)
})
if errLock != nil {
return errLock
}
return err
}
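For context on what is being removed: this helper wrapped the refresh in a named server-wide lock so only one instance seeds at a time, falling back to a direct call when no lock service is configured. The general shape, restated as a sketch (mirrors the code above; not a drop-in replacement, imports elided):

func runUnderLock(ctx context.Context, lock *serverlock.ServerLockService, do func(context.Context) error) error {
	if lock == nil {
		return do(ctx) // single-instance setups skip locking
	}
	var err error
	errLock := lock.LockExecuteAndRelease(ctx, "oss-ac-basic-role-seeder", 2*time.Minute, func(ctx context.Context) {
		err = do(ctx)
	})
	if errLock != nil {
		return errLock
	}
	return err
}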

@@ -1,128 +0,0 @@
package acimpl
import (
"context"
"testing"
"time"
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/infra/localcache"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/accesscontrol/database"
"github.com/grafana/grafana/pkg/services/accesscontrol/permreg"
"github.com/grafana/grafana/pkg/services/accesscontrol/resourcepermissions"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/util/testutil"
)
func TestIntegration_OSSBasicRolePermissions_PersistAndRefreshOnRegisterFixedRoles(t *testing.T) {
testutil.SkipIntegrationTestInShortMode(t)
ctx := context.Background()
sql := db.InitTestDB(t)
store := database.ProvideService(sql)
svc := ProvideOSSService(
setting.NewCfg(),
store,
&resourcepermissions.FakeActionSetSvc{},
localcache.ProvideService(),
featuremgmt.WithFeatures(),
tracing.InitializeTracerForTest(),
sql,
permreg.ProvidePermissionRegistry(),
nil,
)
require.NoError(t, svc.DeclareFixedRoles(accesscontrol.RoleRegistration{
Role: accesscontrol.RoleDTO{
Name: "fixed:test:role",
Permissions: []accesscontrol.Permission{
{Action: "test:read", Scope: ""},
},
},
Grants: []string{string(org.RoleViewer)},
}))
require.NoError(t, svc.RegisterFixedRoles(ctx))
// verify permission is persisted to DB for basic:viewer
require.NoError(t, sql.WithDbSession(ctx, func(sess *db.Session) error {
var role accesscontrol.Role
ok, err := sess.Table("role").Where("uid = ?", accesscontrol.BasicRoleUIDPrefix+"viewer").Get(&role)
require.NoError(t, err)
require.True(t, ok)
var count int64
count, err = sess.Table("permission").Where("role_id = ? AND action = ? AND scope = ?", role.ID, "test:read", "").Count()
require.NoError(t, err)
require.Equal(t, int64(1), count)
return nil
}))
// ensure RegisterFixedRoles refreshes it back to defaults
require.NoError(t, sql.WithDbSession(ctx, func(sess *db.Session) error {
ts := time.Now()
var role accesscontrol.Role
ok, err := sess.Table("role").Where("uid = ?", accesscontrol.BasicRoleUIDPrefix+"viewer").Get(&role)
require.NoError(t, err)
require.True(t, ok)
_, err = sess.Exec("DELETE FROM permission WHERE role_id = ?", role.ID)
require.NoError(t, err)
p := accesscontrol.Permission{
RoleID: role.ID,
Action: "custom:keep",
Scope: "",
Created: ts,
Updated: ts,
}
p.Kind, p.Attribute, p.Identifier = accesscontrol.SplitScope(p.Scope)
_, err = sess.Table("permission").Insert(&p)
return err
}))
svc2 := ProvideOSSService(
setting.NewCfg(),
store,
&resourcepermissions.FakeActionSetSvc{},
localcache.ProvideService(),
featuremgmt.WithFeatures(),
tracing.InitializeTracerForTest(),
sql,
permreg.ProvidePermissionRegistry(),
nil,
)
require.NoError(t, svc2.DeclareFixedRoles(accesscontrol.RoleRegistration{
Role: accesscontrol.RoleDTO{
Name: "fixed:test:role",
Permissions: []accesscontrol.Permission{
{Action: "test:read", Scope: ""},
},
},
Grants: []string{string(org.RoleViewer)},
}))
require.NoError(t, svc2.RegisterFixedRoles(ctx))
require.NoError(t, sql.WithDbSession(ctx, func(sess *db.Session) error {
var role accesscontrol.Role
ok, err := sess.Table("role").Where("uid = ?", accesscontrol.BasicRoleUIDPrefix+"viewer").Get(&role)
require.NoError(t, err)
require.True(t, ok)
var count int64
count, err = sess.Table("permission").Where("role_id = ? AND action = ? AND scope = ?", role.ID, "test:read", "").Count()
require.NoError(t, err)
require.Equal(t, int64(1), count)
count, err = sess.Table("permission").Where("role_id = ? AND action = ?", role.ID, "custom:keep").Count()
require.NoError(t, err)
require.Equal(t, int64(0), count)
return nil
}))
}

@@ -30,7 +30,6 @@ import (
"github.com/grafana/grafana/pkg/services/accesscontrol/migrator"
"github.com/grafana/grafana/pkg/services/accesscontrol/permreg"
"github.com/grafana/grafana/pkg/services/accesscontrol/pluginutils"
"github.com/grafana/grafana/pkg/services/accesscontrol/seeding"
"github.com/grafana/grafana/pkg/services/dashboards"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/services/folder"
@@ -97,12 +96,6 @@ func ProvideOSSService(
roles: accesscontrol.BuildBasicRoleDefinitions(),
store: store,
permRegistry: permRegistry,
sql: db,
serverLock: lock,
}
if backend, ok := store.(*database.AccessControlStore); ok {
s.seeder = seeding.New(log.New("accesscontrol.seeder"), backend, backend)
}
return s
@@ -119,11 +112,8 @@ type Service struct {
rolesMu sync.RWMutex
roles map[string]*accesscontrol.RoleDTO
store accesscontrol.Store
seeder *seeding.Seeder
permRegistry permreg.PermissionRegistry
isInitialized bool
sql db.DB
serverLock *serverlock.ServerLockService
}
func (s *Service) GetUsageStats(_ context.Context) map[string]any {
@@ -441,54 +431,17 @@ func (s *Service) RegisterFixedRoles(ctx context.Context) error {
defer span.End()
s.rolesMu.Lock()
registrations := s.registrations.Slice()
defer s.rolesMu.Unlock()
s.registrations.Range(func(registration accesscontrol.RoleRegistration) bool {
s.registerRolesLocked(registration)
return true
})
s.isInitialized = true
rolesSnapshot := s.getBasicRolePermissionsLocked()
s.rolesMu.Unlock()
if s.seeder != nil {
if err := s.seeder.SeedRoles(ctx, registrations); err != nil {
return err
}
if err := s.seeder.RemoveAbsentRoles(ctx); err != nil {
return err
}
}
if err := s.refreshBasicRolePermissionsInDB(ctx, rolesSnapshot); err != nil {
return err
}
return nil
}
// getBasicRolePermissionsLocked computes the desired basic role permissions from the
// current registration list, using the shared seeding registration logic.
//
// It must be called while holding the roles lock.
func (s *Service) getBasicRolePermissionsLocked() map[string][]accesscontrol.Permission {
desired := map[accesscontrol.SeedPermission]struct{}{}
s.registrations.Range(func(registration accesscontrol.RoleRegistration) bool {
seeding.AppendDesiredPermissions(desired, s.log, &registration.Role, registration.Grants, registration.Exclude, true)
return true
})
out := make(map[string][]accesscontrol.Permission)
for sp := range desired {
out[sp.BuiltInRole] = append(out[sp.BuiltInRole], accesscontrol.Permission{
Action: sp.Action,
Scope: sp.Scope,
})
}
return out
}
// registerRolesLocked processes a single role registration and adds permissions to basic roles.
// Must be called with s.rolesMu locked.
func (s *Service) registerRolesLocked(registration accesscontrol.RoleRegistration) {
@@ -521,7 +474,6 @@ func (s *Service) DeclarePluginRoles(ctx context.Context, ID, name string, regs
defer span.End()
acRegs := pluginutils.ToRegistrations(ID, name, regs)
updatedBasicRoles := false
for _, r := range acRegs {
if err := pluginutils.ValidatePluginRole(ID, r.Role); err != nil {
return err
@@ -548,23 +500,11 @@ func (s *Service) DeclarePluginRoles(ctx context.Context, ID, name string, regs
if initialized {
s.rolesMu.Lock()
s.registerRolesLocked(r)
updatedBasicRoles = true
s.rolesMu.Unlock()
s.cache.Flush()
}
}
if updatedBasicRoles {
s.rolesMu.RLock()
rolesSnapshot := s.getBasicRolePermissionsLocked()
s.rolesMu.RUnlock()
// plugin roles can be declared after startup - keep DB in sync
if err := s.refreshBasicRolePermissionsInDB(ctx, rolesSnapshot); err != nil {
return err
}
}
return nil
}

@@ -1,623 +0,0 @@
package database
import (
"context"
"strings"
"time"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/accesscontrol/seeding"
"github.com/grafana/grafana/pkg/services/sqlstore/migrator"
"github.com/grafana/grafana/pkg/util/xorm/core"
)
const basicRolePermBatchSize = 500
// LoadRoles returns all fixed and plugin roles (global org) with permissions, indexed by role name.
func (s *AccessControlStore) LoadRoles(ctx context.Context) (map[string]*accesscontrol.RoleDTO, error) {
out := map[string]*accesscontrol.RoleDTO{}
err := s.sql.WithDbSession(ctx, func(sess *db.Session) error {
type roleRow struct {
ID int64 `xorm:"id"`
OrgID int64 `xorm:"org_id"`
Version int64 `xorm:"version"`
UID string `xorm:"uid"`
Name string `xorm:"name"`
DisplayName string `xorm:"display_name"`
Description string `xorm:"description"`
Group string `xorm:"group_name"`
Hidden bool `xorm:"hidden"`
Updated time.Time `xorm:"updated"`
Created time.Time `xorm:"created"`
}
roles := []roleRow{}
if err := sess.Table("role").
Where("org_id = ?", accesscontrol.GlobalOrgID).
Where("(name LIKE ? OR name LIKE ?)", accesscontrol.FixedRolePrefix+"%", accesscontrol.PluginRolePrefix+"%").
Find(&roles); err != nil {
return err
}
if len(roles) == 0 {
return nil
}
roleIDs := make([]any, 0, len(roles))
roleByID := make(map[int64]*accesscontrol.RoleDTO, len(roles))
for _, r := range roles {
dto := &accesscontrol.RoleDTO{
ID: r.ID,
OrgID: r.OrgID,
Version: r.Version,
UID: r.UID,
Name: r.Name,
DisplayName: r.DisplayName,
Description: r.Description,
Group: r.Group,
Hidden: r.Hidden,
Updated: r.Updated,
Created: r.Created,
}
out[dto.Name] = dto
roleByID[dto.ID] = dto
roleIDs = append(roleIDs, dto.ID)
}
type permRow struct {
RoleID int64 `xorm:"role_id"`
Action string `xorm:"action"`
Scope string `xorm:"scope"`
}
perms := []permRow{}
if err := sess.Table("permission").In("role_id", roleIDs...).Find(&perms); err != nil {
return err
}
for _, p := range perms {
dto := roleByID[p.RoleID]
if dto == nil {
continue
}
dto.Permissions = append(dto.Permissions, accesscontrol.Permission{
RoleID: p.RoleID,
Action: p.Action,
Scope: p.Scope,
})
}
return nil
})
return out, err
}
func (s *AccessControlStore) SetRole(ctx context.Context, existingRole *accesscontrol.RoleDTO, wantedRole accesscontrol.RoleDTO) error {
if existingRole == nil {
return nil
}
return s.sql.WithDbSession(ctx, func(sess *db.Session) error {
_, err := sess.Table("role").
Where("id = ? AND org_id = ?", existingRole.ID, accesscontrol.GlobalOrgID).
Update(map[string]any{
"display_name": wantedRole.DisplayName,
"description": wantedRole.Description,
"group_name": wantedRole.Group,
"hidden": wantedRole.Hidden,
"updated": time.Now(),
})
return err
})
}
func (s *AccessControlStore) SetPermissions(ctx context.Context, existingRole *accesscontrol.RoleDTO, wantedRole accesscontrol.RoleDTO) error {
if existingRole == nil {
return nil
}
type key struct{ Action, Scope string }
existing := map[key]struct{}{}
for _, p := range existingRole.Permissions {
existing[key{p.Action, p.Scope}] = struct{}{}
}
desired := map[key]struct{}{}
for _, p := range wantedRole.Permissions {
desired[key{p.Action, p.Scope}] = struct{}{}
}
toAdd := make([]accesscontrol.Permission, 0)
toRemove := make([]accesscontrol.SeedPermission, 0)
now := time.Now()
for k := range desired {
if _, ok := existing[k]; ok {
continue
}
perm := accesscontrol.Permission{
RoleID: existingRole.ID,
Action: k.Action,
Scope: k.Scope,
Created: now,
Updated: now,
}
perm.Kind, perm.Attribute, perm.Identifier = accesscontrol.SplitScope(perm.Scope)
toAdd = append(toAdd, perm)
}
for k := range existing {
if _, ok := desired[k]; ok {
continue
}
toRemove = append(toRemove, accesscontrol.SeedPermission{Action: k.Action, Scope: k.Scope})
}
if len(toAdd) == 0 && len(toRemove) == 0 {
return nil
}
return s.sql.WithTransactionalDbSession(ctx, func(sess *db.Session) error {
if len(toRemove) > 0 {
if err := DeleteRolePermissionTuples(sess, s.sql.GetDBType(), existingRole.ID, toRemove); err != nil {
return err
}
}
if len(toAdd) > 0 {
_, err := sess.InsertMulti(toAdd)
return err
}
return nil
})
}
func (s *AccessControlStore) CreateRole(ctx context.Context, role accesscontrol.RoleDTO) error {
now := time.Now()
uid := role.UID
if uid == "" && (strings.HasPrefix(role.Name, accesscontrol.FixedRolePrefix) || strings.HasPrefix(role.Name, accesscontrol.PluginRolePrefix)) {
uid = accesscontrol.PrefixedRoleUID(role.Name)
}
r := accesscontrol.Role{
OrgID: accesscontrol.GlobalOrgID,
Version: role.Version,
UID: uid,
Name: role.Name,
DisplayName: role.DisplayName,
Description: role.Description,
Group: role.Group,
Hidden: role.Hidden,
Created: now,
Updated: now,
}
if r.Version == 0 {
r.Version = 1
}
return s.sql.WithTransactionalDbSession(ctx, func(sess *db.Session) error {
if _, err := sess.Insert(&r); err != nil {
return err
}
if len(role.Permissions) == 0 {
return nil
}
// De-duplicate permissions on (action, scope) to avoid unique constraint violations.
// Some role definitions may accidentally include duplicates.
type permKey struct{ Action, Scope string }
seen := make(map[permKey]struct{}, len(role.Permissions))
perms := make([]accesscontrol.Permission, 0, len(role.Permissions))
for _, p := range role.Permissions {
k := permKey{Action: p.Action, Scope: p.Scope}
if _, ok := seen[k]; ok {
continue
}
seen[k] = struct{}{}
perm := accesscontrol.Permission{
RoleID: r.ID,
Action: p.Action,
Scope: p.Scope,
Created: now,
Updated: now,
}
perm.Kind, perm.Attribute, perm.Identifier = accesscontrol.SplitScope(perm.Scope)
perms = append(perms, perm)
}
_, err := sess.InsertMulti(perms)
return err
})
}
func (s *AccessControlStore) DeleteRoles(ctx context.Context, roleUIDs []string) error {
if len(roleUIDs) == 0 {
return nil
}
uids := make([]any, 0, len(roleUIDs))
for _, uid := range roleUIDs {
uids = append(uids, uid)
}
return s.sql.WithTransactionalDbSession(ctx, func(sess *db.Session) error {
type row struct {
ID int64 `xorm:"id"`
UID string `xorm:"uid"`
}
rows := []row{}
if err := sess.Table("role").
Where("org_id = ?", accesscontrol.GlobalOrgID).
In("uid", uids...).
Find(&rows); err != nil {
return err
}
if len(rows) == 0 {
return nil
}
roleIDs := make([]any, 0, len(rows))
for _, r := range rows {
roleIDs = append(roleIDs, r.ID)
}
// Remove permissions and assignments first to avoid foreign-key violations (where FK constraints are enforced).
{
args := append([]any{"DELETE FROM permission WHERE role_id IN (?" + strings.Repeat(",?", len(roleIDs)-1) + ")"}, roleIDs...)
if _, err := sess.Exec(args...); err != nil {
return err
}
}
{
args := append([]any{"DELETE FROM user_role WHERE role_id IN (?" + strings.Repeat(",?", len(roleIDs)-1) + ")"}, roleIDs...)
if _, err := sess.Exec(args...); err != nil {
return err
}
}
{
args := append([]any{"DELETE FROM team_role WHERE role_id IN (?" + strings.Repeat(",?", len(roleIDs)-1) + ")"}, roleIDs...)
if _, err := sess.Exec(args...); err != nil {
return err
}
}
{
args := append([]any{"DELETE FROM builtin_role WHERE role_id IN (?" + strings.Repeat(",?", len(roleIDs)-1) + ")"}, roleIDs...)
if _, err := sess.Exec(args...); err != nil {
return err
}
}
args := append([]any{"DELETE FROM role WHERE org_id = ? AND uid IN (?" + strings.Repeat(",?", len(uids)-1) + ")", accesscontrol.GlobalOrgID}, uids...)
_, err := sess.Exec(args...)
return err
})
}
// OSS basic-role permission refresh uses seeding.Seeder.Seed() with a desired set computed in memory.
// These methods implement the permission seeding part of seeding.SeedingBackend against the current permission table.
func (s *AccessControlStore) LoadPrevious(ctx context.Context) (map[accesscontrol.SeedPermission]struct{}, error) {
var out map[accesscontrol.SeedPermission]struct{}
err := s.sql.WithDbSession(ctx, func(sess *db.Session) error {
rows, err := LoadBasicRoleSeedPermissions(sess)
if err != nil {
return err
}
out = make(map[accesscontrol.SeedPermission]struct{}, len(rows))
for _, r := range rows {
r.Origin = ""
out[r] = struct{}{}
}
return nil
})
return out, err
}
func (s *AccessControlStore) Apply(ctx context.Context, added, removed []accesscontrol.SeedPermission, updated map[accesscontrol.SeedPermission]accesscontrol.SeedPermission) error {
rolesToUpgrade := seeding.RolesToUpgrade(added, removed)
// Run the same OSS apply logic as ossBasicRoleSeedBackend.Apply inside a single transaction.
return s.sql.WithTransactionalDbSession(ctx, func(sess *db.Session) error {
defs := accesscontrol.BuildBasicRoleDefinitions()
builtinToRoleID, err := EnsureBasicRolesExist(sess, defs)
if err != nil {
return err
}
backend := &ossBasicRoleSeedBackend{
sess: sess,
now: time.Now(),
builtinToRoleID: builtinToRoleID,
desired: nil,
dbType: s.sql.GetDBType(),
}
if err := backend.Apply(ctx, added, removed, updated); err != nil {
return err
}
return BumpBasicRoleVersions(sess, rolesToUpgrade)
})
}
// EnsureBasicRolesExist ensures the built-in basic roles exist in the role table and are bound in builtin_role.
// It returns a mapping from builtin role name (for example "Admin") to role ID.
func EnsureBasicRolesExist(sess *db.Session, defs map[string]*accesscontrol.RoleDTO) (map[string]int64, error) {
uidToBuiltin := make(map[string]string, len(defs))
uids := make([]any, 0, len(defs))
for builtin, def := range defs {
uidToBuiltin[def.UID] = builtin
uids = append(uids, def.UID)
}
type roleRow struct {
ID int64 `xorm:"id"`
UID string `xorm:"uid"`
}
rows := []roleRow{}
if err := sess.Table("role").
Where("org_id = ?", accesscontrol.GlobalOrgID).
In("uid", uids...).
Find(&rows); err != nil {
return nil, err
}
ts := time.Now()
builtinToRoleID := make(map[string]int64, len(defs))
for _, r := range rows {
br, ok := uidToBuiltin[r.UID]
if !ok {
continue
}
builtinToRoleID[br] = r.ID
}
for builtin, def := range defs {
roleID, ok := builtinToRoleID[builtin]
if !ok {
role := accesscontrol.Role{
OrgID: def.OrgID,
Version: def.Version,
UID: def.UID,
Name: def.Name,
DisplayName: def.DisplayName,
Description: def.Description,
Group: def.Group,
Hidden: def.Hidden,
Created: ts,
Updated: ts,
}
if _, err := sess.Insert(&role); err != nil {
return nil, err
}
roleID = role.ID
builtinToRoleID[builtin] = roleID
}
has, err := sess.Table("builtin_role").
Where("role_id = ? AND role = ? AND org_id = ?", roleID, builtin, accesscontrol.GlobalOrgID).
Exist()
if err != nil {
return nil, err
}
if !has {
br := accesscontrol.BuiltinRole{
RoleID: roleID,
OrgID: accesscontrol.GlobalOrgID,
Role: builtin,
Created: ts,
Updated: ts,
}
if _, err := sess.Table("builtin_role").Insert(&br); err != nil {
return nil, err
}
}
}
return builtinToRoleID, nil
}
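// For illustration, on a fresh database this inserts the four basic roles and
// returns a mapping like (IDs hypothetical):
//
//	map[string]int64{"Viewer": 1, "Editor": 2, "Admin": 3, "Grafana Admin": 4}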
// DeleteRolePermissionTuples deletes permissions for a single role by (action, scope) pairs.
//
// It uses a row-constructor IN clause where supported (MySQL, Postgres, SQLite) and falls back
// to a WHERE ... OR ... form for MSSQL.
func DeleteRolePermissionTuples(sess *db.Session, dbType core.DbType, roleID int64, perms []accesscontrol.SeedPermission) error {
if len(perms) == 0 {
return nil
}
if dbType == migrator.MSSQL {
// MSSQL doesn't support (action, scope) IN ((?,?),(?,?)) row constructors.
where := make([]string, 0, len(perms))
args := make([]any, 0, 1+len(perms)*2)
args = append(args, roleID)
for _, p := range perms {
where = append(where, "(action = ? AND scope = ?)")
args = append(args, p.Action, p.Scope)
}
_, err := sess.Exec(
append([]any{
"DELETE FROM permission WHERE role_id = ? AND (" + strings.Join(where, " OR ") + ")",
}, args...)...,
)
return err
}
args := make([]any, 0, 1+len(perms)*2)
args = append(args, roleID)
for _, p := range perms {
args = append(args, p.Action, p.Scope)
}
sql := "DELETE FROM permission WHERE role_id = ? AND (action, scope) IN (" +
strings.Repeat("(?, ?),", len(perms)-1) + "(?, ?))"
_, err := sess.Exec(append([]any{sql}, args...)...)
return err
}
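// For two (action, scope) pairs, the generated SQL has the following shape (illustrative):
//
//	-- MySQL / Postgres / SQLite
//	DELETE FROM permission WHERE role_id = ? AND (action, scope) IN ((?, ?),(?, ?))
//
//	-- MSSQL fallback
//	DELETE FROM permission WHERE role_id = ? AND ((action = ? AND scope = ?) OR (action = ? AND scope = ?))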
type ossBasicRoleSeedBackend struct {
sess *db.Session
now time.Time
builtinToRoleID map[string]int64
desired map[accesscontrol.SeedPermission]struct{}
dbType core.DbType
}
func (b *ossBasicRoleSeedBackend) LoadPrevious(_ context.Context) (map[accesscontrol.SeedPermission]struct{}, error) {
rows, err := LoadBasicRoleSeedPermissions(b.sess)
if err != nil {
return nil, err
}
out := make(map[accesscontrol.SeedPermission]struct{}, len(rows))
for _, r := range rows {
// Ensure the key matches what OSS seeding uses (Origin is always empty for basic role refresh).
r.Origin = ""
out[r] = struct{}{}
}
return out, nil
}
func (b *ossBasicRoleSeedBackend) LoadDesired(_ context.Context) (map[accesscontrol.SeedPermission]struct{}, error) {
return b.desired, nil
}
func (b *ossBasicRoleSeedBackend) Apply(_ context.Context, added, removed []accesscontrol.SeedPermission, updated map[accesscontrol.SeedPermission]accesscontrol.SeedPermission) error {
// Delete removed permissions (this includes user-defined permissions that aren't in desired).
if len(removed) > 0 {
permsByRoleID := map[int64][]accesscontrol.SeedPermission{}
for _, p := range removed {
roleID, ok := b.builtinToRoleID[p.BuiltInRole]
if !ok {
continue
}
permsByRoleID[roleID] = append(permsByRoleID[roleID], p)
}
for roleID, perms := range permsByRoleID {
// Chunk to keep statement sizes and parameter counts bounded.
if err := batch(len(perms), basicRolePermBatchSize, func(start, end int) error {
return DeleteRolePermissionTuples(b.sess, b.dbType, roleID, perms[start:end])
}); err != nil {
return err
}
}
}
// Insert added permissions and updated-target permissions.
toInsertSeed := make([]accesscontrol.SeedPermission, 0, len(added)+len(updated))
toInsertSeed = append(toInsertSeed, added...)
for _, v := range updated {
toInsertSeed = append(toInsertSeed, v)
}
if len(toInsertSeed) == 0 {
return nil
}
// De-duplicate on (role_id, action, scope). This avoids unique constraint violations when:
// - the same permission appears in both added and updated
// - multiple plugin origins grant the same permission (Origin is not persisted in permission table)
type permKey struct {
RoleID int64
Action string
Scope string
}
seen := make(map[permKey]struct{}, len(toInsertSeed))
toInsert := make([]accesscontrol.Permission, 0, len(toInsertSeed))
for _, p := range toInsertSeed {
roleID, ok := b.builtinToRoleID[p.BuiltInRole]
if !ok {
continue
}
k := permKey{RoleID: roleID, Action: p.Action, Scope: p.Scope}
if _, ok := seen[k]; ok {
continue
}
seen[k] = struct{}{}
perm := accesscontrol.Permission{
RoleID: roleID,
Action: p.Action,
Scope: p.Scope,
Created: b.now,
Updated: b.now,
}
perm.Kind, perm.Attribute, perm.Identifier = accesscontrol.SplitScope(perm.Scope)
toInsert = append(toInsert, perm)
}
return batch(len(toInsert), basicRolePermBatchSize, func(start, end int) error {
// MySQL: ignore conflicts to make seeding idempotent under retries/concurrency.
// Conflicts can happen if the same permission already exists (unique on role_id, action, scope).
if b.dbType == migrator.MySQL {
args := make([]any, 0, (end-start)*8)
for i := start; i < end; i++ {
p := toInsert[i]
args = append(args, p.RoleID, p.Action, p.Scope, p.Kind, p.Attribute, p.Identifier, p.Updated, p.Created)
}
sql := append([]any{`INSERT IGNORE INTO permission (role_id, action, scope, kind, attribute, identifier, updated, created) VALUES ` +
strings.Repeat("(?, ?, ?, ?, ?, ?, ?, ?),", end-start-1) + "(?, ?, ?, ?, ?, ?, ?, ?)"}, args...)
_, err := b.sess.Exec(sql...)
return err
}
_, err := b.sess.InsertMulti(toInsert[start:end])
return err
})
}
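// For a batch of two rows on MySQL, the generated statement has this shape (illustrative):
//
//	INSERT IGNORE INTO permission (role_id, action, scope, kind, attribute, identifier, updated, created)
//	VALUES (?, ?, ?, ?, ?, ?, ?, ?),(?, ?, ?, ?, ?, ?, ?, ?)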
func batch(count, size int, eachFn func(start, end int) error) error {
for i := 0; i < count; {
end := i + size
if end > count {
end = count
}
if err := eachFn(i, end); err != nil {
return err
}
i = end
}
return nil
}
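// For example, batch(5, 2, fn) invokes fn(0, 2), fn(2, 4), and fn(4, 5), so
// callers can slice a backing array with perms[start:end] without going out of
// bounds.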
// BumpBasicRoleVersions increments the role version for the given builtin basic roles (Viewer/Editor/Admin/Grafana Admin).
// Unknown role names are ignored.
func BumpBasicRoleVersions(sess *db.Session, basicRoles []string) error {
if len(basicRoles) == 0 {
return nil
}
defs := accesscontrol.BuildBasicRoleDefinitions()
uids := make([]any, 0, len(basicRoles))
for _, br := range basicRoles {
def, ok := defs[br]
if !ok {
continue
}
uids = append(uids, def.UID)
}
if len(uids) == 0 {
return nil
}
sql := "UPDATE role SET version = version + 1 WHERE org_id = ? AND uid IN (?" + strings.Repeat(",?", len(uids)-1) + ")"
_, err := sess.Exec(append([]any{sql, accesscontrol.GlobalOrgID}, uids...)...)
return err
}
// LoadBasicRoleSeedPermissions returns the current (builtin_role, action, scope) permissions granted to basic roles.
// It sets Origin to empty.
func LoadBasicRoleSeedPermissions(sess *db.Session) ([]accesscontrol.SeedPermission, error) {
rows := []accesscontrol.SeedPermission{}
err := sess.SQL(
`SELECT role.display_name AS builtin_role, p.action, p.scope, '' AS origin
FROM role INNER JOIN permission AS p ON p.role_id = role.id
WHERE role.org_id = ? AND role.name LIKE 'basic:%'`,
accesscontrol.GlobalOrgID,
).Find(&rows)
return rows, err
}
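// A returned row might look like (values illustrative):
//
//	accesscontrol.SeedPermission{BuiltInRole: "Viewer", Action: "dashboards:read", Scope: "dashboards:*", Origin: ""}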


@@ -15,7 +15,6 @@ import (
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/infra/serverlock"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/authz/zanzana"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/services/folder"
@@ -131,9 +130,6 @@ func (r *ZanzanaReconciler) Run(ctx context.Context) error {
// Reconcile schedules a job that will run and reconcile resources between
// legacy access control and zanzana.
func (r *ZanzanaReconciler) Reconcile(ctx context.Context) error {
// Ensure we don't reconcile an empty/partial RBAC state before OSS has seeded basic role permissions.
// This matters most during startup where fixed-role loading + basic-role permission refresh runs as another background service.
r.waitForBasicRolesSeeded(ctx)
r.reconcile(ctx)
// FIXME:
@@ -149,57 +145,6 @@ func (r *ZanzanaReconciler) Reconcile(ctx context.Context) error {
}
}
func (r *ZanzanaReconciler) hasBasicRolePermissions(ctx context.Context) bool {
var count int64
// Basic role permissions are stored on "basic:%" roles in the global org (0).
// In a fresh DB, this will be empty until fixed roles are registered and the basic role permission refresh runs.
type row struct {
Count int64 `xorm:"count"`
}
_ = r.store.WithDbSession(ctx, func(sess *db.Session) error {
var rr row
_, err := sess.SQL(
`SELECT COUNT(*) AS count
FROM role INNER JOIN permission AS p ON p.role_id = role.id
WHERE role.org_id = ? AND role.name LIKE ?`,
accesscontrol.GlobalOrgID,
accesscontrol.BasicRolePrefix+"%",
).Get(&rr)
if err != nil {
return err
}
count = rr.Count
return nil
})
return count > 0
}
func (r *ZanzanaReconciler) waitForBasicRolesSeeded(ctx context.Context) {
// Best-effort: don't block forever. If we can't observe basic roles, proceed anyway.
const (
maxWait = 15 * time.Second
interval = 1 * time.Second
)
deadline := time.NewTimer(maxWait)
defer deadline.Stop()
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
if r.hasBasicRolePermissions(ctx) {
return
}
select {
case <-ctx.Done():
return
case <-deadline.C:
return
case <-ticker.C:
}
}
}
func (r *ZanzanaReconciler) reconcile(ctx context.Context) {
run := func(ctx context.Context, namespace string) (ok bool) {
now := time.Now()


@@ -1,67 +0,0 @@
package dualwrite
import (
"context"
"testing"
"time"
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/services/accesscontrol"
)
func TestZanzanaReconciler_hasBasicRolePermissions(t *testing.T) {
env := setupTestEnv(t)
r := &ZanzanaReconciler{
store: env.db,
}
ctx := context.Background()
require.False(t, r.hasBasicRolePermissions(ctx))
err := env.db.WithDbSession(ctx, func(sess *db.Session) error {
now := time.Now()
_, err := sess.Exec(
`INSERT INTO role (org_id, uid, name, display_name, group_name, description, hidden, version, created, updated)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
accesscontrol.GlobalOrgID,
"basic_viewer_uid_test",
accesscontrol.BasicRolePrefix+"viewer",
"Viewer",
"Basic",
"Viewer role",
false,
1,
now,
now,
)
if err != nil {
return err
}
var roleID int64
if _, err := sess.SQL(`SELECT id FROM role WHERE org_id = ? AND uid = ?`, accesscontrol.GlobalOrgID, "basic_viewer_uid_test").Get(&roleID); err != nil {
return err
}
_, err = sess.Exec(
`INSERT INTO permission (role_id, action, scope, kind, attribute, identifier, created, updated)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
roleID,
"dashboards:read",
"dashboards:*",
"",
"",
"",
now,
now,
)
return err
})
require.NoError(t, err)
require.True(t, r.hasBasicRolePermissions(ctx))
}


@@ -1,7 +1,6 @@
package accesscontrol
import (
"context"
"encoding/json"
"errors"
"fmt"
@@ -595,18 +594,3 @@ type QueryWithOrg struct {
OrgId *int64 `json:"orgId"`
Global bool `json:"global"`
}
type SeedPermission struct {
BuiltInRole string `xorm:"builtin_role"`
Action string `xorm:"action"`
Scope string `xorm:"scope"`
Origin string `xorm:"origin"`
}
type RoleStore interface {
LoadRoles(ctx context.Context) (map[string]*RoleDTO, error)
SetRole(ctx context.Context, existingRole *RoleDTO, wantedRole RoleDTO) error
SetPermissions(ctx context.Context, existingRole *RoleDTO, wantedRole RoleDTO) error
CreateRole(ctx context.Context, role RoleDTO) error
DeleteRoles(ctx context.Context, roleUIDs []string) error
}


@@ -1,451 +0,0 @@
package seeding
import (
"context"
"fmt"
"regexp"
"slices"
"strings"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/accesscontrol/pluginutils"
"github.com/grafana/grafana/pkg/services/pluginsintegration/pluginaccesscontrol"
)
type Seeder struct {
log log.Logger
roleStore accesscontrol.RoleStore
backend SeedingBackend
builtinsPermissions map[accesscontrol.SeedPermission]struct{}
seededFixedRoles map[string]bool
seededPluginRoles map[string]bool
seededPlugins map[string]bool
hasSeededAlready bool
}
// SeedingBackend provides the seed-set specific operations needed to seed.
type SeedingBackend interface {
// LoadPrevious returns the currently stored permissions for previously seeded roles.
LoadPrevious(ctx context.Context) (map[accesscontrol.SeedPermission]struct{}, error)
// Apply updates the database to match the desired permissions.
Apply(ctx context.Context,
added, removed []accesscontrol.SeedPermission,
updated map[accesscontrol.SeedPermission]accesscontrol.SeedPermission,
) error
}
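// A minimal in-memory SeedingBackend sketch, e.g. for tests (hypothetical, not
// part of this change):
//
//	type fakeBackend struct {
//		stored map[accesscontrol.SeedPermission]struct{}
//	}
//
//	func (f *fakeBackend) LoadPrevious(_ context.Context) (map[accesscontrol.SeedPermission]struct{}, error) {
//		return f.stored, nil
//	}
//
//	func (f *fakeBackend) Apply(_ context.Context, added, removed []accesscontrol.SeedPermission, updated map[accesscontrol.SeedPermission]accesscontrol.SeedPermission) error {
//		for _, p := range removed {
//			delete(f.stored, p)
//		}
//		for _, p := range added {
//			f.stored[p] = struct{}{}
//		}
//		for old, next := range updated {
//			delete(f.stored, old)
//			f.stored[next] = struct{}{}
//		}
//		return nil
//	}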
func New(log log.Logger, roleStore accesscontrol.RoleStore, backend SeedingBackend) *Seeder {
return &Seeder{
log: log,
roleStore: roleStore,
backend: backend,
builtinsPermissions: map[accesscontrol.SeedPermission]struct{}{},
seededFixedRoles: map[string]bool{},
seededPluginRoles: map[string]bool{},
seededPlugins: map[string]bool{},
hasSeededAlready: false,
}
}
// SetDesiredPermissions replaces the in-memory desired permission set used by Seed().
func (s *Seeder) SetDesiredPermissions(desired map[accesscontrol.SeedPermission]struct{}) {
if desired == nil {
s.builtinsPermissions = map[accesscontrol.SeedPermission]struct{}{}
return
}
s.builtinsPermissions = desired
}
// Seed loads the previous and desired permission sets, diffs them (including scope updates),
// and applies the changes through the backend; the store-backed backend also bumps basic role versions.
func (s *Seeder) Seed(ctx context.Context) error {
previous, err := s.backend.LoadPrevious(ctx)
if err != nil {
return err
}
// Filter the stored seed set before diffing:
// - Do not remove plugin permissions when the plugin didn't register this run (Origin set but not in seededPlugins).
// - Preserve legacy plugin app access permissions in the persisted seed set (these are granted by default).
if len(previous) > 0 {
filtered := make(map[accesscontrol.SeedPermission]struct{}, len(previous))
for p := range previous {
if p.Action == pluginaccesscontrol.ActionAppAccess {
continue
}
if p.Origin != "" && !s.seededPlugins[p.Origin] {
continue
}
filtered[p] = struct{}{}
}
previous = filtered
}
added, removed, updated := s.permissionDiff(previous, s.builtinsPermissions)
if err := s.backend.Apply(ctx, added, removed, updated); err != nil {
return err
}
return nil
}
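// Filtering example (illustrative): with seededPlugins = {"other-app": true},
// a previous entry with Origin "missing-app" is dropped from the comparison
// set, so its permissions are left untouched rather than removed; an entry
// whose Action equals pluginaccesscontrol.ActionAppAccess is likewise skipped
// and therefore never deleted by the diff.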
// SeedRoles populates the database with the given roles and their assignments.
// It creates roles that do not exist and updates roles that have changed.
// Do not use for provisioning; validation is not enforced.
func (s *Seeder) SeedRoles(ctx context.Context, registrationList []accesscontrol.RoleRegistration) error {
roleMap, err := s.roleStore.LoadRoles(ctx)
if err != nil {
return err
}
missingRoles := make([]accesscontrol.RoleRegistration, 0, len(registrationList))
// Diff existing roles with the ones we want to seed.
// If a role is missing, we add it to the missingRoles list
for _, registration := range registrationList {
registration := registration
role, ok := roleMap[registration.Role.Name]
switch {
case registration.Role.IsFixed():
s.seededFixedRoles[registration.Role.Name] = true
case registration.Role.IsPlugin():
s.seededPluginRoles[registration.Role.Name] = true
// To be resilient to failed plugin loads, we remember the plugins that have registered;
// later we ignore permissions and roles of other plugins.
s.seededPlugins[pluginutils.PluginIDFromName(registration.Role.Name)] = true
}
s.rememberPermissionAssignments(&registration.Role, registration.Grants, registration.Exclude)
if !ok {
missingRoles = append(missingRoles, registration)
continue
}
if needsRoleUpdate(role, registration.Role) {
if err := s.roleStore.SetRole(ctx, role, registration.Role); err != nil {
return err
}
}
if needsPermissionsUpdate(role, registration.Role) {
if err := s.roleStore.SetPermissions(ctx, role, registration.Role); err != nil {
return err
}
}
}
for _, registration := range missingRoles {
if err := s.roleStore.CreateRole(ctx, registration.Role); err != nil {
return err
}
}
return nil
}
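// A registration passed to SeedRoles typically looks like (illustrative):
//
//	reg := accesscontrol.RoleRegistration{
//		Role: accesscontrol.RoleDTO{
//			Name:        "fixed:datasources:reader",
//			DisplayName: "Data source reader",
//			Permissions: []accesscontrol.Permission{{Action: "datasources:read", Scope: "datasources:*"}},
//		},
//		Grants: []string{"Admin"},
//	}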
func needsPermissionsUpdate(existingRole *accesscontrol.RoleDTO, wantedRole accesscontrol.RoleDTO) bool {
if existingRole == nil {
return true
}
if len(existingRole.Permissions) != len(wantedRole.Permissions) {
return true
}
for _, p := range wantedRole.Permissions {
found := false
for _, ep := range existingRole.Permissions {
if ep.Action == p.Action && ep.Scope == p.Scope {
found = true
break
}
}
if !found {
return true
}
}
return false
}
func needsRoleUpdate(existingRole *accesscontrol.RoleDTO, wantedRole accesscontrol.RoleDTO) bool {
if existingRole == nil {
return true
}
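// Roles are looked up by name, so a name mismatch means this registration targets a different role; leave it untouched here.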
if existingRole.Name != wantedRole.Name {
return false
}
if existingRole.DisplayName != wantedRole.DisplayName {
return true
}
if existingRole.Description != wantedRole.Description {
return true
}
if existingRole.Group != wantedRole.Group {
return true
}
if existingRole.Hidden != wantedRole.Hidden {
return true
}
return false
}
// Deprecated: SeedRole is deprecated and should not be used.
// SeedRoles only performs boot-time seeding and should not be used for runtime seeding.
func (s *Seeder) SeedRole(ctx context.Context, role accesscontrol.RoleDTO, builtInRoles []string) error {
addedPermissions := make(map[string]struct{}, len(role.Permissions))
permissions := make([]accesscontrol.Permission, 0, len(role.Permissions))
for _, p := range role.Permissions {
key := fmt.Sprintf("%s:%s", p.Action, p.Scope)
if _, ok := addedPermissions[key]; !ok {
addedPermissions[key] = struct{}{}
permissions = append(permissions, accesscontrol.Permission{Action: p.Action, Scope: p.Scope})
}
}
wantedRole := accesscontrol.RoleDTO{
OrgID: accesscontrol.GlobalOrgID,
Version: role.Version,
UID: role.UID,
Name: role.Name,
DisplayName: role.DisplayName,
Description: role.Description,
Group: role.Group,
Permissions: permissions,
Hidden: role.Hidden,
}
roleMap, err := s.roleStore.LoadRoles(ctx)
if err != nil {
return err
}
existingRole := roleMap[wantedRole.Name]
if existingRole == nil {
if err := s.roleStore.CreateRole(ctx, wantedRole); err != nil {
return err
}
} else {
if needsRoleUpdate(existingRole, wantedRole) {
if err := s.roleStore.SetRole(ctx, existingRole, wantedRole); err != nil {
return err
}
}
if needsPermissionsUpdate(existingRole, wantedRole) {
if err := s.roleStore.SetPermissions(ctx, existingRole, wantedRole); err != nil {
return err
}
}
}
// Remember seeded roles
if wantedRole.IsFixed() {
s.seededFixedRoles[wantedRole.Name] = true
}
isPluginRole := wantedRole.IsPlugin()
if isPluginRole {
s.seededPluginRoles[wantedRole.Name] = true
// To be resilient to failed plugin loads, we remember the plugins that have registered;
// later we ignore permissions and roles of other plugins.
s.seededPlugins[pluginutils.PluginIDFromName(role.Name)] = true
}
s.rememberPermissionAssignments(&wantedRole, builtInRoles, []string{})
return nil
}
func (s *Seeder) rememberPermissionAssignments(role *accesscontrol.RoleDTO, builtInRoles []string, excludedRoles []string) {
AppendDesiredPermissions(s.builtinsPermissions, s.log, role, builtInRoles, excludedRoles, true)
}
// AppendDesiredPermissions accumulates permissions from a role registration onto basic roles (Viewer/Editor/Admin/Grafana Admin).
// - It expands parents via accesscontrol.BuiltInRolesWithParents.
// - It can optionally ignore plugin app access permissions (which are granted by default).
func AppendDesiredPermissions(
out map[accesscontrol.SeedPermission]struct{},
logger log.Logger,
role *accesscontrol.RoleDTO,
builtInRoles []string,
excludedRoles []string,
ignorePluginAppAccess bool,
) {
if out == nil || role == nil {
return
}
for builtInRole := range accesscontrol.BuiltInRolesWithParents(builtInRoles) {
// Skip excluded grants
if slices.Contains(excludedRoles, builtInRole) {
continue
}
for _, perm := range role.Permissions {
if ignorePluginAppAccess && perm.Action == pluginaccesscontrol.ActionAppAccess {
logger.Debug("Role is attempting to grant access permission, but this permission is already granted by default and will be ignored",
"role", role.Name, "permission", perm.Action, "scope", perm.Scope)
continue
}
sp := accesscontrol.SeedPermission{
BuiltInRole: builtInRole,
Action: perm.Action,
Scope: perm.Scope,
}
if role.IsPlugin() {
sp.Origin = pluginutils.PluginIDFromName(role.Name)
}
out[sp] = struct{}{}
}
}
}
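// For example (illustrative), registering a role with Grants ["Viewer"]
// typically also yields seed permissions for the roles that inherit from it
// (e.g. Editor and Admin), as expanded by accesscontrol.BuiltInRolesWithParents;
// for a plugin role, each entry is additionally tagged with the plugin ID in Origin.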
// permissionDiff returns:
// - added: present in desired permissions, not in previous permissions
// - removed: present in previous permissions, not in desired permissions
// - updated: same role + action, but scope changed
func (s *Seeder) permissionDiff(previous, desired map[accesscontrol.SeedPermission]struct{}) (added, removed []accesscontrol.SeedPermission, updated map[accesscontrol.SeedPermission]accesscontrol.SeedPermission) {
addedSet := make(map[accesscontrol.SeedPermission]struct{}, 0)
for n := range desired {
if _, already := previous[n]; !already {
addedSet[n] = struct{}{}
} else {
delete(previous, n)
}
}
// Check if any of the new permissions is actually an old permission with an updated scope
updated = make(map[accesscontrol.SeedPermission]accesscontrol.SeedPermission, 0)
for n := range addedSet {
for p := range previous {
if n.BuiltInRole == p.BuiltInRole && n.Action == p.Action {
updated[p] = n
delete(addedSet, n)
}
}
}
for p := range addedSet {
added = append(added, p)
}
for p := range previous {
if p.Action == pluginaccesscontrol.ActionAppAccess &&
p.Scope != pluginaccesscontrol.ScopeProvider.GetResourceAllScope() {
// Preserves backward compatibility with plugins that were seeded before the grant-ignore rule was added
s.log.Info("This permission already existed so it will not be removed",
"role", p.BuiltInRole, "permission", p.Action, "scope", p.Scope)
continue
}
removed = append(removed, p)
}
return added, removed, updated
}
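// Worked example (illustrative): if previous contains
// {Viewer, dashboards:read, dashboards:uid:old} and desired contains
// {Viewer, dashboards:read, dashboards:uid:new}, the pair is reported via
// updated (old -> new) rather than as an unrelated add; the stale entry still
// appears in removed, so the backend deletes the old row and inserts the new one.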
func (s *Seeder) ClearBasicRolesPluginPermissions(ID string) {
removable := []accesscontrol.SeedPermission{}
for key := range s.builtinsPermissions {
if matchPermissionByPluginID(key, ID) {
removable = append(removable, key)
}
}
for _, perm := range removable {
delete(s.builtinsPermissions, perm)
}
}
func matchPermissionByPluginID(perm accesscontrol.SeedPermission, pluginID string) bool {
if perm.Origin != pluginID {
return false
}
// Quote the plugin ID so IDs containing regexp metacharacters cannot over-match.
actionTemplate := regexp.MustCompile(regexp.QuoteMeta(pluginID) + "[.:]")
scopeTemplate := fmt.Sprintf(":%s", pluginID)
return actionTemplate.MatchString(perm.Action) || strings.HasSuffix(perm.Scope, scopeTemplate)
}
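// For a plugin ID of "example-app" (illustrative), this matches actions such as
// "example-app.settings:read" or "example-app:access", as well as scopes ending
// in ":example-app".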
// RolesToUpgrade returns the unique basic roles that should have their version incremented.
func RolesToUpgrade(added, removed []accesscontrol.SeedPermission) []string {
set := map[string]struct{}{}
for _, p := range added {
set[p.BuiltInRole] = struct{}{}
}
for _, p := range removed {
set[p.BuiltInRole] = struct{}{}
}
out := make([]string, 0, len(set))
for r := range set {
out = append(out, r)
}
return out
}
func (s *Seeder) ClearPluginRoles(ID string) {
expectedPrefix := fmt.Sprintf("%s%s:", accesscontrol.PluginRolePrefix, ID)
for roleName := range s.seededPluginRoles {
if strings.HasPrefix(roleName, expectedPrefix) {
delete(s.seededPluginRoles, roleName)
}
}
}
func (s *Seeder) MarkSeededAlready() {
s.hasSeededAlready = true
}
func (s *Seeder) HasSeededAlready() bool {
return s.hasSeededAlready
}
func (s *Seeder) RemoveAbsentRoles(ctx context.Context) error {
roleMap, errGet := s.roleStore.LoadRoles(ctx)
if errGet != nil {
s.log.Error("failed to get fixed roles from store", "err", errGet)
return errGet
}
toRemove := []string{}
for _, r := range roleMap {
if r == nil {
continue
}
if r.IsFixed() {
if !s.seededFixedRoles[r.Name] {
s.log.Info("role is not seeded anymore, mark it for deletion", "role", r.Name)
toRemove = append(toRemove, r.UID)
}
continue
}
if r.IsPlugin() {
if !s.seededPlugins[pluginutils.PluginIDFromName(r.Name)] {
// To be resilient to failed plugin loads,
// ignore stored roles for plugins that have not registered this time.
s.log.Debug("plugin role has not been registered on this run, skipping its removal", "role", r.Name)
continue
}
if !s.seededPluginRoles[r.Name] {
s.log.Info("role is not seeded anymore, mark it for deletion", "role", r.Name)
toRemove = append(toRemove, r.UID)
}
}
}
if errDelete := s.roleStore.DeleteRoles(ctx, toRemove); errDelete != nil {
s.log.Error("failed to delete absent fixed and plugin roles", "err", errDelete)
return errDelete
}
return nil
}
