Mirror of https://github.com/grafana/grafana.git, synced 2026-01-15 13:48:14 +00:00.

Comparing branches `ash/react-...` and `ifrost/exp...` (21 commits):

- 71d12bef87
- de26897aa0
- e460571e6f
- 47fcc1d7e5
- db076f54ac
- 7ae2eed876
- cddc4776ef
- ec941b42ef
- 873d35b494
- d191425f3d
- 0a66aacfb3
- 9f2f93b401
- 9e399e0b19
- 2f520454ae
- 72f7bd3900
- ba416eab4e
- 189d50d815
- 450eaba447
- 87f5d5e741
- 5e68b07cac
- 99acd3766d
@@ -4,7 +4,8 @@ comments: |
This file is used in the following visualizations: candlestick, heatmap, state timeline, status history, time series.
---

You can zoom the panel time range in and out, which in turn, changes the dashboard time range.
You can pan the panel time range left and right, and zoom it in and out.
This, in turn, changes the dashboard time range.

**Zoom in** - Click and drag on the panel to zoom in on a particular time range.

@@ -16,4 +17,9 @@ For example, if the original time range is from 9:00 to 9:59, the time range cha
- Next range: 8:30 - 10:29
- Next range: 7:30 - 11:29

For screen recordings showing these interactions, refer to the [Panel overview documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/panel-overview/#zoom-panel-time-range).
**Pan** - Click and drag the x-axis area of the panel to pan the time range.

The time range shifts by the distance you drag.
For example, if the original time range is from 9:00 to 9:59 and you drag 30 minutes to the right, the time range changes to 9:30 to 10:29.

For screen recordings showing these interactions, refer to the [Panel overview documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/panel-overview/#pan-and-zoom-panel-time-range).
@@ -304,7 +304,8 @@ When things go bad, it often helps if you understand the context in which the fa

In the next part of the tutorial, we simulate some common use cases that someone would add annotations for.

1. To manually add an annotation, click anywhere in your graph, then click **Add annotation**.
1. To manually add an annotation, click anywhere on a graph line to open the data tooltip, then click **Add annotation**.
   You can also press `Ctrl` or `Command` and click anywhere in the graph to open the **Add annotation** dialog box.
   Note: you might need to save the dashboard first.
1. In **Description**, enter **Migrated user database**.
1. Click **Save**.
@@ -317,13 +317,16 @@ Click the **Copy time range to clipboard** icon to copy the current time range t

You can also copy and paste a time range using the keyboard shortcuts `t+c` and `t+v`, respectively.

#### Zoom out (Cmd+Z or Ctrl+Z)
#### Zoom out

Click the **Zoom out** icon to view a larger time range in the dashboard or panel visualization.
- Click the **Zoom out** icon to view a larger time range in the dashboard or panel visualizations
- Double-click on the panel graph area (time series family visualizations only)
- Type the `t-` keyboard shortcut

#### Zoom in (only applicable to graph visualizations)
#### Zoom in

Click and drag to select the time range in the visualization that you want to view.
- Click and drag horizontally in the panel graph area to select a time range (time series family visualizations only)
- Type the `t+` keyboard shortcut

#### Refresh dashboard
@@ -146,7 +146,7 @@ To create a variable, follow these steps:

- Variable drop-down lists are displayed in the order in which they're listed under **Variables** in the dashboard settings, so put the variables you'll change most often at the top so that they're shown first (far left on the dashboard).
- By default, variables don't have a default value. This means that the topmost value in the drop-down list is always preselected. If you want to pre-populate a variable with an empty value, you can use the following workaround in the variable settings:
  1. Select the **Include All Option** checkbox.
  2. In the **Custom all value** field, enter a value like `+`.
  2. In the **Custom all value** field, enter a value like `.+`.

## Add a query variable
@@ -175,9 +175,10 @@ By hovering over a panel with the mouse you can use some shortcuts that will tar
- `pl`: Hide or show legend
- `pr`: Remove Panel

## Zoom panel time range
## Pan and zoom panel time range

You can zoom the panel time range in and out, which in turn, changes the dashboard time range.
You can pan the panel time range left and right, and zoom it in and out.
This, in turn, changes the dashboard time range.

This feature is supported for the following visualizations:

@@ -191,7 +192,7 @@ This feature is supported for the following visualizations:

Click and drag on the panel to zoom in on a particular time range.

The following screen recordings show this interaction in the time series and x visualizations:
The following screen recordings show this interaction in the time series and candlestick visualizations:

Time series

@@ -211,7 +212,7 @@ For example, if the original time range is from 9:00 to 9:59, the time range cha
- Next range: 8:30 - 10:29
- Next range: 7:30 - 11:29

The following screen recordings demonstrate the preceding example in the time series and x visualizations:
The following screen recordings demonstrate the preceding example in the time series and heatmap visualizations:

Time series

@@ -221,6 +222,19 @@ Heatmap

{{< video-embed src="/media/docs/grafana/panels-visualizations/recording-heatmap-panel-time-zoom-out-mouse.mp4" >}}

### Pan

Click and drag the x-axis area of the panel to pan the time range.

The time range shifts by the distance you drag.
For example, if the original time range is from 9:00 to 9:59 and you drag 30 minutes to the right, the time range changes to 9:30 to 10:29.

The following screen recordings show this interaction in the time series visualization:

Time series

{{< video-embed src="/media/docs/grafana/panels-visualizations/recording-ts-time-pan-mouse.mp4" >}}

## Add a panel

To add a panel in a new dashboard, click **+ Add visualization** in the middle of the dashboard:
@@ -92,9 +92,9 @@ The data is converted as follows:

{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-candles-volume-v11.6.png" max-width="750px" alt="A candlestick visualization showing the price movements of specific asset." >}}

## Zoom panel time range
## Pan and zoom panel time range

{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}

## Configuration options
@@ -79,9 +79,9 @@ The data is converted as follows:

{{< figure src="/static/img/docs/heatmap-panel/heatmap.png" max-width="1025px" alt="A heatmap visualization showing the random walk distribution over time" >}}

## Zoom panel time range
## Pan and zoom panel time range

{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}

## Configuration options
@@ -93,9 +93,9 @@ You can also create a state timeline visualization using time series data. To do

## Zoom panel time range
## Pan and zoom panel time range

{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}

## Configuration options
@@ -85,9 +85,9 @@ The data is converted as follows:

{{< figure src="/static/img/docs/status-history-panel/status_history.png" max-width="1025px" alt="A status history panel with two time columns showing the status of two servers" >}}

## Zoom panel time range
## Pan and zoom panel time range

{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}

## Configuration options
@@ -167,9 +167,9 @@ The following example shows three series: Min, Max, and Value. The Min and Max s

{{< docs/shared lookup="visualizations/multiple-y-axes.md" source="grafana" version="<GRAFANA_VERSION>" leveloffset="+2" >}}

## Zoom panel time range
## Pan and zoom panel time range

{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}

## Configuration options
@@ -962,6 +962,10 @@ export interface FeatureToggles {
   */
  kubernetesAuthzCoreRolesApi?: boolean;
  /**
   * Registers AuthZ Global Roles /apis endpoint
   */
  kubernetesAuthzGlobalRolesApi?: boolean;
  /**
   * Registers AuthZ Roles /apis endpoint
   */
  kubernetesAuthzRolesApi?: boolean;
@@ -117,6 +117,44 @@ export const MyComponent = () => {
};
```

### Custom Header Rendering

Column headers can be customized using strings, React elements, or renderer functions. The `header` property accepts any value that matches React Table's `Renderer` type.

**Important:** When using custom header content, prefer inline elements (like `<span>`) over block elements (like `<div>`) to avoid layout issues. Block-level elements can cause extra spacing and alignment problems in table headers because they disrupt the table's inline flow. Use `display: inline-flex` or `display: inline-block` when you need flexbox or block-like behavior.

```tsx
const columns: Array<Column<TableData>> = [
  // React element header
  {
    id: 'checkbox',
    header: (
      <>
        <label htmlFor="select-all" className="sr-only">
          Select all rows
        </label>
        <Checkbox id="select-all" />
      </>
    ),
    cell: () => <Checkbox aria-label="Select row" />,
  },

  // Function renderer header
  {
    id: 'firstName',
    header: () => (
      <span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
        <Icon name="user" size="sm" />
        <span>First Name</span>
      </span>
    ),
  },

  // String header
  { id: 'lastName', header: 'Last name' },
];
```

### Custom Cell Rendering

Individual cells can be rendered with custom content by defining a `cell` property on the column definition.
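As a minimal standalone sketch of the idea (the `CellLikeProps` type and `formatAge` helper below are hypothetical stand-ins for illustration, not part of the InteractiveTable API):

```typescript
// Hypothetical sketch of a custom cell renderer. `CellLikeProps` stands in
// for the slice of React Table's CellProps that the renderer reads, and
// `formatAge` is an invented helper; neither is part of the real API.
interface CellLikeProps<V> {
  value: V;
}

const formatAge = (age: number): string => `${age} yrs`;

const ageColumn = {
  id: 'age',
  header: 'Age',
  // A `cell` renderer may return any ReactNode; a plain string works too.
  cell: (props: CellLikeProps<number>): string => formatAge(props.value),
};

console.log(ageColumn.cell({ value: 42 })); // "42 yrs"
```

Returning a string keeps the sketch framework-free; in real usage the renderer typically returns JSX, as the test file below shows.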
@@ -3,8 +3,11 @@ import { useCallback, useMemo, useState } from 'react';
import { CellProps } from 'react-table';

import { LinkButton } from '../Button/Button';
import { Checkbox } from '../Forms/Checkbox';
import { Field } from '../Forms/Field';
import { Icon } from '../Icon/Icon';
import { Input } from '../Input/Input';
import { Text } from '../Text/Text';

import { FetchDataArgs, InteractiveTable, InteractiveTableHeaderTooltip } from './InteractiveTable';
import mdx from './InteractiveTable.mdx';
@@ -297,4 +300,40 @@ export const WithControlledSort: StoryFn<typeof InteractiveTable> = (args) => {
  return <InteractiveTable {...args} data={data} pageSize={15} fetchData={fetchData} />;
};

export const WithCustomHeader: TableStoryObj = {
  args: {
    columns: [
      // React element header
      {
        id: 'checkbox',
        header: (
          <>
            <label htmlFor="select-all" className="sr-only">
              Select all rows
            </label>
            <Checkbox id="select-all" />
          </>
        ),
        cell: () => <Checkbox aria-label="Select row" />,
      },
      // Function renderer header
      {
        id: 'firstName',
        header: () => (
          <span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
            <Icon name="user" size="sm" />
            <Text element="span">First Name</Text>
          </span>
        ),
        sortType: 'string',
      },
      // String header
      { id: 'lastName', header: 'Last name', sortType: 'string' },
      { id: 'car', header: 'Car', sortType: 'string' },
      { id: 'age', header: 'Age', sortType: 'number' },
    ],
    data: pageableData.slice(0, 10),
    getRowId: (r) => r.id,
  },
};

export default meta;
@@ -2,6 +2,9 @@ import { render, screen, within } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import * as React from 'react';

import { Checkbox } from '../Forms/Checkbox';
import { Icon } from '../Icon/Icon';

import { InteractiveTable } from './InteractiveTable';
import { Column } from './types';

@@ -247,4 +250,104 @@ describe('InteractiveTable', () => {
      expect(fetchData).toHaveBeenCalledWith({ sortBy: [{ id: 'id', desc: false }] });
    });
  });

  describe('custom header rendering', () => {
    it('should render string headers', () => {
      const columns: Array<Column<TableData>> = [{ id: 'id', header: 'ID' }];
      const data: TableData[] = [{ id: '1', value: '1', country: 'Sweden' }];
      render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);

      expect(screen.getByRole('columnheader', { name: 'ID' })).toBeInTheDocument();
    });

    it('should render React element headers', () => {
      const columns: Array<Column<TableData>> = [
        {
          id: 'checkbox',
          header: (
            <>
              <label htmlFor="select-all" className="sr-only">
                Select all rows
              </label>
              <Checkbox id="select-all" data-testid="header-checkbox" />
            </>
          ),
          cell: () => <Checkbox data-testid="cell-checkbox" aria-label="Select row" />,
        },
      ];
      const data: TableData[] = [{ id: '1', value: '1', country: 'Sweden' }];
      render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);

      expect(screen.getByTestId('header-checkbox')).toBeInTheDocument();
      expect(screen.getByTestId('cell-checkbox')).toBeInTheDocument();
      expect(screen.getByLabelText('Select all rows')).toBeInTheDocument();
      expect(screen.getByLabelText('Select row')).toBeInTheDocument();
      expect(screen.getByText('Select all rows')).toBeInTheDocument();
    });

    it('should render function renderer headers', () => {
      const columns: Array<Column<TableData>> = [
        {
          id: 'firstName',
          header: () => (
            <span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
              <Icon name="user" size="sm" data-testid="header-icon" />
              <span>First Name</span>
            </span>
          ),
          sortType: 'string',
        },
      ];
      const data: TableData[] = [{ id: '1', value: '1', country: 'Sweden' }];
      render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);

      expect(screen.getByTestId('header-icon')).toBeInTheDocument();
      expect(screen.getByRole('columnheader', { name: /first name/i })).toBeInTheDocument();
    });

    it('should render all header types together', () => {
      const columns: Array<Column<TableData>> = [
        {
          id: 'checkbox',
          header: (
            <>
              <label htmlFor="select-all" className="sr-only">
                Select all rows
              </label>
              <Checkbox id="select-all" data-testid="header-checkbox" />
            </>
          ),
          cell: () => <Checkbox aria-label="Select row" />,
        },
        {
          id: 'id',
          header: () => (
            <span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
              <Icon name="user" size="sm" data-testid="header-icon" />
              <span>ID</span>
            </span>
          ),
          sortType: 'string',
        },
        { id: 'country', header: 'Country', sortType: 'string' },
        { id: 'value', header: 'Value' },
      ];
      const data: TableData[] = [
        { id: '1', value: 'Value 1', country: 'Sweden' },
        { id: '2', value: 'Value 2', country: 'Norway' },
      ];
      render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);

      expect(screen.getByTestId('header-checkbox')).toBeInTheDocument();
      expect(screen.getByTestId('header-icon')).toBeInTheDocument();
      expect(screen.getByRole('columnheader', { name: 'Country' })).toBeInTheDocument();
      expect(screen.getByRole('columnheader', { name: 'Value' })).toBeInTheDocument();

      // Verify data is rendered
      expect(screen.getByText('Sweden')).toBeInTheDocument();
      expect(screen.getByText('Norway')).toBeInTheDocument();
      expect(screen.getByText('Value 1')).toBeInTheDocument();
      expect(screen.getByText('Value 2')).toBeInTheDocument();
    });
  });
});
@@ -1,5 +1,5 @@
import { ReactNode } from 'react';
import { CellProps, DefaultSortTypes, IdType, SortByFn } from 'react-table';
import { CellProps, DefaultSortTypes, HeaderProps, IdType, Renderer, SortByFn } from 'react-table';

export interface Column<TableData extends object> {
  /**
@@ -11,9 +11,9 @@ export interface Column<TableData extends object> {
   */
  cell?: (props: CellProps<TableData>) => ReactNode;
  /**
   * Header name. if `undefined` the header will be empty. Useful for action columns.
   * Header name. Can be a string, renderer function, or undefined. If `undefined` the header will be empty. Useful for action columns.
   */
  header?: string;
  header?: Renderer<HeaderProps<TableData>>;
  /**
   * Column sort type. If `undefined` the column will not be sortable.
   * */
@@ -76,21 +76,27 @@ func (hs *HTTPServer) CreateDashboardSnapshot(c *contextmodel.ReqContext) {
		return
	}

	// Do not check permissions when the instance snapshot public mode is enabled
	if !hs.Cfg.SnapshotPublicMode {
		evaluator := ac.EvalAll(ac.EvalPermission(dashboards.ActionSnapshotsCreate), ac.EvalPermission(dashboards.ActionDashboardsRead, dashboards.ScopeDashboardsProvider.GetResourceScopeUID(cmd.Dashboard.GetNestedString("uid"))))
		if canSave, err := hs.AccessControl.Evaluate(c.Req.Context(), c.SignedInUser, evaluator); err != nil || !canSave {
			c.JsonApiErr(http.StatusForbidden, "forbidden", err)
			return
		}
	}

	dashboardsnapshots.CreateDashboardSnapshot(c, snapshot.SnapshotSharingOptions{
	cfg := snapshot.SnapshotSharingOptions{
		SnapshotsEnabled:     hs.Cfg.SnapshotEnabled,
		ExternalEnabled:      hs.Cfg.ExternalEnabled,
		ExternalSnapshotName: hs.Cfg.ExternalSnapshotName,
		ExternalSnapshotURL:  hs.Cfg.ExternalSnapshotUrl,
	}, cmd, hs.dashboardsnapshotsService)
	}

	if hs.Cfg.SnapshotPublicMode {
		// Public mode: no user or dashboard validation needed
		dashboardsnapshots.CreateDashboardSnapshotPublic(c, cfg, cmd, hs.dashboardsnapshotsService)
		return
	}

	// Regular mode: check permissions
	evaluator := ac.EvalAll(ac.EvalPermission(dashboards.ActionSnapshotsCreate), ac.EvalPermission(dashboards.ActionDashboardsRead, dashboards.ScopeDashboardsProvider.GetResourceScopeUID(cmd.Dashboard.GetNestedString("uid"))))
	if canSave, err := hs.AccessControl.Evaluate(c.Req.Context(), c.SignedInUser, evaluator); err != nil || !canSave {
		c.JsonApiErr(http.StatusForbidden, "forbidden", err)
		return
	}

	dashboardsnapshots.CreateDashboardSnapshot(c, cfg, cmd, hs.dashboardsnapshotsService)
}

// GET /api/snapshots/:key
@@ -213,13 +219,6 @@ func (hs *HTTPServer) DeleteDashboardSnapshot(c *contextmodel.ReqContext) respon
		return response.Error(http.StatusUnauthorized, "OrgID mismatch", nil)
	}

	if queryResult.External {
		err := dashboardsnapshots.DeleteExternalDashboardSnapshot(queryResult.ExternalDeleteURL)
		if err != nil {
			return response.Error(http.StatusInternalServerError, "Failed to delete external dashboard", err)
		}
	}

	// Dashboard can be empty (creation error or external snapshot). This means that the mustInt here returns a 0,
	// which before RBAC would result in a dashboard which has no ACL. A dashboard without an ACL would fallback
	// to the user’s org role, which for editors and admins would essentially always be allowed here. With RBAC,
@@ -239,6 +238,13 @@ func (hs *HTTPServer) DeleteDashboardSnapshot(c *contextmodel.ReqContext) respon
		}
	}

	if queryResult.External {
		err := dashboardsnapshots.DeleteExternalDashboardSnapshot(queryResult.ExternalDeleteURL)
		if err != nil {
			return response.Error(http.StatusInternalServerError, "Failed to delete external dashboard", err)
		}
	}

	cmd := &dashboardsnapshots.DeleteDashboardSnapshotCommand{DeleteKey: queryResult.DeleteKey}

	if err := hs.dashboardsnapshotsService.DeleteDashboardSnapshot(c.Req.Context(), cmd); err != nil {
@@ -32,6 +32,8 @@ import (
var (
	logger = glog.New("data-proxy-log")
	client = newHTTPClient()

	errPluginProxyRouteAccessDenied = errors.New("plugin proxy route access denied")
)

type DataSourceProxy struct {
@@ -308,12 +310,21 @@ func (proxy *DataSourceProxy) validateRequest() error {
			if err != nil {
				return err
			}
			// issues/116273: When we have an empty input route (or input that becomes relative to "."), we do not want it
			// to be ".". This is because the `CleanRelativePath` function will never return "./" prefixes, and as such,
			// the common prefix we need is an empty string.
			if r1 == "." && proxy.proxyPath != "." {
				r1 = ""
			}
			if r2 == "." && route.Path != "." {
				r2 = ""
			}
			if !strings.HasPrefix(r1, r2) {
				continue
			}

			if !proxy.hasAccessToRoute(route) {
				return errors.New("plugin proxy route access denied")
				return errPluginProxyRouteAccessDenied
			}

			proxy.matchedRoute = route
@@ -673,6 +673,94 @@ func TestIntegrationDataSourceProxy_routeRule(t *testing.T) {
			runDatasourceAuthTest(t, secretsService, secretsStore, cfg, test)
		}
	})

	t.Run("Regression of 116273: Fallback routes should apply fallback route roles", func(t *testing.T) {
		for _, tc := range []struct {
			InputPath         string
			ConfigurationPath string
			ExpectError       bool
		}{
			{
				InputPath:         "api/v2/leak-ur-secrets",
				ConfigurationPath: "",
				ExpectError:       true,
			},
			{
				InputPath:         "",
				ConfigurationPath: "",
				ExpectError:       true,
			},
			{
				InputPath:         ".",
				ConfigurationPath: ".",
				ExpectError:       true,
			},
			{
				InputPath:         "",
				ConfigurationPath: ".",
				ExpectError:       false,
			},
			{
				InputPath:         "api",
				ConfigurationPath: ".",
				ExpectError:       false,
			},
		} {
			orEmptyStr := func(s string) string {
				if s == "" {
					return "<empty>"
				}
				return s
			}
			t.Run(
				fmt.Sprintf("with inputPath=%s, configurationPath=%s, expectError=%v",
					orEmptyStr(tc.InputPath), orEmptyStr(tc.ConfigurationPath), tc.ExpectError),
				func(t *testing.T) {
					ds := &datasources.DataSource{
						UID:      "dsUID",
						JsonData: simplejson.New(),
					}
					routes := []*plugins.Route{
						{
							Path:    tc.ConfigurationPath,
							ReqRole: org.RoleAdmin,
							Method:  "GET",
						},
						{
							Path:    tc.ConfigurationPath,
							ReqRole: org.RoleAdmin,
							Method:  "POST",
						},
						{
							Path:    tc.ConfigurationPath,
							ReqRole: org.RoleAdmin,
							Method:  "PUT",
						},
						{
							Path:    tc.ConfigurationPath,
							ReqRole: org.RoleAdmin,
							Method:  "DELETE",
						},
					}

					req, err := http.NewRequestWithContext(t.Context(), "GET", "http://localhost/"+tc.InputPath, nil)
					require.NoError(t, err, "failed to create HTTP request")
					ctx := &contextmodel.ReqContext{
						Context:      &web.Context{Req: req},
						SignedInUser: &user.SignedInUser{OrgRole: org.RoleViewer},
					}
					proxy, err := setupDSProxyTest(t, ctx, ds, routes, tc.InputPath)
					require.NoError(t, err, "failed to setup proxy test")
					err = proxy.validateRequest()
					if tc.ExpectError {
						require.ErrorIs(t, err, errPluginProxyRouteAccessDenied, "request was not denied due to access denied?")
					} else {
						require.NoError(t, err, "request was unexpectedly denied access")
					}
				},
			)
		}
	})
}

// test DataSourceProxy request handling.
pkg/promlib/models/interval_test.go (new file, 602 lines)
@@ -0,0 +1,602 @@
package models

import (
	"context"
	"testing"
	"time"

	"github.com/grafana/grafana-plugin-sdk-go/backend"
	"github.com/stretchr/testify/require"
	"go.opentelemetry.io/otel"

	"github.com/grafana/grafana/pkg/promlib/intervalv2"
)

var (
	testNow                = time.Now()
	testIntervalCalculator = intervalv2.NewCalculator()
	testTracer             = otel.Tracer("test/interval")
)

func TestCalculatePrometheusInterval(t *testing.T) {
	_, span := testTracer.Start(context.Background(), "test")
	defer span.End()

	tests := []struct {
		name             string
		queryInterval    string
		dsScrapeInterval string
		intervalMs       int64
		intervalFactor   int64
		query            backend.DataQuery
		want             time.Duration
		wantErr          bool
	}{
		{
			name:             "min step 2m with 300000 intervalMs",
			queryInterval:    "2m",
			dsScrapeInterval: "",
			intervalMs:       300000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval:      5 * time.Minute,
				MaxDataPoints: 761,
			},
			want:    2 * time.Minute,
			wantErr: false,
		},
		{
			name:             "min step 2m with 900000 intervalMs",
			queryInterval:    "2m",
			dsScrapeInterval: "",
			intervalMs:       900000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval:      15 * time.Minute,
				MaxDataPoints: 175,
			},
			want:    2 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with step parameter",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(12 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    30 * time.Second,
			wantErr: false,
		},
		{
			name:             "without step parameter",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    15 * time.Second,
			wantErr: false,
		},
		{
			name:             "with high intervalFactor",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   10,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    20 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with low intervalFactor",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    2 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with specified scrape-interval in data source",
			queryInterval:    "",
			dsScrapeInterval: "240s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    4 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with zero intervalFactor defaults to 1",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   0,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    15 * time.Second,
			wantErr: false,
		},
		{
			name:             "with $__interval variable",
			queryInterval:    "$__interval",
			dsScrapeInterval: "15s",
			intervalMs:       60000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    120 * time.Second,
			wantErr: false,
		},
		{
			name:             "with ${__interval} variable",
			queryInterval:    "${__interval}",
			dsScrapeInterval: "15s",
			intervalMs:       60000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    120 * time.Second,
			wantErr: false,
		},
		{
			name:             "with ${__interval} variable and explicit interval",
			queryInterval:    "1m",
			dsScrapeInterval: "15s",
			intervalMs:       60000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    1 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with $__rate_interval variable",
			queryInterval:    "$__rate_interval",
			dsScrapeInterval: "30s",
			intervalMs:       100000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      100 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    130 * time.Second,
			wantErr: false,
		},
		{
			name:             "with ${__rate_interval} variable",
			queryInterval:    "${__rate_interval}",
			dsScrapeInterval: "30s",
			intervalMs:       100000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      100 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    130 * time.Second,
			wantErr: false,
		},
		{
			name:             "intervalMs 100s, minStep override 150s and scrape interval 30s",
			queryInterval:    "150s",
			dsScrapeInterval: "30s",
			intervalMs:       100000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      100 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    150 * time.Second,
			wantErr: false,
		},
		{
			name:             "intervalMs 120s, minStep override 150s and ds scrape interval 30s",
			queryInterval:    "150s",
			dsScrapeInterval: "30s",
			intervalMs:       120000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      120 * time.Second,
				MaxDataPoints: 12384
|
||||
},
|
||||
want: 150 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "intervalMs 120s, minStep auto (interval not overridden) and ds scrape interval 30s",
|
||||
queryInterval: "120s",
|
||||
dsScrapeInterval: "30s",
|
||||
intervalMs: 120000,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(2 * 24 * time.Hour),
|
||||
},
|
||||
Interval: 120 * time.Second,
|
||||
MaxDataPoints: 12384,
|
||||
},
|
||||
want: 120 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "interval and minStep are automatically calculated and ds scrape interval 30s and time range 1 hour",
|
||||
queryInterval: "30s",
|
||||
dsScrapeInterval: "30s",
|
||||
intervalMs: 30000,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(1 * time.Hour),
|
||||
},
|
||||
Interval: 30 * time.Second,
|
||||
MaxDataPoints: 12384,
|
||||
},
|
||||
want: 30 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "minStep is $__rate_interval and ds scrape interval 30s and time range 1 hour",
|
||||
queryInterval: "$__rate_interval",
|
||||
dsScrapeInterval: "30s",
|
||||
intervalMs: 30000,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(1 * time.Hour),
|
||||
},
|
||||
Interval: 30 * time.Second,
|
||||
MaxDataPoints: 12384,
|
||||
},
|
||||
want: 2 * time.Minute,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "minStep is $__rate_interval and ds scrape interval 30s and time range 2 days",
|
||||
queryInterval: "$__rate_interval",
|
||||
dsScrapeInterval: "30s",
|
||||
intervalMs: 120000,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(2 * 24 * time.Hour),
|
||||
},
|
||||
Interval: 120 * time.Second,
|
||||
MaxDataPoints: 12384,
|
||||
},
|
||||
want: 150 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "minStep is $__interval and ds scrape interval 15s and time range 2 days",
|
||||
queryInterval: "$__interval",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 120000,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(2 * 24 * time.Hour),
|
||||
},
|
||||
Interval: 120 * time.Second,
|
||||
MaxDataPoints: 12384,
|
||||
},
|
||||
want: 120 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "with empty dsScrapeInterval defaults to 15s",
|
||||
queryInterval: "",
|
||||
dsScrapeInterval: "",
|
||||
intervalMs: 0,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(1 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
},
|
||||
want: 15 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "with very short time range",
|
||||
queryInterval: "",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 0,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(1 * time.Minute),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
},
|
||||
want: 15 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "with very long time range",
|
||||
queryInterval: "",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 0,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(30 * 24 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
},
|
||||
want: 30 * time.Minute,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "with manual interval override",
|
||||
queryInterval: "5m",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 0,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(48 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
},
|
||||
want: 5 * time.Minute,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "minStep is auto and ds scrape interval 30s and time range 1 hour",
|
||||
queryInterval: "",
|
||||
dsScrapeInterval: "30s",
|
||||
intervalMs: 30000,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(1 * time.Hour),
|
||||
},
|
||||
Interval: 30 * time.Second,
|
||||
MaxDataPoints: 1613,
|
||||
},
|
||||
want: 30 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "minStep is auto and ds scrape interval 15s and time range 5 minutes",
|
||||
queryInterval: "",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 15000,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(5 * time.Minute),
|
||||
},
|
||||
Interval: 15 * time.Second,
|
||||
MaxDataPoints: 1055,
|
||||
},
|
||||
want: 15 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
// Additional test cases for better coverage
|
||||
{
|
||||
name: "with $__interval_ms variable",
|
||||
queryInterval: "$__interval_ms",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 60000,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(48 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
},
|
||||
want: 120 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "with ${__interval_ms} variable",
|
||||
queryInterval: "${__interval_ms}",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 60000,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(48 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
},
|
||||
want: 120 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "with MaxDataPoints zero",
|
||||
queryInterval: "",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 0,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(1 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
MaxDataPoints: 0,
|
||||
},
|
||||
want: 15 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "with negative intervalFactor",
|
||||
queryInterval: "",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 0,
|
||||
intervalFactor: -5,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(48 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
},
|
||||
want: -10 * time.Minute,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "with invalid interval string that fails parsing",
|
||||
queryInterval: "invalid-interval",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 0,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(48 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
},
|
||||
want: time.Duration(0),
|
||||
wantErr: true,
|
||||
},
|
||||
{
|
||||
name: "with very small MaxDataPoints",
|
||||
queryInterval: "",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 0,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(1 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
MaxDataPoints: 10,
|
||||
},
|
||||
want: 5 * time.Minute,
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "when safeInterval is larger than calculatedInterval",
|
||||
queryInterval: "",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 0,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(1 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
MaxDataPoints: 10000,
|
||||
},
|
||||
want: 15 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := calculatePrometheusInterval(
				tt.queryInterval,
				tt.dsScrapeInterval,
				tt.intervalMs,
				tt.intervalFactor,
				tt.query,
				testIntervalCalculator,
			)

			if tt.wantErr {
				require.Error(t, err)
				return
			}

			require.NoError(t, err)
			require.Equal(t, tt.want, got)
		})
	}
}

@@ -92,7 +92,6 @@ const (
 )
 
 // Internal interval and range variables with {} syntax
-// Repetitive code, we should have functionality to unify these
 const (
 	varIntervalAlt   = "${__interval}"
 	varIntervalMsAlt = "${__interval_ms}"
@@ -112,8 +111,16 @@ const (
 	UnknownQueryType TimeSeriesQueryType = "unknown"
 )
 
+// safeResolution is the maximum number of data points to prevent excessive resolution.
+// This ensures queries don't exceed reasonable data point limits, improving performance
+// and preventing potential memory issues. The value of 11000 provides a good balance
+// between resolution and performance for most use cases.
 var safeResolution = 11000
 
+// rateIntervalMultiplier is the minimum multiplier for rate interval calculation.
+// Rate intervals should be at least 4x the scrape interval to ensure accurate rate calculations.
+const rateIntervalMultiplier = 4
+
 // QueryModel includes both the common and specific values
 // NOTE: this struct may have issues when decoding JSON that requires the special handling
 // registered in https://github.com/grafana/grafana-plugin-sdk-go/blob/v0.228.0/experimental/apis/data/v0alpha1/query.go#L298
@@ -154,7 +161,7 @@ type Query struct {
 // may be either a string or DataSourceRef
 type internalQueryModel struct {
 	PrometheusQueryProperties `json:",inline"`
-	//sdkapi.CommonQueryProperties `json:",inline"`
+	// sdkapi.CommonQueryProperties `json:",inline"`
 	IntervalMS float64 `json:"intervalMs,omitempty"`
 
 	// The following properties may be part of the request payload, however they are not saved in panel JSON
@@ -272,44 +279,121 @@ func (query *Query) TimeRange() TimeRange {
 	}
 }
 
+// isRateIntervalVariable checks if the interval string is a rate interval variable
+// ($__rate_interval, ${__rate_interval}, $__rate_interval_ms, or ${__rate_interval_ms})
+func isRateIntervalVariable(interval string) bool {
+	return interval == varRateInterval ||
+		interval == varRateIntervalAlt ||
+		interval == varRateIntervalMs ||
+		interval == varRateIntervalMsAlt
+}
+
+// replaceVariable replaces both $__variable and ${__variable} formats in the expression
+func replaceVariable(expr, dollarFormat, altFormat, replacement string) string {
+	expr = strings.ReplaceAll(expr, dollarFormat, replacement)
+	expr = strings.ReplaceAll(expr, altFormat, replacement)
+	return expr
+}
+
+// isManualIntervalOverride checks if the interval is a manually specified non-variable value
+// that should override the calculated interval
+func isManualIntervalOverride(interval string) bool {
+	return interval != "" &&
+		interval != varInterval &&
+		interval != varIntervalAlt &&
+		interval != varIntervalMs &&
+		interval != varIntervalMsAlt
+}
+
+// maxDuration returns the maximum of two durations
+func maxDuration(a, b time.Duration) time.Duration {
+	if a > b {
+		return a
+	}
+	return b
+}
+
+// normalizeIntervalFactor ensures intervalFactor is at least 1
+func normalizeIntervalFactor(factor int64) int64 {
+	if factor == 0 {
+		return 1
+	}
+	return factor
+}
 
+// calculatePrometheusInterval calculates the optimal step interval for a Prometheus query.
+//
+// The function determines the query step interval by considering multiple factors:
+//   - The minimum step specified in the query (queryInterval)
+//   - The data source scrape interval (dsScrapeInterval)
+//   - The requested interval in milliseconds (intervalMs)
+//   - The time range and maximum data points from the query
+//   - The interval factor multiplier
+//
+// Special handling:
+//   - Variable intervals ($__interval, $__rate_interval, etc.) are replaced with calculated values
+//   - Rate interval variables ($__rate_interval, ${__rate_interval}) use calculateRateInterval for proper rate() function support
+//   - Manual interval overrides (non-variable strings) take precedence over calculated values
+//   - The final interval ensures safe resolution limits are not exceeded
+//
+// Parameters:
+//   - queryInterval: The minimum step interval string (may contain variables like $__interval or $__rate_interval)
+//   - dsScrapeInterval: The data source scrape interval (e.g., "15s", "30s")
+//   - intervalMs: The requested interval in milliseconds
+//   - intervalFactor: Multiplier for the calculated interval (defaults to 1 if 0)
+//   - query: The backend data query containing time range and max data points
+//   - intervalCalculator: Calculator for determining optimal intervals
+//
+// Returns:
+//   - The calculated step interval as a time.Duration
+//   - An error if the interval cannot be calculated (e.g., invalid interval string)
 func calculatePrometheusInterval(
 	queryInterval, dsScrapeInterval string,
 	intervalMs, intervalFactor int64,
 	query backend.DataQuery,
 	intervalCalculator intervalv2.Calculator,
 ) (time.Duration, error) {
-	// we need to compare the original query model after it is overwritten below to variables so that we can
-	// calculate the rateInterval if it is equal to $__rate_interval or ${__rate_interval}
+	// Preserve the original interval for later comparison, as it may be modified below
 	originalQueryInterval := queryInterval
 
-	// If we are using variable for interval/step, we will replace it with calculated interval
+	// If we are using a variable for minStep, replace it with empty string
+	// so that the interval calculation proceeds with the default logic
 	if isVariableInterval(queryInterval) {
 		queryInterval = ""
 	}
 
+	// Get the minimum interval from various sources (dsScrapeInterval, queryInterval, intervalMs)
 	minInterval, err := gtime.GetIntervalFrom(dsScrapeInterval, queryInterval, intervalMs, 15*time.Second)
 	if err != nil {
 		return time.Duration(0), err
 	}
 
+	// Calculate the optimal interval based on time range and max data points
 	calculatedInterval := intervalCalculator.Calculate(query.TimeRange, minInterval, query.MaxDataPoints)
+	// Calculate the safe interval to prevent too many data points
 	safeInterval := intervalCalculator.CalculateSafeInterval(query.TimeRange, int64(safeResolution))
 
-	adjustedInterval := safeInterval.Value
-	if calculatedInterval.Value > safeInterval.Value {
-		adjustedInterval = calculatedInterval.Value
-	}
+	// Use the larger of calculated or safe interval to ensure we don't exceed resolution limits
+	adjustedInterval := maxDuration(calculatedInterval.Value, safeInterval.Value)
 
-	// here is where we compare for $__rate_interval or ${__rate_interval}
-	if originalQueryInterval == varRateInterval || originalQueryInterval == varRateIntervalAlt {
+	// Handle rate interval variables: these require special calculation
+	if isRateIntervalVariable(originalQueryInterval) {
 		// Rate interval is final and is not affected by resolution
 		return calculateRateInterval(adjustedInterval, dsScrapeInterval), nil
-	} else {
-		queryIntervalFactor := intervalFactor
-		if queryIntervalFactor == 0 {
-			queryIntervalFactor = 1
-		}
-		return time.Duration(int64(adjustedInterval) * queryIntervalFactor), nil
 	}
 
+	// Handle manual interval override: if user specified a non-variable interval,
+	// it takes precedence over calculated values
+	if isManualIntervalOverride(originalQueryInterval) {
+		if parsedInterval, err := gtime.ParseIntervalStringToTimeDuration(originalQueryInterval); err == nil {
+			return parsedInterval, nil
+		}
+		// If parsing fails, fall through to calculated interval with factor
+	}
+
+	// Apply interval factor to the adjusted interval
+	normalizedFactor := normalizeIntervalFactor(intervalFactor)
+	return time.Duration(int64(adjustedInterval) * normalizedFactor), nil
 }
 
 // calculateRateInterval calculates the $__rate_interval value
@@ -331,7 +415,8 @@ func calculateRateInterval(
 		return time.Duration(0)
 	}
 
-	rateInterval := time.Duration(int64(math.Max(float64(queryInterval+scrapeIntervalDuration), float64(4)*float64(scrapeIntervalDuration))))
+	minRateInterval := rateIntervalMultiplier * scrapeIntervalDuration
+	rateInterval := maxDuration(queryInterval+scrapeIntervalDuration, minRateInterval)
 	return rateInterval
 }
 
@@ -366,34 +451,33 @@ func InterpolateVariables(
 		rateInterval = calculateRateInterval(queryInterval, requestedMinStep)
 	}
 
-	expr = strings.ReplaceAll(expr, varIntervalMs, strconv.FormatInt(int64(calculatedStep/time.Millisecond), 10))
-	expr = strings.ReplaceAll(expr, varInterval, gtime.FormatInterval(calculatedStep))
-	expr = strings.ReplaceAll(expr, varRangeMs, strconv.FormatInt(rangeMs, 10))
-	expr = strings.ReplaceAll(expr, varRangeS, strconv.FormatInt(rangeSRounded, 10))
-	expr = strings.ReplaceAll(expr, varRange, strconv.FormatInt(rangeSRounded, 10)+"s")
-	expr = strings.ReplaceAll(expr, varRateIntervalMs, strconv.FormatInt(int64(rateInterval/time.Millisecond), 10))
-	expr = strings.ReplaceAll(expr, varRateInterval, rateInterval.String())
+	// Replace interval variables (both $__var and ${__var} formats)
+	expr = replaceVariable(expr, varIntervalMs, varIntervalMsAlt, strconv.FormatInt(int64(calculatedStep/time.Millisecond), 10))
+	expr = replaceVariable(expr, varInterval, varIntervalAlt, gtime.FormatInterval(calculatedStep))
+
+	// Replace range variables (both $__var and ${__var} formats)
+	expr = replaceVariable(expr, varRangeMs, varRangeMsAlt, strconv.FormatInt(rangeMs, 10))
+	expr = replaceVariable(expr, varRangeS, varRangeSAlt, strconv.FormatInt(rangeSRounded, 10))
+	expr = replaceVariable(expr, varRange, varRangeAlt, strconv.FormatInt(rangeSRounded, 10)+"s")
+
+	// Replace rate interval variables (both $__var and ${__var} formats)
+	expr = replaceVariable(expr, varRateIntervalMs, varRateIntervalMsAlt, strconv.FormatInt(int64(rateInterval/time.Millisecond), 10))
+	expr = replaceVariable(expr, varRateInterval, varRateIntervalAlt, rateInterval.String())
-
-	// Repetitive code, we should have functionality to unify these
-	expr = strings.ReplaceAll(expr, varIntervalMsAlt, strconv.FormatInt(int64(calculatedStep/time.Millisecond), 10))
-	expr = strings.ReplaceAll(expr, varIntervalAlt, gtime.FormatInterval(calculatedStep))
-	expr = strings.ReplaceAll(expr, varRangeMsAlt, strconv.FormatInt(rangeMs, 10))
-	expr = strings.ReplaceAll(expr, varRangeSAlt, strconv.FormatInt(rangeSRounded, 10))
-	expr = strings.ReplaceAll(expr, varRangeAlt, strconv.FormatInt(rangeSRounded, 10)+"s")
-	expr = strings.ReplaceAll(expr, varRateIntervalMsAlt, strconv.FormatInt(int64(rateInterval/time.Millisecond), 10))
-	expr = strings.ReplaceAll(expr, varRateIntervalAlt, rateInterval.String())
 	return expr
 }
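The refactor above folds each `$__var` / `${__var}` pair into a single `replaceVariable` call. A minimal standalone version of that helper:

```go
package main

import (
	"fmt"
	"strings"
)

// replaceVariable substitutes both the $__name and ${__name} spellings of a
// template variable in one pass, as the refactored InterpolateVariables does.
func replaceVariable(expr, dollarFormat, altFormat, replacement string) string {
	expr = strings.ReplaceAll(expr, dollarFormat, replacement)
	return strings.ReplaceAll(expr, altFormat, replacement)
}

func main() {
	expr := `rate(x[$__rate_interval]) / rate(x[${__rate_interval}])`
	fmt.Println(replaceVariable(expr, "$__rate_interval", "${__rate_interval}", "2m30s"))
	// → rate(x[2m30s]) / rate(x[2m30s])
}
```

Note that the replacement order in the diff still matters: the `_ms` variants are substituted before the plain ones, since `$__rate_interval` is a prefix of `$__rate_interval_ms` and would otherwise consume it.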
 
 // isVariableInterval checks if the interval string is a variable interval
 // (any of $__interval, ${__interval}, $__interval_ms, ${__interval_ms}, $__rate_interval, ${__rate_interval}, etc.)
 func isVariableInterval(interval string) bool {
-	if interval == varInterval || interval == varIntervalMs || interval == varRateInterval || interval == varRateIntervalMs {
-		return true
-	}
-	// Repetitive code, we should have functionality to unify these
-	if interval == varIntervalAlt || interval == varIntervalMsAlt || interval == varRateIntervalAlt || interval == varRateIntervalMsAlt {
-		return true
-	}
-	return false
+	return interval == varInterval ||
+		interval == varIntervalAlt ||
+		interval == varIntervalMs ||
+		interval == varIntervalMsAlt ||
+		interval == varRateInterval ||
+		interval == varRateIntervalAlt ||
+		interval == varRateIntervalMs ||
+		interval == varRateIntervalMsAlt
 }
 
 // AlignTimeRange aligns query range to step and handles the time offset.
@@ -410,7 +494,7 @@ func AlignTimeRange(t time.Time, step time.Duration, offset int64) time.Time {
 //go:embed query.types.json
 var f embed.FS
 
-// QueryTypeDefinitionsJSON returns the query type definitions
+// QueryTypeDefinitionListJSON returns the query type definitions
 func QueryTypeDefinitionListJSON() (json.RawMessage, error) {
 	return f.ReadFile("query.types.json")
 }
 
@@ -2,7 +2,6 @@ package models_test
 
 import (
 	"context"
-	"fmt"
 	"reflect"
 	"testing"
 	"time"
@@ -14,6 +13,7 @@ import (
 	"go.opentelemetry.io/otel"
 
 	"github.com/grafana/grafana-plugin-sdk-go/backend/log"
 
+	"github.com/grafana/grafana/pkg/promlib/intervalv2"
 	"github.com/grafana/grafana/pkg/promlib/models"
 )
 
@@ -50,95 +50,6 @@ func TestParse(t *testing.T) {
 		require.Equal(t, false, res.ExemplarQuery)
 	})
 
-	t.Run("parsing query model with step", func(t *testing.T) {
-		timeRange := backend.TimeRange{
-			From: now,
-			To:   now.Add(12 * time.Hour),
-		}
-
-		q := queryContext(`{
-			"expr": "go_goroutines",
-			"format": "time_series",
-			"refId": "A"
-		}`, timeRange, time.Duration(1)*time.Minute)
-
-		res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
-		require.NoError(t, err)
-		require.Equal(t, time.Second*30, res.Step)
-	})
-
-	t.Run("parsing query model without step parameter", func(t *testing.T) {
-		timeRange := backend.TimeRange{
-			From: now,
-			To:   now.Add(1 * time.Hour),
-		}
-
-		q := queryContext(`{
-			"expr": "go_goroutines",
-			"format": "time_series",
-			"intervalFactor": 1,
-			"refId": "A"
-		}`, timeRange, time.Duration(1)*time.Minute)
-
-		res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
-		require.NoError(t, err)
-		require.Equal(t, time.Second*15, res.Step)
-	})
-
-	t.Run("parsing query model with high intervalFactor", func(t *testing.T) {
-		timeRange := backend.TimeRange{
-			From: now,
-			To:   now.Add(48 * time.Hour),
-		}
-
-		q := queryContext(`{
-			"expr": "go_goroutines",
-			"format": "time_series",
-			"intervalFactor": 10,
-			"refId": "A"
-		}`, timeRange, time.Duration(1)*time.Minute)
-
-		res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
-		require.NoError(t, err)
-		require.Equal(t, time.Minute*20, res.Step)
-	})
-
-	t.Run("parsing query model with low intervalFactor", func(t *testing.T) {
-		timeRange := backend.TimeRange{
-			From: now,
-			To:   now.Add(48 * time.Hour),
-		}
-
-		q := queryContext(`{
-			"expr": "go_goroutines",
-			"format": "time_series",
-			"intervalFactor": 1,
-			"refId": "A"
-		}`, timeRange, time.Duration(1)*time.Minute)
-
-		res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
-		require.NoError(t, err)
-		require.Equal(t, time.Minute*2, res.Step)
-	})
-
-	t.Run("parsing query model specified scrape-interval in the data source", func(t *testing.T) {
-		timeRange := backend.TimeRange{
-			From: now,
-			To:   now.Add(48 * time.Hour),
-		}
-
-		q := queryContext(`{
-			"expr": "go_goroutines",
-			"format": "time_series",
-			"intervalFactor": 1,
-			"refId": "A"
-		}`, timeRange, time.Duration(1)*time.Minute)
-
-		res, err := models.Parse(context.Background(), log.New(), span, q, "240s", intervalCalculator, false)
-		require.NoError(t, err)
-		require.Equal(t, time.Minute*4, res.Step)
-	})
-
 	t.Run("parsing query model with $__interval variable", func(t *testing.T) {
 		timeRange := backend.TimeRange{
 			From: now,
@@ -176,7 +87,7 @@ func TestParse(t *testing.T) {
 
 		res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
 		require.NoError(t, err)
-		require.Equal(t, "rate(ALERTS{job=\"test\" [2m]})", res.Expr)
+		require.Equal(t, "rate(ALERTS{job=\"test\" [1m]})", res.Expr)
 	})
 
 	t.Run("parsing query model with $__interval_ms variable", func(t *testing.T) {
|
||||
@@ -533,232 +444,6 @@ func TestParse(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestRateInterval(t *testing.T) {
|
||||
_, span := tracer.Start(context.Background(), "operation")
|
||||
defer span.End()
|
||||
type args struct {
|
||||
expr string
|
||||
interval string
|
||||
intervalMs int64
|
||||
dsScrapeInterval string
|
||||
timeRange *backend.TimeRange
|
||||
}
|
||||
tests := []struct {
|
||||
name string
|
||||
args args
|
||||
want *models.Query
|
||||
}{
|
||||
{
|
||||
name: "intervalMs 100s, minStep override 150s and scrape interval 30s",
|
||||
args: args{
|
||||
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
|
||||
interval: "150s",
|
||||
intervalMs: 100000,
|
||||
dsScrapeInterval: "30s",
|
||||
},
|
||||
want: &models.Query{
|
||||
Expr: "rate(rpc_durations_seconds_count[10m0s])",
|
||||
Step: time.Second * 150,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "intervalMs 120s, minStep override 150s and ds scrape interval 30s",
|
||||
args: args{
|
||||
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
|
||||
interval: "150s",
|
||||
intervalMs: 120000,
|
||||
dsScrapeInterval: "30s",
|
||||
},
|
||||
want: &models.Query{
|
||||
Expr: "rate(rpc_durations_seconds_count[10m0s])",
|
||||
Step: time.Second * 150,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "intervalMs 120s, minStep auto (interval not overridden) and ds scrape interval 30s",
|
||||
args: args{
|
||||
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
|
||||
interval: "120s",
|
||||
intervalMs: 120000,
|
||||
dsScrapeInterval: "30s",
|
||||
},
|
||||
want: &models.Query{
|
||||
Expr: "rate(rpc_durations_seconds_count[8m0s])",
|
||||
Step: time.Second * 120,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "interval and minStep are automatically calculated and ds scrape interval 30s and time range 1 hour",
|
||||
args: args{
|
||||
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
|
||||
interval: "30s",
|
||||
intervalMs: 30000,
|
||||
dsScrapeInterval: "30s",
|
||||
				timeRange: &backend.TimeRange{
					From: now,
					To:   now.Add(1 * time.Hour),
				},
			},
			want: &models.Query{
				Expr: "rate(rpc_durations_seconds_count[2m0s])",
				Step: time.Second * 30,
			},
		},
		{
			name: "minStep is $__rate_interval and ds scrape interval 30s and time range 1 hour",
			args: args{
				expr:             "rate(rpc_durations_seconds_count[$__rate_interval])",
				interval:         "$__rate_interval",
				intervalMs:       30000,
				dsScrapeInterval: "30s",
				timeRange: &backend.TimeRange{
					From: now,
					To:   now.Add(1 * time.Hour),
				},
			},
			want: &models.Query{
				Expr: "rate(rpc_durations_seconds_count[2m0s])",
				Step: time.Minute * 2,
			},
		},
		{
			name: "minStep is $__rate_interval and ds scrape interval 30s and time range 2 days",
			args: args{
				expr:             "rate(rpc_durations_seconds_count[$__rate_interval])",
				interval:         "$__rate_interval",
				intervalMs:       120000,
				dsScrapeInterval: "30s",
				timeRange: &backend.TimeRange{
					From: now,
					To:   now.Add(2 * 24 * time.Hour),
				},
			},
			want: &models.Query{
				Expr: "rate(rpc_durations_seconds_count[2m30s])",
				Step: time.Second * 150,
			},
		},
		{
			name: "minStep is $__rate_interval and ds scrape interval 15s and time range 2 days",
			args: args{
				expr:             "rate(rpc_durations_seconds_count[$__rate_interval])",
				interval:         "$__interval",
				intervalMs:       120000,
				dsScrapeInterval: "15s",
				timeRange: &backend.TimeRange{
					From: now,
					To:   now.Add(2 * 24 * time.Hour),
				},
			},
			want: &models.Query{
				Expr: "rate(rpc_durations_seconds_count[8m0s])",
				Step: time.Second * 120,
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			q := mockQuery(tt.args.expr, tt.args.interval, tt.args.intervalMs, tt.args.timeRange)
			q.MaxDataPoints = 12384
			res, err := models.Parse(context.Background(), log.New(), span, q, tt.args.dsScrapeInterval, intervalCalculator, false)
			require.NoError(t, err)
			require.Equal(t, tt.want.Expr, res.Expr)
			require.Equal(t, tt.want.Step, res.Step)
		})
	}

	t.Run("minStep is auto and ds scrape interval 30s and time range 1 hour", func(t *testing.T) {
		query := backend.DataQuery{
			RefID:         "G",
			QueryType:     "",
			MaxDataPoints: 1613,
			Interval:      30 * time.Second,
			TimeRange: backend.TimeRange{
				From: now,
				To:   now.Add(1 * time.Hour),
			},
			JSON: []byte(`{
				"datasource":{"type":"prometheus","uid":"zxS5e5W4k"},
				"datasourceId":38,
				"editorMode":"code",
				"exemplar":false,
				"expr":"sum(rate(process_cpu_seconds_total[$__rate_interval]))",
				"instant":false,
				"interval":"",
				"intervalMs":30000,
				"key":"Q-f96b6729-c47a-4ea8-8f71-a79774cf9bd5-0",
				"legendFormat":"__auto",
				"maxDataPoints":1613,
				"range":true,
				"refId":"G",
				"requestId":"1G",
				"utcOffsetSec":3600
			}`),
		}
		res, err := models.Parse(context.Background(), log.New(), span, query, "30s", intervalCalculator, false)
		require.NoError(t, err)
		require.Equal(t, "sum(rate(process_cpu_seconds_total[2m0s]))", res.Expr)
		require.Equal(t, 30*time.Second, res.Step)
	})

	t.Run("minStep is auto and ds scrape interval 15s and time range 5 minutes", func(t *testing.T) {
		query := backend.DataQuery{
			RefID:         "A",
			QueryType:     "",
			MaxDataPoints: 1055,
			Interval:      15 * time.Second,
			TimeRange: backend.TimeRange{
				From: now,
				To:   now.Add(5 * time.Minute),
			},
			JSON: []byte(`{
				"datasource": {
					"type": "prometheus",
					"uid": "2z9d6ElGk"
				},
				"editorMode": "code",
				"expr": "sum(rate(cache_requests_total[$__rate_interval]))",
				"legendFormat": "__auto",
				"range": true,
				"refId": "A",
				"exemplar": false,
				"requestId": "1A",
				"utcOffsetSec": 0,
				"interval": "",
				"datasourceId": 508,
				"intervalMs": 15000,
				"maxDataPoints": 1055
			}`),
		}
		res, err := models.Parse(context.Background(), log.New(), span, query, "15s", intervalCalculator, false)
		require.NoError(t, err)
		require.Equal(t, "sum(rate(cache_requests_total[1m0s]))", res.Expr)
		require.Equal(t, 15*time.Second, res.Step)
	})
}

func mockQuery(expr string, interval string, intervalMs int64, timeRange *backend.TimeRange) backend.DataQuery {
	if timeRange == nil {
		timeRange = &backend.TimeRange{
			From: now,
			To:   now.Add(1 * time.Hour),
		}
	}
	return backend.DataQuery{
		Interval: time.Duration(intervalMs) * time.Millisecond,
		JSON: []byte(fmt.Sprintf(`{
			"expr": "%s",
			"format": "time_series",
			"interval": "%s",
			"intervalMs": %v,
			"intervalFactor": 1,
			"refId": "A"
		}`, expr, interval, intervalMs)),
		TimeRange: *timeRange,
		RefID:     "A",
	}
}

func queryContext(json string, timeRange backend.TimeRange, queryInterval time.Duration) backend.DataQuery {
	return backend.DataQuery{
		Interval: queryInterval,
@@ -768,11 +453,6 @@ func queryContext(json string, timeRange backend.TimeRange, queryInterval time.D
	}
}

// AlignTimeRange aligns query range to step and handles the time offset.
// It rounds start and end down to a multiple of step.
// Prometheus caching is dependent on the range being aligned with the step.
// Rounding to the step can significantly change the start and end of the range for larger steps, i.e. a week.
// In rounding the range to a 1w step the range will always start on a Thursday.
func TestAlignTimeRange(t *testing.T) {
	type args struct {
		t time.Time
|
||||

@@ -381,6 +381,102 @@ func TestPrometheus_parseTimeSeriesResponse(t *testing.T) {
	})
}

func TestPrometheus_executedQueryString(t *testing.T) {
	t.Run("executedQueryString should match expected format with intervalMs 300_000", func(t *testing.T) {
		values := []p.SamplePair{
			{Value: 1, Timestamp: 1000},
			{Value: 2, Timestamp: 2000},
		}
		result := queryResult{
			Type: p.ValMatrix,
			Result: p.Matrix{
				&p.SampleStream{
					Metric: p.Metric{"app": "Application"},
					Values: values,
				},
			},
		}

		queryJSON := `{
			"expr": "test_metric",
			"format": "time_series",
			"intervalFactor": 1,
			"interval": "2m",
			"intervalMs": 300000,
			"maxDataPoints": 761,
			"refId": "A",
			"range": true
		}`

		now := time.Now()
		query := backend.DataQuery{
			RefID:         "A",
			MaxDataPoints: 761,
			Interval:      300000 * time.Millisecond,
			TimeRange: backend.TimeRange{
				From: now,
				To:   now.Add(48 * time.Hour),
			},
			JSON: []byte(queryJSON),
		}
		tctx, err := setup()
		require.NoError(t, err)
		res, err := execute(tctx, query, result, nil)
		require.NoError(t, err)

		require.Len(t, res, 1)
		require.NotNil(t, res[0].Meta)
		require.Equal(t, "Expr: test_metric\nStep: 2m0s", res[0].Meta.ExecutedQueryString)
	})

	t.Run("executedQueryString should match expected format with intervalMs 900_000", func(t *testing.T) {
		values := []p.SamplePair{
			{Value: 1, Timestamp: 1000},
			{Value: 2, Timestamp: 2000},
		}
		result := queryResult{
			Type: p.ValMatrix,
			Result: p.Matrix{
				&p.SampleStream{
					Metric: p.Metric{"app": "Application"},
					Values: values,
				},
			},
		}

		queryJSON := `{
			"expr": "test_metric",
			"format": "time_series",
			"intervalFactor": 1,
			"interval": "2m",
			"intervalMs": 900000,
			"maxDataPoints": 175,
			"refId": "A",
			"range": true
		}`

		now := time.Now()
		query := backend.DataQuery{
			RefID:         "A",
			MaxDataPoints: 175,
			Interval:      900000 * time.Millisecond,
			TimeRange: backend.TimeRange{
				From: now,
				To:   now.Add(48 * time.Hour),
			},
			JSON: []byte(queryJSON),
		}
		tctx, err := setup()
		require.NoError(t, err)
		res, err := execute(tctx, query, result, nil)
		require.NoError(t, err)

		require.Len(t, res, 1)
		require.NotNil(t, res[0].Meta)
		require.Equal(t, "Expr: test_metric\nStep: 2m0s", res[0].Meta.ExecutedQueryString)
	})
}

type queryResult struct {
	Type   p.ValueType `json:"resultType"`
	Result any         `json:"result"`

@@ -36,6 +36,9 @@ var client = &http.Client{
	Transport: &http.Transport{Proxy: http.ProxyFromEnvironment},
}

// CreateDashboardSnapshot creates a snapshot when running Grafana in regular mode.
// It validates the user and dashboard exist before creating the snapshot.
// This mode supports both local and external snapshots.
func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSharingOptions, cmd CreateDashboardSnapshotCommand, svc Service) {
	if !cfg.SnapshotsEnabled {
		c.JsonApiErr(http.StatusForbidden, "Dashboard Snapshots are disabled", nil)
@@ -43,6 +46,7 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
	}

	uid := cmd.Dashboard.GetNestedString("uid")

	user, err := identity.GetRequester(c.Req.Context())
	if err != nil {
		c.JsonApiErr(http.StatusBadRequest, "missing user in context", nil)
@@ -59,21 +63,18 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
		return
	}

	cmd.ExternalURL = ""
	cmd.OrgID = user.GetOrgID()
	cmd.UserID, _ = identity.UserIdentifier(user.GetID())

	if cmd.Name == "" {
		cmd.Name = "Unnamed snapshot"
	}

	var snapshotUrl string
	cmd.ExternalURL = ""
	cmd.OrgID = user.GetOrgID()
	cmd.UserID, _ = identity.UserIdentifier(user.GetID())
	originalDashboardURL, err := createOriginalDashboardURL(&cmd)
	if err != nil {
		c.JsonApiErr(http.StatusInternalServerError, "Invalid app URL", err)
		return
	}
	var snapshotURL string

	if cmd.External {
		// Handle external snapshot creation
		if !cfg.ExternalEnabled {
			c.JsonApiErr(http.StatusForbidden, "External dashboard creation is disabled", nil)
			return
@@ -85,40 +86,83 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
			return
		}

		snapshotUrl = resp.Url
		cmd.Key = resp.Key
		cmd.DeleteKey = resp.DeleteKey
		cmd.ExternalURL = resp.Url
		cmd.ExternalDeleteURL = resp.DeleteUrl
		cmd.Dashboard = &common.Unstructured{}
		snapshotURL = resp.Url

		metrics.MApiDashboardSnapshotExternal.Inc()
	} else {
		cmd.Dashboard.SetNestedField(originalDashboardURL, "snapshot", "originalUrl")

		if cmd.Key == "" {
			var err error
			cmd.Key, err = util.GetRandomString(32)
			if err != nil {
				c.JsonApiErr(http.StatusInternalServerError, "Could not generate random string", err)
				return
			}
		// Handle local snapshot creation
		originalDashboardURL, err := createOriginalDashboardURL(&cmd)
		if err != nil {
			c.JsonApiErr(http.StatusInternalServerError, "Invalid app URL", err)
			return
		}

		if cmd.DeleteKey == "" {
			var err error
			cmd.DeleteKey, err = util.GetRandomString(32)
			if err != nil {
				c.JsonApiErr(http.StatusInternalServerError, "Could not generate random string", err)
				return
			}
		snapshotURL, err = prepareLocalSnapshot(&cmd, originalDashboardURL)
		if err != nil {
			c.JsonApiErr(http.StatusInternalServerError, "Could not generate random string", err)
			return
		}

		snapshotUrl = setting.ToAbsUrl("dashboard/snapshot/" + cmd.Key)

		metrics.MApiDashboardSnapshotCreate.Inc()
	}

	saveAndRespond(c, svc, cmd, snapshotURL)
}

// CreateDashboardSnapshotPublic creates a snapshot when running Grafana in public mode.
// In public mode, there is no user or dashboard information to validate.
// Only local snapshots are supported (external snapshots are not available).
func CreateDashboardSnapshotPublic(c *contextmodel.ReqContext, cfg snapshot.SnapshotSharingOptions, cmd CreateDashboardSnapshotCommand, svc Service) {
	if !cfg.SnapshotsEnabled {
		c.JsonApiErr(http.StatusForbidden, "Dashboard Snapshots are disabled", nil)
		return
	}

	if cmd.Name == "" {
		cmd.Name = "Unnamed snapshot"
	}

	snapshotURL, err := prepareLocalSnapshot(&cmd, "")
	if err != nil {
		c.JsonApiErr(http.StatusInternalServerError, "Could not generate random string", err)
		return
	}

	metrics.MApiDashboardSnapshotCreate.Inc()

	saveAndRespond(c, svc, cmd, snapshotURL)
}

// prepareLocalSnapshot prepares the command for a local snapshot and returns the snapshot URL.
func prepareLocalSnapshot(cmd *CreateDashboardSnapshotCommand, originalDashboardURL string) (string, error) {
	cmd.Dashboard.SetNestedField(originalDashboardURL, "snapshot", "originalUrl")

	if cmd.Key == "" {
		key, err := util.GetRandomString(32)
		if err != nil {
			return "", err
		}
		cmd.Key = key
	}

	if cmd.DeleteKey == "" {
		deleteKey, err := util.GetRandomString(32)
		if err != nil {
			return "", err
		}
		cmd.DeleteKey = deleteKey
	}

	return setting.ToAbsUrl("dashboard/snapshot/" + cmd.Key), nil
}

// saveAndRespond saves the snapshot and sends the response.
func saveAndRespond(c *contextmodel.ReqContext, svc Service, cmd CreateDashboardSnapshotCommand, snapshotURL string) {
	result, err := svc.CreateDashboardSnapshot(c.Req.Context(), &cmd)
	if err != nil {
		c.JsonApiErr(http.StatusInternalServerError, "Failed to create snapshot", err)
@@ -128,7 +172,7 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
	c.JSON(http.StatusOK, snapshot.DashboardCreateResponse{
		Key:       result.Key,
		DeleteKey: result.DeleteKey,
		URL:       snapshotUrl,
		URL:       snapshotURL,
		DeleteURL: setting.ToAbsUrl("api/snapshots-delete/" + result.DeleteKey),
	})
}

@@ -20,40 +20,30 @@ import (
	"github.com/grafana/grafana/pkg/web"
)

func TestCreateDashboardSnapshot_DashboardNotFound(t *testing.T) {
	mockService := &MockService{}
	cfg := snapshot.SnapshotSharingOptions{
		SnapshotsEnabled: true,
		ExternalEnabled:  false,
func createTestDashboard(t *testing.T) *common.Unstructured {
	t.Helper()
	dashboard := &common.Unstructured{}
	dashboardData := map[string]any{
		"uid": "test-dashboard-uid",
		"id":  123,
	}
	testUser := &user.SignedInUser{
	dashboardBytes, _ := json.Marshal(dashboardData)
	_ = json.Unmarshal(dashboardBytes, dashboard)
	return dashboard
}

func createTestUser() *user.SignedInUser {
	return &user.SignedInUser{
		UserID: 1,
		OrgID:  1,
		Login:  "testuser",
		Name:   "Test User",
		Email:  "test@example.com",
	}
	dashboard := &common.Unstructured{}
	dashboardData := map[string]interface{}{
		"uid": "test-dashboard-uid",
		"id":  123,
	}
	dashboardBytes, _ := json.Marshal(dashboardData)
	_ = json.Unmarshal(dashboardBytes, dashboard)

	cmd := CreateDashboardSnapshotCommand{
		DashboardCreateCommand: snapshot.DashboardCreateCommand{
			Dashboard: dashboard,
			Name:      "Test Snapshot",
		},
	}

	mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
		Return(dashboards.ErrDashboardNotFound)

	req, _ := http.NewRequest("POST", "/api/snapshots", nil)
	req = req.WithContext(identity.WithRequester(req.Context(), testUser))
}

func createReqContext(t *testing.T, req *http.Request, testUser *user.SignedInUser) (*contextmodel.ReqContext, *httptest.ResponseRecorder) {
	t.Helper()
	recorder := httptest.NewRecorder()
	ctx := &contextmodel.ReqContext{
		Context: &web.Context{
@@ -63,13 +53,319 @@ func TestCreateDashboardSnapshot_DashboardNotFound(t *testing.T) {
		SignedInUser: testUser,
		Logger:       log.NewNopLogger(),
	}

	CreateDashboardSnapshot(ctx, cfg, cmd, mockService)

	mockService.AssertExpectations(t)
	assert.Equal(t, http.StatusBadRequest, recorder.Code)
	var response map[string]interface{}
	err := json.Unmarshal(recorder.Body.Bytes(), &response)
	require.NoError(t, err)
	assert.Equal(t, "Dashboard not found", response["message"])
	return ctx, recorder
}

// TestCreateDashboardSnapshot tests snapshot creation in regular mode (non-public instance).
// These tests cover scenarios when Grafana is running as a regular server with user authentication.
func TestCreateDashboardSnapshot(t *testing.T) {
	t.Run("should return error when dashboard not found", func(t *testing.T) {
		mockService := &MockService{}
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: true,
			ExternalEnabled:  false,
		}
		testUser := createTestUser()
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name:      "Test Snapshot",
			},
		}

		mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
			Return(dashboards.ErrDashboardNotFound)

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		req = req.WithContext(identity.WithRequester(req.Context(), testUser))
		ctx, recorder := createReqContext(t, req, testUser)

		CreateDashboardSnapshot(ctx, cfg, cmd, mockService)

		mockService.AssertExpectations(t)
		assert.Equal(t, http.StatusBadRequest, recorder.Code)
		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "Dashboard not found", response["message"])
	})

	t.Run("should create external snapshot when external is enabled", func(t *testing.T) {
		externalServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			assert.Equal(t, "/api/snapshots", r.URL.Path)
			assert.Equal(t, "POST", r.Method)

			response := map[string]any{
				"key":       "external-key",
				"deleteKey": "external-delete-key",
				"url":       "https://external.example.com/dashboard/snapshot/external-key",
				"deleteUrl": "https://external.example.com/api/snapshots-delete/external-delete-key",
			}
			w.Header().Set("Content-Type", "application/json")
			_ = json.NewEncoder(w).Encode(response)
		}))
		defer externalServer.Close()

		mockService := NewMockService(t)
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled:    true,
			ExternalEnabled:     true,
			ExternalSnapshotURL: externalServer.URL,
		}
		testUser := createTestUser()
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name:      "Test External Snapshot",
				External:  true,
			},
		}

		mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
			Return(nil)
		mockService.On("CreateDashboardSnapshot", mock.Anything, mock.Anything).
			Return(&DashboardSnapshot{
				Key:       "external-key",
				DeleteKey: "external-delete-key",
			}, nil)

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		req = req.WithContext(identity.WithRequester(req.Context(), testUser))
		ctx, recorder := createReqContext(t, req, testUser)

		CreateDashboardSnapshot(ctx, cfg, cmd, mockService)

		mockService.AssertExpectations(t)
		assert.Equal(t, http.StatusOK, recorder.Code)

		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "external-key", response["key"])
		assert.Equal(t, "external-delete-key", response["deleteKey"])
		assert.Equal(t, "https://external.example.com/dashboard/snapshot/external-key", response["url"])
	})

	t.Run("should return forbidden when external is disabled", func(t *testing.T) {
		mockService := NewMockService(t)
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: true,
			ExternalEnabled:  false,
		}
		testUser := createTestUser()
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name:      "Test External Snapshot",
				External:  true,
			},
		}

		mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
			Return(nil)

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		req = req.WithContext(identity.WithRequester(req.Context(), testUser))
		ctx, recorder := createReqContext(t, req, testUser)

		CreateDashboardSnapshot(ctx, cfg, cmd, mockService)

		mockService.AssertExpectations(t)
		assert.Equal(t, http.StatusForbidden, recorder.Code)

		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "External dashboard creation is disabled", response["message"])
	})

	t.Run("should create local snapshot", func(t *testing.T) {
		mockService := NewMockService(t)
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: true,
		}
		testUser := createTestUser()
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name:      "Test Local Snapshot",
			},
			Key:       "local-key",
			DeleteKey: "local-delete-key",
		}

		mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
			Return(nil)
		mockService.On("CreateDashboardSnapshot", mock.Anything, mock.Anything).
			Return(&DashboardSnapshot{
				Key:       "local-key",
				DeleteKey: "local-delete-key",
			}, nil)

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		req = req.WithContext(identity.WithRequester(req.Context(), testUser))
		ctx, recorder := createReqContext(t, req, testUser)

		CreateDashboardSnapshot(ctx, cfg, cmd, mockService)

		mockService.AssertExpectations(t)
		assert.Equal(t, http.StatusOK, recorder.Code)

		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "local-key", response["key"])
		assert.Equal(t, "local-delete-key", response["deleteKey"])
		assert.Contains(t, response["url"], "dashboard/snapshot/local-key")
		assert.Contains(t, response["deleteUrl"], "api/snapshots-delete/local-delete-key")
	})
}

// TestCreateDashboardSnapshotPublic tests snapshot creation in public mode.
// These tests cover scenarios when Grafana is running as a public snapshot server
// where no user authentication or dashboard validation is required.
func TestCreateDashboardSnapshotPublic(t *testing.T) {
	t.Run("should create local snapshot without user context", func(t *testing.T) {
		mockService := NewMockService(t)
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: true,
		}
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name:      "Test Snapshot",
			},
			Key:       "test-key",
			DeleteKey: "test-delete-key",
		}

		mockService.On("CreateDashboardSnapshot", mock.Anything, mock.Anything).
			Return(&DashboardSnapshot{
				Key:       "test-key",
				DeleteKey: "test-delete-key",
			}, nil)

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		recorder := httptest.NewRecorder()
		ctx := &contextmodel.ReqContext{
			Context: &web.Context{
				Req:  req,
				Resp: web.NewResponseWriter("POST", recorder),
			},
			Logger: log.NewNopLogger(),
		}

		CreateDashboardSnapshotPublic(ctx, cfg, cmd, mockService)

		mockService.AssertExpectations(t)
		assert.Equal(t, http.StatusOK, recorder.Code)

		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "test-key", response["key"])
		assert.Equal(t, "test-delete-key", response["deleteKey"])
		assert.Contains(t, response["url"], "dashboard/snapshot/test-key")
		assert.Contains(t, response["deleteUrl"], "api/snapshots-delete/test-delete-key")
	})

	t.Run("should return forbidden when snapshots are disabled", func(t *testing.T) {
		mockService := NewMockService(t)
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: false,
		}
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name:      "Test Snapshot",
			},
		}

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		recorder := httptest.NewRecorder()
		ctx := &contextmodel.ReqContext{
			Context: &web.Context{
				Req:  req,
				Resp: web.NewResponseWriter("POST", recorder),
			},
			Logger: log.NewNopLogger(),
		}

		CreateDashboardSnapshotPublic(ctx, cfg, cmd, mockService)

		assert.Equal(t, http.StatusForbidden, recorder.Code)

		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "Dashboard Snapshots are disabled", response["message"])
	})
}

// TestDeleteExternalDashboardSnapshot tests deletion of external snapshots.
// This function is called in public mode and doesn't require user context.
func TestDeleteExternalDashboardSnapshot(t *testing.T) {
	t.Run("should return nil on successful deletion", func(t *testing.T) {
		server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			assert.Equal(t, "GET", r.Method)
			w.WriteHeader(http.StatusOK)
		}))
		defer server.Close()

		err := DeleteExternalDashboardSnapshot(server.URL)
		assert.NoError(t, err)
	})

	t.Run("should gracefully handle already deleted snapshot", func(t *testing.T) {
		server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusInternalServerError)
			response := map[string]any{
				"message": "Failed to get dashboard snapshot",
			}
			_ = json.NewEncoder(w).Encode(response)
		}))
		defer server.Close()

		err := DeleteExternalDashboardSnapshot(server.URL)
		assert.NoError(t, err)
	})

	t.Run("should return error on unexpected status code", func(t *testing.T) {
		server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusNotFound)
		}))
		defer server.Close()

		err := DeleteExternalDashboardSnapshot(server.URL)
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "unexpected response when deleting external snapshot")
		assert.Contains(t, err.Error(), "404")
	})

	t.Run("should return error on 500 with different message", func(t *testing.T) {
		server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusInternalServerError)
			response := map[string]any{
				"message": "Some other error",
			}
			_ = json.NewEncoder(w).Encode(response)
		}))
		defer server.Close()

		err := DeleteExternalDashboardSnapshot(server.URL)
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "500")
	})
}

@@ -1590,6 +1590,13 @@ var (
		Owner:        identityAccessTeam,
		HideFromDocs: true,
	},
	{
		Name:         "kubernetesAuthzGlobalRolesApi",
		Description:  "Registers AuthZ Global Roles /apis endpoint",
		Stage:        FeatureStageExperimental,
		Owner:        identityAccessTeam,
		HideFromDocs: true,
	},
	{
		Name:        "kubernetesAuthzRolesApi",
		Description: "Registers AuthZ Roles /apis endpoint",

pkg/services/featuremgmt/toggles_gen.csv (generated, 1 change)
@@ -218,6 +218,7 @@ kubernetesAuthZHandlerRedirect,experimental,@grafana/identity-access-team,false,
kubernetesAuthzResourcePermissionApis,experimental,@grafana/identity-access-team,false,false,false
kubernetesAuthzZanzanaSync,experimental,@grafana/identity-access-team,false,false,false
kubernetesAuthzCoreRolesApi,experimental,@grafana/identity-access-team,false,false,false
kubernetesAuthzGlobalRolesApi,experimental,@grafana/identity-access-team,false,false,false
kubernetesAuthzRolesApi,experimental,@grafana/identity-access-team,false,false,false
kubernetesAuthzRoleBindingsApi,experimental,@grafana/identity-access-team,false,false,false
kubernetesAuthnMutation,experimental,@grafana/identity-access-team,false,false,false

pkg/services/featuremgmt/toggles_gen.go (generated, 4 changes)
@@ -646,6 +646,10 @@ const (
	// Registers AuthZ Core Roles /apis endpoint
	FlagKubernetesAuthzCoreRolesApi = "kubernetesAuthzCoreRolesApi"

	// FlagKubernetesAuthzGlobalRolesApi
	// Registers AuthZ Global Roles /apis endpoint
	FlagKubernetesAuthzGlobalRolesApi = "kubernetesAuthzGlobalRolesApi"

	// FlagKubernetesAuthzRolesApi
	// Registers AuthZ Roles /apis endpoint
	FlagKubernetesAuthzRolesApi = "kubernetesAuthzRolesApi"

pkg/services/featuremgmt/toggles_gen.json (generated, 13 changes)
@@ -2024,6 +2024,19 @@
      "hideFromDocs": true
    }
  },
  {
    "metadata": {
      "name": "kubernetesAuthzGlobalRolesApi",
      "resourceVersion": "1768463213468",
      "creationTimestamp": "2026-01-15T07:46:53Z"
    },
    "spec": {
      "description": "Registers AuthZ Global Roles /apis endpoint",
      "stage": "experimental",
      "codeowner": "@grafana/identity-access-team",
      "hideFromDocs": true
    }
  },
  {
    "metadata": {
      "name": "kubernetesAuthzResourcePermissionApis",

@@ -168,8 +168,21 @@ func (s *gPRCServerService) Run(ctx context.Context) error {
		return err
	case <-ctx.Done():
	}
	s.logger.Warn("GRPC server: shutting down")
	s.server.Stop()

	s.logger.Warn("GRPC server: initiating graceful shutdown")
	gracefulStopDone := make(chan struct{})
	go func() {
		s.server.GracefulStop()
		close(gracefulStopDone)
	}()

	select {
	case <-gracefulStopDone:
		s.logger.Info("GRPC server: graceful shutdown complete")
	case <-time.After(s.cfg.GracefulShutdownTimeout):
		s.logger.Warn("GRPC server: graceful shutdown timed out, forcing stop")
		s.server.Stop()
	}
	return ctx.Err()
}
|
||||
|
||||
@@ -13,13 +13,14 @@ import (
 )

 type GRPCServerSettings struct {
-	Enabled        bool
-	Network        string
-	Address        string      // with flags, call Process to fill this field defaults
-	TLSConfig      *tls.Config // with flags, call Process to fill this field
-	EnableLogging  bool        // log request and response of each unary gRPC call
-	MaxRecvMsgSize int
-	MaxSendMsgSize int
+	Enabled                 bool
+	Network                 string
+	Address                 string      // with flags, call Process to fill this field defaults
+	TLSConfig               *tls.Config // with flags, call Process to fill this field
+	EnableLogging           bool        // log request and response of each unary gRPC call
+	MaxRecvMsgSize          int
+	MaxSendMsgSize          int
+	GracefulShutdownTimeout time.Duration

 	MaxConnectionAge      time.Duration
 	MaxConnectionAgeGrace time.Duration
@@ -125,6 +126,7 @@ func readGRPCServerSettings(cfg *Cfg, iniFile *ini.File) error {
 	cfg.GRPCServer.EnableLogging = server.Key("enable_logging").MustBool(false)
 	cfg.GRPCServer.MaxRecvMsgSize = server.Key("max_recv_msg_size").MustInt(0)
 	cfg.GRPCServer.MaxSendMsgSize = server.Key("max_send_msg_size").MustInt(0)
+	cfg.GRPCServer.GracefulShutdownTimeout = server.Key("graceful_shutdown_timeout").MustDuration(10 * time.Second)

 	// Read connection management options from INI file
 	cfg.GRPCServer.MaxConnectionAge = server.Key("max_connection_age").MustDuration(0)
@@ -144,6 +146,7 @@ func (c *GRPCServerSettings) AddFlags(fs *pflag.FlagSet) {
 	fs.BoolVar(&c.EnableLogging, "grpc-server-enable-logging", false, "Enable logging of gRPC requests and responses")
 	fs.IntVar(&c.MaxRecvMsgSize, "grpc-server-max-recv-msg-size", 0, "Maximum size of a gRPC request message in bytes")
 	fs.IntVar(&c.MaxSendMsgSize, "grpc-server-max-send-msg-size", 0, "Maximum size of a gRPC response message in bytes")
+	fs.DurationVar(&c.GracefulShutdownTimeout, "grpc-server-graceful-shutdown-timeout", 10*time.Second, "Duration to wait for graceful gRPC server shutdown")

 	// Internal flags, we need to call ProcessTLSConfig
 	fs.BoolVar(&c.useTLS, "grpc-server-use-tls", false, "Enable TLS for the gRPC server")
@@ -14,6 +14,7 @@ import (
 	"github.com/grafana/grafana/pkg/apimachinery/validation"
 	"github.com/grafana/grafana/pkg/storage/unified/sql/db"
 	"github.com/grafana/grafana/pkg/storage/unified/sql/dbutil"
+	"github.com/grafana/grafana/pkg/storage/unified/sql/rvmanager"
 	"github.com/grafana/grafana/pkg/storage/unified/sql/sqltemplate"
 	gocache "github.com/patrickmn/go-cache"
 )
@@ -868,10 +869,18 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
 	if key.Action == DataActionDeleted {
 		generation = 0
 	}

+	// In compatibility mode, the previous RV, when available, is saved as a microsecond
+	// timestamp, as is done in the SQL backend.
+	previousRV := event.PreviousRV
+	if event.PreviousRV > 0 && isSnowflake(event.PreviousRV) {
+		previousRV = rvmanager.RVFromSnowflake(event.PreviousRV)
+	}
+
 	_, err := dbutil.Exec(ctx, tx, sqlKVUpdateLegacyResourceHistory, sqlKVLegacyUpdateHistoryRequest{
 		SQLTemplate: sqltemplate.New(kv.dialect),
 		GUID:        key.GUID,
-		PreviousRV:  event.PreviousRV,
+		PreviousRV:  previousRV,
 		Generation:  generation,
 	})
@@ -900,7 +909,7 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
 		Name:       key.Name,
 		Action:     action,
 		Folder:     key.Folder,
-		PreviousRV: event.PreviousRV,
+		PreviousRV: previousRV,
 	})

 	if err != nil {
@@ -916,7 +925,7 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
 		Name:       key.Name,
 		Action:     action,
 		Folder:     key.Folder,
-		PreviousRV: event.PreviousRV,
+		PreviousRV: previousRV,
 	})

 	if err != nil {
@@ -938,3 +947,15 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T

 	return nil
 }
+
+// isSnowflake returns whether the argument passed is a snowflake ID (new) or a microsecond timestamp (old).
+// We try to interpret the number as a microsecond timestamp first. If it represents a time in the past,
+// it is considered a microsecond timestamp. Snowflake IDs are much larger integers and would lead
+// to dates in the future if interpreted as a microsecond timestamp.
+func isSnowflake(rv int64) bool {
+	ts := time.UnixMicro(rv)
+	oneHourFromNow := time.Now().Add(time.Hour)
+	isMicroSecRV := ts.Before(oneHourFromNow)
+
+	return !isMicroSecRV
+}
@@ -19,13 +19,18 @@ const (
 	defaultBufferSize = 10000
 )

-type notifier struct {
+type notifier interface {
+	Watch(context.Context, watchOptions) <-chan Event
+}
+
+type pollingNotifier struct {
 	eventStore *eventStore
 	log        logging.Logger
 }

 type notifierOptions struct {
-	log logging.Logger
+	log                logging.Logger
+	useChannelNotifier bool
 }

 type watchOptions struct {
@@ -44,15 +49,26 @@ func defaultWatchOptions() watchOptions {
 	}
 }

-func newNotifier(eventStore *eventStore, opts notifierOptions) *notifier {
+func newNotifier(eventStore *eventStore, opts notifierOptions) notifier {
 	if opts.log == nil {
 		opts.log = &logging.NoOpLogger{}
 	}
-	return &notifier{eventStore: eventStore, log: opts.log}
+
+	if opts.useChannelNotifier {
+		return &channelNotifier{}
+	}
+
+	return &pollingNotifier{eventStore: eventStore, log: opts.log}
 }

+type channelNotifier struct{}
+
+func (cn *channelNotifier) Watch(ctx context.Context, opts watchOptions) <-chan Event {
+	return nil
+}
+
 // Return the last resource version from the event store
-func (n *notifier) lastEventResourceVersion(ctx context.Context) (int64, error) {
+func (n *pollingNotifier) lastEventResourceVersion(ctx context.Context) (int64, error) {
 	e, err := n.eventStore.LastEventKey(ctx)
 	if err != nil {
 		return 0, err
@@ -60,11 +76,11 @@ func (n *notifier) lastEventResourceVersion(ctx context.Context) (int64, error)
 	return e.ResourceVersion, nil
 }

-func (n *notifier) cacheKey(evt Event) string {
+func (n *pollingNotifier) cacheKey(evt Event) string {
 	return fmt.Sprintf("%s~%s~%s~%s~%d", evt.Namespace, evt.Group, evt.Resource, evt.Name, evt.ResourceVersion)
 }

-func (n *notifier) Watch(ctx context.Context, opts watchOptions) <-chan Event {
+func (n *pollingNotifier) Watch(ctx context.Context, opts watchOptions) <-chan Event {
 	if opts.MinBackoff <= 0 {
 		opts.MinBackoff = defaultMinBackoff
 	}
@@ -13,7 +13,7 @@ import (
 	"github.com/stretchr/testify/require"
 )

-func setupTestNotifier(t *testing.T) (*notifier, *eventStore) {
+func setupTestNotifier(t *testing.T) (*pollingNotifier, *eventStore) {
 	db := setupTestBadgerDB(t)
 	t.Cleanup(func() {
 		err := db.Close()
@@ -22,10 +22,10 @@ func setupTestNotifier(t *testing.T) (*notifier, *eventStore) {
 	kv := NewBadgerKV(db)
 	eventStore := newEventStore(kv)
 	notifier := newNotifier(eventStore, notifierOptions{log: &logging.NoOpLogger{}})
-	return notifier, eventStore
+	return notifier.(*pollingNotifier), eventStore
 }

-func setupTestNotifierSqlKv(t *testing.T) (*notifier, *eventStore) {
+func setupTestNotifierSqlKv(t *testing.T) (*pollingNotifier, *eventStore) {
 	dbstore := db.InitTestDB(t)
 	eDB, err := dbimpl.ProvideResourceDB(dbstore, setting.NewCfg(), nil)
 	require.NoError(t, err)
@@ -33,7 +33,7 @@ func setupTestNotifierSqlKv(t *testing.T) (*notifier, *eventStore) {
 	require.NoError(t, err)
 	eventStore := newEventStore(kv)
 	notifier := newNotifier(eventStore, notifierOptions{log: &logging.NoOpLogger{}})
-	return notifier, eventStore
+	return notifier.(*pollingNotifier), eventStore
 }

 func TestNewNotifier(t *testing.T) {
@@ -49,7 +49,7 @@ func TestDefaultWatchOptions(t *testing.T) {
 	assert.Equal(t, defaultBufferSize, opts.BufferSize)
 }

-func runNotifierTestWith(t *testing.T, storeName string, newStoreFn func(*testing.T) (*notifier, *eventStore), testFn func(*testing.T, context.Context, *notifier, *eventStore)) {
+func runNotifierTestWith(t *testing.T, storeName string, newStoreFn func(*testing.T) (*pollingNotifier, *eventStore), testFn func(*testing.T, context.Context, *pollingNotifier, *eventStore)) {
 	t.Run(storeName, func(t *testing.T) {
 		ctx := context.Background()
 		notifier, eventStore := newStoreFn(t)
@@ -62,7 +62,7 @@ func TestNotifier_lastEventResourceVersion(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierLastEventResourceVersion)
 }

-func testNotifierLastEventResourceVersion(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
+func testNotifierLastEventResourceVersion(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
 	// Test with no events
 	rv, err := notifier.lastEventResourceVersion(ctx)
 	assert.Error(t, err)
@@ -113,7 +113,7 @@ func TestNotifier_cachekey(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierCachekey)
 }

-func testNotifierCachekey(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
+func testNotifierCachekey(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
 	tests := []struct {
 		name  string
 		event Event
@@ -167,7 +167,7 @@ func TestNotifier_Watch_NoEvents(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchNoEvents)
 }

-func testNotifierWatchNoEvents(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
+func testNotifierWatchNoEvents(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
 	ctx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
 	defer cancel()

@@ -208,7 +208,7 @@ func TestNotifier_Watch_WithExistingEvents(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchWithExistingEvents)
 }

-func testNotifierWatchWithExistingEvents(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
+func testNotifierWatchWithExistingEvents(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
 	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
 	defer cancel()

@@ -282,7 +282,7 @@ func TestNotifier_Watch_EventDeduplication(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchEventDeduplication)
 }

-func testNotifierWatchEventDeduplication(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
+func testNotifierWatchEventDeduplication(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
 	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
 	defer cancel()

@@ -348,7 +348,7 @@ func TestNotifier_Watch_ContextCancellation(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchContextCancellation)
 }

-func testNotifierWatchContextCancellation(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
+func testNotifierWatchContextCancellation(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
 	ctx, cancel := context.WithCancel(ctx)

 	// Add an initial event so that lastEventResourceVersion doesn't return ErrNotFound
@@ -394,7 +394,7 @@ func TestNotifier_Watch_MultipleEvents(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchMultipleEvents)
 }

-func testNotifierWatchMultipleEvents(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
+func testNotifierWatchMultipleEvents(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
 	ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
 	defer cancel()
 	rv := time.Now().UnixNano()
@@ -456,33 +456,27 @@ func testNotifierWatchMultipleEvents(t *testing.T, ctx context.Context, notifier
 		},
 	}

+	errCh := make(chan error)
 	go func() {
 		for _, event := range testEvents {
-			err := eventStore.Save(ctx, event)
-			require.NoError(t, err)
+			errCh <- eventStore.Save(ctx, event)
 		}
 	}()

 	// Receive events
-	receivedEvents := make([]Event, 0, len(testEvents))
-	for i := 0; i < len(testEvents); i++ {
+	receivedEvents := make([]string, 0, len(testEvents))
+	for len(receivedEvents) != len(testEvents) {
 		select {
 		case event := <-events:
-			receivedEvents = append(receivedEvents, event)
+			receivedEvents = append(receivedEvents, event.Name)
+		case err := <-errCh:
+			require.NoError(t, err)
 		case <-time.After(1 * time.Second):
-			t.Fatalf("Timed out waiting for event %d", i+1)
+			t.Fatalf("Timed out waiting for event %d", len(receivedEvents)+1)
 		}
 	}

 	// Verify all events were received
 	assert.Len(t, receivedEvents, len(testEvents))

 	// Verify the events match and ordered by resource version
-	receivedNames := make([]string, len(receivedEvents))
-	for i, event := range receivedEvents {
-		receivedNames[i] = event.Name
-	}
-
 	expectedNames := []string{"test-resource-1", "test-resource-2", "test-resource-3"}
-	assert.ElementsMatch(t, expectedNames, receivedNames)
+	assert.ElementsMatch(t, expectedNames, receivedEvents)
 }
@@ -473,8 +473,6 @@ func (k *sqlKV) Delete(ctx context.Context, section string, key string) error {
 		return ErrNotFound
 	}

-	// TODO reflect change to resource table
-
 	return nil
 }
@@ -61,7 +61,7 @@ type kvStorageBackend struct {
 	bulkLock   *BulkLock
 	dataStore  *dataStore
 	eventStore *eventStore
-	notifier   *notifier
+	notifier   notifier
 	builder    DocumentBuilder
 	log        logging.Logger
 	withPruner bool
@@ -91,6 +91,7 @@ type KVBackendOptions struct {
 	Tracer trace.Tracer          // TODO add tracing
 	Reg    prometheus.Registerer // TODO add metrics

+	UseChannelNotifier bool
 	// Adding RvManager overrides the RV generated with snowflake in order to keep backwards compatibility with
 	// unified/sql
 	RvManager *rvmanager.ResourceVersionManager
@@ -121,7 +122,7 @@ func NewKVStorageBackend(opts KVBackendOptions) (KVBackend, error) {
 		bulkLock:   NewBulkLock(),
 		dataStore:  newDataStore(kv),
 		eventStore: eventStore,
-		notifier:   newNotifier(eventStore, notifierOptions{}),
+		notifier:   newNotifier(eventStore, notifierOptions{useChannelNotifier: opts.UseChannelNotifier}),
 		snowflake:  s,
 		builder:    StandardDocumentBuilder(), // For now we use the standard document builder.
 		log:        &logging.NoOpLogger{},     // Make this configurable
@@ -346,7 +347,7 @@ func (k *kvStorageBackend) WriteEvent(ctx context.Context, event WriteEvent) (in
 			return 0, fmt.Errorf("failed to write data: %w", err)
 		}

-		rv = rvmanager.SnowflakeFromRv(rv)
+		rv = rvmanager.SnowflakeFromRV(rv)
 		dataKey.ResourceVersion = rv
 	} else {
 		err := k.dataStore.Save(ctx, dataKey, bytes.NewReader(event.Value))
@@ -688,9 +689,6 @@ func validateListHistoryRequest(req *resourcepb.ListRequest) error {
 	if key.Namespace == "" {
 		return fmt.Errorf("namespace is required")
 	}
-	if key.Name == "" {
-		return fmt.Errorf("name is required")
-	}
 	return nil
 }
@@ -307,7 +307,7 @@ func (m *ResourceVersionManager) execBatch(ctx context.Context, group, resource
 	// Allocate the RVs
 	for i, guid := range guids {
 		guidToRV[guid] = rv
-		guidToSnowflakeRV[guid] = SnowflakeFromRv(rv)
+		guidToSnowflakeRV[guid] = SnowflakeFromRV(rv)
 		rvs[i] = rv
 		rv++
 	}
@@ -364,12 +364,20 @@ func (m *ResourceVersionManager) execBatch(ctx context.Context, group, resource
 	}
 }

-// takes a unix microsecond rv and transforms into a snowflake format. The timestamp is converted from microsecond to
+// takes a unix microsecond RV and transforms into a snowflake format. The timestamp is converted from microsecond to
 // millisecond (the integer division) and the remainder is saved in the stepbits section. machine id is always 0
-func SnowflakeFromRv(rv int64) int64 {
+func SnowflakeFromRV(rv int64) int64 {
 	return (((rv / 1000) - snowflake.Epoch) << (snowflake.NodeBits + snowflake.StepBits)) + (rv % 1000)
 }

+// It is generally not possible to convert from a snowflakeID to a microsecond RV due to the loss in precision
+// (snowflake ID stores timestamp in milliseconds). However, this implementation stores the microsecond fraction
+// in the step bits (see SnowflakeFromRV), allowing us to compute the microsecond timestamp.
+func RVFromSnowflake(snowflakeID int64) int64 {
+	microSecFraction := snowflakeID & ((1 << snowflake.StepBits) - 1)
+	return ((snowflakeID>>(snowflake.NodeBits+snowflake.StepBits))+snowflake.Epoch)*1000 + microSecFraction
+}
+
 // helper utility to compare two RVs. The first RV must be in snowflake format. Will convert rv2 to snowflake and retry
 // if comparison fails
 func IsRvEqual(rv1, rv2 int64) bool {
@@ -377,7 +385,7 @@ func IsRvEqual(rv1, rv2 int64) bool {
 		return true
 	}

-	return rv1 == SnowflakeFromRv(rv2)
+	return rv1 == SnowflakeFromRV(rv2)
 }

 // Lock locks the resource version for the given key
@@ -63,3 +63,13 @@ func TestResourceVersionManager(t *testing.T) {
 		require.Equal(t, rv, int64(200))
 	})
 }
+
+func TestSnowflakeFromRVRoundtrips(t *testing.T) {
+	// 2026-01-12 19:33:58.806211 +0000 UTC
+	offset := int64(1768246438806211) // in microseconds
+
+	for n := range int64(100) {
+		ts := offset + n
+		require.Equal(t, ts, RVFromSnowflake(SnowflakeFromRV(ts)))
+	}
+}
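Why the roundtrip test above can pass despite snowflake IDs carrying only millisecond timestamps: `SnowflakeFromRV` stashes the sub-millisecond remainder (0-999) in the step bits, so `RVFromSnowflake` can reassemble the exact microsecond value. A self-contained sketch of the same bit packing; the constants are assumed to be the defaults of `github.com/bwmarrin/snowflake` (millisecond epoch 1288834974657, 10 node bits, 12 step bits), which the real code reads from that package:

```go
package main

import "fmt"

// Assumed bwmarrin/snowflake defaults; the diff uses snowflake.Epoch,
// snowflake.NodeBits and snowflake.StepBits instead of these literals.
const (
	epoch    int64 = 1288834974657 // ms since Unix epoch (2010-11-04)
	nodeBits int64 = 10
	stepBits int64 = 12
)

// snowflakeFromRV packs a microsecond RV: the millisecond part goes into the
// snowflake timestamp field, and the remainder rv%1000 (< 2^12) into the
// step bits, so no precision is lost.
func snowflakeFromRV(rv int64) int64 {
	return ((rv/1000 - epoch) << (nodeBits + stepBits)) + rv%1000
}

// rvFromSnowflake unpacks both fields and reassembles the microsecond RV.
func rvFromSnowflake(id int64) int64 {
	microSecFraction := id & ((1 << stepBits) - 1)
	return ((id>>(nodeBits+stepBits))+epoch)*1000 + microSecFraction
}

func main() {
	rv := int64(1768246438806211) // 2026-01-12 19:33:58.806211 UTC, in microseconds
	id := snowflakeFromRV(rv)
	fmt.Println(rvFromSnowflake(id) == rv) // the roundtrip is lossless
}
```

The inversion works because the remainder never reaches 2^12, so the masked step bits recover it exactly, and the right shift by node+step bits recovers the millisecond part exactly.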
@@ -99,6 +99,9 @@ func NewResourceServer(opts ServerOptions) (resource.ResourceServer, error) {
 		return nil, err
 	}

+	isHA := isHighAvailabilityEnabled(opts.Cfg.SectionWithEnvOverrides("database"),
+		opts.Cfg.SectionWithEnvOverrides("resource_api"))
+
 	if opts.Cfg.EnableSQLKVBackend {
 		sqlkv, err := resource.NewSQLKV(eDB)
 		if err != nil {
@@ -106,9 +109,10 @@ func NewResourceServer(opts ServerOptions) (resource.ResourceServer, error) {
 		}

 		kvBackendOpts := resource.KVBackendOptions{
-			KvStore: sqlkv,
-			Tracer:  opts.Tracer,
-			Reg:     opts.Reg,
+			KvStore:            sqlkv,
+			Tracer:             opts.Tracer,
+			Reg:                opts.Reg,
+			UseChannelNotifier: !isHA,
 		}

 		ctx := context.Background()
@@ -140,9 +144,6 @@ func NewResourceServer(opts ServerOptions) (resource.ResourceServer, error) {
 		serverOptions.Backend = kvBackend
 		serverOptions.Diagnostics = kvBackend
 	} else {
-		isHA := isHighAvailabilityEnabled(opts.Cfg.SectionWithEnvOverrides("database"),
-			opts.Cfg.SectionWithEnvOverrides("resource_api"))
-
 		backend, err := NewBackend(BackendOptions{
 			DBProvider: eDB,
 			Reg:        opts.Reg,
@@ -23,6 +23,7 @@ import (
 	"github.com/grafana/authlib/types"

 	"github.com/grafana/grafana/pkg/apimachinery/utils"
+	"github.com/grafana/grafana/pkg/infra/db"
 	"github.com/grafana/grafana/pkg/storage/unified/resource"
 	"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
 	sqldb "github.com/grafana/grafana/pkg/storage/unified/sql/db"
@@ -99,6 +100,10 @@ func RunStorageBackendTest(t *testing.T, newBackend NewBackendFunc, opts *TestOp
 	}

 	t.Run(tc.name, func(t *testing.T) {
+		if db.IsTestDbSQLite() {
+			t.Skip("Skipping tests on sqlite until channel notifier is implemented")
+		}
+
 		tc.fn(t, newBackend(context.Background()), opts.NSPrefix)
 	})
 }
@@ -1166,7 +1171,7 @@ func runTestIntegrationBackendCreateNewResource(t *testing.T, backend resource.S
 	}))

 	server := newServer(t, backend)
-	ns := nsPrefix + "-create-resource"
+	ns := nsPrefix + "-create-rsrce" // create-resource
 	ctx = request.WithNamespace(ctx, ns)

 	request := &resourcepb.CreateRequest{
@@ -1607,7 +1612,7 @@ func (s *sliceBulkRequestIterator) RollbackRequested() bool {

 func runTestIntegrationBackendOptimisticLocking(t *testing.T, backend resource.StorageBackend, nsPrefix string) {
 	ctx := testutil.NewTestContext(t, time.Now().Add(30*time.Second))
-	ns := nsPrefix + "-optimistic-locking"
+	ns := nsPrefix + "-optimis-lock" // optimistic-locking. need to cut down on characters to not exceed namespace character limit (40)

 	t.Run("concurrent updates with same RV - only one succeeds", func(t *testing.T) {
 		// Create initial resource with rv0 (no previous RV)
@@ -36,6 +36,10 @@ func NewTestSqlKvBackend(t *testing.T, ctx context.Context, withRvManager bool)
 		KvStore: kv,
 	}

+	if db.DriverName() == "sqlite3" {
+		kvOpts.UseChannelNotifier = true
+	}
+
 	if withRvManager {
 		dialect := sqltemplate.DialectForDriver(db.DriverName())
 		rvManager, err := rvmanager.NewResourceVersionManager(rvmanager.ResourceManagerOptions{
@@ -200,7 +204,7 @@ func verifyKeyPath(t *testing.T, db sqldb.DB, ctx context.Context, key *resource
 	var keyPathRV int64
 	if isSqlBackend {
 		// Convert microsecond RV to snowflake for key_path construction
-		keyPathRV = rvmanager.SnowflakeFromRv(resourceVersion)
+		keyPathRV = rvmanager.SnowflakeFromRV(resourceVersion)
 	} else {
 		// KV backend already provides snowflake RV
 		keyPathRV = resourceVersion
@@ -434,9 +438,6 @@ func verifyResourceHistoryTable(t *testing.T, db sqldb.DB, namespace string, res

 	rows, err := db.QueryContext(ctx, query, namespace)
 	require.NoError(t, err)
-	defer func() {
-		_ = rows.Close()
-	}()

 	var records []ResourceHistoryRecord
 	for rows.Next() {
@@ -460,33 +461,34 @@ func verifyResourceHistoryTable(t *testing.T, db sqldb.DB, namespace string, res
 	for resourceIdx, res := range resources {
 		// Check create record (action=1, generation=1)
 		createRecord := records[recordIndex]
-		verifyResourceHistoryRecord(t, createRecord, res, resourceIdx, 1, 0, 1, resourceVersions[resourceIdx][0])
+		verifyResourceHistoryRecord(t, createRecord, namespace, res, resourceIdx, 1, 0, 1, resourceVersions[resourceIdx][0])
 		recordIndex++
 	}

 	for resourceIdx, res := range resources {
 		// Check update record (action=2, generation=2)
 		updateRecord := records[recordIndex]
-		verifyResourceHistoryRecord(t, updateRecord, res, resourceIdx, 2, resourceVersions[resourceIdx][0], 2, resourceVersions[resourceIdx][1])
+		verifyResourceHistoryRecord(t, updateRecord, namespace, res, resourceIdx, 2, resourceVersions[resourceIdx][0], 2, resourceVersions[resourceIdx][1])
 		recordIndex++
 	}

 	for resourceIdx, res := range resources[:2] {
 		// Check delete record (action=3, generation=0) - only first 2 resources were deleted
 		deleteRecord := records[recordIndex]
-		verifyResourceHistoryRecord(t, deleteRecord, res, resourceIdx, 3, resourceVersions[resourceIdx][1], 0, resourceVersions[resourceIdx][2])
+		verifyResourceHistoryRecord(t, deleteRecord, namespace, res, resourceIdx, 3, resourceVersions[resourceIdx][1], 0, resourceVersions[resourceIdx][2])
 		recordIndex++
 	}
 }
 // verifyResourceHistoryRecord validates a single resource_history record
-func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, expectedRes struct{ name, folder string }, resourceIdx, expectedAction int, expectedPrevRV int64, expectedGeneration int, expectedRV int64) {
+func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, namespace string, expectedRes struct{ name, folder string }, resourceIdx, expectedAction int, expectedPrevRV int64, expectedGeneration int, expectedRV int64) {
 	// Validate GUID (should be non-empty)
 	require.NotEmpty(t, record.GUID, "GUID should not be empty")

 	// Validate group/resource/namespace/name
 	require.Equal(t, "playlist.grafana.app", record.Group)
 	require.Equal(t, "playlists", record.Resource)
+	require.Equal(t, namespace, record.Namespace)
 	require.Equal(t, expectedRes.name, record.Name)

 	// Validate value contains expected JSON - server modifies/formats the JSON differently for different operations
@@ -513,8 +515,12 @@ func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, exp
 	// For KV backend operations, expectedPrevRV is now in snowflake format (returned by KV backend)
 	// but resource_history table stores microsecond RV, so we need to use IsRvEqual for comparison
 	if strings.Contains(record.Namespace, "-kv") {
-		require.True(t, rvmanager.IsRvEqual(expectedPrevRV, record.PreviousResourceVersion),
-			"Previous resource version should match (KV backend snowflake format)")
+		if expectedPrevRV == 0 {
+			require.Zero(t, record.PreviousResourceVersion)
+		} else {
+			require.Equal(t, expectedPrevRV, rvmanager.SnowflakeFromRV(record.PreviousResourceVersion),
+				"Previous resource version should match (KV backend snowflake format)")
+		}
 	} else {
 		require.Equal(t, expectedPrevRV, record.PreviousResourceVersion)
 	}
@@ -546,9 +552,6 @@ func verifyResourceTable(t *testing.T, db sqldb.DB, namespace string, resources

 	rows, err := db.QueryContext(ctx, query, namespace)
 	require.NoError(t, err)
-	defer func() {
-		_ = rows.Close()
-	}()

 	var records []ResourceRecord
 	for rows.Next() {
@@ -612,9 +615,6 @@ func verifyResourceVersionTable(t *testing.T, db sqldb.DB, namespace string, res
 	// Check that we have exactly one entry for playlist.grafana.app/playlists
 	rows, err := db.QueryContext(ctx, query, "playlist.grafana.app", "playlists")
 	require.NoError(t, err)
-	defer func() {
-		_ = rows.Close()
-	}()

 	var records []ResourceVersionRecord
 	for rows.Next() {
@@ -649,7 +649,7 @@ func verifyResourceVersionTable(t *testing.T, db sqldb.DB, namespace string, res
 	isKvBackend := strings.Contains(namespace, "-kv")
 	recordResourceVersion := record.ResourceVersion
 	if isKvBackend {
-		recordResourceVersion = rvmanager.SnowflakeFromRv(record.ResourceVersion)
+		recordResourceVersion = rvmanager.SnowflakeFromRV(record.ResourceVersion)
 	}

 	require.Less(t, recordResourceVersion, int64(9223372036854775807), "resource_version should be reasonable")
@@ -841,24 +841,20 @@ func runMixedConcurrentOperations(t *testing.T, sqlServer, kvServer resource.Res
 	}

 	// SQL backend operations
-	wg.Add(1)
-	go func() {
-		defer wg.Done()
+	wg.Go(func() {
 		<-startBarrier // Wait for signal to start
 		if err := runBackendOperationsWithCounts(ctx, sqlServer, namespace+"-sql", "sql", opCounts); err != nil {
 			errors <- fmt.Errorf("SQL backend operations failed: %w", err)
 		}
-	}()
+	})

 	// KV backend operations
-	wg.Add(1)
-	go func() {
-		defer wg.Done()
+	wg.Go(func() {
 		<-startBarrier // Wait for signal to start
 		if err := runBackendOperationsWithCounts(ctx, kvServer, namespace+"-kv", "kv", opCounts); err != nil {
 			errors <- fmt.Errorf("KV backend operations failed: %w", err)
 		}
-	}()
+	})

 	// Start both goroutines simultaneously
 	close(startBarrier)
@@ -8,6 +8,7 @@ import (
 	"github.com/stretchr/testify/require"

 	"github.com/grafana/grafana/pkg/storage/unified/resource"
+	"github.com/grafana/grafana/pkg/util/testutil"
 )

 func TestBadgerKVStorageBackend(t *testing.T) {
@@ -36,19 +37,13 @@ func TestBadgerKVStorageBackend(t *testing.T) {
 	})
 }

-func TestSQLKVStorageBackend(t *testing.T) {
+func TestIntegrationSQLKVStorageBackend(t *testing.T) {
+	testutil.SkipIntegrationTestInShortMode(t)
+
 	skipTests := map[string]bool{
 		TestWatchWriteEvents:          true,
 		TestList:                      true,
 		TestBlobSupport:               true,
 		TestGetResourceStats:          true,
 		TestListHistory:               true,
 		TestListHistoryErrorReporting: true,
 		TestListModifiedSince:         true,
 		TestListTrash:                 true,
 		TestCreateNewResource:         true,
 		TestGetResourceLastImportTime: true,
 		TestOptimisticLocking:         true,
 	}

 	t.Run("Without RvManager", func(t *testing.T) {
@@ -56,7 +51,7 @@ func TestSQLKVStorageBackend(t *testing.T) {
 			backend, _ := NewTestSqlKvBackend(t, ctx, false)
 			return backend
 		}, &TestOptions{
-			NSPrefix:  "sqlkvstorage-test",
+			NSPrefix:  "sqlkvstoragetest",
 			SkipTests: skipTests,
 		})
 	})
@@ -66,7 +61,7 @@ func TestSQLKVStorageBackend(t *testing.T) {
 			backend, _ := NewTestSqlKvBackend(t, ctx, true)
 			return backend
 		}, &TestOptions{
-			NSPrefix:  "sqlkvstorage-withrvmanager-test",
+			NSPrefix:  "sqlkvstoragetest-rvmanager",
 			SkipTests: skipTests,
 		})
 	})
@@ -10,10 +10,10 @@ import (

"github.com/grafana/alerting/notify"
"github.com/grafana/alerting/receivers/schema"
+ "github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"

"github.com/grafana/grafana/apps/alerting/notifications/pkg/apis/alertingnotifications/v0alpha1"
"github.com/grafana/grafana/pkg/services/featuremgmt"
@@ -21,7 +21,6 @@ import (
"github.com/grafana/grafana/pkg/services/ngalert/models"
"github.com/grafana/grafana/pkg/tests/api/alerting"
"github.com/grafana/grafana/pkg/tests/apis"
- test_common "github.com/grafana/grafana/pkg/tests/apis/alerting/notifications/common"
"github.com/grafana/grafana/pkg/tests/testinfra"
)

@@ -34,7 +33,8 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
},
})

- receiverClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
+ receiverClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+ require.NoError(t, err)

cliCfg := helper.Org1.Admin.NewRestConfig()
alertingApi := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
@@ -58,9 +58,9 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
response := alertingApi.ConvertPrometheusPostAlertmanagerConfig(t, amConfig, headers)
require.Equal(t, "success", response.Status)

- receiversRaw, err := receiverClient.Client.List(ctx, v1.ListOptions{})
+ receiversRaw, err := receiverClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
- raw, err := receiversRaw.MarshalJSON()
+ raw, err := json.Marshal(receiversRaw)
require.NoError(t, err)

expectedBytes, err := os.ReadFile(path.Join("test-data", "imported-expected-snapshot.json"))
@@ -74,7 +74,7 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
require.NoError(t, err)
}

- receivers, err := receiverClient.List(ctx, v1.ListOptions{})
+ receivers, err := receiverClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
t.Run("secure fields should be properly masked", func(t *testing.T) {
for _, receiver := range receivers.Items {
@@ -114,14 +114,14 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
toUpdate := receivers.Items[1]
toUpdate.Spec.Title = "another title"

- _, err = receiverClient.Update(ctx, &toUpdate, v1.UpdateOptions{})
+ _, err = receiverClient.Update(ctx, &toUpdate, resource.UpdateOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})

t.Run("should not be able to delete", func(t *testing.T) {
toDelete := receivers.Items[1]

- err = receiverClient.Delete(ctx, toDelete.Name, v1.DeleteOptions{})
+ err = receiverClient.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: toDelete.Name}, resource.DeleteOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
}

@@ -15,12 +15,12 @@ import (
"github.com/grafana/alerting/notify/notifytest"
"github.com/grafana/alerting/receivers/line"
"github.com/grafana/alerting/receivers/schema"
+ "github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"

"github.com/grafana/alerting/notify"

@@ -65,7 +65,8 @@ func TestIntegrationResourceIdentifier(t *testing.T) {

ctx := context.Background()
helper := getTestHelper(t)
- client := test_common.NewReceiverClient(t, helper.Org1.Admin)
+ client, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+ require.NoError(t, err)
newResource := &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -77,42 +78,42 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
}

t.Run("create should fail if object name is specified", func(t *testing.T) {
- resource := newResource.Copy().(*v0alpha1.Receiver)
- resource.Name = "new-receiver"
- _, err := client.Create(ctx, resource, v1.CreateOptions{})
+ receiver := newResource.Copy().(*v0alpha1.Receiver)
+ receiver.Name = "new-receiver"
+ _, err := client.Create(ctx, receiver, resource.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})

- var resourceID string
+ var resourceID resource.Identifier
t.Run("create should succeed and provide resource name", func(t *testing.T) {
- actual, err := client.Create(ctx, newResource, v1.CreateOptions{})
+ actual, err := client.Create(ctx, newResource, resource.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
- resourceID = actual.Name
+ resourceID = actual.GetStaticMetadata().Identifier()
})

t.Run("resource should be available by the identifier", func(t *testing.T) {
- actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
+ actual, err := client.Get(ctx, resourceID)
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.Equal(t, newResource.Spec, actual.Spec)
})

t.Run("update should rename receiver if name in the specification changes", func(t *testing.T) {
- existing, err := client.Get(ctx, resourceID, v1.GetOptions{})
+ existing, err := client.Get(ctx, resourceID)
require.NoError(t, err)

updated := existing.Copy().(*v0alpha1.Receiver)
updated.Spec.Title = "another-newReceiver"

- actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
+ actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, updated.Spec, actual.Spec)
require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")
require.NotEqualf(t, updated.ResourceVersion, actual.ResourceVersion, "Update should change the resource version but it didn't")

- resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
+ resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, actual.Spec, resource.Spec)
require.Equal(t, actual.Name, resource.Name)
@@ -140,7 +141,8 @@ func TestIntegrationResourcePermissions(t *testing.T) {
admin := org1.Admin
viewer := org1.Viewer
editor := org1.Editor
- adminClient := test_common.NewReceiverClient(t, admin)
+ adminClient, err := v0alpha1.NewReceiverClientFromGenerator(admin.GetClientRegistry())
+ require.NoError(t, err)

writeACMetadata := []string{"canWrite", "canDelete"}
allACMetadata := []string{"canWrite", "canDelete", "canReadSecrets", "canAdmin", "canModifyProtected"}
@@ -292,8 +294,10 @@ func TestIntegrationResourcePermissions(t *testing.T) {
},
} {
t.Run(tc.name, func(t *testing.T) {
- createClient := test_common.NewReceiverClient(t, tc.creatingUser)
- client := test_common.NewReceiverClient(t, tc.testUser)
+ createClient, err := v0alpha1.NewReceiverClientFromGenerator(tc.creatingUser.GetClientRegistry())
+ require.NoError(t, err)
+ client, err := v0alpha1.NewReceiverClientFromGenerator(tc.testUser.GetClientRegistry())
+ require.NoError(t, err)

var created = &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
@@ -308,12 +312,12 @@ func TestIntegrationResourcePermissions(t *testing.T) {
require.NoError(t, err)

// Create receiver with creatingUser
- created, err = createClient.Create(ctx, created, v1.CreateOptions{})
+ created, err = createClient.Create(ctx, created, resource.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, created)

defer func() {
- _ = adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
+ _ = adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
}()

// Assign resource permissions
@@ -338,7 +342,7 @@ func TestIntegrationResourcePermissions(t *testing.T) {

// Obtain expected responses using admin client as source of truth.
expectedGetWithMetadata, expectedListWithMetadata := func() (*v0alpha1.Receiver, *v0alpha1.Receiver) {
- expectedGet, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
+ expectedGet, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.NotNil(t, expectedGet)

@@ -352,7 +356,7 @@ func TestIntegrationResourcePermissions(t *testing.T) {
expectedGetWithMetadata.SetAccessControl(ac)
}

- expectedList, err := adminClient.List(ctx, v1.ListOptions{})
+ expectedList, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
expectedListWithMetadata := extractReceiverFromList(expectedList, created.Name)
require.NotNil(t, expectedListWithMetadata)
@@ -368,26 +372,26 @@ func TestIntegrationResourcePermissions(t *testing.T) {
}()

t.Run("should be able to list receivers", func(t *testing.T) {
- list, err := client.List(ctx, v1.ListOptions{})
+ list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
listedReceiver := extractReceiverFromList(list, created.Name)
assert.Equalf(t, expectedListWithMetadata, listedReceiver, "Expected %v but got %v", expectedListWithMetadata, listedReceiver)
})

t.Run("should be able to read receiver by resource identifier", func(t *testing.T) {
- got, err := client.Get(ctx, expectedGetWithMetadata.Name, v1.GetOptions{})
+ got, err := client.Get(ctx, expectedGetWithMetadata.GetStaticMetadata().Identifier())
require.NoError(t, err)
assert.Equalf(t, expectedGetWithMetadata, got, "Expected %v but got %v", expectedGetWithMetadata, got)
})
} else {
t.Run("list receivers should be empty", func(t *testing.T) {
- list, err := client.List(ctx, v1.ListOptions{})
+ list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Emptyf(t, list.Items, "Expected no receivers but got %v", list.Items)
})

t.Run("should be forbidden to read receiver by name", func(t *testing.T) {
- _, err := client.Get(ctx, created.Name, v1.GetOptions{})
+ _, err := client.Get(ctx, created.GetStaticMetadata().Identifier())
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -559,10 +563,12 @@ func TestIntegrationAccessControl(t *testing.T) {
},
}

- adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
+ adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+ require.NoError(t, err)
for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
- client := test_common.NewReceiverClient(t, tc.user)
+ client, err := v0alpha1.NewReceiverClientFromGenerator(tc.user.GetClientRegistry())
+ require.NoError(t, err)

var expected = &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
@@ -580,29 +586,29 @@ func TestIntegrationAccessControl(t *testing.T) {
newReceiver.Spec.Title = fmt.Sprintf("receiver-2-%s", tc.user.Identity.GetLogin())
if tc.canCreate {
t.Run("should be able to create receiver", func(t *testing.T) {
- actual, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
+ actual, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))

require.Equal(t, newReceiver.Spec, actual.Spec)

t.Run("should fail if already exists", func(t *testing.T) {
- _, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
+ _, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
require.Truef(t, errors.IsConflict(err), "expected bad request but got %s", err)
})

// Cleanup.
- require.NoError(t, adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{}))
+ require.NoError(t, adminClient.Delete(ctx, actual.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
})
} else {
t.Run("should be forbidden to create", func(t *testing.T) {
- _, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
+ _, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
})
}

// create resource to proceed with other tests. We don't use the one created above because the user will always
// have admin permissions on it.
- expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
+ expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, expected)

@@ -627,34 +633,34 @@ func TestIntegrationAccessControl(t *testing.T) {
expectedWithMetadata.SetAccessControl("canAdmin")
}
t.Run("should be able to list receivers", func(t *testing.T) {
- list, err := client.List(ctx, v1.ListOptions{})
+ list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 2) // default + created
})

t.Run("should be able to read receiver by resource identifier", func(t *testing.T) {
- got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
+ got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, expectedWithMetadata, got)

t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
- _, err := client.Get(ctx, "Notfound", v1.GetOptions{})
+ _, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("list receivers should be empty", func(t *testing.T) {
- list, err := client.List(ctx, v1.ListOptions{})
+ list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Emptyf(t, list.Items, "Expected no receivers but got %v", list.Items)
})

t.Run("should be forbidden to read receiver by name", func(t *testing.T) {
- _, err := client.Get(ctx, expected.Name, v1.GetOptions{})
+ _, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
- _, err := client.Get(ctx, "Notfound", v1.GetOptions{})
+ _, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
@@ -668,7 +674,7 @@ func TestIntegrationAccessControl(t *testing.T) {

if tc.canUpdate {
t.Run("should be able to update receiver", func(t *testing.T) {
- updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
+ updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))

expected = updated
@@ -676,7 +682,7 @@ func TestIntegrationAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.Receiver)
up.Name = "notFound"
- _, err := client.Update(ctx, up, v1.UpdateOptions{})
+ _, err := client.Update(ctx, up, resource.UpdateOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
@@ -686,7 +692,7 @@ func TestIntegrationAccessControl(t *testing.T) {
createIntegration(t, "webhook"),
}

- expected, err = adminClient.Update(ctx, updatedExpected, v1.UpdateOptions{})
+ expected, err = adminClient.Update(ctx, updatedExpected, resource.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, expected)

@@ -695,60 +701,62 @@ func TestIntegrationAccessControl(t *testing.T) {

if tc.canUpdateProtected {
t.Run("should be able to update protected fields of the receiver", func(t *testing.T) {
- updated, err := client.Update(ctx, updatedProtected, v1.UpdateOptions{})
+ updated, err := client.Update(ctx, updatedProtected, resource.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, updated)
expected = updated
})
} else {
t.Run("should be forbidden to edit protected fields of the receiver", func(t *testing.T) {
- _, err := client.Update(ctx, updatedProtected, v1.UpdateOptions{})
+ _, err := client.Update(ctx, updatedProtected, resource.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
} else {
t.Run("should be forbidden to update receiver", func(t *testing.T) {
- _, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
+ _, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.Receiver)
up.Name = "notFound"
- _, err := client.Update(ctx, up, v1.UpdateOptions{})
+ _, err := client.Update(ctx, up, resource.UpdateOptions{
+ ResourceVersion: up.ResourceVersion,
+ })
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.Falsef(t, tc.canUpdateProtected, "Invalid combination of assertions. CanUpdateProtected should be false")
}

- deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
+ deleteOptions := resource.DeleteOptions{Preconditions: resource.DeleteOptionsPreconditions{ResourceVersion: expected.ResourceVersion}}

if tc.canDelete {
t.Run("should be able to delete receiver", func(t *testing.T) {
- err := client.Delete(ctx, expected.Name, deleteOptions)
+ err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), deleteOptions)
require.NoError(t, err)

t.Run("should get NotFound if name does not exist", func(t *testing.T) {
- err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
+ err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to delete receiver", func(t *testing.T) {
- err := client.Delete(ctx, expected.Name, deleteOptions)
+ err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), deleteOptions)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
- err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
+ err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
- require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
+ require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
}

if tc.canRead {
t.Run("should get empty list if no receivers", func(t *testing.T) {
- list, err := client.List(ctx, v1.ListOptions{})
+ list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
})
@@ -766,7 +774,8 @@ func TestIntegrationInUseMetadata(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)

- adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
+ adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+ require.NoError(t, err)
// Prepare environment and create notification policy and rule that use receiver
alertmanagerRaw, err := testData.ReadFile(path.Join("test-data", "notification-settings.json"))
require.NoError(t, err)
@@ -813,7 +822,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {

requestReceivers := func(t *testing.T, title string) (v0alpha1.Receiver, v0alpha1.Receiver) {
t.Helper()
- receivers, err := adminClient.List(ctx, v1.ListOptions{})
+ receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, receivers.Items, 2)
idx := slices.IndexFunc(receivers.Items, func(interval v0alpha1.Receiver) bool {
@@ -821,7 +830,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
})
receiverListed := receivers.Items[idx]

- receiverGet, err := adminClient.Get(ctx, receiverListed.Name, v1.GetOptions{})
+ receiverGet, err := adminClient.Get(ctx, receiverListed.GetStaticMetadata().Identifier())
require.NoError(t, err)

return receiverListed, *receiverGet
@@ -846,8 +855,9 @@ func TestIntegrationInUseMetadata(t *testing.T) {
amConfig.AlertmanagerConfig.Route.Routes = amConfig.AlertmanagerConfig.Route.Routes[:1]
v1Route, err := routingtree.ConvertToK8sResource(helper.Org1.AdminServiceAccount.OrgId, *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
require.NoError(t, err)
- routeAdminClient := test_common.NewRoutingTreeClient(t, helper.Org1.Admin)
- _, err = routeAdminClient.Update(ctx, v1Route, v1.UpdateOptions{})
+ routeAdminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+ require.NoError(t, err)
+ _, err = routeAdminClient.Update(ctx, v1Route, resource.UpdateOptions{})
require.NoError(t, err)

receiverListed, receiverGet = requestReceivers(t, "user-defined")
@@ -868,7 +878,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
amConfig.AlertmanagerConfig.Route.Routes = nil
v1route, err := routingtree.ConvertToK8sResource(1, *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
require.NoError(t, err)
- _, err = routeAdminClient.Update(ctx, v1route, v1.UpdateOptions{})
+ _, err = routeAdminClient.Update(ctx, v1route, resource.UpdateOptions{})
require.NoError(t, err)

// Remove the remaining rules.
@@ -892,7 +902,8 @@ func TestIntegrationProvisioning(t *testing.T) {
org := helper.Org1

admin := org.Admin
- adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
+ adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+ require.NoError(t, err)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
db, err := store.ProvideDBStore(env.Cfg, env.FeatureToggles, env.SQLStore, &foldertest.FakeService{}, &dashboards.FakeDashboardService{}, ac, bus.ProvideBus(tracing.InitializeTracerForTest()))
@@ -908,7 +919,7 @@ func TestIntegrationProvisioning(t *testing.T) {
createIntegration(t, "email"),
},
},
- }, v1.CreateOptions{})
+ }, resource.CreateOptions{})
require.NoError(t, err)
require.Equal(t, "none", created.GetProvenanceStatus())

@@ -917,23 +928,23 @@ func TestIntegrationProvisioning(t *testing.T) {
UID: *created.Spec.Integrations[0].Uid,
}, admin.Identity.GetOrgID(), "API"))

- got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
+ got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})

t.Run("should not let update if provisioned", func(t *testing.T) {
- got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
+ got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
updated := got.Copy().(*v0alpha1.Receiver)
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "email"))

- _, err = adminClient.Update(ctx, updated, v1.UpdateOptions{})
+ _, err = adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})

t.Run("should not let delete if provisioned", func(t *testing.T) {
- err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
+ err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -944,7 +955,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)

- adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
+ adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+ require.NoError(t, err)
+ oldClient := test_common.NewReceiverClient(t, helper.Org1.Admin) // TODO replace with regular client once Delete works

receiver := v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -955,21 +969,22 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
},
}

- created, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
+ created, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, created)
require.NotEmpty(t, created.ResourceVersion)

- t.Run("should forbid if version does not match", func(t *testing.T) {
+ t.Run("should conflict if version does not match", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
updated.ResourceVersion = "test"
- _, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
+ _, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
+ ResourceVersion: "test",
+ })
require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "email"))
- actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
+ actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
for i, integration := range actualUpdated.Spec.Integrations {
updated.Spec.Integrations[i].Uid = integration.Uid
@@ -981,25 +996,25 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
updated.ResourceVersion = ""
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "webhook"))
- _, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
+ _, err := oldClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err) // TODO Change that? K8s returns 400 instead.
})
t.Run("should fail to delete if version does not match", func(t *testing.T) {
- actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
+ actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)

- err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
+ err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer("something"),
},
})
- require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
+ require.Truef(t, errors.IsConflict(err), "should get conflict error but got %s", err)
})
t.Run("should succeed if version matches", func(t *testing.T) {
- actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
+ actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)

- err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
+ err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -1007,10 +1022,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
require.NoError(t, err)
})
t.Run("should succeed if version is empty", func(t *testing.T) {
- actual, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
+ actual, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
require.NoError(t, err)

- err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
+ err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -1025,7 +1040,8 @@ func TestIntegrationPatch(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)

- adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
+ adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+ require.NoError(t, err)
receiver := v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -1040,40 +1056,40 @@ func TestIntegrationPatch(t *testing.T) {
},
}

- current, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
+ current, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, current)

t.Run("should patch with json patch", func(t *testing.T) {
- current, err := adminClient.Get(ctx, current.Name, v1.GetOptions{})
+ current, err := adminClient.Get(ctx, current.GetStaticMetadata().Identifier())
require.NoError(t, err)

index := slices.IndexFunc(current.Spec.Integrations, func(t v0alpha1.ReceiverIntegration) bool {
return t.Type == "webhook"
})

- patch := []map[string]any{
+ patch := []resource.PatchOperation{
{
- "op": "remove",
- "path": fmt.Sprintf("/spec/integrations/%d/settings/username", index),
+ Operation: "remove",
+ Path: fmt.Sprintf("/spec/integrations/%d/settings/username", index),
},
{
- "op": "remove",
- "path": fmt.Sprintf("/spec/integrations/%d/secureFields/password", index),
+ Operation: "remove",
+ Path: fmt.Sprintf("/spec/integrations/%d/secureFields/password", index),
},
{
- "op": "replace",
- "path": fmt.Sprintf("/spec/integrations/%d/settings/authorization_scheme", index),
- "value": "bearer",
+ Operation: "replace",
+ Path: fmt.Sprintf("/spec/integrations/%d/settings/authorization_scheme", index),
+ Value: "bearer",
},
{
- "op": "add",
- "path": fmt.Sprintf("/spec/integrations/%d/settings/authorization_credentials", index),
- "value": "authz-token",
+ Operation: "add",
+ Path: fmt.Sprintf("/spec/integrations/%d/settings/authorization_credentials", index),
+ Value: "authz-token",
},
{
- "op": "remove",
- "path": fmt.Sprintf("/spec/integrations/%d/secureFields/authorization_credentials", index),
+ Operation: "remove",
+ Path: fmt.Sprintf("/spec/integrations/%d/secureFields/authorization_credentials", index),
},
}

@@ -1084,10 +1100,7 @@ func TestIntegrationPatch(t *testing.T) {
delete(expected.SecureFields, "password")
expected.SecureFields["authorization_credentials"] = true

- patchData, err := json.Marshal(patch)
- require.NoError(t, err)
-
- result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
+ result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
require.NoError(t, err)

require.EqualValues(t, expected, result.Spec.Integrations[index])
@@ -1127,7 +1140,8 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)

- adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
+ adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
// Prepare environment and create notification policy and rule that use time receiver
|
||||
alertmanagerRaw, err := testData.ReadFile(path.Join("test-data", "notification-settings.json"))
|
||||
require.NoError(t, err)
|
||||
@@ -1146,7 +1160,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
|
||||
_, status, data := legacyCli.PostRulesGroupWithStatus(t, folderUID, &ruleGroup, false)
|
||||
require.Equalf(t, http.StatusAccepted, status, "Failed to post Rule: %s", data)
|
||||
|
||||
receivers, err := adminClient.List(ctx, v1.ListOptions{})
|
||||
receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, receivers.Items, 2)
|
||||
idx := slices.IndexFunc(receivers.Items, func(interval v0alpha1.Receiver) bool {
|
||||
@@ -1164,7 +1178,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
|
||||
expectedTitle := renamed.Spec.Title + "-new"
|
||||
renamed.Spec.Title = expectedTitle
|
||||
|
||||
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
|
||||
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
updatedRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
|
||||
@@ -1178,7 +1192,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
|
||||
assert.Equalf(t, expectedTitle, route.Receiver, "time receiver in routes should have been renamed but it did not")
|
||||
}
|
||||
|
||||
actual, err = adminClient.Get(ctx, actual.Name, v1.GetOptions{})
|
||||
actual, err = adminClient.Get(ctx, actual.GetStaticMetadata().Identifier())
|
||||
require.NoError(t, err)
|
||||
|
||||
receiver = *actual
|
||||
@@ -1194,20 +1208,20 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
|
||||
t.Cleanup(func() {
|
||||
require.NoError(t, db.DeleteProvenance(ctx, ¤tRoute, orgID))
|
||||
})
|
||||
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
|
||||
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
|
||||
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
|
||||
t.Run("provisioned rules", func(t *testing.T) {
|
||||
ruleUid := currentRuleGroup.Rules[0].GrafanaManagedAlert.UID
|
||||
resource := &ngmodels.AlertRule{UID: ruleUid}
|
||||
require.NoError(t, db.SetProvenance(ctx, resource, orgID, "API"))
|
||||
rule := &ngmodels.AlertRule{UID: ruleUid}
|
||||
require.NoError(t, db.SetProvenance(ctx, rule, orgID, "API"))
|
||||
t.Cleanup(func() {
|
||||
require.NoError(t, db.DeleteProvenance(ctx, resource, orgID))
|
||||
require.NoError(t, db.DeleteProvenance(ctx, rule, orgID))
|
||||
})
|
||||
|
||||
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
|
||||
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
|
||||
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
@@ -1216,7 +1230,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
|
||||
|
||||
t.Run("Delete", func(t *testing.T) {
|
||||
t.Run("should fail to delete if receiver is used in rule and routes", func(t *testing.T) {
|
||||
err := adminClient.Delete(ctx, receiver.Name, v1.DeleteOptions{})
|
||||
err := adminClient.Delete(ctx, receiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
|
||||
@@ -1225,7 +1239,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
|
||||
route.Routes[0].Receiver = ""
|
||||
legacyCli.UpdateRoute(t, route, true)
|
||||
|
||||
err = adminClient.Delete(ctx, receiver.Name, v1.DeleteOptions{})
|
||||
err = adminClient.Delete(ctx, receiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
})
|
||||
@@ -1237,10 +1251,11 @@ func TestIntegrationCRUD(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
helper := getTestHelper(t)
|
||||
|
||||
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
|
||||
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
var defaultReceiver *v0alpha1.Receiver
|
||||
t.Run("should list the default receiver", func(t *testing.T) {
|
||||
items, err := adminClient.List(ctx, v1.ListOptions{})
|
||||
items, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
|
||||
require.NoError(t, err)
|
||||
assert.Len(t, items.Items, 1)
|
||||
defaultReceiver = &items.Items[0]
|
||||
@@ -1249,7 +1264,7 @@ func TestIntegrationCRUD(t *testing.T) {
|
||||
assert.NotEmpty(t, defaultReceiver.Name)
|
||||
assert.NotEmpty(t, defaultReceiver.ResourceVersion)
|
||||
|
||||
defaultReceiver, err = adminClient.Get(ctx, defaultReceiver.Name, v1.GetOptions{})
|
||||
defaultReceiver, err = adminClient.Get(ctx, defaultReceiver.GetStaticMetadata().Identifier())
|
||||
require.NoError(t, err)
|
||||
assert.NotEmpty(t, defaultReceiver.UID)
|
||||
assert.NotEmpty(t, defaultReceiver.Name)
|
||||
@@ -1262,7 +1277,7 @@ func TestIntegrationCRUD(t *testing.T) {
|
||||
newDefault := defaultReceiver.Copy().(*v0alpha1.Receiver)
|
||||
newDefault.Spec.Integrations = append(newDefault.Spec.Integrations, createIntegration(t, line.Type))
|
||||
|
||||
updatedReceiver, err := adminClient.Update(ctx, newDefault, v1.UpdateOptions{})
|
||||
updatedReceiver, err := adminClient.Update(ctx, newDefault, resource.UpdateOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
expected := newDefault.Copy().(*v0alpha1.Receiver)
|
||||
@@ -1290,12 +1305,12 @@ func TestIntegrationCRUD(t *testing.T) {
|
||||
Integrations: []v0alpha1.ReceiverIntegration{},
|
||||
},
|
||||
}
|
||||
_, err := adminClient.Create(ctx, newReceiver, v1.CreateOptions{})
|
||||
_, err := adminClient.Create(ctx, newReceiver, resource.CreateOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
|
||||
t.Run("should not let delete default receiver", func(t *testing.T) {
|
||||
err := adminClient.Delete(ctx, defaultReceiver.Name, v1.DeleteOptions{})
|
||||
err := adminClient.Delete(ctx, defaultReceiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
|
||||
@@ -1317,7 +1332,7 @@ func TestIntegrationCRUD(t *testing.T) {
|
||||
Title: "all-receivers",
|
||||
Integrations: integrations,
|
||||
},
|
||||
}, v1.CreateOptions{})
|
||||
}, resource.CreateOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, receiver.Spec.Integrations, len(integrations))
|
||||
|
||||
@@ -1342,7 +1357,7 @@ func TestIntegrationCRUD(t *testing.T) {
|
||||
})
|
||||
|
||||
t.Run("should be able read what it is created", func(t *testing.T) {
|
||||
get, err := adminClient.Get(ctx, receiver.Name, v1.GetOptions{})
|
||||
get, err := adminClient.Get(ctx, receiver.GetStaticMetadata().Identifier())
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, receiver, get)
|
||||
t.Run("should return secrets in secureFields but not settings", func(t *testing.T) {
|
||||
@@ -1394,7 +1409,7 @@ func TestIntegrationCRUD(t *testing.T) {
|
||||
Title: fmt.Sprintf("invalid-%s", key),
|
||||
Integrations: []v0alpha1.ReceiverIntegration{integration},
|
||||
},
|
||||
}, v1.CreateOptions{})
|
||||
}, resource.CreateOptions{})
|
||||
require.Errorf(t, err, "Expected error but got successful result: %v", receiver)
|
||||
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest, got: %s", err)
|
||||
})
|
||||
@@ -1408,7 +1423,8 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
helper := getTestHelper(t)
|
||||
|
||||
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
|
||||
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
recv1 := &v0alpha1.Receiver{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "default",
|
||||
@@ -1420,7 +1436,7 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
|
||||
},
|
||||
},
|
||||
}
|
||||
recv1, err := adminClient.Create(ctx, recv1, v1.CreateOptions{})
|
||||
recv1, err = adminClient.Create(ctx, recv1, resource.CreateOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
recv2 := &v0alpha1.Receiver{
|
||||
@@ -1434,7 +1450,7 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
|
||||
},
|
||||
},
|
||||
}
|
||||
recv2, err = adminClient.Create(ctx, recv2, v1.CreateOptions{})
|
||||
recv2, err = adminClient.Create(ctx, recv2, resource.CreateOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
env := helper.GetEnv()
|
||||
@@ -1444,18 +1460,20 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
|
||||
require.NoError(t, db.SetProvenance(ctx, &definitions.EmbeddedContactPoint{
|
||||
UID: *recv2.Spec.Integrations[0].Uid,
|
||||
}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
|
||||
recv2, err = adminClient.Get(ctx, recv2.Name, v1.GetOptions{})
|
||||
recv2, err = adminClient.Get(ctx, recv2.GetStaticMetadata().Identifier())
|
||||
|
||||
require.NoError(t, err)
|
||||
|
||||
receivers, err := adminClient.List(ctx, v1.ListOptions{})
|
||||
receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, receivers.Items, 3) // Includes default.
|
||||
|
||||
t.Run("should filter by receiver name", func(t *testing.T) {
|
||||
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
|
||||
list, err := adminClient.List(ctx, v1.ListOptions{
|
||||
FieldSelector: "spec.title=" + recv1.Spec.Title,
|
||||
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
|
||||
FieldSelectors: []string{
|
||||
"spec.title=" + recv1.Spec.Title,
|
||||
},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, list.Items, 1)
|
||||
@@ -1463,8 +1481,10 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
|
||||
})
|
||||
|
||||
t.Run("should filter by metadata name", func(t *testing.T) {
|
||||
list, err := adminClient.List(ctx, v1.ListOptions{
|
||||
FieldSelector: "metadata.name=" + recv2.Name,
|
||||
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
|
||||
FieldSelectors: []string{
|
||||
"metadata.name=" + recv2.Name,
|
||||
},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, list.Items, 1)
|
||||
@@ -1473,8 +1493,10 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
|
||||
|
||||
t.Run("should filter by multiple filters", func(t *testing.T) {
|
||||
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
|
||||
list, err := adminClient.List(ctx, v1.ListOptions{
|
||||
FieldSelector: fmt.Sprintf("metadata.name=%s,spec.title=%s", recv2.Name, recv2.Spec.Title),
|
||||
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
|
||||
FieldSelectors: []string{
|
||||
fmt.Sprintf("metadata.name=%s,spec.title=%s", recv2.Name, recv2.Spec.Title),
|
||||
},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, list.Items, 1)
|
||||
@@ -1482,8 +1504,10 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
|
||||
})
|
||||
|
||||
t.Run("should be empty when filter does not match", func(t *testing.T) {
|
||||
list, err := adminClient.List(ctx, v1.ListOptions{
|
||||
FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
|
||||
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
|
||||
FieldSelectors: []string{
|
||||
fmt.Sprintf("metadata.name=%s", "unknown"),
|
||||
},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
require.Empty(t, list.Items)
|
||||
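The list calls in the hunks above change from a single comma-separated `FieldSelector` string to a `FieldSelectors` slice; note that the "multiple filters" subtest still passes one comma-joined expression as a single slice element. A minimal sketch of the relationship between the two shapes (the helper names here are ours, not part of any SDK):

```go
package main

import (
	"fmt"
	"strings"
)

// splitSelector turns the old single-string form into the slice form.
// Hypothetical helper for illustration only.
func splitSelector(s string) []string {
	if s == "" {
		return nil
	}
	return strings.Split(s, ",")
}

// joinSelector turns the slice form back into the comma-joined string.
func joinSelector(parts []string) string {
	return strings.Join(parts, ",")
}

func main() {
	// Old style: one comma-joined selector string.
	old := "metadata.name=abc,spec.title=my-receiver"

	// New style: a slice of selector expressions.
	asSlice := splitSelector(old)

	fmt.Println(len(asSlice))          // 2
	fmt.Println(joinSelector(asSlice)) // round-trips to the original string
}
```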
@@ -1497,7 +1521,8 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)
 
 	helper := getTestHelper(t)
 
-	receiverClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
+	receiverClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+	require.NoError(t, err)
 	for _, receiver := range amConfig.AlertmanagerConfig.Receivers {
 		if receiver.Name == "grafana-default-email" {
 			continue
@@ -1523,7 +1548,7 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)
 		})
 	}
 
-	created, err := receiverClient.Create(ctx, &toCreate, v1.CreateOptions{})
+	created, err := receiverClient.Create(ctx, &toCreate, resource.CreateOptions{})
 	require.NoError(t, err)
 
 	for i, integration := range created.Spec.Integrations {
@@ -1533,10 +1558,11 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)
 
 	nsMapper := func(_ int64) string { return "default" }
 
-	routeClient := test_common.NewRoutingTreeClient(t, helper.Org1.Admin)
+	routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+	require.NoError(t, err)
 	v1route, err := routingtree.ConvertToK8sResource(helper.Org1.AdminServiceAccount.OrgId, *amConfig.AlertmanagerConfig.Route, "", nsMapper)
 	require.NoError(t, err)
-	_, err = routeClient.Update(ctx, v1route, v1.UpdateOptions{})
+	_, err = routeClient.Update(ctx, v1route, resource.UpdateOptions{})
 	require.NoError(t, err)
 }
@@ -1,10 +1,14 @@
{
  "kind": "ReceiverList",
  "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
  "metadata": {},
  "items": [
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWls",
        "namespace": "default",
        "uid": "zyXFk301pvwNz4HRPrTMKPMFO2934cPB7H1ZXmyM1TUX",
        "resourceVersion": "a82b34036bdabbc4",
        "annotations": {
          "grafana.com/access/canAdmin": "true",
          "grafana.com/access/canDelete": "true",
@@ -15,53 +19,29 @@
          "grafana.com/inUse/routes": "1",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "none"
        },
        "name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWls",
        "namespace": "default",
        "resourceVersion": "a82b34036bdabbc4",
        "uid": "zyXFk301pvwNz4HRPrTMKPMFO2934cPB7H1ZXmyM1TUX"
      }
      },
      "spec": {
        "title": "grafana-default-email",
        "integrations": [
          {
            "uid": "",
            "type": "email",
            "version": "v1",
            "disableResolveMessage": false,
            "settings": {
              "addresses": "\u003cexample@email.com\u003e"
            },
            "type": "email",
            "uid": "",
            "version": "v1"
            }
          }
        ],
        "title": "grafana-default-email"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
          "grafana.com/canUse": "false",
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWlsdGVzdC1jcmVhdGUtZ2V0LWNvbmZpZw",
        "namespace": "default",
        "uid": "JzW6DIlcxj4sRN8A2ULcwTXAmm0Vs0Z68aEBqXSvxK0X",
        "resourceVersion": "b2823b50ffa1eff6",
        "uid": "JzW6DIlcxj4sRN8A2ULcwTXAmm0Vs0Z68aEBqXSvxK0X"
      },
      "spec": {
        "integrations": [],
        "title": "grafana-default-emailtest-create-get-config"
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -69,19 +49,36 @@
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "ZGlzY29yZA",
        "namespace": "default",
        "resourceVersion": "06e437697f62ac59",
        "uid": "8cH8Ql2S6VhPEVUhwlQEKYWyPbRJS7YKj2lEXdrehH8X"
      }
      },
      "spec": {
        "title": "grafana-default-emailtest-create-get-config",
        "integrations": []
      }
    },
    {
      "metadata": {
        "name": "ZGlzY29yZA",
        "namespace": "default",
        "uid": "8cH8Ql2S6VhPEVUhwlQEKYWyPbRJS7YKj2lEXdrehH8X",
        "resourceVersion": "06e437697f62ac59",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
          "grafana.com/canUse": "false",
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        }
      },
      "spec": {
        "title": "discord",
        "integrations": [
          {
            "uid": "",
            "type": "discord",
            "version": "v0mimir1",
            "disableResolveMessage": false,
            "secureFields": {
              "webhook_url": true
            },
            "settings": {
              "http_config": {
                "enable_http2": true,
@@ -95,18 +92,19 @@
              "send_resolved": true,
              "title": "{{ template \"discord.default.title\" . }}"
            },
            "type": "discord",
            "uid": "",
            "version": "v0mimir1"
            "secureFields": {
              "webhook_url": true
            }
          }
        ],
        "title": "discord"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "ZW1haWw",
        "namespace": "default",
        "uid": "bhlvlN758xmnwVrHVPX0c5XvFHepenUbOXP0fuE6eUMX",
        "resourceVersion": "9b3ffed277cee189",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -114,19 +112,16 @@
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "ZW1haWw",
        "namespace": "default",
        "resourceVersion": "9b3ffed277cee189",
        "uid": "bhlvlN758xmnwVrHVPX0c5XvFHepenUbOXP0fuE6eUMX"
      }
      },
      "spec": {
        "title": "email",
        "integrations": [
          {
            "uid": "",
            "type": "email",
            "version": "v0mimir1",
            "disableResolveMessage": false,
            "secureFields": {
              "auth_password": true
            },
            "settings": {
              "auth_username": "alertmanager",
              "from": "alertmanager@example.com",
@@ -144,18 +139,19 @@
              },
              "to": "team@example.com"
            },
            "type": "email",
            "uid": "",
            "version": "v0mimir1"
            "secureFields": {
              "auth_password": true
            }
          }
        ],
        "title": "email"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "amlyYQ",
        "namespace": "default",
        "uid": "7Pu4xcRXbvw4XEX279SoqyO8Ibo8cMl0vAJyYTsJ0NEX",
        "resourceVersion": "deae9d34f8554205",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -163,19 +159,16 @@
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "amlyYQ",
        "namespace": "default",
        "resourceVersion": "deae9d34f8554205",
        "uid": "7Pu4xcRXbvw4XEX279SoqyO8Ibo8cMl0vAJyYTsJ0NEX"
      }
      },
      "spec": {
        "title": "jira",
        "integrations": [
          {
            "uid": "",
            "type": "jira",
            "version": "v0mimir1",
            "disableResolveMessage": false,
            "secureFields": {
              "http_config.basic_auth.password": true
            },
            "settings": {
              "api_url": "http://localhost/jira",
              "custom_fields": {
@@ -203,18 +196,19 @@
              "send_resolved": true,
              "summary": "{{ template \"jira.default.summary\" . }}"
            },
            "type": "jira",
            "uid": "",
            "version": "v0mimir1"
            "secureFields": {
              "http_config.basic_auth.password": true
            }
          }
        ],
        "title": "jira"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "bXN0ZWFtcw",
        "namespace": "default",
        "uid": "z7xTMDjrk1HAHXPEx78tQb63LXYA6ivXLOtz2Z09ucIX",
        "resourceVersion": "95c8d082d65466a3",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -222,19 +216,16 @@
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "bXN0ZWFtcw",
        "namespace": "default",
        "resourceVersion": "95c8d082d65466a3",
        "uid": "z7xTMDjrk1HAHXPEx78tQb63LXYA6ivXLOtz2Z09ucIX"
      }
      },
      "spec": {
        "title": "msteams",
        "integrations": [
          {
            "uid": "",
            "type": "teams",
            "version": "v0mimir1",
            "disableResolveMessage": false,
            "secureFields": {
              "webhook_url": true
            },
            "settings": {
              "http_config": {
                "enable_http2": true,
@@ -249,18 +240,19 @@
              "text": "{{ template \"msteams.default.text\" . }}",
              "title": "{{ template \"msteams.default.title\" . }}"
            },
            "type": "teams",
            "uid": "",
            "version": "v0mimir1"
            "secureFields": {
              "webhook_url": true
            }
          }
        ],
        "title": "msteams"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "b3BzZ2VuaWU",
        "namespace": "default",
        "uid": "XmkZ214Dj030hvynYiwNLq8i6uRCjUYXMXjE5m19OKAX",
        "resourceVersion": "8ee2957ba150ba16",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -268,19 +260,16 @@
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "b3BzZ2VuaWU",
        "namespace": "default",
        "resourceVersion": "8ee2957ba150ba16",
        "uid": "XmkZ214Dj030hvynYiwNLq8i6uRCjUYXMXjE5m19OKAX"
      }
      },
      "spec": {
        "title": "opsgenie",
        "integrations": [
          {
            "uid": "",
            "type": "opsgenie",
            "version": "v0mimir1",
            "disableResolveMessage": false,
            "secureFields": {
              "api_key": true
            },
            "settings": {
              "actions": "test actions",
              "api_url": "http://localhost/opsgenie/",
@@ -311,18 +300,19 @@
              "tags": "test-tags",
              "update_alerts": true
            },
            "type": "opsgenie",
            "uid": "",
            "version": "v0mimir1"
            "secureFields": {
              "api_key": true
            }
          }
        ],
        "title": "opsgenie"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "cGFnZXJkdXR5",
        "namespace": "default",
        "uid": "QNitkUCkwzrIc7WVCCJGGDyvXLyo9csSUVqfyStyctQX",
        "resourceVersion": "fe673d5dcd67ccf0",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -330,20 +320,16 @@
          "grafana.com/inUse/routes": "1",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "cGFnZXJkdXR5",
        "namespace": "default",
        "resourceVersion": "fe673d5dcd67ccf0",
        "uid": "QNitkUCkwzrIc7WVCCJGGDyvXLyo9csSUVqfyStyctQX"
      }
      },
      "spec": {
        "title": "pagerduty",
        "integrations": [
          {
            "uid": "",
            "type": "pagerduty",
            "version": "v0mimir1",
            "disableResolveMessage": false,
            "secureFields": {
              "routing_key": true,
              "service_key": true
            },
            "settings": {
              "class": "test class",
              "client": "Alertmanager",
@@ -383,18 +369,20 @@
              "source": "test source",
              "url": "http://localhost/pagerduty"
            },
            "type": "pagerduty",
            "uid": "",
            "version": "v0mimir1"
            "secureFields": {
              "routing_key": true,
              "service_key": true
            }
          }
        ],
        "title": "pagerduty"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "cHVzaG92ZXI",
        "namespace": "default",
        "uid": "t2TJSktI6vyGfdbLOKmxH4eBqgcIGsAuW8Qm9m0HRycX",
        "resourceVersion": "6ae076725ab463e0",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -402,21 +390,16 @@
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "cHVzaG92ZXI",
        "namespace": "default",
        "resourceVersion": "6ae076725ab463e0",
        "uid": "t2TJSktI6vyGfdbLOKmxH4eBqgcIGsAuW8Qm9m0HRycX"
      }
      },
      "spec": {
        "title": "pushover",
        "integrations": [
          {
            "uid": "",
            "type": "pushover",
            "version": "v0mimir1",
            "disableResolveMessage": false,
            "secureFields": {
              "http_config.authorization.credentials": true,
              "token": true,
              "user_key": true
            },
            "settings": {
              "expire": "1h0m0s",
              "http_config": {
@@ -437,18 +420,21 @@
              "title": "{{ template \"pushover.default.title\" . }}",
              "url": "http://localhost/pushover"
            },
            "type": "pushover",
            "uid": "",
            "version": "v0mimir1"
            "secureFields": {
              "http_config.authorization.credentials": true,
              "token": true,
              "user_key": true
            }
          }
        ],
        "title": "pushover"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "c2xhY2s",
        "namespace": "default",
        "uid": "xSB0hnoc9j1CnLCHR3VgeVGXdVXILM0p2dM64bbHN9oX",
        "resourceVersion": "ec0e343029ff5d8b",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -456,19 +442,16 @@
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "c2xhY2s",
        "namespace": "default",
        "resourceVersion": "ec0e343029ff5d8b",
        "uid": "xSB0hnoc9j1CnLCHR3VgeVGXdVXILM0p2dM64bbHN9oX"
      }
      },
      "spec": {
        "title": "slack",
        "integrations": [
          {
            "uid": "",
            "type": "slack",
            "version": "v0mimir1",
            "disableResolveMessage": false,
            "secureFields": {
              "api_url": true
            },
            "settings": {
              "actions": [
                {
@@ -522,18 +505,19 @@
              "title_link": "http://localhost",
              "username": "Alerting Team"
            },
            "type": "slack",
            "uid": "",
            "version": "v0mimir1"
            "secureFields": {
              "api_url": true
            }
          }
        ],
        "title": "slack"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "c25z",
        "namespace": "default",
        "uid": "vSP8NtFr23hnqZqLxRgzUKfr1wOemOvZm1S6MYkfRI4X",
        "resourceVersion": "77d734ad4c196d36",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -541,19 +525,16 @@
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "c25z",
        "namespace": "default",
        "resourceVersion": "77d734ad4c196d36",
        "uid": "vSP8NtFr23hnqZqLxRgzUKfr1wOemOvZm1S6MYkfRI4X"
      }
      },
      "spec": {
        "title": "sns",
        "integrations": [
          {
            "uid": "",
            "type": "sns",
            "version": "v0mimir1",
            "disableResolveMessage": false,
            "secureFields": {
              "sigv4.SecretKey": true
            },
            "settings": {
              "attributes": {
                "key1": "value1"
@@ -577,18 +558,19 @@
              "subject": "{{ template \"sns.default.subject\" . }}",
              "topic_arn": "arn:aws:sns:us-east-1:123456789012:alerts"
            },
            "type": "sns",
            "uid": "",
            "version": "v0mimir1"
            "secureFields": {
              "sigv4.SecretKey": true
            }
          }
        ],
        "title": "sns"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "dGVsZWdyYW0",
        "namespace": "default",
        "uid": "XLWjtmYcjP5PiqBCwZXX3YKHV1G8niRtpCakIpcHqoYX",
        "resourceVersion": "d9850878a33e302e",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -596,19 +578,16 @@
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "dGVsZWdyYW0",
        "namespace": "default",
        "resourceVersion": "d9850878a33e302e",
        "uid": "XLWjtmYcjP5PiqBCwZXX3YKHV1G8niRtpCakIpcHqoYX"
      }
      },
      "spec": {
        "title": "telegram",
        "integrations": [
          {
            "uid": "",
            "type": "telegram",
            "version": "v0mimir1",
            "disableResolveMessage": false,
            "secureFields": {
              "token": true
            },
            "settings": {
              "api_url": "http://localhost/telegram-default",
              "chat": -1001234567890,
@@ -624,18 +603,19 @@
              "parse_mode": "MarkdownV2",
              "send_resolved": true
            },
            "type": "telegram",
            "uid": "",
            "version": "v0mimir1"
            "secureFields": {
              "token": true
            }
          }
        ],
        "title": "telegram"
        ]
      }
    },
    {
      "apiVersion": "notifications.alerting.grafana.app/v0alpha1",
      "kind": "Receiver",
      "metadata": {
        "name": "dmljdG9yb3Bz",
        "namespace": "default",
        "uid": "EWiwQ6TIW0GpEo46WusW7Nvg0HuD4QAbHf0JZ2OSOhEX",
        "resourceVersion": "1e6886531440afc2",
        "annotations": {
          "grafana.com/access/canModifyProtected": "true",
          "grafana.com/access/canReadSecrets": "true",
@@ -643,19 +623,16 @@
          "grafana.com/inUse/routes": "0",
          "grafana.com/inUse/rules": "0",
          "grafana.com/provenance": "converted_prometheus"
        },
        "name": "dmljdG9yb3Bz",
        "namespace": "default",
        "resourceVersion": "1e6886531440afc2",
|
||||
"uid": "EWiwQ6TIW0GpEo46WusW7Nvg0HuD4QAbHf0JZ2OSOhEX"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"title": "victorops",
|
||||
"integrations": [
|
||||
{
|
||||
"uid": "",
|
||||
"type": "victorops",
|
||||
"version": "v0mimir1",
|
||||
"disableResolveMessage": false,
|
||||
"secureFields": {
|
||||
"api_key": true
|
||||
},
|
||||
"settings": {
|
||||
"api_url": "http://localhost/victorops-default/",
|
||||
"entity_display_name": "{{ template \"victorops.default.entity_display_name\" . }}",
|
||||
@@ -674,18 +651,19 @@
|
||||
"send_resolved": true,
|
||||
"state_message": "{{ template \"victorops.default.state_message\" . }}"
|
||||
},
|
||||
"type": "victorops",
|
||||
"uid": "",
|
||||
"version": "v0mimir1"
|
||||
"secureFields": {
|
||||
"api_key": true
|
||||
}
|
||||
}
|
||||
],
|
||||
"title": "victorops"
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
|
||||
"kind": "Receiver",
|
||||
"metadata": {
|
||||
"name": "d2ViZXg",
|
||||
"namespace": "default",
|
||||
"uid": "wDNufI44UXHWq4ERRYenZ7XgXVV3Tjxaokz9IjMRZ54X",
|
||||
"resourceVersion": "08fc955a08dfe9c0",
|
||||
"annotations": {
|
||||
"grafana.com/access/canModifyProtected": "true",
|
||||
"grafana.com/access/canReadSecrets": "true",
|
||||
@@ -693,19 +671,16 @@
|
||||
"grafana.com/inUse/routes": "0",
|
||||
"grafana.com/inUse/rules": "0",
|
||||
"grafana.com/provenance": "converted_prometheus"
|
||||
},
|
||||
"name": "d2ViZXg",
|
||||
"namespace": "default",
|
||||
"resourceVersion": "08fc955a08dfe9c0",
|
||||
"uid": "wDNufI44UXHWq4ERRYenZ7XgXVV3Tjxaokz9IjMRZ54X"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"title": "webex",
|
||||
"integrations": [
|
||||
{
|
||||
"uid": "",
|
||||
"type": "webex",
|
||||
"version": "v0mimir1",
|
||||
"disableResolveMessage": false,
|
||||
"secureFields": {
|
||||
"http_config.authorization.credentials": true
|
||||
},
|
||||
"settings": {
|
||||
"api_url": "http://localhost/webes-default",
|
||||
"http_config": {
|
||||
@@ -723,18 +698,19 @@
|
||||
"room_id": "Y2lzY29zcGFyazovL3VzL1JPT00v12345678",
|
||||
"send_resolved": true
|
||||
},
|
||||
"type": "webex",
|
||||
"uid": "",
|
||||
"version": "v0mimir1"
|
||||
"secureFields": {
|
||||
"http_config.authorization.credentials": true
|
||||
}
|
||||
}
|
||||
],
|
||||
"title": "webex"
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
|
||||
"kind": "Receiver",
|
||||
"metadata": {
|
||||
"name": "d2ViaG9vaw",
|
||||
"namespace": "default",
|
||||
"uid": "aKzigXATPp6HOh20yTrlTcuF2Y9IrPHridGIcWrJygsX",
|
||||
"resourceVersion": "494392f899a7b410",
|
||||
"annotations": {
|
||||
"grafana.com/access/canModifyProtected": "true",
|
||||
"grafana.com/access/canReadSecrets": "true",
|
||||
@@ -742,19 +718,16 @@
|
||||
"grafana.com/inUse/routes": "1",
|
||||
"grafana.com/inUse/rules": "0",
|
||||
"grafana.com/provenance": "converted_prometheus"
|
||||
},
|
||||
"name": "d2ViaG9vaw",
|
||||
"namespace": "default",
|
||||
"resourceVersion": "494392f899a7b410",
|
||||
"uid": "aKzigXATPp6HOh20yTrlTcuF2Y9IrPHridGIcWrJygsX"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"title": "webhook",
|
||||
"integrations": [
|
||||
{
|
||||
"uid": "",
|
||||
"type": "webhook",
|
||||
"version": "v0mimir1",
|
||||
"disableResolveMessage": false,
|
||||
"secureFields": {
|
||||
"url": true
|
||||
},
|
||||
"settings": {
|
||||
"http_config": {
|
||||
"enable_http2": true,
|
||||
@@ -769,18 +742,19 @@
|
||||
"timeout": "0s",
|
||||
"url_file": ""
|
||||
},
|
||||
"type": "webhook",
|
||||
"uid": "",
|
||||
"version": "v0mimir1"
|
||||
"secureFields": {
|
||||
"url": true
|
||||
}
|
||||
}
|
||||
],
|
||||
"title": "webhook"
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
|
||||
"kind": "Receiver",
|
||||
"metadata": {
|
||||
"name": "d2VjaGF0",
|
||||
"namespace": "default",
|
||||
"uid": "jkXCvNrNVw7XX5nmYFyrGiA4ckAvJ282u2scW8KZq7IX",
|
||||
"resourceVersion": "135913515cbc156b",
|
||||
"annotations": {
|
||||
"grafana.com/access/canModifyProtected": "true",
|
||||
"grafana.com/access/canReadSecrets": "true",
|
||||
@@ -788,19 +762,16 @@
|
||||
"grafana.com/inUse/routes": "0",
|
||||
"grafana.com/inUse/rules": "0",
|
||||
"grafana.com/provenance": "converted_prometheus"
|
||||
},
|
||||
"name": "d2VjaGF0",
|
||||
"namespace": "default",
|
||||
"resourceVersion": "135913515cbc156b",
|
||||
"uid": "jkXCvNrNVw7XX5nmYFyrGiA4ckAvJ282u2scW8KZq7IX"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"title": "wechat",
|
||||
"integrations": [
|
||||
{
|
||||
"uid": "",
|
||||
"type": "wechat",
|
||||
"version": "v0mimir1",
|
||||
"disableResolveMessage": false,
|
||||
"secureFields": {
|
||||
"api_secret": true
|
||||
},
|
||||
"settings": {
|
||||
"agent_id": "1000002",
|
||||
"api_url": "http://localhost/wechat/",
|
||||
@@ -820,15 +791,12 @@
|
||||
"to_tag": "tag1",
|
||||
"to_user": "user1"
|
||||
},
|
||||
"type": "wechat",
|
||||
"uid": "",
|
||||
"version": "v0mimir1"
|
||||
"secureFields": {
|
||||
"api_secret": true
|
||||
}
|
||||
}
|
||||
],
|
||||
"title": "wechat"
|
||||
]
|
||||
}
|
||||
}
|
||||
],
|
||||
"kind": "ReceiverList",
|
||||
"metadata": {}
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -8,6 +8,7 @@ import (
"testing"
"time"

"github.com/grafana/grafana-app-sdk/resource"
"github.com/prometheus/alertmanager/config"
"github.com/prometheus/alertmanager/pkg/labels"
"github.com/prometheus/common/model"
@@ -39,6 +40,11 @@ import (
"github.com/grafana/grafana/pkg/util/testutil"
)

var defaultTreeIdentifier = resource.Identifier{
Namespace: apis.DefaultNamespace,
Name: v0alpha1.UserDefinedRoutingTreeName,
}

func TestMain(m *testing.M) {
testsuite.Run(m)
}
@@ -52,7 +58,8 @@ func TestIntegrationNotAllowedMethods(t *testing.T) {

ctx := context.Background()
helper := getTestHelper(t)
client := common.NewRoutingTreeClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)

route := &v0alpha1.RoutingTree{
ObjectMeta: v1.ObjectMeta{
@@ -60,11 +67,7 @@ func TestIntegrationNotAllowedMethods(t *testing.T) {
},
Spec: v0alpha1.RoutingTreeSpec{},
}
_, err := client.Create(ctx, route, v1.CreateOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsMethodNotSupported(err), "Expected MethodNotSupported but got %s", err)

err = client.Client.DeleteCollection(ctx, v1.DeleteOptions{}, v1.ListOptions{})
_, err = client.Create(ctx, route, resource.CreateOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsMethodNotSupported(err), "Expected MethodNotSupported but got %s", err)
}
@@ -154,50 +157,52 @@ func TestIntegrationAccessControl(t *testing.T) {
}

admin := org1.Admin
adminClient := common.NewRoutingTreeClient(t, admin)
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)

for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
client := common.NewRoutingTreeClient(t, tc.user)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(tc.user.GetClientRegistry())
require.NoError(t, err)

if tc.canRead {
t.Run("should be able to list routing trees", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
require.Equal(t, v0alpha1.UserDefinedRoutingTreeName, list.Items[0].Name)
})

t.Run("should be able to read routing trees by resource identifier", func(t *testing.T) {
_, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
_, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)

t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to list routing trees", func(t *testing.T) {
_, err := client.List(ctx, v1.ListOptions{})
_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})

t.Run("should be forbidden to read routing tree by name", func(t *testing.T) {
_, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
_, err := client.Get(ctx, defaultTreeIdentifier)
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
}

current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
current, err := adminClient.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
expected := current.Copy().(*v0alpha1.RoutingTree)
expected.Spec.Routes = []v0alpha1.RoutingTreeRoute{
@@ -217,7 +222,7 @@ func TestIntegrationAccessControl(t *testing.T) {

if tc.canUpdate {
t.Run("should be able to update routing tree", func(t *testing.T) {
updated, err := client.Update(ctx, expected, v1.UpdateOptions{})
updated, err := client.Update(ctx, expected, resource.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))

expected = updated
@@ -225,21 +230,23 @@ func TestIntegrationAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := expected.Copy().(*v0alpha1.RoutingTree)
up.Name = "notFound"
_, err := client.Update(ctx, up, v1.UpdateOptions{})
_, err := client.Update(ctx, up, resource.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to update routing tree", func(t *testing.T) {
_, err := client.Update(ctx, expected, v1.UpdateOptions{})
_, err := client.Update(ctx, expected, resource.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := expected.Copy().(*v0alpha1.RoutingTree)
up.Name = "notFound"
_, err := client.Update(ctx, up, v1.UpdateOptions{})
_, err := client.Update(ctx, up, resource.UpdateOptions{
ResourceVersion: up.ResourceVersion,
})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
@@ -248,32 +255,32 @@ func TestIntegrationAccessControl(t *testing.T) {

if tc.canUpdate {
t.Run("should be able to reset routing tree", func(t *testing.T) {
err := client.Delete(ctx, expected.Name, v1.DeleteOptions{})
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.NoError(t, err)

t.Run("should get NotFound if name does not exist", func(t *testing.T) {
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to reset routing tree", func(t *testing.T) {
err := client.Delete(ctx, expected.Name, v1.DeleteOptions{})
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
}
})

err := adminClient.Delete(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.DeleteOptions{})
err := adminClient.Delete(ctx, defaultTreeIdentifier, resource.DeleteOptions{})
require.NoError(t, err)
}
}
@@ -287,21 +294,22 @@ func TestIntegrationProvisioning(t *testing.T) {
org := helper.Org1

admin := org.Admin
adminClient := common.NewRoutingTreeClient(t, admin)
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)

env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
db, err := store.ProvideDBStore(env.Cfg, env.FeatureToggles, env.SQLStore, &foldertest.FakeService{}, &dashboards.FakeDashboardService{}, ac, bus.ProvideBus(tracing.InitializeTracerForTest()))
require.NoError(t, err)

current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
current, err := adminClient.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
require.Equal(t, "none", current.GetProvenanceStatus())

t.Run("should provide provenance status", func(t *testing.T) {
require.NoError(t, db.SetProvenance(ctx, &definitions.Route{}, admin.Identity.GetOrgID(), "API"))

got, err := adminClient.Get(ctx, current.Name, v1.GetOptions{})
got, err := adminClient.Get(ctx, current.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})
@@ -319,13 +327,13 @@ func TestIntegrationProvisioning(t *testing.T) {
},
}

_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})

t.Run("should not let delete if provisioned", func(t *testing.T) {
err := adminClient.Delete(ctx, current.Name, v1.DeleteOptions{})
err := adminClient.Delete(ctx, current.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -336,35 +344,37 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)

adminClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)

current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
current, err := adminClient.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
require.NotEmpty(t, current.ResourceVersion)

t.Run("should forbid if version does not match", func(t *testing.T) {
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.ResourceVersion = "test"
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
ResourceVersion: "test",
})
require.Error(t, err)
require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.Spec.Defaults.GroupBy = append(updated.Spec.Defaults.GroupBy, "data")
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
})
t.Run("should update if version is empty", func(t *testing.T) {
current, err = adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
current, err = adminClient.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.ResourceVersion = ""
updated.Spec.Routes = append(updated.Spec.Routes, v0alpha1.RoutingTreeRoute{Continue: true})

actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, current.ResourceVersion, actualUpdated.ResourceVersion)
@@ -380,20 +390,22 @@ func TestIntegrationDataConsistency(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)

client := common.NewRoutingTreeClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)

receiver := "grafana-default-email"
timeInterval := "test-time-interval"
createRoute := func(t *testing.T, route definitions.Route) {
t.Helper()
routeClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
v1Route, err := routingtree.ConvertToK8sResource(helper.Org1.Admin.Identity.GetOrgID(), route, "", func(int64) string { return "default" })
require.NoError(t, err)
_, err = routeClient.Update(ctx, v1Route, v1.UpdateOptions{})
_, err = routeClient.Update(ctx, v1Route, resource.UpdateOptions{})
require.NoError(t, err)
}

_, err := common.NewTimeIntervalClient(t, helper.Org1.Admin).Create(ctx, &v0alpha1.TimeInterval{
_, err = common.NewTimeIntervalClient(t, helper.Org1.Admin).Create(ctx, &v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
},
@@ -435,7 +447,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
},
}
createRoute(t, route)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
tree, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
expected := []v0alpha1.RoutingTreeMatcher{
{
@@ -503,9 +515,9 @@ func TestIntegrationDataConsistency(t *testing.T) {
ensureMatcher(t, labels.MatchNotEqual, "matchers", "v"),
}

tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
tree, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
_, err = client.Update(ctx, tree, v1.UpdateOptions{})
_, err = client.Update(ctx, tree, resource.UpdateOptions{})
require.NoError(t, err)

cfg, _, _ = legacyCli.GetAlertmanagerConfigWithStatus(t)
@@ -542,7 +554,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
createRoute(t, route)

t.Run("correctly reads all fields", func(t *testing.T) {
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
tree, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
assert.Equal(t, v0alpha1.RoutingTreeRouteDefaults{
Receiver: receiver,
@@ -589,10 +601,10 @@ func TestIntegrationDataConsistency(t *testing.T) {
t.Run("correctly save all fields", func(t *testing.T) {
before, status, body := legacyCli.GetAlertmanagerConfigWithStatus(t)
require.Equalf(t, http.StatusOK, status, body)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
tree, err := client.Get(ctx, defaultTreeIdentifier)
tree.Spec.Defaults.GroupBy = []string{"test-123", "test-456", "test-789"}
require.NoError(t, err)
_, err = client.Update(ctx, tree, v1.UpdateOptions{})
_, err = client.Update(ctx, tree, resource.UpdateOptions{})
require.NoError(t, err)

before.AlertmanagerConfig.Route.GroupByStr = []string{"test-123", "test-456", "test-789"}
@@ -640,7 +652,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
}

createRoute(t, route)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
tree, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
assert.Equal(t, "foo🙂", tree.Spec.Routes[0].GroupBy[0])
expected := []v0alpha1.RoutingTreeMatcher{
@@ -666,7 +678,8 @@ func TestIntegrationExtraConfigsConflicts(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)

client := common.NewRoutingTreeClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)

// Now upload a new extra config
testAlertmanagerConfigYAML := `
@@ -691,7 +704,7 @@ receivers:
}, headers)
require.Equal(t, "success", response.Status)

current, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
current, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.Spec.Routes = append(updated.Spec.Routes, v0alpha1.RoutingTreeRoute{
@@ -704,7 +717,7 @@ receivers:
},
})

_, err = client.Update(ctx, updated, v1.UpdateOptions{})
_, err = client.Update(ctx, updated, resource.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsBadRequest(err), "Should get BadRequest error but got: %s", err)

@@ -712,6 +725,6 @@ receivers:
legacyCli.ConvertPrometheusDeleteAlertmanagerConfig(t, headers)

// and try again
_, err = client.Update(ctx, updated, v1.UpdateOptions{})
_, err = client.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
}

@@ -6,6 +6,7 @@ import (
"path"
"testing"

"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.yaml.in/yaml/v3"
@@ -18,7 +19,6 @@ import (
"github.com/grafana/grafana/pkg/services/ngalert/models"
"github.com/grafana/grafana/pkg/tests/api/alerting"
"github.com/grafana/grafana/pkg/tests/apis"
"github.com/grafana/grafana/pkg/tests/apis/alerting/notifications/common"
"github.com/grafana/grafana/pkg/tests/testinfra"
"github.com/grafana/grafana/pkg/util/testutil"
)
@@ -35,7 +35,8 @@ func TestIntegrationImportedTemplates(t *testing.T) {
},
})

client := common.NewTemplateGroupClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)

cliCfg := helper.Org1.Admin.NewRestConfig()
alertingApi := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
@@ -57,7 +58,7 @@ func TestIntegrationImportedTemplates(t *testing.T) {
response := alertingApi.ConvertPrometheusPostAlertmanagerConfig(t, amConfig, headers)
require.Equal(t, "success", response.Status)

templates, err := client.List(context.Background(), metav1.ListOptions{})
templates, err := client.List(context.Background(), apis.DefaultNamespace, resource.ListOptions{})

require.NoError(t, err)
require.Len(t, templates.Items, 3)
@@ -90,12 +91,12 @@ func TestIntegrationImportedTemplates(t *testing.T) {
t.Run("should not be able to update", func(t *testing.T) {
tpl := templates.Items[1]
tpl.Spec.Content = "new content"
_, err := client.Update(context.Background(), &tpl, metav1.UpdateOptions{})
_, err := client.Update(context.Background(), &tpl, resource.UpdateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})

t.Run("should not be able to delete", func(t *testing.T) {
err := client.Delete(context.Background(), templates.Items[1].Name, metav1.DeleteOptions{})
err := client.Delete(context.Background(), templates.Items[1].GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})

@@ -108,14 +109,14 @@ func TestIntegrationImportedTemplates(t *testing.T) {
}
tpl.Spec.Kind = v0alpha1.TemplateGroupTemplateKindGrafana

created, err := client.Create(context.Background(), &tpl, metav1.CreateOptions{})
created, err := client.Create(context.Background(), &tpl, resource.CreateOptions{})
require.NoError(t, err)

assert.NotEqual(t, templates.Items[1].Name, created.Name)
})

t.Run("sort by kind and then name", func(t *testing.T) {
templates, err := client.List(context.Background(), metav1.ListOptions{})
templates, err := client.List(context.Background(), apis.DefaultNamespace, resource.ListOptions{})

require.NoError(t, err)
require.Len(t, templates.Items, 4)

@@ -7,6 +7,7 @@ import (
"testing"

"github.com/grafana/alerting/templates"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
@@ -45,7 +46,8 @@ func TestIntegrationResourceIdentifier(t *testing.T) {

ctx := context.Background()
helper := getTestHelper(t)
client := common.NewTemplateGroupClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)

newTemplate := &v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -61,23 +63,23 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
t.Run("create should fail if object name is specified", func(t *testing.T) {
template := newTemplate.Copy().(*v0alpha1.TemplateGroup)
template.Name = "new-templateGroup"
_, err := client.Create(ctx, template, v1.CreateOptions{})
_, err := client.Create(ctx, template, resource.CreateOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})

var resourceID string
var resourceID resource.Identifier
t.Run("create should succeed and provide resource name", func(t *testing.T) {
actual, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
actual, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
resourceID = actual.Name
resourceID = actual.GetStaticMetadata().Identifier()
})

var existingTemplateGroup *v0alpha1.TemplateGroup
t.Run("resource should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
actual, err := client.Get(ctx, resourceID)
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.Equal(t, newTemplate.Spec, actual.Spec)
@@ -90,12 +92,12 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
}
updated := existingTemplateGroup.Copy().(*v0alpha1.TemplateGroup)
updated.Spec.Title = "another-templateGroup"
actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, updated.Spec, actual.Spec)
require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")

resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, actual, resource)

@@ -104,7 +106,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {

var defaultTemplateGroup *v0alpha1.TemplateGroup
t.Run("default template should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, templates.DefaultTemplateName, v1.GetOptions{})
actual, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: templates.DefaultTemplateName})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")

@@ -122,7 +124,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
t.Run("create with reserved default title should work", func(t *testing.T) {
template := newTemplate.Copy().(*v0alpha1.TemplateGroup)
template.Spec.Title = defaultTemplateGroup.Spec.Title
actual, err := client.Create(ctx, template, v1.CreateOptions{})
actual, err := client.Create(ctx, template, resource.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
|
||||
@@ -130,7 +132,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
|
||||
})
|
||||
|
||||
t.Run("default template should not be available by calculated UID", func(t *testing.T) {
|
||||
actual, err := client.Get(ctx, newTemplateWithOverlappingName.Name, v1.GetOptions{})
|
||||
actual, err := client.Get(ctx, newTemplateWithOverlappingName.GetStaticMetadata().Identifier())
|
||||
require.NoError(t, err)
|
||||
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
|
||||
|
||||
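The hunks above replace plain name-string lookups with an identifier value. As a rough self-contained sketch of the shape involved (the `Identifier` struct here is a local stand-in for the grafana-app-sdk `resource.Identifier`, assumed from the diff to carry a namespace/name pair):

```go
package main

import "fmt"

// Identifier is a local stand-in for the SDK's namespace+name pair
// that the new client's Get takes instead of a bare name string.
type Identifier struct {
	Namespace string
	Name      string
}

// formatID renders the identifier as the usual namespace/name form.
func formatID(id Identifier) string {
	return fmt.Sprintf("%s/%s", id.Namespace, id.Name)
}

func main() {
	// Old call shape: client.Get(ctx, "my-template", v1.GetOptions{})
	// New call shape: client.Get(ctx, resource.Identifier{Namespace: ..., Name: ...})
	fmt.Println(formatID(Identifier{Namespace: "default", Name: "my-template"}))
}
```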
@@ -215,11 +217,13 @@ func TestIntegrationAccessControl(t *testing.T) {
 },
 }

-adminClient := common.NewTemplateGroupClient(t, org1.Admin)
+adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(org1.Admin.GetClientRegistry())
+require.NoError(t, err)

 for _, tc := range testCases {
 t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
-client := common.NewTemplateGroupClient(t, tc.user)
+client, err := v0alpha1.NewTemplateGroupClientFromGenerator(tc.user.GetClientRegistry())
+require.NoError(t, err)

 var expected = &v0alpha1.TemplateGroup{
 ObjectMeta: v1.ObjectMeta{
@@ -237,12 +241,12 @@ func TestIntegrationAccessControl(t *testing.T) {

 if tc.canCreate {
 t.Run("should be able to create template group", func(t *testing.T) {
-actual, err := client.Create(ctx, expected, v1.CreateOptions{})
+actual, err := client.Create(ctx, expected, resource.CreateOptions{})
 require.NoErrorf(t, err, "Payload %s", string(d))
 require.Equal(t, expected.Spec, actual.Spec)

 t.Run("should fail if already exists", func(t *testing.T) {
-_, err := client.Create(ctx, actual, v1.CreateOptions{})
+_, err := client.Create(ctx, actual, resource.CreateOptions{})
 require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
 })

@@ -250,45 +254,45 @@ func TestIntegrationAccessControl(t *testing.T) {
 })
 } else {
 t.Run("should be forbidden to create", func(t *testing.T) {
-_, err := client.Create(ctx, expected, v1.CreateOptions{})
+_, err := client.Create(ctx, expected, resource.CreateOptions{})
 require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
 })

 // create resource to proceed with other tests
-expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
+expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
 require.NoErrorf(t, err, "Payload %s", string(d))
 require.NotNil(t, expected)
 }

 if tc.canRead {
 t.Run("should be able to list template groups", func(t *testing.T) {
-list, err := client.List(ctx, v1.ListOptions{})
+list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
 require.NoError(t, err)
 require.Len(t, list.Items, 2) // Includes default template.
 })

 t.Run("should be able to read template group by resource identifier", func(t *testing.T) {
-got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
+got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
 require.NoError(t, err)
 require.Equal(t, expected, got)
 require.Equal(t, expected.Spec, got.Spec)

 t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
-_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
+_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
 require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
 })
 })
 } else {
 t.Run("should be forbidden to list template groups", func(t *testing.T) {
-_, err := client.List(ctx, v1.ListOptions{})
+_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 })

 t.Run("should be forbidden to read template group by name", func(t *testing.T) {
-_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
+_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

 t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
-_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
+_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 })
 })
@@ -302,7 +306,7 @@ func TestIntegrationAccessControl(t *testing.T) {

 if tc.canUpdate {
 t.Run("should be able to update template group", func(t *testing.T) {
-updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
+updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
 require.NoErrorf(t, err, "Payload %s", string(d))

 expected = updated
@@ -310,52 +314,54 @@ func TestIntegrationAccessControl(t *testing.T) {
 t.Run("should get NotFound if name does not exist", func(t *testing.T) {
 up := updatedExpected.Copy().(*v0alpha1.TemplateGroup)
 up.Name = "notFound"
-_, err := client.Update(ctx, up, v1.UpdateOptions{})
+_, err := client.Update(ctx, up, resource.UpdateOptions{})
 require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
 })
 })
 } else {
 t.Run("should be forbidden to update template group", func(t *testing.T) {
-_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
+_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

 t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
 up := updatedExpected.Copy().(*v0alpha1.TemplateGroup)
 up.Name = "notFound"
-_, err := client.Update(ctx, up, v1.UpdateOptions{})
+_, err := client.Update(ctx, up, resource.UpdateOptions{
+ResourceVersion: up.ResourceVersion,
+})
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 })
 })
 }

 deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}

+oldClient := common.NewTemplateGroupClient(t, tc.user) // TODO replace with normal client once delete is fixed
 if tc.canDelete {
 t.Run("should be able to delete template group", func(t *testing.T) {
-err := client.Delete(ctx, expected.Name, deleteOptions)
+err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
 require.NoError(t, err)

 t.Run("should get NotFound if name does not exist", func(t *testing.T) {
-err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
+err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
 require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
 })
 })
 } else {
 t.Run("should be forbidden to delete template group", func(t *testing.T) {
-err := client.Delete(ctx, expected.Name, deleteOptions)
+err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

 t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
-err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
+err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 })
 })
-require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
+require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
 }

 if tc.canRead {
 t.Run("should get list with just default template if no template groups", func(t *testing.T) {
-list, err := client.List(ctx, v1.ListOptions{})
+list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
 require.NoError(t, err)
 require.Len(t, list.Items, 1)
 require.Equal(t, templates.DefaultTemplateName, list.Items[0].Name)
@@ -374,7 +380,8 @@ func TestIntegrationProvisioning(t *testing.T) {
 org := helper.Org1

 admin := org.Admin
-adminClient := common.NewTemplateGroupClient(t, admin)
+adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(admin.GetClientRegistry())
+require.NoError(t, err)

 env := helper.GetEnv()
 ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -390,7 +397,7 @@ func TestIntegrationProvisioning(t *testing.T) {
 Content: `{{ define "test" }} test {{ end }}`,
 Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
 },
-}, v1.CreateOptions{})
+}, resource.CreateOptions{})
 require.NoError(t, err)
 require.Equal(t, "none", created.GetProvenanceStatus())

@@ -399,7 +406,7 @@ func TestIntegrationProvisioning(t *testing.T) {
 Name: created.Spec.Title,
 }, admin.Identity.GetOrgID(), "API"))

-got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
+got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
 require.NoError(t, err)
 require.Equal(t, "API", got.GetProvenanceStatus())
 })
@@ -407,12 +414,12 @@ func TestIntegrationProvisioning(t *testing.T) {
 updated := created.Copy().(*v0alpha1.TemplateGroup)
 updated.Spec.Content = `{{ define "another-test" }} test {{ end }}`

-_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
+_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 })

 t.Run("should not let delete if provisioned", func(t *testing.T) {
-err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
+err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 })
 }
@@ -423,8 +430,9 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
 ctx := context.Background()
 helper := getTestHelper(t)

-adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
-
+adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+require.NoError(t, err)
+oldClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
 template := v0alpha1.TemplateGroup{
 ObjectMeta: v1.ObjectMeta{
 Namespace: "default",
@@ -436,21 +444,22 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
 },
 }

-created, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
+created, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
 require.NoError(t, err)
 require.NotNil(t, created)
 require.NotEmpty(t, created.ResourceVersion)

 t.Run("should forbid if version does not match", func(t *testing.T) {
 updated := created.Copy().(*v0alpha1.TemplateGroup)
-updated.ResourceVersion = "test"
-_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
+_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
+ResourceVersion: "test",
+})
 require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
 })
 t.Run("should update if version matches", func(t *testing.T) {
 updated := created.Copy().(*v0alpha1.TemplateGroup)
 updated.Spec.Content = `{{ define "test-another" }} test {{ end }}`
-actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
+actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
 require.NoError(t, err)
 require.EqualValues(t, updated.Spec, actualUpdated.Spec)
 require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
@@ -460,16 +469,16 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
 updated.ResourceVersion = ""
 updated.Spec.Content = `{{ define "test-another-2" }} test {{ end }}`

-actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
+actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
 require.NoError(t, err)
 require.EqualValues(t, updated.Spec, actualUpdated.Spec)
 require.NotEqual(t, created.ResourceVersion, actualUpdated.ResourceVersion)
 })
 t.Run("should fail to delete if version does not match", func(t *testing.T) {
-actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
+actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
 require.NoError(t, err)

-err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
+err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
 Preconditions: &v1.Preconditions{
 ResourceVersion: util.Pointer("something"),
 },
@@ -477,10 +486,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
 require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
 })
 t.Run("should succeed if version matches", func(t *testing.T) {
-actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
+actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
 require.NoError(t, err)

-err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
+err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
 Preconditions: &v1.Preconditions{
 ResourceVersion: util.Pointer(actual.ResourceVersion),
 },
@@ -488,10 +497,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
 require.NoError(t, err)
 })
 t.Run("should succeed if version is empty", func(t *testing.T) {
-actual, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
+actual, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
 require.NoError(t, err)

-err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
+err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
 Preconditions: &v1.Preconditions{
 ResourceVersion: util.Pointer(actual.ResourceVersion),
 },
@@ -506,7 +515,8 @@ func TestIntegrationPatch(t *testing.T) {
 ctx := context.Background()
 helper := getTestHelper(t)

-adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
+adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+require.NoError(t, err)

 template := v0alpha1.TemplateGroup{
 ObjectMeta: v1.ObjectMeta{
@@ -519,8 +529,10 @@ func TestIntegrationPatch(t *testing.T) {
 },
 }

-current, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
+current, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
 require.NoError(t, err)
+oldClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)

 require.NotNil(t, current)
 require.NotEmpty(t, current.ResourceVersion)

@@ -531,7 +543,7 @@ func TestIntegrationPatch(t *testing.T) {
 }
 }`

-result, err := adminClient.Patch(ctx, current.Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
+result, err := oldClient.Patch(ctx, current.GetStaticMetadata().Identifier().Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
 require.NoError(t, err)
 require.Equal(t, `{{ define "test-another" }} test {{ end }}`, result.Spec.Content)
 current = result
@@ -540,18 +552,15 @@ func TestIntegrationPatch(t *testing.T) {
 t.Run("should patch with json patch", func(t *testing.T) {
 expected := `{{ define "test-json-patch" }} test {{ end }}`

-patch := []map[string]interface{}{
+patch := []resource.PatchOperation{
 {
-"op": "replace",
-"path": "/spec/content",
-"value": expected,
+Operation: "replace",
+Path: "/spec/content",
+Value: expected,
 },
 }

-patchData, err := json.Marshal(patch)
-require.NoError(t, err)
-
-result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
+result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
 require.NoError(t, err)
 expectedSpec := current.Spec
 expectedSpec.Content = expected
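The patch hunk above swaps hand-built `[]map[string]interface{}` JSON-patch documents for typed operations. A minimal sketch of why that is a safe refactor, with `PatchOperation` reproduced locally as an assumption about the SDK type's JSON tags (both forms should marshal to the same RFC 6902 wire format):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PatchOperation is a local stand-in for the typed operation the new
// client accepts; the json tags are assumed from the diff's old keys.
type PatchOperation struct {
	Operation string      `json:"op"`
	Path      string      `json:"path"`
	Value     interface{} `json:"value"`
}

// marshalOps renders typed operations as a JSON-patch document.
func marshalOps(ops []PatchOperation) string {
	b, _ := json.Marshal(ops)
	return string(b)
}

func main() {
	// Old style built the JSON-patch document by hand:
	old := []map[string]interface{}{
		{"op": "replace", "path": "/spec/content", "value": "x"},
	}
	ob, _ := json.Marshal(old)

	// New style passes typed operations; the wire format matches.
	nb := marshalOps([]PatchOperation{{Operation: "replace", Path: "/spec/content", Value: "x"}})
	fmt.Println(string(ob) == nb)
}
```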
@@ -565,7 +574,8 @@ func TestIntegrationListSelector(t *testing.T) {

 ctx := context.Background()
 helper := getTestHelper(t)
-adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
+adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+require.NoError(t, err)

 template1 := &v0alpha1.TemplateGroup{
 ObjectMeta: v1.ObjectMeta{
@@ -577,7 +587,7 @@ func TestIntegrationListSelector(t *testing.T) {
 Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
 },
 }
-template1, err := adminClient.Create(ctx, template1, v1.CreateOptions{})
+template1, err = adminClient.Create(ctx, template1, resource.CreateOptions{})
 require.NoError(t, err)

 template2 := &v0alpha1.TemplateGroup{
@@ -590,7 +600,7 @@ func TestIntegrationListSelector(t *testing.T) {
 Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
 },
 }
-template2, err = adminClient.Create(ctx, template2, v1.CreateOptions{})
+template2, err = adminClient.Create(ctx, template2, resource.CreateOptions{})
 require.NoError(t, err)
 env := helper.GetEnv()
 ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -599,18 +609,18 @@ func TestIntegrationListSelector(t *testing.T) {
 require.NoError(t, db.SetProvenance(ctx, &definitions.NotificationTemplate{
 Name: template2.Spec.Title,
 }, helper.Org1.Admin.Identity.GetOrgID(), "API"))
-template2, err = adminClient.Get(ctx, template2.Name, v1.GetOptions{})
+template2, err = adminClient.Get(ctx, template2.GetStaticMetadata().Identifier())

 require.NoError(t, err)

-tmpls, err := adminClient.List(ctx, v1.ListOptions{})
+tmpls, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
 require.NoError(t, err)
 require.Len(t, tmpls.Items, 3) // Includes default template.

 t.Run("should filter by template name", func(t *testing.T) {
 t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
-list, err := adminClient.List(ctx, v1.ListOptions{
-FieldSelector: "spec.title=" + template1.Spec.Title,
+list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
+FieldSelectors: []string{"spec.title=" + template1.Spec.Title},
 })
 require.NoError(t, err)
 require.Len(t, list.Items, 1)
@@ -618,8 +628,8 @@ func TestIntegrationListSelector(t *testing.T) {
 })

 t.Run("should filter by template metadata name", func(t *testing.T) {
-list, err := adminClient.List(ctx, v1.ListOptions{
-FieldSelector: "metadata.name=" + template2.Name,
+list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
+FieldSelectors: []string{"metadata.name=" + template2.Name},
 })
 require.NoError(t, err)
 require.Len(t, list.Items, 1)
@@ -628,8 +638,8 @@ func TestIntegrationListSelector(t *testing.T) {

 t.Run("should filter by multiple filters", func(t *testing.T) {
 t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
-list, err := adminClient.List(ctx, v1.ListOptions{
-FieldSelector: fmt.Sprintf("metadata.name=%s,spec.title=%s", template2.Name, template2.Spec.Title),
+list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
+FieldSelectors: []string{fmt.Sprintf("metadata.name=%s,spec.title=%s", template2.Name, template2.Spec.Title)},
 })
 require.NoError(t, err)
 require.Len(t, list.Items, 1)
@@ -637,8 +647,8 @@ func TestIntegrationListSelector(t *testing.T) {
 })

 t.Run("should be empty when filter does not match", func(t *testing.T) {
-list, err := adminClient.List(ctx, v1.ListOptions{
-FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
+list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
+FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", "unknown")},
 })
 require.NoError(t, err)
 require.Empty(t, list.Items)
@@ -646,17 +656,17 @@ func TestIntegrationListSelector(t *testing.T) {

 t.Run("should filter by default template name", func(t *testing.T) {
 t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
-list, err := adminClient.List(ctx, v1.ListOptions{
-FieldSelector: "spec.title=" + v0alpha1.DefaultTemplateTitle,
+list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
+FieldSelectors: []string{"spec.title=" + v0alpha1.DefaultTemplateTitle},
 })
 require.NoError(t, err)
 require.Len(t, list.Items, 1)
 require.Equal(t, templates.DefaultTemplateName, list.Items[0].Name)

 // Now just non-default templates
-list, err = adminClient.List(ctx, v1.ListOptions{
-FieldSelector: "spec.title!=" + v0alpha1.DefaultTemplateTitle,
-})
+list, err = adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
+FieldSelectors: []string{"spec.title!=" + v0alpha1.DefaultTemplateTitle}},
+)
 require.NoError(t, err)
 require.Len(t, list.Items, 2)
 require.NotEqualf(t, templates.DefaultTemplateName, list.Items[0].Name, "Expected non-default template but got %s", list.Items[0].Name)
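The list hunks above also swap one comma-separated `FieldSelector` string for a `FieldSelectors` slice of individual selector expressions. Assuming the slice entries are ultimately joined back into the same comma-separated selector the API server evaluates, the relationship is just:

```go
package main

import (
	"fmt"
	"strings"
)

// joinSelectors sketches the assumed relationship between the new
// FieldSelectors []string and the old comma-separated FieldSelector.
func joinSelectors(selectors []string) string {
	return strings.Join(selectors, ",")
}

func main() {
	oldStyle := "metadata.name=abc,spec.title=My title"
	newStyle := []string{"metadata.name=abc", "spec.title=My title"}
	fmt.Println(joinSelectors(newStyle) == oldStyle)
}
```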
@@ -669,7 +679,8 @@ func TestIntegrationKinds(t *testing.T) {

 ctx := context.Background()
 helper := getTestHelper(t)
-client := common.NewTemplateGroupClient(t, helper.Org1.Admin)
+client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+require.NoError(t, err)

 newTemplate := &v0alpha1.TemplateGroup{
 ObjectMeta: v1.ObjectMeta{
@@ -683,17 +694,17 @@ func TestIntegrationKinds(t *testing.T) {
 }

 t.Run("should not let create Mimir template", func(t *testing.T) {
-_, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
+_, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
 require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
 })

 t.Run("should not let change kind", func(t *testing.T) {
 newTemplate.Spec.Kind = v0alpha1.TemplateGroupTemplateKindGrafana
-created, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
+created, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
 require.NoError(t, err)

 created.Spec.Kind = v0alpha1.TemplateGroupTemplateKindMimir
-_, err = client.Update(ctx, created, v1.UpdateOptions{})
+_, err = client.Update(ctx, created, resource.UpdateOptions{})
 require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
 })
 }
@@ -10,6 +10,7 @@ import (
 "slices"
 "testing"

+"github.com/grafana/grafana-app-sdk/resource"
 "github.com/prometheus/alertmanager/config"
 "github.com/stretchr/testify/assert"
 "github.com/stretchr/testify/require"
@@ -57,7 +58,8 @@ func TestIntegrationResourceIdentifier(t *testing.T) {

 ctx := context.Background()
 helper := getTestHelper(t)
-client := common.NewTimeIntervalClient(t, helper.Org1.Admin)
+client, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+require.NoError(t, err)

 newInterval := &v0alpha1.TimeInterval{
 ObjectMeta: v1.ObjectMeta{
@@ -72,22 +74,22 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
 t.Run("create should fail if object name is specified", func(t *testing.T) {
 interval := newInterval.Copy().(*v0alpha1.TimeInterval)
 interval.Name = "time-newInterval"
-_, err := client.Create(ctx, interval, v1.CreateOptions{})
+_, err := client.Create(ctx, interval, resource.CreateOptions{})
 require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
 })

-var resourceID string
+var resourceID resource.Identifier
 t.Run("create should succeed and provide resource name", func(t *testing.T) {
-actual, err := client.Create(ctx, newInterval, v1.CreateOptions{})
+actual, err := client.Create(ctx, newInterval, resource.CreateOptions{})
 require.NoError(t, err)
 require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
 require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
-resourceID = actual.Name
+resourceID = actual.GetStaticMetadata().Identifier()
 })

 var existingInterval *v0alpha1.TimeInterval
 t.Run("resource should be available by the identifier", func(t *testing.T) {
-actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
+actual, err := client.Get(ctx, resourceID)
 require.NoError(t, err)
 require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
 require.Equal(t, newInterval.Spec, actual.Spec)
@@ -100,13 +102,13 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
 }
 updated := existingInterval.Copy().(*v0alpha1.TimeInterval)
 updated.Spec.Name = "another-newInterval"
-actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
+actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
 require.NoError(t, err)
 require.Equal(t, updated.Spec, actual.Spec)
 require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")
 require.NotEqualf(t, updated.ResourceVersion, actual.ResourceVersion, "Update should change the resource version but it didn't")

-resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
+resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
 require.NoError(t, err)
 require.Equal(t, actual, resource)
 })
@@ -189,11 +191,13 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
 },
 }

-adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
+adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
+require.NoError(t, err)

 for _, tc := range testCases {
 t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
-client := common.NewTimeIntervalClient(t, tc.user)
+client, err := v0alpha1.NewTimeIntervalClientFromGenerator(tc.user.GetClientRegistry())
+require.NoError(t, err)
 var expected = &v0alpha1.TimeInterval{
 ObjectMeta: v1.ObjectMeta{
 Namespace: "default",
@@ -209,12 +213,12 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {

 if tc.canCreate {
 t.Run("should be able to create time interval", func(t *testing.T) {
-actual, err := client.Create(ctx, expected, v1.CreateOptions{})
+actual, err := client.Create(ctx, expected, resource.CreateOptions{})
 require.NoErrorf(t, err, "Payload %s", string(d))
 require.Equal(t, expected.Spec, actual.Spec)

 t.Run("should fail if already exists", func(t *testing.T) {
-_, err := client.Create(ctx, actual, v1.CreateOptions{})
+_, err := client.Create(ctx, actual, resource.CreateOptions{})
 require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
 })

@@ -222,45 +226,45 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
 })
 } else {
 t.Run("should be forbidden to create", func(t *testing.T) {
-_, err := client.Create(ctx, expected, v1.CreateOptions{})
+_, err := client.Create(ctx, expected, resource.CreateOptions{})
 require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
 })

 // create resource to proceed with other tests
-expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
+expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
 require.NoErrorf(t, err, "Payload %s", string(d))
 require.NotNil(t, expected)
 }

 if tc.canRead {
 t.Run("should be able to list time intervals", func(t *testing.T) {
-list, err := client.List(ctx, v1.ListOptions{})
+list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
 require.NoError(t, err)
 require.Len(t, list.Items, 1)
 })

 t.Run("should be able to read time interval by resource identifier", func(t *testing.T) {
-got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
+got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
 require.NoError(t, err)
 require.Equal(t, expected, got)
 require.Equal(t, expected.Spec, got.Spec)

 t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
-_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
+_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
 require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
 })
 })
 } else {
 t.Run("should be forbidden to list time intervals", func(t *testing.T) {
-_, err := client.List(ctx, v1.ListOptions{})
+_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 })

 t.Run("should be forbidden to read time interval by name", func(t *testing.T) {
-_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
+_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
 require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
|
||||
t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
|
||||
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
|
||||
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
})
|
||||
@@ -274,7 +278,7 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
|
||||
|
||||
if tc.canUpdate {
|
||||
t.Run("should be able to update time interval", func(t *testing.T) {
|
||||
updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
|
||||
updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
|
||||
require.NoErrorf(t, err, "Payload %s", string(d))
|
||||
|
||||
expected = updated
|
||||
@@ -282,52 +286,54 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
|
||||
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
|
||||
up := updatedExpected.Copy().(*v0alpha1.TimeInterval)
|
||||
up.Name = "notFound"
|
||||
_, err := client.Update(ctx, up, v1.UpdateOptions{})
|
||||
_, err := client.Update(ctx, up, resource.UpdateOptions{})
|
||||
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
|
||||
})
|
||||
})
|
||||
} else {
|
||||
t.Run("should be forbidden to update time interval", func(t *testing.T) {
|
||||
_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
|
||||
_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
|
||||
t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
|
||||
up := updatedExpected.Copy().(*v0alpha1.TimeInterval)
|
||||
up.Name = "notFound"
|
||||
_, err := client.Update(ctx, up, v1.UpdateOptions{})
|
||||
_, err := client.Update(ctx, up, resource.UpdateOptions{
|
||||
ResourceVersion: up.ResourceVersion,
|
||||
})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
})
|
||||
}
|
||||
|
||||
deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
|
||||
|
||||
oldClient := common.NewTimeIntervalClient(t, tc.user)
|
||||
if tc.canDelete {
|
||||
t.Run("should be able to delete time interval", func(t *testing.T) {
|
||||
err := client.Delete(ctx, expected.Name, deleteOptions)
|
||||
err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
|
||||
require.NoError(t, err)
|
||||
|
||||
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
|
||||
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
|
||||
err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
|
||||
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
|
||||
})
|
||||
})
|
||||
} else {
|
||||
t.Run("should be forbidden to delete time interval", func(t *testing.T) {
|
||||
err := client.Delete(ctx, expected.Name, deleteOptions)
|
||||
err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
|
||||
t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
|
||||
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
|
||||
err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
})
|
||||
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
|
||||
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
|
||||
}
|
||||
|
||||
if tc.canRead {
|
||||
t.Run("should get empty list if no mute timings", func(t *testing.T) {
|
||||
list, err := client.List(ctx, v1.ListOptions{})
|
||||
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, list.Items, 0)
|
||||
})
|
||||
@@ -345,7 +351,8 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
|
||||
org := helper.Org1
|
||||
|
||||
admin := org.Admin
|
||||
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
|
||||
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
|
||||
env := helper.GetEnv()
|
||||
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
|
||||
@@ -360,7 +367,7 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
|
||||
Name: "time-interval-1",
|
||||
TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
|
||||
},
|
||||
}, v1.CreateOptions{})
|
||||
}, resource.CreateOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, "none", created.GetProvenanceStatus())
|
||||
|
||||
@@ -371,7 +378,7 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
|
||||
},
|
||||
}, admin.Identity.GetOrgID(), "API"))
|
||||
|
||||
got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
|
||||
got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, "API", got.GetProvenanceStatus())
|
||||
})
|
||||
@@ -379,12 +386,12 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
|
||||
updated := created.Copy().(*v0alpha1.TimeInterval)
|
||||
updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
|
||||
|
||||
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
|
||||
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
|
||||
t.Run("should not let delete if provisioned", func(t *testing.T) {
|
||||
err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
|
||||
err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
}
|
||||
@@ -395,7 +402,9 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
helper := getTestHelper(t)
|
||||
|
||||
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
|
||||
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
oldClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
|
||||
|
||||
interval := v0alpha1.TimeInterval{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
@@ -407,21 +416,22 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
created, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
|
||||
created, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, created)
|
||||
require.NotEmpty(t, created.ResourceVersion)
|
||||
|
||||
t.Run("should forbid if version does not match", func(t *testing.T) {
|
||||
updated := created.Copy().(*v0alpha1.TimeInterval)
|
||||
updated.ResourceVersion = "test"
|
||||
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
|
||||
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
|
||||
ResourceVersion: "test",
|
||||
})
|
||||
require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
t.Run("should update if version matches", func(t *testing.T) {
|
||||
updated := created.Copy().(*v0alpha1.TimeInterval)
|
||||
updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
|
||||
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
|
||||
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
|
||||
require.NoError(t, err)
|
||||
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
|
||||
require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
|
||||
@@ -431,16 +441,16 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
|
||||
updated.ResourceVersion = ""
|
||||
updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
|
||||
|
||||
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
|
||||
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
|
||||
require.NoError(t, err)
|
||||
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
|
||||
require.NotEqual(t, created.ResourceVersion, actualUpdated.ResourceVersion)
|
||||
})
|
||||
t.Run("should fail to delete if version does not match", func(t *testing.T) {
|
||||
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
|
||||
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
|
||||
require.NoError(t, err)
|
||||
|
||||
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
|
||||
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
|
||||
Preconditions: &v1.Preconditions{
|
||||
ResourceVersion: util.Pointer("something"),
|
||||
},
|
||||
@@ -448,10 +458,10 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
|
||||
require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
t.Run("should succeed if version matches", func(t *testing.T) {
|
||||
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
|
||||
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
|
||||
require.NoError(t, err)
|
||||
|
||||
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
|
||||
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
|
||||
Preconditions: &v1.Preconditions{
|
||||
ResourceVersion: util.Pointer(actual.ResourceVersion),
|
||||
},
|
||||
@@ -459,10 +469,10 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
})
|
||||
t.Run("should succeed if version is empty", func(t *testing.T) {
|
||||
actual, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
|
||||
actual, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
|
||||
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
|
||||
Preconditions: &v1.Preconditions{
|
||||
ResourceVersion: util.Pointer(actual.ResourceVersion),
|
||||
},
|
||||
@@ -477,7 +487,9 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
helper := getTestHelper(t)
|
||||
|
||||
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
|
||||
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
oldClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
|
||||
|
||||
interval := v0alpha1.TimeInterval{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
@@ -489,7 +501,7 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
current, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
|
||||
current, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, current)
|
||||
require.NotEmpty(t, current.ResourceVersion)
|
||||
@@ -501,7 +513,7 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
|
||||
}
|
||||
}`
|
||||
|
||||
result, err := adminClient.Patch(ctx, current.Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
|
||||
result, err := oldClient.Patch(ctx, current.GetStaticMetadata().Identifier().Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Empty(t, result.Spec.TimeIntervals)
|
||||
current = result
|
||||
@@ -510,18 +522,15 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
|
||||
t.Run("should patch with json patch", func(t *testing.T) {
|
||||
expected := fakes.IntervalGenerator{}.Generate()
|
||||
|
||||
patch := []map[string]interface{}{
|
||||
patch := []resource.PatchOperation{
|
||||
{
|
||||
"op": "add",
|
||||
"path": "/spec/time_intervals/-",
|
||||
"value": expected,
|
||||
Operation: "add",
|
||||
Path: "/spec/time_intervals/-",
|
||||
Value: expected,
|
||||
},
|
||||
}
|
||||
|
||||
patchData, err := json.Marshal(patch)
|
||||
require.NoError(t, err)
|
||||
|
||||
result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
|
||||
result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
|
||||
require.NoError(t, err)
|
||||
expectedSpec := v0alpha1.TimeIntervalSpec{
|
||||
Name: current.Spec.Name,
|
||||
@@ -540,7 +549,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
helper := getTestHelper(t)
|
||||
|
||||
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
|
||||
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
|
||||
interval1 := &v0alpha1.TimeInterval{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
@@ -551,7 +561,7 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
|
||||
TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
|
||||
},
|
||||
}
|
||||
interval1, err := adminClient.Create(ctx, interval1, v1.CreateOptions{})
|
||||
interval1, err = adminClient.Create(ctx, interval1, resource.CreateOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
interval2 := &v0alpha1.TimeInterval{
|
||||
@@ -563,7 +573,7 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
|
||||
TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
|
||||
},
|
||||
}
|
||||
interval2, err = adminClient.Create(ctx, interval2, v1.CreateOptions{})
|
||||
interval2, err = adminClient.Create(ctx, interval2, resource.CreateOptions{})
|
||||
require.NoError(t, err)
|
||||
env := helper.GetEnv()
|
||||
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
|
||||
@@ -574,18 +584,18 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
|
||||
Name: interval2.Spec.Name,
|
||||
},
|
||||
}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
|
||||
interval2, err = adminClient.Get(ctx, interval2.Name, v1.GetOptions{})
|
||||
interval2, err = adminClient.Get(ctx, interval2.GetStaticMetadata().Identifier())
|
||||
|
||||
require.NoError(t, err)
|
||||
|
||||
intervals, err := adminClient.List(ctx, v1.ListOptions{})
|
||||
intervals, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, intervals.Items, 2)
|
||||
|
||||
t.Run("should filter by interval name", func(t *testing.T) {
|
||||
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
|
||||
list, err := adminClient.List(ctx, v1.ListOptions{
|
||||
FieldSelector: "spec.name=" + interval1.Spec.Name,
|
||||
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
|
||||
FieldSelectors: []string{"spec.name=" + interval1.Spec.Name},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, list.Items, 1)
|
||||
@@ -593,8 +603,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
|
||||
})
|
||||
|
||||
t.Run("should filter by interval metadata name", func(t *testing.T) {
|
||||
list, err := adminClient.List(ctx, v1.ListOptions{
|
||||
FieldSelector: "metadata.name=" + interval2.Name,
|
||||
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
|
||||
FieldSelectors: []string{"metadata.name=" + interval2.Name},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, list.Items, 1)
|
||||
@@ -603,8 +613,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
|
||||
|
||||
t.Run("should filter by multiple filters", func(t *testing.T) {
|
||||
t.Skip("disabled until app installer supports it")
|
||||
list, err := adminClient.List(ctx, v1.ListOptions{
|
||||
FieldSelector: fmt.Sprintf("metadata.name=%s,spec.name=%s", interval2.Name, interval2.Spec.Name),
|
||||
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
|
||||
FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", interval2.Name), fmt.Sprintf("spec.name=%s", interval2.Spec.Name)},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, list.Items, 1)
|
||||
@@ -612,8 +622,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
|
||||
})
|
||||
|
||||
t.Run("should be empty when filter does not match", func(t *testing.T) {
|
||||
list, err := adminClient.List(ctx, v1.ListOptions{
|
||||
FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
|
||||
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
|
||||
FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", "unknown")},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
require.Empty(t, list.Items)
|
||||
@@ -647,18 +657,20 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
|
||||
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
v1intervals, err := timeinterval.ConvertToK8sResources(orgID, mtis, func(int64) string { return "default" }, nil)
|
||||
require.NoError(t, err)
|
||||
for _, interval := range v1intervals.Items {
|
||||
_, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
|
||||
_, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
routeClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
|
||||
routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
v1route, err := routingtree.ConvertToK8sResource(helper.Org1.Admin.Identity.GetOrgID(), *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
|
||||
require.NoError(t, err)
|
||||
_, err = routeClient.Update(ctx, v1route, v1.UpdateOptions{})
|
||||
_, err = routeClient.Update(ctx, v1route, resource.UpdateOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
postGroupRaw, err := testData.ReadFile(path.Join("test-data", "rulegroup-1.json"))
|
||||
@@ -675,7 +687,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
|
||||
currentRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
|
||||
require.Equal(t, http.StatusAccepted, status)
|
||||
|
||||
intervals, err := adminClient.List(ctx, v1.ListOptions{})
|
||||
intervals, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, intervals.Items, 3)
|
||||
intervalIdx := slices.IndexFunc(intervals.Items, func(interval v0alpha1.TimeInterval) bool {
|
||||
@@ -700,7 +712,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
|
||||
renamed := interval.Copy().(*v0alpha1.TimeInterval)
|
||||
renamed.Spec.Name += "-new"
|
||||
|
||||
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
|
||||
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
updatedRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
|
||||
@@ -732,20 +744,20 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
|
||||
t.Cleanup(func() {
|
||||
require.NoError(t, db.DeleteProvenance(ctx, ¤tRoute, orgID))
|
||||
})
|
||||
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
|
||||
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
|
||||
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
|
||||
t.Run("provisioned rules", func(t *testing.T) {
|
||||
ruleUid := currentRuleGroup.Rules[0].GrafanaManagedAlert.UID
|
||||
resource := &ngmodels.AlertRule{UID: ruleUid}
|
||||
require.NoError(t, db.SetProvenance(ctx, resource, orgID, "API"))
|
||||
rule := &ngmodels.AlertRule{UID: ruleUid}
|
||||
require.NoError(t, db.SetProvenance(ctx, rule, orgID, "API"))
|
||||
t.Cleanup(func() {
|
||||
require.NoError(t, db.DeleteProvenance(ctx, resource, orgID))
|
||||
require.NoError(t, db.DeleteProvenance(ctx, rule, orgID))
|
||||
})
|
||||
|
||||
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
|
||||
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
|
||||
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
@@ -754,7 +766,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
|
||||
|
||||
t.Run("Delete", func(t *testing.T) {
|
||||
t.Run("should fail to delete if time interval is used in rule and routes", func(t *testing.T) {
|
||||
err := adminClient.Delete(ctx, interval.Name, v1.DeleteOptions{})
|
||||
err := adminClient.Delete(ctx, interval.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
|
||||
@@ -763,7 +775,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
|
||||
route.Routes[0].MuteTimeIntervals = nil
|
||||
legacyCli.UpdateRoute(t, route, true)
|
||||
|
||||
err = adminClient.Delete(ctx, interval.Name, v1.DeleteOptions{})
|
||||
err = adminClient.Delete(ctx, interval.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
|
||||
@@ -773,7 +785,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
|
||||
})
|
||||
intervalToDelete := intervals.Items[idx]
|
||||
|
||||
err = adminClient.Delete(ctx, intervalToDelete.Name, v1.DeleteOptions{})
|
||||
err = adminClient.Delete(ctx, intervalToDelete.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
})
|
||||
@@ -785,7 +797,8 @@ func TestIntegrationTimeIntervalValidation(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
helper := getTestHelper(t)
|
||||
|
||||
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
|
||||
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
|
||||
testCases := []struct {
|
||||
name string
|
||||
@@ -819,7 +832,7 @@ func TestIntegrationTimeIntervalValidation(t *testing.T) {
|
||||
},
|
||||
Spec: tc.interval,
|
||||
}
|
||||
_, err := adminClient.Create(ctx, i, v1.CreateOptions{})
|
||||
_, err := adminClient.Create(ctx, i, resource.CreateOptions{})
|
||||
require.Error(t, err)
|
||||
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest, got: %s", err)
|
||||
})
|
||||
|
||||
@@ -14,7 +14,7 @@ import (
"testing"
"time"

-githubConnection "github.com/grafana/grafana/apps/provisioning/pkg/connection/github"
+appsdk_k8s "github.com/grafana/grafana-app-sdk/k8s"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
@@ -28,6 +28,8 @@ import (
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"

+githubConnection "github.com/grafana/grafana/apps/provisioning/pkg/connection/github"
+
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/configprovider"
@@ -57,6 +59,8 @@
const (
Org1 = "Org1"
Org2 = "OrgB"
+
+DefaultNamespace = "default"
)

var (
@@ -445,6 +449,11 @@ func (c *User) RESTClient(t *testing.T, gv *schema.GroupVersion) *rest.RESTClient {
return client
}

+func (c *User) GetClientRegistry() *appsdk_k8s.ClientRegistry {
+restConfig := c.NewRestConfig()
+return appsdk_k8s.NewClientRegistry(*restConfig, appsdk_k8s.DefaultClientConfig())
+}
+
type RequestParams struct {
User User
Method string // GET, POST, PATCH, etc
@@ -25,6 +25,10 @@ export class ExportAsCode extends ShareExportTab {
public getTabLabel(): string {
return t('export.json.title', 'Export dashboard');
}

+public getSubtitle(): string | undefined {
+return t('export.json.info-text', 'Copy or download a file containing the definition of your dashboard');
+}
}

function ExportAsCodeRenderer({ model }: SceneComponentProps<ExportAsCode>) {
@@ -53,12 +57,6 @@ function ExportAsCodeRenderer({ model }: SceneComponentProps<ExportAsCode>) {

return (
<div data-testid={selector.container} className={styles.container}>
-<p>
-<Trans i18nKey="export.json.info-text">
-Copy or download a file containing the definition of your dashboard
-</Trans>
-</p>
-
{config.featureToggles.kubernetesDashboards ? (
<ResourceExport
dashboardJson={dashboardJson}
@@ -0,0 +1,189 @@
+import { render, screen, within } from '@testing-library/react';
+import userEvent from '@testing-library/user-event';
+import { AsyncState } from 'react-use/lib/useAsync';
+
+import { selectors as e2eSelectors } from '@grafana/e2e-selectors';
+import { Dashboard } from '@grafana/schema';
+import { Spec as DashboardV2Spec } from '@grafana/schema/dist/esm/schema/dashboard/v2';
+
+import { ExportMode, ResourceExport } from './ResourceExport';
+
+type DashboardJsonState = AsyncState<{
+json: Dashboard | DashboardV2Spec | { error: unknown };
+hasLibraryPanels?: boolean;
+initialSaveModelVersion: 'v1' | 'v2';
+}>;
+
+const selector = e2eSelectors.pages.ExportDashboardDrawer.ExportAsJson;
+
+const createDefaultProps = (overrides?: Partial<Parameters<typeof ResourceExport>[0]>) => {
+const defaultProps: Parameters<typeof ResourceExport>[0] = {
+dashboardJson: {
+loading: false,
+value: {
+json: { title: 'Test Dashboard' } as Dashboard,
+hasLibraryPanels: false,
+initialSaveModelVersion: 'v1',
+},
+} as DashboardJsonState,
+isSharingExternally: false,
+exportMode: ExportMode.Classic,
+isViewingYAML: false,
+onExportModeChange: jest.fn(),
+onShareExternallyChange: jest.fn(),
+onViewYAML: jest.fn(),
+};
+
+return { ...defaultProps, ...overrides };
+};
+
+const createV2DashboardJson = (hasLibraryPanels = false): DashboardJsonState => ({
+loading: false,
+value: {
+json: {
+title: 'Test V2 Dashboard',
+spec: {
+elements: {},
+},
+} as unknown as DashboardV2Spec,
+hasLibraryPanels,
+initialSaveModelVersion: 'v2',
+},
+});
+
+const expandOptions = async () => {
+const button = screen.getByRole('button', { expanded: false });
+await userEvent.click(button);
+};
+
+describe('ResourceExport', () => {
+describe('export mode options for v1 dashboard', () => {
+it('should show three export mode options in correct order: Classic, V1 Resource, V2 Resource', async () => {
+render(<ResourceExport {...createDefaultProps()} />);
+await expandOptions();
+
+const radioGroup = screen.getByRole('radiogroup', { name: /model/i });
+const labels = within(radioGroup)
+.getAllByRole('radio')
+.map((radio) => radio.parentElement?.textContent?.trim());
+
+expect(labels).toHaveLength(3);
+expect(labels).toEqual(['Classic', 'V1 Resource', 'V2 Resource']);
+});
+
+it('should have first option selected by default when exportMode is Classic', async () => {
+render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.Classic })} />);
+await expandOptions();
+
+const radioGroup = screen.getByRole('radiogroup', { name: /model/i });
+const radios = within(radioGroup).getAllByRole('radio');
+expect(radios[0]).toBeChecked();
+});
+
+it('should call onExportModeChange when export mode is changed', async () => {
+const onExportModeChange = jest.fn();
+render(<ResourceExport {...createDefaultProps({ onExportModeChange })} />);
+await expandOptions();
+
+const radioGroup = screen.getByRole('radiogroup', { name: /model/i });
+const radios = within(radioGroup).getAllByRole('radio');
+await userEvent.click(radios[1]); // V1 Resource
+expect(onExportModeChange).toHaveBeenCalledWith(ExportMode.V1Resource);
+});
+});
+
+describe('export mode options for v2 dashboard', () => {
+it('should not show export mode options', async () => {
+render(<ResourceExport {...createDefaultProps({ dashboardJson: createV2DashboardJson() })} />);
+await expandOptions();
+
+expect(screen.queryByRole('radiogroup', { name: /model/i })).not.toBeInTheDocument();
+});
+});
+
+describe('format options', () => {
+it('should not show format options when export mode is Classic', async () => {
+render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.Classic })} />);
+await expandOptions();
+
+expect(screen.getByRole('radiogroup', { name: /model/i })).toBeInTheDocument();
+expect(screen.queryByRole('radiogroup', { name: /format/i })).not.toBeInTheDocument();
+});
+
+it.each([ExportMode.V1Resource, ExportMode.V2Resource])(
+'should show format options when export mode is %s',
+async (exportMode) => {
+render(<ResourceExport {...createDefaultProps({ exportMode })} />);
+await expandOptions();
+
+expect(screen.getByRole('radiogroup', { name: /model/i })).toBeInTheDocument();
+expect(screen.getByRole('radiogroup', { name: /format/i })).toBeInTheDocument();
+}
+);
+
+it('should have first format option selected when isViewingYAML is false', async () => {
+render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.V1Resource, isViewingYAML: false })} />);
+await expandOptions();
+
+const formatGroup = screen.getByRole('radiogroup', { name: /format/i });
+const formatRadios = within(formatGroup).getAllByRole('radio');
+expect(formatRadios[0]).toBeChecked(); // JSON
+});
+
+it('should have second format option selected when isViewingYAML is true', async () => {
+render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.V1Resource, isViewingYAML: true })} />);
+await expandOptions();
+
+const formatGroup = screen.getByRole('radiogroup', { name: /format/i });
+const formatRadios = within(formatGroup).getAllByRole('radio');
+expect(formatRadios[1]).toBeChecked(); // YAML
|
||||
});
|
||||
|
||||
it('should call onViewYAML when format is changed', async () => {
|
||||
const onViewYAML = jest.fn();
|
||||
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.V1Resource, onViewYAML })} />);
|
||||
await expandOptions();
|
||||
|
||||
const formatGroup = screen.getByRole('radiogroup', { name: /format/i });
|
||||
const formatRadios = within(formatGroup).getAllByRole('radio');
|
||||
await userEvent.click(formatRadios[1]); // YAML
|
||||
expect(onViewYAML).toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('share externally switch', () => {
|
||||
it('should show share externally switch for Classic mode', () => {
|
||||
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.Classic })} />);
|
||||
|
||||
expect(screen.getByTestId(selector.exportExternallyToggle)).toBeInTheDocument();
|
||||
});
|
||||
|
||||
it('should show share externally switch for V2Resource mode with V2 dashboard', () => {
|
||||
render(
|
||||
<ResourceExport
|
||||
{...createDefaultProps({
|
||||
dashboardJson: createV2DashboardJson(),
|
||||
exportMode: ExportMode.V2Resource,
|
||||
})}
|
||||
/>
|
||||
);
|
||||
|
||||
expect(screen.getByTestId(selector.exportExternallyToggle)).toBeInTheDocument();
|
||||
});
|
||||
|
||||
it('should call onShareExternallyChange when switch is toggled', async () => {
|
||||
const onShareExternallyChange = jest.fn();
|
||||
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.Classic, onShareExternallyChange })} />);
|
||||
|
||||
const switchElement = screen.getByTestId(selector.exportExternallyToggle);
|
||||
await userEvent.click(switchElement);
|
||||
expect(onShareExternallyChange).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should reflect isSharingExternally value in switch', () => {
|
||||
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.Classic, isSharingExternally: true })} />);
|
||||
|
||||
expect(screen.getByTestId(selector.exportExternallyToggle)).toBeChecked();
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -4,7 +4,8 @@ import { selectors as e2eSelectors } from '@grafana/e2e-selectors';
import { Trans, t } from '@grafana/i18n';
import { Dashboard } from '@grafana/schema';
import { Spec as DashboardV2Spec } from '@grafana/schema/dist/esm/schema/dashboard/v2';
import { Alert, Label, RadioButtonGroup, Stack, Switch } from '@grafana/ui';
import { Alert, Icon, Label, RadioButtonGroup, Stack, Switch, Box, Tooltip } from '@grafana/ui';
import { QueryOperationRow } from 'app/core/components/QueryOperationRow/QueryOperationRow';
import { DashboardJson } from 'app/features/manage-dashboards/types';

import { ExportableResource } from '../ShareExportTab';
@@ -48,80 +49,90 @@ export function ResourceExport({

const switchExportLabel =
exportMode === ExportMode.V2Resource
? t('export.json.export-remove-ds-refs', 'Remove deployment details')
: t('share-modal.export.share-externally-label', `Export for sharing externally`);
? t('dashboard-scene.resource-export.share-externally', 'Share dashboard with another instance')
: t('share-modal.export.share-externally-label', 'Export for sharing externally');
const switchExportTooltip = t(
'dashboard-scene.resource-export.share-externally-tooltip',
'Removes all instance-specific metadata and data source references from the resource before export.'
);
const switchExportModeLabel = t('export.json.export-mode', 'Model');
const switchExportFormatLabel = t('export.json.export-format', 'Format');

const exportResourceOptions = [
{
label: t('dashboard-scene.resource-export.label.classic', 'Classic'),
value: ExportMode.Classic,
},
{
label: t('dashboard-scene.resource-export.label.v1-resource', 'V1 Resource'),
value: ExportMode.V1Resource,
},
{
label: t('dashboard-scene.resource-export.label.v2-resource', 'V2 Resource'),
value: ExportMode.V2Resource,
},
];

return (
<Stack gap={2} direction="column">
<Stack gap={1} direction="column">
{initialSaveModelVersion === 'v1' && (
<Stack alignItems="center">
<Label>{switchExportModeLabel}</Label>
<RadioButtonGroup
options={[
{ label: t('dashboard-scene.resource-export.label.classic', 'Classic'), value: ExportMode.Classic },
{
label: t('dashboard-scene.resource-export.label.v1-resource', 'V1 Resource'),
value: ExportMode.V1Resource,
},
{
label: t('dashboard-scene.resource-export.label.v2-resource', 'V2 Resource'),
value: ExportMode.V2Resource,
},
]}
value={exportMode}
onChange={(value) => onExportModeChange(value)}
/>
<>
<QueryOperationRow
id="Advanced options"
index={0}
title={t('dashboard-scene.resource-export.label.advanced-options', 'Advanced options')}
isOpen={false}
>
<Box marginTop={2}>
<Stack gap={1} direction="column">
{initialSaveModelVersion === 'v1' && (
<Stack gap={1} alignItems="center">
<Label>{switchExportModeLabel}</Label>
<RadioButtonGroup
options={exportResourceOptions}
value={exportMode}
onChange={(value) => onExportModeChange(value)}
aria-label={switchExportModeLabel}
/>
</Stack>
)}

{exportMode !== ExportMode.Classic && (
<Stack gap={1} alignItems="center">
<Label>{switchExportFormatLabel}</Label>
<RadioButtonGroup
options={[
{ label: t('dashboard-scene.resource-export.label.json', 'JSON'), value: 'json' },
{ label: t('dashboard-scene.resource-export.label.yaml', 'YAML'), value: 'yaml' },
]}
value={isViewingYAML ? 'yaml' : 'json'}
onChange={onViewYAML}
aria-label={switchExportFormatLabel}
/>
</Stack>
)}
</Stack>
)}
{initialSaveModelVersion === 'v2' && (
<Stack alignItems="center">
<Label>{switchExportModeLabel}</Label>
<RadioButtonGroup
options={[
{
label: t('dashboard-scene.resource-export.label.v2-resource', 'V2 Resource'),
value: ExportMode.V2Resource,
},
{
label: t('dashboard-scene.resource-export.label.v1-resource', 'V1 Resource'),
value: ExportMode.V1Resource,
},
]}
value={exportMode}
onChange={(value) => onExportModeChange(value)}
/>
</Stack>
)}
{exportMode !== ExportMode.Classic && (
<Stack gap={1} alignItems="center">
<Label>{switchExportFormatLabel}</Label>
<RadioButtonGroup
options={[
{ label: t('dashboard-scene.resource-export.label.json', 'JSON'), value: 'json' },
{ label: t('dashboard-scene.resource-export.label.yaml', 'YAML'), value: 'yaml' },
]}
value={isViewingYAML ? 'yaml' : 'json'}
onChange={onViewYAML}
/>
</Stack>
)}
{(isV2Dashboard ||
exportMode === ExportMode.Classic ||
(initialSaveModelVersion === 'v2' && exportMode === ExportMode.V1Resource)) && (
<Stack gap={1} alignItems="start">
<Label>{switchExportLabel}</Label>
<Switch
label={switchExportLabel}
value={isSharingExternally}
onChange={onShareExternallyChange}
data-testid={selector.exportExternallyToggle}
/>
</Stack>
)}
</Stack>
</Box>
</QueryOperationRow>

{(isV2Dashboard ||
exportMode === ExportMode.Classic ||
(initialSaveModelVersion === 'v2' && exportMode === ExportMode.V1Resource)) && (
<Stack gap={1} alignItems="start">
<Label>
<Stack gap={0.5} alignItems="center">
<Tooltip content={switchExportTooltip} placement="bottom">
<Icon name="info-circle" size="sm" />
</Tooltip>
{switchExportLabel}
</Stack>
</Label>
<Switch
label={switchExportLabel}
value={isSharingExternally}
onChange={onShareExternallyChange}
data-testid={selector.exportExternallyToggle}
/>
</Stack>
)}

{showV2LibPanelAlert && (
<Alert
@@ -130,6 +141,7 @@ export function ResourceExport({
'Library panels will be converted to regular panels'
)}
severity="warning"
topSpacing={2}
>
<Trans i18nKey="dashboard-scene.save-dashboard-form.schema-v2-library-panels-export">
Due to limitations in the new dashboard schema (V2), library panels will be converted to regular panels with
@@ -137,6 +149,6 @@ export function ResourceExport({
</Trans>
</Alert>
)}
</Stack>
</>
);
}

@@ -66,7 +66,12 @@ function ShareDrawerRenderer({ model }: SceneComponentProps<ShareDrawer>) {
const dashboard = getDashboardSceneFor(model);

return (
<Drawer title={activeShare?.getTabLabel()} onClose={model.onDismiss} size="md">
<Drawer
title={activeShare?.getTabLabel()}
subtitle={activeShare?.getSubtitle?.()}
onClose={model.onDismiss}
size="md"
>
<ShareDrawerContext.Provider value={{ dashboard, onDismiss: model.onDismiss }}>
{activeShare && <activeShare.Component model={activeShare} />}
</ShareDrawerContext.Provider>

@@ -66,6 +66,10 @@ export class ShareExportTab extends SceneObjectBase<ShareExportTabState> impleme
return t('share-modal.tab-title.export', 'Export');
}

public getSubtitle(): string | undefined {
return undefined;
}

public onShareExternallyChange = () => {
this.setState({
isSharingExternally: !this.state.isSharingExternally,

@@ -15,5 +15,6 @@ export interface SceneShareTab<T extends SceneShareTabState = SceneShareTabState

export interface ShareView extends SceneObject {
getTabLabel(): string;
getSubtitle?(): string | undefined;
onDismiss?: () => void;
}

@@ -14,7 +14,7 @@ import {
import { SeriesVisibilityChangeMode } from '@grafana/ui';

const displayOverrideRef = 'hideSeriesFrom';
const isHideSeriesOverride = isSystemOverrideWithRef(displayOverrideRef);
export const isHideSeriesOverride = isSystemOverrideWithRef(displayOverrideRef);

export function seriesVisibilityConfigFactory(
label: string,

@@ -112,6 +112,7 @@ const dummyProps: Props = {
compact: false,
changeCompactMode: jest.fn(),
queryLibraryRef: undefined,
queriesChangedIndexAtRun: 0,
};
jest.mock('@grafana/runtime', () => ({
...jest.requireActual('@grafana/runtime'),

@@ -389,7 +389,7 @@ export class Explore extends PureComponent<Props, ExploreState> {
}

renderGraphPanel(width: number) {
const { graphResult, timeZone, queryResponse, showFlameGraph } = this.props;
const { graphResult, timeZone, queryResponse, showFlameGraph, queriesChangedIndexAtRun } = this.props;

return (
<ContentOutlineItem panelId="Graph" title={t('explore.explore.title-graph', 'Graph')} icon="graph-bar">
@@ -404,6 +404,7 @@ export class Explore extends PureComponent<Props, ExploreState> {
splitOpenFn={this.onSplitOpen('graph')}
loadingState={queryResponse.state}
eventBus={this.graphEventBus}
queriesChangedIndexAtRun={queriesChangedIndexAtRun}
/>
</ContentOutlineItem>
);
@@ -813,6 +814,7 @@ function mapStateToProps(state: StoreState, { exploreId }: ExploreProps) {
correlationEditorHelperData,
compact,
queryLibraryRef,
queriesChangedIndexAtRun,
} = item;

const loading = selectIsWaitingForData(exploreId)(state);
@@ -847,6 +849,7 @@ function mapStateToProps(state: StoreState, { exploreId }: ExploreProps) {
correlationEditorDetails: explore.correlationEditorDetails,
exploreActiveDS: selectExploreDSMaps(state),
queryLibraryRef,
queriesChangedIndexAtRun,
};
}

@@ -34,7 +34,10 @@ import { defaultGraphConfig, getGraphFieldConfig } from 'app/plugins/panel/times
import { Options as TimeSeriesOptions } from 'app/plugins/panel/timeseries/panelcfg.gen';
import { ExploreGraphStyle } from 'app/types/explore';

import { seriesVisibilityConfigFactory } from '../../dashboard/dashgrid/SeriesVisibilityConfigFactory';
import {
isHideSeriesOverride,
seriesVisibilityConfigFactory,
} from '../../dashboard/dashgrid/SeriesVisibilityConfigFactory';
import { useExploreDataLinkPostProcessor } from '../hooks/useExploreDataLinkPostProcessor';

import { applyGraphStyle, applyThresholdsConfig } from './exploreGraphStyleUtils';
@@ -60,6 +63,7 @@ interface Props {
eventBus: EventBus;
vizLegendOverrides?: Partial<VizLegendOptions>;
toggleLegendRef?: React.MutableRefObject<(name: string | undefined, mode: SeriesVisibilityChangeMode) => void>;
queriesChangedIndexAtRun?: number;
}

export function ExploreGraph({
@@ -82,6 +86,7 @@ export function ExploreGraph({
eventBus,
vizLegendOverrides,
toggleLegendRef,
queriesChangedIndexAtRun,
}: Props) {
const theme = useTheme2();

@@ -107,6 +112,13 @@ export function ExploreGraph({
overrides: [],
});

useEffect(() => {
setFieldConfig((fieldConfig) => ({
...fieldConfig,
overrides: fieldConfig.overrides.filter((rule) => !isHideSeriesOverride(rule)),
}));
}, [queriesChangedIndexAtRun]);

const styledFieldConfig = useMemo(() => {
const withGraphStyle = applyGraphStyle(fieldConfig, graphStyle, yAxisMaximum);
return applyThresholdsConfig(withGraphStyle, thresholdsStyle, thresholdsConfig);

@@ -37,6 +37,7 @@ interface Props extends Pick<PanelChromeProps, 'statusMessage'> {
loadingState: LoadingState;
thresholdsConfig?: ThresholdsConfig;
thresholdsStyle?: GraphThresholdsStyleConfig;
queriesChangedIndexAtRun?: number;
}

export const GraphContainer = ({
@@ -53,6 +54,7 @@ export const GraphContainer = ({
thresholdsStyle,
loadingState,
statusMessage,
queriesChangedIndexAtRun,
}: Props) => {
const [showAllSeries, toggleShowAllSeries] = useToggle(false);
const [graphStyle, setGraphStyle] = useState(loadGraphStyle);
@@ -108,6 +110,7 @@ export const GraphContainer = ({
thresholdsConfig={thresholdsConfig}
thresholdsStyle={thresholdsStyle}
eventBus={eventBus}
queriesChangedIndexAtRun={queriesChangedIndexAtRun}
/>
)}
</PanelChrome>

@@ -1060,6 +1060,7 @@ export const queryReducer = (state: ExploreItemState, action: AnyAction): Explor

return {
...state,
queriesChangedIndex: state.queriesChangedIndex + 1,
queries,
};
}
@@ -1338,6 +1339,7 @@ const processQueryResponse = (state: ExploreItemState, action: PayloadAction<Que

return {
...state,
queriesChangedIndexAtRun: state.queriesChangedIndex,
queryResponse: response,
graphResult,
tableResult,

@@ -76,6 +76,8 @@ export const makeExplorePaneState = (overrides?: Partial<ExploreItemState>): Exp
panelsState: {},
correlations: undefined,
compact: false,
queriesChangedIndex: 0,
queriesChangedIndexAtRun: 0,
...overrides,
});

@@ -2,8 +2,9 @@ import { render, screen } from '@testing-library/react';
import { defaultsDeep } from 'lodash';
import { Provider } from 'react-redux';

import { FieldType, getDefaultTimeRange, LoadingState } from '@grafana/data';
import { PanelDataErrorViewProps } from '@grafana/runtime';
import { CoreApp, EventBusSrv, FieldType, getDefaultTimeRange, LoadingState } from '@grafana/data';
import { config, PanelDataErrorViewProps } from '@grafana/runtime';
import { usePanelContext } from '@grafana/ui';
import { configureStore } from 'app/store/configureStore';

import { PanelDataErrorView } from './PanelDataErrorView';
@@ -16,7 +17,24 @@ jest.mock('app/features/dashboard/services/DashboardSrv', () => ({
},
}));

jest.mock('@grafana/ui', () => ({
...jest.requireActual('@grafana/ui'),
usePanelContext: jest.fn(),
}));

const mockUsePanelContext = jest.mocked(usePanelContext);
const RUN_QUERY_MESSAGE = 'Run a query to visualize it here or go to all visualizations to add other panel types';
const panelContextRoot = {
app: CoreApp.Dashboard,
eventsScope: 'global',
eventBus: new EventBusSrv(),
};

describe('PanelDataErrorView', () => {
beforeEach(() => {
mockUsePanelContext.mockReturnValue(panelContextRoot);
});

it('show No data when there is no data', () => {
renderWithProps();

@@ -70,6 +88,45 @@ describe('PanelDataErrorView', () => {

expect(screen.getByText('Query returned nothing')).toBeInTheDocument();
});

it('should show "Run a query..." message when no query is configured and feature toggle is enabled', () => {
mockUsePanelContext.mockReturnValue(panelContextRoot);

const originalFeatureToggle = config.featureToggles.newVizSuggestions;
config.featureToggles.newVizSuggestions = true;

renderWithProps({
data: {
state: LoadingState.Done,
series: [],
timeRange: getDefaultTimeRange(),
},
});

expect(screen.getByText(RUN_QUERY_MESSAGE)).toBeInTheDocument();

config.featureToggles.newVizSuggestions = originalFeatureToggle;
});

it('should show "No data" message when feature toggle is disabled even without queries', () => {
mockUsePanelContext.mockReturnValue(panelContextRoot);

const originalFeatureToggle = config.featureToggles.newVizSuggestions;
config.featureToggles.newVizSuggestions = false;

renderWithProps({
data: {
state: LoadingState.Done,
series: [],
timeRange: getDefaultTimeRange(),
},
});

expect(screen.getByText('No data')).toBeInTheDocument();
expect(screen.queryByText(RUN_QUERY_MESSAGE)).not.toBeInTheDocument();

config.featureToggles.newVizSuggestions = originalFeatureToggle;
});
});

function renderWithProps(overrides?: Partial<PanelDataErrorViewProps>) {

@@ -5,14 +5,15 @@ import {
FieldType,
getPanelDataSummary,
GrafanaTheme2,
PanelData,
PanelDataSummary,
PanelPluginVisualizationSuggestion,
} from '@grafana/data';
import { selectors } from '@grafana/e2e-selectors';
import { t, Trans } from '@grafana/i18n';
import { PanelDataErrorViewProps, locationService } from '@grafana/runtime';
import { PanelDataErrorViewProps, locationService, config } from '@grafana/runtime';
import { VizPanel } from '@grafana/scenes';
import { usePanelContext, useStyles2 } from '@grafana/ui';
import { Icon, usePanelContext, useStyles2 } from '@grafana/ui';
import { CardButton } from 'app/core/components/CardButton';
import { LS_VISUALIZATION_SELECT_TAB_KEY } from 'app/core/constants';
import store from 'app/core/store';
@@ -24,6 +25,11 @@ import { findVizPanelByKey, getVizPanelKeyForPanelId } from 'app/features/dashbo
import { useDispatch } from 'app/types/store';

import { changePanelPlugin } from '../state/actions';
import { hasData } from '../suggestions/utils';

function hasNoQueryConfigured(data: PanelData): boolean {
return !data.request?.targets || data.request.targets.length === 0;
}

export function PanelDataErrorView(props: PanelDataErrorViewProps) {
const styles = useStyles2(getStyles);
@@ -93,8 +99,14 @@ export function PanelDataErrorView(props: PanelDataErrorViewProps) {
}
};

const noData = !hasData(props.data);
const noQueryConfigured = hasNoQueryConfigured(props.data);
const showEmptyState =
config.featureToggles.newVizSuggestions && context.app === CoreApp.PanelEditor && noQueryConfigured && noData;

return (
<div className={styles.wrapper}>
{showEmptyState && <Icon name="chart-line" size="xxxl" className={styles.emptyStateIcon} />}
<div className={styles.message} data-testid={selectors.components.Panels.Panel.PanelDataErrorMessage}>
{message}
</div>
@@ -131,7 +143,17 @@ function getMessageFor(
return message;
}

if (!data.series || data.series.length === 0 || data.series.every((frame) => frame.length === 0)) {
const noData = !hasData(data);
const noQueryConfigured = hasNoQueryConfigured(data);

if (config.featureToggles.newVizSuggestions && noQueryConfigured && noData) {
return t(
'dashboard.new-panel.empty-state-message',
'Run a query to visualize it here or go to all visualizations to add other panel types'
);
}

if (noData) {
return fieldConfig?.defaults.noValue ?? t('panel.panel-data-error-view.no-value.default', 'No data');
}

@@ -176,5 +198,9 @@ const getStyles = (theme: GrafanaTheme2) => {
width: '100%',
maxWidth: '600px',
}),
emptyStateIcon: css({
color: theme.colors.text.secondary,
marginBottom: theme.spacing(2),
}),
};
};

@@ -1,29 +1,26 @@
import { SelectableValue } from '@grafana/data';
import { RadioButtonGroup } from '@grafana/ui';

import { useDispatch } from '../../hooks/useStatelessReducer';
import { EditorType } from '../../types';

import { useQuery } from './ElasticsearchQueryContext';
import { changeEditorTypeAndResetQuery } from './state';

const BASE_OPTIONS: Array<SelectableValue<EditorType>> = [
{ value: 'builder', label: 'Builder' },
{ value: 'code', label: 'Code' },
];

export const EditorTypeSelector = () => {
const query = useQuery();
const dispatch = useDispatch();

// Default to 'builder' if editorType is empty
const editorType: EditorType = query.editorType === 'code' ? 'code' : 'builder';

const onChange = (newEditorType: EditorType) => {
dispatch(changeEditorTypeAndResetQuery(newEditorType));
};
interface Props {
value: EditorType;
onChange: (editorType: EditorType) => void;
}

export const EditorTypeSelector = ({ value, onChange }: Props) => {
return (
<RadioButtonGroup<EditorType> fullWidth={false} options={BASE_OPTIONS} value={editorType} onChange={onChange} />
<RadioButtonGroup<EditorType>
data-testid="elasticsearch-editor-type-toggle"
size="sm"
options={BASE_OPTIONS}
value={value}
onChange={onChange}
/>
);
};

@@ -10,9 +10,13 @@ interface Props {
onRunQuery: () => void;
}

// This offset was chosen by testing to match Prometheus behavior
const EDITOR_HEIGHT_OFFSET = 2;

export function RawQueryEditor({ value, onChange, onRunQuery }: Props) {
const styles = useStyles2(getStyles);
const editorRef = useRef<monacoTypes.editor.IStandaloneCodeEditor | null>(null);
const containerRef = useRef<HTMLDivElement | null>(null);

const handleEditorDidMount = useCallback(
(editor: monacoTypes.editor.IStandaloneCodeEditor, monaco: Monaco) => {
@@ -22,6 +26,22 @@ export function RawQueryEditor({ value, onChange, onRunQuery }: Props) {
editor.addCommand(monaco.KeyMod.CtrlCmd | monaco.KeyCode.Enter, () => {
onRunQuery();
});

// Make the editor resize itself so that the content fits (grows taller when necessary)
// this code comes from the Prometheus query editor.
// We may wish to consider abstracting it into the grafana/ui repo in the future
const updateElementHeight = () => {
const containerDiv = containerRef.current;
if (containerDiv !== null) {
const pixelHeight = editor.getContentHeight();
containerDiv.style.height = `${pixelHeight + EDITOR_HEIGHT_OFFSET}px`;
const pixelWidth = containerDiv.clientWidth;
editor.layout({ width: pixelWidth, height: pixelHeight });
}
};

editor.onDidContentSizeChange(updateElementHeight);
updateElementHeight();
},
[onRunQuery]
);
@@ -65,7 +85,17 @@ export function RawQueryEditor({ value, onChange, onRunQuery }: Props) {

return (
<Box>
<div className={styles.header}>
<div ref={containerRef} className={styles.editorContainer}>
<CodeEditor
value={value ?? ''}
language="json"
width="100%"
onBlur={handleQueryChange}
monacoOptions={monacoOptions}
onEditorDidMount={handleEditorDidMount}
/>
</div>
<div className={styles.footer}>
<Stack gap={1}>
<Button
size="sm"
@@ -76,20 +106,8 @@ export function RawQueryEditor({ value, onChange, onRunQuery }: Props) {
>
Format
</Button>
<Button size="sm" variant="primary" icon="play" onClick={onRunQuery} tooltip="Run query (Ctrl/Cmd+Enter)">
Run
</Button>
</Stack>
</div>
<CodeEditor
value={value ?? ''}
language="json"
height={200}
width="100%"
onBlur={handleQueryChange}
monacoOptions={monacoOptions}
onEditorDidMount={handleEditorDidMount}
/>
</Box>
);
}
@@ -100,7 +118,11 @@ const getStyles = (theme: GrafanaTheme2) => ({
flexDirection: 'column',
gap: theme.spacing(1),
}),
header: css({
editorContainer: css({
width: '100%',
overflow: 'hidden',
}),
footer: css({
display: 'flex',
justifyContent: 'flex-end',
padding: theme.spacing(0.5, 0),

@@ -1,16 +1,16 @@
|
||||
import { css } from '@emotion/css';
|
||||
import { useEffect, useId, useState } from 'react';
|
||||
import { useCallback, useEffect, useId, useState } from 'react';
|
||||
import { SemVer } from 'semver';
|
||||
|
||||
import { getDefaultTimeRange, GrafanaTheme2, QueryEditorProps } from '@grafana/data';
|
||||
import { config } from '@grafana/runtime';
|
||||
import { Alert, InlineField, InlineLabel, Input, QueryField, useStyles2 } from '@grafana/ui';
|
||||
import { Alert, ConfirmModal, InlineField, InlineLabel, Input, QueryField, useStyles2 } from '@grafana/ui';
|
||||
|
||||
import { ElasticsearchDataQuery } from '../../dataquery.gen';
|
||||
import { ElasticDatasource } from '../../datasource';
|
||||
import { useNextId } from '../../hooks/useNextId';
|
||||
import { useDispatch } from '../../hooks/useStatelessReducer';
|
||||
import { ElasticsearchOptions } from '../../types';
|
||||
import { EditorType, ElasticsearchOptions } from '../../types';
|
||||
import { isSupportedVersion, isTimeSeriesQuery, unsupportedVersionMessage } from '../../utils';
|
||||
|
||||
import { BucketAggregationsEditor } from './BucketAggregationsEditor';
|
||||
@@ -20,7 +20,7 @@ import { MetricAggregationsEditor } from './MetricAggregationsEditor';
|
||||
import { metricAggregationConfig } from './MetricAggregationsEditor/utils';
|
||||
import { QueryTypeSelector } from './QueryTypeSelector';
|
||||
import { RawQueryEditor } from './RawQueryEditor';
|
||||
import { changeAliasPattern, changeQuery, changeRawDSLQuery } from './state';
|
||||
import { changeAliasPattern, changeEditorTypeAndResetQuery, changeQuery, changeRawDSLQuery } from './state';
|
||||
|
||||
export type ElasticQueryEditorProps = QueryEditorProps<ElasticDatasource, ElasticsearchDataQuery, ElasticsearchOptions>;
|
||||
|
||||
@@ -97,31 +97,61 @@ const QueryEditorForm = ({ value, onRunQuery }: Props & { onRunQuery: () => void
  const inputId = useId();
  const styles = useStyles2(getStyles);

  const [switchModalOpen, setSwitchModalOpen] = useState(false);
  const [pendingEditorType, setPendingEditorType] = useState<EditorType | null>(null);

  const isTimeSeries = isTimeSeriesQuery(value);

  const isCodeEditor = value.editorType === 'code';
  const rawDSLFeatureEnabled = config.featureToggles.elasticsearchRawDSLQuery;

  // Default to 'builder' if editorType is empty
  const currentEditorType: EditorType = value.editorType === 'code' ? 'code' : 'builder';

  const showBucketAggregationsEditor = value.metrics?.every(
    (metric) => metricAggregationConfig[metric.type].impliedQueryType === 'metrics'
  );

  const onEditorTypeChange = useCallback((newEditorType: EditorType) => {
    // Show warning modal when switching modes
    setPendingEditorType(newEditorType);
    setSwitchModalOpen(true);
  }, []);

  const confirmEditorTypeChange = useCallback(() => {
    if (pendingEditorType) {
      dispatch(changeEditorTypeAndResetQuery(pendingEditorType));
    }
    setSwitchModalOpen(false);
    setPendingEditorType(null);
  }, [dispatch, pendingEditorType]);

  const cancelEditorTypeChange = useCallback(() => {
    setSwitchModalOpen(false);
    setPendingEditorType(null);
  }, []);

  return (
    <>
      <ConfirmModal
        isOpen={switchModalOpen}
        title="Switch editor"
        body="Switching between editors will reset your query. Are you sure you want to continue?"
        confirmText="Continue"
        onConfirm={confirmEditorTypeChange}
        onDismiss={cancelEditorTypeChange}
      />
      <div className={styles.root}>
        <InlineLabel width={17}>Query type</InlineLabel>
        <div className={styles.queryItem}>
          <QueryTypeSelector />
        </div>
      </div>
      {rawDSLFeatureEnabled && (
        <div className={styles.root}>
          <InlineLabel width={17}>Editor type</InlineLabel>
          <div className={styles.queryItem}>
            <EditorTypeSelector value={currentEditorType} onChange={onEditorTypeChange} />
          </div>
        </div>
      )}

      {isCodeEditor && rawDSLFeatureEnabled && (
        <RawQueryEditor
@@ -135,6 +135,19 @@ export interface ExploreItemState {
   * converted to a query row.
   */
  queries: DataQuery[];

  /**
   * Index increased when queries change.
   * Required to derive queriesChangedIndexAtRun correctly.
   */
  queriesChangedIndex: number;

  /**
   * Index updated after running the query. Changes if new query was run.
   * Used to reset legend in the main graph to match Dashboard's behavior (#113975)
   */
  queriesChangedIndexAtRun: number;

  /**
   * True if this Explore area has been initialized.
   * Used to distinguish URL state injection versus split view state injection.
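The two counters added above implement a cheap dirty-check: `queriesChangedIndex` is bumped on every query edit, `queriesChangedIndexAtRun` snapshots it when a run starts, and the graph legend is reset only when the snapshot moved between runs. A minimal, framework-free sketch of that bookkeeping (field names mirror the interface; the helper functions are illustrative, not Grafana's reducers):

```typescript
// Minimal model of the change-tracking counters on ExploreItemState.
interface QueryRunState {
  queriesChangedIndex: number;      // bumped whenever the queries are edited
  queriesChangedIndexAtRun: number; // snapshot taken when a query run starts
}

// Called on every query edit.
function onQueriesChanged(state: QueryRunState): QueryRunState {
  return { ...state, queriesChangedIndex: state.queriesChangedIndex + 1 };
}

// Called when a run starts; records which version of the queries is running.
function onRunQueries(state: QueryRunState): QueryRunState {
  return { ...state, queriesChangedIndexAtRun: state.queriesChangedIndex };
}

// The graph resets its legend only when the running queries differ from the last run.
function shouldResetLegend(prevAtRun: number, nextAtRun: number): boolean {
  return prevAtRun !== nextAtRun;
}

let state: QueryRunState = { queriesChangedIndex: 0, queriesChangedIndexAtRun: 0 };
const before = state.queriesChangedIndexAtRun;
state = onQueriesChanged(state); // user edits a query
state = onRunQueries(state);     // user re-runs: snapshot advances, legend resets
```

Using a monotonically increasing counter instead of comparing query objects keeps the check O(1) and immune to in-place mutation.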
@@ -6383,12 +6383,15 @@
    },
    "resource-export": {
      "label": {
        "advanced-options": "Advanced options",
        "classic": "Classic",
        "json": "JSON",
        "v1-resource": "V1 Resource",
        "v2-resource": "V2 Resource",
        "yaml": "YAML"
      }
    },
    "share-externally": "Share dashboard with another instance",
    "share-externally-tooltip": "Removes all instance-specific metadata and data source references from the resource before export."
  },
  "revert-dashboard-modal": {
    "body-restore-version": "Are you sure you want to restore the dashboard to version {{version}}? All unsaved changes will be lost.",
@@ -7842,7 +7845,6 @@
    "export-externally-label": "Export the dashboard to use in another instance",
    "export-format": "Format",
    "export-mode": "Model",
    "export-remove-ds-refs": "Remove deployment details",
    "info-text": "Copy or download a file containing the definition of your dashboard",
    "title": "Export dashboard"
  },