Mirror of https://github.com/grafana/grafana.git (synced 2025-12-20 16:54:59 +08:00)

Compare commits: 124 Commits, `sriram/pos` ... `v6.5.x`
*(Commit table omitted: the mirror preserved only the SHA1 column of the 124-commit list, from `05025c5495` through `ece9015afe`; the author and date columns were lost.)*
@@ -17,5 +17,3 @@ cmds = [
  ["go", "run", "-mod=vendor", "build.go", "-dev", "build-server"],
  ["./bin/grafana-server", "-packaging=dev", "cfg:app_mode=development"]
]
interrupt_timout = 5
graceful_kill = true

79 CHANGELOG.md
@@ -1,3 +1,82 @@

# 6.5.0-beta1 (2019-11-14)

### Features / Enhancements

* **API**: Add `createdAt` and `updatedAt` to api/users/lookup. [#19496](https://github.com/grafana/grafana/pull/19496), [@gotjosh](https://github.com/gotjosh)
* **API**: Add createdAt field to /api/users/:id. [#19475](https://github.com/grafana/grafana/pull/19475), [@cored](https://github.com/cored)
* **Admin**: Adds setting to disable creating initial admin user. [#19505](https://github.com/grafana/grafana/pull/19505), [@shavonn](https://github.com/shavonn)
* **Alerting**: Include alert_state in Kafka notifier payload. [#20099](https://github.com/grafana/grafana/pull/20099), [@arnaudlemaignen](https://github.com/arnaudlemaignen)
* **AuthProxy**: Can now login with auth proxy and get a login token. [#20175](https://github.com/grafana/grafana/pull/20175), [@torkelo](https://github.com/torkelo)
* **AuthProxy**: Replaces setting ldap_sync_ttl with sync_ttl. [#20191](https://github.com/grafana/grafana/pull/20191), [@jongyllen](https://github.com/jongyllen)
* **AzureMonitor**: Alerting for Azure Application Insights. [#19381](https://github.com/grafana/grafana/pull/19381), [@ChadNedzlek](https://github.com/ChadNedzlek)
* **Build**: Upgrade to Go 1.13. [#19502](https://github.com/grafana/grafana/pull/19502), [@aknuds1](https://github.com/aknuds1)
* **CLI**: Reduce memory usage for plugin installation. [#19639](https://github.com/grafana/grafana/pull/19639), [@olivierlemasle](https://github.com/olivierlemasle)
* **CloudWatch**: Add ap-east-1 to hard-coded region lists. [#19523](https://github.com/grafana/grafana/pull/19523), [@Nessworthy](https://github.com/Nessworthy)
* **CloudWatch**: ContainerInsights metrics support. [#18971](https://github.com/grafana/grafana/pull/18971), [@francopeapea](https://github.com/francopeapea)
* **CloudWatch**: Support dynamic queries using dimension wildcards. [#20058](https://github.com/grafana/grafana/issues/20058), [@sunker](https://github.com/sunker)
* **CloudWatch**: Stop using GetMetricStatistics and use GetMetricData for all time series requests. [#20057](https://github.com/grafana/grafana/issues/20057), [@sunker](https://github.com/sunker)
* **CloudWatch**: Convert query editor from Angular to React. [#19880](https://github.com/grafana/grafana/issues/19880), [@sunker](https://github.com/sunker)
* **CloudWatch**: Convert config editor from Angular to React. [#19881](https://github.com/grafana/grafana/issues/19881), [@shavonn](https://github.com/shavonn)
* **CloudWatch**: Improved error handling when throttling occurs. [#20348](https://github.com/grafana/grafana/issues/20348), [@sunker](https://github.com/sunker)
* **CloudWatch**: Deep linking from Grafana panel to CloudWatch console. [#20279](https://github.com/grafana/grafana/issues/20279), [@sunker](https://github.com/sunker)
* **CloudWatch**: Add Grafana user agent to GMD calls. [#20277](https://github.com/grafana/grafana/issues/20277), [@sunker](https://github.com/sunker)
* **Dashboard**: Allows the d-solo route to be used without slug. [#19640](https://github.com/grafana/grafana/pull/19640), [@97amarnathk](https://github.com/97amarnathk)
* **Docker**: Build and publish an additional Ubuntu based docker image. [#20196](https://github.com/grafana/grafana/pull/20196), [@aknuds1](https://github.com/aknuds1)
* **Elasticsearch**: Adds support for region annotations. [#17602](https://github.com/grafana/grafana/pull/17602), [@fangel](https://github.com/fangel)
* **Explore**: Add custom DataLinks on datasource level (like tracing links). [#20060](https://github.com/grafana/grafana/pull/20060), [@aocenas](https://github.com/aocenas)
* **Explore**: Add functionality to show/hide query row results. [#19794](https://github.com/grafana/grafana/pull/19794), [@ivanahuckova](https://github.com/ivanahuckova)
* **Explore**: Synchronise time ranges in split mode. [#19274](https://github.com/grafana/grafana/pull/19274), [@ivanahuckova](https://github.com/ivanahuckova)
* **Explore**: UI change for log row details. [#20034](https://github.com/grafana/grafana/pull/20034), [@ivanahuckova](https://github.com/ivanahuckova)
* **Frontend**: Migrate DataSource HTTP Settings to React. [#19452](https://github.com/grafana/grafana/pull/19452), [@dprokop](https://github.com/dprokop)
* **Frontend**: Show browser not supported notification. [#19904](https://github.com/grafana/grafana/pull/19904), [@peterholmberg](https://github.com/peterholmberg)
* **Graph**: Added series override option to have hidden series be persisted on save. [#20124](https://github.com/grafana/grafana/pull/20124), [@Gauravshah](https://github.com/Gauravshah)
* **Graphite**: Add Metrictank option to settings to view Metrictank request processing info in new inspect feature. [#20138](https://github.com/grafana/grafana/pull/20138), [@ryantxu](https://github.com/ryantxu)
* **LDAP**: Enable single user sync. [#19446](https://github.com/grafana/grafana/pull/19446), [@gotjosh](https://github.com/gotjosh)
* **LDAP**: Last org admin can login but won't be removed. [#20326](https://github.com/grafana/grafana/pull/20326), [@xlson](https://github.com/xlson)
* **LDAP**: Support env variable expressions in ldap.toml file. [#20173](https://github.com/grafana/grafana/pull/20173), [@torkelo](https://github.com/torkelo)
* **OAuth**: Generic OAuth role mapping support. [#17149](https://github.com/grafana/grafana/pull/17149), [@hypery2k](https://github.com/hypery2k)
* **Prometheus**: Custom query parameters string for Thanos downsampling. [#19121](https://github.com/grafana/grafana/pull/19121), [@seuf](https://github.com/seuf)
* **Provisioning**: Allow saving of provisioned dashboards. [#19820](https://github.com/grafana/grafana/pull/19820), [@jongyllen](https://github.com/jongyllen)
* **Security**: Minor XSS issue resolved by angularjs upgrade from 1.6.6 -> 1.6.9. [#19849](https://github.com/grafana/grafana/pull/19849), [@peterholmberg](https://github.com/peterholmberg)
* **TablePanel**: Prevents crash when data contains mixed data formats. [#20202](https://github.com/grafana/grafana/pull/20202), [@hugohaggmark](https://github.com/hugohaggmark)
* **Templating**: Introduces $__searchFilter to Query Variables. [#19858](https://github.com/grafana/grafana/pull/19858), [@hugohaggmark](https://github.com/hugohaggmark)
* **Templating**: Made default template variable query editor field a textarea with automatic height. [#20288](https://github.com/grafana/grafana/pull/20288), [@torkelo](https://github.com/torkelo)
* **Units**: Add milli/microSievert, milli/microSievert/h and pixels. [#20144](https://github.com/grafana/grafana/pull/20144), [@ryantxu](https://github.com/ryantxu)
* **Units**: Added mega ampere and watt-hour per kg. [#19922](https://github.com/grafana/grafana/pull/19922), [@Karan96Kaushik](https://github.com/Karan96Kaushik)

### Bug Fixes

* **API**: Added dashboardId and slug in response to dashboard import api. [#19692](https://github.com/grafana/grafana/pull/19692), [@jongyllen](https://github.com/jongyllen)
* **API**: Fix logging of dynamic listening port. [#19644](https://github.com/grafana/grafana/pull/19644), [@oleggator](https://github.com/oleggator)
* **BarGauge**: Fix so that default thresholds don't keep resetting. [#20190](https://github.com/grafana/grafana/pull/20190), [@lzdw](https://github.com/lzdw)
* **CloudWatch**: Fix incorrect casing of Redshift dimension entry for service class and stage. [#19897](https://github.com/grafana/grafana/pull/19897), [@nlsdfnbch](https://github.com/nlsdfnbch)
* **CloudWatch**: Fixing AWS Kafka dimension names. [#19986](https://github.com/grafana/grafana/pull/19986), [@skuxy](https://github.com/skuxy)
* **CloudWatch**: Metric math broken when using multi template variables. [#18337](https://github.com/grafana/grafana/issues/18337), [@sunker](https://github.com/sunker)
* **CloudWatch**: Graphs with multiple multi-value dimension variables don't work. [#17949](https://github.com/grafana/grafana/issues/17949), [@sunker](https://github.com/sunker)
* **CloudWatch**: Variables' values surrounded with braces in request sent to AWS. [#14451](https://github.com/grafana/grafana/issues/14451), [@sunker](https://github.com/sunker)
* **CloudWatch**: Query for a list of instances for which data is available in the selected time interval. [#12784](https://github.com/grafana/grafana/issues/12784), [@sunker](https://github.com/sunker)
* **CloudWatch**: Dimension's positioning/order should be stored in the json dashboard. [#11062](https://github.com/grafana/grafana/issues/11062), [@sunker](https://github.com/sunker)
* **CloudWatch**: Batch CloudWatch API call support in backend. [#7991](https://github.com/grafana/grafana/issues/7991), [@sunker](https://github.com/sunker)
* **ColorPicker**: Fixes issue with ColorPicker disappearing too quickly. [#20289](https://github.com/grafana/grafana/pull/20289), [@dprokop](https://github.com/dprokop)
* **Datasource**: Add custom headers on alerting queries. [#19508](https://github.com/grafana/grafana/pull/19508), [@weeco](https://github.com/weeco)
* **Docker**: Add additional glibc dependencies to support certain backend plugins in alpine. [#20214](https://github.com/grafana/grafana/pull/20214), [@briangann](https://github.com/briangann)
* **Docker**: Build and use musl-based binaries in alpine images to resolve glibc incompatibility issues. [#19798](https://github.com/grafana/grafana/pull/19798), [@aknuds1](https://github.com/aknuds1)
* **Elasticsearch**: Fix template variables interpolation when redirecting to Explore. [#20314](https://github.com/grafana/grafana/pull/20314), [@ivanahuckova](https://github.com/ivanahuckova)
* **Elasticsearch**: Support rendering in logs panel. [#20229](https://github.com/grafana/grafana/pull/20229), [@davkal](https://github.com/davkal)
* **Explore**: Expand template variables when redirecting from dashboard panel. [#19582](https://github.com/grafana/grafana/pull/19582), [@ivanahuckova](https://github.com/ivanahuckova)
* **OAuth**: Make the login button display name of custom OAuth provider. [#20209](https://github.com/grafana/grafana/pull/20209), [@dprokop](https://github.com/dprokop)
* **ReactPanels**: Adds Explore menu item. [#20236](https://github.com/grafana/grafana/pull/20236), [@hugohaggmark](https://github.com/hugohaggmark)
* **Team Sync**: Fix URL encode Group IDs for external team sync. [#20280](https://github.com/grafana/grafana/pull/20280), [@gotjosh](https://github.com/gotjosh)

## Breaking changes

* **CloudWatch**: Before Grafana 6.5.0, the CloudWatch datasource used the GetMetricStatistics API for all queries that did not have an `id` and did not have an `expression` defined in the query editor. The GetMetricStatistics API has a limit of 400 transactions per second. In this release, all queries use the GetMetricData API. The GetMetricData API has a limit of 50 transactions per second and 100 metrics per transaction. Also, the GetMetricData API pricing is different from GetMetricStatistics: while GetMetricStatistics qualified for the CloudWatch API free tier, this is not the case for GetMetricData calls. For more information, please refer to the [CloudWatch pricing page](https://aws.amazon.com/cloudwatch/pricing/). Read more about GetMetricData limits in [upgrading to 6.5](https://grafana.com/docs/installation/upgrading/#upgrading-to-v6-5).

* **CloudWatch**: The GetMetricData API does not return the metric unit, so unit auto detection in panels is no longer supported.

* **CloudWatch**: The `HighRes` switch has been removed from the query editor. Read more about this in [upgrading to 6.5](https://grafana.com/docs/installation/upgrading/#upgrading-to-v6-5).

* **CloudWatch**: In previous versions of Grafana, there was partial support for using multi template variables as dimension values. When a multi template variable is used for dimension values in Grafana 6.5, a [search expression](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-search-expressions.html) will be generated. In the GetMetricData API, expressions are limited to 1024 characters, so this limit may be reached when a multi template variable with many values is used. Read about the suggested workaround in [upgrading to 6.5](https://grafana.com/docs/installation/upgrading/#upgrading-to-v6-5).

# 6.4.4 (2019-11-06)

### Bug Fixes

@@ -29,7 +29,7 @@ const Foo: React.FunctionComponent<FooProps> = () => {

```

#### Using `withTheme` higher-order component (HOC)

With this method your component will be automatically wrapped in `ThemeContext.Consumer` and provided with the current theme via the `theme` prop. A component used with `withTheme` must implement the `Themeable` interface.

@@ -43,6 +43,36 @@ const Foo: React.FunctionComponent<FooProps> = () => ...

export default withTheme(Foo);
```

### Test components that use ThemeContext

When implementing snapshot tests for components that use the `withTheme` HOC, the snapshot will contain the entire theme object. Any change to the theme renders the snapshot outdated.

To make your snapshot theme independent, use the `mockThemeContext` helper function:

```tsx
import { mount } from 'enzyme';
import { GrafanaThemeType, mockThemeContext } from '@grafana/ui';
import { MyComponent } from './MyComponent';

describe('MyComponent', () => {
  let restoreThemeContext;

  beforeAll(() => {
    // Create ThemeContext mock before any snapshot test is executed
    restoreThemeContext = mockThemeContext({ type: GrafanaThemeType.Dark });
  });

  afterAll(() => {
    // Make sure the theme is restored after snapshot tests are performed
    restoreThemeContext();
  });

  it('renders correctly', () => {
    const wrapper = mount(<MyComponent />);
    expect(wrapper).toMatchSnapshot();
  });
});
```

### Using themes in Storybook

All stories are wrapped with `ThemeContext.Provider` using a global decorator. To render a `Themeable` component that's not wrapped by the `withTheme` HOC you either create a new component in your story:

@@ -60,8 +60,8 @@ aliases = ["/v1.1", "/guides/reference/admin", "/v3.1"]
  <h4>Provisioning</h4>
  <p>Learn how to automate your Grafana configuration.</p>
</a>
<a href="{{< relref "guides/whats-new-in-v6-4.md" >}}" class="nav-cards__item nav-cards__item--guide">
  <h4>What's new in v6.4</h4>
  <p>Explore the features and enhancements in the latest release.</p>
</a>
<a href="{{< relref "tutorials/screencasts.md" >}}" class="nav-cards__item nav-cards__item--guide">

@@ -17,19 +17,22 @@ When an image is being rendered the PNG-image is temporary written to the filesy

A background job runs every 10 minutes and removes temporary images. You can configure how long an image should be stored before being removed with the [temp-data-lifetime](/installation/configuration/#temp-data-lifetime) setting.

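For reference, this retention is a single setting in `grafana.ini`. The sketch below assumes the `[paths]` section and the `24h` default found in contemporary Grafana configuration files; verify both against your version:

```ini
# Sketch of grafana.ini — section placement and default value are assumptions.
[paths]
# How long temporary images are kept before the cleanup job removes them.
temp_data_lifetime = 24h
```
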
## Requirements

Rendering images can require a lot of memory, mainly because browser instances are started in the background to do the actual rendering. Rendering multiple images in parallel increases the memory footprint further. The recommended minimum of free memory is 1 GB.

Depending on the [rendering method](#rendering-methods), that memory needs to be available on the system where the rendering process runs. For the [Grafana image renderer plugin](#grafana-image-renderer-plugin) and [PhantomJS](#phantomjs) it's the system Grafana is installed on. For the [Remote rendering service](#remote-rendering-service) it is the system where the service is installed.

## Rendering methods

### Grafana image renderer plugin

> This plugin currently does not work if it is installed in the Grafana docker image. See [Install in Grafana docker image](#install-in-grafana-docker-image).

The [Grafana image renderer plugin](https://grafana.com/grafana/plugins/grafana-image-renderer) is a plugin that runs on the backend and handles rendering panels and dashboards as PNG images using headless Chrome.

@@ -39,7 +42,13 @@ You can install it using grafana-cli:

```bash
grafana-cli plugins install grafana-image-renderer
```

For further information and instructions refer to [troubleshooting](#troubleshooting) and the [plugin details](https://grafana.com/grafana/plugins/grafana-image-renderer).

#### Install in Grafana docker image

This plugin is not compatible with the current Grafana Docker image without installing further system-level dependencies. We recommend setting up another Docker container for rendering and using remote rendering; see [Remote rendering service](#remote-rendering-service) for reference.

If you still want to install the plugin in the Grafana docker image, we provide instructions for how to build a custom Grafana image; see [Installing using Docker](/installation/docker/#custom-image-with-grafana-image-renderer-plugin-pre-installed).

### Remote rendering service

@@ -102,7 +111,86 @@

```ini
callback_url = http://localhost:3000/
```

4. Restart Grafana
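Put together, the remote-rendering configuration amounts to a `[rendering]` block like the following sketch. The `callback_url` value appears in the diff above; the `server_url` key name and both URLs are assumptions based on typical Grafana 6.5 setups, so check the configuration reference for your version:

```ini
[rendering]
# URL of the remote rendering service (example value; key name assumed)
server_url = http://localhost:8081/render
# URL the renderer uses to reach this Grafana instance
callback_url = http://localhost:3000/
```
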

For further information and instructions refer to [troubleshooting](#troubleshooting) and the [plugin details](https://grafana.com/grafana/plugins/grafana-image-renderer).

### PhantomJS

> PhantomJS is deprecated since Grafana v6.4 and will be removed in a future release. Please migrate to the Grafana image renderer plugin or the remote rendering service.

[PhantomJS](https://phantomjs.org/) has been the only supported and default image renderer since Grafana v2.x and is shipped with Grafana.

PhantomJS binaries are included for Linux (x64), Windows (x64) and Darwin (x64). For Linux, you should ensure that any required libraries, e.g. libfontconfig1, are available.

Please note that PhantomJS binaries are not included for ARM. To support this you will need to ensure that a phantomjs binary is available under tools/phantomjs/phantomjs.

## Alerting and render limits

Alert notifications can include images, but rendering many images at the same time can overload the server where the renderer is running. For instructions on how to configure this, see [concurrent_render_limit](/installation/configuration/#concurrent-render-limit).
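As a sketch, limiting concurrent renders might look like this in `grafana.ini` (the section placement and the `30` value are assumptions for illustration; consult the configuration reference linked above):

```ini
[rendering]
# Maximum number of images rendered at the same time (example value).
concurrent_render_limit = 30
```
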

## Troubleshooting

Enable debug log messages for rendering in the Grafana configuration file and inspect the Grafana server log:

```ini
[log]
filters = rendering:debug
```

### Grafana image renderer plugin and remote rendering service

The plugin and the rendering service use the [Chromium browser](https://www.chromium.org/), which depends on certain libraries.
If you don't have all of those libraries installed on your system, you may encounter errors when trying to render an image, e.g.

```bash
Rendering failed: Error: Failed to launch chrome!/var/lib/grafana/plugins/grafana-image-renderer/chrome-linux/chrome:
error while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory\n\n\nTROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
```

In general you can use the [`ldd`](https://en.wikipedia.org/wiki/Ldd_(Unix)) utility to figure out which shared libraries
are missing or not installed on your system:

```bash
$ cd <grafana-image-renderer plugin directory>
$ ldd chrome-linux/chrome
  linux-vdso.so.1 (0x00007fff1bf65000)
  libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2047945000)
  libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2047924000)
  librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f204791a000)
  libX11.so.6 => not found
  libX11-xcb.so.1 => not found
  libxcb.so.1 => not found
  libXcomposite.so.1 => not found
  ...
```

**Ubuntu:**

On Ubuntu 18.10 the following dependencies have been confirmed as needed for image rendering to function:

```bash
libx11-6 libx11-xcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrender1 libxtst6 libglib2.0-0 libnss3 libcups2 libdbus-1-3 libxss1 libxrandr2 libgtk-3-0 libasound2
```

**CentOS:**

On a minimal CentOS install the following dependencies have been confirmed as needed for image rendering to function:

```bash
libXcomposite libXdamage libXtst cups libXScrnSaver pango atk adwaita-cursor-theme adwaita-icon-theme at at-spi2-atk at-spi2-core cairo-gobject colord-libs dconf desktop-file-utils ed emacs-filesystem gdk-pixbuf2 glib-networking gnutls gsettings-desktop-schemas gtk-update-icon-cache gtk3 hicolor-icon-theme jasper-libs json-glib libappindicator-gtk3 libdbusmenu libdbusmenu-gtk3 libepoxy liberation-fonts liberation-narrow-fonts liberation-sans-fonts liberation-serif-fonts libgusb libindicator-gtk3 libmodman libproxy libsoup libwayland-cursor libwayland-egl libxkbcommon m4 mailx nettle patch psmisc redhat-lsb-core redhat-lsb-submod-security rest spax time trousers xdg-utils xkeyboard-config
```

#### Using custom Chrome/Chromium

As a last resort, if you already have [Chrome](https://www.google.com/chrome/) or [Chromium](https://www.chromium.org/)
installed on your system, you can configure the [Grafana image renderer plugin](#grafana-image-renderer-plugin) to use it
instead of the pre-packaged version of Chromium.

> Please note that this is not recommended, since you may encounter problems if the installed version of Chrome/Chromium
> is not compatible with the [Grafana image renderer plugin](#grafana-image-renderer-plugin).

To override the path to the Chrome/Chromium executable, set an environment variable and make sure
it's available to the Grafana process, e.g.

```bash
export GF_RENDERER_PLUGIN_CHROME_BIN="/usr/bin/chromium-browser"
```

@@ -306,4 +306,4 @@ a login token and cookie. You only have to configure your auth proxy to provide

Requests via other routes will be authenticated using the cookie.

Use settings `login_maximum_inactive_lifetime_days` and `login_maximum_lifetime_days` under `[auth]` to control session
lifetime. [Read more about login tokens](/auth/overview/#login-and-short-lived-tokens)
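The two settings referenced above live in the `[auth]` section of `grafana.ini`; a sketch with illustrative values (defaults may differ between Grafana versions):

```ini
[auth]
# Log the user out after this many days of inactivity (example value)
login_maximum_inactive_lifetime_days = 7
# Log the user out after this many days regardless of activity (example value)
login_maximum_lifetime_days = 30
```
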
@@ -20,20 +20,39 @@ Grafana ships with built in support for CloudWatch. You just have to add it as a

1. Open the side menu by clicking the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select `Cloudwatch` from the _Type_ dropdown.

> NOTE: If at any moment you have issues with getting this data source to work and Grafana is giving you undescriptive errors, then don't
> forget to check your log file (try looking in /var/log/grafana/grafana.log).

| Name | Description |
| ---- | ----------- |
| _Name_ | The data source name. This is how you refer to the data source in panels and queries. |
| _Default_ | Default data source means that it will be pre-selected for new panels. |
| _Default Region_ | Used in query editor to set region (can be changed on per query basis) |
| _Custom Metrics namespace_ | Specify the CloudWatch namespace of Custom metrics |
| _Auth Provider_ | Specify the provider to get credentials. |
| _Credentials_ profile name | Specify the name of the profile to use (if you use `~/.aws/credentials` file), leave blank for default. |
| _Assume Role Arn_ | Specify the ARN of the role to assume |

### Min time interval

> Only available in Grafana v6.5+.

A lower limit for the auto group-by time interval. Recommended to be set to your write frequency, for example `1m` if your data is written every minute.
This option can also be overridden/configured in a dashboard panel under data source options. It's important to note that this value **needs** to be formatted as a
number followed by a valid time identifier, e.g. `1m` (1 minute) or `30s` (30 seconds). The following time identifiers are supported:

| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
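The interval format described above (a number followed by one of the identifiers in the table) is easy to validate programmatically. The helper below is an illustrative sketch, not Grafana's actual implementation; month and year are approximated as 30 and 365 days:

```typescript
// Milliseconds per identifier; month/year durations are approximations.
const unitMs: Record<string, number> = {
  ms: 1,
  s: 1000,
  m: 60 * 1000,
  h: 60 * 60 * 1000,
  d: 24 * 60 * 60 * 1000,
  w: 7 * 24 * 60 * 60 * 1000,
  M: 30 * 24 * 60 * 60 * 1000, // month ~ 30 days
  y: 365 * 24 * 60 * 60 * 1000, // year ~ 365 days
};

// Parses strings like "1m" or "30s" into milliseconds; throws on invalid input.
// Note the regex is case sensitive, so "M" (month) and "m" (minute) differ.
function parseIntervalMs(interval: string): number {
  const match = /^(\d+)(ms|s|m|h|d|w|M|y)$/.exec(interval);
  if (!match) {
    throw new Error(`invalid interval: ${interval}`);
  }
  return Number(match[1]) * unitMs[match[2]];
}
```

For example, `parseIntervalMs("1m")` yields `60000` and `parseIntervalMs("30s")` yields `30000`.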

## Authentication

@@ -42,10 +61,9 @@ Name | Description

Currently all access to CloudWatch is done server side by the Grafana backend using the official AWS SDK. If your Grafana
server is running on AWS you can use IAM Roles and authentication will be handled automatically.

See the AWS documentation on [IAM Roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)

> NOTE: AWS Role Switching as described [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-cli.html) is not supported at the moment.

## IAM Policies

@@ -55,55 +73,53 @@ utilize Grafana's built-in support for assuming roles.

Here is a minimal policy example:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadingMetricsFromCloudWatch",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:DescribeAlarmsForMetric",
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:GetMetricData"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowReadingTagsInstancesRegionsFromEC2",
      "Effect": "Allow",
      "Action": ["ec2:DescribeTags", "ec2:DescribeInstances", "ec2:DescribeRegions"],
      "Resource": "*"
    },
    {
      "Sid": "AllowReadingResourcesForTags",
      "Effect": "Allow",
      "Action": "tag:GetResources",
      "Resource": "*"
    }
  ]
}
```

### AWS credentials

If Auth Provider is `Credentials file`, Grafana tries to get credentials in the following order:

- Environment variables (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`).
- Hard-coded credentials.
- Shared credentials file.
- IAM role for Amazon EC2.

See the AWS documentation on [Configuring the AWS SDK for Go](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html).

### AWS credentials file

Create a file at `~/.aws/credentials`, where `~` is the `HOME` path of the user running grafana-server.

> NOTE: If you think you have the credentials file in the right place and it is still not working, try moving your .aws file to `/usr/share/grafana/` and make sure your credentials file has at most 0644 permissions.

Example content:

```ini
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>
region = us-west-2
```
## Using the Metric Query Editor

To create a valid query, you need to specify the namespace, metric name and at least one statistic. If `Match Exact` is enabled, you also need to specify all the dimensions of the metric you’re querying, so that the [metric schema](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/search-expression-syntax.html) matches exactly. If `Match Exact` is off, you can specify any number of dimensions by which you’d like to filter. Up to 100 metrics matching your filter criteria will be returned.

### Dynamic queries using dimension wildcards

> Only available in Grafana v6.5+.

In Grafana 6.5 or higher, you’re able to monitor a dynamic list of metrics by using the asterisk (\*) wildcard for one or more dimension values.

{{< docs-imagebox img="/img/docs/v65/cloudwatch-dimension-wildcard.png" max-width="800px" class="docs-image--right" caption="CloudWatch dimension wildcard" >}}

In the example, all metrics in the namespace `AWS/EC2` with a metric name of `CPUUtilization` and ANY value for the `InstanceId` dimension are queried. This can help you monitor metrics for AWS resources, like EC2 instances or containers. For example, when new instances get created as part of an auto scaling event, they will automatically appear in the graph without you having to track the new instance IDs. This capability is currently limited to retrieving up to 100 metrics. You can click on `Show Query Preview` to see the search expression that is automatically built to support wildcards. To learn more about search expressions, visit the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/search-expression-syntax.html).

By default, the search expression is defined in such a way that the queried metrics must match the defined dimension names exactly. This means that in the example only metrics with exactly one dimension named `InstanceId` will be returned.

You can disable `Match Exact` to include metrics that have other dimensions defined. Disabling `Match Exact` also creates a search expression even if you don’t use wildcards. We simply search for any metric that matches at least the namespace, metric name, and all defined dimensions.
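As a rough sketch (the exact expression Grafana generates may differ), the search expression behind a wildcard query for `CPUUtilization` in `AWS/EC2` could look something like this:

```
SEARCH('{AWS/EC2,InstanceId} MetricName="CPUUtilization"', 'Average', 300)
```

The `'Average'` statistic and `300`-second period here are assumptions for illustration; use `Show Query Preview` to see the actual expression built for your query.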
### Multi-value template variables

> Only available in Grafana v6.5+.

When defining dimension values based on multi-valued template variables, a search expression is used to query for the matching metrics. This enables the use of multiple template variables in one query and also allows you to use template variables for queries that have the `Match Exact` option disabled.

Search expressions are currently limited to 1024 characters, so your query may fail if you have a long list of values. We recommend using the asterisk (`*`) wildcard instead of the `All` option if you want to query all metrics that have any value for a certain dimension name.

The use of multi-valued template variables is only supported for dimension values. Using multi-valued template variables for `Region`, `Namespace`, or `Metric Name` is not supported.
### Metric Math Expressions

You can create new time series metrics by operating on top of CloudWatch metrics using mathematical functions. Arithmetic operators, unary subtraction and other functions are supported and can be applied to CloudWatch metrics. More details on the available functions can be found in [AWS Metric Math](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html).

As an example, if you want to apply arithmetic operations on a metric, you can do it by giving an id (a unique string) to the raw metric. You can then use this id and apply arithmetic operations to it in the Expression field of the new metric.

Please note that if you use the expression field to reference another query, like `queryA * 2`, it will not be possible to create an alert rule based on that query.
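For instance, assuming the raw metric query has been given the id `m1` (a hypothetical id chosen for illustration), the Expression field of a second query could convert a per-period `Sum` into a per-second rate:

```
m1 / PERIOD(m1)
```

`PERIOD()` is one of the functions documented in the AWS Metric Math reference; it returns the period of the referenced metric in seconds.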
### Deep linking from Grafana panels to the CloudWatch console

> Only available in Grafana v6.5+.

{{< docs-imagebox img="/img/docs/v65/cloudwatch-deep-linking.png" max-width="500px" class="docs-image--right" caption="CloudWatch deep linking" >}}

Left-clicking a time series in the panel shows a context menu with a link to `View in CloudWatch console`. Clicking that link opens a new tab that takes you to the CloudWatch console and displays all the metrics for that query. If you're not currently logged in to the CloudWatch console, the link forwards you to the login page. The provided link is valid for any account, but it will only display the right metrics if you're logged in to the account that corresponds to the selected data source in Grafana.

This feature is not available for metrics that are based on metric math expressions.
## Curated Dashboards

> Only available in Grafana v6.5+.

The updated CloudWatch data source ships with pre-configured dashboards for five of the most popular AWS services:

- Amazon Elastic Compute Cloud (`Amazon EC2`)
- Amazon Elastic Block Store (`Amazon EBS`)
- AWS Lambda (`AWS Lambda`)
- Amazon CloudWatch Logs (`Amazon CloudWatch Logs`)
- Amazon Relational Database Service (`Amazon RDS`)

To import the pre-configured dashboards, go to the configuration page of your CloudWatch data source and click on the `Dashboards` tab. Click `Import` for the dashboard you would like to use. To customize the dashboard, we recommend saving it under a different name; otherwise the dashboard will be overwritten when a new version of it is released.

{{< docs-imagebox img="/img/docs/v65/cloudwatch-dashboard-import.png" caption="CloudWatch dashboard import" >}}
## Templated queries

Instead of hard-coding things like server, application and sensor names in your metric queries, you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data being displayed in your dashboard.

See the [Templating]({{< relref "../../reference/templating.md" >}}) documentation for an introduction to the templating feature and the different types of template variables.

### Query variable

The CloudWatch data source provides the following queries that you can specify in the `Query` field in the Variable edit view. They allow you to fill a variable's options list with things like `region`, `namespaces`, `metric names` and `dimension keys/values`.

In place of `region` you can specify `default` to use the default region configured in the data source for the query, e.g. `metrics(AWS/DynamoDB, default)` or `dimension_values(default, ..., ..., ...)`.

Read more about the available dimensions in the [CloudWatch Metrics and Dimensions Reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CW_Support_For_AWS.html).
| Name | Description |
| ---- | ----------- |
| _regions()_ | Returns a list of all AWS regions. |
| _namespaces()_ | Returns a list of the namespaces CloudWatch supports. |
| _metrics(namespace, [region])_ | Returns a list of metrics in the namespace. (Specify region or use "default" for custom metrics.) |
| _dimension\_keys(namespace)_ | Returns a list of dimension keys in the namespace. |
| _dimension\_values(region, namespace, metric, dimension\_key, [filters])_ | Returns a list of dimension values matching the specified `region`, `namespace`, `metric` and `dimension_key`. You can also use dimension `filters` to get more specific results. |
| _ebs\_volume\_ids(region, instance\_id)_ | Returns a list of volume ids matching the specified `region` and `instance_id`. |
| _ec2\_instance\_attribute(region, attribute\_name, filters)_ | Returns a list of attributes matching the specified `region`, `attribute_name` and `filters`. |
| _resource\_arns(region, resource\_type, tags)_ | Returns a list of ARNs matching the specified `region`, `resource_type` and `tags`. |
| _statistics()_ | Returns a list of all the standard statistics. |

For details about the metrics CloudWatch provides, please refer to the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
Example dimension queries which will return a list of resources for individual AWS services:

| Query | Service |
| ----- | ------- |
| _dimension\_values(us-east-1,AWS/ELB,RequestCount,LoadBalancerName)_ | ELB |
| _dimension\_values(us-east-1,AWS/ElastiCache,CPUUtilization,CacheClusterId)_ | ElastiCache |
| _dimension\_values(us-east-1,AWS/Redshift,CPUUtilization,ClusterIdentifier)_ | Redshift |
| _dimension\_values(us-east-1,AWS/RDS,CPUUtilization,DBInstanceIdentifier)_ | RDS |
| _dimension\_values(us-east-1,AWS/S3,BucketSizeBytes,BucketName)_ | S3 |
| _dimension\_values(us-east-1,CWAgent,disk\_used\_percent,device,{"InstanceId":"\$instance\_id"})_ | CloudWatch Agent |
| _resource\_arns(eu-west-1,elasticloadbalancing:loadbalancer,{"elasticbeanstalk:environment-name":["myApp-dev","myApp-prod"]})_ | ELB |
| _resource\_arns(eu-west-1,ec2:instance,{"elasticbeanstalk:environment-name":["myApp-dev","myApp-prod"]})_ | EC2 |
## ec2_instance_attribute examples

Example `ec2_instance_attribute()` query:

```javascript
ec2_instance_attribute(us-east-1, InstanceId, { "tag:Environment": [ "production" ] })
```
### Selecting Attributes

Only one attribute per instance can be returned. Any flat attribute can be selected (that is, if the attribute has a single value and isn't an object or array). Below is a list of available flat attributes:

- `AmiLaunchIndex`
- `Architecture`
- `ClientToken`
- `EbsOptimized`
- `EnaSupport`
- `Hypervisor`
- `IamInstanceProfile`
- `ImageId`
- `InstanceId`
- `InstanceLifecycle`
- `InstanceType`
- `KernelId`
- `KeyName`
- `LaunchTime`
- `Platform`
- `PrivateDnsName`
- `PrivateIpAddress`
- `PublicDnsName`
- `PublicIpAddress`
- `RamdiskId`
- `RootDeviceName`
- `RootDeviceType`
- `SourceDestCheck`
- `SpotInstanceRequestId`
- `SriovNetSupport`
- `SubnetId`
- `VirtualizationType`
- `VpcId`
Tags can be selected by prepending the tag name with `Tags.`

Example `ec2_instance_attribute()` query:

```javascript
ec2_instance_attribute(us-east-1, Tags.Name, { "tag:Team": [ "sysops" ] })
```
## Using json format template variables

Some queries accept filters in JSON format, and Grafana supports the conversion of template variables to JSON.

If `env = 'production', 'staging'`, the following query will return ARNs of EC2 instances whose `Environment` tag is `production` or `staging`:

```javascript
resource_arns(us-east-1, ec2:instance, {"Environment":${env:json}})
```
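As an illustrative sketch (not Grafana's actual interpolation code), the `:json` formatting serializes the variable's selected values as a JSON array before substituting them into the query:

```python
import json

# Hypothetical sketch of the ${env:json} interpolation; the variable name
# "env" and its selected values are assumptions for illustration.
selected = ["production", "staging"]
query = 'resource_arns(us-east-1, ec2:instance, {"Environment":%s})' % json.dumps(selected)
print(query)
# resource_arns(us-east-1, ec2:instance, {"Environment":["production", "staging"]})
```

The result is exactly the filter-string shape the `resource_arns()` query above expects.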
## Pricing

The Amazon CloudWatch data source for Grafana uses the `ListMetrics` and `GetMetricData` CloudWatch API calls to list and retrieve metrics. Please see the [CloudWatch pricing page](https://aws.amazon.com/cloudwatch/pricing/) for pricing information about these API calls.

Every time you pick a dimension in the query editor, Grafana issues a `ListMetrics` request. Whenever you make a change to the queries in the query editor, one new request to `GetMetricData` is issued.

Please note that for Grafana version 6.5 or higher, all API requests to GetMetricStatistics have been replaced with calls to GetMetricData. This change enables better support for CloudWatch metric math and enables the automatic generation of search expressions when using wildcards or disabling the `Match Exact` option. While GetMetricStatistics qualified for the CloudWatch API free tier, this is not the case for GetMetricData calls. For more information, please refer to the [CloudWatch pricing page](https://aws.amazon.com/cloudwatch/pricing/).
## Service Quotas

AWS defines quotas, or limits, for resources, actions, and items in your AWS account. Depending on the number of queries in your dashboard and the number of users accessing the dashboard, you may reach the limit for the allowed number of CloudWatch GetMetricData requests per second. Note that quotas are defined per account and per region. If you're using multiple regions or have set up more than one CloudWatch data source to query against multiple accounts, you need to request a quota increase for each account and each region in which you hit the limit.

To request a quota increase, visit the [AWS Service Quotas console](https://console.aws.amazon.com/servicequotas/home?r#!/services/monitoring/quotas/L-5E141212).

Please see the AWS documentation for [Service Quotas](https://docs.aws.amazon.com/servicequotas/latest/userguide/intro.html) and [CloudWatch limits](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html) for more information.
## Configure the data source with provisioning

It's now possible to configure data sources using config files with Grafana's provisioning system.

Here are some provisioning examples for this data source.

### Using a credentials file

```yaml
apiVersion: 1

datasources:
  - name: CloudWatch
    type: cloudwatch
    jsonData:
      authType: credentials
      defaultRegion: eu-west-2
```

### Using `accessKey` and `secretKey`

```yaml
apiVersion: 1

datasources:
  - name: CloudWatch
    type: cloudwatch
    jsonData:
      authType: keys
      defaultRegion: eu-west-2
    secureJsonData:
      accessKey: '<your access key>'
      secretKey: '<your secret key>'
```
The Split feature is an easy way to compare graphs and tables side-by-side or to look at related data together on one page.

{{< docs-imagebox img="/img/docs/v60/explore_split.png" class="docs-image--no-shadow" caption="Screenshot of the new Explore option in the panel menu" >}}

In split view, timepickers for both panels can be linked (if you change one, the other changes as well) by clicking one of the time-sync buttons attached to the timepickers. Linking the timepickers helps keep the start and end times of the split-view queries in sync, ensuring that you’re looking at the same time interval in both split panels.

You can close the newly created query by clicking on the Close Split button.

## Prometheus-specific Features
Log data can be very repetitive, and Explore can help by hiding duplicate log lines. Among the deduping algorithms:

* `numbers` Matches on the line after stripping out numbers (durations, IP addresses, etc.).
* `signature` The most aggressive deduping; strips all letters and numbers, and matches on the remaining whitespace and punctuation.
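A minimal sketch of the idea behind `signature` deduping (not Explore's actual implementation):

```python
import re

def signature(line: str) -> str:
    # Strip all letters and numbers; what remains is the line's
    # whitespace-and-punctuation "shape" used for matching duplicates.
    return re.sub(r"[A-Za-z0-9]+", "", line)

# Two log lines that differ only in their words and numbers share a signature
# and would be collapsed into one. The log lines are made up for illustration.
a = 'level=info msg="request done" status=200'
b = 'level=warn msg="request done" status=500'
print(signature(a) == signature(b))  # True
```

Lines with different punctuation shapes (say, JSON logs vs. logfmt logs) keep distinct signatures and are not deduplicated against each other.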
### Timestamp and Local time

There are some other check boxes under the logging graph apart from the deduping options:

* Timestamp: shows/hides the Timestamp column
* Local time: shows/hides the Local time column

### Labels and Parsed fields

Each log row has an extendable area with its labels and parsed fields, for more robust interaction. For all labels, we have added the ability to filter for (positive filter) and filter out (negative filter) selected labels. Each field or label also has a stats icon to display ad-hoc statistics in relation to all displayed logs.

### Loki-specific Features
Value/Range to text mapping allows you to translate the value of the summary stat into explicit text. The text will respect all styling, thresholds and customization defined for the value. This can be useful for translating the number of the main Singlestat value into a context-specific, human-readable word or message.

If you want to replace the default "No data" text displayed when no data is available, add a `value to text mapping` from `null` to your preferred custom text value.

<div class="clearfix"></div>

## Troubleshooting
New file: docs/sources/guides/whats-new-in-v6-5.md
+++
title = "What's New in Grafana v6.5"
description = "Feature & improvement highlights for Grafana v6.5"
keywords = ["grafana", "new", "documentation", "6.5"]
type = "docs"
[menu.docs]
name = "Version 6.5"
identifier = "v6.5"
parent = "whatsnew"
weight = -16
+++

# What's New in Grafana v6.5

For all the details, please read the full [CHANGELOG.md](https://github.com/grafana/grafana/blob/master/CHANGELOG.md).

## Highlights

Grafana 6.5 comes with a lot of new features and enhancements:
- [**Docker:** Ubuntu-based images and more]({{< relref "#ubuntu-based-docker-images" >}})
- [**CloudWatch:** Major rewrite and lots of enhancements]({{< relref "#cloudwatch-data-source-improvements" >}})
- [**Templating:** Dynamic typeahead queries using $__searchFilter]({{< relref "#dynamic-typeahead-support-in-query-variables" >}})
- [**Explore:** New log row details view]({{< relref "#explore-logs-log-row-details" >}})
- [**Explore:** Turn parts of log message into a link using derived fields]({{< relref "#loki-explore-derived-fields" >}})
- [**Explore:** Time-sync of split views]({{< relref "#time-sync-of-split-views-in-explore" >}})
- **Explore:** Tooltip in graphs
- **Azure Monitor:** Alerting support for Azure Application Insights
- **Provisioning:** Allow saving of provisioned dashboards from UI
- **Auth Proxy:** Can now log in with auth proxy and get a login token and session cookie
- **OAuth:** Generic OAuth now supports role mapping

More details on the highlights above will be added as we get closer to the stable release.
### Ubuntu-based docker images

In Grafana [v6.4](/guides/whats-new-in-v6-4/#alpine-based-docker-image) we switched the Grafana docker image from Ubuntu to Alpine. The main reason for this change was to provide a more secure and lightweight docker image.

This change has received both negative and positive feedback, as well as some bug reports. One of the lessons learned is that switching to an Alpine-based docker image was a big breaking change for a lot of users, and it should have been more clearly highlighted in the blog post, release notes, changelog, and the [Docker Hub readme](https://hub.docker.com/r/grafana/grafana).

Another mistake was breaking the Docker images for ARM. The good news: in Grafana v6.5 this has been fixed.

Grafana docker images should be as secure as possible by default, and that’s why the Alpine-based docker images will continue to be provided as Grafana’s default (`grafana/grafana:<version>`). With that said, it’s good to give users options, and that’s why, starting from Grafana v6.5, Ubuntu-based docker images (`grafana/grafana:<version>-ubuntu`) are also available.
### CloudWatch data source improvements

In this release, several feature improvements and additions were made in the CloudWatch data source. This work was done in collaboration with the Amazon CloudWatch team.

#### GetMetricData API

For Grafana version 6.5 or higher, all API requests to GetMetricStatistics have been replaced with calls to GetMetricData, following Amazon’s [best practice to use the GetMetricData API](https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-getmetricdata-api) instead of GetMetricStatistics, because data can be retrieved faster at scale with GetMetricData. This change provides better support for CloudWatch metric math and enables the use of automatic search expressions.

While GetMetricStatistics qualified for the CloudWatch API free tier, this is not the case for GetMetricData calls. For more information, please refer to the [CloudWatch pricing page](https://aws.amazon.com/cloudwatch/pricing/).
#### Dynamic queries using dimension wildcards
|
||||
|
||||
In Grafana 6.5 or higher, you’re able to monitor a dynamic list of metrics by using the asterisk (\*) wildcard for one or more dimension values.
|
||||
|
||||
{{< docs-imagebox img="/img/docs/v65/cloudwatch-dimension-wildcard.png" max-width="800px" class="docs-image--right" caption="CloudWatch dimension wildcard" >}}
|
||||
|
||||
In the example, all metrics in the namespace `AWS/EC2` with a metric name of `CPUUtilization` and ANY value for the `InstanceId` dimension are queried. This can help you monitor metrics for AWS resources, like EC2 instances or containers. For example, when new instances get created as part of an auto scaling event, they will automatically appear in the graph without you having to track the new instance IDs. You can click on `Show Query Preview` to see the search expression that is automatically built to support wildcards. To learn more about search expressions, visit the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/search-expression-syntax.html).
|
||||
|
||||
By default, the search expression is defined in such a way that the queried metrics must match the defined dimension names exactly. This means that in the example below only metrics with exactly one dimension with name ‘InstanceId’ will be returned.
|
||||
|
||||
You can untoggle `Match Exact` to include metrics that have other dimensions defined. Disabling ‘Match Exact’ also creates a search expression even if you don’t use wildcards. We simply search for any metric that match at least the namespace, metric name, and all defined dimensions.
#### Deep linking from Grafana panels to the CloudWatch console

{{< docs-imagebox img="/img/docs/v65/cloudwatch-deep-linking.png" max-width="500px" class="docs-image--right" caption="CloudWatch deep linking" >}}

Left clicking a time series in the panel shows a context menu with a link to `View in CloudWatch console`. Clicking that link opens a new tab that takes you to the CloudWatch console and displays all the metrics for that query. If you are not currently logged in to the CloudWatch console, the link forwards you to the login page. The provided link is valid for any account but will only display the right metrics if you are logged in to the account that corresponds to the selected data source in Grafana.

This feature is not available for metrics that are based on math expressions.

#### Improved feedback when throttling occurs

If the [limit of the GetMetricData API](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html) is reached (either the transactions per second limit or the data points per second limit), the CloudWatch API returns a throttling error. Throttling limits are defined per account and region, so the alert modal indicates which data source was throttled in which region. A link to request a limit increase is provided for the affected region, but you have to log in to the correct account. For example, for us-east-1, a limit increase can be requested [here](https://console.aws.amazon.com/servicequotas/home?region=us-east-1#!/services/monitoring/quotas/L-5E141212).

#### Multi-value template variables now use search expressions

When defining dimension values based on multi-valued template variables, we now use search expressions to query for the matching metrics. This enables the use of multiple template variables in one query and also allows you to use template variables for queries that have the `Match Exact` option disabled.

Search expressions are currently limited to 1024 characters, so your query may fail if you have a long list of values. We recommend using the asterisk (\*) wildcard instead of the `All` option if you want to query all metrics that have any value for a certain dimension name.

The use of multi-valued template variables is only supported for dimension values. Using multi-valued template variables for `Region`, `Namespace`, or `Metric Name` is not supported.

### Dynamic typeahead support in query variables

If you have a query variable with many thousands of values, it can be quite slow to search for a specific value in the dropdown, because all of that search filtering happens in the browser.
Using `__searchFilter` in the template variable query field, you can filter the query results based on what the user types in the variable dropdown input. When the user has not entered anything, the default value for `__searchFilter` is `*`, `.*` or `%`, depending on the data source and formatting option.

The example below shows how to use `__searchFilter` as part of the query field to enable searching for `server` while the user types in the dropdown select box.

Query

```bash
apps.$app.servers.$__searchFilter
```

TagValues

```bash
tag_values(server, server=~${__searchFilter:regex})
```

This feature is currently only supported by the [Graphite](/features/datasources/graphite/#using-searchfilter-to-filter-results-in-query-variable), [MySQL](/features/datasources/mysql/#using-searchfilter-to-filter-results-in-query-variable) and [Postgres](/features/datasources/postgres/#using-searchfilter-to-filter-results-in-query-variable) data sources.
### Explore/Logs: Log row details

We have massively simplified the way we display both log row labels/fields and parsed fields by putting them into an extendable area in each row.

Until now, labels were squashed into their own column, making long label values difficult to read or interact with. Similarly, the parsed fields (available for logfmt and JSON structured logs) were too fiddly for mouse interaction. To solve this, we took both and put them into a collapsed area below each row for more robust interaction. We have also added the ability to filter out labels, i.e., turn them into a negative filter on click (in addition to a positive filter).

{{< docs-imagebox img="/img/docs/v65/explore_log_details.gif" caption="Explore Log row details" >}}

### Loki/Explore: Derived fields

Derived fields allow any part of a log message to be turned into a link. Leaning on the concept of data links for graphs, we've extended the log result viewer in Explore to turn certain parsed fields into a link, based on a pattern to match.

This allows you to turn an occurrence of, e.g., `traceId=624f706351956b81` in your log line into a link to your distributed tracing system to view that trace. The configuration for the patterns to match can be found in the data source settings.
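For example, a derived field for the `traceId` case above could be configured in the Loki data source settings with a regular expression that captures the ID and a URL template that interpolates the captured value (the field name and URL here are illustrative, not defaults):

```
Name:   TraceID
Regex:  traceId=(\w+)
URL:    https://tracing.example.com/trace/${__value.raw}
```

The first capture group of the regex becomes the field value, available in the URL as `${__value.raw}`.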
This release starts with support for Loki, but we will bring this concept to other data sources soon.

### Time-sync of split views in Explore

In Explore's split view, the two time pickers can now be linked so that if you change one, the other changes as well. This helps keep the start and end times of the split view queries in sync and ensures that you're looking at the same time interval in both split panes.

{{< docs-imagebox img="/img/docs/v65/explore_time_sync.gif" caption="Time-sync of split views in Explore" >}}

## Upgrading

See [upgrade notes](/installation/upgrading/#upgrading-to-v6-5).

## Changelog

Check out the [CHANGELOG.md](https://github.com/grafana/grafana/blob/master/CHANGELOG.md) file for a complete list of new features, changes, and bug fixes.
Grafana configuration options can be overridden with environment variables by using the syntax `GF_<SectionName>_<KeyName>`.

For example:

```bash
$ docker run -d \
  -p 3000:3000 \
  --name=grafana \
  -e "GF_SERVER_ROOT_URL=http://grafana.server.name" \
  grafana/grafana
```

Environment variable | Default path
---------------------|-------------
GF_PATHS_LOGS | /var/log/grafana
GF_PATHS_PLUGINS | /var/lib/grafana/plugins
GF_PATHS_PROVISIONING | /etc/grafana/provisioning
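The mapping from configuration keys to environment variable names described above (`GF_<SectionName>_<KeyName>`) can be sketched as a small shell helper. This is purely illustrative; Grafana performs this mapping internally:

```shell
#!/bin/sh
# Convert a grafana.ini section/key pair into the corresponding
# GF_<SECTION>_<KEY> environment variable name: join with underscores,
# replace dots with underscores, then uppercase.
to_env_var() {
  section="$1"
  key="$2"
  printf 'GF_%s_%s\n' "$section" "$key" | tr '.' '_' | tr '[:lower:]' '[:upper:]'
}

to_env_var server root_url      # prints GF_SERVER_ROOT_URL
to_env_var security admin_user  # prints GF_SECURITY_ADMIN_USER
```

So setting `root_url` under `[server]` in `grafana.ini` is equivalent to passing `-e "GF_SERVER_ROOT_URL=..."` to the container.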
## Image Variants

The official Grafana Docker image comes in two variants.

**`grafana/grafana:<version>`:**

> **Note:** This image was based on [Ubuntu](https://ubuntu.com/) before version 6.4.0.

This is the default image. It is based on the popular [Alpine Linux project](http://alpinelinux.org), available in [the alpine official image](https://hub.docker.com/_/alpine). Alpine Linux is much smaller than most distribution base images, and thus leads to slimmer and more secure images.

This variant is highly recommended when security and a minimal image size are desired. The main caveat to note is that it uses [musl libc](http://www.musl-libc.org) instead of [glibc and friends](http://www.etalabs.net/compare_libcs.html), so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice.

**`grafana/grafana:<version>-ubuntu`:**

> **Note:** This image is available since version 6.5.0.

This image is based on [Ubuntu](https://ubuntu.com/), available in [the ubuntu official image](https://hub.docker.com/_/ubuntu). It is an alternative image for those who prefer an Ubuntu based image and/or who are dependent on certain tooling not available for Alpine.
## Running a specific version of Grafana

```bash
# specify right tag, e.g. 6.5.0 - see Docker Hub for available tags
$ docker run -d -p 3000:3000 --name grafana grafana/grafana:6.5.0

# ubuntu based images available since Grafana 6.5.0
$ docker run -d -p 3000:3000 --name grafana grafana/grafana:6.5.0-ubuntu
```

## Running the master branch

For every successful build of the master branch we update the `grafana/grafana:master` and `grafana/grafana:master-ubuntu` tags. Additionally, two new tags are created, `grafana/grafana-dev:master-<commit hash>` and `grafana/grafana-dev:master-<commit hash>-ubuntu`, which include the hash of the git commit that was built. This means you can always get the latest version of Grafana.

When running Grafana master in production we **strongly** recommend that you use the `grafana/grafana-dev:master-<commit hash>` tag as that will guarantee that you use a specific version of Grafana instead of whatever was the most recent commit at the time.

For a list of available tags, check out [grafana/grafana](https://hub.docker.com/r/grafana/grafana/tags/) and [grafana/grafana-dev](https://hub.docker.com/r/grafana/grafana-dev/tags/).
## Installing Plugins for Grafana

Pass the plugins you want installed to Docker with the `GF_INSTALL_PLUGINS` environment variable as a comma-separated list. This will pass each plugin name to `grafana-cli plugins install ${plugin}` and install them when Grafana starts.

```bash
docker run -d \
  -p 3000:3000 \
  --name=grafana \
  -e "GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource" \
  grafana/grafana
```

> If you need to specify the version of a plugin, you can add it to the `GF_INSTALL_PLUGINS` environment variable. Otherwise, the latest will be assumed. For example: `-e "GF_INSTALL_PLUGINS=grafana-clock-panel 1.0.1,grafana-simple-json-datasource 1.3.5"`
## Building a custom Grafana image

In the [Grafana GitHub repository](https://github.com/grafana/grafana/tree/master/packaging/docker) there is a folder called `custom/`, which includes two Dockerfiles, `Dockerfile` and `ubuntu.Dockerfile`, that can be used to build a custom Grafana image.
It accepts `GRAFANA_VERSION`, `GF_INSTALL_PLUGINS` and `GF_INSTALL_IMAGE_RENDERER_PLUGIN` as build arguments.

### With pre-installed plugins

> If you need to specify the version of a plugin, you can add it to the `GF_INSTALL_PLUGINS` build argument. Otherwise, the latest will be assumed. For example: `--build-arg "GF_INSTALL_PLUGINS=grafana-clock-panel 1.0.1,grafana-simple-json-datasource 1.3.5"`

Example of how to build and run:

```bash
cd custom
docker build \
  --build-arg "GRAFANA_VERSION=latest" \
  --build-arg "GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource" \
  -t grafana-custom -f Dockerfile .

docker run -d -p 3000:3000 --name=grafana grafana-custom
```

Replace `Dockerfile` in the above example with `ubuntu.Dockerfile` to build a custom Ubuntu-based image (Grafana 6.5+).
### With Grafana Image Renderer plugin pre-installed

> Only available in Grafana v6.5+ and experimental.

The [Grafana Image Renderer plugin](/administration/image_rendering/#grafana-image-renderer-plugin) does not currently work if it is installed in the Grafana Docker image. You can build a custom Docker image by using the `GF_INSTALL_IMAGE_RENDERER_PLUGIN` build argument. This installs the additional dependencies needed for the Grafana Image Renderer plugin to run.

Example of how to build and run:

```bash
cd custom
docker build \
  --build-arg "GRAFANA_VERSION=latest" \
  --build-arg "GF_INSTALL_IMAGE_RENDERER_PLUGIN=true" \
  -t grafana-custom -f Dockerfile .

docker run -d -p 3000:3000 --name=grafana grafana-custom
```

Replace `Dockerfile` in the above example with `ubuntu.Dockerfile` to build a custom Ubuntu-based image.
## Installing Plugins from other sources

It's possible to install plugins from custom URLs by specifying the URL like this: `GF_INSTALL_PLUGINS=<url to plugin zip>;<plugin name>`

```bash
docker run -d \
  -p 3000:3000 \
  --name=grafana \
  -e "GF_INSTALL_PLUGINS=http://plugin-domain.com/my-custom-plugin.zip;custom-plugin" \
  grafana/grafana
```

## Configuring AWS Credentials for CloudWatch Support

```bash
$ docker run -d \
  -p 3000:3000 \
  --name=grafana \
  -e "GF_AWS_PROFILES=default" \
  grafana/grafana
```
```bash
docker volume create grafana-storage

# start grafana
docker run -d -p 3000:3000 --name=grafana -v grafana-storage:/var/lib/grafana grafana/grafana
```

## Grafana container using bind mounts

```bash
chown -R root:root /etc/grafana && \
chown -R grafana:grafana /usr/share/grafana
```
## Migration from a previous version of the docker container to 6.4 or later

Grafana’s docker image was changed to be based on [Alpine](http://alpinelinux.org) instead of [Ubuntu](https://ubuntu.com/).

## Migration from a previous version of the docker container to 6.5 or later

The Grafana Docker image now comes in two variants, one [Alpine](http://alpinelinux.org) based and one [Ubuntu](https://ubuntu.com/) based; see [Image Variants](#image-variants) for details.

## Logging in for the first time

To run Grafana, open your browser and go to http://localhost:3000/. 3000 is the default HTTP port that Grafana listens to if you haven't [configured a different port](/installation/configuration/#http-port).
Grafana does not use a lot of resources and is very lightweight in use of memory and CPU.

Requirements vary depending on which features are used and to what extent. Features that consume and require more resources:

- [Server side rendering of images](/administration/image_rendering/#requirements)
- [Alerting](/alerting/rules/)
- Data source proxy
If you encounter an error or problem, it is a good idea to check the Grafana server log, located at `/var/log/grafana/grafana.log` on Unix systems, or in `<grafana_install_dir>/data/log` on other platforms and manual installs.

You can enable more logging by changing the log level in your Grafana configuration file.
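For example, to raise the log level, set the following in your configuration file and restart Grafana (supported levels include `debug`, `info`, `warn`, `error` and `critical`):

```ini
[log]
level = debug
```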
## Diagnostics

The `grafana-server` process can be instructed to enable certain diagnostics when it starts. This can be helpful when experiencing or investigating certain performance problems. It is not recommended to have these enabled by default.
### Profiling

The `grafana-server` can be started with the argument `-profile` to enable profiling and `-profile-port` to override the default HTTP port (`6060`) where the pprof debugging endpoints will be available, e.g.

```bash
./grafana-server -profile -profile-port=8080
```

Note that pprof debugging endpoints are served on a different port than the Grafana HTTP server.

You can configure/override profiling settings using environment variables:

```bash
export GF_DIAGNOSTICS_PROFILING_ENABLED=true
export GF_DIAGNOSTICS_PROFILING_PORT=8080
```

See [Go command pprof](https://golang.org/cmd/pprof/) for more information about how to collect and analyze profiling data.
### Tracing

The `grafana-server` can be started with the argument `-tracing` to enable tracing and `-tracing-file` to override the default trace file (`trace.out`) where the trace result will be written, e.g.

```bash
./grafana-server -tracing -tracing-file=/tmp/trace.out
```

You can configure/override tracing settings using environment variables:

```bash
export GF_DIAGNOSTICS_TRACING_ENABLED=true
export GF_DIAGNOSTICS_TRACING_FILE=/tmp/trace.out
```

View the trace in a web browser (requires Go to be installed):

```bash
go tool trace <trace file>
2019/11/24 22:20:42 Parsing trace...
2019/11/24 22:20:42 Splitting trace...
2019/11/24 22:20:42 Opening browser. Trace viewer is listening on http://127.0.0.1:39735
```

See [Go command trace](https://golang.org/cmd/trace/) for more information about how to analyze trace files.
If you downloaded the binary tar package, you can just download and extract a new package and overwrite all your existing files. But this might overwrite your config changes. We recommend you place your config changes in a file named `<grafana_install_dir>/conf/custom.ini` as this will make upgrades easier without risking losing your config changes.
### Centos / RHEL

`sudo yum update grafana`

### Docker

This is just an example; details depend on how you configured your Grafana container.

```bash
docker pull grafana/grafana
docker stop my-grafana-container
```

If you downloaded the Windows binary package, you can just download a newer package and extract it to the same location (and overwrite the existing files). This might overwrite your config changes. We recommend you place your config changes in a file named `<grafana_install_dir>/conf/custom.ini` as this will make upgrades easier without risking losing your config changes.
## Upgrading from 1.x

Data sources will keep working with unencrypted passwords. If you want to migrate to encrypted passwords you can do that by:

- For data sources created through the UI, you need to go to the data source config, re-enter the password or basic auth password and save the data source.
- For data sources created by provisioning, you need to update your config file and use the secureJsonData.password or secureJsonData.basicAuthPassword field. See [provisioning docs](/administration/provisioning) for an example of current configuration.
### Embedding Grafana

## Upgrading to v6.4

### Annotations database migration

One of the database migrations included in this release will merge multiple rows used to represent an annotation range into a single row. If you have a large number of region annotations, the database migration may take a long time to complete. See [Upgrading to v5.2](#upgrading-to-v5-2) for tips on how to manage this process.

### Docker

Grafana’s docker image is now based on [Alpine](http://alpinelinux.org) instead of [Ubuntu](https://ubuntu.com/).

### Plugins that need updating

- [Splunk](https://grafana.com/grafana/plugins/grafana-splunk-datasource)
## Upgrading to v6.5

Before Grafana 6.5.0, the CloudWatch data source used the GetMetricStatistics API for all queries that did not have an `id` and did not have an `expression` defined in the query editor. The GetMetricStatistics API has a limit of 400 transactions per second (TPS). In this release, all queries use the GetMetricData API, which has a limit of 50 TPS and 100 metrics per transaction. We expect this transition to be smooth for most of our users, but in case you do face throttling issues we suggest you increase the TPS quota. To do that, please visit the [AWS Service Quotas console](https://console.aws.amazon.com/servicequotas/home?r#!/services/monitoring/quotas/L-5E141212). For more details about CloudWatch API limits, [see CloudWatch docs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html).

Each request to the GetMetricData API can include 100 queries. This means that each panel in Grafana will only issue one GetMetricData request, regardless of the number of query rows present in the panel. Consequently, since it is no longer possible to set `HighRes` on a per-query level, this switch has been removed from the query editor. High resolution can still be achieved by choosing a smaller minimum period in the query editor.

The handling of multi template variables in dimension values has changed in Grafana 6.5. When a multi template variable is used, Grafana generates a search expression. In the GetMetricData API, expressions are limited to 1024 characters, so this limit may be reached when a multi template variable with many values is used. If this is the case, we suggest you use the `*` wildcard as the dimension value instead of a multi template variable.
"npmClient": "yarn",
"useWorkspaces": true,
"packages": ["packages/*"],
"version": "6.5.3"
}
"license": "Apache-2.0",
"private": true,
"name": "grafana",
"version": "6.5.3",
"repository": {
"type": "git",
"url": "http://github.com/grafana/grafana.git"

"@types/jest": "24.0.13",
"@types/jquery": "1.10.35",
"@types/lodash": "4.14.123",
"@types/lru-cache": "^5.1.0",
"@types/marked": "0.6.5",
"@types/mousetrap": "1.6.3",
"@types/node": "11.13.4",

},
"dependencies": {
"@babel/polyfill": "7.6.0",
"@braintree/sanitize-url": "4.0.0",
"@grafana/slate-react": "0.22.9-grafana",
"@torkelo/react-select": "2.4.1",
"@types/react-loadable": "5.5.2",

"is-hotkey": "0.1.4",
"jquery": "3.4.1",
"lodash": "4.17.15",
"lru-cache": "^5.1.1",
"marked": "0.6.2",
"memoize-one": "5.1.1",
"moment": "2.24.0",
All packages are versioned according to the current Grafana version:

### Stable releases

> **Even though packages are released under a stable version, they are considered ALPHA until further notice!**

Stable releases are published under the `latest` tag on npm. If an alpha/beta version was released previously, the `next` tag is updated to the stable version.

### Alpha and beta releases

Alpha and beta releases are published under the `next` tag on npm.

Automatic prereleases are published under the `canary` dist tag.

5. Push version commit to the release branch.

### Building individual packages

To build individual packages, run:

```
grafana-toolkit package:build --scope=<ui|toolkit|runtime|data>
```
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/data",
"version": "6.5.3",
"description": "Grafana Data Library",
"keywords": [
"typescript"
  expect(again).toBe(input);
});

it('throws when table rows is not array', () => {
  expect(() =>
    toDataFrame({
      columns: [],
      rows: {},
    })
  ).toThrowError('Expected table rows to be array, got object.');
});

it('migrate from 6.3 style rows', () => {
  const oldDataFrame = {
    fields: [{ name: 'A' }, { name: 'B' }, { name: 'C' }],
// Libraries
import { isArray, isBoolean, isNumber, isString } from 'lodash';

// Types
import {

  };
});

if (!isArray(table.rows)) {
  throw new Error(`Expected table rows to be array, got ${typeof table.rows}.`);
}

for (const row of table.rows) {
  for (let i = 0; i < fields.length; i++) {
    fields[i].values.buffer.push(row[i]);
@@ -1,8 +1,10 @@
|
||||
import merge from 'lodash/merge';
|
||||
import { getFieldProperties, getFieldDisplayValues, GetFieldDisplayValuesOptions } from './fieldDisplay';
|
||||
import { toDataFrame } from '../dataframe/processDataFrame';
|
||||
import { ReducerID } from '../transformations/fieldReducer';
|
||||
import { Threshold } from '../types/threshold';
|
||||
import { GrafanaTheme } from '../types/theme';
|
||||
import { MappingType } from '../types';
|
||||
|
||||
describe('FieldDisplay', () => {
|
||||
it('Construct simple field properties', () => {
|
||||
@@ -32,33 +34,8 @@ describe('FieldDisplay', () => {
|
||||
expect(field.unit).toEqual('ms');
|
||||
});
|
||||
|
||||
// Simple test dataset
|
||||
|
||||
const options: GetFieldDisplayValuesOptions = {
|
||||
data: [
|
||||
toDataFrame({
|
||||
name: 'Series Name',
|
||||
fields: [
|
||||
{ name: 'Field 1', values: ['a', 'b', 'c'] },
|
||||
{ name: 'Field 2', values: [1, 3, 5] },
|
||||
{ name: 'Field 3', values: [2, 4, 6] },
|
||||
],
|
||||
}),
|
||||
],
|
||||
replaceVariables: (value: string) => {
|
||||
return value; // Return it unchanged
|
||||
},
|
||||
fieldOptions: {
|
||||
calcs: [],
|
||||
override: {},
|
||||
defaults: {},
|
||||
},
|
||||
theme: {} as GrafanaTheme,
|
||||
};
|
||||
|
||||
it('show first numeric values', () => {
|
||||
const display = getFieldDisplayValues({
|
||||
...options,
|
||||
const options = createDisplayOptions({
|
||||
fieldOptions: {
|
||||
calcs: [ReducerID.first],
|
||||
override: {},
|
||||
@@ -67,28 +44,24 @@ describe('FieldDisplay', () => {
|
||||
},
|
||||
},
|
||||
});
|
||||
const display = getFieldDisplayValues(options);
|
||||
expect(display.map(v => v.display.text)).toEqual(['1', '2']);
|
||||
    // expect(display.map(v => v.display.title)).toEqual([
    //   'a * Field 1 * Series Name', // 0
    //   'b * Field 2 * Series Name', // 1
    // ]);
  });

  it('show last numeric values', () => {
    const display = getFieldDisplayValues({
      ...options,
    const options = createDisplayOptions({
      fieldOptions: {
        calcs: [ReducerID.last],
        override: {},
        defaults: {},
      },
    });
    const display = getFieldDisplayValues(options);
    expect(display.map(v => v.display.numeric)).toEqual([5, 6]);
  });

  it('show all numeric values', () => {
    const display = getFieldDisplayValues({
      ...options,
    const options = createDisplayOptions({
      fieldOptions: {
        values: true, //
        limit: 1000,
@@ -97,12 +70,12 @@ describe('FieldDisplay', () => {
        defaults: {},
      },
    });
    const display = getFieldDisplayValues(options);
    expect(display.map(v => v.display.numeric)).toEqual([1, 3, 5, 2, 4, 6]);
  });

  it('show 2 numeric values (limit)', () => {
    const display = getFieldDisplayValues({
      ...options,
    const options = createDisplayOptions({
      fieldOptions: {
        values: true, //
        limit: 2,
@@ -111,6 +84,7 @@ describe('FieldDisplay', () => {
        defaults: {},
      },
    });
    const display = getFieldDisplayValues(options);
    expect(display.map(v => v.display.numeric)).toEqual([1, 3]); // First 2 are from the first field
  });

@@ -132,28 +106,108 @@ describe('FieldDisplay', () => {
  });

  it('Should return field thresholds when there is no data', () => {
    const options: GetFieldDisplayValuesOptions = {
      data: [
        {
          name: 'No data',
          fields: [],
          length: 0,
        },
      ],
      replaceVariables: (value: string) => {
        return value;
      },
    const options = createEmptyDisplayOptions({
      fieldOptions: {
        calcs: [],
        override: {},
        defaults: {
          thresholds: [{ color: '#F2495C', value: 50 }],
        },
      },
      theme: {} as GrafanaTheme,
    };
    });

    const display = getFieldDisplayValues(options);
    expect(display[0].field.thresholds!.length).toEqual(1);
    expect(display[0].display.numeric).toEqual(0);
  });

  it('Should return field with default text when no mapping or data available', () => {
    const options = createEmptyDisplayOptions();
    const display = getFieldDisplayValues(options);
    expect(display[0].display.text).toEqual('No data');
    expect(display[0].display.numeric).toEqual(0);
  });

  it('Should return field mapped value when there is no data', () => {
    const mapEmptyToText = '0';
    const options = createEmptyDisplayOptions({
      fieldOptions: {
        override: {
          mappings: [
            {
              id: 1,
              operator: '',
              text: mapEmptyToText,
              type: MappingType.ValueToText,
              value: 'null',
            },
          ],
        },
      },
    });

    const display = getFieldDisplayValues(options);
    expect(display[0].display.text).toEqual(mapEmptyToText);
    expect(display[0].display.numeric).toEqual(0);
  });

  it('Should always return display numeric 0 when there is no data', () => {
    const mapEmptyToText = '0';
    const options = createEmptyDisplayOptions({
      fieldOptions: {
        override: {
          mappings: [
            {
              id: 1,
              operator: '',
              text: mapEmptyToText,
              type: MappingType.ValueToText,
              value: 'null',
            },
          ],
        },
      },
    });

    const display = getFieldDisplayValues(options);
    expect(display[0].display.numeric).toEqual(0);
  });
});

function createEmptyDisplayOptions(extend = {}): GetFieldDisplayValuesOptions {
  const options = createDisplayOptions(extend);

  return Object.assign(options, {
    data: [
      {
        name: 'No data',
        fields: [],
        length: 0,
      },
    ],
  });
}

function createDisplayOptions(extend = {}): GetFieldDisplayValuesOptions {
  const options: GetFieldDisplayValuesOptions = {
    data: [
      toDataFrame({
        name: 'Series Name',
        fields: [
          { name: 'Field 1', values: ['a', 'b', 'c'] },
          { name: 'Field 2', values: [1, 3, 5] },
          { name: 'Field 3', values: [2, 4, 6] },
        ],
      }),
    ],
    replaceVariables: (value: string) => {
      return value;
    },
    fieldOptions: {
      calcs: [],
      override: {},
      defaults: {},
    },
    theme: {} as GrafanaTheme,
  };

  return merge<GetFieldDisplayValuesOptions, any>(options, extend);
}
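The helper above relies on lodash `merge` to deep-merge the `extend` argument into the base options, so a caller can override just `fieldOptions.calcs` without losing `override` and `defaults`. A minimal hand-rolled stand-in for that deep-merge behavior (an illustrative sketch, not the lodash implementation; note lodash merges arrays element-wise while this sketch simply replaces them):

```typescript
// Recursively copy `source` keys into `target`, combining nested objects
// rather than replacing them wholesale (unlike Object.assign).
function deepMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    const sv = source[key];
    if (sv && typeof sv === 'object' && !Array.isArray(sv) && target[key] && typeof target[key] === 'object') {
      deepMerge(target[key], sv); // recurse into shared nested objects
    } else {
      target[key] = sv; // primitives and arrays are replaced (lodash differs for arrays)
    }
  }
  return target;
}

const base = { fieldOptions: { calcs: [] as string[], override: {}, defaults: { unit: 's' } } };
deepMerge(base, { fieldOptions: { calcs: ['last'] } });
console.log(base.fieldOptions.calcs); // calcs overridden
console.log(base.fieldOptions.defaults); // defaults preserved
```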

@@ -1,5 +1,6 @@
import toNumber from 'lodash/toNumber';
import toString from 'lodash/toString';
import isEmpty from 'lodash/isEmpty';

import { getDisplayProcessor } from './displayProcessor';
import { getFlotPairs } from '../utils/flotPairs';
@@ -195,16 +196,7 @@ export const getFieldDisplayValues = (options: GetFieldDisplayValuesOptions): Fi
  }

  if (values.length === 0) {
    values.push({
      name: 'No data',
      field: {
        ...defaults,
      },
      display: {
        numeric: 0,
        text: 'No data',
      },
    });
    values.push(createNoValuesFieldDisplay(options));
  } else if (values.length === 1 && !fieldOptions.defaults.title) {
    // Don't show title for single item
    values[0].display.title = undefined;
@@ -278,3 +270,36 @@ export function getFieldProperties(...props: FieldConfig[]): FieldConfig {
  }
  return field;
}

function createNoValuesFieldDisplay(options: GetFieldDisplayValuesOptions): FieldDisplay {
  const displayName = 'No data';
  const { fieldOptions } = options;
  const { defaults, override } = fieldOptions;

  const config = getFieldProperties(defaults, {}, override);
  const displayProcessor = getDisplayProcessor({
    config,
    theme: options.theme,
    type: FieldType.other,
  });

  const display = displayProcessor(null);
  const text = getDisplayText(display, displayName);

  return {
    name: displayName,
    field: {
      ...defaults,
    },
    display: {
      text,
      numeric: 0,
    },
  };
}

function getDisplayText(display: DisplayValue, fallback: string): string {
  if (!display || isEmpty(display.text)) {
    return fallback;
  }
  return display.text;
}

@@ -1,6 +1,11 @@
import { stringToJsRegex, stringToMs } from './string';
import { escapeStringForRegex, stringToJsRegex, stringToMs, unEscapeStringFromRegex } from './string';

describe('stringToJsRegex', () => {
  it('should just return string as RegEx if it does not start as a regex', () => {
    const output = stringToJsRegex('validRegexp');
    expect(output).toBeInstanceOf(RegExp);
  });

  it('should parse the valid regex value', () => {
    const output = stringToJsRegex('/validRegexp/');
    expect(output).toBeInstanceOf(RegExp);
@@ -51,3 +56,35 @@ describe('stringToMs', () => {
    }).toThrow();
  });
});

describe('escapeStringForRegex', () => {
  describe('when using a string with special chars', () => {
    it('then all special chars should be escaped', () => {
      const result = escapeStringForRegex('([{}])|*+-.?<>#&^$');
      expect(result).toBe('\\(\\[\\{\\}\\]\\)\\|\\*\\+\\-\\.\\?\\<\\>\\#\\&\\^\\$');
    });
  });

  describe('when using a string without special chars', () => {
    it('then nothing should change', () => {
      const result = escapeStringForRegex('some string 123');
      expect(result).toBe('some string 123');
    });
  });
});

describe('unEscapeStringFromRegex', () => {
  describe('when using a string with escaped special chars', () => {
    it('then all special chars should be unescaped', () => {
      const result = unEscapeStringFromRegex('\\(\\[\\{\\}\\]\\)\\|\\*\\+\\-\\.\\?\\<\\>\\#\\&\\^\\$');
      expect(result).toBe('([{}])|*+-.?<>#&^$');
    });
  });

  describe('when using a string without escaped special chars', () => {
    it('then nothing should change', () => {
      const result = unEscapeStringFromRegex('some string 123');
      expect(result).toBe('some string 123');
    });
  });
});

@@ -1,6 +1,32 @@
const specialChars = ['(', '[', '{', '}', ']', ')', '|', '*', '+', '-', '.', '?', '<', '>', '#', '&', '^', '$'];

export const escapeStringForRegex = (value: string) => {
  if (!value) {
    return value;
  }

  return specialChars.reduce((escaped, currentChar) => escaped.replace(currentChar, '\\' + currentChar), value);
};

export const unEscapeStringFromRegex = (value: string) => {
  if (!value) {
    return value;
  }

  return specialChars.reduce((escaped, currentChar) => escaped.replace('\\' + currentChar, currentChar), value);
};
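These two helpers are inverses for strings containing each special character at most once; note that `String.prototype.replace` with a string pattern (as used in the `reduce` above) only replaces the first occurrence. A runnable sketch of the round trip:

```typescript
// Same table and helpers as in the diff above, reproduced so this runs standalone.
const specialChars = ['(', '[', '{', '}', ']', ')', '|', '*', '+', '-', '.', '?', '<', '>', '#', '&', '^', '$'];

const escapeStringForRegex = (value: string) =>
  !value ? value : specialChars.reduce((escaped, c) => escaped.replace(c, '\\' + c), value);

const unEscapeStringFromRegex = (value: string) =>
  !value ? value : specialChars.reduce((escaped, c) => escaped.replace('\\' + c, c), value);

const escaped = escapeStringForRegex('size(bytes)');
console.log(escaped); // size\(bytes\)
console.log(new RegExp(escaped).test('size(bytes)')); // true: the parens are now literal
console.log(unEscapeStringFromRegex(escaped)); // size(bytes)
```

Because each character is a plain-string `replace`, a value with the same special character twice would only have its first occurrence escaped; the unit tests above use each character exactly once, which is the case this helper targets.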

export function stringStartsAsRegEx(str: string): boolean {
  if (!str) {
    return false;
  }

  return str[0] === '/';
}

export function stringToJsRegex(str: string): RegExp {
  if (str[0] !== '/') {
    return new RegExp('^' + str + '$');
  if (!stringStartsAsRegEx(str)) {
    return new RegExp(`^${str}$`);
  }

  const match = str.match(new RegExp('^/(.*?)/(g?i?m?y?)$'));
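The hunk is truncated after the `match` line. A self-contained sketch of the whole function as shown, with the tail (error path and `RegExp` construction from the captured pattern and flags) filled in as an assumption based on the match expression:

```typescript
function stringStartsAsRegEx(str: string): boolean {
  if (!str) {
    return false;
  }
  return str[0] === '/';
}

function stringToJsRegex(str: string): RegExp {
  // Plain strings become anchored literal matches.
  if (!stringStartsAsRegEx(str)) {
    return new RegExp(`^${str}$`);
  }

  // "/pattern/flags" strings are parsed into pattern + flags.
  const match = str.match(new RegExp('^/(.*?)/(g?i?m?y?)$'));
  if (!match) {
    throw new Error(`'${str}' is not a valid regular expression`); // assumed error path, not from the diff
  }
  return new RegExp(match[1], match[2]); // assumed construction from the two capture groups
}

console.log(stringToJsRegex('host-1').test('host-1')); // true (anchored literal)
console.log(stringToJsRegex('/prod-.*/i').test('PROD-7')); // true (pattern with flags)
```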

@@ -284,7 +284,6 @@ export interface ExploreQueryFieldProps<
> extends QueryEditorProps<DSType, TQuery, TOptions> {
  history: any[];
  onBlur?: () => void;
  onHint?: (action: QueryFixAction) => void;
}

export interface ExploreStartPageProps {

@@ -11,7 +11,7 @@ const addValueToTextMappingText = (
    return allValueMappings;
  }

  if (value === null && valueToTextMapping.value && valueToTextMapping.value.toLowerCase() === 'null') {
  if (value === null && isNullValueMap(valueToTextMapping)) {
    return allValueMappings.concat(valueToTextMapping);
  }

@@ -87,3 +87,10 @@ const getAllFormattedValueMappings = (valueMappings: ValueMapping[], value: Time
export const getMappedValue = (valueMappings: ValueMapping[], value: TimeSeriesValue): ValueMapping => {
  return getAllFormattedValueMappings(valueMappings, value)[0];
};

const isNullValueMap = (mapping: ValueMap): boolean => {
  if (!mapping || !mapping.value) {
    return false;
  }
  return mapping.value.toLowerCase() === 'null';
};
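A runnable sketch of the new guard, with a minimal `ValueMap` shape assumed from the fields this diff uses (the real interface lives in `@grafana/data`):

```typescript
// Minimal ValueMap shape for illustration only; not the full Grafana type.
interface ValueMap {
  id: number;
  operator: string;
  text: string;
  value: string;
}

const isNullValueMap = (mapping: ValueMap): boolean => {
  if (!mapping || !mapping.value) {
    return false;
  }
  return mapping.value.toLowerCase() === 'null';
};

const mapping: ValueMap = { id: 1, operator: '', text: '0', value: 'NULL' };
console.log(isNullValueMap(mapping)); // true: the comparison is case-insensitive
console.log(isNullValueMap({ ...mapping, value: '10' })); // false
```

Extracting the condition into `isNullValueMap` also makes the empty-value case explicit: a mapping with `value: ''` is never treated as a null mapping.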

@@ -280,8 +280,8 @@ export const getCategories = (): ValueFormatCategory[] => [
      { name: 'Exposure (C/kg)', id: 'radexpckg', fn: decimalSIPrefix('C/kg') },
      { name: 'roentgen (R)', id: 'radr', fn: decimalSIPrefix('R') },
      { name: 'Sievert/hour (Sv/h)', id: 'radsvh', fn: decimalSIPrefix('Sv/h') },
      { name: 'milliSievert/hour (mSv/h)', id: 'radmsvh', fn: decimalSIPrefix('mSv/h', -1) },
      { name: 'microSievert/hour (µSv/h)', id: 'radusvh', fn: decimalSIPrefix('µSv/h', -2) },
      { name: 'milliSievert/hour (mSv/h)', id: 'radmsvh', fn: decimalSIPrefix('Sv/h', -1) },
      { name: 'microSievert/hour (µSv/h)', id: 'radusvh', fn: decimalSIPrefix('Sv/h', -2) },
    ],
  },
  {

@@ -53,4 +53,12 @@ describe('valueFormats', () => {
      expect(str).toBe('1.200 s');
    });
  });

  describe('Resolve old units', () => {
    it('resolve farenheit', () => {
      const fmt0 = getValueFormat('farenheit');
      const fmt1 = getValueFormat('fahrenheit');
      expect(fmt0).toEqual(fmt1);
    });
  });
});

@@ -142,6 +142,14 @@ function buildFormats() {
    }
  }

  // Resolve units pointing to old IDs
  [{ from: 'farenheit', to: 'fahrenheit' }].forEach(alias => {
    const f = index[alias.to];
    if (f) {
      index[alias.from] = f;
    }
  });
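The alias pass above runs after the format index is built: it points legacy unit IDs (like the misspelled `farenheit`) at the same formatter object as the canonical ID, so old dashboards keep resolving. A small sketch with the formatter shape simplified (the real index is built from the unit categories):

```typescript
type ValueFormatter = (value: number) => string;

// Simplified stand-in for the built format index.
const index: Record<string, ValueFormatter> = {
  fahrenheit: (value: number) => `${value} °F`,
};

// Resolve units pointing to old IDs (same shape as in the hunk above).
[{ from: 'farenheit', to: 'fahrenheit' }].forEach(alias => {
  const f = index[alias.to];
  if (f) {
    index[alias.from] = f;
  }
});

console.log(index['farenheit'] === index['fahrenheit']); // true: both IDs share one function
console.log(index['farenheit'](98.6)); // 98.6 °F
```

Because the alias entry is the same function object, the `expect(fmt0).toEqual(fmt1)` assertion in the test above holds trivially.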

  hasBuiltIndex = true;
}

@@ -2,7 +2,7 @@
  "author": "Grafana Labs",
  "license": "Apache-2.0",
  "name": "@grafana/runtime",
  "version": "6.4.0-pre",
  "version": "6.5.3",
  "description": "Grafana Runtime Library",
  "keywords": [
    "grafana",
@@ -21,8 +21,8 @@
    "build": "grafana-toolkit package:build --scope=runtime"
  },
  "dependencies": {
    "@grafana/data": "^6.4.0-alpha",
    "@grafana/ui": "^6.4.0-alpha",
    "@grafana/data": "6.5.3",
    "@grafana/ui": "6.5.3",
    "systemjs": "0.20.19",
    "systemjs-plugin-css": "0.1.37"
  },

@@ -9,7 +9,7 @@ grafana-toolkit is a CLI that enables efficient development of Grafana plugins.
Set up a new plugin with `grafana-toolkit plugin:create` command:

```sh
npx grafana-toolkit plugin:create my-grafana-plugin
npx @grafana/toolkit plugin:create my-grafana-plugin
cd my-grafana-plugin
yarn install
yarn dev

@@ -2,7 +2,7 @@
  "author": "Grafana Labs",
  "license": "Apache-2.0",
  "name": "@grafana/toolkit",
  "version": "6.4.0-pre",
  "version": "6.5.3",
  "description": "Grafana Toolkit",
  "keywords": [
    "grafana",
@@ -28,8 +28,8 @@
  "dependencies": {
    "@babel/core": "7.6.4",
    "@babel/preset-env": "7.6.3",
    "@grafana/data": "^6.4.0-alpha",
    "@grafana/ui": "^6.4.0-alpha",
    "@grafana/data": "6.5.3",
    "@grafana/ui": "6.5.3",
    "@types/command-exists": "^1.2.0",
    "@types/execa": "^0.9.0",
    "@types/expect-puppeteer": "3.3.1",
@@ -42,7 +42,6 @@
    "@types/semver": "^6.0.0",
    "@types/tmp": "^0.1.0",
    "@types/webpack": "4.4.34",
    "aws-sdk": "^2.495.0",
    "axios": "0.19.0",
    "babel-jest": "24.8.0",
    "babel-loader": "8.0.6",

@@ -1,7 +1,6 @@
import { Task, TaskRunner } from './task';
import { pluginBuildRunner } from './plugin.build';
import { restoreCwd } from '../utils/cwd';
import { S3Client } from '../../plugins/aws';
import { getPluginJson } from '../../config/utils/pluginValidation';
import { getPluginId } from '../../config/utils/getPluginId';
import { PluginMeta } from '@grafana/data';
@@ -10,28 +9,18 @@ import { PluginMeta } from '@grafana/data';
import execa = require('execa');
import path = require('path');
import fs from 'fs';
import { getPackageDetails, findImagesInFolder, appendPluginHistory, getGrafanaVersions } from '../../plugins/utils';
import { getPackageDetails, findImagesInFolder, getGrafanaVersions } from '../../plugins/utils';
import {
  job,
  getJobFolder,
  writeJobStats,
  getCiFolder,
  getPluginBuildInfo,
  getBuildNumber,
  getPullRequestNumber,
  getCircleDownloadBaseURL,
} from '../../plugins/env';
import { agregateWorkflowInfo, agregateCoverageInfo, agregateTestInfo } from '../../plugins/workflow';
import {
  PluginPackageDetails,
  PluginBuildReport,
  PluginHistory,
  defaultPluginHistory,
  TestResultsInfo,
  PluginDevInfo,
  PluginDevSummary,
  DevSummary,
} from '../../plugins/types';
import { PluginPackageDetails, PluginBuildReport, TestResultsInfo } from '../../plugins/types';
import { runEndToEndTests } from '../../plugins/e2e/launcher';
import { getEndToEndSettings } from '../../plugins/index';

@@ -185,6 +174,9 @@ const packagePluginRunner: TaskRunner<PluginCIOptions> = async () => {
    throw new Error('Invalid zip file: ' + zipFile);
  }

  // Make a copy so it is easy for report to read
  await execa('cp', [pluginJsonFile, distDir]);

  const info: PluginPackageDetails = {
    plugin: await getPackageDetails(zipFile, distDir),
  };
@@ -346,88 +338,23 @@ const pluginReportRunner: TaskRunner<PluginCIOptions> = async ({ upload }) => {
    }
  });

  console.log('Initalizing S3 Client');
  const s3 = new S3Client();

  const build = pluginMeta.info.build;
  if (!build) {
    throw new Error('Metadata missing build info');
  const GRAFANA_API_KEY = process.env.GRAFANA_API_KEY;
  if (!GRAFANA_API_KEY) {
    console.log('Enter a GRAFANA_API_KEY to upload the plugin report');
    return;
  }
  const url = `https://grafana.com/api/plugins/${report.plugin.id}/ci`;

  const version = pluginMeta.info.version || 'unknown';
  const branch = build.branch || 'unknown';
  const buildNumber = getBuildNumber();
  const root = `dev/${pluginMeta.id}`;
  const dirKey = pr ? `${root}/pr/${pr}/${buildNumber}` : `${root}/branch/${branch}/${buildNumber}`;

  const jobKey = `${dirKey}/index.json`;
  if (await s3.exists(jobKey)) {
    throw new Error('Job already registered: ' + jobKey);
  }

  console.log('Write Job', jobKey);
  await s3.writeJSON(jobKey, report, {
    Tagging: `version=${version}&type=${pluginMeta.type}`,
  console.log('Sending report to:', url);
  const axios = require('axios');
  const info = await axios.post(url, report, {
    headers: { Authorization: 'Bearer ' + GRAFANA_API_KEY },
  });

  // Upload logo
  const logo = await s3.uploadLogo(report.plugin.info, {
    local: path.resolve(ciDir, 'dist'),
    remote: root,
  });

  const latest: PluginDevInfo = {
    pluginId: pluginMeta.id,
    name: pluginMeta.name,
    logo,
    build: pluginMeta.info.build!,
    version,
  };

  let base = `${root}/branch/${branch}/`;
  latest.build.number = buildNumber;
  if (pr) {
    latest.build.pr = pr;
    base = `${root}/pr/${pr}/`;
  }

  const historyKey = base + `history.json`;
  console.log('Read', historyKey);
  const history: PluginHistory = await s3.readJSON(historyKey, defaultPluginHistory);
  appendPluginHistory(report, latest, history);

  await s3.writeJSON(historyKey, history);
  console.log('wrote history');

  // Private things may want to upload
  if (upload) {
    s3.uploadPackages(packageInfo, {
      local: packageDir,
      remote: dirKey + '/packages',
    });

    s3.uploadTestFiles(report.tests, {
      local: ciDir,
      remote: dirKey,
    });
  }

  console.log('Update Directory Indexes');

  let indexKey = `${root}/index.json`;
  const index: PluginDevSummary = await s3.readJSON(indexKey, { branch: {}, pr: {} });
  if (pr) {
    index.pr[pr] = latest;
  if (info.status === 200) {
    console.log('OK: ', info.data);
  } else {
    index.branch[branch] = latest;
    console.warn('Error: ', info);
  }
  await s3.writeJSON(indexKey, index);

  indexKey = `dev/index.json`;
  const pluginIndex: DevSummary = await s3.readJSON(indexKey, {});
  pluginIndex[pluginMeta.id] = latest;
  await s3.writeJSON(indexKey, pluginIndex);
  console.log('wrote index');
};

export const ciPluginReportTask = new Task<PluginCIOptions>('Generate Plugin Report', pluginReportRunner);

@@ -1,183 +0,0 @@
import AWS from 'aws-sdk';
import path from 'path';
import fs from 'fs';

import { PluginPackageDetails, ZipFileInfo, TestResultsInfo } from './types';
import defaults from 'lodash/defaults';
import clone from 'lodash/clone';
import { PluginMetaInfo } from '@grafana/data';

interface UploadArgs {
  local: string;
  remote: string;
}

export class S3Client {
  readonly bucket: string;
  readonly prefix: string;
  readonly s3: AWS.S3;

  constructor(bucket?: string) {
    this.bucket = bucket || 'grafana-experiments';
    this.prefix = 'plugins/';

    this.s3 = new AWS.S3({ apiVersion: '2006-03-01' });
    this.s3.headBucket({ Bucket: this.bucket }, (err, data) => {
      if (err) {
        throw new Error('Unable to read: ' + this.bucket);
      } else {
        console.log('s3: ' + data);
      }
    });
  }

  private async uploadPackage(file: ZipFileInfo, folder: UploadArgs): Promise<string> {
    const fpath = path.resolve(process.cwd(), folder.local, file.name);
    return await this.uploadFile(fpath, folder.remote + '/' + file.name, file.md5);
  }

  async uploadPackages(packageInfo: PluginPackageDetails, folder: UploadArgs) {
    await this.uploadPackage(packageInfo.plugin, folder);
    if (packageInfo.docs) {
      await this.uploadPackage(packageInfo.docs, folder);
    }
  }

  async uploadTestFiles(tests: TestResultsInfo[], folder: UploadArgs) {
    for (const test of tests) {
      for (const s of test.screenshots) {
        const img = path.resolve(folder.local, 'jobs', test.job, s);
        await this.uploadFile(img, folder.remote + `/jobs/${test.job}/${s}`);
      }
    }
  }

  async uploadLogo(meta: PluginMetaInfo, folder: UploadArgs): Promise<string | undefined> {
    const { logos } = meta;
    if (logos && logos.large) {
      const img = folder.local + '/' + logos.large;
      const idx = img.lastIndexOf('.');
      const name = 'logo' + img.substring(idx);
      const key = folder.remote + '/' + name;
      await this.uploadFile(img, key);
      return name;
    }
    return undefined;
  }

  async uploadFile(fpath: string, path: string, md5?: string): Promise<string> {
    if (!fs.existsSync(fpath)) {
      return Promise.reject('File not found: ' + fpath);
    }
    console.log('Uploading: ' + fpath);
    const stream = fs.createReadStream(fpath);
    return new Promise((resolve, reject) => {
      this.s3.putObject(
        {
          Key: this.prefix + path,
          Bucket: this.bucket,
          Body: stream,
          ContentType: getContentTypeForFile(path),
        },
        (err, data) => {
          if (err) {
            reject(err);
          } else {
            if (md5 && md5 !== data.ETag && `"${md5}"` !== data.ETag) {
              reject(`Upload ETag does not match MD5 (${md5} !== ${data.ETag})`);
            } else {
              resolve(data.ETag);
            }
          }
        }
      );
    });
  }

  async exists(key: string): Promise<boolean> {
    return new Promise((resolve, reject) => {
      this.s3.getObject(
        {
          Bucket: this.bucket,
          Key: this.prefix + key,
        },
        (err, data) => {
          if (err) {
            resolve(false);
          } else {
            resolve(true);
          }
        }
      );
    });
  }

  async readJSON<T>(key: string, defaultValue: T): Promise<T> {
    return new Promise((resolve, reject) => {
      this.s3.getObject(
        {
          Bucket: this.bucket,
          Key: this.prefix + key,
        },
        (err, data) => {
          if (err) {
            resolve(clone(defaultValue));
          } else {
            try {
              const v = JSON.parse(data.Body as string);
              resolve(defaults(v, defaultValue));
            } catch (e) {
              console.log('ERROR', e);
              reject('Error reading response');
            }
          }
        }
      );
    });
  }

  async writeJSON(
    key: string,
    obj: {},
    params?: Partial<AWS.S3.Types.PutObjectRequest>
  ): Promise<AWS.S3.Types.PutObjectOutput> {
    return new Promise((resolve, reject) => {
      this.s3.putObject(
        {
          ...params,
          Key: this.prefix + key,
          Bucket: this.bucket,
          Body: JSON.stringify(obj, null, 2), // Pretty print
          ContentType: 'application/json',
        },
        (err, data) => {
          if (err) {
            reject(err);
          } else {
            resolve(data);
          }
        }
      );
    });
  }
}

function getContentTypeForFile(name: string): string | undefined {
  const idx = name.lastIndexOf('.');
  if (idx > 0) {
    const ext = name.substring(idx + 1).toLowerCase();
    if (ext === 'zip') {
      return 'application/zip';
    }
    if (ext === 'json') {
      return 'application/json';
    }
    if (ext === 'svg') {
      return 'image/svg+xml';
    }
    if (ext === 'png') {
      return 'image/png';
    }
  }
  return undefined;
}
@@ -1,4 +1,3 @@
export * from './aws';
export * from './env';
export * from './utils';
export * from './workflow';

@@ -2,7 +2,7 @@
  "author": "Grafana Labs",
  "license": "Apache-2.0",
  "name": "@grafana/ui",
  "version": "6.4.0-pre",
  "version": "6.5.3",
  "description": "Grafana Components Library",
  "keywords": [
    "grafana",
@@ -25,7 +25,7 @@
    "build": "grafana-toolkit package:build --scope=ui"
  },
  "dependencies": {
    "@grafana/data": "^6.4.0-alpha",
    "@grafana/data": "6.5.3",
    "@grafana/slate-react": "0.22.9-grafana",
    "@torkelo/react-select": "2.1.1",
    "@types/react-color": "2.17.0",

@@ -2,6 +2,9 @@ $arrowSize: 15px;
.ColorPicker {
  @extend .popper;
  font-size: 12px;
  // !important because these styles are also provided to popper via .popper classes from Tooltip component
  // hope to get rid of those soon
  padding: $arrowSize !important;
}

.ColorPicker__arrow {
@@ -75,32 +78,19 @@ $arrowSize: 15px;
  border-color: #1e2028;
}

// Top
.ColorPicker[data-placement^='top'] {
  padding-bottom: $arrowSize;
}

// Bottom
// !important because these styles are also provided to popper via .popper classes from Tooltip component
// hope to get rid of those soon
.ColorPicker[data-placement^='top'],
.ColorPicker[data-placement^='bottom'] {
  padding-top: $arrowSize;
  padding-left: 0 !important;
  padding-right: 0 !important;
}

.ColorPicker[data-placement^='bottom-start'] {
  padding-top: $arrowSize;
}

.ColorPicker[data-placement^='bottom-end'] {
  padding-top: $arrowSize;
}

// Right
// !important because these styles are also provided to popper via .popper classes from Tooltip component
// hope to get rid of those soon
.ColorPicker[data-placement^='left'],
.ColorPicker[data-placement^='right'] {
  padding-left: $arrowSize;
}

// Left
.ColorPicker[data-placement^='left'] {
  padding-right: $arrowSize;
  padding-top: 0 !important;
}

.ColorPickerPopover {

@@ -139,7 +139,6 @@ export class Graph extends PureComponent<GraphProps, GraphState> {

    // Check if tooltip needs to be rendered with custom tooltip component, otherwise default to GraphTooltip
    const tooltipContentRenderer = tooltipElementProps.tooltipComponent || GraphTooltip;

    // Indicates column(field) index in y-axis dimension
    const seriesIndex = activeItem ? activeItem.series.seriesIndex : 0;
    // Indicates row index in active field values
@@ -266,7 +265,7 @@ export class Graph extends PureComponent<GraphProps, GraphState> {
    };

    try {
      $.plot(this.element, series, flotOptions);
      $.plot(this.element, series.filter(s => s.isVisible), flotOptions);
    } catch (err) {
      console.log('Graph rendering error', err, flotOptions, series);
      throw new Error('Error rendering panel');

@@ -94,7 +94,7 @@ export const GraphWithLegend: React.FunctionComponent<GraphWithLegendProps> = (p
    <div className={wrapper}>
      <div className={graphContainer}>
        <Graph
          series={series.filter(s => !!s.isVisible)}
          series={series}
          timeRange={timeRange}
          timeZone={timeZone}
          showLines={showLines}
packages/grafana-ui/src/components/Icon/Icon.mdx (new file, 28 lines)
@@ -0,0 +1,28 @@
import { Meta, Story, Preview, Props } from '@storybook/addon-docs/blocks';
import { Icon } from './Icon';

<Meta title="MDX|Icon" component={Icon} />

# Icon

Grafana's wrapper component over [Font Awesome](https://fontawesome.com/) icons

### Changing icon size

By default `Icon` has width and height of `16px` and font-size of `14px`. Pass `className` to control icon's size:

```jsx
import { css } from 'emotion';

const customIconSize = css`
  width: 20px;
  height: 20px;
  font-size: 18px;
`;

<Icon name="check" className={customIconSize} />
```

<Props of={Icon} />
packages/grafana-ui/src/components/Icon/Icon.story.tsx (new file, 103 lines)
@@ -0,0 +1,103 @@
import React from 'react';
import { css } from 'emotion';

import { Icon } from './Icon';
import { getAvailableIcons, IconType } from './types';
import { withCenteredStory } from '../../utils/storybook/withCenteredStory';
import { useTheme, selectThemeVariant } from '../../themes';
import mdx from './Icon.mdx';

export default {
  title: 'UI/Icon',
  component: Icon,
  decorators: [withCenteredStory],
  parameters: {
    docs: {
      page: mdx,
    },
  },
};

const IconWrapper: React.FC<{ name: IconType }> = ({ name }) => {
  const theme = useTheme();
  const borderColor = selectThemeVariant(
    {
      light: theme.colors.gray5,
      dark: theme.colors.dark6,
    },
    theme.type
  );

  return (
    <div
      className={css`
        width: 150px;
        height: 60px;
        display: table-cell;
        padding: 12px;
        border: 1px solid ${borderColor};
        text-align: center;

        &:hover {
          background: ${borderColor};
        }
      `}
    >
      <Icon
        name={name}
        className={css`
          font-size: 18px;
        `}
      />
      <div
        className={css`
          padding-top: 16px;
          word-break: break-all;
          font-family: ${theme.typography.fontFamily.monospace};
          font-size: ${theme.typography.size.xs};
        `}
      >
        {name}
      </div>
    </div>
  );
};

export const simple = () => {
  const icons = getAvailableIcons();
  const iconsPerRow = 10;
  const rows: IconType[][] = [[]];
  let rowIdx = 0;

  icons.forEach((i: IconType, idx: number) => {
    if (idx % iconsPerRow === 0) {
      rows.push([]);
      rowIdx++;
    }
    rows[rowIdx].push(i);
  });

  return (
    <div
      className={css`
        display: table;
        table-layout: fixed;
        border-collapse: collapse;
      `}
    >
      {rows.map(r => {
        return (
          <div
            className={css`
              display: table-row;
            `}
          >
            {r.map((i, index) => {
              return <IconWrapper name={i} />;
            })}
          </div>
        );
      })}
    </div>
  );
};
packages/grafana-ui/src/components/Icon/Icon.tsx (new file, 31 lines)
@@ -0,0 +1,31 @@
import React from 'react';
import { cx, css } from 'emotion';
import { stylesFactory } from '../../themes';
import { IconType } from './types';

export interface IconProps {
  name: IconType;
  className?: string;
}

const getIconStyles = stylesFactory(() => {
  return {
    icon: css`
      display: inline-block;
      width: 16px;
      height: 16px;
      text-align: center;
      font-size: 14px;
      &:before {
        vertical-align: middle;
      }
    `,
  };
});

export const Icon: React.FC<IconProps> = ({ name, className }) => {
  const styles = getIconStyles();
  return <i className={cx(styles.icon, 'fa', `fa-${name}`, className)} />;
};

Icon.displayName = 'Icon';
packages/grafana-ui/src/components/Icon/types.ts (new file, 1304 lines; diff suppressed because it is too large)
@@ -1,9 +1,11 @@
import React, { PureComponent } from 'react';
import { LogLabelStatsModel } from '@grafana/data';
import { css, cx } from 'emotion';
import { LogLabelStatsModel, GrafanaTheme } from '@grafana/data';

import { Themeable } from '../../types/theme';
import { withTheme } from '../../themes/index';
import { getLogRowStyles } from './getLogRowStyles';
import { stylesFactory } from '../../themes/stylesFactory';

//Components
import { LogLabelStats } from './LogLabelStats';
@@ -24,6 +26,17 @@ interface State {
  fieldStats: LogLabelStatsModel[] | null;
}

const getStyles = stylesFactory((theme: GrafanaTheme) => {
  return {
    noHoverEffect: css`
      label: noHoverEffect;
      :hover {
        background-color: transparent;
      }
    `,
  };
});

class UnThemedLogDetailsRow extends PureComponent<Props, State> {
  state: State = {
    showFieldsStats: false,
@@ -66,22 +79,28 @@ class UnThemedLogDetailsRow extends PureComponent<Props, State> {
  render() {
    const { theme, parsedKey, parsedValue, isLabel, links } = this.props;
    const { showFieldsStats, fieldStats, fieldCount } = this.state;
    const styles = getStyles(theme);
    const style = getLogRowStyles(theme);
    return (
      <div className={style.logsRowDetailsValue}>
      <div className={cx(style.logsRowDetailsValue, { [styles.noHoverEffect]: showFieldsStats })}>
        {/* Action buttons - show stats/filter results */}
        <div onClick={this.showStats} aria-label={'Field stats'} className={style.logsRowDetailsIcon}>
        <div
          title="Ad-hoc statistics"
          onClick={this.showStats}
          aria-label={'Field stats'}
          className={style.logsRowDetailsIcon}
        >
          <i className={'fa fa-signal'} />
        </div>
        {isLabel ? (
          <div onClick={() => this.filterLabel()} className={style.logsRowDetailsIcon}>
          <div title="Filter for value" onClick={() => this.filterLabel()} className={style.logsRowDetailsIcon}>
            <i className={'fa fa-search-plus'} />
          </div>
        ) : (
          <div className={style.logsRowDetailsIcon} />
        )}
        {isLabel ? (
          <div onClick={() => this.filterOutLabel()} className={style.logsRowDetailsIcon}>
          <div title="Filter out value" onClick={() => this.filterOutLabel()} className={style.logsRowDetailsIcon}>
            <i className={'fa fa-search-minus'} />
          </div>
        ) : (
@@ -1,5 +1,6 @@
import React, { PureComponent } from 'react';
import { Field, LinkModel, LogRowModel, TimeZone, DataQueryResponse } from '@grafana/data';
import { Field, LinkModel, LogRowModel, TimeZone, DataQueryResponse, GrafanaTheme } from '@grafana/data';
import { cx, css } from 'emotion';

import {
  LogRowContextRows,
@@ -10,6 +11,7 @@ import {
import { Themeable } from '../../types/theme';
import { withTheme } from '../../themes/index';
import { getLogRowStyles } from './getLogRowStyles';
import { stylesFactory } from '../../themes/stylesFactory';

//Components
import { LogDetails } from './LogDetails';
@@ -21,7 +23,7 @@ interface Props extends Themeable {
  showDuplicates: boolean;
  showTime: boolean;
  timeZone: TimeZone;
  isLogsPanel?: boolean;
  allowDetails?: boolean;
  getRows: () => LogRowModel[];
  onClickFilterLabel?: (key: string, value: string) => void;
  onClickFilterOutLabel?: (key: string, value: string) => void;
@@ -35,6 +37,14 @@ interface State {
  showDetails: boolean;
}

const getStyles = stylesFactory((theme: GrafanaTheme) => {
  return {
    topVerticalAlign: css`
      label: topVerticalAlign;
      vertical-align: top;
    `,
  };
});
/**
 * Renders a log line.
 *
@@ -57,6 +67,9 @@ class UnThemedLogRow extends PureComponent<Props, State> {
  };

  toggleDetails = () => {
    if (this.props.allowDetails) {
      return;
    }
    this.setState(state => {
      return {
        showDetails: !state.showDetails,
@@ -75,7 +88,7 @@ class UnThemedLogRow extends PureComponent<Props, State> {
      onClickFilterLabel,
      onClickFilterOutLabel,
      highlighterExpressions,
      isLogsPanel,
      allowDetails,
      row,
      showDuplicates,
      timeZone,
@@ -85,8 +98,11 @@ class UnThemedLogRow extends PureComponent<Props, State> {
    } = this.props;
    const { showDetails, showContext } = this.state;
    const style = getLogRowStyles(theme, row.logLevel);
    const styles = getStyles(theme);
    const showUtc = timeZone === 'utc';

    const showDetailsClassName = showDetails
      ? cx(['fa fa-chevron-down', styles.topVerticalAlign])
      : cx(['fa fa-chevron-right', styles.topVerticalAlign]);
    return (
      <div className={style.logsRow}>
        {showDuplicates && (
@@ -95,13 +111,17 @@ class UnThemedLogRow extends PureComponent<Props, State> {
          </div>
        )}
        <div className={style.logsRowLevel} />
        {!isLogsPanel && (
          <div title="See log details" onClick={this.toggleDetails} className={style.logsRowToggleDetails}>
            <i className={showDetails ? 'fa fa-chevron-up' : 'fa fa-chevron-down'} />
        {!allowDetails && (
          <div
            title={showDetails ? 'Hide log details' : 'See log details'}
            onClick={this.toggleDetails}
            className={style.logsRowToggleDetails}
          >
            <i className={showDetailsClassName} />
          </div>
        )}
        <div>
          <div>
            <div onClick={this.toggleDetails}>
              {showTime && showUtc && (
                <div className={style.logsRowLocalTime} title={`Local: ${row.timeLocal} (${row.timeFromNow})`}>
                  {row.timeUtc}
@@ -122,21 +122,7 @@ class UnThemedLogRowMessage extends PureComponent<Props, State> {
          )}
        </span>
        {row.searchWords && row.searchWords.length > 0 && (
          <span
            onClick={this.onContextToggle}
            className={css`
              visibility: hidden;
              white-space: nowrap;
              position: relative;
              z-index: ${showContext ? 1 : 0};
              cursor: pointer;
              .${style.logsRow}:hover & {
                visibility: visible;
                margin-left: 10px;
                text-decoration: underline;
              }
            `}
          >
          <span onClick={this.onContextToggle} className={cx(style.context)}>
            {showContext ? 'Hide' : 'Show'} context
          </span>
        )}
@@ -20,7 +20,7 @@ export interface Props extends Themeable {
  showTime: boolean;
  timeZone: TimeZone;
  rowLimit?: number;
  isLogsPanel?: boolean;
  allowDetails?: boolean;
  previewLimit?: number;
  onClickFilterLabel?: (key: string, value: string) => void;
  onClickFilterOutLabel?: (key: string, value: string) => void;
@@ -79,7 +79,7 @@ class UnThemedLogRows extends PureComponent<Props, State> {
      onClickFilterOutLabel,
      rowLimit,
      theme,
      isLogsPanel,
      allowDetails,
      previewLimit,
      getFieldLinks,
    } = this.props;
@@ -115,7 +115,7 @@ class UnThemedLogRows extends PureComponent<Props, State> {
              showDuplicates={showDuplicates}
              showTime={showTime}
              timeZone={timeZone}
              isLogsPanel={isLogsPanel}
              allowDetails={allowDetails}
              onClickFilterLabel={onClickFilterLabel}
              onClickFilterOutLabel={onClickFilterOutLabel}
              getFieldLinks={getFieldLinks}
@@ -132,7 +132,7 @@ class UnThemedLogRows extends PureComponent<Props, State> {
              showDuplicates={showDuplicates}
              showTime={showTime}
              timeZone={timeZone}
              isLogsPanel={isLogsPanel}
              allowDetails={allowDetails}
              onClickFilterLabel={onClickFilterLabel}
              onClickFilterOutLabel={onClickFilterOutLabel}
              getFieldLinks={getFieldLinks}
@@ -7,7 +7,15 @@ import { stylesFactory } from '../../themes';

export const getLogRowStyles = stylesFactory((theme: GrafanaTheme, logLevel?: LogLevel) => {
  let logColor = selectThemeVariant({ light: theme.colors.gray5, dark: theme.colors.gray2 }, theme.type);
  const bgColor = selectThemeVariant({ light: theme.colors.gray5, dark: theme.colors.gray2 }, theme.type);
  const borderColor = selectThemeVariant({ light: theme.colors.gray5, dark: theme.colors.gray2 }, theme.type);
  const bgColor = selectThemeVariant({ light: theme.colors.gray5, dark: theme.colors.dark4 }, theme.type);
  const context = css`
    label: context;
    visibility: hidden;
    white-space: nowrap;
    position: relative;
  `;

  switch (logLevel) {
    case LogLevel.crit:
    case LogLevel.critical:
@@ -39,7 +47,6 @@ export const getLogRowStyles = stylesFactory((theme: GrafanaTheme, logLevel?: Lo
      padding: inherit;

      color: ${theme.colors.yellow};
      border-bottom: ${theme.border.width.sm} solid ${theme.colors.yellow};
      background-color: rgba(${theme.colors.yellow}, 0.1);
    `,
    logsRowMatchHighLightPreview: css`
@@ -55,9 +62,22 @@ export const getLogRowStyles = stylesFactory((theme: GrafanaTheme, logLevel?: Lo
      table-layout: fixed;
      width: 100%;
    `,
    context: context,
    logsRow: css`
      label: logs-row;
      display: table-row;
      cursor: pointer;
      &:hover {
        .${context} {
          visibility: visible;
          z-index: 1;
          margin-left: 10px;
          text-decoration: underline;
          &:hover {
            color: ${theme.colors.yellow};
          }
        }
      }

      > div {
        display: table-cell;
@@ -75,11 +95,13 @@ export const getLogRowStyles = stylesFactory((theme: GrafanaTheme, logLevel?: Lo
      label: logs-row__duplicates;
      text-align: right;
      width: 4em;
      cursor: default;
    `,
    logsRowLevel: css`
      label: logs-row__level;
      position: relative;
      width: 10px;
      cursor: default;

      &::after {
        content: '';
@@ -102,7 +124,6 @@ export const getLogRowStyles = stylesFactory((theme: GrafanaTheme, logLevel?: Lo
      width: 15px;
      padding-right: ${theme.spacing.sm};
      font-size: 9px;
      cursor: pointer;
    `,
    logsRowLocalTime: css`
      label: logs-row__localtime;
@@ -123,18 +144,23 @@ export const getLogRowStyles = stylesFactory((theme: GrafanaTheme, logLevel?: Lo
    logsRowDetailsTable: css`
      label: logs-row-details-table;
      display: table;
      border: 1px solid ${bgColor};
      border: 1px solid ${borderColor};
      border-radius: 3px;
      margin: 20px 0;
      padding: ${theme.spacing.sm};
      padding-top: 0;
      width: 100%;
      cursor: default;
    `,
    logsRowDetailsSectionTable: css`
      label: logs-row-details-table__section;
      display: table;
      table-layout: fixed;
      margin: 5px 0;
      margin: 0;
      width: 100%;
      &:first-of-type {
        margin-bottom: ${theme.spacing.xs};
      }
    `,
    logsRowDetailsIcon: css`
      label: logs-row-details__icon;
@@ -145,20 +171,19 @@ export const getLogRowStyles = stylesFactory((theme: GrafanaTheme, logLevel?: Lo
      color: ${theme.colors.gray3};
      &:hover {
        cursor: pointer;
        color: ${theme.colors.yellow};
      }
    `,
    logsRowDetailsLabel: css`
      label: logs-row-details__label;
      display: table-cell;
      padding: 0 ${theme.spacing.md} 0 ${theme.spacing.md};
      width: 12.5em;
      width: 14em;
      word-break: break-all;
    `,
    logsRowDetailsHeading: css`
      label: logs-row-details__heading;
      display: table-caption;
      margin: 5px 0 7px;
      margin: ${theme.spacing.sm} 0 ${theme.spacing.xs};
      font-weight: ${theme.typography.weight.bold};
    `,
    logsRowDetailsValue: css`
@@ -170,7 +195,7 @@ export const getLogRowStyles = stylesFactory((theme: GrafanaTheme, logLevel?: Lo
      cursor: default;

      &:hover {
        color: ${theme.colors.yellow};
        background-color: ${bgColor};
      }
    `,
  };
@@ -25,7 +25,7 @@ import {
import {
  TableCellBuilder,
  ColumnStyle,
  getCellBuilder,
  getFieldCellBuilder,
  TableCellBuilderOptions,
  simpleCellBuilder,
} from './TableCellBuilder';
@@ -153,7 +153,7 @@ export class Table extends Component<Props, State> {
      return {
        header: title,
        width: columnWidth,
        builder: getCellBuilder(col.config || {}, style, this.props),
        builder: getFieldCellBuilder(col, style, this.props),
      };
    });
  }

@@ -291,3 +291,33 @@ class CellBuilderWithStyle {
    return simpleCellBuilder({ value, props, className });
  };
}

export function getFieldCellBuilder(field: Field, style: ColumnStyle | null, p: Props): TableCellBuilder {
  if (!field.display) {
    return getCellBuilder(field.config || {}, style, p);
  }

  return (cell: TableCellBuilderOptions) => {
    const { props } = cell;
    const disp = field.display!(cell.value);

    let style = props.style;
    if (disp.color) {
      style = {
        ...props.style,
        background: disp.color,
      };
    }

    let clazz = 'gf-table-cell';
    if (cell.className) {
      clazz += ' ' + cell.className;
    }

    return (
      <div style={style} className={clazz} title={disp.title}>
        {disp.text}
      </div>
    );
  };
}
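The new `getFieldCellBuilder` above has two branches: fields without a display processor fall back to the config-based builder, while fields with one take their text and background color from the processed value. A plain-TypeScript sketch of that fallback logic, with hypothetical `DisplayValue`/`FieldLike` shapes standing in for the real `@grafana/data` types:

```typescript
// Hypothetical simplified types; the real ones live in @grafana/data.
interface DisplayValue {
  text: string;
  color?: string;
}

interface FieldLike {
  display?: (value: number) => DisplayValue;
}

interface Cell {
  text: string;
  background?: string;
}

// Build a cell: use the field's display processor when present,
// otherwise fall back to rendering the raw value.
function buildCell(field: FieldLike, value: number): Cell {
  if (!field.display) {
    return { text: String(value) };
  }
  const disp = field.display(value);
  return { text: disp.text, background: disp.color };
}

const plainField: FieldLike = {};
const themedField: FieldLike = {
  // e.g. one decimal place, red background above a threshold
  display: v => ({ text: v.toFixed(1), color: v > 10 ? 'red' : undefined }),
};
```

The same shape generalizes: the display processor owns formatting and thresholds, so the table renderer never needs to know about units or color mappings.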
@@ -1,7 +1,9 @@
import React, { ChangeEvent } from 'react';
import { mount } from 'enzyme';
import { GrafanaThemeType } from '@grafana/data';
import { ThresholdsEditor, Props, thresholdsWithoutKey } from './ThresholdsEditor';
import { colors } from '../../utils';
import { mockThemeContext } from '../../themes/ThemeContext';

const setup = (propOverrides?: Partial<Props>) => {
  const props: Props = {
@@ -25,6 +27,15 @@ function getCurrentThresholds(editor: ThresholdsEditor) {
}

describe('Render', () => {
  let restoreThemeContext: any;
  beforeAll(() => {
    restoreThemeContext = mockThemeContext({ type: GrafanaThemeType.Dark });
  });

  afterAll(() => {
    restoreThemeContext();
  });

  it('should render with base threshold', () => {
    const { wrapper } = setup();
    expect(wrapper).toMatchSnapshot();
@@ -72,205 +72,7 @@ exports[`Render should render with base threshold 1`] = `
|
||||
onChange={[Function]}
|
||||
theme={
|
||||
Object {
|
||||
"background": Object {
|
||||
"dropdown": "#1f1f20",
|
||||
"pageHeader": "linear-gradient(90deg, #292a2d, #000000)",
|
||||
"scrollbar": "#343436",
|
||||
"scrollbar2": "#343436",
|
||||
},
|
||||
"border": Object {
|
||||
"radius": Object {
|
||||
"lg": "5px",
|
||||
"md": "3px",
|
||||
"sm": "2px",
|
||||
},
|
||||
"width": Object {
|
||||
"sm": "1px",
|
||||
},
|
||||
},
|
||||
"breakpoints": Object {
|
||||
"lg": "992px",
|
||||
"md": "769px",
|
||||
"sm": "544px",
|
||||
"xl": "1200px",
|
||||
"xs": "0",
|
||||
},
|
||||
"colors": Object {
|
||||
"black": "#000000",
|
||||
"blue": "#33b5e5",
|
||||
"blue77": "#1f60c4",
|
||||
"blue85": "#3274d9",
|
||||
"blue95": "#5794f2",
|
||||
"blueBase": "#3274d9",
|
||||
"blueFaint": "#041126",
|
||||
"blueLight": "#5794f2",
|
||||
"blueShade": "#1f60c4",
|
||||
"body": "#d8d9da",
|
||||
"bodyBg": "#161719",
|
||||
"brandDanger": "#e02f44",
|
||||
"brandPrimary": "#eb7b18",
|
||||
"brandSuccess": "#299c46",
|
||||
"brandWarning": "#eb7b18",
|
||||
"critical": "#e02f44",
|
||||
"dark1": "#141414",
|
||||
"dark10": "#424345",
|
||||
"dark2": "#161719",
|
||||
"dark3": "#1f1f20",
|
||||
"dark4": "#212124",
|
||||
"dark5": "#222426",
|
||||
"dark6": "#262628",
|
||||
"dark7": "#292a2d",
|
||||
"dark8": "#2f2f32",
|
||||
"dark9": "#343436",
|
||||
"formDescription": "#9fa7b3",
|
||||
"formInputBg": "#202226",
|
||||
"formInputBgDisabled": "#141619",
|
||||
"formInputBorder": "#343b40",
|
||||
"formInputBorderActive": "#5794f2",
|
||||
"formInputBorderHover": "#464c54",
|
||||
"formInputBorderInvalid": "#e02f44",
|
||||
"formInputFocusOutline": "#1f60c4",
|
||||
"formInputText": "#9fa7b3",
|
||||
"formInputTextStrong": "#c7d0d9",
|
||||
"formInputTextWhite": "#ffffff",
|
||||
"formLabel": "#9fa7b3",
|
||||
"formLegend": "#c7d0d9",
|
||||
"formValidationMessageBg": "#e02f44",
|
||||
"formValidationMessageText": "#ffffff",
|
||||
"gray05": "#0b0c0e",
|
||||
"gray1": "#555555",
|
||||
"gray10": "#141619",
|
||||
"gray15": "#202226",
|
||||
"gray2": "#8e8e8e",
|
||||
"gray25": "#343b40",
|
||||
"gray3": "#b3b3b3",
|
||||
"gray33": "#464c54",
|
||||
"gray4": "#d8d9da",
|
||||
"gray5": "#ececec",
|
||||
"gray6": "#f4f5f8",
|
||||
"gray7": "#fbfbfb",
|
||||
"gray70": "#9fa7b3",
|
||||
"gray85": "#c7d0d9",
|
||||
"gray95": "#e9edf2",
|
||||
"gray98": "#f7f8fa",
|
||||
"grayBlue": "#212327",
|
||||
"greenBase": "#299c46",
|
||||
"greenShade": "#23843b",
|
||||
"headingColor": "#d8d9da",
|
||||
"inputBlack": "#09090b",
|
||||
"link": "#d8d9da",
|
||||
"linkDisabled": "#8e8e8e",
|
||||
"linkExternal": "#33b5e5",
|
||||
"linkHover": "#ffffff",
|
||||
"online": "#299c46",
|
||||
"orange": "#eb7b18",
|
||||
"orangeDark": "#ff780a",
|
||||
"pageBg": "#161719",
|
||||
"pageHeaderBorder": "#343436",
|
||||
"purple": "#9933cc",
|
||||
"queryGreen": "#74e680",
|
||||
"queryKeyword": "#66d9ef",
|
||||
"queryOrange": "#eb7b18",
|
||||
"queryPurple": "#fe85fc",
|
||||
"queryRed": "#e02f44",
|
||||
"red": "#d44a3a",
|
||||
"red88": "#e02f44",
|
||||
"redBase": "#e02f44",
|
||||
"redShade": "#c4162a",
|
||||
"text": "#d8d9da",
|
||||
"textEmphasis": "#ececec",
|
||||
"textFaint": "#222426",
|
||||
"textStrong": "#ffffff",
|
||||
"textWeak": "#8e8e8e",
|
||||
"variable": "#32d1df",
|
||||
"warn": "#f79520",
|
||||
"white": "#ffffff",
|
||||
"yellow": "#ecbb13",
|
||||
},
|
||||
"height": Object {
|
||||
"lg": "48px",
|
||||
"md": "32px",
|
||||
"sm": "24px",
|
||||
},
|
||||
"isDark": true,
|
||||
"isLight": false,
|
||||
"name": "Grafana Dark",
|
||||
"panelHeaderHeight": 28,
|
||||
"panelPadding": 8,
|
||||
"shadow": Object {
|
||||
"pageHeader": "inset 0px -4px 14px #1f1f20",
|
||||
},
|
||||
"spacing": Object {
|
||||
"d": "14px",
|
||||
"formButtonHeight": 32,
|
||||
"formFieldsetMargin": "16px",
|
||||
"formInputAffixPaddingHorizontal": "4px",
|
||||
"formInputHeight": "32px",
|
||||
"formInputMargin": "16px",
|
||||
"formInputPaddingHorizontal": "8px",
|
||||
"formLabelMargin": "0 0 4px 0",
|
||||
"formLabelPadding": "0 0 0 2px",
|
||||
"formLegendMargin": "0 0 16px 0",
|
||||
"formMargin": "32px",
|
||||
"formSpacingBase": 8,
|
||||
"formValidationMessagePadding": "4px 8px",
|
||||
"gutter": "30px",
|
||||
"insetSquishMd": "4px 8px",
|
||||
"lg": "24px",
|
||||
"md": "16px",
|
||||
"sm": "8px",
|
||||
"xl": "32px",
|
||||
"xs": "4px",
|
||||
"xxs": "2px",
|
||||
},
|
||||
"type": "dark",
|
||||
"typography": Object {
|
||||
"fontFamily": Object {
|
||||
"monospace": "Menlo, Monaco, Consolas, 'Courier New', monospace",
|
||||
"sansSerif": "'Roboto', 'Helvetica Neue', Arial, sans-serif",
|
||||
},
|
||||
"heading": Object {
|
||||
"h1": "28px",
|
||||
"h2": "24px",
|
||||
"h3": "21px",
|
||||
"h4": "18px",
|
||||
"h5": "16px",
|
||||
"h6": "14px",
|
||||
},
|
||||
"lineHeight": Object {
|
||||
"lg": 1.5,
|
||||
"md": 1.3333333333333333,
|
||||
"sm": 1.1,
|
||||
"xs": 1,
|
||||
},
|
||||
"link": Object {
|
||||
"decoration": "none",
|
||||
"hoverDecoration": "none",
|
||||
},
|
||||
"size": Object {
|
||||
"base": "13px",
|
||||
"lg": "18px",
|
||||
"md": "14px",
|
||||
"root": "14px",
|
||||
"sm": "12px",
|
||||
"xs": "10px",
|
||||
},
|
||||
"weight": Object {
|
||||
"bold": 600,
|
||||
"light": 300,
|
||||
"regular": 400,
|
||||
"semibold": 500,
|
||||
},
|
||||
},
|
||||
"zIndex": Object {
|
||||
"dropdown": "1000",
|
||||
"modal": "1050",
|
||||
"modalBackdrop": "1040",
|
||||
"navbarFixed": "1020",
|
||||
"sidemenu": "1025",
|
||||
"tooltip": "1030",
|
||||
"typeahead": "1060",
|
||||
},
|
||||
}
|
||||
}
|
||||
>
|
||||
@@ -282,205 +84,7 @@ exports[`Render should render with base threshold 1`] = `
|
||||
onChange={[Function]}
|
||||
theme={
|
||||
Object {
|
||||
"background": Object {
|
||||
"dropdown": "#1f1f20",
|
||||
"pageHeader": "linear-gradient(90deg, #292a2d, #000000)",
|
||||
"scrollbar": "#343436",
|
||||
"scrollbar2": "#343436",
|
||||
},
|
||||
"border": Object {
|
||||
"radius": Object {
|
||||
"lg": "5px",
|
||||
"md": "3px",
|
||||
"sm": "2px",
|
||||
},
|
||||
"width": Object {
|
||||
"sm": "1px",
|
||||
},
|
||||
},
|
||||
"breakpoints": Object {
|
||||
"lg": "992px",
|
||||
"md": "769px",
|
||||
"sm": "544px",
|
||||
"xl": "1200px",
|
||||
"xs": "0",
|
||||
},
|
||||
"colors": Object {
|
||||
"black": "#000000",
|
||||
"blue": "#33b5e5",
|
||||
"blue77": "#1f60c4",
|
||||
"blue85": "#3274d9",
|
||||
"blue95": "#5794f2",
|
||||
"blueBase": "#3274d9",
|
||||
"blueFaint": "#041126",
|
||||
"blueLight": "#5794f2",
|
||||
"blueShade": "#1f60c4",
|
||||
"body": "#d8d9da",
|
||||
"bodyBg": "#161719",
|
||||
"brandDanger": "#e02f44",
|
||||
"brandPrimary": "#eb7b18",
|
||||
"brandSuccess": "#299c46",
|
||||
"brandWarning": "#eb7b18",
|
||||
"critical": "#e02f44",
|
||||
"dark1": "#141414",
|
||||
"dark10": "#424345",
|
||||
"dark2": "#161719",
|
||||
"dark3": "#1f1f20",
|
||||
"dark4": "#212124",
|
||||
"dark5": "#222426",
|
||||
"dark6": "#262628",
|
||||
"dark7": "#292a2d",
|
||||
"dark8": "#2f2f32",
|
||||
"dark9": "#343436",
|
||||
"formDescription": "#9fa7b3",
|
||||
"formInputBg": "#202226",
|
||||
"formInputBgDisabled": "#141619",
|
||||
"formInputBorder": "#343b40",
|
||||
"formInputBorderActive": "#5794f2",
|
||||
"formInputBorderHover": "#464c54",
|
||||
"formInputBorderInvalid": "#e02f44",
|
||||
"formInputFocusOutline": "#1f60c4",
|
||||
"formInputText": "#9fa7b3",
|
||||
"formInputTextStrong": "#c7d0d9",
|
||||
"formInputTextWhite": "#ffffff",
|
||||
"formLabel": "#9fa7b3",
|
||||
"formLegend": "#c7d0d9",
|
||||
"formValidationMessageBg": "#e02f44",
|
||||
"formValidationMessageText": "#ffffff",
|
||||
"gray05": "#0b0c0e",
|
||||
"gray1": "#555555",
|
||||
"gray10": "#141619",
|
||||
"gray15": "#202226",
|
||||
"gray2": "#8e8e8e",
|
||||
"gray25": "#343b40",
|
||||
"gray3": "#b3b3b3",
|
||||
"gray33": "#464c54",
|
||||
"gray4": "#d8d9da",
|
||||
"gray5": "#ececec",
|
||||
"gray6": "#f4f5f8",
|
||||
"gray7": "#fbfbfb",
|
||||
"gray70": "#9fa7b3",
|
||||
"gray85": "#c7d0d9",
|
||||
"gray95": "#e9edf2",
|
||||
"gray98": "#f7f8fa",
|
||||
"grayBlue": "#212327",
|
||||
"greenBase": "#299c46",
|
||||
"greenShade": "#23843b",
|
||||
"headingColor": "#d8d9da",
|
||||
"inputBlack": "#09090b",
|
||||
"link": "#d8d9da",
|
||||
"linkDisabled": "#8e8e8e",
|
||||
"linkExternal": "#33b5e5",
|
||||
"linkHover": "#ffffff",
|
||||
"online": "#299c46",
|
||||
"orange": "#eb7b18",
|
||||
"orangeDark": "#ff780a",
|
||||
"pageBg": "#161719",
|
||||
"pageHeaderBorder": "#343436",
|
||||
"purple": "#9933cc",
|
||||
"queryGreen": "#74e680",
|
||||
"queryKeyword": "#66d9ef",
|
||||
"queryOrange": "#eb7b18",
|
||||
"queryPurple": "#fe85fc",
|
||||
"queryRed": "#e02f44",
|
||||
"red": "#d44a3a",
|
||||
"red88": "#e02f44",
|
||||
"redBase": "#e02f44",
|
||||
"redShade": "#c4162a",
|
||||
"text": "#d8d9da",
|
||||
"textEmphasis": "#ececec",
|
||||
"textFaint": "#222426",
|
||||
"textStrong": "#ffffff",
|
||||
"textWeak": "#8e8e8e",
|
||||
"variable": "#32d1df",
|
||||
"warn": "#f79520",
|
||||
"white": "#ffffff",
|
||||
"yellow": "#ecbb13",
|
||||
},
|
||||
"height": Object {
|
||||
"lg": "48px",
|
||||
"md": "32px",
|
||||
"sm": "24px",
|
||||
},
|
||||
"isDark": true,
|
||||
"isLight": false,
|
||||
"name": "Grafana Dark",
|
||||
"panelHeaderHeight": 28,
|
||||
"panelPadding": 8,
|
||||
"shadow": Object {
|
||||
"pageHeader": "inset 0px -4px 14px #1f1f20",
|
||||
},
|
||||
"spacing": Object {
|
||||
"d": "14px",
|
||||
"formButtonHeight": 32,
|
||||
"formFieldsetMargin": "16px",
|
||||
"formInputAffixPaddingHorizontal": "4px",
|
||||
"formInputHeight": "32px",
|
||||
"formInputMargin": "16px",
|
||||
"formInputPaddingHorizontal": "8px",
|
||||
"formLabelMargin": "0 0 4px 0",
|
||||
"formLabelPadding": "0 0 0 2px",
|
||||
"formLegendMargin": "0 0 16px 0",
|
||||
"formMargin": "32px",
|
||||
"formSpacingBase": 8,
|
||||
"formValidationMessagePadding": "4px 8px",
|
||||
"gutter": "30px",
|
||||
"insetSquishMd": "4px 8px",
|
||||
"lg": "24px",
|
||||
"md": "16px",
|
||||
"sm": "8px",
|
||||
"xl": "32px",
|
||||
"xs": "4px",
|
||||
"xxs": "2px",
|
||||
},
|
||||
"type": "dark",
|
||||
"typography": Object {
|
||||
"fontFamily": Object {
|
||||
"monospace": "Menlo, Monaco, Consolas, 'Courier New', monospace",
|
||||
"sansSerif": "'Roboto', 'Helvetica Neue', Arial, sans-serif",
|
||||
},
|
||||
"heading": Object {
|
||||
"h1": "28px",
|
||||
"h2": "24px",
|
||||
"h3": "21px",
|
||||
"h4": "18px",
|
||||
"h5": "16px",
|
||||
"h6": "14px",
|
||||
},
|
||||
"lineHeight": Object {
|
||||
"lg": 1.5,
|
||||
"md": 1.3333333333333333,
|
||||
"sm": 1.1,
|
||||
"xs": 1,
|
||||
},
|
||||
"link": Object {
|
||||
"decoration": "none",
|
||||
"hoverDecoration": "none",
|
||||
},
|
||||
"size": Object {
|
||||
"base": "13px",
|
||||
"lg": "18px",
|
||||
"md": "14px",
|
||||
"root": "14px",
|
||||
"sm": "12px",
|
||||
"xs": "10px",
|
||||
},
|
||||
"weight": Object {
|
||||
"bold": 600,
|
||||
"light": 300,
|
||||
"regular": 400,
|
||||
"semibold": 500,
|
||||
},
|
||||
},
|
||||
"zIndex": Object {
|
||||
"dropdown": "1000",
|
||||
"modal": "1050",
|
||||
"modalBackdrop": "1040",
|
||||
"navbarFixed": "1020",
|
||||
"sidemenu": "1025",
|
||||
"tooltip": "1030",
|
||||
"typeahead": "1060",
|
||||
},
|
||||
}
|
||||
}
|
||||
/>
|
||||
@@ -1,7 +1,6 @@
// Libraries
import React, { PureComponent, createRef } from 'react';
import { css } from 'emotion';
import memoizeOne from 'memoize-one';
import classNames from 'classnames';

// Components
@@ -11,17 +10,16 @@ import { TimePickerPopover } from './TimePickerPopover';
import { ClickOutsideWrapper } from '../ClickOutsideWrapper/ClickOutsideWrapper';

// Utils & Services
import { isDateTime, DateTime } from '@grafana/data';
import { rangeUtil } from '@grafana/data';
import { isDateTime, DateTime, rangeUtil } from '@grafana/data';
import { rawToTimeRange } from './time';
import { stylesFactory } from '../../themes/stylesFactory';
import { withTheme } from '../../themes/ThemeContext';

// Types
import { TimeRange, TimeOption, TimeZone, TIME_FORMAT, SelectableValue, dateMath } from '@grafana/data';
import { GrafanaTheme } from '@grafana/data';
import { TimeRange, TimeOption, TimeZone, TIME_FORMAT, SelectableValue, dateMath, GrafanaTheme } from '@grafana/data';
import { Themeable } from '../../types';

const getStyles = memoizeOne((theme: GrafanaTheme) => {
const getStyles = stylesFactory((theme: GrafanaTheme) => {
  return {
    timePickerSynced: css`
      label: timePickerSynced;
@@ -198,6 +196,7 @@ class UnThemedTimePicker extends PureComponent<Props, State> {
        )}
        <ButtonSelect
          className={classNames('time-picker-button-select', {
            ['explore-active-button-glow']: timeSyncButton && isSynced,
            [`btn--radius-right-0 ${styles.noRightBorderStyle}`]: timeSyncButton,
            [styles.timePickerSynced]: timeSyncButton ? isSynced : null,
          })}
@@ -206,7 +205,7 @@ class UnThemedTimePicker extends PureComponent<Props, State> {
          options={options}
          maxMenuHeight={600}
          onChange={this.onSelectChanged}
          iconClass={'fa fa-clock-o fa-fw'}
          iconClass={classNames('fa fa-clock-o fa-fw', isSynced && timeSyncButton && 'icon-brand-gradient')}
          tooltipContent={<TimePickerTooltipContent timeRange={value} />}
        />
@@ -97,6 +97,7 @@ export { FadeTransition } from './transitions/FadeTransition';
export { SlideOutTransition } from './transitions/SlideOutTransition';
export { Segment, SegmentAsync, SegmentSelect } from './Segment/';
export { default as Chart } from './Chart';
export { Icon } from './Icon/Icon';

// Next-gen forms
export { default as Forms } from './Forms';
@@ -6,16 +6,12 @@ import { Editor as CoreEditor } from 'slate';
import { Plugin as SlatePlugin } from '@grafana/slate-react';

import TOKEN_MARK from './slate-prism/TOKEN_MARK';
import {
  makeFragment,
  TypeaheadOutput,
  CompletionItem,
  TypeaheadInput,
  SuggestionsState,
  CompletionItemGroup,
} from '..';
import { CompletionItemGroup } from '..';
import { Typeahead } from '../components/Typeahead/Typeahead';
export const TYPEAHEAD_DEBOUNCE = 100;
import { CompletionItem, TypeaheadOutput, TypeaheadInput, SuggestionsState } from '../types/completion';
import { makeFragment } from '../utils/slate';

export const TYPEAHEAD_DEBOUNCE = 250;

// Commands added to the editor by this plugin.
interface SuggestionsPluginCommands {
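The hunk above raises `TYPEAHEAD_DEBOUNCE` from 100 ms to 250 ms. As a reminder of what that constant feeds, here is a generic trailing-edge debounce sketch (not the plugin's actual implementation, which debounces suggestion requests inside the Slate editor):

```typescript
// Generic trailing-edge debounce: of all calls made within `wait` ms of
// each other, only the last one actually invokes `fn`.
function debounce<A extends unknown[]>(fn: (...args: A) => void, wait: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) {
      clearTimeout(timer); // a newer call supersedes the pending one
    }
    timer = setTimeout(() => fn(...args), wait);
  };
}

// With a 250 ms window, three quick keystrokes produce a single
// suggestion lookup, fired ~250 ms after the last keystroke.
const TYPEAHEAD_DEBOUNCE = 250;
const lookups: string[] = [];
const requestSuggestions = debounce((q: string) => lookups.push(q), TYPEAHEAD_DEBOUNCE);

requestSuggestions('p');
requestSuggestions('pr');
requestSuggestions('pro');
// Synchronously, nothing has fired yet; 'pro' fires after the window elapses.
```

Raising the window trades a slightly laggier dropdown for fewer in-flight suggestion computations while the user is still typing.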
@@ -3,19 +3,29 @@ import hoistNonReactStatics from 'hoist-non-react-statics';

import { getTheme } from './getTheme';
import { Themeable } from '../types/theme';
import { GrafanaThemeType } from '@grafana/data';
import { GrafanaTheme, GrafanaThemeType } from '@grafana/data';

type Omit<T, K> = Pick<T, Exclude<keyof T, K>>;
type Subtract<T, K> = Omit<T, keyof K>;

/**
 * Mock used in tests
 */
let ThemeContextMock: React.Context<GrafanaTheme> | null = null;

// Use Grafana Dark theme by default
export const ThemeContext = React.createContext(getTheme(GrafanaThemeType.Dark));
ThemeContext.displayName = 'ThemeContext';

export const withTheme = <P extends Themeable, S extends {} = {}>(Component: React.ComponentType<P>) => {
  const WithTheme: React.FunctionComponent<Subtract<P, Themeable>> = props => {
    /**
     * If the theme context is mocked, use it instead of the original context.
     * This happens in tests that mock the theme via the mockThemeContext function defined below.
     */
    const ContextComponent = ThemeContextMock || ThemeContext;
    // @ts-ignore
    return <ThemeContext.Consumer>{theme => <Component {...props} theme={theme} />}</ThemeContext.Consumer>;
    return <ContextComponent.Consumer>{theme => <Component {...props} theme={theme} />}</ContextComponent.Consumer>;
  };

  WithTheme.displayName = `WithTheme(${Component.displayName})`;
@@ -25,5 +35,15 @@ export const withTheme = <P extends Themeable, S extends {} = {}>(Component: Rea
};

export function useTheme() {
  return useContext(ThemeContext);
  return useContext(ThemeContextMock || ThemeContext);
}

/**
 * Enables theme context mocking
 */
export const mockThemeContext = (theme: Partial<GrafanaTheme>) => {
  ThemeContextMock = React.createContext(theme as GrafanaTheme);
  return () => {
    ThemeContextMock = null;
  };
};
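The mocking approach in the hunk above is a general pattern: a module-level override that `withTheme`/`useTheme` consult before the real context, plus a restore closure so tests can undo the mock in `afterAll()`. A framework-free sketch of the same idea, using a plain value in place of a React context:

```typescript
// A theme source that tests can swap out and restore, mirroring
// ThemeContextMock in the diff above.
interface Theme {
  type: string;
}

const defaultTheme: Theme = { type: 'dark' };
let themeMock: Theme | null = null;

function getCurrentTheme(): Theme {
  // Prefer the installed mock, fall back to the real default.
  return themeMock ?? defaultTheme;
}

function mockTheme(theme: Theme): () => void {
  themeMock = theme;
  // The returned closure removes the mock, so a forgotten restore is
  // the only way the mock can leak between tests.
  return () => {
    themeMock = null;
  };
}
```

Returning the restore function from the mock installer (rather than exposing a separate `unmockTheme`) ties setup and teardown together, which is why the ThresholdsEditor test above can simply call `restoreThemeContext()` in `afterAll`.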
@@ -187,6 +187,8 @@ $btn-drag-image: '../img/grab_dark.svg';

$navbar-btn-gicon-brightness: brightness(0.5);

$btn-active-box-shadow: 0px 0px 4px rgba(255,120,10,0.5);

// Forms
// -------------------------
$input-bg: $input-black;

@@ -180,6 +180,8 @@ $btn-drag-image: '../img/grab_light.svg';

$navbar-btn-gicon-brightness: brightness(1.5);

$btn-active-box-shadow: 0px 0px 4px rgba(234, 161, 51, 0.6);

// Forms
// -------------------------
$input-bg: $white;
@@ -1,5 +1,5 @@
-import { ThemeContext, withTheme, useTheme } from './ThemeContext';
+import { ThemeContext, withTheme, useTheme, mockThemeContext } from './ThemeContext';
import { getTheme, mockTheme } from './getTheme';
import { selectThemeVariant } from './selectThemeVariant';
export { stylesFactory } from './stylesFactory';
-export { ThemeContext, withTheme, mockTheme, getTheme, selectThemeVariant, useTheme };
+export { ThemeContext, withTheme, mockTheme, getTheme, selectThemeVariant, useTheme, mockThemeContext };
@@ -31,7 +31,7 @@ if [ ${UBUNTU_BASE} = "0" ]; then
  DOCKERFILE="Dockerfile"
else
  TAG_SUFFIX="-ubuntu"
-  DOCKERFILE="Dockerfile.ubuntu"
+  DOCKERFILE="ubuntu.Dockerfile"
fi

echo "Building and deploying ${_docker_repo}:${_grafana_tag}${TAG_SUFFIX}"
@@ -3,6 +3,7 @@ set -e

BUILD_FAST=0
UBUNTU_BASE=0
+TAG_SUFFIX=""

while [ "$1" != "" ]; do
  case "$1" in
@@ -13,6 +14,7 @@ while [ "$1" != "" ]; do
      ;;
    "--ubuntu")
      UBUNTU_BASE=1
+      TAG_SUFFIX="-ubuntu"
      echo "Ubuntu base image enabled"
      shift
      ;;
@@ -33,20 +35,40 @@ else
  _grafana_version=$_grafana_tag
fi

-if [ $UBUNTU_BASE = "0" ]; then
-  echo "Building ${_docker_repo}:${_grafana_version}"
-else
-  echo "Building ${_docker_repo}:${_grafana_version}-ubuntu"
-fi
+echo "Building ${_docker_repo}:${_grafana_version}${TAG_SUFFIX}"

export DOCKER_CLI_EXPERIMENTAL=enabled

# Build grafana image for a specific arch
docker_build () {
-  base_image=$1
-  grafana_tgz=$2
-  tag=$3
-  dockerfile=${4:-Dockerfile}
+  arch=$1
+
+  case "$arch" in
+    "x64")
+      base_arch=""
+      repo_arch=""
+      ;;
+    "armv7")
+      base_arch="arm32v7/"
+      repo_arch="-arm32v7-linux"
+      ;;
+    "arm64")
+      base_arch="arm64v8/"
+      repo_arch="-arm64v8-linux"
+      ;;
+  esac
+  if [ $UBUNTU_BASE = "0" ]; then
+    libc="-musl"
+    dockerfile="Dockerfile"
+    base_image="${base_arch}alpine:3.10"
+  else
+    libc=""
+    dockerfile="ubuntu.Dockerfile"
+    base_image="${base_arch}ubuntu:18.10"
+  fi
+
+  grafana_tgz="grafana-latest.linux-${arch}${libc}.tar.gz"
+  tag="${_docker_repo}${repo_arch}:${_grafana_version}${TAG_SUFFIX}"

  docker build \
    --build-arg BASE_IMAGE=${base_image} \
@@ -58,48 +80,32 @@ docker_build () {
}

docker_tag_linux_amd64 () {
-  repo=$1
-  tag=$2
-  docker tag "${_docker_repo}:${_grafana_version}" "${repo}:${tag}"
+  tag=$1
+  docker tag "${_docker_repo}:${_grafana_version}${TAG_SUFFIX}" "${_docker_repo}:${tag}${TAG_SUFFIX}"
}

# Tag docker images of all architectures
docker_tag_all () {
-  repo=$1
-  tag=$2
-  docker_tag_linux_amd64 $1 $2
+  tag=$1
+  docker_tag_linux_amd64 $1
  if [ $BUILD_FAST = "0" ]; then
-    docker tag "${_docker_repo}-arm32v7-linux:${_grafana_version}" "${repo}-arm32v7-linux:${tag}"
-    docker tag "${_docker_repo}-arm64v8-linux:${_grafana_version}" "${repo}-arm64v8-linux:${tag}"
+    docker tag "${_docker_repo}-arm32v7-linux:${_grafana_version}${TAG_SUFFIX}" "${_docker_repo}-arm32v7-linux:${tag}${TAG_SUFFIX}"
+    docker tag "${_docker_repo}-arm64v8-linux:${_grafana_version}${TAG_SUFFIX}" "${_docker_repo}-arm64v8-linux:${tag}${TAG_SUFFIX}"
  fi
}

-if [ $UBUNTU_BASE = "0" ]; then
-  docker_build "alpine:3.10" "grafana-latest.linux-x64-musl.tar.gz" "${_docker_repo}:${_grafana_version}"
-  if [ $BUILD_FAST = "0" ]; then
-    docker_build "arm32v7/alpine:3.10" "grafana-latest.linux-armv7-musl.tar.gz" "${_docker_repo}-arm32v7-linux:${_grafana_version}"
-    docker_build "arm64v8/alpine:3.10" "grafana-latest.linux-arm64-musl.tar.gz" "${_docker_repo}-arm64v8-linux:${_grafana_version}"
-  fi
-
-  # Tag as 'latest' for official release; otherwise tag as grafana/grafana:master
-  if echo "$_grafana_tag" | grep -q "^v"; then
-    docker_tag_all "${_docker_repo}" "latest"
-    # Create the expected tag for running the end to end tests successfully
-    docker tag "${_docker_repo}:${_grafana_version}" "grafana/grafana-dev:${_grafana_tag}"
-  else
-    docker_tag_all "${_docker_repo}" "master"
-    docker tag "${_docker_repo}:${_grafana_version}" "grafana/grafana-dev:${_grafana_version}"
-  fi
-else
-  docker_build "ubuntu:18.10" "grafana-latest.linux-x64.tar.gz" "${_docker_repo}:${_grafana_version}-ubuntu" Dockerfile.ubuntu
-
-  # Tag as 'latest-ubuntu' for official release; otherwise tag as grafana/grafana:master-ubuntu
-  if echo "$_grafana_tag" | grep -q "^v"; then
-    docker tag "${_docker_repo}:${_grafana_version}-ubuntu" "${_docker_repo}:latest-ubuntu"
-    # Create the expected tag for running the end to end tests successfully
-    docker tag "${_docker_repo}:${_grafana_version}-ubuntu" "grafana/grafana-dev:${_grafana_tag}-ubuntu"
-  else
-    docker tag "${_docker_repo}:${_grafana_version}-ubuntu" "${_docker_repo}:master-ubuntu"
-    docker tag "${_docker_repo}:${_grafana_version}-ubuntu" "grafana/grafana-dev:${_grafana_version}-ubuntu"
-  fi
-fi
+docker_build "x64"
+if [ $BUILD_FAST = "0" ]; then
+  docker_build "armv7"
+  docker_build "arm64"
+fi
+
+# Tag as 'latest' for official release; otherwise tag as grafana/grafana:master
+if echo "$_grafana_tag" | grep -q "^v"; then
+  docker_tag_all "latest"
+  # Create the expected tag for running the end to end tests successfully
+  docker tag "${_docker_repo}:${_grafana_version}${TAG_SUFFIX}" "grafana/grafana-dev:${_grafana_tag}${TAG_SUFFIX}"
+else
+  docker_tag_all "master"
+  docker tag "${_docker_repo}:${_grafana_version}${TAG_SUFFIX}" "grafana/grafana-dev:${_grafana_version}${TAG_SUFFIX}"
+fi
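The refactored `docker_build` above derives the tarball name, base image, and image tag from a single arch argument instead of taking them as positional parameters. For illustration only, the same mapping can be sketched in Go (the `buildParams` function and its parameter names are invented for this sketch and are not part of the repo):

```go
package main

import "fmt"

// buildParams mirrors the case/if logic of the refactored docker_build:
// the arch argument alone determines the base-image prefix and repo suffix,
// and the Ubuntu flag selects the libc suffix and base image.
func buildParams(arch string, ubuntuBase bool, repo, version, tagSuffix string) (grafanaTgz, tag, baseImage string) {
	var baseArch, repoArch string
	switch arch {
	case "x64":
		// amd64 images carry no prefix or suffix
	case "armv7":
		baseArch, repoArch = "arm32v7/", "-arm32v7-linux"
	case "arm64":
		baseArch, repoArch = "arm64v8/", "-arm64v8-linux"
	}
	libc := "-musl"
	baseImage = baseArch + "alpine:3.10"
	if ubuntuBase {
		libc = ""
		baseImage = baseArch + "ubuntu:18.10"
	}
	grafanaTgz = fmt.Sprintf("grafana-latest.linux-%s%s.tar.gz", arch, libc)
	tag = fmt.Sprintf("%s%s:%s%s", repo, repoArch, version, tagSuffix)
	return
}

func main() {
	tgz, tag, base := buildParams("armv7", true, "grafana/grafana", "6.5.0", "-ubuntu")
	fmt.Println(tgz)  // grafana-latest.linux-armv7.tar.gz
	fmt.Println(tag)  // grafana/grafana-arm32v7-linux:6.5.0-ubuntu
	fmt.Println(base) // arm32v7/ubuntu:18.10
}
```

Centralizing this mapping is what lets the call sites shrink to `docker_build "x64"`, `docker_build "armv7"`, `docker_build "arm64"`.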
@@ -2,8 +2,31 @@ ARG GRAFANA_VERSION="latest"

FROM grafana/grafana:${GRAFANA_VERSION}

+USER root
+
+ARG GF_INSTALL_IMAGE_RENDERER_PLUGIN="false"
+
+RUN if [ $GF_INSTALL_IMAGE_RENDERER_PLUGIN = "true" ]; then \
+    echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
+    echo "http://dl-cdn.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories && \
+    echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories && \
+    apk --no-cache upgrade && \
+    apk add --no-cache udev ttf-opensans chromium && \
+    rm -rf /tmp/* && \
+    rm -rf /usr/share/grafana/tools/phantomjs; \
+fi
+
+USER grafana
+
+ENV GF_RENDERER_PLUGIN_CHROME_BIN="/usr/bin/chromium-browser"
+
+RUN if [ $GF_INSTALL_IMAGE_RENDERER_PLUGIN = "true" ]; then \
+    grafana-cli \
+        --pluginsDir "$GF_PATHS_PLUGINS" \
+        --pluginUrl https://github.com/grafana/grafana-image-renderer/releases/latest/download/plugin-linux-x64-glibc-no-chromium.zip \
+        plugins install grafana-image-renderer; \
+fi
+
ARG GF_INSTALL_PLUGINS=""

RUN if [ ! -z "${GF_INSTALL_PLUGINS}" ]; then \
41
packaging/docker/custom/ubuntu.Dockerfile
Normal file
@@ -0,0 +1,41 @@
ARG GRAFANA_VERSION="latest-ubuntu"

FROM grafana/grafana:${GRAFANA_VERSION}-ubuntu

USER root

# Set DEBIAN_FRONTEND=noninteractive in environment at build-time
ARG DEBIAN_FRONTEND=noninteractive

ARG GF_INSTALL_IMAGE_RENDERER_PLUGIN="false"

RUN if [ $GF_INSTALL_IMAGE_RENDERER_PLUGIN = "true" ]; then \
    apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y chromium-browser && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /usr/share/grafana/tools/phantomjs; \
fi

USER grafana

ENV GF_RENDERER_PLUGIN_CHROME_BIN="/usr/bin/chromium-browser"

RUN if [ $GF_INSTALL_IMAGE_RENDERER_PLUGIN = "true" ]; then \
    grafana-cli \
        --pluginsDir "$GF_PATHS_PLUGINS" \
        --pluginUrl https://github.com/grafana/grafana-image-renderer/releases/latest/download/plugin-linux-x64-glibc-no-chromium.zip \
        plugins install grafana-image-renderer; \
fi

ARG GF_INSTALL_PLUGINS=""

RUN if [ ! -z "${GF_INSTALL_PLUGINS}" ]; then \
    OLDIFS=$IFS; \
    IFS=','; \
    for plugin in ${GF_INSTALL_PLUGINS}; do \
        IFS=$OLDIFS; \
        grafana-cli --pluginsDir "$GF_PATHS_PLUGINS" plugins install ${plugin}; \
    done; \
fi
@@ -2,11 +2,13 @@
set -e

UBUNTU_BASE=0
+TAG_SUFFIX=""

while [ "$1" != "" ]; do
  case "$1" in
    "--ubuntu")
      UBUNTU_BASE=1
+      TAG_SUFFIX="-ubuntu"
      echo "Ubuntu base image enabled"
      shift
      ;;
@@ -29,60 +31,39 @@ fi

export DOCKER_CLI_EXPERIMENTAL=enabled

-if [ $UBUNTU_BASE = "0" ]; then
-  echo "pushing ${_docker_repo}:${_grafana_version}"
-else
-  echo "pushing ${_docker_repo}:${_grafana_version}-ubuntu"
-fi
+echo "pushing ${_docker_repo}:${_grafana_version}${TAG_SUFFIX}"

docker_push_all () {
  repo=$1
  tag=$2

-  if [ $UBUNTU_BASE = "0" ]; then
-    # Push each image individually
-    docker push "${repo}:${tag}"
-    docker push "${repo}-arm32v7-linux:${tag}"
-    docker push "${repo}-arm64v8-linux:${tag}"
+  # Push each image individually
+  docker push "${repo}:${tag}${TAG_SUFFIX}"
+  docker push "${repo}-arm32v7-linux:${tag}${TAG_SUFFIX}"
+  docker push "${repo}-arm64v8-linux:${tag}${TAG_SUFFIX}"

-    # Create and push a multi-arch manifest
-    docker manifest create "${repo}:${tag}" \
-      "${repo}:${tag}" \
-      "${repo}-arm32v7-linux:${tag}" \
-      "${repo}-arm64v8-linux:${tag}"
+  # Create and push a multi-arch manifest
+  docker manifest create "${repo}:${tag}${TAG_SUFFIX}" \
+    "${repo}:${tag}${TAG_SUFFIX}" \
+    "${repo}-arm32v7-linux:${tag}${TAG_SUFFIX}" \
+    "${repo}-arm64v8-linux:${tag}${TAG_SUFFIX}"

-    docker manifest push "${repo}:${tag}"
-  else
-    docker push "${repo}:${tag}-ubuntu"
-  fi
+  docker manifest push "${repo}:${tag}${TAG_SUFFIX}"
}

if echo "$_grafana_tag" | grep -q "^v" && echo "$_grafana_tag" | grep -vq "beta"; then
-  echo "pushing ${_docker_repo}:latest"
+  echo "pushing ${_docker_repo}:latest${TAG_SUFFIX}"
  docker_push_all "${_docker_repo}" "latest"
  docker_push_all "${_docker_repo}" "${_grafana_version}"
  # Push to the grafana-dev repository with the expected tag
  # for running the end to end tests successfully
-  if [ ${UBUNTU_BASE} = "0" ]; then
-    docker push "grafana/grafana-dev:${_grafana_tag}"
-  else
-    docker push "grafana/grafana-dev:${_grafana_tag}-ubuntu"
-  fi
+  docker push "grafana/grafana-dev:${_grafana_tag}${TAG_SUFFIX}"
elif echo "$_grafana_tag" | grep -q "^v" && echo "$_grafana_tag" | grep -q "beta"; then
  docker_push_all "${_docker_repo}" "${_grafana_version}"
  # Push to the grafana-dev repository with the expected tag
  # for running the end to end tests successfully
-  if [ ${UBUNTU_BASE} = "0" ]; then
-    docker push "grafana/grafana-dev:${_grafana_tag}"
-  else
-    docker push "grafana/grafana-dev:${_grafana_tag}-ubuntu"
-  fi
+  docker push "grafana/grafana-dev:${_grafana_tag}${TAG_SUFFIX}"
elif echo "$_grafana_tag" | grep -q "master"; then
  docker_push_all "${_docker_repo}" "master"
-  if [ ${UBUNTU_BASE} = "0" ]; then
-    docker push "grafana/grafana-dev:${_grafana_version}"
-  else
-    docker push "grafana/grafana-dev:${_grafana_version}-ubuntu"
-  fi
+  docker push "grafana/grafana-dev:${_grafana_version}${TAG_SUFFIX}"
fi
@@ -88,14 +88,15 @@ func (this *CacheServer) Handler(ctx *macaron.Context) {
	hash := urlPath[strings.LastIndex(urlPath, "/")+1:]

	var avatar *Avatar

-	if obj, exist := this.cache.Get(hash); exist {
+	obj, exists := this.cache.Get(hash)
+	if exists {
		avatar = obj.(*Avatar)
	} else {
		avatar = New(hash)
	}

	if avatar.Expired() {
		// The cache item is either expired or newly created, update it from the server
		if err := avatar.Update(); err != nil {
			log.Trace("avatar update error: %v", err)
			avatar = this.notFound
@@ -104,9 +105,9 @@ func (this *CacheServer) Handler(ctx *macaron.Context) {

	if avatar.notFound {
		avatar = this.notFound
-	} else {
+	} else if !exists {
		if err := this.cache.Add(hash, avatar, gocache.DefaultExpiration); err != nil {
-			log.Warn("Error adding avatar to cache: %s", err)
+			log.Trace("Error adding avatar to cache: %s", err)
		}
	}

@@ -221,7 +222,6 @@ func (this *thunderTask) fetch() error {
	req.Header.Set("Cache-Control", "no-cache")
	req.Header.Set("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.154 Safari/537.36")
-	resp, err := client.Do(req)

	if err != nil {
		this.Avatar.notFound = true
		return fmt.Errorf("gravatar unreachable, %v", err)
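The avatar-cache change hoists the `cache.Get` call out of the `if` so the `exists` flag can later guard `cache.Add`, avoiding a redundant re-insert of entries that were already cached. A minimal standalone sketch of that get-or-create pattern (using a plain map in place of the real gocache instance; `getOrCreate` and the trimmed `Avatar` type are invented for this sketch):

```go
package main

import "fmt"

// Avatar is a pared-down stand-in for the handler's avatar type.
type Avatar struct{ hash string }

// cache stands in for the gocache instance used by the real handler.
var cache = map[string]*Avatar{}

// getOrCreate mirrors the refactored handler: look up once, remember
// whether the entry existed, and construct a new value only on a miss.
func getOrCreate(hash string) (*Avatar, bool) {
	a, exists := cache[hash]
	if !exists {
		a = &Avatar{hash: hash}
	}
	return a, exists
}

func main() {
	a, existed := getOrCreate("abc")
	if !existed {
		cache["abc"] = a // only newly created entries are added, as in the patch
	}
	b, existedSecond := getOrCreate("abc")
	fmt.Println(existed, existedSecond, a == b) // false true true
}
```

Keeping `exists` around is what makes the later `else if !exists` branch possible: a hit never triggers another `Add`.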
@@ -22,6 +22,23 @@ type PluginCss struct {
	Dark string `json:"dark"`
}

+const (
+	// These weights may be used by an extension to reliably place
+	// itself in relation to a particular item in the menu. The weights
+	// are negative to ensure that the default items are placed above
+	// any items with default weight.
+
+	WeightCreate = (iota - 20) * 100
+	WeightDashboard
+	WeightExplore
+	WeightProfile
+	WeightAlerting
+	WeightPlugin
+	WeightConfig
+	WeightAdmin
+	WeightHelp
+)
+
type NavLink struct {
	Id   string `json:"id,omitempty"`
	Text string `json:"text,omitempty"`
@@ -31,6 +48,7 @@ type NavLink struct {
	Img        string `json:"img,omitempty"`
	Url        string `json:"url,omitempty"`
	Target     string `json:"target,omitempty"`
+	SortWeight int64  `json:"sortWeight,omitempty"`
	Divider    bool   `json:"divider,omitempty"`
	HideFromMenu bool `json:"hideFromMenu,omitempty"`
	HideFromTabs bool `json:"hideFromTabs,omitempty"`
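The weight constants rely on Go's implicit repetition of the last expression in a `const` block: `(iota - 20) * 100` is re-evaluated with an incremented `iota` for each subsequent identifier, yielding -2000, -1900, ... -1200. A small self-contained check of those values:

```go
package main

import "fmt"

// Same iota pattern as the dtos weight constants: the expression
// (iota - 20) * 100 repeats implicitly for every following constant.
const (
	WeightCreate = (iota - 20) * 100 // iota = 0 → -2000
	WeightDashboard                  // iota = 1 → -1900
	WeightExplore                    // iota = 2 → -1800
	WeightProfile                    // iota = 3 → -1700
	WeightAlerting                   // iota = 4 → -1600
	WeightPlugin                     // iota = 5 → -1500
	WeightConfig                     // iota = 6 → -1400
	WeightAdmin                      // iota = 7 → -1300
	WeightHelp                       // iota = 8 → -1200
)

func main() {
	fmt.Println(WeightCreate, WeightHelp) // -2000 -1200
}
```

The gaps of 100 between defaults leave room for extensions to slot themselves between any two built-in items, and all defaults stay below the zero `SortWeight` of unweighted links.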
@@ -3,13 +3,12 @@ package api

import (
	"context"
	"crypto/tls"
-	"errors"
	"fmt"
	"net"
	"net/http"
	"os"
	"path"
-	"time"
+	"sync"

	"github.com/grafana/grafana/pkg/api/live"
	"github.com/grafana/grafana/pkg/api/routing"
@@ -29,6 +28,7 @@ import (
	"github.com/grafana/grafana/pkg/services/quota"
	"github.com/grafana/grafana/pkg/services/rendering"
	"github.com/grafana/grafana/pkg/setting"
+	"github.com/grafana/grafana/pkg/util/errutil"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	macaron "gopkg.in/macaron.v1"
@@ -83,85 +83,98 @@ func (hs *HTTPServer) Init() error {
}

func (hs *HTTPServer) Run(ctx context.Context) error {
-	var err error
-
	hs.context = ctx

	hs.applyRoutes()
	hs.streamManager.Run(ctx)

-	listenAddr := fmt.Sprintf("%s:%s", setting.HttpAddr, setting.HttpPort)
-	listener, err := net.Listen("tcp", listenAddr)
-	if err != nil {
-		hs.log.Debug("server was shutdown gracefully")
-		return nil
+	hs.httpSrv = &http.Server{
+		Addr:    fmt.Sprintf("%s:%s", setting.HttpAddr, setting.HttpPort),
+		Handler: hs.macaron,
	}
+	switch setting.Protocol {
+	case setting.HTTP2:
+		if err := hs.configureHttp2(); err != nil {
+			return err
+		}
+	case setting.HTTPS:
+		if err := hs.configureHttps(); err != nil {
+			return err
+		}
+	}

-	hs.log.Info("HTTP Server Listen", "address", listener.Addr().String(), "protocol", setting.Protocol, "subUrl", setting.AppSubUrl, "socket", setting.SocketPath)
+	var listener net.Listener
+	switch setting.Protocol {
+	case setting.HTTP, setting.HTTPS, setting.HTTP2:
+		var err error
+		listener, err = net.Listen("tcp", hs.httpSrv.Addr)
+		if err != nil {
+			return errutil.Wrapf(err, "failed to open listener on address %s", hs.httpSrv.Addr)
+		}
+	case setting.SOCKET:
+		var err error
+		listener, err = net.ListenUnix("unix", &net.UnixAddr{Name: setting.SocketPath, Net: "unix"})
+		if err != nil {
+			return errutil.Wrapf(err, "failed to open listener for socket %s", setting.SocketPath)
+		}

-	hs.httpSrv = &http.Server{Addr: listenAddr, Handler: hs.macaron}
+		// Make socket writable by group
+		if err := os.Chmod(setting.SocketPath, 0660); err != nil {
+			return errutil.Wrapf(err, "failed to change socket permissions")
+		}
+	default:
+		hs.log.Error("Invalid protocol", "protocol", setting.Protocol)
+		return fmt.Errorf("invalid protocol %q", setting.Protocol)
+	}

+	hs.log.Info("HTTP Server Listen", "address", listener.Addr().String(), "protocol",
+		setting.Protocol, "subUrl", setting.AppSubUrl, "socket", setting.SocketPath)

+	var wg sync.WaitGroup
+	wg.Add(1)

	// handle http shutdown on server context done
	go func() {
+		defer wg.Done()
+
		<-ctx.Done()
-		// Hacky fix for race condition between ListenAndServe and Shutdown
-		time.Sleep(time.Millisecond * 100)
		if err := hs.httpSrv.Shutdown(context.Background()); err != nil {
			hs.log.Error("Failed to shutdown server", "error", err)
		}
	}()

	switch setting.Protocol {
-	case setting.HTTP:
-		err = hs.httpSrv.Serve(listener)
-		if err == http.ErrServerClosed {
-			hs.log.Debug("server was shutdown gracefully")
-			return nil
+	case setting.HTTP, setting.SOCKET:
+		if err := hs.httpSrv.Serve(listener); err != nil {
+			if err == http.ErrServerClosed {
+				hs.log.Debug("server was shutdown gracefully")
+				return nil
+			}
+			return err
		}
-	case setting.HTTP2:
-		err = hs.listenAndServeH2TLS(listener, setting.CertFile, setting.KeyFile)
-		if err == http.ErrServerClosed {
-			hs.log.Debug("server was shutdown gracefully")
-			return nil
-		}
-	case setting.HTTPS:
-		err = hs.listenAndServeTLS(listener, setting.CertFile, setting.KeyFile)
-		if err == http.ErrServerClosed {
-			hs.log.Debug("server was shutdown gracefully")
-			return nil
-		}
-	case setting.SOCKET:
-		ln, err := net.ListenUnix("unix", &net.UnixAddr{Name: setting.SocketPath, Net: "unix"})
-		if err != nil {
-			hs.log.Debug("server was shutdown gracefully", "err", err)
-			return nil
-		}
-
-		// Make socket writable by group
-		if err := os.Chmod(setting.SocketPath, 0660); err != nil {
-			hs.log.Debug("server was shutdown gracefully", "err", err)
-			return nil
-		}
-
-		err = hs.httpSrv.Serve(ln)
-		if err != nil {
-			hs.log.Debug("server was shutdown gracefully", "err", err)
-			return nil
+	case setting.HTTP2, setting.HTTPS:
+		if err := hs.httpSrv.ServeTLS(listener, setting.CertFile, setting.KeyFile); err != nil {
+			if err == http.ErrServerClosed {
+				hs.log.Debug("server was shutdown gracefully")
+				return nil
+			}
+			return err
		}
	default:
		hs.log.Error("Invalid protocol", "protocol", setting.Protocol)
-		err = errors.New("Invalid Protocol")
+		panic(fmt.Sprintf("Unhandled protocol %q", setting.Protocol))
	}

-	return err
+	wg.Wait()
+
+	return nil
}

-func (hs *HTTPServer) listenAndServeTLS(listener net.Listener, certfile, keyfile string) error {
-	if certfile == "" {
+func (hs *HTTPServer) configureHttps() error {
+	if setting.CertFile == "" {
		return fmt.Errorf("cert_file cannot be empty when using HTTPS")
	}

-	if keyfile == "" {
+	if setting.KeyFile == "" {
		return fmt.Errorf("cert_key cannot be empty when using HTTPS")
	}

@@ -195,15 +208,15 @@ func (hs *HTTPServer) listenAndServeTLS(listener net.Listener, certfile, keyfile
	hs.httpSrv.TLSConfig = tlsCfg
	hs.httpSrv.TLSNextProto = make(map[string]func(*http.Server, *tls.Conn, http.Handler))

-	return hs.httpSrv.ServeTLS(listener, setting.CertFile, setting.KeyFile)
+	return nil
}

-func (hs *HTTPServer) listenAndServeH2TLS(listener net.Listener, certfile, keyfile string) error {
-	if certfile == "" {
+func (hs *HTTPServer) configureHttp2() error {
+	if setting.CertFile == "" {
		return fmt.Errorf("cert_file cannot be empty when using HTTP2")
	}

-	if keyfile == "" {
+	if setting.KeyFile == "" {
		return fmt.Errorf("cert_key cannot be empty when using HTTP2")
	}

@@ -234,7 +247,7 @@ func (hs *HTTPServer) listenAndServeH2TLS(listener net.Listener, certfile, keyfi
	hs.httpSrv.TLSConfig = tlsCfg

-	return hs.httpSrv.ServeTLS(listener, setting.CertFile, setting.KeyFile)
+	return nil
}

func (hs *HTTPServer) newMacaron() *macaron.Macaron {
@@ -2,6 +2,7 @@ package api

import (
	"fmt"
+	"sort"
	"strings"

	"github.com/grafana/grafana/pkg/api/dtos"
@@ -42,7 +43,7 @@ func (hs *HTTPServer) setIndexViewData(c *m.ReqContext) (*dtos.IndexViewData, er
	appSubURL := setting.AppSubUrl

	// special case when doing localhost call from phantomjs
-	if c.IsRenderCall {
+	if c.IsRenderCall && !hs.Cfg.ServeFromSubPath {
		appURL = fmt.Sprintf("%s://localhost:%s", setting.Protocol, setting.HttpPort)
		appSubURL = ""
		settings["appSubUrl"] = ""
@@ -115,11 +116,12 @@ func (hs *HTTPServer) setIndexViewData(c *m.ReqContext) (*dtos.IndexViewData, er
		children = append(children, &dtos.NavLink{Text: "Import", SubTitle: "Import dashboard from file or Grafana.com", Id: "import", Icon: "gicon gicon-dashboard-import", Url: setting.AppSubUrl + "/dashboard/import"})

		data.NavTree = append(data.NavTree, &dtos.NavLink{
-			Text:     "Create",
-			Id:       "create",
-			Icon:     "fa fa-fw fa-plus",
-			Url:      setting.AppSubUrl + "/dashboard/new",
-			Children: children,
+			Text:       "Create",
+			Id:         "create",
+			Icon:       "fa fa-fw fa-plus",
+			Url:        setting.AppSubUrl + "/dashboard/new",
+			Children:   children,
+			SortWeight: dtos.WeightCreate,
		})
	}

@@ -132,21 +134,23 @@ func (hs *HTTPServer) setIndexViewData(c *m.ReqContext) (*dtos.IndexViewData, er
	}

	data.NavTree = append(data.NavTree, &dtos.NavLink{
-		Text:     "Dashboards",
-		Id:       "dashboards",
-		SubTitle: "Manage dashboards & folders",
-		Icon:     "gicon gicon-dashboard",
-		Url:      setting.AppSubUrl + "/",
-		Children: dashboardChildNavs,
+		Text:       "Dashboards",
+		Id:         "dashboards",
+		SubTitle:   "Manage dashboards & folders",
+		Icon:       "gicon gicon-dashboard",
+		Url:        setting.AppSubUrl + "/",
+		SortWeight: dtos.WeightDashboard,
+		Children:   dashboardChildNavs,
	})

	if setting.ExploreEnabled && (c.OrgRole == m.ROLE_ADMIN || c.OrgRole == m.ROLE_EDITOR || setting.ViewersCanEdit) {
		data.NavTree = append(data.NavTree, &dtos.NavLink{
-			Text:     "Explore",
-			Id:       "explore",
-			SubTitle: "Explore your data",
-			Icon:     "gicon gicon-explore",
-			Url:      setting.AppSubUrl + "/explore",
+			Text:       "Explore",
+			Id:         "explore",
+			SubTitle:   "Explore your data",
+			Icon:       "gicon gicon-explore",
+			SortWeight: dtos.WeightExplore,
+			Url:        setting.AppSubUrl + "/explore",
		})
	}

@@ -163,6 +167,7 @@ func (hs *HTTPServer) setIndexViewData(c *m.ReqContext) (*dtos.IndexViewData, er
		Img:          data.User.GravatarUrl,
		Url:          setting.AppSubUrl + "/profile",
		HideFromMenu: true,
+		SortWeight:   dtos.WeightProfile,
		Children: []*dtos.NavLink{
			{Text: "Preferences", Id: "profile-settings", Url: setting.AppSubUrl + "/profile", Icon: "gicon gicon-preferences"},
			{Text: "Change Password", Id: "change-password", Url: setting.AppSubUrl + "/profile/password", Icon: "fa fa-fw fa-lock", HideFromMenu: true},
@@ -186,12 +191,13 @@ func (hs *HTTPServer) setIndexViewData(c *m.ReqContext) (*dtos.IndexViewData, er
	}

	data.NavTree = append(data.NavTree, &dtos.NavLink{
-		Text:     "Alerting",
-		SubTitle: "Alert rules & notifications",
-		Id:       "alerting",
-		Icon:     "gicon gicon-alert",
-		Url:      setting.AppSubUrl + "/alerting/list",
-		Children: alertChildNavs,
+		Text:       "Alerting",
+		SubTitle:   "Alert rules & notifications",
+		Id:         "alerting",
+		Icon:       "gicon gicon-alert",
+		Url:        setting.AppSubUrl + "/alerting/list",
+		Children:   alertChildNavs,
+		SortWeight: dtos.WeightAlerting,
	})
	}

@@ -203,10 +209,11 @@ func (hs *HTTPServer) setIndexViewData(c *m.ReqContext) (*dtos.IndexViewData, er
	for _, plugin := range enabledPlugins.Apps {
		if plugin.Pinned {
			appLink := &dtos.NavLink{
-				Text: plugin.Name,
-				Id:   "plugin-page-" + plugin.Id,
-				Url:  plugin.DefaultNavUrl,
-				Img:  plugin.Info.Logos.Small,
+				Text:       plugin.Name,
+				Id:         "plugin-page-" + plugin.Id,
+				Url:        plugin.DefaultNavUrl,
+				Img:        plugin.Info.Logos.Small,
+				SortWeight: dtos.WeightPlugin,
			}

			for _, include := range plugin.Includes {
@@ -297,12 +304,13 @@ func (hs *HTTPServer) setIndexViewData(c *m.ReqContext) (*dtos.IndexViewData, er
	}

	data.NavTree = append(data.NavTree, &dtos.NavLink{
-		Id:       "cfg",
-		Text:     "Configuration",
-		SubTitle: "Organization: " + c.OrgName,
-		Icon:     "gicon gicon-cog",
-		Url:      configNodes[0].Url,
-		Children: configNodes,
+		Id:         "cfg",
+		Text:       "Configuration",
+		SubTitle:   "Organization: " + c.OrgName,
+		Icon:       "gicon gicon-cog",
+		Url:        configNodes[0].Url,
+		SortWeight: dtos.WeightConfig,
+		Children:   configNodes,
	})

	if c.IsGrafanaAdmin {
@@ -326,6 +334,7 @@ func (hs *HTTPServer) setIndexViewData(c *m.ReqContext) (*dtos.IndexViewData, er
			Id:   "admin",
			Icon: "gicon gicon-shield",
			Url:  setting.AppSubUrl + "/admin/users",
+			SortWeight: dtos.WeightAdmin,
			Children: adminNavLinks,
		})
	}
@@ -337,6 +346,7 @@ func (hs *HTTPServer) setIndexViewData(c *m.ReqContext) (*dtos.IndexViewData, er
		Url:          "#",
		Icon:         "gicon gicon-question",
		HideFromMenu: true,
+		SortWeight:   dtos.WeightHelp,
		Children: []*dtos.NavLink{
			{Text: "Keyboard shortcuts", Url: "/shortcuts", Icon: "fa fa-fw fa-keyboard-o", Target: "_self"},
			{Text: "Community site", Url: "http://community.grafana.com", Icon: "fa fa-fw fa-comment", Target: "_blank"},
@@ -345,6 +355,10 @@ func (hs *HTTPServer) setIndexViewData(c *m.ReqContext) (*dtos.IndexViewData, er
	})

	hs.HooksService.RunIndexDataHooks(&data)
+
+	sort.SliceStable(data.NavTree, func(i, j int) bool {
+		return data.NavTree[i].SortWeight < data.NavTree[j].SortWeight
+	})
	return &data, nil
}
@@ -4,6 +4,7 @@ import (
	"encoding/hex"
	"net/http"
	"net/url"
+	"strings"

	"github.com/grafana/grafana/pkg/api/dtos"
	"github.com/grafana/grafana/pkg/bus"
@@ -27,6 +28,20 @@ var getViewIndex = func() string {
	return ViewIndex
}

+func validateRedirectTo(redirectTo string) error {
+	to, err := url.Parse(redirectTo)
+	if err != nil {
+		return login.ErrInvalidRedirectTo
+	}
+	if to.IsAbs() {
+		return login.ErrAbsoluteRedirectTo
+	}
+	if setting.AppSubUrl != "" && !strings.HasPrefix(to.Path, "/"+setting.AppSubUrl) {
+		return login.ErrInvalidRedirectTo
+	}
+	return nil
+}
+
func (hs *HTTPServer) LoginView(c *models.ReqContext) {
	viewData, err := setIndexViewData(hs, c)
	if err != nil {
@@ -64,6 +79,12 @@ func (hs *HTTPServer) LoginView(c *models.ReqContext) {
	}

	if redirectTo, _ := url.QueryUnescape(c.GetCookie("redirect_to")); len(redirectTo) > 0 {
+		if err := validateRedirectTo(redirectTo); err != nil {
+			viewData.Settings["loginError"] = err.Error()
+			c.HTML(200, getViewIndex(), viewData)
+			c.SetCookie("redirect_to", "", -1, setting.AppSubUrl+"/")
+			return
+		}
		c.SetCookie("redirect_to", "", -1, setting.AppSubUrl+"/")
		c.Redirect(redirectTo)
		return
@@ -73,7 +94,7 @@ func (hs *HTTPServer) LoginView(c *models.ReqContext) {
		return
	}

-	c.HTML(200, ViewIndex, viewData)
+	c.HTML(200, getViewIndex(), viewData)
}

func (hs *HTTPServer) loginAuthProxyUser(c *models.ReqContext) {
@@ -147,7 +168,11 @@ func (hs *HTTPServer) LoginPost(c *models.ReqContext, cmd dtos.LoginCommand) Res
	}

	if redirectTo, _ := url.QueryUnescape(c.GetCookie("redirect_to")); len(redirectTo) > 0 {
-		result["redirectUrl"] = redirectTo
+		if err := validateRedirectTo(redirectTo); err == nil {
+			result["redirectUrl"] = redirectTo
+		} else {
+			log.Info("Ignored invalid redirect_to cookie value: %v", redirectTo)
+		}
		c.SetCookie("redirect_to", "", -1, setting.AppSubUrl+"/")
	}
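The new `validateRedirectTo` closes an open-redirect hole in the `redirect_to` cookie handling: unparsable values and absolute URLs are rejected, and when Grafana is served under a sub-path the target must start with it. A standalone sketch of the same checks (the sub-path is passed as a parameter and the error variables are local stand-ins for `login.ErrInvalidRedirectTo` / `login.ErrAbsoluteRedirectTo`):

```go
package main

import (
	"errors"
	"fmt"
	"net/url"
	"strings"
)

var (
	errInvalidRedirectTo  = errors.New("invalid redirect_to")
	errAbsoluteRedirectTo = errors.New("absolute urls are not allowed for redirect_to")
)

// validateRedirectTo mirrors the handler's checks: reject unparsable
// values, absolute URLs, and relative URLs outside the configured sub-path.
func validateRedirectTo(redirectTo, appSubURL string) error {
	to, err := url.Parse(redirectTo)
	if err != nil {
		return errInvalidRedirectTo
	}
	if to.IsAbs() {
		return errAbsoluteRedirectTo
	}
	if appSubURL != "" && !strings.HasPrefix(to.Path, "/"+appSubURL) {
		return errInvalidRedirectTo
	}
	return nil
}

func main() {
	fmt.Println(validateRedirectTo("/profile", ""))                // <nil>
	fmt.Println(validateRedirectTo("/grafana/profile", "grafana")) // <nil>
	fmt.Println(validateRedirectTo("/profile", "grafana"))         // invalid redirect_to
	fmt.Println(validateRedirectTo("http://example.com", ""))      // absolute urls are not allowed for redirect_to
	fmt.Println(validateRedirectTo(":foo", ""))                    // invalid redirect_to
}
```

Rejecting absolute URLs is the key step: without it an attacker-set cookie could bounce a freshly logged-in user to an arbitrary external site.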
@@ -10,7 +10,10 @@ import (
|
||||
"testing"
|
||||
|
||||
"github.com/grafana/grafana/pkg/api/dtos"
|
||||
"github.com/grafana/grafana/pkg/bus"
|
||||
"github.com/grafana/grafana/pkg/components/simplejson"
|
||||
"github.com/grafana/grafana/pkg/infra/log"
|
||||
"github.com/grafana/grafana/pkg/login"
|
||||
"github.com/grafana/grafana/pkg/models"
|
||||
"github.com/grafana/grafana/pkg/services/auth"
|
||||
"github.com/grafana/grafana/pkg/setting"
|
||||
@@ -53,6 +56,22 @@ func getBody(resp *httptest.ResponseRecorder) (string, error) {
|
||||
return string(responseData), nil
|
||||
}
|
||||
|
||||
type FakeLogger struct {
|
||||
log.Logger
|
||||
}
|
||||
|
||||
func (stub *FakeLogger) Info(testMessage string, ctx ...interface{}) {
|
||||
}
|
||||
|
||||
type redirectCase struct {
|
||||
desc string
|
||||
url string
|
||||
status int
|
||||
err error
|
||||
appURL string
|
||||
appSubURL string
|
||||
}
|
||||
|
||||
func TestLoginErrorCookieApiEndpoint(t *testing.T) {
|
||||
mockSetIndexViewData()
|
||||
defer resetSetIndexViewData()
|
||||
@@ -100,10 +119,201 @@ func TestLoginErrorCookieApiEndpoint(t *testing.T) {
|
||||
assert.Equal(t, sc.resp.Code, 200)
|
||||
|
||||
responseString, err := getBody(sc.resp)
|
||||
assert.Nil(t, err)
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, strings.Contains(responseString, oauthError.Error()))
|
||||
}
|
||||
|
||||
func TestLoginViewRedirect(t *testing.T) {
	mockSetIndexViewData()
	defer resetSetIndexViewData()

	mockViewIndex()
	defer resetViewIndex()
	sc := setupScenarioContext("/login")
	hs := &HTTPServer{
		Cfg:     setting.NewCfg(),
		License: models.OSSLicensingService{},
	}

	sc.defaultHandler = Wrap(func(w http.ResponseWriter, c *models.ReqContext) {
		c.IsSignedIn = true
		c.SignedInUser = &models.SignedInUser{
			UserId: 10,
		}
		hs.LoginView(c)
	})

	setting.OAuthService = &setting.OAuther{}
	setting.OAuthService.OAuthInfos = make(map[string]*setting.OAuthInfo)

	redirectCases := []redirectCase{
		{
			desc:   "grafana relative url without subpath",
			url:    "/profile",
			appURL: "http://localhost:3000",
			status: 302,
		},
		{
			desc:      "grafana relative url with subpath",
			url:       "/grafana/profile",
			appURL:    "http://localhost:3000",
			appSubURL: "grafana",
			status:    302,
		},
		{
			desc:      "relative url with missing subpath",
			url:       "/profile",
			appURL:    "http://localhost:3000",
			appSubURL: "grafana",
			status:    200,
			err:       login.ErrInvalidRedirectTo,
		},
		{
			desc:   "grafana absolute url",
			url:    "http://localhost:3000/profile",
			appURL: "http://localhost:3000",
			status: 200,
			err:    login.ErrAbsoluteRedirectTo,
		},
		{
			desc:   "non grafana absolute url",
			url:    "http://example.com",
			appURL: "http://localhost:3000",
			status: 200,
			err:    login.ErrAbsoluteRedirectTo,
		},
		{
			desc:   "invalid url",
			url:    ":foo",
			appURL: "http://localhost:3000",
			status: 200,
			err:    login.ErrInvalidRedirectTo,
		},
	}

	for _, c := range redirectCases {
		setting.AppUrl = c.appURL
		setting.AppSubUrl = c.appSubURL
		t.Run(c.desc, func(t *testing.T) {
			cookie := http.Cookie{
				Name:     "redirect_to",
				MaxAge:   60,
				Value:    c.url,
				HttpOnly: true,
				Path:     setting.AppSubUrl + "/",
				Secure:   hs.Cfg.CookieSecure,
				SameSite: hs.Cfg.CookieSameSite,
			}
			sc.m.Get(sc.url, sc.defaultHandler)
			sc.fakeReqNoAssertionsWithCookie("GET", sc.url, cookie).exec()
			assert.Equal(t, c.status, sc.resp.Code)
			if c.status == 302 {
				location, ok := sc.resp.Header()["Location"]
				assert.True(t, ok)
				assert.Equal(t, location[0], c.url)
			}

			responseString, err := getBody(sc.resp)
			assert.NoError(t, err)
			if c.err != nil {
				assert.True(t, strings.Contains(responseString, c.err.Error()))
			}
		})
	}
}
func TestLoginPostRedirect(t *testing.T) {
	mockSetIndexViewData()
	defer resetSetIndexViewData()

	mockViewIndex()
	defer resetViewIndex()
	sc := setupScenarioContext("/login")
	hs := &HTTPServer{
		log:              &FakeLogger{},
		Cfg:              setting.NewCfg(),
		License:          models.OSSLicensingService{},
		AuthTokenService: auth.NewFakeUserAuthTokenService(),
	}

	sc.defaultHandler = Wrap(func(w http.ResponseWriter, c *models.ReqContext) Response {
		cmd := dtos.LoginCommand{
			User:     "admin",
			Password: "admin",
		}
		return hs.LoginPost(c, cmd)
	})

	bus.AddHandler("grafana-auth", func(query *models.LoginUserQuery) error {
		query.User = &models.User{
			Id:    42,
			Email: "",
		}
		return nil
	})

	redirectCases := []redirectCase{
		{
			desc:   "grafana relative url without subpath",
			url:    "/profile",
			appURL: "https://localhost:3000",
		},
		{
			desc:      "grafana relative url with subpath",
			url:       "/grafana/profile",
			appURL:    "https://localhost:3000",
			appSubURL: "grafana",
		},
		{
			desc:      "relative url with missing subpath",
			url:       "/profile",
			appURL:    "https://localhost:3000",
			appSubURL: "grafana",
			err:       login.ErrInvalidRedirectTo,
		},
		{
			desc:   "grafana absolute url",
			url:    "http://localhost:3000/profile",
			appURL: "http://localhost:3000",
			err:    login.ErrAbsoluteRedirectTo,
		},
		{
			desc:   "non grafana absolute url",
			url:    "http://example.com",
			appURL: "https://localhost:3000",
			err:    login.ErrAbsoluteRedirectTo,
		},
	}

	for _, c := range redirectCases {
		setting.AppUrl = c.appURL
		setting.AppSubUrl = c.appSubURL
		t.Run(c.desc, func(t *testing.T) {
			cookie := http.Cookie{
				Name:     "redirect_to",
				MaxAge:   60,
				Value:    c.url,
				HttpOnly: true,
				Path:     setting.AppSubUrl + "/",
				Secure:   hs.Cfg.CookieSecure,
				SameSite: hs.Cfg.CookieSameSite,
			}
			sc.m.Post(sc.url, sc.defaultHandler)
			sc.fakeReqNoAssertionsWithCookie("POST", sc.url, cookie).exec()
			assert.Equal(t, sc.resp.Code, 200)

			respJSON, err := simplejson.NewJson(sc.resp.Body.Bytes())
			assert.NoError(t, err)
			redirectURL := respJSON.Get("redirectUrl").MustString()
			if c.err != nil {
				assert.Equal(t, "", redirectURL)
			} else {
				assert.Equal(t, c.url, redirectURL)
			}
		})
	}
}
||||
func TestLoginOAuthRedirect(t *testing.T) {
	mockSetIndexViewData()
	defer resetSetIndexViewData()
@@ -132,9 +132,11 @@ func RevokeInvite(c *m.ReqContext) Response {
 	return Success("Invite revoked")
 }

+// GetInviteInfoByCode gets a pending user invite corresponding to a certain code.
+// A response containing an InviteInfo object is returned if the invite is found.
+// If a (pending) invite is not found, 404 is returned.
 func GetInviteInfoByCode(c *m.ReqContext) Response {
 	query := m.GetTempUserByCodeQuery{Code: c.Params(":code")}

 	if err := bus.Dispatch(&query); err != nil {
 		if err == m.ErrTempUserNotFound {
 			return Error(404, "Invite not found", nil)

@@ -143,6 +145,9 @@ func GetInviteInfoByCode(c *m.ReqContext) Response {
 	}

 	invite := query.Result
+	if invite.Status != m.TmpUserInvitePending {
+		return Error(404, "Invite not found", nil)
+	}

 	return JSON(200, dtos.InviteInfo{
 		Email: invite.Email,
@@ -322,7 +322,7 @@ func addOAuthPassThruAuth(c *m.ReqContext, req *http.Request) {
 		TokenType: authInfoQuery.Result.OAuthTokenType,
 	}).Token()
 	if err != nil {
-		logger.Error("Failed to retrieve access token from oauth provider", "provider", authInfoQuery.Result.AuthModule)
+		logger.Error("Failed to retrieve access token from oauth provider", "provider", authInfoQuery.Result.AuthModule, "error", err)
 		return
 	}
@@ -463,6 +463,7 @@ func TestDSRouteRule(t *testing.T) {
 			createAuthTest(m.DS_ES, AUTHTYPE_BASIC, AUTHCHECK_HEADER, true),
 		}
 		for _, test := range tests {
+			m.ClearDSDecryptionCache()
 			runDatasourceAuthTest(test)
 		}
 	})
@@ -40,6 +40,7 @@ func (hs *HTTPServer) RenderToPng(c *m.ReqContext) {
 		return
 	}

+	maxConcurrentLimitForApiCalls := 30
 	result, err := hs.RenderService.Render(c.Req.Context(), rendering.Opts{
 		Width:  width,
 		Height: height,

@@ -50,7 +51,7 @@ func (hs *HTTPServer) RenderToPng(c *m.ReqContext) {
 		Path:     c.Params("*") + queryParams,
 		Timezone: queryReader.Get("tz", ""),
 		Encoding: queryReader.Get("encoding", ""),
-		ConcurrentLimit: 30,
+		ConcurrentLimit: maxConcurrentLimitForApiCalls,
 	})

 	if err != nil && err == rendering.ErrTimeout {
78	pkg/cmd/grafana-server/diagnostics.go	Normal file
@@ -0,0 +1,78 @@
package main

import (
	"fmt"
	"os"
	"strconv"
)

const (
	profilingEnabledEnvName = "GF_DIAGNOSTICS_PROFILING_ENABLED"
	profilingPortEnvName    = "GF_DIAGNOSTICS_PROFILING_PORT"
	tracingEnabledEnvName   = "GF_DIAGNOSTICS_TRACING_ENABLED"
	tracingFileEnvName      = "GF_DIAGNOSTICS_TRACING_FILE"
)

type profilingDiagnostics struct {
	enabled bool
	port    uint
}

func newProfilingDiagnostics(enabled bool, port uint) *profilingDiagnostics {
	return &profilingDiagnostics{
		enabled: enabled,
		port:    port,
	}
}

func (pd *profilingDiagnostics) overrideWithEnv() error {
	enabledEnv := os.Getenv(profilingEnabledEnvName)
	if enabledEnv != "" {
		enabled, err := strconv.ParseBool(enabledEnv)
		if err != nil {
			return fmt.Errorf("Failed to parse %s environment variable as bool", profilingEnabledEnvName)
		}
		pd.enabled = enabled
	}

	portEnv := os.Getenv(profilingPortEnvName)
	if portEnv != "" {
		port, parseErr := strconv.ParseUint(portEnv, 0, 64)
		if parseErr != nil {
			return fmt.Errorf("Failed to parse %s environment variable to unsigned integer", profilingPortEnvName)
		}
		pd.port = uint(port)
	}

	return nil
}

type tracingDiagnostics struct {
	enabled bool
	file    string
}

func newTracingDiagnostics(enabled bool, file string) *tracingDiagnostics {
	return &tracingDiagnostics{
		enabled: enabled,
		file:    file,
	}
}

func (td *tracingDiagnostics) overrideWithEnv() error {
	enabledEnv := os.Getenv(tracingEnabledEnvName)
	if enabledEnv != "" {
		enabled, err := strconv.ParseBool(enabledEnv)
		if err != nil {
			return fmt.Errorf("Failed to parse %s environment variable as bool", tracingEnabledEnvName)
		}
		td.enabled = enabled
	}

	fileEnv := os.Getenv(tracingFileEnvName)
	if fileEnv != "" {
		td.file = fileEnv
	}

	return nil
}
73	pkg/cmd/grafana-server/diagnostics_test.go	Normal file
@@ -0,0 +1,73 @@
package main

import (
	"fmt"
	"os"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestProfilingDiagnostics(t *testing.T) {
	tcs := []struct {
		defaults   *profilingDiagnostics
		enabledEnv string
		portEnv    string
		expected   *profilingDiagnostics
	}{
		{defaults: newProfilingDiagnostics(false, 6060), enabledEnv: "", portEnv: "", expected: newProfilingDiagnostics(false, 6060)},
		{defaults: newProfilingDiagnostics(true, 8080), enabledEnv: "", portEnv: "", expected: newProfilingDiagnostics(true, 8080)},
		{defaults: newProfilingDiagnostics(false, 6060), enabledEnv: "false", portEnv: "8080", expected: newProfilingDiagnostics(false, 8080)},
		{defaults: newProfilingDiagnostics(false, 6060), enabledEnv: "true", portEnv: "8080", expected: newProfilingDiagnostics(true, 8080)},
		{defaults: newProfilingDiagnostics(false, 6060), enabledEnv: "true", portEnv: "", expected: newProfilingDiagnostics(true, 6060)},
	}

	for i, tc := range tcs {
		t.Run(fmt.Sprintf("testcase %d", i), func(t *testing.T) {
			os.Clearenv()
			if tc.enabledEnv != "" {
				err := os.Setenv(profilingEnabledEnvName, tc.enabledEnv)
				assert.NoError(t, err)
			}
			if tc.portEnv != "" {
				err := os.Setenv(profilingPortEnvName, tc.portEnv)
				assert.NoError(t, err)
			}
			err := tc.defaults.overrideWithEnv()
			assert.NoError(t, err)
			assert.Exactly(t, tc.expected, tc.defaults)
		})
	}
}

func TestTracingDiagnostics(t *testing.T) {
	tcs := []struct {
		defaults   *tracingDiagnostics
		enabledEnv string
		fileEnv    string
		expected   *tracingDiagnostics
	}{
		{defaults: newTracingDiagnostics(false, "trace.out"), enabledEnv: "", fileEnv: "", expected: newTracingDiagnostics(false, "trace.out")},
		{defaults: newTracingDiagnostics(true, "/tmp/trace.out"), enabledEnv: "", fileEnv: "", expected: newTracingDiagnostics(true, "/tmp/trace.out")},
		{defaults: newTracingDiagnostics(false, "trace.out"), enabledEnv: "false", fileEnv: "/tmp/trace.out", expected: newTracingDiagnostics(false, "/tmp/trace.out")},
		{defaults: newTracingDiagnostics(false, "trace.out"), enabledEnv: "true", fileEnv: "/tmp/trace.out", expected: newTracingDiagnostics(true, "/tmp/trace.out")},
		{defaults: newTracingDiagnostics(false, "trace.out"), enabledEnv: "true", fileEnv: "", expected: newTracingDiagnostics(true, "trace.out")},
	}

	for i, tc := range tcs {
		t.Run(fmt.Sprintf("testcase %d", i), func(t *testing.T) {
			os.Clearenv()
			if tc.enabledEnv != "" {
				err := os.Setenv(tracingEnabledEnvName, tc.enabledEnv)
				assert.NoError(t, err)
			}
			if tc.fileEnv != "" {
				err := os.Setenv(tracingFileEnvName, tc.fileEnv)
				assert.NoError(t, err)
			}
			err := tc.defaults.overrideWithEnv()
			assert.NoError(t, err)
			assert.Exactly(t, tc.expected, tc.defaults)
		})
	}
}
@@ -46,7 +46,9 @@ func main() {

 		v       = flag.Bool("v", false, "prints current version and exits")
 		profile = flag.Bool("profile", false, "Turn on pprof profiling")
-		profilePort = flag.Int("profile-port", 6060, "Define custom port for profiling")
+		profilePort = flag.Uint("profile-port", 6060, "Define custom port for profiling")
+		tracing     = flag.Bool("tracing", false, "Turn on tracing")
+		tracingFile = flag.String("tracing-file", "trace.out", "Define tracing output file")
 	)

 	flag.Parse()

@@ -56,16 +58,32 @@ func main() {
 		os.Exit(0)
 	}

-	if *profile {
+	profileDiagnostics := newProfilingDiagnostics(*profile, *profilePort)
+	if err := profileDiagnostics.overrideWithEnv(); err != nil {
+		fmt.Fprintln(os.Stderr, err.Error())
+		os.Exit(1)
+	}
+
+	traceDiagnostics := newTracingDiagnostics(*tracing, *tracingFile)
+	if err := traceDiagnostics.overrideWithEnv(); err != nil {
+		fmt.Fprintln(os.Stderr, err.Error())
+		os.Exit(1)
+	}
+
+	if profileDiagnostics.enabled {
+		fmt.Println("diagnostics: pprof profiling enabled", "port", profileDiagnostics.port)
 		runtime.SetBlockProfileRate(1)
 		go func() {
-			err := http.ListenAndServe(fmt.Sprintf("localhost:%d", *profilePort), nil)
+			err := http.ListenAndServe(fmt.Sprintf("localhost:%d", profileDiagnostics.port), nil)
 			if err != nil {
 				panic(err)
 			}
 		}()
 	}

-	f, err := os.Create("trace.out")
+	if traceDiagnostics.enabled {
+		fmt.Println("diagnostics: tracing enabled", "file", traceDiagnostics.file)
+		f, err := os.Create(traceDiagnostics.file)
 		if err != nil {
 			panic(err)
 		}
@@ -103,7 +103,7 @@ func (s *Server) Run() (err error) {
 		s.log.Info("Initializing " + service.Name)

 		if err := service.Instance.Init(); err != nil {
-			return fmt.Errorf("Service init failed: %v", err)
+			return errutil.Wrapf(err, "Service init failed")
 		}
 	}
@@ -126,18 +126,21 @@ func (s *Server) Run() (err error) {
 			return nil
 		}

-		if err := service.Run(s.context); err != nil {
+		err := service.Run(s.context)
+
+		// Mark that we are in shutdown mode
+		// So no more services are started
+		s.shutdownInProgress = true
+		if err != nil {
 			if err != context.Canceled {
 				// Server has crashed.
 				s.log.Error("Stopped "+descriptor.Name, "reason", err)
 			} else {
 				s.log.Info("Stopped "+descriptor.Name, "reason", err)
 			}

 			return err
 		}

-		// Mark that we are in shutdown mode
-		// So more services are not started
-		s.shutdownInProgress = true
 		return nil
 	})
}
@@ -12,6 +12,7 @@ import (

 	"github.com/go-stack/stack"
 	"github.com/grafana/grafana/pkg/util"
+	"github.com/grafana/grafana/pkg/util/errutil"
 	"github.com/inconshreveable/log15"
 	isatty "github.com/mattn/go-isatty"
 	"gopkg.in/ini.v1"

@@ -181,7 +182,7 @@ func getLogFormat(format string) log15.Format {
 	}
 }

-func ReadLoggingConfig(modes []string, logsPath string, cfg *ini.File) {
+func ReadLoggingConfig(modes []string, logsPath string, cfg *ini.File) error {
 	Close()

 	defaultLevelName, _ := getLogLevelFromConfig("log", "info", cfg)

@@ -194,6 +195,7 @@ func ReadLoggingConfig(modes []string, logsPath string, cfg *ini.File) {
 		sec, err := cfg.GetSection("log." + mode)
 		if err != nil {
 			Root.Error("Unknown log mode", "mode", mode)
+			return errutil.Wrapf(err, "failed to get config section log.%s", mode)
 		}

 		// Log level.

@@ -212,7 +214,7 @@ func ReadLoggingConfig(modes []string, logsPath string, cfg *ini.File) {
 			dpath := filepath.Dir(fileName)
 			if err := os.MkdirAll(dpath, os.ModePerm); err != nil {
-				Root.Error("Failed to create directory", "dpath", dpath, "err", err)
-				break
+				return errutil.Wrapf(err, "failed to create log directory %q", dpath)
 			}
 			fileHandler := NewFileWriter()
 			fileHandler.Filename = fileName

@@ -223,8 +225,8 @@ func ReadLoggingConfig(modes []string, logsPath string, cfg *ini.File) {
 			fileHandler.Daily = sec.Key("daily_rotate").MustBool(true)
 			fileHandler.Maxdays = sec.Key("max_days").MustInt64(7)
 			if err := fileHandler.Init(); err != nil {
-				Root.Error("Failed to create directory", "dpath", dpath, "err", err)
-				break
+				Root.Error("Failed to initialize file handler", "dpath", dpath, "err", err)
+				return errutil.Wrapf(err, "failed to initialize file handler")
 			}

 			loggersToClose = append(loggersToClose, fileHandler)

@@ -236,6 +238,9 @@ func ReadLoggingConfig(modes []string, logsPath string, cfg *ini.File) {
 			loggersToClose = append(loggersToClose, sysLogHandler)
 			handler = sysLogHandler
 		}
+		if handler == nil {
+			panic(fmt.Sprintf("Handler is uninitialized for mode %q", mode))
+		}

 		for key, value := range defaultFilters {
 			if _, exist := modeFilters[key]; !exist {

@@ -254,6 +259,7 @@ func ReadLoggingConfig(modes []string, logsPath string, cfg *ini.File) {
 	}

 	Root.SetHandler(log15.MultiHandler(handlers...))
+	return nil
 }

func LogFilterHandler(maxLevel log15.Lvl, filters map[string]log15.Lvl, h log15.Handler) log15.Handler {
@@ -28,12 +28,13 @@ func (uss *UsageStatsService) sendUsageStats(oauthProviders map[string]bool) {

 	metrics := map[string]interface{}{}
 	report := map[string]interface{}{
-		"version":   version,
-		"metrics":   metrics,
-		"os":        runtime.GOOS,
-		"arch":      runtime.GOARCH,
-		"edition":   getEdition(uss.License.HasValidLicense()),
-		"packaging": setting.Packaging,
+		"version":         version,
+		"metrics":         metrics,
+		"os":              runtime.GOOS,
+		"arch":            runtime.GOARCH,
+		"edition":         getEdition(),
+		"hasValidLicense": uss.License.HasValidLicense(),
+		"packaging":       setting.Packaging,
 	}

 	statsQuery := models.GetSystemStatsQuery{}

@@ -60,6 +61,9 @@ func (uss *UsageStatsService) sendUsageStats(oauthProviders map[string]bool) {
 	metrics["stats.snapshots.count"] = statsQuery.Result.Snapshots
 	metrics["stats.teams.count"] = statsQuery.Result.Teams
 	metrics["stats.total_auth_token.count"] = statsQuery.Result.AuthTokens
+	metrics["stats.valid_license.count"] = getValidLicenseCount(uss.License.HasValidLicense())
+	metrics["stats.edition.oss.count"] = getOssEditionCount()
+	metrics["stats.edition.enterprise.count"] = getEnterpriseEditionCount()

 	userCount := statsQuery.Result.Users
 	avgAuthTokensPerUser := statsQuery.Result.AuthTokens

@@ -182,9 +186,32 @@ func (uss *UsageStatsService) updateTotalStats() {
 	metrics.StatsTotalActiveAdmins.Set(float64(statsQuery.Result.ActiveAdmins))
 }

-func getEdition(validLicense bool) string {
-	if validLicense {
-		return "enterprise"
+func getEdition() string {
+	edition := "oss"
+	if setting.IsEnterprise {
+		edition = "enterprise"
 	}
-	return "oss"
+
+	return edition
 }

+func getEnterpriseEditionCount() int {
+	if setting.IsEnterprise {
+		return 1
+	}
+	return 0
+}
+
+func getOssEditionCount() int {
+	if setting.IsEnterprise {
+		return 0
+	}
+	return 1
+}
+
+func getValidLicenseCount(validLicense bool) int {
+	if validLicense {
+		return 1
+	}
+	return 0
+}
@@ -18,6 +18,8 @@ var (
 	ErrTooManyLoginAttempts = errors.New("Too many consecutive incorrect login attempts for user. Login for user temporarily blocked")
 	ErrPasswordEmpty        = errors.New("No password provided")
 	ErrUserDisabled         = errors.New("User is disabled")
+	ErrAbsoluteRedirectTo   = errors.New("Absolute urls are not allowed for redirect_to cookie value")
+	ErrInvalidRedirectTo    = errors.New("Invalid redirect_to cookie value")
 )

var loginLogger = log.New("login")
@@ -76,7 +76,7 @@ func (ds *DataSource) DecryptedPassword() string {

 // decryptedValue returns decrypted value from secureJsonData
 func (ds *DataSource) decryptedValue(field string, fallback string) string {
-	if value, ok := ds.SecureJsonData.DecryptedValue(field); ok {
+	if value, ok := ds.DecryptedValue(field); ok {
 		return value
 	}
 	return fallback

@@ -162,3 +162,49 @@ func (ds *DataSource) getCustomHeaders() map[string]string {

 	return headers
 }

+type cachedDecryptedJSON struct {
+	updated time.Time
+	json    map[string]string
+}
+
+type secureJSONDecryptionCache struct {
+	cache map[int64]cachedDecryptedJSON
+	sync.Mutex
+}
+
+var dsDecryptionCache = secureJSONDecryptionCache{
+	cache: make(map[int64]cachedDecryptedJSON),
+}
+
+// DecryptedValues returns cached decrypted values from secureJsonData.
+func (ds *DataSource) DecryptedValues() map[string]string {
+	dsDecryptionCache.Lock()
+	defer dsDecryptionCache.Unlock()
+
+	if item, present := dsDecryptionCache.cache[ds.Id]; present && ds.Updated.Equal(item.updated) {
+		return item.json
+	}
+
+	json := ds.SecureJsonData.Decrypt()
+	dsDecryptionCache.cache[ds.Id] = cachedDecryptedJSON{
+		updated: ds.Updated,
+		json:    json,
+	}
+
+	return json
+}
+
+// DecryptedValue returns cached decrypted value from cached secureJsonData.
+func (ds *DataSource) DecryptedValue(key string) (string, bool) {
+	value, exists := ds.DecryptedValues()[key]
+	return value, exists
+}
+
+// ClearDSDecryptionCache clears the datasource decryption cache.
+func ClearDSDecryptionCache() {
+	dsDecryptionCache.Lock()
+	defer dsDecryptionCache.Unlock()
+
+	dsDecryptionCache.cache = make(map[int64]cachedDecryptedJSON)
+}
@@ -11,15 +11,16 @@ import (
 	. "github.com/smartystreets/goconvey/convey"
 	"github.com/stretchr/testify/require"

+	"github.com/grafana/grafana/pkg/components/securejsondata"
 	"github.com/grafana/grafana/pkg/components/simplejson"
 	"github.com/grafana/grafana/pkg/setting"
 	"github.com/grafana/grafana/pkg/util"
 )

+//nolint:goconst
-func TestDataSourceCache(t *testing.T) {
+func TestDataSourceProxyCache(t *testing.T) {
 	Convey("When caching a datasource proxy", t, func() {
-		clearCache()
+		clearDSProxyCache()
 		ds := DataSource{
 			Id:  1,
 			Url: "http://k8s:8001",

@@ -41,13 +42,13 @@ func TestDataSourceCache(t *testing.T) {
 		Convey("Should have no TLS client certificate configured", func() {
 			So(len(t1.transport.TLSClientConfig.Certificates), ShouldEqual, 0)
 		})
-		Convey("Should have no user-supplied TLS CA onfigured", func() {
+		Convey("Should have no user-supplied TLS CA configured", func() {
 			So(t1.transport.TLSClientConfig.RootCAs, ShouldBeNil)
 		})
 	})

 	Convey("When caching a datasource proxy then updating it", t, func() {
-		clearCache()
+		clearDSProxyCache()
 		setting.SecretKey = "password"

 		json := simplejson.New()

@@ -89,7 +90,7 @@ func TestDataSourceCache(t *testing.T) {
 	})

 	Convey("When caching a datasource proxy with TLS client authentication enabled", t, func() {
-		clearCache()
+		clearDSProxyCache()
 		setting.SecretKey = "password"

 		json := simplejson.New()

@@ -123,7 +124,7 @@ func TestDataSourceCache(t *testing.T) {
 	})

 	Convey("When caching a datasource proxy with a user-supplied TLS CA", t, func() {
-		clearCache()
+		clearDSProxyCache()
 		setting.SecretKey = "password"

 		json := simplejson.New()

@@ -152,7 +153,7 @@ func TestDataSourceCache(t *testing.T) {
 	})

 	Convey("When caching a datasource proxy when user skips TLS verification", t, func() {
-		clearCache()
+		clearDSProxyCache()

 		json := simplejson.New()
 		json.Set("tlsSkipVerify", true)

@@ -173,7 +174,7 @@ func TestDataSourceCache(t *testing.T) {
 	})

 	Convey("When caching a datasource proxy with custom headers specified", t, func() {
-		clearCache()
+		clearDSProxyCache()

 		json := simplejson.NewFromAny(map[string]interface{}{
 			"httpHeaderName1": "Authorization",

@@ -236,7 +237,64 @@ func TestDataSourceCache(t *testing.T) {
 	})
 }

-func clearCache() {
+func TestDataSourceDecryptionCache(t *testing.T) {
+	Convey("When datasource hasn't been updated, encrypted JSON should be fetched from cache", t, func() {
+		ClearDSDecryptionCache()
+
+		ds := DataSource{
+			Id:       1,
+			Type:     DS_INFLUXDB_08,
+			JsonData: simplejson.New(),
+			User:     "user",
+			SecureJsonData: securejsondata.GetEncryptedJsonData(map[string]string{
+				"password": "password",
+			}),
+		}
+
+		// Populate cache
+		password, ok := ds.DecryptedValue("password")
+		So(password, ShouldEqual, "password")
+		So(ok, ShouldBeTrue)
+
+		ds.SecureJsonData = securejsondata.GetEncryptedJsonData(map[string]string{
+			"password": "",
+		})
+
+		password, ok = ds.DecryptedValue("password")
+		So(password, ShouldEqual, "password")
+		So(ok, ShouldBeTrue)
+	})
+
+	Convey("When datasource is updated, encrypted JSON should not be fetched from cache", t, func() {
+		ClearDSDecryptionCache()
+
+		ds := DataSource{
+			Id:       1,
+			Type:     DS_INFLUXDB_08,
+			JsonData: simplejson.New(),
+			User:     "user",
+			SecureJsonData: securejsondata.GetEncryptedJsonData(map[string]string{
+				"password": "password",
+			}),
+		}
+
+		// Populate cache
+		password, ok := ds.DecryptedValue("password")
+		So(password, ShouldEqual, "password")
+		So(ok, ShouldBeTrue)
+
+		ds.SecureJsonData = securejsondata.GetEncryptedJsonData(map[string]string{
+			"password": "",
+		})
+		ds.Updated = time.Now()
+
+		password, ok = ds.DecryptedValue("password")
+		So(password, ShouldEqual, "")
+		So(ok, ShouldBeTrue)
+	})
+}
+
+func clearDSProxyCache() {
 	ptc.Lock()
 	defer ptc.Unlock()
@@ -137,6 +137,10 @@ func (tn *TelegramNotifier) buildMessageInlineImage(evalContext *alerting.EvalCo
 	var err error

 	imageFile, err = os.Open(evalContext.ImageOnDiskPath)
+	if err != nil {
+		return nil, err
+	}
+
 	defer func() {
 		err := imageFile.Close()
 		if err != nil {

@@ -144,10 +148,6 @@ func (tn *TelegramNotifier) buildMessageInlineImage(evalContext *alerting.EvalCo
 		}
 	}()

-	if err != nil {
-		return nil, err
-	}
-
 	ruleURL, err := evalContext.GetRuleURL()
 	if err != nil {
 		return nil, err
@@ -230,7 +230,12 @@ func syncOrgRoles(user *models.User, extUser *models.ExternalUserInfo) error {
 	// delete any removed org roles
 	for _, orgId := range deleteOrgIds {
 		cmd := &models.RemoveOrgUserCommand{OrgId: orgId, UserId: user.Id}
-		if err := bus.Dispatch(cmd); err != nil {
+		err := bus.Dispatch(cmd)
+		if err == models.ErrLastOrgAdmin {
+			logger.Error(err.Error(), "userId", cmd.UserId, "orgId", cmd.OrgId)
+			continue
+		}
+		if err != nil {
 			return err
 		}
 	}
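The reordered handling above treats `models.ErrLastOrgAdmin` as a skippable sentinel (log and `continue`) while any other error still aborts the sync. The control flow in isolation — the error value and helpers here are illustrative stand-ins:

```go
package main

import (
	"errors"
	"fmt"
)

var errLastOrgAdmin = errors.New("Cannot remove last organization admin")

// removeFromOrg fails with the sentinel for org 10, mirroring the test data below.
func removeFromOrg(orgID int64) error {
	if orgID == 10 {
		return errLastOrgAdmin
	}
	return nil
}

// syncRemovals skips (and would log) the sentinel error but returns any other,
// so one protected admin doesn't break the whole role sync.
func syncRemovals(orgIDs []int64) error {
	for _, id := range orgIDs {
		err := removeFromOrg(id)
		if errors.Is(err, errLastOrgAdmin) {
			continue
		}
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(syncRemovals([]int64{10, 11})) // <nil>
}
```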
136	pkg/services/login/login_test.go	Normal file
@@ -0,0 +1,136 @@
package login

import (
	"testing"

	"github.com/grafana/grafana/pkg/bus"
	"github.com/grafana/grafana/pkg/models"
	log "github.com/inconshreveable/log15"
	"github.com/stretchr/testify/require"
)

func Test_syncOrgRoles_doesNotBreakWhenTryingToRemoveLastOrgAdmin(t *testing.T) {
	user := createSimpleUser()
	externalUser := createSimpleExternalUser()
	remResp := createResponseWithOneErrLastOrgAdminItem()

	bus.ClearBusHandlers()
	defer bus.ClearBusHandlers()
	bus.AddHandler("test", func(q *models.GetUserOrgListQuery) error {
		q.Result = createUserOrgDTO()

		return nil
	})

	bus.AddHandler("test", func(cmd *models.RemoveOrgUserCommand) error {
		testData := remResp[0]
		remResp = remResp[1:]

		require.Equal(t, testData.orgId, cmd.OrgId)
		return testData.response
	})
	bus.AddHandler("test", func(cmd *models.SetUsingOrgCommand) error {
		return nil
	})

	err := syncOrgRoles(&user, &externalUser)
	require.Empty(t, remResp)
	require.NoError(t, err)
}

func Test_syncOrgRoles_whenTryingToRemoveLastOrgLogsError(t *testing.T) {
	var logOutput string
	logger.SetHandler(log.FuncHandler(func(r *log.Record) error {
		logOutput = r.Msg
		return nil
	}))

	user := createSimpleUser()
	externalUser := createSimpleExternalUser()
	remResp := createResponseWithOneErrLastOrgAdminItem()

	bus.ClearBusHandlers()
	defer bus.ClearBusHandlers()
	bus.AddHandler("test", func(q *models.GetUserOrgListQuery) error {
		q.Result = createUserOrgDTO()

		return nil
	})

	bus.AddHandler("test", func(cmd *models.RemoveOrgUserCommand) error {
		testData := remResp[0]
		remResp = remResp[1:]

		require.Equal(t, testData.orgId, cmd.OrgId)
		return testData.response
	})
	bus.AddHandler("test", func(cmd *models.SetUsingOrgCommand) error {
		return nil
	})

	err := syncOrgRoles(&user, &externalUser)
	require.NoError(t, err)
	require.Equal(t, models.ErrLastOrgAdmin.Error(), logOutput)
}

func createSimpleUser() models.User {
	user := models.User{
		Id: 1,
	}

	return user
}

func createUserOrgDTO() []*models.UserOrgDTO {
	users := []*models.UserOrgDTO{
		{
			OrgId: 1,
			Name:  "Bar",
			Role:  models.ROLE_VIEWER,
		},
		{
			OrgId: 10,
			Name:  "Foo",
			Role:  models.ROLE_ADMIN,
		},
		{
			OrgId: 11,
			Name:  "Stuff",
			Role:  models.ROLE_VIEWER,
		},
	}
	return users
}

func createSimpleExternalUser() models.ExternalUserInfo {
	externalUser := models.ExternalUserInfo{
		AuthModule: "ldap",
		OrgRoles: map[int64]models.RoleType{
			1: models.ROLE_VIEWER,
		},
	}

	return externalUser
}

func createResponseWithOneErrLastOrgAdminItem() []struct {
	orgId    int64
	response error
} {
	remResp := []struct {
		orgId    int64
		response error
	}{
		{
			orgId:    10,
			response: models.ErrLastOrgAdmin,
		},
		{
			orgId:    11,
			response: nil,
		},
	}
	return remResp
}
@@ -114,11 +114,13 @@ func (val *JSONValue) UnmarshalYAML(unmarshal func(interface{}) error) error {
	if err != nil {
		return err
	}
	val.Raw = unmarshaled
	interpolated := make(map[string]interface{})
	raw := make(map[string]interface{})
	for key, val := range unmarshaled {
		interpolated[key] = tranformInterface(val)
		interpolated[key], raw[key] = transformInterface(val)
	}

	val.Raw = raw
	val.value = interpolated
	return err
}
@@ -138,11 +140,12 @@ func (val *StringMapValue) UnmarshalYAML(unmarshal func(interface{}) error) erro
	if err != nil {
		return err
	}
	val.Raw = unmarshaled
	interpolated := make(map[string]string)
	raw := make(map[string]string)
	for key, val := range unmarshaled {
		interpolated[key] = interpolateValue(val)
		interpolated[key], raw[key] = interpolateValue(val)
	}
	val.Raw = raw
	val.value = interpolated
	return err
}
@@ -151,14 +154,15 @@ func (val *StringMapValue) Value() map[string]string {
	return val.value
}

// tranformInterface tries to transform any interface type into proper value with env expansion. It travers maps and
// transformInterface tries to transform any interface type into proper value with env expansion. It travers maps and
// slices and the actual interpolation is done on all simple string values in the structure. It returns a copy of any
// map or slice value instead of modifying them in place.
func tranformInterface(i interface{}) interface{} {
// map or slice value instead of modifying them in place and also return value without interpolation but with converted
// type as a second value.
func transformInterface(i interface{}) (interface{}, interface{}) {
	typeOf := reflect.TypeOf(i)

	if typeOf == nil {
		return nil
		return nil, nil
	}

	switch typeOf.Kind() {
@@ -170,36 +174,43 @@ func tranformInterface(i interface{}) interface{} {
		return interpolateValue(i.(string))
	default:
		// Was int, float or some other value that we do not need to do any transform on.
		return i
		return i, i
	}
}

func transformSlice(i []interface{}) interface{} {
	var transformed []interface{}
func transformSlice(i []interface{}) (interface{}, interface{}) {
	var transformedSlice []interface{}
	var rawSlice []interface{}
	for _, val := range i {
		transformed = append(transformed, tranformInterface(val))
		transformed, raw := transformInterface(val)
		transformedSlice = append(transformedSlice, transformed)
		rawSlice = append(rawSlice, raw)
	}
	return transformed
	return transformedSlice, rawSlice
}

func transformMap(i map[interface{}]interface{}) interface{} {
	transformed := make(map[interface{}]interface{})
func transformMap(i map[interface{}]interface{}) (interface{}, interface{}) {
	transformed := make(map[string]interface{})
	raw := make(map[string]interface{})
	for key, val := range i {
		transformed[key] = tranformInterface(val)
		stringKey, ok := key.(string)
		if ok {
			transformed[stringKey], raw[stringKey] = transformInterface(val)
		}
	}
	return transformed
	return transformed, raw
}

// interpolateValue returns final value after interpolation. At the moment only env var interpolation is done
// here but in the future something like interpolation from file could be also done here.
// For a literal '$', '$$' can be used to avoid interpolation.
func interpolateValue(val string) string {
func interpolateValue(val string) (string, string) {
	parts := strings.Split(val, "$$")
	interpolated := make([]string, len(parts))
	for i, v := range parts {
		interpolated[i] = os.ExpandEnv(v)
	}
	return strings.Join(interpolated, "$")
	return strings.Join(interpolated, "$"), val
}

type interpolated struct {
@@ -210,11 +221,13 @@ type interpolated struct {
// getInterpolated unmarshals the value as string and runs interpolation on it. It is the responsibility of each
// value type to convert this string value to appropriate type.
func getInterpolated(unmarshal func(interface{}) error) (*interpolated, error) {
	var raw string
	err := unmarshal(&raw)
	var veryRaw string
	err := unmarshal(&veryRaw)
	if err != nil {
		return &interpolated{}, err
	}
	value := interpolateValue(raw)
	// We get new raw value here which can have a bit different type, as yaml types nested maps as
	// map[interface{}]interface and we want it to be map[string]interface{}
	value, raw := interpolateValue(veryRaw)
	return &interpolated{raw: raw, value: value}, nil
}

@@ -143,26 +143,26 @@ func TestValues(t *testing.T) {
			`
			unmarshalingTest(doc, d)

			type anyMap = map[interface{}]interface{}
			So(d.Val.Value(), ShouldResemble, map[string]interface{}{
			type stringMap = map[string]interface{}
			So(d.Val.Value(), ShouldResemble, stringMap{
				"one": 1,
				"two": "test",
				"three": []interface{}{
					1,
					"two",
					anyMap{
						"three": anyMap{
					stringMap{
						"three": stringMap{
							"inside": "test",
						},
					},
					anyMap{
						"six": anyMap{
					stringMap{
						"six": stringMap{
							"empty": interface{}(nil),
						},
					},
				},
				"four": anyMap{
					"nested": anyMap{
				"four": stringMap{
					"nested": stringMap{
						"onemore": "1",
					},
				},
@@ -171,25 +171,25 @@ func TestValues(t *testing.T) {
				"anchored": "1",
			})

			So(d.Val.Raw, ShouldResemble, map[string]interface{}{
			So(d.Val.Raw, ShouldResemble, stringMap{
				"one": 1,
				"two": "$STRING",
				"three": []interface{}{
					1,
					"two",
					anyMap{
						"three": anyMap{
					stringMap{
						"three": stringMap{
							"inside": "$STRING",
						},
					},
					anyMap{
						"six": anyMap{
					stringMap{
						"six": stringMap{
							"empty": interface{}(nil),
						},
					},
				},
				"four": anyMap{
					"nested": anyMap{
				"four": stringMap{
					"nested": stringMap{
						"onemore": "$INT",
					},
				},

@@ -33,4 +33,5 @@ type renderFunc func(ctx context.Context, options Opts) (*RenderResult, error)

type Service interface {
	Render(ctx context.Context, opts Opts) (*RenderResult, error)
	RenderErrorImage(error error) (*RenderResult, error)
}

@@ -63,6 +63,9 @@ func (rs *RenderingService) renderViaPlugin(ctx context.Context, opts Opts) (*Re
		return nil, err
	}

	ctx, cancel := context.WithTimeout(ctx, opts.Timeout)
	defer cancel()

	rsp, err := rs.grpcPlugin.Render(ctx, &pluginModel.RenderRequest{
		Url:    rs.getURL(opts.Path),
		Width:  int32(opts.Width),

@@ -3,12 +3,11 @@ package rendering
import (
	"context"
	"fmt"
	plugin "github.com/hashicorp/go-plugin"
	"net/url"
	"os"
	"path/filepath"

	plugin "github.com/hashicorp/go-plugin"

	pluginModel "github.com/grafana/grafana-plugin-model/go/renderer"
	"github.com/grafana/grafana/pkg/infra/log"
	"github.com/grafana/grafana/pkg/middleware"
@@ -93,6 +92,14 @@ func (rs *RenderingService) Run(ctx context.Context) error {
	return err
}

func (rs *RenderingService) RenderErrorImage(err error) (*RenderResult, error) {
	imgUrl := "public/img/rendering_error.png"

	return &RenderResult{
		FilePath: filepath.Join(setting.HomePath, imgUrl),
	}, nil
}

func (rs *RenderingService) Render(ctx context.Context, opts Opts) (*RenderResult, error) {
	if rs.inProgressCount > opts.ConcurrentLimit {
		return &RenderResult{

@@ -334,13 +334,13 @@ func DeleteDashboard(cmd *models.DeleteDashboardCommand) error {

		for _, id := range dashIds {
			if err := deleteAlertDefinition(id.Id, sess); err != nil {
				return nil
				return err
			}
		}
	}

	if err := deleteAlertDefinition(dashboard.Id, sess); err != nil {
		return nil
		return err
	}

	for _, sql := range deletes {

@@ -95,12 +95,12 @@ func GetSystemStats(query *m.GetSystemStatsQuery) error {
func roleCounterSQL(role, alias string) string {
	return `
		(
			SELECT COUNT(*)
			SELECT COUNT(DISTINCT u.id)
			FROM ` + dialect.Quote("user") + ` as u, org_user
			WHERE ( org_user.user_id=u.id AND org_user.role='` + role + `' )
		) as ` + alias + `,
		(
			SELECT COUNT(*)
			SELECT COUNT(DISTINCT u.id)
			FROM ` + dialect.Quote("user") + ` as u, org_user
			WHERE u.last_seen_at>? AND ( org_user.user_id=u.id AND org_user.role='` + role + `' )
		) as active_` + alias

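The stats change above swaps `COUNT(*)` for `COUNT(DISTINCT u.id)`: a user who holds the same role in several orgs produces one joined `user`/`org_user` row per membership, so counting rows over-reports users. A hedged sketch of that difference over hypothetical in-memory join rows (the `orgUser` type and sample data are illustrative, not from the source):

```go
package main

import "fmt"

// orgUser is a hypothetical flattened join row of "user" and org_user,
// mirroring the shape of the query above.
type orgUser struct {
	userID int64
	role   string
}

// countRows mimics COUNT(*): every matching join row is counted.
func countRows(rows []orgUser, role string) int {
	n := 0
	for _, r := range rows {
		if r.role == role {
			n++
		}
	}
	return n
}

// countDistinct mimics COUNT(DISTINCT u.id): each user counts at most once.
func countDistinct(rows []orgUser, role string) int {
	seen := map[int64]bool{}
	for _, r := range rows {
		if r.role == role {
			seen[r.userID] = true
		}
	}
	return len(seen)
}

func main() {
	// User 1 is an Admin in two orgs; the row count reports 2 admins,
	// the distinct count reports 1.
	rows := []orgUser{{1, "Admin"}, {1, "Admin"}, {2, "Viewer"}}
	fmt.Println(countRows(rows, "Admin"), countDistinct(rows, "Admin")) // 2 1
}
```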
@@ -444,7 +444,7 @@ func evalConfigValues(file *ini.File) {
	}
}

func loadSpecifedConfigFile(configFile string, masterFile *ini.File) error {
func loadSpecifiedConfigFile(configFile string, masterFile *ini.File) error {
	if configFile == "" {
		configFile = filepath.Join(HomePath, CustomInitPath)
		// return without error if custom file does not exist
@@ -511,7 +511,7 @@ func (cfg *Cfg) loadConfiguration(args *CommandLineArgs) (*ini.File, error) {
	applyCommandLineDefaultProperties(commandLineProps, parsedFile)

	// load specified config file
	err = loadSpecifedConfigFile(args.Config, parsedFile)
	err = loadSpecifiedConfigFile(args.Config, parsedFile)
	if err != nil {
		err2 := cfg.initLogging(parsedFile)
		if err2 != nil {
@@ -1083,8 +1083,7 @@ func (cfg *Cfg) initLogging(file *ini.File) error {
		return err
	}
	cfg.LogsPath = makeAbsolute(logsPath, HomePath)
	log.ReadLoggingConfig(logModes, cfg.LogsPath, file)
	return nil
	return log.ReadLoggingConfig(logModes, cfg.LogsPath, file)
}

func (cfg *Cfg) LogConfigSources() {

@@ -24,7 +24,7 @@ func (e *CloudWatchExecutor) executeAnnotationQuery(ctx context.Context, queryCo
	namespace := parameters.Get("namespace").MustString("")
	metricName := parameters.Get("metricName").MustString("")
	dimensions := parameters.Get("dimensions").MustMap()
	statistics, extendedStatistics, err := parseStatistics(parameters)
	statistics, err := parseStatistics(parameters)
	if err != nil {
		return nil, err
	}
@@ -51,7 +51,7 @@ func (e *CloudWatchExecutor) executeAnnotationQuery(ctx context.Context, queryCo
		if err != nil {
			return nil, errors.New("Failed to call cloudwatch:DescribeAlarms")
		}
		alarmNames = filterAlarms(resp, namespace, metricName, dimensions, statistics, extendedStatistics, period)
		alarmNames = filterAlarms(resp, namespace, metricName, dimensions, statistics, period)
	} else {
		if region == "" || namespace == "" || metricName == "" || len(statistics) == 0 {
			return result, nil
@@ -82,22 +82,6 @@ func (e *CloudWatchExecutor) executeAnnotationQuery(ctx context.Context, queryCo
				alarmNames = append(alarmNames, alarm.AlarmName)
			}
		}
		for _, s := range extendedStatistics {
			params := &cloudwatch.DescribeAlarmsForMetricInput{
				Namespace:         aws.String(namespace),
				MetricName:        aws.String(metricName),
				Dimensions:        qd,
				ExtendedStatistic: aws.String(s),
				Period:            aws.Int64(period),
			}
			resp, err := svc.DescribeAlarmsForMetric(params)
			if err != nil {
				return nil, errors.New("Failed to call cloudwatch:DescribeAlarmsForMetric")
			}
			for _, alarm := range resp.MetricAlarms {
				alarmNames = append(alarmNames, alarm.AlarmName)
			}
		}
	}

	startTime, err := queryContext.TimeRange.ParseFrom()
@@ -158,7 +142,7 @@ func transformAnnotationToTable(data []map[string]string, result *tsdb.QueryResu
	result.Meta.Set("rowCount", len(data))
}

func filterAlarms(alarms *cloudwatch.DescribeAlarmsOutput, namespace string, metricName string, dimensions map[string]interface{}, statistics []string, extendedStatistics []string, period int64) []*string {
func filterAlarms(alarms *cloudwatch.DescribeAlarmsOutput, namespace string, metricName string, dimensions map[string]interface{}, statistics []string, period int64) []*string {
	alarmNames := make([]*string, 0)

	for _, alarm := range alarms.MetricAlarms {
@@ -197,18 +181,6 @@ func filterAlarms(alarms *cloudwatch.DescribeAlarmsOutput, namespace string, met
			}
		}

		if len(extendedStatistics) != 0 {
			found := false
			for _, s := range extendedStatistics {
				if *alarm.Statistic == s {
					found = true
				}
			}
			if !found {
				continue
			}
		}

		if period != 0 && *alarm.Period != period {
			continue
		}

@@ -2,18 +2,13 @@ package cloudwatch

import (
	"context"
	"fmt"
	"regexp"
	"strconv"
	"strings"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
	"github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi/resourcegroupstaggingapiiface"
	"github.com/grafana/grafana/pkg/infra/log"
	"github.com/grafana/grafana/pkg/models"
	"github.com/grafana/grafana/pkg/tsdb"
	"golang.org/x/sync/errgroup"
)

type CloudWatchExecutor struct {
@@ -38,21 +33,13 @@ func NewCloudWatchExecutor(dsInfo *models.DataSource) (tsdb.TsdbQueryEndpoint, e
}

var (
	plog               log.Logger
	standardStatistics map[string]bool
	aliasFormat        *regexp.Regexp
	plog        log.Logger
	aliasFormat *regexp.Regexp
)

func init() {
	plog = log.New("tsdb.cloudwatch")
	tsdb.RegisterTsdbQueryEndpoint("cloudwatch", NewCloudWatchExecutor)
	standardStatistics = map[string]bool{
		"Average":     true,
		"Maximum":     true,
		"Minimum":     true,
		"Sum":         true,
		"SampleCount": true,
	}
	aliasFormat = regexp.MustCompile(`\{\{\s*(.+?)\s*\}\}`)
}

@@ -75,162 +62,3 @@ func (e *CloudWatchExecutor) Query(ctx context.Context, dsInfo *models.DataSourc

	return result, err
}

func (e *CloudWatchExecutor) executeTimeSeriesQuery(ctx context.Context, queryContext *tsdb.TsdbQuery) (*tsdb.Response, error) {
	results := &tsdb.Response{
		Results: make(map[string]*tsdb.QueryResult),
	}
	resultChan := make(chan *tsdb.QueryResult, len(queryContext.Queries))

	eg, ectx := errgroup.WithContext(ctx)

	getMetricDataQueries := make(map[string]map[string]*CloudWatchQuery)
	for i, model := range queryContext.Queries {
		queryType := model.Model.Get("type").MustString()
		if queryType != "timeSeriesQuery" && queryType != "" {
			continue
		}

		RefId := queryContext.Queries[i].RefId
		query, err := parseQuery(queryContext.Queries[i].Model)
		if err != nil {
			results.Results[RefId] = &tsdb.QueryResult{
				Error: err,
			}
			return results, nil
		}
		query.RefId = RefId

		if query.Id != "" {
			if _, ok := getMetricDataQueries[query.Region]; !ok {
				getMetricDataQueries[query.Region] = make(map[string]*CloudWatchQuery)
			}
			getMetricDataQueries[query.Region][query.Id] = query
			continue
		}

		if query.Id == "" && query.Expression != "" {
			results.Results[query.RefId] = &tsdb.QueryResult{
				Error: fmt.Errorf("Invalid query: id should be set if using expression"),
			}
			return results, nil
		}

		eg.Go(func() error {
			defer func() {
				if err := recover(); err != nil {
					plog.Error("Execute Query Panic", "error", err, "stack", log.Stack(1))
					if theErr, ok := err.(error); ok {
						resultChan <- &tsdb.QueryResult{
							RefId: query.RefId,
							Error: theErr,
						}
					}
				}
			}()

			queryRes, err := e.executeQuery(ectx, query, queryContext)
			if ae, ok := err.(awserr.Error); ok && ae.Code() == "500" {
				return err
			}
			if err != nil {
				resultChan <- &tsdb.QueryResult{
					RefId: query.RefId,
					Error: err,
				}
				return nil
			}
			resultChan <- queryRes
			return nil
		})
	}

	if len(getMetricDataQueries) > 0 {
		for region, getMetricDataQuery := range getMetricDataQueries {
			q := getMetricDataQuery
			eg.Go(func() error {
				defer func() {
					if err := recover(); err != nil {
						plog.Error("Execute Get Metric Data Query Panic", "error", err, "stack", log.Stack(1))
						if theErr, ok := err.(error); ok {
							resultChan <- &tsdb.QueryResult{
								Error: theErr,
							}
						}
					}
				}()

				queryResponses, err := e.executeGetMetricDataQuery(ectx, region, q, queryContext)
				if ae, ok := err.(awserr.Error); ok && ae.Code() == "500" {
					return err
				}
				for _, queryRes := range queryResponses {
					if err != nil {
						queryRes.Error = err
					}
					resultChan <- queryRes
				}
				return nil
			})
		}
	}

	if err := eg.Wait(); err != nil {
		return nil, err
	}
	close(resultChan)
	for result := range resultChan {
		results.Results[result.RefId] = result
	}

	return results, nil
}

func formatAlias(query *CloudWatchQuery, stat string, dimensions map[string]string, label string) string {
	region := query.Region
	namespace := query.Namespace
	metricName := query.MetricName
	period := strconv.Itoa(query.Period)
	if len(query.Id) > 0 && len(query.Expression) > 0 {
		if strings.Index(query.Expression, "SEARCH(") == 0 {
			pIndex := strings.LastIndex(query.Expression, ",")
			period = strings.Trim(query.Expression[pIndex+1:], " )")
			sIndex := strings.LastIndex(query.Expression[:pIndex], ",")
			stat = strings.Trim(query.Expression[sIndex+1:pIndex], " '")
		} else if len(query.Alias) > 0 {
			// expand by Alias
		} else {
			return query.Id
		}
	}

	data := map[string]string{}
	data["region"] = region
	data["namespace"] = namespace
	data["metric"] = metricName
	data["stat"] = stat
	data["period"] = period
	if len(label) != 0 {
		data["label"] = label
	}
	for k, v := range dimensions {
		data[k] = v
	}

	result := aliasFormat.ReplaceAllFunc([]byte(query.Alias), func(in []byte) []byte {
		labelName := strings.Replace(string(in), "{{", "", 1)
		labelName = strings.Replace(labelName, "}}", "", 1)
		labelName = strings.TrimSpace(labelName)
		if val, exists := data[labelName]; exists {
			return []byte(val)
		}

		return in
	})

	if string(result) == "" {
		return metricName + "_" + stat
	}

	return string(result)
}

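The removed `executeTimeSeriesQuery` above fans queries out to goroutines and collects results over a channel buffered to the number of queries, so no worker can block on send once the group finishes. A hedged, dependency-free sketch of that fan-out/collect shape (using `sync.WaitGroup` from the standard library in place of `errgroup`, and a stand-in `result` type instead of `tsdb.QueryResult`):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// result stands in for tsdb.QueryResult; only the field the pattern needs.
type result struct {
	refID string
}

// runQueries launches one goroutine per query and collects results over a
// buffered channel sized to len(refIDs), mirroring the pattern above: the
// buffer guarantees every send completes without a dedicated receiver, and
// the channel is closed only after all workers are done.
func runQueries(refIDs []string) map[string]result {
	resultChan := make(chan result, len(refIDs))
	var wg sync.WaitGroup
	for _, id := range refIDs {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			resultChan <- result{refID: id} // a real worker would run the query here
		}(id)
	}
	wg.Wait()
	close(resultChan)

	results := make(map[string]result)
	for r := range resultChan {
		results[r.refID] = r
	}
	return results
}

func main() {
	res := runQueries([]string{"A", "B", "C"})
	ids := make([]string, 0, len(res))
	for id := range res {
		ids = append(ids, id)
	}
	sort.Strings(ids)
	fmt.Println(ids) // [A B C]
}
```

The original uses `errgroup` so a worker returning a non-nil error (here, an AWS 500) cancels the shared context and fails the whole batch; this sketch omits that error path.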
60
pkg/tsdb/cloudwatch/cloudwatch_query.go
Normal file
@@ -0,0 +1,60 @@
package cloudwatch

import (
	"strings"
)

type cloudWatchQuery struct {
	RefId                   string
	Region                  string
	Id                      string
	Namespace               string
	MetricName              string
	Stats                   string
	Expression              string
	ReturnData              bool
	Dimensions              map[string][]string
	Period                  int
	Alias                   string
	MatchExact              bool
	UsedExpression          string
	RequestExceededMaxLimit bool
}

func (q *cloudWatchQuery) isMathExpression() bool {
	return q.Expression != "" && !q.isUserDefinedSearchExpression()
}

func (q *cloudWatchQuery) isSearchExpression() bool {
	return q.isUserDefinedSearchExpression() || q.isInferredSearchExpression()
}

func (q *cloudWatchQuery) isUserDefinedSearchExpression() bool {
	return strings.Contains(q.Expression, "SEARCH(")
}

func (q *cloudWatchQuery) isInferredSearchExpression() bool {
	if len(q.Dimensions) == 0 {
		return !q.MatchExact
	}

	if !q.MatchExact {
		return true
	}

	for _, values := range q.Dimensions {
		if len(values) > 1 {
			return true
		}
		for _, v := range values {
			if v == "*" {
				return true
			}
		}
	}
	return false
}

func (q *cloudWatchQuery) isMetricStat() bool {
	return !q.isSearchExpression() && !q.isMathExpression()
}

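The classification rules in `cloudwatch_query.go` above can be exercised standalone. In this hedged sketch, `query` re-declares only the fields the two inference methods read (the real `cloudWatchQuery` has more), so the decision logic can be run without the rest of the plugin:

```go
package main

import (
	"fmt"
	"strings"
)

// query mirrors just the fields cloudWatchQuery's classification methods use.
type query struct {
	Expression string
	Dimensions map[string][]string
	MatchExact bool
}

func (q *query) isUserDefinedSearchExpression() bool {
	return strings.Contains(q.Expression, "SEARCH(")
}

// isInferredSearchExpression follows the rules above: with no dimensions the
// answer is !MatchExact; MatchExact=false always infers a search; otherwise a
// multi-valued or wildcard ("*") dimension value flips the query to a search.
func (q *query) isInferredSearchExpression() bool {
	if len(q.Dimensions) == 0 {
		return !q.MatchExact
	}
	if !q.MatchExact {
		return true
	}
	for _, values := range q.Dimensions {
		if len(values) > 1 {
			return true
		}
		for _, v := range values {
			if v == "*" {
				return true
			}
		}
	}
	return false
}

func main() {
	exact := &query{MatchExact: true, Dimensions: map[string][]string{"InstanceId": {"i-12345678"}}}
	wild := &query{MatchExact: true, Dimensions: map[string][]string{"InstanceId": {"*"}}}
	fmt.Println(exact.isInferredSearchExpression(), wild.isInferredSearchExpression()) // false true
}
```

A query that is neither kind of search expression and has no expression at all falls through to a plain metric stat, matching `isMetricStat` above.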
169
pkg/tsdb/cloudwatch/cloudwatch_query_test.go
Normal file
@@ -0,0 +1,169 @@
package cloudwatch

import (
	"testing"

	. "github.com/smartystreets/goconvey/convey"
)

func TestCloudWatchQuery(t *testing.T) {
	Convey("TestCloudWatchQuery", t, func() {
		Convey("and SEARCH(someexpression) was specified in the query editor", func() {
			query := &cloudWatchQuery{
				RefId:      "A",
				Region:     "us-east-1",
				Expression: "SEARCH(someexpression)",
				Stats:      "Average",
				Period:     300,
				Id:         "id1",
			}

			Convey("it is a search expression", func() {
				So(query.isSearchExpression(), ShouldBeTrue)
			})

			Convey("it is not a math expression", func() {
				So(query.isMathExpression(), ShouldBeFalse)
			})
		})

		Convey("and no expression, no multi dimension key values and no * was used", func() {
			query := &cloudWatchQuery{
				RefId:      "A",
				Region:     "us-east-1",
				Expression: "",
				Stats:      "Average",
				Period:     300,
				Id:         "id1",
				MatchExact: true,
				Dimensions: map[string][]string{
					"InstanceId": {"i-12345678"},
				},
			}

			Convey("it is not a search expression", func() {
				So(query.isSearchExpression(), ShouldBeFalse)
			})

			Convey("it is not a math expression", func() {
				So(query.isMathExpression(), ShouldBeFalse)
			})
		})

		Convey("and no expression but multi dimension key values exist", func() {
			query := &cloudWatchQuery{
				RefId:      "A",
				Region:     "us-east-1",
				Expression: "",
				Stats:      "Average",
				Period:     300,
				Id:         "id1",
				Dimensions: map[string][]string{
					"InstanceId": {"i-12345678", "i-34562312"},
				},
			}

			Convey("it is a search expression", func() {
				So(query.isSearchExpression(), ShouldBeTrue)
			})

			Convey("it is not a math expression", func() {
				So(query.isMathExpression(), ShouldBeFalse)
			})
		})

		Convey("and no expression but dimension values have *", func() {
			query := &cloudWatchQuery{
				RefId:      "A",
				Region:     "us-east-1",
				Expression: "",
				Stats:      "Average",
				Period:     300,
				Id:         "id1",
				Dimensions: map[string][]string{
					"InstanceId":   {"i-12345678", "*"},
					"InstanceType": {"abc", "def"},
				},
			}

			Convey("it is a search expression", func() {
				So(query.isSearchExpression(), ShouldBeTrue)
			})

			Convey("it is not a math expression", func() {
				So(query.isMathExpression(), ShouldBeFalse)
			})
		})

		Convey("and no dimensions were added", func() {
			query := &cloudWatchQuery{
				RefId:      "A",
				Region:     "us-east-1",
				Expression: "",
				Stats:      "Average",
				Period:     300,
				Id:         "id1",
				MatchExact: false,
				Dimensions: make(map[string][]string),
			}
			Convey("and match exact is false", func() {
				query.MatchExact = false
				Convey("it is a search expression", func() {
					So(query.isSearchExpression(), ShouldBeTrue)
				})

				Convey("it is not a math expression", func() {
					So(query.isMathExpression(), ShouldBeFalse)
				})

				Convey("it is not a metric stat", func() {
					So(query.isMetricStat(), ShouldBeFalse)
				})

			})

			Convey("and match exact is true", func() {
				query.MatchExact = true
				Convey("it is not a search expression", func() {
					So(query.isSearchExpression(), ShouldBeFalse)
				})

				Convey("it is not a math expression", func() {
					So(query.isMathExpression(), ShouldBeFalse)
				})

				Convey("it is a metric stat", func() {
					So(query.isMetricStat(), ShouldBeTrue)
				})

			})
		})

		Convey("and match exact is false", func() {
			query := &cloudWatchQuery{
				RefId:      "A",
				Region:     "us-east-1",
				Expression: "",
				Stats:      "Average",
				Period:     300,
				Id:         "id1",
				MatchExact: false,
				Dimensions: map[string][]string{
					"InstanceId": {"i-12345678"},
				},
			}

			Convey("it is a search expression", func() {
				So(query.isSearchExpression(), ShouldBeTrue)
			})

			Convey("it is not a math expression", func() {
				So(query.isMathExpression(), ShouldBeFalse)
			})

			Convey("it is not a metric stat", func() {
				So(query.isMetricStat(), ShouldBeFalse)
			})
		})
	})
}