Compare commits


6 Commits

Author · SHA1 · Message · Date

Torkel Ödegaard · 349f3eb931 · updated version · 2017-03-22 11:06:41 +01:00
Trent White · d720fcaaeb · orange grafana.com seemed too much. making it gray · 2017-03-22 10:32:23 +01:00
Trent White · de3cba8262 · format css · 2017-03-22 10:19:26 +01:00
Trent White · 9faadfaed7 · add dashboard and plugin icons to plugin list and dash search · 2017-03-22 10:15:59 +01:00
Daniel Lee · ea70389086 · chore: update all grafana.org urls to .com · 2017-03-21 15:38:29 +01:00
Daniel Lee · e602985710 · profile: locks login fields if disable_login_form · 2017-03-21 14:57:53 +01:00

    If the auth config variable, disable_login_form, is set to true, then
    the username and email fields are set to read-only on the profile page.

    The reason for this is so that the user does not lock themselves out by
    changing their email address or username, or create a new user by
    changing both.

    ref #7810
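The behavior described in commit e602985710 hinges on one config key; a minimal sketch of the relevant grafana.ini section, using the key name that appears in the defaults.ini diff further down:

```ini
[auth]
# Hide the login form, e.g. when sign-in is handled entirely by OAuth.
# With this set, the profile page renders the username and email fields
# read-only so a user cannot lock themselves out by editing them.
disable_login_form = true
```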
2024 changed files with 263,253 additions and 153,422 deletions

.bowerrc (new file, +3)

@@ -0,0 +1,3 @@
{
"directory": "public/vendor/"
}


@@ -1,7 +1,7 @@
[run]
init_cmds = [
["go", "build", "-o", "./bin/grafana-server", "./pkg/cmd/grafana-server"],
["./bin/grafana-server", "cfg:app_mode=development"]
["./bin/grafana-server"]
]
watch_all = true
watch_dirs = [
@@ -9,9 +9,9 @@ watch_dirs = [
"$WORKDIR/public/views",
"$WORKDIR/conf",
]
watch_exts = [".go", ".ini", ".toml"]
watch_exts = [".go", ".ini", ".toml", ".html"]
build_delay = 1500
cmds = [
["go", "build", "-o", "./bin/grafana-server", "./pkg/cmd/grafana-server"],
["./bin/grafana-server", "cfg:app_mode=development"]
["./bin/grafana-server"]
]
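This watch configuration is consumed by the bra dev tool; the README changes later in this diff reference the same workflow. A sketch of the intended use:

```bash
go get github.com/Unknwon/bra   # install the watcher (same command as in the README)
bra run                         # rebuild and restart ./bin/grafana-server on .go/.ini/.toml changes
```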


@@ -1,6 +1,13 @@
# http://editorconfig.org
root = true
[*.go]
indent_style = tab
indent_size = 2
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[*]
indent_style = space
indent_size = 2
@@ -8,12 +15,5 @@ charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[*.go]
indent_style = tab
indent_size = 4
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[*.md]
trim_trailing_whitespace = false

.floo (new file, +3)

@@ -0,0 +1,3 @@
{
"url": "https://floobits.com/raintank/grafana"
}

.flooignore (new file, +12)

@@ -0,0 +1,12 @@
#*
*.o
*.pyc
*.pyo
*~
extern/
node_modules/
tmp/
data/
vendor/
public_gen/
dist/


@@ -1,10 +1,5 @@
Read before posting:
- Questions should be posted to https://community.grafana.com. Please search there and here on GitHub for similar issues before creating a new issue.
- Check out the FAQ: https://community.grafana.com/c/howto/faq
- Check out how to troubleshoot metric query issues: https://community.grafana.com/t/how-to-troubleshoot-metric-query-issues/50
Please prefix your title with [Bug] or [Feature request].
Please prefix your title with [Bug] or [Feature request]
For question please check [Support Options](http://grafana.org/support/). **Do not** open a github issue
Please include this information:
- What Grafana version are you using?
@@ -13,6 +8,11 @@ Please include this information:
- What did you do?
- What was the expected result?
- What happened instead?
- If related to metric query / data viz:
- Include raw network request & response: get these by opening Chrome Dev Tools (F12, Ctrl+Shift+I on Windows, Cmd+Opt+I on Mac) and going to the network tab.
**IMPORTANT**
If it relates to *metric data viz*:
- An image or text representation of your metric query
- The raw query and response for the network request (check this in the Chrome dev tools network tab, where you can see metric requests and other requests; please include the request body and response)
If it relates to *alerting*:
- An image of the test execution data fully expanded.

.gitignore (vendored, 17 lines changed)

@@ -4,19 +4,18 @@ coverage/
.aws-config.json
awsconfig
/dist
/public/build
/public/views/index.html
/emails/dist
/public_gen
/public/vendor/npm
/tmp
vendor/phantomjs/phantomjs
vendor/phantomjs/phantomjs.exe
docs/AWS_S3_BUCKET
docs/GIT_BRANCH
docs/VERSION
docs/GITCOMMIT
docs/changed-files
docs/changed-files
# locally required config files
public/css/*.min.css
@@ -26,8 +25,6 @@ public/css/*.min.css
*.swp
.idea/
*.iml
*.tmp
.DS_Store
.vscode/
/data/*
@@ -41,14 +38,4 @@ profile.cov
.notouch
/pkg/cmd/grafana-cli/grafana-cli
/pkg/cmd/grafana-server/grafana-server
/pkg/cmd/grafana-server/debug
/examples/*/dist
/packaging/**/*.rpm
/packaging/**/*.deb
/vendor/**/*.py
/vendor/**/*.xml
/vendor/**/*.yml
/vendor/**/*_test.go
/vendor/**/.editorconfig
/vendor/**/appengine*

.jsfmtrc (new file, +21)

@@ -0,0 +1,21 @@
{
"preset" : "default",
"lineBreak" : {
"before" : {
"VariableDeclarationWithoutInit" : 0,
},
"after": {
"AssignmentOperator": -1,
"ArgumentListArrayExpression": ">=1"
}
},
"whiteSpace" : {
"before" : {
},
"after" : {
}
}
}


@@ -1,6 +1,6 @@
{
"browser": true,
"esversion": 6,
"bitwise":false,
"curly": true,
"eqnull": true,


@@ -1,265 +1,8 @@
# 5.0.0 (unreleased)
# 4.3.0 (unreleased)
### WIP (in develop branch currently as it's unstable or unfinished)
- Dashboard folders
- User groups
- Dashboard permissions (on folder & dashboard level), permissions can be assigned to groups or individual users
- UX changes to nav & side menu
- New dashboard grid layout system
## Minor Enhancements
# 4.6.0-beta2 (2017-10-17)
## Fixes
* **ColorPicker**: Fix for color picker not showing [#9549](https://github.com/grafana/grafana/issues/9549)
# 4.6.0-beta1 (2017-10-13)
## New Features
* **GCS**: Adds support for Google Cloud Storage [#8370](https://github.com/grafana/grafana/issues/8370) thx [@chuhlomin](https://github.com/chuhlomin)
* **Prometheus**: Adds /metrics endpoint for exposing Grafana metrics. [#9187](https://github.com/grafana/grafana/pull/9187)
* **Graph**: Add support for local formatting in axis. [#1395](https://github.com/grafana/grafana/issues/1395), thx [@m0nhawk](https://github.com/m0nhawk)
* **Jaeger**: Add support for open tracing using jaeger in Grafana. [#9213](https://github.com/grafana/grafana/pull/9213)
* **Unit types**: New date & time unit types added, useful in singlestat to show dates & times. [#3678](https://github.com/grafana/grafana/issues/3678), [#6710](https://github.com/grafana/grafana/issues/6710), [#2764](https://github.com/grafana/grafana/issues/2764)
* **CLI**: Make it possible to install plugins from any url [#5873](https://github.com/grafana/grafana/issues/5873)
* **Prometheus**: Add support for instant queries [#5765](https://github.com/grafana/grafana/issues/5765), thx [@mtanda](https://github.com/mtanda)
* **Cloudwatch**: Add support for alerting using the cloudwatch datasource [#8050](https://github.com/grafana/grafana/pull/8050), thx [@mtanda](https://github.com/mtanda)
* **Pagerduty**: Include triggering series in pagerduty notification [#8479](https://github.com/grafana/grafana/issues/8479), thx [@rickymoorhouse](https://github.com/rickymoorhouse)
* **Timezone**: Time ranges like Today & Yesterday now work correctly when timezone setting is set to UTC [#8916](https://github.com/grafana/grafana/issues/8916), thx [@ctide](https://github.com/ctide)
* **Prometheus**: Align $__interval with the step parameters. [#9226](https://github.com/grafana/grafana/pull/9226), thx [@alin-amana](https://github.com/alin-amana)
* **Prometheus**: Autocomplete for label name and label value [#9208](https://github.com/grafana/grafana/pull/9208), thx [@mtanda](https://github.com/mtanda)
* **Postgres**: New Postgres data source [#9209](https://github.com/grafana/grafana/pull/9209), thx [@svenklemm](https://github.com/svenklemm)
* **Datasources**: Make datasource HTTP requests verify TLS by default. closes [#9371](https://github.com/grafana/grafana/issues/9371), [#5334](https://github.com/grafana/grafana/issues/5334), [#8812](https://github.com/grafana/grafana/issues/8812), thx [@mattbostock](https://github.com/mattbostock)
* **OAuth**: Verify TLS during OAuth callback [#9373](https://github.com/grafana/grafana/issues/9373), thx [@mattbostock](https://github.com/mattbostock)
## Minor
* **SMTP**: Make it possible to set specific EHLO for smtp client. [#9319](https://github.com/grafana/grafana/issues/9319)
* **Dataproxy**: Allow Grafana to renegotiate TLS connection [#9250](https://github.com/grafana/grafana/issues/9250)
* **HTTP**: set net.Dialer.DualStack to true for all http clients [#9367](https://github.com/grafana/grafana/pull/9367)
* **Alerting**: Add diff and percent diff as series reducers [#9386](https://github.com/grafana/grafana/pull/9386), thx [@shanhuhai5739](https://github.com/shanhuhai5739)
* **Slack**: Allow images to be uploaded to slack when Token is present [#7175](https://github.com/grafana/grafana/issues/7175), thx [@xginn8](https://github.com/xginn8)
* **Opsgenie**: Use their latest API instead of old version [#9399](https://github.com/grafana/grafana/pull/9399), thx [@cglrkn](https://github.com/cglrkn)
* **Table**: Add support for displaying the timestamp with milliseconds [#9429](https://github.com/grafana/grafana/pull/9429), thx [@s1061123](https://github.com/s1061123)
* **Hipchat**: Add metrics, message and image to hipchat notifications [#9110](https://github.com/grafana/grafana/issues/9110), thx [@eloo](https://github.com/eloo)
* **Kafka**: Add support for sending alert notifications to kafka [#7104](https://github.com/grafana/grafana/issues/7104), thx [@utkarshcmu](https://github.com/utkarshcmu)
* **Alerting**: add count_non_null as series reducer [#9516](https://github.com/grafana/grafana/issues/9516)
## Tech
* **Go**: Grafana is now built using golang 1.9
* **Webpack**: Changed from systemjs to webpack (see readme or building from source guide for new build instructions). Systemjs is still used to load plugins but now plugins can only import a limited set of dependencies. See [PLUGIN_DEV.md](https://github.com/grafana/grafana/blob/master/PLUGIN_DEV.md) for more details on how this can affect some plugins.
# 4.5.2 (2017-09-22)
## Fixes
* **Graphite**: Fix for issues with jsonData & graphiteVersion null errors [#9258](https://github.com/grafana/grafana/issues/9258)
* **Graphite**: Fix for Grafana internal metrics to Graphite sending NaN values [#9279](https://github.com/grafana/grafana/issues/9279)
* **HTTP API**: Fix for HEAD method requests [#9307](https://github.com/grafana/grafana/issues/9307)
* **Templating**: Fix for duplicate template variable queries when refresh is set to time range change [#9185](https://github.com/grafana/grafana/issues/9185)
* **Metrics**: dont write NaN values to graphite [#9279](https://github.com/grafana/grafana/issues/9279)
# 4.5.1 (2017-09-15)
## Fixes
* **MySQL**: Fixed issue with query editor not showing [#9247](https://github.com/grafana/grafana/issues/9247)
## Breaking changes
* **Metrics**: The metric structure for internal metrics about Grafana published to graphite has changed. This might break dashboards for internal metrics.
# 4.5.0 (2017-09-14)
## Fixes & Enhancements since beta1
* **Security**: Security fix for api vulnerability (in multiple org setups).
* **Shortcuts**: Adds shortcut for creating new dashboard [#8876](https://github.com/grafana/grafana/pull/8876) thx [@mtanda](https://github.com/mtanda)
* **Graph**: Right Y-Axis label position fixed [#9172](https://github.com/grafana/grafana/pull/9172)
* **General**: Improve rounding of time intervals [#9197](https://github.com/grafana/grafana/pull/9197), thx [@alin-amana](https://github.com/alin-amana)
# 4.5.0-beta1 (2017-09-05)
## New Features
* **Table panel**: Render cell values as links that can have an url template that uses variables from current table row. [#3754](https://github.com/grafana/grafana/issues/3754)
* **Elasticsearch**: Add ad hoc filters directly by clicking values in table panel [#8052](https://github.com/grafana/grafana/issues/8052).
* **MySQL**: New rich query editor with syntax highlighting
* **Prometheus**: New rich query editor with syntax highlighting, metric & range auto complete and integrated function docs. [#5117](https://github.com/grafana/grafana/issues/5117)
## Enhancements
* **GitHub OAuth**: Support for GitHub organizations with 100+ teams. [#8846](https://github.com/grafana/grafana/issues/8846), thx [@skwashd](https://github.com/skwashd)
* **Graphite**: Calls to Graphite api /metrics/find now include panel or dashboard time range (from & until) in most cases, [#8055](https://github.com/grafana/grafana/issues/8055)
* **Graphite**: Added new graphite 1.0 functions, available if you set version to 1.0.x in data source settings. New Functions: mapSeries, reduceSeries, isNonNull, groupByNodes, offsetToZero, grep, weightedAverage, removeEmptySeries, aggregateLine, averageOutsidePercentile, delay, exponentialMovingAverage, fallbackSeries, integralByInterval, interpolate, invert, linearRegression, movingMin, movingMax, movingSum, multiplySeriesWithWildcards, pow, powSeries, removeBetweenPercentile, squareRoot, timeSlice, closes [#8261](https://github.com/grafana/grafana/issues/8261)
* **Elasticsearch**: Ad-hoc filters now use query phrase match filters instead of term filters, works on non keyword/raw fields [#9095](https://github.com/grafana/grafana/issues/9095).
### Breaking change
* **InfluxDB/Elasticsearch**: The panel & data source option named "Group by time interval" is now named "Min time interval" and now always defines a lower limit for the auto group by time, without having to use the `>` prefix (that prefix still works). This should in theory have close to zero actual impact on existing dashboards. It does mean that if you used this setting to define a hard group by time interval of, say "1d", then if you zoomed to a wide enough time range the interval could increase above "1d", as the setting is now always considered a lower limit.
* **Elasticsearch**: Elasticsearch metric queries without date histogram now return table formatted data making table panel much easier to use for this use case. Should not break/change existing dashboards with stock panels but external panel plugins can be affected.
## Changes
* **InfluxDB**: Change time range filter for absolute time ranges to be inclusive instead of exclusive [#8319](https://github.com/grafana/grafana/issues/8319), thx [@Oxydros](https://github.com/Oxydros)
* **InfluxDB**: Added parentheses around tag filters in queries [#9131](https://github.com/grafana/grafana/pull/9131)
## Bug Fixes
* **Modals**: Maintain scroll position after opening/leaving modal [#8800](https://github.com/grafana/grafana/issues/8800)
* **Templating**: You cannot select data source variables as data source for other template variables [#7510](https://github.com/grafana/grafana/issues/7510)
* **MySQL/Postgres**: Fix for the max_idle_conn option default, which was wrongly set to zero. For this option zero does not mean unlimited but literally zero idle connections, which in practice largely disables connection pooling. Fixes [#8513](https://github.com/grafana/grafana/issues/8513) (see the sketch below)
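The distinction behind that fix maps directly onto Go's standard database/sql pool settings; a minimal illustrative sketch (not Grafana's actual code):

```go
package pool

import "database/sql"

// For SetMaxIdleConns, 0 means "keep no idle connections", which in
// practice disables pooling: every query pays for a fresh connection.
// For SetMaxOpenConns, 0 means "no limit"; the two zeros mean different things.
func configurePool(db *sql.DB) {
	db.SetMaxIdleConns(2) // keep a couple of connections warm
	db.SetMaxOpenConns(0) // unlimited open connections
}
```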
# 4.4.3 (2017-08-07)
## Bug Fixes
* **Search**: Fix for issue that caused search view to hide when you clicked starred or tags filters, fixes [#8981](https://github.com/grafana/grafana/issues/8981)
* **Modals**: ESC key now closes modal again, fixes [#8988](https://github.com/grafana/grafana/issues/8988), thx [@j-white](https://github.com/j-white)
# 4.4.2 (2017-08-01)
## Bug Fixes
* **GrafanaDB(mysql)**: Fix for dashboard_version.data column type, now changed to MEDIUMTEXT, fixes [#8813](https://github.com/grafana/grafana/issues/8813)
* **Dashboard(settings)**: Closing setting views using ESC key did not update url correctly, fixes [#8869](https://github.com/grafana/grafana/issues/8869)
* **InfluxDB**: Wrong username/password parameter name when using direct access, fixes [#8789](https://github.com/grafana/grafana/issues/8789)
* **Forms(TextArea)**: Bug fix for no scroll in text areas [#8797](https://github.com/grafana/grafana/issues/8797)
* **Png Render API**: Bug fix for timeout url parameter. It now works as it should. Default value was also increased from 30 to 60 seconds [#8710](https://github.com/grafana/grafana/issues/8710)
* **Search**: Fix for not being able to close search by clicking on right side of search result container, [#8848](https://github.com/grafana/grafana/issues/8848)
* **Cloudwatch**: Fix for using variables in templating metrics() query, [#8965](https://github.com/grafana/grafana/issues/8965)
## Changes
* **Settings(defaults)**: allow_sign_up default changed from true to false [#8743](https://github.com/grafana/grafana/issues/8743)
* **Settings(defaults)**: allow_org_create default changed from true to false
# 4.4.1 (2017-07-05)
## Bug Fixes
* **Migrations**: migration fails where dashboard.created_by is null [#8783](https://github.com/grafana/grafana/issues/8783)
# 4.4.0 (2017-07-04)
## New Features
**Dashboard History**: View dashboard version history, compare any two versions (summary & json diffs), restore to old version. This big feature
was contributed by **Walmart Labs**. Big thanks to them for this massive contribution!
Initial feature request: [#4638](https://github.com/grafana/grafana/issues/4638)
Pull Request: [#8472](https://github.com/grafana/grafana/pull/8472)
## Enhancements
* **Elasticsearch**: Added filter aggregation label [#8420](https://github.com/grafana/grafana/pull/8420), thx [@tianzk](https://github.com/tianzk)
* **Sensu**: Added option for source and handler [#8405](https://github.com/grafana/grafana/pull/8405), thx [@joemiller](https://github.com/joemiller)
* **CSV**: Configurable csv export datetime format [#8058](https://github.com/grafana/grafana/issues/8058), thx [@cederigo](https://github.com/cederigo)
* **Table Panel**: Column style that preserves formatting/indentation (like pre tag) [#6617](https://github.com/grafana/grafana/issues/6617)
* **DingDing**: Add DingDing Alert Notifier [#8473](https://github.com/grafana/grafana/pull/8473) thx [@jiamliang](https://github.com/jiamliang)
## Minor Enhancements
* **Elasticsearch**: Add option for result set size in raw_document [#3426](https://github.com/grafana/grafana/issues/3426) [#8527](https://github.com/grafana/grafana/pull/8527), thx [@mk-dhia](https://github.com/mk-dhia)
## Bug Fixes
* **Graph**: Bug fix for negative values in histogram mode [#8628](https://github.com/grafana/grafana/issues/8628)
# 4.3.2 (2017-05-31)
## Bug fixes
* **InfluxDB**: Fixed issue with query editor not showing ALIAS BY input field when in text editor mode [#8459](https://github.com/grafana/grafana/issues/8459)
* **Graph Log Scale**: Fixed issue with log scale going below x-axis [#8244](https://github.com/grafana/grafana/issues/8244)
* **Playlist**: Fixed dashboard play order issue [#7688](https://github.com/grafana/grafana/issues/7688)
* **Elasticsearch**: Fixed table query issue with ES 2.x [#8467](https://github.com/grafana/grafana/issues/8467), thx [@goldeelox](https://github.com/goldeelox)
## Changes
* **Lazy Loading Of Panels**: Panels are no longer loaded as they are scrolled into view, this was reverted due to a Chrome bug, might be reintroduced when Chrome fixes its JS blocking behavior on scroll. [#8500](https://github.com/grafana/grafana/issues/8500)
# 4.3.1 (2017-05-23)
## Bug fixes
* **S3 image upload**: Fixed image url issue for us-east-1 (us standard) region. If you were missing slack images for alert notifications this should fix it. [#8444](https://github.com/grafana/grafana/issues/8444)
# 4.3.0-stable (2017-05-23)
## Bug fixes
* **Gzip**: Fixed crash when gzip was enabled [#8380](https://github.com/grafana/grafana/issues/8380)
* **Graphite**: Fixed issue with Toggle edit mode in query editor [#8377](https://github.com/grafana/grafana/issues/8377)
* **Alerting**: Fixed issue with state history not showing query execution errors [#8412](https://github.com/grafana/grafana/issues/8412)
* **Alerting**: Fixed issue with missing state history events/annotations when using sqlite3 database [#7992](https://github.com/grafana/grafana/issues/7992)
* **Sqlite**: Fixed issue with database table locked when using sqlite3 database [#7992](https://github.com/grafana/grafana/issues/7992)
* **Alerting**: Fixed issue with annotations showing up in unsaved dashboards, new graph & alert panel. [#8361](https://github.com/grafana/grafana/issues/8361)
* **webdav**: Fixed http proxy env variable support for webdav image upload [#7922](https://github.com/grafana/grafana/issues/7922), thx [@berghauz](https://github.com/berghauz)
* **Prometheus**: Fixed issue with hiding query [#8413](https://github.com/grafana/grafana/issues/8413)
## Enhancements
* **VictorOps**: Now supports panel image & auto resolve [#8431](https://github.com/grafana/grafana/pull/8431), thx [@davidmscott](https://github.com/davidmscott)
* **Alerting**: Alert annotations now provide more info [#8421](https://github.com/grafana/grafana/pull/8421)
# 4.3.0-beta1 (2017-05-12)
## Enhancements
* **InfluxDB**: influxdb query builder support for ORDER BY and LIMIT (allows TOPN queries) [#6065](https://github.com/grafana/grafana/issues/6065) Support influxdb's SLIMIT Feature [#7232](https://github.com/grafana/grafana/issues/7232) thx [@thuck](https://github.com/thuck)
* **Panels**: Delay loading & Lazy load panels as they become visible (scrolled into view) [#5216](https://github.com/grafana/grafana/issues/5216) thx [@jifwin](https://github.com/jifwin)
* **Graph**: Support auto grid min/max when using log scale [#3090](https://github.com/grafana/grafana/issues/3090), thx [@bigbenhur](https://github.com/bigbenhur)
* **Graph**: Support for histograms [#600](https://github.com/grafana/grafana/issues/600)
* **Prometheus**: Support table response formats (column per label) [#6140](https://github.com/grafana/grafana/issues/6140), thx [@mtanda](https://github.com/mtanda)
* **Single Stat Panel**: support for non time series data [#6564](https://github.com/grafana/grafana/issues/6564)
* **Server**: Monitoring Grafana (health check endpoint) [#3302](https://github.com/grafana/grafana/issues/3302) (see the probe sketch after this list)
* **Heatmap**: Heatmap Panel [#7934](https://github.com/grafana/grafana/pull/7934)
* **Elasticsearch**: histogram aggregation [#3164](https://github.com/grafana/grafana/issues/3164)
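The health check endpoint above can be exercised with a plain HTTP probe; a minimal sketch, assuming a default local install on port 3000 and the conventional /api/health path:

```bash
# Exits non-zero when Grafana is down; prints basic status JSON when it is up.
curl -sf http://localhost:3000/api/health
```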
## Minor Enhancements
* **InfluxDB**: Small fix for the "glow" when focus the field for LIMIT and SLIMIT [#7799](https://github.com/grafana/grafana/pull/7799) thx [@thuck](https://github.com/thuck)
* **Prometheus**: Make Prometheus query field a textarea [#7663](https://github.com/grafana/grafana/issues/7663), thx [@hagen1778](https://github.com/hagen1778)
* **Prometheus**: Step parameter changed semantics to min step to reduce the load on Prometheus and rendering in browser [#8073](https://github.com/grafana/grafana/pull/8073), thx [@bobrik](https://github.com/bobrik)
* **Templating**: Should not be possible to create self-referencing (recursive) template variable definitions [#7614](https://github.com/grafana/grafana/issues/7614) thx [@thuck](https://github.com/thuck)
* **Cloudwatch**: Correctly obtain IAM roles within ECS container tasks [#7892](https://github.com/grafana/grafana/issues/7892) thx [@gomlgs](https://github.com/gomlgs)
* **Units**: New number format: Scientific notation [#7781](https://github.com/grafana/grafana/issues/7781) thx [@cadnce](https://github.com/cadnce)
* **Oauth**: Add common type for oauth authorization errors [#6428](https://github.com/grafana/grafana/issues/6428) thx [@amenzhinsky](https://github.com/amenzhinsky)
* **Templating**: Data source variable now supports multi value and panel repeats [#7030](https://github.com/grafana/grafana/issues/7030) thx [@mtanda](https://github.com/mtanda)
* **Telegram**: Telegram alert is not sending metric and legend. [#8110](https://github.com/grafana/grafana/issues/8110), thx [@bashgeek](https://github.com/bashgeek)
* **Graph**: Support dashed lines [#514](https://github.com/grafana/grafana/issues/514), thx [@smalik03](https://github.com/smalik03)
* **Table**: Support to change column header text [#3551](https://github.com/grafana/grafana/issues/3551)
* **Alerting**: Better error when SMTP is not configured [#8093](https://github.com/grafana/grafana/issues/8093)
* **Pushover**: Add an option to attach graph image link in Pushover notification [#8043](https://github.com/grafana/grafana/issues/8043) thx [@devkid](https://github.com/devkid)
* **WebDAV**: Allow to set different ImageBaseUrl for WebDAV upload and image link [#7914](https://github.com/grafana/grafana/issues/7914)
* **Panels**: type-ahead mixed datasource selection [#7697](https://github.com/grafana/grafana/issues/7697) thx [@mtanda](https://github.com/mtanda)
* **Security**: User enumeration problem [#7619](https://github.com/grafana/grafana/issues/7619)
* **InfluxDB**: Register new queries available in InfluxDB - Holt Winters [#5619](https://github.com/grafana/grafana/issues/5619) thx [@rikkuness](https://github.com/rikkuness)
* **Server**: Support listening on a UNIX socket [#4030](https://github.com/grafana/grafana/issues/4030), thx [@mitjaziv](https://github.com/mitjaziv)
* **Graph**: Support log scaling for values smaller than 1 [#5278](https://github.com/grafana/grafana/issues/5278)
* **InfluxDB**: Slow 'select measurement' rendering for InfluxDB [#2524](https://github.com/grafana/grafana/issues/2524), thx [@sbhenderson](https://github.com/sbhenderson)
* **Config**: Configurable signout menu activation [#7968](https://github.com/grafana/grafana/pull/7968), thx [@seuf](https://github.com/seuf)
## Fixes
* **Table Panel**: Fixed annotation display in table panel, [#8023](https://github.com/grafana/grafana/issues/8023)
* **Dashboard**: If refresh is blocked due to tab not visible, then refresh when it becomes visible [#8076](https://github.com/grafana/grafana/issues/8076) thanks [@SimenB](https://github.com/SimenB)
* **Snapshots**: Fixed problem with annotations & snapshots [#7659](https://github.com/grafana/grafana/issues/7659)
* **Graph**: MetricSegment loses type when value is an asterisk [#8277](https://github.com/grafana/grafana/issues/8277), thx [@Gordiychuk](https://github.com/Gordiychuk)
* **Alerting**: Alert notifications do not show charts when using a non public S3 bucket [#8250](https://github.com/grafana/grafana/issues/8250) thx [@rogerswingle](https://github.com/rogerswingle)
* **Graph**: 100% client CPU usage on red alert glow animation [#8222](https://github.com/grafana/grafana/issues/8222)
* **InfluxDB**: Templating: "All" query does match too much [#8165](https://github.com/grafana/grafana/issues/8165)
* **Dashboard**: Description tooltip is not fully displayed [#7970](https://github.com/grafana/grafana/issues/7970)
* **Proxy**: Redirect after switching Org does not obey sub path in root_url (using reverse proxy) [#8089](https://github.com/grafana/grafana/issues/8089)
* **Templating**: Restoration of ad-hoc variable from URL does not work correctly [#8056](https://github.com/grafana/grafana/issues/8056) thx [@tamayika](https://github.com/tamayika)
* **InfluxDB**: timeFilter cannot be used twice in alerts [#7969](https://github.com/grafana/grafana/issues/7969)
* **MySQL**: 4-byte UTF8 not supported when using MySQL database (allows Emojis) [#7958](https://github.com/grafana/grafana/issues/7958)
* **Alerting**: api/alerts and api/alert/:id hold previous data for "message" and "Message" field when field value is changed from "some string" to empty string. [#7927](https://github.com/grafana/grafana/issues/7927)
* **Graph**: Cannot add fill below to series override [#7916](https://github.com/grafana/grafana/issues/7916)
* **InfluxDB**: InfluxDB datasource test passes even if the database doesn't exist [#7864](https://github.com/grafana/grafana/issues/7864)
* **Prometheus**: Displaying Prometheus annotations is incredibly slow [#7750](https://github.com/grafana/grafana/issues/7750), thx [@mtanda](https://github.com/mtanda)
* **Graphite**: grafana generates empty find query to graphite -> 422 Unprocessable Entity [#7740](https://github.com/grafana/grafana/issues/7740)
* **Admin**: make organisation filter case insensitive [#8194](https://github.com/grafana/grafana/issues/8194), thx [@Alexander-N](https://github.com/Alexander-N)
## Changes
* **Elasticsearch**: Changed elasticsearch Terms aggregation to default to Min Doc Count to 1, and sort order to Top [#8321](https://github.com/grafana/grafana/issues/8321)
## Tech
* **Library Upgrade**: inconshreveable/log15 outdated - no support for solaris [#8262](https://github.com/grafana/grafana/issues/8262)
* **Library Upgrade**: Upgrade Macaron [#7600](https://github.com/grafana/grafana/issues/7600)
# 4.2.0 (2017-03-22)
# 4.2.0-beta2 (unreleased)
## Minor Enhancements
* **Templates**: Prevent use of the prefix `__` for templates in web UI [#7678](https://github.com/grafana/grafana/issues/7678)
* **Threema**: Add emoji to Threema alert notifications [#7676](https://github.com/grafana/grafana/pull/7676) thx [@dbrgn](https://github.com/dbrgn)
@@ -272,7 +15,6 @@ Pull Request: [#8472](https://github.com/grafana/grafana/pull/8472)
* **Elasticsearch**: Unique Count on string fields in ElasticSearch [#3536](https://github.com/grafana/grafana/issues/3536), thx [@pyro2927](https://github.com/pyro2927)
* **Templating**: Data source template variable that refers to other variable in regex filter [#6365](https://github.com/grafana/grafana/issues/6365) thx [@rlodge](https://github.com/rlodge)
* **Admin**: Global User List: add search and pagination [#7469](https://github.com/grafana/grafana/issues/7469)
* **User Management**: Invite UI is now disabled when login form is disabled [#7875](https://github.com/grafana/grafana/issues/7875)
## Bugfixes
* **Webhook**: Use proxy settings from environment variables [#7710](https://github.com/grafana/grafana/issues/7710)
@@ -281,7 +23,6 @@ Pull Request: [#8472](https://github.com/grafana/grafana/pull/8472)
* **Docs**: router_logging not documented [#7723](https://github.com/grafana/grafana/issues/7723)
* **Alerting**: Spelling mistake [#7739](https://github.com/grafana/grafana/pull/7739) thx [@woutersmit](https://github.com/woutersmit)
* **Alerting**: Graph legend scrolls to top when an alias is toggled/clicked [#7680](https://github.com/grafana/grafana/issues/7680) thx [@p4ddy1](https://github.com/p4ddy1)
* **Panels**: Fixed panel tooltip description after scrolling down [#7708](https://github.com/grafana/grafana/issues/7708) thx [@askomorokhov](https://github.com/askomorokhov)
# 4.2.0-beta1 (2017-02-27)


@@ -1,46 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at contact@grafana.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/


@@ -31,7 +31,7 @@ module.exports = function (grunt) {
require('load-grunt-tasks')(grunt);
// load task definitions
grunt.loadTasks('./scripts/grunt');
grunt.loadTasks('tasks');
// Utility function to load plugin settings into config
function loadConfig(config,path) {
@@ -46,7 +46,7 @@ module.exports = function (grunt) {
}
// Merge that object with whatever we have here
loadConfig(config,'./scripts/grunt/options/');
loadConfig(config,'./tasks/options/');
// pass the config to grunt
grunt.initConfig(config);
};


@@ -1,4 +1,4 @@
Copyright 2014-2017 Torkel Ödegaard, Raintank Inc.
Copyright 2014-2016 Torkel Ödegaard, Raintank Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you
may not use this file except in compliance with the License. You may


@@ -1,28 +0,0 @@
# Plugin Development
This document is not meant as a complete guide for developing plugins but more as a changelog for changes in
Grafana that can impact plugin development. Whenever you, as a plugin author, encounter an issue with your plugin after
upgrading Grafana, please check here before creating an issue.
## Links
- [Datasource plugin written in typescript](https://github.com/grafana/typescript-template-datasource)
- [Simple json datasource plugin](https://github.com/grafana/simple-json-datasource)
- [Plugin development guide](http://docs.grafana.org/plugins/developing/development/)
## Changes in v4.6
This version of Grafana has big changes that will impact a limited set of plugins. We moved from systemjs to webpack
for built-in plugins & everything internal. External plugins still use systemjs but now with a limited
set of Grafana components they can import. Plugins can depend on libs like lodash & moment and internal components
like before, using the same import paths. However, since not everything in Grafana is accessible any longer, a few plugins could encounter issues when importing a Grafana dependency.
[List of exposed components plugins can import/require](https://github.com/grafana/grafana/blob/master/public/app/features/plugins/plugin_loader.ts#L48)
If you think we missed exposing a crucial lib or Grafana component let us know by opening an issue.
### Deprecated components
The angular directive `<spectrum-picker>` is now deprecated (it will still work for one more version) but we recommend plugin authors
upgrade to the new `<color-picker color="ctrl.color" onChange="ctrl.onSparklineColorChange"></color-picker>`
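A sketch of what an external plugin's module imports might look like under these constraints, assuming lodash, moment and the classic panel SDK path are among the exposed dependencies (the plugin_loader.ts link above is the authoritative list):

```ts
// module.ts of a hypothetical external panel plugin
import _ from 'lodash';      // plain lib dependency, same import path as before v4.6
import moment from 'moment'; // likewise
import { MetricsPanelCtrl } from 'app/plugins/sdk'; // an exposed Grafana component

class MyPanelCtrl extends MetricsPanelCtrl {
  static templateUrl = 'partials/module.html';
}

export { MyPanelCtrl as PanelCtrl };
```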

README.md (132 lines changed)

@@ -1,16 +1,72 @@
[Grafana](https://grafana.com) [![Circle CI](https://circleci.com/gh/grafana/grafana.svg?style=svg)](https://circleci.com/gh/grafana/grafana) [![Go Report Card](https://goreportcard.com/badge/github.com/grafana/grafana)](https://goreportcard.com/report/github.com/grafana/grafana)
[Grafana](http://grafana.org) [![Circle CI](https://circleci.com/gh/grafana/grafana.svg?style=svg)](https://circleci.com/gh/grafana/grafana)
================
[Website](https://grafana.com) |
[Website](http://grafana.org) |
[Twitter](https://twitter.com/grafana) |
[Community & Forum](https://community.grafana.com)
[IRC](https://webchat.freenode.net/?channels=grafana) |
[![Slack](https://brandfolder.com/api/favicon/icon?size=16&domain=www.slack.com)](http://slack.raintank.io)
[Slack](http://slack.raintank.io) |
[Email](mailto:contact@grafana.org)
Grafana is an open source, feature rich metrics dashboard and graph editor for
Graphite, Elasticsearch, OpenTSDB, Prometheus and InfluxDB.
![](http://docs.grafana.org/assets/img/features/dashboard_ex1.png)
![](http://grafana.org/assets/img/features/dashboard_ex1.png)
- [Install instructions](http://docs.grafana.org/installation/)
- [What's New in Grafana 2.0](http://docs.grafana.org/guides/whats-new-in-v2/)
- [What's New in Grafana 2.1](http://docs.grafana.org/guides/whats-new-in-v2-1/)
- [What's New in Grafana 2.5](http://docs.grafana.org/guides/whats-new-in-v2-5/)
- [What's New in Grafana 3.0](http://docs.grafana.org/guides/whats-new-in-v3/)
- [What's New in Grafana 4.0](http://docs.grafana.org/guides/whats-new-in-v4/)
- [What's New in Grafana 4.1](http://docs.grafana.org/guides/whats-new-in-v4-1/)
## Features
### Graphite Target Editor
- Graphite target expression parser
- Feature rich query composer
- Quickly add and edit functions & parameters
- Templated queries
- [See it in action](http://docs.grafana.org/datasources/graphite/)
### Graphing
- Fast rendering, even over large timespans
- Click and drag to zoom
- Multiple Y-axis, logarithmic scales
- Bars, Lines, Points
- Smart Y-axis formatting
- Series toggles & color selector
- Legend values, and formatting options
- Grid thresholds, axis labels
- [Annotations](http://docs.grafana.org/reference/annotations/)
- Any panel can be rendered to PNG (server side using phantomjs)
### Dashboards
- Create, edit, save & search dashboards
- Change column spans and row heights
- Drag and drop panels to rearrange
- [Templating](http://docs.grafana.org/reference/templating/)
- [Scripted dashboards](http://docs.grafana.org/reference/scripting/)
- [Dashboard playlists](http://docs.grafana.org/reference/playlist/)
- [Time range controls](http://docs.grafana.org/reference/timerange/)
- [Share snapshots publicly](http://docs.grafana.org/v2.0/reference/sharing/)
### Elasticsearch
- Feature rich query editor UI
### InfluxDB
- Use InfluxDB as a metric data source, annotation source
- Query editor with series and column typeahead, easy group by and function selection
### OpenTSDB
- Use as metric data source
- Query editor with metric name typeahead and tag filtering
## Requirements
There are no dependencies except an external time series data store. For dashboards and user accounts Grafana can use an embedded
database (sqlite3) or you can use an external SQL database like MySQL or Postgres.
## Installation
Head to [docs.grafana.org](http://docs.grafana.org/installation/) and [download](https://grafana.com/get)
Head to [grafana.org](http://docs.grafana.org/installation/) and [download](https://grafana.com/get)
the latest release.
If you have any problems please read the [troubleshooting guide](http://docs.grafana.org/installation/troubleshooting/).
@@ -20,24 +76,42 @@ Be sure to read the [getting started guide](http://docs.grafana.org/guides/getti
## Run from master
If you want to build a package yourself, or contribute, here is a guide for how to do that. You can always find
the latest master builds [here](https://grafana.com/grafana/download)
the latest master builds [here](http://grafana.org/builds)
### Dependencies
- Go 1.9
- NodeJS LTS
- Go 1.8
- NodeJS v4+
### Get Code
```bash
go get github.com/grafana/grafana
```
Since imports of dependencies use the absolute path `github.com/grafana/grafana` within the `$GOPATH`,
you will need to put your version of the code in `$GOPATH/src/github.com/grafana/grafana` to be able
to develop and build grafana on a cloned repository. To do so, you can clone your forked repository
directly to `$GOPATH/src/github.com/grafana` or you can create a symbolic link from your version
of the code to `$GOPATH/src/github.com/grafana/grafana`. The latter option makes it easy to change
which grafana repository you want to build.
```bash
go get github.com/*your_account*/grafana
mkdir $GOPATH/src/github.com/grafana
ln -s $GOPATH/src/github.com/*your_account*/grafana $GOPATH/src/github.com/grafana/grafana
```
### Building the backend
```bash
go get github.com/grafana/grafana
cd ~/go/src/github.com/grafana/grafana
cd $GOPATH/src/github.com/grafana/grafana
go run build.go setup
go run build.go build
```
### Building frontend assets
For this you need nodejs (v6+).
To build less to css for the frontend you will need a recent version of **node (v6+)**,
npm (v2.5.0) and grunt (v0.4.5). Run the following:
```bash
npm install -g yarn
@@ -45,30 +119,25 @@ yarn install --pure-lockfile
npm run build
```
To rebuild frontend assets (typescript, sass, etc.) as you change them, start the watcher via:
To build the frontend assets only on changes:
```bash
npm run watch
```
Run tests
```bash
npm run test
```
Run tests in watch mode
```bash
npm run watch-test
sudo npm install -g grunt-cli # to do only once to install grunt command line interface
grunt watch
```
### Recompile backend on source change
To rebuild on source change.
```bash
go get github.com/Unknwon/bra
bra run
```
### Running
```bash
./bin/grafana-server
```
Open grafana in your browser (default: `http://localhost:3000`) and login with admin user (default: `user/pass = admin/admin`).
### Dev config
@@ -77,22 +146,17 @@ Create a custom.ini in the conf directory to override default configuration opti
You only need to add the options you want to override. Config files are applied in the order of:
1. grafana.ini
1. custom.ini
2. dev.ini (if found)
3. custom.ini
In your custom.ini, uncomment (remove the leading `;` sign) and set `app_mode = development`.
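A minimal sketch of such an override file:

```ini
# conf/custom.ini - list only the options you want to override
app_mode = development
```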
## Create a pull request
Before or after you create a pull request, sign the [contributor license agreement](http://docs.grafana.org/project/cla/).
## Contribute
If you have any idea for an improvement or found a bug do not hesitate to open an issue.
And if you have time clone this repo and submit a pull request and help me make Grafana
the kickass metrics & devops dashboard we all dream about!
## Plugin development
Check out the [Plugin Development Guide](http://docs.grafana.org/plugins/developing/development/) and the [PLUGIN_DEV.md](https://github.com/grafana/grafana/blob/master/PLUGIN_DEV.md) file for changes in Grafana that relate to
plugin development.
## License
Grafana is distributed under Apache 2.0 License.
Work in progress Grafana 2.0 (with included Grafana backend)


@@ -1,29 +0,0 @@
# Roadmap (2017-08-29)
This roadmap is a tentative plan for the core development team. Things change constantly as PRs come in and priorities change.
But it will give you an idea of our current vision and plan.
### Short term (1-4 months)
- Release Grafana v4.5 with fixes and minor enhancements
- Release Grafana v5
- User groups
- Dashboard folders
- Dashboard permissions (on folders as well), permissions on groups or users
- New Dashboard layout engine
- New sidemenu & nav UX
- Elasticsearch alerting
### Long term
- Backend plugins to support more Auth options, Alerting data sources & notifications
- Universal time series transformations for any data source (meta queries)
- Reporting
- Web socket & live data streams
- Migrate to Angular2 or react
### Outside contributions
We know this is being worked on right now by contributors (and we hope to merge it when it's ready).
- Clustering for alert engine (load distribution)


@@ -7,7 +7,7 @@ clone_folder: c:\gopath\src\github.com\grafana\grafana
environment:
nodejs_version: "6"
GOPATH: c:\gopath
GOVERSION: 1.9.2
GOVERSION: 1.8
install:
- rmdir c:\go /s /q
@@ -32,7 +32,6 @@ build_script:
- grunt release
- go run build.go sha-dist
- cp dist/* .
- go test -v ./pkg/...
artifacts:
- path: grafana-*windows-*.*

bower.json (new file, +26)

@@ -0,0 +1,26 @@
{
"name": "grafana",
"version": "2.0.2",
"homepage": "https://github.com/grafana/grafana",
"authors": [],
"license": "Apache 2.0",
"ignore": [
"**/.*",
"node_modules",
"bower_components",
"public/vendor/",
"test",
"tests"
],
"dependencies": {
"jquery": "3.1.0",
"lodash": "4.15.0",
"angular": "1.6.1",
"angular-route": "1.6.1",
"angular-mocks": "1.6.1",
"angular-sanitize": "1.6.1",
"angular-native-dragdrop": "1.2.2",
"angular-bindonce": "0.3.3",
"clipboard": "^1.5.16"
}
}
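Together with the .bowerrc added at the top of this diff (which points bower at public/vendor/), a sketch of how these front-end dependencies would be fetched, assuming bower is installed globally:

```bash
npm install -g bower
bower install   # resolves bower.json and places packages under public/vendor/
```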


@@ -95,9 +95,7 @@ func main() {
case "package":
grunt(gruntBuildArg("release")...)
if runtime.GOOS != "windows" {
createLinuxPackages()
}
createLinuxPackages()
case "pkg-rpm":
grunt(gruntBuildArg("release")...)
@@ -237,7 +235,7 @@ func createRpmPackages() {
defaultFileSrc: "packaging/rpm/sysconfig/grafana-server",
systemdFileSrc: "packaging/rpm/systemd/grafana-server.service",
depends: []string{"/sbin/service", "fontconfig", "freetype", "urw-fonts"},
depends: []string{"/sbin/service", "fontconfig"},
})
}
@@ -347,17 +345,13 @@ func ChangeWorkingDir(dir string) {
}
func grunt(params ...string) {
if runtime.GOOS == "windows" {
runPrint(`.\node_modules\.bin\grunt`, params...)
} else {
runPrint("./node_modules/.bin/grunt", params...)
}
runPrint("./node_modules/.bin/grunt", params...)
}
func gruntBuildArg(task string) []string {
args := []string{task}
if includeBuildNumber {
args = append(args, fmt.Sprintf("--pkgVer=%v-%v", linuxPackageVersion, linuxPackageIteration))
args = append(args, fmt.Sprintf("--pkgVer=%v-%v", version, linuxPackageIteration))
} else {
args = append(args, fmt.Sprintf("--pkgVer=%v", version))
}


@@ -9,7 +9,7 @@ machine:
GOPATH: "/home/ubuntu/.go_workspace"
ORG_PATH: "github.com/grafana"
REPO_PATH: "${ORG_PATH}/grafana"
GODIST: "go1.9.2.linux-amd64.tar.gz"
GODIST: "go1.8.linux-amd64.tar.gz"
post:
- mkdir -p ~/download
- mkdir -p ~/docker
@@ -45,7 +45,6 @@ deployment:
- aws s3 sync ./dist s3://$BUCKET_NAME/master
- ./scripts/trigger_windows_build.sh ${APPVEYOR_TOKEN} ${CIRCLE_SHA1} master
- ./scripts/trigger_docker_build.sh ${TRIGGER_GRAFANA_PACKER_CIRCLECI_TOKEN}
- go run ./scripts/build/publish.go -apiKey ${GRAFANA_COM_API_KEY}
gh_tag:
tag: /^v[0-9]+(\.[0-9]+){2}(-.+|[^-.]*)$/
commands:


@@ -25,7 +25,7 @@ plugins = data/plugins
#################################### Server ##############################
[server]
# Protocol (http, https, socket)
# Protocol (http or https)
protocol = http
# The ip address to bind to, empty will bind to all interfaces
@@ -57,29 +57,25 @@ enable_gzip = false
cert_file =
cert_key =
# Unix socket path
socket = /tmp/grafana.sock
#################################### Database ############################
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as on string using the url property.
# as seperate properties or as on string using the url propertie.
# Either "mysql", "postgres" or "sqlite3", it's your choice
type = sqlite3
host = 127.0.0.1:3306
name = grafana
user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
# If the password contains # or ; you have to wrap it with trippel quotes. Ex """#password;"""
password =
# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
url =
# Max idle conn setting default is 2
max_idle_conn = 2
# Max conn setting default is 0 (mean not set)
max_conn =
max_idle_conn =
max_open_conn =
# For "postgres", use either "disable", "require" or "verify-full"
@@ -136,11 +132,11 @@ logging = false
# Change this option to false to disable reporting.
reporting_enabled = true
# Set to false to disable all checks to https://grafana.com
# for new versions (grafana itself and plugins), check is used
# Set to false to disable all checks to https://grafana.net
# for new vesions (grafana itself and plugins), check is used
# in some UI views to notify that grafana or plugin update exists
# This option does not cause any auto updates, nor send any information
# only a GET request to https://grafana.com to get latest versions
# only a GET request to http://grafana.net to get latest versions
check_for_updates = true
# Google Analytics universal tracking code, only enabled if you specify an id here
@@ -186,10 +182,10 @@ snapshot_TTL_days = 90
#################################### Users ####################################
[users]
# disable user signup / registration
allow_sign_up = false
allow_sign_up = true
# Allow non admin users to create organizations
allow_org_create = false
allow_org_create = true
# Set to true to automatically assign new users to the default organization (id 1)
auto_assign_org = true
@@ -206,18 +202,10 @@ login_hint = email or username
# Default UI theme ("dark" or "light")
default_theme = dark
# External user management
external_manage_link_url =
external_manage_link_name =
external_manage_info =
[auth]
# Set to true to disable (hide) the login form, useful if you use OAuth
disable_login_form = false
# Set to true to disable the signout link in the side menu. useful if you use auth.proxy
disable_signout_menu = false
#################################### Anonymous Auth ######################
[auth.anonymous]
# enable anonymous access
@@ -255,8 +243,7 @@ api_url = https://www.googleapis.com/oauth2/v1/userinfo
allowed_domains =
hosted_domain =
#################################### Grafana.com Auth ####################
# legacy key names (so they work in env variables)
#################################### Grafana.net Auth ####################
[auth.grafananet]
enabled = false
allow_sign_up = true
@@ -265,14 +252,6 @@ client_secret = some_secret
scopes = user:email
allowed_organizations =
[auth.grafana_com]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret = some_secret
scopes = user:email
allowed_organizations =
#################################### Generic OAuth #######################
[auth.generic_oauth]
name = OAuth
@@ -318,7 +297,6 @@ key_file =
skip_verify = false
from_address = admin@grafana.localhost
from_name = Grafana
ehlo_identity =
[emails]
welcome_email_on_sign_up = false
@@ -448,38 +426,15 @@ address =
prefix = prod.grafana.%(instance_name)s.
[grafana_net]
url = https://grafana.com
[grafana_com]
url = https://grafana.com
#################################### Distributed tracing ############
[tracing.jaeger]
# jaeger destination (ex localhost:6831)
address =
# tag that will always be included in when creating new spans. ex (tag1:value1,tag2:value2)
always_included_tag =
# Type specifies the type of the sampler: const, probabilistic, rateLimiting, or remote
sampler_type = const
# jaeger samplerconfig param
# for "const" sampler, 0 or 1 for always false/true respectively
# for "probabilistic" sampler, a probability between 0 and 1
# for "rateLimiting" sampler, the number of spans per second
# for "remote" sampler, param is the same as for "probabilistic"
# and indicates the initial sampling rate before the actual one
# is received from the mothership
sampler_param = 1
url = https://grafana.net
#################################### External Image Storage ##############
[external_image_storage]
# You can choose between (s3, webdav, gcs)
# You can choose between (s3, webdav)
provider =
[external_image_storage.s3]
bucket_url =
bucket =
region =
path =
access_key =
secret_key =
@@ -487,8 +442,3 @@ secret_key =
url =
username =
password =
public_url =
[external_image_storage.gcs]
key_file =
bucket =


@@ -14,7 +14,7 @@ start_tls = false
# set to true if you want to skip ssl cert validation
ssl_skip_verify = false
# set to the path to your root CA certificate or leave unset to use system defaults
# root_ca_cert = "/path/to/certificate.crt"
# root_ca_cert = /path/to/certificate.crt
# Search user bind dn
bind_dn = "cn=admin,dc=grafana,dc=org"


@@ -26,7 +26,7 @@
#
#################################### Server ####################################
[server]
# Protocol (http, https, socket)
# Protocol (http or https)
;protocol = http
# The ip address to bind to, empty will bind to all interfaces
@@ -59,9 +59,6 @@
;cert_file =
;cert_key =
# Unix socket path
;socket =
#################################### Database ####################################
[database]
# You can configure the database connection by specifying type, host, name, user and password
@@ -85,10 +82,9 @@
# For "sqlite3" only, path relative to data_path setting
;path = grafana.db
# Max idle conn setting default is 2
;max_idle_conn = 2
# Max conn setting default is 0 (mean not set)
;max_conn =
;max_idle_conn =
;max_open_conn =
@@ -133,7 +129,7 @@
# for new vesions (grafana itself and plugins), check is used
# in some UI views to notify that grafana or plugin update exists
# This option does not cause any auto updates, nor send any information
# only a GET request to http://grafana.com to get latest versions
# only a GET request to http://grafana.net to get latest versions
;check_for_updates = true
# Google Analytics universal tracking code, only enabled if you specify an id here
@@ -193,18 +189,10 @@
# Default UI theme ("dark" or "light")
;default_theme = dark
# External user management, these options affect the organization users view
;external_manage_link_url =
;external_manage_link_name =
;external_manage_info =
[auth]
# Set to true to disable (hide) the login form, useful if you use OAuth, defaults to false
;disable_login_form = false
# Set to true to disable the signout link in the side menu. useful if you use auth.proxy, defaults to false
;disable_signout_menu = false
#################################### Anonymous Auth ##########################
[auth.anonymous]
# enable anonymous access
@@ -255,8 +243,8 @@
;team_ids =
;allowed_organizations =
#################################### Grafana.com Auth ####################
[auth.grafana_com]
#################################### Grafana.net Auth ####################
[auth.grafananet]
;enabled = false
;allow_sign_up = true
;client_id = some_id
@@ -295,8 +283,6 @@
;skip_verify = false
;from_address = admin@grafana.localhost
;from_name = Grafana
# EHLO identity in SMTP dialog (defaults to instance_name)
;ehlo_identity = dashboard.example.com
[emails]
;welcome_email_on_sign_up = false
@@ -307,7 +293,7 @@
# Use space to separate multiple modes, e.g. "console file"
;mode = console file
# Either "debug", "info", "warn", "error", "critical", default is "info"
# Either "trace", "debug", "info", "warn", "error", "critical", default is "info"
;level = info
# optional settings to set different levels for specific loggers. Ex filters = sqlstore:debug
@@ -393,47 +379,23 @@
;address =
;prefix = prod.grafana.%(instance_name)s.
#################################### Distributed tracing ############
[tracing.jaeger]
# Enable by setting the address sending traces to jaeger (ex localhost:6831)
;address = localhost:6831
# Tag that will always be included in when creating new spans. ex (tag1:value1,tag2:value2)
;always_included_tag = tag1:value1
# Type specifies the type of the sampler: const, probabilistic, rateLimiting, or remote
;sampler_type = const
# jaeger samplerconfig param
# for "const" sampler, 0 or 1 for always false/true respectively
# for "probabilistic" sampler, a probability between 0 and 1
# for "rateLimiting" sampler, the number of spans per second
# for "remote" sampler, param is the same as for "probabilistic"
# and indicates the initial sampling rate before the actual one
# is received from the mothership
;sampler_param = 1
#################################### Grafana.com integration ##########################
# Url used to to import dashboards directly from Grafana.com
[grafana_com]
;url = https://grafana.com
#################################### Internal Grafana Metrics ##########################
# Url used to to import dashboards directly from Grafana.net
[grafana_net]
;url = https://grafana.net
#################################### External image storage ##########################
[external_image_storage]
# Used for uploading images to public servers so they can be included in slack/email messages.
# you can choose between (s3, webdav, gcs)
# you can choose between (s3, webdav)
;provider =
[external_image_storage.s3]
;bucket =
;region =
;path =
;bucket_url =
;access_key =
;secret_key =
[external_image_storage.webdav]
;url =
;public_url =
;username =
;password =
[external_image_storage.gcs]
;key_file =
;bucket =
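A minimal way to try the new [tracing.jaeger] block above, assuming Docker is available; the image and ports match the jaeger compose block added later in this diff, and conf/custom.ini is Grafana's usual local override file:

    # run the Jaeger agent/collector/UI in one container
    docker run -d --name jaeger -p 6831:6831/udp -p 16686:16686 jaegertracing/all-in-one:latest
    # then in conf/custom.ini, point Grafana at the agent and restart:
    #   [tracing.jaeger]
    #   address = localhost:6831
    # spans show up in the Jaeger UI at http://localhost:16686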

View File

@@ -32,7 +32,6 @@ add ./files/my_htpasswd /etc/nginx/.htpasswd
# Add system service config
add ./files/nginx.conf /etc/nginx/nginx.conf
add ./files/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Nginx
#
# graphite
@@ -40,7 +39,6 @@ expose 80
# Carbon line receiver port
expose 2003
# Carbon cache query port
expose 7002

View File

@@ -4,6 +4,7 @@ graphite:
- "8080:80"
- "2003:2003"
volumes:
- /var/docker/gfdev/graphite:/opt/graphite/storage/whisper
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
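With the port mappings above (2003 for the carbon line receiver, 8080 for the webapp), a datapoint can be smoke-tested end to end; the metric name is illustrative and nc flags vary by netcat implementation:

    # write one datapoint over the plaintext carbon protocol: "name value timestamp"
    echo "test.smoke 42 $(date +%s)" | nc -q0 localhost 2003
    # read it back through the render API
    curl -s "http://localhost:8080/render?target=test.smoke&from=-5min&format=json"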

View File

@@ -1,93 +0,0 @@
FROM phusion/baseimage:0.9.22
MAINTAINER Denys Zhdanov <denis.zhdanov@gmail.com>
RUN apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y --force-yes install vim \
nginx \
python-dev \
python-flup \
python-pip \
python-ldap \
expect \
git \
memcached \
sqlite3 \
libffi-dev \
libcairo2 \
libcairo2-dev \
python-cairo \
python-rrdtool \
pkg-config \
nodejs \
&& rm -rf /var/lib/apt/lists/*
# fix python dependencies (LTS Django and newer memcached/txAMQP)
RUN pip install django==1.8.18 \
python-memcached==1.53 \
txAMQP==0.6.2 \
&& pip install --upgrade pip
# install whisper
RUN git clone -b 1.0.2 --depth 1 https://github.com/graphite-project/whisper.git /usr/local/src/whisper
WORKDIR /usr/local/src/whisper
RUN python ./setup.py install
# install carbon
RUN git clone -b 1.0.2 --depth 1 https://github.com/graphite-project/carbon.git /usr/local/src/carbon
WORKDIR /usr/local/src/carbon
RUN pip install -r requirements.txt \
&& python ./setup.py install
# install graphite
RUN git clone -b 1.0.2 --depth 1 https://github.com/graphite-project/graphite-web.git /usr/local/src/graphite-web
WORKDIR /usr/local/src/graphite-web
RUN pip install -r requirements.txt \
&& python ./setup.py install
ADD conf/opt/graphite/conf/*.conf /opt/graphite/conf/
ADD conf/opt/graphite/webapp/graphite/local_settings.py /opt/graphite/webapp/graphite/local_settings.py
ADD conf/opt/graphite/webapp/graphite/app_settings.py /opt/graphite/webapp/graphite/app_settings.py
WORKDIR /opt/graphite/webapp
RUN mkdir -p /var/log/graphite/ \
&& PYTHONPATH=/opt/graphite/webapp django-admin.py collectstatic --noinput --settings=graphite.settings
# install statsd
RUN git clone -b v0.7.2 https://github.com/etsy/statsd.git /opt/statsd
ADD conf/opt/statsd/config.js /opt/statsd/config.js
# config nginx
RUN rm /etc/nginx/sites-enabled/default
ADD conf/etc/nginx/nginx.conf /etc/nginx/nginx.conf
ADD conf/etc/nginx/sites-enabled/graphite-statsd.conf /etc/nginx/sites-enabled/graphite-statsd.conf
# init django admin
ADD conf/usr/local/bin/django_admin_init.exp /usr/local/bin/django_admin_init.exp
ADD conf/usr/local/bin/manage.sh /usr/local/bin/manage.sh
RUN chmod +x /usr/local/bin/manage.sh \
&& /usr/local/bin/django_admin_init.exp
# logging support
RUN mkdir -p /var/log/carbon /var/log/graphite /var/log/nginx
ADD conf/etc/logrotate.d/graphite-statsd /etc/logrotate.d/graphite-statsd
# daemons
ADD conf/etc/service/carbon/run /etc/service/carbon/run
ADD conf/etc/service/carbon-aggregator/run /etc/service/carbon-aggregator/run
ADD conf/etc/service/graphite/run /etc/service/graphite/run
ADD conf/etc/service/statsd/run /etc/service/statsd/run
ADD conf/etc/service/nginx/run /etc/service/nginx/run
# default conf setup
ADD conf /etc/graphite-statsd/conf
ADD conf/etc/my_init.d/01_conf_init.sh /etc/my_init.d/01_conf_init.sh
# cleanup
RUN apt-get clean\
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# defaults
EXPOSE 80 2003-2004 2023-2024 8125/udp 8126
VOLUME ["/opt/graphite/conf", "/opt/graphite/storage", "/etc/nginx", "/opt/statsd", "/etc/logrotate.d", "/var/log"]
WORKDIR /
ENV HOME /root
CMD ["/sbin/my_init"]
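A sketch of building and running this image locally; the tag is arbitrary and the published ports follow the EXPOSE line above:

    docker build -t graphite-statsd .
    # 80: nginx webapp, 2003-2004: carbon line/pickle, 8125/udp: statsd, 8126: statsd admin
    docker run -d -p 80:80 -p 2003:2003 -p 2004:2004 -p 8125:8125/udp -p 8126:8126 graphite-statsd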

View File

@@ -1,11 +0,0 @@
/var/log/*.log /var/log/*/*.log {
weekly
size 50M
missingok
rotate 10
compress
delaycompress
notifempty
copytruncate
su root syslog
}
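The rules above can be validated without touching any logs: logrotate's debug flag parses the config and prints what a rotation would do:

    logrotate -d /etc/logrotate.d/graphite-statsd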

View File

@@ -1,36 +0,0 @@
#!/bin/bash
conf_dir=/etc/graphite-statsd/conf
# auto setup graphite with default configs if /opt/graphite is missing
# needed for the use case when a docker host volume is mounted at any of the following:
# - /opt/graphite
# - /opt/graphite/conf
# - /opt/graphite/webapp/graphite
graphite_dir_contents=$(find /opt/graphite -mindepth 1 -print -quit)
graphite_conf_dir_contents=$(find /opt/graphite/conf -mindepth 1 -print -quit)
graphite_webapp_dir_contents=$(find /opt/graphite/webapp/graphite -mindepth 1 -print -quit)
graphite_storage_dir_contents=$(find /opt/graphite/storage -mindepth 1 -print -quit)
if [[ -z $graphite_dir_contents ]]; then
git clone -b 1.0.2 --depth 1 https://github.com/graphite-project/graphite-web.git /usr/local/src/graphite-web
cd /usr/local/src/graphite-web && python ./setup.py install
fi
if [[ -z $graphite_storage_dir_contents ]]; then
/usr/local/bin/django_admin_init.exp
fi
if [[ -z $graphite_conf_dir_contents ]]; then
cp -R $conf_dir/opt/graphite/conf/*.conf /opt/graphite/conf/
fi
if [[ -z $graphite_webapp_dir_contents ]]; then
cp $conf_dir/opt/graphite/webapp/graphite/local_settings.py /opt/graphite/webapp/graphite/local_settings.py
fi
# auto setup statsd with default config if /opt/statsd is missing
# needed for the use case when a docker host volume is mounted at any of the following:
# - /opt/statsd
statsd_dir_contents=$(find /opt/statsd -mindepth 1 -print -quit)
if [[ -z $statsd_dir_contents ]]; then
git clone -b v0.7.2 https://github.com/etsy/statsd.git /opt/statsd
cp $conf_dir/opt/statsd/config.js /opt/statsd/config.js
fi

View File

@@ -1,96 +0,0 @@
user www-data;
worker_processes 4;
pid /run/nginx.pid;
daemon off;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}

View File

@@ -1,31 +0,0 @@
server {
listen 80;
root /opt/graphite/static;
index index.html;
location /media {
# django admin static files
alias /usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/;
}
location /admin/auth/admin {
alias /usr/local/lib/python2.7/dist-packages/django/contrib/admin/static/admin;
}
location /admin/auth/user/admin {
alias /usr/local/lib/python2.7/dist-packages/django/contrib/admin/static/admin;
}
location / {
proxy_pass http://localhost:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type';
add_header 'Access-Control-Allow-Credentials' 'true';
}
}
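The add_header lines are what allow a browser-hosted Grafana to query this graphite cross-origin; a quick check that they come back on a render response, assuming the smoke-test metric from earlier:

    curl -s -o /dev/null -D - "http://localhost/render?target=test.smoke&from=-5min" \
      -H "Origin: http://grafana.example.com" | grep -i '^access-control'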

View File

@@ -1,4 +0,0 @@
#!/bin/bash
rm -f /opt/graphite/storage/carbon-aggregator-a.pid
exec /usr/bin/python /opt/graphite/bin/carbon-aggregator.py start --debug 2>&1 >> /var/log/carbon-aggregator.log

View File

@@ -1,4 +0,0 @@
#!/bin/bash
rm -f /opt/graphite/storage/carbon-cache-a.pid
exec /usr/bin/python /opt/graphite/bin/carbon-cache.py start --debug 2>&1 >> /var/log/carbon.log

View File

@@ -1,3 +0,0 @@
#!/bin/bash
export PYTHONPATH=/opt/graphite/webapp && exec /usr/local/bin/gunicorn wsgi --workers=4 --bind=127.0.0.1:8080 --log-file=/var/log/gunicorn.log --preload --pythonpath=/opt/graphite/webapp/graphite

View File

@@ -1,4 +0,0 @@
#!/bin/bash
mkdir -p /var/log/nginx
exec /usr/sbin/nginx -c /etc/nginx/nginx.conf

View File

@@ -1,4 +0,0 @@
#!/bin/bash
exec /usr/bin/nodejs /opt/statsd/stats.js /opt/statsd/config.js >> /var/log/statsd.log 2>&1

View File

@@ -1,35 +0,0 @@
# The form of each line in this file should be as follows:
#
# output_template (frequency) = method input_pattern
#
# This will capture any received metrics that match 'input_pattern'
# for calculating an aggregate metric. The calculation will occur
# every 'frequency' seconds and the 'method' can specify 'sum' or
# 'avg'. The name of the aggregate metric will be derived from
# 'output_template' filling in any captured fields from 'input_pattern'.
#
# For example, if your metric naming scheme is:
#
# <env>.applications.<app>.<server>.<metric>
#
# You could configure some aggregations like so:
#
# <env>.applications.<app>.all.requests (60) = sum <env>.applications.<app>.*.requests
# <env>.applications.<app>.all.latency (60) = avg <env>.applications.<app>.*.latency
#
# As an example, if the following metrics are received:
#
# prod.applications.apache.www01.requests
# prod.applications.apache.www01.requests
#
# They would all go into the same aggregation buffer and after 60 seconds the
# aggregate metric 'prod.applications.apache.all.requests' would be calculated
# by summing their values.
#
# Template components such as <env> will match everything up to the next dot.
# To match multiple metric components, including the dots, use <<metric>> in
# the input template:
#
# <env>.applications.<app>.all.<app_metric> (60) = sum <env>.applications.<app>.*.<<app_metric>>
#
# Note that any time this file is modified, it will be re-read automatically.
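A concrete end-to-end check of the rule format described above, reusing the example rule from the comments; port 2023 is the aggregator's line receiver in the carbon.conf later in this diff, and since the file is re-read automatically no restart is needed:

    echo '<env>.applications.<app>.all.requests (60) = sum <env>.applications.<app>.*.requests' \
      >> /opt/graphite/conf/aggregation-rules.conf
    echo "prod.applications.apache.www01.requests 1 $(date +%s)" | nc -q0 localhost 2023
    echo "prod.applications.apache.www02.requests 2 $(date +%s)" | nc -q0 localhost 2023
    # after the 60s window: prod.applications.apache.all.requests = 3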

View File

@@ -1,5 +0,0 @@
# This file takes a single regular expression per line
# If USE_WHITELIST is set to True in carbon.conf, any metrics received which
# match one of these expressions will be dropped
# This file is reloaded automatically when changes are made
^some\.noisy\.metric\.prefix\..*

View File

@@ -1,75 +0,0 @@
# This is a configuration file with AMQP enabled
[cache]
LOCAL_DATA_DIR =
# Specify the user to drop privileges to
# If this is blank carbon runs as the user that invokes it
# This user must have write access to the local data directory
USER =
# Limit the size of the cache to avoid swapping or becoming CPU bound.
# Sorting and serving cache queries gets more expensive as the cache grows.
# Use the value "inf" (infinity) for an unlimited cache size.
MAX_CACHE_SIZE = inf
# Limits the number of whisper update_many() calls per second, which effectively
# means the number of write requests sent to the disk. This is intended to
# prevent over-utilizing the disk and thus starving the rest of the system.
# When the rate of required updates exceeds this, then carbon's caching will
# take effect and increase the overall throughput accordingly.
MAX_UPDATES_PER_SECOND = 1000
# Softly limits the number of whisper files that get created each minute.
# Setting this value low (like at 50) is a good way to ensure your graphite
# system will not be adversely impacted when a bunch of new metrics are
# sent to it. The trade off is that it will take much longer for those metrics'
# database files to all get created and thus longer until the data becomes usable.
# Setting this value high (like "inf" for infinity) will cause graphite to create
# the files quickly but at the risk of slowing I/O down considerably for a while.
MAX_CREATES_PER_MINUTE = inf
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2003
UDP_RECEIVER_INTERFACE = 0.0.0.0
UDP_RECEIVER_PORT = 2003
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_INTERFACE = 0.0.0.0
CACHE_QUERY_PORT = 7002
# Enable AMQP if you want to receive metrics using your amqp broker
ENABLE_AMQP = True
# Verbose means a line will be logged for every metric received
# useful for testing
AMQP_VERBOSE = True
# your credentials for the amqp server
# AMQP_USER = guest
# AMQP_PASSWORD = guest
# the network settings for the amqp server
# AMQP_HOST = localhost
# AMQP_PORT = 5672
# if you want to include the metric name as part of the message body
# instead of as the routing key, set this to True
# AMQP_METRIC_NAME_IN_BODY = False
# NOTE: you cannot run both a cache and a relay on the same server
# with the default configuration; you have to specify distinct
# interfaces and ports for the listeners.
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2004
CACHE_SERVERS = server1, server2, server3
MAX_QUEUE_SIZE = 10000

View File

@@ -1,359 +0,0 @@
[cache]
# Configure carbon directories.
#
# OS environment variables can be used to tell carbon where graphite is
# installed, where to read configuration from and where to write data.
#
# GRAPHITE_ROOT - Root directory of the graphite installation.
# Defaults to ../
# GRAPHITE_CONF_DIR - Configuration directory (where this file lives).
# Defaults to $GRAPHITE_ROOT/conf/
# GRAPHITE_STORAGE_DIR - Storage directory for whisper/rrd/log/pid files.
# Defaults to $GRAPHITE_ROOT/storage/
#
# To change other directory paths, add settings to this file. The following
# configuration variables are available with these default values:
#
# STORAGE_DIR = $GRAPHITE_STORAGE_DIR
# LOCAL_DATA_DIR = STORAGE_DIR/whisper/
# WHITELISTS_DIR = STORAGE_DIR/lists/
# CONF_DIR = STORAGE_DIR/conf/
# LOG_DIR = STORAGE_DIR/log/
# PID_DIR = STORAGE_DIR/
#
# For FHS style directory structures, use:
#
# STORAGE_DIR = /var/lib/carbon/
# CONF_DIR = /etc/carbon/
# LOG_DIR = /var/log/carbon/
# PID_DIR = /var/run/
#
#LOCAL_DATA_DIR = /opt/graphite/storage/whisper/
# Enable daily log rotation. If disabled, a kill -HUP can be used after a manual rotate
ENABLE_LOGROTATION = True
# Specify the user to drop privileges to
# If this is blank carbon runs as the user that invokes it
# This user must have write access to the local data directory
USER =
#
# NOTE: The above settings must be set under [relay] and [aggregator]
# to take effect for those daemons as well
# Limit the size of the cache to avoid swapping or becoming CPU bound.
# Sorting and serving cache queries gets more expensive as the cache grows.
# Use the value "inf" (infinity) for an unlimited cache size.
MAX_CACHE_SIZE = inf
# Limits the number of whisper update_many() calls per second, which effectively
# means the number of write requests sent to the disk. This is intended to
# prevent over-utilizing the disk and thus starving the rest of the system.
# When the rate of required updates exceeds this, then carbon's caching will
# take effect and increase the overall throughput accordingly.
MAX_UPDATES_PER_SECOND = 500
# If defined, this changes the MAX_UPDATES_PER_SECOND in Carbon when a
# stop/shutdown is initiated. This helps when MAX_UPDATES_PER_SECOND is
# relatively low and carbon has cached a lot of updates; it enables the carbon
# daemon to shutdown more quickly.
# MAX_UPDATES_PER_SECOND_ON_SHUTDOWN = 1000
# Softly limits the number of whisper files that get created each minute.
# Setting this value low (like at 50) is a good way to ensure your graphite
# system will not be adversely impacted when a bunch of new metrics are
# sent to it. The trade off is that it will take much longer for those metrics'
# database files to all get created and thus longer until the data becomes usable.
# Setting this value high (like "inf" for infinity) will cause graphite to create
# the files quickly but at the risk of slowing I/O down considerably for a while.
MAX_CREATES_PER_MINUTE = 50
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2003
# Set this to True to enable the UDP listener. By default this is off
# because it is very common to run multiple carbon daemons and managing
# another (rarely used) port for every carbon instance is not fun.
ENABLE_UDP_LISTENER = False
UDP_RECEIVER_INTERFACE = 0.0.0.0
UDP_RECEIVER_PORT = 2003
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2004
# Set to false to disable logging of successful connections
LOG_LISTENER_CONNECTIONS = True
# Per security concerns outlined in Bug #817247 the pickle receiver
# will use a more secure and slightly less efficient unpickler.
# Set this to True to revert to the old-fashioned insecure unpickler.
USE_INSECURE_UNPICKLER = False
CACHE_QUERY_INTERFACE = 0.0.0.0
CACHE_QUERY_PORT = 7002
# Set this to False to drop datapoints received after the cache
# reaches MAX_CACHE_SIZE. If this is True (the default) then sockets
# over which metrics are received will temporarily stop accepting
# data until the cache size falls below 95% MAX_CACHE_SIZE.
USE_FLOW_CONTROL = True
# By default, carbon-cache will log every whisper update and cache hit. This can be excessive and
# degrade performance if logging on the same volume as the whisper data is stored.
LOG_UPDATES = False
LOG_CACHE_HITS = False
LOG_CACHE_QUEUE_SORTS = True
# The thread that writes metrics to disk can use one of the following strategies
# to determine the order in which metrics are removed from cache and flushed to
# disk. The default option preserves the same behavior as has been historically
# available in version 0.9.10.
#
# sorted - All metrics in the cache will be counted and an ordered list of
# them will be sorted according to the number of datapoints in the cache at the
# moment of the list's creation. Metrics will then be flushed from the cache to
# disk in that order.
#
# max - The writer thread will always pop and flush the metric from cache
# that has the most datapoints. This will give a strong flush preference to
# frequently updated metrics and will also reduce random file-io. Infrequently
# updated metrics may only ever be persisted to disk at daemon shutdown if
# there are a large number of metrics which receive very frequent updates OR if
# disk i/o is very slow.
#
# naive - Metrics will be flushed from the cache to disk in an unordered
# fashion. This strategy may be desirable in situations where the storage for
# whisper files is solid state, CPU resources are very limited or deference to
# the OS's i/o scheduler is expected to compensate for the random write
# pattern.
#
CACHE_WRITE_STRATEGY = sorted
# On some systems it is desirable for whisper to write synchronously.
# Set this option to True if you'd like to try this. Basically it will
# shift the onus of buffering writes from the kernel into carbon's cache.
WHISPER_AUTOFLUSH = False
# By default new Whisper files are created pre-allocated with the data region
# filled with zeros to prevent fragmentation and speed up contiguous reads and
# writes (which are common). Enabling this option will cause Whisper to create
# the file sparsely instead. Enabling this option may allow a large increase of
# MAX_CREATES_PER_MINUTE but may have longer term performance implications
# depending on the underlying storage configuration.
# WHISPER_SPARSE_CREATE = False
# Only beneficial on linux filesystems that support the fallocate system call.
# It maintains the benefits of contiguous reads/writes, but with a potentially
# much faster creation speed, by allowing the kernel to handle the block
# allocation and zero-ing. Enabling this option may allow a large increase of
# MAX_CREATES_PER_MINUTE. If enabled on an OS or filesystem that is unsupported
# this option will gracefully fallback to standard POSIX file access methods.
WHISPER_FALLOCATE_CREATE = True
# Enabling this option will cause Whisper to lock each Whisper file it writes
# to with an exclusive lock (LOCK_EX, see: man 2 flock). This is useful when
# multiple carbon-cache daemons are writing to the same files
# WHISPER_LOCK_WRITES = False
# Set this to True to enable whitelisting and blacklisting of metrics in
# CONF_DIR/whitelist and CONF_DIR/blacklist. If the whitelist is missing or
# empty, all metrics will pass through
# USE_WHITELIST = False
# By default, carbon itself will log statistics (such as a count,
# metricsReceived) with the top level prefix of 'carbon' at an interval of 60
# seconds. Set CARBON_METRIC_INTERVAL to 0 to disable instrumentation
# CARBON_METRIC_PREFIX = carbon
# CARBON_METRIC_INTERVAL = 60
# Enable AMQP if you want to receive metrics using an amqp broker
# ENABLE_AMQP = False
# Verbose means a line will be logged for every metric received
# useful for testing
# AMQP_VERBOSE = False
# AMQP_HOST = localhost
# AMQP_PORT = 5672
# AMQP_VHOST = /
# AMQP_USER = guest
# AMQP_PASSWORD = guest
# AMQP_EXCHANGE = graphite
# AMQP_METRIC_NAME_IN_BODY = False
# The manhole interface allows you to SSH into the carbon daemon
# and get a python interpreter. BE CAREFUL WITH THIS! If you do
# something like time.sleep() in the interpreter, the whole process
# will sleep! This is *extremely* helpful in debugging, assuming
# you are familiar with the code. If you are not, please don't
# mess with this, you are asking for trouble :)
#
# ENABLE_MANHOLE = False
# MANHOLE_INTERFACE = 127.0.0.1
# MANHOLE_PORT = 7222
# MANHOLE_USER = admin
# MANHOLE_PUBLIC_KEY = ssh-rsa AAAAB3NzaC1yc2EAAAABiwAaAIEAoxN0sv/e4eZCPpi3N3KYvyzRaBaMeS2RsOQ/cDuKv11dlNzVeiyc3RFmCv5Rjwn/lQ79y0zyHxw67qLyhQ/kDzINc4cY41ivuQXm2tPmgvexdrBv5nsfEpjs3gLZfJnyvlcVyWK/lId8WUvEWSWHTzsbtmXAF2raJMdgLTbQ8wE=
# Patterns for all of the metrics this machine will store. Read more at
# http://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol#Bindings
#
# Example: store all sales, linux servers, and utilization metrics
# BIND_PATTERNS = sales.#, servers.linux.#, #.utilization
#
# Example: store everything
# BIND_PATTERNS = #
# To configure special settings for the carbon-cache instance 'b', uncomment this:
#[cache:b]
#LINE_RECEIVER_PORT = 2103
#PICKLE_RECEIVER_PORT = 2104
#CACHE_QUERY_PORT = 7102
# and any other settings you want to customize, defaults are inherited
# from [carbon] section.
# You can then specify the --instance=b option to manage this instance
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
# Set to false to disable logging of successful connections
LOG_LISTENER_CONNECTIONS = True
# Carbon-relay has several options for metric routing controlled by RELAY_METHOD
#
# Use relay-rules.conf to route metrics to destinations based on pattern rules
#RELAY_METHOD = rules
#
# Use consistent-hashing for even distribution of metrics between destinations
#RELAY_METHOD = consistent-hashing
#
# Use consistent-hashing but take into account an aggregation-rules.conf shared
# by downstream carbon-aggregator daemons. This will ensure that all metrics
# that map to a given aggregation rule are sent to the same carbon-aggregator
# instance.
# Enable this for carbon-relays that send to a group of carbon-aggregators
#RELAY_METHOD = aggregated-consistent-hashing
RELAY_METHOD = rules
# If you use consistent-hashing you can add redundancy by replicating every
# datapoint to more than one machine.
REPLICATION_FACTOR = 1
# This is a list of carbon daemons we will send any relayed or
# generated metrics to. The default provided would send to a single
# carbon-cache instance on the default port. However if you
# use multiple carbon-cache instances then it would look like this:
#
# DESTINATIONS = 127.0.0.1:2004:a, 127.0.0.1:2104:b
#
# The general form is IP:PORT:INSTANCE where the :INSTANCE part is
# optional and refers to the "None" instance if omitted.
#
# Note that if the destinations are all carbon-caches then this should
# exactly match the webapp's CARBONLINK_HOSTS setting in terms of
# instances listed (order matters!).
#
# If using RELAY_METHOD = rules, all destinations used in relay-rules.conf
# must be defined in this list
DESTINATIONS = 127.0.0.1:2004
# This defines the maximum "message size" between carbon daemons.
# You shouldn't need to tune this unless you really know what you're doing.
MAX_DATAPOINTS_PER_MESSAGE = 500
MAX_QUEUE_SIZE = 10000
# Set this to False to drop datapoints when any send queue (sending datapoints
# to a downstream carbon daemon) hits MAX_QUEUE_SIZE. If this is True (the
# default) then sockets over which metrics are received will temporarily stop accepting
# data until the send queues fall below 80% MAX_QUEUE_SIZE.
USE_FLOW_CONTROL = True
# Set this to True to enable whitelisting and blacklisting of metrics in
# CONF_DIR/whitelist and CONF_DIR/blacklist. If the whitelist is missing or
# empty, all metrics will pass through
# USE_WHITELIST = False
# By default, carbon itself will log statistics (such as a count,
# metricsReceived) with the top level prefix of 'carbon' at an interval of 60
# seconds. Set CARBON_METRIC_INTERVAL to 0 to disable instrumentation
# CARBON_METRIC_PREFIX = carbon
# CARBON_METRIC_INTERVAL = 60
[aggregator]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2023
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2024
# Set to false to disable logging of successful connections
LOG_LISTENER_CONNECTIONS = True
# If set true, metrics received will be forwarded to DESTINATIONS in addition to
# the output of the aggregation rules. If set false the carbon-aggregator will
# only ever send the output of aggregation.
FORWARD_ALL = True
# This is a list of carbon daemons we will send any relayed or
# generated metrics to. The default provided would send to a single
# carbon-cache instance on the default port. However if you
# use multiple carbon-cache instances then it would look like this:
#
# DESTINATIONS = 127.0.0.1:2004:a, 127.0.0.1:2104:b
#
# The format is comma-delimited IP:PORT:INSTANCE where the :INSTANCE part is
# optional and refers to the "None" instance if omitted.
#
# Note that if the destinations are all carbon-caches then this should
# exactly match the webapp's CARBONLINK_HOSTS setting in terms of
# instances listed (order matters!).
DESTINATIONS = 127.0.0.1:2004
# If you want to add redundancy to your data by replicating every
# datapoint to more than one machine, increase this.
REPLICATION_FACTOR = 1
# This is the maximum number of datapoints that can be queued up
# for a single destination. Once this limit is hit, we will
# stop accepting new data if USE_FLOW_CONTROL is True, otherwise
# we will drop any subsequently received datapoints.
MAX_QUEUE_SIZE = 10000
# Set this to False to drop datapoints when any send queue (sending datapoints
# to a downstream carbon daemon) hits MAX_QUEUE_SIZE. If this is True (the
# default) then sockets over which metrics are received will temporarily stop accepting
# data until the send queues fall below 80% MAX_QUEUE_SIZE.
USE_FLOW_CONTROL = True
# This defines the maximum "message size" between carbon daemons.
# You shouldn't need to tune this unless you really know what you're doing.
MAX_DATAPOINTS_PER_MESSAGE = 500
# This defines how many datapoints the aggregator remembers for
# each metric. Aggregation only happens for datapoints that fall in
# the past MAX_AGGREGATION_INTERVALS * intervalSize seconds.
MAX_AGGREGATION_INTERVALS = 5
# By default (WRITE_BACK_FREQUENCY = 0), carbon-aggregator will write back
# aggregated data points once every rule.frequency seconds, on a per-rule basis.
# Set this (WRITE_BACK_FREQUENCY = N) to write back all aggregated data points
# every N seconds, independent of rule frequency. This is useful, for example,
# to be able to query partially aggregated metrics from carbon-cache without
# having to first wait rule.frequency seconds.
# WRITE_BACK_FREQUENCY = 0
# Set this to True to enable whitelisting and blacklisting of metrics in
# CONF_DIR/whitelist and CONF_DIR/blacklist. If the whitelist is missing or
# empty, all metrics will pass through
# USE_WHITELIST = False
# By default, carbon itself will log statistics (such as a count,
# metricsReceived) with the top level prefix of 'carbon' at an interval of 60
# seconds. Set CARBON_METRIC_INTERVAL to 0 to disable instrumentation
# CARBON_METRIC_PREFIX = carbon
# CARBON_METRIC_INTERVAL = 60
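The commented [cache:b] section above is how multiple carbon-cache daemons share one host; a sketch of starting the second instance with this image's layout (the webapp must then list both instances, as the CARBONLINK_HOSTS example in local_settings.py later in this diff shows):

    # instance 'a' is the default; 'b' picks up the [cache:b] overrides
    /opt/graphite/bin/carbon-cache.py --instance=b start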

View File

@@ -1,57 +0,0 @@
# This configuration file controls the behavior of the Dashboard UI, available
# at http://my-graphite-server/dashboard/.
#
# This file must contain a [ui] section that defines values for all of the
# following settings.
[ui]
default_graph_width = 400
default_graph_height = 250
automatic_variants = true
refresh_interval = 60
autocomplete_delay = 375
merge_hover_delay = 750
# You can set this to 'default', 'white', or a custom theme name.
# To create a custom theme, copy the dashboard-default.css file
# to dashboard-myThemeName.css in the content/css directory and
# modify it to your liking.
theme = default
[keyboard-shortcuts]
toggle_toolbar = ctrl-z
toggle_metrics_panel = ctrl-space
erase_all_graphs = alt-x
save_dashboard = alt-s
completer_add_metrics = alt-enter
completer_del_metrics = alt-backspace
give_completer_focus = shift-space
# These settings apply to the UI as a whole, all other sections in this file
# pertain only to specific metric types.
#
# The dashboard presents only metrics that fall into specified naming schemes
# defined in this file. This creates a simpler, more targeted view of the
# data. The general form for defining a naming scheme is as follows:
#
#[Metric Type]
#scheme = basis.path.<field1>.<field2>.<fieldN>
#field1.label = Foo
#field2.label = Bar
#
#
# Where each <field> will be displayed as a dropdown box
# in the UI and the remaining portion of the namespace
# shown in the Metric Selector panel. The .label options set the labels
# displayed for each dropdown.
#
# For example:
#
#[Sales]
#scheme = sales.<channel>.<type>.<brand>
#channel.label = Channel
#type.label = Product Type
#brand.label = Brand
#
# This defines a 'Sales' metric type that uses 3 dropdowns in the Context Selector
# (the upper-left panel) while any deeper metrics (per-product counts or revenue, etc)
# will be available in the Metric Selector (upper-right panel).

View File

@@ -1,38 +0,0 @@
[default]
background = black
foreground = white
majorLine = white
minorLine = grey
lineColors = blue,green,red,purple,brown,yellow,aqua,grey,magenta,pink,gold,rose
fontName = Sans
fontSize = 10
fontBold = False
fontItalic = False
[noc]
background = black
foreground = white
majorLine = white
minorLine = grey
lineColors = blue,green,red,yellow,purple,brown,aqua,grey,magenta,pink,gold,rose
fontName = Sans
fontSize = 10
fontBold = False
fontItalic = False
[plain]
background = white
foreground = black
minorLine = grey
majorLine = rose
[summary]
background = black
lineColors = #6666ff, #66ff66, #ff6666
[alphas]
background = white
foreground = black
majorLine = grey
minorLine = rose
lineColors = 00ff00aa,ff000077,00337799
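Templates are selected per request through the render API's template parameter; assuming the 8080:80 compose mapping from earlier in this diff:

    # render the same target with two of the templates defined above
    curl -s "http://localhost:8080/render?target=test.smoke&template=plain" -o plain.png
    curl -s "http://localhost:8080/render?target=test.smoke&template=noc" -o noc.png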

View File

@@ -1,21 +0,0 @@
# Relay destination rules for carbon-relay. Entries are scanned in order,
# and the first pattern a metric matches will cause processing to cease after sending
# unless `continue` is set to true
#
# [name]
# pattern = <regex>
# destinations = <list of destination addresses>
# continue = <boolean> # default: False
#
# name: Arbitrary unique name to identify the rule
# pattern: Regex pattern to match against the metric name
# destinations: Comma-separated list of destinations.
# ex: 127.0.0.1, 10.1.2.3:2004, 10.1.2.4:2004:a, myserver.mydomain.com
# continue: Continue processing rules if this rule matches (default: False)
# You must have exactly one section with 'default = true'
# Note that all destinations listed must also exist in carbon.conf
# in the DESTINATIONS setting in the [relay] section
[default]
default = true
destinations = 127.0.0.1:2004:a, 127.0.0.1:2104:b

View File

@@ -1,18 +0,0 @@
# This file defines regular expression patterns that can be used to
# rewrite metric names in a search & replace fashion. It consists of two
# sections, [pre] and [post]. The rules in the pre section are applied to
# metric names as soon as they are received. The post rules are applied
# after aggregation has taken place.
#
# The general form of each rule is as follows:
#
# regex-pattern = replacement-text
#
# For example:
#
# [post]
# _sum$ =
# _avg$ =
#
# These rules would strip off a suffix of _sum or _avg from any metric names
# after aggregation.

View File

@@ -1,43 +0,0 @@
# Aggregation methods for whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds
#
# [name]
# pattern = <regex>
# xFilesFactor = <float between 0 and 1>
# aggregationMethod = <average|sum|last|max|min>
#
# name: Arbitrary unique name for the rule
# pattern: Regex pattern to match against the metric name
# xFilesFactor: Ratio of valid data points required for aggregation to the next retention to occur
# aggregationMethod: function to apply to data points for aggregation
#
[min]
pattern = \.lower$
xFilesFactor = 0.1
aggregationMethod = min
[max]
pattern = \.upper(_\d+)?$
xFilesFactor = 0.1
aggregationMethod = max
[sum]
pattern = \.sum$
xFilesFactor = 0
aggregationMethod = sum
[count]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum
[count_legacy]
pattern = ^stats_counts.*
xFilesFactor = 0
aggregationMethod = sum
[default_average]
pattern = .*
xFilesFactor = 0.3
aggregationMethod = average

View File

@@ -1,17 +0,0 @@
# Schema definitions for Whisper files. Entries are scanned in order,
[carbon]
pattern = ^carbon\..*
retentions = 1m:31d,10m:1y,1h:5y
[highres]
pattern = ^highres.*
retentions = 1s:1d,1m:7d
[statsd]
pattern = ^statsd.*
retentions = 1m:7d,10m:1y
[default]
pattern = .*
retentions = 10s:1d,1m:7d,10m:1y
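First match wins, so a metric under highres. gets 1-second resolution for a day and then 1-minute for a week; the resulting archives can be verified with whisper-info.py from the whisper package installed earlier in this diff (the path follows graphite's dot-to-directory layout):

    echo "highres.cpu 0.5 $(date +%s)" | nc -q0 localhost 2003
    # expect archives of secondsPerPoint 1 (1 day) and 60 (7 days)
    whisper-info.py /opt/graphite/storage/whisper/highres/cpu.wsp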

View File

@@ -1,6 +0,0 @@
# This file takes a single regular expression per line
# If USE_WHITELIST is set to True in carbon.conf, only metrics received which
# match one of these expressions will be persisted. If this file is empty or
# missing, all metrics will pass through.
# This file is reloaded automatically when changes are made
.*

View File

@@ -1,94 +0,0 @@
"""Copyright 2008 Orbitz WorldWide
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License."""
# Django settings for graphite project.
# DO NOT MODIFY THIS FILE DIRECTLY - use local_settings.py instead
from os.path import dirname, join, abspath
#Django settings below, do not touch!
APPEND_SLASH = False
TEMPLATE_DEBUG = False
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
join(dirname( abspath(__file__) ), 'templates')
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
# Insert your TEMPLATE_CONTEXT_PROCESSORS here or use this
# list if you haven't customized them:
'django.contrib.auth.context_processors.auth',
'django.template.context_processors.debug',
'django.template.context_processors.i18n',
'django.template.context_processors.media',
'django.template.context_processors.static',
'django.template.context_processors.tz',
'django.contrib.messages.context_processors.messages',
],
},
},
]
# Language code for this installation. All choices can be found here:
# http://www.w3.org/TR/REC-html40/struct/dirlang.html#langcodes
# http://blogs.law.harvard.edu/tech/stories/storyReader$15
LANGUAGE_CODE = 'en-us'
# Absolute path to the directory that holds media.
MEDIA_ROOT = ''
# URL that handles the media served from MEDIA_ROOT.
# Example: "http://media.lawrence.com"
MEDIA_URL = ''
MIDDLEWARE_CLASSES = (
'graphite.middleware.LogExceptionsMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.gzip.GZipMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
)
ROOT_URLCONF = 'graphite.urls'
INSTALLED_APPS = (
'graphite.metrics',
'graphite.render',
'graphite.browser',
'graphite.composer',
'graphite.account',
'graphite.dashboard',
'graphite.whitelist',
'graphite.events',
'graphite.url_shortener',
'django.contrib.auth',
'django.contrib.sessions',
'django.contrib.admin',
'django.contrib.contenttypes',
'django.contrib.staticfiles',
'tagging',
)
AUTHENTICATION_BACKENDS = ['django.contrib.auth.backends.ModelBackend']
GRAPHITE_WEB_APP_SETTINGS_LOADED = True
STATIC_URL = '/static/'
STATIC_ROOT = '/opt/graphite/static/'

View File

@@ -1,215 +0,0 @@
## Graphite local_settings.py
# Edit this file to customize the default Graphite webapp settings
#
# Additional customizations to Django settings can be added to this file as well
#####################################
# General Configuration #
#####################################
# Set this to a long, random unique string to use as a secret key for this
# install. This key is used for salting of hashes used in auth tokens,
# CSRF middleware, cookie storage, etc. This should be set identically among
# instances if used behind a load balancer.
#SECRET_KEY = 'UNSAFE_DEFAULT'
# In Django 1.5+ set this to the list of hosts your graphite instance is
# accessible as. See:
# https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-ALLOWED_HOSTS
#ALLOWED_HOSTS = [ '*' ]
# Set your local timezone (Django's default is America/Chicago)
# If your graphs appear to be offset by a couple hours then this probably
# needs to be explicitly set to your local timezone.
#TIME_ZONE = 'America/Los_Angeles'
# Override this to provide documentation specific to your Graphite deployment
#DOCUMENTATION_URL = "http://graphite.readthedocs.org/"
# Logging
#LOG_RENDERING_PERFORMANCE = True
#LOG_CACHE_PERFORMANCE = True
#LOG_METRIC_ACCESS = True
# Enable full debug page display on exceptions (Internal Server Error pages)
#DEBUG = True
# If using RRD files and rrdcached, set to the address or socket of the daemon
#FLUSHRRDCACHED = 'unix:/var/run/rrdcached.sock'
# This lists the memcached servers that will be used by this webapp.
# If you have a cluster of webapps you should ensure all of them
# have the *exact* same value for this setting. That will maximize cache
# efficiency. Setting MEMCACHE_HOSTS to be empty will turn off use of
# memcached entirely.
#
# You should not use the loopback address (127.0.0.1) here if using clustering
# as every webapp in the cluster should use the exact same values to prevent
# unneeded cache misses. Set to [] to disable caching of images and fetched data
#MEMCACHE_HOSTS = ['10.10.10.10:11211', '10.10.10.11:11211', '10.10.10.12:11211']
#DEFAULT_CACHE_DURATION = 60 # Cache images and data for 1 minute
#####################################
# Filesystem Paths #
#####################################
# Change only GRAPHITE_ROOT if your install is merely shifted from /opt/graphite
# to somewhere else
#GRAPHITE_ROOT = '/opt/graphite'
# Most installs done outside of a separate tree such as /opt/graphite will only
# need to change these three settings. Note that the default settings for each
# of these is relative to GRAPHITE_ROOT
#CONF_DIR = '/opt/graphite/conf'
#STORAGE_DIR = '/opt/graphite/storage'
#CONTENT_DIR = '/opt/graphite/webapp/content'
# To further or fully customize the paths, modify the following. Note that the
# default settings for each of these are relative to CONF_DIR and STORAGE_DIR
#
## Webapp config files
#DASHBOARD_CONF = '/opt/graphite/conf/dashboard.conf'
#GRAPHTEMPLATES_CONF = '/opt/graphite/conf/graphTemplates.conf'
## Data directories
# NOTE: If any directory is unreadable in DATA_DIRS it will break metric browsing
#WHISPER_DIR = '/opt/graphite/storage/whisper'
#RRD_DIR = '/opt/graphite/storage/rrd'
#DATA_DIRS = [WHISPER_DIR, RRD_DIR] # Default: set from the above variables
#LOG_DIR = '/opt/graphite/storage/log/webapp'
#INDEX_FILE = '/opt/graphite/storage/index' # Search index file
#####################################
# Email Configuration #
#####################################
# This is used for emailing rendered Graphs
# Default backend is SMTP
#EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
#EMAIL_HOST = 'localhost'
#EMAIL_PORT = 25
#EMAIL_HOST_USER = ''
#EMAIL_HOST_PASSWORD = ''
#EMAIL_USE_TLS = False
# To drop emails on the floor, enable the Dummy backend:
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
#####################################
# Authentication Configuration #
#####################################
## LDAP / ActiveDirectory authentication setup
#USE_LDAP_AUTH = True
#LDAP_SERVER = "ldap.mycompany.com"
#LDAP_PORT = 389
# OR
#LDAP_URI = "ldaps://ldap.mycompany.com:636"
#LDAP_SEARCH_BASE = "OU=users,DC=mycompany,DC=com"
#LDAP_BASE_USER = "CN=some_readonly_account,DC=mycompany,DC=com"
#LDAP_BASE_PASS = "readonly_account_password"
#LDAP_USER_QUERY = "(username=%s)" #For Active Directory use "(sAMAccountName=%s)"
#
# If you want to further customize the ldap connection options you should
# directly use ldap.set_option to set the ldap module's global options.
# For example:
#
#import ldap
#ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_ALLOW)
#ldap.set_option(ldap.OPT_X_TLS_CACERTDIR, "/etc/ssl/ca")
#ldap.set_option(ldap.OPT_X_TLS_CERTFILE, "/etc/ssl/mycert.pem")
#ldap.set_option(ldap.OPT_X_TLS_KEYFILE, "/etc/ssl/mykey.pem")
# See http://www.python-ldap.org/ for further details on these options.
## REMOTE_USER authentication. See: https://docs.djangoproject.com/en/dev/howto/auth-remote-user/
#USE_REMOTE_USER_AUTHENTICATION = True
# Override the URL for the login link (e.g. for django_openid_auth)
#LOGIN_URL = '/account/login'
##########################
# Database Configuration #
##########################
# By default sqlite is used. If you cluster multiple webapps you will need
# to set up an external database (such as MySQL) and configure all of the webapp
# instances to use the same database. Note that this database is only used to store
# Django models such as saved graphs, dashboards, user preferences, etc.
# Metric data is not stored here.
#
# DO NOT FORGET TO RUN 'manage.py syncdb' AFTER SETTING UP A NEW DATABASE
#
# The following built-in database engines are available:
# django.db.backends.postgresql # Removed in Django 1.4
# django.db.backends.postgresql_psycopg2
# django.db.backends.mysql
# django.db.backends.sqlite3
# django.db.backends.oracle
#
# The default is 'django.db.backends.sqlite3' with file 'graphite.db'
# located in STORAGE_DIR
#
#DATABASES = {
# 'default': {
# 'NAME': '/opt/graphite/storage/graphite.db',
# 'ENGINE': 'django.db.backends.sqlite3',
# 'USER': '',
# 'PASSWORD': '',
# 'HOST': '',
# 'PORT': ''
# }
#}
#
#########################
# Cluster Configuration #
#########################
# (To avoid excessive DNS lookups you want to stick to using IP addresses only in this entire section)
#
# This should list the IP address (and optionally port) of the webapp on each
# remote server in the cluster. These servers must each have local access to
# metric data. Note that the first server to return a match for a query will be
# used.
#CLUSTER_SERVERS = ["10.0.2.2:80", "10.0.2.3:80"]
## These are timeout values (in seconds) for requests to remote webapps
#REMOTE_STORE_FETCH_TIMEOUT = 6 # Timeout to fetch series data
#REMOTE_STORE_FIND_TIMEOUT = 2.5 # Timeout for metric find requests
#REMOTE_STORE_RETRY_DELAY = 60 # Time before retrying a failed remote webapp
#REMOTE_FIND_CACHE_DURATION = 300 # Time to cache remote metric find results
## Remote rendering settings
# Set to True to enable rendering of Graphs on a remote webapp
#REMOTE_RENDERING = True
# List of IP (and optionally port) of the webapp on each remote server that
# will be used for rendering. Note that each rendering host should have local
# access to metric data or should have CLUSTER_SERVERS configured
#RENDERING_HOSTS = []
#REMOTE_RENDER_CONNECT_TIMEOUT = 1.0
# If you are running multiple carbon-caches on this machine (typically behind a relay using
# consistent hashing), you'll need to list the ip address, cache query port, and instance name of each carbon-cache
# instance on the local machine (NOT every carbon-cache in the entire cluster). The default cache query port is 7002
# and a common scheme is to use 7102 for instance b, 7202 for instance c, etc.
#
# You *should* use 127.0.0.1 here in most cases
#CARBONLINK_HOSTS = ["127.0.0.1:7002:a", "127.0.0.1:7102:b", "127.0.0.1:7202:c"]
#CARBONLINK_TIMEOUT = 1.0
#####################################
# Additional Django Settings #
#####################################
# Uncomment the following line for direct access to Django settings such as
# MIDDLEWARE_CLASSES or APPS
#from graphite.app_settings import *
import os
LOG_DIR = '/var/log/graphite'
SECRET_KEY = '$(date +%s | sha256sum | base64 | head -c 64)'
if os.getenv("MEMCACHE_HOST") is not None:
    MEMCACHE_HOSTS = os.getenv("MEMCACHE_HOST").split(",")
if os.getenv("DEFAULT_CACHE_DURATION") is not None:
    DEFAULT_CACHE_DURATION = int(os.getenv("DEFAULT_CACHE_DURATION"))
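The env-var hooks at the end mean memcached and the cache duration can be set at container start without editing this file; values and image tag here are illustrative:

    docker run -d \
      -e MEMCACHE_HOST=10.0.0.5:11211,10.0.0.6:11211 \
      -e DEFAULT_CACHE_DURATION=120 \
      graphite-statsd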

View File

@@ -1,6 +0,0 @@
{
"graphiteHost": "127.0.0.1",
"graphitePort": 2003,
"port": 8125,
"flushInterval": 10000
}
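With this config statsd listens on UDP 8125 and flushes aggregates to carbon every 10 seconds; a counter can be exercised with a one-shot UDP write (metric name illustrative):

    echo "deploys.prod:1|c" | nc -u -w1 localhost 8125
    # after the 10s flushInterval, statsd v0.7 emits stats_counts.deploys.prod to graphite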

View File

@@ -1,26 +0,0 @@
#!/usr/bin/env expect
set timeout -1
spawn /usr/local/bin/manage.sh
expect "Would you like to create one now" {
send "yes\r"
}
expect "Username" {
send "root\r"
}
expect "Email address:" {
send "root.graphite@mailinator.com\r"
}
expect "Password:" {
send "root\r"
}
expect "Password *:" {
send "root\r"
}
expect "Superuser created successfully"

View File

@@ -1,3 +0,0 @@
#!/bin/bash
PYTHONPATH=/opt/graphite/webapp django-admin.py syncdb --settings=graphite.settings
PYTHONPATH=/opt/graphite/webapp django-admin.py update_users --settings=graphite.settings

View File

@@ -1,16 +0,0 @@
graphite:
build: blocks/graphite1
ports:
- "8080:80"
- "2003:2003"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
fake-graphite-data:
image: grafana/fake-data-gen
net: bridge
environment:
FD_DATASOURCE: graphite
FD_PORT: 2003

View File

@@ -1,76 +0,0 @@
[cache]
LOCAL_DATA_DIR = /opt/graphite/storage/whisper/
# Specify the user to drop privileges to
# If this is blank carbon runs as the user that invokes it
# This user must have write access to the local data directory
USER =
# Limit the size of the cache to avoid swapping or becoming CPU bound.
# Sorting and serving cache queries gets more expensive as the cache grows.
# Use the value "inf" (infinity) for an unlimited cache size.
MAX_CACHE_SIZE = inf
# Limits the number of whisper update_many() calls per second, which effectively
# means the number of write requests sent to the disk. This is intended to
# prevent over-utilizing the disk and thus starving the rest of the system.
# When the rate of required updates exceeds this, then carbon's caching will
# take effect and increase the overall throughput accordingly.
MAX_UPDATES_PER_SECOND = 1000
# Softly limits the number of whisper files that get created each minute.
# Setting this value low (like at 50) is a good way to ensure your graphite
# system will not be adversely impacted when a bunch of new metrics are
# sent to it. The trade off is that it will take much longer for those metrics'
# database files to all get created and thus longer until the data becomes usable.
# Setting this value high (like "inf" for infinity) will cause graphite to create
# the files quickly but at the risk of slowing I/O down considerably for a while.
MAX_CREATES_PER_MINUTE = inf
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_INTERFACE = 0.0.0.0
CACHE_QUERY_PORT = 7002
LOG_UPDATES = False
# Enable AMQP if you want to receive metrics using an amqp broker
# ENABLE_AMQP = False
# Verbose means a line will be logged for every metric received
# useful for testing
# AMQP_VERBOSE = False
# AMQP_HOST = localhost
# AMQP_PORT = 5672
# AMQP_VHOST = /
# AMQP_USER = guest
# AMQP_PASSWORD = guest
# AMQP_EXCHANGE = graphite
# Patterns for all of the metrics this machine will store. Read more at
# http://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol#Bindings
#
# Example: store all sales, linux servers, and utilization metrics
# BIND_PATTERNS = sales.#, servers.linux.#, #.utilization
#
# Example: store everything
# BIND_PATTERNS = #
# NOTE: you cannot run both a cache and a relay on the same server
# with the default configuration; you have to specify distinct
# interfaces and ports for the listeners.
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2004
CACHE_SERVERS = server1, server2, server3
MAX_QUEUE_SIZE = 10000

View File

@@ -1,102 +0,0 @@
import datetime
import time
from django.utils.timezone import get_current_timezone
from django.core.urlresolvers import get_script_prefix
from django.http import HttpResponse
from django.shortcuts import render_to_response, get_object_or_404
from pytz import timezone
from graphite.util import json
from graphite.events import models
from graphite.render.attime import parseATTime
def to_timestamp(dt):
return time.mktime(dt.timetuple())
class EventEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, datetime.datetime):
return to_timestamp(obj)
return json.JSONEncoder.default(self, obj)
def view_events(request):
if request.method == "GET":
context = { 'events' : fetch(request),
'slash' : get_script_prefix()
}
return render_to_response("events.html", context)
else:
return post_event(request)
def detail(request, event_id):
e = get_object_or_404(models.Event, pk=event_id)
context = { 'event' : e,
'slash' : get_script_prefix()
}
return render_to_response("event.html", context)
def post_event(request):
if request.method == 'POST':
event = json.loads(request.body)
assert isinstance(event, dict)
values = {}
values["what"] = event["what"]
values["tags"] = event.get("tags", None)
values["when"] = datetime.datetime.fromtimestamp(
event.get("when", time.time()))
if "data" in event:
values["data"] = event["data"]
e = models.Event(**values)
e.save()
return HttpResponse(status=200)
else:
return HttpResponse(status=405)
def get_data(request):
if 'jsonp' in request.REQUEST:
response = HttpResponse(
"%s(%s)" % (request.REQUEST.get('jsonp'),
json.dumps(fetch(request), cls=EventEncoder)),
mimetype='text/javascript')
else:
response = HttpResponse(
json.dumps(fetch(request), cls=EventEncoder),
mimetype="application/json")
return response
def fetch(request):
#XXX we need to move to USE_TZ=True to get rid of naive-time conversions
def make_naive(dt):
if 'tz' in request.GET:
tz = timezone(request.GET['tz'])
else:
tz = get_current_timezone()
local_dt = dt.astimezone(tz)
if hasattr(local_dt, 'normalize'):
local_dt = local_dt.normalize()
return local_dt.replace(tzinfo=None)
if request.GET.get("from", None) is not None:
time_from = make_naive(parseATTime(request.GET["from"]))
else:
time_from = datetime.datetime.fromtimestamp(0)
if request.GET.get("until", None) is not None:
time_until = make_naive(parseATTime(request.GET["until"]))
else:
time_until = datetime.datetime.now()
tags = request.GET.get("tags", None)
if tags is not None:
tags = request.GET.get("tags").split(" ")
return [x.as_dict() for x in
models.Event.find_events(time_from, time_until, tags=tags)]
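post_event() above takes a JSON body with a required "what" plus optional "tags", "when" and "data", and fetch() filters by at-time range and space-separated tags; a sketch against the webapp, assuming the 8080:80 compose mapping from earlier in this diff:

    # record a deployment event
    curl -s -X POST http://localhost:8080/events/ \
      -d '{"what": "deploy", "tags": "release frontend", "data": "v4.2.0"}'
    # list matching events from the last hour as JSON
    curl -s "http://localhost:8080/events/get_data?from=-1h&tags=release"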

View File

@@ -1,20 +0,0 @@
[
{
"pk": 1,
"model": "auth.user",
"fields": {
"username": "admin",
"first_name": "",
"last_name": "",
"is_active": true,
"is_superuser": true,
"is_staff": true,
"last_login": "2011-09-20 17:02:14",
"groups": [],
"user_permissions": [],
"password": "sha1$1b11b$edeb0a67a9622f1f2cfeabf9188a711f5ac7d236",
"email": "root@example.com",
"date_joined": "2011-09-20 17:02:14"
}
}
]

View File

@@ -1,42 +0,0 @@
# Edit this file to override the default graphite settings, do not edit settings.py
# Turn on debugging and restart apache if you ever see an "Internal Server Error" page
#DEBUG = True
# Set your local timezone (django will try to figure this out automatically)
TIME_ZONE = 'UTC'
# Setting MEMCACHE_HOSTS to be empty will turn off use of memcached entirely
#MEMCACHE_HOSTS = ['127.0.0.1:11211']
# Sometimes you need to do a lot of rendering work but cannot share your storage mount
#REMOTE_RENDERING = True
#RENDERING_HOSTS = ['fastserver01','fastserver02']
#LOG_RENDERING_PERFORMANCE = True
#LOG_CACHE_PERFORMANCE = True
# If you've got more than one backend server they should all be listed here
#CLUSTER_SERVERS = []
# Override this if you need to provide documentation specific to your graphite deployment
#DOCUMENTATION_URL = "http://wiki.mycompany.com/graphite"
# Enable email-related features
#SMTP_SERVER = "mail.mycompany.com"
# LDAP / ActiveDirectory authentication setup
#USE_LDAP_AUTH = True
#LDAP_SERVER = "ldap.mycompany.com"
#LDAP_PORT = 389
#LDAP_SEARCH_BASE = "OU=users,DC=mycompany,DC=com"
#LDAP_BASE_USER = "CN=some_readonly_account,DC=mycompany,DC=com"
#LDAP_BASE_PASS = "readonly_account_password"
#LDAP_USER_QUERY = "(username=%s)" #For Active Directory use "(sAMAccountName=%s)"
# If sqlite won't cut it, configure your real database here (don't forget to run manage.py syncdb!)
#DATABASE_ENGINE = 'mysql' # or 'postgres'
#DATABASE_NAME = 'graphite'
#DATABASE_USER = 'graphite'
#DATABASE_PASSWORD = 'graphite-is-awesome'
#DATABASE_HOST = 'mysql.mycompany.com'
#DATABASE_PORT = '3306'

View File

@@ -1 +0,0 @@
grafana:$apr1$4R/20xhC$8t37jPP5dbcLr48btdkU//

View File

@@ -1,70 +0,0 @@
daemon off;
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
server_names_hash_bucket_size 32;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
server {
listen 80 default_server;
server_name _;
open_log_file_cache max=1000 inactive=20s min_uses=2 valid=1m;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header Host $host;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
add_header Access-Control-Allow-Origin "*";
add_header Access-Control-Allow-Methods "GET, OPTIONS";
add_header Access-Control-Allow-Headers "origin, authorization, accept";
location /content {
alias /opt/graphite/webapp/content;
}
location /media {
alias /usr/share/pyshared/django/contrib/admin/media;
}
}
}

View File

@@ -1,8 +0,0 @@
{
graphitePort: 2003,
graphiteHost: "127.0.0.1",
port: 8125,
mgmt_port: 8126,
backends: ['./backends/graphite'],
debug: true
}

View File

@@ -1,19 +0,0 @@
[min]
pattern = \.min$
xFilesFactor = 0.1
aggregationMethod = min
[max]
pattern = \.max$
xFilesFactor = 0.1
aggregationMethod = max
[sum]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum
[default_average]
pattern = .*
xFilesFactor = 0.5
aggregationMethod = average

View File

@@ -1,16 +0,0 @@
[carbon]
pattern = ^carbon\..*
retentions = 1m:31d,10m:1y,1h:5y
[highres]
pattern = ^highres.*
retentions = 1s:1d,1m:7d
[statsd]
pattern = ^statsd.*
retentions = 1m:7d,10m:1y
[default]
pattern = .*
retentions = 10s:1d,1m:7d,10m:1y

View File

@@ -1,26 +0,0 @@
[supervisord]
nodaemon = true
environment = GRAPHITE_STORAGE_DIR='/opt/graphite/storage',GRAPHITE_CONF_DIR='/opt/graphite/conf'
[program:nginx]
command = /usr/sbin/nginx
stdout_logfile = /var/log/supervisor/%(program_name)s.log
stderr_logfile = /var/log/supervisor/%(program_name)s.log
autorestart = true
[program:carbon-cache]
;user = www-data
command = /opt/graphite/bin/carbon-cache.py --debug start
stdout_logfile = /var/log/supervisor/%(program_name)s.log
stderr_logfile = /var/log/supervisor/%(program_name)s.log
autorestart = true
[program:graphite-webapp]
;user = www-data
directory = /opt/graphite/webapp
environment = PYTHONPATH='/opt/graphite/webapp'
command = /usr/bin/gunicorn_django -b127.0.0.1:8000 -w2 graphite/settings.py
stdout_logfile = /var/log/supervisor/%(program_name)s.log
stderr_logfile = /var/log/supervisor/%(program_name)s.log
autorestart = true
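Once supervisord is running with this configuration, the three programs can be checked and followed from the shell, for example:

```bash
# Show the state of nginx, carbon-cache and graphite-webapp
supervisorctl status
# Follow the carbon-cache log as it starts up
supervisorctl tail -f carbon-cache
```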

View File

@@ -1,6 +0,0 @@
jaeger:
image: jaegertracing/all-in-one:latest
ports:
- "localhost:6831:6831/udp"
- "16686:16686"

View File

@@ -7,8 +7,3 @@ mysql:
MYSQL_PASSWORD: password
ports:
- "3306:3306"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
command: [mysqld, --character-set-server=utf8mb4, --collation-server=utf8mb4_unicode_ci, --innodb_monitor_enable=all]

View File

@@ -1,20 +0,0 @@
## MySQL with Open Data Set from NYC Open Data (https://data.cityofnewyork.us)
FROM mysql:latest
ENV MYSQL_DATABASE="testdata" \
MYSQL_ROOT_PASSWORD="rootpass" \
MYSQL_USER="grafana" \
MYSQL_PASSWORD="password"
# Install requirements (wget, unzip)
RUN apt-get update && apt-get install -y wget unzip
# Fetch NYC Data Set
RUN wget https://data.cityofnewyork.us/download/57g5-etyj/application%2Fzip -O /tmp/data.zip && \
unzip -j /tmp/data.zip 311_Service_Requests_from_2015.csv -d /var/lib/mysql-files && \
rm /tmp/data.zip
ADD import_csv.sql /docker-entrypoint-initdb.d/
EXPOSE 3306

View File

@@ -1,9 +0,0 @@
mysql_opendata:
build: blocks/mysql_opendata
environment:
MYSQL_ROOT_PASSWORD: rootpass
MYSQL_DATABASE: testdata
MYSQL_USER: grafana
MYSQL_PASSWORD: password
ports:
- "3307:3306"

View File

@@ -1,80 +0,0 @@
use testdata;
DROP TABLE IF EXISTS `nyc_open_data`;
CREATE TABLE IF NOT EXISTS `nyc_open_data` (
UniqueKey bigint(255),
`CreatedDate` varchar(255),
`ClosedDate` varchar(255),
Agency varchar(255),
AgencyName varchar(255),
ComplaintType varchar(255),
Descriptor varchar(255),
LocationType varchar(255),
IncidentZip varchar(255),
IncidentAddress varchar(255),
StreetName varchar(255),
CrossStreet1 varchar(255),
CrossStreet2 varchar(255),
IntersectionStreet1 varchar(255),
IntersectionStreet2 varchar(255),
AddressType varchar(255),
City varchar(255),
Landmark varchar(255),
FacilityType varchar(255),
Status varchar(255),
`DueDate` varchar(255),
ResolutionDescription varchar(2048),
`ResolutionActionUpdatedDate` varchar(255),
CommunityBoard varchar(255),
Borough varchar(255),
XCoordinateStatePlane varchar(255),
YCoordinateStatePlane varchar(255),
ParkFacilityName varchar(255),
ParkBorough varchar(255),
SchoolName varchar(255),
SchoolNumber varchar(255),
SchoolRegion varchar(255),
SchoolCode varchar(255),
SchoolPhoneNumber varchar(255),
SchoolAddress varchar(255),
SchoolCity varchar(255),
SchoolState varchar(255),
SchoolZip varchar(255),
SchoolNotFound varchar(255),
SchoolOrCitywideComplaint varchar(255),
VehicleType varchar(255),
TaxiCompanyBorough varchar(255),
TaxiPickUpLocation varchar(255),
BridgeHighwayName varchar(255),
BridgeHighwayDirection varchar(255),
RoadRamp varchar(255),
BridgeHighwaySegment varchar(255),
GarageLotName varchar(255),
FerryDirection varchar(255),
FerryTerminalName varchar(255),
Latitude varchar(255),
Longitude varchar(255),
Location varchar(255)
);
LOAD DATA INFILE '/var/lib/mysql-files/311_Service_Requests_from_2015.csv' INTO TABLE nyc_open_data FIELDS OPTIONALLY ENCLOSED BY '"' TERMINATED BY ',' IGNORE 1 LINES;
UPDATE nyc_open_data SET CreatedDate = STR_TO_DATE(CreatedDate, '%m/%d/%Y %r') WHERE CreatedDate <> '';
UPDATE nyc_open_data SET ClosedDate = STR_TO_DATE(ClosedDate, '%m/%d/%Y %r') WHERE ClosedDate <> '';
UPDATE nyc_open_data SET DueDate = STR_TO_DATE(DueDate, '%m/%d/%Y %r') WHERE DueDate <> '';
UPDATE nyc_open_data SET ResolutionActionUpdatedDate = STR_TO_DATE(ResolutionActionUpdatedDate, '%m/%d/%Y %r') WHERE ResolutionActionUpdatedDate <> '';
UPDATE nyc_open_data SET CreatedDate=null WHERE CreatedDate = '';
UPDATE nyc_open_data SET ClosedDate=null WHERE ClosedDate = '';
UPDATE nyc_open_data SET DueDate=null WHERE DueDate = '';
UPDATE nyc_open_data SET ResolutionActionUpdatedDate=null WHERE ResolutionActionUpdatedDate = '';
ALTER TABLE nyc_open_data modify CreatedDate datetime NULL;
ALTER TABLE nyc_open_data modify ClosedDate datetime NULL;
ALTER TABLE nyc_open_data modify DueDate datetime NULL;
ALTER TABLE nyc_open_data modify ResolutionActionUpdatedDate datetime NULL;
ALTER TABLE `nyc_open_data` ADD INDEX `IX_ComplaintType` (`ComplaintType`);
ALTER TABLE `nyc_open_data` ADD INDEX `IX_CreatedDate` (`CreatedDate`);
ALTER TABLE `nyc_open_data` ADD INDEX `IX_LocationType` (`LocationType`);
ALTER TABLE `nyc_open_data` ADD INDEX `IX_AgencyName` (`AgencyName`);
ALTER TABLE `nyc_open_data` ADD INDEX `IX_City` (`City`);
SYSTEM rm /var/lib/mysql-files/311_Service_Requests_from_2015.csv
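Importing the full 2015 CSV takes a while on first container start. A row count against the table confirms the load completed (connection details per the compose block above):

```bash
# Count the imported rows in the nyc_open_data table
mysql -h 127.0.0.1 -P 3307 -u grafana -ppassword testdata \
  -e "SELECT COUNT(*) FROM nyc_open_data;"
```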

View File

@@ -6,4 +6,3 @@ postgrestest:
POSTGRES_DATABASE: grafana
ports:
- "5432:5432"
command: postgres -c log_connections=on -c logging_collector=on -c log_destination=stderr -c log_directory=/var/log/postgresql

View File

@@ -1,3 +1,2 @@
FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/
ADD alert.rules /etc/prometheus/

View File

@@ -1,10 +0,0 @@
# Alert Rules
ALERT AppCrash
IF process_open_fds > 0
FOR 15s
LABELS { severity="critical" }
ANNOTATIONS {
summary = "Number of open fds > 0",
description = "Just testing"
}
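Once Prometheus has loaded the rule, the synthetic ALERTS time series shows pending and firing alerts. A quick check via the HTTP query API (server on port 9090 per the compose file):

```bash
# List currently pending/firing alerts through the query API
curl -s 'http://localhost:9090/api/v1/query?query=ALERTS'
```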

View File

@@ -3,6 +3,8 @@ prometheus:
net: host
ports:
- "9090:9090"
volumes:
- /var/docker/prometheus:/prometheus-data
node_exporter:
image: prom/node-exporter
@@ -18,8 +20,3 @@ fake-prometheus-data:
environment:
FD_DATASOURCE: prom
alertmanager:
image: quay.io/prometheus/alertmanager
net: host
ports:
- "9093:9093"

View File

@@ -6,30 +6,22 @@ global:
# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
- "alert.rules"
# - "first.rules"
# - "second.rules"
alerting:
alertmanagers:
- scheme: http
static_configs:
- targets:
- "127.0.0.1:9093"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'node_exporter'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 10s
scrape_timeout: 10s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ['127.0.0.1:9100']
- job_name: 'fake-data-gen'
static_configs:
- targets: ['127.0.0.1:9091']
- job_name: 'grafana'
static_configs:
- targets: ['127.0.0.1:3000']
#- targets: ['localhost:9090', '172.17.0.1:9091', '172.17.0.1:9100', '172.17.0.1:9150']
- targets: ['localhost:9090', '127.0.0.1:9091', '127.0.0.1:9100', '127.0.0.1:9150']
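A config like the above is easy to break with indentation mistakes, so validating it before a restart helps; the flag spelling depends on your promtool version (1.x used `check-config`, 2.x uses `check config`):

```bash
# Validate prometheus.yml before restarting the server
promtool check-config prometheus.yml \
  || promtool check config prometheus.yml
```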

View File

@@ -1,4 +1,4 @@
snmpd:
image: namshi/smtp
build: blocks/snmpd
ports:
- "25:25"
- "161:161"

View File

@@ -1,77 +0,0 @@
version: "2"
services:
graphite:
build:
context: blocks/graphite1
args:
version: master
ports:
- "8080:80"
- "2003:2003"
- "8125:8125/udp"
- "8126:8126"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
fake-graphite-data:
image: grafana/fake-data-gen
network_mode: bridge
environment:
FD_DATASOURCE: graphite
FD_PORT: 2003
prometheus:
build: blocks/prometheus
network_mode: host
ports:
- "9090:9090"
node_exporter:
image: prom/node-exporter
network_mode: host
ports:
- "9100:9100"
fake-prometheus-data:
image: grafana/fake-data-gen
network_mode: host
ports:
- "9091:9091"
environment:
FD_DATASOURCE: prom
alertmanager:
image: quay.io/prometheus/alertmanager
network_mode: host
ports:
- "9093:9093"
influxdb:
image: influxdb:latest
container_name: influxdb
ports:
- "2004:2004"
- "8083:8083"
- "8086:8086"
volumes:
- ./blocks/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf
fake-influxdb-data:
image: grafana/fake-data-gen
network_mode: bridge
environment:
FD_DATASOURCE: influxdb
FD_PORT: 8086
# You need to run 'sysctl -w vm.max_map_count=262144' on the host machine
elasticsearch5:
image: elasticsearch:5
command: elasticsearch
ports:
- "10200:9200"
- "10300:9300"

View File

@@ -3,11 +3,10 @@ FROM grafana/docs-base:latest
# to get the git info for this repo
# COPY config.toml /site
# RUN rm -rf /site/content/*
RUN rm -rf /site/content/*
# COPY ./sources /site/content/docs/
COPY ./sources /site/content/
COPY config.toml /site
COPY awsconfig /site
VOLUME ["/site/content"]

View File

@@ -1,34 +1,50 @@
.PHONY: all default docs docs-build docs-shell shell checkvars
.PHONY: all default docs docs-build docs-shell shell test
# to allow `make DOCSDIR=1 docs-shell` (to create a bind mount in docs)
DOCS_MOUNT := $(if $(DOCSDIR),-v $(CURDIR):/docs/content/grafana/)
# to allow `make DOCSPORT=9000 docs`
DOCSPORT := 3004
# Get the IP ADDRESS
DOCKER_IP=$(shell python -c "import urlparse ; print urlparse.urlparse('$(DOCKER_HOST)').hostname or ''")
HUGO_BASE_URL=$(shell test -z "$(DOCKER_IP)" && echo localhost || echo "$(DOCKER_IP)")
HUGO_BIND_IP=0.0.0.0
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
DOCKER_DOCS_IMAGE := grafana/grafana-docs
DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE
SOURCES_HOST_DIR := "$(shell pwd)/sources"
DOCS_MOUNT := -v $(SOURCES_HOST_DIR):/site/content
DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e NOCACHE -p 3004:3004 -p 3005:3005
VERSION := $(shell head -n 1 VERSION)
# for some docs workarounds (see below in "docs-build" target)
GITCOMMIT := $(shell git rev-parse --short HEAD 2>/dev/null)
default: docs
checkvars:
ifndef ENV
$(error ENV is undefined set via ENV=staging or ENV=prod as argument to make)
endif
docs: docs-build
$(DOCKER_RUN_DOCS) $(DOCS_MOUNT) -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" /bin/bash -c "grunt --env=dev-docs && grunt connect --port=3004"
$(DOCKER_RUN_DOCS) -p 3004:3004 -p 3005:3005 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" /bin/bash -c "grunt && grunt connect --port=3004"
watch: docs-build
$(DOCKER_RUN_DOCS) $(DOCS_MOUNT) -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" /bin/bash -c "grunt --env=dev-docs && grunt connect --port=3004 & grunt watch --port=3004 --env=dev-docs"
docs-watch: docs-build
$(DOCKER_RUN_DOCS) -p 3004:3004 -p 3005:3005 -v $(SOURCES_HOST_DIR):/site/content -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" /bin/bash -c "grunt --env=dev-docs && grunt connect --port=3004 & grunt watch --port=3004 --env=dev-docs"
publish: checkvars docs-build
$(info Publishing ENV=${ENV} and VERSION=${VERSION})
$(DOCKER_RUN_DOCS) $(DOCS_MOUNT) -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" /bin/bash -c "./publish.sh ${ENV}-docs ${VERSION}"
docs-watch-mac: docs-build
$(DOCKER_RUN_DOCS) -p 3004:3004 -p 3005:3005 -v $(SOURCES_HOST_DIR):/site/content -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" /bin/bash -c "grunt --env=dev-docs-mac && grunt connect --port=3004 & grunt watch --port=3004 --env=dev-docs-mac"
publish: docs-build
$(DOCKER_RUN_DOCS) "$(DOCKER_DOCS_IMAGE)" /bin/bash -c "./publish.sh staging-docs v3.1"
publish-prod: docs-build
$(DOCKER_RUN_DOCS) "$(DOCKER_DOCS_IMAGE)" /bin/bash -c "./publish.sh prod-docs root"
docs-draft: docs-build
$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --buildDrafts="true" --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP)
docs-shell: docs-build
$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" bash
test: docs-build
$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)"
docs-build:
docker build -t "$(DOCKER_DOCS_IMAGE)" --no-cache .
docker build -t "$(DOCKER_DOCS_IMAGE)" .
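Given the targets above, a typical local docs loop looks like this (ports per the Makefile; see the README below for details):

```bash
# Build the docs image, then serve with live reload on port 3004
make docs-build
make watch
```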

View File

@@ -1,7 +1,8 @@
# Building The Docs
To build the docs locally, you need to have docker installed. The
docs are built using [Hugo](http://gohugo.io/) - a static site generator.
docs are built using a custom [docker](https://www.docker.com/) image
and the [mkdocs](http://www.mkdocs.org/) tool.
**Prepare the Docker Image**:
@@ -10,40 +11,19 @@ when running ``make docs-build`` depending on how your system's docker
service is configured):
```
git clone https://github.com/grafana/grafana.org
cd grafana.org
make docs-build
$ git clone https://github.com/grafana/grafana.org
$ cd grafana.org
$ make docs-build
```
**Build the Documentation**:
Now that the docker image has been prepared we can build the
grafana docs and start a docs server.
If you have not cloned the Grafana repository already then:
docs. Switch your working directory back to the directory this file
(README.md) is in and run (possibly with ``sudo``):
```
cd ..
git clone https://github.com/grafana/grafana
```
Switch your working directory to the directory this file
(README.md) is in.
```
cd grafana/docs
```
An AWS config file is required to build the docs Docker image and to publish the site to AWS. If you are building locally only and do not have any AWS credentials for docs.grafana.org then create an empty file named `awsconfig` in the current directory.
```
touch awsconfig
```
Then run (possibly with ``sudo``):
```
make watch
$ make docs
```
This command will not return control of the shell to the user. Instead
@@ -52,21 +32,4 @@ we created in the previous step.
Open [localhost:3004](http://localhost:3004) to view the docs.
### Images & Content
All markdown files are located in this repo (main grafana repo). But all images are added to the https://github.com/grafana/grafana.org repo. So the process of adding images is a bit complicated.
First you need to create a feature (PR) branch of https://github.com/grafana/grafana.org so you can make changes. Then add the image to the `/static/img/docs` directory. Then make a commit that adds the image.
Then run:
```
make docs-build
```
This will rebuild the docs docker container.
To be able to use the image you have to quit (CTRL-C) the `make watch` command (that you run in the same directory as this README). Then simply rerun `make watch`; it will restart the docs server, now with access to your image.
### Editing content
Changes to the markdown files should automatically cause a docs rebuild and live reload should reload the page in your browser.

View File

@@ -1 +1 @@
v4.3
3.1.0

View File

@@ -1,20 +1,70 @@
baseurl = "http://localhost:3002/"
languageCode = "en-us"
title = "Grafana Documentation"
canonifyurls = true
title = "Grafana Docs"
canonifyurls = false
relativeURLs = false
verbose = true
enableRobotsTXT = true
disableSitemap = false
disableRSS = false
preservetaxonomynames = true
metaDataFormat = "yaml"
disableRSS = true
[[menu.top]]
name = "Docs"
url = ""
weight = 1
[[menu.top]]
name = "Community"
url = "/community"
weight = 2
[[menu.top]]
name = "Support"
url = "/support"
weight = 3
[[menu.top]]
name = "Plugins"
url = "https://grafana.net/plugins"
weight = 4
[[menu.top]]
name = "Dashboards"
url = "https://grafana.net/dashboards"
weight = 5
[[menu.top]]
name = "Hosting"
url = "/hosting"
weight = 6
[[menu.top]]
name = "Github"
url = "https://github.com/grafana/grafana"
weight = 7
## Main
[[menu.main]]
name = "Feature Gallery"
url = "/features"
weight = 1
[[menu.main]]
name = "Live Demo"
url = "http://play.grafana.org"
weight = 2
[[menu.main]]
name = "Download"
url = "/download"
weight = 3
[[menu.main]]
name = "Blog"
url = "/blog"
weight = 4
[taxonomies]
tag = "tags"
category = "categories"
[permalinks]
blog = ":year/:month/:day/:title/"

View File

@@ -1,3 +0,0 @@
#!/bin/bash
make publish ENV=prod VERSION=root

View File

@@ -27,24 +27,6 @@ To show all admin commands:
### Reset admin password
You can reset the password for the admin user using the CLI. The use case for this command is when you have lost the admin password.
You can reset the password for the admin user using the CLI.
`grafana-cli admin reset-admin-password ...`
If running the command returns this error:
> Could not find config defaults, make sure homepath command line parameter is set or working directory is homepath
then there are two flags that can be used to set homepath and the config file path.
`grafana-cli admin reset-admin-password --homepath "/usr/share/grafana" newpass`
If you have not lost the admin password then it is better to set it in the Grafana UI. If you need to set the password in a script then the [Grafana API](http://docs.grafana.org/http_api/user/#change-password) can be used. Here is an example with curl using basic auth:
```bash
curl -X PUT -H "Content-Type: application/json" -d '{
"oldPassword": "admin",
"newPassword": "newpass",
"confirmNew": "newpass"
}' http://admin:admin@<your_grafana_host>:3000/api/user/password
```

View File

@@ -1,15 +0,0 @@
+++
title = "Internal metrics"
description = "Internal metrics exposed by Grafana"
keywords = ["grafana", "metrics", "internal metrics"]
type = "docs"
[menu.docs]
parent = "admin"
weight = 8
+++
# Internal metrics
Grafana collects some metrics about itself internally. Currently Grafana supports pushing metrics to Graphite and exposing them to be scraped by Prometheus.
To enable internal metrics you have to enable them under the [metrics] section in your [grafana.ini](http://docs.grafana.org/installation/configuration/#enabled-6) config file. If you want to push metrics to Graphite you also have to configure the [metrics.graphite](http://docs.grafana.org/installation/configuration/#metrics-graphite) section.

View File

@@ -12,18 +12,18 @@ weight = 2
# Alert Notifications
{{< imgbox max-width="40%" img="/img/docs/v4/alert_notifications_menu.png" caption="Alerting notifications" >}}
> Alerting is only available in Grafana v4.0 and above.
When an alert changes state it sends out notifications. Each alert rule can have
multiple notifications. But in order to add a notification to an alert rule you first need
to add and configure a `notification` channel (can be email, Pagerduty or other integration). This is done from the Notification Channels page.
to add and configure a `notification` object. This is done from the Alerting/Notifications page.
## Notification Channel Setup
## Notification Setup
{{< imgbox max-width="40%" img="/img/docs/v43/alert_notifications_menu.png" caption="Alerting Notification Channels" >}}
On the Notification Channels page hit the `New Channel` button to go to the page where you
can configure and setup a new Notification Channel.
On the notifications list page hit the `New Notification` button to go the the page where you
can configure and setup a new notification.
You specify a name and type, and type-specific options. You can also test the notification to make
sure it's working and set up correctly.
@@ -32,15 +32,15 @@ sure it's working and setup correctly.
When checked, this notification will be used for all alert rules, existing and new.
## Supported Notification Types
## Supported notification types
Grafana ships with the following set of notification types:
Grafana ships with a set of notification types. More will be added in future releases.
### Email
To enable email notification you have to setup [SMTP settings](/installation/configuration/#smtp)
in the Grafana config. Email notification will upload an image of the alert graph to an
external image destination if available or fallback to attaching the image in the email.
external image destination if available or fallback on attaching the image in the email.
### Slack
@@ -48,39 +48,26 @@ external image destination if available or fallback to attaching the image in th
To set up slack you need to configure an incoming webhook url at slack. You can follow their guide for how
to do that https://api.slack.com/incoming-webhooks If you want to include screenshots of the firing alerts
in the slack messages you have to configure either the [external image destination](#external-image-store) in Grafana,
or a bot integration via Slack Apps. Follow Slack's guide to set up a bot integration and use the token provided
https://api.slack.com/bot-users, which starts with "xoxb".
in the slack messages you have to configure the [external image destination](#external-image-store) in Grafana.
Setting | Description
---------- | -----------
Recipient | allows you to override the slack recipient.
Mention | make it possible to include a mention in the slack notification sent by Grafana. Ex @here or @channel
Token | If provided, Grafana will upload the generated image via Slack's file.upload API method, not the external image destination.
### PagerDuty
To set up PagerDuty, all you have to do is to provide an api key.
Setting | Description
---------- | -----------
Integration Key | Integration key for pagerduty.
Auto resolve incidents | Resolve incidents in pagerduty once the alert goes back to ok
### Webhook
The webhook notification is a simple way to send information about a state change over HTTP to a custom endpoint.
Using this notification you could integrate Grafana into any system you choose, by yourself.
Using this notification you could integrated Grafana into any system you choose, by yourself.
Example json body:
```json
{
"title": "My alert",
"ruleId": 1,
"ruleName": "Load peaking!",
"ruleUrl": "http://url.to.grafana/db/dashboard/my_dashboard?panelId=2",
"state": "alerting",
"state": "Alerting",
"imageUrl": "http://s3.image.url",
"message": "Load is peaking. Make sure the traffic is real and spin up more webfronts",
"evalMatches": [
@@ -93,69 +80,30 @@ Example json body:
}
```
- **state** - The possible values for alert state are: `ok`, `paused`, `alerting`, `pending`, `no_data`.
### PagerDuty
### DingDing/DingTalk
To set up PagerDuty, all you have to do is to provide an api key.
[Instructions in Chinese](https://open-doc.dingtalk.com/docs/doc.htm?spm=a219a.7629140.0.0.p2lr6t&treeId=257&articleId=105733&docType=1).
Setting | Description
---------- | -----------
Integration Key | Integration key for pagerduty.
Auto resolve incidents | Resolve incidents in pagerduty once the alert goes back to ok
In DingTalk PC Client:
1. Click "more" icon on left bottom of the panel.
2. Click "Robot Manage" item in the pop menu, there will be a new panel call "Robot Manage".
3. In the "Robot Manage" panel, select "customised: customised robot with Webhook".
4. In the next new panel named "robot detail", click "Add" button.
5. In "Add Robot" panel, input a nickname for the robot and select a "message group" which the robot will join in. click "next".
6. There will be a Webhook URL in the panel, looks like this: https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxxx. Copy this URL to the grafana Dingtalk setting page and then click "finish".
Dingtalk supports the following "message type": `text`, `link` and `markdown`. Only the `text` message type is supported.
### Kafka
Notifications can be sent to a Kafka topic from Grafana using [Kafka REST Proxy](https://docs.confluent.io/1.0/kafka-rest/docs/index.html).
There are a couple of configuration options which need to be set in the Grafana UI under Kafka Settings:
1. Kafka REST Proxy endpoint.
2. Kafka Topic.
Once these two properties are set, you can send the alerts to Kafka for further processing or throttling.
### Other Supported Notification Channels
Grafana also supports the following Notification Channels:
- HipChat
- VictorOps
- Sensu
- OpsGenie
- Threema
- Pushover
- Telegram
- LINE
# Enable images in notifications {#external-image-store}
Grafana can render the panel associated with the alert rule and include that in the notification. Most Notification Channels require that this image be publicly accessible (Slack and PagerDuty for example). In order to include images in alert notifications, Grafana can upload the image to an image store. It currently supports
Amazon S3 and Webdav for this. So to set that up you need to configure the [external image uploader](/installation/configuration/#external-image-storage) in your grafana-server ini config file.
Grafana can render the panel associated with the alert rule and include that in the notification. Some types
of notifications require that this image be publicly accessable (Slack for example). In order to support
images in notifications like Slack Grafana can upload the image to an image store. It currently supports
Amazon S3 for this and Webdav. So to set that up you need to configure the
[external image uploader](/installation/configuration/#external-image-storage) in your grafana-server ini
config file.
Currently only the Email Channel attaches images if no external image store is specified. To include images in alert notifications for other channels you need to set up an external image store.
This is an optional requirement, you can get Slack and email notifications without setting this up.
This is an optional requirement, you can get slack and email notifications without setting this up.
# Configure the link back to Grafana from alert notifications
All alert notifications contain a link back to the triggered alert in the Grafana instance.
This url is based on the [domain](/installation/configuration/#domain) setting in Grafana.

View File

@@ -27,12 +27,14 @@ and the conditions that need to be met for the alert to change state and trigger
## Execution
The alert rules are evaluated in the Grafana backend in a scheduler and query execution engine that is part
of core Grafana. Only some data sources are supported right now. They include `Graphite`, `Prometheus`,
of core Grafana. Only some data soures are supported right now. They include `Graphite`, `Prometheus`,
`InfluxDB` and `OpenTSDB`.
### Clustering
Currently alerting supports a limited form of high availability. Since v4.2.0 of Grafana, alert notifications are deduped when running multiple servers. This means all alerts are executed on every server but no duplicate alert notifications are sent due to the deduping logic. Proper load balancing of alerts will be introduced in the future.
We have not implemented clustering yet. So if you run multiple instances of grafana-server
you have to make sure [execute_alerts]({{< relref "installation/configuration.md#alerting" >}})
is true on only one instance or otherwise you will get duplicated notifications.
<div class="clearfix"></div>
@@ -50,22 +52,12 @@ Here you can specify the name of the alert rule and how often the scheduler shou
### Conditions
Currently the only condition type that exists is a `Query` condition that allows you to
specify a query letter, time range and an aggregation function.
### Query condition example
```sql
avg() OF query(A, 5m, now) IS BELOW 14
```
- `avg()` Controls how the values for **each** series should be reduced to a value that can be compared against the threshold. Click on the function to change it to another aggregation function.
- `query(A, 5m, now)` The letter defines what query to execute from the **Metrics** tab. The second two parameters define the time range, `5m, now` means 5 minutes from now to now. You can also do `10m, now-2m` to define a time range that will be 10 minutes from now to 2 minutes from now. This is useful if you want to ignore the last 2 minutes of data.
- `IS BELOW 14` Defines the type of threshold and the threshold value. You can click on `IS BELOW` to change the type of threshold.
The query used in an alert rule cannot contain any template variables. Currently we only support `AND` and `OR` operators between conditions and they are executed serially.
specify a query letter, time range and an aggregation function. The letter refers to
a query you already have added in the **Metrics** tab. The result from the query and the aggregation function is
a single value that is then used in the threshold check. The query used in an alert rule cannot
contain any template variables. Currently we only support `AND` and `OR` operators between conditions and they are executed serially.
For example, we have 3 conditions in the following order:
*condition:A(evaluates to: TRUE) OR condition:B(evaluates to: FALSE) AND condition:C(evaluates to: TRUE)*
`condition:A(evaluates to: TRUE) OR condition:B(evaluates to: FALSE) AND condition:C(evaluates to: TRUE)`
so the result will be calculated as ((TRUE OR FALSE) AND TRUE) = TRUE.
We plan to add other condition types in the future, like `Other Alert`, where you can include the state
@@ -74,7 +66,7 @@ of another alert in your conditions, and `Time Of Day`.
#### Multiple Series
If a query returns multiple series then the aggregation function and threshold check will be evaluated for each series.
What Grafana does not do currently is track alert rule state **per series**. This has implications that are detailed
What Grafana does not do currently is track alert rule state **per series**. This has implications that is exemplified
in the scenario below.
- Alert condition with query that returns 2 series: **server1** and **server2**
@@ -89,7 +81,8 @@ we plan to track state **per series** in a future release.
### No Data / Null values
Below your conditions you can configure how the rule evaluation engine should handle queries that return no data or only null values.
Below you condition you can configure how the rule evaluation engine should handle queries that return no data or only null valued
data.
No Data Option | Description
------------ | -------------
@@ -99,23 +92,23 @@ Keep Last State | Keep the current alert rule state, whatever it is.
### Execution errors or timeouts
The last option tells how to handle execution or timeout errors.
The last option is how to handle execution or timeout errors.
Error or timeout option | Description
------------ | -------------
Alerting | Set alert rule state to `Alerting`
Keep Last State | Keep the current alert rule state, whatever it is.
If you have an unreliable time series store from which queries sometime timeout or fail randomly you can set this option
to `Keep Last State` in order to basically ignore them.
If you an unreliable time series store that where queries sometime timeout or fail randomly you can set this option
t `Keep Last State` to basically ignore them.
## Notifications
In the alert tab you can also specify alert rule notifications along with a detailed message about the alert rule.
The message can contain anything, information about how you might solve the issue, link to runbook, etc.
The message can contain anything, information about how you might solve the issue, link to runbook etc.
The actual notifications are configured and shared between multiple alerts. Read the
[notifications]({{< relref "notifications.md" >}}) guide for how to configure and setup notifications.
[Notifications]({{< relref "notifications.md" >}}) guide for how to configure and setup notifications.
## Alert State History & Annotations
@@ -128,7 +121,7 @@ submenu in the alert tab to view & clear state history.
{{< imgbox max-width="40%" img="/img/docs/v4/alert_test_rule.png" caption="Test Rule" >}}
First level of troubleshooting you can do is hit the **Test Rule** button. You will get result back that you can expand
to the point where you can see the raw data that was returned from your query.
to the point where you can see the raw data that was returned form your query.
Further troubleshooting can also be done by inspecting the grafana-server log. If it's not an error or for some reason
the log does not say anything you can enable debug logging for some relevant components. This is done

View File

@@ -12,11 +12,10 @@ weight = 200
Here you can find links to older versions of the documentation that might be better suited for your version
of Grafana.
- [Latest](http://docs.grafana.org)
- [Version 4.4](http://docs.grafana.org/v4.4)
- [Version 4.3](http://docs.grafana.org/v4.3)
- [Version 4.2](http://docs.grafana.org/v4.2)
- [Version 4.1](http://docs.grafana.org/v4.1)
- [Version 4.0](http://docs.grafana.org/v4.0)
- [Version 3.1](http://docs.grafana.org/v3.1)
- [Version 3.0](http://docs.grafana.org/v3.0)
- [Latest](/)
- [Version 3.1](/v3.1)
- [Version 3.0](/v3.0)
- [Version 2.6](/v2.6)
- [Version 2.5](/v2.5)
- [Version 2.1](/v2.1)
- [Version 2.0](/v2.0)

View File

@@ -2,7 +2,7 @@
title = "Contributor Licence Agreement (CLA)"
description = "Contributer Licence Agreement (CLA)"
type = "docs"
aliases = ["/project/cla", "docs/contributing/cla.html"]
aliases = ["/project/cla"]
[menu.docs]
parent = "contribute"
weight = 1

View File

@@ -13,26 +13,29 @@ weight = 10
# Using AWS CloudWatch in Grafana
Grafana ships with built in support for CloudWatch. You just have to add it as a data source and you will be ready to build dashboards for your CloudWatch metrics.
Grafana ships with built in support for CloudWatch. You just have to add it as a data source and you will
be ready to build dashboards for you CloudWatch metrics.
## Adding the data source to Grafana
## Adding the data source
![](/img/docs/cloudwatch/cloudwatch_add.png)
1. Open the side menu by clicking the Grafana icon in the top header.
1. Open the side menu by clicking the the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select `Cloudwatch` from the *Type* dropdown.
> NOTE: If at any moment you have issues with getting this datasource to work and Grafana is giving you undescriptive errors then don't
forget to check your log file (try looking in /var/log/grafana/grafana.log).
> NOTE: If this link is missing in the side menu it means that your current user does not have the `Admin` role for the current organization.
3. Click the `Add new` link in the top header.
4. Select `CloudWatch` from the dropdown.
> NOTE: If at any moment you have issues with getting this datasource to work and grafana is giving you undescriptive errors then dont forget to check your log file (try looking in /var/log/grafana/).
Name | Description
------------ | -------------
*Name* | The data source name. This is how you refer to the data source in panels & queries.
*Default* | Default data source means that it will be pre-selected for new panels.
*Credentials* profile name | Specify the name of the profile to use (if you use `~/aws/credentials` file), leave blank for default.
*Default Region* | Used in query editor to set region (can be changed on per query basis)
*Custom Metrics namespace* | Specify the CloudWatch namespace of Custom metrics
*Assume Role Arn* | Specify the ARN of the role to assume
Name | The data source name, important that this is the same as in Grafana v1.x if you plan to import old dashboards.
Default | Default data source means that it will be pre-selected for new panels.
Credentials profile name | Specify the name of the profile to use (if you use `~/aws/credentials` file), leave blank for default. This option was introduced in Grafana 2.5.1
Default Region | Used in query editor to set region (can be changed on per query basis)
Custom Metrics namespace | Specify the CloudWatch namespace of Custom metrics
Assume Role Arn | Specify the ARN of the role to assume
## Authentication
@@ -50,118 +53,56 @@ Create a file at `~/.aws/credentials`. That is the `HOME` path for user running
Example content:
```bash
[default]
aws_access_key_id = asdsadasdasdasd
aws_secret_access_key = dasdasdsadasdasdasdsa
region = us-west-2
```
[default]
aws_access_key_id = asdsadasdasdasd
aws_secret_access_key = dasdasdsadasdasdasdsa
region = us-west-2
## Metric Query Editor
![](/img/docs/v43/cloudwatch_editor.png)
![](/img/docs/cloudwatch/query_editor.png)
You need to specify a namespace, metric, at least one stat, and at least one dimension.
## Templated queries
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place.
Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
Checkout the [Templating]({{< relref "reference/templating.md" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
### Query variable
CloudWatch Datasource Plugin provides the following queries you can specify in the `Query` field in the Variable
edit view. They allow you to fill a variable's options list with things like `region`, `namespaces`, `metric names`
and `dimension keys/values`.
CloudWatch Datasource Plugin provides the following functions in `Variables values query` field in Templating Editor to query `region`, `namespaces`, `metric names` and `dimension keys/values` on the CloudWatch.
Name | Description
------- | --------
*regions()* | Returns a list of regions AWS provides their service.
*namespaces()* | Returns a list of namespaces CloudWatch support.
*metrics(namespace, [region])* | Returns a list of metrics in the namespace. (specify region for custom metrics)
*dimension_keys(namespace)* | Returns a list of dimension keys in the namespace.
*dimension_values(region, namespace, metric, dimension_key)* | Returns a list of dimension values matching the specified `region`, `namespace`, `metric` and `dimension_key`.
*ebs_volume_ids(region, instance_id)* | Returns a list of volume ids matching the specified `region`, `instance_id`.
*ec2_instance_attribute(region, attribute_name, filters)* | Returns a list of attributes matching the specified `region`, `attribute_name`, `filters`.
`regions()` | Returns a list of regions AWS provides their service.
`namespaces()` | Returns a list of namespaces CloudWatch support.
`metrics(namespace, [region])` | Returns a list of metrics in the namespace. (specify region for custom metrics)
`dimension_keys(namespace)` | Returns a list of dimension keys in the namespace.
`dimension_values(region, namespace, metric, dimension_key)` | Returns a list of dimension values matching the specified `region`, `namespace`, `metric` and `dimension_key`.
`ebs_volume_ids(region, instance_id)` | Returns a list of volume id matching the specified `region`, `instance_id`.
`ec2_instance_attribute(region, attribute_name, filters)` | Returns a list of attribute matching the specified `region`, `attribute_name`, `filters`.
For details about the metrics CloudWatch provides, please refer to the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
#### Examples templated Queries
## Example templated Queries
Example dimension queries which will return a list of resources for individual AWS Services:
Query | Service
Service | Query
------- | -----
*dimension_values(us-east-1,AWS/ELB,RequestCount,LoadBalancerName)* | ELB
*dimension_values(us-east-1,AWS/ElastiCache,CPUUtilization,CacheClusterId)* | ElastiCache
*dimension_values(us-east-1,AWS/Redshift,CPUUtilization,ClusterIdentifier)* | RedShift
*dimension_values(us-east-1,AWS/RDS,CPUUtilization,DBInstanceIdentifier)* | RDS
*dimension_values(us-east-1,AWS/S3,BucketSizeBytes,BucketName)* | S3
ELB | `dimension_values(us-east-1,AWS/ELB,RequestCount,LoadBalancerName)`
ElastiCache | `dimension_values(us-east-1,AWS/ElastiCache,CPUUtilization,CacheClusterId)`
RedShift | `dimension_values(us-east-1,AWS/Redshift,CPUUtilization,ClusterIdentifier)`
RDS | `dimension_values(us-east-1,AWS/RDS,CPUUtilization,DBInstanceIdentifier)`
S3 | `dimension_values(us-east-1,AWS/S3,BucketSizeBytes,BucketName)`
## ec2_instance_attribute examples
## ec2_instance_attribute JSON filters
### JSON filters
The `ec2_instance_attribute` query takes `filters` in JSON format.
The `ec2_instance_attribute` query take `filters` in JSON format.
You can specify [pre-defined filters of ec2:DescribeInstances](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html).
Note that the actual filtering takes place on Amazon's servers, not in Grafana.
Filters syntax:
```javascript
{ filter_name1: [ filter_value1 ], filter_name2: [ filter_value2 ] }
```
Specify like `{ filter_name1: [ filter_value1 ], filter_name2: [ filter_value2 ] }`
Example `ec2_instance_attribute()` query
```javascript
ec2_instance_attribute(us-east-1, InstanceId, { "tag:Environment": [ "production" ] })
```
ec2_instance_attribute(us-east-1, InstanceId, { "tag:Environment": [ "production" ] })
### Selecting Attributes
Only 1 attribute per instance can be returned. Any flat attribute can be selected (i.e. if the attribute has a single value and isn't an object or array). Below is a list of available flat attributes:
* `AmiLaunchIndex`
* `Architecture`
* `ClientToken`
* `EbsOptimized`
* `EnaSupport`
* `Hypervisor`
* `IamInstanceProfile`
* `ImageId`
* `InstanceId`
* `InstanceLifecycle`
* `InstanceType`
* `KernelId`
* `KeyName`
* `LaunchTime`
* `Platform`
* `PrivateDnsName`
* `PrivateIpAddress`
* `PublicDnsName`
* `PublicIpAddress`
* `RamdiskId`
* `RootDeviceName`
* `RootDeviceType`
* `SourceDestCheck`
* `SpotInstanceRequestId`
* `SriovNetSupport`
* `SubnetId`
* `VirtualizationType`
* `VpcId`
Tags can be selected by prepending the tag name with `Tags.`
Example `ec2_instance_attribute()` query
```javascript
ec2_instance_attribute(us-east-1, Tags.Name, { "tag:Team": [ "sysops" ] })
```
![](/img/docs/v2/cloudwatch_templating.png)
## Cost

View File

@@ -12,128 +12,84 @@ weight = 3
# Using Elasticsearch in Grafana
Grafana ships with advanced support for Elasticsearch. You can do many types of simple or complex Elasticsearch queries to
visualize logs or metrics stored in Elasticsearch. You can also annotate your graphs with log events stored in Elasticsearch.
Grafana ships with advanced support for Elasticsearch. You can do many types of
simple or complex elasticsearch queries to visualize logs or metrics stored in elasticsearch. You can
also annotate your graphs with log events stored in elasticsearch.
## Adding the data source
1. Open the side menu by clicking the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select *Elasticsearch* from the *Type* dropdown.
![](/img/docs/v2/add_Graphite.jpg)
> NOTE: If you're not seeing the `Data Sources` link in your side menu it means that your current user does not have the `Admin` role for the current organization.
1. Open the side menu by clicking the the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
> NOTE: If this link is missing in the side menu it means that your current user does not have the `Admin` role for the current organization.
3. Click the `Add new` link in the top header.
4. Select `Elasticsearch` from the dropdown.
Name | Description
------------ | -------------
*Name* | The data source name. This is how you refer to the data source in panels & queries.
*Default* | Default data source means that it will be pre-selected for new panels.
*Url* | The HTTP protocol, IP, and port of your Elasticsearch server.
*Access* | Proxy = access via Grafana backend, Direct = access directly from browser.
Name | The data source name, important that this is the same as in Grafana v1.x if you plan to import old dashboards.
Default | Default data source means that it will be pre-selected for new panels.
Url | The http protocol, ip and port of you elasticsearch server.
Access | Proxy = access via Grafana backend, Direct = access directly from browser.
Proxy access means that the Grafana backend will proxy all requests from the browser, and send them on to the Data Source. This is useful because it can eliminate CORS (Cross Origin Site Resource) issues, as well as eliminate the need to disseminate authentication to the browser.
Proxy access means that the Grafana backend will proxy all requests from the browser, and send them on to the Data Source. This is useful because it can eliminate CORS (Cross Origin Site Resource) issues, as well as eliminate the need to disseminate authentication details to the Data Source to the browser.
Direct access is still supported because in some cases it may be useful to access a Data Source directly depending on the use case and topology of Grafana, the user, and the Data Source.
### Direct access
If you select direct access you must update your Elasticsearch configuration to allow other domains to access
Elasticsearch from the browser. You do this by specifying these to options in your **elasticsearch.yml** config file.
```bash
http.cors.enabled: true
http.cors.allow-origin: "*"
```
http.cors.enabled: true
http.cors.allow-origin: "*"
### Index settings
![](/img/docs/elasticsearch/elasticsearch_ds_details.png)
Here you can specify a default for the `time field` and specify the name of your Elasticsearch index. You can use
Here you can specify a default for the `time field` and specify the name of your elasticsearch index. You can use
a time pattern for the index name or a wildcard.
### Elasticsearch version
Be sure to specify your Elasticsearch version in the version selection dropdown. This is very important as there are differences in how queries are composed. Currently only 2.x and 5.x
are supported.
## Metric Query editor
![](/img/docs/elasticsearch/query_editor.png)
The Elasticsearch query editor allows you to select multiple metrics and group by multiple terms or filters. Use the plus and minus icons to the right to add/remove
metrics or group by clauses. Some metrics and group by clauses have options; click the option text to expand the row to view and edit metric or group by options.
## Series naming & alias patterns
You can control the name for time series via the `Alias` input field.
Pattern | Description
------------ | -------------
*{{term fieldname}}* | replaced with value of a term group by
*{{metric}}* | replaced with metric name (ex. Average, Min, Max)
*{{field}}* | replaced with the metric field name
The Elasticsearch query editor allows you to select multiple metrics and group by multiple terms or filters. Use the plus and minus icons to the right to add / remove
metrics or group bys. Some metrics and group by have options, click the option text to expand the the row to view and edit metric or group by options.
## Pipeline metrics
Some metric aggregations are called Pipeline aggregations, for example, *Moving Average* and *Derivative*. Elasticsearch pipeline metrics require another metric to be based on. Use the eye icon next to the metric to hide metrics from appearing in the graph. This is useful for metrics you only have in the query for use in a pipeline metric.
If you have Elasticsearch 2.x and Grafana 2.6 or above then you can use pipeline metric aggregations like
**Moving Average** and **Derivative**. Elasticsearch pipeline metrics require another metric to be based on. Use the eye icon next to the metric
to hide metrics from appearing in the graph. This is useful for metrics you only have in the query to be used
in a pipeline metric.
![](/img/docs/elasticsearch/pipeline_metrics_editor.png)
## Templating
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place.
Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
The Elasticsearch datasource supports two types of queries you can use to fill template variables with values.
Checkout the [Templating]({{< relref "reference/templating.md" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
### Possible values for a field
### Query variable
The Elasticsearch data source supports two types of queries you can use in the *Query* field of *Query* variables. The query is written using a custom JSON string.
Query | Description
------------ | -------------
*{"find": "fields", "type": "keyword"} | Returns a list of field names with the index type `keyword`.
*{"find": "terms", "field": "@hostname", "size": 1000}* | Returns a list of values for a field using term aggregation. Query will user current dashboard time range as time range for query.
*{"find": "terms", "field": "@hostname", "query": '<lucene query>'}* | Returns a list of values for a field using term aggregation & and a specified lucene query filter. Query will use current dashboard time range as time range for query.
There is a default size limit of 500 on terms queries. Set the size property in your query to set a custom limit.
You can use other variables inside the query. Example query definition for a variable named `$host`.
```
{"find": "terms", "field": "@hostname", "query": "@source:$source"}
```json
{"find": "terms", "field": "@hostname"}
```
In the above example, we use another variable named `$source` inside the query definition. Whenever you change, via the dropdown, the current value of the `$source` variable, it will trigger an update of the `$host` variable so that it only contains hostnames filtered by, in this case, the
`@source` document property.
### Fields filtered by type
```json
{"find": "fields", "type": "string"}
```
### Using variables in queries
### Fields filtered by type, with filter
```json
{"find": "fields", "type": "string", "query": <lucene query>}
```
There are two syntaxes:
### Multi format / All format
Use lucene format.
- `$<varname>` Example: @hostname:$hostname
- `[[varname]]` Example: @hostname:[[hostname]]
Why two ways? The first syntax is easier to read and write but does not allow you to use a variable in the middle of a word. When the *Multi-value* or *Include all value*
options are enabled, Grafana converts the labels from plain text to a lucene compatible condition.
![](/img/docs/v43/elastic_templating_query.png)
In the above example, we have a lucene query that filters documents based on the `@hostname` property using a variable named `$hostname`. It is also using
a variable in the *Terms* group by field input box. This allows you to use a variable to quickly change how the data is grouped.
Example dashboard:
[Elasticsearch Templated Dashboard](http://play.grafana.org/dashboard/db/elasticsearch-templated)
## Annotations
[Annotations]({{< relref "reference/annotations.md" >}}) allows you to overlay rich event information on top of graphs. You add annotation
queries via the Dashboard menu / Annotations view. Grafana can query any Elasticsearch index
for annotation events.
Name | Description
------------ | -------------
Query | You can leave the search query blank or specify a lucene query
Time | The name of the time field, needs to be a date field.
Text | Event description field.
Tags | Optional field name to use for event tags (can be an array or a CSV string).
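When an annotation query returns nothing, a first step is to confirm the index is reachable and populated outside Grafana. A minimal sketch, assuming Elasticsearch on localhost:9200 and a logstash-style index pattern:

```bash
# Count documents in the index Grafana is expected to query
curl -s 'http://localhost:9200/logstash-*/_count?pretty'
```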

View File

@@ -18,22 +18,28 @@ change function parameters and much more. The editor can handle all types of gra
queries through the use of query references.
## Adding the data source
![](/img/docs/v2/add_Graphite.jpg)
1. Open the side menu by clicking the Grafana icon in the top header.
1. Open the side menu by clicking the the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select `Graphite` from the *Type* dropdown.
> NOTE: If you're not seeing the `Data Sources` link in your side menu it means that your current user does not have the `Admin` role for the current organization.
> NOTE: If this link is missing in the side menu it means that your current user does not have the `Admin` role for the current organization.
3. Click the `Add new` link in the top header.
4. Select `Graphite` from the dropdown.
Name | Description
------------ | -------------
*Name* | The data source name. This is how you refer to the data source in panels & queries.
*Default* | Default data source means that it will be pre-selected for new panels.
*Url* | The HTTP protocol, IP, and port of your graphite-web or graphite-api install.
*Access* | Proxy = access via Grafana backend, Direct = access directly from browser.
Name | The data source name, important that this is the same as in Grafana v1.x if you plan to import old dashboards.
Default | Default data source means that it will be pre-selected for new panels.
Url | The http protocol, ip and port of your graphite-web or graphite-api install.
Access | Proxy = access via Grafana backend, Direct = access directly from browser.
Proxy access means that the Grafana backend will proxy all requests from the browser, and send them on to the Data Source. This is useful because it can eliminate CORS (Cross Origin Site Resource) issues, as well as eliminate the need to disseminate authentication details to the Data Source to the browser.
Direct access is still supported because in some cases it may be useful to access a Data Source directly depending on the use case and topology of Grafana, the user, and the Data Source.
Proxy access means that the Grafana backend will proxy all requests from the browser, and send them on to the Data Source. This is useful because it can eliminate CORS (Cross Origin Site Resource) issues, as well as eliminate the need to disseminate authentication details to the browser.
## Metric editor
@@ -41,82 +47,42 @@ Proxy access means that the Grafana backend will proxy all requests from the bro
Click the ``Select metric`` link to start navigating the metric space. Once you start you can continue using the mouse
or keyboard arrow keys. You can select a wildcard and still continue.
{{< docs-imagebox img="/img/docs/v45/graphite_query1_still.png"
animated-gif="/img/docs/v45/graphite_query1.gif" >}}
![](/img/docs/animated_gifs/graphite_query1.gif)
### Functions
Click the plus icon to the right to add a function. You can search for the function or select it from the menu. Once
a function is selected it will be added and your focus will be in the text box of the first parameter. To later change
a parameter just click on it and it will turn into a text box. To delete a function click the function name followed
by the x icon.
{{< docs-imagebox img="/img/docs/v45/graphite_query2_still.png"
animated-gif="/img/docs/v45/graphite_query2.gif" >}}
![](/img/docs/animated_gifs/graphite_query2.gif)
### Optional parameters
Some functions like aliasByNode support an optional second argument. To add this parameter specify for example 3,-2 as the first parameter and the function editor will adapt and move the -2 to a second parameter. To remove the second optional parameter just click on it and leave it blank and the editor will remove it.
{{< docs-imagebox img="/img/docs/v45/graphite_query3_still.png"
animated-gif="/img/docs/v45/graphite_query3.gif" >}}
### Nested Queries
You can reference queries by the row “letter” that they're on (similar to Microsoft Excel). If you add a second query to a graph, you can reference the first query simply by typing in #A. This provides an easy and convenient way to build compounded queries.
{{< docs-imagebox img="/img/docs/v45/graphite_nested_queries_still.png"
animated-gif="/img/docs/v45/graphite_nested_queries.gif" >}}
![](/img/docs/animated_gifs/func_editor_optional_params.gif)
## Point consolidation
All Graphite metrics are consolidated so that Graphite doesn't return more data points than there are pixels in the graph. By default,
All Graphite metrics are consolidated so that Graphite doesn't return more data points than there are pixels in the graph. By default
this consolidation is done using the `avg` function. You can control how Graphite consolidates metrics by adding the Graphite consolidateBy function.
> *Notice* This means that legend summary values (max, min, total) cannot all be correct at the same time. They are calculated
> client side by Grafana, and depending on your consolidation function only one or two can be correct at the same time.
## Templating
You can create a template variable in Grafana and have that variable filled with values from any Graphite metric exploration query.
You can then use this variable in your Graphite queries, either as part of a metric path or as arguments to functions.
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place.
Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
Checkout the [Templating]({{< relref "reference/templating.md" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
### Query variable
The query you specify in the query field should be a metric find type of query. For example, a query like `prod.servers.*` will fill the
variable with all possible values that exist in the wildcard position.
For example a query like `prod.servers.*` will fill the variable with all possible
values that exists in the wildcard position.
You can also create nested variables that use other variables in their definition. For example
`apps.$app.servers.*` uses the variable `$app` in its query definition.
### Variable usage
You can use a variable in a metric node path or as a parameter to a function.
![](/img/docs/v2/templated_variable_parameter.png)
There are two syntaxes:
- `$<varname>` Example: apps.frontend.$server.requests.count
- `[[varname]]` Example: apps.frontend.[[server]].requests.count
Why two ways? The first syntax is easier to read and write but does not allow you to use a variable in the middle of a word. Use
the second syntax in expressions like `my.server[[serverNumber]].count`.
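For example, a variable can also be supplied as a function argument; here a hypothetical `$interval` variable parameterizes the summarize function:
```
summarize(apps.frontend.$server.requests.count, '$interval', 'sum')
```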
Example:
[Graphite Templated Dashboard](http://play.grafana.org/dashboard/db/graphite-templated-nested)
## Annotations
[Annotations]({{< relref "reference/annotations.md" >}}) allow you to overlay rich event information on top of graphs. You add annotation
queries via the Dashboard menu / Annotations view.
Graphite supports two ways to query annotations: a regular metric query, for which you use the `Graphite query` textbox, and a Graphite events query, for which you use the `Graphite event tags` textbox to
specify a tag or wildcard (leaving it empty should also work).
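As a sketch, a regular metric query used as an annotation source might look like the following (the metric path is hypothetical); for the events style you would instead enter a tag such as `deployment` in the `Graphite event tags` textbox:
```
aliasByNode(events.deployments.*, 2)
```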
View File
@@ -30,5 +30,5 @@ The following datasources are officially supported:
## Data source plugins
Since Grafana 3.0 you can install data sources as plugins. Check out [Grafana.com](https://grafana.com/plugins) for more data sources.
View File
@@ -15,33 +15,33 @@ weight = 3
Grafana ships with a very feature-rich data source plugin for InfluxDB, supporting a rich query editor, annotation and templating queries.
## Adding the data source
1. Open the side menu by clicking the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select *InfluxDB* from the *Type* dropdown.
> NOTE: If you're not seeing the `Data Sources` link in your side menu it means that your current user does not have the `Admin` role for the current organization.
Name | Description
------------ | -------------
*Name* | The data source name. This is how you refer to the data source in panels & queries.
*Default* | Default data source means that it will be pre-selected for new panels.
*Url* | The http protocol, ip and port of your influxdb api (influxdb api port is by default 8086)
*Access* | Proxy = access via Grafana backend, Direct = access directly from browser.
*Database* | Name of your influxdb database
*User* | Name of your database user
*Password* | Database user's password
### Proxy vs Direct access
> Proxy access means that the Grafana backend will proxy all requests from the browser, and send them on to the Data Source. This is useful because it can eliminate CORS (Cross-Origin Resource Sharing) issues, as well as eliminate the need to expose authentication details for the Data Source to the browser.
> Direct access is still supported because in some cases it may be useful to access a Data Source directly, depending on the use case and topology of Grafana, the user, and the Data Source.
Proxy access means that the Grafana backend will proxy all requests from the browser. So requests to InfluxDB will be channeled through
`grafana-server`. This means that the URL you specify needs to be accessible from the server you are running Grafana on. Proxy access
mode is also more secure as the username & password will never reach the browser.
## Query Editor
{{< docs-imagebox img="/img/docs/v45/influxdb_query_still.png" class="docs-image--no-shadow" animated-gif="/img/docs/v45/influxdb_query.gif" >}}
You find the InfluxDB editor in the metrics tab in Graph or Singlestat panel's edit mode. You enter edit mode by clicking the
panel title, then edit. The editor allows you to select metrics and tags.
@@ -57,8 +57,10 @@ will automatically adjust the filter tag condition to use the InfluxDB regex mat
### Field & Aggregation functions
In the `SELECT` row you can specify what fields and functions you want to use. If you have a
group by time you need an aggregation function. Some functions like derivative require an aggregation function.
The editor tries to simplify and unify this part of the query. For example:
![](/img/docs/influxdb/select_editor.png)
The above will generate the following InfluxDB `SELECT` clause:
@@ -86,8 +88,8 @@ You can switch to raw query mode by clicking hamburger icon and then `Switch edi
- $m = replaced with measurement name
- $measurement = replaced with measurement name
- $col = replaced with column name
- $tag_exampletag = replaced with the value of the `exampletag` tag. The syntax is `$tag_yourTagName` (must start with `$tag_`). To use your tag as an alias in the ALIAS BY field then the tag must be used to group by in the query.
- You can also use [[tag_hostname]] pattern replacement syntax. For example, in the ALIAS BY field using this text `Host: [[tag_hostname]]` would substitute in the `hostname` tag value for each legend value and an example legend value would be: `Host: server1`.
### Table query / raw data
@@ -98,21 +100,11 @@ change the option `Format As` to `Table` if you want to show raw data in the `Ta
## Templating
You can create a template variable in Grafana and have that variable filled with values from any InfluxDB metric exploration query.
You can then use this variable in your InfluxDB metric queries.
Instead of hard-coding things like server, application and sensor name in your metric queries, you can use variables in their place.
Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
Check out the [Templating]({{< relref "reference/templating.md" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
### Query variable
If you add a template variable of the type `Query` you can write an InfluxDB exploration (meta data) query. These queries can
return things like measurement names, key names or key values.
For example you can have a variable that contains all values for tag `hostname` if you specify a query like this in the templating variable *Query* setting.
```sql
SHOW TAG VALUES WITH KEY = "hostname"
```
@@ -124,46 +116,12 @@ the hosts variable only show hosts from the current selected region with a query
SHOW TAG VALUES WITH KEY = "hostname" WHERE region =~ /$region/
```
You can fetch key names for a given measurement.
> Always use `regex values` or `regex wildcard` for All format or multi select format.
```sql
SHOW TAG KEYS [FROM <measurement_name>]
```
If you have a variable with key names you can use this variable in a group by clause. This will allow you to change group by using the variable dropdown at the top
of the dashboard.
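A sketch of that usage, assuming a variable named `$groupby` filled by the key-names query above and a hypothetical `cpu` measurement:
```sql
SELECT mean("value") FROM "cpu" WHERE $timeFilter GROUP BY time($__interval), "$groupby"
```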
### Using variables in queries
There are two syntaxes:
`$<varname>` Example:
```sql
SELECT mean("value") FROM "logins" WHERE "hostname" =~ /^$host$/ AND $timeFilter GROUP BY time($__interval), "hostname"
```
`[[varname]]` Example:
```sql
SELECT mean("value") FROM "logins" WHERE "hostname" =~ /^[[host]]$/ AND $timeFilter GROUP BY time($__interval), "hostname"
```
Why two ways? The first syntax is easier to read and write but does not allow you to use a variable in the middle of a word. When the *Multi-value* or *Include all value*
options are enabled, Grafana converts the labels from plain text to a regex compatible string, which means you have to use `=~` instead of `=`.
Example Dashboard:
[InfluxDB Templated Dashboard](http://play.grafana.org/dashboard/db/influxdb-templated-queries)
### Ad hoc filters variable
InfluxDB supports the special `Ad hoc filters` variable type. This variable allows you to specify any number of key/value filters on the fly. These filters will automatically
be applied to all your InfluxDB queries.
![](/img/docs/influxdb/templating_simple_ex1.png)
## Annotations
[Annotations]({{< relref "reference/annotations.md" >}}) allow you to overlay rich event information on top of graphs. You add annotation
queries via the Dashboard menu / Annotations view.
An example query:
@@ -171,8 +129,4 @@ An example query:
SELECT title, description from events WHERE $timeFilter order asc
```
For InfluxDB you need to enter a query like in the above example. You need to include the `where $timeFilter`
part. If you only select one column you will not need to enter anything in the column mapping fields. The
Tags field can be a comma separated string.
View File
@@ -1,176 +0,0 @@
+++
title = "Using MySQL in Grafana"
description = "Guide for using MySQL in Grafana"
keywords = ["grafana", "mysql", "guide"]
type = "docs"
[menu.docs]
name = "MySQL"
parent = "datasources"
weight = 7
+++
# Using MySQL in Grafana
> Only available in Grafana v4.3+.
Grafana ships with a built-in MySQL data source plugin that allows you to query and visualize
data from a MySQL compatible database.
## Adding the data source
1. Open the side menu by clicking the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select *MySQL* from the *Type* dropdown.
### Database User Permissions (Important!)
The database user you specify when you add the data source should only be granted SELECT permissions on
the specified database & tables you want to query. Grafana does not validate that the query is safe. The query
could include any SQL statement. For example, statements like `USE otherdb;` and `DROP TABLE user;` would be
executed. To protect against this we **highly** recommend you create a specific MySQL user with restricted permissions.
Example:
```sql
CREATE USER 'grafanaReader' IDENTIFIED BY 'password';
GRANT SELECT ON mydatabase.mytable TO 'grafanaReader';
```
You can use wildcards (`*`) in place of database or table if you want to grant access to more databases and tables.
## Macros
To simplify syntax and to allow for dynamic parts, like date range filters, the query can contain macros.
Macro example | Description
------------ | -------------
*$__timeFilter(dateColumn)* | Will be replaced by a time range filter using the specified column name. For example, *dateColumn > FROM_UNIXTIME(1494410783) AND dateColumn < FROM_UNIXTIME(1494497183)*
We plan to add many more macros. If you have suggestions for what macros you would like to see, please [open an issue](https://github.com/grafana/grafana) in our GitHub repo.
The query editor has a link named `Generated SQL` that shows up after a query has been executed, while in panel edit mode. Click on it and it will expand and show the raw interpolated SQL string that was executed.
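For example, a raw query using the macro might look like this (table and column names are hypothetical); the `Generated SQL` link would then show the expanded `FROM_UNIXTIME(...)` range filter from the table above in its place:
```sql
SELECT
  UNIX_TIMESTAMP(created_at) as time_sec,
  load_avg as value,
  'load' as metric
FROM server_stats
WHERE $__timeFilter(created_at)
ORDER BY created_at ASC
```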
## Table queries
If the `Format as` query option is set to `Table` then you can basically do any type of SQL query. The table panel will automatically show the results of whatever columns & rows your query returns.
Query editor with example query:
{{< docs-imagebox img="/img/docs/v45/mysql_table_query.png" >}}
The query:
```sql
SELECT
title as 'Title',
user.login as 'Created By' ,
dashboard.created as 'Created On'
FROM dashboard
INNER JOIN user on user.id = dashboard.created_by
WHERE $__timeFilter(dashboard.created)
```
You can control the name of the Table panel columns by using regular `as ` SQL column selection syntax.
The resulting table panel:
![](/img/docs/v43/mysql_table.png)
### Time series queries
If you set `Format as` to `Time series`, for use in Graph panel for example, then there are some requirements for
what your query returns.
- Must be a column named `time_sec` representing a unix epoch in seconds.
- Must be a column named `value` representing the time series value.
- Must be a column named `metric` representing the time series name.
Example:
```sql
SELECT
min(UNIX_TIMESTAMP(time_date_time)) as time_sec,
max(value_double) as value,
metric1 as metric
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY metric1, UNIX_TIMESTAMP(time_date_time) DIV 300
ORDER BY time_sec asc
```
Currently, there is no support for a dynamic group by time based on time range & panel width.
This is something we plan to add.
## Templating
This feature is currently available in the nightly builds and will be included in the 5.0.0 release.
Instead of hard-coding things like server, application and sensor name in your metric queries, you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data being displayed in your dashboard.
Check out the [Templating]({{< relref "reference/templating.md" >}}) documentation for an introduction to the templating feature and the different types of template variables.
### Query Variable
If you add a template variable of the type `Query`, you can write a MySQL query that can
return things like measurement names, key names or key values that are shown as a dropdown select box.
For example, you can have a variable that contains all values for the `hostname` column in a table if you specify a query like this in the templating variable *Query* setting.
```sql
SELECT hostname FROM my_host
```
A query can return multiple columns and Grafana will automatically create a list from them. For example, the query below will return a list with values from `hostname` and `hostname2`.
```sql
SELECT my_host.hostname, my_other_host.hostname2 FROM my_host JOIN my_other_host ON my_host.city = my_other_host.city
```
Another option is a query that can create a key/value variable. The query should return two columns that are named `__text` and `__value`. The `__text` column value should be unique (if it is not unique then the first value is used). The options in the dropdown will have a text and value that allows you to have a friendly name as text and an id as the value. An example query with `hostname` as the text and `id` as the value:
```sql
SELECT hostname AS __text, id AS __value FROM my_host
```
You can also create nested variables. For example if you had another variable named `region`. Then you could have
the hosts variable only show hosts from the current selected region with a query like this (if `region` is a multi-value variable then use the `IN` comparison operator rather than `=` to match against multiple values):
```sql
SELECT hostname FROM my_host WHERE region IN($region)
```
### Using Variables in Queries
Template variables are quoted automatically, so if the variable is a string value, do not wrap it in quotes in where clauses. If the variable is a multi-value variable, use the `IN` comparison operator rather than `=` to match against multiple values.
There are two syntaxes:
`$<varname>` Example with a template variable named `hostname`:
```sql
SELECT
UNIX_TIMESTAMP(atimestamp) as time_sec,
aint as value,
avarchar as metric
FROM my_table
WHERE $__timeFilter(atimestamp) and hostname in($hostname)
ORDER BY atimestamp ASC
```
`[[varname]]` Example with a template variable named `hostname`:
```sql
SELECT
UNIX_TIMESTAMP(atimestamp) as time_sec,
aint as value,
avarchar as metric
FROM my_table
WHERE $__timeFilter(atimestamp) and hostname in([[hostname]])
ORDER BY atimestamp ASC
```
## Alerting
Time series queries should work in alerting conditions. Table formatted queries are not yet supported in alert rule
conditions.
View File
@@ -3,7 +3,7 @@ title = "Using OpenTSDB in Grafana"
description = "Guide for using OpenTSDB in Grafana"
keywords = ["grafana", "opentsdb", "guide"]
type = "docs"
aliases = ["/datasources/opentsdb", "docs/features/opentsdb"]
aliases = ["/datasources/opentsdb"]
[menu.docs]
name = "OpenTSDB"
parent = "datasources"
@@ -12,79 +12,59 @@ weight = 5
# Using OpenTSDB in Grafana
Grafana ships with advanced support for OpenTSDB.
{{< docs-imagebox img="/img/docs/v2/add_OpenTSDB.png" max-width="14rem" >}}
## Adding the data source
The newest release of Grafana adds additional functionality when using an OpenTSDB Data source.
1. Open the side menu by clicking the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select *OpenTSDB* from the *Type* dropdown.
> NOTE: If you're not seeing the `Data Sources` link in your side menu it means that your current user does not have the `Admin` role for the current organization.
Name | Description
------------ | -------------
*Name* | The data source name. This is how you refer to the data source in panels & queries.
*Default* | Default data source means that it will be pre-selected for new panels.
*Url* | The http protocol, ip and port of your opentsdb server (default port is usually 4242)
*Access* | Proxy = access via Grafana backend, Direct = access directly from browser.
*Version* | Version = opentsdb version, either <=2.1 or 2.2
*Resolution* | Metrics from opentsdb may have datapoints with either second or millisecond resolution.
## Query editor
Open a graph in edit mode by clicking the title. The query editor will differ if the datasource has version <=2.1 or = 2.2.
In the former version, only tags can be used to query OpenTSDB. But in the latter version, filters as well as tags
can be used to query OpenTSDB. Fill Policy is also introduced in OpenTSDB 2.2.
> Note: While using an OpenTSDB 2.2 datasource, make sure you use either Filters or Tags, as they are mutually exclusive. If used together, they might give you weird results.
![](/img/docs/v43/opentsdb_query_editor.png)
### Auto complete suggestions
As soon as you start typing metric names, tag names and tag values, you should see highlighted auto complete suggestions for them.
The autocomplete only works if the OpenTSDB suggest api is enabled.
## Templating queries
Instead of hard-coding things like server, application and sensor name in your metric queries, you can use variables in their place.
Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
Check out the [Templating]({{< relref "reference/templating.md" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
### Query variable
Grafana's OpenTSDB data source supports template variable queries. This means you can create template variables
that fetch the values from OpenTSDB. For example, metric names, tag names, or tag values. The query editor is also enhanced to limit tags by metric.
When using OpenTSDB with a template variable of `query` type you can use the following syntax for lookup.
Query | Description
------------ | -------------
*metrics(prefix)* | Returns metric names with specific prefix (can be empty)
*tag_names(cpu)* | Return tag names (i.e. keys) for a specific cpu metric
*tag_values(cpu, hostname)* | Return tag values for metric cpu and tag key hostname
*suggest_tagk(prefix)* | Return tag names (i.e. keys) for all metrics with specific prefix (can be empty)
*suggest_tagv(prefix)* | Return tag values for all metrics with specific prefix (can be empty)
If you do not see template variables being populated in `Preview of values` section, you need to enable
`tsd.core.meta.enable_realtime_ts` in the OpenTSDB server settings. Also, to populate metadata of
the existing time series data in OpenTSDB, you need to run `tsdb uid metasync` on the OpenTSDB server.
### Nested Templating
One template variable can be used to filter tag values for another template variable. The order of the parameters matters in the tag_values function: the first parameter is the metric name,
the second parameter is the tag key for which you need to find tag values, and after that come all other dependent template variables.
Some examples are mentioned below to make nested template queries work successfully.
Query | Description
------------ | -------------
*tag_values(cpu, hostname, env=$env)* | Return tag values for cpu metric, selected env tag value and tag key hostname
*tag_values(cpu, hostname, env=$env, region=$region)* | Return tag values for cpu metric, selected env tag value, selected region tag value and tag key hostname
For details on OpenTSDB metric queries checkout the official [OpenTSDB documentation](http://opentsdb.net/docs/build/html/index.html)
> Note: This is required for the OpenTSDB `lookup` api to work.
View File
@@ -1,186 +0,0 @@
+++
title = "Using PostgreSQL in Grafana"
description = "Guide for using PostgreSQL in Grafana"
keywords = ["grafana", "postgresql", "guide"]
type = "docs"
[menu.docs]
name = "PostgreSQL"
parent = "datasources"
weight = 7
+++
# Using PostgreSQL in Grafana
Grafana ships with a built-in PostgreSQL data source plugin that allows you to query and visualize data from a PostgreSQL compatible database.
## Adding the data source
1. Open the side menu by clicking the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select *PostgreSQL* from the *Type* dropdown.
### Database User Permissions (Important!)
The database user you specify when you add the data source should only be granted SELECT permissions on
the specified database & tables you want to query. Grafana does not validate that the query is safe. The query
could include any SQL statement. For example, statements like `DELETE FROM user;` and `DROP TABLE user;` would be
executed. To protect against this we **highly** recommend you create a specific PostgreSQL user with restricted permissions.
Example:
```sql
CREATE USER grafanareader WITH PASSWORD 'password';
GRANT USAGE ON SCHEMA schema TO grafanareader;
GRANT SELECT ON schema.table TO grafanareader;
```
Make sure the user does not get any unwanted privileges from the public role.
## Macros
To simplify syntax and to allow for dynamic parts, like date range filters, the query can contain macros.
Macro example | Description
------------ | -------------
*$__time(dateColumn)* | Will be replaced by an expression to rename the column to `time`. For example, *dateColumn as time*
*$__timeSec(dateColumn)* | Will be replaced by an expression to rename the column to `time` and converting the value to unix timestamp. For example, *extract(epoch from dateColumn) as time*
*$__timeFilter(dateColumn)* | Will be replaced by a time range filter using the specified column name. For example, *dateColumn > to_timestamp(1494410783) AND dateColumn < to_timestamp(1494497183)*
*$__timeFrom()* | Will be replaced by the start of the currently active time selection. For example, *to_timestamp(1494410783)*
*$__timeTo()* | Will be replaced by the end of the currently active time selection. For example, *to_timestamp(1494497183)*
*$__timeGroup(dateColumn,'5m')* | Will be replaced by an expression usable in GROUP BY clause. For example, `(extract(epoch from "dateColumn")/extract(epoch from '5m'::interval))::int*extract(epoch from '5m'::interval)`
*$__unixEpochFilter(dateColumn)* | Will be replaced by a time range filter using the specified column name with times represented as unix timestamp. For example, *dateColumn > 1494410783 AND dateColumn < 1494497183*
*$__unixEpochFrom()* | Will be replaced by the start of the currently active time selection as unix timestamp. For example, *1494410783*
*$__unixEpochTo()* | Will be replaced by the end of the currently active time selection as unix timestamp. For example, *1494497183*
We plan to add many more macros. If you have suggestions for what macros you would like to see, please [open an issue](https://github.com/grafana/grafana) in our GitHub repo.
The query editor has a link named `Generated SQL` that shows up after a query has been executed, while in panel edit mode. Click on it and it will expand and show the raw interpolated SQL string that was executed.
## Table queries
If the `Format as` query option is set to `Table` then you can basically do any type of SQL query. The table panel will automatically show the results of whatever columns & rows your query returns.
Query editor with example query:
![](/img/docs/v46/postgres_table_query.png)
The query:
```sql
SELECT
title as "Title",
"user".login as "Created By",
dashboard.created as "Created On"
FROM dashboard
INNER JOIN "user" on "user".id = dashboard.created_by
WHERE $__timeFilter(dashboard.created)
```
You can control the name of the Table panel columns by using regular `as ` SQL column selection syntax.
The resulting table panel:
![](/img/docs/v46/postgres_table.png)
### Time series queries
If you set `Format as` to `Time series`, for use in a Graph panel for example, then the query must return a column named `time` containing either a SQL datetime or any numeric datatype representing unix epoch in seconds.
Any column except `time` and `metric` is treated as a value column.
You may return a column named `metric` that is used as metric name for the value column.
Example with `metric` column:
```sql
SELECT
$__timeGroup(time_date_time,'5m') as time,
min(value_double),
'min' as metric
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY time
ORDER BY time
```
Example with multiple columns:
```sql
SELECT
$__timeGroup(time_date_time,'5m') as time,
min(value_double) as min_value,
max(value_double) as max_value
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY time
ORDER BY time
```
## Templating
Instead of hard-coding things like server, application and sensor name in your metric queries, you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data being displayed in your dashboard.
Check out the [Templating]({{< relref "reference/templating.md" >}}) documentation for an introduction to the templating feature and the different types of template variables.
### Query Variable
If you add a template variable of the type `Query`, you can write a PostgreSQL query that can
return things like measurement names, key names or key values that are shown as a dropdown select box.
For example, you can have a variable that contains all values for the `hostname` column in a table if you specify a query like this in the templating variable *Query* setting.
```sql
SELECT hostname FROM host
```
A query can return multiple columns and Grafana will automatically create a list from them. For example, the query below will return a list with values from `hostname` and `hostname2`.
```sql
SELECT host.hostname, other_host.hostname2 FROM host JOIN other_host ON host.city = other_host.city
```
Another option is a query that can create a key/value variable. The query should return two columns that are named `__text` and `__value`. The `__text` column value should be unique (if it is not unique then the first value is used). The options in the dropdown will have a text and value that allows you to have a friendly name as text and an id as the value. An example query with `hostname` as the text and `id` as the value:
```sql
SELECT hostname AS __text, id AS __value FROM host
```
You can also create nested variables. For example if you had another variable named `region`. Then you could have
the hosts variable only show hosts from the current selected region with a query like this (if `region` is a multi-value variable then use the `IN` comparison operator rather than `=` to match against multiple values):
```sql
SELECT hostname FROM host WHERE region IN($region)
```
### Using Variables in Queries
Template variables are quoted automatically, so if the variable is a string value, do not wrap it in quotes in where clauses. If the variable is a multi-value variable, use the `IN` comparison operator rather than `=` to match against multiple values.
There are two syntaxes:
`$<varname>` Example with a template variable named `hostname`:
```sql
SELECT
atimestamp as time,
aint as value
FROM table
WHERE $__timeFilter(atimestamp) and hostname in($hostname)
ORDER BY atimestamp ASC
```
`[[varname]]` Example with a template variable named `hostname`:
```sql
SELECT
atimestamp as time,
aint as value
FROM table
WHERE $__timeFilter(atimestamp) and hostname in([[hostname]])
ORDER BY atimestamp ASC
```
## Alerting
Time series queries should work in alerting conditions. Table formatted queries are not yet supported in alert rule
conditions.
View File
@@ -10,89 +10,74 @@ parent = "datasources"
weight = 2
+++
# Using Prometheus in Grafana
Grafana includes built-in support for Prometheus.
## Adding the data source to Grafana
1. Open the side menu by clicking the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select `Prometheus` from the *Type* dropdown.
> NOTE: If you're not seeing the `Data Sources` link in your side menu it means that your current user does not have the `Admin` role for the current organization.
## Data source options
Name | Description
------------ | -------------
*Name* | The data source name. This is how you refer to the data source in panels & queries.
*Default* | Default data source means that it will be pre-selected for new panels.
*Url* | The http protocol, ip and port of your Prometheus server (default port is usually 9090)
*Access* | Proxy = access via Grafana backend, Direct = access directly from browser.
*Basic Auth* | Enable basic authentication to the Prometheus data source.
*User* | Name of your Prometheus user
*Password* | Database user's password
*Scrape interval* | This will be used as a lower limit for the Prometheus step query parameter. Default value is 15s.
> Proxy access means that the Grafana backend will proxy all requests from the browser, and send them on to the Data Source. This is useful because it can eliminate CORS (Cross-Origin Resource Sharing) issues, as well as eliminate the need to expose authentication details for the Data Source to the browser.
> Direct access is still supported because in some cases it may be useful to access a Data Source directly, depending on the use case and topology of Grafana, the user, and the Data Source.
## Query editor
Open a graph in edit mode by clicking the title > Edit (or by pressing the `e` key while hovering over the panel).
{{< docs-imagebox img="/img/docs/v45/prometheus_query_editor_still.png"
animated-gif="/img/docs/v45/prometheus_query_editor.gif" >}}
For details on Prometheus metric queries check out the Prometheus documentation
- [Query Metrics - Prometheus documentation](http://prometheus.io/docs/querying/basics/).
The query editor has the following options.
Name | Description
------- | --------
*Query expression* | Prometheus query expression, check out the [Prometheus documentation](http://prometheus.io/docs/querying/basics/).
*Legend format* | Controls the name of the time series, using name or pattern. For example `{{hostname}}` will be replaced with label value for the label `hostname`.
*Min step* | Set a lower limit for the Prometheus step option. Step controls how big the jumps are when the Prometheus query engine performs range queries. Sadly there is no official prometheus documentation to link to for this very important option.
*Resolution* | Controls the step option. Small steps create high-resolution graphs but can be slow over larger time ranges, lowering the resolution can speed things up. `1/2` will try to set step option to generate 1 data point for every other pixel. A value of `1/10` will try to set step option so there is a data point every 10 pixels.
*Metric lookup* | Search for metric names in this input field.
*Format as* | **(New in v4.3)** Switch between Table & Time series. Table format will only work in the Table panel.
## Templating
Instead of hard-coding things like server, application and sensor name in your metric queries, you can use variables in their place.
Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
> Note: This query syntax is incompatible with versions before 2.6; if you previously specified something like `foo.*`, change it to `metrics(foo.*)`.
Check out the [Templating]({{< relref "reference/templating.md" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
You can create a template variable in Grafana and have that variable filled with values from any Prometheus metric exploration query.
You can then use this variable in your Prometheus metric queries.
### Query variable
Variable of the type *Query* allows you to query Prometheus for a list of metrics, labels or label values. For example, you can have a variable that contains all values for the label `hostname` if you specify a query like this
in the templating edit view:
```sql
label_values(hostname)
```
The Prometheus data source plugin provides the following functions you can use in the `Query` input field.
Name | Description
---- | --------
*label_values(label)* | Returns a list of label values for the `label` in every metric.
*label_values(metric, label)* | Returns a list of label values for the `label` in the specified metric.
*metrics(metric)* | Returns a list of metrics matching the specified `metric` regex.
*query_result(query)* | Returns a list of Prometheus query result for the `query`.
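For instance, a sketch of a variable query that uses `query_result` to select the five busiest instances (the metric name is hypothetical):
```sql
query_result(topk(5, sum(rate(http_requests_total[5m])) by (instance)))
```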
You can also use raw queries & regular expressions to extract anything you might need.
For details of *metric names*, *label names* and *label values*, please refer to the [Prometheus documentation](http://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
### Using variables in queries
There are two syntaxes:
- `$<varname>` Example: rate(http_requests_total{job=~"$job"}[5m])
- `[[varname]]` Example: rate(http_requests_total{job=~"[[job]]"}[5m])
Why two ways? The first syntax is easier to read and write but does not allow you to use a variable in the middle of a word. When the *Multi-value* or *Include all value*
options are enabled, Grafana converts the labels from plain text to a regex compatible string, which means you have to use `=~` instead of `=`. For example, use `ALERTS{instance=~$instance}` instead of `ALERTS{instance=$instance}`.
## Annotations
[Annotations]({{< relref "reference/annotations.md" >}}) allow you to overlay rich event information on top of graphs. You add annotation
queries via the Dashboard menu / Annotations view.
Prometheus supports two ways to query annotations.
- A regular metric query
- A Prometheus query for pending and firing alerts (for details see [Inspecting alerts during runtime](https://prometheus.io/docs/alerting/rules/#inspecting-alerts-during-runtime))
The step option is useful to limit the number of events returned from your query.
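As a sketch, an alert-based annotation query might look like this (the alert name is hypothetical):
```
ALERTS{alertname="InstanceDown", alertstate="firing"}
```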
View File
@@ -11,20 +11,26 @@ weight = 20
# Grafana TestData
> NOTE: This plugin is disabled by default.
The purpose of this data source is to make it easier to create fake data for any panel.
Using `Grafana TestData` you can build your own time series and have any panel render it.
This makes it much easier to verify functionality since the data can be shared very easily.
## Enable
`Grafana TestData` is not enabled by default. To enable it you have to go to `/plugins/testdata/edit` and click the enable button to enable.
## Create mock data.
Once `Grafana TestData` is enabled you can use it as a data source in any metric panel.
![](/img/docs/v41/test_data_add.png)
## Scenarios
You can choose the scenario that you want rendered from the drop-down menu. If you have scenarios that you think should be added, please add them to `` and submit a pull request.
## CSV
The comma separated values scenario is the most powerful one since it lets you create any kind of graph you like.
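For example, entering a hypothetical list of values like the one below will render them as a simple series:
```
1,20,90,30,5,0
```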
@@ -32,6 +38,7 @@ Once you provided the numbers `Grafana TestData` will distribute them evenly bas
![](/img/docs/v41/test_data_csv_example.png)
## Dashboards
`Grafana TestData` also contains some dashboards with examples, available at `/plugins/testdata/edit`.
View File
@@ -1,27 +0,0 @@
+++
title = "Alert List"
keywords = ["grafana", "alert list", "documentation", "panel", "alertlist"]
type = "docs"
aliases = ["/reference/alertlist/"]
[menu.docs]
name = "Alert list"
parent = "panels"
weight = 4
+++
# Alert List Panel
{{< docs-imagebox img="/img/docs/v45/alert-list-panel.png" max-width="850px" >}}
The alert list panel allows you to display your dashboards' alerts. The list can be configured to show the current state or recent state changes. You can read more about alerts [here](http://docs.grafana.org/alerting/rules).
## Alert List Options
{{< docs-imagebox img="/img/docs/v45/alert-list-options.png" max-width="600px" class="docs-image--no-shadow docs-image--right" >}}
1. **Show**: Lets you choose between current state or recent state changes.
2. **Max Items**: Sets the maximum number of items in the list.
3. **Sort Order**: Lets you sort your list alphabetically (asc/desc) or by importance.
4. **Alerts From This Dashboard**: Shows alerts only from the dashboard the alert list is in.
5. **State Filter**: Here you can filter your list by one or more parameters.
View File
@@ -2,7 +2,6 @@
title = "Dashboard List"
keywords = ["grafana", "dashboard list", "documentation", "panel", "dashlist"]
type = "docs"
aliases = ["/reference/dashlist/"]
[menu.docs]
name = "Dashboard list"
parent = "panels"
@@ -12,25 +11,42 @@ weight = 4
# Dashboard List Panel
{{< docs-imagebox img="/img/docs/v45/dashboard-list-panels.png" max-width="850px">}}
The dashboard list panel allows you to display dynamic links to other dashboards. The list can be configured to use starred dashboards, recently viewed dashboards, a search query and/or dashboard tags.
<img class="no-shadow" src="/img/docs/v2/dashboard_list_panels.png">
> On each dashboard load, the dashlist panel will re-query the dashboard list, always providing the most up to date results.
## Dashboard List Options
{{< docs-imagebox img="/img/docs/v45/dashboard-list-options.png" class="docs-image--no-shadow docs-image--right">}}
1. **Starred**: The starred dashboard selection displays starred dashboards in alphabetical order.
2. **Recently Viewed**: The recently viewed dashboard selection displays recently viewed dashboards in alphabetical order.
3. **Search**: The search dashboard selection displays dashboards by search query or tag(s).
4. **Show Headings**: When show headings is ticked, the chosen list selection (Starred, Recently Viewed, Search) is shown as a heading.
5. **Max Items**: Sets the maximum number of items in the list.
6. **Query**: Here is where you enter the query you want to search by. Queries are case-insensitive, and partial values are accepted.
7. **Tags**: Here is where you enter the tag(s) you want to search by. Note that existing tags will not appear as you type, and *are* case sensitive. To see a list of existing tags, you can always return to the dashboard, open the Dashboard Picker at the top and click the `tags` link in the search bar.
<img class="no-shadow" src="/img/docs/v2/dashboard_list_config_starred.png">
<div class="clearfix"></div>
## Mode: Search Dashboards
The panel may be configured to search by either string query or tag(s). On dashboard load, the dashlist panel will re-query the dashboard list, always providing the most up to date results.
To configure dashboard list in this manner, select `search` from the Mode select box. When selected, the Search Options section will appear.
Name | Description
------------ | -------------
Mode | Set search or starred mode
Query | If in search mode specify the search query
Tags | if in search mode specify dashboard tags to search for
Limit number to | Specify the maximum number of dashboards
### Search by string
To search by a string, enter a search query in the `Search Options: Query` field. Queries are case-insensitive, and partial values are accepted.
<img class="no-shadow" src="/img/docs/v2/dashboard_list_config_string.png">
### Search by tag
To search by one or more tags, enter your selection in the `Search Options: Tags:` field. Note that existing tags will not appear as you type, and *are* case sensitive. To see a list of existing tags, you can always return to the dashboard, open the Dashboard Picker at the top and click `tags` link in the search bar.
<img class="no-shadow" src="/img/docs/v2/dashboard_list_config_tags.png">
> When multiple tags and strings appear, the dashboard list will display those matching ALL conditions.
View File
@@ -2,7 +2,6 @@
title = "Graph Panel"
keywords = ["grafana", "graph panel", "documentation", "guide", "graph"]
type = "docs"
aliases = ["/reference/graph/"]
[menu.docs]
name = "Graph"
parent = "panels"
@@ -11,38 +10,35 @@ weight = 1
# Graph Panel
{{< docs-imagebox img="/img/docs/v45/graph_overview.png" class="docs-image--no-shadow" max-width="850px" >}}
The main panel in Grafana is simply named Graph. It provides a very rich set of graphing options.
1. Clicking the title for a panel exposes a menu. The `edit` option opens additional configuration
options for the panel.
2. Click to open color & axis selection.
3. Click to only show this series. Shift/Ctrl + click to hide series.
## General
{{< docs-imagebox img="/img/docs/v43/graph_general.png" max-width= "900px" >}}
The general tab allows customization of a panel's appearance and menu options.
### General Options
- **Title** - The panel title on the dashboard
- **Span** - The panel width in columns
- **Height** - The panel contents height in pixels
### Drilldown / detail link
The drilldown section allows adding dynamic links to the panel that can link to other dashboards
or URLs.
Each link has a title, a type and params. A link can be either a ``dashboard`` or an ``absolute`` link.
If it is a dashboard link, the `dashboard` value must be the name of a dashboard. If it is an
`absolute` link, the URL is the URL to the link.
``params`` allows adding additional URL params to the links. The format is the ``name=value`` with
multiple params separated by ``&``. Template variables can be added as values using ``$myvar``.
When linking to another dashboard that uses template variables, you can use ``var-myvar=value`` to
populate the template variable to a desired value from the link.
@@ -52,62 +48,51 @@ populate the template variable to a desired value from the link.
The metrics tab defines what series data and sources to render. Each datasource provides different
options.
## Axes
{{< docs-imagebox img="/img/docs/v43/graph_axes_grid_options.png" max-width= "900px" >}}
The Axes tab controls the display of axes, grids and legend. The **Left Y** and **Right Y** can be customized using:
- **Unit** - The display unit for the Y value
- **Scale** - The scale to use for the Y value, linear or logarithmic. (default linear)
- **Y-Min** - The minimum Y value. (default auto)
- **Y-Max** - The maximum Y value. (default auto)
- **Label** - The Y axis label (default "")
Axes can also be hidden by unchecking the appropriate box from **Show**.
### X-Axis Mode
There are three options:
- The default option is **Time** and means the x-axis represents time and that the data is grouped by time (for example, by hour or by minute).
- The **Series** option means that the data is grouped by series and not by time. The y-axis still represents the value.
{{< docs-imagebox img="/img/docs/v45/graph-x-axis-mode-series.png" max-width="700px">}}
- The **Histogram** option converts the graph into a histogram. A Histogram is a kind of bar chart that groups numbers into ranges, often called buckets or bins. Taller bars show that more data falls in that range. Histograms and buckets are described in more detail [here](http://docs.grafana.org/features/panels/heatmap/#histograms-and-buckets).
<img src="/img/docs/v43/heatmap_histogram.png" class="no-shadow">
### Legend
The legend can be hidden by unchecking the **Show** checkbox. If it's shown, it can be
displayed as a table of values by checking the **Table** checkbox. Series with no
values can be hidden from the legend using the **Hide empty** checkbox.
### Legend Values
Additional values can be shown alongside the legend names:
- **Total** - Sum of all values returned from metric query
- **Current** - Last value returned from the metric query
- **Min** - Minimum of all values returned from metric query
- **Max** - Maximum of all values returned from the metric query
- **Avg** - Average of all values returned from metric query
- **Decimals** - Controls how many decimals are displayed for legend values (and graph hover tooltips)
The legend values are calculated client side by Grafana and depend on what type of
aggregation or point consolidation your metric query is using. All the above legend values cannot
be correct at the same time. For example if you plot a rate like requests/second, this is probably
using average as aggregator, then the Total in the legend will not represent the total number of requests.
It is just the sum of all data points received by Grafana.
## Display styles
{{< docs-imagebox img="/img/docs/v43/graph_display_styles.png" max-width= "900px" >}}
Display styles control visual properties of the graph.
### Thresholds
@@ -117,49 +102,43 @@ the graph crosses a particular threshold.
### Chart Options
- **Bar** - Display values as a bar chart
- **Lines** - Display values as a line graph
- **Points** - Display points for values
### Line Options
- **Line Fill** - Amount of color fill for a series. 0 is none.
- **Line Width** - The width of the line for a series.
- **Null point mode** - How null values are displayed
- **Staircase line** - Draws adjacent points as staircase
### Multiple Series
If there are multiple series, they can be displayed as a group.
- **Stack** - Each series is stacked on top of another
- **Percent** - Each series is drawn as a percentage of the total of all series
If you have stack enabled, you can select what the mouse hover feature should show.
- Cumulative - Sum of series below plus the series you hover over
- Individual - Just the value for the series you hover over
### Rendering
- **Flot** - Render the graphs in the browser using Flot (default)
- **Graphite PNG** - Render the graph on the server using graphite's render API.
### Tooltip
- **All series** - Show all series on the same tooltip and an x crosshair to help follow all series
### Series Specific Overrides
This section allows a series to be rendered differently from the others. For example, one series can be given
a thicker line width to make it stand out.
#### Dashes Drawing Style
There is an option under Series overrides to draw lines as dashes. Set Dashes to the value True to override the line draw setting for a specific series.
## Time Range
The time range tab allows you to override the dashboard time range and specify a panel-specific time, either through a relative from-now time option or through a timeshift.
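For example, a panel with a relative time of `Last 1 hour` combined with a timeshift of `1d` shows the hour-long window ending 24 hours ago.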
{{< docs-imagebox img="/img/docs/v45/graph-time-range.png" max-width= "900px" >}}
+++
title = "Heatmap Panel"
description = "Heatmap panel documentation"
keywords = ["grafana", "heatmap", "panel", "documentation"]
type = "docs"
[menu.docs]
name = "Heatmap"
parent = "panels"
weight = 3
+++
# Heatmap Panel
![](/img/docs/v43/heatmap_panel_cover.jpg)
> New panel only available in Grafana v4.3+
The Heatmap panel allows you to view histograms over time. To fully understand and use this panel you need
to understand what Histograms are and how they are created. Read on below for a quick introduction to the
term Histogram.
## Histograms and buckets
A histogram is a graphical representation of the distribution of numerical data. You group values into buckets
(sometimes also called bins) and then count how many values fall into each bucket. Instead
of graphing the actual values, you graph the buckets. Each bar represents a bucket,
and the bar height represents the frequency (i.e. count) of values that fell into that bucket's interval.
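A minimal sketch of that counting step (illustrative only; the fixed bucket size is an assumption, and real implementations may pick bucket bounds differently):

```typescript
// Count values into fixed-size buckets; the key is each bucket's lower bound.
function histogram(values: number[], bucketSize: number): Map<number, number> {
  const counts = new Map<number, number>();
  for (const v of values) {
    const bucket = Math.floor(v / bucketSize) * bucketSize;
    counts.set(bucket, (counts.get(bucket) ?? 0) + 1);
  }
  return counts;
}

// histogram([237, 242, 255, 261], 20) -> Map { 220 => 1, 240 => 2, 260 => 1 }
```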
Example Histogram:
![](/img/docs/v43/heatmap_histogram.png)
The above histogram shows the value distribution of a couple of time series. We can easily see that
most values land between 240-300, with a peak between 260-280. Histograms only look at value distributions
over a specific time range, so you cannot see any trend or change in the distribution over time.
This is where heatmaps become useful.
## Heatmap
A Heatmap is like a histogram, but over time, where each time slice represents its own
histogram. Instead of using bar height to represent frequency, you use cells, and color
each cell proportionally to the number of values in the bucket.
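A minimal sketch of how such cells could be computed (illustrative only, not Grafana's implementation):

```typescript
// Bucket (timestamp, value) samples into heatmap cells: X is the time slice,
// Y is the value bucket, and the count later drives the cell color.
interface Sample {
  time: number; // epoch milliseconds
  value: number;
}

function heatmapCells(samples: Sample[], xSizeMs: number, ySize: number): Map<string, number> {
  const cells = new Map<string, number>();
  for (const s of samples) {
    const x = Math.floor(s.time / xSizeMs) * xSizeMs; // time slice start
    const y = Math.floor(s.value / ySize) * ySize;    // value bucket lower bound
    const key = `${x}:${y}`;
    cells.set(key, (cells.get(key) ?? 0) + 1);
  }
  return cells;
}
```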
Example:
![](/img/docs/v43/heatmap_histogram_over_time.png)
Here we can clearly see what values are more common and how they trend over time.
## Data Options
Data and bucket options can be found in the `Axes` tab.
### Data Formats
Data format | Description
------------ | -------------
*Time series* | Grafana does the bucketing by going through all time series values. The bucket sizes & intervals will be determined using the Buckets options.
*Time series buckets* | Each time series already represents a Y-Axis bucket. The time series name (alias) needs to be a numeric value representing the upper interval for the bucket. Grafana does no bucketing so the bucket size options are hidden.
### Bucket Size
The Bucket count & size options are used by Grafana to calculate how big each cell in the heatmap is. You can
define the bucket size either by count (the first input box) or by specifying a size interval. For the Y-Axis
the size interval is just a value but for the X-bucket you can specify a time range in the *Size* input, for example,
the time range `1h`. This will make the cells 1h wide on the X-axis.
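For example, a Y-Axis size interval of `10` puts a value of `237` into the 230-240 bucket, and an X-bucket *Size* of `1h` means every sample from the same clock hour ends up in the same column of cells.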
### Pre-bucketed data
If you have data that is already organized into buckets you can use the `Time series buckets` data format. This format requires that your metric query returns regular time series and that each time series has a numeric name
that represents the upper or lower bound of the interval.
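For illustration, a result in this format could look like the following sketch (the series names and values are made up, and the `[value, timestamp]` point layout assumes Grafana's legacy time series shape):

```typescript
// Hypothetical pre-bucketed query result: each series name is the numeric
// upper bound of a Y-Axis bucket, and each point is a count for that bucket.
const result = [
  { target: '10', datapoints: [[3, 1490000000000], [5, 1490000060000]] }, // values up to 10
  { target: '20', datapoints: [[7, 1490000000000], [2, 1490000060000]] }, // values 10-20
  { target: '30', datapoints: [[1, 1490000000000], [4, 1490000060000]] }, // values 20-30
];
```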
The only data source that supports histograms over time is Elasticsearch. You do this by adding a *Histogram*
bucket aggregation before the *Date Histogram*.
![](/img/docs/v43/elastic_histogram.png)
You control the size of the buckets using the Histogram interval (Y-Axis) and the Date Histogram interval (X-axis).
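A sketch of the Elasticsearch aggregation this setup could produce (the `value` and `@timestamp` field names and the intervals are assumptions for illustration):

```typescript
// Hypothetical Elasticsearch request body: a Histogram bucket aggregation
// with a Date Histogram nested inside it, matching the setup described above.
const esQueryBody = {
  size: 0,
  aggs: {
    value_buckets: {
      histogram: { field: 'value', interval: 10 }, // Y-Axis bucket size
      aggs: {
        over_time: {
          date_histogram: { field: '@timestamp', interval: '1m' }, // X-axis interval
        },
      },
    },
  },
};
```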
## Display Options
In the heatmap *Display* tab you define how the cells are rendered and what color they are assigned.
### Color Mode & Spectrum
{{< imgbox max-width="40%" img="/img/docs/v43/heatmap_scheme.png" caption="Color spectrum" >}}
The color spectrum controls the mapping between value count (in each bucket) and the color assigned to each bucket.
The leftmost color on the spectrum represents the minimum count and the rightmost color represents the
maximum count. Some color schemes are automatically inverted when using the light theme.
You can also change the color mode to `Opacity`. In this case, the color will not change but the amount of opacity will
change with the bucket count.
## Raw data vs aggregated
If you use the heatmap with regular time series data (not pre-bucketed), it's important to keep in mind that your data
is often already aggregated by your time series backend. Most time series queries do not return raw sample data,
but include a group by time interval or maxDataPoints limit coupled with an aggregation function (usually average).
This all depends on the time range of your query, of course. But the important point is to know that the histogram bucketing
that Grafana performs may be done on already aggregated and averaged data. To get more accurate heatmaps, it is better
to do the bucketing during metric collection or to store the data in Elasticsearch, which is currently the only data source
that supports doing histogram bucketing on the raw data.
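For example, if your query averages over a 1 minute group by interval, a minute containing the raw samples 1, 1, 1 and 100 reaches Grafana as a single point of 25.75, so the histogram sees one mid-range value instead of three low values and one outlier.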
If you remove or lower the group by time (or raise maxDataPoints) in your query to return more data points, your heatmap will be
more accurate, but this can also be very CPU and memory taxing for your browser and could cause hangs and crashes if the number of
data points becomes unreasonably large.