Compare commits


47 Commits

Author SHA1 Message Date
Torkel Ödegaard
7b447b2a37 remove unused parts 2025-12-05 16:35:11 -03:00
Torkel Ödegaard
ccd3d698f6 fix lint 2025-12-05 16:35:11 -03:00
Torkel Ödegaard
639aa5086e Remove feature highlight 2025-12-05 16:35:11 -03:00
Torkel Ödegaard
fc4db187a1 Update 2025-12-05 16:35:11 -03:00
Torkel Ödegaard
eb088f2e64 update 2025-12-05 16:35:10 -03:00
Torkel Ödegaard
687f8ea6ee review updates 2025-12-05 16:35:10 -03:00
Torkel Ödegaard
1d43e7001b Fixed unit test 2025-12-05 16:35:10 -03:00
Torkel Ödegaard
7cada92ba0 style fix 2025-12-05 16:35:10 -03:00
Torkel Ödegaard
452f776a1a fixes 2025-12-05 16:35:10 -03:00
Torkel Ödegaard
ac7ff32944 Simplify styling and fix issues 2025-12-05 16:35:10 -03:00
Torkel Ödegaard
401e46a7a1 Aligning icons 2025-12-05 16:35:10 -03:00
Torkel Ödegaard
ca2b6f4bd8 fix merge issue 2025-12-05 16:35:09 -03:00
Torkel Ödegaard
4292f8451b Update 2025-12-05 16:35:09 -03:00
Torkel Ödegaard
77aeb31756 Update 2025-12-05 16:35:09 -03:00
Torkel Ödegaard
64f432ddf8 AppChrome: Add proper menu icon for menu, logo icon becomes home 2025-12-05 16:35:09 -03:00
Tito Lins
f67d63437b print prompt 2025-12-05 16:30:44 -03:00
Tito Lins
d153a075fd generate app token 2025-12-05 16:07:17 -03:00
Tito Lins
b059e95661 move prompt to external repo 2025-12-05 15:56:37 -03:00
Sarah Zinger
422c791042 SQL Expressions: Fix alerts with sql expressions that have a cte (#114852)
Fix for #114377 - fix alerts with sql expressions that have a cte
2025-12-05 15:56:37 -03:00
Bogdan Matei
2f187b71e1 Dashboard: Fix dropping panels in tabs and rows (#114893) 2025-12-05 15:56:37 -03:00
Santiago
fd6183178d Remote Alertmanager: Remove X-Remote-Alertmanager header (#114917)
Remote Alertmanager: Remove X-Remote-Alertmanager header
2025-12-05 15:56:37 -03:00
Ryan McKinley
008df9bab8 Dashboards: update manifest to avoid useoldmanifestkinds (#114910) 2025-12-05 15:56:37 -03:00
Alexander Zobnin
6bbaf5ecd4 Zanzana: Fix duplicated writes in one request (#114900)
* Zanzana: Fix duplicated writes

* add tests
2025-12-05 15:56:37 -03:00
Tung Nguyen
44e735abde OpenTSDB: Remove gf-form from opentsdb Annotation Editor (#112590)
* Chore: Remove gf-form in opentsdb AnnotationEditor

* Fix: small typo

* chore: remove stale eslint suppression rule
2025-12-05 15:56:37 -03:00
Andreas Christou
c0663f894a Azure: Fix dcount aggregation (#114666)
* Add parameter type field

* Use parameterType to filter columns for aggregation funcs

* Support selecting column for dcount aggregation
2025-12-05 15:56:37 -03:00
Ryan McKinley
0179bbd2da Folders: update manifest (avoid useoldmanifestkinds) (#114827) 2025-12-05 15:56:36 -03:00
Sergej-Vlasov
5799f473bc TransformSaveModelToScene: Force v1 for reports (#114767)
force v1 for reports
2025-12-05 15:56:36 -03:00
Andreas Christou
f81f97ed01 MSSQL: Remove the need for azure_auth_enabled (#114775)
Remove the need for azure_auth_enabled for MSSQL
2025-12-05 15:56:36 -03:00
Torkel Ödegaard
9ef1c5bcae Gauge: Only show spotlight in dark themes (#114524)
* Gauge: Only show spotlight in dark themes

* Update
2025-12-05 15:56:36 -03:00
Hugo Häggmark
7cbd9f1129 chore: reduce Loki barrel files (#114888) 2025-12-05 15:56:36 -03:00
Pepe Cano
055c4b13ae docs(alerting): add new Examples of trace-based alerts (#114511)
* docs(alerting): add new Examples of trace-based alerts

* fix vale issues
2025-12-05 15:56:35 -03:00
Hugo Häggmark
79e1eb852c chore: introduce toggle for decoupling plugins from bootdata (#114890)
chore: toggle for decoupling plugins from bootdata
2025-12-05 15:56:35 -03:00
grafana-pr-automation[bot]
e125be7588 I18n: Download translations from Crowdin (#114886)
New Crowdin translations by GitHub Action

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-05 15:56:35 -03:00
Galen Kistler
043201dcfe TableRT: fix bug preventing users from selecting filter operators (#114860)
* fix: bug preventing users from selecting filter operators
2025-12-05 15:56:35 -03:00
Steve Simpson
5b0c57d348 Alerting: Add basic auth options to historian args (#114880) 2025-12-05 15:56:35 -03:00
Steve Simpson
77526f999b Alerting: Add command line parsing for historian options (#114865) 2025-12-05 15:56:34 -03:00
alerting-team[bot]
78c5f9141f Alerting: Update alerting module to de8c2bbf9eba591078e9d9d7c6cbbe4142ef2d0b (#114877)
[create-pull-request] automated change

Co-authored-by: yuri-tceretian <25988953+yuri-tceretian@users.noreply.github.com>
2025-12-05 15:56:34 -03:00
ismail simsek
e8ce9044e6 Postgresql: Fix variable interpolation logic when the variable has multiple values (#114058)
* fix the variable interpolation

* add jest config to grafana-sql

* fix broken tests

* add variable interpolation tests

* lint

* apply fix only to postgresql datasource
2025-12-05 15:56:34 -03:00
beejeebus
1d2b294f95 Pass the feature flag instead of false to RegisterAPIService
Doh. The feature flag was not actually being used to enable the new DS
config CRUD APIs. This PR fixes that, hashtag facepalm.
2025-12-05 15:56:34 -03:00
Rafael Bortolon Paulovic
81fad0b457 fix(unified): key_path column default (#114859)
fix: key_path column default
2025-12-05 15:56:34 -03:00
Andrew Hackmann
a7ce8c9f36 Elasticsearch: Client refactor (#114745)
* split up client.go

* split up search_request.go

* remove double spaces
2025-12-05 15:56:34 -03:00
Costa Alexoglou
a5599d4f22 feat: add default permissions and DTO support for MT (#114829)
* feat: add default permissions and DTO support for MT

* chore: review comments

* chore: review comments
2025-12-05 15:56:34 -03:00
Collin Fingar
0ebfa25ee6 Library Panels: Update connection generation for V2 (#114504)
* Library Panels: Update connection generation for V2

* add test

---------

Co-authored-by: Haris Rozajac <haris.rozajac12@gmail.com>
2025-12-05 15:56:33 -03:00
Gabriel MABILLE
ecce6b20d7 grafana-iam: Fix missing UID (#114856) 2025-12-05 15:56:33 -03:00
Todd Treece
7fc430faae Plugins API: Add plugins to RBAC mapper (#114843) 2025-12-05 15:56:33 -03:00
Tito Lins
475be966f4 update prompt 2025-12-05 13:03:43 -03:00
Tito Lins
78f231725e test agent-demo 2025-12-05 11:22:54 -03:00
1371 changed files with 13728 additions and 45903 deletions

.github/CODEOWNERS vendored

@@ -85,7 +85,6 @@
# Git Sync frontend owned by frontend team as a whole.
/apps/alerting/ @grafana/alerting-backend
/apps/quotas/ @grafana/grafana-search-and-storage
/apps/dashboard/ @grafana/grafana-app-platform-squad @grafana/dashboards-squad
/apps/folder/ @grafana/grafana-app-platform-squad
/apps/playlist/ @grafana/grafana-app-platform-squad
@@ -520,7 +519,7 @@ i18next.config.ts @grafana/grafana-frontend-platform
/e2e-playwright/various-suite/solo-route.spec.ts @grafana/dashboards-squad
/e2e-playwright/various-suite/trace-view-scrolling.spec.ts @grafana/observability-traces-and-profiling
/e2e-playwright/various-suite/verify-i18n.spec.ts @grafana/grafana-frontend-platform
/e2e-playwright/various-suite/visualization-suggestions.spec.ts @grafana/dataviz-squad
/e2e-playwright/various-suite/visualization-suggestions.spec.ts @grafana/dashboards-squad
/e2e-playwright/various-suite/perf-test.spec.ts @grafana/grafana-frontend-platform
# Packages
@@ -956,7 +955,6 @@ playwright.storybook.config.ts @grafana/grafana-frontend-platform
/public/app/features/notifications/ @grafana/grafana-search-navigate-organise
/public/app/features/org/ @grafana/grafana-search-navigate-organise
/public/app/features/panel/ @grafana/dashboards-squad
/public/app/features/panel/components/VizTypePicker/VisualizationSuggestions.tsx @grafana/dataviz-squad
/public/app/features/panel/suggestions/ @grafana/dataviz-squad
/public/app/features/playlist/ @grafana/dashboards-squad
/public/app/features/plugins/ @grafana/plugins-platform-frontend


@@ -0,0 +1,77 @@
name: Usability Review Agent
description: Runs AI-powered usability testing using OpenAI Computer Use with Playwright
inputs:
target_url:
description: The URL to test for usability
required: true
openai_api_key:
description: OpenAI API key with Computer Use access
required: true
openai_org:
description: OpenAI organization ID
required: false
grafana_username:
description: Grafana username for authentication
required: false
grafana_password:
description: Grafana password for authentication
required: false
workflow_name:
description: The workflow or feature to test
required: false
default: "the application interface"
prompt_file:
description: Custom prompt file to use (relative to action directory)
required: false
default: "prompt.txt"
output_text_path:
description: Path to save the review output text
required: false
default: "usability-review.txt"
screenshot_path:
description: Path to save the final screenshot
required: false
default: "usability-screenshot.png"
outputs:
review_output:
description: Path to the review output file
value: ${{ inputs.output_text_path }}
screenshot_output:
description: Path to the screenshot file
value: ${{ inputs.screenshot_path }}
runs:
using: "composite"
steps:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install dependencies
shell: bash
working-directory: ${{ github.action_path }}
run: |
python -m pip install -U pip
pip install -r requirements.txt
python -m playwright install --with-deps chromium
echo "System info:"
free -h || echo "free command not available"
df -h | head -5
- name: Run usability review agent
shell: bash
working-directory: ${{ github.action_path }}
env:
OPENAI_API_KEY: ${{ inputs.openai_api_key }}
OPENAI_ORG: ${{ inputs.openai_org }}
TARGET_URL: ${{ inputs.target_url }}
GRAFANA_USERNAME: ${{ inputs.grafana_username }}
GRAFANA_PASSWORD: ${{ inputs.grafana_password }}
OUTPUT_TEXT_PATH: ${{ github.workspace }}/${{ inputs.output_text_path }}
SCREENSHOT_PATH: ${{ github.workspace }}/${{ inputs.screenshot_path }}
PROMPT_FILE: ${{ github.action_path }}/${{ inputs.prompt_file }}
run: |
python run_agent.py
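The action hands its configuration to `run_agent.py` entirely through the `env` block above. `run_agent.py` itself is not shown in this diff, so the following is only a hedged sketch of that handoff; the key names come from the env block, but the defaults and the helper itself are assumptions.

```python
REQUIRED = ("OPENAI_API_KEY", "TARGET_URL")

def load_config(env):
    """Hypothetical helper: collect the settings the composite action
    passes via environment variables, failing fast on missing ones."""
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise SystemExit(f"missing required env vars: {', '.join(missing)}")
    return {
        "api_key": env["OPENAI_API_KEY"],
        "target_url": env["TARGET_URL"],
        "username": env.get("GRAFANA_USERNAME"),
        "password": env.get("GRAFANA_PASSWORD"),
        "output_path": env.get("OUTPUT_TEXT_PATH", "usability-review.txt"),
        "screenshot_path": env.get("SCREENSHOT_PATH", "usability-screenshot.png"),
    }
```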


@@ -0,0 +1,11 @@
from . import default
from . import contrib
from .computer import Computer
from .config import computers_config
__all__ = [
"default",
"contrib",
"Computer",
"computers_config",
]


@@ -0,0 +1,29 @@
from typing import Protocol, List, Literal, Dict
class Computer(Protocol):
"""Defines the 'shape' (methods/properties) our loop expects."""
def get_environment(self) -> Literal["windows", "mac", "linux", "browser"]: ...
def get_dimensions(self) -> tuple[int, int]: ...
def screenshot(self) -> str: ...
def click(self, x: int, y: int, button: str = "left") -> None: ...
def double_click(self, x: int, y: int) -> None: ...
def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None: ...
def type(self, text: str) -> None: ...
def wait(self, ms: int = 1000) -> None: ...
def move(self, x: int, y: int) -> None: ...
def keypress(self, keys: List[str]) -> None: ...
def drag(self, path: List[Dict[str, int]]) -> None: ...
def get_current_url(self) -> str: ...
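Because `Computer` is a `Protocol`, implementations conform structurally: any class with matching method shapes is accepted, no inheritance required. A minimal sketch with a hypothetical in-memory stub (a trimmed two-method copy of the interface, not the full one above):

```python
from typing import List, Literal, Protocol

class Computer(Protocol):
    """Trimmed excerpt of the interface, for illustration only."""
    def get_environment(self) -> Literal["windows", "mac", "linux", "browser"]: ...
    def type(self, text: str) -> None: ...

class FakeComputer:
    """Hypothetical stub: records actions instead of driving a real browser."""
    def __init__(self) -> None:
        self.typed: List[str] = []
    def get_environment(self) -> Literal["browser"]:
        return "browser"
    def type(self, text: str) -> None:
        self.typed.append(text)

def run_step(computer: Computer) -> str:
    # FakeComputer satisfies the Computer annotation purely by shape.
    computer.type("hello")
    return computer.get_environment()
```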


@@ -0,0 +1,10 @@
from .default import *
from .contrib import *
computers_config = {
"local-playwright": LocalPlaywrightBrowser,
"docker": DockerComputer,
"browserbase": BrowserbaseBrowser,
"scrapybara-browser": ScrapybaraBrowser,
"scrapybara-ubuntu": ScrapybaraUbuntu,
}
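Selecting a backend from `computers_config` is then a plain dict lookup of name to class. A toy sketch with stand-in classes (the real ones need Playwright, Docker, or a cloud account); the `make_computer` helper and its error message are illustrative, not part of the diff:

```python
class LocalPlaywrightBrowser: ...
class DockerComputer: ...

computers_config = {
    "local-playwright": LocalPlaywrightBrowser,
    "docker": DockerComputer,
}

def make_computer(name: str):
    """Resolve a computer name to an instance, with a helpful error."""
    try:
        cls = computers_config[name]
    except KeyError:
        raise ValueError(
            f"unknown computer {name!r}; choose from {sorted(computers_config)}"
        )
    return cls()
```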


@@ -0,0 +1,4 @@
from .browserbase import BrowserbaseBrowser
from .local_playwright import LocalPlaywrightBrowser
from .docker import DockerComputer
from .scrapybara import ScrapybaraBrowser, ScrapybaraUbuntu


@@ -0,0 +1,200 @@
import os
from typing import Tuple, Dict, List, Union, Optional
from playwright.sync_api import Browser, Page, BrowserContext, Error as PlaywrightError
from ..shared.base_playwright import BasePlaywrightComputer
from browserbase import Browserbase
from dotenv import load_dotenv
import base64
load_dotenv()
class BrowserbaseBrowser(BasePlaywrightComputer):
"""
Browserbase is a headless browser platform that offers a remote browser API. You can use it to control thousands of browsers from anywhere.
You can find more information about Browserbase at https://www.browserbase.com/computer-use or view our OpenAI CUA Quickstart at https://docs.browserbase.com/integrations/openai-cua/introduction.
IMPORTANT: This Browserbase computer requires the use of the `goto` tool defined in playwright_with_custom_functions.py.
Make sure to include this tool in your configuration when using the Browserbase computer.
"""
def get_dimensions(self):
return self.dimensions
def __init__(
self,
width: int = 1024,
height: int = 768,
region: str = "us-west-2",
proxy: bool = False,
virtual_mouse: bool = True,
ad_blocker: bool = False,
):
"""
Initialize the Browserbase instance. Additional configuration options for features such as persistent cookies, ad blockers, file downloads and more can be found in the Browserbase API documentation: https://docs.browserbase.com/reference/api/create-a-session
Args:
width (int): The width of the browser viewport. Default is 1024.
height (int): The height of the browser viewport. Default is 768.
region (str): The region for the Browserbase session. Default is "us-west-2". Pick a region close to you for better performance. https://docs.browserbase.com/guides/multi-region
proxy (bool): Whether to use a proxy for the session. Default is False. Turn on proxies if your browsing is frequently interrupted. https://docs.browserbase.com/features/proxies
virtual_mouse (bool): Whether to enable the virtual mouse cursor. Default is True.
ad_blocker (bool): Whether to enable the built-in ad blocker. Default is False.
"""
super().__init__()
self.bb = Browserbase(api_key=os.getenv("BROWSERBASE_API_KEY"))
self.project_id = os.getenv("BROWSERBASE_PROJECT_ID")
self.session = None
self.dimensions = (width, height)
self.region = region
self.proxy = proxy
self.virtual_mouse = virtual_mouse
self.ad_blocker = ad_blocker
def _get_browser_and_page(self) -> Tuple[Browser, Page]:
"""
Create a Browserbase session and connect to it.
Returns:
Tuple[Browser, Page]: A tuple containing the connected browser and page objects.
"""
# Create a session on Browserbase with specified parameters
width, height = self.dimensions
session_params = {
"project_id": self.project_id,
"browser_settings": {
"viewport": {"width": width, "height": height},
"blockAds": self.ad_blocker,
},
"region": self.region,
"proxies": self.proxy,
}
self.session = self.bb.sessions.create(**session_params)
# Print the live session URL
print(
f"Watch and control this browser live at https://www.browserbase.com/sessions/{self.session.id}"
)
# Connect to the remote session
browser = self._playwright.chromium.connect_over_cdp(
self.session.connect_url, timeout=60000
)
context = browser.contexts[0]
# Add event listeners for page creation and closure
context.on("page", self._handle_new_page)
# Only add the init script if virtual_mouse is True
if self.virtual_mouse:
context.add_init_script(
"""
// Only run in the top frame
if (window.self === window.top) {
function initCursor() {
const CURSOR_ID = '__cursor__';
// Check if cursor element already exists
if (document.getElementById(CURSOR_ID)) return;
const cursor = document.createElement('div');
cursor.id = CURSOR_ID;
Object.assign(cursor.style, {
position: 'fixed',
top: '0px',
left: '0px',
width: '20px',
height: '20px',
backgroundImage: 'url("data:image/svg+xml;utf8,<svg xmlns=\\'http://www.w3.org/2000/svg\\' viewBox=\\'0 0 24 24\\' fill=\\'black\\' stroke=\\'white\\' stroke-width=\\'1\\' stroke-linejoin=\\'round\\' stroke-linecap=\\'round\\'><polygon points=\\'2,2 2,22 8,16 14,22 17,19 11,13 20,13\\'/></svg>")',
backgroundSize: 'cover',
pointerEvents: 'none',
zIndex: '99999',
transform: 'translate(-2px, -2px)',
});
document.body.appendChild(cursor);
document.addEventListener("mousemove", (e) => {
cursor.style.top = e.clientY + "px";
cursor.style.left = e.clientX + "px";
});
}
// Use requestAnimationFrame for early execution
requestAnimationFrame(function checkBody() {
if (document.body) {
initCursor();
} else {
requestAnimationFrame(checkBody);
}
});
}
"""
)
page = context.pages[0]
page.on("close", self._handle_page_close)
page.goto("https://bing.com")
return browser, page
def _handle_new_page(self, page: Page):
"""Handle the creation of a new page."""
print("New page created")
self._page = page
page.on("close", self._handle_page_close)
def _handle_page_close(self, page: Page):
"""Handle the closure of a page."""
print("Page closed")
if self._page == page:
if self._browser.contexts[0].pages:
self._page = self._browser.contexts[0].pages[-1]
else:
print("Warning: All pages have been closed.")
self._page = None
def __exit__(self, exc_type, exc_val, exc_tb):
"""
Clean up resources when exiting the context manager.
Args:
exc_type: The type of the exception that caused the context to be exited.
exc_val: The exception instance that caused the context to be exited.
exc_tb: A traceback object encapsulating the call stack at the point where the exception occurred.
"""
if self._page:
self._page.close()
if self._browser:
self._browser.close()
if self._playwright:
self._playwright.stop()
if self.session:
print(
f"Session completed. View replay at https://browserbase.com/sessions/{self.session.id}"
)
def screenshot(self) -> str:
"""
Capture a screenshot of the current viewport using CDP.
Returns:
str: A base64 encoded string of the screenshot.
"""
try:
# Get CDP session from the page
cdp_session = self._page.context.new_cdp_session(self._page)
# Capture screenshot using CDP
result = cdp_session.send(
"Page.captureScreenshot", {"format": "png", "fromSurface": True}
)
return result["data"]
except PlaywrightError as error:
print(
f"CDP screenshot failed, falling back to standard screenshot: {error}"
)
return super().screenshot()
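`_get_browser_and_page` folds the constructor options into a single `session_params` dict for `sessions.create`. A pure helper mirroring that shape (hypothetical, extracted here only to make the structure checkable in isolation):

```python
def build_session_params(project_id, width=1024, height=768,
                         region="us-west-2", proxy=False, ad_blocker=False):
    """Mirror the session-creation payload built in _get_browser_and_page:
    viewport + ad blocking under browser_settings, region and proxies top-level."""
    return {
        "project_id": project_id,
        "browser_settings": {
            "viewport": {"width": width, "height": height},
            "blockAds": ad_blocker,
        },
        "region": region,
        "proxies": proxy,
    }
```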


@@ -0,0 +1,174 @@
import subprocess
import time
import shlex
class DockerComputer:
def get_environment(self):
return "linux"
def get_dimensions(self):
return (1280, 720) # Default fallback; will be updated in __enter__.
def __init__(
self,
container_name="cua-sample-app",
image="ghcr.io/openai/openai-cua-sample-app:latest",
display=":99",
port_mapping="5900:5900",
):
self.container_name = container_name
self.image = image
self.display = display
self.port_mapping = port_mapping
def __enter__(self):
# Check if the container is running
result = subprocess.run(
["docker", "ps", "-q", "-f", f"name={self.container_name}"],
capture_output=True,
text=True,
)
if not result.stdout.strip():
raise RuntimeError(
f"Container {self.container_name} is not running. Build and run with:\n"
f"docker build -t {self.container_name} .\n"
f"docker run --rm -it --name {self.container_name} "
f"-p {self.port_mapping} -e DISPLAY={self.display} {self.container_name}"
)
# Fetch display geometry
geometry = self._exec(
f"DISPLAY={self.display} xdotool getdisplaygeometry"
).strip()
if geometry:
w, h = geometry.split()
self.dimensions = (int(w), int(h))
# print("Starting Docker container...")
# # Run the container detached, removing it automatically when it stops
# subprocess.check_call(
# [
# "docker",
# "run",
# "-d",
# "--rm",
# "--name",
# self.container_name,
# "-p",
# self.port_mapping,
# self.image,
# ]
# )
# # Give the container a moment to start
# time.sleep(3)
# print("Entering DockerComputer context")
return self
def __exit__(self, exc_type, exc_val, exc_tb):
# print("Stopping Docker container...")
# subprocess.check_call(["docker", "stop", self.container_name])
# print("Exiting DockerComputer context")
pass
def _exec(self, cmd: str) -> str:
"""
Run 'cmd' in the container.
We wrap cmd in double quotes and escape any double quotes inside it,
so spaces or quotes don't break the shell call.
"""
# Escape any existing double quotes in cmd
safe_cmd = cmd.replace('"', '\\"')
# Then wrap the entire cmd in double quotes for `sh -c`
docker_cmd = f'docker exec {self.container_name} sh -c "{safe_cmd}"'
return subprocess.check_output(docker_cmd, shell=True).decode(
"utf-8", errors="ignore"
)
def screenshot(self) -> str:
"""
Takes a screenshot with ImageMagick (import), returning base64-encoded PNG.
Requires 'import'.
"""
# cmd = (
# f"export DISPLAY={self.display} && "
# "import -window root /tmp/screenshot.png && "
# "base64 /tmp/screenshot.png"
# )
cmd = (
f"export DISPLAY={self.display} && "
"import -window root png:- | base64 -w 0"
)
return self._exec(cmd)
def click(self, x: int, y: int, button: str = "left") -> None:
button_map = {"left": 1, "middle": 2, "right": 3}
b = button_map.get(button, 1)
self._exec(f"DISPLAY={self.display} xdotool mousemove {x} {y} click {b}")
def double_click(self, x: int, y: int) -> None:
self._exec(
f"DISPLAY={self.display} xdotool mousemove {x} {y} click --repeat 2 1"
)
def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
"""
For simple vertical scrolling: xdotool click 4 (scroll up) or 5 (scroll down).
"""
self._exec(f"DISPLAY={self.display} xdotool mousemove {x} {y}")
clicks = abs(scroll_y)
button = 4 if scroll_y < 0 else 5
for _ in range(clicks):
self._exec(f"DISPLAY={self.display} xdotool click {button}")
def type(self, text: str) -> None:
"""
Type the given text via xdotool, preserving spaces and quotes.
"""
# Escape single quotes in the user text: ' -> '\'\''
safe_text = text.replace("'", "'\\''")
# Then wrap everything in single quotes for xdotool
cmd = f"DISPLAY={self.display} xdotool type -- '{safe_text}'"
self._exec(cmd)
def wait(self, ms: int = 1000) -> None:
time.sleep(ms / 1000)
def move(self, x: int, y: int) -> None:
self._exec(f"DISPLAY={self.display} xdotool mousemove {x} {y}")
def keypress(self, keys: list[str]) -> None:
mapping = {
"ENTER": "Return",
"LEFT": "Left",
"RIGHT": "Right",
"UP": "Up",
"DOWN": "Down",
"ESC": "Escape",
"SPACE": "space",
"BACKSPACE": "BackSpace",
"TAB": "Tab",
}
mapped_keys = [mapping.get(key, key) for key in keys]
combo = "+".join(mapped_keys)
self._exec(f"DISPLAY={self.display} xdotool key {combo}")
def drag(self, path: list[dict[str, int]]) -> None:
if not path:
return
start_x = path[0]["x"]
start_y = path[0]["y"]
self._exec(
f"DISPLAY={self.display} xdotool mousemove {start_x} {start_y} mousedown 1"
)
for point in path[1:]:
self._exec(
f"DISPLAY={self.display} xdotool mousemove {point['x']} {point['y']}"
)
self._exec(f"DISPLAY={self.display} xdotool mouseup 1")
def get_current_url(self):
return None
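The quoting and mapping rules in `DockerComputer` (`_exec`'s double-quote escaping for `sh -c`, `type`'s single-quote escaping for xdotool, and `scroll`'s sign-to-wheel-button convention) can each be isolated as pure functions. A sketch of those three transformations, with the helper names invented here for illustration:

```python
def docker_exec_cmd(container: str, cmd: str) -> str:
    # Escape embedded double quotes, then wrap for `sh -c`, as _exec does.
    safe_cmd = cmd.replace('"', '\\"')
    return f'docker exec {container} sh -c "{safe_cmd}"'

def xdotool_type_cmd(display: str, text: str) -> str:
    # Close the single-quoted string, emit an escaped quote, reopen: ' -> '\''
    safe_text = text.replace("'", "'\\''")
    return f"DISPLAY={display} xdotool type -- '{safe_text}'"

def scroll_plan(scroll_y: int):
    """Map a vertical delta to (X11 wheel button, click count):
    button 4 scrolls up (negative delta), button 5 scrolls down."""
    return (4 if scroll_y < 0 else 5), abs(scroll_y)
```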


@@ -0,0 +1,165 @@
import os
from playwright.sync_api import Browser, Page
from ..shared.base_playwright import BasePlaywrightComputer
class LocalPlaywrightBrowser(BasePlaywrightComputer):
"""Launches a local Chromium instance using Playwright."""
def __init__(self, headless: bool = False):
super().__init__()
self.headless = headless
def _get_browser_and_page(self) -> tuple[Browser, Page]:
width, height = self.get_dimensions()
launch_args = [
f"--window-size={width},{height}",
"--disable-extensions",
"--disable-file-system",
]
browser = self._playwright.chromium.launch(
chromium_sandbox=False,
headless=self.headless,
args=launch_args,
env={"DISPLAY": ":0"},
)
context = browser.new_context(
user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
locale="en-US",
timezone_id="UTC"
)
# Add event listeners for page creation and closure
context.on("page", self._handle_new_page)
page = context.new_page()
page.set_viewport_size({"width": width, "height": height})
page.on("close", self._handle_page_close)
# Add logging for debugging
page.on("console", lambda msg: print(f"Browser console: {msg.text}"))
page.on("pageerror", lambda err: print(f"Page error: {err}"))
target_url = os.environ.get("TARGET_URL", "https://grafana.com/docs/")
grafana_username = os.environ.get("GRAFANA_USERNAME")
grafana_password = os.environ.get("GRAFANA_PASSWORD")
# If credentials provided, log in first
if grafana_username and grafana_password:
from urllib.parse import urlparse, urljoin
base_url = f"{urlparse(target_url).scheme}://{urlparse(target_url).netloc}"
login_url = urljoin(base_url, "/login")
print(f"Logging in to: {login_url}")
page.goto(login_url, timeout=60000)
print(f"Page loaded, current URL: {page.url}")
print(f"Page title: {page.title()}")
try:
# Wait for login form - try multiple selector strategies
print("Waiting for login form to appear...")
# Try to find by placeholder first (more reliable for older versions)
username_field = page.get_by_placeholder("email or username")
password_field = page.get_by_placeholder("password", exact=True)
username_field.wait_for(state="visible", timeout=60000)
print("Login form detected")
# Take screenshot after form is visible
screenshot_path = os.environ.get("GITHUB_WORKSPACE", ".") + "/login_page.png"
page.screenshot(path=screenshot_path)
print(f"Screenshot saved as {screenshot_path}")
# Fill credentials using placeholder selectors
print(f"Filling username (length: {len(grafana_username)})")
username_field.fill(grafana_username)
print(f"Filling password (length: {len(grafana_password)})")
password_field.fill(grafana_password)
print("Credentials filled successfully")
# Click login button by text
print("Clicking login button...")
page.get_by_role("button", name="Log in").click()
print("Login form submitted")
# Wait for login to complete
print("Waiting for post-login navigation...")
# Try to wait for multiple possible indicators of successful login
# The page might redirect to setup guide, dashboard, or other pages
try:
# Wait for either: navigation away from login OR any logged-in UI element
page.locator('body:not(:has-text("Welcome to Grafana Cloud"))').or_(
page.locator('[aria-label="Profile"]')
).or_(
page.locator('a:has-text("Home")')
).first.wait_for(state="attached", timeout=15000)
print(f"Post-login navigation detected, current URL: {page.url}")
# Verify we actually left the login page
if "/login" in page.url:
raise Exception("Still on login page after navigation")
except Exception as wait_err:
print(f"Login completion wait failed: {wait_err}")
if "/login" in page.url:
raise Exception(f"Login failed - still on login page: {page.url}")
else:
print(f"Continuing anyway - URL shows we're logged in: {page.url}")
print(f"Login successful, current URL: {page.url}")
except Exception as e:
print(f"Login failed: {e}")
print(f"Current URL at error: {page.url}")
print(f"Page title at error: {page.title()}")
# Get page content for debugging
try:
body_text = page.locator('body').text_content()
print(f"Page body text (first 500 chars): {body_text[:500] if body_text else 'No body text'}")
except Exception as content_err:
print(f"Could not get page content: {content_err}")
error_screenshot_path = os.environ.get("GITHUB_WORKSPACE", ".") + "/login_error.png"
page.screenshot(path=error_screenshot_path)
print(f"Error screenshot saved as {error_screenshot_path}")
raise
# Set up console and error logging
page.on("console", lambda msg: print(f"Console [{msg.type}]: {msg.text}"))
page.on("pageerror", lambda err: print(f"Page error: {err}"))
print(f"Navigating to: {target_url}")
page.goto(target_url, wait_until="domcontentloaded", timeout=180000)
print(f"Page loaded, URL: {page.url}")
# Wait a bit for Grafana to initialize
print("Waiting for Grafana to initialize...")
page.wait_for_timeout(10000) # Wait 10 seconds for app to settle
print(f"Final URL after waiting: {page.url}")
return browser, page
def _handle_new_page(self, page: Page):
"""Handle the creation of a new page."""
print("New page created")
self._page = page
page.on("close", self._handle_page_close)
def _handle_page_close(self, page: Page):
"""Handle the closure of a page."""
print("Page closed")
if self._page == page:
if self._browser.contexts[0].pages:
self._page = self._browser.contexts[0].pages[-1]
else:
print("Warning: All pages have been closed.")
self._page = None
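The login flow above derives the `/login` endpoint from whatever `TARGET_URL` points at by keeping only the scheme and host. That derivation, pulled out as a standalone function (the function name is ours; the `urlparse`/`urljoin` logic matches the code above):

```python
from urllib.parse import urlparse, urljoin

def login_url_for(target_url: str) -> str:
    """Strip path and query from the target URL, then append /login."""
    parts = urlparse(target_url)
    base_url = f"{parts.scheme}://{parts.netloc}"
    return urljoin(base_url, "/login")
```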


@@ -0,0 +1,220 @@
import os
import time
from dotenv import load_dotenv
from scrapybara import Scrapybara
from playwright.sync_api import sync_playwright, Browser, Page
from utils import BLOCKED_DOMAINS
load_dotenv()
CUA_KEY_TO_SCRAPYBARA_KEY = {
"/": "slash",
"\\": "backslash",
"arrowdown": "Down",
"arrowleft": "Left",
"arrowright": "Right",
"arrowup": "Up",
"backspace": "BackSpace",
"capslock": "Caps_Lock",
"cmd": "Meta_L",
"delete": "Delete",
"end": "End",
"enter": "Return",
"esc": "Escape",
"home": "Home",
"insert": "Insert",
"option": "Alt_L",
"pagedown": "Page_Down",
"pageup": "Page_Up",
"tab": "Tab",
"win": "Meta_L",
}
class ScrapybaraBrowser:
"""
Scrapybara provides virtual desktops and browsers in the cloud. https://scrapybara.com
You can try OpenAI CUA for free at https://computer.new or read our CUA Quickstart at https://computer.new/cua.
"""
def get_environment(self):
return "browser"
def get_dimensions(self):
return (1024, 768)
def __init__(self):
self.client = Scrapybara(api_key=os.getenv("SCRAPYBARA_API_KEY"))
self._playwright = None
self._browser: Browser | None = None
self._page: Page | None = None
def __enter__(self):
print("Starting scrapybara browser")
blocked_domains = [
domain.replace("https://", "").replace("www.", "")
for domain in BLOCKED_DOMAINS
]
self.instance = self.client.start_browser(blocked_domains=blocked_domains)
print("Scrapybara browser started ₍ᐢ•(ܫ)•ᐢ₎")
print(
f"You can view and interact with the stream at {self.instance.get_stream_url().stream_url}"
)
self._playwright = sync_playwright().start()
self._browser = self._playwright.chromium.connect_over_cdp(
self.instance.get_cdp_url().cdp_url
)
self._page = self._browser.contexts[0].pages[0]
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print("Stopping scrapybara browser")
self.instance.stop()
print("Scrapybara browser stopped ₍ᐢ-(ェ)-ᐢ₎")
def goto(self, url: str) -> None:
self._page.goto(url)
def get_current_url(self) -> str:
return self.instance.get_current_url().current_url
def screenshot(self) -> str:
return self.instance.screenshot().base_64_image
def click(self, x: int, y: int, button: str = "left") -> None:
button = "middle" if button == "wheel" else button
self.instance.computer(
action="click_mouse",
click_type="click",
button=button,
coordinates=[x, y],
num_clicks=1,
)
def double_click(self, x: int, y: int) -> None:
self.instance.computer(
action="click_mouse",
click_type="click",
button="left",
coordinates=[x, y],
num_clicks=2,
)
def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
self.instance.computer(
action="scroll",
coordinates=[x, y],
delta_x=scroll_x // 20,
delta_y=scroll_y // 20,
)
def type(self, text: str) -> None:
self.instance.computer(action="type_text", text=text)
def wait(self, ms: int = 1000) -> None:
time.sleep(ms / 1000)
# Scrapybara also has `self.instance.computer(action="wait", duration=ms / 1000)`
def move(self, x: int, y: int) -> None:
self.instance.computer(action="move_mouse", coordinates=[x, y])
def keypress(self, keys: list[str]) -> None:
mapped_keys = [
CUA_KEY_TO_SCRAPYBARA_KEY.get(key.lower(), key.lower()) for key in keys
]
self.instance.computer(action="press_key", keys=mapped_keys)
def drag(self, path: list[dict[str, int]]) -> None:
if not path:
return
path = [[point["x"], point["y"]] for point in path]
self.instance.computer(action="drag_mouse", path=path)
class ScrapybaraUbuntu:
"""
Scrapybara provides virtual desktops and browsers in the cloud.
You can try OpenAI CUA for free at https://computer.new or read our CUA Quickstart at https://computer.new/cua.
"""
def get_environment(self):
return "linux"
def get_dimensions(self):
return (1024, 768)
def __init__(self):
self.client = Scrapybara(api_key=os.getenv("SCRAPYBARA_API_KEY"))
def __enter__(self):
print("Starting Scrapybara Ubuntu instance")
blocked_domains = [
domain.replace("https://", "").replace("www.", "")
for domain in BLOCKED_DOMAINS
]
self.instance = self.client.start_ubuntu(blocked_domains=blocked_domains)
print("Scrapybara Ubuntu instance started ₍ᐢ•(ܫ)•ᐢ₎")
print(
f"You can view and interact with the stream at {self.instance.get_stream_url().stream_url}"
)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print("Stopping Scrapybara Ubuntu instance")
self.instance.stop()
print("Scrapybara Ubuntu instance stopped ₍ᐢ-(ェ)-ᐢ₎")
def screenshot(self) -> str:
return self.instance.screenshot().base_64_image
def click(self, x: int, y: int, button: str = "left") -> None:
button = "middle" if button == "wheel" else button
self.instance.computer(
action="click_mouse",
click_type="click",
button=button,
coordinates=[x, y],
num_clicks=1,
)
def double_click(self, x: int, y: int) -> None:
self.instance.computer(
action="click_mouse",
click_type="click",
button="left",
coordinates=[x, y],
num_clicks=2,
)
def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
self.instance.computer(
action="scroll",
coordinates=[x, y],
delta_x=scroll_x // 20,
delta_y=scroll_y // 20,
)
def type(self, text: str) -> None:
self.instance.computer(action="type_text", text=text)
def wait(self, ms: int = 1000) -> None:
time.sleep(ms / 1000)
# Scrapybara also has `self.instance.computer(action="wait", duration=ms / 1000)`
def move(self, x: int, y: int) -> None:
self.instance.computer(action="move_mouse", coordinates=[x, y])
def keypress(self, keys: list[str]) -> None:
mapped_keys = [
CUA_KEY_TO_SCRAPYBARA_KEY.get(key.lower(), key.lower()) for key in keys
]
self.instance.computer(action="press_key", keys=mapped_keys)
def drag(self, path: list[dict[str, int]]) -> None:
if not path:
return
path = [[point["x"], point["y"]] for point in path]
self.instance.computer(action="drag_mouse", path=path)
def get_current_url(self):
return None


@@ -0,0 +1,154 @@
import time
import base64
from typing import List, Dict, Literal
from playwright.sync_api import sync_playwright, Browser, Page
from utils import check_blocklisted_url
# Optional: key mapping if your model uses "CUA" style keys
CUA_KEY_TO_PLAYWRIGHT_KEY = {
"/": "Divide",
"\\": "Backslash",
"alt": "Alt",
"arrowdown": "ArrowDown",
"arrowleft": "ArrowLeft",
"arrowright": "ArrowRight",
"arrowup": "ArrowUp",
"backspace": "Backspace",
"capslock": "CapsLock",
"cmd": "Meta",
"ctrl": "Control",
"delete": "Delete",
"end": "End",
"enter": "Enter",
"esc": "Escape",
"home": "Home",
"insert": "Insert",
"option": "Alt",
"pagedown": "PageDown",
"pageup": "PageUp",
"shift": "Shift",
"space": " ",
"super": "Meta",
"tab": "Tab",
"win": "Meta",
}
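As an aside, a minimal self-contained sketch of how a table like this is typically consumed — lowercase lookup with pass-through for unmapped keys (the subset of entries here is illustrative, not the full mapping):

```python
# Illustrative subset of the key-name mapping above.
KEY_MAP = {"ctrl": "Control", "enter": "Enter", "esc": "Escape", "space": " "}

def map_keys(keys: list[str]) -> list[str]:
    # Lowercase the incoming name; fall back to the key itself when unmapped.
    return [KEY_MAP.get(key.lower(), key) for key in keys]
```

For example, `map_keys(["CTRL", "a"])` yields `["Control", "a"]`.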
class BasePlaywrightComputer:
"""
Abstract base for Playwright-based computers:
- Subclasses override `_get_browser_and_page()` to do local or remote connection,
returning (Browser, Page).
- This base class handles context creation (`__enter__`/`__exit__`),
plus standard "Computer" actions like click, scroll, etc.
- We also have extra browser actions: `goto(url)` and `back()`.
"""
def get_environment(self):
return "browser"
def get_dimensions(self):
return (1024, 768)
def __init__(self):
self._playwright = None
self._browser: Browser | None = None
self._page: Page | None = None
def __enter__(self):
# Start Playwright and call the subclass hook for getting browser/page
self._playwright = sync_playwright().start()
self._browser, self._page = self._get_browser_and_page()
# Set up network interception to flag URLs matching domains in BLOCKED_DOMAINS
def handle_route(route, request):
url = request.url
if check_blocklisted_url(url):
print(f"Flagging blocked domain: {url}")
route.abort()
else:
route.continue_()
self._page.route("**/*", handle_route)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if self._browser:
self._browser.close()
if self._playwright:
self._playwright.stop()
def get_current_url(self) -> str:
return self._page.url
# --- Common "Computer" actions ---
def screenshot(self) -> str:
"""Capture only the viewport (not full_page)."""
png_bytes = self._page.screenshot(full_page=False)
return base64.b64encode(png_bytes).decode("utf-8")
def click(self, x: int, y: int, button: str = "left") -> None:
match button:
case "back":
self.back()
case "forward":
self.forward()
case "wheel":
self._page.mouse.wheel(x, y)
case _:
button_mapping = {"left": "left", "right": "right"}
button_type = button_mapping.get(button, "left")
self._page.mouse.click(x, y, button=button_type)
def double_click(self, x: int, y: int) -> None:
self._page.mouse.dblclick(x, y)
def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
self._page.mouse.move(x, y)
self._page.evaluate(f"window.scrollBy({scroll_x}, {scroll_y})")
def type(self, text: str) -> None:
self._page.keyboard.type(text)
def wait(self, ms: int = 1000) -> None:
time.sleep(ms / 1000)
def move(self, x: int, y: int) -> None:
self._page.mouse.move(x, y)
def keypress(self, keys: List[str]) -> None:
mapped_keys = [CUA_KEY_TO_PLAYWRIGHT_KEY.get(key.lower(), key) for key in keys]
for key in mapped_keys:
self._page.keyboard.down(key)
for key in reversed(mapped_keys):
self._page.keyboard.up(key)
def drag(self, path: List[Dict[str, int]]) -> None:
if not path:
return
self._page.mouse.move(path[0]["x"], path[0]["y"])
self._page.mouse.down()
for point in path[1:]:
self._page.mouse.move(point["x"], point["y"])
self._page.mouse.up()
# --- Extra browser-oriented actions ---
def goto(self, url: str) -> None:
try:
return self._page.goto(url)
except Exception as e:
print(f"Error navigating to {url}: {e}")
def back(self) -> None:
return self._page.go_back()
def forward(self) -> None:
return self._page.go_forward()
# --- Subclass hook ---
def _get_browser_and_page(self) -> tuple[Browser, Page]:
"""Subclasses must implement, returning (Browser, Page)."""
raise NotImplementedError


@@ -0,0 +1,24 @@
openai>=1.56.0
annotated-types==0.7.0
anyio==4.8.0
browserbase==1.2.0
certifi==2025.1.31
charset-normalizer==3.4.1
distro==1.9.0
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
jiter==0.8.2
pillow==11.1.0
playwright==1.50.0
pydantic==2.10.6
pydantic_core==2.27.2
pyee==12.1.1
python-dotenv==1.0.1
requests==2.32.3
scrapybara>=2.3.6
sniffio==1.3.1
typing_extensions==4.12.2
urllib3==2.3.0


@@ -0,0 +1,156 @@
import os
import base64
from computers import Computer
from computers.default import LocalPlaywrightBrowser
from utils import create_response, check_blocklisted_url
def load_prompt():
"""Load prompt from prompt file (defaults to prompt.txt, can be overridden with PROMPT_FILE env var)."""
prompt_file = os.environ.get("PROMPT_FILE")
if not prompt_file:
prompt_file = os.path.join(os.path.dirname(__file__), "prompt.txt")
if not os.path.exists(prompt_file):
raise FileNotFoundError(f"Prompt file not found: {prompt_file}")
with open(prompt_file, "r", encoding="utf-8") as f:
return f.read().strip()
def acknowledge_safety_check_callback(message: str) -> bool:
# Auto-approve in CI/non-interactive environments
print(f"Safety Check Warning: {message} - Auto-approving in CI mode")
return True
def handle_item(item, computer: Computer):
"""Handle each item; may cause a computer action + screenshot."""
if item["type"] == "message": # print messages
print(item["content"][0]["text"])
if item["type"] == "computer_call": # perform computer actions
action = item["action"]
action_type = action["type"]
action_args = {k: v for k, v in action.items() if k != "type"}
print(f"{action_type}({action_args})")
# give our computer environment action to perform
getattr(computer, action_type)(**action_args)
screenshot_base64 = computer.screenshot()
pending_checks = item.get("pending_safety_checks", [])
for check in pending_checks:
if not acknowledge_safety_check_callback(check["message"]):
raise ValueError(f"Safety check failed: {check['message']}")
# return value informs model of the latest screenshot
call_output = {
"type": "computer_call_output",
"call_id": item["call_id"],
"acknowledged_safety_checks": pending_checks,
"output": {
"type": "input_image",
"image_url": f"data:image/png;base64,{screenshot_base64}",
},
}
# additional URL safety checks for browser environments
if computer.get_environment() == "browser":
current_url = computer.get_current_url()
call_output["output"]["current_url"] = current_url
check_blocklisted_url(current_url)
return [call_output]
return []
def main():
"""Run the CUA (Computer Use Assistant) loop, using Local Playwright."""
output_text_path = os.environ.get("OUTPUT_TEXT_PATH", "output.txt")
screenshot_path = os.environ.get("SCREENSHOT_PATH", "output.png")
all_messages = [] # Collect all model messages
last_screenshot_base64 = None
with LocalPlaywrightBrowser(headless=True) as computer:
dimensions = computer.get_dimensions()
tools = [
{
"type": "computer-preview",
"display_width": dimensions[0],
"display_height": dimensions[1],
"environment": computer.get_environment(),
}
]
items = []
# Load the task prompt from prompt.txt
user_input = load_prompt()
items.append({"role": "user", "content": user_input})
while True: # keep looping until we get a final response
response = create_response(
model="computer-use-preview",
input=items,
tools=tools,
truncation="auto",
)
if "output" not in response:
print(response)
raise ValueError("No output from model")
items += response["output"]
for item in response["output"]:
# Collect all message output from the model
if item.get("type") == "message":
content = item.get("content", [])
for content_item in content:
if isinstance(content_item, dict) and "text" in content_item:
text = content_item["text"]
all_messages.append(text)
result = handle_item(item, computer)
items += result
# Capture last screenshot from computer_call outputs
if result and len(result) > 0:
for output_item in result:
if output_item.get("type") == "computer_call_output":
output = output_item.get("output", {})
if output.get("type") == "input_image":
image_url = output.get("image_url", "")
if image_url.startswith("data:image/png;base64,"):
last_screenshot_base64 = image_url.split(",", 1)[1]
if items[-1].get("role") == "assistant":
break
# Take one final screenshot before closing
if not last_screenshot_base64:
try:
last_screenshot_base64 = computer.screenshot() # Returns base64 string directly
except Exception:
pass
# Save the last screenshot to file
if last_screenshot_base64:
os.makedirs(os.path.dirname(screenshot_path) or ".", exist_ok=True)
with open(screenshot_path, "wb") as f:
f.write(base64.b64decode(last_screenshot_base64))
# Save all model output messages to file
os.makedirs(os.path.dirname(output_text_path) or ".", exist_ok=True)
with open(output_text_path, "w") as f:
if all_messages:
# Join all messages with double newlines for readability
f.write("\n\n".join(all_messages))
else:
# Fallback: save error message if no messages were captured
f.write("No model output messages were captured.")
if __name__ == "__main__":
main()


@@ -0,0 +1,37 @@
import os
import requests
from urllib.parse import urlparse
# Example blocked domains - customize for your use case
BLOCKED_DOMAINS = [
"example-malicious-site.com",
]
def create_response(**kwargs):
url = "https://api.openai.com/v1/responses"
headers = {
"Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}",
"Content-Type": "application/json"
}
openai_org = os.getenv("OPENAI_ORG")
if openai_org:
headers["Openai-Organization"] = openai_org
response = requests.post(url, headers=headers, json=kwargs)
if response.status_code != 200:
print(f"Error: {response.status_code} {response.text}")
return response.json()
def check_blocklisted_url(url: str) -> None:
"""Raise ValueError if the given URL (including subdomains) is in the blocklist."""
hostname = urlparse(url).hostname or ""
if any(
hostname == blocked or hostname.endswith(f".{blocked}")
for blocked in BLOCKED_DOMAINS
):
raise ValueError(f"Blocked URL: {url}")
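For illustration, a standalone sketch of the matching rule used above — exact hostname match, or any subdomain of a blocked domain (the names here are hypothetical copies for demonstration, not the real helper):

```python
from urllib.parse import urlparse

# Hypothetical demo blocklist, mirroring BLOCKED_DOMAINS above.
DEMO_BLOCKED = ["example-malicious-site.com"]

def is_blocked(url: str) -> bool:
    # Same rule as check_blocklisted_url: exact host, or host ending in ".<blocked>".
    hostname = urlparse(url).hostname or ""
    return any(hostname == b or hostname.endswith(f".{b}") for b in DEMO_BLOCKED)
```

Note that `notexample-malicious-site.com` is not blocked: the subdomain check requires a leading dot before the blocked domain.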


@@ -1226,13 +1226,5 @@
"addToProject": {
"url": "https://github.com/orgs/grafana/projects/69"
}
},
{
"type": "label",
"name": "area/suggestions",
"action": "addToProject",
"addToProject": {
"url": "https://github.com/orgs/grafana/projects/56"
}
}
]


@@ -469,15 +469,5 @@
"addToProject": {
"url": "https://github.com/orgs/grafana/projects/190"
}
},
{
"type": "changedfiles",
"matches": [
"public/app/features/panel/suggestions/**/*",
"public/app/plugins/panel/**/suggestions.ts",
"packages/grafana-data/src/types/suggestions*"
],
"action": "updateLabel",
"addLabel": "area/suggestions"
}
]

.github/scripts/README-metrics.md vendored Normal file

@@ -0,0 +1,189 @@
# BabyBot Metrics Export
Export and analyze BabyBot usability review metrics from GitHub.
## Quick Start
```bash
# Export to both CSV and JSON
python .github/scripts/export-babybot-metrics.py
# Export only CSV
python .github/scripts/export-babybot-metrics.py --format csv --output my-metrics
# Export only JSON
python .github/scripts/export-babybot-metrics.py --format json
```
## Output Files
### CSV Format (`babybot-metrics.csv`)
Detailed row-per-comment data, suitable for Excel, Google Sheets, or data analysis tools.
| Column | Description |
|--------|-------------|
| comment_id | GitHub comment ID |
| pr_number | PR number |
| comment_type | `resolvable` or `general` |
| severity | Critical, Major, or Minor |
| confidence | Low, Medium, or High |
| file | File path (for resolvable comments) |
| created_at | Timestamp |
| url | Link to comment |
| total_reactions | Number of reactions (👍, ❤️, 🎉, etc.) |
| has_reply | Boolean - whether comment has replies |
| engagement_score | Calculated engagement score |
### JSON Format (`babybot-metrics.json`)
Aggregated metrics with summaries by PR, severity, and confidence.
```json
{
"total_comments": 42,
"resolvable_comments": 38,
"general_comments": 4,
"by_severity": {
"Critical": 5,
"Major": 20,
"Minor": 17
},
"by_confidence": {
"High": 10,
"Medium": 25,
"Low": 7
},
"engagement": {
"comments_with_reactions": 15,
"comments_with_replies": 8,
"total_reactions": 45,
"avg_reactions_per_comment": 1.07,
"engagement_rate": 35.7
},
"by_pr": {
"114646": {
"count": 3,
"severities": {"Major": 2, "Minor": 1},
"engaged": 2
}
}
}
```
## Dashboard Integration Options
### 1. Google Sheets Dashboard
```bash
# Export CSV and upload to Google Sheets
python .github/scripts/export-babybot-metrics.py --format csv
# Upload babybot-metrics.csv to Google Sheets
# Create pivot tables and charts
```
### 2. Grafana Dashboard
```bash
# Export JSON and serve via HTTP
python .github/scripts/export-babybot-metrics.py --format json
# Use JSON API data source in Grafana
```
### 3. Automated Weekly Reports (GitHub Actions)
See `.github/workflows/babybot-weekly-report.yml` (create this workflow)
### 4. Tableau/Power BI
Import the CSV file directly into your BI tool.
## Metrics Tracked
- **Total comments posted** by BabyBot
- **Resolvable vs general comments** (indicates attachment success rate)
- **Issues by severity** (Critical/Major/Minor breakdown)
- **Issues by confidence** (High/Medium/Low)
- **PRs reviewed** (count and distribution)
- **Comments per PR** (average and trends)
- **Engagement metrics:**
- Comments with reactions (👍, ❤️, 🎉, etc.)
- Comments with replies (developer responses)
- Total reactions count
- Average reactions per comment
- Engagement rate % (how many comments get any response)
## Tracking Comment Engagement (Proxy for "Resolved")
Since GitHub doesn't expose review-thread "resolved" status via the REST API, we track **engagement** as a proxy:
### Reactions as Resolution Indicators
Establish a reaction convention with your team:
- 👍 (`:+1:`) = Acknowledged/Understood
- 🎉 (`:hooray:`) = Fixed/Resolved
- 👀 (`:eyes:`) = Looking into it
- ❤️ (`:heart:`) = Appreciated/Helpful
### Query Engagement
```bash
# Get all BabyBot comments with reactions
gh api repos/grafana/grafana/pulls/comments --paginate \
--jq '.[] | select(.body | contains("BabyBot 🍼")) | {id: .id, reactions: .reactions, pr: .pull_request_url}'
# Count comments with specific reactions (e.g., "resolved" markers)
gh api repos/grafana/grafana/pulls/comments --paginate \
--jq '[.[] | select(.body | contains("BabyBot 🍼")) | .reactions.hooray] | add'
```
### What the export shows
- `total_reactions`: All reactions on the comment
- `has_reply`: Whether developers responded with a comment
- `engagement_score`: Weighted score (reactions + reply bonus)
High engagement score = comment was noticed and actioned! 📊
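The score and rate can be sketched as follows (weights assumed from this export's conventions: one point per reaction, plus a flat bonus of 5 when the comment drew a reply):

```python
def engagement_score(total_reactions: int, has_reply: bool) -> int:
    # Each reaction counts once; a developer reply adds a flat bonus of 5.
    return total_reactions + (5 if has_reply else 0)

def engagement_rate(comments_with_reactions: int, total_comments: int) -> float:
    # Percentage of comments that received at least one reaction.
    if total_comments == 0:
        return 0.0
    return round(comments_with_reactions / total_comments * 100, 1)
```

With the sample JSON earlier in this README (15 of 42 comments reacted to), `engagement_rate(15, 42)` gives `35.7`.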
## Scheduling Automatic Exports
Add to `.github/workflows/babybot-weekly-report.yml`:
```yaml
name: BabyBot Weekly Metrics
on:
schedule:
- cron: '0 9 * * MON' # Every Monday at 9am
workflow_dispatch: # Manual trigger
jobs:
export-metrics:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Export metrics
run: python .github/scripts/export-babybot-metrics.py
- name: Upload to artifact
uses: actions/upload-artifact@v4
with:
name: babybot-metrics-${{ github.run_number }}
path: babybot-metrics.*
- name: Post to Slack
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
run: |
# Parse JSON and send summary to Slack
SUMMARY=$(cat babybot-metrics.json | jq -r '"Total Comments: \(.total_comments), Critical: \(.by_severity.Critical // 0), Major: \(.by_severity.Major // 0)"')
curl -X POST $SLACK_WEBHOOK_URL \
-H 'Content-type: application/json' \
-d "{\"text\": \"📊 Weekly BabyBot Metrics: $SUMMARY\"}"
```
## Example Queries
```bash
# Count by PR
jq '.by_pr | to_entries | map({pr: .key, count: .value.count})' babybot-metrics.json
# Average comments per PR
jq '[.by_pr[].count] | add / length' babybot-metrics.json
# Critical issues percentage
jq '(.by_severity.Critical / .total_comments * 100)' babybot-metrics.json
```

.github/scripts/export-babybot-metrics.py vendored Executable file

@@ -0,0 +1,323 @@
#!/usr/bin/env python3
"""
Export BabyBot usability review metrics from GitHub
Usage: python export-babybot-metrics.py [--format csv|json] [--output filename]
"""
import subprocess
import json
import csv
import argparse
from datetime import datetime
from collections import defaultdict
def get_prs_with_usability_review_label():
"""Get all PRs with the usability-review label"""
cmd = [
'gh', 'api', 'repos/grafana/grafana/issues',
'--paginate',
'-f', 'state=all',
'-f', 'labels=usability-review',
'--jq', '.[] | .number'
]
result = subprocess.run(cmd, capture_output=True, text=True)
pr_numbers = []
for line in result.stdout.strip().split('\n'):
if line:
pr_numbers.append(line.strip())
return pr_numbers
def get_review_comments(pr_numbers):
"""Get all BabyBot review comments (resolvable ones) from specific PRs"""
comments = []
for pr_number in pr_numbers:
cmd = [
'gh', 'api', f'repos/grafana/grafana/pulls/{pr_number}/comments',
'--jq', '''
.[] |
select(.body | contains("BabyBot 🍼")) |
{
id: .id,
pr_number: (.pull_request_url | split("/") | .[-1]),
file: .path,
line: .line,
created_at: .created_at,
updated_at: .updated_at,
body: .body,
html_url: .html_url,
reactions: .reactions,
in_reply_to_id: .in_reply_to_id
}
'''
]
result = subprocess.run(cmd, capture_output=True, text=True)
for line in result.stdout.strip().split('\n'):
if line:
try:
comments.append(json.loads(line))
except json.JSONDecodeError:
continue
return comments
def get_general_comments(pr_numbers):
"""Get BabyBot general comments (fallback ones) from specific PRs"""
comments = []
for pr_number in pr_numbers:
cmd = [
'gh', 'api', f'repos/grafana/grafana/issues/{pr_number}/comments',
'--jq', '''
.[] |
select(.body | contains("BabyBot 🍼")) |
{
id: .id,
pr_number: (.html_url | split("/") | .[-3]),
created_at: .created_at,
updated_at: .updated_at,
body: .body,
html_url: .html_url,
reactions: .reactions
}
'''
]
result = subprocess.run(cmd, capture_output=True, text=True)
for line in result.stdout.strip().split('\n'):
if line:
try:
comments.append(json.loads(line))
except json.JSONDecodeError:
continue
return comments
def extract_severity(body):
"""Extract severity from comment body"""
if '‼️ Critical' in body:
return 'Critical'
elif '⚠️ Major' in body:
return 'Major'
elif '🟢 Minor' in body:
return 'Minor'
return 'Unknown'
def extract_confidence(body):
"""Extract confidence level from comment body"""
import re
match = re.search(r'\*\*Confidence:\*\*\s*(Low|Medium|High)', body)
return match.group(1) if match else 'Unknown'
def get_replies_to_comment(pr_number, comment_id):
"""Get all replies to a specific comment"""
cmd = [
'gh', 'api', f'repos/grafana/grafana/pulls/{pr_number}/comments',
'--jq', f'.[] | select(.in_reply_to_id == {comment_id})'
]
try:
result = subprocess.run(cmd, capture_output=True, text=True)
replies = []
for line in result.stdout.strip().split('\n'):
if line:
try:
replies.append(json.loads(line))
except json.JSONDecodeError:
continue
return replies
except Exception:
return []
def calculate_engagement_score(comment):
"""Calculate engagement score based on reactions and replies"""
reactions = comment.get('reactions', {})
# Count all reaction types
total_reactions = 0
if isinstance(reactions, dict):
total_reactions = reactions.get('total_count', 0)
# Weight certain reactions more heavily
engagement_reactions = (
reactions.get('+1', 0) + # thumbs up
reactions.get('hooray', 0) + # party
reactions.get('heart', 0) # heart
)
# Check if there are replies
has_reply = comment.get('in_reply_to_id') is not None
# Simple engagement score: reactions + bonus for replies
score = total_reactions + (5 if has_reply else 0)
return {
'total_reactions': total_reactions,
'engagement_reactions': engagement_reactions,
'has_reply': has_reply,
'score': score
}
def aggregate_metrics(review_comments, general_comments):
"""Aggregate metrics by PR and severity"""
metrics = {
'total_comments': len(review_comments) + len(general_comments),
'resolvable_comments': len(review_comments),
'general_comments': len(general_comments),
'by_pr': defaultdict(lambda: {'count': 0, 'severities': defaultdict(int), 'engaged': 0}),
'by_severity': defaultdict(int),
'by_confidence': defaultdict(int),
'engagement': {
'comments_with_reactions': 0,
'comments_with_replies': 0,
'total_reactions': 0,
'avg_reactions_per_comment': 0,
'engagement_rate': 0
},
'export_date': datetime.now().isoformat()
}
all_comments = review_comments + general_comments
total_reactions = 0
comments_with_reactions = 0
comments_with_replies = 0
for comment in all_comments:
pr_num = comment['pr_number']
severity = extract_severity(comment['body'])
confidence = extract_confidence(comment['body'])
# Calculate engagement
engagement = calculate_engagement_score(comment)
metrics['by_pr'][pr_num]['count'] += 1
metrics['by_pr'][pr_num]['severities'][severity] += 1
metrics['by_severity'][severity] += 1
metrics['by_confidence'][confidence] += 1
# Track engagement
if engagement['total_reactions'] > 0:
comments_with_reactions += 1
total_reactions += engagement['total_reactions']
metrics['by_pr'][pr_num]['engaged'] += 1
if engagement['has_reply']:
comments_with_replies += 1
# Calculate engagement metrics
total = len(all_comments)
metrics['engagement']['comments_with_reactions'] = comments_with_reactions
metrics['engagement']['comments_with_replies'] = comments_with_replies
metrics['engagement']['total_reactions'] = total_reactions
metrics['engagement']['avg_reactions_per_comment'] = round(total_reactions / total, 2) if total > 0 else 0
metrics['engagement']['engagement_rate'] = round((comments_with_reactions / total) * 100, 1) if total > 0 else 0
return metrics
def export_to_csv(metrics, review_comments, general_comments, filename):
"""Export detailed metrics to CSV"""
all_comments = review_comments + general_comments
with open(filename, 'w', newline='') as f:
writer = csv.DictWriter(f, fieldnames=[
'comment_id', 'pr_number', 'comment_type', 'severity',
'confidence', 'file', 'created_at', 'url', 'total_reactions',
'has_reply', 'engagement_score'
])
writer.writeheader()
for comment in all_comments:
engagement = calculate_engagement_score(comment)
writer.writerow({
'comment_id': comment['id'],
'pr_number': comment['pr_number'],
'comment_type': 'resolvable' if 'file' in comment else 'general',
'severity': extract_severity(comment['body']),
'confidence': extract_confidence(comment['body']),
'file': comment.get('file', 'N/A'),
'created_at': comment['created_at'],
'url': comment['html_url'],
'total_reactions': engagement['total_reactions'],
'has_reply': engagement['has_reply'],
'engagement_score': engagement['score']
})
print(f"✅ Exported detailed metrics to {filename}")
def export_to_json(metrics, filename):
"""Export aggregated metrics to JSON"""
with open(filename, 'w') as f:
json.dump(metrics, f, indent=2)
print(f"✅ Exported aggregated metrics to {filename}")
def print_summary(metrics):
"""Print summary to console"""
print("\n📊 BabyBot Usability Review Metrics")
print("=" * 50)
print(f"Total Comments: {metrics['total_comments']}")
print(f" - Resolvable (on files): {metrics['resolvable_comments']}")
print(f" - General: {metrics['general_comments']}")
print(f"\nBy Severity:")
for severity, count in metrics['by_severity'].items():
print(f" - {severity}: {count}")
print(f"\nBy Confidence:")
for confidence, count in metrics['by_confidence'].items():
print(f" - {confidence}: {count}")
print(f"\nEngagement:")
eng = metrics['engagement']
print(f" - Comments with reactions: {eng['comments_with_reactions']}")
print(f" - Comments with replies: {eng['comments_with_replies']}")
print(f" - Total reactions: {eng['total_reactions']}")
print(f" - Avg reactions per comment: {eng['avg_reactions_per_comment']}")
print(f" - Engagement rate: {eng['engagement_rate']}%")
print(f"\nPRs Reviewed: {len(metrics['by_pr'])}")
print("=" * 50)
def main():
parser = argparse.ArgumentParser(description='Export BabyBot metrics')
parser.add_argument('--format', choices=['csv', 'json', 'both'], default='both',
help='Export format (default: both)')
parser.add_argument('--output', default='babybot-metrics',
help='Output filename (without extension)')
args = parser.parse_args()
print("🔍 Fetching PRs with 'usability-review' label...")
pr_numbers = get_prs_with_usability_review_label()
print(f"Found {len(pr_numbers)} PRs with usability-review label")
if not pr_numbers:
print("No PRs found with usability-review label. Exiting.")
return
print("\n🔍 Fetching BabyBot comments from those PRs...")
review_comments = get_review_comments(pr_numbers)
general_comments = get_general_comments(pr_numbers)
print(f"Found {len(review_comments)} review comments and {len(general_comments)} general comments")
print("\n📈 Aggregating metrics...")
metrics = aggregate_metrics(review_comments, general_comments)
print_summary(metrics)
if args.format in ['csv', 'both']:
export_to_csv(metrics, review_comments, general_comments, f"{args.output}.csv")
if args.format in ['json', 'both']:
export_to_json(metrics, f"{args.output}.json")
print("\n✨ Done!")
if __name__ == '__main__':
main()


@@ -1,11 +1,11 @@
name: Add comment about adding a What's new note for either what's new or breaking changes
name: Add comment about adding a What's new note
on:
pull_request:
types: [labeled]
jobs:
add-comment:
if: ${{ ! github.event.pull_request.head.repo.fork && (contains(github.event.pull_request.labels.*.name, 'add to what''s new') || contains(github.event.pull_request.labels.*.name, 'breaking change') || contains(github.event.pull_request.labels.*.name, 'levitate breaking change')) }}
if: ${{ ! github.event.pull_request.head.repo.fork && contains(github.event.pull_request.labels.*.name, 'add to what''s new') }}
runs-on: ubuntu-latest
permissions:
pull-requests: write
@@ -13,4 +13,4 @@ jobs:
- uses: marocchino/sticky-pull-request-comment@773744901bac0e8cbb5a0dc842800d45e9b2b405 # v2.9.4
with:
message: |
Since you've added the `What's New` or a breaking change label, consider drafting a [What's new note](https://admin.grafana.com/content-admin/#/collections/whats-new/new) for this feature.
Since you've added the `Add to what's new` label, consider drafting a [What's new note](https://admin.grafana.com/content-admin/#/collections/whats-new/new) for this feature.


@@ -85,7 +85,6 @@ area/scenes
area/search
area/security
area/streaming
area/suggestions
area/templating/repeating
area/tooltip
area/transformations


@@ -0,0 +1,127 @@
name: BabyBot Weekly Metrics Report
on:
schedule:
- cron: '0 9 * * MON' # Every Monday at 9am UTC
workflow_dispatch: # Allow manual trigger
permissions:
contents: read
id-token: write
jobs:
export-metrics:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Export BabyBot metrics
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
python .github/scripts/export-babybot-metrics.py --format both --output babybot-metrics-$(date +%Y-%m-%d)
- name: Upload metrics artifacts
uses: actions/upload-artifact@v4
with:
name: babybot-metrics-${{ github.run_number }}
path: |
babybot-metrics-*.csv
babybot-metrics-*.json
retention-days: 90
- name: Generate summary
id: summary
run: |
METRICS_FILE=$(ls babybot-metrics-*.json | head -1)
TOTAL=$(jq -r '.total_comments' $METRICS_FILE)
RESOLVABLE=$(jq -r '.resolvable_comments' $METRICS_FILE)
CRITICAL=$(jq -r '.by_severity.Critical // 0' $METRICS_FILE)
MAJOR=$(jq -r '.by_severity.Major // 0' $METRICS_FILE)
MINOR=$(jq -r '.by_severity.Minor // 0' $METRICS_FILE)
PRS=$(jq -r '.by_pr | length' $METRICS_FILE)
echo "total=$TOTAL" >> $GITHUB_OUTPUT
echo "resolvable=$RESOLVABLE" >> $GITHUB_OUTPUT
echo "critical=$CRITICAL" >> $GITHUB_OUTPUT
echo "major=$MAJOR" >> $GITHUB_OUTPUT
echo "minor=$MINOR" >> $GITHUB_OUTPUT
echo "prs=$PRS" >> $GITHUB_OUTPUT
- name: Post summary to Slack
if: ${{ secrets.SLACK_WEBHOOK_URL != '' }}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
TOTAL: ${{ steps.summary.outputs.total }}
RESOLVABLE: ${{ steps.summary.outputs.resolvable }}
CRITICAL: ${{ steps.summary.outputs.critical }}
MAJOR: ${{ steps.summary.outputs.major }}
MINOR: ${{ steps.summary.outputs.minor }}
PRS: ${{ steps.summary.outputs.prs }}
run: |
curl -X POST $SLACK_WEBHOOK_URL \
-H 'Content-type: application/json' \
-d "{
\"channel\": \"#hackathon15-agentic-usability-review\",
\"username\": \"BabyBot Metrics\",
\"icon_emoji\": \":bar_chart:\",
\"blocks\": [
{
\"type\": \"header\",
\"text\": {
\"type\": \"plain_text\",
\"text\": \"📊 BabyBot Weekly Metrics Report\"
}
},
{
\"type\": \"section\",
\"text\": {
\"type\": \"mrkdwn\",
\"text\": \"*Total Comments:* $TOTAL\\n*Resolvable:* $RESOLVABLE\\n*PRs Reviewed:* $PRS\"
}
},
{
\"type\": \"section\",
\"text\": {
\"type\": \"mrkdwn\",
\"text\": \"*By Severity*\\n• ‼️ Critical: $CRITICAL\\n• ⚠️ Major: $MAJOR\\n• 🟢 Minor: $MINOR\"
}
},
{
\"type\": \"actions\",
\"elements\": [
{
\"type\": \"button\",
\"text\": {
\"type\": \"plain_text\",
\"text\": \"Download Full Report\"
},
\"url\": \"https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\"
}
]
}
]
}"
- name: Create summary comment (optional - for visibility)
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
echo "## 📊 BabyBot Metrics Summary" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Total Comments:** ${{ steps.summary.outputs.total }}" >> $GITHUB_STEP_SUMMARY
echo "**Resolvable Comments:** ${{ steps.summary.outputs.resolvable }}" >> $GITHUB_STEP_SUMMARY
echo "**PRs Reviewed:** ${{ steps.summary.outputs.prs }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "### By Severity" >> $GITHUB_STEP_SUMMARY
echo "- ‼️ Critical: ${{ steps.summary.outputs.critical }}" >> $GITHUB_STEP_SUMMARY
echo "- ⚠️ Major: ${{ steps.summary.outputs.major }}" >> $GITHUB_STEP_SUMMARY
echo "- 🟢 Minor: ${{ steps.summary.outputs.minor }}" >> $GITHUB_STEP_SUMMARY


@@ -33,16 +33,6 @@ jobs:
GCOM_TOKEN=ephemeral-instances-bot:gcom-token
REGISTRY=ephemeral-instances-bot:registry
GCP_SA_ACCOUNT_KEY_BASE64=ephemeral-instances-bot:sa-key
# Secrets placed in the ci/common/<path> path in Vault
common_secrets: |
DOCKERHUB_USERNAME=dockerhub:username
DOCKERHUB_PASSWORD=dockerhub:password
- name: Log in to Docker Hub to avoid unauthenticated image pull rate-limiting
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ env.DOCKERHUB_PASSWORD }}
- name: Generate a GitHub app installation token
id: generate_token

.github/workflows/usability-review.yml vendored Normal file

@@ -0,0 +1,613 @@
name: Usability Review Agent
on:
issue_comment:
types: [created]
pull_request:
types: [labeled]
permissions:
contents: read
pull-requests: write
issues: write
id-token: write
env:
OUTPUT_TEXT_PATH: usability-review.txt
SCREENSHOT_PATH: usability-screenshot.png
jobs:
usability-review:
# Run when either: (1) someone comments '/test-me-baby-one-more-time' on a PR, or (2) 'usability-review' label is added
if: ${{ (github.event.issue.pull_request && startsWith(github.event.comment.body, '/test-me-baby-one-more-time')) || github.event.label.name == 'usability-review' }}
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: React to comment
if: github.event_name == 'issue_comment'
uses: actions/github-script@v7
with:
script: |
github.rest.reactions.createForIssueComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: context.payload.comment.id,
content: 'rocket'
});
- name: Post acknowledgment
if: github.event_name == 'issue_comment'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ github.event.issue.number }}
run: |
gh pr comment $PR_NUMBER --body "🤖 Starting usability review... This may take a few minutes."
- name: Set PR number
id: pr-number
run: |
if [ "${{ github.event_name }}" = "issue_comment" ]; then
echo "number=${{ github.event.issue.number }}" >> $GITHUB_OUTPUT
else
echo "number=${{ github.event.pull_request.number }}" >> $GITHUB_OUTPUT
fi
- name: Extract target URL from ephemeral instance comment
id: extract-url
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.pr-number.outputs.number }}
run: |
# Extract ephemeral instance URL from PR comments
# Looks for the most recent comment containing: https://ephemeral{alphanumeric}.grafana-dev.net
#
# TODO: Future improvement - auto-trigger deployment if no URL found:
# 1. Comment /deploy-to-hg on the PR
# 2. Wait ~20 minutes for build to complete
# 3. Retry extracting URL
# 4. Continue with usability review
# Get all PR comments, newest first
COMMENTS=$(gh pr view $PR_NUMBER --json comments --jq '.comments | reverse | .[].body')
# Find the most recent comment with ephemeral instance URL (alphanumeric subdomain)
TARGET_URL=$(echo "$COMMENTS" | grep -oE 'https://ephemeral[a-zA-Z0-9]+\.grafana-dev\.net' | head -1)
if [ -z "$TARGET_URL" ]; then
echo "::error::No ephemeral instance URL found in PR comments"
echo "::notice::Please deploy an ephemeral instance first by commenting /deploy-to-hg"
exit 1
fi
echo "Found target URL: $TARGET_URL"
echo "TARGET_URL=$TARGET_URL" >> $GITHUB_ENV
- name: "Get vault secrets (dev)"
id: vault-secrets-dev
uses: grafana/shared-workflows/actions/get-vault-secrets@get-vault-secrets/v1.3.0
with:
vault_instance: "dev"
repo_secrets: |
GCOM_HOST=usability-review-agent:GCOM_HOST
GCOM_TOKEN=usability-review-agent:GCOM_TOKEN
OPENAI_API_KEY=usability-review-agent:OPENAI_API_KEY
GRAFANA_USERNAME=usability-review-agent:GRAFANA_USERNAME
GRAFANA_PASSWORD=usability-review-agent:GRAFANA_PASSWORD
SLACK_WEBHOOK_URL=usability-review-agent:SLACK_WEBHOOK_URL
- name: "Get vault secrets (ops) - GitHub App credentials"
id: vault-secrets-ops
uses: grafana/shared-workflows/actions/get-vault-secrets@main
with:
repo_secrets: |
APP_ID=ephemeral-instances-bot:app-id
APP_PEM=ephemeral-instances-bot:app-private-key
- name: "Generate GitHub App token for private repo access"
id: generate_token
uses: tibdex/github-app-token@b62528385c34dbc9f38e5f4225ac829252d1ea92
with:
app_id: ${{ env.APP_ID }}
private_key: ${{ env.APP_PEM }}
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install OpenAI package
run: |
python -m pip install -U pip
pip install openai
- name: Get PR details
id: pr-details
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.pr-number.outputs.number }}
run: |
# Get PR title, body, and changed files
PR_DATA=$(gh pr view $PR_NUMBER --json title,body,files)
PR_TITLE=$(echo "$PR_DATA" | jq -r '.title')
PR_BODY=$(echo "$PR_DATA" | jq -r '.body // ""')
# Get list of changed files
CHANGED_FILES=$(echo "$PR_DATA" | jq -r '.files[].path' | head -20)
# Save to files for next step
echo "$PR_TITLE" > pr_title.txt
echo "$PR_BODY" > pr_body.txt
echo "$CHANGED_FILES" > changed_files.txt
echo "PR Title: $PR_TITLE"
echo "Changed files count: $(echo "$CHANGED_FILES" | wc -l)"
- name: Generate PR summary
id: pr-summary
env:
OPENAI_API_KEY: ${{ env.OPENAI_API_KEY }}
run: |
python - <<'PY'
import os
from openai import OpenAI
client = OpenAI()
# Read PR details
with open("pr_title.txt", "r") as f:
pr_title = f.read().strip()
with open("pr_body.txt", "r") as f:
pr_body = f.read().strip()
with open("changed_files.txt", "r") as f:
changed_files = f.read().strip()
# Generate summary of what the PR does
system = "You are a technical analyst summarizing pull request changes."
user = f"""Analyze this PR and provide a 2-3 sentence summary of what feature/change is being implemented:
PR Title: {pr_title}
PR Description: {pr_body if pr_body else "(No description provided)"}
Changed Files:
{changed_files}
Focus on what the user-facing impact is and what functionality is being added or modified."""
resp = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "system", "content": system},
{"role": "user", "content": user},
],
)
pr_summary = resp.choices[0].message.content.strip()
with open("pr_summary.txt", "w") as f:
f.write(pr_summary)
print(f"PR Summary: {pr_summary}")
PY
- name: Fetch prompt from external repo
env:
GH_TOKEN: ${{ steps.generate_token.outputs.token }}
run: |
# Fetch prompt from private repo using GitHub API
gh api \
-H "Accept: application/vnd.github.raw" \
/repos/grafana/usability-review-agent-github-action/contents/prompts/single/reviewer.md \
> base_prompt.txt
echo "Fetched prompt from external repo"
echo "Prompt size: $(wc -c < base_prompt.txt) bytes"
echo "First 100 chars: $(head -c 100 base_prompt.txt)"
- name: Build dynamic prompt
id: build-prompt
run: |
# Read the PR summary
PR_SUMMARY=$(cat pr_summary.txt)
# Read base prompt template from external repo
BASE_PROMPT=$(cat base_prompt.txt)
# Build dynamic prompt with PR context
cat > dynamic_prompt.txt <<EOF
## Context
You are reviewing a pull request that makes the following changes:
$PR_SUMMARY
Your task is to test the workflow and provide usability feedback.
## Instructions
$BASE_PROMPT
EOF
# Copy to action directory so it can be used
cp dynamic_prompt.txt .github/actions/usability-review-agent/dynamic_prompt.txt
echo "Dynamic prompt created"
- name: Extract instance slug from URL
run: |
INSTANCE_SLUG=$(echo "${{ env.TARGET_URL }}" | sed -E 's|https?://([^.]+)\..*|\1|')
echo "Instance slug: $INSTANCE_SLUG"
echo "INSTANCE_SLUG=$INSTANCE_SLUG" >> $GITHUB_ENV
- name: Enable basic auth on instance
env:
GCOM_HOST: ${{ env.GCOM_HOST }}
GCOM_TOKEN: ${{ env.GCOM_TOKEN }}
INSTANCE_SLUG: ${{ env.INSTANCE_SLUG }}
run: |
# Check current config
CURRENT_CONFIG=$(curl -s "${GCOM_HOST}/api/instances/${INSTANCE_SLUG}/config" \
-H "Authorization: Bearer ${GCOM_TOKEN}")
DISABLE_LOGIN_FORM=$(echo "$CURRENT_CONFIG" | jq -r '.auth.disable_login_form // true')
echo "Current disable_login_form: $DISABLE_LOGIN_FORM"
if [ "$DISABLE_LOGIN_FORM" != "false" ]; then
echo "Enabling basic auth..."
curl -X POST "${GCOM_HOST}/api/instances/${INSTANCE_SLUG}/config" \
-H "Authorization: Bearer ${GCOM_TOKEN}" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d 'config[auth][disable_login_form]=false'
echo "Waiting for pod restart..."
sleep 300
# Verify instance is ready
for i in {1..10}; do
if curl -sf https://${INSTANCE_SLUG}.grafana-dev.net/api/health; then
echo "Instance ready!"
break
fi
echo "Attempt $i failed, waiting..."
sleep 10
done
else
echo "Basic auth already enabled, skipping..."
fi
- name: Create test user for playwright
# Using system realm token with stack-users:write scope
env:
GCOM_HOST: ${{ env.GCOM_HOST }}
GCOM_TOKEN: ${{ env.GCOM_TOKEN }}
INSTANCE_SLUG: ${{ env.INSTANCE_SLUG }}
GRAFANA_USERNAME: ${{ env.GRAFANA_USERNAME }}
GRAFANA_PASSWORD: ${{ env.GRAFANA_PASSWORD }}
run: |
pip install requests
python3 <<'PYTHON'
import os, requests, json
gcom_host = os.environ['GCOM_HOST']
gcom_token = os.environ['GCOM_TOKEN']
instance = os.environ['INSTANCE_SLUG']
username = os.environ['GRAFANA_USERNAME']
password = os.environ['GRAFANA_PASSWORD']
# Create user via GCOM API (proxies to instance /api/admin/users)
resp = requests.post(
f"{gcom_host}/api/instances/{instance}/api/admin/users",
headers={"Authorization": f"Bearer {gcom_token}"},
json={"name": "Usability Review Agent", "login": username, "password": password}
)
if resp.status_code == 412:
print(f"Test user already exists (412), skipping creation")
elif resp.status_code >= 400:
print(f"Error creating user: {resp.status_code} {resp.text}")
resp.raise_for_status()
else:
user_id = resp.json()['id']
print(f"Created new test user (id: {user_id})")
# Make user admin via GCOM API (proxies to instance)
perm_resp = requests.put(
f"{gcom_host}/api/instances/{instance}/api/admin/users/{user_id}/permissions",
headers={"Authorization": f"Bearer {gcom_token}"},
json={"isGrafanaAdmin": True}
)
if perm_resp.status_code >= 400:
print(f"Warning: Failed to set admin permissions: {perm_resp.status_code}")
else:
print(f"Set admin permissions for test user")
print(f"Test user setup complete")
PYTHON
- name: Run usability review agent
uses: ./.github/actions/usability-review-agent
with:
target_url: ${{ env.TARGET_URL }}
openai_api_key: ${{ env.OPENAI_API_KEY }}
grafana_username: ${{ env.GRAFANA_USERNAME }}
grafana_password: ${{ env.GRAFANA_PASSWORD }}
workflow_name: "the application interface"
prompt_file: "dynamic_prompt.txt"
output_text_path: ${{ env.OUTPUT_TEXT_PATH }}
screenshot_path: ${{ env.SCREENSHOT_PATH }}
- name: Upload review artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: usability-review-results
path: |
${{ env.OUTPUT_TEXT_PATH }}
${{ env.SCREENSHOT_PATH }}
login_page.png
login_error.png
if-no-files-found: warn
retention-days: 7
- name: Post review results to PR
if: always()
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.pr-number.outputs.number }}
RUN_ID: ${{ github.run_id }}
run: |
COMMENT_FILE=$(mktemp)
echo "## 🔍 Usability Review Results" > "$COMMENT_FILE"
echo "" >> "$COMMENT_FILE"
echo "**Target URL:** ${{ env.TARGET_URL }}" >> "$COMMENT_FILE"
echo "" >> "$COMMENT_FILE"
echo "---" >> "$COMMENT_FILE"
echo "" >> "$COMMENT_FILE"
if [ -s ${{ env.OUTPUT_TEXT_PATH }} ]; then
cat ${{ env.OUTPUT_TEXT_PATH }} >> "$COMMENT_FILE"
else
echo "⚠️ No review output was generated." >> "$COMMENT_FILE"
fi
echo "" >> "$COMMENT_FILE"
echo "---" >> "$COMMENT_FILE"
echo "" >> "$COMMENT_FILE"
echo "📸 [View screenshot and full artifacts](https://github.com/${{ github.repository }}/actions/runs/$RUN_ID)" >> "$COMMENT_FILE"
gh pr comment $PR_NUMBER --body-file "$COMMENT_FILE"
rm "$COMMENT_FILE"
- name: Post individual suggestions as separate comments
if: always()
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.pr-number.outputs.number }}
OUTPUT_TEXT_PATH: ${{ env.OUTPUT_TEXT_PATH }}
run: |
python3 <<'PYTHON'
import os
import subprocess
import re
import json
# Read the AI output
output_path = os.environ['OUTPUT_TEXT_PATH']
if not os.path.exists(output_path) or os.path.getsize(output_path) == 0:
print("No output file found, skipping individual comments")
exit(0)
with open(output_path, 'r') as f:
content = f.read()
# Extract table rows (skip header and separator)
# New format: | Issue | Severity | Impact | Confidence | Suggestion |
table_pattern = r'\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|'
matches = re.findall(table_pattern, content)
if len(matches) < 3:
print(f"Not enough table rows found ({len(matches)}), skipping individual comments")
exit(0)
# Skip first 2 matches (header + separator)
issue_rows = matches[2:] # Get all issues after header
pr_number = os.environ['PR_NUMBER']
# Get changed files in the PR to attach comments to
pr_files_result = subprocess.run(
['gh', 'pr', 'view', pr_number, '--json', 'files'],
capture_output=True,
text=True,
check=True
)
pr_data = json.loads(pr_files_result.stdout)
changed_files = [f['path'] for f in pr_data.get('files', [])]
# Prefer frontend files for UI issues
frontend_files = [f for f in changed_files if any(ext in f for ext in ['.tsx', '.ts', '.jsx', '.js', '.vue', '.css', '.scss'])]
target_files = frontend_files if frontend_files else changed_files
if not target_files:
print("No changed files found, posting as general comments instead")
target_files = [None] * len(issue_rows)
for idx, (issue, severity, impact, confidence, suggestion) in enumerate(issue_rows):
# Clean up whitespace
issue = issue.strip()
severity = severity.strip()
impact = impact.strip()
confidence = confidence.strip()
suggestion = suggestion.strip()
# Extract issue title from markdown (bold text before <br />)
issue_title_match = re.search(r'\*\*(.+?)\*\*', issue)
issue_title = issue_title_match.group(1) if issue_title_match else issue
# Create individual comment
comment = f"""### {severity} {issue_title}
**Confidence:** {confidence}
**Impact:** {impact}
**Suggestion:** {suggestion}
---
_Posted by BabyBot 🍼 - Usability Review Agent_"""
# Try to post as review comment (resolvable) on a changed file
target_file = target_files[idx % len(target_files)] if target_files[0] is not None else None
if target_file:
# Post as review comment on the file (resolvable!)
try:
# Create a review comment on line 1 of the file
subprocess.run(
['gh', 'pr', 'review', pr_number, '--comment', '--body', comment, '--file', target_file, '--line', '1'],
check=True,
capture_output=True
)
print(f"Posted resolvable comment for: {issue_title} (on {target_file})")
except subprocess.CalledProcessError as e:
# Fallback to general comment if review comment fails
print(f"Failed to post review comment, falling back to general comment: {e}")
subprocess.run(
['gh', 'pr', 'comment', pr_number, '--body', comment],
check=True
)
print(f"Posted general comment for: {issue_title}")
else:
# Post as general comment
subprocess.run(
['gh', 'pr', 'comment', pr_number, '--body', comment],
check=True
)
print(f"Posted general comment for: {issue_title}")
PYTHON
- name: Send Slack notification on success
if: always()
env:
SLACK_WEBHOOK_URL: ${{ env.SLACK_WEBHOOK_URL }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.pr-number.outputs.number }}
OUTPUT_TEXT_PATH: ${{ env.OUTPUT_TEXT_PATH }}
run: |
# Check if Slack webhook is configured
if [ -z "$SLACK_WEBHOOK_URL" ]; then
echo "Slack webhook not configured, skipping notification"
exit 0
fi
# Check if review was successful
if [ ! -s ${{ env.OUTPUT_TEXT_PATH }} ]; then
echo "No review output found, skipping success notification"
exit 0
fi
# Fetch PR details and export for Python
PR_DATA=$(gh pr view $PR_NUMBER --json title,url,author)
export PR_TITLE=$(echo "$PR_DATA" | jq -r '.title')
export PR_URL=$(echo "$PR_DATA" | jq -r '.url')
export PR_AUTHOR=$(echo "$PR_DATA" | jq -r '.author.login')
python3 <<'PYTHON'
import os
import json
import subprocess
import re
# Read the AI output to extract issues
with open(os.environ['OUTPUT_TEXT_PATH'], 'r') as f:
content = f.read()
# Extract table rows for issues
# New format: | Issue | Severity | Impact | Confidence | Suggestion |
table_pattern = r'\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|'
matches = re.findall(table_pattern, content)
# Build issues text
issues_text = ""
if len(matches) >= 3:
issue_rows = matches[2:] # Skip header rows
for issue, severity, impact, confidence, suggestion in issue_rows:
issue = issue.strip()
severity = severity.strip()
suggestion = suggestion.strip()
# Extract issue title from markdown
issue_title_match = re.search(r'\*\*(.+?)\*\*', issue)
issue_title = issue_title_match.group(1) if issue_title_match else issue
issues_text += f"• *{severity}* {issue_title}\n → {suggestion}\n"
else:
issues_text = "No specific issues found"
pr_number = os.environ['PR_NUMBER']
pr_title = os.environ['PR_TITLE']
pr_url = os.environ['PR_URL']
pr_author = os.environ['PR_AUTHOR']
# Build Slack message
slack_payload = {
"channel": "#hackathon15-agentic-usability-review",
"username": "Usability Review Agent",
"icon_emoji": ":mag:",
"blocks": [
{
"type": "header",
"text": {
"type": "plain_text",
"text": f"✅ Usability Review Complete: PR #{pr_number}",
"emoji": True
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"*<{pr_url}|{pr_title}>*\nby @{pr_author}"
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"*Issues & Suggestions*\n{issues_text}"
}
},
{
"type": "actions",
"elements": [
{
"type": "button",
"text": {
"type": "plain_text",
"text": "View Full Review",
"emoji": True
},
"url": pr_url
}
]
}
]
}
# Send to Slack
webhook_url = os.environ['SLACK_WEBHOOK_URL']
subprocess.run(
['curl', '-X', 'POST', '-H', 'Content-type: application/json',
'--data', json.dumps(slack_payload), webhook_url],
check=True
)
print("Slack notification sent successfully")
PYTHON

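The comment-posting and Slack steps above share the same markdown-table parsing logic: find all five-cell pipe-delimited rows, skip the first two (header and separator), then pull the bolded title out of each issue cell. A minimal, self-contained sketch of that extraction (the sample report row is hypothetical):

```python
import re

# Five pipe-delimited cells per row, matching the report format:
# | Issue | Severity | Impact | Confidence | Suggestion |
TABLE_ROW = re.compile(r'\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|')

report = """\
| Issue | Severity | Impact | Confidence | Suggestion |
| --- | --- | --- | --- | --- |
| **Unlabeled save button** | Major | High | 85% | Add an aria-label |
"""

matches = TABLE_ROW.findall(report)
# The first two matches are the header and separator rows; real issues follow.
issues = matches[2:]
for issue, severity, impact, confidence, suggestion in issues:
    # Issue titles are bolded in the report, e.g. **Unlabeled save button**
    m = re.search(r'\*\*(.+?)\*\*', issue)
    title = m.group(1) if m else issue.strip()
    print(f"{severity.strip()}: {title} -> {suggestion.strip()}")
```

This is also why both steps guard on `len(matches) < 3`: fewer than three matched rows means there are no issue rows beyond the header and separator.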

@@ -14,7 +14,7 @@ ARG JS_SRC=js-builder
# Dependabot cannot update dependencies listed in ARGs
# By using FROM instructions we can delegate dependency updates to dependabot
FROM alpine:3.23.0 AS alpine-base
FROM alpine:3.22.2 AS alpine-base
FROM ubuntu:22.04 AS ubuntu-base
FROM golang:1.25.5-alpine AS go-builder-base
FROM --platform=${JS_PLATFORM} node:24-alpine AS js-builder-base
@@ -93,7 +93,6 @@ COPY pkg/storage/unified/apistore pkg/storage/unified/apistore
COPY pkg/semconv pkg/semconv
COPY pkg/aggregator pkg/aggregator
COPY apps/playlist apps/playlist
COPY apps/quotas apps/quotas
COPY apps/plugins apps/plugins
COPY apps/shorturl apps/shorturl
COPY apps/annotation apps/annotation


@@ -8,7 +8,7 @@ require (
github.com/google/go-github/v70 v70.0.0
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4
github.com/grafana/grafana v0.0.0-00010101000000-000000000000
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk v0.48.4
github.com/grafana/grafana-app-sdk/logging v0.48.3
github.com/grafana/grafana-plugin-sdk-go v0.284.0
github.com/grafana/grafana/pkg/apimachinery v0.0.0


@@ -618,8 +618,8 @@ github.com/grafana/dataplane/sdata v0.0.9 h1:AGL1LZnCUG4MnQtnWpBPbQ8ZpptaZs14w6k
github.com/grafana/dataplane/sdata v0.0.9/go.mod h1:Jvs5ddpGmn6vcxT7tCTWAZ1mgi4sbcdFt9utQx5uMAU=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 h1:jSojuc7njleS3UOz223WDlXOinmuLAIPI0z2vtq8EgI=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4/go.mod h1:VahT+GtfQIM+o8ht2StR6J9g+Ef+C2Vokh5uuSmOD/4=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=
github.com/grafana/grafana-app-sdk v0.48.4/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk/logging v0.48.3 h1:72NUpGNiJXCNQz/on++YSsl38xuVYYBKv5kKQaOClX4=
github.com/grafana/grafana-app-sdk/logging v0.48.3/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grafana/grafana-aws-sdk v1.3.0 h1:/bfJzP93rCel1GbWoRSq0oUo424MZXt8jAp2BK9w8tM=


@@ -3,7 +3,7 @@ module github.com/grafana/grafana/apps/alerting/alertenrichment
go 1.25.5
require (
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk v0.48.4
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250901080157-a0280d701b28
k8s.io/apimachinery v0.34.2
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912


@@ -23,8 +23,8 @@ github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7O
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=
github.com/grafana/grafana-app-sdk v0.48.4/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250901080157-a0280d701b28 h1:PgMfX4OPENz/iXmtDDIW9+poZY4UD0hhmXm7flVclDo=
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250901080157-a0280d701b28/go.mod h1:av5N0Naq+8VV9MLF7zAkihy/mVq5UbS2EvRSJukDHlY=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=


@@ -6,7 +6,7 @@ require (
github.com/go-kit/log v0.2.1
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk v0.48.4
github.com/grafana/grafana-app-sdk/logging v0.48.3
github.com/prometheus/client_golang v1.23.2
github.com/spf13/pflag v1.0.10


@@ -216,16 +216,14 @@ github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 h1:jSojuc7njleS3UOz223WDlXOinmuLAIPI0z2vtq8EgI=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4/go.mod h1:VahT+GtfQIM+o8ht2StR6J9g+Ef+C2Vokh5uuSmOD/4=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=
github.com/grafana/grafana-app-sdk v0.48.4/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk/logging v0.48.3 h1:72NUpGNiJXCNQz/on++YSsl38xuVYYBKv5kKQaOClX4=
github.com/grafana/grafana-app-sdk/logging v0.48.3/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grafana/loki/pkg/push v0.0.0-20250823105456-332df2b20000 h1:/5LKSYgLmAhwA4m6iGUD4w1YkydEWWjazn9qxCFT8W0=


@@ -1,7 +1,6 @@
package config
import (
"net/http"
"net/url"
"time"
@@ -16,14 +15,9 @@ const (
lokiDefaultMaxQuerySize = 65536 // 64kb
)
type LokiConfig struct {
lokiclient.LokiConfig
Transport http.RoundTripper
}
type NotificationConfig struct {
Enabled bool
Loki LokiConfig
Loki lokiclient.LokiConfig
}
type RuntimeConfig struct {
@@ -33,7 +27,7 @@ type RuntimeConfig struct {
func (n *NotificationConfig) AddFlagsWithPrefix(prefix string, flags *pflag.FlagSet) {
flags.BoolVar(&n.Enabled, prefix+".enabled", false, "Enable notification query endpoints")
addLokiFlags(&n.Loki.LokiConfig, prefix+".loki", flags)
addLokiFlags(&n.Loki, prefix+".loki", flags)
}
func (r *RuntimeConfig) AddFlagsWithPrefix(prefix string, flags *pflag.FlagSet) {


@@ -24,12 +24,10 @@ func TestRuntimeConfig(t *testing.T) {
expected: RuntimeConfig{
Notification: NotificationConfig{
Enabled: false,
Loki: LokiConfig{
LokiConfig: lokiclient.LokiConfig{
ReadPathURL: nil,
MaxQueryLength: 721 * time.Hour,
MaxQuerySize: 65536,
},
Loki: lokiclient.LokiConfig{
ReadPathURL: nil,
MaxQueryLength: 721 * time.Hour,
MaxQuerySize: 65536,
},
},
},
@@ -40,12 +38,10 @@ func TestRuntimeConfig(t *testing.T) {
expected: RuntimeConfig{
Notification: NotificationConfig{
Enabled: true,
Loki: LokiConfig{
LokiConfig: lokiclient.LokiConfig{
ReadPathURL: nil,
MaxQueryLength: 721 * time.Hour,
MaxQuerySize: 65536,
},
Loki: lokiclient.LokiConfig{
ReadPathURL: nil,
MaxQueryLength: 721 * time.Hour,
MaxQuerySize: 65536,
},
},
},
@@ -61,15 +57,13 @@ func TestRuntimeConfig(t *testing.T) {
expected: RuntimeConfig{
Notification: NotificationConfig{
Enabled: false,
Loki: LokiConfig{
LokiConfig: lokiclient.LokiConfig{
ReadPathURL: lokiURL,
BasicAuthUser: "foo",
BasicAuthPassword: "bar",
TenantID: "baz",
MaxQueryLength: 721 * time.Hour,
MaxQuerySize: 65536,
},
Loki: lokiclient.LokiConfig{
ReadPathURL: lokiURL,
BasicAuthUser: "foo",
BasicAuthPassword: "bar",
TenantID: "baz",
MaxQueryLength: 721 * time.Hour,
MaxQuerySize: 65536,
},
},
},


@@ -5,7 +5,6 @@ import (
"encoding/json"
"errors"
"fmt"
"net/http"
"regexp"
"sort"
"strings"
@@ -20,7 +19,6 @@ import (
"go.opentelemetry.io/otel/trace"
"github.com/grafana/grafana/apps/alerting/historian/pkg/apis/alertinghistorian/v0alpha1"
"github.com/grafana/grafana/apps/alerting/historian/pkg/app/config"
"github.com/grafana/grafana/apps/alerting/historian/pkg/app/logutil"
)
@@ -49,7 +47,7 @@ type LokiReader struct {
logger logging.Logger
}
func NewLokiReader(cfg config.LokiConfig, reg prometheus.Registerer, logger logging.Logger, tracer trace.Tracer) *LokiReader {
func NewLokiReader(cfg lokiclient.LokiConfig, reg prometheus.Registerer, logger logging.Logger, tracer trace.Tracer) *LokiReader {
duration := instrument.NewHistogramCollector(promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
Namespace: Namespace,
Subsystem: Subsystem,
@@ -58,13 +56,9 @@ func NewLokiReader(cfg config.LokiConfig, reg prometheus.Registerer, logger logg
Buckets: instrument.DefBuckets,
}, instrument.HistogramCollectorBuckets))
requester := &http.Client{
Transport: cfg.Transport,
}
gkLogger := logutil.ToGoKitLogger(logger)
return &LokiReader{
client: lokiclient.NewLokiClient(cfg.LokiConfig, requester, nil, duration, gkLogger, tracer, LokiClientSpanName),
client: lokiclient.NewLokiClient(cfg, lokiclient.NewRequester(), nil, duration, gkLogger, tracer, LokiClientSpanName),
logger: logger,
}
}


@@ -3,7 +3,7 @@ module github.com/grafana/grafana/apps/alerting/notifications
go 1.25.5
require (
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk v0.48.4
github.com/grafana/grafana-app-sdk/logging v0.48.3
k8s.io/apimachinery v0.34.2
k8s.io/apiserver v0.34.2


@@ -71,8 +71,8 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=
github.com/grafana/grafana-app-sdk v0.48.4/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk/logging v0.48.3 h1:72NUpGNiJXCNQz/on++YSsl38xuVYYBKv5kKQaOClX4=
github.com/grafana/grafana-app-sdk/logging v0.48.3/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.1.0 h1:QGLs/O40yoNK9vmy4rhUGBVyMf1lISBGtXRpsu/Qu/o=


@@ -23,12 +23,6 @@ type Receiver struct {
Spec ReceiverSpec `json:"spec" yaml:"spec"`
}
func NewReceiver() *Receiver {
return &Receiver{
Spec: *NewReceiverSpec(),
}
}
func (o *Receiver) GetSpec() any {
return o.Spec
}


@@ -12,7 +12,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaReceiver = resource.NewSimpleSchema("notifications.alerting.grafana.app", "v0alpha1", NewReceiver(), &ReceiverList{}, resource.WithKind("Receiver"),
schemaReceiver = resource.NewSimpleSchema("notifications.alerting.grafana.app", "v0alpha1", &Receiver{}, &ReceiverList{}, resource.WithKind("Receiver"),
resource.WithPlural("receivers"), resource.WithScope(resource.NamespacedScope), resource.WithSelectableFields([]resource.SelectableField{{
FieldSelector: "spec.title",
FieldValueFunc: func(o resource.Object) (string, error) {


@@ -23,12 +23,6 @@ type RoutingTree struct {
Spec RoutingTreeSpec `json:"spec" yaml:"spec"`
}
func NewRoutingTree() *RoutingTree {
return &RoutingTree{
Spec: *NewRoutingTreeSpec(),
}
}
func (o *RoutingTree) GetSpec() any {
return o.Spec
}


@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaRoutingTree = resource.NewSimpleSchema("notifications.alerting.grafana.app", "v0alpha1", NewRoutingTree(), &RoutingTreeList{}, resource.WithKind("RoutingTree"),
schemaRoutingTree = resource.NewSimpleSchema("notifications.alerting.grafana.app", "v0alpha1", &RoutingTree{}, &RoutingTreeList{}, resource.WithKind("RoutingTree"),
resource.WithPlural("routingtrees"), resource.WithScope(resource.NamespacedScope))
kindRoutingTree = resource.Kind{
Schema: schemaRoutingTree,


@@ -23,12 +23,6 @@ type TemplateGroup struct {
Spec TemplateGroupSpec `json:"spec" yaml:"spec"`
}
func NewTemplateGroup() *TemplateGroup {
return &TemplateGroup{
Spec: *NewTemplateGroupSpec(),
}
}
func (o *TemplateGroup) GetSpec() any {
return o.Spec
}


@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaTemplateGroup = resource.NewSimpleSchema("notifications.alerting.grafana.app", "v0alpha1", NewTemplateGroup(), &TemplateGroupList{}, resource.WithKind("TemplateGroup"),
schemaTemplateGroup = resource.NewSimpleSchema("notifications.alerting.grafana.app", "v0alpha1", &TemplateGroup{}, &TemplateGroupList{}, resource.WithKind("TemplateGroup"),
resource.WithPlural("templategroups"), resource.WithScope(resource.NamespacedScope))
kindTemplateGroup = resource.Kind{
Schema: schemaTemplateGroup,


@@ -23,12 +23,6 @@ type TimeInterval struct {
Spec TimeIntervalSpec `json:"spec" yaml:"spec"`
}
func NewTimeInterval() *TimeInterval {
return &TimeInterval{
Spec: *NewTimeIntervalSpec(),
}
}
func (o *TimeInterval) GetSpec() any {
return o.Spec
}


@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaTimeInterval = resource.NewSimpleSchema("notifications.alerting.grafana.app", "v0alpha1", NewTimeInterval(), &TimeIntervalList{}, resource.WithKind("TimeInterval"),
schemaTimeInterval = resource.NewSimpleSchema("notifications.alerting.grafana.app", "v0alpha1", &TimeInterval{}, &TimeIntervalList{}, resource.WithKind("TimeInterval"),
resource.WithPlural("timeintervals"), resource.WithScope(resource.NamespacedScope))
kindTimeInterval = resource.Kind{
Schema: schemaTimeInterval,


@@ -3,7 +3,7 @@ module github.com/grafana/grafana/apps/alerting/rules
go 1.25.5
require (
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk v0.48.4
github.com/grafana/grafana-app-sdk/logging v0.48.3
github.com/prometheus/common v0.67.3
k8s.io/apimachinery v0.34.2


@@ -48,8 +48,8 @@ github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=
github.com/grafana/grafana-app-sdk v0.48.4/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk/logging v0.48.3 h1:72NUpGNiJXCNQz/on++YSsl38xuVYYBKv5kKQaOClX4=
github.com/grafana/grafana-app-sdk/logging v0.48.3/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 h1:NmZ1PKzSTQbuGHw9DGPFomqkkLWMC+vZCkfs+FHv1Vg=

View File

@@ -3,7 +3,7 @@ module github.com/grafana/grafana/apps/annotation
go 1.25.5
require (
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk v0.48.4
github.com/grafana/grafana-app-sdk/logging v0.48.3
k8s.io/apimachinery v0.34.2
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912

View File

@@ -48,8 +48,8 @@ github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=
github.com/grafana/grafana-app-sdk v0.48.4/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk/logging v0.48.3 h1:72NUpGNiJXCNQz/on++YSsl38xuVYYBKv5kKQaOClX4=
github.com/grafana/grafana-app-sdk/logging v0.48.3/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 h1:NmZ1PKzSTQbuGHw9DGPFomqkkLWMC+vZCkfs+FHv1Vg=

View File

@@ -3,7 +3,7 @@ module github.com/grafana/grafana/apps/collections
go 1.25.5
require (
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk v0.48.4
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250804150913-990f1c69ecc2
github.com/stretchr/testify v1.11.1
k8s.io/apimachinery v0.34.2

View File

@@ -33,8 +33,8 @@ github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=
github.com/grafana/grafana-app-sdk v0.48.4/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk/logging v0.48.3 h1:72NUpGNiJXCNQz/on++YSsl38xuVYYBKv5kKQaOClX4=
github.com/grafana/grafana-app-sdk/logging v0.48.3/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250804150913-990f1c69ecc2 h1:X0cnaFdR+iz+sDSuoZmkryFSjOirchHe2MdKSRwBWgM=

View File

@@ -8,8 +8,9 @@ import (
func (stars *StarsSpec) Add(group, kind, name string) {
for i, r := range stars.Resource {
if r.Group == group && r.Kind == kind {
stars.Resource[i].Names = append(r.Names, name)
stars.Normalize()
r.Names = append(r.Names, name)
slices.Sort(r.Names)
stars.Resource[i].Names = slices.Compact(r.Names)
return
}
}
@@ -45,15 +46,8 @@ func (stars *StarsSpec) Normalize() {
resources := make([]StarsResource, 0, len(stars.Resource))
for _, r := range stars.Resource {
if len(r.Names) > 0 {
unique := make([]string, 0, len(r.Names))
found := make(map[string]bool, len(r.Names))
for _, name := range r.Names {
if !found[name] {
unique = append(unique, name)
found[name] = true
}
}
r.Names = unique
slices.Sort(r.Names)
r.Names = slices.Compact(r.Names) // removes any duplicates
resources = append(resources, r)
}
}

View File

@@ -39,7 +39,7 @@ func TestStarsWrite(t *testing.T) {
Resource: []StarsResource{{
Group: "g",
Kind: "k",
Names: []string{"a", "b", "x", "c"}, // added c to the end
Names: []string{"a", "b", "c", "x"}, // added "b" (and sorted)
}},
},
}, {

View File

@@ -3,7 +3,7 @@ module github.com/grafana/grafana/apps/correlations
go 1.25.5
require (
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk v0.48.4
github.com/grafana/grafana-app-sdk/logging v0.48.3
k8s.io/apimachinery v0.34.2
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912

View File

@@ -48,8 +48,8 @@ github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=
github.com/grafana/grafana-app-sdk v0.48.4/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk/logging v0.48.3 h1:72NUpGNiJXCNQz/on++YSsl38xuVYYBKv5kKQaOClX4=
github.com/grafana/grafana-app-sdk/logging v0.48.3/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 h1:NmZ1PKzSTQbuGHw9DGPFomqkkLWMC+vZCkfs+FHv1Vg=

View File

@@ -11,7 +11,8 @@ do-generate: install-app-sdk update-app-sdk ## Run Grafana App SDK code generati
--tsgenpath=../../packages/grafana-schema/src/schema \
--grouping=group \
--defencoding=none \
--genoperatorstate=false
--genoperatorstate=false \
--noschemasinmanifest
.PHONY: post-generate-cleanup
post-generate-cleanup: ## Clean up the generated code

View File

@@ -5,7 +5,7 @@ go 1.25.5
require (
cuelang.org/go v0.11.1
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk v0.48.4
github.com/grafana/grafana-app-sdk/logging v0.48.3
github.com/grafana/grafana-plugin-sdk-go v0.284.0
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250514132646-acbc7b54ed9e
@@ -57,7 +57,6 @@ require (
github.com/hashicorp/go-hclog v1.6.3 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-plugin v1.7.0 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hashicorp/yamux v0.1.2 // indirect
github.com/jaegertracing/jaeger-idl v0.5.0 // indirect
github.com/josharian/intern v1.0.0 // indirect

View File

@@ -85,8 +85,8 @@ github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGr
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4/go.mod h1:qeWYbnWzaYGl88JlL9+DsP1GT2Cudm58rLtx13fKZdw=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 h1:jSojuc7njleS3UOz223WDlXOinmuLAIPI0z2vtq8EgI=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4/go.mod h1:VahT+GtfQIM+o8ht2StR6J9g+Ef+C2Vokh5uuSmOD/4=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=
github.com/grafana/grafana-app-sdk v0.48.4/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk/logging v0.48.3 h1:72NUpGNiJXCNQz/on++YSsl38xuVYYBKv5kKQaOClX4=
github.com/grafana/grafana-app-sdk/logging v0.48.3/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grafana/grafana-plugin-sdk-go v0.284.0 h1:1bK7eWsnPBLUWDcWJWe218Ik5ad0a5JpEL4mH9ry7Ws=
@@ -112,8 +112,6 @@ github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+l
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-plugin v1.7.0 h1:YghfQH/0QmPNc/AZMTFE3ac8fipZyZECHdDPshfk+mA=
github.com/hashicorp/go-plugin v1.7.0/go.mod h1:BExt6KEaIYx804z8k4gRzRLEvxKVb+kn0NMcihqOqb8=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hashicorp/yamux v0.1.2 h1:XtB8kyFOyHXYVFnwT5C3+Bdo8gArse7j2AQ0DA0Uey8=
github.com/hashicorp/yamux v0.1.2/go.mod h1:C+zze2n6e/7wshOZep2A70/aQU6QBRWJO/G6FT1wIns=
github.com/jaegertracing/jaeger-idl v0.5.0 h1:zFXR5NL3Utu7MhPg8ZorxtCBjHrL3ReM1VoB65FOFGE=

View File

@@ -768,10 +768,6 @@ VariableRefresh: *"never" | "onDashboardLoad" | "onTimeRangeChanged"
// Accepted values are `dontHide` (show label and value), `hideLabel` (show value only), `hideVariable` (show nothing).
VariableHide: *"dontHide" | "hideLabel" | "hideVariable"
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
VariableRegexApplyTo: *"value" | "text"
// Determine the origin of the adhoc variable filter
FilterOrigin: "dashboard"
@@ -807,7 +803,6 @@ QueryVariableSpec: {
datasource?: DataSourceRef
query: DataQueryKind
regex: string | *""
regexApplyTo?: VariableRegexApplyTo
sort: VariableSort
definition?: string
options: [...VariableOption] | *[]

View File

@@ -772,10 +772,6 @@ VariableRefresh: *"never" | "onDashboardLoad" | "onTimeRangeChanged"
// Accepted values are `dontHide` (show label and value), `hideLabel` (show value only), `hideVariable` (show nothing), `inControlsMenu` (show in a drop-down menu).
VariableHide: *"dontHide" | "hideLabel" | "hideVariable" | "inControlsMenu"
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
VariableRegexApplyTo: *"value" | "text"
// Determine the origin of the adhoc variable filter
FilterOrigin: "dashboard"
@@ -810,7 +806,6 @@ QueryVariableSpec: {
description?: string
query: DataQueryKind
regex: string | *""
regexApplyTo?: VariableRegexApplyTo
sort: VariableSort
definition?: string
options: [...VariableOption] | *[]

View File

@@ -222,8 +222,6 @@ lineage: schemas: [{
// Optional field, if you want to extract part of a series name or metric node segment.
// Named capture groups can be used to separate the display text and value.
regex?: string
// Determine whether regex applies to variable value or display text
regexApplyTo?: #VariableRegexApplyTo
// Additional static options for query variable
staticOptions?: [...#VariableOption]
// Ordering of static options in relation to options returned from data source for query variable
@@ -251,10 +249,6 @@ lineage: schemas: [{
// Accepted values are 0 (show label and value), 1 (show value only), 2 (show nothing), 3 (show under the controls dropdown menu).
#VariableHide: 0 | 1 | 2 | 3 @cuetsy(kind="enum",memberNames="dontHide|hideLabel|hideVariable|inControlsMenu") @grafana(TSVeneer="type")
// Determine whether regex applies to variable value or display text
// Accepted values are "value" (apply to value used in queries) or "text" (apply to display text shown to users)
#VariableRegexApplyTo: "value" | "text" @cuetsy(kind="type")
// Sort variable options
// Accepted values are:
// `0`: No sorting

View File

@@ -25,13 +25,6 @@ type Dashboard struct {
Status DashboardStatus `json:"status" yaml:"status"`
}
func NewDashboard() *Dashboard {
return &Dashboard{
Spec: *NewDashboardSpec(),
Status: *NewDashboardStatus(),
}
}
func (o *Dashboard) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaDashboard = resource.NewSimpleSchema("dashboard.grafana.app", "v0alpha1", NewDashboard(), &DashboardList{}, resource.WithKind("Dashboard"),
schemaDashboard = resource.NewSimpleSchema("dashboard.grafana.app", "v0alpha1", &Dashboard{}, &DashboardList{}, resource.WithKind("Dashboard"),
resource.WithPlural("dashboards"), resource.WithScope(resource.NamespacedScope))
kindDashboard = resource.Kind{
Schema: schemaDashboard,

View File

@@ -23,12 +23,6 @@ type Snapshot struct {
Spec SnapshotSpec `json:"spec" yaml:"spec"`
}
func NewSnapshot() *Snapshot {
return &Snapshot{
Spec: *NewSnapshotSpec(),
}
}
func (o *Snapshot) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaSnapshot = resource.NewSimpleSchema("dashboard.grafana.app", "v0alpha1", NewSnapshot(), &SnapshotList{}, resource.WithKind("Snapshot"),
schemaSnapshot = resource.NewSimpleSchema("dashboard.grafana.app", "v0alpha1", &Snapshot{}, &SnapshotList{}, resource.WithKind("Snapshot"),
resource.WithPlural("snapshots"), resource.WithScope(resource.NamespacedScope))
kindSnapshot = resource.Kind{
Schema: schemaSnapshot,

View File

@@ -222,8 +222,6 @@ lineage: schemas: [{
// Optional field, if you want to extract part of a series name or metric node segment.
// Named capture groups can be used to separate the display text and value.
regex?: string
// Determine whether regex applies to variable value or display text
regexApplyTo?: #VariableRegexApplyTo
// Additional static options for query variable
staticOptions?: [...#VariableOption]
// Ordering of static options in relation to options returned from data source for query variable
@@ -251,10 +249,6 @@ lineage: schemas: [{
// Accepted values are 0 (show label and value), 1 (show value only), 2 (show nothing), 3 (show under the controls dropdown menu).
#VariableHide: 0 | 1 | 2 | 3 @cuetsy(kind="enum",memberNames="dontHide|hideLabel|hideVariable|inControlsMenu") @grafana(TSVeneer="type")
// Determine whether regex applies to variable value or display text
// Accepted values are "value" (apply to value used in queries) or "text" (apply to display text shown to users)
#VariableRegexApplyTo: "value" | "text" @cuetsy(kind="type")
// Sort variable options
// Accepted values are:
// `0`: No sorting

View File

@@ -25,13 +25,6 @@ type Dashboard struct {
Status DashboardStatus `json:"status" yaml:"status"`
}
func NewDashboard() *Dashboard {
return &Dashboard{
Spec: *NewDashboardSpec(),
Status: *NewDashboardStatus(),
}
}
func (o *Dashboard) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaDashboard = resource.NewSimpleSchema("dashboard.grafana.app", "v1beta1", NewDashboard(), &DashboardList{}, resource.WithKind("Dashboard"),
schemaDashboard = resource.NewSimpleSchema("dashboard.grafana.app", "v1beta1", &Dashboard{}, &DashboardList{}, resource.WithKind("Dashboard"),
resource.WithPlural("dashboards"), resource.WithScope(resource.NamespacedScope))
kindDashboard = resource.Kind{
Schema: schemaDashboard,

View File

@@ -25,13 +25,6 @@ type Dashboard struct {
Status DashboardStatus `json:"status" yaml:"status"`
}
func NewDashboard() *Dashboard {
return &Dashboard{
Spec: *NewDashboardSpec(),
Status: *NewDashboardStatus(),
}
}
func (o *Dashboard) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaDashboard = resource.NewSimpleSchema("dashboard.grafana.app", "v2alpha1", NewDashboard(), &DashboardList{}, resource.WithKind("Dashboard"),
schemaDashboard = resource.NewSimpleSchema("dashboard.grafana.app", "v2alpha1", &Dashboard{}, &DashboardList{}, resource.WithKind("Dashboard"),
resource.WithPlural("dashboards"), resource.WithScope(resource.NamespacedScope))
kindDashboard = resource.Kind{
Schema: schemaDashboard,

View File

@@ -772,10 +772,6 @@ VariableRefresh: *"never" | "onDashboardLoad" | "onTimeRangeChanged"
// Accepted values are `dontHide` (show label and value), `hideLabel` (show value only), `hideVariable` (show nothing).
VariableHide: *"dontHide" | "hideLabel" | "hideVariable"
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
VariableRegexApplyTo: *"value" | "text"
// Determine the origin of the adhoc variable filter
FilterOrigin: "dashboard"
@@ -811,7 +807,6 @@ QueryVariableSpec: {
datasource?: DataSourceRef
query: DataQueryKind
regex: string | *""
regexApplyTo?: VariableRegexApplyTo
sort: VariableSort
definition?: string
options: [...VariableOption] | *[]

View File

@@ -1364,7 +1364,6 @@ type DashboardQueryVariableSpec struct {
Datasource *DashboardDataSourceRef `json:"datasource,omitempty"`
Query DashboardDataQueryKind `json:"query"`
Regex string `json:"regex"`
RegexApplyTo *DashboardVariableRegexApplyTo `json:"regexApplyTo,omitempty"`
Sort DashboardVariableSort `json:"sort"`
Definition *string `json:"definition,omitempty"`
Options []DashboardVariableOption `json:"options"`
@@ -1394,7 +1393,6 @@ func NewDashboardQueryVariableSpec() *DashboardQueryVariableSpec {
SkipUrlSync: false,
Query: *NewDashboardDataQueryKind(),
Regex: "",
RegexApplyTo: (func(input DashboardVariableRegexApplyTo) *DashboardVariableRegexApplyTo { return &input })(DashboardVariableRegexApplyToValue),
Options: []DashboardVariableOption{},
Multi: false,
IncludeAll: false,
@@ -1445,16 +1443,6 @@ const (
DashboardVariableRefreshOnTimeRangeChanged DashboardVariableRefresh = "onTimeRangeChanged"
)
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
// +k8s:openapi-gen=true
type DashboardVariableRegexApplyTo string
const (
DashboardVariableRegexApplyToValue DashboardVariableRegexApplyTo = "value"
DashboardVariableRegexApplyToText DashboardVariableRegexApplyTo = "text"
)
// Sort variable options
// Accepted values are:
// `disabled`: No sorting

View File

@@ -3646,12 +3646,6 @@ func schema_pkg_apis_dashboard_v2alpha1_DashboardQueryVariableSpec(ref common.Re
Format: "",
},
},
"regexApplyTo": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"sort": {
SchemaProps: spec.SchemaProps{
Default: "",

View File

@@ -25,13 +25,6 @@ type Dashboard struct {
Status DashboardStatus `json:"status" yaml:"status"`
}
func NewDashboard() *Dashboard {
return &Dashboard{
Spec: *NewDashboardSpec(),
Status: *NewDashboardStatus(),
}
}
func (o *Dashboard) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaDashboard = resource.NewSimpleSchema("dashboard.grafana.app", "v2beta1", NewDashboard(), &DashboardList{}, resource.WithKind("Dashboard"),
schemaDashboard = resource.NewSimpleSchema("dashboard.grafana.app", "v2beta1", &Dashboard{}, &DashboardList{}, resource.WithKind("Dashboard"),
resource.WithPlural("dashboards"), resource.WithScope(resource.NamespacedScope))
kindDashboard = resource.Kind{
Schema: schemaDashboard,

View File

@@ -776,10 +776,6 @@ VariableRefresh: *"never" | "onDashboardLoad" | "onTimeRangeChanged"
// Accepted values are `dontHide` (show label and value), `hideLabel` (show value only), `hideVariable` (show nothing), `inControlsMenu` (show in a drop-down menu).
VariableHide: *"dontHide" | "hideLabel" | "hideVariable" | "inControlsMenu"
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
VariableRegexApplyTo: *"value" | "text"
// Determine the origin of the adhoc variable filter
FilterOrigin: "dashboard"
@@ -814,7 +810,6 @@ QueryVariableSpec: {
description?: string
query: DataQueryKind
regex: string | *""
regexApplyTo?: VariableRegexApplyTo
sort: VariableSort
definition?: string
options: [...VariableOption] | *[]

View File

@@ -1367,7 +1367,6 @@ type DashboardQueryVariableSpec struct {
Description *string `json:"description,omitempty"`
Query DashboardDataQueryKind `json:"query"`
Regex string `json:"regex"`
RegexApplyTo *DashboardVariableRegexApplyTo `json:"regexApplyTo,omitempty"`
Sort DashboardVariableSort `json:"sort"`
Definition *string `json:"definition,omitempty"`
Options []DashboardVariableOption `json:"options"`
@@ -1397,7 +1396,6 @@ func NewDashboardQueryVariableSpec() *DashboardQueryVariableSpec {
SkipUrlSync: false,
Query: *NewDashboardDataQueryKind(),
Regex: "",
RegexApplyTo: (func(input DashboardVariableRegexApplyTo) *DashboardVariableRegexApplyTo { return &input })(DashboardVariableRegexApplyToValue),
Options: []DashboardVariableOption{},
Multi: false,
IncludeAll: false,
@@ -1449,16 +1447,6 @@ const (
DashboardVariableRefreshOnTimeRangeChanged DashboardVariableRefresh = "onTimeRangeChanged"
)
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
// +k8s:openapi-gen=true
type DashboardVariableRegexApplyTo string
const (
DashboardVariableRegexApplyToValue DashboardVariableRegexApplyTo = "value"
DashboardVariableRegexApplyToText DashboardVariableRegexApplyTo = "text"
)
// Sort variable options
// Accepted values are:
// `disabled`: No sorting

View File

@@ -3656,12 +3656,6 @@ func schema_pkg_apis_dashboard_v2beta1_DashboardQueryVariableSpec(ref common.Ref
Format: "",
},
},
"regexApplyTo": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"sort": {
SchemaProps: spec.SchemaProps{
Default: "",

File diff suppressed because one or more lines are too long

View File

@@ -12,6 +12,13 @@ import (
)
func RegisterConversions(s *runtime.Scheme, dsIndexProvider schemaversion.DataSourceIndexProvider, leIndexProvider schemaversion.LibraryElementIndexProvider) error {
// Wrap the provider once with 10s caching for all conversions.
// This prevents repeated DB queries across multiple conversion calls while allowing
// the cache to refresh periodically, making it suitable for long-lived singleton usage.
dsIndexProvider = schemaversion.WrapIndexProviderWithCache(dsIndexProvider)
// Wrap library element provider with caching as well
leIndexProvider = schemaversion.WrapLibraryElementProviderWithCache(leIndexProvider)
// v0 conversions
if err := s.AddConversionFunc((*dashv0.Dashboard)(nil), (*dashv1.Dashboard)(nil),
withConversionMetrics(dashv0.APIVERSION, dashv1.APIVERSION, func(a, b interface{}, scope conversion.Scope) error {
@@ -55,13 +62,13 @@ func RegisterConversions(s *runtime.Scheme, dsIndexProvider schemaversion.DataSo
// v2alpha1 conversions
if err := s.AddConversionFunc((*dashv2alpha1.Dashboard)(nil), (*dashv0.Dashboard)(nil),
withConversionMetrics(dashv2alpha1.APIVERSION, dashv0.APIVERSION, func(a, b interface{}, scope conversion.Scope) error {
return Convert_V2alpha1_to_V0(a.(*dashv2alpha1.Dashboard), b.(*dashv0.Dashboard), scope)
return Convert_V2alpha1_to_V0(a.(*dashv2alpha1.Dashboard), b.(*dashv0.Dashboard), scope, dsIndexProvider)
})); err != nil {
return err
}
if err := s.AddConversionFunc((*dashv2alpha1.Dashboard)(nil), (*dashv1.Dashboard)(nil),
withConversionMetrics(dashv2alpha1.APIVERSION, dashv1.APIVERSION, func(a, b interface{}, scope conversion.Scope) error {
return Convert_V2alpha1_to_V1beta1(a.(*dashv2alpha1.Dashboard), b.(*dashv1.Dashboard), scope)
return Convert_V2alpha1_to_V1beta1(a.(*dashv2alpha1.Dashboard), b.(*dashv1.Dashboard), scope, dsIndexProvider)
})); err != nil {
return err
}

View File

@@ -1,454 +0,0 @@
package conversion
import (
"context"
"sync/atomic"
"testing"
"time"
dashv0 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v0alpha1"
dashv1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v1beta1"
dashv2alpha1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v2alpha1"
dashv2beta1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v2beta1"
"github.com/grafana/grafana/apps/dashboard/pkg/migration"
"github.com/grafana/grafana/apps/dashboard/pkg/migration/schemaversion"
common "github.com/grafana/grafana/pkg/apimachinery/apis/common/v0alpha1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// countingDataSourceProvider tracks how many times Index() is called
type countingDataSourceProvider struct {
datasources []schemaversion.DataSourceInfo
callCount atomic.Int64
}
func newCountingDataSourceProvider(datasources []schemaversion.DataSourceInfo) *countingDataSourceProvider {
return &countingDataSourceProvider{
datasources: datasources,
}
}
func (p *countingDataSourceProvider) Index(_ context.Context) *schemaversion.DatasourceIndex {
p.callCount.Add(1)
return schemaversion.NewDatasourceIndex(p.datasources)
}
func (p *countingDataSourceProvider) getCallCount() int64 {
return p.callCount.Load()
}
// countingLibraryElementProvider tracks how many times GetLibraryElementInfo() is called
type countingLibraryElementProvider struct {
elements []schemaversion.LibraryElementInfo
callCount atomic.Int64
}
func newCountingLibraryElementProvider(elements []schemaversion.LibraryElementInfo) *countingLibraryElementProvider {
return &countingLibraryElementProvider{
elements: elements,
}
}
func (p *countingLibraryElementProvider) GetLibraryElementInfo(_ context.Context) []schemaversion.LibraryElementInfo {
p.callCount.Add(1)
return p.elements
}
func (p *countingLibraryElementProvider) getCallCount() int64 {
return p.callCount.Load()
}
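The counting providers above rely on `atomic.Int64` so the call count stays correct even if conversions ever run concurrently. A minimal sketch of that counting pattern in isolation (`callCounter` is an illustrative name, not part of the test file):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// callCounter shows the atomic.Int64 pattern used by the test helpers:
// Add(1) on each call, Load() to read, both safe under concurrent use.
type callCounter struct{ n atomic.Int64 }

func (c *callCounter) do() { c.n.Add(1) }

func main() {
	var c callCounter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.do()
		}()
	}
	wg.Wait()
	fmt.Println(c.n.Load()) // 100, with no data race
}
```

A plain `int64` incremented from multiple goroutines would be a data race; the atomic type avoids that without needing a mutex.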
// createTestV0Dashboard creates a minimal v0 dashboard for testing
// The dashboard has a datasource with UID only (no type) to force provider lookup
// and includes library panels to test library element provider caching
func createTestV0Dashboard(namespace, title string) *dashv0.Dashboard {
return &dashv0.Dashboard{
ObjectMeta: metav1.ObjectMeta{
Name: "test-dashboard",
Namespace: namespace,
},
Spec: common.Unstructured{
Object: map[string]interface{}{
"title": title,
"schemaVersion": schemaversion.LATEST_VERSION,
// Variables with datasource reference that requires lookup
"templating": map[string]interface{}{
"list": []interface{}{
map[string]interface{}{
"name": "query_var",
"type": "query",
"query": "label_values(up, job)",
// Datasource with UID only - type needs to be looked up
"datasource": map[string]interface{}{
"uid": "ds1",
// type is intentionally omitted to trigger provider lookup
},
},
},
},
"panels": []interface{}{
map[string]interface{}{
"id": 1,
"title": "Test Panel",
"type": "timeseries",
"targets": []interface{}{
map[string]interface{}{
// Datasource with UID only - type needs to be looked up
"datasource": map[string]interface{}{
"uid": "ds1",
},
},
},
},
// Library panel reference - triggers library element provider lookup
map[string]interface{}{
"id": 2,
"title": "Library Panel with Horizontal Repeat",
"type": "library-panel-ref",
"gridPos": map[string]interface{}{
"h": 8,
"w": 12,
"x": 0,
"y": 8,
},
"libraryPanel": map[string]interface{}{
"uid": "lib-panel-repeat-h",
"name": "Library Panel with Horizontal Repeat",
},
},
// Another library panel reference
map[string]interface{}{
"id": 3,
"title": "Library Panel without Repeat",
"type": "library-panel-ref",
"gridPos": map[string]interface{}{
"h": 3,
"w": 6,
"x": 0,
"y": 16,
},
"libraryPanel": map[string]interface{}{
"uid": "lib-panel-no-repeat",
"name": "Library Panel without Repeat",
},
},
},
},
},
}
}
// createTestV1Dashboard creates a minimal v1beta1 dashboard for testing
// The dashboard has a datasource with UID only (no type) to force provider lookup
// and includes library panels to test library element provider caching
func createTestV1Dashboard(namespace, title string) *dashv1.Dashboard {
return &dashv1.Dashboard{
ObjectMeta: metav1.ObjectMeta{
Name: "test-dashboard",
Namespace: namespace,
},
Spec: common.Unstructured{
Object: map[string]interface{}{
"title": title,
"schemaVersion": schemaversion.LATEST_VERSION,
// Variables with datasource reference that requires lookup
"templating": map[string]interface{}{
"list": []interface{}{
map[string]interface{}{
"name": "query_var",
"type": "query",
"query": "label_values(up, job)",
// Datasource with UID only - type needs to be looked up
"datasource": map[string]interface{}{
"uid": "ds1",
// type is intentionally omitted to trigger provider lookup
},
},
},
},
"panels": []interface{}{
map[string]interface{}{
"id": 1,
"title": "Test Panel",
"type": "timeseries",
"targets": []interface{}{
map[string]interface{}{
// Datasource with UID only - type needs to be looked up
"datasource": map[string]interface{}{
"uid": "ds1",
},
},
},
},
// Library panel reference - triggers library element provider lookup
map[string]interface{}{
"id": 2,
"title": "Library Panel with Vertical Repeat",
"type": "library-panel-ref",
"gridPos": map[string]interface{}{
"h": 4,
"w": 6,
"x": 0,
"y": 8,
},
"libraryPanel": map[string]interface{}{
"uid": "lib-panel-repeat-v",
"name": "Library Panel with Vertical Repeat",
},
},
// Another library panel reference
map[string]interface{}{
"id": 3,
"title": "Library Panel without Repeat",
"type": "library-panel-ref",
"gridPos": map[string]interface{}{
"h": 3,
"w": 6,
"x": 6,
"y": 8,
},
"libraryPanel": map[string]interface{}{
"uid": "lib-panel-no-repeat",
"name": "Library Panel without Repeat",
},
},
},
},
},
}
}
// TestConversionCaching_V0_to_V2alpha1 verifies caching works when converting V0 to V2alpha1
func TestConversionCaching_V0_to_V2alpha1(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-h", Name: "Library Panel with Horizontal Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, time.Minute)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, time.Minute)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
// Convert multiple dashboards in the same namespace
numDashboards := 5
namespace := "default"
for i := 0; i < numDashboards; i++ {
source := createTestV0Dashboard(namespace, "Dashboard "+string(rune('A'+i)))
target := &dashv2alpha1.Dashboard{}
err := Convert_V0_to_V2alpha1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion %d should succeed", i)
require.NotNil(t, target.Spec)
}
// With caching, the underlying datasource provider should only be called once per namespace
// The test dashboard has datasources without type that require lookup
assert.Equal(t, int64(1), underlyingDS.getCallCount(),
"datasource provider should be called only once for %d conversions in same namespace", numDashboards)
// Library element provider should also be called only once per namespace due to caching
assert.Equal(t, int64(1), underlyingLE.getCallCount(),
"library element provider should be called only once for %d conversions in same namespace", numDashboards)
}
// TestConversionCaching_V0_to_V2beta1 verifies caching works when converting V0 to V2beta1
func TestConversionCaching_V0_to_V2beta1(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-h", Name: "Library Panel with Horizontal Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, time.Minute)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, time.Minute)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
numDashboards := 5
namespace := "default"
for i := 0; i < numDashboards; i++ {
source := createTestV0Dashboard(namespace, "Dashboard "+string(rune('A'+i)))
target := &dashv2beta1.Dashboard{}
err := Convert_V0_to_V2beta1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion %d should succeed", i)
require.NotNil(t, target.Spec)
}
assert.Equal(t, int64(1), underlyingDS.getCallCount(),
"datasource provider should be called only once for %d conversions in same namespace", numDashboards)
assert.Equal(t, int64(1), underlyingLE.getCallCount(),
"library element provider should be called only once for %d conversions in same namespace", numDashboards)
}
// TestConversionCaching_V1beta1_to_V2alpha1 verifies caching works when converting V1beta1 to V2alpha1
func TestConversionCaching_V1beta1_to_V2alpha1(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-v", Name: "Library Panel with Vertical Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, time.Minute)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, time.Minute)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
numDashboards := 5
namespace := "default"
for i := 0; i < numDashboards; i++ {
source := createTestV1Dashboard(namespace, "Dashboard "+string(rune('A'+i)))
target := &dashv2alpha1.Dashboard{}
err := Convert_V1beta1_to_V2alpha1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion %d should succeed", i)
require.NotNil(t, target.Spec)
}
assert.Equal(t, int64(1), underlyingDS.getCallCount(),
"datasource provider should be called only once for %d conversions in same namespace", numDashboards)
assert.Equal(t, int64(1), underlyingLE.getCallCount(),
"library element provider should be called only once for %d conversions in same namespace", numDashboards)
}
// TestConversionCaching_V1beta1_to_V2beta1 verifies caching works when converting V1beta1 to V2beta1
func TestConversionCaching_V1beta1_to_V2beta1(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-v", Name: "Library Panel with Vertical Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, time.Minute)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, time.Minute)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
numDashboards := 5
namespace := "default"
for i := 0; i < numDashboards; i++ {
source := createTestV1Dashboard(namespace, "Dashboard "+string(rune('A'+i)))
target := &dashv2beta1.Dashboard{}
err := Convert_V1beta1_to_V2beta1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion %d should succeed", i)
require.NotNil(t, target.Spec)
}
assert.Equal(t, int64(1), underlyingDS.getCallCount(),
"datasource provider should be called only once for %d conversions in same namespace", numDashboards)
assert.Equal(t, int64(1), underlyingLE.getCallCount(),
"library element provider should be called only once for %d conversions in same namespace", numDashboards)
}
// TestConversionCaching_MultipleNamespaces verifies that different namespaces get separate cache entries
func TestConversionCaching_MultipleNamespaces(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-h", Name: "Library Panel with Horizontal Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, time.Minute)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, time.Minute)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
namespaces := []string{"default", "org-2", "org-3"}
numDashboardsPerNs := 3
for _, ns := range namespaces {
for i := 0; i < numDashboardsPerNs; i++ {
source := createTestV0Dashboard(ns, "Dashboard "+string(rune('A'+i)))
target := &dashv2alpha1.Dashboard{}
err := Convert_V0_to_V2alpha1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion for namespace %s should succeed", ns)
}
}
// With caching, each namespace should result in one call to the underlying provider
expectedCalls := int64(len(namespaces))
assert.Equal(t, expectedCalls, underlyingDS.getCallCount(),
"datasource provider should be called once per namespace (%d namespaces)", len(namespaces))
assert.Equal(t, expectedCalls, underlyingLE.getCallCount(),
"library element provider should be called once per namespace (%d namespaces)", len(namespaces))
}
// TestConversionCaching_CacheDisabled verifies that TTL=0 disables caching
func TestConversionCaching_CacheDisabled(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-h", Name: "Library Panel with Horizontal Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
// TTL of 0 should disable caching - the wrapper returns the underlying provider directly
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, 0)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, 0)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
numDashboards := 3
namespace := "default"
for i := 0; i < numDashboards; i++ {
source := createTestV0Dashboard(namespace, "Dashboard "+string(rune('A'+i)))
target := &dashv2alpha1.Dashboard{}
err := Convert_V0_to_V2alpha1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion %d should succeed", i)
}
// Without caching, each conversion calls the underlying provider multiple times
// (once for each datasource lookup needed: variables and panels).
// The key check is that the total exceeds numDashboards, i.e. more than one
// call per conversion on average (no caching benefit).
assert.Greater(t, underlyingDS.getCallCount(), int64(numDashboards),
"with cache disabled, conversions should call datasource provider multiple times")
// The library element provider is called at least once per conversion without caching
assert.GreaterOrEqual(t, underlyingLE.getCallCount(), int64(numDashboards),
"with cache disabled, conversions should call library element provider multiple times")
}

View File

@@ -4,6 +4,8 @@ import (
"errors"
"fmt"
"k8s.io/apimachinery/pkg/conversion"
dashv0 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v0alpha1"
dashv1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v1beta1"
dashv2alpha1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v2alpha1"
@@ -119,14 +121,6 @@ func countPanelsV0V1(spec map[string]interface{}) int {
return count
}
// countTargetsFromPanel counts the number of targets/queries in a panel.
func countTargetsFromPanel(panelMap map[string]interface{}) int {
if targets, ok := panelMap["targets"].([]interface{}); ok {
return len(targets)
}
return 0
}
// countQueriesV0V1 counts data queries in v0alpha1 or v1beta1 dashboard spec
// Note: Row panels are layout containers and should not have queries.
// We ignore any queries on row panels themselves, but count queries in their collapsed panels.
@@ -151,7 +145,9 @@ func countQueriesV0V1(spec map[string]interface{}) int {
// Count queries in regular panels (NOT row panels)
if panelType != "row" {
count += countTargetsFromPanel(panelMap)
if targets, ok := panelMap["targets"].([]interface{}); ok {
count += len(targets)
}
}
// Count queries in collapsed panels inside row panels
@@ -159,7 +155,9 @@ func countQueriesV0V1(spec map[string]interface{}) int {
if collapsedPanels, ok := panelMap["panels"].([]interface{}); ok {
for _, cp := range collapsedPanels {
if cpMap, ok := cp.(map[string]interface{}); ok {
count += countTargetsFromPanel(cpMap)
if targets, ok := cpMap["targets"].([]interface{}); ok {
count += len(targets)
}
}
}
}
@@ -444,3 +442,77 @@ func collectDashboardStats(dashboard interface{}) dashboardStats {
}
return dashboardStats{}
}
// withConversionDataLossDetection wraps a conversion function to detect data loss
func withConversionDataLossDetection(sourceFuncName, targetFuncName string, conversionFunc func(a, b interface{}, scope conversion.Scope) error) func(a, b interface{}, scope conversion.Scope) error {
return func(a, b interface{}, scope conversion.Scope) error {
// Collect source statistics
var sourceStats dashboardStats
switch source := a.(type) {
case *dashv0.Dashboard:
if source.Spec.Object != nil {
sourceStats = collectStatsV0V1(source.Spec.Object)
}
case *dashv1.Dashboard:
if source.Spec.Object != nil {
sourceStats = collectStatsV0V1(source.Spec.Object)
}
case *dashv2alpha1.Dashboard:
sourceStats = collectStatsV2alpha1(source.Spec)
case *dashv2beta1.Dashboard:
sourceStats = collectStatsV2beta1(source.Spec)
}
// Execute the conversion
err := conversionFunc(a, b, scope)
if err != nil {
return err
}
// Collect target statistics
var targetStats dashboardStats
switch target := b.(type) {
case *dashv0.Dashboard:
if target.Spec.Object != nil {
targetStats = collectStatsV0V1(target.Spec.Object)
}
case *dashv1.Dashboard:
if target.Spec.Object != nil {
targetStats = collectStatsV0V1(target.Spec.Object)
}
case *dashv2alpha1.Dashboard:
targetStats = collectStatsV2alpha1(target.Spec)
case *dashv2beta1.Dashboard:
targetStats = collectStatsV2beta1(target.Spec)
}
// Detect if data was lost
if dataLossErr := detectConversionDataLoss(sourceStats, targetStats, sourceFuncName, targetFuncName); dataLossErr != nil {
logger.Error("Dashboard conversion data loss detected",
"sourceFunc", sourceFuncName,
"targetFunc", targetFuncName,
"sourcePanels", sourceStats.panelCount,
"targetPanels", targetStats.panelCount,
"sourceQueries", sourceStats.queryCount,
"targetQueries", targetStats.queryCount,
"sourceAnnotations", sourceStats.annotationCount,
"targetAnnotations", targetStats.annotationCount,
"sourceLinks", sourceStats.linkCount,
"targetLinks", targetStats.linkCount,
"error", dataLossErr,
)
return dataLossErr
}
logger.Debug("Dashboard conversion completed without data loss",
"sourceFunc", sourceFuncName,
"targetFunc", targetFuncName,
"panels", targetStats.panelCount,
"queries", targetStats.queryCount,
"annotations", targetStats.annotationCount,
"links", targetStats.linkCount,
)
return nil
}
}
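The wrapper above compares per-dashboard statistics before and after conversion and only fails once the underlying conversion has succeeded. A minimal sketch of that compare-counts pattern, using simplified stand-ins (`stats`, `detectLoss`, `withLossDetection` are illustrative names, not the real `dashboardStats`/`detectConversionDataLoss` types):

```go
package main

import "fmt"

// stats is a simplified stand-in for dashboardStats; only two of the
// counted fields are shown.
type stats struct {
	panels, queries int
}

// detectLoss mirrors the shape of the data-loss check: a conversion
// must not reduce the panel or query counts.
func detectLoss(src, dst stats) error {
	if dst.panels < src.panels {
		return fmt.Errorf("panel count dropped: %d -> %d", src.panels, dst.panels)
	}
	if dst.queries < src.queries {
		return fmt.Errorf("query count dropped: %d -> %d", src.queries, dst.queries)
	}
	return nil
}

// withLossDetection wraps a conversion step so source and target counts
// are compared only after the underlying conversion succeeds.
func withLossDetection(convert func(stats) (stats, error)) func(stats) (stats, error) {
	return func(src stats) (stats, error) {
		dst, err := convert(src)
		if err != nil {
			return dst, err
		}
		if lossErr := detectLoss(src, dst); lossErr != nil {
			return dst, lossErr
		}
		return dst, nil
	}
}

func main() {
	// A conversion that silently drops one panel is rejected.
	lossy := withLossDetection(func(s stats) (stats, error) {
		return stats{panels: s.panels - 1, queries: s.queries}, nil
	})
	_, err := lossy(stats{panels: 3, queries: 4})
	fmt.Println(err) // panel count dropped: 3 -> 2
}
```

Collecting source stats before running the conversion matters: some conversions mutate the source's underlying map in place, so counting afterwards could mask a loss.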

View File

@@ -35,7 +35,7 @@ func TestConversionMatrixExist(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
versions := []metav1.Object{
&dashv0.Dashboard{Spec: common.Unstructured{Object: map[string]any{"title": "dashboardV0"}}},
@@ -89,7 +89,7 @@ func TestDashboardConversionToAllVersions(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()
@@ -309,7 +309,7 @@ func TestMigratedDashboardsConversion(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()
@@ -428,7 +428,7 @@ func setupTestConversionScheme(t *testing.T) *runtime.Scheme {
t.Helper()
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
scheme := runtime.NewScheme()
err := RegisterConversions(scheme, dsProvider, leProvider)
@@ -527,7 +527,7 @@ func TestConversionMetrics(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Create a test registry for metrics
registry := prometheus.NewRegistry()
@@ -694,7 +694,7 @@ func TestConversionMetricsWrapper(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Create a test registry for metrics
registry := prometheus.NewRegistry()
@@ -864,7 +864,7 @@ func TestSchemaVersionExtraction(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Create a test registry for metrics
registry := prometheus.NewRegistry()
@@ -910,7 +910,7 @@ func TestConversionLogging(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Create a test registry for metrics
registry := prometheus.NewRegistry()
@@ -1003,7 +1003,7 @@ func TestConversionLogLevels(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
t.Run("log levels and structured fields verification", func(t *testing.T) {
// Create test wrapper to verify logging behavior
@@ -1076,7 +1076,7 @@ func TestConversionLoggingFields(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
t.Run("verify all log fields are present", func(t *testing.T) {
// Test that the conversion wrapper includes all expected structured fields

View File

@@ -17,9 +17,7 @@ import (
"github.com/grafana/grafana/apps/dashboard/pkg/migration/schemaversion"
)
func getLogger() logging.Logger {
return logging.DefaultLogger.With("logger", "dashboard.conversion")
}
var logger = logging.DefaultLogger.With("logger", "dashboard.conversion")
// getErroredSchemaVersionFunc determines the schema version function that errored
func getErroredSchemaVersionFunc(err error) string {
@@ -199,9 +197,9 @@ func withConversionMetrics(sourceVersionAPI, targetVersionAPI string, conversion
)
if errorType == "schema_minimum_version_error" {
getLogger().Warn("Dashboard conversion failed", logFields...)
logger.Warn("Dashboard conversion failed", logFields...)
} else {
getLogger().Error("Dashboard conversion failed", logFields...)
logger.Error("Dashboard conversion failed", logFields...)
}
} else {
// Record success metrics
@@ -237,7 +235,7 @@ func withConversionMetrics(sourceVersionAPI, targetVersionAPI string, conversion
)
}
getLogger().Debug("Dashboard conversion succeeded", successLogFields...)
logger.Debug("Dashboard conversion succeeded", successLogFields...)
}
return nil
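The hunk above replaces a `getLogger()` helper with a package-level `logger` variable, so the derived logger is built once at package initialization instead of on every call site. A minimal sketch of the pattern using the standard library's `log/slog` (the real code uses Grafana's `logging` package):

```go
package main

import "log/slog"

// A derived logger built once at package initialization, instead of
// rebuilding it inside a getLogger() helper on every log call.
var logger = slog.Default().With("logger", "dashboard.conversion")

func main() {
	logger.Info("Dashboard conversion succeeded", "panels", 3, "queries", 4)
}
```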

View File

@@ -76,9 +76,9 @@
"barGlow": false,
"centerGlow": false,
"rounded": true,
"spotlight": false,
"gradient": false
"spotlight": false
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -155,9 +155,9 @@
"barGlow": false,
"centerGlow": true,
"rounded": true,
"spotlight": false,
"gradient": false
"spotlight": false
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -234,9 +234,9 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": false,
"gradient": false
"spotlight": false
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -313,9 +313,9 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -392,9 +392,9 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -471,9 +471,9 @@
"barGlow": true,
"centerGlow": true,
"rounded": false,
"spotlight": true,
"gradient": false
"spotlight": true
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -550,9 +550,9 @@
"barGlow": true,
"centerGlow": true,
"rounded": false,
"spotlight": true,
"gradient": false
"spotlight": true
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -642,9 +642,9 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -721,9 +721,9 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -800,9 +800,9 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -879,9 +879,9 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -975,9 +975,9 @@
"barGlow": false,
"centerGlow": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1054,9 +1054,9 @@
"barGlow": false,
"centerGlow": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1133,9 +1133,9 @@
"barGlow": false,
"centerGlow": false,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"gradient": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1212,9 +1212,9 @@
"barGlow": false,
"centerGlow": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1291,9 +1291,9 @@
"barGlow": false,
"centerGlow": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1387,9 +1387,9 @@
"barGlow": false,
"centerGlow": false,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"gradient": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1470,9 +1470,9 @@
"barGlow": false,
"centerGlow": false,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"gradient": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1553,9 +1553,9 @@
"barGlow": false,
"centerGlow": false,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"gradient": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1645,10 +1645,10 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"gradient": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1731,10 +1731,10 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"gradient": "scheme",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1831,10 +1831,10 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"gradient": "scheme",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1919,10 +1919,10 @@
"centerGlow": true,
"rounded": true,
"sparkline": false,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"gradient": "scheme",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -2005,10 +2005,10 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"gradient": "hue",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -2091,10 +2091,10 @@
"barGlow": true,
"centerGlow": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"gradient": "hue",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -2147,4 +2147,4 @@
"title": "Panel tests - Gauge (new)",
"uid": "panel-tests-gauge-new",
"weekStart": ""
}
}

View File

@@ -956,9 +956,9 @@
"barGlow": false,
"centerGlow": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"gradient": "none",
"orientation": "auto",
"reduceOptions": {
"calcs": [
@@ -1162,4 +1162,4 @@
"title": "Panel tests - Old gauge to new",
"uid": "panel-tests-old-gauge-to-new",
"weekStart": ""
}
}

View File

@@ -42,7 +42,7 @@
"regex": "",
"skipUrlSync": false,
"refresh": 1
},
},
{
"name": "query_var",
"type": "query",
@@ -81,7 +81,6 @@
"allValue": ".*",
"multi": true,
"regex": "/.*9090.*/",
"regexApplyTo": "text",
"skipUrlSync": false,
"refresh": 2,
"sort": 1,
@@ -108,7 +107,7 @@
},
{
"selected": false,
"text": "staging",
"text": "staging",
"value": "staging"
},
{
@@ -336,7 +335,6 @@
"allValue": "*",
"multi": true,
"regex": "/host[0-9]+/",
"regexApplyTo": "value",
"skipUrlSync": false,
"refresh": 1,
"sort": 2,
@@ -356,4 +354,4 @@
},
"links": []
}
}
}

View File

@@ -237,10 +237,5 @@
"title": "V10 Table Thresholds Test",
"variables": []
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -244,10 +244,5 @@
"title": "V10 Table Thresholds Test",
"variables": []
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -206,10 +206,5 @@
}
]
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -213,10 +213,5 @@
}
]
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -203,10 +203,5 @@
}
]
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -216,10 +216,5 @@
}
]
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -351,10 +351,5 @@
"title": "V13 Graph Thresholds Migration Test",
"variables": []
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -362,10 +362,5 @@
"title": "V13 Graph Thresholds Migration Test",
"variables": []
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -129,10 +129,5 @@
"title": "Dashboard with minimal graph panel settings",
"variables": []
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -132,10 +132,5 @@
"title": "Dashboard with minimal graph panel settings",
"variables": []
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -210,10 +210,5 @@
}
]
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

View File

@@ -217,10 +217,5 @@
}
]
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
"status": {}
}

Some files were not shown because too many files have changed in this diff.